
Volume 3 Issue 1

International Pharmaceutical Industry

Supporting the industry through communication

Peer reviewed

The Value of Intellectual Property: An Overview of Methods Used
The Diabetes Pandemic: Responding to FDA Guidance on Cardiovascular Risk in Type 2 Diabetes Treatment
Application of Toxicogenomics in Safety Assessment
Early Phase Japanese Bridging Studies: Their Global Significance

DIRECTORS: Martin Wright Mark A. Barker


Quality Staff Recruitment and Retention In 2009 the pharmaceutical industry was affected, just like everyone else, by the economic downturn. Biotechnology companies found their funding being withdrawn, or simply not there in the first place; large pharma suffered from drug pipelines drying up; and the related services industries found it difficult to win work, with their margins squeezed hard by clients who found themselves in a very strong bargaining position. Jim Gleeson of CK Clinical explores, from a purely recruitment perspective, a number of effects of this turmoil.


Patent Data – Why More is More To obtain a patent, it is necessary to show that your invention is novel and inventive. One aspect of this is that the invention should make a contribution to the relevant field of research. As a result, when a patent application is filed, it ought to contain data, usually in the form of experimental results, to convince the patent office that this is the case. In the rush to secure an early filing date, care must be taken not to neglect the need to include sufficient data in the application at filing. In this article, Charlotte Dale and Charlotte Fox of Forresters discuss how much data a patent application should ideally contain at filing, with a focus on the views of the European Patent Office (EPO).


Clinical Trials Insurance: Integrated Expertise and Technology Enable Precise Aim at a Moving Target Clinical trials have gone global. According to the National Institutes of Health, a division of the US Department of Health and Human Services, there are currently more than 58,000 trials taking place in more than 150 countries. One of the main challenges life sciences companies face, however, is keeping abreast of the various insurance requirements imposed by these countries. Kathleen Burns of Aon eSolutions discusses what needs to be considered when securing appropriate clinical trials insurance, especially for multinational studies.


PRINTED BY: SW TWO UK
PUBLISHED BY: Pharma Publications, Building K, Unit 104, Tower Bridge Business Complex, 100 Clements Road, London, SE16 4DG, UK
Tel: +44 0207 2375685
Fax: +44 0207 3947415
Email:

The next issue of IPI will be published in May 2011. ISSN: International Pharmaceutical Industry, ISSN 1755-4578. The opinions and views expressed by the authors in this magazine are not necessarily those of the Editor or the Publisher. Please note that although care is taken in the preparation of this publication, the Editor and the Publisher are not responsible for opinions, views or inaccuracies in the articles. Great care is taken with artwork supplied, but the Publisher cannot be held responsible for any loss or damage incurred. This publication is protected by copyright. © 2011 Pharma Publications


The Value of Intellectual Property: An Overview of Methods Used Although intellectual property (IP) is regarded by many as an intangible asset, for some companies, especially those in the life science arena, it can be the sole or primary asset. Intellectual property includes patents, trademarks, service marks, copyrights, and trade secrets. IP valuation is more than creative accounting. Dr Louise Sarup of IP Pragmatics Ltd explains that there are a number of methods of valuation of IP, each representing an available option. Inclusion of IP as an asset in its accounts reflects a company’s true and fair financial position.

BOOK MANAGER: Anthony Stewart

All rights reserved. No part of this publication may be reproduced, duplicated, stored in any retrieval system or transmitted in any form by any means without prior written permission of the Publishers.




COVER IMAGE: © iStockphoto

Editor’s letter

Regulatory & Marketplace

PUBLISHER: Clive Baigent

BUSINESS DEVELOPMENT: George Brookman-Mensah








EDITOR: Dr. Patricia Lobo




Contents

26 Flanders – An Attractive BioPharma Cluster Innovation is the key word for the life sciences and biotechnology sector. Flanders has a long and successful tradition of discovery and innovation. Joke Comijn of FlandersBio explores some of the major scientific breakthroughs achieved there, including the first unravelling of the DNA sequence of a gene, the discovery of tPA, a major treatment for heart attacks, and the discovery of treatments for schizophrenia, pain management, gastro-intestinal disorders and parasitic infections. Innovation combined with entrepreneurship, the presence of a life sciences network of more than 145 companies with biotech activities, and numerous biopharma collaborations, have turned a small region into a world player.

Drug Discovery/Development & Delivery





Point of Care Testing: The New Panacea? The future demands on the Health Service will be dictated partly by demographic change and partly by the resources available, including money (current funding for the NHS from central government is £98 billion per annum). With more people living longer and requiring healthcare, the pressure on the Health Service is steadily increasing. Managing patients more efficiently will require quicker ways to triage them, potentially reducing the burden on an already stretched Health Service. One of the methods being developed and evaluated is Point of Care Testing (POCT). In this article, Dr Phil Luton of the HPA examines the POCT issues.

Seeing Beyond the Visible with Near-Infrared Dyes Optical imaging enables non-invasive study of molecular targets inside the body of a living animal. Harry Osterman and Amy Schutz-Geschwender of LI-COR explain how this technology can be used to follow the progression of disease, the effects of drug candidates on the target pathology, the pharmacokinetic behaviour of drug candidates, and the development of biomarkers indicative of disease and treatment outcomes.

A ‘Mix and Measure’ Multiplexed Assay to Assess Oxygen Consumption and Microbial Metabolism MitoXpress is a water-soluble, oxygen-sensitive phosphorescent probe that facilitates microtitre-plate-based analysis of microbial oxygen consumption. The ‘mix and measure’ procedure allows rapid and specific detection of microbial oxygen consumption, providing a simple yet sensitive means of assessing the impact of a given manipulation on cellular function. Areas of application include elucidation of mode of drug action, screening for antimicrobial compounds, assessment of bacterial load, and optimisation of culture conditions. Dr James Hynes of Luxcel Biosciences explains how MitoXpress facilitates simple and convenient probing of microbial metabolism, and can be applied to the analysis of both bacteria and yeast.

The Diabetes Pandemic: Responding to FDA Guidance on Cardiovascular Risk in Type 2 Diabetes Treatment Diabetes is increasing at a disturbing rate in the US, Europe and China. In America, approximately 4000 people are diagnosed with Type 2 diabetes each day. In recognition of this alarming growth, and of the ongoing research and development of both preventive and palliative diabetes treatments, in December 2008 the FDA released guidance on new anti-diabetic drugs. The guidance recommends that sponsors demonstrate that new anti-diabetic therapies for the treatment of Type 2 diabetes are not associated with increased cardiovascular risk. This article by Robert B. Kleiman of ERT discusses the implications of the FDA guidance recommendations, and how sponsors can respond in order to ensure compliance.

Clinical Research

50 Application of Toxicogenomics in Safety Assessment Dr Ali Faqi of MPI Research summarises the application of toxicogenomics in safety assessment, as the cost of developing new active and safe drugs increases due to the high attrition rate. Unexpected toxicity can emerge at any time during drug development or after marketing approval, leading to the failure of the compound or its withdrawal from the market.

54 Developing Innovative Cancer Medicines of the Future via Patient-Relevant Models There is currently a high attrition rate for new cancer drugs entering clinical trials, because pre-clinical models do not accurately predict efficacy and/or toxicity. As a result, there is a growing need within the pharmaceutical and biotech industry for more patient-relevant and predictive cancer models for anti-cancer drug development. This requirement is particularly important as a new generation of molecular-targeted cancer drugs, with fewer potential side-effects, comes through the drug discovery pipeline. Professor Sue Watson of PRECOS discusses the new technology now being developed to create patient-relevant models that support preclinical efficacy assessment.

58 Basic Biostats for Clinical Research – Multiple Comparisons in Drug Development, Part II Multiplicity can have a significant impact on clinical trials: on the study design, the analysis of the data, and the interpretation of the results. In this second part of the series, Dr Rick Turner and Russell Reeve of Quintiles look at multiplicity in interim analyses and at the programme level.

64 Early Phase Japanese Bridging Studies: Their Global Significance and What to Look for when Selecting a Suitable Contract Research Organisation This article examines early phase Japanese bridging studies, their global significance, and what to look for when selecting a suitable contract research organisation to conduct them. As the pharmaceutical and biotechnology industries are forced to continue to introduce internal efficiencies, companies within these industries must equally enforce these efficiencies on their external providers to maximise the return on investment (ROI) in their R&D spend. These issues are examined here by K S Berelowitz and J Taubel of Richmond Pharmacology.

Contents

Labs/Logistics & Cold Chain Supply



100 Conversion Processes Now Key to Assuring Compliance in Pharmaceutical Packaging Pharmaceutical packaging manufacturers increasingly agree to ‘zero fault’ supply agreements with brand owners in order to win contracts. To meet the requirements of these agreements, manufacturers have to make their quality assurance procedures more robust than ever before - a costly and time-consuming business. However, help is at hand from an unlikely source. Although traditionally seen simply as stages in the manufacture of folding cartons, new developments in conversion processes such as hot-foil stamping, die-cutting, and folding and gluing have in recent years made them key factors in ensuring carton quality. These issues are examined here by Marco Lideo of Bobst.


CMO PAT Implementation: Time for Strategic Decisions Process analytical technology (PAT) is a vital step towards a future where continuous manufacturing and real-time product release become a reality in the pharmaceutical sector. After a slow start, PAT is high on the agenda for Big Pharma, but it is not yet common among contract manufacturing organisations (CMOs). What is holding them back, and what are the issues they need to consider? Rebecca Vangenechten of Siemens argues that, while the pace of change will vary from company to company, all CMOs need to have a PAT strategy in place: PAT offers immense potential for pharmaceutical manufacturing.

104 Pharmaceutical Piracy – Colour-Coded Security and Traceability The problem of counterfeit medicines was first addressed at the international level at a World Health Organization (WHO) conference in 1985. Things have come a long way from there, and much has been undertaken to combat the growing threat of fake medication. Rolf Simons, of 3S Simons Security Systems GmbH, explains how a multi-layer approach consisting of authentication, serialisation and tamper-evident features can help secure the products along the entire supply chain and in all aspects; micro colour-code technology can decisively contribute to the success of all these endeavours.


LyoSeal: Compliance, Quality and Production Tool Injectable medicines are most often supplied in a glass vial closed by a rubber stopper and a crimp seal, an arrangement more than a century old. Does this mean the pharmaceutical industry has lacked innovative power during that time? On the contrary, this simple packaging format has seen numerous advances over the past decades. Philippe LeGall and Tony Bouzerar of BIOCORP argue, however, that there is still significant potential for improvement in fill and finish operations.


Innovative Glatt Fluid Bed Pelletising Technologies In this in-depth article, Dr Norbert Pöllinger and Dr Armin Prasch examine innovative fluid bed pelletising technologies. In multi-particulate systems, the dose of the drug substance is – in contrast to classic single-unit dosage forms such as tablets – divided across a plurality of sub-units, consisting of thousands of spherical pellet particles with a diameter of typically 100-2000 μm. Although their manufacture and design are more complex than those of classic single-unit dosage forms, multi-particulate dosage forms offer a multitude of interesting options and advantages for achieving unique product characteristics, in particular specific drug-release patterns. In contrast to non-disintegrating monolithic single-unit forms, which retain their structure in the digestive tract, multi-particulate preparations consist of numerous sub-units which disperse after administration.

108 Authentication: A Sensible Investment When counterfeiting is reported globally, pharmaceutical products receive most of the attention because people fear poisoning, contamination, and medicines that don’t cure. Yet the pharmaceutical market spends less on authentication technologies, as a percentage of sales, than the luxury market. Why? Part of the reason lies in the unique way prescription drugs are distributed in the largest global market, the US, where the patient seldom sees original packaging. In the developing world, generic drugs are the most widely consumed, and until recently were not believed to be heavily counterfeited. But globally, counterfeiting of all categories of drugs continues to grow: generic, branded, prescription, and self-medication drugs. Randall Burgess of Reconnaissance International explains why pharmaceutical companies should invest more in authentication.


Bridging the Gap between Temperature Control Packaging and Logistics Cool chain is a core element in the transportation of temperature-controlled pharmaceutical product. Most cool chain products are licensed to be stored between +2°C and +8°C, and indeed these temperatures are usually the ‘magic numbers’ in the industry. Cool chain is an expanding part of the industry, and will continue to be so, given increasing compliance requirements. This, coupled with larger numbers of new drugs in clinical trials and R&D requiring chilled temperature control in storage, means a potentially prosperous future for temperature-controlled logistics. This article by Harriet King of Biocair International focuses on bridging the gap between the logistics provider and the cool chain.


112 An Interview Dr Patricia Lobo, Editor-in-Chief of IPI, speaks with Mr Hendrik Kneusels, CEO of Laetus, about the means to a secure supply chain.

Volume 3 Issue 1

Editor’s Letter Type 2 Diabetes – Dealing with a Slow Motion Medical Tsunami…

Diabetes, a disease that affects glucose uptake and utilisation, is a slow and insidious killer. The poorest tend to suffer more often than the affluent, while in the long term, complications numb nerves in the feet, damage retinal cells in the eyes, clog up the cardiovascular system, cut circulation and destroy tiny filters in the kidneys. Until the discovery of insulin in 1921-1922 at the University of Toronto, and the painstaking work by Frederick Banting, Charles Best, James Collip and Professor J. J. R. Macleod to develop the pancreatic hormone as a treatment for diabetes, the outlook for the acute form of diabetes, juvenile or Type 1 diabetes, was extremely poor. While not a cure, insulin proved to be a life-saver, a wonder-drug and a blockbuster.

The success of insulin paradoxically paved the way to a rising incidence of late-onset or Type 2 diabetes in adults. People who develop Type 2 diabetes are usually older than those with Type 1, and with the demographic shift towards an ageing population, this form of diabetes is now more common, accounting for about 90% of all people with diabetes. The distinction between the two types in all respects, including age, is becoming blurred. Short-term complications, such as hypoglycaemia, hyperglycaemia, ketoacidosis and hyperosmolar syndrome, can happen quickly. Long-term complications such as heart disease, kidney disease, neuropathy, diseases of the eyes and peripheral vascular disease can seriously compromise the wellbeing of patients with diabetes.

One of the major problems with Type 2 diabetes is that in the early stages, symptoms can be so mild they go unnoticed. As many as half of those diagnosed with Type 2 diabetes may have had the condition for months or even years before they are aware of it. A very high proportion of people with Type 2 diabetes already show signs of tissue damage to the eyes or hardening of the arteries by the time of diagnosis. The burden on healthcare provision from complications of diabetes is predicted to be enormous, like a tidal wave in slow motion. This will drive up the demand for dialysis and kidney transplantation, for example, while the risk of heart disease is also greatly increased.

Development of a consistent manufacturing process for injectable insulin, a biologic, was a major milestone in a pharmaceutical industry that in later years was built mostly around small-molecule chemicals, for example aspirin. Interestingly, there are clinical trials under way to answer key questions about the risks and benefits of using low-dose aspirin and statins to treat patients with complications of diabetes, involving up to 15,000 participants and due to be completed between 2011 and 2013.

What is the future for treatment of diabetes? Each year, about 1300 people with Type 1 diabetes receive whole-organ pancreas transplants. After a year, 83 per cent of these patients, on average, have no symptoms of diabetes and do not have to take insulin to maintain normal glucose concentrations in the blood. However, the demand for transplantable pancreases outweighs their availability. To prevent the body from rejecting the transplanted pancreas, patients must take powerful immunosuppressant drugs for the rest of their lives, a regimen that makes them susceptible to other diseases.

In the meantime, interesting research is in progress on transplanting the pancreatic islet cells that produce insulin, and on stem cell technologies to regenerate insulin-producing pancreatic tissue. A small number of people with Type 1 diabetes can benefit from islet transplantation, an experimental procedure in which healthy islet cells are taken from donor pancreases and injected into the patient’s liver. Once implanted, the beta cells in these islets begin to make and release insulin. Researchers hope that islet transplantation will help people with Type 1 diabetes. However, steroid immunosuppressant therapy is required to prevent rejection of the cells. This increases the metabolic demand on insulin-producing cells, and eventually they may exhaust their capacity to produce insulin. The adverse effects of steroids are greater for islet cell than for whole-organ transplants, so the success rate of islet cell transplants has been less than 8%. Work by James Shapiro and his colleagues in Edmonton, Alberta, Canada, led to an experimental protocol for transplanting islet cells that uses a much larger number of islet cells and a different type of immunosuppressant therapy. In one study, they reported that seven out of seven patients who received islet cell transplants no longer needed to take insulin, and their blood glucose concentrations were normal one year after surgery. The success of the Edmonton protocol is now being tested at other centres around the world.

The FDA has released guidance recommending that sponsors demonstrate that new anti-diabetic therapies for the treatment of Type 2 diabetes are not associated with increased cardiovascular risk. Robert Kleiman of ERT discusses the implications and suggests how sponsors can respond in order to ensure compliance (page 46). Other articles in this issue of IPI include early phase Japanese bridging studies, developing innovative cancer medicines of the future, application of toxicogenomics in safety assessment, cold chain logistics, security in the supply chain, IP, and patents. Thanks to all our authors for their interesting articles.

Dr Patricia Lobo, Senior Consultant, Life Science Business Solutions. Email:


The Value of Intellectual Property: An Overview of Methods Used

Although intellectual property (IP) is regarded by many as an intangible asset, for some companies, especially those in the life science arena, it can be the sole or primary asset. Intellectual property includes patents, trademarks, service marks, copyrights, and trade secrets. Like any other asset, IP can be bought and sold as part of everyday commerce. Even though IP is frequently traded, its valuation attracts some scepticism due to doubt as to whether the value can be measured reliably. Patents are often the most valuable IP asset for life science companies, and it is important for a company to know what IP it has and how to enhance its IP position, which, in turn, enhances the company’s valuation.

Intellectual property valuations are required for many reasons. The reason may be commercial, such as exploitation of the IP via licensing, where the parties to a licence agreement must know as accurately as possible the real value of the IP. Other commercial reasons for a valuation exercise include litigation, M&A activities, tax planning and fundraising, where IP is used as collateral for financing. When the key asset of a company is its patent portfolio, its value must be assessed as a prerequisite to investment by third parties. It is also important, from a portfolio management and resource allocation point of view, to understand the value of the company’s IP portfolio: knowing which projects will generate value, and which will not, is vital to the company’s success. Valuations are also required for accounting and taxation purposes, as well as for external and internal reporting. Whatever the reason, there are several benefits in valuing IP.

Compared to other industries, valuation in life sciences is more complicated and demanding. This is due to the inherent complexity and timespan of the R&D and regulatory processes required before a product reaches the market. Much of the value contained in life science IP lies in the promise of successfully developing a commercially valuable drug. The key to attaining realistic valuations of this IP lies in estimating the potential future earnings embodied in the early stages of product development.

Valuation Methods – Common Methodologies used in the Pharma and Biotech Industries
There are a number of valuation methods that can be used for valuing IP. Before a valuation, its purpose and nature must be understood in order to determine the basis of the valuation and identify the most appropriate method. Discounted cash flow (DCF) or net present value (NPV), comparables, rules of thumb, auction, direct cost and opportunity cost are some of the common methodologies used to value IP. Each method has its own strengths and weaknesses, and the selection of an appropriate method generally depends on the circumstances of the valuation; no single method is appropriate in all circumstances. Table 1 summarises the strengths and weaknesses of the three major approaches. Generally, the most suitable method is selected as the primary method, and one or two alternative methods are used to confirm and validate the conclusions reached with it.

There are valuation methods based on the cost to create or recreate the asset (cost approach), methods based on sales of comparable IP (market approach), and methods based on the future economic benefits produced by the IP (income approach). Whichever method is used, it must be remembered that it is the market that determines the value of the IP; a valuation simply reflects or estimates the value determined by the marketplace at that particular point in time.

Cost Method
This method attempts to determine value from the actual historical cost of generating the IP, i.e. the cost of developing the patents to date. The valuation includes experimental costs and patent costs; the experimental costs (labour and overheads) are those associated with exemplifying the inventions in the patents rather than conceiving of, or reducing to practice, the invention. These costs are inflated to allow for the time value of money. The principle is analogous to discounting, giving consideration to the time value of money based on the expected return on such investment. Alternatively, this rate can be viewed as a rate of return on the costs invested in recreating the work required to develop the patents. This can be a difficult method to use, as it is often hard to apportion historic R&D expenditure strictly to the specific IP. It is rarely used for valuing life science IP as it does not take into account the future economic potential of the valued patents.

Market-Based/Comparable Deals
Market-based approaches measure the value of assets by obtaining a consensus of what others in the marketplace have judged it to be.

Using the market-based approach, the value of IP is determined by reference to the prices obtained for comparable assets in recent transactions. Theoretically this method is very attractive, as it is credible, objective and, for market-based valuation exercises, relevant. The method is very effective when good comparator deals can be identified. However, comparison is only possible where the transaction relates to an identifiable unit of IP or platform technology that is reasonably comparable or, in the case of a company, where that company is virtually identical and analogous to the company/IP to be valued. The uniqueness of life science companies often prevents direct comparison. The method requires an active public market, an exchange of comparable assets, access to information on the prices at which assets were exchanged, and arm’s length transactions. These requirements are also the method’s major shortcoming, as there is often insufficient information about such transactions in the public domain. The method should be used with caution since, generally speaking, one commercial transaction is quite distinct from another, particularly where IP is involved. Furthermore, the number of comparable IP transactions is very limited due to the unique nature of IP, and information in the public domain concerning such transactions is scarce. Market fluctuations should also be taken into account when using this method.

Income or Economic Approach
These methods determine the value of the IP by estimating the future profits attributable to it and discounting that revenue stream to a present value. There are two main income-based methodologies used to value IP, particularly patents, on an economic value basis: the discounted cash flow (DCF) or net present value (NPV) method, and the royalty method.
The net cash flow method entails estimating the cash flow projected over the term of the patent, which is then discounted in proportion to the risk(s) attached to the projection. The royalty method values IP by capitalising the estimated royalty payable for the use of the IP under a licensing arrangement.

Royalty Relief Method
The royalty relief method of patent valuation assumes that the value of the patent is equal to the present value of the royalties that would have to be paid on product sales were the patent in-licensed. Relief from royalties is a commonly used method for patent valuation. It involves projecting revenues from sales of products covered by the IP, and projecting the patent-specific costs associated with these, to give a cash flow projection for the patent. A suitable royalty rate is then applied to the cash flows, and the resulting royalty streams are discounted to give a present value for the IP. Table 2 shows the risk-adjusted royalties calculated for a point-of-care diagnostics patent. A number of different scenarios have been modelled to calculate a range of possible valuations using the royalty relief method. This method values patents by assessing the turnover attributable to a patent and using this as the royalty base. The royalty rate is assessed by examining either previous or current licences relating to the patent, as this gives the market value of the patent (assuming the previous licence is comparable to the current situation). If previous licences are not available, an industry standard rate can be taken and adjusted up or down. Alternatively, the royalty rate may be assessed by allocating the economic benefits deriving from the licensee’s use of the patent.
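As a minimal sketch of the royalty relief calculation described above, the following example applies a royalty rate to projected product revenues and discounts the resulting royalty stream to a present value. All figures here (the 5% royalty rate, the 12% discount rate, and the sales projections) are hypothetical illustrations, not values from the article or from Table 2:

```python
# Hypothetical royalty-relief valuation sketch: apply an assumed royalty
# rate to projected sales, then discount the royalty stream to present value.

def royalty_relief_value(revenues, royalty_rate=0.05, discount_rate=0.12):
    """Present value of the royalties 'relieved' by owning the patent.

    revenues: projected product sales for years 1..n.
    """
    value = 0.0
    for t, revenue in enumerate(revenues, start=1):
        royalty = revenue * royalty_rate          # royalty avoided in year t
        value += royalty / (1 + discount_rate) ** t  # discount to present value
    return value

# Illustrative five-year sales projection for a patented product.
projected_sales = [2_000_000, 5_000_000, 8_000_000, 8_000_000, 6_000_000]
print(f"Royalty-relief value: {royalty_relief_value(projected_sales):,.0f}")
```

In practice, the scenario analysis the article mentions would rerun this calculation with different sales projections, royalty rates and discount rates to produce a range of valuations rather than a single figure.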

Table 1: Comparison of approaches (Source: Zeiger and Scheffer 2005)

Cost Method
Strengths: Objective and consistent; relies on historic cost data, and where a recent acquisition cost for the patent exists, this is a reliable indicator of value. Consistency can be achieved, facilitating comparison across a patent portfolio.
Weaknesses: There is no correlation between expenditure on an asset and its value. It is difficult to distinguish between ‘normal’ operating expenses and patent investment expenditure. Estimates of replacement cost are subjective, and some patents may not be replaceable.
Typical use: Only used in limited circumstances where replacement cost can be estimated with a reasonable degree of reliability and confidence. Cost is a relevant benchmark where a patent has recently been acquired, and the method is mainly used as a cross-check on other, more theoretical methodologies.

Market-Based/Comparable Deals
Strengths: A practical approach which makes use of prices actually paid for comparable assets. A variety of market-based approaches, such as comparable companies, comparable transactions or a premium price-earnings-multiple approach, allows comparison.
Weaknesses: Given the uniqueness of patents, third-party arm’s length transactions involving similar patents are infrequent. Transactions involving the shares of companies owning patents are more frequent, but allocating value between the business and the patent is difficult. In practice sufficient information is rarely disclosed, and relevant information is not always readily accessible.
Typical use: A very important indicator of value where information on recent transactions involving patents exists.

Income/Economic Approach
Strengths: Theoretically superior to other approaches, as it focuses on future earnings or cash flow. Widely accepted and understood.
Weaknesses: Requires subjective cash flow allocation, and translating the theory into practice requires assumptions which are limiting.
Typical use: The primary valuation methodology, and the most widely used where information of appropriate quality is available. The limiting nature of the assumptions needs to be understood, and where possible scenario analysis should be performed.


REGULATORY & MARKETPLACE

Discounted Cash Flow (DCF)/Net Present Value (NPV) Method
The universal DCF/NPV approach considers net cash flows over a period of time and discounts them at the cost of capital or expected rate of return. The discount rate in traditional DCF models covers the risk of bringing the product to market. One of the most difficult challenges in this approach is selecting the discount rate; factors affecting it include inflation, liquidity, real interest and the risk premium. In the case of life science (pharma/biotech) IP, however, there is an additional risk factor: R&D and regulatory approval, a process that is both expensive and time-consuming. The improved risk-adjusted model, rDCF or rNPV, takes this into account by applying different probabilities of success at different stages of product development, with cash flows at these stages adjusted accordingly.

Risk-adjusted NPV valuations are only as good as the assumptions they are based on; considered and detailed information is required to achieve a useful valuation. The advantage of this method is that it recognises sources of risk, something other valuation methodologies do not always do, and which is often a criticism of IP valuation in general. The disadvantages are that the probabilities can be hard to agree on, and using industry averages can skew results. The method does not necessarily reflect all the associated uncertainties or convey the range of possible outcomes. Table 3 shows an example of a simple rNPV calculation.

Conclusion
As with any valuation, there is a defined relationship between risk and value: greater risk associated with revenue and profit translates into less value today. Numerous questions exist which are impossible to answer with certainty: will the product work, will it be approved, will there be a market for it, how long will it take to get it to market, and will it be valuable 10-20 years in the future?
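The risk adjustment can be sketched in a few lines of code. Everything below — the yearly cash flows, the cumulative probabilities of success and the function name — is invented for illustration, not taken from the worked example in this article.

```python
# rNPV sketch: each year's net cash flow is multiplied by the cumulative
# probability that the product survives to that stage, then discounted at
# rate r using the factor 1/(1+r)^t. All figures below are hypothetical.

def rnpv(cash_flows, probabilities, r):
    """cash_flows and probabilities are per-year lists of equal length."""
    total = 0.0
    for t, (cf, p) in enumerate(zip(cash_flows, probabilities), start=1):
        risk_adjusted = cf * p                 # risk-adjusted cash flow
        discount_factor = 1 / (1 + r) ** t     # as in Table 3
        total += risk_adjusted * discount_factor
    return total

# Early development years are outflows that are near-certain to be incurred;
# later sales inflows carry a lower cumulative probability of success.
cash_flows    = [-2_000_000, -3_000_000, 1_000_000, 6_000_000, 8_000_000]
probabilities = [1.0, 0.9, 0.5, 0.5, 0.5]
print(round(rnpv(cash_flows, probabilities, 0.15)))
```

Note how sensitive the result is to the stage probabilities: this is exactly the weakness discussed above, since small disagreements over the probability of regulatory success move the valuation substantially.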
There is no definitive answer to these questions, only forecasts, estimates and educated assumptions. Assumptions must be made based on experience, historical data, research

Table 2: Risk-adjusted royalty relief for a point-of-care diagnostic. Columns: Market Share; Probability (Risk); Risk-adjusted royalties at 10%, 7% and 5%. Total market: £1.2 million; discount rate: 10%. (Table figures omitted.)

Table 3: Example of an rNPV calculation using a discount rate of 15%. Columns: Year; Net Cash Flow; Risk-adjusted Cash Flow; Discount Factor (1/(1+r)^t); Discounted Cash Flows. (Table figures omitted.)

and instinct. IP valuation is more than creative accounting. However, there are a number of methods of IP valuation, and each represents an available option. Inclusion of IP as an asset in its accounts reflects a company's true and fair financial position.

Reference: Zieger, M. and Scheffer, G.V., "Methods for Patent Valuation", presentation at the EPO-OECD-BMWA International Conference on Intellectual Property as an Economic Asset: Key Issues in Valuation and Exploitation, 30 June – 1 July 2005, Berlin.

Dr Louise Sarup is a Senior Business Development Manager for IP Pragmatics Ltd (IPPL). She has global experience of evaluating and negotiating licensing deals for clinical and late pre-clinical stage products from multinationals and SMEs across a wide range of therapeutic areas as well as commercialising early stage university medical research. In addition to her business development experience Louise has developed PR and marketing campaigns for Life Science technology companies, global pharmaceutical products and biotech start-ups. Louise is currently working for IPPL in Singapore. Email: louise.sarup@


Company profile

Qualogy

Qualogy Contract Regulatory Archive Service is a fast-growing, secure archive providing high-quality contract services to the GLP and GCP regulated industries. The archive facilities are operated to the exacting standards required by the Good Clinical Practice (GCP), Good Laboratory Practice (GLP) and Good Manufacturing Practice (GMP) regulations and the Good Clinical Laboratory Practice (GCLP) guidelines.

Founded in 1997, Qualogy Ltd began solely as an independent consultancy company, specialising in providing expertise to organisations that are required to implement the requirements of Good Laboratory and Good Clinical Practice. The company was founded by Tim Stiles, who has worked within the regulatory arena for 30 years, including as the Director of Quality Assurance within a large Contract Research Organisation (CRO). Tim was also involved in the inception of BARQA QA professional development courses within the UK, and has lectured and trained in Europe, East Africa, India, the US, Asia, South Africa and Japan on GLP and GCP topics.

In the course of the company's consultancy business, the need for a secure and reliable independent contract archive storage facility for Investigator Site Files (ISFs) was identified. In response, our archive facility was established in 2002 for the archive storage of ISFs and other GLP/GCP/GMP materials on behalf of sponsor/client organisations.

The basic principle of operation is that material is received or collected by Qualogy in sealed archive boxes. Transportation of the material from the client to the archive facility can be undertaken by the client or by Qualogy's own dedicated courier service. Full inventories of the boxes held are maintained and copied to the client on receipt of each new batch of material. A full "chain of custody" for each box of material received is also maintained from the point of collection/receipt of the material.

Services Offered Include
• Secure and safe storage of materials
• Confidentiality and client anonymity of stored material
• Dedicated service for GLP, GCP, GMP and GCLP material
• Operated according to regulatory requirements
• Independent storage of Investigator Site Files
• Facilities operated and managed by an experienced Regulatory Archivist
• Private courier service for transport of material
• Collection of material from across Europe
• Supply of boxes on request


The archive facility is a secure storage area which is operated in accordance with documented standard operating procedures (SOPs) and stringent environmental and security controls. The contract archive service includes a detailed documented chain of custody from point of collection, storage in secure facilities, and rapid retrieval when required. The archive facility is managed and operated by an experienced Archivist qualified in Scientific Archive Management, with over ten years' experience as an Archivist in a large CRO.


If you require further information on any of the services provided by Qualogy please contact: The Archivist, Qualogy Ltd, PO Box 6255, Thrapston, Northamptonshire NN14 4ZL. Telephone: +44 (0)1234 783466. Email:

Contract Archive Services


Quality Staff Recruitment and Retention

Where to start? The recession of course! In 2009 the pharmaceutical industry was affected just the same as everyone else by the economic downturn. Biotechnology companies found their funding being withdrawn or just not being there in the first place, large pharma suffered from drug pipelines drying up, and the related services industries found it difficult to win work, with their margins being squeezed hard by clients who found themselves in a very strong bargaining position.

From a purely recruitment perspective there have been a number of effects of this turmoil. For much of 2009 hiring of permanent staff slowed, and then slowed more. People were getting to final-stage interviews, and occasionally even being offered roles, before a decision came that permanent recruitment had been frozen. Despite this, many organisations, both large and small, found themselves in need of staff of all types, so interim hires boomed as headcount was blocked. Large numbers of highly experienced people found themselves redundant, some for the first time, though for others in the biotechnology sector it was becoming a familiar experience.

Over the past year though, a sense of normality has been returning. Biotechs are finding backers, big pharma has been acquiring products, and headcount is getting signed off again as development ramps up. So we now have fewer, hopefully fitter, companies, a resource of skilled candidates in many specialisms (though not all), and it's time to think about how to fill the gaps in the workplace.

Getting it right is not easy, and sometimes it isn't cheap. Getting it wrong can be very expensive: taking on the wrong person not only wastes the cost of hiring them, but the operational consequences in lost productivity, lengthened development timelines and lowered morale can be major as the process begins again. So what to do?
As a recruiter, my obvious answer would be to ring me and

help pay my mortgage, though there are other options.

If you are looking to bring an extra person into your organisation, word of mouth does bring results. If it's a role you are able to let people in the company know about, let them know! It's a small world in pharma, and while it's not quite true that everyone knows everyone (in some niches it can sometimes seem like that), you will get names that your employees know, trust and would happily work with again. Offer a small introduction fee and you will get people's attention.

Advertising works; use your company's website and commercial ones, and also consider the print and online journals that your ideal candidate is likely to read. Such advertising can be surprisingly expensive though, especially if you are not planning the volume required to negotiate discounted fees. Maximising success via the internet means optimising what you put out so that people will actually see it, a task that does require specialist expertise to be done properly.

Many larger organisations have the capacity for dedicated recruiters to be employed internally, or for the function to be outsourced to a service provider. Both solutions work well if you can be sure you have the volume of positions needed to be filled to justify the cost. However, whether you go down the path of dedicated recruiters or decide to task the work to human resources, they will need, in turn, to go to third parties in order to find specialist staff. That's where recruitment consultancies come in.

The thing with recruitment consultancies is that we charge a fee, hence we are often considered an expensive alternative. Not necessarily so. Choose the right consultancy for the right position and you have an ally who will take much of the hard work from you, be well networked with the right people, and act as an ambassador for your organisation. There are a lot of recruiters out there. What there is not is a real one-stop shop which can reliably

fill every vacancy you have (unless your company happens to be a consultancy offering something along the lines of pharmacovigilance, QA or medical writing services). Consider cost against value. If you called thirty different recruiters you could probably find half of that number would agree to very low fees, and you'd have an instant preferred supplier list to send a bulk email out to for every vacancy. Low cost and low maintenance, but you wouldn't have the time to brief each of those companies on the detailed requirements of each role. As those companies' consultants will be working on a large number of vacancies in order to justify their salary because of those low fees, the level of attention given to your roles will be as low as that given to all their other clients. Remember, that consultant is the first point of contact on behalf of your company with potential candidates.

While there are many types of recruitment offerings available, which can be tailored to your needs, the basic three are temporary/interim staffing, and either contingent or executive search for permanent requirements. Companies that offer both forms of permanent recruitment (and that really can deliver on both!) are few, though most of either type are likely to fulfil your contract needs also. The market for retained executive search has changed over recent years. The need to pay a commencement fee to begin the search for a senior-level role has lessened, as there are more very experienced and well-networked recruiters working on a contingency basis. Why pay a large invoice up front when you could fill that role on a contingent basis and only pay on a successful hire?

Whichever path of recruitment you choose to go down, please choose your recruitment company with some care. There are many out there and their quality does vary hugely. A good place to start would again be your own staff; ask them for recommendations of

recruiters that have impressed them with a good knowledge of the industry and the utmost honesty. Once you think you have found a company worth speaking to, ask them to supply references, to agree terms from the outset, and to be quite clear on what they can't do for you as well as what they can. To repeat, there really isn't that one-stop shop which will be able to fill every requirement you have. A good recruiter should be able to advise you on organisations to speak to for areas in which they aren't specialists.

If you view your recruiter as a business partner, they will view you in the same way. If you have a small list of trusted suppliers you'll have the time to properly brief them with the full details of positions, background to roles and the personality required to fit well with the team, and they'll be able to offer advice upon the pool of candidates available. Once armed with that information they can begin the search process, speak to candidates with a sense of conviction, and supply you with a shortlist of highly targeted applicants. It's useful at this point to plan timelines, and to agree dates up front for CV submission, management review and interviews; you will then have that person on board sooner than if each stage in the process were managed ad hoc.

After review, the interview process for the selected candidates should be conducted with care. As well as having their potential line manager question them on their technical suitability, consider holding a competency-based interview with a trained member of staff, using a series of targeted questions to assess their suitability for the demands of the role. One step in this that remains comparatively rare, but actually proves extremely informative, is getting candidates to meet the team they will be working with. Hiring someone who will disrupt things or 'just not fit in' will be a big mistake, and can often be avoided.
Once a potential employee has been identified you need to think about the offer. Do you know enough about the candidate to be sure that they will accept? Ensure that you are fully briefed on points such as other roles the person may be interviewing for, their full package details (not just the base salary) and any family commitments that may influence their decision. There is often a degree

of negotiation when an offer has been made, so be prepared for this, but go in too low and you risk losing credibility in the eyes of your chosen candidate.

When an offer has been accepted it is vital that the candidate's point of contact (whether internal HR or your recruitment partner) stays in touch. Candidates often have mixed feelings about leaving an employer, especially if it's one they've been with for some time and have a sense of security with. At this point they are likely to be susceptible to a counteroffer, and having a contact to discuss the emotional aspects of the decision with is likely to be the only way of ensuring they do not reconsider.

So it's day one and there's a new employee in reception. How to ensure they stay an employee for a reasonable amount of time? Unfortunately it's not unusual for staff to leave a new employer within the first six months; the usual cause of this is that expectations have not been met. It's important not to overpromise during the interview process - if they have been told things will be in place, then they must be. Also, the first week is a bad time to find there are issues in a new place of work. It's better to make people aware of these things in the latter stages of interview, and maybe suggest they are issues your new person could be involved in improving.

Any company that has a highly skilled workforce stands the constant risk of losing employees to their competitors. If you can afford to do so, the most effective means of assuring retention is obviously to offer attractive rates of pay and a good package of benefits. Many senior staff in the larger pharmaceutical companies have spent their whole careers with the same employer, partly due to the fact that other organisations just cannot compete financially. If you don't have the budget for this though, you can still minimise turnover by ensuring your employees really value their roles and feel valued themselves.
Ensure that there is the opportunity for training and development, and then that the acquired skills are made use of. Offer a clear path for career progression, whether it's a linear one within the same specialism, or one that gives people the chance to develop in new areas and broaden their horizons. None of this is rocket science, but all of it can be, and does get, forgotten

from time to time, as an organisation's focus on recruitment has to compete with a multitude of other pressures. Take time to consider how to recruit and retain your staff and you will have highly motivated people on board who are likely to stay with you for some time, and be amongst the greatest assets in your organisation.

Jim Gleeson, Senior Consultant. Jim recruits senior-level staff within Clinical Operations, Clinical QA and Non-Clinical Science on a global basis. His clients include pharmaceutical companies of all sizes, biotechnology companies, clinical research organisations and healthcare charities. After beginning his career in analytical science and drug manufacturing development he moved into pharmaceutical recruitment twelve years ago. Jim is a member of the Institute of Clinical Research (and sits on their Resourcing Special Interest Group) and the British Association of Quality Assurance. Email:



Patent Data – Why More is More

In order to obtain a patent it is necessary to show that your invention is novel and inventive. One aspect of this is that the invention should make a contribution to the relevant field of research. As a result, when a patent application is filed it ought to contain data, usually in the form of experimental results, to convince the patent office that this is the case. Care must be taken to ensure that, in the rush to secure an early filing date, the need to include sufficient data in the application at filing is not neglected. In this article we discuss how much data a patent application should ideally contain at filing, with a focus on the views of the European Patent Office (EPO). The UK courts are increasingly following the decisions made by the EPO.

No Data
At one extreme, a patent application could be filed with no data at all. In this instance, the patent office would be likely to raise an objection that it is not plausible that the invention would work. In other words, a person reading

the application would have no reason to believe that the invention would achieve what it is supposed to achieve. This is likely to be a particular issue if the fact that the invention works is surprising.

The principle behind this is that patents should only be granted for an invention that has some kind of practical effect; otherwise it could not make any contribution to the field. If it is not plausible that the invention does have a practical effect, then a patent should not be granted.

This is exemplified by a case in which the patent application contained no data to support the applicant's claim that they had identified a further member of the TGF-β family of proteins. The only technical information the application contained was a sequence listing, and this showed that the protein identified had low sequence homology to other family members, and also did not possess the major unifying feature of the family. What is more, the applicants provided no functional data, so there was not even anything to suggest a similarity

in function. It was therefore decided that it was unlikely, in view of the lack of data, that the protein was part of the family in question, and the patent was refused.

To prevent a plausibility objection, only minimal data is needed. It is not necessary to show that the invention definitely does work, only that it probably does, so the standard and amount of data necessary is relatively low. For example, a patent application for a new drug need only contain data from in vitro studies which show that the drug has the potential to treat a disease in a biological system. It is not necessary to include studies that demonstrate its actual effectiveness in vivo for it to be plausible that it would work.

Importantly, if the EPO does raise an objection to the plausibility of your invention, it is not possible to remedy this after the application has been filed. Such an objection is raised on the basis of the application alone, and no further evidence to prove plausibility may be filed later. This may pose problems from a commercial perspective, since there is often a temptation to race to be the first to file a patent application in order to beat competitors. Care must be taken to ensure that, in doing so, you do not neglect to include enough data to show that the invention is plausible.

Some Data
Once the first data hurdle is overcome by providing at least minimal data in the application, the patent office must also be convinced that the second data hurdle, that the invention is credible, has been overcome. Part of a patent application is made up of "claims". These are statements that define the invention and the extent of the monopoly being sought. An objection to credibility would mean that the patent office believes that it is not credible to claim a monopoly for every permutation of the invention that is being claimed.
In other words, the patent office doubts that every version of the invention you have claimed would work, and so you are not justified in seeking a monopoly for all claimed versions. This


Volume 3 Issue 1

would be against the general principle that patents should only be granted for inventions that have a practical effect and make a contribution. The issue is therefore that the data in the patent application must be complete enough to warrant granting a monopoly for all that you have claimed.

If the application contains an example of at least one permutation (e.g. a compound) of the invention, with data showing that permutation works, this is normally enough for it to be argued that the data for that permutation can be extrapolated to other similar permutations (e.g. structurally similar compounds). In other words, it could be inferred that the other permutations would work, and so the extent of the monopoly claimed is justified. The EPO can only object to the credibility of the invention if it provides technical facts to show that the invention does not work across the whole scope of what is claimed.

This is exemplified by a case relating to herbicidal compounds, in which the EPO found evidence that not all of the compounds claimed had herbicidal activity. The data in the patent application only related to some of the compounds, so it was decided that it was not reasonable to conclude from this data that the other compounds claimed would also have herbicidal activity. In response to the facts presented by the EPO, the applicant was invited to provide further data to show that at least the vast majority of the compounds claimed had herbicidal activity. Only the applicant's failure to provide such further evidence resulted in the refusal of the patent application. Therefore, this case also shows that, unlike with plausibility, if a credibility objection is raised you will have a chance to remedy the situation after the application is filed, by providing further experimental evidence to supplement the data in the application as filed.
Ideally, to prevent a credibility objection, data should be provided in the application to support everything that is claimed, or at least the vast majority. If this is not possible and it is pertinent to secure a filing date, the application when filed should at least contain data showing that one permutation of the claimed invention works. This can then be supplemented with experimental data showing the other permutations work,

if the patent office raises a credibility objection.

Compounds for Medical Use – a Special Case
As is shown by the herbicide example above, the eventual use that an invention will be put to is taken into consideration by the patent office, so it is necessary to give enough data to show that it is suitable for that use. The EPO is particularly strict on this when it comes to compounds for use in a particular medical or therapeutic application.

Part of the reason for the EPO's different attitude towards medical use compounds is that biological systems are so complex. Many variables affect the in vivo action of a drug, and these may or may not also affect in vitro action. As a result, in vitro studies may not be a good indicator of the in vivo effect. Therefore, in vitro data alone is not always convincing evidence of the suitability of a drug to treat a particular disease in a biological system. If the use of the drug in a medical or therapeutic application is claimed, and the application when filed only contains in vitro data, then you may get an objection to credibility. This can usually be remedied by providing further in vivo experimental evidence to supplement the data in the application as filed.

However, if your application only contains in vitro data for the medical use of a claimed compound, and the patent office later determines that in vitro data was publicly available for that compound before you filed your application, the patent office will object that the application is not inventive, because no advancement has occurred. This objection cannot be overcome by filing in vivo data to supplement the data in the application as filed. Therefore, the application when filed should contain studies done on an animal test subject as well as in vitro studies, if possible. Of course, if clinical trial data is available, this should also be included.
Summary
The need to secure a filing date must be carefully balanced against the need to ensure that sufficient data is included in the application at filing. We recommend including as much data in the patent application at filing as practically

possible. At least some data must be included in the application so that there will be opportunities to overcome objections later by filing more evidence. In particular, we recommend including data to show that at least one permutation of the claimed invention works and, for medical uses, including in vitro data and ideally in vivo data. If in doubt, include data, unless it positively shows that your invention does not work.

Jenny Donald, Senior Associate – Biotechnology. Jenny specialises in patent prosecution in the UK, Europe and elsewhere in the world. She mainly works in the fields of biotechnology, pharmaceuticals and medical devices. Jenny has particular experience of patent prosecution at the European Patent Office, including experience with opposition and appeal procedures. She also has comprehensive experience in obtaining Supplementary Protection Certificates. Jenny is primarily based in our London office, but spends several weeks each year in Munich for dealings with the EPO. Jenny is a UK and European patent attorney and is a member of epi and CIPA. Email:

Charlotte Dale - Associate – Biotechnology - Charlotte specialises in biochemistry and is primarily involved in the drafting and prosecution of patent applications in the fields of biotechnology, pharmaceuticals and medical devices. She also handles Supplementary Protection Certificates in Europe, including in the new European member states. Charlotte is primarily based in our London office, but spends several weeks each year in Munich for dealings with the EPO. Charlotte is a UK and European patent attorney and a member of epi and CIPA. Email:



Clinical Trials Insurance: Integrated Expertise & Technology Enable Precise Aim at a Moving Target

Clinical trials have gone global. According to the National Institutes of Health, a division of the US Department of Health and Human Services, there are currently more than 58,000 trials taking place in more than 150 countries. In addition, clinical trials have been moving from Europe and the United States to emerging markets such as India, China, Vietnam, and South America. Moving to new regions has helped trial sponsors to reduce the costs of conducting trials and to recruit participants from new pools of patients.

One of the main challenges life sciences companies face, however, is keeping abreast of the various insurance requirements imposed by these countries. Companies with multinational trials must juggle myriad regulations regarding policy terms and conditions, which include issues such as discovery period, run-off phase, capacity limits and liability levels. In some cases, life sciences companies may need to secure a different policy for each host country.

Securing appropriate clinical trials insurance, especially for multinational studies, can be costly and logistically difficult to manage. Many factors must be taken into consideration early in the trial planning process. Otherwise, sponsors may find out too late that coverage in a certain region is prohibitively expensive, or that it would take too long to secure a local policy. Missteps in the placement or certification process could delay or disrupt trials, with far-reaching financial implications. In this article, we discuss the role that integrated insurance expertise and web-based technology can play in streamlining the clinical trials insurance process.

Trials Insurance: A Fast-Moving Target
Although most trial sponsors would not

consider launching a study without first obtaining clinical trials insurance, it isn't always a prerequisite. Many countries require it in a sponsor's regulatory filing package, but some countries do not. Of the countries that mandate it, some accept certificates of insurance from a foreign-based insurance carrier, while others demand certificates from a locally licensed carrier, written in the local language. Needless to say, clinical trials insurance is a fast-moving target, with mandates varying by country and frequently changing.

Producing insurance certificates establishes that a company has the coverage to indemnify and pay the medical bills of trial participants, should they claim that the study caused them harm. Securing certificates can be time-consuming and complex, and turnaround can take longer when a locally admitted insurance company is used. Delays or errors in certification can consequently delay regulatory filings and approvals by government agencies. The last thing a company wants is for the insurance requirement to hold up the clinical trial. Every day lost translates into lost opportunity for the sponsoring company; by some estimates, a company can potentially lose $3 million a day when a blockbuster drug spends extra time in development. The goal is to quickly identify the required coverage, when it must be in place and how much is required, and to make the procurement process as efficient and seamless as possible.

Broker Expertise to Take Aim
Toward this goal of obtaining adequate coverage, life sciences companies must realise that clinical trials insurance is not a commodity; it is a sophisticated coverage that requires specialised knowledge and counsel. By partnering with the right brokerage firm, companies

can keep pace with the latest clinical trials insurance requirements. It’s important to utilise a firm that has a comprehensive and integrated approach, which includes a worldwide network of offices, operations in popular host countries, an understanding of local trial requirements, and rapid issuance of insurance certificates. With the right partner, trial sponsors benefit from expert services in identifying the right carrier and insurance products, negotiating the best coverage terms and conditions, and gaining access to sophisticated web-based technology that automates, streamlines and expedites key risk and insurance management processes. Today, an expert brokerage firm will have a proprietary database outlining the regulations and market practices for clinical trials insurance in more than 100 countries. As a result, the brokerage firm will be well versed in local regulatory requirements. A specialised firm will also provide clients with a clinical trials risk map, which geographically catalogues and continually updates insurance requirements by region. This map can help life sciences companies in their planning to identify trial sites, recruit patients, project budgets, and plan for other locally mandated requirements. All these factors help to ensure that life sciences companies have the right expertise and coverage in place.

Web-based Technology that Hits the Mark

In the past, paper-based insurance processes created delays and inefficiencies in the clinical trials insurance process. In recent years, browser-based risk management information systems (RMIS) have been developed, which can act as global platforms that streamline operations. These are designed to provide companies with an enterprise-wide view of clinical trial information with multi-lingual, multi-currency capabilities. At the same time, an RMIS can help cut development costs and reduce a company’s “time to market” by streamlining key steps in the clinical trials insurance process:

Insurance Submissions

Underwriters are now demanding more detailed clinical trial information. Risk managers can leverage a browser-based RMIS to gather and submit the advanced data that is required, such as trial location, number of participants and trial dates. Companies can also differentiate themselves by providing information that shows good loss experience and exemplary patient safety standards. The better the information, the better leverage companies will have in the marketplace to optimise insurance costs and coverage.

Certificates of Insurance

General, professional and product liability insurance coverages may be required prior to trial launch, as well as throughout the numerous phases of the clinical trial. At various stages, sponsoring companies must distribute insurance certificates to the right regulatory agencies and ethics committees. An RMIS tracks the insurance requirements in various host countries, and whether proper insurance documentation was provided to regulators within an appropriate timeframe. With automated tracking and scheduling capabilities, the time and effort required to obtain certificates is dramatically reduced.

Reporting & Analysis

Obtaining a consolidated overview of a company’s clinical trials was previously a resource-intensive task. An RMIS enhances data reporting and analysis, giving a complete and immediate understanding of the trials being undertaken and the extent of the exposure. In essence, today’s RMIS provides a wider perspective on clinical trials, enabling companies to manage risk and insurance requirements across their company, and with more control than ever before.
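The certificate-tracking workflow described above can be sketched as a small data model. This is a hedged, hypothetical illustration only (all class names, fields and country requirements are invented for the example; it does not describe any vendor's actual system):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CountryRequirement:
    """One host country's (hypothetical) certificate-of-insurance rules."""
    country: str
    local_carrier_required: bool   # must a locally licensed carrier issue it?
    local_language_required: bool  # must it be written in the local language?
    filing_deadline: date          # date the certificate must reach the regulator

@dataclass
class TrialInsuranceTracker:
    """Tracks which host countries still lack timely insurance documentation."""
    requirements: list = field(default_factory=list)
    issued: dict = field(default_factory=dict)  # country -> date certificate issued

    def record_issue(self, country: str, issued_on: date) -> None:
        self.issued[country] = issued_on

    def outstanding(self) -> list:
        """Countries whose certificate is missing or was issued after the deadline."""
        return [req.country
                for req in self.requirements
                if self.issued.get(req.country) is None
                or self.issued[req.country] > req.filing_deadline]
```

A usage sketch: register two (invented) country requirements, record one certificate as issued on time, and the tracker flags the remaining country as outstanding, which is the kind of automated tracking an RMIS provides at scale.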
Streamlined Workflow

In the past, clinical trials relied on paper-based forms and spreadsheets, which were not dynamic or secure. Today, a

web-based RMIS enables electronic data capture and transfer via the internet. This ensures quality data and real-time sharing of information, which further streamlines the management of clinical trials. Delays in initiating and executing clinical trials due to poor record management and planning can be largely eliminated when an RMIS is used.

Centralised Data Repository

In the past, having a trial master file was an elusive goal. Typically, the master file included “essential study documents,” such as brochures, protocols, and informed consent forms. Traditionally, this information was stored in binders and could amount to more than 10,000 individual documents, which made it extremely difficult to manage because the information was not electronically stored or consolidated. A browser-based RMIS enables companies to create a centralised data repository in which these documents can be electronically accessed and shared. This provides substantial savings by reducing paper documents and manual processing. Instead, through the web-based platform, companies can view clinical trial information holistically - which is instrumental to timely and efficient start-up and ongoing trial progress.

Improved Communication

A web-based RMIS fosters connectivity among key stakeholders across the globe, and eliminates traditional silos of information among countries, and between research and development (R&D), safety, clinical research organisations, and other clinical trial professionals. This helps to improve communication and collaboration among all parties involved in the clinical trials risk and insurance process.

A Future Focused on Speed and Efficiency

With mounting discovery and development costs, life sciences companies can no longer wait for a blockbuster breakthrough, nor can they afford to have insurance requirements hinder speed-to-market efforts. A more integrated approach between insurance expertise and information management is required - one that is collaborative, and consolidates information to make the process more efficient and transparent. Most life sciences companies need an insurance programme that will keep pace with their future plans for growth and product expansion. Traditionally, companies relied on disjointed communication and planning, using email requests and spreadsheets to share clinical trial information. An RMIS enhances the day-to-day administration of clinical trials; since it is web-based, authorised users can log on to view the current status of studies. Streamlining the procurement and proof of insurance is critical to ensuring companies can move quickly through the clinical trial and regulatory approval process. The faster they can get through these steps, the faster they can reap the benefits of bringing their product to market.

Kathleen Burns is CEO of Aon eSolutions.





Flanders – An Attractive Biopharma Cluster

Innovation is the key word for the life sciences and biotechnology sector. Flanders has a long and successful tradition of discovery and innovation. Some of the major scientific breakthroughs achieved there include the first unravelling of the DNA sequence of a gene, the discovery of tPA, a major treatment for heart attacks, and the discovery of treatments for schizophrenia, pain management, gastro-intestinal disorders and parasitic infections. Innovation combined with entrepreneurship, the presence of a life sciences network of more than 145 companies with biotech activities, and numerous biopharma collaborations have turned a small region into a world player.

Flanders: Strong in Life Sciences

Flanders is one of three regions in Belgium, situated to the north of Brussels, the capital city of both Belgium and Flanders. It houses many life sciences companies, whether small or large, newly or long-established, concentrated over a small geographical area (13,500 km², or two-thirds the area of Silicon Valley). Flanders counts more than 145 life sciences companies with biotech activities, employing more than 13,000 people. Local and international investors are providing the capital for their continued growth, and the full range of competences required to bring new and innovative products to the market is locally available. The sector is divided into three segments: red (medical and healthcare) biotech is the biggest segment, with companies active in drug discovery and the development of therapies, vaccines and diagnostics; green (agricultural) biotech companies are active in marker-assisted breeding and genetic modification of crops; and white (industrial) biotech focuses

on bioprocessing technologies for production, from enzymes to biofuels. In total, 71 companies are active in healthcare, 15 in agriculture and food, 24 produce industrial products, 15 supply specialised technology and 21 provide R&D or production services. They are supported by numerous specialised service providers. Flanders is characterised by a unique interplay between business, universities, research centres and hospitals; the entire value chain is present in close physical proximity. Good interaction and extensive networking between the various public and private life sciences players provide a dynamic environment, rich in innovation and knowledge-sharing, in which new companies constantly join the ranks of a fast-growing life sciences cluster.

A Dense Biomedical Network

Flanders’ five top-class universities strongly support life sciences. Four of these are involved in the renowned biotech research institute VIB, where researchers stand at the forefront of biomedical research. In addition to expanding their knowledge and understanding of the molecular mechanisms of diseases like cancer, Alzheimer’s and infectious diseases, they strive to translate that knowledge into clinical practice. The Centre for Medical Innovation (CMI) was set up to boost biomedical research from the lab to the patient. Through multi-disciplinary collaborations between academics and biopharma actors, this institute focuses on translational and personalised medicine to secure future R&D pipelines. Knowledge and inventions from the research institutes are transferred to new and existing biotech companies for further development and

commercialisation. Flanders is home to a large number of listed and privately held red biotech (i.e. biopharma) ventures: Galapagos, Ablynx, Movetis, Innogenetics, Tibotec, ThromboGenics, TiGenix, MDxHealth, reMYND, Pronota, ActogeniX, Okapi, Trinean … which are active in the treatment of a number of diseases. Complementing these smaller ventures, many top pharmaceutical companies with very significant R&D activities (J&J, GSK, UCB, Genzyme, etc.) are also established in the region. The medical research in these Flemish ventures can rely on the support of Belgium’s dense medical network of 167 well-equipped hospitals (seven of them university hospitals), with an unmatched reputation in clinical trials. Belgium is the world’s number one in clinical trials per capita, with one of the fastest approval times for clinical trials (Phase I in under two weeks). This was one of the reasons why the international biotech company Amgen chose this region to perform many of its clinical trials in various therapeutic fields. The region’s positive environment for the biopharma sector results in major investments: Janssen Pharmaceutica, for instance, invested 130 M€ in a chemical development pilot plant, Pfizer invested 61 M€ in a new production line and a pilot plant for the production of new drugs, and the Swiss company Biocartis invested 30 M€ in its new production and R&D facilities. Furthermore, Genzyme invested over 300 M€ in a new biotech manufacturing plant. Its investment and commitment to the region is attributable to the ongoing high performance of the plant and its ability to create value for the company – a direct correlation with the region’s highly educated workforce and the clear commitment, from both the public and private sectors, to the life sciences industry.

“Small” Biotech Collaborates with “Big” Pharma

The presence of big pharma and smaller biotech companies, specialising in all areas of biopharmaceutical fundamental and clinical research and manufacturing, provides a large, dynamic and stimulating environment for inter-company partnerships. Most of the biotech companies are SMEs which are investing heavily in knowledge development over a longer term than the traditional business model would allow. Biotech companies usually require 12 years before turning a profit, which means that they need a long-term vision and regular capital injections from patient investors, and must focus on scientific and technological innovations that can make a difference. To develop successful technology platforms, validate trials or monitor clinical research trajectories, companies work together or collaborate with academic research centres. Companies capable of completing the entire R&D and commercial development process without partners are a minority. Galapagos, for example, which specialises in the discovery and development of small molecule and antibody therapies, has entered into long-term alliances for the majority of its research programmes with some of the top ten pharma companies – including J&J, GSK and Roche. This has enabled Galapagos to build a pipeline of more than 40 programmes. Ablynx is another very successful Flemish biotech firm, focusing on the discovery and development of Nanobodies®, a novel class of therapeutic proteins. It has initiated Phase II trials with its anti-thrombotic Nanobody and collaborates with Boehringer, Merck and Pfizer on the development of its therapeutic drugs. Also, ThromboGenics signed a major partnership deal with Roche for its novel anti-cancer monoclonal antibody. It has also completed patient enrolment in the US Phase III trial of microplasmin for the non-surgical treatment of eye disease.
In 2010, ThromboGenics signed a 10-year supply agreement with MSD Biologics (UK) for the production of microplasmin. Finally, the small biotech company reMYND, developing disease-modifying treatments for Alzheimer’s and Parkinson’s diseases, has a very promising R&D pipeline. In 2010, reMYND entered into a strategic alliance with Roche to develop first-in-class disease-modifying treatments for Parkinson’s and Alzheimer’s diseases (500 M€ in milestone payments). Furthermore, an initial collaboration can also lead to an outright acquisition: the biotech company Movetis was acquired by Shire, one of the world’s leading speciality biopharmaceutical companies. In 2010, Movetis obtained approval from the European

Commission for its lead product Resolor® for the symptomatic treatment of chronic constipation, now launched on the German market.

Open Innovation Model for Biopharma

The open innovation model raises pharma/biotech partnerships to an even higher level. Today, the in-house knowledge of a given company is no longer sufficient for it to develop

innovative products that will benefit patients and consumers. It is necessary to break down the walls of research departments and bring as much external knowledge and expertise as possible into the company. In the case of the Flemish biopharma sector, this open innovation model is promoted by Janssen Pharmaceutica (part of the worldwide Johnson & Johnson group). Together with its affiliated companies, it is decisively pursuing open innovation in order to offer patients new therapeutic solutions that are less expensive and available more quickly. The battle against complex diseases such as AIDS, Alzheimer’s or cancer requires much more knowledge than any single company is able to provide. In such a model, partners are engaged in a collaboration at an equal level and for a much longer period. This sustainable relationship is based on clear agreements about confidentiality, intellectual property, and so on. Companies, institutes and researchers, both internal and external, must combine their expertise with the know-how present in the biopharma companies. These partnerships will lead to a win-win situation for all collaborating parties, and even to a 1+1=3 situation: the return on investment will be much higher.

FlandersBio: Network Organisation for the Life Sciences and Biotech Sector

Even though scientific collaborations often have quite a global character, one should not underestimate the importance of developing a local network, as a good network can provide fertile soil for growth and development. Direct contacts with capital providers, government representatives, peers from other companies or service and technology providers can offer strategic advantages at just the right time. FlandersBio brings together all these players into a dynamic network. FlandersBio is the umbrella organisation for the life sciences and biotechnology sector in Flanders, a dynamic non-profit, fee-based organisation with 221 members.
Its mission is to support and facilitate the sector’s sustained development. Its objective is to ensure that the sector remains a strong driver of economic growth in the region. The FlandersBio network brings together

companies with innovative, R&D-driven activities in life sciences – companies that are, for example, developing biopharmaceuticals, medical technologies, or agricultural or industrial biotech products. FlandersBio welcomes companies with production activities based in Flanders, as well as academic research institutes and providers of capital, services and technologies to the life sciences community. FlandersBio’s main activities are networking and supporting innovative R&D. As a sector organisation, it is the gateway to an extended regional and international network of life sciences actors. Within this network, it stimulates the exchange of knowledge and contributes to national and international collaborations. Via the active support of innovative R&D activities, FlandersBio creates added value for the life sciences sector as a whole. FlandersBio also plays an important role in proactive lobbying with the regional and federal governments to ensure a supportive environment for all life sciences actors, which in turn helps increase the region’s competitive edge. And finally, it supports life sciences companies looking for investment opportunities in Flanders and enhances the international visibility of the Flemish life sciences and biotechnology sector.

Knowledge for Growth: Europe’s Largest Regional Biotech Convention

On Thursday 5 May 2011, FlandersBio will hold the seventh edition of its annual life sciences convention ‘Knowledge for Growth’ at the ICC Ghent. Knowledge for Growth is Europe’s largest regional biotech convention. The principal ingredients of the convention are networking, keynote scientific lectures, science, technology and management presentations, and a trade and poster exhibition. On that occasion, FlandersBio also organises a job fair, giving companies and jobseekers in life sciences an opportunity to get to know each other.
The participants are biotech, medical technology and pharma companies, as well as academic research institutes and providers of capital, services and technologies to the life sciences community. More than 200 companies were

present at Knowledge for Growth 2010; 90 scientists presented their scientific abstracts as posters at the convention, and more than 900 individual participants (62% industry, 38% academic) registered for the convention. This made the 2010 edition of Knowledge for Growth the most successful since its inception. Knowledge for Growth 2011 will focus on ‘Biotech meets global needs’. The Flemish biotech industry has experienced significant growth in recent years. Some companies have products on the market for the first time; others have signed key partnerships that will have a significant impact on their continued growth. These important developments create a new commitment to patients and consumers, but also to financial stakeholders and other partners. This explains why, for the first time, FlandersBio will also organise a dedicated financial programme featuring Jim Healy (Sofinnova), in addition to the traditional scientific programme. Other speakers on the programme include Jean Stéphenne (GSK Biologicals), Hans Hofstraat (Philips Research), Dirk Reyn (Shire/Movetis), Patrik De Haes (ThromboGenics), Koen De Witte (reMYND), Edwin Moses (Ablynx), Rudi Pauwels (Biocartis), Piet Houwen (Genzyme), Onno Van de Stolpe (Galapagos) and many more.

Joke Comijn is Communications Manager at FlandersBio. In that role she is responsible, among other things, for the national and international positioning of the Flemish biotech sector, and has developed several communication tools to that effect. Joke holds a university degree in biotechnology and a Ph.D. in molecular biology. Interested in science and biotechnology communications, she started her career in science communications at VIB in 2005, and in 2009 she joined FlandersBio. Email:




Drug Discovery, Development & Delivery

Point of Care Testing – the New Panacea?

The future demands on the Health Service will be dictated partly by demographic change and partly by the resources available, including money (current funding for the NHS from central government is £98 billion per annum). With more people living longer and requiring healthcare, the pressure on the Health Service is steadily increasing. Managing patients more efficiently will require quicker ways to triage them, potentially reducing the burden on an already stretched Health Service. One of the methods being developed and evaluated is Point of Care Testing (POCT). Point of Care Testing is an all-encompassing term used to label a diagnostic test that is performed near to the patient or subject under test. The term indicates that the test is easy to complete, and that the results are delivered in a short timescale and are easy to understand. POCT is also referred to as near-patient testing, bedside testing and Point of Use testing. At the same time, developments in diagnostics are moving increasingly towards Point of Care Tests, some of which can be performed by the patients themselves. POCT is therefore an area of diagnostics that is growing rapidly. At the moment, approximately 70 per cent of sample analyses are performed in a laboratory, and 30 per cent at the point of care. One aim is to move to a position where the majority of tests are POCTs, with only specialist confirmatory tests requiring laboratory analysis. It is estimated that the potential global POCT market will be worth at least $18 billion by 2014. In the western world, the predicted rise in healthcare spending over the same period is between 3 and 6%. The downward pressure on expenditure and the upward demand for services will help drive the development of POCT. In the southern hemisphere,

where the “Tiger and Tango” economies have largely escaped the economic downturn, growth is likely to be between 12 and 20%. These areas too will have an increasing need for POCT as the populations’ demand for high-quality healthcare grows. POCT can take the form of simple-to-use dipsticks (for example the pregnancy test kits widely available from retailers), compact handheld electrochemical devices (such as the roadside breathalyser), and small benchtop analysers, all of which require little or no maintenance. This equipment can be applied to a range of different

tests such as blood gases, electrolytes, coagulation, pregnancy assessment and drugs-of-abuse testing, covering a variety of fields including biochemistry, haematology and virology. The fields of use will undoubtedly expand with the incorporation of novel technology developed from the nanotech and antibody sciences. The Health Protection Agency (HPA) has a Translational Research programme which links into the POCT area at all levels, from proof-of-principle studies right through the development process to the final validation of tests to support CE marking. The HPA is able to apply the years of experience gained in developing its own tests for use in the diagnostic and detection areas to assist companies in developing their tests and equipment for the POCT market. Next, we would like to consider some of the issues that need to be resolved to get a potential test from the laboratory to market. If a company has a platform technology which it believes would have value if developed as a POCT, the company must have a clear plan of action to deliver the end product. One of the first considerations to be assessed is: what is the test, and where is the target market? Is this POCT going to be one that is performed in a hospital or doctors’ surgery by trained staff, or could this be a field test? Field tests are ones that are likely to be performed by someone as an adjunct to their existing role, such as fire or police officers who are dealing with an incident, or veterinary surgeons and their support staff dealing with an animal health emergency such as foot and mouth disease. Worldwide, there is a need for such devices for use by agencies delivering healthcare to remote locations where there is little or no power supply or clean water. The most physically demanding use would be by the armed forces in a theatre of war, before, during and after the conflict.
Each of these potential uses imposes different requirements on the final delivered product. A POCT device would have to be adapted for each of the scenarios above. A military device would have to be more rugged, more “battle-hardened”, than a civilian field- or hospital-based device, even if the same test were being performed within the end device. It may be that during this initial review the final chosen market

may not be the one first envisioned, in which case expertise from other organisations may be sought to deliver the project. The HPA team at its site at Porton Down has worked with a number of companies in all of these areas, and has been able to bring over fifty years of experience to such projects. Once the target market has been identified, the next challenge is to adapt the test from a laboratory-based test to a POCT. When companies are considering which of the many POCT opportunities to pursue, it is worth pointing out that if a test is going to be a clinical test, then a clinician should be part of the team; similarly, for veterinary and military tests it will be worth having veterinary or

military input in the team. There are some key issues which have to be addressed from an end user’s perspective. The POCT must be easy to use - a test that is not easy to use will not be used. The POCT must be quick - these tests are there to give rapid results and enable high throughputs of test samples, so that test subjects can be streamed at an early stage, reducing the load on what is likely to be a stretched and stressed workforce. The POCT must be accurate, and the result must be clear; this is to avoid the operator having to make a judgement that they may not have the qualifications to make. And the final issue, from the procuring service for the end user, is that the POCT must be low cost.

POCTs are likely to be adapted from tests routinely performed in specialised analytical laboratories, managed and staffed by highly skilled, trained and qualified operators. The challenge is therefore to develop robust tests and devices for use in non-specialised environments by operators who require only minimal training. There are a number of common background technologies behind most tests; two common ones are the polymerase chain reaction (PCR) and antibody-based tests. Others include electrochemical measurement, rapid automated visual identification and measurement of fluorescence. The HPA at Porton Down has worked with organisations developing POCT from all of these technologies. As an example, in relation to PCR tests, the HPA has a group that can work on optimising the process to give consistent results. With antibody-based tests, the HPA can raise specific antibodies for companies to use in their process, whilst other HPA teams can further refine and evaluate the test method, and contribute to the successful completion of the development phase. Irrespective of the background technology in any test, there are a number of fundamental stages that all POCTs share. A typical testing cycle will have at least the four main stages listed below:

Sample collection
Sample preparation
Analysis
Readout/result presentation

As each stage leads on to the next, and most tests are single-use, unidirectional ones, once started it is often impossible to stop a test without invalidating it. The accurate collection of test samples is a critical stage in the POCT. Samples must be free from contaminants other than those expected in such sample material. A sampling protocol must be in place to ensure that the cohort of samples is coherent. Samples for POCTs are usually those that can be taken by non-invasive means; they include breath, sputum (nose and throat) and urine.
Other, more invasive, samples include faeces, semen, cerebrospinal fluid, swabs and blood. These tend to be for hospital-based

tests, and again a sampling protocol must be in place to ensure sample standardisation. Samples taken may require some preparation before being introduced to the test system. Sample preparation is a complex area, and there are whole scientific conferences devoted to just this aspect. Where the sample preparation phase is not part of the device, it can be part of the collection system: the collected sample is placed in a tube of carrier liquid, and that liquid then passes into the POCT device. This is useful where the samples are highly viscous. Where a sample is introduced directly into the POCT device, the first operation could be to irrigate the sample with process fluid as part of the test system. The sample is further prepared by passing over some form of separation matrix that only allows the biomarker to be tested to pass through to the detection and analysis section of the device. With the improvements in microfluidics, nanotechnology and reagent design, there have been advances in detection sensitivity. This means that test systems are able to analyse smaller quantities and identify lower concentrations of biomarker material. This is ideal where only small volumes of material can be collected, as in samples from neonates. The greater sensitivity also means that conditions can be identified at an earlier clinical stage, hopefully leading to earlier medical intervention and a more successful outcome. With greater sensitivity, however, comes the risk of exposing the sensor to substances that could result in false positives - where a substance different from the one the device is designed to detect triggers the sensor to give a positive result. As part of the development of the POCT there should be a plan for robust evaluation and validation. This will demonstrate that the POCT does not give false positives or, perhaps worse, false negatives, where the POCT fails to identify the target biomarker.
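The validation goal described above is usually summarised by two standard figures of merit: sensitivity (how often the test detects the biomarker when it is present) and specificity (how often it correctly reports negative when the biomarker is absent). As a minimal illustration, with invented validation counts rather than any real HPA data:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of biomarker-containing samples the test correctly detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of biomarker-free samples the test correctly reports negative."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation panel: 200 positive and 200 negative reference samples.
tp, fn = 196, 4   # 4 false negatives among the positives
tn, fp = 198, 2   # 2 false positives among the negatives
print(f"sensitivity = {sensitivity(tp, fn):.3f}")   # 0.980
print(f"specificity = {specificity(tn, fp):.3f}")   # 0.990
```

A robust evaluation plan effectively sets minimum acceptable values for both figures before the device can support a CE marking submission.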
The HPA at Porton Down has an integrated quality team which works closely with the scientific team to produce qualified results which can be used to support a CE marking programme. The final part of the test system is the output to the operator. What type of display is wanted? Is it a simple red-

green light output, go/no-go, or an alphanumeric display? For a POCT used in a military or emergency environment, highly complex readouts should be avoided, as the operator’s interest is mainly in getting a rapid assessment of the situation. Where a device is going to be used by a patient for home monitoring, similar to home monitoring of glucose levels for diabetics, a simple numerical display giving a value would be all that is needed. Such a simple numeric display would also allow patients to monitor trends. Some home devices can be linked to telephone autodiallers and internet access points so that a record of the test result can be downloaded to a patient’s file. Using this link, POCTs can be part of the drive towards assisted living at home, preventing the unnecessary hospitalisation of patients. In a hospital or surgery setting, a more comprehensive display may be useful to assist in the management of the potential patient. The decision on what type of display is needed is again an area where the clinical, industrial, veterinary or military link will be able to offer input. There is a huge worldwide market for POCTs, as identified earlier in this paper. In order for companies to capitalise on the opportunity, they must have the right test, at the right time and for the right price. To get to this point, a POCT has to be carefully designed, developed and validated to address the needs of the targeted section of the market. The HPA can support companies who are looking to bring a POCT to market at all levels of the development process.

Mitch Rogers is a Business Manager for the HPA, with extensive project management experience, including leading two large research groups before moving to Business Development. He has a broad portfolio ranging from research and "proof of principle" to "close to market" projects, including diagnostic point-of-care projects. E-mail:

Volume 3 Issue 1



Drug Discovery, Development & Delivery

Seeing Beyond the Visible With Near-Infrared Dyes

Optical imaging enables non-invasive study of molecular targets inside the body of a living animal. This technology can be used to follow the progression of disease, the effects of drug candidates on the target pathology, the pharmacokinetic behaviour of drug candidates, and the development of biomarkers indicative of disease and treatment outcomes. Currently, the three major types of labels used in optical imaging are bioluminescence, fluorescent proteins, and fluorescent dyes or nanoparticles. Bioluminescence and fluorescent proteins require engineering of cell lines or transgenic animals that carry the appropriate gene. Because fluorescent dyes do not have this requirement, they have the potential to translate to clinical applications. For example, the carbocyanine dye indocyanine green (ICG; also known as Cardiogreen®) has been used in the clinic for over 25 years as a dilution indicator for studies involving the heart, liver, lungs, and circulation.1 Near-infrared (NIR) fluorophores minimise the optical challenges of detecting photons in tissues.

A fundamental consideration in optical imaging is maximising the depth of tissue penetration, which is limited by absorption and scattering of light. Light is absorbed by haemoglobin, melanin, lipids, and other compounds present in living tissue.2 Because absorption and scattering decrease as wavelength increases, fluorescent dyes and proteins absorbing below 700 nm are difficult to detect in small amounts at depths below a few millimetres.3 In the NIR region (700-900 nm), the absorption coefficient of tissue is at its lowest and light can penetrate much more deeply.4 Above 900 nm, light absorption by water begins to cause interference. Autofluorescence is also an important consideration: naturally-occurring compounds in animal tissue can cause considerable autofluorescence throughout the visible spectral range up to ~700 nm, which can mask the desired signal.

A number of NIR dyes have been employed for in vivo imaging, including Cy5.5 and Cy7 (GE Healthcare, Piscataway, NJ), Alexa Fluor 680 and Alexa Fluor 750 (Invitrogen Corp., Carlsbad, CA), and IRDye 680 and IRDye 800CW (LI-COR Biosciences, Lincoln, NE). Cy5.5 has been used in the past, primarily due to the lack of other dyes more suitable for imaging. The excitation/emission maxima for this dye (675 nm/694 nm) fall in the range affected by tissue autofluorescence, impacting its overall performance,5 and Cy5.5 has also been shown to cause higher background in cellular assays due to non-specific binding. In contrast, IRDye 800CW has excitation/emission maxima at 785 nm/810 nm, precisely centred in the region known to give optimal signal-to-background ratio for optical imaging.4 Quantum dots, with their photostability and bright emissions, have generated a great deal of interest; however, their size precludes efficient clearance from the circulatory and renal systems, and there are questions about their long-term toxicity.6
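The spectral reasoning above can be reduced to a few lines of code. In this sketch the helper `in_nir_window` is hypothetical; the excitation/emission maxima are those quoted in the text:

```python
# Sketch of the dye-selection logic described above: prefer fluorophores
# whose excitation/emission maxima fall inside the 700-900 nm window,
# where tissue absorption and autofluorescence are lowest.

DYES = {
    "Cy5.5": (675, 694),        # affected by tissue autofluorescence
    "IRDye 800CW": (785, 810),  # centred in the NIR window
}

def in_nir_window(excitation_nm, emission_nm, window=(700, 900)):
    """True if both maxima sit inside the low-absorption NIR window."""
    lo, hi = window
    return lo <= excitation_nm <= hi and lo <= emission_nm <= hi

for name, (ex, em) in DYES.items():
    status = "inside" if in_nir_window(ex, em) else "outside"
    print(f"{name}: {ex}/{em} nm -> {status} the NIR window")
```

Run on the two dyes above, the check flags Cy5.5 as outside the window and IRDye 800CW as inside it, matching the comparison in the text.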

Figure 1. Validation and use of an NIR fluorescent probe. After probe labeling, in vitro cellular assays and microscopy are used to confirm specificity. The desired target is then imaged in animals. Excised organs and tissues can be examined for more detailed localization of the probe. (Workflow panels: probe labeling → in vitro validation → in vivo imaging → excised tissues.)




The Molecular Imaging Workflow
The basic steps for making, validating and using an NIR fluorescent probe are summarised below, and in Figure 1. A more comprehensive discussion of approaches for the development of fluorescent contrast agents has also been published.7

Probe Preparation
In vivo imaging projects typically begin with identification of a possible targeting agent or probe, such as a receptor ligand, peptide, small molecule, or antibody. Dyes with NHS ester reactive groups, such as IRDye 800CW NHS ester, can be used to label primary amines such as lysine residues to prepare the labelled probe. Kits are commercially available for labelling and purification of some of these targeting agents.

In Vitro Validation
Cell-based assays can often be used to evaluate binding and specificity in vitro before animal studies begin. A variety of

approaches have been used for in vitro testing, including the "In-Cell Western" format.7,8 In this assay, cultured tumour cells in microplates are incubated with the labelled targeting agent to assess binding. Specificity is evaluated by methods such as blocking access to the target with an antibody, or competition with an excess of unlabelled agent. Fluorescence emission from each microplate well is then quantified. Fluorescence microscopy is also used to validate targeting and localisation of probes.9

In Vivo Clearance
Clearance studies (with both the unconjugated dye and the labelled probe) are important for accurate interpretation of imaging data. Signal may be nonspecifically retained in regions of the body that block or mimic the intended target (such as the liver, kidneys, or bladder), and could result in misinterpretation of data if these controls are not performed. Time courses of probe clearance also help to establish the optimal time for imaging in subsequent experiments.

Imaging
The probe can then be used to image the desired target in animal studies. If possible, specificity should also be confirmed in vivo. One approach is to pre-inject the animal with an excess of unlabelled agent or other compound that blocks or competes with binding of the targeting agent.8

Tissues and Organs
At the end of the imaging study, animals can be sacrificed, and organs or tissues can be excised and imaged to confirm the presence of the probe in the desired location. Imaging of whole organs provides a quick and semi-quantitative estimate of signal intensity, and can be used to evaluate biodistribution of the probe. For more detailed analysis, sections can be prepared from frozen or paraffin-embedded tissue and imaged at higher resolution.

Figure 2. A) Imaging of A431 tumor with IRDye® 800CW EGF (800 nm; pseudo-color). IRDye® 680 BoneTag™ was used to visualize skeletal structures (700 nm; grayscale), aiding in anatomical localization of the tumor. Image captured with Pearl Imager. B) Fluorescent microscopy confirms probe localization. EGF probe (red) localizes to the cell membrane. Punctate fluorescence is also seen in the cytoplasm, due to internalization of the probe. Sytox Green DNA staining is shown in green.

Figure 3. A) Athymic male nu/nu mouse, ~4 hr after injection of IRDye 800CW PEG probe. Large blood vessels and tumor are visible. B) Higher-resolution (85 µm) image of the tumor region shows large blood vessels recruited to feed the tumor. Sequestration of contrast agent in tumor is likely due to enhanced permeability and retention.



Near-Infrared Fluorescent Probes for Optical Imaging
A number of studies have demonstrated the use of near-infrared fluorophores for optical imaging.
• In a comparison of gamma scintigraphy and NIR imaging, a cyclopentapeptide dual-labelled with indium-111 and IRDye 800CW was used to image αvβ3-integrin-positive melanoma xenografts.10 The tumour regions were clearly delineated by optical imaging of the IRDye 800CW signal. In contrast, tumour boundaries could not be identified by scintigraphy due to high noise levels.
• IRDye 800CW has been conjugated to epidermal growth factor (EGF) for imaging of tumour progression.8 In a longitudinal study, probe accumulation was monitored in orthotopically-implanted prostate tumours that overexpress the EGF receptor. Fluorescence intensity correlated well with tumour size, and lymph node metastasis could be imaged upon endpoint dissection. Figure 2 shows tumour imaging with IRDye 800CW EGF.
• Cy5.5 and IRDye 800CW were used to label EGF, and the effectiveness of these probes for in vivo imaging of breast cancer cell lines in subcutaneous tumours in mice was evaluated.5 The study showed a significant reduction in background and an enhanced tumour-to-background ratio when IRDye 800CW was compared to Cy5.5, suggesting that longer-wavelength dyes may produce more effective targeting agents for optical imaging.
• The increased glucose metabolism characteristic of cancer cells has been used as an imaging target. IRDye 800CW conjugated to 2-deoxyglucose (2-DG) is taken up by several types of tumour cells, and fluorescence microscopy shows accumulation of this probe in the cytoplasm. Uptake can be blocked with an excess of unlabelled 2-DG or D-glucose, and experimental evidence implicates the GLUT1 glucose transporter.11
• Calcium-chelating BoneTag probes, labelled with either IRDye 680 or IRDye 800CW, target bone mineralisation.12 These probes are used to image

bone structure and remodelling, and a BoneTag probe can be combined with a second, spectrally distinct probe for simultaneous dual-probe imaging (Figure 2A).
• Use of human serum albumin labelled with IRDye 800CW (HSA800) as a tracking agent for mapping of sentinel lymph nodes was demonstrated, using an intraoperative NIR fluorescence imaging system.13 HSA800 demonstrated good entry to lymphatics, flow to the sentinel lymph nodes, retention in the sentinel lymph nodes, fluorescence yield, and reproducibility.
• Enhanced permeability and retention (EPR) is a common characteristic of tumour vasculature. The vascular endothelium in the tumour microenvironment is often discontinuous, allowing molecules to diffuse into the surrounding tumour tissue, and lymphatic drainage is poor. Because of these characteristics, larger molecules tend to accumulate in and around the tumour. Agents such as IRDye 800CW exploit enhanced permeability and retention for tumour imaging (Figure 3). These agents also highlight surface vasculature, and can be used as lymph tracking agents when administered intradermally.

References
1. Nahimisa, T. Tokai J. Exp. Clin. Med. 7:419 (1982).
2. Licha, K. Topics Curr. Chem. 222:1 (2002).
3. Frangioni, J.V. Curr. Opinion Chem. Biol. 7:626 (2003).
4. Hawryz, D.J. and Sevick-Muraca, E.M. Neoplasia 2:388 (2000).
5. Adams, K.E., Ke, S., Kwan, S., Liang, F., Fan, Z., Lu, Y., Barry, M.A., Mawad, M.E., and Sevick-Muraca, E.M. J. of Biomed. Optics 12:024017 (2007).
6. Shah, K. and Weissleder, R. J. Amer. Soc. Exp. Neurother. 2:215 (2005).
7. Kovar, J.L., Simpson, M.A., Schutz-Geschwender, A., and Olive, D.M. Anal. Biochem. 367:1 (2007).
8. Kovar, J.L., Johnson, M.A., Volcheck, W.M., Chen, J., and Simpson, M.A. Am. J. Pathol. 169:1415 (2006).
9. Gong, H., Kovar, J., Little, G., Chen, H., and Olive, D.M. Neoplasia 12:139 (2010).

10. Houston, J.P., Ke, S., Wang, W., Li, C., and Sevick-Muraca, E.M. J. Biomed. Optics 10:054010 (2005).
11. Kovar, J.L., Volcheck, W., Sevick-Muraca, E., Simpson, M.A., and Olive, D.M. Anal. Biochem. 384:254 (2009).
12. Snoeks, T.J.A., Khmelinskii, A., Lelieveldt, B.P.F., Kaijzel, E.L., and Lowik, C.W.G.M. Bone 48:106 (2011).
13. Ohnishi, S., Lomnes, S.J., Laurence, R.G., Gogbashian, A., Mariani, G., and Frangioni, J.V. Mol. Imaging 4:172 (2005).

Harry Osterman is Market Innovation Manager at LI-COR Biosciences. He received a Ph.D. in biochemistry from Kent State University, and was a postdoctoral fellow in molecular biophysics and biochemistry at Yale University. Email:

Amy Schutz-Geschwender is a Principal Scientist at LI-COR Biosciences. She received her Ph.D. in molecular and cellular biology from the University of Colorado at Boulder. Email:


Drug Discovery, Development & Delivery

A ‘Mix and Measure’ Multiplexed Assay to Assess Oxygen Consumption and Microbial Metabolism

A multiplexed assay allows the measurement of several entities in one run, be it, for instance, two second messengers, two different membrane receptors, or, in this case, O2 consumption and cell metabolism. Multiplexing of both growth and microbial metabolism is achieved using the BMG LABTECH scripting function. Bacterial growth is measured by absorbance at 600 nm to yield the OD 600 value, and oxygen consumption is measured using the MitoXpress® probe with dual-delay, time-resolved measurements. Optimal filter wavelengths are 340TR H for excitation and 655-50 nm for emission. Delay times of 30 and 70 µs are used,

Materials and Methods
We used clear 96- or 384-well microplates from Costar, the MitoXpress® chemistry from Luxcel Biosciences, Ireland, and the FLUOstar Omega multidetection

both with a measurement window of 30 µs. These dual intensity measurements are used to calculate the emission lifetime using the following function: t = (t1 - t2) / ln(D2/D1) [t = delay time, D = measured intensity value].
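The dual-delay lifetime formula above can be sketched in a few lines. This is a minimal illustration: the 30/70 µs delays follow the text, while the intensity values are generated from an assumed lifetime purely to demonstrate the round trip:

```python
from math import log, exp, isclose

# Rapid lifetime determination from two gated intensity reads, per the
# formula in the text: t = (t1 - t2) / ln(D2 / D1).

def lifetime(d1, d2, t1=30.0, t2=70.0):
    """Phosphorescence lifetime (same units as t1/t2) from two gated reads."""
    return (t1 - t2) / log(d2 / d1)

# Round-trip check: intensities synthesised from a known 60 us lifetime,
# assuming single-exponential decay D = A * exp(-t / tau).
tau_true = 60.0
d1 = exp(-30.0 / tau_true)
d2 = exp(-70.0 / tau_true)
print(lifetime(d1, d2))  # recovers the assumed lifetime
```

Because D1 > D2 and t1 < t2, both numerator and denominator are negative, so the computed lifetime is positive, as expected.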

Results and Discussion
Analysis of Bacterial Growth (E. coli)
Fig. 1: A) Oxygen-based growth curves from serial dilution of the bacterium Escherichia coli. As bacteria replicate, the oxygen consumption rate increases. At a critical point, oxygen consumption exceeds back-diffusion; this is seen as an increased probe signal. B) Correlation between time-to-threshold and seeding concentration. The time required to reach the threshold signal (24 µs) reflects the seeding concentration and is dependent on the replication rate and cellular oxygen consumption rate.

Comparison with OD 600 Absorbance
Oxygen and OD 600 data can be obtained from the same well, thereby allowing multiparametric analysis of cell growth. OD 600 values reflect microbial replication rate, while oxygen-based analysis reflects both growth and alterations in cell metabolism. Oxygen gradients are detectable considerably earlier than increases in OD600

Figure 1: Analysis of bacterial growth (E. coli); panels A and B as described above.


Assay Principle
The assay provides information on the rate of microbial oxygen consumption. Such measurements can provide insight into the metabolic effect of a specific manipulation, or can be used as a measure of survival and replication. Multiplexing the MitoXpress® oxygen probe with the FLUOstar Omega microplate reader to measure both O2 consumption and microbial growth allows high-throughput analysis of O2 consumption as well as the associated metabolism. The assay is based on the ability of dissolved O2 to quench the phosphorescence of a soluble, oxygen-sensitive probe (MitoXpress®). Probe emission is quenched by molecular oxygen via a physical (collisional) mechanism, whereby depletion of dissolved oxygen causes an increase in probe emission. Changes in probe signal therefore reflect changes in oxygen concentration within the sample.

microplate reader from BMG LABTECH, UK. The assay format is a great advantage, since it is a simple 'mix and measure' test and does not require great time and effort to perform:
1. Microbes are dispensed into the wells of a 96-well plate in 100 µl volumes in the appropriate growth medium
2. 10 µl of the MitoXpress® probe is added to each well
3. 100 µl of mineral oil is added to exclude ambient O2
4. The plate is measured kinetically at the required temperature
5. Oxygen profiles are then related to metabolic activity
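The time-to-threshold analysis used in Figure 1B can be sketched as follows. The 24 µs threshold follows the text; the profile data points are invented for illustration:

```python
# Sketch of time-to-threshold analysis: find the first time point at which
# the probe signal crosses a threshold. Faster-growing or more densely
# seeded wells cross the threshold earlier.

def time_to_threshold(times, signals, threshold):
    """First time at which signal >= threshold, or None if never reached."""
    for t, s in zip(times, signals):
        if s >= threshold:
            return t
    return None

times = [0, 1, 2, 3, 4, 5, 6]            # hours
signals = [18, 18, 19, 21, 24, 30, 38]   # probe lifetime, microseconds
print(time_to_threshold(times, signals, 24))  # -> 4
```

Plotting time-to-threshold against the logarithm of seeding concentration gives the linear correlation shown in Figure 1B.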











MitoXpress® Oxygen Probe
MitoXpress® is a water-soluble, oxygen-sensitive phosphorescent probe that facilitates microtitre plate-based analysis of microbial oxygen consumption. The 'mix and measure' procedure allows rapid and specific detection of microbial oxygen consumption, providing a simple yet sensitive means of assessing the impact of a given manipulation on cellular function. Areas of application include elucidation of mode of drug action, screening for antimicrobial compounds, assessment of bacterial load, and optimisation of culture conditions.
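Collisional quenching of this kind is conventionally modelled by the Stern-Volmer relation, I0/I = 1 + Ksv·[O2]. This relation is a standard model for collisional quenching rather than something stated in the text, and the Ksv value below is hypothetical; in practice it would come from calibration of the probe:

```python
# Illustrative Stern-Volmer inversion: recover relative dissolved O2 from
# the unquenched (i0) and measured (i) probe intensities.
# The quenching constant ksv is hypothetical (set by calibration).

def oxygen_from_signal(i0, i, ksv=0.04):
    """Dissolved O2 (in units fixed by Ksv) via I0/I = 1 + Ksv * [O2]."""
    return (i0 / i - 1.0) / ksv

# As respiring microbes deplete oxygen, i rises toward i0 and the
# inferred [O2] falls toward zero.
print(oxygen_from_signal(100.0, 50.0))  # strongly quenched: high O2
print(oxygen_from_signal(100.0, 95.0))  # nearly unquenched: low O2
```

This is why an increase in probe signal, as in the growth curves of Figure 1A, corresponds to falling oxygen concentration in the well.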



(Fig. 2) and give a more robust readout. This multiparametric approach can be useful when probing cell metabolism and elucidating modes of action, and can detect any shift from aerobic to anaerobic metabolism in facultative anaerobes.

Fig. 2: Comparison between O2 and OD600 profiles (multiplexed measurement; E. coli seeded at 13,000 cells/ml)

cell metabolism. A dose response analysis demonstrating this is presented in Figure 3.

numbers and extended measurement times) facilitates analysis of effect on cell growth and metabolism.

Fig. 3: S. aureus seeded at ~10 million cells/ml in EB, exposed to increasing concentrations of the indicated antibiotic and measured kinetically at 37°C

Drug Treatment
The electron transport chain inhibitor Antimycin (Fig. 4A) and the polyene antifungal Amphotericin B (Fig. 4B) caused an immediate and dose-dependent decrease in oxygen consumption, while the triazole antifungal Fluconazole (Fig. 4C) caused no appreciable decrease. These observations correlate with mode of drug action and demonstrate how such measurements can be used to assess the specific metabolic effects of compound treatment.

Analysis of Fungal Growth – C. albicans
The data indicate that C. albicans can be assessed using the described assay. Short-term analysis (high cell numbers measured for ~20 min) allows assessment of immediate effects on cell metabolism, while longer-term analysis (lower cell

Antibiotic Treatment
Microtitre plate-based analysis of microbial oxygen consumption allows high-throughput generation of IC50 and MIC (minimum inhibitory concentration) values, and can be used to screen for compounds that perturb

Fig. 4: Oxygen consumption profiles from C. albicans (~30,000 cells/ml) treated with increasing concentrations of Antimycin (from 30 µM), Amphotericin B (from 16 µg/ml) and Fluconazole (from 65 µg/ml) in RPMI.

Figure 2: E. coli at 1.3 x 10^5 cells/ml (corrected OD600 and O2 profiles vs. time; see caption above).

Conclusion
MitoXpress® facilitates simple and convenient probing of microbial metabolism, and can be applied to the analysis of both bacteria and yeast. The metabolic implications of treatments such as drug exposure, genetic manipulation or altered culture conditions can be easily assessed, and elucidation of mode of action is facilitated. The assay provides the throughput and resolution necessary for screening, and is capable of detecting antimicrobial activity and generating IC50 and MIC data.

Figure 3: (see caption above)

This article was also co-authored by Conn Carey of Luxcel Biosciences and Catherine Wark of BMG LABTECH.

Figure 4: (panels A-C; see caption above)


Dr James Hynes is R&D Manager at Luxcel Biosciences, Cork, Ireland, where he leads a team focused on the development of luminescent probe technologies for a variety of biological applications. He has a particular interest in the application of oxygen sensing technology to the assessment of cell metabolism and has published widely in this area.


Drug Discovery, Development & Delivery

The Diabetes Pandemic: Responding to FDA Guidance on Cardiovascular Risk in Type 2 Diabetes Treatment

Diabetes is increasing at a disturbing rate in the US, Europe and China. In America, approximately 4000 people are diagnosed with type 2 diabetes each day. In recognition of this alarming growth and the ongoing research and development of both preventive and palliative diabetes treatment, in December 2008 the FDA released guidance on new anti-diabetic drugs. This guidance recommends that sponsors demonstrate that new anti-diabetic therapies for the treatment of type 2 diabetes are not associated with increased cardiovascular risk. This article will discuss the implications of the FDA guidance recommendations and how sponsors can respond in order to ensure compliance.

The Growing Threat of Diabetes
Type 2 diabetes has been described as an emerging global pandemic, presenting a serious public health problem on an international level. There are approximately 280 million1 people with diabetes mellitus worldwide; 90% of these have type 2 (non-insulin-dependent) diabetes. In the US alone there are currently 24 million diabetics, with steadily increasing numbers; this seems to track with the rising prevalence of obesity. The sequelae of diabetes are severe and often fatal. Diabetes is currently the leading cause of blindness, kidney failure and limb amputation, and is a major contributor to myocardial infarction and stroke. Approximately 70% of diabetes mellitus-related deaths occur as a result of cardiovascular disease. As a result, diabetes now accounts for a substantial proportion of healthcare expenditure. There are currently many medications used for the treatment of diabetes, and these have had a great impact on many of the immediate consequences of elevated blood sugar, as well as on the development of diabetic blindness,

kidney failure and peripheral neuropathy. However, the rate of cardiovascular complications remains high in diabetics, making the continued development of safe and effective new agents for the treatment and prevention of diabetes a high priority.

FDA Guidance
Concerns related to the cardiovascular risk incurred by treatment for type 2 diabetes have heightened in recent years, as some approved drugs appear to actually increase the risk of cardiovascular events. On a broader level, regulators are becoming more stringent regarding the need to ensure the cardiovascular safety of all non-cardiovascular drugs. Highlighting these concerns, in 2008 the FDA released the guidance document ‘Guidance for Industry – Diabetes Mellitus – Evaluating Cardiovascular Risk in New Anti-diabetic Therapies to Treat Type 2 Diabetes’.2 The FDA Guidance on Diabetes provides non-binding recommendations to ensure that new medications for treating diabetes are demonstrated to be safe (from a cardiovascular standpoint) as well as effective at treating elevated blood sugar. The release of the FDA guidance on the treatment of type 2 diabetes reflects the growing regulatory consensus that the lack of a preclinical signal and a lack of cardiovascular events in early trials are not sufficient evidence to demonstrate the cardiovascular safety of a new agent. As such, the guidance recommends that Phase II/III programmes must be adequately powered to demonstrate cardiovascular safety compared to current therapies. As assessment of cardiovascular safety has become such a priority, the FDA recommends that pharmaceutical companies establish an independent cardiovascular endpoints committee to prospectively adjudicate, in a blinded fashion,

cardiovascular events during all Phase II and Phase III trials. Further, it is recommended that ‘to obtain sufficient endpoints to allow meaningful estimate of risk, the phase II and phase III programs should include patients at higher risk of cardiovascular events, such as patients with relatively advanced disease, elderly patients and patients with some degree of renal impairment.’ The FDA guidance document makes explicit recommendations about the statistical tests which will be required to demonstrate cardiovascular safety, and provides estimates of the number of subjects required for these safety studies. It is estimated that the late-phase trials to demonstrate cardiovascular safety will require upwards of 2500 subjects, with at least 1300-1500 subjects exposed to investigational product for one year or more, and at least 300 subjects exposed for 18 months or more. This is a marked contrast to the far smaller size and shorter duration of previous trials for new anti-diabetic agents.

Centralisation of ECGs
Part of the assessment of a new drug's short- and long-term cardiovascular effects can be carried out by performing serial electrocardiograms (ECGs). However, the quality of the data generated by a test is only as good as the quality of the interpretation of the test itself. In order to ensure the value of ECG data for evaluating cardiovascular safety, sponsors are encouraged to use a centralised approach to their ECG programme in order to achieve a standardised and consistent database. When a decentralised model is employed, ECGs are performed at the investigator sites using local ECG machines, which may be of many different makes and models. As a result, the automated measurements and interpretations may be very inconsistent

due to different types of instruments using a variety of different computer algorithms for calculations. In contrast, a centralised approach overcomes this inconsistency by digitally collecting high-quality data in a standardised format for assessment, with the use of consistent and validated systems. All interval duration measurements (IDMs) are assessed by a qualified individual, and every ECG is evaluated by a qualified cardiologist who is trained to follow standardised procedures which are continually validated through a quality control programme. As a consequence, more consistent and cleaner data will be generated. Additionally, centralisation facilitates proactive data monitoring and tracking, with demography and missing visits noted automatically, thereby enabling the collection of valuable data as studies progress. Accurate and comprehensive capture of cardiovascular events requires an instrument that can detect even the most inconspicuous of indicators. For example, myocardial infarction (MI) does

not always have a classic presentation: as many as 40% of MIs are "silent", and are associated with either no symptoms at all, or with atypical symptoms which are not recognised by the patient. These "silent" MIs may be detected on the serial ECGs collected during the trial. The capture of these otherwise unrecognised events may help to reduce the duration and cost of these large cardiovascular safety trials. However, the capture of this data requires that ECGs be evaluated in a consistent manner, such that analysis of the database can reliably identify these subjects for adjudication by the cardiovascular events committee. Decentralised ECG reading, with interpretations generated by ECG machines or by a wide variety of non-cardiologist physicians, may yield databases with so much noise that new events cannot be reliably resolved. Another advantage of the use of a centralised cardiac safety core lab is the enhanced availability of ECG

data for review by the sponsor and cardiovascular review committee. When ECGs are analysed locally, the paper ECG remains at the site. In contrast, a centralised core lab stores all of the ECG data in a central repository, and can provide online access to ECG data via a centralised portal. The ability to evaluate data across a specific patient and across all patients globally may be invaluable.

Cost-Effective Compliance
Upon its initial release, the FDA Guidance on the development of treatment for type 2 diabetes produced some controversy. The recommendations for extended trial periods with higher numbers of subjects translate into increased expense for sponsors. The use of a centralised cardiac safety core lab will improve the quality of ECG data generated, but is often thought to be more costly than decentralised ECG reading. In reality, however, the use of a core lab may actually be more cost-effective than having multiple individual sites perform


Drug Discovery, Development & Delivery ECG evaluations. Contracting with a core lab reduces fees paid to each site for technical support and ECG reading (often by unskilled readers). Additionally, by eliminating errors in collection and transcription of ECG data, sponsors can minimise the amount of retesting that must be carried out. In addition, recent technological innovations have led to the introduction of new, highly compact ECG instrumentation, providing the same industry-leading performance as conventional systems at a lower cost, and making a centralised approach easier to implement. In a recent comparison of cost compiled by a leading provider of technology services, it was shown that the use of centralised ECG provides an

overall cost reduction of 34.6%.3

Conclusion
In order to comply with the FDA Guidance on the development of new anti-diabetic drugs, sponsors must now demonstrate that a new agent does not increase cardiovascular events. It is recommended that trial endpoints be clearly designed, that independent cardiovascular endpoint committees be established, and that sponsors prepare to enrol higher-risk subjects in longer and larger cardiovascular safety trials. Trial periods should be extended up to two years, and trials must be adequately powered to detect the requisite number of cardiovascular events in order to

satisfy the statistical requirements outlined in the FDA guidance. To comply with these recommendations and avoid inconsistency, inaccuracy and unnecessary extra cost, the use of centralised ECGs is crucial. As a result of centralised ECG collection and evaluation, data management quality and consistency are improved. With improved quality, greater consistency and ultimately more accurate data, false positive and negative ECG findings can be avoided. The overall duration, size and cost of the drug development programme can be decreased, while still ensuring compliance with the new, more stringent regulatory requirements.

References
1. International Diabetes Federation, unode=013B84FE-189A-4A80-84F8-C994CAFB17E6
2. Guidance for Industry – Diabetes Mellitus – Evaluating Cardiovascular Risk in New Anti-diabetic Therapies to Treat Type 2 Diabetes, U.S. Department of Health and Human Services, Food and Drug Administration, cder/guidance/index.htm
3. “Mapping Adoption of Centralized Cardiac Safety Assessment”, Tufts Center for the Study of Drug Development: Feb 2010, http://www.

Dr. Kleiman is a board-certified cardiologist and cardiac electrophysiologist who has performed research in both basic cellular electrophysiology and clinical electrophysiology. Following completion of his training at the University of Pennsylvania, Dr. Kleiman was a member of a busy cardiology practice for a dozen years. He began working in cardiac safety as a part-time consultant in 1995 and joined ERT full time in 2003. Dr. Kleiman is currently ERT's Vice President, Global Cardiology, and is responsible for the conduct of ERT's cardiology group as well as participating in ERT's consulting group. Email:



Clinical Research

Application of Toxicogenomics in Safety Assessment

The cost of developing new, active and safe drugs is increasingly high due to the high attrition rate. Unexpected toxicity can be observed at any time during drug development or after marketing approval, leading to the failure of the compound or its withdrawal from the market. Many drugs fail in late clinical development, when up to 90% of the development cost has been incurred, putting the pharmaceutical company at financial risk. Toxicity is a significant contributor to the high attrition rate of drug failure, with approximately 30% of all new drug candidates being terminated because of unexpected animal toxicity profiles and/or side-effects in clinical studies. Traditional toxicology studies focus on phenotypic changes in an organism that result from exposure to the drug; however, this approach does


not address the cellular or molecular changes that lead to the phenotypic changes. The need for better and more robust scientific tools to predict toxicity is therefore critical. In the past 20 years, new technologies have emerged that have informed current approaches and are leading to novel predictive methods for studying disease risk. An increased understanding of the mode of action, and the use of scientific tools to predict toxicity, is expected to reduce the attrition rate, with a resultant decrease in the cost of developing new drugs. In fact, many large pharmaceutical companies have begun using improved model systems for predicting potential drug toxicity, both to decrease the rate of drug-related adverse reactions and to reduce attrition rates.

One of the rapidly growing scientific disciplines that can provide insight into the mechanism of action and enable the development of targeted cellular assays is toxicogenomics: the application of global mRNA, protein and metabolite analysis technologies to study the effects of test articles on organisms. Observing the patterns of altered molecular expression caused by specific exposures can reveal how toxicants act and cause disease. Identification of toxicity pathways, and the development of targeted assays to systematically assess potential modes of action, will allow a thorough safety assessment. The promise of this new technology is that it can be used to generate data on a large number of compounds and exposure scenarios, which will lead


to the development of an extraordinary database that can be used to guide future research, improve the drug development process, and aid in regulatory decisions. Toxicogenomics is a promising tool and has been applied in several areas of safety assessment, but in the face of rapid technological change it is impractical to predict all of the opportunities and challenges that could arise. Overall, there are high expectations for toxicogenomics to predict potential drug toxicity, better assess toxicity, and reduce attrition rates.

Ali Said Faqi, DVM, PhD, DABT, is Senior Director of Developmental and Reproductive Toxicology and a Senior Principal Study Director at MPI Research. He received his PhD from the University of Leipzig in Germany in 1995, and his DVM from Somali National University. He earned a diploma of specialisation in Experimental Pharmacology from the University of Milan in Italy, and was a postdoctoral fellow at the Institute of Clinical Pharmacology and Toxicology at the Free University of Berlin, Germany, from 1996 to 1998. Prior to joining MPI Research, Dr Faqi was a Senior Scientist at Allergan Pharmaceuticals in Irvine, California, and a Research Toxicologist at IIT Research Institute in Chicago. He is a Diplomate of the American Board of Toxicology (DABT) and has served on the Editorial Board of the journal Reproductive Toxicology and the Board of Scientific Counselors (BOSC) for Computational Toxicology at the United States Environmental Protection Agency (US EPA). He is a past chairman of the membership committee of the Teratology Society and a past president of the Michigan Chapter of the Society of Toxicology. Dr Faqi is a visiting professor of Pharmacology and Toxicology at the University of Palermo, Italy. He has published extensively in the field of developmental and reproductive toxicology. Email:

Company profile

Cardio Analytics was founded by David Morris and Jeffrey Batson in 1994. Ian Jarvis was the company's accountant on incorporation.
• David Morris: previously Senior Chief Cardiac Clinical Scientific Officer/Cardiology Manager at Derriford Hospital, Plymouth, UK
• Jeffrey Batson: previously Chief Cardiac Clinical Scientific Officer at Derriford Hospital, Plymouth, UK
• Ian Jarvis: Chartered Accountant; joined the Board of Directors in September 1999
The primary objective of Cardio Analytics was to provide an easy-access, high-quality cardiac investigation service to General Practitioners (GPs), local community hospitals and Clinical Research Organisations (CROs). Cardio Analytics soon became a major service provider in cardiac monitoring and cardiac data analysis for the global clinical trial industry. As an independent and privately owned company, Cardio Analytics is uniquely positioned within the cardiac investigative monitoring industry to ensure clients receive the highest standards of performance to meet their cardiovascular safety needs. The wide-ranging services provided by Cardio Analytics are specifically tailored to the ever-changing individual needs of our clients at competitive prices, and we believe they will not only meet but far exceed your expectations. All data analysis investigations at Cardio Analytics are undertaken with full operator interaction and are conducted by a professional, highly dedicated and skilled team, fully qualified in every aspect of their role. Our pharmaceutical clients are therefore assured of consistent accuracy throughout our investigations and all data reporting. The clinical trial industry is a highly quality-dependent, time-sensitive market. Cardio Analytics has proven success providing measurement and interpretation for 12-lead ECG, 12-lead and 3-lead Holter monitoring, echocardiography and ambulatory blood pressure monitoring.

ECG Service
ECG Measurement & Analysis
Cardio Analytics utilises the very latest in digital ECG analysis and measurement software, and was one of the first ECG service providers to utilise the MUSE CV for ECG analysis. Single or multiple Cardiac Data Analysts (CDAs), who specialise in electrocardiology measurement, are assigned to a clinical study and work to standardised reporting and measurement criteria specifically compiled for, and unique to, each study. All ECG measurements can receive a second review from one of our in-house cardiologists. To provide and maintain the highest level of accuracy in ECG reporting, all of Cardio Analytics' analysts are continually assessed and are required to perform regular, statistically proven "Mean Intra-Operator Variability Record Testing" procedures.
ECG Databasing
ECGs are transmitted digitally to the Cardio Analytics MUSE CV System. Full demographic data cleaning is then performed, with each ECG checked for the accuracy of all demographic data entry and the quality and accuracy of the recording. Notification of errors and of any data correction performed is communicated to the study site as required. The ECG is then digitally databased within the MUSE CV System for possible later interpretation, if required for the development of the compound.


Thorough QT Studies
Cardio Analytics has striven tirelessly over the last sixteen years to achieve a worldwide reputation for excellence in QT measurement and analysis, and has supported the approval of a number of "blockbuster" medicines.

Holter Service
Holter Monitoring
Cardio Analytics offers both 3-lead and 12-lead Holter monitoring services. The Mortara 12-lead Holter allows continuous 30-hour, 12-lead ECG data collection, offering the potential for 12-lead ECG extractions for subsequent QT evaluation at protocol-specific time points. 12-lead Holter monitoring can capture every heartbeat for a more comprehensive view of cardiac activity over a 24-hour or 48-hour period, and up to 7 days with a 3-lead recorder.
Report Generation
Every report is checked to make sure it meets our exacting standards. A full analyst's report is then produced in accordance with Cardio Analytics' standard reporting criteria (or client-specific criteria). Utilising such manually reviewed computer technology, Cardio Analytics' analysts are able to generate highly accurate and consistent measurements and reports with a full audit trail. Standard arrhythmia analysis, heart rate variability, signal averaging, spectral analysis, ST segment and QT interval analysis can all be performed.



Customer Support
Cardio Analytics' unsurpassed customer support enables studies to run smoothly from initial client contact to final data lock. We offer a wide range of training, from on-site demonstrations to training presentations and teleconferences, to ensure effective data collection and adherence to procedures throughout the duration of the study.
Project Management Team
Our Project Management Team is the focal point of communication and interface with the client, providing a personalised service to the study sites, dealing with all study issues and ensuring that each element of the study, from start-up to final release of data, is handled smoothly and efficiently. A dedicated project manager is assigned to each individual study. Cardio Analytics can provide complete training and system support for study sites. Our project managers ensure the delivery and set-up of equipment meets study deadlines, and coordinate all activities to ensure all study requirements and timelines are met with the highest confidence.
Information Technology & Data Management Department
Our highly experienced and dedicated in-house IT & Data Management department is totally committed to ensuring that the study data delivered to our clients is of the highest quality and is designed to be as flexible as possible. Databases, reports, formats and schedules can be tailored exactly to our clients' demands. Data can be supplied in a wide variety of formats, and at the frequency you require to meet your study timelines, including transfer of study data direct to the FDA ECG Warehouse.

Quality Control Department
Cardio Analytics' in-house Quality Control Department follows strict regulatory guidelines, ensuring the consistently highest quality data possible. Regular internal process audits and client audits/evaluations are conducted to ensure the effectiveness of our quality management system and SOP adherence. All of our standard operating procedures, as well as our computer systems, are validated and maintained under a controlled change system until obsolete or no longer required.
Equipment, Maintenance & Logistics Department
Cardio Analytics has a dedicated in-house Equipment, Maintenance and Logistics department, giving our clients the assurance that each machine required for their study is fully serviced and complies with the necessary regulatory requirements. Our large equipment inventory of standard 12-lead ECG machines, 3-lead and 12-lead Holter recorders and ABP monitors means we can offer flexible charging options for both short- and long-term study rental needs.
Mission Statement
Cardio Analytics' mission is to provide the highest quality cardiac safety services to support clinical trials, through continued development of bespoke applications and processes. Being an independent organisation provides Cardio Analytics with the flexibility needed to respond to a continually changing global market with unique strategies, commitment, passion and consistency. If you would like any further information, or to discuss Cardio Analytics' services in more detail, please contact Kerri Mellor, Senior Business Development Associate, Tel: +44 (0) 1752 201144 or email:


Clinical Research

Developing Innovative Cancer Medicines of the Future via Patient-Relevant Models

There is currently a high attrition rate for new cancer drugs entering clinical trials, because pre-clinical models do not accurately predict efficacy and/or toxicity. In a recent study by Cancer Research UK, it was shown that between 1995 and 2007, 77% of 800 cancer drugs entering Phase I clinical trials failed to reach the market (1). As a result, there is a growing need within the pharmaceutical and biotech industry for more patient-relevant and predictive cancer modelling for anti-cancer drug development. This requirement is particularly important as a new generation of molecular-targeted cancer drugs, with fewer potential side-effects, is coming through the drug discovery pipeline. In response, new technology is now being developed to create patient-relevant models for pre-clinical efficacy assessment. It is important to evaluate the innovative cancer medicines of the future in these patient-relevant models to ensure they are predictive of how the drug is likely to behave in clinical trials with cancer patients. This article provides examples of two innovations that have been specifically developed to fast-track new agents into the clinic. The advantages of these innovations will be explored, detailing how they are superior to traditional cancer models in that they provide maximum data over a short timeframe, offer associated cost benefits, and aid in successfully attaining regulatory approval. The need for specialist service providers is driven by increasing regulatory and scientific rigour and the pressure to reduce ever-increasing cancer drug attrition rates and R&D costs through outsourcing, thereby identifying potential 'lead candidates' as early as possible.

The Challenge
The high attrition rate for new drugs entering clinical trials can be related to a variety of factors.
A key problem is that the large majority of major pharmaceutical and biotech companies

developing products in oncology use only a limited and basic portfolio of cancer models, most of which are based on the use of animal cells or tissue. These basic models typically use cells that are not relevant to the human situation, and models that either do not allow continuous measurement of response or do not optimally model the biology of the cancer. The result is that new cancer drugs are not challenged to the same degree as in the patient, and a number of false positive drugs have entered clinical trials as a consequence. For effective cancer drug development it is crucial to maximise pre-clinical information and ensure the drug is challenged in models reflective of the patient, many of whom have advanced cancer and may have lost responsiveness to existing standards of care. The global cancer market was forecast to grow to $53.1 billion by 2009 (2). Based on an estimated 18% of domestic sales spent on R&D (3), the potential R&D spend on anti-cancer therapies is currently $7 billion, with approximately one-quarter of this spend devoted to pre-clinical R&D ($1.75 billion). An estimated 25% of this is outsourced, with outsourcing predicted to expand significantly further by 2015. Although human tissue research is not currently something which the pharma and biotech community is required to do by law, there is increasing demand from regulators for non-clinical human safety data. It is interesting to note that oncology, one of the areas of drug development with the highest rate of attrition, is also an area in which animal models are not very predictive of the true human pathophysiology. For example, a large majority of pharmaceutical companies continue to use basic xenograft models for oncology testing. In this model, a tumour cell line that may have little relevance to the patient's tumour is injected into a non-human model.
This method is vulnerable to flaws due to the immunology of the non-human model not resembling the immunology of the human target, and the artificial location of the tumour offering

no real resemblance to what happens in vivo during tumorigenesis. The scientific community must work with industry to advise on study design to ensure that results are patient-relevant and meaningful.

Innovative New Technology
The demand for pharmaceutical companies to meet their business objectives, alongside the demand from consumers to contain the cost of prescription medicines, is forcing the industry to develop new methods to increase efficiency and reduce attrition rates. Innovative new approaches have targeted the urgent requirement to improve the efficacy of cancer modelling by expanding the utility of reporter systems and applying them to complex multi-cellular in vitro- and in vivo-based systems. Through the development of bio-imaging techniques applied to patient-relevant multi-cellular three-dimensional models, disadvantages of previous models, including aspects such as maintaining individual cell growth and hypoxia, can be overcome. An established approach is the derivation of cancer models from cells taken from the patient's tumour tissue, to evaluate drugs in patient-relevant models. This approach is being further developed by PRECOS, under full ethical permission, by modelling the tumour micro-environment in three-dimensional multi-cellular systems which challenge the new therapeutic entity to the greatest degree by incorporating relevant cell types, including those associated with the tumour stroma. This contrasts with evaluation in monolayer single-cell cultures, which are standard within the pharma and biotech industries and, whilst predictive of the biology and mechanism of action, are not predictive of clinical efficacy. These 3D models, with established human stromal-epithelial interactions, can then be transplanted in vivo. Real-time imaging allows continuous temporal information from a single

experimental model, resulting in improved efficiency and increased scientific information. In addition, lower costs are incurred due to more robust statistical analysis from fewer experimental repetitions. For example, it allows the optimal timing of drug administration to be determined based on the micro-environmental signals measured. The technology also limits the need for additional monitoring and post-test procedures such as histology, shortening timeframes and thereby reducing cost. Use of medium-throughput three-dimensional in vitro model screens may reduce the need for larger-scale in vivo models, due to their capacity for modelling the tumour micro-environment more effectively, particularly in the discovery phase, resulting in a more streamlined drug development process. Further added value is provided through the temporal analysis of cancer development in these models, delivering valuable insight in determining the efficacy of the new anti-cancer agent. A second new approach allows the monitoring of the tumour micro-environment and biological changes within cells in real time in the presence of a cancer drug. This approach is facilitated by innovative new technology involving bioluminescent/fluorescent biological reporters. These biological reporters are expressed in human cancer cells so that they emit light or fluorescence in response to different environmental stimuli, and in response to changes and progression of the disease under drug treatment. For example, it is possible to monitor the presence of hypoxia (low oxygen levels), blood vessel formation, cell proliferation and cell death.
In addition, the technology can also evaluate genes up-regulated in response to radiotherapy or similar insult, intra-cellular signalling activated by ligand binding to cell surface receptors, and cells with cancer stem cell-like properties or undergoing epithelial-mesenchymal transition, a phenotype linked to cell invasion and metastasis. These reporter

systems therefore cover a number of key tumour properties, including predictivity of secondary spread and resistance to standard-of-care treatments. These approaches have been developed within an academic setting, and this academic link and pipeline ensures the technology remains 'cutting edge'. Overall, both of the discussed innovative approaches offer the advantage of reducing the need for additional monitoring, post-test procedures and expensive supporting technology. In addition, these new approaches have the potential to deliver maximum data over a short timeframe with associated cost benefits and, most importantly, to enable new cancer drugs to reach patients sooner. Products that have been developed with the use of these innovations include CCK-2 receptor antagonists (reached Phase II/III in pancreatic cancer), the G17DT immunogen (reached Phase III in pancreatic cancer), a Her-2 ligand trap (pre-clinical phase), a c-met inhibitor (Phase I/II) and an HDAC inhibitor (Phase II/III).

Conclusion
There is currently a high attrition rate for new oncology drugs entering clinical trials, due to flawed methods of toxicity and efficacy evaluation providing inconsistent and inaccurate results. Innovative new approaches are being developed to overcome these challenges through the creation of bio-imaging techniques which can be applied to patient-relevant three-dimensional models both in vitro and in vivo. These approaches have involved investment in designing reporter systems, performing molecular biological techniques to preserve the characteristics of the cancer cells, and testing to ensure that the systems still respond to standard-of-care cancer agents in a manner predictive of the patient, thus providing robust validation and documentation for each of the models. As a result of these innovative developments, new anti-cancer agents feed into more robust, complex and clinically predictive models.
The need for specialist service providers is driven by increasing regulatory and scientific rigour and the pressure to reduce everincreasing cancer drug attrition rates and R&D costs, in order to identify

potential 'lead candidates' as early as possible. As a result, it is now becoming common practice to outsource this activity to specialised service providers. Advantages of such outsourcing include time- and cost-efficiency, in addition to independent validation of 'in-house' data, which generates a stronger pre-clinical package for regulatory submission by pharma and biotech organisations. Innovative technologies developed by specialist providers can quickly and cost-effectively be applied across all cancer types, benefiting the wider scientific community and, vitally, enabling new cancer drugs to reach patients sooner.

References
1. Fricker et al., Lancet Oncology (2008).
2. The Cancer Market Outlook to 2009, October 2004.
3. PhRMA News Release, January 22, 2004.

Professor Sue Watson: Deputy Chairman, Chief Science Officer & Founder of PRECOS
Sue is Head of Pre-Clinical Oncology at the University of Nottingham and the principal founder of PRECOS Ltd. She has worked in the field of cancer pharmacology for 23 years, with more than 85 peer-reviewed publications. She is currently part of an NCRI committee tasked with updating and rewriting national guidelines for in vivo cancer research; a member of the CRUK Discovery Committee, the Yorkshire Cancer Research Scientific Committee and the European Association of Cancer Research Council; part of a European Framework Programme (FP7) consortium tasked with deriving new cancer drugs from Chinese herbal medicines; and ethical advisor for a second, cancer hypoxia-focused FP7 programme on in vivo research. She was awarded the National Research Medal from the British Society of Gastroenterology in 2002 for her contributions to gastro-intestinal cancer research. Email:


Clinical Research

Basic Biostats for Clinical Research
Multiple Comparisons in Drug Development, Part II

Multiplicity can have a significant impact on clinical trials: in the study design, the analysis of the data, and the interpretation of the results. It can show up in several ways:
1. Multiple comparisons among the treatment groups;
2. Comparisons of different endpoints;
3. Comparisons of endpoints at different time points;
4. Interim analyses;
5. At the programme level.
The first three concepts were discussed in Part I of this series [1]. This paper covers the remaining two.

Interim Analyses
Let us set the stage with a simplified version of multiple testing. Consider a study where we wish to show that white talc reduces headaches more effectively than yellow talc, with both forms of talc encapsulated and not visible to the study subjects. Furthermore, we will study subjects in groups of 20, with 10 randomised to white and 10 to yellow talc, and we will continue this study until we show that one of these colours of talc is superior at reducing pain. For each cohort, we will test whether either colour is superior to the other in reducing pain, testing the null hypothesis of equal ability with a Type I error rate of 5%. Will this study ever end? At some point, a test of white versus yellow talc will show statistical significance, and hence the study will end. Let us look at this study in some more detail. For the first cohort of 20, the probability of declaring one colour of talc superior is 5%, the Type I error rate; hence, we have a 5% chance that the study will end with the first cohort. More than likely, we will need to proceed to the second cohort of 20, where again the probability of declaring one colour superior to the other is 5%; the probability of the study ending on or before this cohort is 9.8%, as shown in Table 1's "Independent Cohort" column.
Again, most likely we shall proceed to the third cohort, where the probability of ending on or before this

cohort is now 14.3%. This can continue as long as funds are available. Looking at Table 1, we see that the probability increases as the number of cohorts increases, surpassing 50% by cohort 15. In other words, the probability of this study having more than 15 cohorts is no more than 46.3%. Now let us suppose that the study stopped on the 10th cohort with a p-value less than 0.05. Do we then declare that one colour or the other is better at reducing pain? Most individuals would likely be hesitant to make such a declaration; certainly, statisticians would argue against it. Why? The probability of stopping on or before the 10th cohort is 40.1%. Given the design, we know that the study has to stop at some point, and the point at which it did stop is entirely consistent with the null hypothesis. The reason that the study eventually stops is the multiplicity of testing. All studies that have interim analyses need to deal

with this issue in one way or another. Now consider the same study setup, but where we pool all data collected for the analysis. Does this change the results? From Table 1's "Cumulative Cohorts" column, we see that again the Type I error rate exceeds 5% and increases with the number of interim analyses. But also notice that the rate of increase is smaller than in the independent cohort case, since the use of prior data acts like a shock absorber. We see that testing the same hypothesis repeatedly induces an increase in the Type I error rate. The differences in Table 1 between the independent cohort column and the cumulative cohorts column lie in the correlation of the data being tested at each cohort. In the independent cohort column, the analyses are always independent cohort-to-cohort. In the cumulative cohorts column, the tests are correlated, since the data used in the analysis of one cohort include the

Table 1: Maximum probability of committing a Type I error when each hypothesis is tested at α = 0.05

[Columns: Number of Cohorts; Independent Cohort; Cumulative Cohorts; Adjusted p-value for Cumulative Cohorts to control the Type I error to 5% overall. Table values omitted.]

Derived from Shao and Feng [2], Table 1, using a t-distribution approximation and assuming 10 subjects per treatment arm per cohort. * For 6 or more cohorts, the numbers appear equal due to rounding.
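The "Independent Cohort" column of Table 1 can be reproduced from first principles: with k independent cohorts each tested at α = 0.05, the chance of at least one false positive is 1 − (1 − 0.05)^k. A minimal sketch (in Python, purely for illustration; not part of the original article) recovers the figures quoted in the text: 9.8% at cohort 2, 14.3% at cohort 3, 40.1% at cohort 10, and just over 50% at cohort 15.

```python
# Maximum probability of at least one Type I error after k independent
# cohort tests, each performed at a nominal alpha of 0.05:
#   P(at least one false positive) = 1 - (1 - alpha)^k
def familywise_error(k, alpha=0.05):
    return 1.0 - (1.0 - alpha) ** k

for k in (1, 2, 3, 10, 15):
    print(f"{k:2d} cohorts: {100 * familywise_error(k):.1f}%")
```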


same data from the prior cohorts. For instance, for cohort 2, we use the data in the test for cohort 1 and also the new, independent data for cohort 2; the correlation in the data is 1/2. Similarly, the data used for testing cohort 3 also include the data used for testing cohort 2; hence, the correlation is 2/3. This correlation makes the computation of the Type I error more difficult to perform and reduces the Type I error inflation, but conceptually the problem is the same as for independent cohort testing. What would happen if we were to "adjust" the p-values by dividing out the Type I error inflation factor documented in Table 1? For cohort 2, instead of using a p-value cutoff of 0.05 as we usually do, we would use 0.0257 for both tests. The test of cohort 1 is now more stringent, but this is necessary to reduce the cumulative Type I error rate for the tests of cohorts 1 and 2 down to the nominal 0.05. However, if we are going to have more than two cohorts, the subsequent tests result in a Type I error rate that again exceeds 0.05. We must adjust the rate based on the number of tests we are going to perform if we are to succeed at controlling the Type I error rate. The α values needed to maintain an overall 5% Type I error rate under the cumulative cohorts analysis can be found in the fourth column of Table 1. We should remain aware that we can always control the Type I error rate by dividing the nominal error rate of 0.05 by some factor based on the number of tests to be performed, but the price we pay is decreased power. If the power is to be preserved, we must increase the sample size. Each time we perform a test on a cohort, we say that we have "looked" at the data. The literature will sometimes refer to these analyses as "interim looks" or "interim analyses".
As we have seen, the greater the number of interim looks, the more we must reduce the cutoff for the p-value of the tests in order to declare a statistically significant difference between the treatments. However, it turns out that we need not use the same divisor for each interim look. If we would really like to preserve more power for the final look, we can reduce the α spent in the initial looks. How we structure the Type I error rates among the interim looks is referred to

as an α-spending rule. The idea is that there is a fixed amount of α (α denotes the Type I error rate) in a study, typically 0.05, and we must spend some of it on each interim look. If we think that we may have a very strong effect, we may take a chance and spend more of the α early, in the hope of ending the study early. In contrast, if we think we may need the full contingent of subjects to establish statistical significance, we will probably want to preserve most of the α for the final analysis, and hence not spend much early on. There are numerous α-spending rules, but the most commonly employed are those attributable to Pocock [3] and O'Brien and Fleming [4]. The literature in this area can sometimes be confusing. These bounds are sometimes referred to as Lan-DeMets α-spending rules. Lan and DeMets [5] came up with an efficient algorithm for calculating the α-spending rules based on the different correlations among the analyses. Recall that the correlations are induced because we use the same data in different analyses.

Cohorts of 1
The typical practice in clinical trial work is to set up a series of cohorts, which may or may not be equal in size, and to perform an interim analysis once we have collected the data on each cohort. But we can take this down in size so that each subject is their own cohort. The α-spending approach of Pocock and others gets a bit unwieldy in this context. Fortunately, statistical methodology has existed to solve this problem since the 1940s: the sequential trial. Interestingly, sequential trials are actually easier to set up mathematically than group cohort analyses: we simply adjust the Type I error rate to be log(α/(1−α)), and the power is similarly adjusted.

Blinded versus Unblinded Looks
Not all interim looks (analyses) have the same effect. Up to now, we have been performing interim analyses in which we test the hypothesis that the treatments are equal. To do this test, we must be unblinded.
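The Pocock and O'Brien-Fleming rules mentioned above are often implemented through Lan-DeMets spending functions, which give the cumulative α spent as a function of the information fraction t (the share of the planned data observed so far). The sketch below (Python, illustrative only; the exact forms used in any given trial may differ) uses the standard Lan-DeMets approximations: O'Brien-Fleming-type spending 2(1 − Φ(z₀.₀₂₅/√t)) and Pocock-type spending α·ln(1 + (e − 1)t).

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_025 = 1.959964  # two-sided 5% critical value: norm_cdf(Z_025) ~ 0.975

def of_spent(t):
    # O'Brien-Fleming-type Lan-DeMets spending function (overall alpha 0.05):
    # spends almost nothing early, saving nearly all alpha for the final look
    return 2.0 * (1.0 - norm_cdf(Z_025 / math.sqrt(t)))

def pocock_spent(t, alpha=0.05):
    # Pocock-type Lan-DeMets spending function: spends alpha more evenly
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

for t in (0.25, 0.50, 0.75, 1.00):
    print(f"t={t:.2f}  O'Brien-Fleming={of_spent(t):.5f}  Pocock={pocock_spent(t):.5f}")
```

At t = 1 both functions spend the full 0.05; at t = 0.5 the O'Brien-Fleming-type rule has spent only about 0.006, while the Pocock-type rule has spent about 0.031, which is why the former preserves more power for the final analysis.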
Other interim analyses are possible on blinded data. For instance, suppose that we just want to estimate the standard deviation so that we can re-estimate the sample size, if necessary. This is a common

problem in practice, particularly when the variability of the endpoint has not previously been observed in the population being studied. As an example, Dr Reeve once worked on a Phase II asthma study. The previously studied endpoint was to serve as a secondary endpoint in the current study, and another endpoint, based on a validated questionnaire, was to be used as the primary endpoint. The questionnaire had been validated in a population of about 30 asthmatic patients in England, whereas the study was to be conducted on 200 asthmatic patients in Australia. Even if the population had been the same, the uncertainty in the variability of the instrument was still significant, around 40% of the standard deviation in this case; see Table 2. Note that we must have a sample size of greater than 200 to be able to say with confidence that we know the standard deviation to within 10%. Since the variability of the primary endpoint is one of the most important determinants of a successful trial design, accepting this level of uncertainty in the amount of variability is a gamble we would rather not take. In this case, sample size re-estimation during the trial was important to reduce the risk of a failed trial. The statistical ramifications of doing an unblinded analysis are controversial, and different statisticians may have different views on the topic. What causes the Type I error rate to inflate? It is the fact that we may act on the results and stop, or not stop, the study early if we see a statistically significant result. Blinding versus unblinding is not the statistical issue; the issue is that we now have an opportunity to intervene in the study conduct. Most regulators believe that if an organisation running a trial observes a statistically significant result at an interim analysis, the sponsor will almost surely want to act on that information.
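The sample-size figures in this example can be sanity-checked with a standard large-sample approximation (a sketch, not the exact calculation behind the article's Table 2): the relative 95% margin of error of a sample standard deviation based on n observations is roughly 1.96/√(2(n − 1)). For n around 200 this reaches about 10%, matching the text's claim; for n = 30 the symmetric approximation gives about ±26%, and the exact chi-square interval is asymmetric and wider, in the direction of the article's "around 40%" figure.

```python
import math

def sd_rel_precision(n, z=1.96):
    # Approximate 95% relative margin of error of a sample standard
    # deviation from n observations: z / sqrt(2 * (n - 1)).
    # Large-sample normal approximation; a rough sketch, not exact.
    return z / math.sqrt(2.0 * (n - 1))

for n in (30, 100, 200, 250):
    print(f"n={n:3d}: SD known only to within ~{100 * sd_rel_precision(n):.0f}%")
```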
And we can run the tests only if the results are unblinded; in fact, unblinding is almost guaranteed to induce calls for an analysis of the data. Hence, most regulators will almost always ask for a Type I error penalty if any unblinded analyses are to be performed. For blinded analyses, hypothesis testing is impossible, and therefore the need to suffer a Type I error penalty is reduced. This does not imply that Type I error rates are unaffected by a blinded look, particularly for a sample size adjustment look. However, as discussed in the commentary by Shih6, the effect is typically rather small.

Multiplicity and Regulatory Filings
The desire to reduce the deleterious effects of multiplicity in new drug applications (NDAs) shows up in the regulators' requirement for two well-controlled, independent trials that establish efficacy and safety. Individuals new to the industry are often puzzled by this requirement for two studies. Why not one study with a well-established Type I error rate? After all, it can be argued that this compound will be presented to the agency only once, and therefore we have control of the probability of declaring the compound efficacious when it is in fact not, particularly since the agency will also have all of the Phase I and II data to support that conclusion. To understand this issue, we need to look at the process from the agency's perspective. From the sponsor's side, there is only one NDA, and multiplicity is not an issue. From the agency's perspective, they will observe multiple NDAs within a given year, and they wish to control the Type I error rate of their process. The issue is difficult, since not only do they not know the number of NDAs to expect in any given year, but they also have essentially an unlimited number of years to consider. Hence, a firm statistical solution for controlling the Type I error rate is not possible in this context; at best, the Type I error rate can be controlled approximately on a yearly basis.
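As a back-of-the-envelope illustration (my own sketch, not a calculation from the article), the evidential weight of the two-trial requirement is easy to quantify: if each confirmatory trial is tested at the conventional one-sided α = 0.025, a truly ineffective compound passes both independent trials only very rarely.

```python
# Joint false-positive probability for independent confirmatory trials,
# each tested at the conventional one-sided alpha = 0.025.
alpha = 0.025

one_trial = alpha        # 1 in 40 ineffective compounds would pass
two_trials = alpha ** 2  # 1 in 1600 would pass both

# A single trial carrying the same evidential weight would need roughly
# this more stringent significance level -- the "one confirmatory trial
# with more stringent Type I error rate" option mentioned in the text.
equivalent_single_alpha = two_trials

print(one_trial, two_trials)  # prints 0.025 0.000625
```

This multiplication only holds because the trials are independent; it is one reason regulators insist on two separately conducted studies rather than one study analysed twice.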
The approach is multipronged: require two confirmatory trials (or sometimes one confirmatory trial with more stringent Type I error rate to compensate); require secondary endpoints to support the primary endpoint; require an integrated summary of efficacy which takes into account all of the data generated in the drug development programme, and similarly an integrated summary of safety; and sometimes require Phase IV studies to address lingering questions. Within the sponsoring organisations, drug developers need to grapple with similar issues of multiplicity. The

Table 2: Uncertainty (95% confidence) in the population standard deviation given a sample standard deviation, based on a total sample size of n and two treatment groups, expressed as a percentage of the standard deviation.


[Table 2 body: "Uncertainty Range of Standard Deviation" for increasing sample sizes n; numeric columns not reproduced]
deleterious effect of multiplicity can cause significant financial pain. One manifestation of this is that clinical effects in Phase III tend to be smaller, on average, than clinical effects in Phase II. To see why this might be the case, let us consider the process by which we decide which compounds move into Phase III. Consider a series of compounds in the drug development pipeline, each with its own characteristic clinical effect versus placebo or standard of care. Only those that can be shown to be statistically significant (or that are predicted to be statistically significant in Phase III) can advance into Phase III.

We can characterise the variation in clinical effect among the compounds as belonging to some distribution, i.e., treat the clinical effect as a random variable, with each compound having a realisation of that random variable. By clinical effect, we mean the difference between the treated group and the placebo or standard of care group, in standard deviation units, i.e., the signal-to-noise ratio. On top of that distribution, we are estimating the clinical effect, and the estimate has variability of its own. This variability is caused by our reliance on data from clinical trials to estimate the clinical effect, and these trials are heterogeneous, as are the subjects in the trials. Even in an ideal world where all compounds in late-stage Phase II trials have the same clinical effect, the estimates of the clinical effect would vary from compound to compound. And which compounds are selected as Phase III candidates? The compounds with the largest estimated clinical effects, not necessarily the compounds with the

largest true clinical effects; see Figure 1. Hence, Phase III trials demonstrate smaller clinical effects than do Phase II trials: the multiplicity of analyses at the Phase II level generates the upwards bias in Phase II. Additionally, consider the case where several endpoints from the Phase II trials are being investigated for inclusion as endpoints for the Phase III trial. We can choose the endpoint with the largest clinical effect among the endpoints investigated, i.e., the endpoint predicted to have the best chance of a statistically significant effect. Due to multiplicity, its effect is likely to be overstated in the Phase II trial, with the result of a smaller clinical effect in the Phase III trial, a process known as regression to the mean.
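The selection effect described above can be demonstrated with a small simulation (an illustrative sketch with made-up distribution parameters, not data from the article): each of several Phase II compounds has a true standardised effect, the Phase II trial yields a noisy estimate, and the compound with the largest estimate advances. On average, the winning estimate overstates that compound's true effect, which is exactly the shrinkage we then observe in Phase III.

```python
import random

random.seed(1)

def select_winner(n_compounds=8, true_mean=0.3, true_sd=0.1, est_sd=0.15):
    """Simulate one Phase II portfolio: return (estimated effect,
    true effect) of the compound with the largest estimated effect."""
    compounds = []
    for _ in range(n_compounds):
        true_effect = random.gauss(true_mean, true_sd)    # compound-to-compound variation
        estimate = true_effect + random.gauss(0, est_sd)  # Phase II sampling noise
        compounds.append((estimate, true_effect))
    return max(compounds)  # advance the compound with the largest estimate

trials = [select_winner() for _ in range(5000)]
avg_estimate = sum(e for e, _ in trials) / len(trials)
avg_true = sum(t for _, t in trials) / len(trials)

# The winner's Phase II estimate is biased upwards relative to its own
# true effect -- the effect we should expect to reappear in Phase III.
print(round(avg_estimate, 3), round(avg_true, 3))
```

Note that the selected compounds are genuinely better than average (selection does work), but their Phase II estimates still systematically exceed their true effects.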

Russell Reeve, PhD, Associate Biostatistics Director, Center for Statistics in Drug Development, Quintiles; Adjunct Professor, Clinical Research Department, Campbell University; Adjunct Professor, Analytics Department, North Carolina State University.
J. Rick Turner, PhD, Senior Scientific Director, Cardiac Safety Services, Quintiles; Affiliate Clinical Associate Professor, University of Florida College of Pharmacy. Email:


Company profile

Company
Intana Bioscience, based in Martinsried, Germany, was founded in 2008 by Frank Becker and Stefan Hannus and received seed financing from BioM, the High-Tech Gründerfonds and Bayern Kapital. Intana uses Fluorescence Cross-Correlation Spectroscopy (FCCS), a fluorescence-spectroscopy-based approach, to analyze drug-target interactions with respect to affinity, rate constants and off-target effects. Customers of Intana take advantage of the fast assay development, the high success rate and assay conditions that resemble the physiological environment of the future drug. No purified proteins are required.

Technology
FCCS is a single-molecule-sensitive method that exploits the fluorescence fluctuations induced by the diffusion of labeled molecules, at low concentration, through a microscopic detection volume illuminated by two focused laser lines. When a fluorescently labeled particle enters this illuminated spot it is excited and emits photons, which are recorded by ultra-sensitive detectors. The recorded photon count trace carries information on the mobility parameters of the molecule, which can be extracted by autocorrelation of the signal. At higher concentrations, the average number of particles in the detection spot, and thus the exact concentration of fluorescent molecules, can be calculated.
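The autocorrelation step can be sketched in a few lines (a simplified illustration of the general FCS principle, not Intana's actual analysis code): for an ideal experiment, the amplitude of the normalized fluctuation autocorrelation at lag zero, G(0) = ⟨δF²⟩/⟨F⟩², is the reciprocal of the mean number of particles in the detection volume, which is how concentration falls out of the photon count trace.

```python
def autocorrelation(trace, max_lag):
    """Normalized fluorescence fluctuation autocorrelation
    G(tau) = <dF(t) dF(t+tau)> / <F>^2 for a photon count trace."""
    n = len(trace)
    mean = sum(trace) / n
    g = []
    for lag in range(max_lag + 1):
        cov = sum((trace[i] - mean) * (trace[i + lag] - mean)
                  for i in range(n - lag)) / (n - lag)
        g.append(cov / mean ** 2)
    return g

# For an ideal FCS trace, mean particle number N ~= 1 / G(0).
trace = [3, 5, 2, 6, 4, 3, 5, 4, 2, 6]  # toy photon counts per time bin
g = autocorrelation(trace, max_lag=3)
print(1 / g[0])  # prints 8.0 -- rough particle-number estimate
```

In a real instrument the decay of G(τ) with lag τ additionally encodes the diffusion time, which is where the mobility information mentioned above comes from.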

For fluorescence cross-correlation analysis, spectrally different fluorophores are used to label the two molecular species whose interaction is to be investigated. If the molecules are bound to each other, they diffuse through the confocal detection volume in a synchronized way and induce simultaneous fluctuations in both detection channels, giving rise to a positive cross-correlation readout. The information on the free and bound fractions of both interactors and the concentration of formed complexes can be used to calculate the dissociation constant by the law of mass action. FCCS yields precise information on particle concentration, the molecular brightness of the molecules, the diffusional speed of subspecies (bound and free) and their respective fractional contributions. Rate constants of binding events are determined by repeated measurements, and competition experiments allow affinity determination of unlabeled molecules. The FCCS technology is generically applicable to all molecules that can be labeled with a fluorophore and exhibit sufficient diffusional mobility to migrate through the detection volume. Working concentrations in the nanomolar range make the approach economical, and short data acquisition times allow high-throughput applications. Most importantly, protein purification is not necessary, because both interactors are labeled and non-fluorescent molecules are not recognized.
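Given the free and complexed concentrations read out by the correlation analysis, the dissociation constant follows directly from the law of mass action, K_D = [A][B]/[AB]. A minimal sketch with hypothetical concentrations (not Intana data):

```python
def dissociation_constant(free_a_nM, free_b_nM, complex_nM):
    """K_D = [A][B] / [AB], in nM, computed from the free and complexed
    concentrations as read off the auto- and cross-correlation amplitudes."""
    return free_a_nM * free_b_nM / complex_nM

# Hypothetical FCCS readout: 5 nM free target, 2 nM free ligand,
# 1 nM complex -> K_D = 10 nM.
kd = dissociation_constant(5.0, 2.0, 1.0)
print(kd)  # prints 10.0
```

Because all three quantities come from one measurement, no titration series is strictly required to estimate K_D, although repeated measurements improve precision.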

Basic Measuring Method: FCCS
To facilitate interaction analysis of a specific target protein (green dots) and an inhibitor (red dots) in cellular lysates, both interactors are labeled with spectrally distinct fluorophores. Targets are expressed as GFP fusion proteins and inhibitors are covalently linked to Cy5. After mixing, auto- and cross-correlation analyses yield the concentrations of free and complexed particles and allow binding and rate constants to be determined.




Applications
Based on its proprietary Fluorescence Cross-Correlation Spectroscopy (FCCS) approach, Intana offers profiling and screening services for companies in the pharmaceutical and life science industries with active drug discovery programs. At Intana we believe that proteins seldom act alone; rather, they depend on interactors and cofactors to unfold their functions. Consequently, assays and screens aiming to identify compounds acting on a designated target should be carried out in an environment that resembles physiological conditions. To meet these requirements we have employed the FCCS technology to characterize binding and rate constants of compound-target interactions in cellular lysates. The approach has successfully been applied to compound-target, protein-protein and protein-nucleic acid interactions and has been shown to be particularly useful in the context of kinases and kinase inhibitors. For this, the targets are expressed as fusions to autofluorescent proteins, and lysates comprising the labeled target are generated by standard protocols. Additionally, an interacting compound is labeled with a second, spectrally divergent dye. After incubation of the lysates with the labeled interactor at nanomolar concentrations, complex formation can be quantified using FCCS. The correlation analysis reveals the number of particles and the fractional contribution of bound and free particles in both channels. Additionally, interacting particles comprising both fluorophores are detected. Based on this

information, the dissociation constant as well as on- and off-rates can be calculated. Once an interaction has been established, unlabeled compounds can be tested in competition assays to identify molecules with affinity for the labeled target. Surprisingly short acquisition times suffice to yield reliable datasets and thus enable HTS applications. At less than one second per datapoint, we have recently screened a diverse compound library against a novel kinase target. Moreover, we have generated a library of kinase fusions covering the entire set of human protein kinases, including many pharmacologically relevant mutations. This library allows the affinity-based profiling of drug candidates against the entire kinome in a time- and cost-effective manner. Taken together, the FCCS approach combines the advantages of a physiological assay environment with greatly accelerated and facilitated assay development, miniaturized sample volumes and flexible applications. Further information can be obtained from: Dr. Frank Becker

Affinity-based interaction profiling: Expression of the complete set of human protein kinase domains as GFP fusions allows affinity profiling of (Cy5-labeled) drug candidates.

Compound PD-173955 was coupled to Cy5 and incubated at a concentration of ~10 nM with different cellular lysates, each comprising a specific kinase domain as a GFP fusion. The left panel depicts selected kinases binding to PD-173955 with affinities between <10 nM and >1 µM.

Increasing numbers of compound-target complexes upon addition of higher concentrations of compound (blue bar). The examples illustrate the dose-dependent increase in complex formation without affecting the KD value of the interaction (red line). (Green line: amount of GFP-target protein.)


Clinical Research

Early Phase Japanese Bridging Studies: Their Global Significance and What to Look for when Selecting a Suitable Contract Research Organisation to Conduct these Studies

As the pharmaceutical and biotechnology industries are forced to continue to introduce internal efficiencies, companies within these industries must equally ensure they enforce these efficiencies on their external providers to maximise the return on investment (ROI) in their R&D spend. Subject recruitment for clinical trials is high on the agenda of pharmaceutical and biotechnology companies when deciding to which country and which clinical research organisation (CRO) to award the conduct of their study. Delays in the conduct of clinical trials are more often than not a result of insufficient subject recruitment, classically resulting in delays in compound development timelines and leading to increased R&D spend. Thus it is essential that outsourcing managers and project teams choose their third-party providers carefully. Healthy volunteer trials can readily be conducted by any number of CROs across the globe; however, healthy volunteer trials in specific populations, typically trials in Japanese healthy volunteers, need to be carefully considered in terms of geographic and CRO-specific placement. The Japanese pharmaceutical market is the second largest behind the US, and changes by the Japanese regulators for developing and introducing new chemical entities (NCEs) for the Japanese market heralded a change in the development process of NCEs within this population demographic. This was and is still seen

as essential for pharmaceutical and biotechnology sponsors seeking to expand their presence in the Japanese market. Prior to these changes, NCEs often reached the market in Japan much later than in other countries, primarily due to a reluctance to conduct these trials in Japan on Japanese subjects because of the perceived lack of availability of potential volunteers, and the cost associated with this research. In 1998 Japan’s Pharmaceutical and Medical Devices Agency (PMDA) adopted the International Conference on Harmonisation (ICH) “Guideline on Ethnic Factors in the Acceptability of Foreign Clinical Data” (E5). This recognised procedures under which clinical trial data gathered in one region could be used to fulfil certain regulatory requirements in other regions. This change in approach began the drive towards studies involving Japanese subjects conducted outside of Japan. This, coupled with the acceptance by the PMDA in 2007 of clinical data from non-Japanese patients, has helped to bring NCEs to the Japanese pharmaceutical market in both a cost- and time-efficient manner. Nevertheless, the requirements of the PMDA are, as one would expect, very strict regarding clinical data generated from clinical trials conducted outside of Japan on Japanese and non-Japanese subjects. It is therefore the ability to adhere to the strict regulations, coupled with the ability to find suitable subjects and conduct the clinical trials in a cost- and time-efficient manner, that makes some CROs stand out in the minds of

Japanese- and non-Japanese-based pharmaceutical and biotechnology sponsors when considering placing clinical studies outside Japan with the intention of submitting the data to the PMDA.

Global Recruitment of Japanese Subjects for Clinical Trials
There are 129 Phase I clinical studies cited as enrolling, or due to enrol, healthy volunteer Japanese subjects globally since 1998¹. The available data demonstrate that a dramatic rise has occurred globally since 1998 in the conduct of Phase I studies involving healthy volunteer subjects (peaking at 27% in 2008), in line with the changes in requirements by the PMDA (Figure 1). As one might expect, Japan still accounts for ~50% of listed Phase I trials involving Japanese healthy volunteers, with the USA accounting for 29.46% and Europe 12.40% (UK 10.07% overall)¹ - see Figure 2. In line with this is the number of Japanese subjects entered/entering trials, with ~45% in Japan, and ~34% in the US and UK (Figure 3)¹. However, what is interesting is the ratio of population sizes versus those enrolled or to be enrolled. Worldwide, approximately 130 million people are of Japanese descent; of these, approximately 127 million are residents of Japan². According to the 2001 UK Census, 37,535 Japanese-born people were residing in the UK³. The Office for National Statistics estimates that, in 2009, 34,000 people

Company profile

B & C Group is a clinical logistics specialist that provides world-class packaging and logistics services through innovative technology and continuous improvement in quality standards. By re-engineering logistics processes through our centres of excellence, we provide flexible and efficient solutions for the packaging, distribution, transport and storage or archiving of all clinical trial materials, investigational medicinal products and biological specimens. Clinical research logistics have not evolved to keep pace with the outsourcing of related activities to CMOs, CROs, central laboratories, and centralized ECG and bio-imaging providers. Even though the logistics processes for clinical trial materials do not vary significantly from study to study, the supply chain remains fragmented among various subcontractors and sponsors. This results in increased complexity, redundancy, higher costs and even confusion at investigator sites. Our new fully integrated facility provides full support to clinical trials whose supply chains are becoming ever more demanding and challenging. To meet those demands, B & C applies a global services view to design and optimize its own integrated ERP system (ISIS). That

way, you, the investigator and the patient can feel at ease, because we care. Our approach of using centres of excellence ensures that specialists are highly involved from the very beginning of study setup through to study completion. They care about your clinical research logistics’ needs. To ensure that standard operating procedures are properly applied, the B & C quality assurance team closely supports your study management at B & C. B & C Group is still the pioneer in the centralization and integration of clinical research logistics, providing a unique suite of comprehensive, end-to-end services to streamline your supply chain. B & C Group cares about every project; therefore we design your study with the highest level of efficiency and flexibility to meet your research requirements while applying the relevant GxPs.
Watson & Crick Hill
Rue Granbonpré 11
B-1348 Mont-Saint-Guibert
Belgium
Tel +32 (0)10 237 444
Fax +32 (0)10 237 430

born in Japan were resident in the UK⁴. In the USA, according to the 2000 Census, there were circa 796,700 people of Japanese descent residing in the US, with large percentages of the Japanese population residing in California and New York State⁵. Interestingly, based on the data provided above, the UK stands out as the highest per-capita recruiter of Japanese subjects into Phase I trials, with 1.521% of Japanese subjects born in Japan taking part in clinical trials, compared to 0.002% in Japan and 0.212% in the USA. One must note that not all of the population are either suitable for, or interested in, taking part in clinical studies; thus, although the UK accounts for just 10% of clinical Phase I studies of Japanese volunteers, the data demonstrate that the UK is an essential participant in the clinical development of NCEs designated for release in the Japanese market.

What Sponsors Require for the PMDA
All clinical studies require a number of criteria to be met in order for the study to be deemed valid, and the requirements of the PMDA are no different. The regulations regarding subject selection will differ from study to study; however, the PMDA has some strict criteria that must be adhered to if the data are to be accepted. Japanese volunteers taking part in clinical trials in Japan must be a minimum of 20 years of age at the time of randomisation, and thus this most basic of criteria must be met in clinical trials conducted in Japanese subjects outside of Japan. In addition, in order for a subject to be described as an eligible Japanese subject, both of the volunteer’s parents and all grandparents must be Japanese. The volunteer must have been born in Japan, have a valid Japanese passport and must not have lived outside Japan for more than five years. However, most important is the issue of ethics.
As with all subjects involved in clinical studies, each person randomised must fully understand the conditions of the study, what is expected of them, and what they might expect during participation in a clinical study. With Japanese subjects it is essential that the study is explained

to them in detail (ideally by a native-speaking Japanese person), and that the patient information sheet and informed consent are provided to them in Japanese, so that each subject can make a valid and informed decision. The above criteria do not form a comprehensive list of the requirements for Japanese subjects becoming involved in clinical studies outside of Japan in which data will be presented to the PMDA, but they are essential, and they should be the minimum that a CRO with experience of conducting studies on Japanese subjects outside of Japan should expect when reviewing a clinical study protocol involving this demographic of subjects.

What to Look For when Choosing a Suitable CRO for Studies Involving Japanese Subjects
“We are world leaders…we are the best…experts in our field…” These are all throwaway slogans employed by CROs and clinical trial recruitment “specialists” time and time again - on websites, at conferences and at presentations given to many of you. However, what does this mean? More often than not, a CRO will promote the virtues of why they are better than their competitors via these bold statements; however, as the old adage goes, a picture speaks a thousand words. As with all clinical studies, the recruitment of enough suitable subjects in the timeframe provided by sponsors is more often than not the critical factor in preventing delays to the development of a sponsor’s compound. As such, the merits of a CRO with regard to

recruitment and subject retention must be evaluated with at least the same care and attention as is given to a CRO’s ability to conduct a study clinically and produce good quality, viable data. When choosing a CRO or recruitment specialist to conduct the recruitment of specialist subject populations, which Japanese subjects certainly are, sponsors must consider a number of factors, namely:
• Company culture and structure
• Track record
• Safety record
• Approaches to recruitment
  – Advertising
  – Attraction
  – Retention
• Safeguards against over-volunteering

Company culture and structure – The significance placed on appropriate company culture and infrastructure matters greatly, both to sponsors and to subjects considering taking part (especially for the first time) in a clinical study. A good CRO will understand this and ensure that its staff are appropriately trained to deal with a variety of populations, understanding the small but sometimes very important nuances that show the CRO is really able to integrate successfully within the target population. A company involved in the conduct of clinical trials involving Japanese subjects can only be taken seriously if it places the appropriate degree of significance on bridging the cultural gap. Naturally, the clinical trial environment subjects enter is foreign to them if this

Figure 1. Illustrating the year-on-year conduct of Phase I trials involving Japanese subjects as documented on [chart: “Percentage of Phase I Studies Conducted in Japanese Volunteers 1998-2012”; y-axis 0.00%-30.00%, x-axis from 1998]









is the first trial in which they have decided to participate. This environment can be even more daunting for Japanese subjects, and thus it is vital that a CRO ensures a cultural blend within its employees to meet the needs of its volunteers. This blend must be evident throughout the company, from the first point of contact (within the marketing arm) that a potential Japanese subject has with the company, through to interaction with the subject recruitment agents and clinical staff. Ensuring the company has a core group of professional and well-trained Japanese employees will typically ease the initial concerns a Japanese subject may have about becoming involved in clinical research; their presence in all facets of the company extends the infrastructure to the Japanese subjects, and helps to cement the bond of trust that is necessary for success in specialist research. When looking for a good indicator of how well a CRO has integrated itself within the target recruitment population, one typically does not need to look further than the repeat rate of Japanese trial participants, and the extent to which previous trial participants recommend that friends and colleagues take part in a trial with that CRO. This is more readily achievable with an in-built Japanese infrastructure within the CRO, as it demonstrates to existing and potential Japanese clinical trial participants the importance of these subjects to new drug development within the Japanese population. This in-built culture and infrastructure, coupled with trust that must still be worked on continuously, routinely translates into a CRO on which a sponsor can confidently depend.

Track record - As previously stated, in today’s environment it is not simply good enough to say “we are the best” or “we are world leaders”; these very bold statements need to be backed up with a strong record of achievement.
This can more readily be achieved in simpler-to-recruit, standard non-Japanese studies. However, when building a reputation within a community where reputation is critical to success, as is the case within the Japanese community, a stand-out track record is essential. Sponsors must look at similar previous studies

Figure 2. Global placement of Phase I trials as documented on [pie chart: “Geographical Location of Phase I Trials Involving Healthy Volunteer Japanese Subjects (1998-2012)”; segments: East Asia*, United States, Europe, Pacifica, Southeast Asia, Canada, South Asia; largest segment 50.39%]

Figure 3. Demonstrates the enrolment ratio of Japanese subjects by geographical location as documented on [chart: “Number of Healthy Volunteer Japanese Subjects to be/already enrolled in Phase I Studies (1998-2012)”; segments include Japan, United States, UK]
conducted by a CRO or recruitment specialist, and scrutinise how well that company performed in terms of conduct and delivery, and most importantly how the company has adapted when changes have needed to be made to ensure delivery. This final point is significant to the success of a company, and to the delivery of promises made to a sponsor. Typically, a CRO with an in-house specialist recruitment team will have the advantage over a standalone recruitment specialist working with a CRO without the necessary recruitment infrastructure, as the former will be able to adapt to change more readily via the early warning signals displayed in the early stages of the recruitment and screening process. If this process is disjointed, with the CRO and recruitment team working in different geographical locations,



necessary and sometimes time-critical changes cannot be effected as quickly as one would like to ensure the continued smooth provision of service.

Safety record – This must not be judged simply by the number of incidences of adverse events seen within a CRO, or simply by the safety accreditation a CRO has received from its regulatory bodies. Clinical trial participants understand there is an element of risk associated with taking part in clinical studies, hence the significance placed on providing adequate and ethically approved patient information sheets. Rather, it is about the general care given by a CRO to its trial participants, whether they are at the screening stage of a study, currently enrolled in a clinical study, or at the follow-up stage. This level of care for a trial participant’s

medical wellbeing is one major factor that distinguishes a CRO with an in-built recruitment team from companies offering standalone recruitment of potential trial subjects. Recruitment companies commissioned to work with CROs or trial sites may never meet a potential trial participant face to face, and may only ever communicate with subjects “on their books” via telephone, SMS and email. Whilst this is efficient in terms of being able to screen greater numbers of potential volunteers in a shorter period of time, with an associated lower initial cost, it does not allow for the development of a relationship between the potential trial subject and the company. In addition, this little (if any) direct contact does not allow a sense of “care” to be established between the potential trial subject and the company asking them to take part in a trial, and thus the level of initial care and aftercare, especially should the need for medical care arise, is limited. Whilst this may not be important to many populations, it is clearly evident from the experience of the authors that this is very important to Japanese subjects who are considering becoming involved in clinical trials.

Recruitment – Whilst company culture and infrastructure and a good safety record are all important when providing clinical trial services involving specialist populations, a good understanding of the recruitment process required to identify and enrol the correct number of clinically suitable subjects is critical when choosing a supplier of Japanese bridging studies. Within recruitment, specialist recruitment companies are, as one might expect, usually as adept as CROs with an in-built recruitment team at setting up marketing strategies to identify and attract subjects to respond to the various advertising activities they employ.
In some cases, where the CRO has not carefully built a solid internal recruitment infrastructure, these specialist recruitment companies are the better option to reduce the risk of not meeting recruitment objectives. This approach may involve more work for the sponsor, as they now have two separate suppliers to interact with and control, but this method clearly can and does often work. The potential problem that a sponsor must consider if taking

this approach is that of responsibility for tasks. For example, if a recruitment company is responsible for generating marketing strategies that translate into an adequate number of ‘interested’ subjects, but the CRO does not handle these interested subjects in the same manner in which the recruitment company set about recruiting them, this may lead to a shortfall in the final number of ‘interested’ subjects. Conversely, if a recruitment specialist does not target the population correctly, this may lead to a number of ‘interested’ yet ‘unsuitable’ subjects whom the CRO has to deal with, once again potentially leading to a recruitment shortfall. Whilst the above may not have too negative an impact on a large and readily available target population, issues such as those exemplified above will have an amplified impact on smaller populations, such as the Japanese community outside of Japan. A small miscalculation can have an enormous knock-on effect, which invariably leads in the short term to delays to a trial, and in the mid to long term to distrust within the target community, who will think twice about returning to a situation

where they have had a bad experience, but will not think twice about expressing the poor experience to friends and colleagues. As such, a sponsor must cautiously consider how the recruitment of their clinical study will be conducted, by whom, and with whom the ultimate responsibility for both success and failure lies. A sponsor should ensure that a provider of Japanese recruitment and clinical services can show a realistic and detailed recruitment strategy, with numerous examples of ethically approved past advertising material. A tell-tale sign of lack of experience in conducting clinical studies involving Japanese subjects outside of Japan is the promise of being able to include with ease more than 20 Japanese subjects into a trial in a month. In addition, the volume of material a company can show is more often than not a reflection of how active the company is within the recruitment of Japanese subjects for trials. Nevertheless, do not just rely on quantity; quality is equally, if not more, important. It should always be the intent of a CRO to be conservative with a Volume 3 Issue 1

recruitment budget and use it sparingly, ensuring a maximum return on investment. A good CRO or recruitment specialist will know which forms of media, and more specifically which publications and internet or paper-based outlets, work best for this population, and in what quantity. One must understand that within the Japanese community, saturating the market with advertising can have as negative an effect as not placing enough advertising material in the chosen media. One must be subtle, assured in one’s approach, and inventive, to capture the attention of the sub-population of Japanese subjects that would take part in clinical trials. Furthermore, many CROs and recruitment specialists who do not employ Japanese staff within subject recruitment do not appreciate the time this population requires, once attracted to an offering, to make a considered decision. This process can be time-consuming, leaving little time to undertake screening before the intended enrolment date, and this is where a CRO with an inbuilt and suitably equipped recruitment team comes into its own. A CRO offering the full recruitment function is able to adapt quickly and deal with any unexpected needs of the potential subjects: the recruitment team is always on site to assist the clinical team in ensuring that an adequate number of suitable subjects are available for inclusion at the time prescribed by the sponsor, catering to the needs of the potential subjects whilst the clinical team go about their business. The key here is that within a CRO offering the full service, the responsibility is shared equally and felt by the whole team, as it is their collective efforts that determine success.
These combined efforts, as discussed previously, are also essential in ensuring subject retention within the study, thus minimising the risk of self-withdrawal by the enrolled subjects. This reduces the risk of having to find additional subjects for the study, in turn minimising study timelines and cost.

Over-volunteering – It is not the existence of over-volunteering that a sponsor should be concerned with, but rather how companies involved in research on Japanese subjects outside of Japan deal with this problem. Numerous over-volunteering prevention systems are used globally to safeguard against volunteers attempting to take part in more than one trial at any one time. Whilst this problem is not widespread, it must still be dealt with as if it were, to prevent it from growing. How a company safeguards against what is a relatively small day-to-day problem is indicative of the attention to detail it applies to all aspects of the work it conducts. Those who take this problem seriously are more likely to have all aspects of their working process in order, compared to those who deem it a small problem not worth much investment in time and personnel.

Summary and Conclusions

The conduct of clinical studies of Japanese subjects outside of Japan is an expanding arena, and a necessity if the ‘lag’ time observed in the past in the approval of new medicines intended for the Japanese market is to be limited. Whilst the PMDA appreciates that it is necessary to conduct these studies outside of Japan, it still maintains a very strict set of requirements if the data produced are to be accepted by the Japanese regulators. Whilst there are numerous countries conducting clinical trials in Japanese subjects outside of Japan, sponsors must be cautious about the country into which these often intricate studies are placed. The UK and the US stand out as the major contributors to this type of research in the early phases, with no sign that this will change in the short to mid term. Geographical selection, however, is only half the task when choosing where to place a study of this nature. Fortunately for sponsors, as regulations have tightened globally and CROs have evolved to provide a more tightly regulated and professional service, so the conduct of trials in Japanese subjects has evolved.
A number of key factors one should look out for when choosing a site have been identified in this paper, hopefully making sponsors’ decision-making that little bit easier.

Keith Berelowitz has worked in both the academic and commercial environments of scientific research for over 10 years. As the Director of Operations for Richmond Pharmacology Ltd, Keith is responsible for co-ordinating the integration of employees and processes related to the efficient conduct of early phase clinical trials. Before taking on his current role, Keith was responsible for the recruitment of healthy and patient volunteers for early phase clinical trials (including TQT and bridging studies) at Richmond Pharmacology. Email: k.berelowitz@

Dr Jörg Täubel is a medical practitioner and CEO of Richmond Pharmacology, which he co-founded in 2001. He has worked in clinical pharmacology for over 20 years, conducting more than 500 early phase studies in patient, paediatric and healthy volunteer populations, with a special interest in cardiac safety and thorough QT studies. He is the author of over 50 publications, a Fellow of the Faculty of Pharmaceutical Medicine of the Royal Colleges of Physicians of the United Kingdom, and a member of learned societies in pharmacology and therapeutics in Europe, the US and Japan. Email: j.taubel@


Labs & Logistics & COLD CHAIN SUPPLY

Bridging the Gap between Temperature-Controlled Packaging and Logistics

Most cool chain articles discuss the challenges in designing a temperature-controlled shipping system. This piece will concentrate on bridging the gap between the logistics provider and the cool chain.

Cool Chain

A cool chain (or cold chain) is a supply chain along which a product’s temperature is maintained from the point of manufacture until its end use. The cool chain is a core element in the transportation of temperature-controlled pharmaceutical product. Most cool chain products are licensed to be stored between +2°C and +8°C, and indeed these temperatures are usually the ‘magic numbers’ in the industry. The cool chain is an expanding part of the industry, and will continue to be so, given increasing compliance requirements. This, coupled with larger numbers of new drugs in clinical trials and R&D requiring chilled temperature control in storage, means a potentially prosperous future for temperature-controlled logistics. Temperature-controlled supply chains are not always ‘cool’. Some products have to be kept frozen; this is often achieved by packing them with dry ice. Other products must be kept warm, usually meaning a room temperature band of something like +15°C to +25°C.

Temperature-Controlled Logistics

Perhaps the most difficult phase of the cool chain for a pharmaceutical supply chain manager occurs when a product leaves the control and security of the production and warehouse environment, and goes into a third-party courier network or to a specialist logistics provider. The product is out of validated territory and away from qualified cold store rooms, stringent SOP-led procedures and so on. The two main options for transfers of goods are to move them in a climate-controlled vehicle, or in a self-contained temperature-controlled shipping system. Climate-controlled vehicles are usually only practical for regular routes that take a large number of shipments. These ‘fridge-to-fridge’ routes rely on immediate refrigeration capabilities at the destination. Difficulties soon arise if deliveries are made out of hours, or fridge space is otherwise unavailable.

If deliveries have multiple destination addresses, the premium climate-controlled vehicle must visit them all. Delays from customs or rerouting all put the temperature-controlled environment at risk. So, for flexibility over critical environmental requirements, self-contained temperature-controlled packaging is often preferred. It is also frequently used by specialist couriers, who have the most control over their routings.

Passive cool chain shipping systems are chiefly based on ice cool packs that provide a constant 0°C temperature as they melt and go through their phase change to water. The packaging designer arranges these cool packs with other components to balance this 0°C cooling effect against the heat coming into the insulated box. When balanced correctly, the contents are kept between +2°C and +8°C. The challenge is in choosing the right system for the route in question. The correct functioning of the system depends on the box experiencing the appropriate external environment, and on delivery being made within the system’s lifetime (i.e. before the cool packs have completely melted). The system is designed by testing it in the lab under a temperature profile modelled to reflect its real-world experience. Often an ‘upper’ and a ‘lower’ temperature profile are chosen; by qualifying under both, a system is confidently validated for a range of real-world experiences. Temperature mapping of routes can help determine these test profiles: this means using a data logger to record the ambient temperature experienced by a package on a particular route. By comparing recorded profiles from different routes, an overall ‘hottest’ and ‘coolest’ likely experience can be found. Of course, this becomes complicated in the case of the logistics provider, where collection points, consignee addresses and routes (including methods of transport) will vary between shipments.
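The heat-balance logic described above can be sketched numerically. The following is a back-of-envelope sizing calculation only, assuming a constant ambient temperature; the U-value, box area, duration and safety margin are illustrative assumptions, not figures from any qualified shipping system:

```python
# Back-of-envelope sizing for a passive cool chain shipper.
# All numbers used in the example call (U-value, area, ambient, margin)
# are hypothetical, chosen purely for illustration.

LATENT_HEAT_ICE = 334_000  # J/kg, latent heat of fusion of water ice

def required_ice_mass(u_value, area_m2, ambient_c, duration_h, margin=1.5):
    """Estimate the ice-pack mass needed so the packs are still melting
    (i.e. still holding ~0 degC) at the end of the journey.

    u_value    : overall heat transfer coefficient of the insulation, W/(m2.K)
    area_m2    : external surface area of the insulated box, m2
    ambient_c  : assumed constant external temperature, degC
    duration_h : required system lifetime, hours
    margin     : safety factor for door openings, radiation, tolerances
    """
    delta_t = ambient_c - 0.0  # cool packs sit at 0 degC while melting
    heat_in_joules = u_value * area_m2 * delta_t * duration_h * 3600
    return margin * heat_in_joules / LATENT_HEAT_ICE

# Example: a 0.5 m2 box with U = 0.5 W/(m2.K), 96 h at a 30 degC ambient
mass = required_ice_mass(u_value=0.5, area_m2=0.5, ambient_c=30.0, duration_h=96)
print(f"~{mass:.1f} kg of ice packs")
```

In practice, of course, qualification is demonstrated empirically against the upper and lower test profiles rather than by calculation; a sketch like this only helps with first-pass sizing before lab testing.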
Uniting Cool Chain with a Specialist Logistics Provider

The specialist logistics provider must have greater competencies than simply moving validated boxes from collection point to consignee. By having an understanding of the capabilities of temperature-controlled packaging, they should manage and maintain their packages in transit to minimise temperature deviations and breaks in the cool chain. Specialist logistics providers will usually start by supplying appropriately qualified packaging for the shipment. They will prepare the necessary cool packs beforehand and, at the time of collection, load the customer’s product into the packaging themselves. This means the customer, shipper and consignee are all relieved of these responsibilities. A good specialist courier will be able to reduce clearance time in customs by ensuring correct documentation is present and accurate. But some things are out of human control: delayed flight times and volcanic eruptions, for example! On the rare occasions these problems do occur, intervention may be required. This could include reading and monitoring temperature logger data, replacing spent cool packs with freshly frozen ones, or topping up dry ice in the case of frozen product shipments. By having a worldwide network of offices, agents and partners, a logistics provider can operate these maintenance procedures wherever their boxes may be. Bonded warehouse access is also critical when customs delays are involved. These procedures extend the lifetime over which shipping systems can operate, and ensure the customer’s product reaches its final destination in perfect condition.

Temperature Monitors

A specialist courier should offer temperature logger data management. A temperature logger gives a journey history of the temperature experienced, and can provide the deciding information for product release. One of the main reasons that people opt for temperature loggers is the ever-increasing volume of regulations in the pharmaceutical industry. FDA regulations (21 CFR) state that drugs and medicines must be stored at appropriate temperatures, and a temperature monitor is needed to prove this.
In order to ensure that the recorded data remain valid, the regulations also specify that the temperature log must be tamper-proof. There are, however, pros and cons to using temperature loggers.

Plus points: You have recorded data on what has happened to your product throughout its entire journey, giving you confidence that everything is in order. This can be particularly useful if regulators need evidence that the product has stayed within its agreed temperature range.

Downsides: If the data show that everything is in order and there were no excursions, there are not really any cons, just the added assurance of having the information. If, however, the data show that there were problems during transit, you may need to have these explained by someone who can interpret the excursions. You may not be able to decipher the details of the excursions yourself, and may therefore need to enlist the help of someone with experience, which could lead to extra cost. There are many different types of data logger that can perform a range of different tasks, so depending on the information you need and the level of confidence you require, logging can be quite cost-effective.

By mastering these practices, the specialist logistics provider can be a champion of bespoke and routine cool chain shipments.
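The excursion review described above can be sketched in code. This is a minimal screening pass over hypothetical logger readings against the +2°C to +8°C band; a real release decision would also weigh excursion durations against allowed limits, mean kinetic temperature and product stability data:

```python
# Minimal excursion screen for a +2..+8 degC shipment.
# The `log` readings below are hypothetical, for illustration only.

def find_excursions(readings, low=2.0, high=8.0):
    """readings: list of (timestamp_hours, temp_c) in chronological order.
    Returns a list of (start_h, end_h, worst_temp) excursion windows."""
    excursions = []
    current = None  # open excursion window: [start, end, worst_temp]
    midpoint = (low + high) / 2
    for t, temp in readings:
        if temp < low or temp > high:
            if current is None:
                current = [t, t, temp]
            else:
                current[1] = t
                # keep the reading furthest outside the band
                if abs(temp - midpoint) > abs(current[2] - midpoint):
                    current[2] = temp
        elif current is not None:
            excursions.append(tuple(current))
            current = None
    if current is not None:  # excursion still open at end of log
        excursions.append(tuple(current))
    return excursions

log = [(0, 5.0), (1, 5.5), (2, 9.1), (3, 9.8), (4, 6.0), (5, 1.4), (6, 4.0)]
print(find_excursions(log))  # two windows: one warm, one cold
```

An empty result supports release; any returned window is exactly the kind of finding the text suggests handing to someone experienced in interpreting excursions.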

Harriet King, Marketing Executive, Biocair International. Harriet graduated with a BA (Hons) in Marketing, Advertising and PR, and brings a fresh look to pharmaceutical marketing by combining traditional marketing techniques with new media practices. During her education, she worked with a number of PR and events agencies and spent a year with car giant BMW MINI, before making her debut in pharmaceutical marketing with Biocair. Email:



CMO PAT Implementation: Time for Strategic Decisions

Process analytical technology (PAT) is a vital step towards a future where continuous manufacturing and real-time product release can become a reality in the pharmaceutical sector. After a slow start, PAT is high on the agenda for Big Pharma, but it is not so common among contract manufacturing organisations (CMOs). What is holding them back, and what are the issues they need to consider? Rebecca Vangenechten argues that, while the pace of change will vary from company to company, all CMOs need to have a PAT strategy in place.

PAT offers immense potential for pharmaceutical manufacturing. With PAT, quality becomes something that is designed into the process rather than checked afterwards. Introducing PAT thus has a considerable positive impact on reducing production costs. PAT speeds decisions at the unit operation level and improves the quality and efficiency of process steps. This leads to shorter batch runs, increased quality consistency and reduced waste. However, investment in PAT comes at a price, and CMOs have to balance that against a wide range of considerations. In other industries, such as food and beverages, PAT is commonplace and the stuff of competitive advantage. In pharmaceuticals, however, regulation has been framed around batch processes, and it was not until 2004 that the FDA’s ‘critical path initiative’ provided

a starting point for the development of PAT. The ‘critical path initiative’ gave very clear regulatory encouragement to PAT. Even so, it is encouragement rather than compulsion, and only pockets of pharma manufacturing are becoming PAT-enabled. In many respects, life would be simpler for CMOs if there were the same ‘keep up or fall behind’ compulsion to introduce PAT in contract manufacturing that is present in other industries, and is beginning to be felt in Big Pharma. So how should CMOs judge whether or not to implement PAT in their manufacturing facilities? Five key questions need to be addressed:

• What do their clients need?


• What part of the CMO market is the CMO in?
• What stage of the product life-cycle is the product in?
• What is the CMO’s attitude towards, and willingness to, change?
• What is the overall cost-benefit and return on investment picture?

A crucial consideration is what part of the pharma sector they are serving. A ‘wait and see’ approach, or even worse a ‘wait and be pushed’ approach, could be highly risky in a context where some of their Big Pharma clients are moving forward on PAT. For example, a major benefit of PAT in the new product market is to reduce development time by reducing the gap between R&D and manufacturing. Big Pharma companies who have deployed laboratory-scale PAT are unlikely to be impressed by CMOs who are not ready for a quick PAT-enabled transfer to full production. CMOs need to be ready and, in some contexts, working in partnership with their clients, dovetailing their respective PAT capabilities. Indeed, in some instances, there may be advantages in CMOs setting the pace and guiding their clients on the benefits of a PAT-enabled manufacturing solution.

Decisions on PAT will also be closely related to what services CMOs are offering to their clients. Is the focus on specific processes, such as filling, blending or drying, or is it a full service covering the whole production process from raw material to packaging? PAT can deliver significant added value for each. CMOs covering the full production process are likely to find it easier to introduce PAT for new product contracts, because there is a chance to start with PAT built in, rather than having to retrofit existing plant. Nonetheless, there are also relatively easily obtainable gains to be derived from introducing PAT into existing specific processes. For example, the blending of active pharmaceutical ingredients (APIs) with various excipients is a common step in the pharmaceutical solid dosage form manufacturing process.
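Spectroscopic monitoring of a blending step of this kind is often implemented by tracking how much successive spectra change, declaring an endpoint once the blend stops evolving. The sketch below is illustrative only, using synthetic data; the threshold and hold count are hypothetical values that, in a real method, would come from method development and validation:

```python
# Illustrative blend-endpoint detection on successive NIR-like spectra.
# Synthetic spectra and hypothetical threshold/hold values, for illustration.

def spectral_change(a, b):
    """Mean absolute difference between two spectra on the same grid."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def blend_endpoint(spectra, threshold=0.01, hold=3):
    """Return the index of the first spectrum after which the change stays
    below `threshold` for `hold` consecutive comparisons, else None."""
    run = 0
    for i in range(1, len(spectra)):
        if spectral_change(spectra[i - 1], spectra[i]) < threshold:
            run += 1
            if run >= hold:
                return i
        else:
            run = 0  # blend still evolving; reset the stability run
    return None

# Synthetic acquisitions that settle after the first few blending intervals
spectra = [
    [0.50, 0.30, 0.20], [0.45, 0.33, 0.22], [0.42, 0.35, 0.23],
    [0.41, 0.355, 0.235], [0.410, 0.356, 0.234], [0.410, 0.356, 0.234],
    [0.410, 0.356, 0.234],
]
print(blend_endpoint(spectra))
```

Requiring several consecutive stable comparisons, rather than a single one, avoids stopping the blender on a momentarily quiet reading.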
The homogeneity of the blend is critical in defining the uniformity of dosage units within a batch of tablets. Near infrared (NIR) spectra can be used to show when the blended product has reached its blend endpoint.

A key consideration for CMOs considering PAT is what stage the client’s product is at in its life-cycle. Is it a product that is about to go off-patent, where the drive to bring costs down is crucial to counter competition from generics, or is it a completely new product? As discussed earlier, the latter offers the chance to introduce PAT from the start. It also means that PAT can be implemented at a stage where formulation development is still ongoing, when there is a lot to learn from process understanding. Such implementation also offers regulatory advantages: unlike implementing PAT with products that are already being manufactured, where such a change requires revalidation, ‘green field’ PAT can be part of initial validation.

The CMO’s attitude towards innovation, and indeed the attitude of their clients, is also very important in any consideration of PAT. CMOs need to consider how well their organisation will embrace and deliver change. How will its people react and adapt to the more multi-disciplinary environment needed for PAT? Culture and people considerations are central to making the move to PAT effectively. The migration is much more than a technological one: while innovation in technology lies at the heart of PAT, its successful application relies on full integration with a company’s people, knowledge systems and risk management.

Finally, there is the company’s calculation of the overall cost picture and return on

investment (ROI). As we have seen, this will be influenced by the fact that there are a number of different levels of PAT implementation, from ‘easy wins’ such as endpoint detection in blending or drying, to PAT on a number of unit operations or a whole production process. Timing, and the match with where the client is and the product’s life-cycle stage, will also be key determinants of the ROI.

Rebecca Vangenechten is a Life Sciences Industry Consultant with Siemens. She is responsible for business development in life sciences in the US and focuses on innovative technologies, including process analytical technology (PAT), out of the Siemens Pharma headquarters in Antwerp, Belgium. She holds an MSc in Biomedical Sciences from the Catholic University of Louvain, and a Masters in Global Management from the University of Antwerp Management School. Having worked as a scientist for one of the largest electro-technical companies, Rebecca understands the importance for life sciences companies of building bridges between R&D and manufacturing. Email: rebecca.vangenechten@


LyoSeal®: Compliance, Quality and Production Tool

Our injectable medicines are most often available in a glass vial closed by a rubber stopper and a crimp seal, an arrangement more than a century old. Is it true to say that the pharmaceutical industry has lacked innovative power during that time? On the contrary, this simple packaging format has seen numerous advances in the past decades. However, recent studies, new regulations and emerging products suggest that there is still significant potential for improvement in fill and finish operations.

Comparing the picture of a 1919 vial of vaccine to a similar vaccine in 2011 may suggest that primary packaging for parenteral medicines has evolved little over the past century. Indeed, the most common presentation for injectable drugs remains a container closure system composed of a glass vial, an elastomeric closure and an aluminum crimp seal. This reality masks a tremendous evolution of sterile drug presentations: from multidose vials to unit dose formats, then unit doses in syringes, and the emergence of plastic materials1 as a replacement for glass, such as COP (cyclic olefin polymers), to address compatibility issues between container closure systems and demanding drug compounds (e.g. biotechnology products). However, fill and finish operations remain substantially unchanged since the early 1900s and involve the key steps of filling, stoppering, freeze-drying (where applicable) and crimp-sealing.

Significant progress has been made in the manufacturing environment and in the reduction of preparation steps for packaging components. On the manufacturing environment side, isolators and RABS (restricted access barrier systems) have replaced standard cleanroom processing and thus transformed the concepts of risk related to microbial contamination. As for component preparation, the manufacturers of elastomeric closures initiated a significant change in the manufacturing of sterile products by taking ownership of the preparation of the stoppers in the 1980s2. Nowadays all stoppers are provided pre-cleaned to specifications of cleanliness, and are offered in formats compatible with fill and finish processes (steam-sterilisable bags, bags with rapid transfer ports). Component preparation by the suppliers was first acknowledged by the FDA in 20043, and since the introduction of pre-cleaned aluminum caps and combi-seals, and the recent inroads into vial pre-processing4, all primary components are now available ready-to-sterilise or ready-to-use. In addition, the introduction of automated inspection to eliminate defective components5 and the possibility of specified leachable profiles6 contribute to achieving the ultimate goals of quality for packaging elements and, more importantly, for the drug products. With the emergence of disposable downstream and upstream processing components7,8, the roadmap to a reduced involvement of the pharmaceutical industry in manufacturing operations is well defined, where it is not already implemented.

Current Issues related to Vial-based Container Closure Systems

Are we then close to the end of the road?


Recent regulations, literature and recalls suggest that this common packaging format, which we thought grandfathered for decades, is still to be scrutinised and prone to generate significant work in packaging development, quality assurance and engineering. According to Guazzo9, 140 million containers were recalled between 2003 and 2007, 40% of which related to container closure integrity issues, and 45% to vial issues. Lam & Stern10 and Simianu11 demonstrate in separate studies that standard vials and standard closures may not always have the geometric compatibility to give a perfect fit for freeze-drying applications, or where an inert gas headspace is required. Their studies provide a rationale for the issue of stoppers that rise between the lyophilisation and crimping processes, either through modelling of the impact of stacked tolerances on the robustness of the seal, or by actual visualisation of the sealing surfaces hidden by the flange of the vial. While other elements, such as surface treatment of both vials and stoppers, may also be involved, they touch on a potential problem ignored for decades: the possibility that standard vials may not always be compatible with standard elastomeric closures! In addition, headspace studies demonstrate that container closure integrity may not be assured by the stopper alone12, which can lead to vacuum loss and oxygen ingress between lyophilisation and crimping.

Regulatory agencies have acknowledged the issue. EMEA Annex 1 to Good Manufacturing Practices13 specifies new requirements for Class A air supply protection of the vial until the final seal (crimping) is achieved, and for the control of containers packaged under vacuum. At the same time that Annex 1 was in preparation, a PDA (Parenteral Drug Association) task force on risk management also investigated the transfer of lyophilised vials from the freeze-dryer to the crimp-sealer, and identified the same needs that are now regulations in Europe14. An open question remains: knowing that risks exist for container closure integrity before crimping, are these risks such that Class A air supply is enough, or is it a first step towards full Class A with cleaning and sterilisation of the caps, as is already the practice when isolators are used? PIC/S published a recommendation in 2010 adding that unless it is verified that the stoppers are correctly set, and that misplaced or missing stoppers are rejected, aseptic conditions should apply15.

Beyond regulation, the issue of raised stoppers is critical for several reasons:
- Prevention of vacuum loss for lyophilised products packaged under vacuum.
- Prevention of vial rejects because of imperfect land-seal and raised stoppers.
- Prevention of false positives when alternative methods are used for sterility testing16.

Class A air supply crimping has resulted in recent years in many practical issues. On the manufacturing side, automatic loaders and unloaders for freeze-dried products have found application in facilitating transfers under the Class A environment requirements, and crimp-sealers have been upgraded to meet Class A air supply requirements. On the control side, online raised stopper detectors have been developed, and headspace analysers complement the existing offer for online container closure integrity testing. In-line controls meet the regulatory requirements, but still fail to fully address two issues:
- Sorting production into quality (inspecting out defects after the fact) does not meet the emerging concept of quality by design17.
- The Annex 1 statement that “crimping of the cap should be performed as soon as possible after stopper insertion”13.

Risk and Compliance Management: Early Container Closure Integrity
It appears that many of the current issues discussed above are related to the fact that the components used require the dissociation of the two operations that ultimately achieve container closure integrity. In most manufacturing set-ups, stoppering and crimp-sealing are separated in time and space. Bringing these two operations together, in a single operation or in two immediately consecutive processes, addresses both the risk of rising stoppers and the compliance objective.

Freeze-Dried Drug Products

For freeze-dried products, the earliest possibility to achieve final container closure integrity is inside the freeze-dryer, immediately after the insertion of the stopper. Products such as LyoSeal® were specifically designed for that purpose. An all-plastic cap, LyoSeal® is designed to snap onto the flange of the vial when pressure is applied. Placed on the vial-stopper system before loading of the vials into the freeze-dryer, the cap uses the pressure of the shelves inserting the stopper to achieve simultaneous stoppering and final crimping. Picture 2 shows a LyoSeal® 20mm after placement on the stopper, before lyophilisation, and Picture 3 after the stoppering cycle of the freeze-dryer. The overall process is summarised in Diagram 1.
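The stacked-tolerance modelling mentioned earlier in connection with the Lam & Stern and Simianu studies can be illustrated with the two classic stack-up rules, worst-case and root-sum-square (RSS). This is a generic sketch of the technique only; the contributor names, dimensions and tolerances below are hypothetical placeholders, not values from those studies or from any vial standard:

```python
# Worst-case vs statistical (RSS) tolerance stack for a vial/stopper/cap seal.
# All tolerance values below are hypothetical, for illustration only.
import math

def stack(tolerances):
    """Return (worst_case, rss) total variation for a list of +/- tolerances."""
    worst_case = sum(tolerances)                       # every part at its limit
    rss = math.sqrt(sum(t * t for t in tolerances))    # statistical combination
    return worst_case, rss

# Hypothetical +/- tolerances (mm) of contributors to the sealing interference:
# vial flange height, stopper flange thickness, cap skirt length, seat depth
tols = [0.20, 0.15, 0.10, 0.10]
wc, rss = stack(tols)
print(f"worst case +/-{wc:.2f} mm, RSS +/-{rss:.2f} mm")
```

The point of the comparison is that a stack of individually reasonable tolerances can, in the worst case, exceed the compression a stopper can absorb, which is exactly the kind of geometric incompatibility between standard vials and standard closures that the studies describe.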

LyoSeal® has the attributes to become a standard component for industrial manufacturing:
- Compatibility with standard neck finishes (ISO, GPI)
- Compatibility with validated lyophilisation cycles18
- Compatibility with most industrial freeze-dryers.

In addition, it resolves another frequent issue of freeze-drying operations when coated or flange-coated closures are not used: the sticking of stoppers to the shelves of the freeze-dryer. Initial work with pharmaceutical laboratories supports the validity of the concept19.

Sterile Liquid Products

The use of all-plastic caps in crimping operations, for both lyophilised and liquid drug products, is expected to eliminate the downsides of aluminum components: particle generation, variability of the crimping process, and easy promotion of cosmetic defects. In particular, the possibility of using plastic components in the immediate vicinity of the filling operation allows for crimping immediately after stoppering, thus bringing final container closure integrity for liquid products into the filling core. In association with ready-to-use

Diagram 1

Diagram 2


Diagram 3

vials and closures (Diagram 3, Picture 4), the concept satisfies both the compliance requirements discussed above and the possibility of further reducing the number of pharmaceutical fill and finish operations.

Production Management: Reduction of the Number of Manufacturing Operations

A closer look at the evolution of fill and finish operations in the past decades shows that the number of processes managed on the production floor is decreasing. For standard liquid processing, up to 12 steps are involved, depending on whether cleanroom processing or isolation technology is used (Tables 1 and 2). Lyophilisation adds two further operations (Table 3). Ready-to-sterilise and ready-to-use elastomeric closures, now common in the industry, have reduced the number of operations on the fill side by one or two. The recent introduction of clean and sterile aluminum crimp seals has had the same consequence on the finish side. Plastic caps and ready-to-use vials have the potential to further reduce the number of manufacturing steps to five or seven.

Conclusion

Fill and finish operations for the manufacturing of sterile products in vials have evolved in recent years in response to the now well-documented fact that container closure integrity is not achieved until the final crimp is in place. The requirement for Class A crimping and the new control tools now implemented address the risk that a container closure system loses integrity between stoppering and crimping, but do not address the root cause, which is the separation in time and space of the stoppering and crimping operations. LyoSeal® has been developed to

address that very point. In addition it provide n Bibliography 1. Vilivalam, V. and DeGrazio, F. Chapter 12, Plastic Packaging for Parenteral Drug Delivery. Pharmaceutical Dosage Forms, Parenteral Medications, 3rd Edition. 2. LeGall, P. Ready to Use Elastomeric Closures. PDA Anuual Meeting, 1989 : s.n., 1989. 3. Guidance to Industry, Sterile Drug products Produced by Aseptic Processing, Current Good Manufacturing Practices. FDA. 2004. 4. Golfetto, P. EZ-Fill Vials and Cartridges: A solution for clean and sterile pharmaceutical glass containers ready to fill. Chicago : s.n., 2010. 5. Schaefers, M. Elastomeric Components for Pharmaceutical Application. PDA workshop on Container Closure Systems. 6. Paskiet, D. Container Closure Systems: Strategies for Assessment of Leachables. PMPN. 2010. 7. Pora, Volume 3 Issue 1

Manufacturing H.; Rawlings, B. A Userâ&#x20AC;&#x2122;s Checklist for Introducing Single-Use Components into Process Systems. BioProcess International. 2009, April. 8. Holman, N.; Furey, J. Disposable Filling Technology for Sterile Pharmaceuticals. BIOPHARM International. 2004, January 2004. 9. Guazzo, D. Package Integrity Leak Test, ASTM Approved. PDA 2nd Annual Global Conference on Pharmaceutical Microbiology : s.n., 2007. 10. Lam, P.; Stern, A. Visualization Techniques for Assessing Design Factors That Affect the Interaction between Pharmaceutical Vials and Stoppers.. PDA J Pharm Sci and Tech, 2010, 64 182-187 : s.n., 2010. 11. . Simianu, M. Modeling Applied for the Design and Qualification of Container/ Closure Integrity, PDA Annual Meeting 2009. 12. Lighthouse. Application Note 103, Detecting Raised Stoppers in Sterile Freeze Dried Vials. 13. Volume 4, EU Guidelines to Good Manufacturing Practice Medicinal Products for Human and Veterinary Use, Annex 1, Manufacture of Sterile Medicinal Products. EudraLex, The Rules Governing Medicinal Products in the European Union. 2008. 14. PDA Technical Report 44, (TR 44) Quality Risk

Management for Aseptic Processes. 2008. 15. Recommendation GMP Annex 1 Revision 2008, Interpretation of most important changes for the Manufacture of Sterile medicinal Product. PIC/S 2010. 16. Guidance for Industry, Container and Closure System Integrity Testing in Lieu of Sterility Testing as a Component of the Stability Protocol for Sterile Products. FDA. 2008. 17. Harmonized Tripartite

Guideline, Pharmaceutical Development. ICH Q8(R2). 2004. 18. Tchessalov, S. and A.l. Investigation of LyosealĂ&#x2019; impact on sublimation rate during primary drying using TDLAS. Poster Presentation, Breckenridge, 2008. 19. Kayens, Chris. LyoSeal: qualification and validation. PDA Conference on Pharmaceutical Freeze Drying. 2009.

Philippe LeGall. Philippe has over 20 years of industry experience in both primary packaging for sterile parenterals and pharmaceutical compliance. He participated in the early development and marketing of ready-to-sterilize elastomeric closures before opening the European branch of a pharmaceutical compliance and engineering firm. He joined Biocorp in 2008 as director, sales and marketing. Email:

Tony Bouzerar Tony has over 15 years of healthcare industry experience in business development and international collaboration management. Within several global corporations, he participated in the successful launch of several drug delivery systems for blockbusters and orphan drugs, in the US and in Europe. He joined Biocorp in 2009 as Technical & Commercial support manager. Email:


Innovative Glatt Fluid Bed Pelletising Technologies

In multi-particulate systems the dose of the drug substance is, in contrast to classic single-unit dosage forms like tablets, divided among a plurality of sub-units consisting of thousands of spherical pellet particles with a diameter of typically 100 – 2000 μm. Although their manufacture and design are more complex than those of classic single-unit dosage forms, multi-particulate dosage forms offer a multitude of interesting options and advantages for accomplishing unique product characteristics, in particular specific drug release patterns. In contrast to non-disintegrating monolithic single-unit forms, which retain their structure in the digestive tract, multi-particulate preparations consist of numerous sub-units which disperse after administration. Each single sub-unit acts as an individual modified release entity. As a consequence of this property, the multiple-unit approach offers certain advantages for a modified release dosage form over monolithic preparations like tablets:
• reduced variability of gastric emptying
• reduced dependency on the nutritional state
• minimised risk of high local drug concentrations within the GI tract
• reduced risk of sudden dose dumping
• lower intra- and inter-individual variability
• controlled onset time of drug release
• delivery of the active ingredient to distal sites within the GI tract
With multi-particulate pharmaceutical drugs, optimised pharmacokinetic behaviour can go together with good patient compliance. Many creative options for arriving at intelligent, sophisticated and reliably acting pharmaceutical dosage forms are technically available. The question is: do we have feasible technologies to establish reproducible product and process quality?

The described multiparticulate pellet units can be formulated into different drug application forms (fig. 1): the most conventional form is the capsule. Pellets may also be compressed into tablets; after disintegration of the tablet in the stomach, the pellets are set free and act as multiparticulates. Pellets with a particle size < 500 μm can be applied as oral suspensions without producing a sandy mouthfeel. To achieve such small pellet sizes, particular technologies providing micropellets are required; the extrusion technique is not applicable for this purpose (fig. 2). With classic fluid bed drug layering and coating technologies like the Wurster and the Rotor technology such pellet particle sizes are basically achievable, bearing in mind that the Wurster process is limited to drug layering approaches; an optimised Rotor technology could achieve an even better performance than the existing one. In addition to these existing and established pelletising technologies, GLATT has developed new pelletising technologies (fig. 3) allowing new formulation options and product qualities. In particular, unique benefits and opportunities such as a small pellet size range of 100 – 500 μm, uniformity of particle size distribution, smooth particle surface, high density and high drug loading are achievable.
1. CPS™ Technology (Controlled Release Pelletising Technology)
CPS™ Technology is a direct pelletisation process resulting in matrix-type pellets. The release characteristics of the API from CPS™ pellets depend both on the pellet formulation and on the pelletising process. The CPS™ technology is an advanced fluid bed rotor technology allowing the preparation of matrix pellets with particular properties in a batch

process; extremely low-dosed, highly potent drugs can be formulated into CPS™ matrix pellets, as can high-dosed APIs (fig. 2); the drug concentration can vary from < 1% up to 90%. In contrast to the established GLATT Rotor system, the CPS™ Technology works with a conically shaped rotating disc and additional devices ensuring a directed particle movement (fig. 4). Inert starting beads are not required for the CPS™ Technology; typically, microcrystalline cellulose powder is used as a basic excipient; moreover, other functional excipients such as polymers, disintegrants, solubilisers and the like can be part of CPS™ formulations in combination with the API. The starting powder (blend) is wetted with the pelletising liquid until a defined moisture level is reached; at this point, spherical pellets begin to form (fig. 5). The pelletising liquid can be water and/or organic solvents, which may also contain functional compounds. As an option, dry powder may be fed into
Figure 1. Final drug application forms with pellets

Figure 2. Product characteristics with different pelletisation technologies


the process. The endpoint of the pelletisation can be defined with the help of torque measurement at the CPS™ rotor. By means of a characteristic rolling particle movement, and thereby the application of different forces, in particular centrifugal forces, on the arising pellet cores, a defined densification of the particles can be achieved. Finally, the pellets are dried in the CPS™ unit or in a classical fluid bed dryer configuration. Fig. 6 shows the characteristics of CPS™ pellets containing 75% of an API in comparison with the same pellet formulation manufactured by extrusion (fig. 7): the CPS™ pellets have a higher density due to the particular spheronisation process, and their surface is smoother than that of the extruded pellets, providing ideal prerequisites for coating applications.
Outstanding product characteristics of CPS™ pellets:
• spherical and smooth pellet surfaces = ideal for coating applications
• high density / low porosity of pellets
• broad potency range for APIs
• low attrition and friability
• dust-free surfaces
• mean particle size range: 100 – 1500 μm
• narrow particle size distribution (fig. 8)
• controlled drug release from the CPS™ matrix (fig. 9)
2. MicroPx™ Technology
The MicroPx™ Technology is a fluid bed agglomeration process resulting in matrix-type pellets. The particle size can be rather small, e.g. < 400 μm, together with a high drug loading of typically 95% (fig. 2). Functional pharmaceutical excipients, e.g. for bioavailability enhancement or controlled drug release, can be integrated into the pellet matrix. The MicroPx™ Technology is a continuous fluid bed process: again, no starting cores are required for the pelletisation. Typically, all formulation components, such as the API, pharmaceutical binder(s) and other functional ingredients, are contained in a liquid which is fed into the MicroPx™ process via spray guns; the spraying liquid can be a solution, suspension, emulsion or the like.
The design of the MicroPx™ technology is shown in fig. 10: at pilot and commercial scale, a rectangular processing chamber provides an ideal product flow. The fluidising air is led through a Konidur inlet air distribution plate into the processing area; by this means a directed air stream is provided, allowing directed product transport over the inlet air distribution plate towards the classifying unit. One or more spray guns are mounted in the air distribution plate. A set of cartridge filters blows dust back into the processing area in a controlled manner. At the front of the processing chamber an online classification unit, a zig-zag sifter, is mounted in order to continuously discharge well-sized product from the continuous process and to retain product that is still too small. By adjusting the classification air flow, the particle size of the "good" product to be discharged from the process is defined. As a number of channels, each of them having a number of edges, is used for the classification, a narrow particle size distribution is achieved (fig. 11). The direct pelletisation process starts with spraying the API-containing liquid into the empty MicroPx™ fluid bed unit. Initially, powder is generated by spray drying; the powder is stepwise agglomerated into seeds. The seeds provided online are continuously layered with droplets from the bottom spray nozzles, ending up as onion-like structured micropellets. The process is characterised by a permanently balanced ratio of spray drying and layering of already existing seeds. Well-sized pellets are continuously discharged from the process through a rotary valve after classification by the zig-zag sifter. In order to allow spray drying alongside the layering of existing pellets, the product bed in the process must not be too high; this requirement also applies if the directed product flow towards the sifter is to be put into effect.
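The classification principle described above, in which the chosen air flow sets the cut size of the discharged product, can be illustrated with a back-of-the-envelope settling-velocity estimate. This is only a sketch: the material properties below are assumed, Stokes' law is used for simplicity (real cut sizes in this regime need an intermediate-regime drag correction), and actual sifter cut points also depend on channel geometry.

```python
import math

MU_AIR = 1.8e-5   # Pa*s, viscosity of air at ambient conditions (assumed)
RHO_P = 1200.0    # kg/m^3, pellet density (assumed)
RHO_F = 1.2       # kg/m^3, air density
G = 9.81          # m/s^2

def stokes_settling_velocity(d):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m), Stokes regime."""
    return (RHO_P - RHO_F) * G * d ** 2 / (18.0 * MU_AIR)

def cut_diameter(v_air):
    """Diameter whose settling velocity equals the classification air velocity.
    Larger particles settle against the air stream and are discharged;
    smaller ones are carried back into the process."""
    return math.sqrt(18.0 * MU_AIR * v_air / ((RHO_P - RHO_F) * G))

# Raising the classification air flow raises the cut size:
for v in (0.3, 0.9, 1.5):
    print(f"v_air = {v:.1f} m/s -> cut diameter ~ {cut_diameter(v) * 1e6:.0f} um")
```

Under these assumptions a classification air velocity of roughly 1 m/s corresponds to a cut size in the 100 – 200 μm region, i.e. the micropellet range the article describes.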
Besides the classical fluid bed operating parameters such as inlet air volume, inlet air temperature, atomisation air pressure and liquid feed rate, the classification air volume, which defines the particle size of the discharged product, is characteristic of the MicroPx™ process. As a certain degree of spray drying is an important requirement for the performance of the continuous pelletisation process, it is easy to understand that the product temperature is typically higher than in a Wurster layering or coating process, where losses of product by spray drying must be absolutely avoided. Outstanding

Figure 3. Innovative GLATT pelletisation technologies

Figure 4. Rotor Technology

Figure 5. CPS™ Pelletising process in progress

Figure 6. CPS™ Matrix Pellets

Figure 7. Pellets from an Extrusion / Spheronisation process

Figure 8. Particle size distribution of extended release CPS™ pellets


product characteristics of MicroPx™ pellets:
• spherical and smooth pellet surfaces = ideal for coating applications such as taste masking, controlled release coating etc.
• high density / low porosity of pellets
• high drug loading: typically 95%
• low attrition and friability
• dust-free surfaces
• mean particle size range: 100 – 500 μm
• narrow particle size distribution (e.g. > 90% between 100 – 300 μm without sieving)
• inclusion of bioavailability enhancers, controlled release polymers etc.
The MicroPx™ technology is ideally applied when taste-masked micropellets must be manufactured for use in oral suspensions, sachets etc. For example, an extremely bitter-tasting antibiotic was to be formulated as an oral suspension for children. The high-dosed drug substance had to be taste-masked to give a water-based suspension which, after preparation, must stay perfectly taste-masked for a two-week period at room temperature. Nevertheless, the in vitro dissolution of the API from the taste-masked form should be fast (> 75% after 15 min). To fulfil these requirements, it was decided to formulate the API into micropellets using the MicroPx™ technology. The final taste-masked coated pellets had to be smaller than 500 μm; meeting this requirement avoids an unpleasant sandy mouthfeel. Fig. 12 shows the formulation principle of the taste-masked micropellets: the API is included in the core, which is seal-coated before a final functional taste-masking coating is applied. Fig. 13 demonstrates the appearance of the uncoated micropellets and their narrow and reproducible particle size distribution. Fig. 14 shows the cross-sectional view of the micropellets with the two-layer coating. Fig. 15 presents the taste-masking performance and the in vitro dissolution results: with the MicroPx™ pellet concept, perfect taste-masking was achieved together with a very fast in vitro dissolution.
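A narrow particle size distribution of the kind quoted above (e.g. > 90% between 100 – 300 μm) is often summarised as the fraction of the distribution falling within a band. Assuming a lognormal size distribution, which is a common model for pellets but an assumption here, and with an illustrative median and geometric standard deviation rather than measured MicroPx™ data, that fraction can be computed with the standard library alone:

```python
import math

def lognormal_cdf(d, d50, gsd):
    """CDF of a lognormal particle size distribution with median d50 and
    geometric standard deviation gsd (both in the same units as d)."""
    sigma = math.log(gsd)
    return 0.5 * (1.0 + math.erf(math.log(d / d50) / (sigma * math.sqrt(2.0))))

def fraction_in_band(d50, gsd, lo, hi):
    """Fraction of the distribution lying between sizes lo and hi."""
    return lognormal_cdf(hi, d50, gsd) - lognormal_cdf(lo, d50, gsd)

# Hypothetical micropellet batch: median 180 um, GSD 1.35 (assumed values).
frac = fraction_in_band(180.0, 1.35, 100.0, 300.0)
print(f"{frac:.1%} of the distribution lies between 100 and 300 um")
```

With these assumed parameters, roughly 93% of the batch falls in the 100 – 300 μm band, consistent with the kind of figure quoted for the sifter-classified product.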
Comparison of CPS™ and MicroPx™ Technologies
The described innovative fluid bed technologies are suited to different applications:

MicroPx™ technology is the most feasible technology when particles with a drug loading > 90% must be provided in a particle size range of 100 – 400 μm; such small pellets are frequently needed for taste-masking applications, but also for the compression of pellets into tablets. CPS™ can provide a similar particle size range, but typically lower API loads are intended and reached; a regular API load range is 0.01 – 75%. As the densification can be well controlled by adjusting the CPS™ processing parameters, in particular the form and speed of the rotating disc, CPS™ matrix pellets are most appropriate for modified drug release from the matrix. Any functional coating can additionally be applied onto the CPS™ matrix pellets in order to achieve a particular in vitro dissolution profile. Both processes are appropriate for providing highly valuable pharmaceutical products with unique properties.
3. ProCell™ Technology
The Glatt ProCell™ Technology is a spouted-bed pelletising process for the preparation of very highly concentrated pellet-shaped particles; ideally, no additional excipients are required for the formation of ProCell™ particles, in which case particles consisting of pure API are obtained. Particles are fluidised in the ProCell™ spouted bed by a vertical process airflow: the process air enters the processing chamber through slots at the side, not through the usual bottom screen or inlet air distribution plate as in conventional fluid bed processing (fig. 16). The cross-section of the processing chamber becomes significantly broader towards the top, resulting in a sharp decrease in the fluidising velocity of the process air. This effect provides a controlled flow pattern and circulation of the particles in the processing chamber. Spray nozzles are usually arranged in the bottom spray position, right between the two inlet air slots; in this position they spray at the point of the highest energy input inside the unit.
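The geometric effect just described, a widening cross-section sharply lowering the superficial air velocity and dropping particles back into circulation, follows directly from continuity (v = Q/A). The flow rate and chamber dimensions below are assumed for illustration only, not Glatt specifications:

```python
def superficial_velocity(q, width, depth):
    """Superficial air velocity v = Q / A for a rectangular cross-section
    of the given width and depth (SI units)."""
    return q / (width * depth)

Q = 0.5      # m^3/s volumetric process air flow (assumed)
DEPTH = 0.5  # m, chamber depth, taken as constant (assumed)

# The chamber widens towards the top, so the same air flow slows down:
for width in (0.2, 0.4, 0.8):
    v = superficial_velocity(Q, width, DEPTH)
    print(f"width {width:.1f} m -> superficial velocity {v:.2f} m/s")
```

Doubling the width halves the velocity, which is why particles carried up from the narrow, high-velocity inlet region fall back once they reach the expanded section.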
The ProCell™ Technology is a direct granulation and pelletising process: again, as with the CPS™ and the MicroPx™ technology, no inert starting beads are required, and solutions, suspensions, emulsions or the like containing the API can be processed. ProCell™ Technology performs most effectively when a melt of a

Figure 9. Effect of particle size distribution on the in vitro dissolution of extended release CPS™ pellets (phosphate buffer pH 6.8, 37°C)

Figure 10. MicroPx™ Technology Pilot and Commercial Scale

Figure 11. MicroPx™ Technology Zig-Zag-Sifter

Figure 12. Taste masked MicroPx™ Pellets : Formulation Principle

Figure 13. Appearance and particle size distribution of MicroPx™ Pellets

Figure 15. Taste masking performance and in vitro dissolution of taste-masked MicroPx™ pellets


Figure 16. ProCell™ Technology

Figure 20. in vitro dissolution profiles of Ibuprofen modified release tablets (potency: 96%, ProCell™ granule) and a market product (potency: 66%)

Figure 17. Ibuprofen DC grade 200 – 400 μm from ProCell™ Technology

Figure 21. Innovative GLATT Technologies as Modules for Standard Fluid Bed Units: All-in-One Option

Figure 18. in vitro dissolution profile of Ibuprofen 200 mg tablets made with Ibuprofen DC grade from ProCell™ Technology

material is processed, as in this case neither water nor organic solvents have to be evaporated; the formation of granules and pellets takes place by means of spray solidification and agglomeration. In this way, high throughputs and cost-effective processes are possible. The continuously arising product quantities can be fractionated online by means of a zig-zag sifter or offline by means of a sieving unit. In either case, the separated material can be recirculated into the ongoing process, minimising product losses. Typically, ProCell™ particles are spheronised less perfectly than with the CPS™ and the MicroPx™ technologies.
Outstanding product characteristics of ProCell™ granules and pellets:
• very high drug load, up to 100%
• mean particle size range from 50 – 1500 μm
• narrow particle size distribution
• high density / low porosity of particles
• low attrition and friability
• particularly suitable for processing of products with inherent stickiness
Ibuprofen, a classic analgesic API used worldwide, has two drawbacks: during compression of conventionally produced granules, sticking very often occurs, and Ibuprofen can cause a very unpleasant scratching in the throat during swallowing. To overcome the sticking problem during compression, a direct-compressible (DC) Ibuprofen grade was developed by Glatt using the ProCell™ technology. The Ibuprofen DC grade was then compressed into 200 and 400 mg tablets. As a positive side effect, lower tablet weight and smaller tablet size can be achieved, and tablet manufacturers can eliminate the granulation step from their production when using the direct-compressible Ibuprofen grade instead of standard Ibuprofen drug quality. The compaction properties of the Ibuprofen DC grade were investigated in depth in commercial-scale, long-term compaction studies.

Figure 19. Ibuprofen modified release tablets made with ProCell™ granule

Figure 22. Product Quality / Throughput / Manufacturing cost with innovative GLATT continuous fluid bed technologies
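Comparisons of in vitro dissolution profiles such as the one in Figure 20 (ProCell™-based tablet versus a market product) are conventionally quantified with the f2 similarity factor defined in the FDA and EMA dissolution guidances. A minimal sketch follows; the profile values are invented for illustration and are not the data behind the figure:

```python
import math

def f2_similarity(ref, test):
    """f2 similarity factor for two dissolution profiles (percent dissolved)
    sampled at the same time points. f2 >= 50 is conventionally taken to
    indicate similar profiles; identical profiles give f2 = 100."""
    if len(ref) != len(test) or not ref:
        raise ValueError("profiles must be equal-length and non-empty")
    # Mean squared difference between reference and test at each time point.
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

# Hypothetical profiles (% dissolved at 1, 2, 4 and 8 h):
reference = [20, 38, 61, 85]
candidate = [22, 41, 58, 83]
print(f"f2 = {f2_similarity(reference, candidate):.1f}")
```

Because the metric saturates at 100 for identical curves and falls as the profiles diverge, it gives a single pass/fail-style number for the kind of profile comparison shown in the figure.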




Conversion Processes Now Key to Assuring Compliance in Pharmaceutical Packaging
Pharmaceutical packaging manufacturers increasingly agree to ‘zero fault’ supply agreements with brand owners in order to win contracts. To meet the requirements of these agreements, manufacturers have to make their quality assurance procedures more robust than ever before, a costly and time-consuming business. However, help is at hand from an unlikely source. Although traditionally seen simply as stages in the manufacture of folding cartons, in recent years new developments in conversion processes such as hot-foil stamping, die-cutting, and folding and gluing have made them key factors in ensuring carton quality.

Ten years is a long time in any industry, and the changes in pharmaceutical packaging manufacture over the last decade are sometimes hard to believe. One of the biggest has come about as a result of brand owners' increasingly keen implementation of just-in-time (JIT) and lean manufacturing processes. When carried out to maximum efficiency, JIT requires cartons to be delivered by the packaging maker direct to the packing line, almost to the minute, which leaves no time for brand owner or contract packer staff to quality-check the consignment before use. Hence the requirement from brand owners for ‘zero fault’ packaging, and the substantial penalties applied if that is not what the carton maker delivers. Effectively, almost all the quality control (QC) procedures that brand owners and contract packers used to employ in-house have been pushed back onto the carton manufacturer, making the carton production processes the last line of defence when it comes to ensuring compliance. As the name suggests, the primary function of the conversion processes is to turn a printed sheet into a flat carton ready for erection on the filling line. In terms of format, this carton may be a simple ‘straightline’ box, requiring both ends to be glued after the product has been inserted, or a more complex ‘crash-lock bottom’ or ‘lock-bottom’ box, requiring gluing only at one end after product insertion. In the pharma market, cartons are almost exclusively printed, but may also feature enhancements

such as hot-foil stamping and/or hologram application, often as an anti-counterfeiting measure, along with embossing. This latter process has seen a huge rise in utilisation thanks to EU Council Directive 2001/83/EC, which requires that, as of last October, the packaging of all medicines destined for the EU must carry information in Braille.

PACKAGING
While printing presses have a range of QC options available, few packaging manufacturers go as far as checking the print on every sheet, relying instead on random sampling to identify persistent problems such as colour drift, and on downstream processes to pick up intermittent quality problems such as mis-registration between the print and the sheet edges. The latest generations of hot-foil stamping presses, hologram applicators and die-cutters are available with dynamic register systems which can adjust such mis-registered sheets to ensure that the foil, holograms or die-cutting are applied in exactly the right place every time, with any sheets beyond the operating range of these devices being ejected. In the case of dynamic register systems like the Power Register, which is now built into many Bobst die-cutters and foiling presses, a loading bar takes control of the incoming sheet while high-definition cameras and sophisticated software analyse the position of the print relative to the sheet edge. With modern foiling presses and die-cutters capable of speeds of up to 12,000 sheets an hour, Power Register has only a fraction of a second to realign the sheet into the correct position before it must hand it off to the main part of the machine. The result is that the print and the foiling dies, hologram forces, or die-cutting tools are always in perfect alignment, even if the sheet is skewed, and the resulting blank will be perfectly foiled, hologrammed, or die-cut. This is not only important for the aesthetics of the box; where foil or holograms are being applied, it may also be needed to ensure the effectiveness of anti-counterfeiting measures.
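The time budget implied by the quoted speed is easy to check: at 12,000 sheets an hour, the entire handoff, camera analysis and realignment must fit inside the per-sheet cycle time. A quick arithmetic sketch:

```python
# Per-sheet cycle time at the quoted throughput of 12,000 sheets/hour,
# within which a dynamic register system must analyse and realign the sheet.
sheets_per_hour = 12_000
seconds_per_sheet = 3600 / sheets_per_hour
print(f"{seconds_per_sheet * 1000:.0f} ms per sheet")  # 300 ms
```

So "a fraction of a second" here means well under 300 ms, since part of that window is consumed simply transporting the sheet.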
Systems such as Power Register have an instant and substantial effect on the quality of the print-to-cut, print-to-emboss, print-to-foil, or print-to-hologram registration, but they cannot check that it is the correct sheet, or that every blank is correctly embossed or windowed, which is where folder-gluer-mounted systems come in. Typically, a folder-gluer runs at several hundred metres a minute, often churning out over a hundred thousand straightline cartons, or tens of thousands of crash-lock cartons, during each hour of production. Long before JIT became so popular, folder-gluer lines had been supplemented by simple detection systems which monitored the correct application of the adhesive to the carton flaps prior to their folding. These systems allowed operators to rectify a problem and remove any poorly glued or unglued cartons. In subsequent years these systems have proved to be the perfect platforms on which to piggyback additional QC checks. Add-ons to adhesive monitoring systems have been developed which check that the carton has the correct print (using the pharma codes or barcodes printed on them), check the print-to-cut registration (if the sheet was not die-cut on a machine with a dynamic register system), and can even ensure that two cartons have not fed together. Modern folder-gluers integrate or interface with such systems, and may even offer ‘flipper ejectors’ which instantly remove non-conforming cartons from the line without affecting production. The latest development in these systems is Braille-scanning, which has become an increasingly important issue. Traditionally applied at the die-cutter stage, Braille is now often applied in-line within the folder-gluer because using a rotary process is quicker, more efficient, and requires less investment in tooling (Fig. 1). When using a die-cutter, Braille is applied by bespoke dies, one for each carton on the sheet, which may prove expensive on a multi-up pharmaceutical job. The application of Braille in this way can often require a separate run through a die-cutter, especially where the Braille text needs to lie close to the edges of the finished box. Setting Braille embossing on a die-cutter also requires a high level of skill, especially if die-cutting is being carried out in the same process, and the high tonnages used for die-cutter-applied embossing can quickly lead to degradation of the tool and hence of the Braille dot. Rotary Braille application using a folder-gluer avoids most of these problems.
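At its core, the verification stage of a Braille-reading check is a mapping from each detected 6-dot cell to a character, which can then be compared against the reference text or displayed for the operator. A toy sketch of that mapping follows; the dot numbering and letter patterns are standard Braille, but the `decode` helper and its reject-on-unknown behaviour are hypothetical, not Bobst's implementation:

```python
# Standard Braille letters a-j, each cell given as its set of raised dots
# (dots numbered 1-3 down the left column, 4-6 down the right).
BRAILLE = {
    frozenset({1}): "a", frozenset({1, 2}): "b", frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d", frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f", frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h", frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode(cells):
    """Translate a sequence of raised-dot sets into text. Unknown cells are
    flagged with '?' rather than guessed, so a non-conforming carton can be
    ejected instead of silently accepted."""
    return "".join(BRAILLE.get(frozenset(c), "?") for c in cells)

# The cells for b, a, d spell out the word:
print(decode([{1, 2}, {1}, {1, 4, 5}]))  # -> bad
```

A full system would carry the complete Braille table plus the language-specific contractions, but the compare-against-reference logic is the same.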
A single long-life tool, comprising an inexpensive male and reusable female element, means drastically reduced tooling costs, with no restriction as to position relative to box edges. Rather than being difficult to set like die-cutter-applied Braille-embossing, a system such as Bobst’s AccuBraille is set in five minutes by the operator while they also set the folder-gluer, so negating the need for

a separate run or lengthy make-ready. The nature of the rotary process means that tools last longer and dots stay well defined. But even though rotary-applied Braille is considerably more reliable than die-cutter-embossed Braille, there is always the possibility of error, which has led to the latest additions to folder-gluer-mounted QC systems. Systems such as Bobst's Braille-Scan read every carton passing through a folder-gluer and check the Braille applied against a reference sample, ejecting any non-conforming examples. Capable of scanning each carton even at the high running speeds of modern folder-gluers, Braille-Scan also translates the Braille on the carton into the operator's own language, displaying it on a monitor as a double check. Braille-Scan can be used to check the quality of both rotary and die-cutter-embossed Braille. By using systems such as Power Register, AccuBraille, and Braille-Scan, along with the other gluer-mounted QC, packaging manufacturers and brand owners can be much more certain that the cartons reaching the packing line will be ready for the insertion of the product. This doesn't mean that there is no need for post-production QC; in fact, ‘zero fault’ packaging contracts mean that there is probably more need than ever. Nonetheless, machine-mounted quality control and fault rectification equipment like that discussed above helps pharmaceutical packaging manufacturers meet their ‘zero fault’ commitments, and also helps them sleep a little easier at night.

Marco Lideo
Originally a graduate nuclear engineer, Marco Lideo joined Bobst, the Swiss-headquartered multinational packaging equipment manufacturer, in 2001. Here he has undertaken a variety of roles and also gained his MBA. Marco is currently Marketing & Sales Director for folder-gluer products worldwide, and is in particular responsible for the introduction to the market of the latest innovations. Email:




Pharmaceutical Piracy: Colour-Coded Security and Traceability
The problem of counterfeit medicines was first addressed at the international level at a World Health Organization (WHO) conference in 1985. Things have come a long way since then, and much has been undertaken to combat the growing threat of fake medication. Yet, while governments and organisations are discussing the implementation of a uniform definition into their respective legislations, and guidance is being published for the mass serialisation of prescription medicines, the counterfeiting ‘business’ is shifting to new markets. In just two months, 34 million counterfeit tablets were seized by customs authorities across the European Union member countries. “This exceeded our worst fears,” former European industry commissioner Günter Verheugen told the German daily ‘Die Welt’. But what exactly is a counterfeit medicine? Up to now, no uniform definition has been agreed upon. In the WHO factsheet on counterfeit medicines, we read: “Counterfeit medicines are medicines that are deliberately and fraudulently mislabelled with respect to identity and/or source.” The International Medical Products Anti-Counterfeiting Taskforce (IMPACT) continues: “Their quality is unpredictable as they may


contain the wrong amount of active ingredients, wrong ingredients or no active ingredients. In all cases counterfeit medicines are manufactured in clandestine laboratories with no possibility of control.” Additionally, primary or secondary packaging, patient information leaflets or labels and documentation accompanying the medicines are prone to counterfeiting.

“A First Step”
The first definition intended for international use was formulated by the WHO in 1992. In the Declaration of Rome of 18th February 2006, the newly-founded IMPACT stated that “counterfeiting medicines is widespread and has escalated to such an extent that effective coordination and cooperation at the international level are necessary for regional and national strategies to be more effective.” Yet neither a common definition nor a common solution in the fight against counterfeit medicines has been found. The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) also observes that “the absence of a universally accepted definition makes information exchange between countries very difficult, limits the ability to understand the true extent of the problem at global level, and hinders the development of global strategies to combat the problem”.

The American Food and Drug Administration (FDA) Amendments Act of 27th September 2007 imposed March 2010 as the deadline for developing or adopting a standardised numerical identifier (SNI) for prescription drug packaging. The guidance is compatible with the California e-pedigree law, which has been delayed until 2015/2017. The final guidance for a standardised numerical identification recommends the introduction of a serialised National Drug Code (sNDC), “composed of the National Drug Code (NDC) that corresponds to the specific drug product combined with a unique serial number, generated by manufacturer or repackager for each individual package.” The FDA is still far from implementing any rules into legislation: “defining the SNI is expected to be a first step to facilitate the development of other standards and systems for securing the drug supply chain”.

A Highly Profitable Market
In 2008, the European Commission went much further into the question with its public consultation in preparation of a legal proposal to combat counterfeit medicines for human use. 128 responses from stakeholders were received and considered in the impact assessment leading to the current proposal amending Directive 2001/83/EC. The proposal has been submitted to the European Parliament and the Council, where it will be discussed and voted on in a co-decision procedure. Nearly three years ago, the Commission was already aware that “Member States are starting to consider taking unilateral action to address the problem”. Some countries are already implementing their own national solutions while industry-based tests are being carried out in others. A subsequent unification of these different schemes further complicates

the implementation of international standards. The pharmaceutical industry is moving fast, and counterfeiters have discovered a new and highly profitable market in which to distribute their products: the internet. A Pfizer-sponsored survey indicates that Western Europeans spend approximately 10.5 billion euros a year on illicit medicines, many of which are believed to be counterfeits. Nearly half of the transactions are for weight-loss medicines, followed by prescription drugs for flu treatment. The WHO has been warning about the risks of online pharmacies for quite some time: “In over 50 percent of cases, medicines purchased over the Internet from illegal sites that conceal their physical address have been found to be counterfeit.” A study released by the European Alliance for Access to Safe Medicines (EAASM) provides even more worrying figures. Of the over 100 online pharmacies tested, 84.5 per cent do not physically exist; only 6.2 per cent are linked to a named, verifiable pharmacist. Laboratory analysis revealed that 62 per cent of all products ordered online by the research team were counterfeit, substandard or unapproved generics.

No Prescription Required
For the third time in three years, Interpol conducted its Operation Pangea, “an international week of action tackling the online sales of counterfeit and illicit medicine”. 45 countries took part in 2010, leading to the seizure of 2.6 million US dollars’ worth of illicit and counterfeit pills. Among these were antibiotics, steroids, anti-cancer, anti-depression and anti-epileptic drugs, as well as slimming pills and food supplements. 694 websites were found to be engaged in illegal activity; 290 were shut down. These figures clearly show how illegal internet sales of medicines are developing. But they also highlight how difficult it is for authorities to intervene in the long term. Prescription drugs can be obtained online without difficulty.
Patients no longer need a valid prescription from an accredited practitioner. The prescription is either issued by an online doctor or is not required at all. The EAASM study found that in 90.3 per cent of all cases, no prescription was required for prescription-only drugs. So how can patients find out whether

they have bought a substandard or counterfeit medicinal product? The Royal Pharmaceutical Society of Great Britain (RPSGB), now replaced by the General Pharmaceutical Council, developed guidelines for what legitimate online pharmacies should do: name, address and owner must be clearly displayed; patients must be required to provide a medical history evaluation and a prescription signed by a registered national doctor before purchasing medicines; the pharmacy must have a physical address in the country in which it claims to carry out its business, as well as a telephone number and a privacy and security policy. All registered pharmacies can be found on the website of the General Pharmaceutical Council, and each must display a logo containing its unique seven-digit number on its website.

Data Security at Risk
Only 4.4 per cent of the websites checked by the EAASM are listed as legitimate, while 20 per cent bear a logo or stamp of approval from a recognised society. Of these, nearly 86 per cent were found to link to a bogus website. Counterfeiting logos on products and packaging is not too hard, as customs statistics demonstrate every year. But implementing a fake logo on a website is trivial for anyone who has used a graphics program a few times. The user needs to know how to distinguish false from original: original logos must link to the website of the issuing society. But so many organisations, associations and societies worldwide supply online shops with security seals that, here again, it is difficult to feel reassured. Even if the medicines bought online are not dangerous, the transaction itself reveals many personal details, such as medical and financial information, to dubious traders. The EAASM expert panel states that “while several of the medicines bought online were considered highly likely to be substandard and/or counterfeit, the vast majority of typical European consumers would be unable to detect this”.
This surely is one of the main issues with the online purchase of medicines: many patients are not aware of the threats that counterfeit medicines impose on their health. If a website looks trustworthy, the medicines are promptly delivered

and the pills, syrups and syringes look authentic, most consumers will not check the packaging for evidence of tampering. This leads many experts to the opinion that online sales of medicines should be banned immediately, and in particular that prescription drugs should only be sold to patients in physically existing pharmacies. Yet it is far too late for such steps. Online sales are rising continuously, and it would simply be impossible to put a ban on internet sales into practice in the near future.

Multi-Layer Approach
On one point nearly all experts agree: a so-called multi-layer or multi-level approach is needed. A study by Cambridge Consultants names three different types of technologies: tamper-evident, serialisation and authentication features. Tamper-evident packaging or closures are seen as the first layer of protection, and can include security features like perforation or seals integrated into the packaging design. Serialisation implies a unique serial number printed on each package. Each pharmaceutical product needs to be traceable along the entire supply chain, to ensure that only originals are being sold to the end-customer. Authentication can be overt, like holograms and watermarks, or covert, like micro colour-codes. Because many security features are counterfeited in next to no time, different technologies must be combined, and more than one layer should be used. One example of such a combined technology is a system based on micro colour-codes. These codes, with a size of 8 to 90 micrometres, are manufactured in four to ten different colour layers. An individual user code is exclusively allocated to the products of a specific brand-owner. For example, secondary packaging is marked with a data matrix code printed onto a label used for tracking purposes. This label can also be tamper-evident, e.g. when applied over the closure of the packaging, revealing at first sight whether the pack has been opened.
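Serialisation of the kind described here — a unique serial number assigned to every pack, in the spirit of the FDA's sNDC concept mentioned earlier — can be illustrated with a short sketch. This is a hypothetical illustration, not any agency's or vendor's actual format; the NDC value and the 20-digit serial layout are invented for the example:

```python
import secrets

def make_sndc(ndc: str, serial_bits: int = 64) -> str:
    """Compose a serialised code: the product's NDC plus a unique,
    randomly generated serial number (hypothetical format)."""
    serial = secrets.randbelow(2 ** serial_bits)
    # Zero-pad to 20 digits so every pack carries a fixed-length serial.
    return f"{ndc}-{serial:020d}"

# Example with an invented NDC; each call yields a distinct pack-level code.
print(make_sndc("0002-3227-30"))
```

Randomised serials (rather than sequential ones) are one common design choice, since a counterfeiter cannot simply guess the next valid number in a sequence.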
It may also include, for example, a hologram as an overt authentication feature. The colour-code is applied onto this label, providing, in a first step, covert authentication. In a second step, it secures not only the product and its packaging, but also the

label or seals and the traceability code. Additionally, patient information leaflets and authentication certificates can be secured with micro colour-codes.

Double-Checking Information
Many technology companies advertise their track & trace solutions as counterfeit-proof. Yet, with a little knowledge of the printing business, counterfeiters can easily print their own fake data matrix codes or copy existing ones from solution providers and multiply them thousands-fold. As soon as a code has been checked in the corresponding database, the next code with the same specifications will be identified as fake. But what if the first code was copied and the second or third one is the original? To ensure that these problems do not hinder the supply chain of original medicines, another code needs to be applied to the traceability label, making the label and the product identifiable as originals. The information sent to the databases needs to be double-checked. This can be done by applying a micro colour-code to the traceability device. The producer-inherent code is deposited in the databases and can be verified by use of a simple microscope. Internet sales of pharmaceutical products have risen so fast over the past years that it would be impossible to forbid them altogether. To get to the heart of the problem, better patient education must prevent unknowing purchases of fake medicine. Patients who consciously buy their medication from unreliable sources need to be warned about the dangers they expose themselves to. The adoption of a uniform definition and international standardisation of protection measures would greatly assist the fight against counterfeiters. Activities like Operation Pangea or international seizure operations need to be stepped up in order to get to the operators of bogus online pharmacies and the producers of counterfeit or illicit medicines. A multi-layer approach consisting of authentication, serialisation and tamper-evident features can help secure products along the entire supply chain. In all these aspects, micro colour-code technology can decisively contribute to the success of all these endeavours.
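The duplicate-code problem raised in the double-checking discussion — a cloned serial passes the first database check, after which the database alone cannot tell genuine from copy — can be sketched roughly as follows. This is a simplified toy model, not any vendor's actual system; the class and return strings are invented for the example:

```python
class SerialRegistry:
    """Toy model of a track & trace database that flags duplicate
    serial numbers, as happens when a printed code is cloned."""

    def __init__(self):
        self._seen = {}  # serial -> location of the first report

    def verify(self, serial: str, location: str) -> str:
        if serial not in self._seen:
            # First time this serial is reported: accept and record it.
            self._seen[serial] = location
            return "ok"
        # The same serial reported twice: one of the two items is a
        # copy, but the database alone cannot tell which one.
        return f"duplicate: first seen at {self._seen[serial]}"

registry = SerialRegistry()
print(registry.verify("SN-001", "wholesaler A"))  # ok
print(registry.verify("SN-001", "pharmacy B"))    # duplicate: first seen at wholesaler A
```

This is exactly why the article argues for a second, physical authentication layer (such as a micro colour-code on the label): the database can only flag that a collision occurred, while the physical feature decides which item is the original.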

Rolf Simons
General Management, 3S Simons Security Systems GmbH
Rolf Simons is founder and general manager of 3S Simons Security Systems GmbH. During the 1970s Simons was already devoted to counterfeit protection. He founded Simons Druck und Vertrieb GmbH in 1984 and developed the patented, legally binding micro colour-code SECUTAG®. Since 2008 he has run 3S Simons Security Systems GmbH together with his son Dirk Simons. Besides his duties in management and R&D, Rolf Simons takes part in conferences and trade fairs and is involved in international standardisation activities. Email:


Authentication: A Sensible Investment

When counterfeiting is reported globally, pharmaceutical products receive most of the attention because people fear poisoning, contamination, and medicines that don’t cure. Yet the pharmaceutical market spends less on authentication technologies as a percentage of sales than the luxury market. Why? Part of this is due to the unique way prescription drugs are distributed in the largest global market, the US, where the patient seldom sees original packaging. In the developing world, generics are the most consumed, and until recently were not believed to be heavily counterfeited. But globally, counterfeiting of all categories of drugs continues to grow: generic, branded, prescription, and self-medication drugs. Why should pharmaceutical companies invest more in authentication? Authentication:
• Enables investigators, customs, and law enforcement to quickly detect fakes
• Helps patients, doctors and nurses detect fakes
• Assures patients, doctors, and nurses you care about protecting them from fakes
• Creates supply chain transparency
• Creates evidence and prosecution bandwidth
• Allows security features to be adjusted for risk markets without major package design changes
• Expands an organisation’s overall capabilities for anti-counterfeiting and product security efforts

Enabling investigators, customs, and law enforcement to quickly detect fakes: often customs agents and drug regulatory personnel are the first to come across suspect products. During field

investigations, undercover operations, and seizures, investigators and law enforcement agents need to be able to quickly detect fakes. Sophisticated counterfeits make this difficult, and they can easily slip past inspection points. As checkpoints struggle to distinguish real from fake, legitimate drugs may be held too long and delayed from reaching their destination. Drugs requiring refrigeration may have cold packs compromised if kept too long in a customs warehouse. Authentication technologies help investigators who are trained to recognise features more quickly distinguish between real and fake products, saving time and preventing confusion.

Helping patients, doctors, and nurses detect fakes: using a security feature designed for patient use can give patients a tool to assure them that a product is genuine. Doctors and nurses also want to make sure they are administering medicines that are curing the patient. Through the use of authentication tools and training, doctors and nurses can become passionate allies in helping identify spurious drugs. Some brand owners, through the use of authentication designed for patient use, have been able to regain market share lost to counterfeits in regions like sub-Saharan Africa that have high levels of counterfeiting, and where buyers worry about the presence of fakes.

Assuring patients you care about protecting them from fakes: in much of the world, patients know there is a problem. Genuine-product messages have been favourably received by patients in Europe and in other regions. Positive messages around this topic resonate well with patients who need reassurance and protection.

Creating supply chain transparency: authentication technologies using item-level identifiers, lot or batch identifiers,

and channel identifiers can be used to make sure products end up in the intended countries. They can help distribution partners identify fakes more easily. Stolen products can also be identified more easily when they re-enter the supply chain.

Creating evidence and prosecution bandwidth: by using layers of authentication, trademarks in authentication, and trademarking all parts of package design, a brand owner demonstrates that all parts of the product, package, and label are real without question. No one dealing in fakes can plead, “I thought the products were real.” Global generic manufacturers need to make sure their trademarks are adequately registered in the countries where they do business as well. Also, by layering authentication, putting trademarks in authentication, and trademarking all parts of the package design, all relevant laws can be engaged to obtain the greatest possible prosecution bandwidth: multiple trademark violations, multiple violations of tampering with recall codes, etc.

Adjusting security features for risk markets without major package design changes: although authentication technologies are sometimes integrated into the master design of a product or package, in most cases it is possible to add or change authentication features for a given market or update them more frequently than the master design. This flexibility also enables a manufacturer to use more security layers for markets or channels at greater risk.

Expanding an organisation’s overall capabilities for anti-counterfeiting and product security efforts: when it is necessary to rely on experts or laboratory analysis to identify fakes, days or weeks can go by before results are known. The time of inspectors, investigators, and others is squandered. Opportunities to

catch criminals may be lost. Resources may be misallocated tying up shipments of legitimate product which has had a packaging change. Using authentication saves time. When item-level identifiers are used, product that has been stolen can be flagged when it re-enters the supply chain. Authentication enhances communication and makes it easier to train people throughout the supply chain. For authentication to work effectively it must go hand in hand with an effective communication and training programme. Which features need to be communicated to which user audiences? What features are for patients, doctors, and nurses? What features are for customs? Training can be classroom-, internet-, or handbook-based. Johnson & Johnson, for example, will be presenting a new internet portal resource for global customs officers to view the packaging of various products at the sixth Global Forum on Pharmaceutical Anticounterfeiting, to be held in London, May 4-6, 2011. The types of authentication tools available to pharmaceutical companies are categorised as overt, covert, forensic, and random encoding. Track and trace is also useful, but it should be considered a verification tool to be used alongside authentication. Overt authentication tools are detectable and verifiable by one or more of the human senses without recourse to a tool (other than everyday tools which correct imperfect human senses, such as spectacles or hearing aids). Examples include: holograms, special printing techniques, colour-shifting inks, scratch-off messages, anti-copy inks, etc. Tamper-evident technologies can also be viewed as overt authentication tools. Covert authentication elements are hidden from the human senses until the use of a tool by an informed person reveals them, or else allows automated interpretation. Covert tools require special sophisticated readers.
Examples include: readers of optical taggants, readers of elemental taggants, readers of microscopic barcodes, readers of microscopic optical character recognition (OCR), and others. Forensic authentication elements are detectable and verifiable only by

laboratory equipment, depending on various laboratory analytical techniques. Some methods of authentication rely on the unique surface and composition of products for verification. Random encoding relies upon generating a unique code from the unique molecular, sub-micron, or microscopic surface properties of a product’s materials. It is a way of tying the forensic properties to a readable code. Some systems need a database connection; others do not. It is important to layer technologies. No one solution can solve all authentication problems and protect against all potential risks. Each type of authentication technology has strengths and weaknesses, and it is best to develop a layering strategy to maximise the strengths of available technologies. This part of the strategy should be developed along with the total authentication strategy. Authentication is a tool. It alone cannot defeat counterfeiting. Counterfeiting can only be defeated by people working together and using the best tools to optimise resources. The upcoming sixth Global Forum on Pharmaceutical Anticounterfeiting is about people working together to combat the global scourge of counterfeiting. The first Global Forum, held in Geneva in association with the World Health Organization in 2002, established a pattern of inclusiveness with global representatives from industry, regulators, law enforcement, and anti-counterfeiting service providers. The second Forum,

held in Paris, extended the range of attendees to include healthcare professionals and patient groups. With this range of stakeholders, the Forum provides the most constructive and stimulating environment for sharing experience, ideas and planning by those making, administering and protecting medicines. The theme for the sixth Global Forum is Optimizing Capabilities to Combat Counterfeits. With the world emerging from global recession, resources are tight, yet the need to fight pharmaceutical counterfeits does not diminish. So the challenge is to optimise and build on current capabilities. Authentication technologies offer one way to do this. Optimizing Capabilities means achieving more with available resources, changing processes to increase productivity, finding global partners, engaging technology to optimise programmes, and creating collaboration between developed and developing nations and between the commercial and public sectors. All this will be covered at the 2011 Global Forum on Pharmaceutical Anti-Counterfeiting.

Randall Burgess holds a BA from UNC-Greensboro and is trained as a provisional ISO 28000 Supply Chain Security Management Program auditor. He is experienced in brand protection, product security, and logistical security, with over 20 years of experience in business development, new product commercialisation, and complex project management supporting North American and global companies. He has launched, or assisted with the successful launch of, several printing, electronic, display, and optical technologies used in product security and authentication. He is responsible for secretariat functions of the International Authentication Association (IAA), serves as Assistant Editor of Authentication News, and is Editor of Pharma Anti-Counterfeiting News. Email:



IPI Speaks with Hendrik Kneusels, CEO of Laetus
Observing the Means to a Secured Supply Chain

1. Can we start with a brief history of Laetus? Where did you start from?
Laetus was founded in 1974, and is the pioneer of pharmaceutical packaging security. It all started with the barcode identification system ARGUS, which over the years became the industry standard for packaging material identification. Based on that sound know-how, Laetus developed further solutions to increase security for patients. In the 90s, we developed the first camera system in the market for blister inspection. It was followed by highly sophisticated vision systems to increase and ensure the quality and effectiveness of the products for our customers. Also as a pioneer, Laetus in 2003 showed the first Track & Trace system at ACHEMA, a system which created the basis of the current top-end Laetus Track & Trace system SECURE TTS.

2. What are your core competencies? How do you position yourself as unique within the marketplace?
Laetus stands for top quality with top service. The reliability of the ARGUS control devices is famous, and the local service in over 20 countries around the world underlines our approach of serving customers to their satisfaction. Our core competency is the transfer of efficient expertise in the needs of the pharmaceutical market, and its demands in regard to security, into reliable technical solutions. The uniqueness of Laetus is provided by a combination of easy-to-use solutions, high-end technology, and seamless services.

3. Let us talk about the joint venture between GD and Laetus. How will this be advantageous to the pharmaceutical & medical industry?
When projects are getting into categories where not only the efficiency of the solution is a key factor, but also the

financial stability of a company and the management of higher risks become increasingly important. Besides this, Coesia’s approach to these four key points:
1. Innovation and development
2. Involvement and professional growth
3. Social commitment, and
4. Excellence in economic performance
will support Laetus in extraordinary growth and performance. Furthermore, the cooperation between the independent departments of the corporate companies is already beneficial for Laetus in terms of a huge knowledge base, manufacturing synergies and know-how exchange between the experts.

4. How big is the problem of counterfeiting and brand protection in the pharma and medical devices industry? Do you have any statistical data explaining the amount of revenue lost – financially & socially (the human factor)?
The figures for the financial losses reach several billion USD every year. But this is not the primary key driver to implement Track & Trace solutions. The main intention is to take care of the lives of our customers’ patients. The risk of harm caused by fake drugs cannot be valued in money, so the focus should be on how to save life and wellbeing. Let me express it in a different way: the fact that many governments see the need to react shows the urgency of the threat to patients’ health. For instance, Swiss pharmacists estimate that every year approximately 50,000 packs of illegal drugs are delivered to patients. The WHO estimates the number of people killed by fake drugs at 200,000 every year. And the EU Commissioner, Günter Verheugen, puts fake drugs on the same level as mass murder.

5. What is the uptake of brand protection solutions (overt or covert) by the industry? Do you see pharma companies willingly changing their processes to incorporate such measures?
Many of our customers started the discussion in the late 90s, and implemented such solutions in the early 2000s. In close cooperation with the federal criminal agencies, the combination of overt and covert brand protection solutions is seen as the preferred way. The willingness is there, as shown by the remark of one of our customers: “This is only to make our product more complicated to copy than that of our competitor.” This will drive the development towards a national or even international solution under Track & Trace. The actual challenge is to standardise the systems as much


as possible, as the EFPIA is doing right now.

6. What, in your opinion, should governments do to curb counterfeit medication entering their supply chain? Which regions do you see the biggest problem rising from?
Governments should work in close cooperation with the pharmaceutical industry and its distributors. This would also include the revision of laws controlling re-imported products, which sometimes aim to achieve the lowest price but not the highest security. Another critical factor is the influence of Track & Trace solutions on the production and logistics chains: on the one hand such a solution gives security to the product, but on the other hand it should not impact the efficiency of production, as it is your requirement and mine that we achieve wellbeing at

reasonable prices. There, the task comes back to us as a solution provider: to deliver user-friendly and effective systems.

7. Finally – where do you see Laetus in the next five years? What other innovations do you have in the pipeline?
Our vision “Laetus 2015” describes dedicated targets beyond economic areas. We see our responsibility in providing security for the complete packaging process, from the very beginning to the very end, supplied by a highly committed team of specialists. And as we see the pharmaceutical industry setting trends, I am sure that these will also be adopted by other industries which deliver goods with a direct impact on our health, such as food and cosmetics. According

to our three-year development plan, we will deliver solutions which are in accordance with these demands.

Dirk Hendrik Kneusels
Kneusels started at Laetus as a trainee during his university studies as an engineer in physical technology, to deepen his knowledge directly in the industry. After university he joined Laetus in the engineering of optoelectronic sensors, moved via the project planning department to the sales department, and in 2005 took over overall direction.

Subscription & Ad Index

Subscription Offer / Advertisers Index
Guarantee you receive your copy of IPI: International Pharmaceutical Industry, 4 issues per year. Pharma Publications is delighted to be able to offer its readers a great one-year subscription offer. Subscribe now to receive your 20% discount.
Post: complete the form below, detach and post to: Pharma Publications, Building K Unit 104, Tower Bridge Business Complex, Tower Point, 100 Clements Road, London, SE16 4DG, UK
Tel:

+44 (0) 207 237 5685


Complete and fax this form to +1 480 247 5316

Email your details to
Please tick the relevant boxes below:
UK & N. Ireland £120 inc p&p
Europe €132 inc p&p
USA $200 inc p&p
I enclose a cheque made payable to Pharmapubs
I wish to pay by credit card (Mastercard or Visa)
Card number

Valid from

Expiry date

Security code Name: Job title: Company: Address:


Tel: Fax: Email: Signature

For advertising opportunities, contact us on +44 (0) 207 2375685 or Email


Page 105 3S Simons Security Systems GmbH Page 116 9th Annual SFE 2011 IBC Acurian Inc Page 55 Almac Group Page 21 Amphora Research Systems Page 31 Amsterdam BIO Page 5 Analytical Biochemical Laboratory BV Page 117 Anglo Nordic Biotech conference VIII Page 23 Aon Corporation Page 35 Atoll GmbH Page 67 BARC - Bio Analytical Research Corporation Page 65 B & C Group Page 71 BIOCAIR Page 85 BIOCORP Page 114 BIO Trinity 2011 Page 29 BIOWIN – Health Cluster of Wallonia Page 73 Bioskin GmbH Page 44 & 45 BMG Labtech Ltd. Page 101 BOBST Group Page 52 & 53 Cardio Analytics Ltd Page 69 Centrical Global Ltd. Page 37 Cellgenix gmbH Page 17 CK Clinical Ltd Page 115 DIA Euromeeting – Geneva 2011 Page 89 DIA Conference Page 111 ELPRO – BUCHS AG Page 79 Envirotainer AB Page 59 ERT Page 47 Excard Research GmbH Page 118 FlandersBios’s Annual Life Sciences Convention Page 19 Forresters Page 74 & 75 Geodis Wilson Page 93 Glycotope Biotechnology GmbH Page 95 Health Protection Agency Page 62 & 63 Intana Bioscience GmbH Page 9 IP Pragmatics Page 27 Janssen Pharmaceutica NV Page 103 Korber Medipak GmbH Page 107 - Laetus GmbH Page 39 LI-COR Biosciences UK ltd. Page 33 Ludger Ltd. Page 7 Lufthansa Cargo Page 83 LYOFAL S.A.S. Page 25 Miller Insurance Services Limited Page 90 & 91 Moorfields Pharmaceuticals Page 3 MPI Research Page 51 M-Scan Page 11 Niedersachen Global GmbH – NGlobal Page 81 Patheon Inc. Page 15 Phage Consultants Page 119 Pharma Packaging & Labelling Compliance Conference Page 13 Qualogy Ltd Page 57 Richmond Pharmacology Page 98 & 99 Rovi Contract Manufacturing & Rovi Alcala Page 77 SCA Cool Logistics Page 87 STELMI OBC Swiss World Cargo IFC Tosoh Bioscience Page 109 West Pharmaceutical Services Page 49 Woodley Equipment Company Ltd.

