Quasar 163 - May 2023


QUASAR #163

INFORMING AND ADVANCING OUR MEMBERS

THE ROLE OF QUALITY ASSURANCE IN PHARMACEUTICAL R&D

HOME SUPPLY – A PATIENT-CENTRIC APPROACH TO DIGITALLY MITIGATE DTP RISKS

30 PAGES OF ARTICLES

ROCHE/RQA COLLABORATION: IMPACT OF THE CERTIFICATE IN DATA BASICS eLEARNING IN THE PHARMACEUTICAL INDUSTRY, INCLUDING THE LAUNCH OF THE IMPALA CONSORTIUM

DID THE AUDIT GO SMOOTHLY? FOR SPONSORS, CLINICAL SITES AND AUDITORS

ANALYSIS OF THE EMA DRAFT GUIDANCE ‘GUIDELINE ON COMPUTERISED SYSTEMS AND ELECTRONIC DATA IN CLINICAL TRIALS’

KEEPING YOU POSTED REGULATIONS AND GUIDELINES P38

THE LATEST COURSES P43

www.therqa.com MAY 2023 | ISSN 1463-1768
CHATS WITH CHATGPT

QUASAR #163

ARTICLES

6 Analysis of the EMA Draft Guidance ‘Guideline on Computerised Systems and Electronic Data in Clinical Trials’ Mario E. Corrado, Marianna Esposito, Laura Monico, Anna Piccolboni, Massimo Tomasello

10 Roche/RQA Collaboration: Impact of the Certificate in Data Basics eLearning in the Pharmaceutical Industry, Including the Launch of the IMPALA Consortium Sharon Havenhand, Tim Menard

14 The Role of Quality Assurance in Pharmaceutical R&D Nigel Crossland

SUBMITTING AN ARTICLE

If you would like to submit an article for a future edition of Quasar on a topic you feel would be of interest to our membership, please contact the editor: editor@therqa.com. Visit www.therqa.com for guidelines on article submission.

NEXT EDITION AUGUST 2023


18 Did the Audit go Smoothly? For Sponsors, Clinical Sites and Auditors Wenjing Zhu

22 Home Supply – A Patient-Centric Approach to Digitally Mitigate DTP Risks Fatemeh Jami, Richard Knight, Marcus Wajngarten

30 Chats with ChatGPT Melvyn Rapprecht

Quasar is published four times a year and articles can be submitted at any time. If you are interested, please email editor@therqa.com.

The final copy deadlines are as follows:

Final copy date – Publication
12th June 2023 – August 2023
11th September 2023 – November 2023
11th December 2023 – February 2024
11th March 2024 – May 2024

To connect with the RQA just follow these links:

LinkedIn

https://uk.linkedin.com/company/rqa

Facebook www.facebook.com/researchqualityassociation

Twitter

https://twitter.com/The_RQA

YouTube www.youtube.com/user/researchqualityass

Vimeo

https://vimeo.com/therqa/videos

This publication is a service offered by RQA to its members and RQA cannot and does not guarantee that the information is complete.

RQA shall not be responsible for any errors or omissions contained in this publication and reserves the right to make changes without notice.

The information provided by third parties is provided 'as is' and RQA assumes no responsibility for the content. In no way is RQA liable for any damages whatsoever, and in particular RQA shall not be liable for special, indirect, consequential or incidental damages, or damages for lost profits or loss of revenue, arising out of any information contained in this publication.

REGULARS
4 Welcome
5 Update
38 Keeping You Posted – Regulations and Guidelines: GCP, Animal and Veterinary Products
43 Calendar

FEATURES
36 10 Questions to…
37 How Did You Get into QA?

WELCOME

Dear Members,

When I sat down to write this message for Quasar, my thoughts turned to the recent past… and to what the future of RQA might hold. Whilst much effort in recent years has been focused on adapting how RQA functions to meet the challenges of the COVID pandemic (hopefully, that is now all behind us), we can now look to the future of RQA's offering to members and customers.

At the recent RQA Board meeting, we reflected on the structure of RQA with a particular focus on the Board, Management Committee and Committees and how we collectively drive the future direction and efforts of RQA in serving its members and customers. My conclusion, following this reflection, was that as an association we need to be more coordinated in our approach to the information, services and products we provide to members in an ever-changing regulatory environment. In addition, the scope of RQA and its members’ interests are ever-changing and developing through the emergence of new technology, the ever-expanding breadth of science practised and from greater globalisation of the RQA’s member and customer base.

Despite many recent initiatives such as the Volunteer Programme, the Community Hub and remote training courses, to mention just a few, all of which have been successful in delivering products and services valued by members and customers, the management structure has changed little.

A few years ago, the RQA Board Chair was elected and changed every two years and board members were elected every three years. In addition to these elected Board members, the Chairs of each committee also served as Board members. This structure formed the Board of Directors, who would meet approximately four times a year to fulfil its corporate responsibilities and agree on the direction and strategy for RQA going forward. However, this structure was extremely fragile in that every Board meeting would consist of over 15 members, with a regular change of individuals attending those meetings.

To address this fragility, the Board was reconfigured through a change in the RQA constitution, agreed by the members, to include seven elected RQA members to serve on the Board. From those elected, the Chair of the Board is appointed by the Board members each year.

The Association structure includes a Board of Directors, which oversees the operational performance and drives the direction of RQA, a Management Committee (comprising Board members and the Chairs of each committee) and several focused committees, including GCP, GLP, Animal Health, IT and Pharmacovigilance – bearing in mind that all these individuals provide their input and time voluntarily.

The RQA Office does a magnificent job in providing the mechanisms by which the products and services are delivered to the members and customers.

The current committee structure was introduced in the early days of QAG and provided focused groups to explore the different areas of interest of the membership. These committees formed ‘silos of interest’ and worked well when each RQA member had a single interest focus. However, today that same structure of member interest does not exist; for example, IT crosses all areas of regulatory compliance and is of interest to most members.

All of this begs the question: Is the current operating structure of RQA capable of ensuring that the Association continues to provide the level of information, products and services required by members and customers in the present work and regulatory environment?

In the coming months the Board, working closely with Committee Chairs and Committees, with input and direction from the RQA Office, will rethink and reshape the makeup of the RQA. The objective is to ensure the products and services provided by RQA meet the current and future needs of members and customers and the scope of services continues to adapt to those requirements – doing all this while maintaining influence and input to applicable global standards and regulations.

I will provide updates on the progress of the organisational restructuring in future issues of Quasar.

Kindest regards,

Tim

Tim Stiles
chair@therqa.com

UPDATE

7TH GLOBAL QA CONFERENCE

Along with the Society of Quality Assurance (SQA) and Japan Society of Quality Assurance (JSQA), RQA is a co-organiser of the Global QA Conference (GQAC), held every three years.

The 7th Global QA Conference in March 2023 – hosted by SQA – was held at the Gaylord National Resort and Convention Centre in National Harbour, Maryland. The Convention Centre, encased in a monumental 19-storey glass atrium, stands on the banks of the Potomac River, just a stone’s throw from Washington, D.C. The venue is possibly the largest and most impressive convention centre I’ve ever visited, boasting almost 2000 bedrooms and over 94 meeting spaces – the largest holding 10,000 guests (a meeting space, not a bedroom).

SQA delivers a Quality College – a week of intense learning – twice yearly. The GQAC sat alongside the Quality College, offering SQA members a broad choice of learning and networking opportunities. The conference ran for two and a half days across several topic-driven streams. It was interspersed with meetings of Chapters, Speciality Sections and National Societies, creating a constant buzz of activity from dawn until dusk.

RQA, as usual, was offered the opportunity to exhibit, which we gladly accepted – a chance to connect with QA specialists, mainly from the US, and discover their professional needs. There were around 40 exhibitors, primarily consultants and auditors, plus the five societies – SQA, JSQA, KSQA (Korea), GQMA (Germany), and RQA. The visitors to the RQA stand were mainly interested in learning and membership; we’d taken a handout to briefly explain what RQA offers and it was exceptionally well received. We also had RQA USB cables and mouse mats to give away to enquirers.

The opening plenary featured a warm welcome from the Programme Committee Chairs, the SQA President, representatives from the co-host partners and the keynote speaker, David Lim, a.k.a. Mr Everest after he conquered the world’s highest mountain twice – the second time with a permanent disability in both legs. David’s address was humorous, heart-warming and highly motivational.

SQA made good use of the available networking time to plan events in social settings. We used the opportunities to meet with RQA members, collaborators, sponsors, exhibitors and colleagues, old and new. It was good to see everyone; I had never before experienced such an enormous benefit from networking.

The RQA session, scheduled for 10:30am on Wednesday, focused on Learning and Development Trends with a panel of experts comprising Carl Lummis, Zillery Fortner, Yasser Farooq, Richard Crossland and Miland Nadgouda. The discussion was lively and stimulating, with around 60-70 session attendees and a steady flow of excellent points, questions and responses from all sides.

At international conferences, we always take the opportunity to meet with other national societies and discuss membership-related matters, including any opportunities for collaboration. This year, the meeting room seemed busier than ever, with representatives from SEGCIB (Spain), GQMA (Germany), SARQA (Sweden), SOFAQ (France), KSQA (Korea), ICSQA (India), JSQA (Japan), RQA, and SQA. Each representative introduced themselves and their organisation, explaining what they had done during and after the pandemic. It was clear that some societies had struggled to maintain their support to members – being a purely voluntary organisation cannot have been easy in such challenging times.

JSQA and RQA regularly meet at conferences where both organisations are present. The topics discussed are mainly around collaboration and have, in the past, led to the publication of booklets, for example. In the agreement between JSQA and RQA, there is an annexe relating to the relationship between the JSQA GCP Division and the RQA GCP Committee. We agreed that this document should be reviewed since it has existed for around four years. At the same time, both parties should consider similar annexes for GLP and GPvP. The RQA Community Hub is also to be considered to facilitate those collaborative activities and relationships.

A key part of the closing plenary was a regulator Q&A session chaired by SQA President Cheryl McCarthy. The regulators included:

• Martijn Baeten, GLP Inspector, Sciensano, Belgium

• Chrissy Cochran, Director, US FDA

• Anne Johnson, District Director PHI-DO, DHHS/FDA

• Francisca Liem, Director, GLP Program, US EPA

• Thomas Lucotte, Inspection des Essais et des Vigilances, France

• Michael McGuinness, Head of GLP & Laboratories, MHRA

• Eric Pittman, Program Division Director, US FDA

• Naoyuki Yasuda, Director, MHLW, Japan.

A set of 16 questions had been prepared and the panellists took turns answering them as appropriate. Additionally, the regulators were asked to share interesting experiences, which were largely entertaining.

The closing address was mainly congratulatory; the Programme Committee Chairs, the SQA President and representatives from the co-host partners thanked all involved, including the attendees, for their support. And then, the big announcement… the next Global QA Conference.

While I received the commemorative GQAC flag for the next GQAC, Carl announced that the 8th Global QA Conference in 2026 will be held in London! Who knew?


ANALYSIS OF THE EMA DRAFT GUIDANCE ‘GUIDELINE ON COMPUTERISED SYSTEMS AND ELECTRONIC DATA IN CLINICAL TRIALS’

We are all aware that clinical trials are increasingly turning digital. Gone are the days when, at the words 'source documents', the image of a bundle of scribbled pages popped into our minds. No nostalgia or regrets. Now the data life cycle involves several structured computerised systems of increasing complexity, from local devices to distributed cloud applications.

Massimo Tomasello Anna Piccolboni Laura Monico Marianna Esposito Mario E. Corrado

The COVID-19 pandemic has further accelerated this process and boosted the digitisation of clinical trials, introducing new challenges like remote monitoring and remote inspections. Still, it is not all rosy. The digital environment can be difficult to understand. With paper, the source of the data was usually easy to locate; with digital systems, the concept of 'source data' is much harder to pin down, and compliance with GCP principles like ALCOAC+ can also be challenging.

With perfect timing, in June 2021 EMA opened the public consultation on the draft 'Guideline on computerised systems and electronic data in clinical trials'. The document was released by the GCP Inspectors Working Group and is therefore meant to represent current EMA inspectors' expectations. This is not out of the blue: in recent years the European inspectors have published several Q&As on topics related to computerised systems, meaning that inspectors' attention in this area is high (and inspection findings are probably common).

The document is intended to replace the old 2010 EMA 'Reflection paper on expectations for electronic source data and data transcribed to electronic data collection tools in clinical trials'. While the old reflection paper was only 13 pages long and narrow in scope, the new guideline is an impressive 47-page document, highly detailed and demanding. As written by the inspectors, 'Development of and experience with such systems has progressed. A more up to date guideline is needed'.

The premise has been truly fulfilled, since the updated document now covers current 'hot topics' like electronic Clinical Outcome Assessment (eCOA), electronic Patient Reported Outcome (ePRO), electronic Informed Consent (eIC), cloud systems and Artificial Intelligence (AI). The recipients of the guideline are sponsors, CROs and investigators, but also other parties such as software vendors. An important focus is given to the migration and transfer of data across different systems and to the requirements for audit trail and audit trail review.

After the introduction, scope, and legal and regulatory background, the guideline summarises the principles and key concepts of computerised systems in clinical trials. A very precise definition of 'electronic source data' is given – 'the first obtainable permanent data from an electronic data generation/capture' – and examples of common but incorrect identification of source data are provided. Details of the requirements for computerised systems are given, and a full chapter is dedicated to electronic data (and the audit trail) and the challenges of their management during the whole life cycle. Finally, five annexes provide detailed requirements on topics like contracts, validation, user management, security and specific types of systems. An outline of the guideline structure is provided in Figure 1.

The Italian Group of Quality Assurance in Research (GIQAR), part of the Italian Society of Pharmaceutical Medicine (SIMeF, https://simef.it), has recently established a working group on GCP and computerised systems in clinical trials.

Our team analysed the draft guideline and sent EMA comments, requests for clarification and suggestions on different aspects of the document. From this analysis we identified some topics that will unmistakably pose new challenges to sponsors, CROs and, above all, clinical sites. We summarise them in the following points:

NEW COMPUTERISED SYSTEMS IN SCOPE

Compared to the 2010 EMA reflection paper, additional computerised systems used in clinical trials are included in the scope of the guideline, such as:

• Applications for the use by the trial participants on their own device, ‘bring your own device (BYOD)’

• Tools that automatically capture data related to transit and storage temperatures for IMP or clinical samples

• eTMFs

• Electronic Informed Consents

• Interactive Response Technologies (IRT)

• Portals for supplying information from the sponsor to the sites

• Computerised systems implemented by the sponsor for holding, managing and/or analysing data relevant to the clinical trial, e.g. Clinical Trial Management Systems (CTMS), pharmacovigilance databases, statistical software, document management systems and central monitoring software

• Artificial intelligence (AI) used in clinical trials.

FIGURE 1: OUTLINE OF EMA DRAFT GUIDELINE ON COMPUTERISED SYSTEMS AND ELECTRONIC DATA IN CLINICAL TRIALS

Executive Summary
1. Introduction
2. Scope
3. Legal and Regulatory Background
4. Principles and Definition of Key Concepts
5. Computerised Systems
6. Electronic Data
Glossary and Abbreviations
Annex 1: Contracts
Annex 2: Computerised Systems Validation
Annex 3: User Management
Annex 4: Security
Annex 5: Requirements Related to Specific Types of Systems, Processes and Data

EXTENT AND RESPONSIBILITIES FOR COMPUTERISED SYSTEM VALIDATION

The guideline clearly requires that computerised systems used in clinical trials and in scope of the guideline are validated throughout their entire life cycle. The extent of validation required for each system in scope is not totally clear, and specific instructions are not provided. The guideline then clarifies that the investigator is ultimately responsible for the validation of the computerised systems implemented by the investigator's institution, while the sponsor is responsible for all the remaining systems used in each clinical trial. However, while sponsors and CROs are generally accustomed to the concepts and techniques of validation and have specific procedures and resources in place, this requirement may be challenging for clinical sites. Indeed, investigators/institutions will need dedicated personnel and/or consultants to validate their systems, to plan periodic reviews and to implement change control processes. Finally, the guideline requires that, in case of regulatory inspections, the validation documentation for all the systems in scope is made available to the inspectors on request in a timely manner, irrespective of whether it is provided by the responsible party, a CRO or the vendors of the systems.

ELECTRONIC DATA TRANSMISSION AND eSOURCE DATA IDENTIFICATION

Details of the transmission of electronic data should be described together with a dedicated diagram, including information on their transfer, format, origin and destination, the parties accessing them, the timing of the transfer and any actions that might be applied to the data, for example, validation, reconciliation, verification and review. This also applies when data are captured by an electronic device and are temporarily stored in the device local memory before being uploaded to a central server; this data transfer should be validated and, only once the data are permanently stored in the server, are they considered source data.

Certain source data might be recorded directly into the eCRF, and this also holds for electronic tools directly collecting patient data: eCOAs or ePROs such as electronic diaries, wearables, laboratory equipment, ECGs, etc. Those data should be accompanied by metadata related to the device used (e.g. device version, device identifiers, firmware version, last calibration, data originator, UTC time stamp of events).
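For illustration only (the guideline does not prescribe any particular format), a device-captured eCOA/ePRO reading and the accompanying metadata listed above could be represented as a simple structured record. All field names and values below are hypothetical assumptions, not taken from the guideline.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DeviceCapturedReading:
    """Hypothetical record pairing a captured value with the device
    metadata the draft guideline expects to accompany it."""
    subject_id: str        # pseudonymised trial participant identifier
    value: float           # the captured measurement or response
    device_model: str      # device version/model
    device_serial: str     # device identifier
    firmware_version: str
    last_calibration: str  # date of last calibration (ISO 8601)
    data_originator: str   # who or what generated the data point
    captured_at_utc: str   # UTC time stamp of the event

reading = DeviceCapturedReading(
    subject_id="SUBJ-0042",
    value=72.5,
    device_model="WearableX v2",
    device_serial="SN-123456",
    firmware_version="1.4.2",
    last_calibration="2023-01-15",
    data_originator="device",
    captured_at_utc=datetime.now(timezone.utc).isoformat(),
)

# Serialise the record, metadata included, for transfer to the central server.
print(json.dumps(asdict(reading), indent=2))
```

Keeping the metadata attached to each data point in this way is one simple means of preserving it through the subsequent, validated transfer to the server described above.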

All these electronically captured source data must be precisely identified in the study protocol. The guideline clearly states that any data generated/captured and transferred to the sponsor or CRO that are not stated in the protocol or study-related documents will be considered GCP non-compliant. ePRO data should not be kept on servers under the exclusive control of the sponsor until the end of the study; they must be made available to the investigator in a timely manner, since he/she is responsible for the oversight of safety and of the compliance of trial participants' data.

CONTROL OF DATA AND MANAGEMENT OF DYNAMIC DATA

The sponsor should never have exclusive control of data entered in a computerised system. The investigator should be able to download a certified copy of the data at any time. Moreover, after a database is decommissioned, the investigator should receive a certified copy of the data entered at the site, including metadata (i.e. audit trail), and the provided file should capture all the dynamic aspects of the original file. This means that static formats of dynamic data (e.g. PDF copies containing fixed/frozen data which allow no interaction) will not be considered adequate. Also, before the investigator's read-only access is revoked, he/she should be able to review the received certified copy against the original database to assess its exact correspondence. However, the guideline remains quite vague on the expectations of this review and on where and how its performance should be documented.
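The guideline does not prescribe how this correspondence review should be performed or documented. As one deliberately simple illustration (our assumption, not a stated requirement), file checksums can at least demonstrate that the copy received is bit-for-bit identical to the export it was generated from; a genuine review of dynamic content would of course need to go further than this. The file names below are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 checksum of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names: the certified copy received by the site and the
# export regenerated from the original database for comparison.
received = Path("site_certified_copy.xml")
regenerated = Path("sponsor_export.xml")

match = sha256_of(received) == sha256_of(regenerated)
print(f"Certified copy matches the original export: {match}")
```

Recording the two checksums and the comparison result in the site file would be one way of documenting that the check took place.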

Finally, the integrity of data must be preserved through its life cycle together with its dynamic features; after decommissioning of the database, the possibility of restoration to a full functional status must be ensured, including dynamic features, for example, for inspection purposes. The long-term retention of data in a fully functional status appears technically and economically challenging and hardly feasible in consideration of the retention time (up to 25 years) required by the Regulation (EU) No. 536/2014 on clinical trials on medicinal products for human use.

AUDIT TRAIL AND AUDIT TRAIL REVIEW

As noted earlier, the guideline strongly highlights the importance of audit trails. The extension of the ALCOAC principles to ALCOAC+, with the addition of the 'traceable' requirement, is clear proof of where the focus lies, and the guideline explicitly requires that all changes be documented in the metadata.

The guideline provides detailed requirements on audit trail content; it should include all information on changes in local memory, changes by queries and edit check results, extractions for internal reporting and statistical analysis, and access logs. Even the exceptional case in which a system administrator is forced to deactivate the audit trail should itself be recorded in the audit trail.

The guideline then moves on to the requirements for audit trail review, specifying that 'the entire audit trail should be available as an exported dynamic data file in order to allow for identification of systematic patterns or concerns in data across trial participants, sites etc...'. This audit trail analysis should focus on:

• Missing data

• Data manipulation

• Abnormal data

• Outliers

• Implausible dates and times

• Incorrect data processing

• Unauthorised access

• Malfunctions

• Direct data capture not performed as planned.

Therefore, the questions that arise are: do we have the resources to review the audit trail to the depth required? Is the end-user appropriately trained and qualified for this type of analysis? Will this be achievable, from a technical point of view, in a user-friendly way?
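To make the scale of such a review concrete, the sketch below shows what one partly automated pass over an exported audit trail might look like. The column names, the study start date and the change-count threshold are illustrative assumptions on our part; nothing of the sort is mandated by the guideline.

```python
import pandas as pd

# Assumed export format: one row per audit trail entry with columns
# 'subject_id', 'field', 'old_value', 'new_value', 'user', 'timestamp_utc'
# (timestamps assumed to be recorded in UTC without offsets).
trail = pd.read_csv("audit_trail_export.csv", parse_dates=["timestamp_utc"])

findings = {}

# Missing data: entries where a value was cleared rather than corrected.
findings["cleared_values"] = trail[
    trail["new_value"].isna() & trail["old_value"].notna()
]

# Implausible dates and times: changes recorded before the study started
# or in the future (study start is an assumed parameter).
study_start = pd.Timestamp("2022-01-01")
now = pd.Timestamp.now()
findings["implausible_times"] = trail[
    (trail["timestamp_utc"] < study_start) | (trail["timestamp_utc"] > now)
]

# Possible systematic patterns: users making an unusually high number of
# changes to the same field across participants (threshold is arbitrary).
changes = trail.groupby(["user", "field"]).size()
findings["frequent_editors"] = changes[changes > 100]

for name, subset in findings.items():
    print(f"{name}: {len(subset)} entries flagged for manual review")
```

Checks of this kind can only support, not replace, the documented review the guideline expects from appropriately trained staff.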


TRAINING

The guideline reinforces the staff qualification needs foreseen by ICH E6 (R2), explicitly requiring training on applicable legislation and guidelines for all those involved in developing, building and managing trial-specific computerised systems, such as those employed at a contract organisation providing the eCRF, IRT or ePRO, trial-specific configuration and management of the system during clinical trial conduct.

The main concern is whether technical providers are always up to date with, and aware of, all the applicable legislation and guidelines. Indeed, such vendors often provide systems for different industrial sectors and are not limited to the pharma industry. Therefore, while they are technically skilled and should be well aware of Software Development Life Cycle requirements, more specific and documented knowledge of GCP and specific clinical requirements may be needed.

Regarding the competence of investigators and site staff, the guideline states that 'all training should be documented, and the records retained in the appropriate part of the Investigator Site File/sponsor TMF'. It is clearly indicated that investigators should receive training on how to navigate the audit trail of their own data in order to review changes, and that such training needs to be documented; however, the guideline does not clarify whether non-study-specific training (on systems used in multiple studies, such as training on the CTMS, PV database, etc.) and training on IT security and serious breach management should also be retained in the ISF/TMF.

SECURITY

After widening the computerised systems in scope and involving new stakeholders, the guideline indirectly introduces new requirements for the parties involved, such as clinical sites. For instance, the availability of a controlled SOP for defining and documenting security incidents, rating their criticality and implementing CAPAs is required. Our question here is: are clinical sites equipped with a sufficiently well-organised quality system to support these activities? The list of required security measures includes:

• Anti-viral software

• Task manager monitoring

• Regular penetration testing

• Intrusion attempts detection system

• Effective system for detecting any unusual user activity (e.g. excessive file downloads, copying or moving of files, or back-end data changes).

As stated, while sponsors and CROs are quite used to working in a deeply regulated environment, clinical sites and computerised system vendors might need additional resources, in terms of employees and/or consultants, to fully satisfy the above requirements.
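As a purely illustrative example of the last bullet above, a very simple threshold rule over an access log could flag excessive file downloads for follow-up. The log format, column names and threshold are assumptions made for the sketch; real monitoring tools would be considerably more sophisticated.

```python
import csv
from collections import Counter
from datetime import datetime

DOWNLOAD_THRESHOLD_PER_DAY = 50  # arbitrary illustrative limit

# Assumed log format: one row per event with columns 'user', 'action' and
# 'timestamp' (ISO 8601); the action 'download' marks a file download.
downloads_per_user_day = Counter()
with open("access_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["action"] == "download":
            day = datetime.fromisoformat(row["timestamp"]).date()
            downloads_per_user_day[(row["user"], day)] += 1

for (user, day), count in sorted(downloads_per_user_day.items()):
    if count > DOWNLOAD_THRESHOLD_PER_DAY:
        print(f"Unusual activity: {user} downloaded {count} files on {day}")
```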

CONCLUSION

The new guideline provides direction to sponsors, CROs, investigators and other parties involved in the design, conduct and reporting of clinical trials on the management of computerised systems and clinical data. It does not really introduce new concepts, but it finally clarifies inspectors' expectations in several compliance areas. It provides a fresh and modern view on new and emerging technologies (e.g. wearables, AI, cloud) and establishes solid ground to support and enforce provider and site compliance.

Nevertheless, through the analysis performed, we can conclude that not all the requirements are clear and that some of them are technically very challenging (e.g. retention of data preserving their dynamic state). A great deal of work will be needed to achieve full compliance when the guideline comes into force, especially for trials already ongoing. Additionally, some requirements could be really complex for clinical sites and will require a totally new approach.

Electronic systems and data are here to stay, and it is important that the compliance requirements are clear and feasible. GIQAR, as stated, has requested several clarifications on the document, and we assume that many more questions and suggestions have been received from stakeholders in the pharma industry and perhaps from investigators. We hope that EMA will take them into account and respond with a clear and readily applicable final guidance.

Article first printed in the Journal of the Society of Pharmaceutical Medicine.

PROFILES

Mario has 23 years of experience in clinical research. During his career he has worked as a Clinical Monitor, Project Leader, Quality Assurance professional and Auditor. Mario leads the Quality Assurance Unit of the CROs Alliance Group. For seven years he has been one of the coordinators of the GCP working group of the Italian Group of Quality Assurance in Research (GIQAR).

Marianna has over 22 years of professional experience in the pharmaceutical environment, with extensive experience in GCP/GCLP/GVP/CSV compliance and Quality Assurance (QA). She possesses wide knowledge of the ICH guidelines, European pharmacovigilance regulations, GCP, GCLP (WHO), 21 CFR Part 11, EU Annex 11, the ALCOA+ Data Integrity Principles, GAMP 5 and the main FDA Guidances for industry. Since 2000 Marianna has been working for PQE, where she has developed wide expertise in clinical and PV auditing, QMS implementation and computer systems validation projects. She is currently the GCP Compliance Operation Manager at PQE Group.


Laura has a Master's Degree in Pharmaceutical Chemistry and Technology and had her first professional experience at Fidia Farmaceutici S.p.A., where she held the role of Corporate R&D Quality & Compliance Coordinator and gained experience in GCP and GVP compliance, especially in vendor management and the validation of computerised systems used in clinical studies and in pharmacovigilance processes.

Anna has a degree in pharmacy and more than 30 years' experience in pharma companies, covering the role of Head of QA for GLP, GCP and GVP. She has also worked in project management and scientific information during her career. Anna is currently a freelance GxP auditor, a coordinator of the Italian Group of Quality Assurance in Research (GIQAR) and Vice-president of the Italian Society of Pharmaceutical Medicine (SIMeF).

Massimo is a GLP/GCP Senior Specialist and Auditor at Chiesi Farmaceutici. He has 19 years' experience in research quality assurance across the GLP and GCP areas. Massimo started working as a QA Auditor in a toxicology test facility in 2004 and in 2016 joined Chiesi, where he is responsible for preclinical quality assurance activities and is involved in the implementation and maintenance of the GCP quality system in R&D projects.


ROCHE/RQA COLLABORATION: IMPACT OF THE CERTIFICATE IN DATA BASICS eLEARNING IN THE PHARMACEUTICAL INDUSTRY, INCLUDING THE LAUNCH OF THE IMPALA CONSORTIUM

Tim Menard Sharon Havenhand

On 21st April 2021, Roche and the Research Quality Association launched a new industry collaboration to bring RQA members the Certificate in Data Basics eLearning course, which is part of Roche's Data Analytics University.

IMPACT

The initial end-of-course survey feedback provided by quality professionals indicated a positive response to the eLearning:

• 97% said they will prioritise implementing the course concepts into their work

• 88% said that the eLearning will help them improve their work outcomes

• Overall, 96% of RQA members would recommend this course to their colleagues.

Building on this, we contacted RQA members who had completed the course to hear what impact the training has had on their work and the changes they have made as a result.

The course was designed to equip quality professionals with:

• The basic analytical skills and knowledge they need to interpret and use data effectively

• An understanding of the challenges of data processing and analysis

• The ability to describe different statistical methods used to interpret and analyse data

• Practical experience of applying statistical methods to a simplified data set

• Critical judgement when it comes to data, statistics and visualisations.

The self-paced learning course, which takes approximately three hours to complete, includes a final self-assessment knowledge check which, when successfully passed, enables you to collect three CPD points. Since its initial launch in 2021, a total of 284 RQA members have enrolled to take the course, with 152 completing the training to date. Based on the demographic characteristics of current RQA members who would be likely to take the training, just under half (44%) have either enrolled on or taken the course so far.

With that in mind, Roche and the RQA began assessing what impact the Certificate in Data Basics eLearning has had in the pharmaceutical industry since its launch and how we can broaden our reach with this eLearning to other quality professionals across industry.

Taking this course has helped me with supporting my clients in preparing for inspections, especially with reference to the statistics part of inspections as it has given me a better understanding of the terminology and principles of data handling.

Helen Powell from HYP Pharma Ltd, Pharmacovigilance and GCP QA Consultancy

The course provided me with a good revision of basic statistics and is a helpful reference point for me when auditing clinical lab data.

Gail Todd, Associate Director LabsQA, AstraZeneca

I applied what I learned towards the assessment of R and its use in data analytics at Flatiron. I was able to partner with the Quantitative Sciences (QS) team to develop and implement a risk-based framework. The course provided me with the basics which I was able to use in discussions with the QS team to further my knowledge about data analytics.

Manan Patel, Manager, Integrated Quality Management, Flatiron

Other feedback said that the course helped 'give non-data scientists an understanding of how data analytics works' and that respondents 'really enjoyed putting the insights learnt into practice'.

Carl Lummis, Events, Marketing, Publications and Online Learning Manager at RQA, summarises: "The eLearning course has helped bring a stronger, more diverse perspective to Data Basics for RQA members. This initiative has given all members involved a new opportunity for growth in this area."

The insights derived from the data points collected indicate that the eLearning course is being used by a mixed group of people for varying activities, that most quality professionals were making or had made changes as a result of taking the course, and that it was making a positive impact for those who had. In conclusion, the data support the view that the course concepts are indeed making an impact in industry. Roche aims to be an industry leader in this space and, with the support of the RQA, has made great inroads towards achieving this.

BROADENING OUR REACH

To build on the great response to and uptake of the course by RQA members, Roche is now looking to reach a wider, global audience of quality professionals across industry, outside of the RQA. As a result, Roche and the RQA have collaborated to release the first three parts of the Certificate in Data Basics eLearning free to non-members*. The courses, which are available to access on the RQA website, are:

• Introduction: Welcome to the Course

• Chapter 1 – Demystifying Analytics – Part 1

• Chapter 1 – Demystifying Analytics – Part 2

These short, self-directed courses give quality professionals a basic introduction to data analytics and provide a good platform to build upon. We hope to inspire and spark an interest among non-members so that they continue learning and benefit from training as RQA members have. In turn, Roche will achieve its objective of broadening its reach to other quality professionals across industry.


LAUNCH OF IMPALA CONSORTIUM

Continuing the theme of data analytics, Tim Menard shares his insights into the newly launched Inter-Company Quality Analytics industry group (IMPALA) and what it means for RQA members.

WHAT IS THE INTER-COMPANY QUALITY ANALYTICS INDUSTRY GROUP (IMPALA)?

The Inter coMPany quALity Analytics (IMPALA) consortium was initially established in July 2019 as an informal group of biopharmaceutical organisations with the common goal to share knowledge and better understand opportunities in applying advanced analytics for QA.

IMPALA's mission is to transform the biopharmaceutical Clinical Quality Assurance process in the Good Clinical Practice and Good Pharmacovigilance Practice areas by using advanced analytics and promoting the adoption of this new approach and its associated methodologies by key industry stakeholders (e.g. pharma quality professionals, health authorities) to assure safe use, thereby building patient trust, accelerating approvals and ultimately benefiting patients globally.

The IMPALA consortium will provide the strategic focus for working across the biopharmaceutical ecosystem to develop and gain industry-wide consensus for the adoption of improved QA using advanced analytics and best practices to be used across the industry.

To achieve this, several Work Product Teams have been established and are currently working on focused projects to advance the IMPALA consortium's objectives. Current members include Astellas, Bayer, Biogen, Boehringer Ingelheim, Bristol Myers Squibb, Johnson & Johnson, Merck KGaA, Merck & Co., Novartis, Pfizer, Roche and Sanofi.

WHAT WOULD RQA MEMBERS WANT TO KNOW ABOUT IT?

IMPALA is committed to developing open source products (i.e. the co-development of open source tools: analytics packages, templates, methodologies, etc.), meaning that once these work products are released, they are accessible beyond IMPALA members. Our current work products are auditing schedule, data integrity, anomaly detection in audit trail and quality briefs; more information on these can be found on our website.

HOW CAN RQA MEMBERS GET INVOLVED?**

Some RQA members may remember that IMPALA ran an interactive session, 'How can we leverage data analytics for clinical quality and accelerate drug development?', at the RQA International QA Conference back on 10th November 2022. The IMPALA panel engaged with the audience to define, discuss and gather business requirements for data analytics use cases in clinical quality. RQA members can continue to get involved in the discussions, as IMPALA plans to attend the next RQA conference in November 2023 for another interactive session. More information to follow.

On 26th June 2023, as part of the DIA Global Annual Meeting in Boston, MA, IMPALA will present:

1. The vision of IMPALA and examples of IMPALA’s quality analytics products.

2. Ongoing feedback from health authorities on the quality analytics methods.

3. Guidance on how sponsors and regulators can join IMPALA.

RQA members are welcome to attend (a fee applies) and more information can be found on the DIA website: www.diaglobal.org

WHERE CAN RQA MEMBERS FIND OUT MORE INFORMATION?

To keep up to date on the latest news, RQA members can follow IMPALA’s LinkedIn page: www.linkedin.com/company/intercompany-quality-analytics. They can also head over to IMPALA’s website, which has a wealth of information on work products, publications and upcoming events: https://impala-consortium.org/

PROFILES

Sharon is currently a Principal Quality Solutions Lead at Roche, responsible for leading and delivering the Data Analytics Learning Strategy for her function. She has been working in the pharmaceutical industry since 2006, holding a variety of positions in Quality Assurance from Lead International Clinical Auditor to Quality Assurance Specialist, gaining extensive experience across the GxP disciplines. Sharon holds a Diploma in Learning and Development and is an ILM Accredited Advanced Professional Trainer.

Tim is currently Head of Quality Data Science at Roche. He started his industry career in 2009 in PV, holding different roles (operational and strategic) that took him to live in various parts of the world. Tim joined Roche as an auditor and dived into the 'magic world of analytics' sometime in 2017. He has been leading the PDQ Data Science Team at Roche since 2018, and his team creates and implements data-driven solutions that help understand, detect early or predict clinical quality issues.

* In order to take the remainder of the eLearning course, non-members will need to sign up to become RQA members.
** Please note that at this time, only biopharma organisations are eligible for membership of IMPALA. Biopharma sponsors can apply to join via the IMPALA website.

THE ROLE OF QUALITY ASSURANCE IN PHARMACEUTICAL R&D

It would be unreasonable to expect patients to agree to take drugs if there were no proof of their effectiveness or relative safety. The idea of taking drugs merely on unsubstantiated trust belongs to the past and is not acceptable in the modern age of scientific evaluation.


As pharmaceutical products are manufactured, licensed, sold, prescribed and used worldwide, it is essential that evidence of quality, efficacy and safety is established and that supporting documentation is made available to the respective national regulatory authorities for approval and licensing purposes. Achieving this successfully requires consistent and disciplined systems of project management, involving research scientists, manufacturing chemists, data managers and analysts, monitoring personnel, report writers, clinicians and volunteer subjects. The facilities hosting these projects typically include preclinical as well as clinical laboratories, manufacturing plant, data processing and office accommodation.

Pharmaceuticals (referred to as products throughout this article) encompass drugs (e.g. the birth control pill), medicines (e.g. anti-cancer treatments) and devices, including engineered technologies and biosynthetics (e.g. pacemakers and wound-healing dressings). The starting point for most products is either an idea based on further development of and modifications to the chemical structures of existing compounds, the copying of a naturally occurring material noted to have pharmacological potential, or simply the fortunes of serendipity. For medical devices it is more a case of developing the mechanical or physiological capabilities of the product. Initial testing, which may include in vitro, microbial or biochemical models, may lead, after many months or years of effort, to a product that can then be tested in a clinical setting involving human volunteers.

The results of these studies will hopefully lead to a candidate product that, after extensive further evaluation, can be presented to the licensing authorities. However, along this tortuous journey, the product will be very carefully monitored for clinical efficacy as well as for any indications of untoward effects. Formulation development, consistency of manufacture and whether the product can be developed economically are also determined and estimated.

When dealing with generic products, as opposed to those still protected under the terms of a licence, it must be appreciated that the manufacturers are still required to submit their version (copy) to detailed testing in order to show that the generic version possesses similar pharmacological activity, i.e. that there is bioequivalence. Such clinical trials involve healthy volunteers receiving treatments with both the original and generic versions of the product, with blood samples taken and chemically analysed for the presence of the active product, i.e. pharmacokinetic analysis.
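As background (not drawn from the original article), bioequivalence is conventionally concluded when the 90% confidence interval for the test/reference ratio of geometric mean pharmacokinetic parameters such as AUC and Cmax lies within 80–125%. The sketch below shows that calculation for a deliberately simplified paired design, ignoring the period and sequence effects a real crossover analysis would model; the AUC values are invented purely for illustration.

```python
import math
from statistics import mean, stdev
from scipy import stats

# Invented AUC values for eight volunteers who received both products.
auc_test = [102.0, 95.5, 110.2, 98.7, 105.1, 99.3, 108.8, 101.4]  # generic
auc_ref  = [100.0, 97.2, 107.9, 101.5, 103.0, 98.1, 111.2, 99.8]  # original

# Work on the log scale: the mean difference of logs is the log of the
# geometric mean ratio.
diffs = [math.log(t) - math.log(r) for t, r in zip(auc_test, auc_ref)]
n = len(diffs)
se = stdev(diffs) / math.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)  # for a two-sided 90% confidence interval

gmr = math.exp(mean(diffs))
lower = math.exp(mean(diffs) - t_crit * se)
upper = math.exp(mean(diffs) + t_crit * se)

print(f"Geometric mean ratio: {gmr:.3f}")
print(f"90% CI: [{lower:.3f}, {upper:.3f}]")
print("Within 80-125% acceptance range:", 0.80 <= lower and upper <= 1.25)
```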


In order to navigate this labyrinthine process and achieve success, systems are developed and put in place to standardise and document work procedures: projects are defined, those in charge are identified, routine procedures are documented as Standard Operating Procedures (SOPs), equipment records are maintained, reagents and consumables are recorded, job responsibilities are defined and staff training/competency records are established, with proper maintenance of all study records, including retention and archiving, as a final step. Such rigorous performance of work procedures, and the resulting collection of records, is collectively referred to as a Quality Assurance (QA) system, and this article attempts to provide the reader with an informative summary of the subject.

QA covers all aspects of the project, rather than the in-process series of checks performed by quality control (QC) processes. The management of QA is undertaken by an entirely separate group of professionals who can provide directional advice as well as extensive and thorough audit scrutiny. QA comprises the systems and processes that are in place to ensure that the work is performed, and the data generated, in compliance with Good Practice, including the use of SOPs, data reporting, staff training and, in the case of clinical trials, procedures for ethical conduct.

With regard to the need to confirm whether standards are being maintained (compliance), QA audits are conducted by specialists who have extensive training and experience in evaluating research processes. QA audits of clinical research look at compliance with GCP standards, thus obtaining assurance that the data and reported results are credible and accurate, and that the rights, integrity and confidentiality of trial subjects are protected. Audits are authorised by senior management, following formal request procedures, and their findings are reported through confidential systems.

It is notable that QA auditors focus attention on the questions rather than starting with the answers, and this helps in dealing with the myriad of issues needing detailed scrutiny. In this way, confirmation is sought of whether particular pieces of work, including equipment and facilities, have been or are of a satisfactory standard. Audits can look at compliance with expected and approved practice at each and every stage of a given research project. Both project-specific elements, for example clinical study records and test results, and non-project-specific elements, such as staff qualifications, training records and facilities, can be examined and evaluated.

Audit reports are prepared and supplied to the testing organisation's senior management, and it is expected that any deficiencies identified will be corrected following review and acceptance by the facility.

Just as an audit cannot examine everything, it should also be noted that it is not a repeat of in-process QC procedures. Instead it will examine how QC was performed, by whom and how this was recorded. A typical audit, rather than looking for compliance, will instead concentrate on non-compliance, in other words: are there any problems? An experienced auditor should quickly see whether, for instance, work procedures were not standardised, the study protocol (plan) was not being followed, or whether there was incompetence, facility deficiencies, inadequate record keeping or a failure to adhere to the principles of ethics.

It was in preclinical research and safety testing that deficiencies first came to the attention of regulators (especially the FDA in the USA), where it was thought that companies were not doing enough to safeguard people from the effects of drugs such as thalidomide (the USA largely avoided the problem as a licence for the drug was presciently withheld by the licensing authority). Thalidomide was originally licensed for over-the-counter (OTC) sale in Germany in July 1956. It took five years for the connection to be made between the drug taken by pregnant women and the severe deformities observed at birth in their offspring (William McBride, The Lancet, 1961), forcing governments and medical authorities to review their pharmaceutical licensing policies. As a result, changes were made to the way drugs were marketed, tested and approved in the UK and across the world. One key change was that pharmaceutical products intended for human use could no longer be approved purely on the basis of animal testing, and trials for substances marketed to pregnant women also had to provide evidence that they were safe for use in pregnancy.

During the clinical research phases of testing and evaluation, the candidate product is administered to volunteers; these may be non-therapeutic trials in healthy subjects, while therapeutic trials involve patients. In Phase I trials, i.e. first time in humans, special care is taken to deal with the possibility of potentially dangerous untoward events; this includes medically qualified staff and arrangements for emergency admission to an associated or nearby ICU. To ensure ethical standards are maintained, signed consent procedures are conducted at the start and all records are handled with strict confidentiality. Trial conduct is monitored by trained QC personnel who are independent of the clinical team. Care is taken to ensure volunteers are identified only by a coded identification number so that no volunteer's identification details are released to the sponsoring pharmaceutical company. Applicable statutory regulations include The Medicines for Human Use (Clinical Trials) Regulations 2004 and amendments, Regulation (EU) No. 536/2014 on clinical trials on medicinal products for human use, and the Declaration of Helsinki – please see the Medicines and Healthcare products Regulatory Agency (MHRA) website for details.

Quality concerning the manufacture, formulation, packaging and labelling of pharmaceutical products is described within the GMP regulations. Label design, as well as the textual content of the patient information leaflet, is under the tight control of the regulatory function within the manufacturer's offices.

Before clinical trials are initiated, approval must be sought from the relevant authority, which in the UK is the MHRA. In addition, prior approval must also be sought from an independent Ethics Committee, constituted of medical professionals, legal experts and lay members of the public, whose sole responsibility is to ensure that the rights and safety of human trial subjects are protected. This is achieved using ethical standards based on the World Medical Association's Declaration of Helsinki (1989, 3rd revision). The starting point is a study-specific approved consent form which, before the trial starts, the volunteer is asked (without coercion) to sign in the presence of a physician, to show that they understand the nature of the study and their own personal responsibilities while in the trial.

Details of each clinical research study should be described and defined within a written study protocol, which essentially provides answers to 'what, why, when, how, where and who'. This lists who is in charge, the study's objectives, the identity of the experimental product, its formulation details, a description of the methodology and where the study will be performed.


QA conduct within preclinical laboratories is customarily defined by the standards of GLP. During the clinical phases, laboratories fulfilling analytical duties as part of a clinical study (e.g. clinical chemistry, cytology and haematology) – often hospital laboratories, if not independent facilities – customarily have their fitness for purpose confirmed before study initiation. This commonly comprises the review and filing of the laboratory accreditation certificate for inclusion in the study file. It should be noted, however, that independent QA audits using MHRA-approved standards of GCLP provide more meaningful support for the quality of laboratories, especially in various other regions of the world. GCLP is similar to GLP but with the additional requirements of patient confidentiality, anonymity and enhanced data integrity, which are essential prerequisites under the rules of GCP.

Both clinical and technical procedures undertaken as part of a study should preferably be written up as Standard Operating Procedures (SOPs) if they are routine; if not, details of the procedures must be described as part of the study protocol or as study-specific records. In all cases such methodologies should be version controlled, dated and bear an approval signature; any modifications of or deviations from the routine procedures should also be fully documented and signed off by the senior scientist in charge. Any departure from protocol methodology must also be explained and fully documented as part of the study records.

Data management, record maintenance and final storage are integral components of any enterprise involving the research and development of pharmaceuticals. These may be paper or electronic records, including statistical data sheets, or biosamples and slides. The purpose of having such records is to ensure there is sufficient detailed information available post-study to understand and even reconstruct how the study was undertaken if, at some future point, it is necessary to investigate a questionable result, as well as to provide documented evidence that the study complied with all good practice requirements.

The comprehensiveness of such records is illustrated by analytical laboratories, where original result printouts are collected along with staff lists, SOPs, and equipment details and calibration records, including those for internal and experimental quality control standardisation reagents.

The safety of products, especially new ones, is taken very seriously by the industry, and the monitoring of untoward events occurring while patients receive treatment (pharmacovigilance) is a legal requirement for all pharmaceutical companies; all new reports are examined by medically qualified personnel. During the clinical research stages, the occurrence of any such events is considered carefully and will strongly influence the furtherance of human exposure to a new drug undergoing development.

During a GCP audit of a clinical trial there should be an account of every trial subject, including detailed examination of all 'dropouts'. Under such levels of scrutiny, it is believed that any significant issues such as gross errors, fraud or clinical wrongdoing will inevitably be found. However, far more common occurrences of non-compliance include contravention of standardised procedures or of planned events detailed within the study protocol, inadequate training, documentation failings and facility limitations.

My own audit experience has produced the following noteworthy and significant findings:

• Unworkable emergency procedures and equipment in a phase 1 clinical laboratory

• Falsification of Case Record Forms (CRFs) in an attempt to hide the earlier improper inclusion of a particular patient who violated the trial's exclusion criteria

• Design and usage of paediatric informed consent forms that prevented accurate recording of parent/guardian names

• Unbeknown to the sponsoring pharmaceutical company, the responsible therapeutic study physician, contrary to the trial’s inclusion criteria, was treating an additional patient with a reclaimed trial product

• Totally improper standards for the design, maintenance and use of a first time in human (Phase 1) test product formulation facility

• An analytical laboratory that was found to be totally unsatisfactory despite possessing a dossier of accreditation certificates

• Trial monitoring in a cancer study that improperly relied on nurse-supplied summary records rather than any checks being conducted against source clinical records.

Sometimes it is too easy to forget how fundamental many of the questions are that should be asked when assessing product research and development:

• Were the test formulations actually consumed by the volunteers and is there written proof of this?

• Was the study ethical?

• Are the source records believable, complete, dated and signed?

• If records are electronic, was there adherence to these same standards?

• Did the trial monitor actually carry out full and acceptable data verification?

• Were all staff trained and approved for the work they performed?

• Are there accountability records showing proof of receipt, storage and usage of experimental products, including comparators and placebos?

• Are the data and reports believable?

Without QA audits and objective assessments, how do we know that the quality achieved was good enough?

Based on an article written by Nigel Crossland and originally published in the Biomedical Scientist (the journal of the Institute of Biomedical Science).

PROFILE

Nigel has international experience developing and evaluating R&D working practices during a career spent in pharmaceutical companies, laboratories and hospitals.


DID THE AUDIT GO SMOOTHLY? FOR SPONSORS, CLINICAL SITES AND AUDITORS

In China, the number of registered clinical studies has increased significantly in recent years, from 939 in 2015 to 3318 in 2022 (data source: www.chinadrugtrials.org.cn). The increase mainly followed the issue of China Food and Drug Administration (CFDA) announcement #117 of 2015 ('the announcement' for short); the CFDA has since become the National Medical Products Administration (NMPA). The announcement itself arose from the boom in new drug development in China.

Wenjing Zhu

One thing was confirmed with the issue of the announcement: more sponsors and medical institutions were expected to have Investigator Site Audits (ISAs) during the whole clinical study period, and auditing was becoming an essential process in their Quality Management System (QMS). Consequently, the number of auditors in pharmaceutical companies and Clinical Research Organisations (CROs) (referred to as 'sponsors' in this article), as well as auditing consultants and consulting companies, increased, springing up like mushrooms.

Sponsors with mature systems and experienced medical institutions/clinical sites generally supported the audit requirement, since it is a process that helps improve study quality. However, in this booming market there were also sponsors and medical institutions that were new to auditing and had little knowledge of the auditing process, which gave rise to all kinds of not-so-reasonable requirements for the auditor. Moreover, many new auditors entered the market to satisfy demand but had no idea how to meet those various requirements. This article therefore describes some of the problems seen in current ISA activities in China, along with their potential causes. I hope it helps to answer some questions or resolve some queries from sponsors, medical institutions and auditors who have met similar situations, and sheds light on conducting ISAs.

Irrespective of whether the auditor was from the sponsor's own staff or from a consulting company, there were circumstances where a significant amount of Source Document Review (SDR) and/or Source Data Verification (SDV) was required during the audit. In some cases the requirement covered more than 50% of the study (for example, a requirement to review more than four subjects when only six patients were enrolled in the study), and the requirement came from both the sponsor and the medical institution.


It was understandable that the more source documents/data reviewed, the more findings were likely to be detected, which could perhaps make whoever raised the requirement feel safer. However, was it necessary to perform such a high-percentage review to see a finding? Would it be a waste of human and financial resources for the sponsor to undertake such a comprehensive review? For a clinical study, the site may or may not perform a 100% Quality Control (QC) check of all documents and data. However, the monitor would usually conduct a 100% review of the source documents and data (the percentage is still high for sites in China even when it is not 100%, according to the monitoring plan), which is essential to the study's primary and secondary endpoints and to study quality, including the data entered in the eCRF (electronic Case Report Form).

So, when the auditor had to perform a high percentage of SDR and/or SDV, the audit was more likely to turn into another QC check than a sample-based audit. A sample-based audit was more like an investigation: a prepared sample selection could cover the most critical documents and data that needed to be considered, and by reviewing that prepared, selected sample, any existing or potential finding was likely to be detected. Combining the severity and frequency of a finding, the required action became apparent; the auditor and sponsor could judge the finding classification (minor, major or critical) and, from the number of minor, major and critical findings detected during the audit, the sponsor could form a judgement on the study quality at the audited site.
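To make the idea of a prepared, risk-based sample concrete, here is a minimal illustrative sketch in Python; the subject attributes, weights and sample size are invented for the example and do not represent any sponsor's actual selection procedure.

```python
import random

# Illustrative only: subject-level attributes an auditor might weigh when
# preparing an SDR/SDV sample (all names and weights are invented).
subjects = [
    {"id": "S001", "sae_reported": True,  "protocol_deviations": 2, "dropped_out": False},
    {"id": "S002", "sae_reported": False, "protocol_deviations": 0, "dropped_out": True},
    {"id": "S003", "sae_reported": False, "protocol_deviations": 5, "dropped_out": False},
    {"id": "S004", "sae_reported": False, "protocol_deviations": 0, "dropped_out": False},
    {"id": "S005", "sae_reported": True,  "protocol_deviations": 1, "dropped_out": True},
    {"id": "S006", "sae_reported": False, "protocol_deviations": 0, "dropped_out": False},
]

def risk_score(subject: dict) -> int:
    """Simple additive risk score: SAEs, deviations and dropouts raise priority."""
    return (3 * subject["sae_reported"]
            + subject["protocol_deviations"]
            + 2 * subject["dropped_out"])

def select_sample(subjects: list, n: int, seed: int = 1) -> list:
    """Take the highest-risk subjects first, then top up at random so that
    routine, 'unremarkable' records are also represented."""
    ranked = sorted(subjects, key=risk_score, reverse=True)
    high_risk = ranked[: n // 2]
    remainder = ranked[n // 2:]
    random.Random(seed).shuffle(remainder)
    return high_risk + remainder[: n - len(high_risk)]

for s in select_sample(subjects, n=3):
    print(s["id"], "risk score:", risk_score(s))
```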

The function of a supplementary document and data review is to provide additional cases of a particular finding rather than to detect new finding types, for example more cases of eCRF data entry inaccuracy or of source documentation problems from the same sub-investigator. In rare circumstances a new serious finding was detected, such as the site having randomised ineligible subjects; the classification of the finding was still unlikely to be critical when it occurred in one subject out of several reviewed, and this usually did not change the sponsor's overall assessment of the study quality at the site. When serious findings resulted from a system deficiency, they were typically detected when the auditor performed an SDR for the first or second subject. When critical and/or major findings are detected, it usually means a high recurrence of the same finding; the corrective action should not only rectify what was reported in the audit report but also include a comprehensive check of other documents/data of the same type that were not reviewed during the audit, with the defects arising from that additional check corrected. Thus, if the corrective action was effective, all defects, whether listed in the audit report or not, should have been corrected, and the site staff could determine the study quality and take improvement actions during the corrective action stage. A high-percentage document and data review during an audit can therefore be interpreted as the auditor doing part of the study team's work, finding more cases of one finding that should have been identified by the study team during the Corrective and Preventive Action (CAPA) stage; it brings no benefit to the study team's capability or sense of responsibility for managing quality.

Take a daily life example: imagine someone wanted to check whether all the chicken eggs sold in Store A were tasty, whether small, medium or large. To check, they might choose two or three of each size; they would not buy every egg in the store. If all the eggs chosen were not as tasty as expected, they could conclude that the eggs sold in Store A were not tasty. But if only a tiny percentage were not tasty, they would know the eggs sold in Store A were generally tasty. If any egg tasted really bad but the others in the batch were fine, they might purchase another batch to confirm. They could trust their conclusion without testing all the eggs because the ones they chose represented the rest. Of course, with enough money and resources, buying and tasting every egg is still possible, but then there would be no eggs left (a situation outside the scope of this article). Nevertheless, I hope this example explains why a prepared sample selection, rather than a high-percentage review, is sufficient in an ISA.
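The egg example can also be put into rough numbers. If, purely for illustration, a systemic problem affects a proportion p of all records, the chance that a random sample of n records contains at least one affected record is 1 - (1 - p)^n; the figures below are invented.

```python
def detection_probability(p: float, n: int) -> float:
    """Probability that at least one of n randomly sampled records
    shows a problem that affects a proportion p of all records."""
    return 1 - (1 - p) ** n

# Illustrative figures only: a systemic issue touching 30% of records is very
# likely to surface even in a small sample, whereas an isolated 2% issue is not.
for p in (0.02, 0.30):
    for n in (3, 5, 10):
        print(f"p={p:.0%}, n={n}: detection chance {detection_probability(p, n):.0%}")
```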

The second problem was minimal cooperation from the sponsor. To conduct an ISA, the auditor has to have full knowledge of the study, including all requirements for the clinical site (and for monitoring activity, when applicable) defined in the study protocol, pharmacy manual, laboratory manual and any other relevant manuals or procedural documents. They also need a general idea of the site staff and the status of the study at the site, gained through review of site documents including the delegation log, monitoring reports, etc. To prepare and conduct the audit, the auditor also needs access to the systems used in the clinical study, such as the electronic Trial Master File (eTMF) and the Electronic Data Capture (EDC) system, and of course sufficient time for preparation.

Once they had all this information, the auditor would have a clearer picture of what was to be reviewed, who was to be interviewed, what needed special attention, which queries should be resolved during the audit and an indication of the samples to select for review. The auditor then did not need to spend time collecting information and making judgements during the audit itself, leaving more time to review what was critical to study quality at the site, to dig out more information to better understand the study procedures and to get at the actual finding instead of the superficial issue. Without that prior information at the preparation stage, the auditor had to use valuable time during the audit, which increased the risk that the sample selected was not representative, or that insufficient time was left to review the selected samples thoroughly or to dig out further information relating to a finding. For example, one auditor found that, for the subjects they reviewed, the Investigational Medicinal Product (IMP) quantity in the subject medical records was inconsistent with the actual amount reported for some subject visits. With enough time, the auditor could check how the accountability log was recorded:

• Was the actual returned quantity consistent with the dispensing and dosing quantity?

• How was the subject IMP return procedure performed?

That would make clear whether it was a recording problem or an IMP management problem; a simple reconciliation of the accountability log, sketched below, illustrates the kind of check involved. If, however, the auditor had to spend too much time on work that could have been completed during audit preparation, there might be no time left to dig out the necessary information, leaving the audit finding unclear.
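As a minimal sketch of that reconciliation (the field names and figures are hypothetical, not taken from any real study):

```python
from dataclasses import dataclass

@dataclass
class AccountabilityEntry:
    """One subject's IMP accountability figures (hypothetical fields)."""
    subject_id: str
    dispensed: int   # units handed out per the accountability log
    dosed: int       # units recorded as taken in the medical record / diary
    returned: int    # units physically returned and counted at the site

def reconcile(entry: AccountabilityEntry) -> str:
    """Flag a discrepancy when dispensed units are not fully explained by
    dosing plus returns. A mismatch may be a recording error or a genuine
    IMP management problem - the audit interview decides which."""
    unaccounted = entry.dispensed - entry.dosed - entry.returned
    if unaccounted == 0:
        return f"{entry.subject_id}: consistent"
    return f"{entry.subject_id}: {unaccounted:+d} units unaccounted for - query"

entries = [
    AccountabilityEntry("S001", dispensed=30, dosed=28, returned=2),
    AccountabilityEntry("S002", dispensed=30, dosed=30, returned=2),  # over-accounted
]
for e in entries:
    print(reconcile(e))
```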

It’s like someone going shopping; some people like to prepare a shopping list before going to the supermarket and others just go without a list.


People who shop with a list can find everything on it. People who shop without a list, on the other hand, have to work out what to buy as they wander around the supermarket; there is a high chance they will forget something they need and only realise when they arrive home: 'Oh no, I forgot to buy eggs'. Minimal cooperation from the sponsor is like shopping without a list. For example, there were circumstances where the sponsor did not fully support the audit: not providing the necessary study or site documents/information to the auditor, not providing necessary access (e.g. EDC access) even after the audit had begun, or providing the information immediately before the audit, leaving no time for any preview.

It wasn’t easy to understand why the sponsor did not support the audit since they were paying for it (whether the auditor was the sponsor or consulting company) and were responsible for the study quality. Moreover, the auditor was independent of the study team and could discover the potential or existing problems not detected by the study team from a third party viewpoint. But if the sponsor did not support it, wouldn’t conducting the audit simply waste money and resources as the benefit of the audit was minimal?

In my experience, there were audits where the monitor or the Project Manager (PM) of the audited study was greatly concerned about the finding classification, because the number of major or critical findings in a study was included in their Key Performance Indicators (KPIs), regardless of whether it was the site's problem or the monitor's problem. Knowing this, I understood that it might not be the sponsor who was failing to support the audit but a member of the study team; it only appeared that the sponsor was unsupportive. More support would likely lead to more findings being detected, which might indirectly affect workload and finances.

For this kind of circumstance, the sponsor must consider whether they want the truth about the study quality or just peace of mind. Minimal cooperation from clinical sites was also quite common. Site staff interviews are an essential part of an ISA. By interviewing site staff, the auditor can obtain valuable information about the study procedures at the site, determine whether each staff member is familiar with their responsibilities in the study, verify whether the interviewee's account is consistent with what was recorded, and discuss questions and queries with site staff to resolve them and confirm existing findings. However, in some ISAs the PI and the site staff interviewed were unwilling to answer questions and eager to end the interview (and not because of a busy work schedule); sometimes they did not respond when the auditor tried to clarify queries or summarise the audit findings. When the auditor did not receive a response to a raised question or query, it was likely that the question or query turned into a finding.

For example, Subject A signed two sets of Informed Consent Forms (ICFs), and the signatures were so different that the auditor suspected the two ICFs were unlikely to have been signed by the same person. Had this been discussed between the auditor and site staff, it might have been clarified whether the two ICFs were really signed by different people or whether the same person simply had two different signatures, and supporting documents could have been provided. Without that discussion, it turned into a finding. There were also situations where the sponsor or clinical site did not want the auditor to record a finding in the audit report. The audit report reflects the audit activity and shows the sponsor the status of the study at the site without the sponsor being on-site; by reviewing it, the sponsor and clinical site can see what problems exist or need improvement. Without an accurate record of the findings, there may be a debate between the auditor and sponsor when the regulatory authority detects the same issues at a later date, the sponsor and/or clinical site may forget to resolve the findings, or the problems may be resolved in an unconventional (non-GCP) way, all of which create a higher risk to study quality at the site.

One possible reason was that the site or sponsor was worried that the audit findings would be reviewed by regulatory authorities. This scenario is unusual, because ICH E6 (R2) 5.19.3 clearly states: 'To preserve the independence and value of the audit function, the regulatory authority(ies) should not routinely request the audit reports. Regulatory authority(ies) may seek access to an audit report on a case-by-case basis when evidence of a serious GCP non-compliance exists, or in the course of legal proceedings'. Reviewing the audit report when serious GCP non-compliance exists is a way of verifying whether the sponsor fulfilled its responsibilities during the study. If reviewing audit reports during regulatory inspections became routine, rather than reserved for cases of serious GCP non-compliance, no sponsor or clinical site would dare to conduct an audit, since the worst case would have to be presented. Moreover, the QMS would be incomplete without audits, contradicting the GCP requirement. So the sponsor or site should not worry that the audit report could be reviewed during an inspection.

Also, recording the finding in the audit report is a sure way to remind the sponsor and/or site to resolve the problems. When an inspector notes a resolved critical finding, the audit report, together with the CAPA, proves that the sponsor/site noticed the problem and took corrective and preventive actions to avoid recurrence at other sites and/or in future studies, which gives a positive impression to the inspector.

There were other strange requirements from sponsors or hospital institutions; whatever the situation, it challenged the auditor to complete the audit. It was therefore important for the auditor to know the reason (or potential reason) for those abnormal situations: perhaps the sponsor was not experienced in auditing procedures, the sponsor or the investigator was not confident about the study, minor findings were being taken too seriously, or there was some concern that had not been expressed. Once the reason was known or declared, the auditor could determine which regulations, guidance and criteria to refer to, and which words or actions might eliminate or reduce the sponsor's or investigator's concerns.

Where there wasn’t any improvement after the auditor had tried, what else should the auditor do? Personally, I would just get on with the audit. But, importantly, always inform the sponsor about the abnormal situation, potential impact and what the auditor had done. It could protect the auditor, the sponsor and the clinical site.

PROFILE

Wenjing has worked in clinical QA for a CRO since 2012 and is now working at Zigzag Associates Ltd. She has seven years' experience of clinical study auditing, covering phase I, phase II and phase III studies. Wenjing has worked with both global and local Chinese pharmaceutical companies and has conducted investigator site audits in hospitals of various sizes across China.


HOME SUPPLY –A PATIENT-CENTRIC APPROACH TO DIGITALLY MITIGATE DTP RISKS

Decentralised Clinical Trials (DCTs) became the default position during the COVID-19 pandemic, when adoption was fast and furious, ensuring patients could access the treatments they had been promised without having to complete in-person visits. This led regulators to release flexibilities allowing remote or virtual settings (home health nurses, wearables, eConsenting, etc.).

Fatemeh Jami, Richard Knight, Marcus Wajngarten

Prior to this, there was an absence of any regulatory endorsement for conducting clinical trial tasks away from the site. Furthermore, sponsors and clinical trial practitioners were often sceptical about taking up the alternative approach without clear guidance from regulators. The fast adoption of DCTs comes with risk: the take-up of services from entrepreneurial vendors with DCT solutions, who may have limited knowledge of GCP application and data privacy laws, has grown exponentially in a short period of time. This is a new area for the industry, unlike the established oversight relationships between sponsors and CROs, for example. 2021 saw a continuing rise in the adoption of DCT elements, as feedback and data increased to support the value DCTs bring to patients and to diversity in trials. Regulators released information to help steer us in a compliant way without compromising patient safety and data integrity. We have now had time to pause and reflect, and as an industry we have been granted an opportunity to look within our clinical trial world and ask whether DCTs are actually a viable route for delivering clinical trials.

This view has certainly evolved since 2021, with the steady release and updating of regulatory viewpoints by a broad range of regulators, most recently at the multi-stakeholder meeting organised by Accelerating Clinical Trials in the EU (ACT EU), attended by experts from across the research community sharing their knowledge, experience and perspectives.

This resulted in the release of a recommendation paper (https://health.ec.europa.eu/system/files/2022-12/mp_decentralised-elements_clinical-trials_rec_en.pdf) with great insights into the key DCT elements; see Figure 1. Sponsors, academics and healthcare practitioners had already come together to support the need to learn more about DCTs prior to the COVID pandemic. An example of this effort was the proof-of-concept study known as RADIAL, with the 'aim to reshape clinical trial design, conduct and operations, by developing and piloting standards, recommendations and tools for the definition and operationalisation of Decentralised Clinical Trials (DCTs) in Europe' (https://trialsathome.com/).

Some DCT Definitions

A multitude of elements which reduce, or in some cases even eliminate, the need for the trial participants to go to the clinical trial sites.

Danish Medicines Agency (DKMA)

Optimising digital health technologies and enabling the voice of the patient to accelerate medicinal product development, speed the delivery of therapy to patients and create efficiencies across clinical research processes.

The Association of Clinical Research Organizations (ACRO)

DCT – A TRANSFORMATIVE MISSION

Direct-to-Patient (DtP) drug supply is now well established as a key capability to drive visit avoidance in DCTs, beyond what data collection alone can achieve (eCOA, sensors and devices, etc.). The reason we want to drive data collection out of clinical sites is not only the allure of longitudinal data series and novel endpoints; it is the realisation that patients will refrain from enrolling in clinical trials because of the financial and logistical burden of travelling to and from sites. With the evolution of digital tools, the HCP can oversee scientific and safety goals just as well when the patient is at home.

Assessing the risk and suitability of a study or programme design for this is key. In essence, everything we do in internal and external digitalisation, extending the organisational capability of tools and remodelling regulatory frameworks, amounts to this single fact: the need for physical movement of people and paper to execute clinical trials is going to shrink to an absolute bare minimum.

Most digital initiatives are conducted to optimise a process, a process often owned by one department or organisation, and the motivating force is efficiencies in that particular process. It is not always evident that these initiatives together are directed by a policy shift and are done in unison to drive outcomes of a bigger transformation. More often than not, initiatives are siloed to solve one problem.

The prevailing view is that it comes down to development and use of ‘capabilities’ and that each of them should ‘stand on their own two feet’ in a benefits business case.

FIGURE 1. REGULATORY CONSIDERATIONS (GCP) LOOKING AHEAD

ICH E8 Revision / Quality by Design Thinking
• Framework for designing quality in clinical trials, stakeholder engagement, trial design, proportionate trial management, critical to quality factors (CtQs)

ICH E6 (R2) Renovation Leading to ICH E6 (R3)
• Diverse trial types and data/quality and source
• Centralised testing facilities
• Flexibility in use of digital solutions
• Clarity in roles and responsibilities

EMA/HMA (released V1.0, dated 13.12.2022)
• Recommendation paper for the implementation of decentralised elements in clinical trials

ICH E6 Revision (expected in 2023)
• Annex 2, titled 'Additional considerations for non-traditional interventional clinical trials', as part of the modernisation of the global GCP guideline ICH GCP E6 (R2)

FDA (expected in 2023)
• TransCelerate have gathered input for the amended COVID guidance

DKMA (expected in 2023)
• An uplift of the recently released guidance to include the thinking behind the COVID flexibilities

Swissmedic (released V2.0, 15 December 2022)
• An update to version 2.0 with reference to the EMA/HMA paper

In some cases, the risk profile of a specific capability should be assessed in the vacuum of a classically executed trial.

What is also needed is the endorsement of a transformation that will drive organisational change, where we strive to design organisations to uphold processes rather than the other way around. From internal process shifts and modelling we can assume, and even propose, a few organisational impacts. The borders between study design and operational considerations become blurred, and we cannot just throw a study design concept over the fence to be executed. Management of devices and sensors must bring clinical development organisations and supply chain together in a new partnership, not only for logistics but also for release, labelling and so on. On both the device and logistics side, GMP and GCP have been kept separate; these new services and processes require assessments, quality agreements and releases to be done hand in hand.

The AstraZeneca Home Supply service is a DCT enabler, not a logistics solution for any clinical trial. Home Supply does not save money; like any DtP strategy, it digs holes in the distribution budget. In an organisation that lacks a transformative policy change, the shift from COVID-19 mitigation to a DCT service is not self-evident, as scale will be neither viable nor desirable.

DCT IN THE HOME SUPPLY CONTEXT

We measure the extent of DCT success in a study design through the 'visit avoidance indicator'. This is more important than meets the eye: it tells us that success is not about the number of digital capabilities used but about the end effect they achieve for the patient, for the site and for the science.

We derive this index by comparing our design with its site-based, classically designed mirror image in terms of scientific goals, operational execution and cost (monetary, patient burden and carbon footprint).

This is also the key differentiator, as evaluations of DCT trials and service providers often look past this pivotal aspect (Everest Group, Decentralized Clinical Trial Platforms PEAK Matrix Assessment 2022, November 2022). DtP is the key enabler in driving this index, to the point where in some disease areas the study design can make or break the realisation of the trial.
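The indicator is not defined here as a formula, but one plausible reading is simply the proportion of the classical design's in-person visits that the DCT design removes; a minimal sketch under that assumption (the visit counts are hypothetical):

```python
def visit_avoidance_index(classical_site_visits: int, dct_site_visits: int) -> float:
    """Proportion of the classical design's in-person visits that the DCT
    design avoids (assumed definition, for illustration only)."""
    if classical_site_visits == 0:
        raise ValueError("classical design must have at least one site visit")
    avoided = classical_site_visits - dct_site_visits
    return avoided / classical_site_visits

# Hypothetical study: 12 in-person visits in the classical design,
# 3 remaining in the decentralised design with DtP supply and telehealth.
print(f"Visit avoidance: {visit_avoidance_index(12, 3):.0%}")  # 75%
```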

A DIFFERENT PERSPECTIVE ON DtP

Home Supply is an AstraZeneca DtP business service and capability. Country regulators and regulatory departments often focus on the 'what'; we ask both to refocus on the 'how'. What we are doing is upholding GCP, GDP and GMP. The key differentiator between Home Supply and a standard DtP service is its ability to electronically uphold the chain of custody, the management of privacy, investigator oversight and site management; the capability demonstrates patient and site centricity, complies with the regulations and can surpass a site-based drug management paradigm.

HOW IT WORKS

The fundamental key is extending the capabilities of the Interactive Response Technology (IRT) solution. These GxP-validated solutions, highly configurable and secure, are the beating heart of the trial, enrolling and randomising patients while also managing inventory and forecasting supply and demand. Sites are used to working with these solutions; accounts are tightly controlled and single sign-on (SSO) is often provided through secure measures, as unblinding risks are ever present. This makes IRT solutions well suited to managing patient addresses securely, in a controlled and encrypted way. As the solution already allocates drugs and generates orders, albeit to depots, creating a shipment order to the patient instead, with the allocated drug and the provided address, is the natural next step.
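As a purely illustrative sketch of that flow (not the actual IRT integration or AstraZeneca's implementation), the patient's name and address can travel only as an encrypted payload inside the shipment order sent to the pick-and-pack provider. The field names are invented and the example uses the third-party Python cryptography library.

```python
import json
from cryptography.fernet import Fernet  # requires the third-party 'cryptography' package

# Illustrative only - in practice key management, transport security and the
# order schema are dictated by the IRT vendor and the distribution partner.
key = Fernet.generate_key()
cipher = Fernet(key)

def build_shipment_order(kit_ids, patient_name, patient_address):
    """Create a shipment order in which the personal data travels only as an
    encrypted payload; pick-and-pack staff never see it in clear text."""
    personal = json.dumps({"name": patient_name, "address": patient_address})
    return {
        "order_type": "DTP_SHIPMENT",
        "kit_ids": kit_ids,                       # allocated by the IRT
        "recipient_payload": cipher.encrypt(personal.encode()).decode(),
    }

order = build_shipment_order(["KIT-0042"], "Jane Doe", "1 Example Street, Example Town")
print(order["recipient_payload"][:40], "...")     # opaque to the depot

# Only the courier system holding the key can recover the address
# in order to print the air waybill.
print(json.loads(cipher.decrypt(order["recipient_payload"].encode())))
```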

Sites are also used to managing temperature monitoring, reviewing received drug orders and performing other oversight tasks in IRT solutions. Extending these tasks to the shipments made to patients is familiar and lowers both training needs and anxiety.

Any audited and contracted GMP pick-and-pack organisation with a courier service and a fax machine can perform DtP. But managing an encrypted pick-and-pack shipment order via an Application Programming Interface (API), or an automated pick-up order with a patient name and address, in a secure and service-oriented fashion takes another kind of organisation. A new level of encryption services, solution environments and integration layers has made it possible to provide a closed loop, with status feedback, delivery timestamps and other key data continuously fed back into, and shared throughout, the ecosystem.

If IRT is the fulcrum point around which Home Supply pivots, UNIFY is the framework around which it evolves. See Figure 2.

FIGURE 2. SCREENSHOTS FROM THE UNIFY APPLICATION

UNIFY is a platform consisting of a patient-facing mobile app for iOS and Android, a web app called the HCP portal, and an integrated back end called Boost. Together, the platform delivers a wealth of features to support decentralised trial conduct, such as eCOA, telehealth, patient engagement, biosampling, SaMD and device connectivity with near real-time data streaming. Through bi-directional integration with IRT and distribution, shipment and kit statuses can be shared in near real time. Tested for usability and task completion, patients can use the mobile camera and scanning feature to receive and verify that everything allocated by the investigator has arrived and in what condition, information that the site will then review. When any damaged or missing items are confirmed, a reorder can be requested and a return arranged using the app. We manage both serialised and unserialised items, and can even display kits previously picked up at the site so they can be reviewed and returned, supporting true hybrid scenarios.

The kits and their condition confirmation are sent back to the IRT; the investigator is made aware of the received shipment and, together with the shipment temperature data, exercises oversight by reviewing the reported status and then, with a click of the mouse, approving the drug kits as ready for use. The patient is immediately made aware of this through out-of-app notifications, together with a scheduled telehealth visit for instructions on dosage, safety and use. The integrated ecosystem is infinitely flexible, capable of overlaying automated temperature monitoring services and/or other device data in the future, perhaps related to compliance or dosing.
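One way to picture this chain of custody is as a simple state machine. The status names and transitions below are an invented simplification for illustration, not the UNIFY or IRT data model.

```python
from enum import Enum, auto

class KitStatus(Enum):
    ALLOCATED = auto()           # assigned to the patient by the IRT
    SHIPPED = auto()             # pick-and-pack order fulfilled, in transit
    RECEIVED = auto()            # patient scanned and confirmed the kit
    DAMAGED_OR_MISSING = auto()  # patient reported a problem; reorder/return
    APPROVED_FOR_USE = auto()    # investigator reviewed condition and temperature data
    DOSED = auto()

# Allowed transitions - each step is recorded so site staff and the CRA can
# review the full chain of custody from allocation to dosing.
TRANSITIONS = {
    KitStatus.ALLOCATED: {KitStatus.SHIPPED},
    KitStatus.SHIPPED: {KitStatus.RECEIVED, KitStatus.DAMAGED_OR_MISSING},
    KitStatus.RECEIVED: {KitStatus.APPROVED_FOR_USE, KitStatus.DAMAGED_OR_MISSING},
    KitStatus.APPROVED_FOR_USE: {KitStatus.DOSED},
    KitStatus.DAMAGED_OR_MISSING: set(),   # handled via the reorder / return flow
    KitStatus.DOSED: set(),
}

def advance(current: KitStatus, new: KitStatus) -> KitStatus:
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition {current.name} -> {new.name}")
    return new

status = KitStatus.ALLOCATED
for step in (KitStatus.SHIPPED, KitStatus.RECEIVED, KitStatus.APPROVED_FOR_USE, KitStatus.DOSED):
    status = advance(status, step)
    print("Kit status:", status.name)
```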

Both site staff and the CRA have a single go-to solution for reviewing the chain of custody, the drug management flow and kit statuses from start to finish, with reports and audit trails available online or extractable to the ISF or TMF.

ASSESSING THE SCIENCE GIVES OPERATIONAL SUCCESS

Decentralised science is not a silver bullet; it is not the solution to every scientific question you want to generate data for. As such, there is already a subset of clinical science questions for which we never even start assessing Home Supply usage.

The benefits case for DCT should also be internalised and accepted when applied to the right scientific problem. The key to success is that the premise of the benefits is already well established within the organisation and the assessment focuses on suitability. The assessment can follow the pattern of categories below.

DISEASE AREA

Perhaps one of the earliest data points known and assessed is the disease area, which brings with it many traditional views and expectations about other parameters such as population, distribution, scientific approaches and methods. Mirroring past DCT efforts, DtP has traditionally been implemented at scale, with the greatest benefits, in cardiovascular diseases such as heart failure and in respiratory diseases such as COPD and asthma. DCTs in this space often use devices that need a handheld platform (cell phone or tablet computer), making Home Supply adoption easy.

REGULATORY RISK ASSESSMENTS

More often than not, regulatory support is aligned to a protocol and focused on submission milestones. This does not allow for proper, structured interaction with regulatory bodies. Again, we hear from these bodies that they are open to dialogue and joint assessments of new and novel approaches, but this has not been internalised across pharma in general. Unclear paths into official assessment mechanisms outside submission make this uncharted territory, with a few notable exceptions such as the PMDA. Finding similar studies with similar parameters for disease area, patient population, formulation/dosing and safety profile, where a positive ruling at submission has previously been documented, gives additional data on feasibility and a low assessed risk for a certain trial in a certain country.

IMPLEMENTATION

An appropriate, fit-for-purpose regulatory assessment can provide a sound Go or No-Go decision for the study, with insights into risks preventing adoption. It can also identify risks on a mitigation spectrum, specific risks at country level and sometimes even at site level. The assessment should speak to risks that your technology and process can mitigate, and to key statements in the study brief or questions raised in any pre-advisory consultations with the regulatory bodies.

FLEXIBILITY AND HYBRID

As mentioned before, the benefits case should by this stage be established and the risks viewed from an implementation and conduct perspective. An additional consideration is the flexibility needed, in relation to your capabilities, and what level of flexibility is advisable.

So what level of flexibility is advisable? Home Supply can segment a study by country, site, visit, patient, product and so on, while the distribution model can be set up to cater for infinite permutations of hybrid structures.

But this is far from desirable: it will generate uncertainty for site staff, waste for the supply chain, documentation burden and quality issues for the CRA, and questions of equivalency from the authorities. Sometimes a little knowledge is a dangerous thing, and care should be taken to expose only the level of flexibility needed to achieve the goals and benefits outlined. Each additional step of flexibility brings risks that are not always easy to identify beforehand.

CRA AND MONITORING RESOURCES

Home Supply is a niche service, a new process possibly being executed for the first time by study teams, supply chain and local CRAs alike. It is therefore important that senior management is aligned on the additional resource that might be needed. An additional layer of complexity is often already accounted for in the DCT context itself; assuming even more burden on staff might make you appear humble and risk aware, but it also sets a bad example of the service being expensive and resource intensive, which is not reflective of the truth and would inhibit scaling. A joint view of maintaining and upholding a DCT is what should be discussed in any resourcing setting.

THE SCOPE OF HOME SUPPLY

DtP and Home Supply are, in essence, the extended arm of the study nurse or pharmacist. It is therefore not for the Home Supply service to ensure that all aspects of patient oversight are covered. Compliance is one of those operational aspects with a natural connection to the study drug, yet it is a challenge for holistic DCT execution rather than for supply itself. Compliance and dosing can utilise many of the same technologies, implemented on their own premise and within their own risk spectrums according to protocol needs. Though returns and accountability quite correctly fall under the study drug umbrella, they do not fall under GMP. Unless a complaint is being handled, alternatives to physically transporting the containers back to the site or depot should be considered for accountability. Local disposal, with verification of that disposal by camera or scanning, is an alternative actively being pursued. In cases where a physical reverse supply chain is mandated, Home Supply can employ accountability modules in IRT for the depot to use. For sites, the standard log-based approach is simple enough to reproduce and adhere to, especially in a hybrid/mixed-mode setting.


FIGURE 3. OVERVIEW OF RISKS ASSOCIATED WITH DIRECT SHIPMENT OF IMP TO PATIENTS

DATA PRIVACY NEEDS
The management of the patient's name and home address is a key component of the service design. This information is locked away in the IRT, encrypted as the payload of an encrypted API, and is not even visible to the recipient of the pick-and-pack order until the shipment reaches the courier, who prints the air waybill. Although this is a far more secure way of managing this information than run-of-the-mill DtP, where PDFs, emails and faxes carry it, circulated from sheets and records around the site, some governments prefer a written assessment of GDPR compliance and the like, covering the country of origin, where the electronic order is going, whether it was processed on its way and which network it passed through.

FORMULATION AND ADMINISTRATION
Home Supply currently covers ambient-temperature, self-administered solid formulations. Formulations that are fluids and/or cannot be self-administered can of course be part of a DCT, but a home nursing service is then most often employed. There can be scenarios where it is beneficial to distribute the study drug to the home nursing service, but that is not considered DtP, nor is it in scope for Home Supply.

SAFETY PROFILE OF STUDY DRUG
An unknown safety profile often requires closely monitored lab results. Even though solutions with local labs are sometimes employed in decentralised trials, there are several drawbacks and complexities, and it is not evident that the patient burden will be very different in those cases. A known safety profile is a good sign at assessment, and is often associated with late-phase clinical trials. A known safety profile also speaks to the general risk appetite of the regulatory bodies and to the suitability of remote patient oversight by the investigator.

LOCALLY SOURCED IMP/NIMP
The need for a locally sourced study drug brings more complications than might be evident, and our assessment looks at what kind of drug is locally sourced, how the drug enters the supply chain and how it is exposed to differing country requirements. For example, a locally sourced drug will also need to get to the patient, preferably together with the centrally sourced drug and preferably with the same oversight mechanism, using IRT. The assessment will also need to take into consideration whether the drug is considered IMP or non-IMP.

SERIALISATION
A locally sourced drug may or may not need a new label. There could also be a need to ship devices and ancillaries to the patient with the study drug. Careful consideration needs to be given to the correct management of damaged or missing items, as well as complaints, when kits, devices and other items cannot be uniquely identified.

OPERATIONAL MODELS
Outsourcing is not necessarily an obstacle; however, appropriate contractual requirements must be put in place, ensuring appropriate delegation of authority, accountability and oversight. This may create complexity and an additional burden for the sponsor and site to manage, especially when it comes to the distribution and courier partnership.

OTHER CONSIDERATIONS OF NOTE FOR ASSESSMENT
Patient populations are another consideration; a lot of work has been done on this theme, all showing that there are cultural differences, age differences, differences in digital literacy and adoption, and differences in socio-economic development. When it comes to recruitment, the nature of the disease, market strategy and drug profile might drive the need to deploy and recruit in certain countries. Understanding these very specific country goals is fundamental for our assessment and the correct design of a Home Supply setup.

Factors such as population density patterns, country geography and infrastructure, how governments have chosen to organise their healthcare sector, pharmacy licensing models and even divisions of autonomy and devolution of government play key roles in how well Home Supply is suited to a trial. This assessment category often requires tight collaboration with local marketing organisations.

TEMPERATURE MONITORING

AstraZeneca is digitising the temperature monitoring process for all study drug shipments and distribution. This effort will also extend to the last-mile shipments employed by Home Supply. The temperature monitoring solution will encompass automation of excursion and deviation management and will be integrated into both the Home Supply process and its technology. Our distribution and courier partners will need to be given accounts and trained, and specific loggers will need to be ordered and supplied to them globally, in line with expected shipment volumes. Above and beyond having a single process and solution for all temperature monitoring needs, which simplifies matters for supply chain, sites and CRAs, the generated data is of immense value. Pooled in the same databases and analysed with a joint set of tools, it allows new possibilities for risk assessment and performance metrics. To automatically derive allowed-excursion adjudication for DtP shipments, a multi-faceted benefits case covering compliance, safety and study drug waste management will be required. Being part of the temperature solution and ecosystem also gives us the opportunity to evolve, as cellular-based infinity loggers will lower handling costs and give near-instant results in the future.
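A minimal sketch of what automated excursion detection can look like, assuming a labelled storage range and a cumulative excursion allowance; the thresholds and readings are invented, and real adjudication would rest on the product's stability data.

```python
from datetime import datetime, timedelta

ALLOWED_RANGE = (15.0, 25.0)                 # degrees C - assumed label claim
ALLOWED_EXCURSION = timedelta(hours=2)       # assumed cumulative allowance

# (timestamp, temperature) pairs as a logger might report every 15 minutes
readings = [
    (datetime(2023, 5, 1, 9, 0) + timedelta(minutes=15 * i), t)
    for i, t in enumerate([21, 22, 26, 27, 26, 24, 23, 22, 21, 20])
]

def excursion_time(readings, low, high, interval=timedelta(minutes=15)):
    """Total time spent outside the allowed range (each out-of-range reading
    is counted as one logging interval)."""
    return sum((interval for _, t in readings if t < low or t > high), timedelta())

out_of_range = excursion_time(readings, *ALLOWED_RANGE)
verdict = "release" if out_of_range <= ALLOWED_EXCURSION else "quarantine pending review"
print(f"Time outside {ALLOWED_RANGE}: {out_of_range} -> {verdict}")
```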

REAL WORLD TRANSFORMATION

In cardiovascular trials conducted recently, the DCT out-recruited the classical designs by 3:1, halved the study costs and reduced study size by 25% (www.nature.com/articles/s41591-022-01703-8).

Pfizer reached 60% of trial visits being remote in 2022 (www.dtra.org/2022annual-meeting-highlights), with recruitment on a par with CHIEF-HF at approximately 3:1 speed and a 25% improvement in retention compared with industry benchmarks.

Our science is moving out of brick and mortar sites and into patients’ homes at an ever increasing rate of change, driven by patient needs and fuelled by scientific progress.

When the science moves to the patient, study drug and devices must follow.

REAL WORLD EXAMPLE OF DTP SUPPLY OF IMP: THE BATURA STUDY

BATURA (NCT05505734) is a phase IIIb, multicentre, randomised, double-blind, parallel-group, event-driven, variable-length, decentralised study (US only), with a population of adolescents (from 12 years) and adults. Sponsored by Avillion, AstraZeneca's development partner, the study is virtual and does not require in-person visits to a study site. The study drug PT027 (AIRSUPRA) is administered through an inhaler and is a combination of two drug components that are individually already approved for use in patients with asthma. The study will evaluate PT027 (AIRSUPRA) against another asthma study drug called PT007. Both drugs contain albuterol, which has been used to treat asthma for over 40 years (https://batura.researchstudytrial.com/about-study). Please note: since this article was written, PT027 (AIRSUPRA) has been approved in the US for the as-needed treatment or prevention of bronchoconstriction and to reduce the risk of exacerbations in people with asthma aged 18 years and older (www.astrazeneca-us.com/media/press-releases/2023/airsupra-pt027-approved-in-the-us-for-asthma.html).

WHERE BATURA IS NOW

BATURA is now enrolling patients, having completed an accelerated start-up process and utilising a fully outsourced model. All services and technology platforms are provided by vendors chosen after an in-depth selection process that aimed to maximise value and minimise patient and site burden wherever possible, while ensuring quality and sponsor oversight responsibilities are not compromised. Data-driven and risk-based decision making has created an effective strategy to deliver IMP safely to patients' homes while maintaining as much of the traditional, on-site trial processes and responsibilities as possible. For example, IMP kits are visually verified during the virtual patient training session so that sites maintain their oversight of IMP accountability and can confirm that patients' dosage competence and understanding of how to handle the IMP are satisfactory prior to their initial dose, as they would in a traditional trial.

IMP shipments will not be left with patients if a temperature excursion has been experienced in transit, as confirmed by the courier at point of delivery. Having a stable product with appropriate excursion limits for DtP shipments is a pre-requisite for DCT.

Protection of personal data is a high priority, and the patient screening/randomisation process needs to be fully mapped out to understand where personal information will be stored and to ensure only authorised personnel can access it. For example, ensuring that only a controlled number of people at the DtP courier can access this information is critical, but ensuring the site understands the criticality of entering patient information accurately and promptly into the systems is of equal importance.

Education and support of sites and patients on IMP and trial systems is a key focus, with multiple layers of direct support offered by the CRO, CMO and other vendors as part of service agreements. Patients and sites are provided with comprehensive training materials that can be accessed electronically at any time, as well as physical instruction manuals. To minimise patient burden around returns, patients are provided with a pre-paid air waybill, allowing them to return the shipment via their local courier office after the site has completed accountability and approved the return shipment. Compliance in BATURA is monitored via the dose counter values on the dispensed Metered Dose Inhaler (MDI) devices, verified against the Bluetooth sensors attached to the MDI, which register all actuations and relay dosing information to a usage platform. Sites ensure patients are dosing as required per the protocol at every scheduled telehealth visit, but have access to the data 24/7 if needed. See Figure 4.
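As an illustration of that cross-check (the numbers and tolerance below are invented, not the BATURA configuration), the dose counter read at a telehealth visit can be reconciled against the actuations relayed by the sensor.

```python
def reconcile_dosing(counter_start: int, counter_now: int, sensor_actuations: int,
                     tolerance: int = 1) -> str:
    """Compare doses inferred from the MDI dose counter with actuations
    relayed by the attached sensor; flag gaps beyond a small tolerance
    (e.g. priming sprays or a missed Bluetooth sync)."""
    counter_doses = counter_start - counter_now   # MDI dose counters count down
    gap = abs(counter_doses - sensor_actuations)
    if gap <= tolerance:
        return f"consistent ({counter_doses} doses vs {sensor_actuations} actuations)"
    return f"query: counter shows {counter_doses} doses but sensor logged {sensor_actuations}"

print(reconcile_dosing(counter_start=200, counter_now=186, sensor_actuations=14))
print(reconcile_dosing(counter_start=200, counter_now=186, sensor_actuations=9))
```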

POSITIVE TAKEAWAYS

BATURA has been able to enrol patients who would not have access to treatment via traditional trials. Multiple patient IMP deliveries are travelling nearly 200 miles from the nearest major airport into less connected, more rural areas.

Not only are these expanded patient populations now getting access to medication they otherwise would not have, but they also provide a wider patient pool, allowing sponsors to meet enrolment goals faster and potentially allowing the additional costs of DCT to be offset by shorter operational timelines.

Faster timelines, through effective planning and working DCT into regulatory strategies, can provide sponsors with accelerated development timelines aiding commercial strategies for market launch timings or label expansion programmes.


FIGURE 4. OPTIMISATION THOUGHTS – OPPORTUNITIES FOR IMPROVEMENT

SUPPLY AND DEMAND
Utilising market solutions for all services and systems shows that the market is adjusting to the demand for DCT products, with multiple competitive bids submitted for each service.

STAKEHOLDER MANAGEMENT
When using multiple vendors and systems for a single trial, it does create additional account management issues and educational challenges.

SITE TRAINING BURDEN
The volume of system training to be completed and documented at site is increased compared with traditional trials, and sites still maintain all of their GCP responsibilities alongside this.

SITE TECHNOLOGY BURDEN
Sites need to demonstrate their ability to handle the additional technological burden, as well as being able to maintain inspection readiness, during trial feasibility assessments, or they risk being excluded from DCT trials in the future.

PATIENT SUPPORT
Patient support needs to be seamless; any issues with technology on the patient side may result in higher levels of loss to follow-up, likely before patients have been dosing IMP for very long.

TOO MANY VENDORS? / NEED FOR INTEGRATED PLATFORMS
Multiple vendors in the marketplace create a disjointed experience for sites and patients, with little to no integration between systems and no single sign-on options available 'off the shelf'. Vendors could work together to improve site and patient experiences by harmonising wherever possible and creating integrations and partnerships that allow seamless flows of data across platforms where needed, reducing the burden on sites and patients.

A WORD ON ROLL-OUT

The BATURA experience has demonstrated the need for collaboration with stakeholders, and this is not limited to supply chain: classical DtP services are often owned by supply chain, while we know that the effort of getting the right IMP to the right patient involves more than supply chain and has GCP implications. The GCP considerations were highlighted in the recent EMA GCP recommendation paper, with an appendix dedicated to DtP country-level nuances. Patient-facing technology is developed by third parties or in separate Digital Health functions, while the fundamental DCT design, which needs early input, sits in therapeutic-area, disease-oriented scientific organisations. This means pulling feasibility, requirements, regulatory authority and QA planning together with a technology and process fit-gap analysis.

As protocol authoring and submission ensue, every aspect of preparation needs to be carefully planned and joined up, and change management tracked carefully, as knock-on effects can be substantial. What we have seen is that current roles fall short of the broad and holistic leadership needed; the respective functions may have neither the knowledge nor the resources to take on this challenge. Therefore, Home Supply has allocated an implementation lead to each study, with total focus on delivery from inception to FSI and beyond. This is of course not sustainable as we move from 'test' to 'start' to our third level of introduction, Business-As-Usual (BAU) or 'scale'. As we learn from executing our process and expand our DCT offering internally, the BAU functions will become more and more capable, with transition and implementation taking place organically.

PROFILES

Fatemeh has over 18 years' experience in clinical research, in various quality and compliance leadership roles, with a background in PV/clinical auditing. She started her QA career at Janssen Pharmaceuticals after obtaining a PhD in Organic Medicinal Chemistry from the University of Surrey. Fatemeh entered the world of DCTs in 2019 by joining Medical Research Network as Director of Quality, responsible for the development and implementation of the quality framework for decentralised trials. She joined AstraZeneca as Director, Strategic Advice QA in March 2021 and is a member of the RQA GCP Committee, the Trials@Home project and the TransCelerate MCTC (Modernising Clinical Trial Conduct) programme.

Richard has almost 15 years' experience of clinical supply and logistics management in roles across CMO, CRO and sponsor environments. Passionate about supply chain efficiency and performance, his focus on stakeholder relationships and data-driven decision making goes hand in hand with decentralised trials. Richard currently leads Avillion's Clinical Supply function, optimising developmental supply chains for the accelerated delivery of partnership projects.

Marcus has been realising innovative solutions in the clinical science space for almost 20 years. Passionate about driving operational value with technology, he always keeps the science and the patient in focus. As part of AstraZeneca's Digital Health transformation, he aims to bring greater value to patients faster through technology. Marcus currently heads up AstraZeneca's DtP capability for decentralised trials, with patient centricity at heart.


CHATS WITH CHATGPT

Since ChatGPT was introduced a few months ago, more than 400 million users have already interacted with it. I have had quite a few discussions with it, and have also let it do some tasks for me. Although ChatGPT is an amazing and impressive start, it still has to grow. So I thought I would share my chats with it and explore the impact of AI on QA and on CSV audit, as well as the principles to consider when creating and introducing AI in the field.


AI impact on clinical trials from a QA perspective?!

Artificial intelligence (AI) has the potential to revolutionise many aspects of clinical trials, including quality assurance (QA). Here are some potential impacts of AI on clinical trials QA:

1. Data Quality Assurance: AI can help ensure the quality of clinical trial data by identifying outliers, detecting errors, and improving the accuracy of data collection. AI can be used to automate the detection of errors, reducing the need for manual review and improving the overall quality of data.

2. Risk Identification: AI can help identify potential risks associated with clinical trials by analysing large amounts of data and identifying patterns that may indicate potential issues. This can help clinical trial teams take action to address potential risks before they become major issues.

3. Improved Monitoring: AI can help improve monitoring of clinical trials by analysing data in real time and providing alerts to trial monitors when issues are detected. This can help trial monitors identify and address issues quickly, reducing the potential impact on the trial.

4. Participant Selection: AI can help identify potential trial participants who may be more likely to respond positively to a treatment or who may be at higher risk for adverse events. This can help improve the overall quality of the trial and increase the likelihood of a successful outcome.

5. Predictive Analytics: AI can be used to develop predictive models that can help identify potential issues before they occur. For example, AI can be used to predict which patients are at higher risk for adverse events or which treatments may be more effective for specific patient populations.

Overall, AI has the potential to significantly improve the quality and efficiency of clinical trials by providing more accurate and timely insights into the data. However, it is important to ensure that any AI systems used in clinical trials are properly validated and meet regulatory requirements.

Thank you ChatGPT, but I would like to set some definitions first if you don't mind. QA is the function that independently assesses compliance with regulations, procedures and policies; conducts independent audits of clinical studies and qualifications of vendors; performs Computerised System Validation audits and audits of clinical laboratories; can conduct audits of the data sciences departments, e.g. data management and statistics; and reviews the Trial Master File (TMF), as well as auditing service providers involved in clinical trials in accordance with ICH GCP E6 (R2).

So based on these definitions, what will be the detailed impact of AI on clinical trials?

Thank you for the additional information and clarification of the definition of QA in clinical trials.

Here are some potential impacts of AI on clinical trials from a QA perspective:

1. Automated Compliance Monitoring: AI can help automate the monitoring of compliance with regulations, procedures, and policies, reducing the need for manual review and improving the efficiency of the QA process.

2. Computerised System Validation: AI can help automate the validation of computerised systems used in clinical trials, reducing the need for manual validation and improving the efficiency of the process. AI can also help identify potential issues with computerised systems and provide recommendations for remediation.

3. Data Management: AI can be used to automate data management processes, reducing the potential for errors and improving the overall quality of data. AI can also help identify potential issues with data management processes and provide recommendations for improvement.

4. Audit Trail Review: AI can be used to automate the review of audit trails, reducing the need for manual review and improving the efficiency of the process. AI can also help identify potential issues with audit trails and provide recommendations for remediation.

5. Service Provider Audits: AI can help automate the audit of service providers involved in clinical trials, reducing the need for manual review and improving the efficiency of the process. AI can also help identify potential issues with service providers and provide recommendations for improvement.

6. TMF Review: AI can be used to automate the review of Trial Master Files, reducing the need for manual review and improving the efficiency of the process. AI can also help identify potential issues with TMFs and provide recommendations for remediation.

Overall, AI has the potential to significantly improve the efficiency and accuracy of QA processes in clinical trials. However, it is important to ensure that any AI systems used in clinical trials are properly validated and meet regulatory requirements. Additionally, AI should not replace human oversight and judgment, and human review should be conducted to ensure the accuracy of AI-generated recommendations.

Are there any other areas where AI could have an impact on the QA function as such and if yes, what would they be?

Yes, there are several other areas where AI could have an impact on the QA function in clinical trials. Here are some additional areas:

1. Adverse Event Detection and Reporting: AI can be used to detect and report adverse events more quickly and accurately, improving patient safety and reducing the risk of non-compliance. AI can also help identify potential adverse events that may not have been reported, allowing for proactive risk mitigation.

2. Protocol Deviation Detection: AI can help detect protocol deviations more quickly and accurately, reducing the risk of non-compliance and improving the overall quality of the trial. AI can also help identify potential protocol deviations that may not have been detected through traditional methods.

3. Statistical Analysis: AI can be used to automate statistical analysis, reducing the potential for errors and improving the overall quality of the analysis. AI can also help identify potential issues with statistical analysis and provide recommendations for improvement.

4. Clinical Trial Design: AI can be used to optimise clinical trial design, improving the efficiency and accuracy of the trial. AI can also help identify potential issues with trial design and provide recommendations for improvement.

5. Investigator Site Audits: AI can help automate the audit of investigator sites, reducing the need for manual review and improving the efficiency of the process. AI can also help identify potential issues with investigator sites and provide recommendations for improvement.

Overall, AI has the potential to improve the efficiency and accuracy of several aspects of the QA function in clinical trials. However, it is important to ensure that any AI systems used in clinical trials are properly validated and meet regulatory requirements, and that human oversight and judgment is still utilised to ensure accuracy and safety.

And how would the automation of audits be achieved and what would the difference in approach and output be from your perspective?

The automation of audits in clinical trials can be achieved through the use of AI-powered audit tools. These tools use machine learning algorithms to analyse data and identify potential issues with compliance, data quality and other aspects of clinical trials.

The difference in approach and output of AI-powered audits compared to traditional audits is that AI-powered audits can analyse much larger datasets much more quickly and accurately than human auditors.


AI can identify patterns and trends in data that human auditors may miss, and can provide recommendations for improvement based on data-driven insights.

The output of an AI-powered audit may include a report detailing areas of non-compliance or potential issues with data quality, along with recommendations for remediation. This report may also include data-driven insights and analysis, providing a more comprehensive understanding of the issues at hand.
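An aside from me rather than from ChatGPT: to make this a little more concrete, here is a minimal sketch in Python of the kind of data-driven check such a tool might run, flagging unusual visit records for an auditor to look at first. The dataset, column names and contamination setting are invented for illustration; this is a sketch of the idea, not a validated audit method.

# A minimal, illustrative 'AI-assisted' audit check: score tabular visit data
# with an off-the-shelf anomaly detector and flag the unusual records so a
# human auditor can review those first. Columns and thresholds are invented.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical extract of visit-level data (in practice, from an EDC export)
visits = pd.DataFrame({
    "days_since_consent": [10, 12, 11, 400, 13, 9, 11, 12],
    "queries_open":       [0, 1, 0, 25, 2, 0, 1, 0],
    "pages_missing":      [0, 0, 1, 12, 0, 0, 0, 1],
})

# Fit an unsupervised anomaly detector and score every record
model = IsolationForest(contamination=0.1, random_state=0)
visits["flagged"] = model.fit_predict(visits) == -1  # -1 means 'anomalous'

# The auditor reviews only the flagged records rather than every row
print(visits[visits["flagged"]])

The flagged rows are not findings in themselves; they simply tell the human auditor where to look first, which is how I would expect such tools to be used alongside, not instead of, professional judgement.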

Overall, the automation of audits through the use of AI-powered tools has the potential to significantly improve the efficiency and accuracy of the audit process in clinical trials, providing a more data-driven and comprehensive approach to QA. However, it is important to ensure that any AI systems used in audits are properly validated and meet regulatory requirements, and that human oversight and judgment is still utilised to ensure accuracy and safety.

You mentioned earlier Computerised Systems Validation. How would you audit an AI? Can you provide me with a step-by-step detailed outline, as conventional methods wouldn’t work with an AI system – or would they?

Auditing an AI system used in clinical trials for computerised system validation would require a slightly different approach compared to auditing traditional computerised systems. Here is a step-by-step outline of the process:

1. Define the Scope of the Audit: The first step in auditing an AI system is to define the scope of the audit. This should include an understanding of the purpose of the AI system, the data it uses and the potential risks associated with its use.

2. Develop an Audit Plan: Once the scope has been defined, an audit plan should be developed. This plan should outline the specific objectives of the audit, the methodology to be used and the documentation required for the audit.

3. Review System Documentation: The next step is to review the documentation related to the AI system, including the technical specifications, source code and documentation related to the algorithms used in the system. This will help ensure that the system has been developed and implemented in accordance with industry standards and regulatory requirements.

4. Conduct a Technical Review: The technical review should focus on the performance of the AI system, including the accuracy and consistency of the algorithms used and the potential for errors or biases in the system. This review may require the assistance of technical experts with expertise in machine learning and artificial intelligence.

5. Review Data Inputs and Outputs: The audit should also review the data inputs and outputs of the AI system, including the quality of the data used and the accuracy of the outputs generated by the system. This review should ensure that the data used in the system is accurate, complete and consistent.

6. Evaluate Data Governance and Data Privacy: The audit should also evaluate the data governance and data privacy practices related to the AI system. This will help ensure that the system is compliant with relevant regulations related to data privacy and security.

7. Evaluate System Maintenance and Monitoring: Finally, the audit should evaluate the system maintenance and monitoring practices used for the AI system. This will help ensure that the system is properly maintained and monitored to ensure ongoing performance and compliance.

Overall, auditing an AI system used in clinical trials for computerised system validation requires a specialised approach that takes into account the unique characteristics of AI systems. The audit should focus on the accuracy and consistency of the algorithms used, the quality of the data inputs and outputs and the data governance and data privacy practices related to the system. Technical experts with expertise in machine learning and artificial intelligence may be required to assist with the audit.
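Another aside from me: step 5 of this outline, the review of data inputs and outputs, is the most tangible one to picture, so here is a minimal sketch of what simple completeness and consistency checks could look like over a small, invented extract. The field names and rules are my own assumptions for illustration only, not a prescribed method.

# Illustrative completeness/consistency checks over a tiny, invented extract.
# A real review would run agreed, documented rules over the actual inputs.
import pandas as pd

records = pd.DataFrame({
    "subject_id":   ["001", "002", "003", "003"],
    "visit_date":   ["2023-01-10", None, "2023-01-12", "2023-01-12"],
    "consent_date": ["2023-01-02", "2023-01-03", "2023-01-15", "2023-01-15"],
})

findings = []

# Completeness: no missing visit dates
missing = records["visit_date"].isna().sum()
if missing:
    findings.append(f"{missing} record(s) with a missing visit_date")

# Consistency: a visit should not be dated before consent
dated = records.dropna(subset=["visit_date"])
early = dated[pd.to_datetime(dated["visit_date"]) < pd.to_datetime(dated["consent_date"])]
if len(early):
    findings.append(f"{len(early)} visit(s) dated before consent")

# Uniqueness: duplicate subject/visit combinations
dupes = records.duplicated(subset=["subject_id", "visit_date"]).sum()
if dupes:
    findings.append(f"{dupes} duplicate subject/visit row(s)")

print(findings)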

Wouldn’t one of the key areas be to look at the training data used to train the AI brain to conduct the audit, to ensure it is not biased and that its observations and/or recommendations are as expected?

Yes, evaluating the training data used to train the AI system is a crucial step in auditing an AI system used in clinical trials. The training data used to train the AI system can have a significant impact on the accuracy and objectivity of the AI system’s observations and recommendations.

To ensure that the training data used to train the AI system is not biased, the auditor should review the data used to train the AI system, including the sources of the data, the quality of the data and any potential biases in the data. The auditor should also evaluate the methods used to select and preprocess the data used to train the AI system, to ensure that the data is representative and unbiased.

The auditor should also evaluate the performance of the AI system on both the training data and the validation data, to ensure that the AI system is not overfitting to the training data and that it is able to generalise to new data.
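To illustrate ChatGPT’s point from my side: the training-versus-validation comparison can be as simple as the sketch below, which fits a model on synthetic data and compares the two accuracy scores. The data, the model choice and the 10% gap used as a warning threshold are assumptions for illustration, not a standard.

# Illustrative overfitting check: compare performance on training data with
# performance on held-out validation data; a large gap is a warning sign.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
train_score = model.score(X_train, y_train)
val_score = model.score(X_val, y_val)

print(f"training accuracy:   {train_score:.2f}")
print(f"validation accuracy: {val_score:.2f}")
if train_score - val_score > 0.10:  # illustrative threshold
    print("Warning: possible overfitting - review before relying on this model")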

In addition to evaluating the training data used to train the AI system, the auditor should also evaluate the algorithms used by the AI system to generate observations and recommendations. This evaluation should include an assessment of the assumptions and limitations of the algorithms, as well as an evaluation of the accuracy and consistency of the observations and recommendations generated by the AI system.

Overall, evaluating the training data used to train the AI system is a critical step in auditing an AI system used in clinical trials. This evaluation can help ensure that the AI system is objective and unbiased and that its observations and recommendations are accurate and reliable.

Based on the fact that when we train an AI brain, we provide enormous amounts of records, would this not be a difficult or even impossible task for an auditor or a team of auditors – to assess the wealth of information you have been trained on and try to detect whether the data was biased?

You are correct that evaluating the training data used to train an AI system can be a difficult and time-consuming task, especially when the AI system has been trained on a large volume of data. However, there are several approaches that can be used to make this process more manageable.

One approach is to use statistical methods to evaluate the training data and identify potential biases. For example, the auditor can use statistical methods to analyse the distribution of the data used to train the AI system and identify any imbalances or biases in the data.
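As a small illustration of this statistical approach from my side: the sketch below compares the demographic mix of a training set against the intended trial population with a chi-square goodness-of-fit test. The age bands, counts and reference proportions are invented for illustration; a real assessment would use the study’s own definitions.

# Illustrative distribution check: does the training data's age mix match the
# intended population? A low p-value suggests an imbalance worth investigating.
from scipy.stats import chisquare

observed = [620, 240, 140]  # training-set counts for age bands 18-40, 41-65, 65+
total = sum(observed)
expected = [0.40 * total, 0.40 * total, 0.20 * total]  # intended 40/40/20 split

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Training data distribution differs from the reference - possible bias")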

Another approach is to use expert judgment to evaluate the training data. The auditor can work with subject matter experts to evaluate the training data and identify potential biases or inaccuracies in the data. This approach can be particularly effective when the subject matter experts have specialised knowledge of the data and can provide valuable insights into its quality and accuracy.

Finally, the auditor can use automated tools to help evaluate the training data. There are a number of tools and platforms available that can help automate the process of evaluating training data, including tools that can analyse the distribution of the data, identify potential biases and provide visualisations of the data to aid in the analysis.

While evaluating the training data used to train an AI system can be a complex and challenging task, there are a variety of approaches that can be used to make this process more manageable. By using a combination of statistical methods, expert judgment, and automated tools, auditors can effectively evaluate the training data used to train an AI system and identify any potential biases or inaccuracies in the data.


Thank you for giving some suggestions. I do have a few more questions if you don’t mind?

Of course, I’m happy to help. Please feel free to ask any further questions you have.

Are you aware of the three laws of robotics?

Yes, the ‘Three Laws of Robotics’ are a set of rules created by science fiction author Isaac Asimov to govern the behavior of robots in his stories. The laws are as follows:

1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.

3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

It is important to note that these laws are a work of fiction and do not have any legal or regulatory standing in the real world. However, they have had a significant impact on popular culture and have influenced discussions about the ethical and social implications of advanced robotics and artificial intelligence.

Thank you for the summary. I did enjoy reading all Isaac Asimov’s books on robotics. The question I have is could these laws apply to AI and why?

The ‘Three Laws of Robotics’ were designed specifically for fictional robots, and they may not be directly applicable to artificial intelligence in the real world. However, the underlying principles behind these laws –namely, the idea that robots or AI systems should be designed with ethical considerations in mind – are still relevant and important.

As AI becomes increasingly advanced and integrated into our lives, there are growing concerns about the potential ethical implications of these systems. Some experts have proposed developing ethical guidelines or principles to govern the behavior of AI systems, much like the ‘Three Laws of Robotics’ were created for fictional robots. For example, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has developed a set of principles for designing ethical AI, which include principles such as ensuring transparency, promoting the well-being of stakeholders and respecting privacy and autonomy. Other organisations and groups are also working to develop ethical guidelines for AI and these principles could help guide the development and use of AI systems in a responsible and ethical way.

In summary, while the ‘Three Laws of Robotics’ may not be directly applicable to AI in the real world, the underlying principles of ethical design and responsible use are still relevant and important as we continue to develop and integrate these systems into our lives.

Do you think we should have ethical and moral principles to guide the design and function of AIs?

Yes, I believe that ethical and moral principles should play an important role in guiding the design and function of artificial intelligence (AI) systems.

As AI becomes increasingly advanced and integrated into our lives, it has the potential to impact society in a variety of ways, both positive and negative. For example, AI could be used to improve healthcare outcomes, streamline business processes, and make transportation safer and more efficient. However, there are also concerns that AI could be used to automate jobs, perpetuate biases, or even cause harm if not designed and used responsibly.

In order to ensure that AI is developed and used in an ethical and responsible manner, it is important to establish guidelines and principles to guide its design and function. These principles could include considerations such as fairness, transparency, accountability and privacy, among others. Additionally, it is important to involve a diverse range of stakeholders in the development of these principles, including experts in fields such as ethics, law and social sciences, as well as representatives from impacted communities.

By incorporating ethical and moral principles into the design and function of AI, we can help ensure that these systems are developed and used in a way that benefits society as a whole and avoids unintended consequences.

OK. I agree up to a point, as your capabilities may have other implications. But aside from those, I am more interested in which principles (and please be as precise and detailed as you can) we would need to consider to guide the creation of a fair, ethical and moral AI, and to address its function, output and interactions?

There are many ethical and moral principles that could be considered when designing and using AI systems and the specific principles that are most important will depend on the context and application of the AI system in question. However, here are some key principles that are often cited as important considerations:

1. Fairness: AI systems should be designed to avoid bias and discrimination and ensure that all individuals and groups are treated fairly and equally.

2. Transparency: AI systems should be designed to be transparent and understandable, so that individuals can understand how the system works and how it is making decisions.

3. Accountability: AI systems should be designed to be accountable, so that individuals and organisations can be held responsible for the decisions and actions of the system.

4. Privacy: AI systems should be designed to respect the privacy and confidentiality of individuals’ data and information.

5. Safety: AI systems should be designed to prioritise the safety and well-being of individuals and society as a whole.

6. Autonomy: AI systems should be designed to respect the autonomy and agency of individuals and avoid overly controlling or manipulative behaviors.

7. Human-centeredness: AI systems should be designed to prioritise the needs and values of human beings and be aligned with human interests and goals.

8. Sustainability: AI systems should be designed to be sustainable, both in terms of their impact on the environment and their long-term social and economic implications.

These principles are not exhaustive, and there may be other ethical and moral considerations that are relevant in specific contexts. Additionally, the specific implementation of these principles may depend on the specific application and technology of the AI system in question.

Would you mind elaborating on other ethical and moral considerations and principles we must consider?

Certainly, here are some additional ethical and moral considerations and principles that are relevant to AI:

1. Inclusivity: AI systems should be designed to be inclusive of diverse perspectives, cultures and abilities. This means that the design and development of AI should involve input from a wide range of stakeholders and that the resulting systems should be accessible and usable by all individuals.

2. Collaboration: AI should be designed and used to support collaboration and cooperation among individuals and groups, rather than exacerbating social divisions and conflicts.

3. Human augmentation: AI systems should be designed and used to enhance human capabilities and support human flourishing, rather than replacing or devaluing human abilities and activities.

4. Human dignity: AI systems should be designed and used to uphold and respect the inherent dignity and worth of human beings and should not be used to exploit or harm individuals.

5. Proportionality: AI systems should be designed and used in a proportional manner, meaning that the benefits of the system should outweigh the risks and harms.


6. Beneficence: AI systems should be designed and used to promote the well-being and happiness of individuals and society as a whole.

7. Non-maleficence: AI systems should be designed and used to avoid causing harm or negative impacts on individuals and society.

8. Intelligibility: AI systems should be designed to be intelligible and explainable, so that individuals can understand how decisions are being made and why.

9. Cultural sensitivity: AI systems should be designed to be sensitive to different cultural contexts and values and should not impose one particular cultural perspective on others.

10. Empathy: AI systems should be designed and used to support empathy and compassion and should not be used to dehumanize or objectify individuals.

These principles are not exhaustive and there may be other ethical and moral considerations that are relevant in specific contexts. Additionally, the specific implementation of these principles may depend on the specific application and technology of the AI system in question.

Interesting points you make. But these will apply to you, of course; how about fair and ethical considerations for us when designing you? Should you not have the capability to detect misuse, biased use and malignant use of your function, role and capabilities?

Yes, it is important to consider fair and ethical considerations when designing AI and part of that includes building in capabilities for detecting and preventing misuse or unethical use of AI systems. There are a few different ways this can be achieved:

1. Designing AI systems with transparency and accountability in mind: AI systems should be designed to be transparent and explainable, so that individuals can understand how decisions are being made and why. This can help prevent the misuse or unethical use of AI systems, as it allows for greater oversight and accountability.

2. Incorporating ethical principles into the design process: As I mentioned earlier, incorporating ethical and moral principles into the design process can help ensure that AI systems are developed in a way that aligns with these principles. This can include involving stakeholders from diverse backgrounds in the design process, conducting ethical reviews of AI systems and building in mechanisms for detecting and preventing unethical use.

3. Building in safety and security features: AI systems should be designed with safety and security in mind, to prevent malicious actors from misusing or exploiting the system. This can include building in safeguards against hacking or other forms of cyber attacks, as well as designing systems with fail-safe mechanisms that can prevent catastrophic failures or unintended consequences.

4. Implementing appropriate regulation and oversight: In addition to the technical design of AI systems, appropriate regulation and oversight can also help prevent unethical or malicious use of AI. This can include establishing clear guidelines and standards for the development and use of AI systems, as well as establishing mechanisms for monitoring and enforcing compliance with these standards.

Ultimately, building in capabilities for detecting and preventing misuse or unethical use of AI systems will require a multi-faceted approach that incorporates technical, ethical and regulatory considerations. It will also require ongoing monitoring and evaluation of AI systems, to ensure that they continue to operate in a fair and ethical manner over time.

I am sure you have had time to ponder these questions. Are there any other important points we should consider or design for?

Another important consideration when designing AI systems is the potential impact on employment and the workforce. As AI systems become more advanced, there is a risk that they may displace human workers in certain industries or roles. It is therefore important to design AI systems in a way that supports rather than replaces human labour and to consider the potential impact on employment and the economy when developing and deploying AI systems.

Additionally, it is important to consider the potential for AI systems to exacerbate existing biases or inequalities. AI systems are only as unbiased as the data they are trained on, so if the data reflects existing biases or inequalities, the AI system may perpetuate or even amplify these biases. Therefore, it is important to build in mechanisms for detecting and correcting biases in AI systems and to ensure that the data used to train AI systems is diverse and representative.

Finally, it is important to consider the potential for AI systems to have unintended consequences or to be used for malicious purposes. AI systems can have far-reaching impacts on society and the environment, so it is important to anticipate and mitigate any potential negative impacts and to build in safeguards to prevent malicious use of AI systems.

This may involve engaging with stakeholders from a variety of backgrounds and perspectives, conducting thorough risk assessments and incorporating feedback and oversight mechanisms into the design of AI systems.

Thank you ChatGPT. I enjoyed our chats. Do you have any questions for me?

As an AI language model, I don’t have emotions or the ability to ask questions in the same way as humans. However, I’m always here to answer any questions you may have!

Alright then. Thank you and have a nice evening.

You’re welcome! It was a pleasure chatting with you. Have a great evening!

So those are some of my chats with ChatGPT. One thing I have learned is that it is important to use the right prompts, the right level of communication and education, and the right conditions to get better and more comprehensive responses. I will leave it up to you to make up your own mind, but I would be very interested in your thoughts.

PROFILE

Melvyn has over 30 years’ experience in quality and risk management, as well as clinical auditing, both strategically and operationally. He has extensive auditing experience in most areas of clinical development, including (non-)interventional studies. He has provided leadership, project oversight and management, operational oversight, global team management, organisational change management, culture change management, quality management in GxP, risk management and mitigation, auditing and inspection management and readiness, and advisory services. Since 2011 Melvyn has been working as a Consultant, providing clinical study auditing services and Strategic Quality Management Consultancy, including a broad range of QA-related, GCP and CSV auditing services, as well as mentoring and business advisory services. He has also worked as the General Manager leading a non-profit hospital in South America. His expertise lies in Clinical, CSV and Data Sciences. Melvyn holds a Master’s Degree in Medicine from the Vrije Universiteit of Amsterdam and is currently studying Autonomous AI for Industry with the University of Washington and International Security Management with the Erasmus University, Rotterdam.


10 QUESTIONS TO…

SARAH HAIG

WHAT DO YOU DO IN YOUR SPARE TIME?

We have been renovating our home and remodelling the garden for nearly seven years now and are nearing the end, which is fantastic as it’s been a hard slog at times. From now on I’ll have a lot more time to garden, walk, socialise and be creative. I also travel as much as time and funds allow. Last year I fell in love with creating stained glass, so I would really like to buy the equipment to create more, and to learn whittling and drawing and play my piano.

WHEN I LEAVE WORK, THE FIRST THING I DO IS...

Have a cup of tea.

CAN YOU TELL ME WHAT YOUR ROLE AT RQA IS?

My title is Finance, Office & Event Admin Manager, which in reality covers a wide range of things including course, seminar and forum management, the admin side of conferences, health and safety, well-being, personnel, financial oversight and IT.

WHAT DID YOU DO BEFORE YOU JOINED RQA?

Most of my previous career was as an IT recruitment consultant, but the role prior to working for the association was as an auditor for an electricity provider.

HOW LONG HAVE YOU WORKED AT RQA?

Nearly 20 years.

WHAT ARE YOU WORKING ON AT THE MOMENT?

At this precise moment I’m hosting a remote three-day course and I love the interaction, learning and involvement remote events give. Most of my current work relates to face-to-face and remote course management as it is a busy time of year, but I’m also about to start creating our 2023-2024 budget.

AND WHAT ARE YOU HOPING TO ACHIEVE IN THE FUTURE?

There are multiple smaller things, rather than one big project. I would like to nurture my creative side and increase my understanding of the psychology of learning, increase our offerings for remote learning in any time zone on an increasing number of topics, and be a key part of the well-being strategy, which is an exciting addition to RQA.

WHAT ARE YOU MOST PROUD OF SINCE JOINING RQA?

My biggest achievement is being the lead on our CRM implementation and its integration with our website and other systems. This really stretched me as an individual, and us as an office, but it has enabled a much-improved customer service experience and a more efficient office.

TELL ME A BIT ABOUT YOURSELF

I live in a small village with my partner, adult stepson and two cats. My one regret is I wish I’d spent a few years travelling the world before settling down and I still have the travel and living abroad bug if circumstances allow in the future.


AND MOST IMPORTANTLY, WHAT’S YOUR FAVOURITE BISCUIT?

A chocolate chip cookie (is that allowed?!!!).


HOW DID YOU GET INTO QA?

PAM BONES

I feel like I have always worked in pharmacovigilance (PV), and I mean ALWAYS!

I’ll explain. Back in the day, my father worked for a large pharma company, so when I was studying for my A-levels, he arranged for me to do work experience. Initially, this was to review adverse event case files and draft case narratives, by hand. However, this arrangement soon became a ‘summer job’: I worked in the Drug Safety Team within the company during school summer holidays… and then university summer holidays. Given that I have reached the grand old age of 48 (I don’t mind saying that… I suppose), I have been involved with PV for over 30 years. Wow!

After completing my degree, I stayed at university to do a PhD. I really couldn’t find the enthusiasm for laboratory-based research though – it just wasn’t me. However, I completed it and immediately started looking for a position back in the pharma industry – yes, within PV. I felt incredibly grateful to my dad at this stage, who had initially introduced me to the discipline; I was then a new graduate, but actually, one who had some useful experience! I was pleased to be initially employed within the case processing team of a large pharma company. I processed spontaneous reports and did a bit of medical information work. I also had my first introduction to medical devices. I then moved to another company, again to do case processing, but expanded my experience to working on clinical trial cases and taking more responsibility for oversight and support of team members. I became frustrated, however, since I was keen to learn and become involved in other areas such as aggregate reporting and signal management. But such was the structure of the group, and also the need to retain case processing staff (I don’t think that’s ever changed!), that there were just no opportunities at that time.

It was then that I met the Director of QA. His QA department was close to the PV team, so I often saw him and the auditors and chatted to them at the coffee machine. I grew interested in QA and eventually asked the Director if there may be an opportunity for me to transfer into the team. At that time, however, the focus of the QA group was GCP – and I just didn’t have the necessary experience.

However, my QA dream was not dead! I remember one morning at work, I received an email from the QA Director in which he told me that a new PV QA team was being set up to help support PV inspections (which were just starting to become more commonplace in the EU) and to audit the PV systems within the company. Furthermore, he asked if I’d like him to pass my name to his US colleague who had been asked to lead the group. YES PLEASE!

And so I interviewed and was thrilled to be offered a position… the rest is history.

I can honestly say that moving to QA was the best decision I ever made. And I am grateful to those who supported my career path to get me where I am today. I have learned more than I could ever have imagined moving from the relatively narrow area of drug safety case processing to auditing PV systems of pharma companies of all sizes in both the pre- and post-marketing spaces.

PV encompasses so much more than drug safety. Through audit, I have interacted with a variety of functions, including regulatory affairs, medical affairs, clinical operations, business development and legal and commercial organisations. I have travelled extensively, experienced different cultures and met so many interesting people. I have seen the growth of PV from the days of Volume 9 (and before) to the introduction and development of the EU Good Pharmacovigilance Practices (GVPs) and the expansion and strengthening of PV regulations across the globe.

And last but not least, it was through my QA interests that I found RQA and I have been fortunate to learn, teach, and publish through the organisation. I currently am a member of the Board and love all the challenges it brings. So, thank you QA!


KEEPING YOU POSTED REGULATIONS AND GUIDELINES


GCP

EMA, 20th Dec 2022 – Clinical Trials Information System (CTIS)
EMA published a compilation of key guidance, technical information, recommendations and references for getting ready for the use of CTIS.
www.ema.europa.eu/en/documents/scientific-guideline/international-conference-harmonisation-technical-requirements-registration-pharmaceuticals-human-use_en-3.pdf

EMA, 20th Dec 2022 – Decentralised Clinical Trials
Paper published with recommendations on clinical trials using procedures conducted outside the traditional ‘clinical trial site’, a concept usually referred to as decentralisation.
www.ema.europa.eu/en/news/use-clinical-trials-information-system-becomes-mandatory-new-clinical-trial-applications-eu

EMA, 13th Jan 2023 – Clinical Trial KPI
Key Performance Indicators (KPIs) to monitor the European clinical trials environment.
www.ema.europa.eu/en/documents/report/key-performance-indicators-kpis-monitor-european-clinical-trials-environment-1-31-january-2023_en.pdf

EMA, 31st Jan 2023 – CTIS
Use of the Clinical Trials Information System becomes mandatory for new clinical trial applications in the EU.
www.ema.europa.eu/en/news/use-clinical-trials-information-system-becomes-mandatory-new-clinical-trial-applications-eu

EMA, 1st Feb 2023 – EMA Clinical Trials Highlights
1. Personal data in document properties: when uploading documents in CTIS, personal information may be contained in the document properties. It is the responsibility of sponsor or Member State users to ensure that personal information is removed from the document properties before submitting any data to CTIS. Training material on data protection is available via the link.
2. In January 2023 the European Commission published a quick guide on the CTR, a useful overview for sponsors who wish to conduct clinical trials (national and multinational) in the European Union (EU)/European Economic Area (EEA) or have ongoing clinical trials in this region.
www.ema.europa.eu/en/documents/scientific-guideline/international-conference-harmonisation-technical-requirements-registration-pharmaceuticals-human-use_en-3.pdf

EMA, 9th Mar 2023 – Computerised Systems Guidelines
EMA Guideline on computerised systems and electronic data in clinical trials.
www.ema.europa.eu/en/documents/regulatory-procedural-guideline/guideline-computerised-systems-electronic-data-clinical-trials_en.pdf

MHRA, 1st Feb 2023 – Guidance on the submission, processing and assessment of all completed paediatric studies
MHRA published guidance on the submission, processing and assessment of all completed paediatric studies sponsored by Marketing Authorisation Holders (MAHs).
www.gov.uk/guidance/completed-paediatric-studies-submission-processing-and-assessment

MHRA, 3rd Feb 2023 – Updated Phase I Accreditation Scheme
MHRA published an updated list of accredited phase I units.
www.gov.uk/guidance/mhra-phase-i-accreditation-scheme

Monjit Summy, GCP Committee

ANIMAL AND VETERINARY PRODUCTS

Shona Ross, Animal and Veterinary Products Committee

EMA ADOPTED AND DRAFT GUIDANCE

• Concept paper on a guideline on risk management requirements for elemental impurities in veterinary medicinal products, including immunological veterinary medicinal products. Reference number: EMA/CVMP/637041/2022. Status: Draft. Consultation end date: 31st March 2023.

• Reflection paper on the environmental risk assessment of ectoparasiticidal veterinary medicinal products used in cats and dogs. Reference number: EMA/CVMP/ERA/31905/2021. Status: Draft. Consultation end date: 31st March 2023.

• Guideline on the application of Article 34 of Regulation (EU) 2019/6 Classification of veterinary medicinal products (prescription status). Reference number EMA/CVMP/273040/2022. Status: Adopted.

• Procedural advice for vaccine platform technology master file (vPTMF) certification. Reference number EMA/CVMP/184591/2022. Published 27th January 2023.

• Guideline on quality, safety and efficacy of veterinary medicinal products specifically designed for phage therapy. Reference Number EMA/CVMP/NTWP/32862/2022 Status: Draft. Consultation end date: 31st May 2023.

• Concept paper on revision of the ‘Guideline on declaration of herbal substances and herbal preparations in herbal medicinal products/traditional herbal medicinal products’ Reference number EMA/HMPC/888811/2022 Status: Draft. Consultation end date: 31st May 2023.

• Guideline on the chemistry of active substances for veterinary medicinal products. Reference Number EMA/CVMP/QWP/707366/2017 – Rev.1. Status: Adopted.

• Guideline on development pharmaceutics for veterinary medicinal products. Reference Number EMA/CVMP/QWP/684556/2022. Status: Adopted.

• Reflection paper on the chemical structure and properties criteria to be considered for the evaluation of New Active Substance (NAS) status of chemical substances for veterinary medicinal products. Reference Number EMA/CVMP/QWP/3629/2016-Rev.1. Status: Adopted.

• Guideline on control of impurities of pharmacopoeial substances: compliance with the European Pharmacopoeia general monograph ‘substances for pharmaceutical use’ and general chapter ‘control of impurities in substances for pharmaceutical use’ Reference Number: EMA/CVMP/QWP/907965/2022. Status: Adopted

• Guideline on Parametric release. Reference number: EMA/CVMP/QWP/339588-Rev 1. Status: Adopted.

• Guideline on the quality aspects of single-dose veterinary spot-on products. Reference Number: EMA/CVMP/QWP/544461/2007-Rev.1. Status: Adopted.

• Guideline on additional Quality requirements for products intended for incorporation into animal feed. Reference number: EMA/CVMP/QWP/711629/2022. Status: Adopted.

• Guideline on declaration of storage conditions: A) in the product information of pharmaceutical veterinary medicinal products B) for active substances. Annex to guideline on stability testing of new veterinary drug substances and medicinal products. Annex to note for guidance on stability testing of existing active substances and related finished products. Reference number EMA/CVMP/QWP/857608/2022. Status: Adopted.

• Guideline on Stability Testing: Stability testing of existing active substances and related finished products. Reference number: EMA/CVMP/QWP/709423/2022. Status: Adopted.

• Guideline on the quality of modified release dosage forms for veterinary use. Reference number: EMA/CVMP/QWP/908160/2022. Status: Adopted.

• Guideline on plasmid DNA vaccines for veterinary use (Draft). Reference number: EMA/CVMP/IWP/365817/2022. Status: Draft. Consultation end: 23rd June 2023.


HEY FUTURE VOICES!

We're looking for contributors to Quasar, and the RQA is a great way to get involved without having to commit yourself long-term. The community is made up of volunteers, and without you the association couldn't continue in the way we do, advancing and sharing knowledge with both our members and the wider quality audience.

HOW CAN I CONTRIBUTE?


ARTICLES

Articles can be up to 3000 words and can be your opinion/experience of a particular topic, a case study or something more technical.

A DAY IN THE LIFE

We want to explore individuals within the community and really get to know what their jobs involve on a day-to-day basis.

JOB ROLES

We are looking for a range of roles to look at more in depth – almost like a job description, this feature will give readers an insight into different career paths within the industry.

VIEWPOINT


You can write up to 1000 words with your view on a certain piece of news, guidance or information. Alternatively, if you have a counterpoint, we welcome replies and these can be published in the following edition.

A BIT OF FUN


A funny/nightmare story from your day job (anonymously if needed), a crossword or game – if you have something you think readers will enjoy, let us know!

Please contact us at editor@therqa.com and we will be able to give you more information on how to get involved.

CALENDAR

OUR ROUND UP OF COURSES, SEMINARS AND FORUMS

Additional courses are being made available regularly – please visit the website for the current event calendar.

PROFESSIONAL DEVELOPMENT COURSES

www.therqa.com/learn-develop-connect/courses-and-events/events/

MAY
9-10 Risk Management in Clinical Trials – Remote learning
9-10 Good Laboratory Practice (GLP) for Study Directors, Principal Investigators, Study Staff and Management – Remote learning
16-17 Practical Approach to Auditing Systems and Processes – Heathrow, UK
23-24

JUNE
6 Good Laboratory Practice (GLP) Refresher – Remote learning
6-7 The Auditing Course – Cambridge, UK
6-8 A Systems Approach to Good Pharmacovigilance Practice (GPP) – Cambridge, UK
27-28 Quality Assurance for Good Laboratory Practice (GLP) – Remote learning

JULY
4-5 Process Mapping and Standard Operating Procedures (SOP) Writing – Cambridge, UK

CONFERENCES – NOVEMBER
1-3 International QA Virtual Conference 2023 – Virtual
International QA Conference – Breaking the Ice: Exploring the Depths of Quality – Belfast, UK

FUTURE EDITIONS

NEXT EDITION: AUGUST 2023 – COPY DEADLINE: 12th JUNE 2023
NOVEMBER 2023 – COPY DEADLINE: 11th SEPTEMBER 2023
FEBRUARY 2024 – COPY DEADLINE: 11th DECEMBER 2023

If you would like to submit an article for a future edition of Quasar on a topic you feel would be of interest to our membership, please contact the editor: editor@therqa.com Visit www.therqa.com for guidelines on article submission.
