
REVIEW

Understanding United States Investigational Device Exemption Studies—Clinical Relevance and Importance for Healthcare Economics

Jared D. Ament, MD, MPH∗; Scott Mollan, MSc‡; Krista Greenan, MD, MPH∗; Tamar Binyamin, MD∗; Kee D. Kim, MD∗

∗Department of Neurological Surgery, University of California Davis, Sacramento, California; ‡ICON Clinical Research Services, Durham, North Carolina

Correspondence: Jared D. Ament, MD, MPH, Department of Neurological Surgery, University of California Davis, 4380 Y Street, #3740, Sacramento, CA, 95817. E-mail: Jared.ament@ucdmc.ucdavis.edu

Received, July 20, 2016. Accepted, January 24, 2017.

Copyright © 2017 by the Congress of Neurological Surgeons

INTRODUCTION: The US Food and Drug Administration allows a previously unapproved device to be used clinically to collect safety and effectiveness data under its Investigational Device Exemption (IDE) category. The process usually falls under 1 of 3 trial categories: noninferiority, equivalency, and superiority. To confidently inform our patients, understanding the basic concepts of these trials is paramount. The purpose of this manuscript is to provide a comprehensive review of these topics, using recently published IDE trials and economic analyses of cervical total disc replacement as illustrative examples.

CASE STUDY: MOBI-C ARTIFICIAL CERVICAL DISC: In 2006, an IDE was initiated to study the safety and effectiveness of total disc replacement controlled against the standard of care, anterior cervical discectomy and fusion. Under the IDE, randomized controlled trials comparing both 1- and 2-level cervical disease were completed. The sponsor designed the initial trial as noninferiority; however, using adaptive methodology, superiority could be claimed in the 2-level investigation.

REVIEWING HEALTHCARE ECONOMICS: Healthcare economics are critical in medical decision making and reimbursement practices. Once both cost and quality-adjusted life-year (QALY) are known for each patient, the incremental cost-effectiveness ratio is calculated. Willingness-to-pay is controversial, but a commonly cited guideline considers interventions costing below 20 000 $/QALY highly cost-effective and those costing more than 100 000 $/QALY not cost-effective.

CONCLUSION: While large Food and Drug Administration IDE studies are often besieged by complex statistical considerations and calculations, it is fundamentally important that clinicians understand at least the terminology and basic concepts on a practical level.

KEY WORDS: Cervical total disc arthroplasty, Clinical trial design, Statistical review, Noninferiority, Superiority, Randomized controlled trial

Neurosurgery 0:1–7, 2017

DOI:10.1093/neuros/nyx048

ABBREVIATIONS: IDE, Investigational Device Exemption; QALY, quality-adjusted life-year; FDA, Food and Drug Administration; RCT, randomized controlled trial; TDR, total disc replacement; ACDF, anterior cervical discectomy and fusion; ICER, incremental cost-effectiveness ratio

The US Food and Drug Administration (FDA) allows a previously unapproved device to be used clinically to collect safety and effectiveness data under its Investigational Device Exemption (IDE) category. The framework for an IDE clinical study is regulated under federal law. The results of the IDE study


may then be used to support an application to the FDA for market approval in the United States. However, the criteria for submitting and initiating an IDE study are stringent.1 The sponsor/investigator must prepare supportive documents that include all prior studies and research as well as a detailed investigational plan and protocol. The trial design, statistical methodology, risk analysis, copies of the patient consent process, and independent monitoring procedures must all be clearly defined. This information is then reviewed by FDA experts— typically doctoral staff or consultants who specialize in a given field, such as biostatistics




and clinical trial design—and must gain approval before the clinical trial is allowed to begin.

TABLE 1. Clinical Trial Designs

Trial design      Explanation
Noninferiority    - Used to show device outcomes are not significantly worse than the comparator (current standard of care)
                  - Novel device thought to offer other benefits: cost savings, ease of use, or fewer adverse events
Equivalency       - Used to show device outcomes are similar to the standard of care
Superiority       - Used to show device outcomes are better than the comparator
                  - Statistically challenging to achieve

There are typically 3 different trial designs to consider: noninferiority, equivalency, and superiority (Table 1; Figure). A noninferiority design is used to prove that the device in question is not significantly worse than a comparator; that is, the device performs within an acceptable limit of inferiority (ie, less than a clinically important difference). It is, by definition, a 1-sided test and is often used when the new treatment/device is believed to perform similarly to the standard of care but offers other advantages, such as cost savings, ease of use, or an improved adverse event profile. An equivalency trial is a 2-sided test; its aim is

to prove that the novel intervention is neither better nor worse than a comparator. It is not typically used in clinical trials. Lastly, a superiority trial strives to demonstrate clinical dominance; it is used when a meaningful clinical difference is thought to exist.

Irrespective of the type of trial design used in an IDE, basic statistical theory and concepts apply. Again, these concepts must meet FDA standards and gain approval prior to initiating the IDE. To review, the null hypothesis, alternate hypothesis, type I and II errors, and power all need to be considered. The null hypothesis (H0) is defined as the default/conservative assumption, made prior to conducting the trial and analysis, that no relationship exists (Table 2). In contrast, the alternate hypothesis (H1) is the assumption that a relationship or difference does exist. Both the null and alternate hypotheses are modified for the trial design used.

When testing these hypotheses, it is important to be aware of the 2 types of statistical error inherent to any trial design. Type I error (also referred to as α) is the false positive rate: the likelihood of incorrectly rejecting the H0. Practically, this amounts to concluding that a difference between study groups and/or devices exists when in actuality it does not. A type II error (also referred to as β) occurs when researchers incorrectly accept (or fail to reject) the H0, resulting in a false negative. In practice, this means failing to demonstrate superiority when it exists or, in equivalency and noninferiority trials, failing to demonstrate that a clinically important difference is absent. Power (defined as 1 – β)

FIGURE. Illustration of the 3 typical trial designs to consider: noninferiority, equivalency, and superiority.




TABLE 2. Hypothesis Testing by Clinical Trial Type

Noninferiority
  Null hypothesis (H0): The new device is worse than the comparator (current standard of care) by a clinically significant margin.
  Alternative hypothesis (H1): The new device is not worse than the standard by more than a clinically significant margin.

Equivalency
  Null hypothesis (H0): The new device is either better or worse than the standard by a clinically significant margin.
  Alternative hypothesis (H1): The new device is neither worse nor better than the standard by more than a clinically significant margin.

Superiority
  Null hypothesis (H0): The new device is not better than the standard of care.
  Alternative hypothesis (H1): The new device is better than the standard by a clinically significant margin.

TABLE 3. Types of Error

                    H0 is true                               H1 is true
Do not reject H0    True negative (P = 1 – α)                False negative, type II error (P = β)
Reject H0           False positive, type I error (P = α)     True positive, power (P = 1 – β)

is the complement of the type II error rate: the probability of correctly rejecting the H0 when the H1 is true (the true positive rate). Table 3 provides a breakdown of type I and II errors, power, and their relationships to each other.

Understanding these concepts is critical when reading and analyzing published FDA IDE trial data so that we can confidently inform our patients. This is especially important for addressing bias in these studies, which are often large, industry-sponsored investigations. When done correctly, large multicenter randomized controlled trials (RCTs) utilize independent contract research organizations and imaging labs, apply sound statistical principles, and adhere to rigorous controls to ethically and scientifically mitigate the potential impact of financial interest and bias.2,3 In the subsequent section, we use a recently published RCT from an IDE study as an illustrative case study and review many of these aforementioned concepts and topics.
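As a concrete illustration of these definitions, the following sketch (ours, not part of any trial protocol) estimates power and the type I error rate of a simple 1-sided two-proportion z-test by Monte Carlo simulation; the group size and success rates are arbitrary examples.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def simulated_power(p_treat, p_ctrl, n, alpha=0.05, trials=10_000):
    """Estimate, by simulation, how often a 1-sided two-proportion z-test
    rejects H0 (no difference) in favor of H1 (treatment is better)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)    # 1-sided critical value
    rejections = 0
    for _ in range(trials):
        x_t = rng.binomial(n, p_treat)          # successes, treatment arm
        x_c = rng.binomial(n, p_ctrl)           # successes, control arm
        pooled = (x_t + x_c) / (2 * n)
        se = np.sqrt(pooled * (1 - pooled) * 2 / n)
        if se > 0 and ((x_t - x_c) / n) / se > z_crit:
            rejections += 1
    return rejections / trials

# With a true underlying difference, the rejection rate estimates power
# (1 - beta); with identical groups, it estimates the type I error rate (alpha).
power_estimate = simulated_power(0.80, 0.65, n=100)
alpha_estimate = simulated_power(0.65, 0.65, n=100)
```

Running the two calls above shows the trade-off directly: the first rejection rate is the power against a real 15-point difference, while the second, computed under a true H0, hovers near the nominal α of 0.05.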

CASE STUDY: MOBI-C ARTIFICIAL CERVICAL DISC

In 2006, an IDE was initiated to study the safety and effectiveness of cervical total disc replacement (TDR) with the Mobi-C Cervical Disc (LDR Spine, Austin, Texas) controlled against the standard of care—anterior cervical discectomy and fusion (ACDF).2 The authors have extensive knowledge of this IDE


study, including the study design, statistical parameters, and analysis outcomes. The sponsor designed the IDE as a single study with 2 arms, enrolling concurrently at participating sites. All sites were trained on 1- and 2-level techniques and were allowed to randomize patients to TDR or ACDF using adaptive design methodology.2,3 This methodology allowed researchers to perform statistical analyses in a sequential manner, beginning with the original hypothesis (ie, noninferiority) and, if confirmed, continuing with additional and more stringent tests (ie, superiority) in order to elicit the most precise statistical outcome.4

Success was defined by a composite endpoint: (1) improvement in Neck Disability Index5 at 24 months compared to baseline; (2) no device failures requiring subsequent surgical intervention (revision, removal, reoperation, or supplemental fixation procedures); and (3) no major complications, defined as (a) degradation of neurological status, (b) a significant adverse event as determined by a clinical events committee, or (c) fusion (for TDR)/lack of fusion (for ACDF). A composite endpoint was used because defining success is both complicated and partially subjective. Metrics such as the Neck Disability Index and neurological symptoms were patient reported, whereas reoperation rates, adverse events, and radiographic features are clinical and objective. Furthermore, analyses of adverse events and radiographic findings were conducted independently. This multiperspective endpoint was therefore felt to be more accurate and representative of success than any single metric alone.

The primary goal of the study was to prove noninferiority for TDR compared to ACDF. Specific hypotheses and sample size assumptions can be found in Table 4. The sample size assumptions were developed by the sponsor and independent statisticians and were approved by the FDA prior to study initiation.
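Under the noninferiority assumptions summarized in Table 4, a per-group sample size can be approximated with a standard normal-approximation formula. The sketch below is our illustration of that general calculation, not a reproduction of the sponsor's actual methodology.

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n(pi_m, pi_c, delta, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a noninferiority comparison
    of two proportions (normal approximation, 1-sided alpha)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # critical value for type I error
    z_b = NormalDist().inv_cdf(power)       # critical value for power (1 - beta)
    variance = pi_m * (1 - pi_m) + pi_c * (1 - pi_c)
    margin = (pi_m - pi_c) + delta          # distance from the H0 boundary
    return ceil((z_a + z_b) ** 2 * variance / margin ** 2)

# Table 4 assumptions: 1-level (80% vs 75%) and 2-level (65% vs 60%), delta = 0.10
n_one_level = noninferiority_n(0.80, 0.75, 0.10)
n_two_level = noninferiority_n(0.65, 0.60, 0.10)
```

Note how the 2-level arm, with lower expected success rates (and hence larger binomial variance), requires more subjects per group than the 1-level arm even though the margin is identical.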
The estimated success rates were based on pilot data of the study device from a study in France, and the control group estimates were taken from the literature.4 Based on this IDE’s clinical trial design, the sponsor would be able to claim noninferiority provided that the 1-sided 95% lower confidence bound on the difference in success rates (Mobi-C minus ACDF) was no worse than –10%. Furthermore, if the results from this analysis were such that the 1-sided 95% lower confidence bound was also no worse than 0, then superiority could be claimed. By utilizing this preplanned sequential testing, the sponsor had the ability to analyze results in a manner that would elicit comparative advantages of the investigational treatment beyond the noninferiority outcome while maintaining proper scientific rigor. As highlighted in the FDA guidance document for adaptive designs in medical device studies, there are 2 key requirements:


TABLE 4. Mobi-C Study Sample Size Assumptions

Hypotheses (noninferiority)
  H0: πc ≥ πm + δ (inferiority)
  H1: πc < πm + δ (noninferiority)
  where:
    πm = proportion of successes in the experimental (TDR) group
    πc = proportion of successes in the control (ACDF) group
    δ = difference that is clinically insignificant

Sample size assumptions (1-level)
  α = 0.05    Probability of type I error
  β = 0.20    Probability of type II error; power = 1 – β
  πm = 80%    Estimated success rate for the TDR treatment group
  πc = 75%    Estimated success rate for the ACDF control group
  δ = 0.10    Difference considered clinically insignificant: πc – πm < δ

Sample size assumptions (2-level)
  α = 0.05    Probability of type I error
  β = 0.20    Probability of type II error; power = 1 – β
  πm = 65%    Estimated success rate for the TDR treatment group
  πc = 60%    Estimated success rate for the ACDF control group
  δ = 0.10    Difference considered clinically insignificant: πc – πm < δ

“control of the chance of erroneous conclusions (positive and negative)” and “minimization of operational bias.”6 To satisfy these requirements, the sponsor implemented sound statistical methodology, conducted analyses to examine potential bias, and utilized independent entities, where possible, to minimize its own influence. With respect to controlling the risk of false conclusions, the primary endpoint claims (noninferiority and superiority) were assessed in a sequential fashion to mitigate type I error. Similarly, the secondary endpoints were assessed sequentially using a gatekeeper approach to control the family-wise error rate. The family-wise error rate is the probability of making at least one type I error across all hypotheses when performing multiple hypothesis tests. Regarding internal validity, the primary endpoint results were stratified by site and financial interest status to assess both the impact of these covariates and the ability to pool results. Furthermore, to ensure consistency across all study sites, centralized independent entities were utilized where possible. For example, a single vendor was used for all radiographic review and interpretation. Similarly, with respect to major complications, an independent clinical events committee, composed of neurosurgeons and orthopedic surgeons, confirmed and classified all adverse events.
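The gatekeeper idea can be sketched in a few lines: hypotheses are tested in a prespecified order, each at the full α, and testing stops at the first failure, which caps the family-wise error rate at α. This is our simplified illustration of the general technique with invented p-values, not the trial's actual analysis plan.

```python
def fixed_sequence_test(ordered_p_values, alpha=0.05):
    """Fixed-sequence (gatekeeper) multiple testing: reject hypotheses
    in the prespecified order until one fails, then stop."""
    rejected = []
    for name, p in ordered_p_values:
        if p < alpha:
            rejected.append(name)
        else:
            break  # the gate closes; later hypotheses cannot be claimed
    return rejected

# Hypothetical p-values: a nominally significant later endpoint cannot be
# claimed once an earlier hypothesis in the sequence has failed.
claims = fixed_sequence_test([
    ("noninferiority", 0.001),
    ("superiority", 0.030),
    ("secondary endpoint A", 0.200),
    ("secondary endpoint B", 0.010),
])
# claims == ["noninferiority", "superiority"]
```

The design choice is the ordering itself: by committing to the sequence before unblinding, the sponsor spends no additional α on later tests, at the cost of forfeiting any claim downstream of a failed gate.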

RESULTS

In the 1-level study, 73.7% of TDR subjects achieved primary endpoint success compared to 65.3% in the ACDF group. This resulted in an 8.4% advantage for TDR, with the 95% lower confidence bound at –2.4%. Since this bound was greater than –10%, the study was able to claim noninferiority. However, since


the bound was less than 0%, superiority could not be concluded. When examining site-specific differences and financial interest status, no significant differences were found. In the 2-level study, 69.7% of TDR subjects achieved primary endpoint success compared to 37.4% in the ACDF group. The result was a 32.3% advantage for TDR, with a 95% lower confidence bound of 22.8%. Similarly, since this bound was greater than –10%, the study claimed noninferiority. Furthermore, since the bound was greater than 0%, superiority was also concluded. Again, no statistically significant differences were identified when comparing sites or financial interest.
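The decision rule applied to these results can be restated in code. The sketch below uses a simple Wald-type 1-sided lower confidence bound; the per-arm counts shown are hypothetical placeholders (they are not reproduced in this review), and the trial's exact interval method may have differed.

```python
from math import sqrt
from statistics import NormalDist

def lower_confidence_bound(p_t, n_t, p_c, n_c, conf=0.95):
    """1-sided lower confidence bound (Wald) on the success-rate
    difference p_t - p_c between two independent proportions."""
    z = NormalDist().inv_cdf(conf)
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return (p_t - p_c) - z * se

def verdict(bound, margin=-0.10):
    """Apply the study's sequential decision rule to the lower bound."""
    if bound > 0:
        return "superiority"
    if bound > margin:
        return "noninferiority"
    return "inconclusive"

# 2-level success rates from the text (69.7% vs 37.4%), hypothetical group sizes:
lb = lower_confidence_bound(0.697, 225, 0.374, 105)
```

With any plausible group sizes, the large 2-level difference keeps the lower bound well above 0, matching the superiority conclusion; the 1-level bound of –2.4% falls between –10% and 0, which is exactly the noninferiority-but-not-superiority region.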

REVIEWING HEALTHCARE ECONOMICS

Although neither required in nor usually included in FDA IDE trials, healthcare economic considerations are becoming critical in medical decision making and reimbursement practices. Because these analyses are intricate, they have typically been performed after RCT results are published, thereby allowing utilization of what is perceived to be the best data available.7-13 In some instances, quality of life (QoL) and economic considerations are now even being included in original FDA IDE proposals.14 It is important to reiterate, however, that because most economic valuations are based on IDE study data, their legitimacy is inextricably dependent on the statistical validity of the initial trial design.

The initial step in an economic analysis is understanding QoL metrics.15-19 According to Ament et al, “Health-related QoL is defined as the extent to which one’s usual or expected physical, emotional, and social well-being are affected by a medical condition or treatment.”7 Several tools exist for measuring QoL and are generically classified as either health status or


preference-based instruments. The former are designed by deconstructing QoL into several scored domains.14,17 The domains typically comprise multiple-choice questions about current symptoms and functioning. Common health status instruments include the Medical Outcomes Study Short Form (SF-36, SF-12, SF-6D), the EuroQol 5-dimension questionnaire (EQ-5D), and the Health Utilities Index Mark 3 (HUI-3).14,17-22 These differ from preference-based QoL instruments, which more directly estimate a patient’s current health state by generating a single QoL value: utility. Utility is expressed on a 0-to-1 ratio scale, where 1 represents the value of perfect health. A utility of 0 is often thought to represent death, but many contest this anchor, suggesting that health states worse than death exist.10 When utility (the valuation of a health state) is considered over time, the variable quality-adjusted life-year (QALY) is used to represent the strength of an individual’s preference.13,17-22 It is prudent to note that these instruments are proxies for determining patient preferences and health states; they are utilized largely for their ease of administration.

Incremental Cost-Effectiveness Ratio

Calculation of the incremental cost-effectiveness ratio (ICER) first requires both cost and QALY computations for each patient at each time point being evaluated. The ICER is then commonly determined either by the simple incremental calculation technique or by creating a more intricate decision model. In the incremental calculation technique, mean cost and QALY data are collected, and the aggregate difference between study arms is compared. Decision analytical modeling involves transforming QoL data into input parameters that inform a model about the likelihood of clinical events occurring to trial participants. Costs are associated with each likelihood probability.
Modeling is advantageous in its flexibility, allowing for timeframe extrapolation, subgroup analysis, and robust sensitivity analyses to test generalizability. Obvious disadvantages include the exactitude required to derive input parameters from the original trial data and the mathematical assumptions needed to generate the model.7,10 The result of both approaches is an ICER. The significance of a given ICER varies by country, culture, and socioeconomics, and all of these are matters of debate. From the above case study, the ICER comparing 2-level TDR to ACDF was 8518 $/QALY and –165 103 $/QALY from the healthcare and societal perspectives, respectively.7 A single-level cost-effectiveness study was not conducted.

Willingness-To-Pay

Despite the increasing prevalence of cost-effectiveness analysis in the literature, a standardized rubric for what is considered cost-effective has yet to be clearly defined.23,24 Interventions with an ICER below 20 000 $/QALY or greater than 100 000 $/QALY are commonly cited as being highly cost-effective or not cost-effective, respectively.23 The National Institute for Health and Clinical Excellence in the United Kingdom announced a “willingness-to-pay” threshold for medical treatments ranging between 40 000 and 60 000 $/QALY.23 Indeed, therapeutics


costing beyond these guidelines are frequently funded in the United States, and regulatory bodies have repeatedly recognized the inherent limitations in using $/QALY as a proxy for value in medicine.24
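The two-step calculation described above (incremental cost over incremental QALY, then comparison against a willingness-to-pay threshold) can be sketched as follows. The cost and QALY figures in the usage example are invented for illustration; the thresholds are the commonly cited, and debated, bands from the text.

```python
def icer(cost_new, qaly_new, cost_std, qaly_std):
    """Incremental cost-effectiveness ratio in $ per QALY gained."""
    d_qaly = qaly_new - qaly_std
    if d_qaly == 0:
        raise ValueError("no QALY difference; the ICER is undefined")
    return (cost_new - cost_std) / d_qaly

def classify(ratio, strong=20_000, weak=100_000):
    """Commonly cited (and debated) willingness-to-pay bands."""
    if ratio < strong:
        return "highly cost-effective"   # includes negative (dominant) ICERs
    if ratio <= weak:
        return "possibly cost-effective"
    return "not cost-effective"

# Hypothetical per-patient means for a new device vs the standard of care:
ratio = icer(cost_new=28_000, qaly_new=3.4, cost_std=24_000, qaly_std=3.0)
# ratio is about 10 000 $/QALY
```

A negative ICER, such as the –165 103 $/QALY societal-perspective figure above, indicates dominance: the intervention is both cheaper and more effective, so threshold comparisons are moot.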

DISCUSSION

Clinicians are uniquely situated to monitor the scientific literature for accuracy and bias. Our directive is always to deliver heavily scrutinized information to our patients, and we can only accomplish this consistently by understanding the methodology used to perform large, often pivotal, investigational and comparative studies. Given the current climate and emphasis on healthcare costs, it is also paramount to have a basic understanding of healthcare economics. Medical device research ought to be performed in such a way that innovative technologies that may provide health benefits to the population are not prevented from being utilized by arduous regulatory policies. Yet our patients also need protection from dangerous devices reaching the market prematurely, without proper vetting. As it stands, the United States has the most rigorous approval process in the world, and as a result, devices are frequently introduced into overseas markets first. Some argue that this leads to unnecessary delays in the United States, while others contend that there is not enough regulation in Europe and that what does exist is not transparent and is largely governed by private entities.24,25

Limitations

One major concern, both in the US and abroad, is that significant fiscal bias exists in industry-sponsored RCTs, calling many large IDE study results into question.26-29 As a result, high-quality evidence-based medicine has historically been difficult to practice in the surgical subspecialties. Yet, when these studies describe efforts to mitigate bias, careful consideration ought to be given to the techniques used and whether or not they were successful. Furthermore, in spine surgery specifically, very few US-based multicenter randomized trials of devices not sponsored by industry exist.29 This suggests that previous findings demonstrating larger effect sizes in industry-sponsored vs nonindustry-sponsored studies may actually be biased by a failure to take into account the marked differences in design and purpose.29 Based on the above considerations, the authors contend that a detailed understanding of the methodology and the measures used to mitigate bias offers more insight into the value of the information presented in a study than whether it was industry sponsored per se. We of course acknowledge that all should exercise caution when industry sponsorship is not paired with stringent trial design, sound statistical methodology, and the use of an independent contract research organization.


CONCLUSION

While large FDA IDE studies are often besieged by complex statistical considerations and calculations, it is fundamentally important that clinicians understand them on a practical level. This review was designed to break down the basic statistical theory behind these studies, illustrating the rigorous scientific and mathematical methods used to limit bias, and to deconstruct the complex economic analyses that often follow, especially as these issues become omnipresent in healthcare. Ideally, this breakdown will appeal not only to practitioners performing the procedures/interventions being analyzed in an IDE study but also to primary care providers and payers, who represent the backbone of specialty referrals and reimbursement, respectively. Maximizing the efficient use of societal resources while improving the health of our population is a responsibility we all share, and it starts with understanding how something is being tested and measured and whether the conclusions being reached are significant.

Disclosure

Dr Ament is a consultant for LDR Spine and received funding to support data analysis and preparation of the manuscript. Mr Mollan’s institution was paid by LDR Spine to conduct this clinical trial and prepare this manuscript. Dr Kim receives a consulting honorarium from LDR Spine to conduct healthcare economics research; he also receives royalties from LDR Spine. The other authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article.

REFERENCES

1. http://www.fda.gov/medicaldevices/deviceregulationandguidance/howtomarketyourdevice/investigationaldeviceexemptionide/default.htm. Accessed March 3, 2016.
2. Hisey MS, Bae HW, Davis R, Gaede S, Hoffman G, Kim K, et al. Multi-center, prospective, randomized, controlled investigational device exemption clinical trial comparing Mobi-C cervical artificial disc to anterior discectomy and fusion in the treatment of symptomatic degenerative disc disease in the cervical spine. Int J Spine Surg. 2014;1:8.
3. Davis RJ, Kim KD, Hisey MS, Hoffman GA, Bae HW, Gaede SE, et al. Cervical total disc replacement with the Mobi-C cervical artificial disc compared with anterior discectomy and fusion for treatment of 2-level symptomatic degenerative disc disease: a prospective, randomized, controlled multicenter clinical trial: clinical article. J Neurosurg Spine. 2013;19(5):532-545.
4. Beaurain J, Bernard P, Dufour T, Fuentes JM, Hovorka I, Huppert J, et al. Intermediate clinical and radiological results of cervical TDR (Mobi-C) with up to 2 years of follow-up. Eur Spine J. 2009;18(6):841-850.
5. Vernon H, Mior S. The Neck Disability Index: a study of reliability and validity. J Manipulative Physiol Ther. 1991;14(7):409-415.
6. Food and Drug Administration. Adaptive Designs for Medical Device Clinical Studies (issued May 18, 2016). Available at: http://www.fda.gov/downloads/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm446729.pdf. Accessed March 6, 2017.
7. Ament JD, Yang Z, Nunley P, Stone MB, Lee D, Kim KD. Cost utility analysis of the cervical artificial disc vs fusion for the treatment of 2-level symptomatic degenerative disc disease: 5-year follow-up. Neurosurgery. 2016;79(1):135-145.
8. Ament JD, Yang Z, Chen Y, Green RS, Kim KD. A novel quality of life utility index in patients with multilevel cervical degenerative disc disease: comparison of anterior cervical discectomy and fusion with total disc replacement. Spine (Phila Pa 1976). 2015;40(14):1072-1078.
9. Ament JD, Yang Z, Stone MB, Nunley P, Kim KD. Cost-effectiveness of cervical total disc replacement vs fusion for the treatment of 2-level symptomatic degenerative disc disease. JAMA Surg. 2014;149(12):1231-1239.


10. Ament JD, Kim KD. Standardizing cost-utility analysis in neurosurgery. Neurosurg Focus. 2012;33(1):E4;1-6.
11. Feeny DH, Torrance DW. Incorporating utility-based quality-of-life assessment measures in clinical trials: two examples. Med Care. 1989;27(suppl):S190-S204.
12. Gatchel RJ, ed. Compendium of Outcome Instruments for Assessment and Research of Neurosurgical Disorders. LaGrange, IL: North American Neurosurgery Society; 2001.
13. King JT, McGinnis KA, Roberts MS. Quality of life assessment with the Medical Outcomes Study Short Form-36 among patients with cervical spondylotic myelopathy. Neurosurgery. 2003;52(1):113-120; discussion 121.
14. Davis RJ, Errico TJ, Bae H, Auerbach JD. Decompression and Coflex interlaminar stabilization compared with decompression and instrumented spinal fusion for spinal stenosis and low-grade degenerative spondylolisthesis: two-year results from the prospective, randomized, multicenter, Food and Drug Administration investigational device exemption trial. Spine (Phila Pa 1976). 2013;38(18):1529-1539.
15. Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-Effectiveness in Health and Medicine. New York: Oxford University Press; 1996:1-413.
16. McDowell I, Newell C. Measuring Health: A Guide to Rating Scales and Questionnaires. 2nd ed. New York: Oxford Press; 1996.
17. Redelmeier DA, Detsky AS. A clinician’s guide to utility measurement. Prim Care. 1995;22(2):271-280.
18. Revicki DA, Kaplan DM. Relationship between psychometric and utility-based approaches to the measurement of health-related quality of life. Qual Life Res. 1993;2(6):477-487.
19. Richardson J. Cost utility analysis: what should be measured? Soc Sci Med. 1994;39(1):7-21.
20. Wakker P, Stiggelbout A. Explaining distortions in utility elicitation through the rank-dependent model for risky choices. Med Decis Making. 1995;15(2):180-186.
21. Weinstein MC, Siegel JE, Gold MR, et al. Recommendations of the Panel on Cost-Effectiveness in Health and Medicine. JAMA. 1996;276(15):1253-1258.
22. Vance D. Financial Analysis and Decision Making: Tools and Techniques to Solve Financial Problems and Make Effective Business Decisions. New York: McGraw-Hill, Inc.; 2003:99.
23. McCabe C, Claxton K, Culyer AJ. The NICE cost-effectiveness threshold: what it is and what that means. Pharmacoeconomics. 2008;26(9):733-744.
24. Gottlieb S. How the FDA could cost you your life. Wall Street Journal. 2011. Available at: http://online.wsj.com/articles/SB10001424052970204833045765197200095602270. Accessed May 16, 2016.
25. Cohen D, Billingsley M. Europeans are left to their own devices. BMJ. 2011;13:342.
26. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71-72.
27. Howes N, Chagla L, Thorpe M, et al. Surgical practice is evidence based. Br J Surg. 1997;84(9):1220-1223.
28. Ament JD, Black P. Levels of evidence in medical publications. Asian J Neurosurg. 2009;3:1-4.
29. Cher DJ, Capobianco RA. Spine device clinical trials: design and sponsorship. Spine J. 2015;15(5):1133-1140.

COMMENTS

This article is timely, easy to read, and contains information on subjects that are becoming increasingly relevant. As such, not much commentary is needed on the article per se. However, it does give one the opportunity to reflect on some issues. This article mentions a “willingness to pay” of $50 000/QALY. Of course, this is arbitrary, and actually quite an old figure, as discussed by Neumann and colleagues.1 However, I contend that the underlying premise, ie, that medical care “takes” 18% of gross national product (GNP) and should be kept as low as possible, is false: medicine contributes 18% to GNP! If one accepts that, costs become much less of an issue than effectiveness in the grand scheme of things. On the other hand, in the small scheme of things, I have always wondered what individuals would choose if given the opportunity to be treated with a drug that will lengthen their lives by 4 months at a


cost of $100 000, or receive $50 000 cash to go on vacation or give to their children. Just food for thought and my 5 cents' worth.

J. Paul Muizelaar
Sacramento, California

1. Neumann PJ, Cohen JT, Weinstein MC. Updating Cost-Effectiveness-The Curious Resilience of the $50,000-per-QALY Threshold. N Engl J Med. 2014; 371:796-797.

The authors provide a succinct primer on the design of complex clinical trials along with basic definitions of key terms. A clear


understanding of these terms is essential in order to coherently interpret trial results and to determine how best to incorporate those results into our practice of neurosurgery. The authors are also correct to point out that few pivotal trials of medical devices conducted in the US are funded independently, and so we must be vigilant to ensure that industry-sponsored trials are conducted with the highest possible scientific and ethical standards. In my view, this paper is mandatory reading for all of us.

Ron L. Alterman
Boston, Massachusetts
