
Preffi 2.0: a quality assessment tool

Gerard R.M. Molleman, Machteld A. Ploeg, Clemens M.H. Hosman and Louk H. M. Peters

Abstract: The findings of many metastudies into the effects of health promotion programmes indicate that there is still much room for improvement in the quality of these programmes. Insights gained from research are rarely applied in practice. Practitioners and policymakers often find it hard to assess the value of the many and sometimes contradictory research findings, partly because the necessary contextual information is usually lacking. Practical considerations force them to respond to specific problems at short notice in the form of programmes that are as effective as possible. Hence, effective health promotion requires not only the dissemination of effective programmes but also insights into principles of effectiveness and the way professionals use these insights. It is against this background that the Netherlands Institute for Health Promotion and Disease Prevention (NIGZ) has developed and implemented the Preffi instrument. Preffi consists of a set of guidelines with items relevant to the effectiveness of health promotion and prevention projects, reflecting scientific and practical knowledge about effect predictors. This article describes the systematic, seven-step development process of the second version of the instrument, Preffi 2.0, a process in which scientists and practitioners were closely involved throughout. The article also describes the Preffi model and its scoring method. The draft version of Preffi 2.0 was tested for usefulness among 35 experienced practitioners from a range of health promotion institutes. They were asked to use the draft version to assess two project descriptions and to comment on their experiences using Preffi 2.0. They gave the instrument an average overall mark of 7.7 on a scale of 10, and the large majority of them evaluated the instrument as valuable, complete, clear, well-organised and innovative. The findings of this trial implementation were used to construct the definitive version of Preffi 2.0.
To an experienced user, applying Preffi to assess a project takes less than an hour. Preffi is used as a diagnostic quality assurance instrument at various stages of a project, either to critically evaluate one’s own project or to comment on projects proposed by others. Assessing other people’s projects may be difficult if the necessary information is lacking or unclear. A supplementary discussion with the project manager is always required. Users have commented that applying Preffi to a project yields a balanced and useful assessment, as well as a clear overview of points in the project that could be improved.


This manuscript was submitted on June 10, 2004. It received blind peer review and was accepted for publication on September 26, 2005.

Keywords

• quality • effectiveness • health promotion

Gerard R.M. Molleman, PhD NIGZ Centre for Knowledge and Quality P.O. Box 500 NL-3440 AM Woerden The Netherlands Phone: +31 348 437621 E-mail: gmolleman@nigz.nl

Machteld A. Ploeg, MA NIGZ Finance department Woerden, The Netherlands

Clemens M.H. Hosman PhD Prevention Research Centre, Dept. of Health Education and Promotion University of Maastricht, and Dept. of Clinical Psychology University of Nijmegen The Netherlands

Louk H.M. Peters, MA NIGZ Woerden, The Netherlands

What makes health promotion programmes effective? How can projects be designed and implemented so as to maximise their chances of being effective?

Over the past 20 years, many researchers have tried to answer these questions in studies focusing on meta-analysis of the effectiveness of programmes, preferably assessing effectiveness by means of randomised controlled trials (RCTs). These studies have found great variation in the extent to which health promotion programmes are evidence-based (Durlak and Welsh, 1997; Kok et al., 1997; Boddy, 1999). Although significant effects of many programmes have been proven in various countries, the average size of such effects has so far been moderate. And whereas many programmes were found to be effective or moderately effective, many others were poorly effective or even ineffective. In addition, many of the effective programmes have proved to be effective only for part of the target group, only for a limited number of objectives or only in the short or medium term, or to lose some or all of their effectiveness in a different setting. This means that there is much room for improvement in the quality and effectiveness of programmes. Although the various studies and metastudies into the effects of health promotion programmes have yielded many new insights, their findings are rarely applied in practical health promotion. Practitioners find it hard to assess the value of the many and sometimes contradictory research findings. In addition, researchers often fail to supply details about the contextual conditions (in terms of available time, funding and support) and circumstances (local context; social, cultural and economic influences; timing) in which the reported effects were achieved, whereas such details are required to decide whether and how the intervention needs to be adjusted to allow its successful repetition or implementation in a different situation. Researchers and practitioners often work to different time-scales.
Attempts to improve effectiveness through controlled studies require long-term investments and involve long periods between programme development and the provision of feedback on effects. This is an important but extremely slow process. Policy-makers and practitioners, however, often need faster feedback as well, since they are asked to respond to specific problems at short notice, in the form of preventive programmes and interventions that yield the maximum effect. In short, what they need is to have insights gained from research translated into practical guidelines which fit in with the context in which they have to work. This means that stimulating effective health promotion requires a combination of: 1. developing and disseminating programmes that have proven to be effective; and 2. insights into principles of effectiveness that influence the effectiveness of health promotion programmes in practice, and the way professionals use such insights.

It was against this background that the Netherlands Institute for Health Promotion and Disease Prevention (NIGZ) initiated the Preffi project in the mid-1990s. The project consists of a long-term process attempting to improve the effectiveness of health promotion efforts by stimulating systematic and critical reflection on programmes and projects. The core element of this project is the Preffi instrument, a set of guidelines on items that help determine the effectiveness of health promotion and prevention projects. These items reflect the available scientific knowledge on effect predictors, as well as insights derived from a critical debate with practitioners about such effect predictors (practical expertise).

This article describes the development and content of the second version of this instrument, Preffi 2.0. It is based on data relating to the development process, experiences and implementation which were systematically gathered among users of the instrument at various stages of the process (Molleman and Hosman, 2003; Molleman et al., 2004; Molleman, 2005).

Developing Preffi

The systematic development of the Preffi guidelines started in 1994, on the basis of a survey of research findings, an exploration of methods for guideline development and an extensive round of interviews with practitioners. This process resulted in the introduction of the Health Promotion Effectiveness Fostering Instrument, Preffi 1.0, in 1995 (Molleman, 1999).

In 2000, a joint task force of NIGZ and the Prevention Research Centre at Nijmegen University started work to develop a second version of Preffi, building on the experience gained with Preffi 1.0 since its introduction. The team tried to improve four aspects: content, norms, format and positioning.

As regards content, the aim was to incorporate into the second version all recent insights provided by health promotion research and practice. In addition, the Preffi criteria had to be operationalised in such a way as to allow users to compare their own projects against a normative standard and to allow third parties to assess a project. The format developed for the new Preffi version was intended to do justice to the cyclical and iterative nature of many health promotion projects. Finally, it was deemed necessary to clarify Preffi’s position as a quality assurance instrument aimed at identifying and improving conditions for the effectiveness of a project. Therefore, it was decided to change the name to Health Promotion Effect Management Instrument.

In developing Preffi 2.0, much effort was invested in developing a solid scientific basis and in operationalising all Preffi criteria. To achieve these aims, we collaborated closely with a Scientific Advisory Committee, representing the various bodies engaged in health promotion research in the Netherlands. In addition, a Practitioners’ Advisory Committee, composed of 53 Dutch health promotion professionals, ensured that the users’ perspective would not be neglected.

The development process of Preffi 2.0 was based on a model consisting of a number of systematic steps, and used various methods of formative evaluation, viz., product-oriented, expert-oriented and target-group-oriented methods (Jong and Schellens, 2000). In product-oriented methods, it is the designers themselves who indicate the aspects of the draft design on which they base their evaluation, while in the other two methods, the design is submitted to external experts and to members of the target group, respectively. The Preffi 2.0 design process started with product-oriented methods (steps 1-5), followed by expert-oriented methods (steps 4-6) and target-group-oriented methods (steps 6-7).

Step 1. The Preffi task force first tried to define the instrument’s position within the general context of quality management. It decided that, as regards output, Preffi should focus on the effectiveness, relevance and coverage of programmes, rather than on other output characteristics such as cost-effectiveness or client satisfaction, at least at this stage of its development. On the input side, Preffi’s focus was to be on the actual operational processes involved in the design and implementation of programmes. Major contextual variables such as structural aspects of organisations (infrastructure, institutional policies, staffing, collaborative relations, etc.) were to be included in Preffi as contextual conditions for the actual operational processes.

Step 2. It was decided that the general structuring principle for the operational processes would be that of a systematic approach, since there is broad consensus that a systematic approach improves the effectiveness of prevention programmes (Bartholomew et al., 2001; Glanz et al., 2002). Together with the contextual conditions dimension, the four stages in a systematic approach (analysis, development, implementation and evaluation) were seen as individual dimensions, and the main effect predictors of each of these five dimensions were identified and explicated.

Step 3. Originally, five criteria were defined for the selection of effect predictors, viz., relevance, scientific evidence, generalisability, modifiability and measurability. In the process of applying these criteria, we found that virtually all effect predictors in Preffi 1.0 met the criteria of ‘generalisability’ and ‘modifiability’, which meant that these criteria had hardly any discriminatory value for the selection of the effect predictors. Nor did ‘measurability’ discriminate, since any effect predictor can in principle be made as clear and measurable as possible for the target group of practitioners.

This left the criteria of ‘relevance’ and ‘evidence’. The task force decided that the relevance of an effect predictor for the effectiveness of a programme was the most important criterion. Relevance refers to the proven or assumed impact of a particular characteristic or condition of a project on its effectiveness. Each effect predictor was given a score for scientific evidence, to assess the ‘evidence-based’ character of Preffi (Peters et al., 2003). A broad definition of the concept of ‘evidence’ was used, for one thing because the predictive value of some effect predictors that are deemed important is hard to prove in controlled quantitative research (which is usually regarded as the highest form of evidence). The principles of a systematic approach, for instance, are usually based on logical argument, consensus and the findings of multiple case studies. There is still no internationally accepted ranking of types of evidence, and support is growing for the view that methods of verification other than controlled experimental research are also legitimate (Koelen et al., 2001; McQueen and Anderson, 2002).

Figure 1 Preffi 2.0

Step 4. The results of the application of these selection criteria to Preffi 1.0 were first discussed within the task force, then with the individual members of the Scientific Advisory Committee and then in a plenary meeting with the entire committee. The outcome of these consultations was used to define a number of Preffi 1.0 criteria more precisely, though no criteria were removed. The consultations also covered ideas for the addition of new effect predictors. These ideas were based on new insights derived from health promotion research and practice, as well as on the experience gained with Preffi 1.0.

Step 5. The selection of effect predictors thus obtained was discussed in detail in an explanatory document that described the nature of each predictor, its relation with effectiveness (relevance) and the available evidence for this relation. A second document provided a further operationalisation of each predictor in terms of specific questions, as well as the corresponding norms, based on argued consensus between the task force and the scientific advisory committee. This should help the intended users, that is, the practitioners, decide to what extent their projects pay adequate attention to each predictor, while also proposing specific aspects that need to be improved.

Step 6. In addition to the Scientific Advisory Committee, the Practitioners’ Advisory Committee was also involved in the various consultations and asked to give its opinion on the explanatory and operationalisation documents for Preffi 2.0. Their feedback was used to introduce various adjustments, such as subdividing effect predictors, clarifying explanations and adjusting the operationalisation and norms.

Step 7. A draft version of Preffi 2.0 (consisting of a scoring form, the explanatory document, the operationalisation document and a user manual) was tested for practicability among 35 experienced practitioners from a range of institutes. They were asked to use the draft version of Preffi 2.0 to assess two project descriptions, and then to complete a questionnaire (including both open and closed questions) about their experiences in applying the instrument and their opinions about the various elements of the draft version. Supplementary interviews were held with 10 of the practitioners. The findings of this study were then used to adjust certain aspects of the content and layout of the draft version in the definitive version of Preffi 2.0. In addition, they were used to adjust ideas on the use of the instrument.

Content of Preffi 2.0

Preffi 2.0 consists of an Assessment Package (Molleman et al., 2003), which includes:

• the Scoring form, which allows users to allocate scores on a list of quality criteria (= effect predictors), as well as providing room to identify points to be improved and showing a visual representation of the Preffi 2.0 model;
• a document called Operationalisation and Norms, which provides operationalisations for each of the quality criteria using one or more yes/no-type questions and norms (categories of scores) based on the answers to the operationalisation questions;
• a User Manual, which explains Preffi 2.0 and provides instructions for the use of the instrument and each of its components.

This is supplemented by an extensive document called Explanatory Guide, which provides further details on the quality criteria (effect predictors) and discusses their importance for the assessment of effectiveness and the available evidence for their impact on effectiveness (including literature references) (Peters et al., 2003).

The Preffi 2.0 model

The main conceptual elements are represented in the Preffi 2.0 model (see Figure 1), which emphasises the dynamic nature of health promotion projects, the permanent interaction between content and contextual conditions and the cyclical nature of the process of health promotion.

The central part of the model shows the steps involved in the systematic design and implementation of a project, that is, the actual process of health promotion: analysing the problem, choosing and designing the right intervention, implementing it and evaluating it. These process steps are shown in lozenge shapes because each step first involves looking at a wide range of options (divergence) and then choosing from among these options on the basis of content aspects and contextual conditions (convergence). To give an example: the analysis of the problem ideally involves identifying all potential causes/determinants, after which a choice is made of determinants to be addressed in the intervention, based on substantive arguments, relevance, modifiability and practicability. The lozenges overlap because the selection process should always take the consequences and options in the next stage into consideration. For instance, in choosing a particular intervention, designers should already take account of the opportunities for implementation. The evaluation relates to all moments in the process where choices have to be made, which is why arrows point to all these moments.

Figure 2 Health Promotion Effect Management Instrument version 2.0 (PREFFI 2.0): Quality Criteria

Contextual conditions

1 Contextual conditions and feasibility
1.1 Support/commitment
1.2 Capacity
1.3 Leadership
1.3a Expertise and characteristics of project manager
1.3b Focal points for leadership

Analysis

2 Problem analysis
2a Nature, severity, scale of problem
2b Distribution of problem
2c Problem perception by stakeholders

3 Determinants of (psychological) problem, behaviour and environment
3a Theoretical model
3b Contributions of determinants to problem, behaviour or environmental factor
3c Amenability of determinants to change
3d Priorities and selection

Selection and development of intervention(s)

4 Target group
4a General and demographic characteristics of target group
4b Motivation and possibilities of target group
4c Accessibility of target group

5 Objectives
5a Objectives fit in with analysis
5b Objectives are specific, specified in time and measurable
5c Objectives are acceptable
5d Objectives are feasible

6 Intervention development
6.1 Rationale of the intervention strategy
6.1a Fitting strategies and methods to objectives and target groups
6.1b Previous experiences with intervention(s)
6.2 Duration, intensity and timing
6.2a Duration and intensity of intervention
6.2b Timing of intervention
6.3 Fitting to target group
6.3a Participation of target group
6.3b Fitting to ‘culture’
6.4 Effective techniques (recommended)
6.4a Room for personalised approach
6.4b Feedback on effects
6.4c Use of reward strategies
6.4d Removing barriers to preferred behaviour
6.4e Mobilising social support/commitment
6.4f Training skills
6.4g Arranging follow-up
6.4h Goal-setting and implementation intentions
6.4i Interactive approach
6.5 Feasibility in existing practice
6.5a Fitting to intermediary target groups
6.5b Characteristics of implementability of intervention(s)
6.6 Coherence of interventions/activities
6.7 Pretest

Implementation

7 Implementation
7.1 Choice of implementation strategy fitted to intermediaries
7.1a Mode of implementation: top-down and/or bottom-up
7.1b Fitting implementation interventions to intermediaries
7.1c Appropriateness of supplier to intermediaries
7.2 Monitoring and generating feedback
7.3 Incorporation in existing structure

Evaluation

8 Evaluation
8.1 Clarity and agreement on principles of evaluation
8.2 Process evaluation
8.3 Effect evaluation
8.3a Has a change been measured?
8.3b Was a change caused by the intervention?
8.4 Feedback to stakeholders

The effectiveness of interventions, and the choices that can be made within those interventions, are partly determined by contextual conditions like the support for the project, the capacity available for its implementation and the quality of the leadership provided by the project manager. The arrows pointing inwards indicate the moments when choices have to be made. The arrow pointing at the choice of an intervention is larger, to indicate that this is the point where the influence of contextual conditions is often particularly strong. The 39 quality criteria (effect predictors) included in Preffi 2.0 have been subdivided into eight clusters (see Figure 2). Clusters 2-6 relate mostly to the systematic development of interventions, while clusters 1, 7 and 8 relate particularly to aspects of implementation. The basic structure is the same as that of Preffi 1.0, although certain clusters have been redefined with respect to one another. At criterion level, Preffi 2.0 has been thoroughly revised compared to the first version. The nature and designation of a number of criteria have been changed, criteria and clusters have been made more rational and content-based, and the number of criteria has been reduced from 49 to 39. These changes reflect the new scientific and practical developments and insights that have arisen since 1995.

Scoring method

Each of the Preffi quality criteria has been operationalised in one or more specific yes/no questions. The answers to these questions allow users to rate the degree to which an intervention meets a particular criterion as ‘weak’, ‘moderate’ or ‘strong’ (see the example in Box 1). This operationalisation aims to provide Preffi users with an instrument that allows them to assess a programme as objectively as possible. Nevertheless, the nature of a particular criterion or of the questions operationalising it may not always allow an objective assessment. A rough distinction can be made between three types of criteria. The first type is that of criteria for which the questions can be unequivocally answered, such as ‘Is it known to what extent the target group does indeed perceive the problem as a problem?’ (2.3). The second type is that of criteria for which the questions cannot be answered so straightforwardly, because they require an assessment of certain aspects, such as the expertise of the project manager (1.3.a) or the question whether the target group perceives the intervention as compatible with their culture (6.3.b). In such cases, users are advised to seek peer consensus on the answer. Finally, there are a number of criteria which basically require an expert opinion. An example is the operationalisation of ‘theoretical model’ (3.1), which includes not only the question whether a theoretical model is being used, but also whether it has been made plausible that the model chosen is suitable for application in the particular situation.

A number of Preffi criteria offer the option ‘not assessable’; these are particularly the criteria that are difficult for third parties to assess on the basis of the documentary evidence provided, such as the expertise and characteristics of the project manager (1.3.a). For the other criteria, a lack of information is assessed as ‘weak’. The scoring form allows each cluster to be given an overall mark between 1 and 10, which is composed from the individual criterion marks within the cluster. This can ultimately result in an overall mark for the project as a whole, based on the assessments for the individual clusters. The overall mark need not be a simple average, as different weights can be allocated to certain criteria or clusters. The back of the scoring form provides room to enter the cluster scores in graphic format, and to indicate points to be improved and the actions required to achieve this improvement.
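As a concrete illustration of this aggregation, the weighting of cluster marks could be sketched as follows. This is a hypothetical sketch only: Preffi itself is a paper scoring form, and the cluster marks and weights below are illustrative assumptions, not values from the instrument.

```python
# Hypothetical sketch of the aggregation step: each Preffi cluster receives
# a mark between 1 and 10, and the overall project mark is a weighted
# average of the cluster marks. Weights are illustrative assumptions.

def overall_mark(cluster_marks, weights=None):
    """Combine cluster marks (1-10) into one overall project mark."""
    if weights is None:
        weights = {name: 1.0 for name in cluster_marks}  # simple average
    total = sum(weights[name] for name in cluster_marks)
    weighted = sum(mark * weights[name] for name, mark in cluster_marks.items())
    return round(weighted / total, 1)

# Illustrative marks for the eight Preffi 2.0 clusters
marks = {
    "contextual conditions": 7, "problem analysis": 8,
    "determinants": 6, "target group": 7, "objectives": 8,
    "intervention development": 6, "implementation": 7, "evaluation": 5,
}
print(overall_mark(marks))  # unweighted average of the eight clusters
# Giving extra weight to intervention development, for example:
print(overall_mark(marks, {**{k: 1.0 for k in marks},
                           "intervention development": 2.0}))
```

The choice of weights is deliberately left open here, mirroring the instrument's position that the overall mark need not be a simple average.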

Experiences with Preffi 2.0

The trial run among 35 practitioners (step 7), which can be regarded as a pretest for Preffi 2.0, yielded a favourable assessment of the draft version (Meurs, 2002; Molleman, 2005). Of the 28 respondents who had worked with Preffi 1.0 before, 25 regarded Preffi 2.0 as an improvement, particularly because of its improved underpinning and the operationalisation of the quality criteria. The instrument was given an average overall mark of 7.7 out of 10, and was evaluated by the large majority as valuable, complete, clear, well-organised and innovative. Most respondents reported Preffi 2.0 to be useful for themselves (83%) and for colleagues (89%), both for project development (79%) and for project evaluation (85%). They regarded Preffi 2.0 as difficult and long, rather than as easy and short. They also reported that the time they had to invest in applying it decreased with successive applications: the first project required an average of 113 minutes to assess, the second 85 minutes. Practice has since shown that an experienced Preffi 2.0 user can assess a project fully within one hour.

It has become clear that people use Preffi at various stages of projects, both to scrutinise their own projects and as a basis for discussing those of colleagues. Assessing other people’s projects can be difficult if information is lacking or unclear. In addition to an assessment of the documentary evidence provided, a supplementary discussion with the project manager has been found to be always required. Users feel that this then yields a balanced and useful overview of points in the project that need to be improved. Some practitioners involved in community development projects at first considered the draft Preffi 2.0 too linear, top-down and expert-driven to be very useful for their work. However, in interactions with the task force their opinion changed. The cyclical form of Preffi 2.0, and the emphasis in several Preffi items on the importance of a broad analysis of the problem and of participation of the target group, convinced them of the usefulness of Preffi 2.0 for community projects. In addition, the user manual pays extra attention to community projects: it states that this type of project requires intensive collaboration with and support from the members of the community, with special emphasis on their preferences and needs. The opinions expressed in the trial study have been confirmed by the experiences with our Preffi 2.0 implementation programme, in which more than 400 health promotion specialists have already attended courses.

Conclusions

Effective health promotion requires the development and dissemination of programmes whose effectiveness has been proven, as well as knowledge about the principles that influence the effectiveness of health promotion programmes in practice. Preffi is a dynamic learning system that could assist this process in various ways. The instrument combines scientific and practical knowledge on principles of effectiveness in health promotion. Preffi’s primary function, and the one for which it is best suited, is that of a diagnostic quality assurance instrument that helps users to identify possible improvements to their projects. In addition, it could be used as an instrument for the selection of projects, though this function needs further research and development. A study has been conducted to examine how many assessors would be needed to obtain reliable conclusions (Molleman, 2005).

The Preffi instrument is still being developed further, in that new versions are regularly produced and its practicability, reliability and validity are continuously being improved. Dutch practitioners have expressed the opinion that the use of Preffi to assess their own and each other’s projects actually results in improved project quality. We are currently trying to corroborate this opinion by means of research.

Box 1

An example of the operationalisation and norms for a Preffi criterion

6.3.a Participation of the target group

Operationalisation:

1. In the case of interventions developed elsewhere (e.g. at national level): has the general target group at least been consulted while the intervention was being developed?
2. For any project: has the specific target group for the present project (e.g. residents of the target district) been at least consulted while the intervention was being developed or before the model intervention was selected?
3. For any project: in view of the nature of the project, has the target group participated sufficiently in the development or selection of the intervention?

Norms:

• Weak: question 1 = no or not applicable, and question 2 = no (making question 3 irrelevant)
• Moderate: question 1 and/or 2 = yes, and question 3 = no
• Strong: question 1 and/or 2 = yes, and question 3 = yes
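These norms amount to a simple decision rule. The sketch below is a hypothetical illustration of that logic for criterion 6.3.a; the function name and the True/False/None encoding of the answers are assumptions for illustration, not part of the instrument.

```python
# Sketch of the norm logic for Preffi criterion 6.3.a (participation of
# the target group). Answers are encoded as True (yes), False (no) or
# None (not applicable); this encoding is an assumption for illustration.

def norm_6_3a(q1, q2, q3):
    """Return 'weak', 'moderate' or 'strong' for criterion 6.3.a."""
    if not (q1 is True or q2 is True):
        return "weak"  # q1 = no/not applicable and q2 = no; q3 irrelevant
    return "strong" if q3 is True else "moderate"

print(norm_6_3a(None, False, None))   # weak
print(norm_6_3a(True, False, False))  # moderate
print(norm_6_3a(False, True, True))   # strong
```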
