

– APA style

– Must include 2 references: the one attached and one that you find yourself. The second reference must be **peer-reviewed**, from a **credible source**, no more than 5 years old, and must have a DOI.

– One page minimum

Scenario: You have just mentored a group of medical-surgical nurses on your unit through an Evidence-Based Project (EBP) aimed at addressing the Hospital-Acquired Pressure Injuries (HAPI) rate. You want to disseminate the results quickly to let the world know of your team’s successes. The team recognizes that the results need to be disseminated internally and that an abstract should be submitted to a local, state, or national nurses’ organization/conference.

  • Describe the next steps to disseminate this evidence internally.
  • Discuss how a master's-prepared nurse would determine the best place to disseminate this information outside of the organization (externally).

PU38CH01-Brown ARI 17 March 2017 16:23

An Overview of Research and Evaluation Designs for Dissemination and Implementation

C. Hendricks Brown,1 Geoffrey Curran,2

Lawrence A. Palinkas,3 Gregory A. Aarons,4

Kenneth B. Wells,5 Loretta Jones,6 Linda M. Collins,7

Naihua Duan,8 Brian S. Mittman,9 Andrea Wallace,10

Rachel G. Tabak,11 Lori Ducharme,12

David A. Chambers,13 Gila Neta,13 Tisha Wiley,14

John Landsverk,15 Ken Cheung,16

and Gracelyn Cruden1,17

1 Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611; email: [email protected]
2 Division of Health Services Research, Psychiatric Research Institute, University of Arkansas for Medical Sciences, Little Rock, Arkansas 72205; email: [email protected]
3 Department of Children, Youth and Families, School of Social Work, University of Southern California, Los Angeles, California 90089; email: [email protected]
4 Department of Psychiatry, University of California, San Diego, School of Medicine, La Jolla, California 92093; email: [email protected]
5 Center for Health Services and Society, Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, California 90024; email: [email protected]
6 Healthy African American Families, Los Angeles, California 90008; email: [email protected]
7 The Methodology Center and Department of Human Development & Family Studies, Pennsylvania State University, University Park, Pennsylvania 16802; email: [email protected]
8 Department of Psychiatry, Columbia University Medical Center, Columbia University, New York, NY 10027; email: [email protected]
9 VA Center for Implementation Practice and Research Support, Virginia Greater Los Angeles Healthcare System, North Hills, California 91343; email: [email protected]
10 College of Nursing, The University of Iowa, Iowa City, Iowa 52242; email: [email protected]
11 Prevention Research Center, George Warren Brown School, Washington University, St. Louis, Missouri 63105; email: [email protected]
12 National Institute of Alcohol Abuse and Alcoholism, National Institutes of Health, Bethesda, Maryland 20814; email: [email protected]
13 Division of Cancer Control and Population Sciences, National Cancer Institute, Rockville, Maryland 20850; email: [email protected], [email protected]
14 National Institute on Drug Abuse, National Institutes of Health, Bethesda, Maryland 20814; email: [email protected]
15 Oregon Social Learning Center, Eugene, Oregon 97401; email: [email protected]
16 Mailman School of Public Health, Columbia University, New York, NY 10032; email: [email protected]
17 Department of Health Policy and Management, University of North Carolina, Chapel Hill, North Carolina 27514; email: [email protected]





Annu. Rev. Public Health 2017. 38:1–22

The Annual Review of Public Health is online at publhealth.annualreviews.org

https://doi.org/10.1146/annurev-publhealth-031816-044215

Copyright © 2017 Annual Reviews. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 (CC-BY-SA) International License, which permits unrestricted use, distribution, and reproduction in any medium, provided that any derivative work is made available under the same, similar, or a compatible license. See credit lines of images or other third-party material in this article for license information.

Keywords

implementation trial, scale up, adoption, adaptation, fidelity, sustainment

Abstract

The wide variety of dissemination and implementation designs now being used to evaluate and improve health systems and outcomes warrants review of the scope, features, and limitations of these designs. This article is one product of a design workgroup that was formed in 2013 by the National Institutes of Health to address dissemination and implementation research, and whose members represented diverse methodologic backgrounds, content focus areas, and health sectors. These experts integrated their collective knowledge on dissemination and implementation designs with searches of published evaluation strategies. This article emphasizes randomized and nonrandomized designs for the traditional translational research continuum or pipeline, which builds on existing efficacy and effectiveness trials to examine how one or more evidence-based clinical/prevention interventions are adopted, scaled up, and sustained in community or service delivery systems. We also mention other designs, including hybrid designs that combine effectiveness and implementation research, quality improvement designs for local knowledge, and designs that use simulation modeling.

INTRODUCTION

Medicine and public health have made great progress using rigorous randomized clinical trials to determine whether an intervention is efficacious. The standards set by Fisher (40), who laid the foundation for experimental design first in agriculture, and by Hill (58), who developed the randomized clinical trial for medicine, provided a unified approach to examining the efficacy of an individual-level intervention versus control condition or the comparative effectiveness of one active intervention against another. Investigators have made practical modifications to the individual-level randomized clinical trial to test for program or intervention effectiveness under realistic conditions in randomized field trials (41), as well as for interventions delivered at the group level (77), for multilevel interventions (14, 15), for complex (35, 70) or multiple component interventions (32), and for tailoring or adapting the intervention to a subject’s response to targeted outcomes (32, 74) or to different social, physical, or virtual environments (13). To test efficacy or effectiveness, researchers now have a large family of designs that randomize across persons, place, and time (or combinations of these) (15), as well as designs that do not use randomization, such as pre-post comparisons, regression discontinuity (106), time series, and multiple baseline comparisons (5). Although many of these designs rely on quantitative analysis, qualitative methods can also be used by themselves or in mixed-methods designs that combine qualitative and quantitative methods to precede, confirm, complement, or extend quantitative evaluation of effectiveness (83). Within this growing family of randomized, nonrandomized, and mixed-methods designs, reasonable consensus has grown across diverse fields about when certain designs should be used and which sample size requirements and design protocols are necessary to maximize internal validity (43, 87).

Dissemination and implementation research represents a distinct stage, and the designs for this newer field of research are currently not as well established as are those for efficacy




and effectiveness. A lack of understanding of the full range of these designs has impeded the development of dissemination and implementation science and practice. Dissemination and implementation ultimately aim to improve the adoption, appropriate adaptation, delivery, and sustainment of effective interventions by providers, clinics, organizations, communities, and systems of care. In public health, dissemination and implementation research is intimately connected to understanding how the following seven types of interventions can be delivered in and function effectively in varying contexts: programs (e.g., cognitive behavioral therapy), practices [e.g., “catch-em being good” (84, 96)], principles (e.g., prevention before treatment), procedures (e.g., screen for depression), products (e.g., mHealth app for exercise), pills (e.g., PrEP to prevent HIV infection) (51), and policies (e.g., limit prescriptions for narcotics). We refer to these as the 7 Ps.

This article uses the term clinical/preventive intervention to refer to a single set or multiple sets of these 7 Ps, which are intended to improve health for individuals, groups, or populations. Dissemination refers to the targeted distribution of information or materials to a specific public health or clinical audience, whereas implementation involves “the use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings” (49, p. 1275). Dissemination distributions and implementation strategies may be designed to prevent a disorder or the onset of an adverse health condition, may intercede around the time of this event, may be continuous over a period of time, or may occur afterward.

Dissemination and implementation research pays explicit (although not exclusive) attention to external validity, in contrast to the main emphasis on internal validity in most randomized efficacy and effectiveness trials (21, 48, 52). Limitations in our understanding of dissemination and implementation have been well documented (1, 2, 86). Some have even called for a moratorium on randomized efficacy trials for evaluating new health interventions until we address the vast disparity between what we know could work under ideal conditions and what we know about program delivery in practice and in community settings (63).

There is considerable debate about whether and to what extent designs involving randomized assignment should be used in dissemination and implementation studies (79), as well as about the relative contributions of qualitative, quantitative, and mixed methods in dissemination and implementation designs (83). Some believe there is value in incorporating random assignment designs early in the implementation research process to control for exogenous factors across heterogeneous settings (14, 26, 66). Others are less sanguine about randomized designs in this context and suggest nonrandomized alternatives (71, 102). Debates about research designs for the emerging field of dissemination and implementation are often predicated on conflicting views of dissemination and implementation research and practice, such as whether the evaluation is intended to produce generalizable knowledge, support local quality improvement, or both (28). Debates about design also revolve around conflicting views pertaining to the underlying scientific issue of how much emphasis to place on internal validity compared with external validity (52).

In this article, we introduce a conceptual view of the traditional translational pipeline that was formulated as a continuum of research originally known as Levy’s arrow (68). This traditional translational pipeline is commonly used by the National Institutes of Health (NIH) and other research-focused organizations to move scientific knowledge from basic and other preintervention research to efficacy and effectiveness trials and to a stage that reaches the public (66, 79). By no means does all dissemination and implementation research follow this traditional translational pipeline, so we mention in the discussion section three different classes of research design that are of major importance to dissemination and implementation research. We also mention other methodologic issues, as well as community perspectives and partnerships that must be considered.

This article is a product of a design workgroup formed in 2013 by the NIH to address dissemination and implementation research. We established a shared definition of terms, which required




significant compromise because the same words often have different meanings in different fields. Indeed, the term design is used in entirely different ways by quantitative or qualitative methodologists and by intervention developers. Three terms we use repeatedly are process, output, and outcome. As used here, process refers to activities undertaken by the health system (e.g., frequency of supervision), output refers to observable measures of service delivery provided to the target population (e.g., the number of individuals in the eligible population who take medication), and outcome refers only to health, illness, or health-related behaviors of individuals who are the ultimate target of the clinical/preventive intervention. Throughout this article, we provide other consensus definitions involving dissemination and implementation as well as statistical design terms.
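To make the three terms concrete, here is a minimal sketch of how process, output, and outcome measures might each be computed from the same service records. The records and field names are hypothetical, invented only for illustration; they are not from the article.

```python
# Hypothetical service records; all field names and values are illustrative.
records = [
    {"patient": 1, "supervised_sessions": 4, "took_medication": True,  "symptom_change": -6},
    {"patient": 2, "supervised_sessions": 2, "took_medication": False, "symptom_change": -1},
    {"patient": 3, "supervised_sessions": 5, "took_medication": True,  "symptom_change": -8},
]

# Process: an activity of the health system (e.g., frequency of supervision).
mean_supervision = sum(r["supervised_sessions"] for r in records) / len(records)

# Output: observable service delivery to the target population
# (e.g., share of eligible individuals who take medication).
medication_rate = sum(r["took_medication"] for r in records) / len(records)

# Outcome: health of the individuals who are the ultimate target
# (e.g., mean change on a symptom scale).
mean_outcome = sum(r["symptom_change"] for r in records) / len(records)

print(mean_supervision, medication_rate, mean_outcome)
```

The point of the sketch is only that the three measure types come from different columns of the same data: process describes the system's activity, output describes delivery, and outcome describes health.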

Where Dissemination and Implementation Fit in the Traditional Translational Pipeline

An updated version of the National Academy of Medicine [NAM, formerly the Institute of Medicine (IOM)] 2009 perspective on the traditional translational pipeline appears in Figure 1. This top-down translation approach (79) begins with basic and other preintervention research at the lower left that can inform the development of novel clinical/preventive interventions. These new interventions are then tested in tightly controlled efficacy trials to assess their impact under ideal conditions. A highly trained research team would typically deliver this program to a homogeneous group of subjects with careful monitoring and supervision to ensure high fidelity in this efficacy stage. Efficacy trials can answer only questions of whether a clinical/preventive intervention could work under rigorous conditions; therefore, such a program or practice that

[Figure 1: Traditional translational pipeline from preintervention, efficacy, effectiveness, and dissemination and implementation studies. The horizontal axis is time and the vertical axis is real-world relevance; a depth dimension runs from local knowledge to generalized knowledge. The pipeline moves from preintervention research to efficacy studies ("could a program work"), effectiveness studies ("does a program work"), and dissemination and implementation studies* ("making a program work"), the last comprising exploration, adoption/preparation, implementation, and sustainment. *These dissemination and implementation stages include systematic monitoring, evaluation, and adaptation as required.]




demonstrates sufficient efficacy would then be followed, in the traditional research pipeline, by the next stage, an effectiveness trial in the middle of Figure 1, embedded in the community and/or organizational system where such a clinical/preventive intervention would ultimately be delivered. In these effectiveness trials, clinicians, other practitioners, or trained individuals from the community typically deliver the clinical/preventive intervention with ongoing supervision by researchers. Also, in contrast with the homogeneous group of subjects used in efficacy trials, a more heterogeneous group of study participants is generally included in effectiveness trials. These less-controlled conditions allow an effectiveness trial to determine if a clinical/preventive intervention does work in a realistic context.

The final stage of research in this traditional translational pipeline model concerns how to make such a program work within community and/or service settings, the domain of dissemination and implementation research and the last stage of research in the traditional research pipeline. According to this pipeline, the clinical/preventive intervention must have already demonstrated effectiveness before an implementation study can be conducted. Effectiveness of the clinical/preventive intervention in this traditional research pipeline would be considered settled law, so proponents of this translational pipeline consider it unnecessary to reexamine effectiveness in the midst of an implementation research design (39, 100, 109). Thus, the traditional translational pipeline model is built around those clinical/preventive interventions that have succeeded in making it through the effectiveness stage.

We now describe the focus of dissemination and implementation research under this traditional research pipeline (see Figure 1, upper right). A tacit assumption of this pipeline is that wide-scale use of evidence-based clinical/preventive interventions generally requires targeted information dissemination and often a concerted, deliberate strategy for implementation to move to this end of the diffusion, dissemination, and implementation continuum (53, 81, 94). A second assumption is that for a clinical/preventive intervention to have a population-level impact, it must not only be an effective program, but also reach a large portion of the population, be delivered with fidelity, and be maintained (50). Within the dissemination and implementation research agenda, researchers have distinguished some phases of the implementation process itself. A common exemplar, the EPIS conceptual model of the implementation process (1), identifies four phases: exploration, preparation, implementation, and sustainment, as represented by the four white boxes within implementation illustrated in Figure 1. The first of these phases, exploration, refers to whether a service delivery system (e.g., health care, social service, education) or community organization would find a particular clinical/preventive intervention useful, given its outer context (e.g., service system, federal policy, funding) and inner context (e.g., organizational climate, provider experience). The preparation phase refers to putting into place the collaborations, policies, funding, supports, and processes needed across the multilevel outer and inner contexts to introduce this new clinical/preventive intervention into this service setting once stakeholders decide to adopt it. In this phase, adaptations to the service system, service delivery organizations, and the clinical/preventive intervention itself are considered and prepared. The implementation (with fidelity) phase refers to the support processes that are developed both within a host delivery system and its affiliates to recruit, train, monitor, and supervise intervention agents to deliver the intervention with adherence and competence and, if necessary, to adapt systematically to the local context (36). The final phase is sustainment and refers to how host delivery systems and organizations maintain or extend the supports as well as the clinical/preventive intervention, especially after the initial funding period has ended. The entire set of structural, organizational, and procedural processes that form the support structure for a clinical/preventive intervention is referred to in this article as the implementation strategy, which is viewed as distinct from, but generally dependent on, the specific clinical/preventive intervention that is being adopted.




Figure 1 also contrasts local formative evaluation or quality improvement with generalizable knowledge, represented by the depth dimension of the dissemination and implementation box. Local evaluation is generally designed to test and improve the performance of the implementation strategy to deliver the clinical/preventive intervention in that particular setting, with limited interest in generalizing its findings to other settings. Implementation studies designed to produce generalizable knowledge contrast in obvious ways with this local evaluation perspective, but systematic approaches to adaptation can provide generalizable knowledge as well. In the traditional translational pipeline, most of the emphasis is on producing generalizable knowledge.

This traditional pipeline does not imply that research always continues to move in one direction; in fact, the sequential progression of intervention studies is often cyclical (14). Trial designs may change through this pipeline, as well. Efficacy or effectiveness trials nearly always use random assignment, whereas implementation research often requires trade-offs between a randomized trial design that can have high internal validity but is difficult to mount and an observational intervention study that has little experimental control but still may provide valuable information (71).

DESIGNS FOR DISSEMINATION AND IMPLEMENTATION STRATEGIES

This section examines three broad categories of designs that provide within-site, between-site, and within- and between-site comparisons of implementation strategies.

Within-Site Designs

Within-site designs can be used to evaluate implementation successes or failures by examining changes that occur inside an organization, community, or system. They can be comparatively simple and inexpensive, as in the example we provide for post designs, or vary in complexity and expense in pre-post and multiple baseline designs.

Post design of an implementation strategy to adopt an evidence-based clinical/preventive intervention in a new setting. The simplest and often most common design is a post design, which examines health care processes and health care utilization or output after introduction of an implementation strategy focused on the delivery of an evidence-based clinical/preventive intervention in a novel health setting. The emphasis here is on changing health care process and utilization or output rather than health outcomes (i.e., not measures of patient or subject health or illness). As one example, consider the introduction of rapid oral HIV testing in a hospital-based dental clinic, a site that may be useful from a population health standpoint, based on access to a population that may include individuals who have not been tested and on the convenience, speed, sensitivity, and specificity of this test. The Centers for Disease Control and Prevention (CDC) recently proposed that dental clinics could deliver this new technology, and public health questions remain about whether such a strategy would be successful. Implementation requires partnering with the dental clinic (exploration phase), which would need to accept this new mission, and hiring of a full-time HIV counselor to discuss results with patients (preparation phase). Here, a key process measure is the rate at which HIV testing is offered to appropriate patients. Two key output measures are the rate at which an HIV test is conducted and the rate of detection of subjects who did not know they were HIV positive. Blackstock and colleagues’ study (8) was successful in getting dental patients to agree to be tested for HIV within the clinic, but it had no comparison group and did not collect pretest rates of HIV testing among all patients. This




program did identify some patients who had not previously been tested and were found to be HIV positive.

For implementation of a complex clinical/preventive intervention in a new setting, this post design can be useful in assessing factors to predict program adoption. For example, all of California’s county-level child welfare systems were invited to be trained to adopt Multidimensional Treatment Foster Care, an evidence-based alternative to group care (24). Only a posttest was needed to assess adoption and utilization of this program because the sole purveyor of this program knew exactly when and where it was being used in any California community (26). Post designs are also useful when new health guidelines or policy changes occur.
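In code, the process and output measures described for a post design reduce to simple rates over the post-implementation period. The following minimal sketch uses hypothetical counts loosely modeled on the dental-clinic example; the numbers and variable names are illustrative, not data from the Blackstock study.

```python
# Hypothetical post-implementation counts from a dental clinic (illustrative only).
eligible_patients = 400   # patients appropriate for rapid oral HIV testing
offered_test      = 340   # times testing was offered to an appropriate patient
tests_conducted   = 260   # times a test was actually done
new_positives     = 3     # previously undiagnosed HIV-positive patients detected

offer_rate     = offered_test / eligible_patients     # process measure
testing_rate   = tests_conducted / eligible_patients  # output measure
detection_rate = new_positives / tests_conducted      # output measure

print(f"offered: {offer_rate:.0%}, tested: {testing_rate:.0%}, "
      f"new positives among tested: {detection_rate:.2%}")
```

As the article notes, without a comparison group or pretest rates, such numbers describe what happened after implementation but cannot by themselves establish what changed.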

Pre-post design of an implementation strategy of a clinical/preventive intervention already in use. Pre-post studies require information about preimplementation levels. Some clinical/preventive interventions are already being used in organizations and communities, but they do not have the reach into the target population that program objectives require. A pre-post design can assess such changes in reach. In a pre-post implementation design, all sites receive a new or revised implementation strategy for a clinical/preventive intervention that is already being used; process or output is measured prior to and after the new implementation strategy begins. Effects due to the new implementation strategy are inferred by comparing pre to post changes within this site. One example of a study using this design is the Veterans Administration’s use of the chronic care model for inpatient smoking cessation (61). A primary output measure in this study is the number of prescriptions given for smoking cessation. This pre-post design is useful in examining the impact of a complex implementation strategy within a single organization or across multiple sites that are representative of a population (e.g., federally qualified health centers).

Pre-post designs are also useful in assessing the adoption by health care systems of a guideline, black box warning, or other directive. For example, management strategies for inpatient cellulitis and cutaneous abscess were compared for all patients with these discharge diagnoses in the year prior to and the year after the publication of guidelines (8). The effects of a black box warning on antidepressant prescriptions for depressed youth were evaluated by comparing prescription rates (46) and adverse event reports (47) prior to and after introduction of the warning.
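Analytically, the simplest pre-post comparison of an output measure is a before/after rate comparison within the same site. The sketch below uses a pooled two-proportion z-test on hypothetical counts loosely inspired by the smoking-cessation prescription example; all numbers are invented, and real evaluations would often prefer models that account for clustering and secular trends.

```python
from math import sqrt

# Hypothetical counts (illustrative only): smoking-cessation prescriptions
# among eligible inpatients before and after a new implementation strategy.
pre_rx,  pre_n  = 120, 1000   # prescriptions / eligible inpatients, pre period
post_rx, post_n = 190, 1000   # same, post period

p1, p2 = pre_rx / pre_n, post_rx / post_n

# Pooled two-proportion z-test for the change in the output rate.
p_pool = (pre_rx + post_rx) / (pre_n + post_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / pre_n + 1 / post_n))
z = (p2 - p1) / se

print(f"pre {p1:.1%}, post {p2:.1%}, z = {z:.2f}")
```

The design's weakness is visible in the arithmetic: the test attributes the whole pre-to-post difference to the implementation strategy, so anything else that changed between the two periods is absorbed into the estimate.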

A variant of the pre-post design involves a multiple-baseline time-series design. Biglan and colleagues (7) examined the prevalence of tobacco being sold to young people without personnel checking birthdays, measured over multiple time points after an outlet store reward and reminder system was implemented. Tracking
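One common way to analyze such multiple-baseline data is to fit the baseline trend and ask how far post-implementation observations depart from its projection, which guards against mistaking a pre-existing trend for an implementation effect. A minimal pure-Python sketch with simulated monthly rates follows; all numbers are illustrative, not data from the Biglan study.

```python
# Hypothetical monthly rates of outlets selling tobacco to minors (illustrative):
# six baseline months, then six months after the reward-and-reminder system begins.
pre  = [0.61, 0.60, 0.59, 0.60, 0.58, 0.59]   # months 0-5 (baseline)
post = [0.37, 0.34, 0.35, 0.33, 0.34, 0.32]   # months 6-11 (post-implementation)

# Fit a simple least-squares line to the baseline to capture any pre-existing trend.
n = len(pre)
xs = list(range(n))
x_mean, y_mean = sum(xs) / n, sum(pre) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, pre))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

# Project the baseline trend into the post period and compare with observed rates.
projected = [intercept + slope * m for m in range(n, n + len(post))]
effect = sum(o - p for o, p in zip(post, projected)) / len(post)
print(f"mean departure from baseline trend: {effect:+.3f}")
```

A negative departure of this size, sustained across several post-implementation time points, is the kind of pattern a multiple-baseline design is built to detect.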
