Psychiatric Services, a leading US journal, has published two important papers on Open Dialogue. Freeman and colleagues did an extensive literature review and analysis of currently available research. Their paper is accompanied by a commentary by Kim Mueser, PhD, Director of the Boston University Center for Psychiatric Rehabilitation and one of the world’s experts in his field.
Freeman and colleagues begin their paper with a detailed explanation of the criteria for inclusion in their review. They identified 23 studies. Papers selected were published in English and evaluated Open Dialogue effectiveness using case study, qualitative, quantitative, or mixed methods. Studies were conducted in Finland, Norway, Sweden, and the US.
As the authors point out, most of the available research comes from the Western Lapland group that developed Open Dialogue (OD). This poses a fundamental source of weakness in the evidence base. Their studies had small sample sizes, there was no control group, and the ratings were not blinded. In addition, there were not consistent methods for either defining or evaluating OD.
Many of us learned of Open Dialogue because of their reported excellent outcomes for individuals who experienced a first episode of psychosis. We are eager to see if these results can be replicated elsewhere. But there are other important questions. OD is a way of working with individuals and their social networks but it is also a way of structuring a mental health system. There is inadequate information regarding successful implementation outside of Western Lapland.
The authors attempted to address these various questions in the paper and identified the following topics for review: treatment outcomes for OD, qualitative studies of the delivery of OD, implementation of OD principles, key principles and their application in network meetings, and service user acceptability and increasing trust in services.
The studies completed in Western Lapland comprise the bulk of the quantitative data. The authors have provided an online supplement with details of these studies; this is extremely valuable given their foundational importance for students of Open Dialogue. In the main paper, they summarize the three main cohorts studied and point out some challenges to uncritically accepting their conclusions regarding outcome: the sample sizes are small, different papers reporting on the same cohort appear to use different sample sizes, symptom severity appears to vary across the cohorts, and there is a paucity of information on adherence to fidelity criteria for each cohort.
The qualitative studies have their own limitations including small sample sizes and lack of transparency with regard to sampling. This is critical since it introduces a major source of bias; if those who have favorable experiences are more likely to be included in the study, this would provide overly optimistic conclusions.
There were also large differences among the studies with respect to how they reported on implementation, making it difficult to use these studies to guide future implementation. Two studies of higher quality reported on some of the challenges faced by those implementing OD and the authors mention in particular the trouble some experience when questioning professional hierarchies.
In some studies, the focus was on the network meetings and not on systemic change. These offer some insights regarding which aspects seem to be correlated with optimal outcome.
With regard to service user acceptability, the qualitative studies suggest that this approach is acceptable to service users who, along with families and clinicians, appreciate the style and transparency of the meetings.
Their conclusion emphasizes the limitations of existing research and points out several areas that require further investigation. This includes the need for studies conducted in the “real world” to evaluate OD’s effectiveness. They suggest further inquiry into not only if but also how and why OD is effective. They point out the need for further research on implementation and “scalability.” Along with this — and this is critical in tightly budgeted publicly funded systems — is the need for an assessment of cost effectiveness. Furthermore, they point out the need for a better understanding of the structural changes that are required to fully implement this model.
This is a valuable and important paper. Its conclusions should not come as a surprise to any student of Open Dialogue but one cannot overstate the effort required and the significance of this type of scholarly endeavor. Its publication in a major journal reflects the fact that many outside of the OD world are paying attention to this work. The authors argue that the promising outcomes from Finland need to be replicated and, given the challenges at both a systemic and individual level (training is time intensive, for example), this is a daunting task.
Therefore, perhaps it should also come as no surprise that Mueser’s commentary, while essentially agreeing with the limitations articulated in the original paper, concludes that perhaps the task ahead is too daunting. His commentary concludes with these sobering words, “The present data on Open Dialogue are insufficient to warrant calls for further research on the program other than those projects that are currently under way.”
On first reading, I was frustrated. Dr. Mueser is influential, and this seemed to create a catch-22: the current evidence base is not strong enough to support definitive conclusions on efficacy, and therefore we need more research; however, since the evidence is not robust, we should not put any more resources into studying OD.
But to some extent, I understand his point even if I do not agree. As I was reflecting on this, I was amazed to realize that I have been a student of Open Dialogue for almost seven years. Along with some local colleagues, I was privileged to study at the Institute for Dialogic Practice. We have gone on to develop an adaptation within Vermont's public sector that we call the Collaborative Network Approach. We are currently in our third year of training. About 25 students have been enrolled in each of our first three years and most of them have gone on to complete two years of training. We have a smaller cohort who are training to be trainers so we can carry this forward and sustain our efforts. We want to keep this cost-effective and inherently sustainable, which is critical in a system tight on resources with a constantly churning workforce.
But as grateful as I am, there are challenges. Implementation is daunting. The people in my agency who attend training almost invariably return to work with a deep enthusiasm to carry this forward. I am a leader in this initiative and a leader at my agency so I feel the pressure of their expectations but I find myself in the awkward position of sometimes having to remind them that we do not yet know if this is helpful, how it is helpful, or how we can implement this system of care. And there are competing demands. There are other initiatives that show promise. And there is the daily grind — the daily urgent needs that arise and require our attention. Forgive the analogy (my daughter insisted I see Titanic about 50 times when she was young) but even when the iceberg is straight ahead, it is hard to shift course. I wonder if it is responsible to cry out for the need to shift in this particular direction before we have more data.
However, I share my colleagues' enthusiasm and I join them in wanting to move ahead. In some ways, implementation can be simple. There are small steps. This way of working has helped me to embody principles that are not actually too controversial. This is "person-centered" to its core. It instantiates shared decision making. It is not hard to invite people to bring in their families or other important allies to the visits. While I hope this also isn't controversial, Open Dialogue invites me to stay humble and to respect everyone's voice. It doesn't require me to disavow my expertise but to try to bring it down a notch (or two, or many), and I continue to believe this is a good thing for my profession. And it is all about engagement. There are too many people — and often their families — who are struggling but who walk out the door because they do not like our message. OD offers a way to meet them without insisting they agree with our way of understanding the problem. I participated in the NIMH-funded RAISE early treatment study of individuals who experienced first episode psychosis. Engagement was everything and, at least in my experience, the road to engagement was not directly addressed in the RAISE protocol. OD offers a path that I did not find in RAISE. And in any event, everything embodied in RAISE can be brought into OD. OD is the hub; CBT and supported employment/education can be introduced. Medications can be offered. Even traditional psychoeducation can be invited in; it just isn't given the full weight of epistemic authority that it carries in more traditional systems.
But I am left wondering about Dr. Mueser's final sentence. I understand that in the long run, it could take enormous resources to move this along, but, thus far, very little has been given to this effort. While there is broad international interest, it doesn't run deep relative to worldwide resources. Most of the funding in the US has come through the Foundation for Excellence in Mental Health Care (disclosure: I am the chair of its board). FEMHC is in the process of offering another grant to fund an international research project. While I am proud of what the previous and current grantees have accomplished, these are beginning efforts. Perhaps Dr. Mueser has in mind the dilemma that psychosocial research in general is underfunded.
I hope, however, that others will listen to the broad array of voices — clinicians, consumers, family members — who find something of value here. Yes, there is more work to be done, more to be learned, but let us try to move this forward.