This post is about the value of sharing research “failures” as a core part of the dispute resolution community of practice.
Invisibility of failure in academic culture
In the competitive environment of academia, there is an unrealistic focus on extrinsic measures of success such as tenure, promotion, publication, funding grants or awards. This belies the fact that building expertise, knowledge, and excellence involves many failed or abandoned endeavours along the way. The veneer of flawless achievement paints an unrealistic and unhelpful picture for emerging researchers and others who have much to offer if nurtured in the research environment. Some academics have published “CVs of failures” as a way of addressing this unrealistic portrayal of academic life: Melanie Stefan of the University of Edinburgh proposed the idea, which was later adopted by Professor Johannes Haushofer of Princeton University.
“Publish or perish” can be a preoccupation in the life of an academic researcher, with publication in peer reviewed journals valued above other means of distributing the knowledge built through research endeavour. It is well recognised that journals are biased towards publishing positive results and against publishing null results. Recent attention has been paid to the fact that this trend skews the knowledge attained in the field, and calls have been made to publish negative results. The validity of social science research with low response rates is often questioned, which also makes such work difficult to publish. In research involving questionnaires or surveys, “nonresponse bias” occurs where those who decline to participate differ systematically from those who respond, so that the data collected is skewed; this too attracts criticism.
Some steps have been taken within research communities to address problems with publication bias. Journals have been created that focus exclusively on publishing negative, unexpected or controversial research results. Some researchers have sought to test how much of a problem low response rates are, as a small sample is not necessarily biased.
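To see why self-selected responses can skew results while a small random sample need not, consider a quick simulation. This is a hypothetical sketch with made-up numbers (satisfaction scores and response probabilities are illustrative assumptions, not drawn from any study mentioned here):

```python
import random

random.seed(42)

# Hypothetical population: 10,000 mediation clients with satisfaction
# scores from 1 (very dissatisfied) to 5 (very satisfied).
population = [random.randint(1, 5) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Nonresponse bias: suppose satisfied clients are far more likely to
# return a survey (assumed 40% response if score >= 4, else 5%).
respondents = [s for s in population
               if random.random() < (0.40 if s >= 4 else 0.05)]
biased_mean = sum(respondents) / len(respondents)

# A small but genuinely random sample is noisy, yet it has no
# systematic skew towards the satisfied.
small_sample = random.sample(population, 100)
small_mean = sum(small_sample) / len(small_sample)

print(f"true mean:          {true_mean:.2f}")
print(f"self-selected mean: {biased_mean:.2f} (inflated by nonresponse bias)")
print(f"small random mean:  {small_mean:.2f} (noisy but unbiased)")
```

The self-selected respondents overstate average satisfaction even though they outnumber the small random sample many times over; it is who responds, not how many, that drives the bias.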
The value of methodological “failures”
An almost exclusive focus on publishing “successful” research means that only half (or perhaps even less) of the knowledge generated by the research community is made available. Research methods that did not “succeed”, whether because the results were unexpected or because the method failed to attract the response rates it was designed for, are kept in the shadows. Future researchers miss out on the chance to avoid repeating design faults. Sometimes research approaches that work in one setting do not translate well to another, and publishing the details of such failures helps those challenges to be overcome. Sometimes disappointing participation rates are discussed defensively in publications, with the author justifying the low response rather than reflecting on how the research might have been done differently. Such reflection would support improved research design.
Like all aspects of research, methodologies are best designed with a background knowledge of existing literature. Many of the challenges of research can be managed by following advice (for example how to minimise nonresponse error) and by reading literature specifically related to research in a particular disciplinary area. For example, NADRAC’s ADR Research: a resource paper and Daniel Druckman’s Doing Research: Methods of Inquiry for Conflict Analysis are two useful resources in the dispute resolution field (but of course these are only two of many).
“Hot Air Balloon Shadow” By Dixonsej (Own work) [Public domain], via Wikimedia Commons
Bringing what didn’t work in dispute resolution research out of the shadow
I argue that it would be enormously beneficial to our field if we joined forces to gather data about the research projects that we abandoned, the research methods that failed, and the results that we haven’t published because we don’t think they tell a “good enough” story. This post is an invitation to share your stories.
To demonstrate the value of this proposal, I will share one of my own stories of research failure. It has previously been published as Appendix A, “Lessons learnt from the original research design”, of my PhD thesis. Back in 2004 I embarked on a research planning process, and my first research design was to invite lawyers and their clients to participate in 20-minute telephone surveys about their experiences of a particular matter at mediation in the Supreme Court of Tasmania. I chose telephone surveys because I believed that asking for a short period of time, without needing to meet face to face, would be most attractive to potential participants. I was keen to gather data on both lawyer and client perspectives, and my overall aim at the time was to conduct a programme evaluation.
An obvious challenge was identifying who should be invited to participate in the research. A letter was sent by the then Chief Justice to all Tasmanian legal practitioners informing them of the study, emphasising the court’s support, and encouraging them to participate. Information sheets and consent forms were distributed to lawyers by the Court, mailed with the Notice of Mediation. The human research ethics committee required that clients be approached through their lawyers, so lawyers were asked in a covering letter to forward their client’s information sheet, consent form and a copy of the survey of legal practitioners before the mediation. Consent forms could be handed to the mediator at the conclusion of the mediation or posted to the researcher, and mediators had spare copies of all materials on hand. I met with each mediator to explain my research and asked them to mention it at the mediation and extend an invitation to participate. They all agreed to do so, but later reported that they frequently either forgot or failed to find an appropriate moment amid the emotional upheaval of the mediation or the fatigue at its conclusion. Memoranda and articles by both the Registrar and me were published in the Law Society’s magazine to try to boost participation rates. Ultimately I abandoned the research method: only 27 signed consent forms were received, representing less than 2% of the pool of potential participants over the relevant period.
I redesigned my research aims and method. Information sheets and consent forms were emailed to potential participants, who were identified by the Registrar as being lawyers who practised in civil matters at the Court. I gathered data through face to face interviews of legal practitioners lasting around one hour. Interviews related to their mediation practice in the Supreme Court of Tasmania generally, not specific matters.
I concluded each interview by asking whether the practitioner remembered receiving the documents about my original study and what they had done in response. By far the most common answer was that the paperwork had gone straight into the rubbish bin and had not been sent to the client. Participation was a low priority compared with activities that would advance the client’s case. Many lawyers were reluctant to forward the material to their clients, and some were irritated by the volume of invitations they received. The sensitive and confidential nature of the information sought was a further deterrent to participation.
What I learnt from my “failed” research method included the following:
- Relying upon third parties to distribute invitations to participate in research can be risky, even when they are supportive of the research project. It might have been better to attend the court and approach potential participants myself (see Jill Howieson’s Local Court Mediation study).
- Mediation can be a stressful and emotionally draining process, and inviting participants to engage in research immediately after their mediation event is not optimal. Similarly, mediators may find it difficult to invite participation in research during a mediation event, as this distracts participants from their mediation experience.
- Lawyers are very protective of their clients’ wellbeing and can be reluctant to ask their clients to participate in research activity that (a) does not advance their case and/or (b) invites them to reflect upon their experiences or satisfaction with the service they have received.
- Where data is sought about individual cases rather than general experience, the frequency with which individuals will be invited to participate in the research should be considered, as multiple approaches can discourage engagement in research.
- Many professionals are overwhelmed with paperwork and telephone calls during their busy practice. They might be more receptive to a face to face interaction with another human being than participating in research by completing a form or telephone survey. They can be surprisingly generous with their time.
- Where possible, providing information about the research by electronic means such as attachments to an email and/or a link to a website (if ethics approval can be obtained for this approach) is likely to be less overwhelming for potential participants than hard copies.
How you can tell your story
If you are a dispute resolution researcher, I invite you to share your stories of “failed” research methods in the comments section below. Please include:
- Some detail of the research design;
- In what way your research method “failed” to attract the response rate or results you wanted or expected, or why you abandoned it;
- What you learnt from the process and what lessons you want to share with current and future researchers.
I look forward to the valuable contribution that this sharing can make to our field.