About the Author(s)


Rizwana B. Mallick
Department of Communication Sciences and Disorders, University of Cape Town, South Africa

Lehana Thabane
Department of Health Research Methods, Evidence and Impact, McMaster University, Canada

A.S.M. Borhan
Department of Health Research Methods, Evidence and Impact, McMaster University, Canada

Harsha Kathard
Department of Health and Rehabilitation Sciences, University of Cape Town, South Africa

Citation


Mallick, R.B., Thabane, L., Borhan, A.S.M., & Kathard, H. (2018). A pilot study to determine the feasibility of a cluster randomised controlled trial of an intervention to change peer attitudes towards children who stutter. South African Journal of Communication Disorders, 65(1), a583. https://doi.org/10.4102/sajcd.v65i1.583

Original Research

A pilot study to determine the feasibility of a cluster randomised controlled trial of an intervention to change peer attitudes towards children who stutter

Rizwana B. Mallick, Lehana Thabane, A.S.M. Borhan, Harsha Kathard

Received: 17 Oct. 2017; Accepted: 22 Apr. 2018; Published: 18 July 2018

Copyright: © 2018. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: While randomised controlled trials (RCTs) are considered the gold standard of research, a preliminary study is needed to determine the feasibility of a future large-scale RCT.

Objectives: This pilot study, therefore, aimed to determine the feasibility of an RCT by exploring: (1) procedural issues and (2) the treatment effect of the Classroom Communication Resource (CCR), an intervention for changing peer attitudes towards children who stutter.

Method: A pilot cluster stratified RCT design was employed whereby the recruitment took place first at school-level and then at individual level. The dropout rate was reported at baseline, 1 and 6 months post-intervention. For treatment effect, schools were the unit of randomisation and were randomised to receive either the CCR intervention administered by teachers or usual practice, using a 1:1 allocation ratio. The stuttering resource outcomes measure (SROM) measured treatment effect at baseline, 1 and 6 months post-intervention overall and within the constructs (positive social distance, social pressure and verbal interaction).

Results: For school recruitment, 11 schools were invited to participate and 82% (n = 9) were recruited. Based on the school recruitment, N = 610 participants were eligible for this study, while only n = 449 were recruited, with n = 183 in the intervention group and n = 266 in the control group. The dropout rate from recruitment to baseline was as follows: intervention, 23% (n = 34), and control, 6% (n = 15). At 1 month, a dropout rate of 7% (n = 10) was noted in the intervention group and 6% (n = 15) in the control group, whereas at 6 months, dropout rates of 7% (n = 10) and 17% (n = 44) were found in the intervention and control groups, respectively. For treatment effect on the SROM, the estimated mean differences between the intervention and control groups were 2.01 (95% confidence interval (CI): -1.07, 5.11) at 1 month and 3.01 (95% CI: -0.69, 6.69) at 6 months. A statistically significant difference was observed at 6 months on the verbal interaction (VI) subscale of the SROM, with 1.35 (95% CI: 0.58, 2.13).

Conclusion: High recruitment rates of schools and participants were observed, alongside a high dropout rate of participants. A significant difference was noted only at 6 months post-intervention, within one construct of the SROM. These findings suggest that a future RCT is warranted and feasible.

Introduction

Feasibility of randomised controlled trials

Among the various levels of evidence that are valuable in clinical practice, the randomised controlled trial (RCT) is regarded as the gold standard because of its design strength (Evans, 2003) and its power to draw conclusions (Oakley, Strange, Bonell, Allen, & Stephenson, 2006; Sibbald & Roland, 1998). Because of the large-scale nature of the RCT and its stringent design, there are financial, resource and time implications that require careful consideration before one can be conducted. Consequently, the literature recommends that it is vital to first conduct a comprehensive pilot study to determine feasibility and improve the validity and statistical power of a future RCT (Evans, 2003; Leon, Davis, & Kraemer, 2011; Oakley et al., 2006; Shanyinde, Pickering, & Weatherall, 2011).

It is further reported that the procedural rigour of the study is as important as the treatment effect benefit when evaluating the feasibility of an RCT (Oakley et al., 2006). Procedural aspects and treatment effect are weighted equally in importance when determining the feasibility of an RCT, as this approach focuses both on the process of implementing an intervention and on the outcome (determining the treatment effect) (Evans, 2003; Leon et al., 2011; Oakley et al., 2006; Shanyinde et al., 2011). Therefore, this study aimed to determine the feasibility of a future RCT by assessing procedural aspects (the recruitment of schools and participants and the dropout rate of participants) as well as the potential treatment effect of a classroom-based stuttering intervention.

Process evaluations are specifically recommended in longitudinal studies, such as this one, where repeated measures occur (Oakley et al., 2006), here at baseline, 1 and 6 months post-intervention. These evaluations identify any organisational challenges and changes that are required (Akobeng, 2005; Bowen et al., 2009; Kingston, 2004; Oakley et al., 2006; Thabane et al., 2010) to minimise potential flaws or bias (Currie, Seaton, & Wesley, 2009; Downs & Black, 1998; Lancaster, Dodd, & Williamson, 2002; Oakley et al., 2006). The loss of participants is also reported as probable, particularly in longitudinal studies (Keyzer et al., 2005; Morton, Cahill, & Hartge, 2005). Of the several aspects of process evaluation, this study focused on recruitment (of schools and participants) and the dropout rate of participants because of its longitudinal nature. The dropout rate was selected because the loss of participants can result in incomplete or missing data (Fitzmaurice, 2003) and in a sample that is no longer representative, thus reducing the statistical power and validity of a study (Toerien et al., 2009). This was clearly observed in a previous preliminary classroom-based stuttering intervention study (Badroodien et al., 2011).

At present, there are no documented feasibility studies of classroom-based intervention in South Africa, and such studies are essential for planning future large-scale research. Comment on feasibility is also important because research within the school context is challenging. Common challenges relate to procedural aspects and treatment effect, such as consent, participation and the ethical concerns that arise when conducting research with children, who are a vulnerable population. While it could be argued that all studies face such challenges, the complexity of the school setting adds to the difficulty often experienced in school-based research.

In terms of treatment effect, it is also essential to estimate the potential treatment effect in a pilot study prior to conducting an RCT. Lancaster et al. (2002) reported that an intervention may not appeal to all and that the acceptability of the intervention should therefore be studied as part of determining treatment effect. Questions around whether the intervention works and to what extent, whether the intended outcomes are achieved, and what its benefits and harms are (including for whom) may also be answered through studying the potential treatment effect of an intervention (Evans, 2003). In addition to indicating whether there is any shift in treatment effect, the inclusion of treatment effect measures may also show when treatment effect should be measured, specifically, which time interval shows a greater shift, if any. A pilot study is therefore required so that, before an RCT is conducted, there are findings to show that time, resources and money can be justifiably invested in a full RCT. For this reason, it is critical to focus on both aspects, procedural and treatment effect, to accurately inform the feasibility of an RCT.

Classroom-based intervention

Stuttering, a communication disorder, presents with personal and social implications (negative self-perceptions, teasing and bullying), often arising at primary school (Dijkstra, Lindenberg, & Veenstra, 2008; Murphy, Yaruss, & Quesal, 2007; Swearer, Espelage, Vaillancourt, & Hymel, 2010). Despite the lack of training and resources to address communication difficulties and disabilities reported by teachers (Penn, Watermeyer, & Schie, 2009), teacher involvement and training have been found to prevent teasing and bullying (Blank et al., 2009). Classroom-based intervention has therefore been advocated as a strategy to improve peer attitudes (Langevin, 2009; Merrell, Gueldner, Ross, & Isava, 2008; Murphy et al., 2007) internationally (Langevin, Bortnick, Hammer, & Wiebe, 1998) and in South Africa (Branfield et al., 2015; Farelo et al., 2015; Hobbs et al., 2016).

Internationally, persistent reports of teasing and bullying of children who stutter (CWS) led to the development and study of the Teasing and Bullying (TAB) resource, a classroom-based intervention in Canada (Langevin, 1998). The TAB was created on the basis that attitude is learnt and can be changed (Foster, 2006). The TAB was found useful and feasible in targeting negative peer attitudes in a pre- and post-intervention study (over three to four temporal periods), without a control group, among Grades 3–6 learners (Langevin, 1998, 2009; Langevin & Prasad, 2012). However, it was not suitable for the South African population, linguistically or culturally. Furthermore, the lack of a control group as a methodology does not align with the planning of a future RCT.

The TAB informed the development of the Classroom Communication Resource (CCR) intervention for South Africa, the intervention tested in this study. It was required specifically in South Africa because of the prevalence of teasing and bullying and requests from teachers for support (Abrahams, Harty, St Louis, Thabane & Kathard, 2016; Hobbs et al., 2016). The focus of the CCR intervention is to target peer attitudes. Stuttering and communication are used as the example in the CCR intervention, but it can be extended to target difference, acceptance, and teasing and bullying more broadly. The CCR intervention is a classroom-based resource administered by teachers, who act as communication partners.

The CCR intervention was studied and developed through small-scale studies by the University of Cape Town between 2009 and 2014 (Badroodien et al., 2011; De Grass et al., 2010; De Freitas, Geben, Parusnath, Relleen, & Van den Berg, 2012; Filies, Hartley, Kaplan, & Pettit, 2009; Kathard et al., 2014; Walters, 2014). In 2014, Kathard et al. studied the attitudes of Grade 7 peers of CWS at pre-intervention and 1 month post-intervention, where the CCR was administered by teachers to intervention groups only. The stuttering resource outcomes measure (SROM) was used to measure attitudes at pre-intervention and 1 month post-intervention in control and intervention groups in the areas of pro-social behaviours – positive social distance (PSD), verbal interaction (VI) and social pressure (SP). The results of the study yielded minimal positive effects at 1 month post-intervention (Kathard et al., 2014). Kathard et al. (2014) subsequently recommended a large-scale study to explore peer attitudes over time, as they reported uncertainty around time intervals to determine treatment effect. This study therefore aims to build on the findings and recommendations of Kathard et al. (2014) by exploring the treatment effect at both 1 and 6 months post-intervention as well as procedural aspects to assist with future planning of an RCT.

Aim

The aim of this study was to determine the feasibility of an RCT through conducting a pilot study.

Objectives

The study had two objectives:

  • Primary objective: to determine the recruitment rates of schools and participants and the dropout rate of participants
  • Secondary objective: to determine the treatment effect of attitudes towards stuttering among Grade 7 students based on the SROM and its subscales – the PSD, VI and SP.

Methods

Study design

A pilot, cluster, stratified RCT design was used, with schools as the unit of randomisation. The cluster stratified RCT design was emulated in a pilot study so that the feasibility of a future RCT could be commented on accurately. The schools were stratified into two quintile groups (lower vs. higher) and randomised to receive the CCR intervention or usual practice, using a 1:1 allocation ratio.

Participants

The eligibility criteria included Grade 7 participants, aged 11 years and older, in mixed-gender schools where the language of learning and teaching was English. The participants attended public primary schools in the lower (two and three) and higher (four and five) quintiles. Both quintile groups were included to ensure a representative sample. The schools were situated in the Western Cape metropolitan urban area in South Africa. Participants were not financially compensated in any way. All participating schools were provided with their own copy of the CCR intervention. Schools could have CWS in the classroom; however, once CWS were identified, they were approached for consent to determine whether the study could be conducted in their school. Exclusion criteria were participants younger than 11 years, same-sex schools and schools within Quintile 1.

Sample size

A total sample size of n = 401 was included, with n = 149 children in the CCR intervention group and n = 252 children in usual practice (control group). A minimum sample size of n = 192 was recommended based on power calculations using the treatment effects and mean differences observed in previous studies in the project stream (Badroodien et al., 2011; Kathard et al., 2014; Walters, 2014). This study aimed to include a minimum of n = 384 (Walters, 2014).
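As an illustration of how such a minimum sample size can be derived, the following sketch in R (the software used for analysis in this study) solves for the per-group sample size implied by an assumed effect size. The mean difference and standard deviation shown are hypothetical values chosen purely for illustration and are not the figures used in the cited power calculations.

    # Illustrative only: per-group sample size for a two-group comparison,
    # given an assumed mean difference and standard deviation on the SROM.
    # The delta and sd values are hypothetical, not those of the cited studies.
    power.t.test(delta = 5,          # assumed mean difference between groups
                 sd = 12.5,          # assumed standard deviation of SROM scores
                 sig.level = 0.05,   # two-sided significance level
                 power = 0.80)       # solves for n, participants per group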

Intervention
Classroom Communication Resource intervention

The CCR intervention included three key components, namely a social story, role play and teacher-led discussion. The teacher read the social story to his or her class. Once the story was complete, the teacher selected participants to act out the role play. The purpose of the role play was to reinforce the story and to give participants a first-hand sense of how the characters in the story may have felt. Finally, the teacher facilitated the discussion using the guidelines in the CCR intervention. The discussion aimed to promote acceptance of diversity and difference, in relation to stuttering and communication as well as more generally, and addressed teasing and bullying and how these related to what was happening at each school.

The CCR intervention is considered largely self-sufficient; however, teachers were provided with basic training on how to administer it. The focus of training was placed on the discussion component, as many teachers had queries and concerns about how best to administer this section. The CCR could therefore be considered a supported classroom-based intervention, delivered as a single dose. The researchers observed the administration of the CCR intervention without interfering. The CCR intervention was administered only in the intervention groups, while control group teachers continued with their teaching without drawing attention to stuttering in any way. Any questions asked after the intervention were to be answered and recorded by the teachers.

Outcomes measure
Procedural aspect

The recruitment rate described the number of schools that were invited compared to those who agreed to participate during the recruitment phase of this study. This was described at a school-level, as this is how participants were initially recruited. Thereafter individual recruitment was described in terms of those recruited from the eligible sample (based on school recruitment). The dropout rate described the loss of participants at baseline, 1 and 6 months post-intervention because of the longitudinal nature of this study.

Treatment effect: Stuttering resource outcomes measure

This study was concerned with observing a positive shift in treatment effect (magnitude and direction) from baseline to 1 and 6 months post-intervention. The treatment effect was described using the global and subscale scores on the SROM. The SROM consists of 20 Likert-scale questions. Its subscales, PSD, VI and SP, are psychometrically validated constructs (Walters, 2014).

The SROM was developed from the Peer Attitudes towards Children Who Stutter (PATCS) scale. The PATCS was developed by Langevin (1998) in Canada, while the SROM was adapted for the South African population. Criterion reliability was met for both the PATCS (Langevin, 2009; Langevin, Kleitman, Packman, & Onslow, 2009) and the SROM (Walters, 2014).

Sampling and enrolment

Once-off randomised sampling took place so that participants could be tracked from baseline to 1 and 6 months post-intervention; continuous sampling was therefore not practical.

Data collection procedure

Upon obtaining the relevant consent and assent, all participants viewed a video of a CWS. The participants were all asked to complete the SROM at baseline. Thereafter the teachers in the intervention groups received training, over a 60–90 min session, to administer the CCR intervention. The CCR intervention was then administered by the teacher to the participants in the intervention groups. No intervention took place in the control group. At 1 and 6 months post-intervention, all participants completed the SROM. Thereafter the control group teachers were provided with a copy of the CCR as well as teacher training.

Statistical analysis

The procedural aspects were calculated as follows (see the sketch after this list):

  • The school recruitment rate was determined by comparing how many schools were invited with how many agreed to participate. Individual recruitment was reported similarly.
  • The dropout rate was calculated as a percentage at each time interval (baseline, 1 month and 6 months post-intervention). It was reported because dropout is a common occurrence within the school setting and accounts for the participant numbers noted in this study. The inclusion of this information is essential for future planning of an RCT.
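These rate calculations amount to simple percentages of counts. A minimal R sketch follows, using counts reported in the Results section; the helper function name is illustrative and the denominators shown are assumptions about how the percentages were formed, not the authors' code.

    # Minimal sketch of the recruitment and dropout rate calculations.
    # Counts are those reported in the Results; the helper name is illustrative.
    rate <- function(numerator, denominator) round(100 * numerator / denominator)

    rate(9, 11)     # school recruitment: 9 of 11 invited schools agreed (~82%)
    rate(449, 610)  # participant recruitment: 449 of 610 eligible learners recruited
    rate(15, 266)   # e.g. control-group dropout from recruitment to baseline (~6%)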

The treatment effect was analysed as follows.

Each participant’s SROM scores were captured in Microsoft Excel, and RStudio version 1.0.143 (https://www.rstudio.com) was used to analyse the data. Information such as knowing someone who stutters and scores according to gender was not reported, as no significant differences were noted in previous studies (Badroodien et al., 2011; Kathard et al., 2014; Walters, 2014). A random effects model was used to assess the intervention at 1 and 6 months post-intervention and to account for potential correlation among outcomes from the same school. Additionally, the intra-school correlation coefficient (ICC) was reported. Estimated differences between groups, 95% confidence intervals (CIs) and associated p-values (to three decimal places, with values below 0.001 reported as p < 0.001) were reported. The analyses were adjusted for the stratification covariate, quintile. A sensitivity analysis was also performed to examine the treatment effect using linear regression, which did not account for the potential correlation among outcomes from the same school.
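To make the analysis concrete, the sketch below shows one way the described models could be specified in R. It is not the authors' code: the lme4 package and the data frame df with columns srom (outcome at a given follow-up), group, quintile and school are assumptions made for illustration.

    # Sketch only: random effects model with a random intercept for school,
    # adjusted for the stratification covariate quintile.
    library(lme4)

    fit <- lmer(srom ~ group + quintile + (1 | school), data = df)
    summary(fit)$coefficients        # estimated group difference (fixed effects)
    confint(fit, method = "Wald")    # 95% confidence intervals

    # Intra-school correlation coefficient from the variance components
    vc  <- as.data.frame(VarCorr(fit))
    icc <- vc$vcov[vc$grp == "school"] / sum(vc$vcov)

    # Sensitivity analysis ignoring clustering: ordinary linear regression
    fit_lm <- lm(srom ~ group + quintile, data = df)
    confint(fit_lm)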

Ethical consideration

Ethical approval was obtained from the University of Cape Town Health Sciences Human Research Ethics committee (510/2013). Thereafter, permission was provided by the Western Cape Education Department. Consent and assent were obtained from schools, principals, parents and participants. The ethical principles of autonomy, confidentiality, beneficence, non-maleficence and distributive justice were upheld at all times, as stipulated by the Declaration of Helsinki (Williams, 2008).

Results

Feasibility aspects
Recruitment rate

A total of 11 schools were invited, 10 responded to the invitation, nine accepted and only eight participated in this study, as one school withdrew. The recruitment rate was therefore 82%, as 9 of the 11 invited schools agreed to participate. Based on the school recruitment, n = 610 participants were eligible for this study, but only n = 449 were recruited (n = 183 in the intervention group and n = 266 in the control group), largely as a result of absenteeism and consent not being provided.

Dropout rate

The dropout rate in the intervention group at baseline was 23% (n = 34) and 6% (n = 15) in the control group because of consent not being provided and absenteeism. At 1 month post-intervention, a dropout rate of 7% (n = 10) was noted in the intervention and 6% (n = 15) in the control. At 6 months post-intervention, dropout rates of 7% (n = 10) and 17% (n = 44) were noted in the intervention and control groups, respectively (Table 1).

TABLE 1: Dropout rate at baseline, 1 and 6 months.
Preliminary estimates of treatment effect

A total of n = 401 participants were analysed: n = 149 in the intervention group and n = 252 in the control group, with 42% male participants in each group. A total of eight clusters (schools) were analysed, equally divided in terms of quintile to ensure a more representative sample. The mean baseline SROM score was 73.17 (SD 12.05) in the intervention group and 71.48 (SD 12.80) in the control group. The baseline characteristics are shown in Table 2.

TABLE 2: Baseline characteristics of study participants by study group.

As shown in Figure 1, the key findings indicate no significant difference on the overall SROM, with estimates (95% CI) of 2.01 (–1.07, 5.11) at 1 month and 3.01 (–0.69, 6.69) at 6 months post-intervention. No significant differences were found at 6 months for the PSD, 2.57 (0.67, 4.46), and SP, 1.04 (0.18, 1.89), constructs. The only significant difference noted was at 6 months within the VI construct, with 1.35 (0.58, 2.13).

FIGURE 1: Forest plot of treatment effect at 1 and 6 months on the stuttering resource outcomes measure and its subscales positive social distance, social pressure and verbal interaction (p = 0.001).

A sensitivity analysis ignoring clustering showed similar results (Table 3), with estimates of 2.01 (–1.09, 5.12) at 1 month and 2.46 (–1.05, 5.98) at 6 months post-intervention. Table 3 presents the estimated differences between the intervention and control groups, along with 95% CIs and p-values, adjusted for quintile, for the SROM and its PSD, SP and VI subscales.

TABLE 3: Sensitivity analysis of treatment effect at 1 and 6 months on the stuttering resource outcomes measure sub-scales (p = 0.001).

Discussion

Generalisability of findings

It should be noted that the findings of this study reflect schools in the Western Cape, South Africa, from Quintiles 2, 3, 4 and 5. Findings should, therefore, be interpreted with caution when considering other provinces within South Africa.

Feasibility aspects

Several challenges were encountered during this study, despite anticipating some of the general challenges that often arise during school-based research. It could be argued that all researchers experience varying degrees of difficulty in conducting research, but the complexity of the school setting added to the difficulty experienced in this study. Common challenges, such as obtaining consent and securing participation, were experienced and affected the recruitment of participants.

The results indicate that the recruitment rate was high because schools were approached early in the year and could therefore foresee making time available for the researchers, showing that school-level recruitment may be a feasible method of initial recruitment. While there is disparity between the numbers in the control and intervention groups, this was a result of consent not being provided and absenteeism, factors outside the researcher's control, and it was taken into account when interpreting the findings. Once the challenge of school recruitment was overcome, the researcher faced difficulty recruiting individual participants: far fewer participants were recruited than were eligible from the school recruitment, as a result of poor consent return. Recruitment was challenging because the researcher relied on schools, principals, teachers, parents and participants to provide the permission, consent and assent required, while having direct access only to principals and some of the teachers. Because of the strict design of RCTs, a study can only succeed if schools agree to participate and facilitate the return of consent forms from parents. Once participants were recruited, the next challenge was to retain them and prevent a large dropout, in order to preserve the statistical power of the study. Schools reported that clearer communication is required. It is therefore vital that, in future, schools and teachers are made explicitly aware of the time commitments required of them, so that they can make an informed decision about whether they are able to participate and not experience participation as a burden.

Upon discovering that data were to be collected at three separate intervals (baseline, 1 month and 6 months), in addition to another visit to the school where the teacher administered the CCR, schools anecdotally expressed great concern about the time commitments required of them. As a result, participant dropout was noted over time, as was difficulty in arranging data collection dates, which is commonly reported in longitudinal studies (Galea & Tracy, 2007). Schools became increasingly hesitant to commit to an additional suitable time for data collection at 6 months post-intervention compared with 1 month post-intervention. Schools felt that they had already given the study a substantial amount of time and no longer perceived their participation to be beneficial, which is a common determinant of dropout (Galea & Tracy, 2007; Lundberg, Thakker, Hällström, & Forsell, 2005). Schools requested that, in future, data be collected over fewer time periods (i.e. perhaps only at 6 months) with fewer visits to the school. The researcher should also be understanding of schools, their processes and their preferred ways of participating, recognising that all schools are different, and should display an awareness of the sacrifices that schools make to participate in research. Because of the time constraints, schools asked that the research take place at the end of the school term. However, this meant that many participants were absent from school at the end of the term, after completing their academic testing; teachers in this study reported this to be a common occurrence, as no new work was being taught.

In terms of organisation, early planning and scheduling, it was logistically challenging to find suitable times for data collection, given the pre-existing busy academic calendar. Schools found the research time-consuming and reported that they would not have committed to it had they realised the extent of the time they needed to dedicate to this study. Consequently, there were serious implications for motivation to participate and for the relationship between the researcher and the school. This was especially true where contact was telephonic: while this appeared to be the most convenient method, it was viewed as impersonal. Face-to-face and direct contact are reported to help build relationships between the researcher and schools, principals, teachers and participants (Galea & Tracy, 2007). Additionally, multimodal reminders may have been more effective with face-to-face contact as the primary method of contact (Galea & Tracy, 2007; Hartge, 2006; Keyzer et al., 2005). However, the preferred methods of contact appeared to differ in each school, and the researcher thus adapted to the preference of each school.

In addition to the administrative challenges related to planning and scheduling discussed above, other challenges included relationships and the consistency of researchers; these factors collectively affected the recruitment and dropout rates. Schools reported that the use of research assistants was inconsistent and that they were unable to build a relationship with, and get to know, the researcher and research assistant at their school. Building a relationship with the school early on is recommended by Galea and Tracy (2007) and Hartge (2006), as it has implications for data collection at later time intervals. Schools found it challenging to deal with different people and did not know who the single contact person was. They further reported that, had they built a relationship with a consistent researcher, it might have been easier to make concessions where challenges around organisation and planning arose. Consequently, this affected their motivation and willingness to participate in this study, especially given the time constraints they faced. Given these demands and the feedback that schools provided, it should be recognised that data collection is an added responsibility taken on by the school.

Preliminary estimates of treatment effect

Though no significant result was observed at 1 month post-intervention, it is possible that this was too early for participants to have internalised their learning. This is supported by Kathard et al. (2014), who stated that an attitude shift was beginning at 1 month post-intervention but that more time was needed and that 6 months post-intervention might yield further changes in treatment effect. The results at 6 months post-intervention accordingly supported the use of the CCR at the 6-month interval in the construct of VI, illustrating that the CCR intervention may facilitate a positive shift in the magnitude and direction of attitude scores towards CWS and indicating the direction of change in treatment effect. Despite the dropout of participants, the findings at 6 months show that 6 months post-intervention is a critical time period for evaluating treatment effect, as this is when the start of a shift becomes apparent. The use of the CCR is important, as it may facilitate the holistic management of stuttering and communication difficulty by speech-language therapists (SLTs). It is important to note that, while a statistically significant result was observed at 6 months post-intervention within the VI construct of the SROM, this is not the sole finding to influence the feasibility of the RCT. It is repeatedly emphasised in the literature that the effectiveness of an intervention and the procedural aspects are drawn upon collectively to determine the feasibility of an RCT (Evans, 2003; Leon et al., 2011; Oakley et al., 2006; Shanyinde et al., 2011). This study illustrates this and emphasises the need to draw on both components to inform future planning.

Conclusion

Overall, both procedural aspects and treatment effect trends provide important information about the feasibility of an RCT, and collectively these factors suggest that an RCT is feasible. The recruitment and dropout rates specifically showed that several procedural factors should be considered to improve the feasibility of a future RCT. Additionally, the treatment effect results show that 6 months post-intervention proved to be an optimal and feasible time to determine treatment effect, although a significant result was noted in only one of the constructs at that time point. It would be important to retain the sample in order to test the effectiveness of the CCR intervention more robustly. Furthermore, it would be impractical to measure post-intervention attitudes at three intervals in future because of time constraints (reported by schools) and because of the repeated use of the same outcomes measure.

Strength, limitations and clinical implications

The main strength of this study was that it achieved its objectives of determining the feasibility of an RCT by drawing on the findings of the pilot. A limitation of this study was the way in which schools experienced the study, particularly the time burden it placed on them. Clinically, the findings imply that an RCT is feasible and that further research is needed to enrich the South African literature on classroom-based stuttering intervention.

Recommendations for future research

An RCT is recommended, with further development of the process. In order to conduct a methodologically sound RCT, there are several factors that need to be considered and put into place, as described in the discussion. There are two main recommendations arising from this study: (1) to reduce the dropout rate of participants through more stringent retention methods and (2) to determine treatment effect at baseline and 6 months post-intervention only. No significant results were noted at 1 month, suggesting that perhaps only 6 months post-intervention data may be necessary, as this is where the shift in treatment effect begins. By reducing the number of data collection intervals and being transparent about the number of visits that are required, the researcher may also alleviate time pressure and any burden schools may experience.

Acknowledgements

R.B.M. and Prof. H. Kathard wish to acknowledge the Programme for Enhancement of Research Capacity grant, University of Cape Town, which partially contributed to this study. Prof. L. Thabane was supported in part by funds from the Carnegie African Diaspora Fellowship Program. This study was funded in part by the Programme for Enhancement of Research Capacity grant, University of Cape Town, and the Carnegie African Diaspora Fellowship Program.

Competing interests

The authors declare that they do not have any competing interests. There are no personal and/or financial relationships that may have influenced the writing of this article.

Authors’ contributions

R.B.M., the corresponding author, completed the research as part of her Master's study and was responsible for writing and compiling drafts of this paper. Prof. H. Kathard was the primary supervisor and Prof. L. Thabane was a consultant in this research study. A.S.M.B. conducted the statistical analysis of this study. H.K., L.T. and A.S.M.B. contributed to the writing and reviewing of this paper through a number of drafts.

References

Abrahams, K., Harty, M., St. Louis, K.O., Thabane, L., & Kathard, H. (2016). Primary school teachers’ opinions and attitudes towards stuttering in two South African urban education districts. South African Journal of Communication Disorders, 63(1), e1–e10. https://doi.org/10.4102/sajcd.v63i1.157

Akobeng, A.K. (2005). Understanding randomised controlled trials. Archives of Disease in Childhood, 90(8), 840–844. https://doi.org/10.1136/adc.2004.058222

Badroodien, R., Bielovich, J., Lilienfeld, S., Naiker, P., Stevens, M., & Weavind, J. (2011). Changes in peer attitudes towards children who stutter after the administration of a Classroom Communication Resource. Unpublished Undergraduate Thesis. Cape Town: University of Cape Town, South Africa.

Blank, L., Baxter, S., Goyder, E., Guillaume, L., Wilkinson, A., Hummel, S., Chilcott, J., & Payne, N. (2009). Systematic review of the effectiveness of universal interventions which aim to promote emotional and social wellbeing in secondary schools. NICE Centre for Public Health Excellence. Sheffield: The University of Sheffield.

Bowen, D.J., Kreuter, M., Spring, B., Cofta-Woerpel, L., Linnan, L., Weiner, D., Bakken, S., Kaplan, C.P., Squiers, L., Fabrizio, C., & Fernandez, M. (2009). How we design feasibility studies. American Journal of Preventive Medicine, 36(5), 452–457. https://doi.org/10.1016/j.amepre.2009.02.002

Branfield, S., Hendricks, S., Julius, A., Mbigi, F., Moipei, G., & Msizi, N. (2015). Caregivers’ perceptions and experiences of the impact of stuttering on their children and themselves. Unpublished Honours Thesis. Cape Town, South Africa: University of Cape Town.

Currie, R.R., Seaton, S., & Wesley, F. (2009). Determining stakeholders for feasibility analysis. Annals of Tourism Research, 36(1), 41–63. https://doi.org/10.1016/j.annals.2008.10.002

De Freitas, S., Geben, C., Parusnath, P., Relleen, A., & Van den Berg, N. (2012). Changes in negative peer attitudes towards children who stutter after the administration of a Classroom Communication Resource. Unpublished Undergraduate Thesis. Cape Town: University of Cape Town, South Africa.

De Grass, J., Gessesse, H., Harrison, J., Naidoo, L., Sewpersad, A., & Vaggie, Z. (2010). Changes in peers’ attitudes towards learners who stutter after the administration of a classroom communication resource. Unpublished Undergraduate Thesis. Cape Town, South Africa: University of Cape Town.

Dijkstra, J.K., Lindenberg, S., & Veenstra, R. (2008). Beyond the class norm: Bullying behaviour of popular adolescents and its relation to peer acceptance and rejection. Journal of Abnormal Child Psychology, 36, 1289–1299. https://doi.org/10.1007/s10802-008-9251-7

Downs, S.H., & Black, N. (1998). The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. Journal of Epidemiology Community Health, 52, 377–384. https://doi.org/10.1136/jech.52.6.377

Evans, D. (2003). Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing, 12, 77–84. https://doi.org/10.1046/j.1365-2702.2003.00662.x

Farelo, M.A., Jassen, I., Lawrence, C., Mofokeng, K., Morris, S., Ndawonde, N., & Sibiya, Z. (2015). The communication experiences and perceptions of primary school children who stutter (aged 8–13). Unpublished Honours Thesis. Cape Town, South Africa: University of Cape Town.

Filies, S., Hartley, R., Kaplan, G.R., & Pettit, L. (2009). Teachers’ and learners’ responses towards a classroom resource about communication and communication difficulties. Unpublished Undergraduate Thesis. Cape Town, South Africa: University of Cape Town.

Fitzmaurice, G.M. (2003). Methods for handling dropouts in longitudinal clinical trials. Statistica Neerlandica, 57 (1), 75–99. https://doi.org/10.1111/1467-9574.00222

Foster, D. (2006). Theoretical and metatheoretical frames in inter-group psychology: Three completing perspectives. In K. Ratele (Ed.), Inter-group relations: South African perspectives (pp. 23–65). Cape Town: Juta & Co.

Galea, S., & Tracy, M. (2007). Participation rates in epidemiologic studies. Annals of Epidemiology, 17, 643–653. https://doi.org/10.1016/j.annepidem.2007.03.013

Hartge, P. (2006). Participation in population studies. Epidemiology, 17(3), 252–254. https://doi.org/10.1097/01.ede.0000209441.24307.92

Hobbs, A., Lewis, J., Kamedien, L., Sindi, A., Stijkel, K., & Stoto, S. (2016). Teacher’s perceptions and experiences of stuttering in Lower Quintile Schools in the Western Cape. Unpublished Undergraduate Thesis. Cape Town, South Africa: University of Cape Town.

Kathard, H., Walters, F., Frieslaar, K., Mhlongo, T., Rhoode, M., Shaboodien, R., Weidmann, J., Zimmerman, N., Zoetmulder, A., & Camroodien-Surve, F. (2014). Classroom intervention to change peers’ attitudes towards children who stutter: A feasibility study. South African Journal of Communication Disorders, 61(1), 1–11. https://doi.org/10.4102/sajcd.v61i1.80

Keyzer, J.F., Melnikow, J., Kuppermann, M., Birch, S., Kuenneth, C., Nuovo, J., Azari, R., Oto-Kent, D., & Rooney, M. (2005). Recruitment strategies for minority participation: Challenges and cost lessons from the POWER interview. Ethnicity & Disease, 15, 395–406. PMID: 16108298

Kingston, J. (2004). Conducting feasibility studies for knowledge based systems. Knowledge-Based Systems, 17, 157–164. https://doi.org/10.1016/j.knosys.2004.03.011

Lancaster, G.A., Dodd, S., & Williamson, P.R. (2002). Design and analysis of pilot studies: recommendations for good practice. Journal of Evaluation in Clinical Practice, 10(2), 307–312. https://doi.org/10.1111/j..2002.384.doc.x

Langevin, M. (1998). Teasing and bullying: Unacceptable behaviour: Field interventioning report – September, 1998. Unpublished Report. Edmonton: Institute for Stuttering Treatment & Research.

Langevin, M. (2009). The peer attitudes toward children who stutter scale: Reliability, known-groups validity, and negativity of elementary school-age children’s attitudes. Journal of Fluency Disorders, 34, 72–86. https://doi.org/10.1016/j.jfludis.2009.05.001

Langevin, M., Bortnick, K., Hammer, T., & Wiebe, E. (1998). Teasing/bullying experienced by children who stutter: Toward development of a questionnaire. Contemporary Issues in Communication Science and Disorders, 25, 12–24.

Langevin, M., Kleitman, S., Packman, A., & Onslow, M. (2009). The peer attitudes toward children who stutter (patcs) scale: An evaluation of validity, reliability and the negativity of attitudes. International Journal of Language & Communication Disorders, 44, 352–368. https://doi.org/10.1080/13682820802130533

Langevin, M., & Prasad, N.G.N. (2012). A stuttering education and bullying awareness and prevention resource: A feasibility study. Language, Speech, and Hearing Services in Schools, 43, 344–358. https://doi.org/10.1044/0161-1461(2012/11-0031)

Leon, A.C., Davis, L.L., & Kraemer, H.C. (2011). The role and interpretation of pilot studies in clinical research. Journal of Psychiatric Research, 45, 626–629. https://doi.org/10.1016/j.jpsychires.2010.10.008

Lundberg, I., Thakker, K.D., Hällström, T., & Forsell, Y. (2005). Determinants of non-participation, and the effects of non-participation on potential cause-effect relationships, in the PART study on mental disorders. Social Psychiatry and Psychiatric Epidemiology, 40, 475–483. https://doi.org/10.1007/s00127-005-0911-4

Merrell, K.W., Gueldner, B.A., Ross, S.W., & Isava, D.M. (2008). How effective are school bullying intervention programs? A meta-analysis of intervention research. School Psychology Quarterly, 23, 26–42. https://doi.org/10.1037/1045-3830.23.1.26

Morton, L.M., Cahill, J., & Hartge, P. (2005). Reporting participation in epidemiologic studies: A survey of practice. American Journal of Epidemiology, 163(3), 197–203. https://doi.org/10.1093/aje/kwj036

Murphy, W.P., Yaruss, J.S., & Quesal, R.W. (2007). Enhancing treatment for school-age children who stutter: II. Reducing bullying through role-playing and self-disclosure. Journal of Fluency Disorders, 32, 139–162. https://doi.org/10.1016/j.jfludis.2007.02.001

Oakley, A., Strange, V., Bonell, C., Allen, E., & Stephenson, J. (2006). Process evaluation in randomised controlled trials of complex interventions. BMJ, 332, 413–416.

Penn, C., Watermeyer, J., & Schie, K. (2009). Auditory disorders in a South African paediatric TBI population: Some preliminary data. International Journal of Audiology, 48, 135–143. https://doi.org/10.1080/14992020802635309

Shanyinde, M., Pickering, R.M., & Weatherall, M. (2011). Questions asked and answered in pilot and feasibility randomized controlled trials. BMC Medical Research Methodology, 11, 117. https://doi.org/10.1186/1471-2288-11-117

Sibbald, B., & Roland, M. (1998). Understanding controlled trials: Why are randomised controlled trials important? BMJ, 316, 201. https://doi.org/10.1136/bmj.316.7126.201

Swearer, S.M., Espelage, D.L., Vaillancourt, T., & Hymel, S. (2010). What can be done about school bullying? Linking research to educational practice. Educational Researcher, 39, 38–47. https://doi.org/10.3102/0013189X09357622

Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L.P., Robson, R., Thabane, M., Giangregorio, L., & Goldsmith, C.H. (2010). A tutorial on pilot studies: The what, why and how. BMC Medical Research Methodology, 10, 1. https://doi.org/10.1186/1471-2288-10-1

Toerien, M., Brookes, S.T., Metcalfe, C., de Salis, I., Tomlin, Z., Peters, T.J., Sterne, J., & Donovan, J.L. (2009). A review of reporting of participant recruitment and retention in RCTs in six major journals. Trials, 10, 52. https://doi.org/10.1186/1745-6215-10-52

Walters, F. (2014). Changes in peers’ attitudes towards children who stutter after the administration of the Classroom Communication Resource. Unpublished Master’s thesis. Cape Town, South Africa: University of Cape Town.

Williams, J.R. (2008). The Declaration of Helsinki and public health. Bulletin of the World Health Organization, 86, 650–652. https://doi.org/10.2471/BLT.08.050955


