Defining competencies in curriculum and instruction and developing a new competency model


Research model

The research employed the Delphi technique to establish competencies for CIDP and to propose a new competency model based on expert opinions. According to various scholars (Bordoloi et al. 2023; Brady, 2015; Fletcher and Marchildon, 2014; Garrod and Fyall, 2005; Reid, 2020; Wiersma and Jurs, 2005), the Delphi technique is often classified as a qualitative research method because of the subjective nature of data derived from expert opinions and the way these data are reported. Likert-type questionnaires containing the competencies were developed and used in the subsequent rounds. In each round, the experts provided feedback on the competencies, which were then refined based on their responses.

Data sources and study group

For participant selection in the Delphi technique, the purposive sampling method was preferred (Hasson et al. 2000; Lang, 1995). Experts were purposefully chosen using criterion sampling to determine CIDP competencies. The selection criteria for this study were designed to ensure the inclusion of experts with substantial and relevant experience in the field of CI. Graduating from a CIDP was fundamental, as it ensured that participants had attained an advanced level of academic knowledge and expertise specific to CI. These individuals possess a deep understanding of curriculum design, instructional strategies, and educational assessment, enabling them to provide informed insights into the competencies required for doctoral programmes in CI. Their firsthand experience with the doctoral process provides valuable perspectives on the challenges and requirements at this advanced level of education.

Working in the field of CI at a university with a CIDP ensured that participants were actively engaged in current educational practices, research, and policy development, offering practical perspectives that bridge the gap between theory and application. These professionals not only understand the theoretical underpinnings of CI but also have practical insights into how doctoral programmes should be structured and delivered. Additionally, the willingness to participate in the research was crucial for collecting high-quality data. This criterion ensured that participants were genuinely interested and motivated to provide thoughtful and candid responses, respecting the ethical principles of voluntary participation and informed consent. Collectively, these criteria enhanced the reliability and validity of the research findings, making these experts the most qualified to determine the competencies required at the doctoral level in CI (as illustrated in Fig. 1).

Fig. 1 The formation of the expert group.

Initially, the websites of universities in Türkiye offering Ph.D. programmes in CI were reviewed to gather academic CVs and contact information. By screening the CVs against the inclusion criteria, an expert pool of 268 academicians was identified. An email invitation outlining the research aim and methodology was sent to these experts. Thirty-four experts agreed to participate in the study; however, five of them did not respond to the Delphi questionnaires sent to them. The research therefore commenced with 29 experts. Owing to the withdrawal of one expert, the second and third rounds were conducted with 28 experts. The Delphi process, which lasted three rounds, was completed with the same group of experts. Table 1 provides the demographic details of the experts involved in the Delphi rounds.

Table 1 Demographic characteristics of the experts participating in the Delphi rounds.

According to Table 1, 29 experts participated in the Delphi rounds. Seventeen (58.6%) were female, and 12 (41.4%) were male. The most common academic titles among the participants were Associate Professor (44.8%) and Assistant Professor (37.9%). The age group with the most participants was 35–39 years (34.5%), followed by 45–49 years (24.1%); the least represented age group was 40–44 years (3.4%). Most participants had 6–11 years of professional experience (31%), and a substantial number had 24 or more years of experience (24.1%). Regarding experience in CI, the majority had 6–11 years (44.8%), with only a few having 0–5 years (6.9%). Finally, 16 experts (55.2%) had prior experience in defining competencies.

Data collection process and tools

Considering the experts’ competencies, professional experience, and academic titles, consulting their opinions independently of time and place was deemed the most functional and rational approach. Therefore, the e-Delphi technique was employed, which shares the characteristics of the classical Delphi method but allows data to be collected via email and online questionnaires (Sheridan, 2005; Topper, 2006). Google Forms and JotForm were used to gather data throughout the research. Table 2 summarises the data collection process for each Delphi round.

Table 2 The data collection process for the Delphi rounds.

The first round of the Delphi questionnaire was designed to assess CIDP competencies and consisted of six questions, including two main questions and four probes. The primary questions aimed to gather comprehensive insights from experts in the field. The first question asked, ‘What competencies do you think a student who graduated with a doctoral degree in CI should have?’ Experts were encouraged to list as many competencies as they deemed relevant. They were also prompted to provide detailed competencies related to knowledge, skills, competence, attitudes, and values through additional probe questions.

The second question inquired, ‘In which competency domains do you think competencies can be expressed differently from the NQF-HETR?’ Experts were invited to suggest domains that might differ from those defined in the European Qualifications Framework (EQF, 2017), the NQF-HETR (2011), and the General Competencies for Teaching Profession (GCTP, 2017). The responses collected from the first round were then used to develop 7-point Likert-type questions. These refined questionnaires were subsequently used to gather data in the second and third rounds of the Delphi process.

Data analysis

Delphi 1st round data analysis process

This research focused on developing a conceptual framework of competencies using the Delphi technique. The Delphi technique, which begins with open-ended questions, aims to establish an inductive framework through content analysis (Powell, 2003). In the first round, experts responded to six open-ended questions. These responses were analysed in two stages, combining descriptive analysis and content analysis techniques, as described below. The MAXQDA 2020 qualitative data analysis programme was employed to ensure a systematic process.

In the first stage, a descriptive analysis was conducted to categorise the competencies identified by the experts. This form of analysis organises data into themes consistent with the research questions or a pre-determined conceptual framework (Patton, 2015). During the descriptive analysis process, competencies were classified into the domains of knowledge, skills, competence, attitudes, and values, as specified in the EQF, NQF-HETR, and GCTP.

In the second stage, content analysis was carried out to reveal meaningful patterns within the identified competencies. This analysis examined the data in depth to identify original concepts, categories, and themes (Patton, 2015). The experts’ responses were analysed and separated into sub-codes that accurately reflected the semantic meaning of the words used. These sub-codes were grouped into broader codes (competencies) based on semantic similarity. Subsequently, these codes were categorised into sub-themes (sub-competencies).
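For readers unfamiliar with this kind of hierarchical coding, the short Python sketch below illustrates how the resulting structure could be represented; the theme, sub-theme, code, and sub-code labels are hypothetical and are not drawn from the study data.

```python
# Purely illustrative (hypothetical) excerpt of a coding hierarchy: sub-codes
# drawn from expert responses are grouped into codes (competencies), which are
# grouped into sub-themes (sub-competencies) within a competency domain (theme).
coding_frame = {
    "Skills": {                                 # theme: competency domain
        "Curriculum development": {             # sub-theme: sub-competency
            "Designs a curriculum": [           # code: competency
                "writes measurable learning outcomes",   # sub-codes
                "selects and organises content",
            ],
        },
    },
}

# Count how many sub-codes support each competency.
for theme, sub_themes in coding_frame.items():
    for sub_theme, codes in sub_themes.items():
        for code, sub_codes in codes.items():
            print(f"{theme} > {sub_theme} > {code}: {len(sub_codes)} sub-codes")
```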

Delphi 2nd and 3rd rounds data analysis

In the second and third Delphi rounds, Likert-type questions were used to construct the survey questionnaires. These questionnaires were designed to refine the competencies and included a text field for feedback on each competency in both rounds. Additionally, the third-round questionnaire fed information back to the experts, presenting their stated levels of agreement with the competencies from the previous round alongside the statistical information derived from the second round.

To analyse the data from the Delphi questionnaires, measures of central tendency (mode, median, and mean) and dispersion (interquartile range [IQR] and standard deviation) were employed to assess the judgements and general tendencies of the expert group (Glenn and Gordon, 2009; Hasson et al. 2000; Rowe et al. 1991). Furthermore, the percentage of agreement among experts on each competency was computed. Given that the Likert-type questions produced ordinal data, measures of central tendency and dispersion were appropriate for the analysis (Fish and Busby, 2005). Relying on the median and the IQR helped mitigate the impact of outliers, which could otherwise skew the results (Giannarou and Zervas, 2014; von der Gracht, 2012; Mullen, 2003).
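As a concrete illustration of this step, the following Python sketch computes the reported statistics and the agreement percentage for a single competency; the 28 ratings shown are hypothetical, not actual study data.

```python
import statistics

def summarise_ratings(ratings):
    """Summarise one competency's 7-point Likert ratings from the expert panel."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # lower and upper quartiles
    return {
        "mode": statistics.mode(ratings),
        "median": statistics.median(ratings),
        "mean": round(statistics.mean(ratings), 2),
        "sd": round(statistics.stdev(ratings), 2),
        "iqr": q3 - q1,
        # Share of experts rating the competency '6' or '7' (agreement percentage)
        "pct_agreement": round(100 * sum(r >= 6 for r in ratings) / len(ratings), 1),
    }

# Hypothetical ratings from 28 experts for one competency
ratings = [7, 6, 7, 6, 6, 7, 5, 6, 7, 7, 6, 6, 7, 6,
           6, 7, 6, 5, 7, 6, 6, 7, 6, 6, 7, 7, 6, 6]
print(summarise_ratings(ratings))
```

Here the IQR is derived from sample quartiles; the exact quartile convention is an implementation detail and may differ slightly from the one used in the study.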

After selecting the consensus criterion, a consensus level was determined based on the characteristics of the participant group, the research topic, the purpose, and the results (von der Gracht, 2012; Hasson et al. 2000; Powell, 2003). Given that the research focused on identifying the competencies for CIDP, the number of competencies defined by the experts played a significant role in determining the criterion and level of consensus.

A high level of consensus on the determined criteria and competencies was sought. Accordingly, the consensus thresholds were set at a median of six and an IQR of one; an IQR equal to or less than one indicates a high level of agreement on a competency (Rayens and Hahn, 2000), and a consensus level of 80% is commonly regarded as sufficient (Hohmann et al. 2020; Stewart et al. 2017). In addition, a competency was accepted as having reached consensus when at least 80% of the experts rated it ‘6’ or ‘7’ on the 7-point Likert-type questionnaire in the second round, and when at least 90% did so in the third round. Table 3 presents a detailed overview of the criteria and consensus levels used in each round of the Delphi process.

Table 3 Statistical analysis and consensus criteria of the Delphi rounds.
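To show how these thresholds combine, a minimal Python sketch of the consensus check is given below; it assumes that setting the median to six means requiring a median of at least six, and the ratings used in the example are hypothetical.

```python
import statistics

def reaches_consensus(ratings, round_no):
    """Check one competency against the study's stated criteria: median of at
    least 6, IQR of at most 1, and '6'/'7' responses from at least 80% of the
    experts in the second round or 90% in the third round."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    agreement = sum(r >= 6 for r in ratings) / len(ratings)
    threshold = 0.80 if round_no == 2 else 0.90
    return (statistics.median(ratings) >= 6
            and (q3 - q1) <= 1
            and agreement >= threshold)

# Hypothetical second-round ratings for one competency
print(reaches_consensus([7, 6, 6, 7, 6, 5, 7, 6, 6, 7, 6, 6, 7, 6], round_no=2))
```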

Credibility and transferability of the research

Ensuring the credibility and transferability of the Delphi study was paramount, and several rigorous steps were taken to address these aspects comprehensively. Prior to initiating the research, a thorough literature review was conducted, providing a solid foundation for the study and grounding it in existing knowledge. This review ensured that the methodological processes were clearly explained and justified.

The selection of experts was based on well-defined criteria to ensure that participants possessed substantial knowledge and experience in the field of CI. According to the literature, Delphi studies should involve more than ten participants to ensure content validity (Rayens and Hahn, 2000). In this research, 29 experts participated in the first round, and 28 continued in the subsequent rounds. This deliberate and methodical selection process enhanced the credibility of the findings by ensuring that the input was both informed and relevant.

Before the main Delphi surveys, a pilot study was conducted with experts who met the research participation criteria. This pilot study, involving five experts, aimed to assess the Delphi process, ensuring the clarity and relevance of the questions and the overall feasibility of the data collection procedures. Feedback from the pilot study led to minor adjustments in the questionnaires and data collection methods, refining the process and addressing potential issues early in the research. This step significantly enhanced the study’s reliability and credibility.

The iterative rounds of the Delphi method, with feedback provided to the experts in subsequent rounds, played a crucial role in enhancing the trustworthiness of the findings. This process allowed participants to reconsider their positions based on group feedback, facilitating a deeper consensus and a more robust set of findings. To ensure a comprehensive and unbiased analysis, researcher triangulation was employed: analyses and interpretations were conducted independently and in parallel by the researchers, with final decisions made collaboratively. This approach incorporated multiple perspectives, enhancing the study’s reliability and credibility.

Robust qualitative measures, including thematic analysis and triangulation, were employed to ensure that the consensus was based on a credible analysis of expert opinions. Descriptive and content analysis techniques were performed using MAXQDA, ensuring the confirmability of the analyses. The research process for generating themes, subthemes, and codes was meticulously documented and supported by expert feedback. Thematic analysis helped identify common themes and patterns, while triangulation ensured the reliability of the findings by cross-verifying data from multiple sources.

Throughout the study, transparency was maintained by documenting all steps of the research process and providing detailed descriptions of the data collection and analysis procedures. Reflexivity was practised by acknowledging the researchers’ potential biases and their influence on the research process, further enhancing the study’s credibility. To further ensure credibility, member checking was conducted: the findings were shared with the experts for their feedback and validation. This step helped verify the accuracy of the interpretations and ensured that the findings accurately reflected the experts’ views. By implementing these rigorous steps, the study ensured its credibility and transferability, providing a robust and reliable framework for understanding and developing competencies in the field of CI.

Limitations of the research

While the Delphi technique offers several advantages, it also has limitations. One potential issue is the risk of groupthink, where experts align their opinions with the perceived consensus. To minimise this, the study ensured anonymity and allowed independent revision of opinions. Expert selection can also introduce bias, as chosen experts might share similar viewpoints. Despite rigorous criteria, this can limit the diversity of perspectives. The reliance on expert judgement makes the findings inherently subjective, though the structured Delphi process helps mitigate this. The iterative nature of Delphi can be time-consuming and lead to participant fatigue, affecting engagement in later rounds. Additionally, the initial open-ended questions generate large volumes of qualitative data, which can be challenging to analyse and synthesise objectively. Finally, the online format of the e-Delphi method, while useful for reaching dispersed experts, may limit participation from those less familiar with online tools. Despite these challenges, the Delphi technique remains valuable for achieving consensus on complex issues, and the study’s measures effectively address these limitations.
