Curriculum Evaluation Questionnaire According to Participant Opinions

This research deals with the development of a questionnaire to evaluate curricula based on participant views. The research aims to determine the sub-factors, factor loadings, item-total correlations and alpha reliability coefficient of the developed questionnaire. The study involves curricula with a wide target audience and aims to evaluate various dimensions of curricula based on participant views through this questionnaire. For the pre-application and item analysis, a study was conducted with 178 participants who completed an in-service curriculum of a state institution. Data were interpreted through frequencies, percentages and arithmetic averages. The reliability of the questionnaire was assessed with the alpha internal consistency coefficient, and the results showed that the developed questionnaire was usable. The findings show that the developed questionnaire is a reliable and valid measurement tool: the alpha internal consistency coefficient was high at 0.88, which supports that the measurement of the questionnaire is reliable. The research thus contributes to the evaluation of curricula according to participants' views.


Introduction
A questionnaire is a list of questions to be read and answered directly by informants. It is an information-gathering tool used by a researcher to collect information about the relationships of variables, and it consists of questions or items that respondents can read and understand (Taherdoost, 2019; Yücedağ, 2019). Questionnaires are valuable tools for measuring and analyzing variables in a systematic way. In the context of curriculum evaluation, questionnaires play an important role in assessing and improving the effectiveness of curricula. Curriculum evaluation is used to understand the extent to which a curriculum achieves its objectives, which is a critical step to improve the quality of education. Questionnaires provide valuable feedback about participants' curriculum experiences, supplying important information to curriculum developers, educators and administrators. Evaluations based on participants' opinions can help curricula become more student-centered and improve educational processes. Therefore, the role of surveys in curriculum evaluation is of great importance.
Survey results are used in curriculum evaluation in multiple ways. First, they provide baseline data to measure curriculum effectiveness and assess how well objectives have been achieved. They are also important for assessing participants' satisfaction and for improving the curriculum. Survey results can be used to identify curriculum changes, such as changes to content, methods or materials. They are also used to make decisions about the future of curricula and to direct resources, and they play a critical role in long-term performance monitoring and ensuring student focus. As a result, surveys are an indispensable source of data for improving the quality of curricula, better responding to participant needs and improving educational processes. In the process of curriculum evaluation, they often contain information on which decisions can be made to accept, modify or eliminate curriculum-based educational resources. With this information, curriculum development experts receive feedback on continuing the curriculum, revising it or moving to a new stage. At the end of the evaluation, a comparison is made between the results obtained and the characteristics indicated by the objectives, and clues are obtained for the curricula to be developed.
Evaluation results shed light on the curriculum to be developed and guide every stage of curriculum development. Evaluation therefore has an important place in curriculum development studies. One of the most frequently mentioned issues in curriculum development is continuity: all stages of the curriculum process (content, teaching methods and techniques, tools and materials used, targets and target behaviors) should be taken into consideration, targets and target behaviors should be determined in line with needs, the relationship between all stages should be continuously evaluated, and the curriculum should be improved accordingly. A curriculum may seem complete and perfect before it is implemented. However, in the light of the evaluations made during and as a result of the implementation, the missing, weak and non-compliant aspects of the curriculum can be determined. The curriculum should be developed in line with the data obtained from these evaluations. In this way, in the twenty-first century, where change is very rapid, various developments and changes will be reflected in the curriculum, which has a dynamic structure. The more the evaluation results feed back into correcting and determining the objectives of the curriculum to be developed, reorganizing the content, and reviewing and redetermining the curriculum principles, the greater the success achieved. Educational institutions responsible for staff training are successful to the extent that they develop in-service curricula in the light of evaluation results. A decision must be made to determine the points to be improved and changed in the curriculum. The data for this decision-making process are obtained through evaluation results using scientific methods, criteria and scales (Karacaoğlu, 2020; Xuefeng & Yanhua, 2020). Decisions to develop and change curricula should be supported by evaluation results based on scientific methods, because these data provide an objective and reliable basis so that the right steps can be taken to identify and improve weaknesses and strengths in education. Data obtained using scientific criteria and scales provide decision makers with in-depth and reliable information about curriculum effectiveness, participant satisfaction, and the degree of achievement of learning objectives, which in turn promotes continuous improvement in education.
Evaluation of curricula is important to determine the effectiveness and efficiency of the curriculum. Evaluation determines the extent to which the curriculum achieves its objectives and responds to the intended outcomes (Tintiangco-Cubales et al., 2020). Evaluation results, especially those based on participant views, provide important feedback for curriculum improvement by assessing the impact and success of the curriculum. Evaluation results are used to identify shortcomings and areas for improvement of the curriculum. Feedback based on participants' opinions reveals the strengths and weaknesses of the curriculum and allows corrective measures to be taken in these areas. Thus, the curriculum is improved to be more effective, efficient and realistically responsive to participant needs. Participants' opinions and feedback reflect whether the curriculum meets their expectations and their level of satisfaction. Participant satisfaction can be seen as very important for learning motivation.
Curriculum evaluation provides feedback on the curriculum's targeted outcomes, content, learning-teaching process and assessment and evaluation dimension according to participant views with an accurate and scientifically prepared tool. This feedback enables the curriculum to better respond to the needs of the participants by assessing the relevance and appropriateness of the curriculum. Relevance is extremely important in the curriculum development process (Handelzalts, 2019; Nduna, 2012). Relevance refers to the degree of relevance of a topic or information to a situation, problem or purpose, and to the contextual relevance of the curriculum content. It is important to evaluate curricula to determine how relevant, valid, meaningful and purposeful the content is (Das & Krishnan, 2009; Ntshwarang et al., 2021; Strong et al., 1999). As can be understood, relevance is a criterion related to the functionality, appropriateness, applicability and usability of the content in the curriculum. The relevance of the curriculum evaluates whether the curriculum is appropriate for the target audience, whether it responds to the needs of the participants and whether it is designed to achieve the intended goals. Assessing the relevance of curricula is critical not only to meet current participant needs, but also to shape future training planning and establish sustainable training approaches. This fosters continuous improvement in training and increases the capacity to adapt to changing training needs so that curricula can meet future challenges.
Evaluation of curricula is a necessary step for curriculum sustainability and future planning (Ludvik, 2023; Nouraey et al., 2020). Evaluation results identify the strengths of the curriculum so that these areas can be built on. Furthermore, measures are taken to improve the weaknesses of the curriculum and contribute to better planning of future curricula. Consequently, evaluation of curricula and feedback based on participant opinions are important to determine the effectiveness, satisfaction and relevance of the curriculum. This evaluation process is essential for developing and improving curricula, responding better to participants' expectations and ensuring a sustainable training environment. A questionnaire developed for quantitative research into the effectiveness of the evaluation process should question the stages of the curriculum development process. Moreover, Altıntaş and Görgen (2021) found that curriculum development and evaluation studies in basic education in Turkey are not scientific and systematic, and that the models used in curriculum development studies are not suitable for the needs of society. They also emphasized that there are problems in forming curriculum development commissions in Turkey, that the dynamism that should exist between the elements of the curriculum is not ensured, that the results of curriculum evaluation studies are rarely taken into account, that curriculum evaluation studies are often not conducted at all (and those that are conducted can hardly be considered curriculum evaluation studies), and that appropriate approaches, methods and techniques are not used in such studies.
In quantitative research methodology, the empirical research method is considered important in the social and educational sciences. It involves developing a model to find the relationship between different variables identified in a problem. By developing and testing hypotheses, the model can be examined and improved to explain real-world phenomena. This typically involves using a questionnaire based on quantitative research to collect data to identify and relate the variables present in the problem. Designing and developing an effective and efficient questionnaire for collecting research data, and especially for evaluating curricula based on participant views, is a relatively difficult task (Aithal & Aithal, 2020). A questionnaire study that also contributes to curriculum development, improves evaluation processes and increases participant satisfaction will therefore contribute to the field. In this context, the design and implementation of a questionnaire developed on the basis of participant views, in order to increase the contribution to the curriculum development process, was taken as the problem of this research. In light of this problem, the study aimed to develop a questionnaire to evaluate curricula according to the views of the participants. The other objectives of the study were to determine the sub-factors, factor loadings, item-total correlations and alpha reliability coefficient of the questionnaire developed for this purpose.

Materials and Methods
Information on the participants of the study, the stages followed in the development of the questionnaire and the analysis of the collected data are given in this section.

Participants
This research is based on the survey model. The target population of the study, which aims to develop a questionnaire for the evaluation of curricula according to participant views, is all participants who receive training in any curriculum; by using the questionnaire, all dimensions of implemented curricula can be assessed on the basis of participants' opinions. The pre-application and item analysis of the questionnaire was carried out with 178 participants who completed an in-service curriculum in a state institution.

Questionnaire Development
Four stages were followed in the questionnaire development process (Büyüköztürk, 2005):

1. Defining the problem
2. Item writing
3. Obtaining expert opinion
4. Pre-application and item analysis
The curriculum evaluation questionnaire was developed according to these scientific stages. While developing the questionnaire, its purpose was determined first. Aims such as evaluating the effectiveness of the curriculum, participant satisfaction and the appropriateness of the curriculum were deemed appropriate for the purpose of the questionnaire. Then, while creating the evaluation questionnaire, different question categories were determined, such as curriculum objectives, curriculum content, training methods, presentation quality and achievement of curriculum objectives. The appropriate question type was then determined and Likert-scale questions were prepared. These questions were ordered by taking the opinions of a group of experts into account, in line with the procedures to be carried out in the curriculum development process. Thus, content validity was verified. The choice of scale was considered important for data analysis and interpretation. By using a scale between 1 and 5, it was intended to enable the participants to evaluate the curriculum. Each question in the questionnaire was clearly and concisely worded so that participants would better understand the questions and give accurate answers. For this reason, attention was paid to using clear and understandable questions instead of complex or multifaceted ones. The flow of the questionnaire and the order of the questions were also considered important: arranging the questions in a logical order was expected to ensure consistency in the participants' responses. General questions were therefore included at the beginning, followed by more specific questions. Before finalizing the questionnaire, it was important to pre-test it with a pre-selected group of participants so that its clarity, accuracy and usability could be assessed, and improvements could then be made based on their feedback. With 178 participants, the pre-application of the curriculum evaluation questionnaire was carried out, factor and item analysis was performed, and the level of reliability was determined.

Results
In the study conducted by following the stages of the questionnaire development process, the questionnaire was scaled with a five-point scale consisting of the options "Definitely Yes", "Quite", "Partially", "Very Little" and "Absolutely Not". Expert opinion was obtained about the validity of the questionnaire, it was administered to 178 participants in an in-service curriculum, factor and item analysis was completed, and the reliability level was determined.
Reliability and validity studies were conducted using item analysis and exploratory factor analysis (Rattray & Jones, 2007). The "alpha internal consistency coefficient" test was used for the reliability of the questionnaire (Baykul, 2000, p. 149); the alpha reliability coefficient (α) was found to be 0.88. "Factor analysis" tests were conducted for the construct validity of the sub-factors and "item-total correlation" tests for item discrimination (Büyüköztürk, 2002, p. 117). As a result of the factor analysis, 28 of the 30 items in the trial form were found to be appropriate, and two items (item 15 and item 21) were removed from the measurement tool because they measured different constructs. The total variance explained in these dimensions is 64.3%. Item-total correlations ranged between 0.45 and 0.80. Table 1 shows the factor loadings and item-total correlation values of each item in the questionnaire. The findings obtained from the preliminary administration of the questionnaire were analyzed in terms of factor loadings and item-total correlations. Factor loading values indicate how strongly each item is related to the factors identified by the questionnaire.
The values are usually between 0 and 1, and higher values indicate that the item has a strong relationship with the factor. For example, a high factor loading value of 0.78 indicates a strong relationship between the item and the factor.
Item-total correlations measure how well each item correlates with the overall outcome of the questionnaire. These values are usually between 0 and 1, with higher values indicating a good correlation between the item and the overall outcome. For example, a high item-total correlation of 0.80 indicates a high agreement between the item and the overall outcome.
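As an illustrative sketch, the item-total correlations described above can be computed from a respondents-by-items response matrix. The data and helper function below are hypothetical, not the study's actual dataset:

```python
import numpy as np

def item_total_correlations(responses):
    """Corrected item-total correlation: each item is correlated with the
    total score of the *remaining* items (rows = respondents, columns = items)."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    totals = responses.sum(axis=1)
    corrs = []
    for i in range(n_items):
        rest = totals - responses[:, i]  # exclude the item itself
        r = np.corrcoef(responses[:, i], rest)[0, 1]
        corrs.append(r)
    return corrs

# Toy example: five respondents answering three 5-point items
data = [[5, 4, 5],
        [4, 4, 4],
        [2, 3, 2],
        [1, 2, 1],
        [3, 3, 3]]
print(item_total_correlations(data))
```

Items whose correlation falls well below the rest are candidates for revision or removal, which mirrors the decision process reported for items 15 and 21.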
When the results are analyzed, we can say that items with high factor loading values (e.g., 0.82) and high item-total correlations (e.g., 0.80) are suitable for the purposes of the questionnaire and contribute reliably. However, items with low factor loadings (e.g., 0.30, item 26) or low item-total correlations (e.g., 0.45, item 1) may contribute only weakly to the objectives of the questionnaire. In light of the findings, item 15 and item 21 were removed from the questionnaire, and item 26 and item 1 were revised. Factor loadings and item-total correlations are important measures for assessing the item quality of the questionnaire, and these findings can help in making the adjustments needed to improve its reliability and validity.
The explained variance indicates how much of the total variance is explained by the variables measured by the questionnaire. An explained variance of 64.3% indicates that the questionnaire collected highly meaningful information about the variables measured. This indicates that the questionnaire can successfully explain a large share of the variation and is a powerful tool that can serve curriculum evaluation purposes.
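As a hedged sketch (not the study's actual computation), the share of variance explained by the retained factors can be approximated from the eigenvalues of the inter-item correlation matrix; the toy data below are hypothetical:

```python
import numpy as np

def explained_variance_pct(responses, n_factors):
    """Percentage of total variance captured by the first n_factors
    principal components of the inter-item correlation matrix
    (rows = respondents, columns = items). A common proxy for the
    'total variance explained' figure reported in factor analysis."""
    X = np.asarray(responses, dtype=float)
    corr = np.corrcoef(X, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
    return 100.0 * eigvals[:n_factors].sum() / eigvals.sum()

# Hypothetical toy data: two perfectly correlated items -> one factor
toy = [[1, 2], [2, 4], [3, 6], [4, 8]]
print(explained_variance_pct(toy, 1))
```

In practice, a dedicated factor-analysis routine with rotation would be used; this sketch only illustrates where a figure such as 64.3% comes from.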
The alpha reliability coefficient is used to assess the internal consistency and reliability of a measurement tool. It takes a value between 0 and 1, with a higher value indicating a more reliable measurement tool. A value of 0.88 indicates that the internal consistency of the questionnaire is quite high. This means that there is a high level of consistency and reliability between the answers given by the respondents.
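The alpha coefficient itself follows a simple formula, α = (k/(k−1))·(1 − Σ item variances / variance of total scores). A minimal sketch with hypothetical response data:

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for a respondents-by-items matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(responses, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)      # per-item sample variance
    total_var = X.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5 respondents x 3 items on a 1-5 scale
data = [[5, 4, 5],
        [4, 4, 4],
        [2, 3, 2],
        [1, 2, 1],
        [3, 3, 3]]
print(round(cronbach_alpha(data), 2))  # -> 0.96
```

When respondents answer the items consistently, as in this toy matrix, alpha approaches 1; the study's value of 0.88 falls in the range conventionally read as high internal consistency.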
The findings indicate that the questionnaire is an effective tool for evaluating curricula and can be used in curriculum development processes. The explained variance and the alpha internal consistency coefficient indicate that the questionnaire has a strong foundation and can therefore be used as a reliable tool to support changes and improvements in curricula.
The study developed and validated a questionnaire for curriculum evaluation using a five-point scale. Expert opinions on validity were obtained, and the questionnaire was administered to 178 participants in an in-service curriculum. Reliability and validity were assessed through item analysis and exploratory factor analysis. The alpha internal consistency coefficient was 0.88, indicating high reliability. The questionnaire measured nine sub-factors, and factor and item analysis led to the removal of two items. Factor loadings and item-total correlations were analyzed, guiding revisions. The questionnaire explained 64.3% of the variance, signifying its effectiveness in collecting meaningful information. The high alpha reliability coefficient affirmed the tool's internal consistency, supporting its efficacy for curriculum evaluation and development.

Discussion
The results of the research are discussed under the headings of evaluation of the findings, implications of the research findings for educational practices, limitations of the study and future research.
This study aimed to develop a questionnaire designed to evaluate curricula based on participant views. The results show that the developed questionnaire is a reliable and valid measurement tool. The alpha internal consistency coefficient was high at 0.88, which supports that the measurement of the questionnaire is reliable; Cronbach's alpha internal consistency coefficient is expected to be high (Goto et al., 2011). Factor loadings and item-total correlations also indicated that the design of the questionnaire was appropriate for its purposes. Ferrando et al. (2023) propose an item response theory index of item efficiency and a unified approach for selecting the most effective and problem-free subset of items, noting that the boundaries of its appropriateness zones are obtained from extensive simulation of conditions that mimic those found in practice. Such an index can be used to assess how well each item represents the variable measured by the questionnaire. The values obtained in this research show that the items of the questionnaire are appropriate for curriculum evaluation and accurately reflect the opinions of the participants. The research emphasizes the importance of evaluating curricula based on participants' opinions. Participants' feedback gives curriculum developers and educators the opportunity to understand and improve the strengths and weaknesses of curricula, which can contribute to improving the quality of education.
The findings of this study can make important contributions to educational practices. First of all, the developed questionnaire can be considered a powerful tool for evaluating the effectiveness of curricula. Based on the survey results, curriculum developers can improve their curricula and respond better to the needs of the participants. We also believe that this type of research can improve the quality of research and practice in the field of education. Evaluations based on participants' opinions can help curricula become more student-oriented and improve educational processes. Richard et al. (2019) provide a good example of this with a student-designed and student-led leadership curriculum. In that study, participants showed high satisfaction with the training, participants' evaluations were taken into account, and measurable learning was achieved. Karacaoğlu (2018) evaluated an online curriculum prepared for pre-service teachers preparing for the central exam according to the opinions of the instructors and participants; the strengths and weaknesses of the curriculum were determined and some suggestions were made based on the findings. The results of both studies draw attention to the potential of the developed questionnaire to increase the effectiveness of curricula and encourage participant-oriented approaches.
The developed questionnaire offers versatility in assessing not only traditional curricula but also online curricula, aligning with practices observed in studies by Conroy et al. (2020), Mailizar et al. (2020), Martin et al. (2019), and Schlenz et al. (2020).
Additionally, the questionnaire's items can serve as a foundation for constructing semi-structured interview questions, a method applied by Gani et al. (2020). This adaptability underscores the questionnaire's potential in diverse educational research contexts.
This study has limitations. First of all, it is limited to participants who completed an in-service curriculum in a specific governmental institution. Therefore, more studies are needed on the general validity of the questionnaire and its validity for different curricula. Haidet et al. (2005), in their study on determining the patient-centeredness of implicit curricula in medical faculties, likewise emphasize that further studies are needed to establish the validity of a newly developed and validated measure. Research of this kind can always be improved by administering the instrument to a larger population. Different studies should also be conducted on the adaptability of the questionnaire to different age groups and educational levels.
Nevertheless, this study can be considered a basis for future research. There is a need for further development and testing of such instruments that can be used to evaluate the effectiveness of curricula. Further research is also needed to better understand the role of evaluations based on participant views in improving the quality of education. Future studies in this area may lead to better practices in the field of education.
The study discusses findings, implications for education, limitations, and future research. The developed questionnaire, with a high alpha consistency coefficient (0.88), proves reliable for curriculum evaluation. Factor loadings and item-total correlations affirm its appropriateness. The study underscores the significance of participant feedback for improving curricula and enhancing education quality. The questionnaire contributes to student-oriented curricula and effective educational processes.
Limitations include a specific participant group, urging broader studies for general validity and applicability to diverse curricula.Future research should explore adaptability across age groups and educational levels, fostering continued instrument development for curriculum effectiveness assessment and improving education quality.
The research successfully developed a reliable questionnaire for curriculum evaluation based on participant perspectives, highlighting its potential to enhance educational practices. The emphasis on participant feedback underscores the tool's value in improving curriculum strengths and weaknesses, contributing to overall educational quality. The study's contributions extend to both traditional and online curricula evaluation, aligning with contemporary educational research trends. Despite its merits, limitations exist, notably the study's focus on a specific institution and curriculum type. Further research should explore the questionnaire's general validity across diverse curricula and participant groups, ensuring broader applicability. Addressing these limitations can enhance the questionnaire's robustness and contribute to advancements in educational research and practice.

Conclusion
The data obtained through the questionnaire, which can be used to determine the effectiveness of a curriculum according to the views of the participants, can be interpreted in the light of frequencies, percentages and arithmetic averages.
Frequencies, percentages and arithmetic averages can be calculated for each item in the questionnaire. In addition, a single arithmetic mean can be found and interpreted for each sub-factor, in other words for each element of the curriculum. To obtain a single mean for a sub-factor that can be interpreted on the 1-5 scale, the sum of the item means is divided by the number of items. A single arithmetic mean can likewise be found and interpreted across all items of the questionnaire to determine the effectiveness of the entire curriculum. As a result of the alpha internal consistency coefficient test, the reliability coefficient of the questionnaire (α) was found to be 0.88.
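As a small sketch with hypothetical numbers, the sub-factor mean on the original 1-5 scale is simply the sum of the item means divided by the number of items:

```python
def subfactor_mean(item_means):
    """Mean score of a sub-factor on the original 1-5 scale:
    sum of the item means divided by the number of items."""
    return sum(item_means) / len(item_means)

# Hypothetical sub-factor with three items
print(subfactor_mean([4.2, 3.8, 4.0]))  # result stays on the 1-5 scale
```

The same helper applied to all item means of the questionnaire yields the overall effectiveness score for the entire curriculum.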
The refined version of the questionnaire, which achieved a cumulative explained variance of 64.3%, is presented in Table 2 on the last page of the article.The items removed from the questionnaire were measuring different constructs and the items in the final version of the questionnaire exhibited item-total correlations ranging from 0.45 to 0.80.The removal of these items was effective in increasing the precision and consistency of the questionnaire.
In order to ensure that the arithmetic averages found show the level of competence, the limits of each level in the five-point scale used were determined in terms of points. Based on the idea that the intervals in the scale are equal (a width of 0.80 on the 1-5 range), the limits found for the options are as follows: Absolutely Not (1.00-1.80), Very Little (1.81-2.60), Partially (2.61-3.40), Quite (3.41-4.20), Definitely Yes (4.21-5.00).

The results of the study show that the questionnaire, which aims to evaluate curricula based on participants' opinions, is a reliable and usable measurement tool. The alpha internal consistency coefficient of the questionnaire was 0.88, which confirms the reliability of the measurements. This means that the questionnaire can produce consistent results in different use cases.
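The equal-interval limits can be generated programmatically. This sketch assumes a width of (5−1)/5 = 0.80 per option, with lower bounds after the first nudged by 0.01 so that adjacent bands do not overlap (the labels are the questionnaire's own options):

```python
def likert_bands(low=1.0, high=5.0, n_options=5):
    """Equal-width interpretation bands for an n-option scale.
    Width = (high - low) / n_options (0.80 for a five-option 1-5 scale)."""
    width = (high - low) / n_options
    bands = []
    for i in range(n_options):
        lo = low if i == 0 else round(low + i * width + 0.01, 2)
        hi = round(low + (i + 1) * width, 2)
        bands.append((lo, hi))
    return bands

labels = ["Absolutely Not", "Very Little", "Partially", "Quite", "Definitely Yes"]
for label, (lo, hi) in zip(labels, likert_bands()):
    print(f"{label}: {lo:.2f}-{hi:.2f}")
```

A sub-factor mean is then interpreted by locating it in the band that contains it.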
Moreover, the factor loadings and item-total correlations indicate that the sub-factors and items of the questionnaire were designed in accordance with their purpose. This supports that the questionnaire is effective in measuring different aspects of curricula.
The research highlights that such tools can play an important role in evaluating curricula based on participant views. By using this questionnaire, educators and curriculum developers can better understand their curricula and improve them based on participants' feedback. This has the potential to improve the quality of education.
In conclusion, this research contributes to the development of a reliable and valid measurement tool that can be used to evaluate the effectiveness of curricula. This tool can be an important resource for practitioners in the field of education and can help to continuously improve curricula based on participant views.
The study focuses on evaluating curricula through a developed questionnaire, interpreted using frequencies, percentages, and arithmetic averages. The reliability of the questionnaire, with a high alpha internal consistency coefficient of 0.88, affirms its consistency in diverse contexts. The questionnaire, demonstrating a total variance explained of 64.3%, is deemed a reliable and usable tool for curriculum assessment. Utilizing a five-point scale, the questionnaire effectively measures participants' perspectives on various curriculum dimensions. The research underscores the tool's role in enhancing curriculum quality by incorporating participant feedback, emphasizing its significance in educational improvement. In conclusion, this research contributes a reliable measurement tool for assessing curriculum effectiveness, beneficial for education practitioners striving for continuous improvement based on participant insights.

The questionnaire measured nine different sub-factors, including needs assessment and pre-curriculum activities, and the physical conditions and problems of the curriculum.