Understanding Research Methods, 7th Edition (Patten)

Designing and implementing an effective curriculum for a radiologic technology educational program requires a disciplined pedagogical approach in which the instructor performs a thorough situational analysis, develops a theory-based and pragmatic learning plan, and implements a course of study in accordance with established educational guidelines and requirements.

Diligent effort is needed to strengthen the relationship between curriculum developers and evaluators. Collecting information at the formative stage, followed by process evaluation to assess implementation as the curriculum progresses and summative evaluation to assess impact, is required for program accreditation in the United States by the Joint Review Committee on Education in Radiologic Technology (JRCERT). Formative evaluation research is used to enhance the effectiveness of the curriculum, guide the development of teaching and learning strategies, and reveal both promising and ineffective components of the curriculum.
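To make the three stages concrete, here is a minimal sketch in Python of how a program might log and retrieve findings from formative, process, and summative evaluation. The class and field names are illustrative assumptions, not anything prescribed by JRCERT or the book.

```python
# Minimal sketch (illustrative assumptions, not a JRCERT requirement) of
# tracking findings from the three evaluation stages described above.
from dataclasses import dataclass, field

@dataclass
class Finding:
    stage: str      # "formative", "process", or "summative"
    component: str  # curriculum component assessed
    result: str     # what the evaluation revealed
    action: str     # revision or follow-up it triggered

@dataclass
class CurriculumEvaluation:
    program: str
    findings: list = field(default_factory=list)

    def record(self, stage: str, component: str, result: str, action: str) -> None:
        """Log a finding from any evaluation stage."""
        self.findings.append(Finding(stage, component, result, action))

    def by_stage(self, stage: str) -> list:
        """Return all findings for one stage, e.g. for a summative report."""
        return [f for f in self.findings if f.stage == stage]

# Usage: log a formative finding, then pull everything formative.
ev = CurriculumEvaluation(program="Radiologic Technology")
ev.record("formative", "positioning lab",
          "students need more supervised repetitions", "added lab hours")
print(ev.by_stage("formative"))
```

Keeping each finding tagged with its stage means the same records can later feed the summative report that accreditation review requires.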

Contents:

Experimental versus nonexperimental studies -- Experimental versus causal-comparative studies -- Types of nonexperimental research -- Variables in nonexperimental studies -- Variables in experimental studies -- Research hypotheses, purposes, and questions -- Operational definitions of variables -- Quantitative versus qualitative research: I -- Quantitative versus qualitative research: II -- Program evaluation -- Ethical considerations in research -- The role of theory in research

Part B: Reviewing Literature -- Reasons for reviewing literature -- Locating literature electronically -- Organizing a literature review -- Preparing to write a critical review -- Creating a synthesis -- Citing references

Part C: Sampling -- Biased and unbiased sampling -- Simple random and systematic sampling -- Stratified random sampling -- Other methods of sampling -- Sampling and demographics -- Introduction to sample size -- A closer look at sample size

Part D: Instrumentation -- Introduction to validity -- Judgmental validity -- Empirical validity -- Judgmental-empirical validity -- Reliability and its relationship to validity -- Measures of reliability -- Internal consistency and reliability -- Norm- and criterion-referenced tests -- Measures of optimum performance -- Measures of typical performance

Part E: Experimental design -- True experimental designs -- Threats to internal validity -- Threats to external validity -- Preexperimental designs -- Quasi-experimental designs -- Confounding in experiments

Part F: Understanding statistics -- Descriptive and inferential statistics -- Introduction to the null hypothesis -- Scales of measurement -- Descriptions of nominal data -- Introduction to the chi-square test -- A closer look at the chi-square test -- Shapes of distributions -- The mean, median, and mode -- The mean and the standard deviation -- The median and interquartile range -- The Pearson correlation coefficient -- The t test -- One-way analysis of variance -- Two-way analysis of variance -- Practical significance of results

Part G: Effect size and meta-analysis -- Introduction to effect size d -- Interpretation of effect size d -- Effect size and correlation r -- Introduction to meta-analysis -- Meta-analysis and effect size
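As an illustration of two techniques named in Parts F and G above, the sketch below runs an independent-samples t test and computes Cohen's effect size d. The scores and variable names are invented for demonstration; they are not examples from the book.

```python
# Minimal sketch: an independent-samples t test (Part F) and Cohen's
# effect size d (Part G) on invented exam scores for two teaching strategies.
import numpy as np
from scipy import stats

group_a = np.array([78, 85, 90, 72, 88, 81, 79, 94])  # hypothetical scores
group_b = np.array([70, 75, 82, 68, 77, 73, 71, 80])  # hypothetical scores

# t test: is the mean difference larger than chance variation would suggest?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d: the mean difference in pooled-standard-deviation units, a
# measure of practical significance that does not depend on sample size.
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(
    ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1))
    / (n_a + n_b - 2)
)
d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```

Reporting d alongside p is exactly the distinction Part F's "Practical significance of results" draws: a small p says a difference is unlikely to be chance, while d says how large that difference actually is.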
