Date: 30/09/2015
Feedback Given By: User_7413
Feedback Comment: Nice job by Propeterjohnson
Project Details
Project Status: Completed
This work has been completed by: Topwrite
Total payment made for this project was: $20.00
Project Summary: Measurement and Data Collection

Data collection requires evaluators to consider a wide and diverse variety of data sources, including both primary and secondary sources. Data collection decisions can significantly affect your evaluation project. Evaluators sometimes overlook problems where secondary data is involved, and overlook the fact that data used for comparison purposes can affect validity. The types of data covered in the Learning Resources are not meant to be exhaustive, but they are the most common types in program evaluation.

To prepare for this Discussion, choose one of the cases provided, review the specified Learning Resources, and complete the corresponding prompts.

Case 1
Steven Levitt, the author of Freakonomics, is exemplary in his ability to make data tell stories. Students of public administration and organizational behavior are familiar with the famous Hawthorne effect. Levitt and List's (2009) reanalysis of the original Hawthorne study (which uses incomplete, archived, and secondary data) shows how management programs developed to increase worker productivity can be evaluated or reevaluated, even though the experiments were conducted almost a century ago.

QUESTION: Provide a description of at least two data collection problems that Levitt and List's reanalysis might bring to the Hawthorne effect.
_____________________________________________________________________________________
Case 2
Barbara Geddes (1990) points out the pitfalls of selecting cases, units, or observations purely on the basis of the dependent variable.

Post by Day 3 a brief explanation of the relevance of Barbara Geddes's argument for program evaluation. Then, explain a scenario in which choosing cases solely on the dependent variable might be permissible. Provide a rationale for your explanations.
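Geddes's pitfall can be made concrete with a small simulation (not from the readings; all numbers and variable names below are illustrative assumptions). If you keep only the "successful" cases — those with high values of the outcome — the estimated effect of a program input on that outcome is attenuated, because low-input cases only make it into the sample when luck pushed their outcome up:

```python
# Illustrative sketch of selection on the dependent variable.
# Assumed setup: outcome y depends on input x with true slope 2.0.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)               # hypothetical program input
y = 2.0 * x + rng.normal(size=n)     # true effect of x on y is 2.0

# Slope estimated from the full sample (np.polyfit returns slope first).
slope_full, _ = np.polyfit(x, y, 1)

# Now keep only "successful" cases: observations with high outcomes.
mask = y > 1.0
slope_truncated, _ = np.polyfit(x[mask], y[mask], 1)

print(f"slope, all cases:         {slope_full:.2f}")       # close to 2.0
print(f"slope, only high-y cases: {slope_truncated:.2f}")  # noticeably below 2.0
```

The truncated sample understates the true relationship, which is exactly why evaluations that study only successful programs tend to underestimate (or miss) what drives success.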
_____________________________________________________________________________________
Introduction

Now that you have started developing your evaluation design, how are you going to collect the information that will answer your questions? Is there preliminary or baseline data on the program participants or beneficiaries? Are both the incidence and prevalence of the problems your program is designed to address well documented? What types of data are appropriate for your evaluation? What are your variables of interest, and how are you going to measure them?

Evaluations rely heavily on data. The good news is that a variety of methods can be used to collect it: baseline data, secondary data, surveys, censuses, case studies, and qualitative information. However, data collection issues such as data quality, coverage, cost, and ethics are major considerations in any data collection effort.

The Learning Resources this week focus on problems in collecting data, given specific evaluation designs, measures, indicators of concepts of interest, and relevant units of observation. You will also concentrate on issues of measurement and the selection of units of analysis.

Learning Objectives
Students will:
Evaluate design and data collection problems
Analyze the relevance of program evaluation
Analyze selection of cases by dependent variable
Develop input, output, and outcome program indicators
Analyze data collection strategies
____________________________________________________________________________________
Required Resources

Readings

Langbein, L. (2012). Public program evaluation: A statistical guide (2nd ed.). Armonk, NY: M.E. Sharpe.
o Chapter 7, "Designing Useful Surveys for Evaluation" (pp. 209–238)

McDavid, J. C., Huse, I., & Hawthorn, L. R. L. (2013). Program evaluation and performance measurement: An introduction to practice (2nd ed.). Thousand Oaks, CA: Sage.
o Chapter 4, "Measurement for Program Evaluation and Performance Monitoring" (pp. 145–185)

Geddes, B. (1990). How the cases you choose affect the answers you get: Selection bias in comparative politics. Political Analysis, 2(1), 131–150. Retrieved from http://www.nyu.edu/classes/nbeck/q2/geddes.pdf

Levitt, S., & List, J. (2009). Was there really a Hawthorne effect at the Hawthorne plant? An analysis of the original illumination experiments. Retrieved from http://www.nber.org/papers/w15016.pdf

Urban Institute. (2014). Outcome indicators project. Retrieved from http://www.urban.org/center/cnp/projects/outcomeindicators.cfm

Optional Resources

Note: Consult the following resources based on your interest and their relevance to your project:

Bamberger, M. (2010). Reconstructing baseline data for impact evaluation and results measurement. Retrieved from http://siteresources.worldbank.org/INTPOVERTY/Resources/335642-1276521901256/premnoteME4.pdf

Parnaby, P. (2006). Evaluation through surveys [Blog post]. Retrieved from http://www.idea.org/blog/2006/04/01/evaluation-through-surveys/

Rutgers, New Jersey Agricultural Experiment Station. (2014). Developing a survey instrument. Retrieved from http://njaes.rutgers.edu/evaluation/resources/survey-instrument.asp

MEASURE Evaluation. (n.d.). Secondary analysis of data. Retrieved February 24, 2015, from http://www.cpc.unc.edu/measure/our-work/secondary-analysis/secondary-analysis-of-data

Zeitlin, A. (2014). Sampling and sample size [PowerPoint slides]. Retrieved from http://www.povertyactionlab.org/sites/default/files/2.%20Sampling%20and%20Sample%20Size_AFZ3.pdf