
Detailed Record

Author (Chinese): 佘文云
Author (English): She, Wen-Yun
Thesis title (Chinese): 在線上問卷中評定量表如何影響填答者追求最低要求滿意結果的行為
Thesis title (English): How Rating Scales Affect Survey Satisficing Behavior in Online Surveys
Advisor (Chinese): 雷松亞
Advisor (English): Ray, Soumya
Committee members (Chinese): 林福仁、王俊程
Committee members (English): Lin, Fu-Ren; Wang, Jyun-Cheng
Degree: Master's
Institution: National Tsing Hua University (國立清華大學)
Department: Institute of Service Science (服務科學研究所)
Student ID: 105078515
Year of publication (ROC calendar): 107 (2018)
Academic year of graduation: 106
Language: English
Number of pages: 45
Keywords (Chinese): 線上問卷、視覺類比量表、滑標量表、評定量表、滿意度
Keywords (English): online survey, visual analogue scale, slider scale, rating scale, satisficing
Usage statistics:
  • Recommendations: 0
  • Views: 65
  • Rating: *****
  • Downloads: 13
  • Bookmarks: 0
Abstract (Chinese, translated): This study examines how three kinds of rating-scale input in online surveys affect respondents' survey satisficing behavior. When satisficing, respondents give the quickest answer that is "good enough" rather than the best answer they could reach by thinking carefully. The three rating scales we used, traditional radio buttons, sliders, and the visual analogue scale, have all appeared in paper or online surveys before. We published the surveys on Amazon Mechanical Turk and randomly assigned each worker a survey that used one of the three input types. The results show that the visual analogue scale was the least likely to trigger satisficing, so for researchers who plan to use rating scales in online surveys, it is a better input choice than traditional radio buttons.
Abstract (English): This study examined how three types of rating-scale input in online surveys affect respondents' survey satisficing behavior. When satisficing, respondents provide quick, "good enough" answers instead of thoroughly considered ones. The three types of rating scale we consider, traditional radio buttons, sliders, and the visual analogue scale (VAS), have all previously been used in paper or online surveys. We administered surveys on Amazon Mechanical Turk (MTurk) and randomly assigned respondents to one of three surveys, each with one type of rating-scale input. The results suggest that the VAS triggered less satisficing behavior and was the easiest to use, giving researchers another input choice when using rating scales in online surveys.
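To make the contrast among the three input types concrete, the sketch below renders each of them as a web form element. It is a minimal illustration, not code from the thesis or from the SurveyMoonbear platform; the five-point radio scale, the 0–100 range, the styling, and the random-assignment snippet at the end are all assumptions chosen for demonstration.

```typescript
// Illustrative sketch of the three rating-scale inputs compared in the study.
// Browser-only; all ids, ranges, and styles are hypothetical choices.

// 1) Radio buttons: discrete labeled points; nothing is selected until the
//    respondent clicks one of them.
function makeRadioScale(name: string, points = 5): HTMLElement {
  const wrap = document.createElement("div");
  for (let i = 1; i <= points; i++) {
    const input = document.createElement("input");
    input.type = "radio";
    input.name = name; // same name groups the buttons into one exclusive choice
    input.value = String(i);
    const label = document.createElement("label");
    label.append(input, ` ${i} `);
    wrap.append(label);
  }
  return wrap;
}

// 2) Slider: continuous, but the handle starts at a visible default (here the
//    midpoint), so the control already "shows an answer" before any effort.
function makeSlider(): HTMLInputElement {
  const slider = document.createElement("input");
  slider.type = "range";
  slider.min = "0";
  slider.max = "100";
  slider.value = "50"; // preselected midpoint handle
  return slider;
}

// 3) Visual analogue scale (VAS): a bare line with no default marker; clicking
//    anywhere on it records a continuous 0-100 value and only then draws a mark.
function makeVAS(onAnswer: (value: number) => void): HTMLElement {
  const line = document.createElement("div");
  line.style.cssText =
    "width:300px;height:2px;background:#333;position:relative;cursor:pointer;margin:20px 0";
  line.addEventListener("click", (e) => {
    const rect = line.getBoundingClientRect();
    const x = e.clientX - rect.left;
    const mark = document.createElement("div");
    mark.style.cssText =
      `position:absolute;left:${x}px;top:-6px;width:2px;height:14px;background:red`;
    line.append(mark);
    onAnswer((x / rect.width) * 100);
  });
  return line;
}

// Hypothetical between-subjects assignment, mirroring the study's design of
// giving each respondent exactly one input type.
const variants = [
  () => makeRadioScale("q1"),
  () => makeSlider(),
  () => makeVAS((v) => console.log("VAS answer:", v.toFixed(1))),
];
document.body.append(variants[Math.floor(Math.random() * variants.length)]());
```

The detail the sketch makes visible is that a slider displays a handle position, an apparent answer, before the respondent does anything, whereas the VAS shows no value until it is clicked; the literature on slider scales (e.g., Funke, 2015) discusses such default effects as one possible source of measurement differences.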
Chapter 1. Introduction 1
Chapter 2. Online Survey Instruments 3
2.1. Survey Research 3
2.2. Online Survey 4
2.2.1. Types of Rating Scales in Online Surveys 5
2.3. Survey Satisficing 6
2.4. Hypotheses 7
Chapter 3. Experiment and Data Collection 10
3.1. Study Design 10
3.2. Recruiting Participants 12
3.3. Survey Platform Architecture: SurveyMoonbear 13
3.3.1. Google Sheet Integration 13
3.3.2. Service Architecture 15
Chapter 4. Analysis and Results 18
4.1. Overview of the Response 18
4.2. Ease-of-Use 21
4.3. Satisficing 22
4.3.1. Item Nonresponse 22
4.3.2. Choosing Middle Rate 24
4.3.3. Variance of Each Respondent’s Answer 25
Chapter 5. Discussion 26
5.1. Ease-of-Use 26
5.2. Satisficing 27
5.2.1. Item Nonresponse 27
5.2.2. Choosing Middle Rate 27
5.2.3. Variance of Each Respondent’s Answer 28
5.3. Overview of Types of Rating Scale 29
5.4. Limitations and Future Work 31
References 33
Appendix A 36
Barge, S., & Gehlbach, H. (2011). Using the Theory of Satisficing to Evaluate the Quality of Survey Data. Research in Higher Education, 53(2), 182–200.
Benfield, J. A., & Szlemko, W. J. (2006). Internet-Based Data Collection: Promises and Realities. Journal of Research Practice, 2, Article D1.
Couper, M. (2000). Web surveys: a review of issues and approaches. Public Opinion Quarterly, 64(4), 464–494.
Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web Survey Design and Administration. Public Opinion Quarterly, 65(2), 230–253.
Dillman, D. A., Redline, C. D., & Carley-Baxter, L. R. (1999, August). Influence of type of question on skip pattern compliance in self-administered questionnaires. In Joint Statistical Meetings of the American Statistical Association, Indianapolis.
Funke, F. (2015). A Web Experiment Showing Negative Effects of Slider Scales Compared to Visual Analogue Scales and Radio Button Scales. Social Science Computer Review, 34(2), 244–254.
Funke, F., & Reips, U.-D. (2012). Why semantic differentials in web-based research should be made from visual analogue scales and not from 5-point scales. Field Methods, 24(3), 310–327.
Funke, F., Reips, U.-D., & Thomas, R. K. (2010). Sliders for the Smart: Type of Rating Scale on the Web Interacts With Educational Level. Social Science Computer Review, 29(2), 221–231.
Grimm, A. (2017, July 10). Episode #487: Prototype Pattern – RubyTapas. Retrieved July 22, 2018, from https://www.rubytapas.com/2017/07/10/episode-487-prototype-pattern/
Hamby, T., & Taylor, W. (2016). Survey Satisficing Inflates Reliability and Validity Measures: An Experimental Comparison of College and Amazon Mechanical Turk Samples. Educational and Psychological Measurement, 76(6), 912–932.
Hayes, M. H. S., & Patterson, D. G. (1921). Experimental development of the graphic rating method. Psychological Bulletin, 18, 98–99.
Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236.
Lietz, P. (2010). Research into questionnaire design. International Journal of Market Research, 52(2), 249–272.
Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104(1), 1–15.
Redline, C. D., & Dillman, D. A. (1999). The influence of auxiliary, symbolic, numeric, and verbal languages on navigational compliance in self-administered questionnaires. In International Conference on Survey Nonresponse.
Reips, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49(4), 243–256.
Sekaran, U., & Bougie, R. J. (2016). Research Methods For Business: A Skill Building Approach. John Wiley & Sons.
Smith, T. W. (1995, May). Little things matter: A sampler of how differences in questionnaire format can affect survey responses. In Proceedings of the American Statistical Association, Survey Research Methods Section (pp. 1046–1051). American Statistical Association, Alexandria, VA.
Tourangeau, R., & Rasinski, K. A. (1988). Cognitive processes underlying context effects in attitude measurement. Psychological Bulletin, 103(3), 299–314.
Van Selm, M., & Jankowski, N. W. (2006). Conducting Online Surveys. Quality and Quantity, 40(3), 435–456.