TY - GEN
T1 - From in the class or in the wild? Peers provide better design feedback than external crowds
AU - Wauck, Helen
AU - Yen, Yu Chun
AU - Fu, Wai Tat
AU - Gerber, Elizabeth
AU - Dow, Steven P.
AU - Bailey, Brian P.
N1 - Publisher Copyright:
© 2017 ACM.
PY - 2017/5/2
Y1 - 2017/5/2
N2 - As demand for design education increases, instructors are struggling to provide timely, personalized feedback for student projects. Gathering feedback from classroom peers and external crowds offers scalable approaches, but there is little evidence of how they compare. We report on a study in which students (n=127) created early- and late-stage prototypes as part of nine-week projects. At each stage, students received feedback from peers and external crowds: their own social networks, online communities, and a task market. We measured the quality, quantity, and valence of the feedback and the actions taken on it, and categorized its content using a taxonomy of critique discourse. The study found that peers produced feedback that was of higher perceived quality, acted upon more often, and longer than feedback from the crowds. However, crowd feedback was found to be a viable supplement to peer feedback, and students preferred it for projects targeting specialized audiences. Feedback from all sources spanned only a subset of the critique categories. Instructors may fill this gap by further scaffolding feedback generation. The study contributes insights into how best to utilize different feedback sources in project-based courses.
AB - As demand for design education increases, instructors are struggling to provide timely, personalized feedback for student projects. Gathering feedback from classroom peers and external crowds offers scalable approaches, but there is little evidence of how they compare. We report on a study in which students (n=127) created early- and late-stage prototypes as part of nine-week projects. At each stage, students received feedback from peers and external crowds: their own social networks, online communities, and a task market. We measured the quality, quantity, and valence of the feedback and the actions taken on it, and categorized its content using a taxonomy of critique discourse. The study found that peers produced feedback that was of higher perceived quality, acted upon more often, and longer than feedback from the crowds. However, crowd feedback was found to be a viable supplement to peer feedback, and students preferred it for projects targeting specialized audiences. Feedback from all sources spanned only a subset of the critique categories. Instructors may fill this gap by further scaffolding feedback generation. The study contributes insights into how best to utilize different feedback sources in project-based courses.
KW - Crowdsourcing
KW - Design methods
KW - Feedback
KW - Learning
UR - http://www.scopus.com/inward/record.url?scp=85044845401&partnerID=8YFLogxK
U2 - 10.1145/3025453.3025477
DO - 10.1145/3025453.3025477
M3 - Conference contribution
AN - SCOPUS:85044845401
T3 - Conference on Human Factors in Computing Systems - Proceedings
SP - 5580
EP - 5591
BT - CHI 2017 - Proceedings of the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
T2 - 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017
Y2 - 6 May 2017 through 11 May 2017
ER -