Abstract
Ten years ago we started a course on video game design and development. It was the first course on video games at our university and possibly in our country. We immediately faced two main decisions: (i) selecting the projects to be developed during the course and (ii) evaluating the students’ projects. We wanted to give students maximum freedom and place no limit on their creativity. We wanted them to focus on creating a game that people would love to play, without worrying about maximizing some score and without catering to their instructors’ game design preferences. Accordingly, we decided to ask the students to do all the work themselves, from the submission and selection of the game concepts to be developed during the course, up to the final evaluation of the projects and of their teammates, and thus, essentially, the grading. In this paper, we discuss our experience over the last ten years with this course organization and grading model, which, we believe, gives students complete freedom to express themselves and leaves them most of, if not all, the agency.