We're back! With news from us (and, yes indeed, H5P). Plus, of course, two papers, this time both on the topic of "AI". And this time our Fundgrube really is packed to the brim. Also in the politics corner: what do the parties actually promise when it comes to digital education? Then some event tips and a world-improvement idea, and then you're already through!
We recorded this episode on 29 January 2025.
Pölert, Hauke
Stoppt den Korrekturwahnsinn! oder: Warum wir spätestens 2025 unsere Korrekturpraxis überdenken sollten (De-Implementierung nach Benedikt Wisniewski) Miscellaneous
Blog post, 2025.
@misc{Pölert2025,
title = {Stoppt den Korrekturwahnsinn! oder: Warum wir spätestens 2025 unsere Korrekturpraxis überdenken sollten (De-Implementierung nach Benedikt Wisniewski)},
author = {Hauke Pölert},
url = {https://unterrichten.digital/2025/01/06/korrekturen-feedback-de-implementierung-wisniewski/},
year = {2025},
date = {2025-01-06},
urldate = {2025-01-06},
howpublished = {Blogbeitrag},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Truscott, John
The effect of error correction on learners’ ability to write accurately Article
In: Journal of Second Language Writing, Vol. 16, Issue 4, pp. 255–272, 2007, ISSN: 1873-1422.
@article{Truscott2007,
title = {The effect of error correction on learners’ ability to write accurately},
author = {John Truscott},
url = {https://doi.org/10.1016/j.jslw.2007.06.003},
doi = {10.1016/j.jslw.2007.06.003},
issn = {1873-1422},
year = {2007},
date = {2007-12-01},
journal = {Journal of Second Language Writing},
volume = {16},
issue = {4},
pages = {255–272},
abstract = {The paper evaluates and synthesizes research on the question of how error correction affects learners’ ability to write accurately, combining qualitative analysis of the relevant studies with quantitative meta-analysis of their findings. The conclusions are that, based on existing research: (a) the best estimate is that correction has a small negative effect on learners’ ability to write accurately, and (b) we can be 95% confident that if it has any actual benefits, they are very small. This analysis is followed by discussion of factors that have probably biased the findings in favor of correction groups, the implication being that the conclusions of the meta-analysis probably underestimate the failure of correction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kluger, Avraham N.; DeNisi, Angelo
The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory Article
In: Psychological Bulletin, Vol. 119, Issue 2, pp. 254–284, 1996.
@article{Kluger1996,
title = {The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory},
author = {Avraham N. Kluger and Angelo DeNisi},
url = {https://psycnet.apa.org/doi/10.1037/0033-2909.119.2.254
https://psycnet.apa.org/record/1996-02773-003},
doi = {10.1037/0033-2909.119.2.254},
year = {1996},
date = {1996-01-01},
journal = {Psychological Bulletin},
volume = {119},
issue = {2},
pages = {254–284},
abstract = {Since the beginning of the century, feedback interventions (FIs) produced negative--but largely ignored--effects on performance. A meta-analysis (607 effect sizes; 23,663 observations) suggests that FIs improved performance on average ( d  = .41) but that over one-third of the FIs decreased performance. This finding cannot be explained by sampling error, feedback sign, or existing theories. The authors proposed a preliminary FI theory (FIT) and tested it with moderator analyses. The central assumption of FIT is that FIs change the locus of attention among 3 general and hierarchically organized levels of control: task learning, task motivation, and meta-tasks (including self-related) processes. The results suggest that FI effectiveness decreases as attention moves up the hierarchy closer to the self and away from the task. These findings are further moderated by task characteristics that are still poorly understood. (PsycInfo Database Record (c) 2020 APA, all rights reserved)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
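A quick orientation for the numbers in the two abstracts above (Truscott's "95% confident" bound, Kluger and DeNisi's average d = .41): meta-analyses typically pool the per-study effects d_i by inverse-variance weighting. A minimal sketch of the fixed-effect version, assuming standard notation with SE_i the standard error of study i (the papers' own models may differ, e.g. random-effects):

\bar{d} = \frac{\sum_i w_i\, d_i}{\sum_i w_i},
\qquad w_i = \frac{1}{SE_i^{2}},
\qquad \text{95\% CI:}\ \bar{d} \pm 1.96\,\sqrt{\frac{1}{\sum_i w_i}}

Read off this way: Kluger and DeNisi's \bar{d} = .41 says feedback helped on average across 607 effect sizes, while the width of the confidence interval is what lets Truscott bound any true benefit of correction as "very small".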
The essay comes back and everything is red. That is often demotivating for learners and a lot of work for teachers – is it even necessary?
Muehlhoff, Rainer; Henningsen, Marte
Chatbots im Schulunterricht: Wir testen das Fobizz-Tool zur automatischen Bewertung von Hausaufgaben Unpublished
Preprint on arXiv:2412.06651, 2024.
@unpublished{Muehlhoff2024,
title = {Chatbots im Schulunterricht: Wir testen das Fobizz-Tool zur automatischen Bewertung von Hausaufgaben},
author = {Rainer Muehlhoff and Marte Henningsen},
url = {https://doi.org/10.48550/arXiv.2412.06651
https://media.ccc.de/v/38c3-chatbots-im-schulunterricht},
doi = {10.48550/arXiv.2412.06651},
year = {2024},
date = {2024-12-09},
urldate = {2024-12-09},
issue = {arXiv:2412.06651},
abstract = {This study examines the AI-powered grading tool "AI Grading Assistant" by the German company Fobizz, designed to support teachers in evaluating and providing feedback on student assignments. Against the societal backdrop of an overburdened education system and rising expectations for artificial intelligence as a solution to these challenges, the investigation evaluates the tool's functional suitability through two test series. The results reveal significant shortcomings: The tool's numerical grades and qualitative feedback are often random and do not improve even when its suggestions are incorporated. The highest ratings are achievable only with texts generated by ChatGPT. False claims and nonsensical submissions frequently go undetected, while the implementation of some grading criteria is unreliable and opaque. Since these deficiencies stem from the inherent limitations of large language models (LLMs), fundamental improvements to this or similar tools are not immediately foreseeable. The study critiques the broader trend of adopting AI as a quick fix for systemic problems in education, concluding that Fobizz's marketing of the tool as an objective and time-saving solution is misleading and irresponsible. Finally, the study calls for systematic evaluation and subject-specific pedagogical scrutiny of the use of AI tools in educational contexts.},
howpublished = {Preprint auf arXiv:2412.06651},
keywords = {},
pubstate = {published},
tppubtype = {unpublished}
}
How good are LLMs at grading school homework right now? Read the preprint or watch the talk (at 38c3)!
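If you want a feel for why such a tool can hand out "random" grades: LLM products usually sample with temperature > 0, so the same submission can receive a different score on every run. A hypothetical, self-contained sketch (call_llm is a stand-in we invented and simulate with a random number generator here; it is not Fobizz's actual API):

import random

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    # Stand-in for a real model endpoint. We simulate sampling noise so
    # the sketch runs without any model; a real HTTP call would go here.
    return str(random.randint(5, 13))

def grade(submission: str, rubric: str, n_runs: int = 5) -> list[int]:
    # Ask the "model" for a 0-15 point score several times and parse each
    # reply. With temperature > 0 the runs can disagree, which is one
    # plausible source of the instability the study reports.
    prompt = (
        f"Rubric:\n{rubric}\n\nStudent text:\n{submission}\n\n"
        "Reply with a single integer score from 0 to 15."
    )
    scores = []
    for _ in range(n_runs):
        reply = call_llm(prompt)
        digits = "".join(ch for ch in reply if ch.isdigit())
        scores.append(int(digits) if digits else 0)  # reply parsing is brittle too
    return scores

scores = grade("Der Dreißigjährige Krieg endete 1848.", "History, 0-15 points")
print(scores, "spread:", max(scores) - min(scores))

Running this a few times makes the point: the spread between runs is pure sampling noise, and nothing in this pipeline checks whether the (factually wrong) claim in the submission was caught.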
Projects, tools, apps… those are bourgeois categories anyway. We simply toss everything into the Fundgrube:
What do the parties want on all things "digital"? A quick look at the party manifestos.
The Polylux network supports initiatives against the rightward shift in eastern Germany. Here you can take out a sustaining membership or make a one-off donation.
You can also find this and other world-improvement ideas collected here.