Wikipedian here - AI on Wikipedia is actually nothing new. we’ve had a machine learning model identify malicious edits since 2017, and ClueBot NG (an ML-powered anti-vandalism bot) has been around for even longer than that.
even so, this is pretty exciting. from what i gather, this is a transformer model turned on its side: instead of taking textual data and transforming it, it checks whether two pieces of textual data could reasonably be transformations of each other (toy sketch below). used responsibly, this could really help knock out those [1] and [2] tags en masse.
[1] dubious
[2] failed verification
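to make the "turned on its side" bit concrete, here's a toy sketch of the idea using an off-the-shelf NLI cross-encoder from the Hugging Face hub. the model name is just a public example, not whatever actually powers this tool:

```python
# Toy sketch: instead of generating text, score whether a source
# passage could plausibly support ("be a transformation of") a claim.
# The model below is an example NLI cross-encoder from the HF hub,
# NOT the model Wikipedia/Meta actually runs.
from transformers import pipeline

nli = pipeline("text-classification",
               model="cross-encoder/nli-deberta-v3-base")

claim = "The Eiffel Tower was completed in 1889."
passage = ("Construction of the tower finished in March 1889, "
           "in time for the Exposition Universelle.")

# The cross-encoder reads the pair jointly. A high 'entailment' score
# means the passage could reasonably back up the claim; a low one is
# the kind of thing that earns a [failed verification]-style flag.
result = nli({"text": passage, "text_pair": claim})
print(result)  # e.g. [{'label': 'entailment', 'score': 0.97}]
```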
If I’m understanding you correctly, it doesn’t ever edit the actual pages; it just flags certain kinds of content. Is that right?
yes. it only surfaces citations that may back up the content better; an editor still has to read the source and approve the change.
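the shape of that workflow is simple to sketch. everything below is made up for illustration (names, threshold); the point is that the model only builds a queue, it never writes to an article:

```python
# Minimal sketch of the human-in-the-loop gate described above.
# All names and the threshold are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Source:
    url: str
    text: str

@dataclass
class Claim:
    text: str
    candidate_sources: list[Source] = field(default_factory=list)

SUGGEST_THRESHOLD = 0.8  # arbitrary cutoff for the sketch

def build_review_queue(claims, score_pair):
    """score_pair is any (passage, claim) -> float scorer, e.g. the
    NLI sketch upthread. Nothing here edits a page; it only queues
    suggestions for a human editor to read and approve."""
    queue = []
    for claim in claims:
        for src in claim.candidate_sources:
            score = score_pair(src.text, claim.text)
            if score >= SUGGEST_THRESHOLD:
                queue.append((claim, src, score))
    return sorted(queue, key=lambda item: -item[2])
```

the scorer is swappable; the approval step isn't.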
Fascinating. As a developer, where can I read more or contribute?
The aforementioned ClueBot is here: https://en.wikipedia.org/wiki/User:ClueBot_NG
For bots in general, start here: https://en.wikipedia.org/wiki/Wikipedia:Bots