To deter careless editing and vandalism through "add a link", we may want to include "quality gates" that identify when a user appears to be editing carelessly and prompt them to change or slow their behavior. This task would need more design and discussion before becoming actionable; we will likely want to evaluate data from the initial release before taking action.
Possible triggers
- User has accepted all suggestions on three articles in a row.
- User accepts over 90% of suggestions (given that we believe the model is about 80% accurate).
- User is spending less than four seconds per suggestion on average.
- User has over 10 "add a link" edits with a revert rate of 30% or more.
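The four triggers above are simple threshold checks over per-user counters. As a rough sketch of how they might be evaluated together, assuming hypothetical field names for the counters (the real metrics would come from whatever event data the feature actually records):

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    # Hypothetical per-user counters; the actual field names and data
    # pipeline are not specified in this document.
    consecutive_all_accepted_articles: int  # articles in a row with every suggestion accepted
    suggestions_seen: int
    suggestions_accepted: int
    total_review_seconds: float             # total time spent reviewing suggestions
    link_edits: int                         # total "add a link" edits
    link_edits_reverted: int

def triggered_gates(s: SessionStats) -> list[str]:
    """Return the quality gates this user's behavior has tripped."""
    gates = []
    # Accepted all suggestions on three articles in a row.
    if s.consecutive_all_accepted_articles >= 3:
        gates.append("all_accepted_streak")
    # Accepts over 90% of suggestions (model believed ~80% accurate).
    if s.suggestions_seen > 0 and s.suggestions_accepted / s.suggestions_seen > 0.9:
        gates.append("high_accept_rate")
    # Under four seconds per suggestion on average.
    if s.suggestions_seen > 0 and s.total_review_seconds / s.suggestions_seen < 4:
        gates.append("fast_review")
    # Over 10 "add a link" edits with a revert rate of 30% or more.
    if s.link_edits > 10 and s.link_edits_reverted / s.link_edits >= 0.3:
        gates.append("high_revert_rate")
    return gates
```

A gate list like this would let the interventions below be keyed to whichever trigger fired, rather than to a single combined score.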
Possible interventions
- Dialog that says "You've been accepting all the AI suggestions you've received. Make sure that the AI isn't making mistakes."
- Dialog that says "You're reviewing links very quickly! Slow down to make sure the AI isn't making mistakes."
- Pausing "add a link" tasks for that user for a period of time, with a message explaining why and how to improve once the "timeout" is over.
- Requiring a short user education tutorial to be completed before the user can review "add a link" tasks again.