
Appropriate and Inappropriate Uses

Appropriate

One appropriate use of this model, without any revisions, is for schools that experience rampant cyberbullying to encourage their students to enable this feature on Instagram. The same applies to nonprofits and other organizations with resources targeting cyberbullying. In particular, it is also appropriate to use the model's results to better understand who the most vulnerable targets actually are and to let that understanding inform an efficient allocation of resources. For example, a national nonprofit organization could have an immersive educational program it wants to integrate into certain schools' curricula. This model could help the nonprofit identify which demographics in which locations currently experience the most cyberbullying and target schools that match the model's results.
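As a rough illustration, the sketch below shows what such an allocation analysis might look like, assuming the model's outputs have already been exported as per-account risk flags with demographic and location fields attached. All column names and values here are hypothetical, not the model's actual output schema:

```python
# Minimal sketch: aggregating hypothetical per-account model outputs to see
# which demographic groups and locations show the highest rates of flagged
# accounts. All column names and data are illustrative placeholders.
import pandas as pd

flags = pd.DataFrame({
    "location":  ["Austin", "Austin", "Boston", "Boston", "Boston"],
    "age_group": ["13-15", "16-17", "13-15", "13-15", "16-17"],
    "at_risk":   [1, 0, 1, 1, 0],
})

# Share of flagged accounts per (location, age group) cell.
risk_rates = (
    flags.groupby(["location", "age_group"])["at_risk"]
         .mean()
         .sort_values(ascending=False)
)
print(risk_rates)  # highest-rate cells suggest where to focus programs
```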

It is also possible to update and revise this model for other contexts. One could take the component that identifies hostile text, retrain it on general data from all age groups, and use that component to flag hostile text that may be harmful in other settings. This is especially useful in cases related to cancel culture on social media or online political conflicts.
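To make the retraining idea concrete, here is a minimal sketch of that step, assuming the hostile-text component is a standard text classifier and that a labeled general-age dataset is available. The original component's architecture is not documented here, so a TF-IDF plus logistic regression pipeline stands in as a generic substitute, and the example texts are invented:

```python
# Minimal sketch: retraining a generic hostile-text classifier on labeled
# data drawn from all age groups. TF-IDF + logistic regression is a
# placeholder for the model's actual (undocumented) architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = hostile, 0 = not hostile.
texts = [
    "you are pathetic and everyone knows it",
    "great game last night, congrats",
    "nobody wants you here, just leave",
    "thanks for sharing, this was helpful",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

# Flag hostile text in a new context (e.g., a heated political thread).
print(classifier.predict(["this take is garbage and so are you"]))
```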

Inappropriate

Inappropriate uses of the model are generally those that take its results and apply them to an external problem on the basis of a presumed correlation through a shared attribute. For example, a well-intentioned initiative might assume that the individuals identified by the model are also more likely to suffer from mental health issues, reasoning that the vulnerability that makes them targets for cyberbullying extends to their mental health. However, this correlation is weak and has not been shown to be significant, since mental health is not accounted for in any of the model's predictors. Such an initiative would therefore likely prompt unjustified actions and an unjustified allocation of resources.

Another inappropriate use is taking the aggregate outputs of the model, essentially a collection of accounts flagged as at risk for cyberbullying, and repurposing them for malicious or otherwise misaligned ends. Even if external parties had only limited access to this data, teams within Instagram could potentially use it for platform goals that do not serve the potential victims of cyberbullying. For example, a research team could decide to explore whether there is a relationship between being at risk for cyberbullying and doom scrolling. This is an inappropriate use because it treats the flagged individuals as means rather than as ends.

It is also inappropriate to generalize the trends about cyberbullying learned throughout this project, since the model targets only teenagers on Instagram. For example, the model should not be used by law enforcement to flag accounts of people who are vulnerable to being recruited into political insurrections or to being radicalized: even though those targets and cyberbullying victims may overlap considerably, there are substantial differences in demographics, colloquialisms, and context that are not captured in the training data.