
Model Updates and Revisions

Two situations in which the model should be updated or revised for ethical reasons are when cyberbullying changes forms and when the model is used in contexts aimed at punishment rather than safety, such as law enforcement. The model should also be retired once Instagram is no longer a popular social media site with a large teenage audience in the U.S.

Cyberbullying Changes Forms

First, the model needs to be updated if cyberbullying changes form (e.g., unsolicited pictures overtake mean comments as the most common form of cyberbullying). The model will be evaluated regularly against technical performance measures such as accuracy, precision, and demographic parity. Additionally, we will monitor how many accounts are being flagged overall, how teenagers view Instagram’s handling of cyberbullying through public opinion surveys, and other statistics that reveal how effective the model is on a larger, non-technical scale. If the number of false positives increases dramatically and becomes a nuisance for users, if complaints about cyberbullying on Instagram rise consistently or return to near pre-feature levels, or if some other statistic shows that the model no longer captures cyberbullying properly, then we ought to update the model. This will include working with domain experts to understand what the relevant factors are and how we should analyze them.
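As a rough illustration of this routine monitoring, the sketch below computes the metrics named above and flags a release for review. It is only a sketch under stated assumptions: the function names, inputs, and the 1.5x flag-rate trigger are hypothetical placeholders, not part of any deployed system.

```python
# Illustrative monitoring sketch; names and thresholds are assumptions for demonstration.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate across demographic groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def evaluate_release(y_true, y_pred, groups, flag_rate_baseline):
    """Return the routine metrics we would review before keeping the model live."""
    report = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),          # false positives are a direct nuisance to users
        "parity_gap": demographic_parity_gap(y_pred, groups),  # fairness across teen demographics
        "flag_rate": y_pred.mean(),                            # share of accounts flagged overall
    }
    # Hypothetical trigger: a large jump in flag rate over the baseline suggests the
    # model no longer matches how cyberbullying actually appears on the platform.
    report["needs_review"] = report["flag_rate"] > 1.5 * flag_rate_baseline
    return report
```

A report like this would be read alongside the non-technical signals (survey results, complaint volume) rather than replacing them.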

Using the Model as a Punishment Tool

Second, this model should be revised if it is ever used as a tool for punishment, especially in law enforcement. The profiling nature, network analysis, and text analysis features of this model do make it well suited to identifying potential criminals, especially those who are notably active on social media, such as school shooters, radicalized terrorists, and conspiracy theorists turned insurrectionists. Nevertheless, the predictors would need to be changed to information that is more publicly available, since these targets are unlikely to provide voluntary information. Furthermore, network predictors should carry more weight, and predictors about criminal history ought to be collected to predict future crime. Since this use is aimed at preventing societal harm rather than individual harm, we also ought to have more stringent thresholds and verification by real people at each threshold, as sketched below.
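To make "stricter thresholds plus human verification" concrete, here is a minimal sketch assuming the model outputs a probability-style risk score; the threshold values and the human_review hook are hypothetical and only illustrate the gating logic.

```python
# Hypothetical escalation gate: stricter threshold for any punitive use,
# and no escalation without sign-off from a human reviewer.
SAFETY_THRESHOLD = 0.6        # illustrative bar when the goal is protecting a potential victim
ENFORCEMENT_THRESHOLD = 0.9   # far stricter illustrative bar for any enforcement-oriented use

def escalate(score, human_review):
    """Escalate a case only if the score clears the stricter bar AND a reviewer agrees."""
    if score < ENFORCEMENT_THRESHOLD:
        return "no action"
    # Verification by a real person is required at every escalation step.
    return "escalate" if human_review(score) else "no action"
```

The point of the higher bar and the mandatory review is that the cost of a false positive is much greater when the outcome is punishment rather than support.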

Instagram Becomes Outdated

Although Instagram is currently widely used by U.S. teenagers, it is likely that, as with previous generations, Gen A will choose a different set of platforms on which to spend their time. Cyberbullying will likely manifest itself on those platforms, as it did on Myspace and Facebook, but to an even larger extent, since Gen A is the most tech-native generation to date. A similar tool will likely be needed to identify potential victims as part of a strategy for combating cyberbullying. However, this model should not be used, because its database organization and the predictors it collects are based specifically on Instagram’s features. Using this model on another platform would be unethical because it would be ineffective, offering a blanket solution that would prevent the new platform from properly tackling cyberbullying and preventing harm to teenagers.