There are occasions when an idea and its realization deserve recognition, occasions where doing so simply makes you feel better. This is the case with Arianna Muti, 25, and her algorithm that detects misogynistic and aggressive posts on Twitter. Talking with her made me feel better right away.
Originally from Osimo and now adopted by Bologna, Arianna is a shy girl, an introvert. As the conversation goes on she lets out a few smiles, is careful to give credit where it is due on her project, and speaks with unfailing intellectual honesty.
Hi Arianna, who are you?
I'm Arianna, and I don't come from an IT background. I recently earned a master's degree in Language, Society and Communication, but ever since my bachelor's degree I have been interested in computational linguistics. I am a self-taught programmer, and my goal is to develop an artificial intelligence model for language.
Why do we need an algorithm to protect against misogynistic and aggressive posts on Twitter?
According to recent research by Vox, misogynistic tweets in Italy have increased by 90% in recent months, even as hate speech in general has dropped sharply. It is not hard, then, to see the need for such a tool.
How long did it take to go from the idea to its realization?
To be clear, the idea came from my professor, Elisabetta Tersini. I took part in a shared task she organized; I started programming the algorithm in July and finished in December. I did it myself, handling the project end to end.
Have you already thought about the possibility of extending the algorithm to other social networks?
Yes, it is absolutely in the plans. For the moment the algorithm is built around Twitter's 280-character limit; changing platforms risks reducing its effectiveness, and I need to work out how to preserve its core functionality.
Do you remember one misogynistic tweet in particular?
I don't remember one in particular. I have seen tweets of different kinds and with different levels of misogyny. The hatred is unmotivated, and it is deeply rooted.
Is the automatic deletion of comments and posts of this kind part of an educational project?
To be clear: my algorithm does not delete misogynistic tweets, it identifies them. It is currently a classifier. If I extended its functions, I would have it report rather than remove: the right to speak is sacred.
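A classifier of this kind assigns each tweet a label rather than acting on it. As a purely illustrative sketch (not Arianna's actual model, which the interview does not detail), here is a minimal naive Bayes text classifier in Python over a toy dataset:

```python
from collections import Counter
import math

# Toy training data: (text, label), where 1 = misogynistic, 0 = not.
# Invented examples for illustration only.
train = [
    ("you are brilliant and kind", 0),
    ("great point thanks for sharing", 0),
    ("women belong in the kitchen", 1),
    ("shut up you stupid woman", 1),
]

def tokenize(text):
    return text.lower().split()

# Per-class word frequencies and class priors.
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = set(word_counts[0]) | set(word_counts[1])

def classify(text):
    """Return the class with the higher log-probability (Laplace smoothing)."""
    scores = {}
    for label in (0, 1):
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("stupid woman"))  # -> 1
```

A real system would of course need a large annotated corpus and a far stronger model; the point here is only the classify-don't-delete design: the output is a label that a human or a reporting pipeline can act on.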
How will you improve the algorithm?
It can be improved on many fronts. The dataset should be expanded, since more data allows better performance. Then the idea of profiling would be interesting: is there a user who continually posts misogynistic and aggressive tweets? That profile could be examined by the competent authorities and legal proceedings initiated.
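The profiling idea she describes amounts to counting classifier hits per user and escalating accounts that cross some threshold. A minimal sketch, with an invented stream of classifier outputs and an assumed cutoff of 3:

```python
from collections import Counter

# Hypothetical stream of (user, is_misogynistic) classifier outputs.
stream = [
    ("user_a", True), ("user_b", False), ("user_a", True),
    ("user_a", True), ("user_b", True),
]

THRESHOLD = 3  # assumed cutoff for escalating a profile for review

# Count flagged tweets per user and collect repeat offenders.
flags = Counter(user for user, hit in stream if hit)
repeat_offenders = [user for user, n in flags.items() if n >= THRESHOLD]
print(repeat_offenders)  # -> ['user_a']
```

The threshold and any escalation step are policy decisions, not properties of the classifier itself, which is consistent with her point that the system should report rather than remove.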