Human moderation is still the main way hateful comments are managed. But this approach has clear limits: moderator bias, slow reaction times, a risk of psychological harm (including depression) for moderators, and high cost. Moderation as we know it must evolve to analyze interactions between users more closely. Given the sheer volume of content, it has become difficult to moderate everything published on social networks, brand channels, media platforms, gaming platforms, and blogs. The GAFA companies (Google, Amazon, Facebook, Apple) use artificial intelligence (machine learning and deep learning) to detect some of the hateful content on their platforms and moderate it automatically. This technology acts as a filter, but it carries many biases, not to mention a significant error rate and many false positives. The result is moderation that is at once excessive and limited, to the detriment of users.
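To make the false-positive problem concrete, here is a minimal sketch of how threshold-based automatic filtering typically works. Everything here is hypothetical: the comments, the toxicity scores, and the `moderate` helper are illustrative, not any platform's actual system. A model assigns each comment a toxicity score; anything above a chosen threshold is removed. Lowering the threshold catches more hateful content but also removes more harmless or ambiguous comments, which is exactly the false-positive trade-off described above.

```python
# Sketch of threshold-based automatic moderation (hypothetical data and scores).
# A classifier assigns each comment a toxicity score in [0, 1]; comments at or
# above the threshold are filtered out automatically.

def moderate(scored_comments, threshold):
    """Split comments into (removed, kept) lists based on a toxicity score."""
    removed = [text for text, score in scored_comments if score >= threshold]
    kept = [text for text, score in scored_comments if score < threshold]
    return removed, kept

# Hypothetical scored comments: (text, model toxicity score)
scored = [
    ("have a nice day", 0.05),
    ("you people are the worst", 0.62),  # ambiguous: insult or banter?
    ("<explicit slur>", 0.97),
]

# A strict threshold removes the ambiguous comment too -- a likely false positive.
removed, kept = moderate(scored, threshold=0.5)

# A lenient threshold keeps it -- but would also miss subtler hate speech.
removed_lenient, kept_lenient = moderate(scored, threshold=0.9)
```

No single threshold fixes the dilemma: the trade-off between over-removal (false positives) and under-removal (missed hate) is inherent to filtering on a scalar score, which is why automatic moderation alone stays both excessive and limited.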