Indraprastha Institute of Information Technology Delhi (IIIT-Delhi) and global technology company Logically have extended their collaboration to counter disinformation and hate speech until 2026.
“As part of this partnership, the two organizations will conduct further research into the development of cutting-edge technologies to counter hate speech and online misinformation and disinformation. The partnership will also improve multimedia analysis capabilities, covering video, images and memes, as well as build multilingual models that understand regional languages in India,” they said in a statement.
The partnership began in 2020 between Logically and the Laboratory for Computational Social Systems (LCS2) at IIIT-Delhi. The two collaborated on “fundamental technical research on understanding the provenance, motivations and psychology of online misinformation”.
“Research from the first two years of collaboration has already been translated into multilingual capabilities that have been deployed in Logically’s flagship threat intelligence platform – Logically Intelligence – to detect and analyze misinformation, disinformation and online harm faster. In 2021, the research results were recognized at prestigious academic conferences,” the two said in a statement.
Commenting on the partnership, Dr. Anil Bandhakavi, Head of Data Science at Logically, said, “We are delighted with the impact of the first two years of our research collaboration with IIIT-Delhi. As expected, we were able to show quantifiable results in the research space to counter hate speech and mis/disinformation. Given the success of the first phase of our collaboration, we are delighted to further strengthen our partnership with a prestigious institution like IIIT-Delhi.”
Dr. Tanmoy Chakraborty, Director of the Laboratory for Computational Social Systems and Head of the Center for AI at IIIT-Delhi, said, “We look forward to continuing our research successes and growing our research teams in the next phase of the collaboration. Our research capabilities and Logically’s industry experience will enable us to develop better insights into understanding online harm and preventing it across languages and various forms of media.”
In a joint statement, the two said their research partnership had succeeded in designing “predictive models to estimate the likelihood of a social media post attracting harmful content to social media discourse, allowing content moderators to more quickly identify social media posts that may invite online harm”.
“Additionally, to better understand and identify threats at the community level, the teams modeled the formation of online hate echo chambers, observing that a small number of echo chambers are responsible for spreading the majority of harmful online content,” they said.