Numerous organizations have taken on the grim task of reporting child sexual abuse imagery, but reviewing vast amounts of horrific content is both technically difficult and emotionally taxing. Google is promising to make the process easier. It's launching an AI toolkit that helps organizations review large volumes of child sex abuse material quickly while minimizing the need for human inspection. Deep neural networks scan images for abusive content and prioritize the most likely candidates for review. Google says this lets reviewers process 700 percent more content than before while reducing the number of people who have to look at the imagery.
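The triage step can be pictured with a minimal sketch: a classifier assigns each image a score, and the queue is sorted so the most likely candidates reach human reviewers first. The item names and scores below are hypothetical, and the real system's model and thresholds are not public.

```python
from typing import NamedTuple

class Item(NamedTuple):
    name: str
    score: float  # classifier's estimated probability of abusive content (hypothetical)

def review_queue(items: list[Item]) -> list[Item]:
    """Put the highest-scoring items first so reviewers see them soonest."""
    return sorted(items, key=lambda it: it.score, reverse=True)

queue = review_queue([Item("a.jpg", 0.12), Item("b.jpg", 0.97), Item("c.jpg", 0.55)])
print([it.name for it in queue])  # ['b.jpg', 'c.jpg', 'a.jpg']
```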
Unlike the conventional approach, which simply compares image hashes against known offending images, the AI method can also flag previously undiscovered material. That, in turn, could help authorities catch active offenders and prevent further abuse.
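The limitation of hash matching is easy to see in a sketch. A hash lookup only fires on files that have already been seen, hashed, and added to a blocklist; anything new slips through. The hash values below are illustrative, and production systems typically use perceptual hashes (such as Microsoft's PhotoDNA) rather than cryptographic ones so that minor edits still match.

```python
import hashlib

# Hypothetical blocklist of hashes of previously identified files.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-file-1").hexdigest(),
    hashlib.sha256(b"known-bad-file-2").hexdigest(),
}

def matches_known_hash(data: bytes) -> bool:
    """Return True only if this exact file has been hashed and listed before."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# An exact copy of a listed file matches...
print(matches_known_hash(b"known-bad-file-1"))  # True
# ...but previously undiscovered material never does, which is the gap
# a trained classifier can close.
print(matches_known_hash(b"never-seen-before"))  # False
```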
The tool is free to both corporate partners and non-governmental organizations through Google's Content Safety API. While there's no certainty that it will dramatically reduce the volume of horrible images online, it could help outlets detect and report child sex abuse even when they have only limited resources.