NEW YORK — Pornhub has released two statements about yesterday’s announcement by Visa and Mastercard that their cards will no longer be accepted on the platform, a move that followed an editorial by Nicholas Kristof published in The New York Times making a number of allegations against the company.
Regarding the credit card situation, Pornhub’s statement reads:
These actions are exceptionally disappointing, as they come just two days after Pornhub instituted the most far-reaching safeguards in user-generated platform history. Unverified users are now banned from uploading content — a policy no other platform has put in place, including Facebook, which reported 84 million instances of child sexual abuse material over the last three years. In comparison, the Internet Watch Foundation reported 118 incidents on Pornhub over the last three years.
This news is crushing for the hundreds of thousands of models who rely on our platform for their livelihoods.
Regarding the allegations made by Kristof in The New York Times, Pornhub’s statement reads:
Eliminating illegal content and ridding the internet of child sexual abuse material is one of the most crucial issues facing online platforms today, and it requires the unwavering commitment and collective action of all parties.
Due to the nature of our industry, people’s preconceived notions of Pornhub’s values and processes often differ from reality — but it is counterproductive to ignore the facts regarding a subject as serious as CSAM. Any assertion that we allow CSAM is irresponsible and flagrantly untrue. We have zero tolerance for CSAM. Pornhub is unequivocally committed to combating CSAM, and has instituted an industry-leading trust and safety policy to identify and eradicate illegal material from our community.
According to leading non-profits, advocates and third-party analyses, Pornhub’s safeguards and technologies have proven effective: while platforms intended to be family friendly like Facebook reported that it removed 84,100,000 incidents of CSAM over two and a half years, Instagram reported that it removed 4,452,000 incidents of CSAM over one and a half years, and Twitter reported that it suspended 1,466,398 unique accounts for CSAM over two years, the Internet Watch Foundation, the leading independent authority on CSAM, reported 118 incidents of CSAM on Pornhub in a three year period.
Pornhub has actively worked to employ extensive measures to protect the platform from such content. These measures include a vast team of human moderators dedicated to manually reviewing every single upload, a thorough system for flagging, reviewing and removing illegal material, robust parental controls, and a variety of automated detection technologies. These technologies include:
- CSAI Match, YouTube’s proprietary technology for combating Child Sexual Abuse Imagery online
- Content Safety API, Google’s artificial intelligence tool that helps detect illegal imagery
- PhotoDNA, Microsoft’s technology that aids in finding and removing known images of child exploitation
- Vobile, a fingerprinting software that scans any new uploads for potential matches to unauthorized materials to protect against banned videos being re-uploaded to the platform
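For readers unfamiliar with how fingerprinting tools of this kind operate, the general idea is to compute a compact hash of each new upload and compare it against hashes of previously identified illegal or banned material. The sketch below is purely illustrative: it uses the open-source imagehash library and a made-up blocklist, not the proprietary PhotoDNA, CSAI Match, or Vobile interfaces, which are not described in the statements above.

```python
# Hypothetical sketch of hash-based re-upload detection.
# This is NOT the actual PhotoDNA/Vobile/CSAI Match API; names and values are illustrative.
from PIL import Image
import imagehash

# Perceptual hashes of previously banned content (placeholder values).
BLOCKLIST = {
    imagehash.hex_to_hash("fedcba9876543210"),
}

# Hamming-distance threshold below which an upload counts as a near-duplicate.
MAX_DISTANCE = 5

def is_banned(upload_path: str) -> bool:
    """Return True if the uploaded image is a near-match to a blocklisted hash."""
    candidate = imagehash.phash(Image.open(upload_path))
    return any(candidate - known <= MAX_DISTANCE for known in BLOCKLIST)

# Example: reject an upload whose perceptual hash is close to a known banned item.
if is_banned("new_upload_frame.jpg"):
    print("Upload rejected: matches fingerprint of banned content")
```

Production systems of this kind typically match uploads against vetted hash lists maintained by child-safety organizations rather than a local set, but the basic near-duplicate comparison follows the same pattern.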
Pornhub also attached the figures, citing the Internet Watch Foundation and the platforms’ own reports, comparing CSAM found on other platforms:
Facebook: 84,100,000
Instagram: 4,452,000
Twitter: 1,466,398 – the older reports can be accessed near the top right-hand side of the page; the figure is the sum of the totals from the last two years.
Pornhub: 118
Note the quote from the IWF in this piece: “Internet Watch Foundation (IWF), which identifies and removes child sexual abuse imagery online, said it found 118 cases of child abuse on Pornhub from 2017-2019 but that this number was low and Pornhub quickly removed this content.”
“Everyday sites that you and I might use as social networks or other communications tools, they pose more of an issue of child sexual abuse material than Pornhub does,