UK watchdog accuses Apple of failing to monitor its platforms for child sexual abuse imagery



End-to-end encryption means that Apple cannot see the content of messages sent via iMessage or FaceTime, and therefore cannot report them. Image Credit: Reuters

The National Society for the Prevention of Cruelty to Children (NSPCC) in the UK has accused Apple of failing to report numerous cases of child sexual abuse imagery on its platforms.

This criticism was highlighted by The Guardian, which reported that Apple had made only 267 reports of child sexual abuse material (CSAM) worldwide between April 2022 and March 2023, according to data from the National Center for Missing & Exploited Children (NCMEC).

However, police data obtained by the NSPCC indicated a stark contrast. In the same period, child predators in England and Wales alone used Apple’s services — iCloud, FaceTime, and iMessage — in 337 recorded child abuse image offences.

This discrepancy raises concerns about Apple’s reporting practices, especially when compared to other tech giants. For instance, Meta filed over 30.6 million reports and Google around 1.47 million in the same timeframe, according to NCMEC’s annual report.

Apple could argue that its services like iMessage are end-to-end encrypted, making message contents inaccessible to the company itself. However, similar arguments apply to WhatsApp, which is also encrypted, yet Meta’s reported figures remain significantly higher.

All US-based tech companies are legally obligated to report any detected cases of CSAM to the NCMEC, which then forwards these reports to relevant law enforcement agencies.

Richard Collard, NSPCC’s Head of Child Safety Online Policy, highlighted the discrepancy between the number of UK child abuse crimes on Apple’s platforms and the number of CSAM reports made by Apple. He emphasized that Apple appeared to be lagging behind its peers in addressing the issue, especially as the UK prepares for the implementation of the Online Safety Act.

The controversy around Apple’s handling of CSAM isn’t new. The NSPCC’s accusations add to the ongoing debate about Apple’s approach to detecting and reporting CSAM on its platforms. Some misunderstandings about end-to-end encryption contribute to the controversy.

End-to-end encryption means that Apple cannot see the content of messages sent via iMessage or FaceTime, and therefore cannot report them. These cases typically come to light after offenders are caught through other means and their devices are accessed.

Another significant issue is iCloud. Unlike many cloud services, which routinely scan uploads for the digital fingerprints of known CSAM, Apple does not. The company has cited privacy concerns as the reason.
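For illustration, here is a minimal Python sketch of what "scanning for digital fingerprints" means in practice: hashes of uploaded files are compared against a list of fingerprints of known abusive images, of the kind supplied to providers by bodies such as NCMEC. This is purely a conceptual stand-in, not any provider's actual system; real deployments such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding rather than the plain SHA-256 digest used here, and the `KNOWN_FINGERPRINTS` set and `uploads/` directory are hypothetical.

```python
# Conceptual sketch only: real systems use perceptual hashing
# (e.g. PhotoDNA), not a cryptographic hash, so that re-encoded or
# resized copies of a known image still match.
import hashlib
from pathlib import Path

# Hypothetical fingerprint list; in practice supplied by organisations
# such as NCMEC and never distributed in the clear like this.
KNOWN_FINGERPRINTS: set[str] = set()


def fingerprint(path: Path) -> str:
    """Return a hex digest standing in for the file's fingerprint."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def scan_upload(path: Path) -> bool:
    """True if the uploaded file matches a known fingerprint."""
    return fingerprint(path) in KNOWN_FINGERPRINTS


if __name__ == "__main__":
    upload_dir = Path("uploads")  # hypothetical directory of incoming files
    uploads = [p for p in upload_dir.glob("*") if p.is_file()] if upload_dir.is_dir() else []
    flagged = [p for p in uploads if scan_upload(p)]
    print(f"{len(flagged)} of {len(uploads)} upload(s) matched the fingerprint list")
```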

Back in 2021, Apple announced plans for a privacy-respecting on-device scanning system, but details leaked ahead of the announcement and the proposal drew a backlash over the potential for misuse by repressive governments.
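To make the distinction from server-side scanning concrete, the sketch below illustrates, under heavy simplification and with hypothetical names, the core idea of that 2021 proposal: matching happens on the device against the same kind of fingerprint list, and nothing is surfaced for review until the number of matches crosses a threshold. The actual design relied on a perceptual hash (NeuralHash) plus cryptographic machinery such as private set intersection and threshold secret sharing, none of which is reproduced here.

```python
# Simplified illustration of on-device, threshold-gated matching.
# The real 2021 design used NeuralHash and cryptographic protocols so
# that nothing was learned server-side until the threshold was exceeded.
import hashlib
from pathlib import Path

KNOWN_FINGERPRINTS: set[str] = set()  # hypothetical, supplied out of band
MATCH_THRESHOLD = 30                  # illustrative threshold, not Apple's spec


def matches_known_image(path: Path) -> bool:
    """Stand-in for a perceptual-hash match against the fingerprint list."""
    return hashlib.sha256(path.read_bytes()).hexdigest() in KNOWN_FINGERPRINTS


def should_generate_report(photo_paths: list[Path]) -> bool:
    """Flag an account only once enough photos match the list."""
    match_count = sum(matches_known_image(p) for p in photo_paths)
    return match_count >= MATCH_THRESHOLD
```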

In the face of that backlash, Apple postponed and then abandoned the plans, eventually citing the same risks its critics had raised when explaining why it would not proceed.

This leaves Apple in a difficult position: in trying to balance user privacy against its public responsibility, its approach has backfired.

Had Apple implemented routine scanning of uploads like other companies, it might have avoided much of this controversy. Changing course now, however, would likely reignite the earlier backlash, leaving the company in a no-win situation.
