Apple Inc said on Friday it will roll out a system for checking photos for child abuse imagery on a country-by-country basis, depending on local laws.
Apple announced it would implement a system that screens photos for such images before they are uploaded from iPhones in the United States to its iCloud storage. Child safety groups praised Apple as it joined Facebook Inc, Microsoft Corp and Alphabet Inc's Google in taking such measures. But Apple's decision to run the photo check on the iPhone itself raised concerns that the company is probing users' devices in ways that governments could exploit.
Several other technology companies check photos after they are uploaded to servers. In a media briefing, Apple said it would share plans to expand the service based on the laws of each country where it operates. The company said features of its system, such as "safety vouchers" passed from the iPhone to Apple's servers that do not contain useful data, will protect Apple from government pressure to identify material other than child abuse images.
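The voucher idea described above can be illustrated with a deliberately simplified sketch. The names, hashing scheme, and threshold below are illustrative assumptions, not Apple's actual protocol (which uses perceptual hashing and cryptographic private-set-intersection rather than plain hashes): the point is only that each voucher reveals a match flag and an opaque identifier, not photo content, and that a server-side backstop acts only past a threshold.

```python
import hashlib
import secrets

# Hypothetical list of known-bad image hashes; in practice this would be a
# database of hashes supplied by child-safety organizations.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-sample").hexdigest()}

def make_safety_voucher(photo_bytes: bytes) -> dict:
    """On-device check: hash the photo and wrap the result in a voucher.

    The voucher carries only an opaque per-upload identifier and a match
    flag -- no photo content travels to the server. (Simplified: Apple's
    real design also hides the match result from the device itself.)
    """
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return {
        "voucher_id": secrets.token_hex(8),  # opaque identifier
        "matched": digest in KNOWN_BAD_HASHES,
    }

def server_review(vouchers: list) -> bool:
    """Server-side backstop: escalate to human review only past a threshold."""
    THRESHOLD = 2  # illustrative; a real system tunes this to limit false positives
    return sum(v["matched"] for v in vouchers) >= THRESHOLD
```

In this sketch, a voucher for a non-matching photo tells the server nothing useful about the image, which is the property Apple says shields it from pressure to hunt for other material.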
Apple has a human review process that acts as a backstop against government abuse, it added. If the review finds no child abuse imagery, the company will not pass reports from its photo-checking system to law enforcement.