It reported that Google and Facebook’s automated advertising tech had placed adverts for household names in a total of six apps that let users search for WhatsApp groups to join – a function that the chat service does not allow in its own app.
Using the third-party software, it was possible to look for groups containing inoffensive material.
But a search for the word “child” brought up links to join groups that clearly signalled their purpose was to share illegal pictures and videos.
The BBC understands these groups were listed under different names in WhatsApp itself to make them harder to detect.
Brands whose ads were shown ahead of these search results included:
“The link-sharing apps were mind-bogglingly easy to find and download off of Google Play,” Roi Carthy, AntiToxin’s chief marketing officer, told the BBC.
“Interestingly, none of the apps were to be found on Apple’s App Store, a point which should raise serious questions about Google’s app review policies.”
After the first article was published, Google removed the group-searching apps from its store.
“Google has a zero-tolerance approach to child sexual abuse material and we thoroughly investigate any claims of this kind,” a spokeswoman for the firm said.
“As soon as we became aware of these WhatsApp group link apps using our services, we removed them from the Play store and stopped ads.
“These apps earned very little ad revenue and we’re terminating these accounts and refunding advertisers in accordance with our policies.”
WhatsApp messages are scrambled using end-to-end encryption, which means only the members of a group can see its contents.
Group names and profile photos are, however, viewable.
WhatsApp’s own moderators began actively policing the service about 18 months ago, having previously relied on user reports.
They use group names and profile pictures to detect banned activity.
Earlier this month, the firm revealed it had terminated 130,000 accounts over a 10-day period.