2024 Australia: In November, the eSafety Commissioner announced draft standards that would require operators of cloud and messaging services to detect and remove known child-abuse and pro-terror material “where technically feasible”, and to disrupt and deter new material of the same nature.
Key Concerns Raised by Experts:
1. Flawed ‘Voluntary’ AI Chat Control Creates Dangerous False Positives

The experts warn that, unlike the previous Council proposal, the new text expands scanning of private communications to include automated text analysis, using AI to identify ambiguous “grooming” behaviours. They argue this will create a dragnet that ensnares innocent people: “Current AI technology is far from being precise enough to undertake these tasks with guarantees for the necessary level of accuracy.”