An Australian regulator has sent legal letters to Twitter and Google demanding information about their efforts to curb online child abuse, after previously pressing other international tech firms to hand over similar data.

The step taken by the nation's eSafety Commissioner
The move keeps attention on anti-exploitation efforts at Twitter, now under the ownership of Tesla founder and chief executive officer Elon Musk, who has said that tackling child exploitation is his foremost priority.
“With Elon Musk declaring child sexual exploitation a primary concern, this is an excellent chance for him to help us understand, in detail, the work his company is actually doing,” said eSafety Commissioner Julie Inman Grant, referring to Musk’s Twitter posts.
She went on to say that it is in Twitter’s best interest to demonstrate that it is actively working to remove content that glorifies the sexual exploitation of children, since such material could scare away potential advertisers.
Inman Grant, who worked as a policy director for the microblogging platform until 2016, said the responses from significantly bigger technology companies, together with reports that Twitter’s content moderation had become laxer since Musk took over as chief executive, prompted her decision.
Alongside the letter to Twitter, the commissioner also sent letters to Google, the Alphabet Inc. unit that owns YouTube, and to China-based TikTok.
The response from Google’s Samantha Yorke
Responding to the letter, Yorke said Google uses a range of industry-leading techniques, including hash-matching technology and artificial intelligence, to detect and remove child sexual abuse content uploaded to its services.
For its part, TikTok’s policy manager for Australia, Jed Horner, said in a statement that the company takes a zero-tolerance approach to the spread of abusive material, with more than 40,000 safety and security specialists around the world who set and enforce rules and build processes and technology to detect, remove, or strictly restrict such content at scale.

Changes under the new legislation
Under recently passed Australian legislation, the eSafety Commissioner now has the authority to compel technology companies to disclose information about child exploitation occurring on their platforms and the preventative measures they have in place.
Companies that refuse to cooperate face penalties of up to roughly $480,000 a day.
Last December, the commissioner sent similar letters to Apple, Microsoft Corporation, and Facebook owner Meta Platforms. After reviewing their responses, however, she found their practices unsatisfactory.
Inman Grant said research conducted in 2020 with the Canadian Centre for Child Protection uncovered extensive, publicly accessible abuse material on Twitter, which was then reported to the company’s head of trust and safety.
Although Twitter has closed its Australian office, Inman Grant said her agency has extraterritorial powers to fine companies overseas, and she hoped the global attention would encourage Twitter to cooperate.