Monash expert: Grok blocked - should Australia follow suit? And how to safeguard images against malicious content generators
Monash University
A Monash University expert is available to comment on the controversial AI tool Grok, whether Australia should follow Indonesia, Malaysia and the UK in restricting it, and how to safeguard against private images being used by AI image generators.
Associate Professor Abhinav Dhall, Department of Data Science & AI, Faculty of Information Technology
Contact via: +61 450 501 248 or [email protected]
- Human-centred artificial intelligence
- Audio-visual deepfakes
- Computer vision
The following can be attributed to Associate Professor Dhall:
“Grok has made it easier to produce malicious content because it is directly integrated into X (formerly Twitter), so anyone can quickly tag it and request image edits. As it is so well integrated into the platform, the edited outputs also appear directly within the same public thread, which increases the visibility and reach of manipulated images.
“In many cases, the original poster may not even hold the rights to the image they are uploading to the platform, which increases the risk that the edited versions become defamatory or unsafe.
“Grok may perceive a post's edit request as benign; however, in some cases the system may not fully grasp the ethical context, or the emotional, reputational and privacy impact the manipulated image can have on the person involved. In our research, we have found that outputs generated using vision-language models can lead to negative changes in perception of the subjects in an image, especially if the prompt is crafted carefully.
“To reduce the risk of personal images being used to generate malicious content, users should be careful about posting clear, front-facing photos of their face, and should check and tighten privacy settings on their social media platforms.
“It is also important to avoid posting children’s photos publicly. If you suspect your images have been misused, a reverse image search can help detect AI-generated content, and fake or harmful content should be reported to the relevant platforms as quickly as possible.
“Rather than restricting Grok completely, a more balanced approach is to strengthen and enforce rules against misuse of images and misrepresentation. Australia already has laws covering image-based abuse, so the focus should be on making the penalties clear and ensuring it is easy for victims to report abuse and have content removed quickly. At the same time, social media platforms should be required to implement stronger guardrails to stop harmful edits before they spread.”
For any other topics on which you may be seeking expert comment, contact the Monash University Media Unit on +61 3 9903 4840 or [email protected]