On 22 August, the Australian government passed legislation targeting the use of deep fake technology. The new law introduces penalties, including prison sentences, for those found guilty of creating or distributing deceptive deep fake content intended to harm individuals or the public interest.
The legislation, known as the Deep Fake Accountability Act, was passed in the House of Representatives with a substantial majority. It aims to address growing concerns about the potential for deep fakes to be used in malicious activities, such as spreading misinformation, defaming individuals, or facilitating criminal schemes.
Deep fake technology, which leverages artificial intelligence to create hyper-realistic but entirely fabricated images, videos, or audio recordings, has raised alarms globally. The Australian government has been proactive in addressing these concerns, reflecting a broader international trend toward regulating digital misinformation and protecting public trust.
Under the new law, individuals convicted of creating or distributing deep fakes with the intent to deceive or harm face up to 15 years in prison. The legislation also includes provisions for hefty fines and civil penalties, which can be imposed alongside or instead of prison time, depending on the severity of the offense.
The bill’s passage comes after a series of high-profile incidents involving deep fakes in Australia, including fake videos used to spread false information about public figures and to manipulate public opinion on critical issues. These incidents have underscored the urgent need for robust legal frameworks to address the potential dangers associated with this technology.
Attorney General Linda Roberts emphasised the legislation's importance in safeguarding democratic processes and individual rights. “Today marks a significant step in ensuring that technology serves society ethically and responsibly. The use of deep fake technology for deceitful purposes undermines trust and can have devastating consequences. This legislation sends a clear message that such behaviour will not be tolerated.”
The new law also includes measures to enhance transparency and accountability within the tech industry. Companies that develop or deploy deep fake technology will be required to implement safeguards to prevent misuse. They must also cooperate with law enforcement agencies in investigations involving their technology.
The legislation has received broad support from various stakeholders, including digital rights groups and cybersecurity experts. Many have welcomed the move as a necessary action to address a growing threat. However, some critics argue that the law could potentially infringe on freedom of expression and have called for careful implementation to avoid unintended consequences.
“While we support the aim of combating malicious deep fakes, it’s crucial that this law is applied judiciously to avoid stifling legitimate creative and journalistic uses of the technology,” said Jane Thompson, a spokesperson for the Australian Digital Rights Coalition.
In addition to the legislative measures, the Australian government plans to launch a public awareness campaign to educate citizens about the risks associated with deep fakes and the importance of verifying information before sharing it.