The American politician Daniel Patrick Moynihan famously said, “You are entitled to your opinion. But you are not entitled to your own facts.” Claims that the September 11 terror attacks were an ‘inside job’ by the US government, that Haitian immigrants are eating pets in Ohio, or that the Congress manifesto for the 2024 Lok Sabha election proposed to redistribute private wealth to Muslims are not facts. They are baseless conspiracy theories, easily refutable if one cares to look for evidence. Yet a large number of people believe them to be true. When online platforms, with their extensive and instant reach, repeatedly and forcefully spread pernicious lies such as these, falsehoods seep into the collective consciousness. Ordinary people, short on the resources and time needed to discern fact from fiction, start believing them.
An ill-informed population weakens democracy. But a misinformed population imperils democracy. Abraham Lincoln’s belief that “a government of the people, by the people, for the people, shall not perish from the Earth” is founded on the premise that a well-informed populace will make decisions that safeguard the republic. But false claims, when widely circulated, lead people to form opinions based on misinformation rather than facts. Studies by the European Commission, the RAND Corporation, the Carnegie Endowment for International Peace and the Oxford Internet Institute suggest that the bulk of online misinformation is designed to deepen societal divisions, erode confidence in democratic processes, and prepare the ground for violence or even authoritarian takeovers.
Realising the dangers disinformation poses to democracy, Australia has recently introduced a slew of new digital legislation to contain its spread. The most significant among them is the Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill 2024, which arms the Australian Communications and Media Authority (ACMA) with regulatory powers to hold “digital communications platforms” accountable for misinformation and disinformation that cause “serious harm”. Misinformation, the unintentional spread of false information, and disinformation, the deliberate spreading of lies, are two sides of the same coin.
The Australian initiative could provide a much-needed roadmap to safeguard democracy without stifling dissent or unreasonably restricting free speech. It is markedly different from the Government of India’s fact check unit (FCU) that was to be set up “to identify fake, false and misleading information about the government and its establishments” on social media. The Information Technology Amendment Rules, 2023, which provided for the FCU, were recently struck down as unconstitutional by the Bombay High Court on the grounds that they could have a chilling effect on social media intermediaries and that the expression, “fake, false and misleading”, in the rules was vague and undefined. The Indian government’s FCU was designed to protect the government. The Australian bill is designed to protect the public.
Unlike the FCU, the bill does not intend to cover all dissemination of content that may be considered false but rather dissemination of content that is reasonably verifiable as false, misleading or deceptive and likely to cause “serious harm”. The types of harm covered under the definition are harm to public health, vilification of a social group, intentionally inflicted physical injury to an individual, imminent damage to critical infrastructure or disruption of emergency services, and imminent harm to the economy. The narrow scope of serious harm will ensure there is no unreasonable censorship. Moreover, the FCU gave the Government of India a monopoly over facts related to its own affairs. Instead of making the government the sole arbiter of facts, the Australian bill allows the ACMA to set standards for digital platforms to identify, prevent and respond to the spread of misinformation. Government-authorised content is subject to the same standards as any other content.
The digital platforms will formulate a misinformation code, which will then be approved by the ACMA. Digital platforms will also be required to provide reporting tools, links to authoritative information, and support for fact-checkers. The platforms will also publish reports on the risks their services pose as well as media literacy plans. This will enable people to critically evaluate the vast range of content they encounter online. Additionally, the ACMA will make rules on complaints and dispute handling and the making and retaining of records by the digital platforms. Non-compliance with ACMA-set standards will be subject to civil penalties, including fines of up to 5% of the digital platforms’ global revenue.
The Australian bill has to pass through both Houses of Parliament before it becomes law. Not surprisingly, there is significant opposition from digital platforms like X and its owner, Elon Musk, who has become the champion of unrestrained, consequence-free online free speech. But free speech does not mean freedom to spread disinformation and hateful content. Multicultural, multiracial, multi-religious societies start fracturing under the weight of public lies. Collaboration and compromise, both key to a functioning democracy, give way to polarisation, hate and sectarian violence.
India has witnessed several episodes of mob lynching triggered by disinformation circulated on social media platforms like WhatsApp. In the United States of America, violent rhetoric and conspiracy theories of election fraud perpetuated by white supremacists on online platforms fomented the storming of the US Capitol in January 2021. In France, in 2019, misinformation led to violent attacks on the Roma people in the suburbs of Paris. Recently, false information spread online about a mass stabbing in Southport, United Kingdom, led to a wave of violent anti-immigration protests and riots across the country.
But existing laws, or the lack of them, have failed to fix legal accountability both for actors who post disinformation and for online platforms that spread it to the point that it gains verisimilitude. Strong First Amendment protections in the US have made combating online disinformation extremely challenging. The European Union’s Code of Practice on Disinformation, initially established in 2018 and revised in 2022, is a voluntary, self-regulatory instrument signed by major online platforms. Experience has shown that self-regulation by digital platforms often equates to no regulation. In India, the government seems to be more interested in quelling dissent and the voice of the Opposition than in quelling disinformation. As a result, online falsehoods have become a potent tool to manipulate public opinion and influence voter behaviour in every major democracy.
A report by the Oxford Internet Institute in 2019 stated that organised disinformation campaigns were operating in 70 countries, with India being one of the most targeted nations. A study by Microsoft in the same year found that Indians had the highest rate of exposure to fake news compared to other countries. In the World Economic Forum’s 2024 Global Risks Report, surveyed experts ranked misinformation and disinformation as the greatest threat faced by India — even greater than extreme weather or infectious diseases. Every election cycle in India brings with it a tsunami of conspiracy theories and falsehoods. A research paper titled “Political Hazard: misinformation in the 2019 Indian General Election Campaign” documented that misinformation reached people not only through forwarded WhatsApp messages but also through official social media handles run by political entities.
Combating disinformation requires a multi-pronged approach involving governments, tech companies, civil society, and media organisations. This is exactly what the model proposed by the new Australian bill endeavours to do. It involves all stakeholders while legally mandating digital platforms to constantly monitor and remove falsehoods, and to do it all with full transparency and urgency. The bill has a long way to go before it becomes law. But by prioritising the fight against online falsehoods and underscoring their pernicious impact, Australia has taken a significant first step in the battle between disinformation and democracy.
Ashish Khetan is a lawyer who specialises in international law