The 'Construction' of Truth
Two recent examples of politicians dismissing 'real' pictures as AI generated should once again give us reason to pause and reflect.
A few weeks ago I was reflecting on the Indian elections and why, despite predictions, AI generated images, voices, or videos could not really affect the outcome. At least not to the extent being predicted.
Jayson Harsin notes how Disinformation Studies scholars have broadly avoided engaging with literature from related fields. I am largely in agreement with his point that scholars looking into disinformation hold a cryptonormative attitude towards it, which hampers an actual understanding of the subject.
Recently Donald Trump claimed that an image of the crowds that had turned up in Detroit in support of the Democratic candidate for US President, Kamala Harris, was AI generated. This was a lie.
But that Donald Trump lies isn't really a ground-breaking revelation. What interests me is how contemporary media validates truth claims, and how Generative AI may lead to newer conversations about that process.
Trump then turned around and used AI generated images to suggest that 'Swifties for Trump' is a real movement. Did he really think the images were real? Is he trolling? Does it matter?
Another 'AI' incident that caught my attention was West Bengal CM Mamata Banerjee's claim that the videos of vandalism and violence (with possible intent to destroy evidence) at the R.G. Kar Medical College & Hospital were AI generated. She then went on to blame the violence on the political opposition from both the Left and the Right.
To contextualise: the protests were sparked by the gruesome rape and murder of a PG trainee doctor late at night on 9th August, while she was on duty inside the hospital's seminar room. Maybe it was the sheer audacity of the crime, or the perception that the perpetrators were being shielded by the state, but this sparked a mass upsurge.
This was the context for the protests on the night of 14th August. There are more intricate details on the politics of the protest and the question of 'safety', but for our limited purpose of unboxing the AI claims, this should suffice. What is interesting is that the CM and her supporters used the AI claim to deflect all criticism and responsibility.
Most discussion on disinformation has focused on how AI generated audio-visual (or textual) material could be used to 'flood' social media platforms with disinformation. The opposite remains rather underexplored: claims of AI generation can be used to discredit genuine images or footage. More importantly, what needs to be re-examined is our relationship with 'mediated' truth.
The ability to influence the representation of events, institutions, or individuals in the media has always existed, whether through direct control or through indirect structuring of the narrative via PR operations. The difference is that in the case of social media, immediacy, proximity to sources, and amplification through engagement are presented as symptoms of authenticity. Yet control over platforms remains firmly in the hands of capital and is guided by the drive for accumulation, just as with 'legacy' media.