The Election Commission of India called a press conference to quieten the storm over “vote theft.” Instead of clarity, we got a combative ultimatum, selective talking points, and very little data that can actually reassure voters. In a moment when public trust is fragile, the EC chose to lecture and warn, not to explain and prove.

This row did not start with a rally or a TV debate. It began with Bihar’s Special Intensive Revision (SIR) where 65 lakh names were reportedly deleted in the draft rolls. The Supreme Court had to step in and direct the EC to publish details of the deletions with reasons on district websites. Only after that order did Bihar’s CEO upload the list, and the EC publicly emphasised how quickly it complied. This timeline matters because it shows transparency came after judicial nudging, not before. That does not build confidence.

What the EC said — and what it didn’t

At the presser, the Chief Election Commissioner gave Rahul Gandhi seven days to either file a sworn affidavit with evidence or apologise to the nation. He also argued that “vote theft cannot happen” at the machine because a person can press the button only once, and stressed that roll preparation and voting are separate processes. These lines made headlines. But they also skirted the heart of the dispute: alleged large-scale, wrongful deletions and additions in the rolls ahead of a crucial state election.

The Commission explained away anomalies like “house number 0” and duplicate names as address-formatting or record-cleaning issues, and insisted the SIR is not a rush job. What it did not share was granular, verifiable evidence that wrongful deletions have been promptly corrected or that inclusion barriers are low for poor, migrant and first-time voters.

Why the press meet failed to convince

1) It fought a straw man

The EC focused on EVM tampering and “one person, one vote,” while the main charge is voter suppression via the rolls. Conflating machine integrity with roll integrity avoids the central pain point: who got cut, on what grounds, and how fast are they being restored.

2) Transparency arrived late, not early

Publishing reasons for deletions after a Supreme Court directive is good, but reactive. People want to see proactive disclosure: district-wise reason codes, ward/booth heatmaps of deletions, and a running correction log that shows how many names were restored after objections. The EC showcased speed of compliance (uploaded “within 56 hours”), not depth of transparency.

3) The burden of proof was pushed onto citizens

A seven-day affidavit demand to a political opponent is theatre, not governance. The legal and moral burden sits with the authority controlling the rolls. If the EC believes the SIR is clean, it should publish audit trails, independent verification reports, and error rates by category (death, shift, duplication, “dead but alive” cases discovered and fixed). A podium warning cannot substitute for public evidence.

4) Too many tough questions went unanswered

Reporters flagged holes; the Commission largely sidestepped them. Simple queries remain: What is the false-positive deletion rate in the draft? What share of objections filed were accepted? How many deletions were reversed within the window? Without these figures, the presser looked defensive. Even business media called out the dodges.

5) The human stories cut through the spin

In the Supreme Court, “dead” voters walked in alive. Their testimony about the documents demanded to re-enter the rolls underscored how the costs of correction fall on the weakest. If deletion is easy and restoration is hard, the system is tilted. The EC did not convincingly address this asymmetry.

The trust deficit is now political capital

Opposition parties have turned the trust gap into street and parliamentary pressure — joint pressers, protest marches, even talk of an impeachment motion. You can disagree with their politics, but the scale of mobilisation shows how little the presser calmed the waters. When the referee is the story, the game is already in trouble.

What the EC should have done — and still can

Publish the full diagnostics, not just a list

Release district-level dashboards showing reasons for each deletion, acceptance/rejection rates of objections, and time taken to restore names. Add a weekly errata log until final rolls close.

Independent audit, publicly presented

Commission a third-party audit (retired constitutional judges, CAG-grade auditors, and statisticians) of a representative sample of deletions and additions. Present the findings in an open hearing.

Lower the barrier to get back on the roll

If eleven documents are accepted under the SIR framework, ensure BLOs help citizens generate at least one low-friction proof on the spot. Mobile camps in bastis and panchayats should process restorations within days, not weeks.

Name-and-notify policy for the “declared dead”

When a person is marked deceased, the system should auto-trigger door-to-door verification and a public notice period before deletion. Where mistakes are found, the EC must publicly count and correct them.

Stop the optics, start the evidence

Ditch ultimatums to politicians. Hold a data briefing every 72 hours till September 1 with hard numbers, district comparatives, and case studies of corrected errors — not just assertions.

The bottom line

Here’s the thing: confidence in elections is built on boring paperwork that stands up to scrutiny. The EC’s press conference was heavy on rhetoric and light on proof. It answered charges of “vote theft” with moral outrage and legalese, not with the granular facts people needed. Until the Commission puts out verifiable, district-level evidence and shows that wrongful deletions are being fixed fast and fairly, many citizens will remain unconvinced. An institution of this stature should not be asking for trust. It should be earning it, line by line, name by name.




New Delhi (PTI): The IT Ministry is examining the response and submissions made by X following a government directive to crack down on misuse of artificial intelligence chatbot Grok by users for the creation of sexualised and obscene images of women and minors, sources said.

X had been given an extended deadline of 5 PM on Wednesday to submit a detailed Action Taken Report to the ministry, after a stern warning was issued to the Elon Musk-led social media platform over indecent and sexually explicit content being generated through misuse of AI-based services like 'Grok' and other tools.

Sources told PTI that X has submitted its response, which is under examination.

The details of X's submission were, however, not immediately known.

On Sunday, X's 'Safety' handle said it takes action against illegal content on its platform, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.

"Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," X had said, reiterating the stance taken by Musk on illegal content.


On January 2, the IT Ministry pulled up X and directed it to immediately remove all vulgar, obscene and unlawful content, especially content generated by Grok (X's built-in artificial intelligence interface), or face action under the law.

In the directive on Friday, the ministry asked the US-based social media firm to submit a detailed action taken report (ATR) within 72 hours, spelling out specific technical and organisational measures adopted or proposed in relation to the Grok application; the role and oversight exercised by the Chief Compliance Officer; actions taken against offending content, users and accounts; as well as mechanisms to ensure compliance with the mandatory reporting requirement under Indian laws.

The IT Ministry, in the ultimatum issued, noted that Grok AI, developed by X and integrated on the platform, is being misused by users to create fake accounts to host, generate, publish or share obscene images or videos of women in a derogatory or vulgar manner.

"Importantly, this is not limited to creation of fake accounts but also targets women who host or publish their images or videos, through prompts, image manipulation and synthetic outputs," the ministry said, asserting that such conduct reflects a serious failure of platform-level safeguards and enforcement mechanisms, and amounts to gross misuse of artificial intelligence (AI) technologies in violation of stipulated laws.

The government made it clear to X that compliance with the IT Act and rules is not optional, and that the statutory exemptions under section 79 of the IT Act (which deals with safe harbour and immunity from liability for online intermediaries) are conditional upon strict observance of due diligence obligations.

"Accordingly, you are advised to strictly desist from the hosting, displaying, uploading, publication, transmission, storage, sharing of any content on your platform that is obscene, pornographic, vulgar, indecent, sexually explicit, paedophilic, or otherwise prohibited under any law...," the ministry said.

The government warned X in clear terms that any failure to observe due diligence obligations shall result in the loss of the exemption from liability under section 79 of the IT Act, and that the platform will also be liable for consequential action under other laws, including the IT Act and Bharatiya Nyaya Sanhita.

It asked X to enforce user terms of service and AI usage restrictions, including ensuring strong deterrent measures such as suspension, termination and other enforcement actions against violating users and accounts.

X has also been asked to remove or disable access "without delay" to all content already generated or disseminated in violation of applicable laws, in strict compliance with the timelines prescribed under the IT Rules, 2021, while ensuring the evidence is not vitiated.

Besides India, the platform has drawn flak in the UK and Malaysia too. Ofcom, the UK's independent communications regulator, in a recent social media post, said: "We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children".

"We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation," Ofcom said.