Washington: US President Donald Trump on Tuesday fired his National Security Advisor John Bolton, saying he "disagreed strongly" with many of his suggestions.

Announcing Bolton's sacking on Twitter, Trump said he would name a new national security advisor next week.

Appointed in April 2018, Bolton is the third national security adviser to leave Trump's side, following in the footsteps of Michael Flynn and H R McMaster.

"I informed John Bolton last night that his services are no longer needed at the White House. I disagreed strongly with many of his suggestions, as did others in the Administration, and therefore.... I asked John for his resignation, which was given to me this morning," Trump tweeted.

"I thank John very much for his service. I will be naming a new National Security Advisor next week," he added.

The tweet came just one hour after the White House press office said Bolton was scheduled to appear at a Tuesday press briefing alongside Secretary of State Mike Pompeo and Treasury Secretary Steve Mnuchin.

Minutes after Trump tweeted, Bolton came out with his own response, contradicting the president's version of events. "I offered to resign last night and President Trump said, 'Let's talk about it tomorrow'," Bolton tweeted.

A leading foreign policy hawk, Bolton was widely known to have pressed Trump for a harder line on North Korea and Iran. He had also advocated a tougher approach on Russia and Afghanistan.

A disagreement between Trump and Bolton over the president's decision to host a now-cancelled meeting with leaders of the Taliban at Camp David was reportedly the last straw.

New Delhi: A committee set up by the Department for Promotion of Industry and Internal Trade (DPIIT) has proposed a mandatory blanket licensing system requiring AI developers to compensate copyright holders for using their work to train large language models. The panel, formed to assess how emerging AI technologies intersect with copyright law, released its working paper for public consultation on the DPIIT website. Feedback has been invited within 30 days from December 8 at the designated email address.

The committee, chaired by DPIIT Additional Secretary Himani Pande and comprising legal and technical experts, examined whether India’s existing copyright framework is adequate or requires amendments in light of rapid advances in AI, as reported by Bar&Bench. During consultations, most stakeholders from the AI industry argued for a blanket Text and Data Mining (TDM) exception that would permit unrestricted training on copyrighted material. In contrast, content creators and rights holders advocated for a voluntary licensing regime.

In its paper, the committee said a broad TDM exception would weaken copyright protection and leave creators without any recourse for compensation. It noted that such a system would be unsuitable for a country with a large cultural economy and a rapidly expanding content sector. The option of allowing creators to opt out was also rejected. The panel observed that small creators would be at a disadvantage due to limited awareness and an inability to monitor whether their work had been used despite opting out.

Since the committee concluded that withholding works entirely from AI training would restrict access to diverse, high-quality datasets, it recommended a hybrid model to strike a balance: all lawfully accessed copyrighted content may be used for AI training, but copyright holders would receive a statutory right to remuneration.

The panel proposed that the Central government designate a central non-profit body to collect royalties from AI developers and distribute them to rights holders. Only one representative body per class of work would be allowed, either a registered copyright society or a collective management organisation. The entity, tentatively named the Copyright Royalties Collective for AI Training (CRCAT), would maintain a database where creators can register their works. A government-appointed commission would determine royalty rates. A portion of the revenue generated by AI systems trained on protected content would also be distributed proportionally among rights holders.

AI developers would be expected to identify the categories, nature, and general sources of the content used in training datasets, without exposing technical or sensitive information. The panel noted that this would ensure transparency while keeping proprietary details protected.

Industry body Nasscom registered its dissent, stating that rights holders should receive explicit statutory protection against data mining. The panel members were Simrat Kaur, Anurag Kumar, advocates Ameet Datta and Adarsh Ramanujan, Raman Mittal, Chockalingam M, and Sudipto Banerjee.