Strengthening digital security, India's Digital India Act to work on countering Deepfake AI risks

The government aims to establish appropriate safety protocols as part of the DIA, specifically to address the issue of high-risk and deepfake AI

In case you aren't caught up with recent tech trends, there have been significant advancements in AI. This has given rise to new and innovative techniques such as AI-driven deepfake technology, which allows people to manipulate and alter audio and video content to a remarkable degree. The emergence of deepfake and AI-related frauds and scams is a growing concern around the world, and Indian authorities are now tackling it head-on.

In a bid to address the challenges posed by misinformation and high-risk AI tech, the central government is taking steps towards implementing a comprehensive Digital India Act. The first draft of the bill is expected to be released in the first week of June.

The Digital India Act could be unveiled in the next 2-3 months

During the second phase of pre-drafting public consultations involving multiple stakeholders, the Indian Union Minister of State for Electronics and Information Technology emphasized the increasing risk of AI-driven misinformation. The government aims to establish appropriate safety protocols as part of the Digital India Act (DIA), specifically to address the issue of high-risk and deepfake AI.

The government recently conducted the second round of consultations with policy experts and various stakeholders as part of the drafting process for the highly anticipated Digital India Act (DIA). It is anticipated that the DIA will be finalized and unveiled within the next 2-3 months.

In addition, the first round of pre-drafting consultations was initiated back in March by the Ministry of Electronics and Information Technology, engaging stakeholders to gather valuable inputs.

The primary objective of the DIA is to propel India's aspiration of being at the forefront of nations driving and influencing future technologies.

Deepfake tech and malicious use of AI can enable widespread fraud and scams

Deepfake technology uses AI algorithms to generate realistic and convincing synthetic media. With the ability to create highly deceptive fake audio and video, malicious actors can deceive the public or carry out technology-based identity theft.

Misuse of deepfake technology can lead to various consequences, including identity theft, fake news dissemination, impersonation, and reputation damage.

Addressing the challenges presented by deepfake technology requires a collective effort from individuals, technology developers and policymakers. Priority should be given to the development of strong detection and authentication mechanisms, so that deepfake content can be identified and its impact mitigated.
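To give a sense of what such detection mechanisms look like in practice, one common research approach is to train a binary classifier that scores individual video frames as real or manipulated. The sketch below is a minimal, illustrative example only; it is not part of the DIA or any official tooling, and the checkpoint file name `deepfake_detector.pt`, the test image name, and the real/fake label convention are assumptions made for this example.

```python
# Minimal sketch of a frame-level deepfake classifier (illustrative only).
# Assumes a fine-tuned checkpoint saved as "deepfake_detector.pt"; the label
# convention (0 = real, 1 = fake) is a hypothetical choice for this sketch.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for the ResNet backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(checkpoint_path: str = "deepfake_detector.pt") -> torch.nn.Module:
    """Build a ResNet-18 with a 2-class head and load fine-tuned weights."""
    model = models.resnet18(weights=None)                 # architecture only
    model.fc = torch.nn.Linear(model.fc.in_features, 2)   # real vs. fake head
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()
    return model

def score_frame(model: torch.nn.Module, image_path: str) -> float:
    """Return the model's estimated probability that a frame is a deepfake."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)                # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logits = model(batch)
        probs = torch.softmax(logits, dim=1)
    return probs[0, 1].item()                             # index 1 = "fake"

if __name__ == "__main__":
    detector = load_detector()
    fake_probability = score_frame(detector, "suspect_frame.jpg")
    print(f"Estimated probability of manipulation: {fake_probability:.2f}")
```

Real-world systems go well beyond a single-frame check, typically aggregating scores across many frames and combining them with audio, metadata and provenance signals before flagging content.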

Furthermore, public awareness campaigns play a vital role in educating individuals about the existence and potential risks associated with deepfakes. By increasing awareness, people can become more discerning consumers of media, reducing the likelihood of being deceived by deepfake scams.
