Regulating GenAI: How Pakistan Can Protect Its Future from AI Risks

Muhammad Noman Qamar

The advent of human-mimicking Artificial Intelligence (AI) is affecting every aspect of life, and its technological trajectory is set to transform the world dramatically in the coming decades. One of the most significant advances in this space is generative AI (commonly called GenAI), a type of AI that creates new, original content. Typical GenAI use cases include videos, images, and audio clips crafted from users’ prompts. A frontier system can produce such output in mere minutes, whereas comparable work once required days or even months, depending on the task’s complexity. While this innovation is groundbreaking, it raises serious questions about the ethical use and responsible deployment of GenAI. In countries like Pakistan, where digital transformation is both rapid and uneven, the potential of these technologies is especially significant in industries such as entertainment, education, and marketing. Like any powerful technology, however, GenAI carries risks: when misused, these tools can have profound social and ethical consequences.

The Growing Risks of GenAI Misuse in Pakistan:

AI-generated material on the internet has increased more than 8,000-fold over the past five years, according to a study by Copyleaks, a leading content detection and plagiarism software company. A report by Zebracat, an AI analytics platform, finds that social media is now flooded with AI-generated videos, which account for 40% of the content on major platforms such as TikTok, YouTube, and Instagram and draw millions of views. These statistics reveal the growing scale of AI-generated content online. Behind the numbers, however, a concerning pattern emerges.

The rapid rise of this technology raises important questions about the potential for exploitation, social harm, and the ethical implications of using such powerful tools. These concerns are not hypothetical; they are already unfolding in real time in Pakistan’s digital media landscape, where misuse is increasingly visible.

Numerous social media accounts, particularly on TikTok, have adopted GenAI video tools such as Veo 3 to mass-produce discriminatory, misleading, and socially harmful material. This content disproportionately targets vulnerable groups, especially young people, who are highly susceptible to manipulation. Many of these viral videos feature veiled sexualization and objectification of female characters, along with double-meaning and vulgar phrases. These elements are psychologically harmful and disturbingly common across TikTok, and they reinforce unfair and discriminatory portrayals of gender roles.

Notably, these actors appear to be engaging in these practices deliberately, coordinating their actions to strengthen their influence. Many AI content creators, self-proclaimed experts and trainers with hundreds of thousands of followers, actively sell courses and services for producing more of this harmful material. Their pupils and community members, most often young girls and boys, unknowingly help spread these productions by feeding recommendation algorithms through likes, shares, and comments. These mechanisms allow creators to monetize content and drive virality. Such practices clearly violate AI safety and ethics policies and stand in direct contradiction to the principles of responsible AI use.

The Urgent Need for Regulation:

The irresponsible deployment of GenAI is dangerous and must be addressed immediately. If left unchecked, these practices could have dire consequences: they risk corrupting societal and cultural norms and further marginalizing vulnerable communities. Another potential consequence is a regional ban on such technologies, an outcome that would stifle innovation and derail technological progress in Pakistan.

In light of these risks, it is crucial to introduce effective guardrails and measures to counter these threats. Pakistan’s draft National AI Policy (NAIP), proposed in 2023, offers a pragmatic, long-term step in the right direction. It acknowledges the need for ethical and responsible use of AI and proposes an AI Regulatory Directive (AIRD) to enforce these practices. However, progress has stalled: a working model for the AIRD has yet to be developed, no concrete steps have been taken toward implementation, and no roadmap has been provided for regulating the ethical and responsible use of AI.

Beyond this policy inaction, an equally alarming concern is the passivity of academia and industry in Pakistan, which have taken no significant or proactive measures to address these ethical issues. Many steps could mitigate the problem, if not eliminate it entirely, beginning with fostering open discussion and debate.

Comprehensive Solutions:

An effective solution to these deeply rooted risks requires more than legislation. Pakistan needs a holistic approach, with action at three levels, each playing a crucial role in ensuring the trustworthy and positive use of AI.

a) Government Level:
The government should urgently establish clear milestones for implementing the National AI Policy (NAIP) and build the critical infrastructure, such as the AI Regulatory Directive (AIRD), needed to enforce ethical and responsible AI practices. The EU AI Act, together with the AI regulatory offices established in various states, offers a model Pakistan could adapt.

b) Academia and Industry:
AI literacy, especially in ethical and responsible AI use, should be integrated at every level in academic institutions and industry organizations. This can be achieved through curricular reforms, industry certifications, awareness sessions, dedicated workshops, and training programs. These measures will ensure that experts and users of these tools understand how to use them safely and ethically.

c) Public Level:
The public must be AI-literate to identify and report violations of ethical and responsible AI usage on social media platforms. Public awareness programs are essential to equip individuals with the skills needed to recognize harmful AI content and promote accountability.

Final Thoughts:

Pakistan stands at a critical juncture regarding the future of GenAI. One path, if the technology is left unchecked, leads to the erosion of societal norms and, ultimately, of innovation itself. The other is the regulated, ethical deployment of these frontier models, which can foster a better society and promote innovation. Through coordinated effort across government, academia, and the public, we can mitigate the risks of AI misuse. The time to act is now, before the consequences of inaction become irreversible.
