Insights
How Does India Inc Govern AI?
By Shreya Ramann
Published on: 7 April 2026
AI Governance
Insights from the State of Responsible AI in India 2025 Survey
NASSCOM's State of Responsible AI in India 2025 survey is one of the most comprehensive studies we have on how Indian businesses are approaching responsible AI, capturing the practices of 574 senior executives across enterprises, SMEs, and startups that are developing or using AI commercially. It reflects an industry taking responsible AI (RAI) more seriously than it did two years ago, when the previous edition of the report was released. RAI is maturing as AI use grows: more companies have frameworks, more are investing in training, fewer are starting from zero.

What makes the 2025 survey particularly valuable is the granular data on what risks organisations are experiencing, where accountability sits, and which compliance areas feel out of reach. These findings can be a springboard to move past the question of whether Indian businesses are taking RAI seriously, toward the more useful question of how. Here are some patterns from the report that we think are worth highlighting.
Who’s Running This Ship?
In 49% of companies, primary responsibility for RAI sits with the C-suite or board. While AI/data departments are taking on more responsibility (up from 19% in 2023 to 26%), top-down accountability continues to dominate AI governance.
This works if executives have clear visibility over AI deployment and risks, and if teams have clear escalation pathways to leadership. But the risks companies reported in the survey, including hallucinations, privacy violations, and bias, surface in everyday operations, not at board level. A board resolution on RAI doesn’t automatically translate into an employee knowing how to identify an inaccurate response, or a product manager knowing how to document an AI-related incident. In many organisations, department heads are closer to operational teams and may have better visibility into gaps or tensions.
Accountability at the top is only as effective as the systems beneath it. Without structured ways for operational teams to flag, escalate, and respond to AI-related risks, top-down accountability risks becoming a policy position rather than a governance practice.
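To make "operational at the team level" concrete, here is a minimal sketch in Python of what a structured flagging path can look like. The severity tiers, role names, and routing rules are illustrative assumptions on our part, not recommendations from the survey; the point is simply that an employee's observation should land with a named owner.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    LOW = "low"        # e.g. a one-off inaccurate summary
    MEDIUM = "medium"  # e.g. repeated hallucinations in one workflow
    HIGH = "high"      # e.g. suspected privacy violation or biased decision

# Hypothetical routing table: which role owns the response at each tier.
ESCALATION_PATH = {
    Severity.LOW: "product_owner",
    Severity.MEDIUM: "ai_risk_committee",
    Severity.HIGH: "c_suite_sponsor",
}

@dataclass
class AIIncident:
    system: str        # which AI system or feature was involved
    description: str   # what the employee actually observed
    severity: Severity
    reported_by: str
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def owner(self) -> str:
        """Return the role accountable for the response, per the routing table."""
        return ESCALATION_PATH[self.severity]

# A support agent flags a chatbot inventing a refund policy.
incident = AIIncident(
    system="customer-support-chatbot",
    description="Bot quoted a refund policy that does not exist",
    severity=Severity.MEDIUM,
    reported_by="support_agent_142",
)
print(incident.owner())  # -> ai_risk_committee
```

Even a structure this simple forces the two questions top-down accountability tends to leave open: what counts as an incident, and who answers for it.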
Questions for your organisation:
- Do teams know how to flag or escalate AI-related concerns, and to whom?
- Is it clear who owns the response when something goes wrong with AI?
- If accountability sits with leadership, what processes exist to make that accountability operational at the team level?
Jumping the Gun on Agentic AI?
When surveyed, 55% of businesses with mature RAI practices said their existing frameworks are sufficient to address agentic AI risks. However, in personal interviews, industry leaders cautioned that this may be an overestimation and that most businesses are not yet equipped for the risk profile introduced by agentic AI.
While we shouldn’t automatically assume the self-assessors are wrong, it is worth asking why this overestimation may be occurring. One possibility is that agentic AI is still new enough that many organisations do not yet have a clear picture of how differently it behaves from existing AI tools.
Most existing RAI frameworks were built to govern assistive AI, which produces an output and waits for a human to take the next step. An AI agent can initiate multi-step actions with real-world consequences: it might process a refund, send a legal notice, modify a customer record, or execute a transaction. This autonomy creates new risks, such as cascading errors that may be difficult to trace, and actions where harm may have already occurred before a human can intervene.
Without understanding these distinct risks, it is easy to feel prepared for agentic AI if you have a governance framework already in place.
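For illustration, here is a minimal sketch of one control that a framework built for assistive AI typically lacks: a gate that holds high-impact agent actions for human approval before execution, while logging everything for traceability. The action names and impact tiers are hypothetical, not drawn from the report.

```python
from typing import Callable

# Hypothetical tiering: agent actions with irreversible real-world effects.
HIGH_IMPACT_ACTIONS = {"process_refund", "send_legal_notice", "modify_customer_record"}

def log_action(action: str, payload: dict) -> None:
    print(f"[audit] {action} {payload}")  # stand-in for a real audit trail

def execute_agent_action(action: str, payload: dict,
                         approve: Callable[[str, dict], bool]) -> str:
    """Run an agent-initiated action, pausing for human sign-off when the
    action could cause harm before anyone has a chance to intervene."""
    if action in HIGH_IMPACT_ACTIONS and not approve(action, payload):
        log_action(action, {"status": "held_for_review", **payload})
        return f"{action}: held for human review, not executed"
    # Low-impact actions (e.g. drafting a reply) proceed autonomously,
    # but everything is logged so cascading errors can be traced later.
    log_action(action, payload)
    return f"{action}: executed"

# Usage: in practice the approval callback would route to a review queue.
print(execute_agent_action(
    "process_refund", {"order": "A-1043", "amount": 4999},
    approve=lambda action, payload: False,  # simulate a reviewer saying no
))
```

A framework that has nothing to say about where this gate sits, or which actions pass through it, is probably governing assistive AI, whatever its self-assessment says.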
Questions for your organisation:
- Have we mapped the specific risks that agentic AI introduces and tested our frameworks against them?
- Does our governance framework cover decisions made autonomously by AI, or only decisions made by humans using AI?
- Do we have a process in place to identify if an agentic system causes harm, and would we know who is responsible for it?
The Foundation Problem
43% of respondents cited lack of access to high-quality data as the biggest barrier to RAI implementation. This was well ahead of challenges like regulatory uncertainty (20%) and skills gaps (15%), and consistent across large enterprises, SMEs, and startups.
This matters because an AI system is only as reliable as the data it works with. Poor data quality produces unreliable outputs like hallucinations, biased recommendations, and inaccurate summaries, which downstream governance cannot compensate for.
In practice, we often see data governance and AI governance treated as separate workstreams, owned by different teams and operating on different timelines. While the survey numbers indicate that data quality is considered a barrier to RAI implementation, it isn’t widely perceived as a prerequisite for AI governance. Until that connection is made explicit, the barrier is likely to persist.
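One way to make the connection explicit is a data quality gate that sits between the two workstreams: AI teams cannot use a dataset until it clears checks the data governance team owns. A minimal sketch, with thresholds and required fields that are purely illustrative:

```python
import pandas as pd

# Hypothetical thresholds a data governance team might set before a
# dataset is cleared for AI training or retrieval use.
MAX_NULL_RATE = 0.05
REQUIRED_COLUMNS = {"customer_id", "consent_status", "updated_at"}

def clear_for_ai_use(df: pd.DataFrame) -> list[str]:
    """Return a list of failures; an empty list means the dataset passes."""
    failures = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
    if df.empty:
        failures.append("dataset is empty")
        return failures
    null_rate = df.isna().mean().max()  # worst column's share of nulls
    if null_rate > MAX_NULL_RATE:
        failures.append(f"null rate {null_rate:.1%} exceeds {MAX_NULL_RATE:.0%}")
    if df.duplicated().any():
        failures.append("duplicate rows present")
    return failures

records = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "consent_status": ["given", None, "given"],
    "updated_at": ["2025-01-02", "2025-01-05", "2025-01-09"],
})
print(clear_for_ai_use(records))  # -> ['null rate 33.3% exceeds 5%']
```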
Questions for your organisation:
- Is our data clean, consistent, and well-governed enough to produce reliable AI outputs?
- Do we have clarity on what data our AI systems are drawing from, and who is responsible for its quality?
- Are our data governance and AI governance frameworks connected, or managed separately?
The Data Protection Dichotomy
55% of respondents ranked data protection as the compliance area they feel most confident about. Yet privacy violations are reported as the second most frequently experienced AI risk (36%), after hallucinations (56%). While these numbers may not come from the same set of organisations, taken together they raise the question of whether that confidence is translating into effective privacy risk management.
Organisations may be confident in data protection compliance—policies, contracts, consent mechanisms—but this confidence may not extend to the technical layer where AI systems process data, and where incidents happen. It is also possible that incidents are simply outpacing controls because the exposure is genuinely new and because AI is creating privacy risk scenarios that did not exist when those controls were designed. Traditional data protection frameworks were not designed for a world where employees upload customer data to third-party AI tools, or where models are trained on vast personal datasets.
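One control at that technical layer is screening prompts for personal data before they leave the organisation. The sketch below is deliberately naive; the regex patterns are illustrative stand-ins for a proper PII detection library tuned to local identifier formats:

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# PII detection library and formats relevant to its jurisdiction.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{10}\b"),
}

def redact(text: str) -> str:
    """Replace detected personal data before the prompt is sent to a
    third-party AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

prompt = "Summarise this complaint from [email protected], phone 9876543210."
print(redact(prompt))
# -> Summarise this complaint from [email redacted], phone [phone redacted].
```

A policy can require this kind of screening; only the technical layer can actually perform it, which is where the confidence gap tends to open up.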
Regulators are grappling with the same gap. In the EU, regulators are looking to clarify how the General Data Protection Regulation (GDPR) applies to AI systems. Proposed amendments include recognising ‘legitimate interest’ as a legal basis for processing personal data for AI development and deployment, which would provide an alternative to consent for certain AI-related data processing, subject to safeguards.
In India, the Digital Personal Data Protection Act (DPDPA) is moving from legislation into enforcement, but how it applies to AI operationally is still being worked out. As that regulatory clarity emerges, the confidence gap and the incident gap may start to narrow.
Questions for your organisation:
- Have our data protection controls been specifically reviewed and updated for how our AI systems ingest, process, and output personal data?
- When we report confidence in data protection compliance, are we measuring policy coverage or operational effectiveness?
- Do the teams building and deploying our AI systems have the same understanding of our data protection obligations as our legal and compliance teams?
- As regulatory enforcement matures, do we understand where our AI systems create new data protection obligations?
Out of Sight, Out of Governance
For 74% of companies, monitoring is the area where they feel least confident. Monitoring is how organisations track AI after deployment: whether systems produce accurate outputs, behave consistently, and meet deployment standards. It is how organisations catch drift, hallucinations, bias, and privacy violations before they become incidents.
This finding raises a couple of questions. First, companies can only report what they detect. If monitoring confidence is this low, the risk numbers in the survey (56% experiencing hallucinations, 36% privacy violations) may be undercounts, not because of deliberate underreporting, but because gaps in monitoring mean some incidents are simply never visible.

Second, if monitoring isn’t catching the risks companies know about, what is? Internal audits, customer complaints, or incidents that have already caused harm?
The combination of accelerating AI deployment and low monitoring confidence is a gap that deserves attention.
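As a flavour of what the first rung of monitoring can look like, here is a minimal sketch that compares a logged output metric against a baseline captured at deployment. The metric, scores, and tolerance are invented for illustration; real monitoring would cover far more than one statistic:

```python
import statistics

def drift_alert(baseline: list[float], recent: list[float],
                max_shift: float = 0.1) -> bool:
    """Flag when the mean of a monitored output metric (here, a model
    confidence score) moves beyond a tolerance fixed at deployment time."""
    return abs(statistics.mean(recent) - statistics.mean(baseline)) > max_shift

# Confidence scores logged at deployment vs. after a model or prompt update.
baseline_scores = [0.91, 0.88, 0.93, 0.90, 0.89]
recent_scores = [0.74, 0.71, 0.78, 0.69, 0.72]

if drift_alert(baseline_scores, recent_scores):
    print("Post-deployment drift detected: route to the incident process")
```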
Questions for your organisation:
- Do we have monitoring in place for AI systems that have been deployed?
- What does our monitoring cover: architecture, outputs, accuracy, compliance?
- How would we find out if an AI system started behaving differently after an update?
Is Legal in the Room?
Only 1% of respondents identified Legal as primarily responsible for RAI. That is not surprising: RAI ownership sitting with technology, data, or risk functions is often the right design. But ownership is different from involvement, and the more useful question is whether Legal meaningfully contributes to the decisions that create legal exposure.
This is even more relevant because AI-related liability in India is being adjudicated under frameworks legal teams already own—laws around data protection, consumer protection, contracts, competition, and tort principles.
For instance, when an AI system is being deployed, a legal review of how it could affect customers, employees, or third parties could surface risks that technical teams aren’t positioned to see, like the potential for discrimination claims in AI-assisted hiring or competition law risks with AI pricing tools. When a new AI product or tool is being evaluated, legal input on vendor contracts could determine whether liability for harmful outputs, data breaches, or model failures sits with the organisation or the vendor. While Legal need not own or lead RAI, its role is critical enough to require a seat at the table.
Questions for your organisation:
- At what point in an AI decision does our legal team get involved? Is it early enough to shape risk outcomes, or mainly for contract review?
- Does our legal team have a working understanding of how our AI systems operate, or is their involvement limited to reviewing policies and agreements?
- When an AI-related incident occurs, is the legal team part of the initial response, or brought in later?
Where to go from here
The foundations being built today will be tested at a different scale tomorrow. The data we see in this survey will evolve as AI becomes more central to enterprise operations, and as agentic AI moves from emerging technology to mainstream deployment. So, treat this data as a starting point for an honest internal conversation about AI governance.
- Know where you stand: At Counselect, we have built a simple, free AI Risk Maturity Assessment for organisations using AI. If you’re exploring similar questions internally, this can serve as a useful starting point to ground those conversations. The NASSCOM Responsible AI Resource Kit also includes a self-assessment tool worth exploring.
- Continue the conversation: If these themes resonate with what your organisation is navigating, we’re always keen to exchange perspectives on how AI governance is evolving in practice. Feel free to reach out to our team at [email protected].