
AI Processing Under the DPDP Framework
Author: Managing Partner, Worivo Advisors
Consent Requirements
The DPDP Act mandates that consent must be free, specific, informed, unconditional, and unambiguous. For AI applications, this creates several practical challenges.
Organizations must clearly communicate how AI systems will process personal data. Generic consent notices stating “we use AI to improve services” fail to meet the statutory standard. Instead, consent mechanisms should specify the type of AI processing (such as automated decision-making, predictive analytics, or personalization algorithms), the categories of personal data involved, and the specific purposes for which the data will be processed.
When AI systems evolve or new use cases emerge, fresh consent may be required. This presents particular challenges for machine learning systems that discover new insights or patterns beyond originally contemplated purposes.
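The discipline of tying consent to named processing types and purposes, and of flagging when an evolved use case falls outside the original grant, can be modeled as a structured consent record. The following is a minimal illustrative sketch; the field names and the `covers` check are assumptions for illustration, not statutory terms:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Illustrative record tying consent to specific AI processing."""
    data_principal_id: str
    processing_types: set   # e.g. {"automated_decision_making"}
    data_categories: set    # e.g. {"transaction_history"}
    purposes: set           # e.g. {"credit_scoring"}
    obtained_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn: bool = False

    def covers(self, processing_type: str, purpose: str) -> bool:
        """A proposed use is covered only if both the processing type and
        the purpose were part of the original grant and consent stands."""
        return (not self.withdrawn
                and processing_type in self.processing_types
                and purpose in self.purposes)

consent = ConsentRecord(
    data_principal_id="dp-001",
    processing_types={"automated_decision_making"},
    data_categories={"transaction_history"},
    purposes={"credit_scoring"},
)
print(consent.covers("automated_decision_making", "credit_scoring"))  # True
# An evolved use case outside the original grant signals that fresh
# consent is needed before processing proceeds.
print(consent.covers("predictive_analytics", "marketing"))  # False
```

A record like this also gives the organization an auditable basis for answering the question the Act implicitly poses: was this specific processing, for this specific purpose, actually consented to?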
Legitimate Uses and Exemptions
The Act permits certain processing activities without consent under specific legitimate uses. For AI applications, relevant exemptions include processing for compliance with legal obligations, processing necessary for medical emergencies or public health, and processing for employment purposes subject to specified safeguards.
However, organizations should exercise caution in relying on exemptions. The burden of demonstrating that processing falls within a legitimate use category rests with the Data Fiduciary, and exemptions should be interpreted narrowly.
Purpose Limitation and Data Minimization
AI systems often thrive on large, diverse datasets, creating tension with the DPDP Act’s purpose limitation and data minimization principles. Data Fiduciaries may only process personal data for lawful purposes for which consent was obtained, and must erase personal data when it is no longer necessary for the specified purpose.
For AI applications, organizations should implement purpose-specific data collection strategies, avoid repurposing training data without fresh consent, establish clear data retention schedules aligned with business and legal requirements, and implement technical measures to anonymize or pseudonymize data where full identification is unnecessary.
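One of the technical measures mentioned above, pseudonymization, can be sketched with a keyed hash over direct identifiers before records enter a training pipeline. The key name and record fields below are illustrative assumptions; in practice the key would sit in a key-management system held separately from the pseudonymized dataset:

```python
import hashlib
import hmac

# Hypothetical key for illustration only; store and rotate via a KMS,
# separately from the pseudonymized data, in any real deployment.
PSEUDONYMIZATION_KEY = b"replace-with-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, deterministic token.
    Deterministic, so records for the same person still link up for
    model training, but reversal requires access to the key."""
    return hmac.new(PSEUDONYMIZATION_KEY, identifier.encode(),
                    hashlib.sha256).hexdigest()

record = {"customer_id": "CUST-1042", "age_band": "30-39", "txn_count": 17}
training_record = {**record,
                   "customer_id": pseudonymize(record["customer_id"])}
```

Note that keyed pseudonymization is not anonymization: the data remains personal data under the Act so long as the key exists, which is why key custody and erasure of the key matter as much as the transformation itself.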
Automated Decision-Making and Profiling
While the DPDP Act does not explicitly address automated decision-making with the same granularity as GDPR’s Article 22, several provisions bear directly on AI-driven decisions.
Transparency Obligations
Data Fiduciaries must provide clear information about processing activities. When AI systems make decisions affecting individuals (such as credit scoring, recruitment screening, insurance underwriting, or content moderation), organizations should disclose the use of automated processing, the logic involved in decision-making to the extent feasible without revealing proprietary algorithms, and the significance and envisaged consequences of such processing.
Accuracy and Quality
The Act requires Data Fiduciaries to ensure the completeness, accuracy, and consistency of personal data. For AI systems, this obligation extends beyond data collection to ongoing monitoring and validation.
Organizations should implement data quality checks before feeding information into AI models, establish mechanisms to correct inaccurate data that may lead to flawed predictions or decisions, and conduct regular audits of AI outputs to identify systematic errors or biases.
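The first of those measures, data quality checks before records reach a model, can be as simple as a gate that rejects incomplete or implausible records. A minimal sketch, with field names and ranges chosen purely for illustration:

```python
def validate_record(record: dict, required: set, allowed_ranges: dict) -> list:
    """Return a list of quality issues for one record; empty means clean.
    Records with issues should be corrected or quarantined, not trained on."""
    issues = []
    for field in required:
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    for field, (lo, hi) in allowed_ranges.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            issues.append(f"out_of_range:{field}")
    return issues

issues = validate_record(
    {"age": 310, "income": 50000},
    required={"age", "income", "pin_code"},
    allowed_ranges={"age": (18, 100)},
)
# issues flags the missing pin_code and the implausible age of 310
```

Logging which records were rejected, and why, also supports the Act's accuracy obligation by showing that inaccurate data was identified before it could drive a decision.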
Special Category Data and Sensitive AI Applications
While the DPDP Act does not create explicit “special categories” of personal data like GDPR, the Act empowers the government to notify sensitive personal data categories requiring enhanced protection.
AI applications in healthcare, financial services, and human resources frequently process potentially sensitive information. Organizations should anticipate stricter requirements for such data, implement enhanced security measures for AI systems processing health data, financial information, or biometric identifiers, and conduct privacy impact assessments even before formal designation as Significant Data Fiduciaries.
Healthcare AI and Biometric Data
India’s healthcare sector is rapidly deploying AI systems that process sensitive personal and biometric data. Leading pharmaceutical companies like Sun Pharma and Dr. Reddy’s Laboratories are using AI to address diseases with high national burdens such as tuberculosis and diabetes. The Ayushman Bharat Digital Mission has significantly accelerated AI-based health innovations in India, with AI contributing to disease prediction, personalized treatment plans, and medical imaging analysis.
Healthcare providers are linking health records with the Aadhaar biometric identity system, which has been documented as suffering multiple privacy and security breaches. Telemedicine platforms and AI-driven diagnostics are processing vast amounts of patient data, including biometric identifiers for patient identification and secure data management. Organizations like Forus Health have integrated AI capabilities into portable screening devices like 3nethra, which enables operators to obtain AI-powered insights at eye check-up camps in remote areas.
With the Data Protection Board of India established on November 13, 2025, and full compliance requirements taking effect by May 13, 2027, healthcare AI providers must prepare for stringent privacy requirements around patient data, biometric information, and algorithmic decision-making affecting medical care.
Biometric Attendance and HR Systems
India’s biometrics market is experiencing significant growth, with Aadhaar card holders surpassing 1.3 billion as of 2024, making it the largest biometric database in the world. Biometric systems are widely used in Indian workplaces for attendance management, security, and access control through fingerprint scanning and facial recognition.
The DPDP Rules 2025 prohibit excessive retention of biometric data, typically limiting storage to the duration of employment plus a reasonable period for disputes, with potential fines up to ₹250 crore for privacy breaches. Government agencies including the Delhi Education Department, Transport Department, National Medical Commission, and Bar Council of India have deployed facial recognition technologies and biometric scans to record attendance at public offices, hospitals, and schools without adequate safeguards.
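A retention rule of "duration of employment plus a reasonable period for disputes" lends itself to an automated deletion check. The dispute window below is an illustrative assumption, not a figure from the Rules; the actual period should be set by counsel:

```python
from datetime import date, timedelta

# Hypothetical dispute window for illustration; the "reasonable period"
# is not fixed here by statute and should be set as a matter of policy.
DISPUTE_WINDOW = timedelta(days=365)

def biometric_deletion_due(employment_end: date, today: date) -> bool:
    """True once retaining the former employee's biometric data would
    exceed the employment duration plus the dispute window."""
    return today > employment_end + DISPUTE_WINDOW

# Employment ended 1 Jan 2023: deletion is overdue by 1 Jan 2025.
print(biometric_deletion_due(date(2023, 1, 1), date(2025, 1, 1)))   # True
# Employment ended 1 Dec 2024: still within the dispute window.
print(biometric_deletion_due(date(2024, 12, 1), date(2025, 1, 1)))  # False
```

Running such a check on a schedule, and recording each deletion, gives the organization evidence of compliance with the prohibition on excessive retention.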
Privacy concerns are significant. The Aadhaar Act of 2016 prohibits unauthorized use, sharing, or storage of Aadhaar biometric data, and private employers cannot require Aadhaar biometrics for employment verification, attendance, or payroll practices. Employees must provide free and informed consent for biometric collection and have the right to withdraw consent and demand deletion of their data.
Organizations deploying biometric attendance systems must address algorithmic bias concerns, particularly as facial recognition systems tend to be error-prone and show higher error rates for certain demographics, including women and people with darker skin.
Financial Services and AI Credit Scoring
India’s fintech sector extensively uses AI for credit scoring and lending decisions, processing sensitive financial information. Indian fintech startups like mPokket use AI to process thousands of demographic, social, behavioral, financial, and transactional data points to assess creditworthiness, while platforms like Paisadukan use AI for credit assessments for rural users who may not have sufficient documents for traditional credit scoring.
FinTech start-ups in India use AI-driven scoring to assess borrowers with no traditional credit history, relying on mobile payment and telecom data. Companies like KreditBee provide real-time AI decisioning for digital loans. However, these systems raise significant privacy concerns as they access extensive personal data including phone contacts, social media connections, transaction histories, and behavioral patterns.
Research has documented that some lenders in India’s alternative credit market have accessed contact lists and engaged in controversial debt recovery practices. Using alternative data requires strict compliance with laws like India’s DPDP Act, presenting compliance challenges for the rapidly growing digital lending sector.
Cross-Border Data Transfers and AI Infrastructure
The DPDP Act permits cross-border transfers of personal data to countries and territories notified by the Central Government. This provision has significant implications for AI systems relying on global cloud infrastructure or offshore development.
Compliance Strategies
Organizations should map data flows in AI processing pipelines to identify cross-border transfers, monitor notifications regarding approved jurisdictions for data transfers, and implement contractual safeguards with Data Processors in any jurisdiction.
For AI training and development, organizations should consider whether training data can be anonymized before cross-border transfer, whether model training can occur within India using domestic compute resources, and the implications of transfer restrictions on vendor selection and technology architecture.
Rights of Data Principals
The DPDP Act grants individuals specific rights regarding their personal data. AI applications must accommodate these rights through technical and organizational measures.
Right to Access and Correction
Data Principals may request access to their personal data and seek correction of inaccurate information. For AI systems, organizations must establish processes to retrieve individual data from training sets and production systems, enable corrections that propagate through the AI pipeline, and provide information about how personal data influenced automated decisions.
Right to Erasure
Individuals may request erasure of their personal data in specified circumstances. This “right to be forgotten” creates particular challenges for machine learning models where individual data points have been incorporated into model parameters.
Organizations should consider implementing machine unlearning techniques where feasible, maintaining the ability to exclude individuals from future processing even if historical model training cannot be fully reversed, and documenting technical limitations on data erasure in AI contexts.
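The second of those measures, excluding erased individuals from all future processing even where historical training cannot be unwound, can be sketched as a filter applied whenever a training set is assembled. The record structure is an illustrative assumption:

```python
# Data Principals who have exercised the right to erasure (illustrative IDs).
erasure_requests = {"dp-007", "dp-113"}

def filter_training_data(records: list, erased_ids: set) -> list:
    """Exclude erased individuals from any future training run. Note the
    limitation: this does not unwind parameters already learned from
    their data; full retraining or machine unlearning is needed for that."""
    return [r for r in records if r["data_principal_id"] not in erased_ids]

records = [
    {"data_principal_id": "dp-001", "features": [0.2, 0.7]},
    {"data_principal_id": "dp-007", "features": [0.9, 0.1]},
]
clean = filter_training_data(records, erasure_requests)
# Only dp-001 remains in the assembled training set.
```

Documenting exactly this limitation, that exclusion is prospective while model parameters are retrospective, is what the preceding paragraph means by recording technical limits on erasure in AI contexts.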
Right to Nominate and Grievance Redressal
The Act permits Data Principals to nominate another individual to exercise their rights in the event of death or incapacity, and requires Data Fiduciaries to establish mechanisms for addressing grievances. For AI-driven services, grievance redressal should include dedicated channels for challenging automated decisions, human review processes for consequential AI decisions, and clear escalation procedures when AI systems produce disputed outcomes.
Significant Data Fiduciary Obligations
Organizations meeting criteria established by the Central Government will be designated as Significant Data Fiduciaries, subject to enhanced obligations.
Data Protection Impact Assessments
Significant Data Fiduciaries must conduct periodic data protection impact assessments (DPIAs). For AI systems, effective DPIAs should systematically identify risks to individual rights and freedoms, evaluate the necessity and proportionality of processing, assess potential discriminatory impacts or biases in AI outputs, and document mitigation measures and safeguards.
Data Protection Officers
Significant Data Fiduciaries must appoint a Data Protection Officer based in India. The DPO should possess expertise in AI technologies and their privacy implications, maintain independence from business units deploying AI, and serve as the point of contact for individuals and the Data Protection Board.
Independent Audits
Regular independent audits of processing activities will be required. For AI systems, audit scope should encompass data governance in AI development and deployment, technical and organizational security measures, compliance with consent and purpose limitation requirements, and the effectiveness of rights fulfillment mechanisms.
Security and Breach Notification
The DPDP Act requires Data Fiduciaries to implement reasonable security safeguards and notify the Data Protection Board and affected individuals of personal data breaches.
AI-Specific Security Considerations
Organizations should implement adversarial testing to identify vulnerabilities in AI models, access controls to prevent unauthorized manipulation of training data or model parameters, monitoring systems to detect data poisoning or model inversion attacks, and encryption for data at rest and in transit throughout the AI pipeline.
Breach Response
When breaches affect AI systems, organizations must assess whether personal data has been compromised through training data exposure, model outputs revealing training data, or unauthorized access to prediction or profiling data. Response protocols should address the unique characteristics of AI-related breaches, including potential ongoing exposure through deployed models.
Children’s Data and Age Verification
The DPDP Act prohibits processing children’s personal data (individuals under 18) except with verifiable parental consent and prohibits tracking, behavioral monitoring, or targeted advertising directed at children.
These provisions significantly impact AI applications targeting young users or processing data from mixed-age populations.
Compliance Measures
Organizations should implement robust age verification mechanisms that balance privacy and accuracy, design AI systems to avoid profiling or tracking children, establish separate consent workflows for services accessible to minors, and conduct regular reviews to ensure AI systems do not inadvertently process children’s data.
Penalties and Enforcement
The DPDP Act establishes a penalty framework with fines up to ₹250 crore for violations. The Data Protection Board of India will adjudicate complaints and impose penalties.
Risk Assessment
Organizations deploying AI should prioritize compliance based on processing large volumes of personal data or qualifying as Significant Data Fiduciaries, operating in sectors likely to face heightened scrutiny such as technology, financial services, and healthcare, and deploying AI systems making consequential decisions about individuals.
Building a Compliance Framework
General Counsels should lead the development of comprehensive AI governance frameworks incorporating DPDP Act requirements.
Governance Structure
Establish cross-functional AI governance committees including legal, compliance, technology, and business representatives. Define clear roles and responsibilities for AI development, deployment, and monitoring. Create escalation procedures for high-risk AI applications or novel use cases.
Privacy by Design
Integrate privacy considerations into the AI development lifecycle from conception through deployment. Conduct privacy impact assessments before deploying new AI systems. Implement technical privacy-enhancing technologies such as differential privacy, federated learning, or secure multi-party computation where appropriate.
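Of the privacy-enhancing technologies mentioned, differential privacy is the most readily illustrated. The sketch below releases a count with Laplace noise calibrated to a sensitivity of 1 (one person can change the count by at most one); the epsilon value is an illustrative choice, and production systems would use a vetted library rather than hand-rolled noise:

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count under the Laplace mechanism (sensitivity 1).
    Smaller epsilon means stronger privacy and a noisier answer."""
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# e.g. publishing how many users triggered a fraud flag without letting
# an observer infer any single individual's presence in the count.
noisy = dp_count(true_count=100, epsilon=1.0)
```

The design trade-off is explicit: epsilon is a tunable privacy budget, so the same mechanism documents, in one parameter, how much individual-level information a released statistic can leak.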
Vendor Management
The increasing reliance on third-party AI solutions requires robust vendor due diligence. Organizations should assess vendors’ DPDP Act compliance capabilities and commitments, establish clear data processing agreements defining responsibilities and limitations, and monitor vendor compliance through audits and reporting requirements.
Training and Awareness
Compliance depends on organizational understanding. Develop training programs for data scientists and AI developers on privacy requirements, educate business stakeholders on consent, purpose limitation, and data minimization principles, and conduct regular awareness campaigns highlighting DPDP Act obligations and individual rights.
Documentation and Accountability
Maintain comprehensive records of processing activities involving AI systems, document consent mechanisms, legitimate use assessments, and privacy impact assessments, and create audit trails for data access, model training, and automated decisions.
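The audit-trail requirement for automated decisions can be made tamper-evident by chaining each entry to the hash of the previous one, so a retroactive edit breaks the chain. A minimal sketch; the entry fields are illustrative assumptions about what a decision log might capture:

```python
import hashlib
import json
from datetime import datetime, timezone

audit_log = []

def log_decision(subject_id, model_version, decision, inputs_summary):
    """Append a tamper-evident audit entry: each entry embeds the hash
    of its predecessor, so altering history invalidates later hashes."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "model_version": model_version,
        "decision": decision,
        "inputs_summary": inputs_summary,
        "prev_hash": prev_hash,
    }
    # Hash the entry's canonical JSON form (before the hash field exists).
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry

log_decision("dp-001", "model-v3", "loan_denied", {"score": 412})
log_decision("dp-002", "model-v3", "loan_approved", {"score": 731})
```

A log of this shape serves double duty: it evidences accountability to the Data Protection Board and supplies the record an individual would need to challenge an automated decision through the grievance channel.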
Emerging Considerations
As regulatory frameworks evolve globally, organizations should monitor several emerging areas.
Algorithmic Transparency and Explainability
While not explicitly required under the current DPDP Act, increasing regulatory and social expectations around AI transparency suggest organizations should proactively develop capabilities to explain AI decision-making processes, particularly for consequential decisions.
Bias and Fairness
Discriminatory outcomes from AI systems may implicate constitutional protections and sectoral regulations beyond the DPDP Act. Organizations should implement fairness testing and bias mitigation throughout the AI lifecycle, establish diverse teams for AI development and oversight, and create mechanisms for identifying and addressing discriminatory impacts.
Regulatory Convergence
India’s approach to AI regulation may evolve beyond data protection to encompass algorithmic accountability, safety, and ethics. Organizations should track developments in AI-specific regulation at national and state levels, engage with regulatory consultations and industry working groups, and adopt international best practices where they exceed minimum DPDP Act requirements.
Practical Recommendations for General Counsels
As organizations navigate the intersection of AI and data privacy, general counsels should prioritize several strategic actions.
First, conduct comprehensive audits of current AI systems to map personal data processing, assess DPDP Act compliance gaps, and prioritize remediation efforts based on risk. Second, develop AI-specific policies and procedures addressing consent management, purpose limitation, data minimization, security safeguards, and rights fulfillment. Third, invest in privacy-enhancing technologies and architectural solutions that enable compliance while preserving AI functionality.
Fourth, establish clear approval processes for new AI initiatives requiring privacy impact assessments before deployment, defining criteria for legal review and escalation, and implementing go/no-go gates based on compliance readiness. Fifth, engage proactively with regulators to seek clarity on ambiguous requirements, participate in industry consultations on AI governance, and demonstrate good faith compliance efforts.
Finally, prepare for evolving requirements by monitoring regulatory developments in India and globally, building flexible compliance frameworks that can adapt to new obligations, and fostering a culture of responsible AI development that views privacy as integral to innovation rather than a constraint upon it.
Conclusion
The DPDP Act establishes India’s foundational framework for data protection in an AI-driven economy. While the Act’s principles-based approach provides flexibility, it also requires organizations to make thoughtful judgments about compliance in novel technological contexts.
For general counsels, success requires moving beyond checkbox compliance to embed privacy considerations into AI strategy, governance, and operations. By building robust frameworks that address consent, purpose limitation, security, transparency, and individual rights, organizations can harness AI’s transformative potential while respecting the privacy interests that the DPDP Act seeks to protect.
The regulatory landscape will undoubtedly evolve as the Data Protection Board issues guidance, rules are notified, and enforcement actions clarify obligations. Organizations that invest now in compliance infrastructure, governance capabilities, and a culture of responsible AI development will be best positioned to navigate these changes while maintaining competitive advantage in India’s dynamic digital economy.
This article provides general information and analysis. It does not constitute legal advice. Organizations should consult with qualified legal counsel regarding specific compliance obligations under the DPDP Act and related regulations.