Akzamol K. Ani is a law student at Kristu Jayanti College of Law with a keen interest in legal research, contemporary judicial developments, and technology, and is passionate about academic writing and contributing meaningful insights to the legal community.
ABSTRACT
Artificial Intelligence (AI) and Machine Learning (ML) have rapidly moved from experimental ideas to everyday tools, changing the way we work, communicate, and make decisions. Generative AI, predictive analytics, and automated systems are now used in sectors such as healthcare, finance, education, and governance. While these technologies bring enormous opportunities, they also raise serious questions about privacy, ownership, fairness, and accountability—issues that our legal and regulatory systems are struggling to keep up with. This paper explores these challenges by focusing on three key areas: data privacy, intellectual property (IP), and ethics.
First, it looks at the conflict between AI’s reliance on massive amounts of data and the need to protect individual privacy. Since AI often depends on sensitive personal data that flows across borders, compliance with laws like the EU’s General Data Protection Regulation (GDPR), India’s Digital Personal Data Protection Act, and other global frameworks has become a pressing concern. Second, the paper examines the unsettled debate over intellectual property rights for AI-generated content. Who owns a piece of art, music, or text created by an algorithm? Should the credit go to the programmer, the user, or the AI system itself? Finding a balance between rewarding innovation and avoiding unfair monopolies is a central challenge. Third, the ethical concerns surrounding AI are explored—especially issues of bias, transparency, and accountability. Flaws in training data or opaque algorithms can lead to discriminatory outcomes, threatening fundamental rights and fairness in society.
The discussion also compares global approaches to AI regulation, including the EU’s proposed AI Act, the United States’ sector-based methods, and India’s evolving legal framework. By bringing these perspectives together, the paper highlights both the risks of fragmented regulation and the opportunities for global cooperation. Ultimately, it argues that what is needed is a layered legal framework—one that protects privacy, promotes fair ownership, and builds ethical safeguards, while still leaving space for innovation. Such a framework would ensure that AI and ML grow in ways that are responsible, inclusive, and beneficial for society as a whole.
1. INTRODUCTION
Artificial Intelligence (AI) and Machine Learning (ML) have transcended the boundaries of technological innovation to become central elements of modern human society. What once belonged to the realm of speculative fiction now shapes the foundations of business, governance, education, and everyday life. From facial recognition systems and predictive healthcare diagnostics to algorithmic trading and generative text tools, AI has become the backbone of twenty-first-century progress (Bryson, 2019). Yet, this advancement is not without its ethical and legal costs. As algorithms evolve, learn, and make autonomous decisions, societies face pressing questions about accountability, fairness, and control—issues that are now central to legal and moral discourse.
The transformative capacity of AI and ML lies in their ability to replicate human cognitive functions such as reasoning, perception, and problem-solving. Unlike traditional software, which follows fixed programming, AI systems adapt dynamically to data inputs and evolve through pattern recognition. This growing autonomy blurs the boundary between human and machine decision-making, challenging long-standing legal doctrines related to liability, authorship, and responsibility (Wachter, Mittelstadt, & Floridi, 2017). When an AI system causes harm, identifying who bears accountability—the programmer, the deploying corporation, or the algorithm itself—becomes a complex legal question. These challenges urge policymakers and legal theorists to reimagine frameworks capable of addressing the implications of technological agency while upholding justice and social trust.
Within corporate environments, AI and ML have revolutionized operations. Businesses now rely on predictive analytics for consumer insights, algorithmic systems for human resource management, and automated models for financial decision-making. While these tools drive efficiency and innovation, they also amplify risks of bias and error. Cases such as gender-biased recruitment algorithms or AI-driven market volatility highlight how unintended consequences can result in legal exposure and reputational harm for corporations. Thus, the question of corporate liability for AI-related actions has become increasingly urgent.
At the core of these challenges lies data—the lifeblood of AI. Massive data collection and processing have granted corporations immense informational power, raising concerns over privacy, consent, and surveillance. Laws such as the General Data Protection Regulation (GDPR) in the European Union and India’s Digital Personal Data Protection Act, 2023 aim to address these risks (Gasser & Almeida, 2017; Kuner, 2020). However, uneven enforcement and the cross-border nature of data complicate their efficacy. As AI continues to evolve beyond human predictability, the law must evolve alongside it—balancing innovation with ethics, accountability, and human dignity.
2. UNDERSTANDING CORPORATE LIABILITY IN THE AGE OF AI
The concept of corporate liability has long been grounded in the principle that corporations, though artificial persons, act through natural individuals—its directors, employees, and agents. This doctrine allows the law to attribute wrongful acts and omissions of individuals to the corporate entity, ensuring accountability within collective enterprises. However, the emergence of Artificial Intelligence (AI) and Machine Learning (ML) systems introduces an unprecedented disruption to this framework. Unlike human agents, algorithms can act autonomously, “learning” from data without direct human input or oversight. Consequently, traditional legal notions of fault, intention, and foreseeability are increasingly inadequate to address harms arising from AI operations.
2.1 The Shift from Human to Algorithmic Agency
Under conventional legal theory, liability depends on mens rea—a guilty mind—or the capacity to form intent. AI systems, however, lack consciousness, moral awareness, or subjective intention. Their “decisions” are the outcome of statistical correlations and pattern recognition rather than rational choice (Russell & Norvig, 2021). Corporations that deploy AI-driven systems often defend themselves by claiming that algorithmic errors are unintentional or unforeseeable, thereby seeking to evade liability. Yet, courts and scholars increasingly argue that delegation of decision-making to AI does not absolve corporations of responsibility. The rationale is that AI functions as an extension of corporate activity—its tools and processes remain under the company’s ownership and control. In Hollis v. Vabu Pty Ltd (2001) 207 CLR 21 (HCA), though unrelated to AI, the High Court of Australia reaffirmed the doctrine of vicarious liability, holding that companies cannot outsource their legal duties merely by delegating to contractors or intermediaries. By analogy, corporations cannot delegate accountability to algorithms.
2.2 Attribution of Fault in AI Operations
Attributing fault when Artificial Intelligence (AI) systems cause harm presents one of the most complex challenges in modern legal discourse. Determining liability requires identifying the connection between human input and machine output—a task complicated by the autonomous nature of AI decision-making. Scholars and regulators have proposed several models to allocate responsibility within this evolving framework. One of the most discussed is developer liability, which holds that the creators or trainers of AI systems should be accountable when harm results from flawed coding, inadequate testing, or biased datasets. If the architecture of an algorithm embeds discrimination or malfunction, the developer may be liable under principles of product liability or negligence, since they possess the expertise and control to prevent foreseeable risks.
A second approach emphasizes corporate or deploying entity liability, recognizing that corporations using AI for commercial purposes have a heightened duty of care. These entities are expected to ensure that their systems function safely, ethically, and in compliance with laws on data protection and anti-discrimination. Since corporations often profit directly from AI applications, they are viewed as responsible for verifying algorithmic reliability and maintaining transparent oversight of automated processes.
Finally, some scholars advocate for a shared or composite liability model, which distributes responsibility among developers, deployers, and users according to their degree of control and the foreseeability of harm. This approach aligns closely with the European Commission’s Artificial Intelligence Act (2021), which distinguishes between “providers” who develop AI systems and “users” who deploy them. Despite these theoretical models, real-world cases demonstrate persistent ambiguity. The 2018 Uber self-driving car fatality in Arizona exemplifies this uncertainty: although the vehicle’s autonomous system made the fatal error, both the human safety driver and Uber faced public and legal scrutiny. The case revealed that while AI systems may operate autonomously, their deployment remains an act of human and corporate choice—anchoring responsibility ultimately within human decision-making and organizational accountability (European Commission, 2021).
2.3 Product Liability and Negligence Principles
The adaptation of product liability laws to AI represents another emerging trend. Under traditional frameworks, manufacturers are liable for defective products that cause harm. When AI is embedded in a product—such as an autonomous vehicle or medical device—the line between product and process blurs. Scholars argue that AI-driven products should be treated as “dynamic goods,” whose evolving nature requires continuous monitoring.
In the European Parliament’s Resolution on Civil Liability for AI (2020/2014(INL)), it was proposed that high-risk AI systems should be subject to strict liability, regardless of fault, to ensure victim compensation. Similarly, under Indian tort law, negligence is established when there is a breach of a duty of care resulting in foreseeable harm (Donoghue v Stevenson [1932] AC 562). Applied to AI, a corporation’s failure to supervise or test algorithms adequately could amount to negligence, even without intent.
2.4 Need for a New Liability Paradigm
The rapid integration of Artificial Intelligence (AI) into corporate operations has exposed the limitations of traditional liability doctrines. As AI systems increasingly make autonomous decisions, the law must evolve to ensure accountability remains grounded in human and institutional responsibility. Rather than discarding established legal principles, the focus should be on adapting them to modern technological realities. A promising approach lies in the adoption of a “risk-based responsibility” model, which allocates liability according to the degree of human control, foreseeability, and potential for harm (Gasser & Almeida, 2017). This framework acknowledges that responsibility cannot rest with a single actor; instead, it must be distributed across the AI lifecycle—from design and development to deployment and use.
Within this model, developers bear the primary duty of ensuring algorithmic integrity, transparency, and data accuracy, as flaws in these early stages often lead to downstream harm. Corporations, as deploying entities, hold responsibility for oversight, ethical supervision, and compliance with relevant laws, ensuring that AI systems are used in ways that align with societal values and regulatory standards. Users, though more limited in control, also share a duty to engage with AI tools ethically and lawfully. This layered framework embodies a balance between innovation and accountability, ensuring that technological advancement does not erode the moral foundations of responsibility. Ultimately, the law must reaffirm a crucial principle: machines act through humans, not independently of them. Preserving this truth will safeguard justice as AI becomes an inseparable part of human enterprise, ensuring that progress remains a servant of ethical and legal order rather than its master.

3. DATA PRIVACY AND PROTECTION CONCERNS
In the digital era, data is the currency of Artificial Intelligence (AI) and Machine Learning (ML). Algorithms learn, predict, and evolve by analysing vast volumes of personal and behavioural data, often collected without explicit user awareness. This dependence on data creates a fundamental tension between technological innovation and the right to privacy. While corporations argue that large datasets are essential for AI accuracy, the unregulated extraction and processing of personal information have given rise to surveillance capitalism and serious ethical concerns (Pasquale, 2020).
3.1 The Privacy Paradox in AI
AI thrives on access to comprehensive datasets—ranging from biometric details to location histories. Yet, such data often includes sensitive or personally identifiable information, making it subject to privacy laws. The privacy paradox lies in the contradiction between individuals’ desire for privacy and their dependence on AI-driven services that demand personal data. Corporate entities, leveraging AI, often justify mass data collection as “user consent,” though this consent is typically buried in opaque terms and conditions. Consequently, users become both data providers and data victims.
The Cambridge Analytica scandal epitomises this conflict. Personal data from millions of Facebook users were harvested to influence political outcomes through algorithmic profiling (Cadwalladr & Graham-Harrison, 2019). This incident revealed that AI-powered analytics can be weaponised, turning personal information into tools of manipulation. It also demonstrated the corporate failure to ensure data accountability, reinforcing the need for stringent privacy safeguards.
3.2 Legal Frameworks Governing Data Protection
Globally, data protection regimes have evolved to confront these challenges. The European Union’s General Data Protection Regulation (GDPR) remains the most comprehensive framework, premised on principles of consent, transparency, and accountability. It introduces concepts such as Data Protection Impact Assessments (DPIAs) and the “right to be forgotten”, ensuring individuals retain control over their data. Non-compliance attracts heavy penalties, as seen in the €1.2 billion fine imposed on Meta Platforms in 2023 for unlawful data transfers to the United States (European Data Protection Board, 2023).
In contrast, the United States follows a sectoral approach, with fragmented regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the California Consumer Privacy Act (CCPA). While flexible, this model often leaves gaps in protection, especially for AI systems operating across sectors. The absence of a unified federal privacy law results in inconsistent standards of corporate liability.
India’s Digital Personal Data Protection Act (DPDPA), 2023, represents an emerging effort to balance innovation and individual rights. It mandates lawful processing based on consent, data minimisation, and purpose limitation. The Act also introduces the Data Protection Board of India, responsible for grievance redressal and compliance oversight. However, critics argue that broad exemptions for government and corporate entities dilute its effectiveness. Given that AI companies in India increasingly rely on global data flows, questions of cross-border transfer and jurisdictional overlap remain unresolved.
3.3 Corporate Accountability and Data Ethics
Corporate liability for data misuse extends beyond legal compliance—it encompasses ethical stewardship. Companies deploying AI must ensure that their data practices align with principles of fairness, transparency, and informed consent (Wachter, Mittelstadt, & Floridi, 2017). This includes anonymising personal information, conducting regular audits, and disclosing algorithmic decision-making criteria. Failure to do so may not only invite regulatory sanctions but also erode public trust—an asset as valuable as data itself.
The OECD AI Principles (2019) and UNESCO’s Recommendation on the Ethics of AI (2021) both emphasise that privacy protection should be embedded “by design” in AI systems. This means privacy considerations must shape system architecture from the outset, rather than being added as a compliance formality. Corporations bear a duty of foresight, ensuring that their technologies respect human dignity even in the absence of explicit regulation.
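As a concrete illustration of the “by design” idea, the sketch below (in Python, with hypothetical field names and a simplified salt-handling scheme) pseudonymises a direct identifier with a keyed hash and strips fields that are not needed for the stated processing purpose before a record enters an AI training pipeline. It is a minimal sketch of the principle, not a compliance recipe; a production system would add key management, access controls, and documented retention periods.

```python
# Illustrative sketch only: pseudonymisation and data minimisation applied
# before records enter an AI training pipeline. Field names are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"rotate-and-store-this-in-a-key-vault"  # assumption: a managed secret

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def minimise(record: dict, allowed_fields: set) -> dict:
    """Keep only the fields needed for the stated processing purpose."""
    return {k: v for k, v in record.items() if k in allowed_fields}

raw_record = {
    "email": "user@example.com",
    "age_band": "25-34",
    "purchase_category": "books",
    "precise_location": "12.9716,77.5946",  # not needed for the stated purpose
}

training_record = minimise(raw_record, allowed_fields={"age_band", "purchase_category"})
training_record["user_ref"] = pseudonymise(raw_record["email"])
print(training_record)
```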
4. INTELLECTUAL PROPERTY RIGHTS AND AI-GENERATED WORKS
The relationship between Artificial Intelligence (AI) and Intellectual Property (IP) is one of the most debated frontiers in contemporary law. AI’s ability to generate artistic, literary, and inventive works has challenged the very foundations of copyright, patent, and ownership doctrines that have historically assumed a human creator. As corporations increasingly deploy AI systems capable of producing original content—from paintings and software to product designs—questions arise about who should own the outputs and who bears legal responsibility for infringement or misuse (Samuelson, 2020). This uncertainty has significant implications for both innovation and accountability.
4.1 Authorship and Ownership Dilemmas
Traditional IP law is built on the principle of human authorship. Copyright protection, for instance, is granted to the “author” who creates an original work, while patent law rewards human inventors for novel and useful inventions. However, AI-generated content complicates this framework because it often lacks direct human creative input. Systems like OpenAI’s GPT or DALL-E, for example, can autonomously compose text or art without continuous human supervision.
This leads to the central legal question: Can an AI be an author or inventor? Most jurisdictions have answered in the negative. The United States Copyright Office (USCO), in Zarya of the Dawn (2023), denied copyright protection to AI-generated images, affirming that human creativity is a prerequisite for copyright ownership. Similarly, the UK Copyright, Designs and Patents Act 1988 attributes authorship of computer-generated works to the person “by whom the arrangements necessary for the creation of the work are undertaken.” This approach attempts to preserve human responsibility, even when machines are the proximate creators.
The “DABUS cases” further highlight the debate. Dr. Stephen Thaler sought to name his AI system, DABUS, as the inventor on patent applications in the UK, US, and EU. All three jurisdictions rejected the claim, reiterating that only natural persons can hold inventor status (Thaler v Comptroller General of Patents [2021] EWCA Civ 1374). These rulings confirm that, under current law, AI lacks legal personhood and cannot independently own or enforce IP rights (Gervais, 2021).
4.2 Corporate Interests and Control
While AI itself cannot hold rights, corporations that develop or deploy AI systems may claim ownership of AI-generated outputs through contractual or employment relationships. Most corporate frameworks already stipulate that intellectual property created “in the course of employment” belongs to the employer. When AI tools are used internally—such as by media, design, or pharmaceutical firms—ownership is often attributed to the corporation that owns the algorithm or dataset (Vincent, 2023).
However, this approach raises ethical and practical concerns. If AI systems autonomously generate outputs based on data obtained from external sources, corporations might inadvertently infringe existing copyrights or patents. The use of copyrighted materials for training AI models—such as the massive datasets scraped from the internet—has sparked global lawsuits. In Getty Images v Stability AI (2023), Getty alleged that Stability AI unlawfully used millions of copyrighted images to train its text-to-image model, thereby breaching copyright law. Such cases illustrate how corporate liability can arise indirectly through algorithmic conduct, even when no human actor intentionally commits infringement.
4.3 The Balance Between Innovation and Monopoly
From a policy perspective, the challenge lies in balancing innovation incentives with public access. Overly restrictive IP protection for AI-generated works could lead to monopolistic control by a few corporations possessing advanced algorithms and vast datasets (Reed, 2021). Conversely, denying protection altogether might discourage investment in AI creativity. Some scholars propose a “shared authorship model”, where partial credit is attributed to both human developers and AI systems’ deployers, ensuring that innovation remains collaborative and accountable (Ginsburg, 2022).
The World Intellectual Property Organization (WIPO) has initiated consultations on this issue, recognising that global harmonisation is essential. Inconsistent national interpretations risk creating “forum shopping” by corporations seeking jurisdictions with favourable IP rules. The European Union has considered limited sui generis rights for AI-generated outputs, though no binding regulation has yet emerged. India’s Copyright Act 1957, while silent on AI authorship, grants protection only to works “created by a person,” implying a continued reliance on human authorship (Menon, 2024).
4.4 Re-imagining IP in the Age of AI
The convergence of AI and IP marks a transformative moment in the philosophy of creativity. As AI systems blur the line between tool and creator, the law must evolve to protect human ingenuity while acknowledging technological contribution. A layered framework of accountability—where human oversight, corporate responsibility, and ethical data use coexist—offers a viable path forward. Ultimately, intellectual property should remain anchored in human values of authorship, integrity, and fairness, even as machines expand the boundaries of creativity (Bently, 2021).
5. ETHICAL AND ALGORITHMIC ACCOUNTABILITY
The ethical dimension of Artificial Intelligence (AI) and Machine Learning (ML) sits at the heart of the global debate about their responsible use. While legal frameworks attempt to regulate AI through data protection and liability rules, the ethical question is deeper—it asks whether the deployment of AI aligns with values of justice, fairness, and human dignity (Floridi & Cowls, 2019). Ethics functions as a moral compass that guides technology toward social benefit, preventing it from becoming a tool of inequality or exploitation.
5.1 Algorithmic Bias and Discrimination
AI systems are only as fair as the data that train them. When datasets reflect historical inequalities or cultural prejudices, the resulting algorithms tend to replicate and amplify these biases. This phenomenon, known as algorithmic bias, poses significant ethical and legal challenges. For example, in 2018, Amazon discontinued an AI recruitment tool after it was discovered that the system systematically discriminated against female applicants due to biased historical hiring data. Similarly, facial recognition technologies have demonstrated higher error rates when identifying women and people of colour, leading to wrongful arrests and civil rights violations (Buolamwini & Gebru, 2018).
Bias in AI is not merely a technical flaw; it is a reflection of social asymmetry embedded in data. When corporations deploy biased AI systems, they risk perpetuating systemic discrimination. This undermines not only ethical principles of equality but also legal obligations under anti-discrimination laws.
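To make the notion of a bias audit more concrete, the following sketch computes one commonly used fairness indicator, the disparate impact ratio (the selection rate of the protected group divided by that of the reference group), over a small set of hypothetical hiring decisions. The 0.8 threshold echoes the “four-fifths rule” familiar from US employment practice; the data, group labels, and threshold are illustrative assumptions, and a genuine audit would examine several metrics across many subgroups.

```python
# Illustrative bias check: disparate impact ratio on hypothetical hiring outcomes.
# Group labels, outcomes, and the 0.8 threshold (the "four-fifths rule") are for
# demonstration only; a real audit would use multiple metrics and real data.
from collections import defaultdict

decisions = [
    {"group": "female", "selected": False},
    {"group": "female", "selected": True},
    {"group": "female", "selected": False},
    {"group": "male", "selected": True},
    {"group": "male", "selected": True},
    {"group": "male", "selected": False},
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for d in decisions:
    counts[d["group"]]["total"] += 1
    counts[d["group"]]["selected"] += int(d["selected"])

rates = {g: c["selected"] / c["total"] for g, c in counts.items()}
ratio = rates["female"] / rates["male"]  # protected group vs reference group

print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: flag for human review and model retraining.")
```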
5.2 Corporate Duty of Care and Ethical Governance
Corporate entities deploying AI bear a duty of ethical stewardship. This duty extends beyond mere legal compliance; it encompasses foresight, responsibility, and fairness in the development and use of AI technologies. The OECD Principles on Artificial Intelligence (2019) and UNESCO’s Recommendation on the Ethics of AI (2021) emphasise that corporations must ensure AI systems are transparent, accountable, and aligned with human rights.
Ethical governance frameworks are emerging across the world. For instance, NITI Aayog’s “Responsible AI for All” (2021) in India outlines principles of safety, inclusivity, and trust. Similarly, multinational corporations such as Google and Microsoft have published internal “AI ethics guidelines,” committing to avoid harm, promote fairness, and ensure human oversight. Yet critics argue that self-regulation alone is insufficient. Without external enforcement, ethical promises risk becoming mere public relations exercises (Hao, 2020).
5.3 Ethics as a Pillar of Legal Reform
Ultimately, ethical accountability in AI cannot exist in isolation from the law. Ethics must inform legislation, ensuring that AI regulation reflects moral values rather than mere procedural compliance. The convergence of law and ethics will help redefine corporate responsibility in the age of automation. By embedding ethical standards—fairness, transparency, non-discrimination, and respect for privacy—into legal mandates, societies can foster innovation that is both sustainable and just (Binns, 2018).
As philosopher Hans Jonas observed, “Technology compels responsibility because it magnifies human power.” In the age of AI, this responsibility must rest squarely on corporate and societal shoulders. Ethical accountability is not an obstacle to progress; it is the foundation upon which trust, legitimacy, and human-centred technology are built.

6. COMPARATIVE ANALYSIS OF GLOBAL REGULATORY FRAMEWORKS
As Artificial Intelligence (AI) and Machine Learning (ML) technologies evolve, countries worldwide are racing to design regulatory frameworks that balance innovation, accountability, and human rights. The global legal landscape, however, remains fragmented, with differing approaches shaped by cultural values, political systems, and economic priorities. This section compares three key jurisdictions—the European Union (EU), the United States (US), and India—to understand how each navigates the tension between technological advancement and ethical governance.
6.1 The European Union: A Rights-Centric and Risk-Based Model
The European Union has emerged as the global leader in AI regulation through its Artificial Intelligence Act (AI Act), adopted by the European Parliament in 2024. The EU’s approach is rights-centric, grounded in the protection of fundamental freedoms under the EU Charter of Fundamental Rights, and risk-based, categorizing AI systems according to their potential to cause harm.
The AI Act classifies systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Systems that manipulate human behaviour, exploit vulnerabilities, or enable social scoring are prohibited outright, reflecting Europe’s precautionary philosophy. High-risk systems—used in sectors like healthcare, recruitment, or law enforcement—are subject to stringent obligations, including transparency, human oversight, and conformity assessments before deployment.
Complementing this, the General Data Protection Regulation (GDPR) remains central to AI governance, ensuring that automated data processing respects privacy and consent principles. The “right to explanation” under GDPR Articles 13–15, though limited, signifies an attempt to democratize algorithmic decision-making.
Ethically, the EU model integrates trustworthy AI principles established by the European Commission’s High-Level Expert Group on AI (2019)—fairness, transparency, accountability, and human oversight. By embedding ethics into law, the EU offers a comprehensive, human-rights-based framework. However, critics argue that excessive regulation may stifle smaller innovators who cannot afford the high cost of compliance, thereby consolidating power in large corporations.
6.2 The United States: A Market-Driven and Sectoral Approach
In contrast, the United States follows a market-driven, sectoral approach, relying largely on existing laws rather than a unified federal AI statute. Regulation in the US is dispersed across agencies—the Federal Trade Commission (FTC), Equal Employment Opportunity Commission (EEOC), and Department of Commerce—each addressing AI-related issues within its jurisdiction.
The FTC, for instance, applies its consumer protection mandate to challenge deceptive or unfair uses of AI, especially those involving biased algorithms or privacy breaches. Similarly, the Algorithmic Accountability Act, reintroduced in Congress in 2022, seeks to mandate impact assessments for high-risk AI systems, though it remains pending.
The US approach prioritizes innovation and competitiveness, reflecting its Silicon Valley ethos. Ethical guidance, rather than legal mandates, often directs corporate behaviour. The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (2023), promoting principles of transparency, fairness, and security, but compliance is voluntary.
Critics contend that this fragmented model leaves significant regulatory gaps, allowing corporate misuse and opaque algorithmic decision-making. However, supporters argue that the US model fosters technological agility, avoiding bureaucratic constraints that may hinder progress. The challenge for the US lies in balancing its pro-innovation stance with stronger consumer and ethical protections.
6.3 India: The Emerging Framework and Balancing Act
India stands at a crossroads—aspiring to become a global AI hub while safeguarding individual rights. Unlike the EU or US, India does not yet have a comprehensive AI law, but several policy initiatives have laid the foundation for an ethical governance framework.
The NITI Aayog’s Discussion Paper on “Responsible AI for All” (2021) articulates a vision for AI that is inclusive, safe, and transparent. It emphasizes principles such as fairness, accountability, and explainability. The paper also underscores the need for human oversight and sector-specific regulation.
The enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act) marks a crucial milestone in India’s digital governance. It introduces data protection obligations for AI systems, requiring corporations to ensure consent-based processing and protect sensitive personal information. Meanwhile, the Information Technology Act, 2000, though outdated, continues to provide a partial legal basis for addressing AI-related cyber offences and liability issues.
From an ethical standpoint, India’s approach reflects its broader constitutional values—especially the right to privacy, recognised as a fundamental right in Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1. The decision established privacy as intrinsic to dignity and autonomy, laying the groundwork for AI accountability within a human rights framework.
However, India faces practical challenges: lack of technical expertise, absence of a dedicated AI regulator, and the need to balance rapid innovation with socio-economic inclusion. The government’s strategy thus combines soft law instruments, public-private collaboration, and gradual institutional development.
7. CORPORATE GOVERNANCE AND RISK MANAGEMENT IN THE AGE OF AI
As Artificial Intelligence (AI) and Machine Learning (ML) technologies reshape global business models, corporate governance faces a fundamental transformation. The traditional principles of accountability, transparency, and fiduciary responsibility are being redefined in light of algorithmic decision-making, data dependence, and automation of human judgment. Corporations that deploy or develop AI systems must not only comply with legal obligations but also embed ethical and risk-management strategies into their governance structures. Effective corporate governance in the age of AI thus demands a multi-layered approach—combining compliance, ethics, and proactive risk mitigation.
7.1 Redefining Corporate Responsibility
Corporate liability in AI contexts extends beyond the physical boundaries of products or services. AI systems can autonomously learn, adapt, and act in unpredictable ways, blurring the line between human agency and machine behaviour. This raises a crucial question: who should be held accountable when an AI system causes harm— the developer, the deploying company, or the algorithm itself?
Under existing legal principles, liability generally attaches to the corporate entity or its agents, since current law does not recognise AI as a legal person. However, scholars have proposed models of “algorithmic agency”, suggesting that corporate liability should reflect both human and machine participation in harm. For example, when an AI-driven trading bot manipulates financial markets, liability may extend to both the company deploying it and the programmers who failed to implement safeguards.
Corporate responsibility therefore must evolve from a reactive model—responding to damage after it occurs—to a preventive governance model that anticipates algorithmic risk through design and oversight.
7.2 Risk Identification and Impact Assessment
A cornerstone of modern AI governance is risk identification. Corporations must recognise that AI systems inherently carry operational, reputational, ethical, and legal risks. Effective governance begins with AI Impact Assessments (AIIAs)—systematic evaluations of potential harms to stakeholders, particularly where algorithms affect employment, finance, health, or access to services.
The European Union AI Act mandates risk assessments for “high-risk” systems, a model increasingly emulated worldwide. Similarly, the OECD AI Principles (2019) encourage organisations to ensure that AI systems are robust, secure, and under human oversight. For Indian corporations, integrating impact assessments aligns with the constitutional commitment to fairness and equality under Articles 14 and 21.
Risk management must also include bias detection audits, data-provenance documentation, and testing protocols for algorithmic safety. These measures are not only regulatory expectations but business imperatives—ethical AI enhances brand trust, investor confidence, and long-term sustainability.
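As a rough sketch of how an AI Impact Assessment might be operationalised inside a governance workflow, the example below records a hypothetical system’s purpose, risk tier, affected stakeholders, identified harms, and mitigations, and flags high-risk entries for documented human sign-off. The tier names loosely mirror the EU AI Act categories discussed earlier; the fields and the sign-off rule are assumptions for illustration rather than a prescribed format.

```python
# Sketch of an AI Impact Assessment (AIIA) record for internal governance.
# Field names, tiers, and the sign-off rule are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

RISK_TIERS = ("minimal", "limited", "high", "unacceptable")  # loosely mirrors EU AI Act tiers

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    risk_tier: str
    stakeholders: List[str]
    identified_harms: List[str]
    mitigations: List[str] = field(default_factory=list)

    def requires_human_signoff(self) -> bool:
        """High-risk and unacceptable-risk systems need documented human approval."""
        return self.risk_tier in ("high", "unacceptable")

aiia = ImpactAssessment(
    system_name="resume-screening-model",   # hypothetical system
    purpose="shortlisting job applicants",
    risk_tier="high",                        # employment uses are treated as high risk
    stakeholders=["applicants", "HR staff"],
    identified_harms=["gender bias in historical hiring data"],
    mitigations=["bias audit each release", "human review of all rejections"],
)

print(aiia.requires_human_signoff())  # True: escalate to the ethics or audit committee
```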
7.3 Compliance and Internal Controls
Corporate compliance frameworks must evolve alongside the rapidly changing landscape of AI regulation, as traditional, rule-based approaches are no longer sufficient for technologies that learn and adapt autonomously. To address this, companies should adopt dynamic compliance systems that can continuously adjust policies in response to legal and technological developments. This involves establishing strong internal controls, such as data governance policies to ensure lawful collection and consent under laws like India’s Digital Personal Data Protection Act (2023) and the EU’s GDPR; maintaining transparent documentation of algorithms, including data sources and testing outcomes; and implementing accountability systems that enable traceability and facilitate audits. Additionally, whistleblower mechanisms should empower employees to report unethical or biased AI practices. Together, these measures shift compliance from a mere procedural obligation to a living culture of ethical responsibility and innovation, embedding accountability at every stage of AI development and deployment.
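The accountability and traceability systems described above can be pictured, in simplified form, as an append-only decision log: each automated decision is recorded with the model version, a reference to its input, and a hash chained to the previous entry, so that later alteration becomes detectable during an audit. The field names and chaining scheme are illustrative assumptions; real deployments would rely on established logging, signing, and retention infrastructure.

```python
# Simplified append-only audit log for automated decisions: each entry is hash-
# chained to the previous one so alterations are detectable during an audit.
# Field names and the chaining scheme are illustrative assumptions.
import hashlib
import json
import time

log = []

def record_decision(model_version: str, input_ref: str, outcome: str) -> dict:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_ref": input_ref,   # pointer to stored input, not the raw personal data
        "outcome": outcome,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append(entry)
    return entry

record_decision("credit-model-v1.3", "application:48213", "declined")
record_decision("credit-model-v1.3", "application:48214", "approved")
# An auditor can recompute each hash and follow the prev_hash links to verify integrity.
```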
8. TOWARDS A LAYERED LEGAL FRAMEWORK
As Artificial Intelligence (AI) and Machine Learning (ML) become increasingly embedded in corporate operations, governance, and daily life, it is evident that traditional legal frameworks alone are insufficient to manage the complex challenges posed by autonomous systems. The fragmented regulatory landscape, ethical ambiguities, and technological unpredictability require a layered approach to law and governance, combining hard legal rules, soft ethical guidance, and corporate self-regulation (Calo, 2015). Such a framework can reconcile innovation, accountability, and societal values in a coherent and adaptive manner.
8.1 Conceptualizing a Layered Approach
A layered legal framework offers a balanced and adaptive approach to regulating Artificial Intelligence (AI) by distributing responsibilities and safeguards across multiple levels of governance. At the global level, international standards set universal principles for ethics, human rights, and cross-border cooperation. National legislation then translates these values into enforceable rules on accountability, liability, and data protection. Within corporations, governance structures such as ethical review boards and compliance committees ensure responsible deployment and oversight of AI systems. Complementing these formal measures, self-regulation and ethical codes promote industry best practices, transparency, and voluntary adherence to ethical norms. Together, these interconnected layers create a dynamic framework that balances innovation with accountability—avoiding the rigidity of over-regulation while preventing the risks of unregulated technological growth.
8.2 International and Global Norms
At the global level, organisations such as the OECD, UNESCO, and the World Intellectual Property Organization (WIPO) have articulated principles for responsible AI. The OECD AI Principles (2019) emphasize transparency, fairness, robustness, and accountability, offering a blueprint for national and corporate adoption. Similarly, UNESCO’s Recommendation on the Ethics of AI (2021) provides guidance on privacy, inclusivity, and human oversight (UNESCO, 2021).
International standards serve two key purposes. First, they create a baseline of ethical expectation that transcends jurisdictional boundaries. Second, they provide a reference for cross-border corporate liability, particularly for companies operating in multiple regulatory environments. For instance, a company complying with OECD principles can demonstrate good faith in liability disputes, even when domestic regulations are less stringent (WIPO, 2022).
8.3 Corporate Governance and Compliance Mechanisms
At the corporate level, the layered legal framework emphasizes proactive governance to ensure ethical and responsible use of AI. Companies are expected to establish internal oversight mechanisms, such as AI ethics boards and audit committees, to guide compliance and accountability. They must also conduct algorithmic impact assessments to identify potential biases, privacy breaches, or discriminatory effects arising from AI deployment. Transparency plays a crucial role—organizations should clearly document their data sources, algorithmic decision-making processes, and risk mitigation strategies. Furthermore, continuous monitoring systems are essential to adapt to evolving legal and ethical standards. Together, these measures ensure that corporations integrate ethical foresight and accountability into their daily operations, rather than relying solely on external regulators for oversight.
8.4 Ethical Self-Regulation and Industry Best Practices
The final layer of governance consists of voluntary codes of conduct, industry standards, and ethical certifications, which function as the “soft law” complementing formal regulation. Initiatives such as Google’s AI Principles, emphasizing fairness, transparency, and safety, and the Partnership on AI, which unites academia, industry, and civil society to promote responsible innovation, illustrate this approach. Unlike rigid legal frameworks, soft regulation encourages a culture of accountability and ethical integrity while maintaining flexibility to adapt to rapid technological advancements. By fostering shared responsibility across stakeholders, it ensures that innovation progresses in a way that remains both socially beneficial and ethically grounded (Whittaker et al., 2018).
8.5 Advantages of a Layered Legal Framework
A multi-tiered approach offers several benefits:
Flexibility: Can adapt to rapidly evolving AI technologies.
Accountability: Distributes responsibility across regulators, corporations, and international bodies.
Innovation-Friendly: Avoids rigid prohibitions that may stifle technological progress.
Global Harmonization: Provides a reference standard for cross-border AI deployment and liability.
By integrating international norms, national law, corporate governance, and ethical self-regulation, the framework ensures that AI and ML can develop responsibly, serving society while minimizing harm (Cath et al., 2018).
CONCLUSION
Artificial Intelligence (AI) and Machine Learning (ML) are no longer futuristic technologies—they have become central to modern corporate operations and governance. Their capacity to automate decision-making, analyse data, and generate content offers immense potential, but it also challenges traditional notions of liability, accountability, and ethics. The legal system, designed for human conduct, now faces the task of addressing autonomous decision-making and algorithmic unpredictability.
Corporations deploying AI must recognise that legal and moral responsibility ultimately remains human. Transparent algorithmic design, regular audits, and risk assessments should form part of every corporate AI strategy. Data privacy and intellectual property disputes underline the urgent need for robust compliance under frameworks such as the GDPR, India’s Digital Personal Data Protection Act 2023, and evolving global norms.
Ethically, AI governance must prioritise fairness, transparency, and human oversight. Corporate boards should adopt accountability models that ensure technology serves social welfare rather than profit alone. Biases in recruitment, credit scoring, or surveillance systems illustrate that ethical negligence can lead to social harm as severe as legal liability.
A layered regulatory framework, integrating statutory obligations, ethical guidelines, and corporate self-governance, is essential. Such a model can harmonise innovation with accountability and enable global cooperation on AI governance. Ultimately, the law must evolve not to stifle innovation but to ensure it remains human-centred—where technology enhances justice, equity, and collective progress rather than undermining them.
REFERENCES
- Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com
- Balkin, J. M. (2017). The three laws of robotics in the age of big data. Ohio State Law Journal, 78(5), 1217–1248.
- Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291.
- Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563.
- Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the “good society”: The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528.
- Donoghue v. Stevenson [1932] AC 562 (House of Lords).
- European Parliament. (2020). Resolution on civil liability regime for artificial intelligence (2020/2014(INL)). Strasbourg: European Parliament.
- Gasser, U., & Almeida, V. A. F. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62.
- Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
- Jonas, H. (1984). The imperative of responsibility: In search of an ethics for the technological age. University of Chicago Press.
- NITI Aayog. (2021). Responsible AI for All: Operationalizing principles for responsible AI. Government of India.
- OECD. (2019). OECD principles on artificial intelligence. Organisation for Economic Co-operation and Development.
- Thaler v. Comptroller General of Patents [2021] EWCA Civ 1374 (Court of Appeal, UK).
- UNESCO. (2021). Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization.
- Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., & Raji, I. D. (2018). AI Now Report 2018. AI Now Institute, New York University.
- WIPO. (2022). Artificial intelligence and intellectual property policy. World Intellectual Property Organization.
Mail to Author: akzamolkani@gmail.com