CyberPet Issue 02 is Out!

Artificial Intelligence Law Interview

By Tuğsem Soner

Technology

February 24, 2025

The EU AI Act is currently the most comprehensive legal framework for AI globally.

While current regulations do not recognize copyright in AI-generated works, legal changes seem inevitable.

If an AI model uses copyrighted materials without permission during its training process, this can be considered copyright infringement.

Bias in AI systems is a recurring issue. Since these models learn patterns from historical data—data that may contain systemic discrimination—their outputs naturally reflect these biases.

In the United States, an AI model called COMPAS was used in the criminal justice system to assess the likelihood of individuals reoffending. However, the model faced serious criticism for assigning disproportionately higher risk scores to Black individuals compared to white individuals.

Another example is AI-based 'drowsiness detection' systems in cars, which incorrectly flagged Asian drivers as fatigued.

For instance, when translating from Turkish to English, Google Translate used to automatically translate 'O bir doktor' as 'He is a doctor,' failing to consider the possibility of a female doctor.
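
Disparities like these are typically surfaced by auditing a model's error rates per demographic group, much like the analyses that raised concerns about COMPAS. The sketch below uses invented records and field names (not real COMPAS data) to show how a group-wise false positive rate, meaning people flagged as high risk who did not go on to reoffend, can reveal that one group is flagged disproportionately often.

```python
# Minimal, hypothetical bias-audit sketch: compare false positive rates
# (flagged "high risk" but did not reoffend) across demographic groups.
# The records below are invented for illustration; they are not COMPAS data.

from collections import defaultdict

# Each record: (group, predicted_high_risk, reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  True),  ("group_a", True,  False),
    ("group_a", False, False), ("group_b", True,  False), ("group_b", False, False),
    ("group_b", False, True),  ("group_b", False, False),
]

false_positives = defaultdict(int)   # flagged high risk, did not reoffend
non_reoffenders = defaultdict(int)   # everyone who did not reoffend, per group

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

for group in sorted(non_reoffenders):
    rate = false_positives[group] / non_reoffenders[group]
    print(f"{group}: false positive rate = {rate:.2f}")
```

A real audit would use the actual model outputs and far more data, but even this toy comparison shows why regulators focus on error rates broken down by group rather than overall accuracy alone.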

Training a single large language model is estimated to generate five times the carbon emissions of an average car over its lifetime.

Cooling data centers requires vast amounts of water. In 2021 alone, Google’s U.S. data centers consumed an estimated 12.7 billion liters of water. To address these challenges, legal measures should establish energy efficiency standards for AI systems and mandate compliance.

Brands are increasingly using AI-generated digital models and virtual influencers in commercial campaigns. As AI-generated models become more realistic, job opportunities for real models may decrease.

Brands may prefer virtual models due to lower costs and greater control. Consequently, human models will compete not only with each other but also with AI-generated digital models. Additionally, the unrealistic beauty standards and lifestyles portrayed by virtual influencers could further distort younger audiences' perceptions of reality.

In this context, we are sharing selected questions and answers from our interview with Deniz Çelikkaya on artificial intelligence.

How might legal regulations evolve regarding data collection in AI systems? Is the GDPR sufficient?

AI systems, particularly large language models like ChatGPT, rely on vast amounts of data for training. While the GDPR (General Data Protection Regulation) provides a strong foundation for data protection, it does not fully address the specific challenges posed by AI.

One major limitation of the GDPR is its individual-centric approach, whereas AI systems rely on massive datasets aggregated from various sources. For example, the GDPR ensures that individuals are not subjected to automated decision-making processes (such as credit approval systems) without their explicit consent. However, due to the complexity and opacity of AI models, individuals often struggle to challenge AI-driven decisions.

The EU AI Act complements the GDPR by introducing specific data protection measures for AI. It requires risk and human rights impact assessments, risk mitigation strategies, transparency, and human oversight for high-risk AI systems.

Future regulations will likely emphasize transparency and explainability in AI systems. Developers will need to disclose how AI models function and how they arrive at decisions. This is crucial for ensuring AI ethics and accountability.

If artificial intelligence causes harm or an error, who should be held responsible?

The answer to this question depends on the level of control over the system and the degree of autonomy it possesses. However, one thing is clear: although some AI models have autonomous decision-making capabilities, they are not legal persons. Non-human entities do not bear criminal liability, at least within the framework of current legal regulations.

So, if AI is not responsible, who is? For instance, if there is physical harm involved (such as an accident involving an autonomous vehicle resulting in injury), consumer protection and product liability laws are typically consulted. Depending on the legislation, responsibility may fall on the seller or manufacturer. The role of the user is also significant. For example, in an Uber autonomous vehicle accident case, Uber was held liable along with its development team and the vehicle operator.

Determining liability becomes more complex in cases of non-physical harm (such as financial loss due to an AI-driven investment decision). Here, the responsibility may depend on whether the developers or distributors of the AI system took appropriate measures to prevent foreseeable damages. Existing legal frameworks often fall short in such cases. As AI systems become more autonomous, new regulatory measures will be needed to clarify accountability and ensure liability is properly assigned.

Click the link for the full interview: LINK