Creators of Artificial Intelligence Will Have to Prove in Court That They Did Not Cause Harm

I recommend reading the article on Lupa: Creators of Artificial Intelligence Will Have to Prove in Court That They Did Not Cause Harm. It discusses a new draft directive on liability for damage caused by artificial intelligence systems.
The directive introduces a presumption that the artificial intelligence system caused the harmful consequences. It is a so-called rebuttable presumption, meaning the creator of the AI can escape liability by proving that their system played no part in the incident. I wonder whether such a directive might be somewhat demotivating for developing systems in the EU, where the law would approach them with the burden of proof reversed.
Another novelty is that a court will be able to order the disclosure of information about high-risk AI systems, allowing the injured party to peek behind the curtain at the developers' work. If documentation can so easily end up in the hands of disgruntled plaintiffs, I believe, along with John Buyers, that this will significantly affect how documentation is written.
To capture how much harm various systems and applications can cause, the AIA introduces a risk categorisation. The prohibited "unacceptable risk" group includes applications that manipulate human behaviour in order to circumvent users' free will. I'm not sure I understand this correctly: if I create an application that motivates me to exercise, am I thereby circumventing my free will (which was insufficient for exercising on its own) and creating an unacceptable risk, regardless of whether the application is beneficial or harmful? Or do I misunderstand how this squares with neurological research that has abandoned the concept of free will for good reasons (the brain decides before our conscious self does, creating a false illusion of free will; we have instincts and drives...)?
Furthermore, applications in the high-risk category are not fundamentally excluded, but a whole range of conditions must be met before they can be deployed. This group includes sectors such as education, employment, finance, and law enforcement. The EU will therefore view my educational applications as high-risk. At the same time, EU experts refuse to fund my research (for example, about 1 million for a proper Czech version of DistilBERT that would genuinely help several Czech companies), while preferring to support projects that are rather dubious from my perspective (e.g., hundreds of millions in funding for Mr Babiš's Penam toast bread innovation programme).
Now the EU wants to help me create trustworthy and safe systems by requiring me to meet a whole range of conditions: to prove that my systems are accurate and fair, to always have human oversight over automated processes (I think a monitoring system with notifications is better, roughly as sketched below), and so on.
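To make the "monitoring with notifications" idea concrete, here is a minimal sketch of what I mean, purely as my own illustration and not anything the directive or the article prescribes. All names in it (predict_with_monitoring, notify_reviewer, CONFIDENCE_THRESHOLD, the assumed model.predict API) are hypothetical: instead of a human approving every prediction, an automated check logs everything and escalates only the suspicious cases for review.

```python
import logging

# Hypothetical threshold: predictions with lower confidence get escalated to a human.
CONFIDENCE_THRESHOLD = 0.7


def notify_reviewer(record: dict) -> None:
    """Placeholder notification channel; in practice this could be e-mail, Slack, etc."""
    logging.warning("Prediction flagged for human review: %s", record)


def predict_with_monitoring(model, features: dict) -> dict:
    """Run the model, log every prediction, and escalate low-confidence ones."""
    label, confidence = model.predict(features)  # assumed model API returning (label, confidence)
    record = {"input": features, "label": label, "confidence": confidence}
    logging.info("Prediction made: %s", record)
    if confidence < CONFIDENCE_THRESHOLD:
        notify_reviewer(record)  # the human steps in only when the system is unsure
    return record
```

The point of the sketch is the design choice: full audit logging plus targeted notifications scales to every prediction, whereas a human signing off on each automated decision does not.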
I understand that people need protection. AI does carry certain risks (the recommendation systems on TikTok and Facebook, for example). I have established three ethical principles of AI that guide my company: benefit, safety, and transparency. I would like to see more discussion in the EU and the Czech Republic primarily about the benefit. Additionally, I have not fully grasped what falls under the definition of AI (for example, is it only software, or also hardware?), and why AI should be viewed differently from other systems.
Article: https://www.lupa.cz/…/tvurci-umele-inteligence-budou…/
Directive: https://ec.europa.eu/…/files/1_1_197605_prop_dir_ai_en.pdf
Original source: wordpress