
I recommend reading the article on Lupa: AI creators will have to prove in court that they did not cause harm. It discusses a new proposal for a directive on liability for damage caused by artificial intelligence.
The directive introduces a presumption that the artificial intelligence caused the harmful outcome. It is a rebuttable presumption: the creator of the AI system can exonerate themselves by proving they played no part in the incident. I wonder whether such a directive might act as a disincentive to developing AI systems in the EU, particularly since the law edges toward reversing the burden of proof.
Another novelty is that a court may order the disclosure of information about high-risk AI systems, allowing the injured party to peek behind the curtain at the developers' work. If documentation can easily end up in the hands of dissatisfied plaintiffs, I believe, along with John Buyers, that this will significantly influence how documentation is written.
To gauge how much harm different systems and applications can cause, the AIA categorises them. The prohibited "unacceptable risk" group includes applications that manipulate human behaviour with the aim of circumventing users' free will. I am not sure I understand this correctly, but if I create an application that, say, motivates me to exercise, am I thereby circumventing my free will (which was insufficient to get me exercising) and creating an unacceptable risk, regardless of whether the application is beneficial or harmful? Or am I misunderstanding how this squares with neurological research, which has abandoned the concept of free will for good reasons (the brain decides before our conscious self does, creating a false illusion of free will; we have instincts and drives...)?
Applications in the high-risk category are not banned outright, but a whole range of conditions must be met before they can be deployed. This group covers sectors such as education, employment, finance, and law enforcement. The EU will therefore view my educational applications as high-risk. At the same time, EU experts refuse to fund my research (for example, around 1 million for a proper Czech version of DistilBERT that would genuinely help several Czech companies), while preferring to support what I consider rather dubious projects (e.g., a hundred-million grant under the Innovation programme for Mr Babiš's toast bread).
Now the EU wants to help me create trustworthy and safe systems by imposing a whole range of conditions to prove that my systems are accurate and fair, insisting that there must always be human oversight over automatic processes (I think a monitoring system with notifications would be better), and so on.
I understand that people need to be protected. AI does carry real risks (the recommendation systems on TikTok and Facebook, for example). I have set out three ethical principles of AI that guide my company: benefit, safety, and transparency. I would like to see more discussion in the EU and the Czech Republic about benefit in particular. I have also not fully grasped what falls under the definition of AI (does it cover only software, or hardware as well?). And why should AI be viewed differently from other systems?
Article: https://www.lupa.cz/clanky/tvurci-umele-inteligence-budou-muset-pred-soudem-dokazovat-ze-neskodili/
Directive: https://ec.europa.eu/info/sites/default/files/1_1_197605_prop_dir_ai_en.pdf
Originally published on Facebook.