The Duty of Care Requirements in AI Applications — Italian Court Cases

PropTech@ecyY
3 min read · Aug 5, 2024


EU AI Act 2025

The EU Artificial Intelligence Act has finally been adopted; it entered into force on 1 August 2024, with its obligations applying in stages from 2025 onward. Detailed clauses of the Act are available at https://artificialintelligenceact.eu/ai-act-explorer/. Yet there are no clear stipulations on transparency and explainability requirements.

Courts of New Zealand’s Guidelines on AI in Courts

The Courts of New Zealand have issued guidelines on the use of generative AI in courts, which require, among other things, that: “All information generated by a GenAI chatbot should be checked by an appropriately qualified person for accuracy before it is used or referred to in court or tribunal proceedings.”
However, who counts as an appropriately qualified person is not defined, and how an AI’s outputs can be checked for accuracy also remains an open question.

Italian Courts Set Precedent Cases

Andrea Tuninetti Ferrari (2021) reported two decisions of the Italian courts on AI issues. We focus on the second, the Mevaluate case, which relates to the requirements of explainable AI.

Mevaluate was an AI application designed to make an impartial assessment of a person’s reputational ranking based on web-based information. “In 2016, the Italian Authority issued a ban preventing Mevaluate from processing personal data through its web platform, because the processing was inconsistent with the principles of the then applicable Italian Privacy Code.” Yet users of Mevaluate had given their consent by signing a consent document.

In a nutshell, “The key issue is whether — before using the rating platform — the user is sufficiently informed about how the algorithm calculates the rating”.

In the lower court’s decision, “the Court of Rome concluded that transparency was not an issue”, because subscribers who signed the consent document were deemed to have agreed to be bound by the algorithm even without knowing how it works.

However, the decision was overturned by the Italian Supreme Court, which implies a requirement of explainability owed to AI users.

Ferrari (2021) contended that the Italian courts’ decisions are consistent with the risk-based approach adopted in the EU AI Act. The Mevaluate case, in particular, flags a risk where the algorithm processes sensitive data and its decision making is not transparent, such that the user cannot understand the logic behind its decisions. He further proposed, among other things, the following action:

"Promoting internal governance and compliance systems aimed at ensuring that AI can be explained (e.g. to users, authorities), and to show how AI pursues algorithmic transparency, data cleanliness, ethics"

Judges Will Shape The Nature and Form of Explainable AI

Ashley Deeks (2019) contended that the black box problem of AI applications is not an unsolvable issue. Rather, ‘If and as judges demand these explanations, they will play a seminal role in shaping the nature and form of “explainable AI” (xAI)’.

With these courts’ guidelines and decisions, XAI has attracted considerable attention among stakeholders. Richmond et al. (2024) showed a sharp increase in the number of XAI studies in recent years.

References:

Deeks, A. (2019) The Judicial Demand for Explainable Artificial Intelligence, Columbia Law Review, 119(7). https://columbialawreview.org/content/the-judicial-demand-for-explainable-artificial-intelligence/

Ferrari, A.T. (2021) The Italian courts lead the way on explainable AI. Clifford Chance, 22 June. https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2021/06/the-italian-courts-lead-the-way-on-explainable-ai.html

Richmond, K.M., Muddamsetty, S.M., Gammeltoft-Hansen, T. et al. (2024) Explainable AI and Law: An Evidential Survey, Digital Society, 3(1). https://doi.org/10.1007/s44206-023-00081-z
