ALGORITHMIC TRANSPARENCY IN LEGAL SYSTEMS: CHALLENGES AND SOLUTIONS FOR AI EXPLAINABILITY

Albina Kurmichkina, Lecturer, Cyber Law Department, Tashkent State University of Law, Uzbekistan

Abstract

Algorithmic transparency in legal systems represents a critical challenge as artificial intelligence increasingly influences judicial decision-making, risk assessment, and administrative processes. This research examines the regulatory landscape governing AI explainability through comprehensive analysis of the EU AI Act, GDPR, and emerging national frameworks. The study identifies fundamental tensions between algorithmic opacity and due process requirements, evaluating how legal systems balance technological innovation with constitutional rights. Through comparative analysis of regulatory approaches and landmark judicial decisions, this research demonstrates that current transparency mechanisms remain inadequate for complex machine learning systems deployed in legal contexts. The findings reveal that meaningful algorithmic accountability requires multi-layered frameworks combining technical explanations, legal interpretability standards, and institutional oversight mechanisms. This research proposes conceptual solutions for enhancing AI transparency through standardized documentation requirements, interpretability benchmarks, and participatory governance structures applicable to legal systems worldwide, including emerging jurisdictions such as Uzbekistan.
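To make the abstract's notion of a "technical explanation" concrete, the sketch below shows one way a per-decision rationale could be generated from an intrinsically interpretable risk model, the kind of output a standardized documentation requirement might mandate. This is a minimal illustrative sketch, not a method described in the article: the feature names, training data, and the explain helper are hypothetical assumptions.

```python
# Minimal sketch (illustrative only, not from the article) of a per-decision
# "technical explanation" for an interpretable risk-assessment model.
# Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_offenses", "age", "employment_gap_months"]
X = np.array([[0, 35, 0], [4, 22, 18], [1, 41, 2], [6, 19, 30]], dtype=float)
y = np.array([0, 1, 0, 1])  # 1 = labelled high risk in the (hypothetical) training data

model = LogisticRegression().fit(X, y)

def explain(instance):
    """Return each feature's additive contribution to the decision score
    (coefficient x feature value), ordered by magnitude, as a human-readable
    per-case rationale."""
    contributions = model.coef_[0] * instance
    return sorted(zip(feature_names, contributions), key=lambda kv: -abs(kv[1]))

case = X[1]
print(f"P(high risk) = {model.predict_proba([case])[0, 1]:.2f}")
for name, score in explain(case):
    print(f"  {name}: {score:+.2f}")
```

Because the model is linear, the contributions sum exactly to the decision score, which is why interpretable-by-design models are often preferred over post-hoc approximations when decisions carry due process consequences.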

Keywords

algorithmic transparency, legal systems, AI explainability, due process, regulatory frameworks, judicial decision-making

How to Cite

Kurmichkina, A. (2025). Algorithmic transparency in legal systems: Challenges and solutions for AI explainability. International Journal of Artificial Intelligence, 5(10), 1359-1372. https://www.academicpublishers.org/journals/index.php/ijai/article/view/7116