
LEARNING MACHINES: HOW MODELS LIKE CHATGPT WORK
Iskandarov Elzod Olimjon ugli, 2nd-year student, Academic Lyceum of the Samarkand Branch of Tashkent University of Information Technologies

Abstract
In recent years, learning machines, particularly large language models (LLMs) like ChatGPT, have revolutionized natural language processing (NLP). This article provides an overview of how such models function, focusing on the underlying architecture, training methodology, and inference mechanisms. It also explores their applications, limitations, and ethical concerns. Drawing on insights from computer science and cognitive modeling, this work explains how machines "learn" to understand and generate human-like language.
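At the core of the transformer architecture underlying models like ChatGPT is scaled dot-product attention, as introduced by Vaswani et al. (2017). The sketch below illustrates the idea with toy dimensions; the array sizes and random inputs are illustrative assumptions, not values from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how strongly each query matches each key
    weights = softmax(scores, axis=-1)   # each row is a probability distribution
    return weights @ V, weights

# Toy example: a sequence of 3 tokens with key/value dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
```

Each output row is a weighted mixture of the value vectors, with weights determined by query-key similarity; stacking many such attention layers (with learned projections for Q, K, and V) is what lets the model relate tokens across a sentence.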
Keywords
transformer-based models, policymakers, computational algorithms, literary standpoint, pre-existing texts.
Copyright License

This work is licensed under a Creative Commons Attribution 4.0 International License.