Mathematical concepts of linear algebra in AI tools: a calculation-based study
Journal of Hyperstructures
Article in Press, accepted, available online from 18 Azar 1403 (8 December 2024). Full text (1.53 MB)
Article type: Research Paper
DOI: 10.22098/jhs.2024.15660.1038
Author
Ram Milan Singh* | ||
Institute for Excellence in Higher Education, Bhopal, India | ||
Abstract
The rapid advancement of Artificial Intelligence (AI) relies heavily on mathematical foundations, with linear algebra serving as a cornerstone. This paper examines the essential concepts of vector spaces, matrices, and linear transformations that underpin key AI methods such as machine learning models and neural networks. Special attention is given to eigenvalues, eigenvectors, and matrix factorizations, including Singular Value Decomposition (SVD) and Principal Component Analysis (PCA), which are crucial for dimensionality reduction and feature extraction. Additionally, the paper explores the role of quadratic programming and convex optimization in training Support Vector Machines (SVMs) and deep learning models, presenting detailed mathematical formulations of these processes. Computational challenges in large-scale matrix operations, such as multiplication, inversion, and sparse-matrix handling, are addressed with a focus on numerical methods that enhance scalability and performance. Supported by worked examples and simulations, this research bridges theoretical rigor and practical application, offering insights for advancing AI systems.
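The SVD-based dimensionality reduction mentioned in the abstract can be sketched in a few lines of NumPy. This is an illustrative example only, not code from the paper; the helper name `pca_svd` and the random test data are assumptions:

```python
import numpy as np

def pca_svd(X, k):
    """Project an n x d data matrix X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # scores along the top-k directions

# Toy data: 100 samples, 5 features, reduced to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca_svd(X, 2)
print(Z.shape)  # (100, 2)
```

Because the singular values are returned in descending order, the first score column always captures at least as much variance as the second, which is the ordering property PCA exploits for feature extraction.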
Keywords
Linear Algebra; AI Tools; Mathematical Programming; Matrix Operations; Optimization
Statistics: article views: 11; full-text downloads: 14