Distributed Computing Models For Large-Scale Applications

19 Apr

Author: Abena Osei

Abstract: Distributed computing models have become essential for designing and implementing large-scale applications that require high performance, scalability, and fault tolerance. By dividing computational tasks across multiple interconnected nodes, distributed systems enable parallel processing, resource sharing, and improved reliability. This study provides a comprehensive review of distributed computing models, including client-server, peer-to-peer, cluster computing, grid computing, and cloud-based paradigms, highlighting their architectures, operational mechanisms, and suitability for different application domains. It examines how distributed computing supports large-scale applications in scientific computing, big data analytics, e-commerce, and enterprise IT systems. Challenges such as task scheduling, load balancing, fault tolerance, data consistency, and network latency are discussed, along with strategies and algorithms for addressing them. The study also explores emerging trends, including edge computing, serverless architectures, and hybrid distributed systems, which enhance scalability, reduce latency, and improve resource utilization. The findings underscore the critical role of distributed computing in enabling robust, efficient, and scalable large-scale applications.

DOI: https://doi.org/10.5281/zenodo.19653757