Member of IEEE Computer Society, Tampa, Florida, USA.

Research Article

Received on 21 February 2025; revised on 29 March 2025; accepted on 31 March 2025

Deploying large language models (LLMs) in edge computing environments is an emerging challenge at the intersection of AI and distributed systems. Running LLMs directly on edge devices can greatly reduce latency and improve privacy, enabling real-time intelligent applications without constant cloud connectivity. However, modern LLMs often comprise billions of parameters and require tens of gigabytes of memory and substantial compute power, far exceeding what typical edge hardware can provide. In this paper, we present a comprehensive approach to optimizing LLM deployment in edge computing environments that combines four existing classes of optimization techniques (model compression, quantization, distributed inference, and federated learning) in a unified framework. Our insight is that a holistic combination of these techniques is necessary to deploy LLMs successfully in practical edge settings. We also provide new algorithmic solutions and empirical data to advance the state of the art.
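To make the memory claim above concrete, the following is a minimal back-of-the-envelope sketch (illustrative only, not drawn from the paper; the model sizes and precisions are assumptions) showing how an LLM's weight footprint scales with parameter count and numeric precision, and hence why quantization is central to edge deployment:

# Illustrative sketch: estimating LLM weight-memory footprint at different
# numeric precisions. Model sizes and precisions below are assumed examples,
# not figures from the paper.

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Memory needed to hold the model weights alone, in gigabytes."""
    return num_params * bits_per_param / 8 / 1e9

for name, params in [("7B model", 7e9), ("13B model", 13e9), ("70B model", 70e9)]:
    for precision, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)]:
        print(f"{name} @ {precision}: {weight_memory_gb(params, bits):.1f} GB")

For instance, a 7B-parameter model needs roughly 14 GB in FP16 but about 3.5 GB at 4-bit precision, bringing it within reach of many edge devices.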

Large Language Models (LLMs); Edge Computing; Model Compression; Distributed Inference; Federated Learning

Raghavan Krishnasamy Lakshmana Perumal. Optimizing Large Language Model Deployment in Edge Computing Environments. International Journal of Science and Research Archive, 2025, 14(03), 1658-1669. Article DOI: https://doi.org/10.30574/ijsra.2025.14.3.0912.

Copyright © 2025. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.