TY - GEN
T1 - IoTCoder
T2 - 30th International Conference on Mobile Computing and Networking, ACM MobiCom 2024
AU - Shen, Leming
AU - Zheng, Yuanqing
N1 - Publisher Copyright:
© 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/12/4
Y1 - 2024/12/4
AB - Existing code Large Language Models (LLMs) are primarily designed for generating simple and general algorithms but are not dedicated to IoT applications. To fill this gap, we present IoTCoder, a coding copilot specifically designed to synthesize programs for IoT application development. IoTCoder features three locally deployed small language models (SLMs): a Task Decomposition SLM that decomposes a complex IoT application into multiple tasks with detailed descriptions, a Requirement Transformation SLM that converts the decomposed tasks described in natural language into well-structured specifications, and a Modularized Code Generation SLM that generates modularized code based on the task specifications. Experimental results show that IoTCoder can synthesize programs adopting more IoT-specific algorithms and outperform state-of-the-art code LLMs in terms of both task accuracy (by more than 24.2% on average) and memory usage (by less than 358.4 MB on average).
KW - IoT applications
KW - large language models
UR - https://www.scopus.com/pages/publications/105002579487
U2 - 10.1145/3636534.3697447
DO - 10.1145/3636534.3697447
M3 - Conference article published in proceeding or book
AN - SCOPUS:105002579487
T3 - ACM MobiCom 2024 - Proceedings of the 30th International Conference on Mobile Computing and Networking
SP - 1647
EP - 1649
BT - ACM MobiCom 2024 - Proceedings of the 30th International Conference on Mobile Computing and Networking
PB - Association for Computing Machinery, Inc
Y2 - 18 November 2024 through 22 November 2024
ER -