TY - GEN
T1 - LERC: Coordinated cache management for data-parallel systems
T2 - 2017 IEEE Global Communications Conference, GLOBECOM 2017
AU - Yu, Yinghao
AU - Wang, Wei
AU - Zhang, Jun
AU - Letaief, Khaled B.
PY - 2017/12/4
Y1 - 2017/12/4
AB - Memory caches are aggressively used in today's data-parallel frameworks such as Spark, Tez, and Storm. By caching input and intermediate data in memory, compute tasks can be sped up by orders of magnitude. To maximize the chance of in-memory data access, existing cache algorithms, be they recency- or frequency-based, settle on the cache hit ratio as the optimization objective. However, contrary to conventional belief, we show in this paper that simply pursuing a higher cache hit ratio for individual data blocks does not necessarily translate into faster task completion in data-parallel environments. A data-parallel task typically depends on multiple input data blocks; unless all of these blocks are cached in memory, no speedup results. To capture this all-or-nothing property, we propose a more relevant metric, called the effective cache hit ratio. Specifically, a cache hit on a data block is said to be effective if it speeds up a compute task. To optimize the effective cache hit ratio, we propose the Least Effective Reference Count (LERC) policy, which keeps the dependent blocks of a compute task in memory as a whole. We have implemented the LERC policy as a memory manager in Spark and evaluated its performance through deployment on Amazon EC2. Evaluation results demonstrate that LERC speeds up data-parallel jobs by up to 37% compared with the widely used least-recently-used (LRU) policy.
UR - http://www.scopus.com/inward/record.url?scp=85046453765&partnerID=8YFLogxK
U2 - 10.1109/GLOCOM.2017.8254999
DO - 10.1109/GLOCOM.2017.8254999
M3 - Conference article published in proceedings or book
AN - SCOPUS:85046453765
T3 - 2017 IEEE Global Communications Conference, GLOBECOM 2017 - Proceedings
SP - 1
EP - 6
BT - 2017 IEEE Global Communications Conference, GLOBECOM 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 December 2017 through 8 December 2017
ER -
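
The abstract above defines an effective cache hit, a hit that actually speeds up a task because all of the task's dependent blocks are in memory, and builds the LERC eviction rule on that metric. The following is a minimal Python sketch of that bookkeeping, offered only as a reading aid under assumed names (LERCCache, register_task, and so on are hypothetical); it is not the paper's Spark memory manager.

class LERCCache:
    """Toy LERC-style cache: evict the block with the least effective
    reference count, so a task's dependent blocks stay (or go) together."""

    def __init__(self, capacity):
        self.capacity = capacity   # max number of blocks held in memory
        self.cached = set()        # block ids currently cached
        self.task_deps = {}        # pending task id -> set of block ids it reads

    def register_task(self, task, blocks):
        self.task_deps[task] = set(blocks)

    def complete_task(self, task):
        # A finished task no longer counts toward any block's references.
        self.task_deps.pop(task, None)

    def get(self, block, task):
        # A hit is *effective* only if every block the task depends on is
        # cached; a task missing any one input gains no speedup.
        hit = block in self.cached
        effective = hit and self.task_deps.get(task, set()) <= self.cached
        return hit, effective

    def put(self, block):
        if block in self.cached:
            return
        while len(self.cached) >= self.capacity:
            self._evict_one()
        self.cached.add(block)

    def _effective_ref_count(self, block):
        # Count pending tasks that read this block AND have all of their
        # dependent blocks cached (the all-or-nothing property).
        return sum(1 for deps in self.task_deps.values()
                   if block in deps and deps <= self.cached)

    def _evict_one(self):
        # Blocks whose sibling blocks are already missing score zero and are
        # evicted first, clearing "useless" partial groups before touching
        # the inputs of any fully cached task.
        victim = min(self.cached, key=self._effective_ref_count)
        self.cached.remove(victim)

Rescanning every pending task on each eviction keeps the sketch short but costs O(tasks) per eviction; maintaining an incremental effective-reference count per block would be the natural optimization in a real memory manager.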