A New Paradigm in Tuning Learned Indexes: A Reinforcement Learning Enhanced Approach
Published version
Peer-reviewed
Abstract
Learned Index Structures (LIS) have significantly advanced data management by leveraging machine learning models to optimize data indexing. However, designing these structures often involves critical trade-offs, making it challenging for both designers and end-users to find an optimal balance tailored to specific workloads and scenarios. While some indexes offer adjustable parameters that demand intensive manual tuning, others rely on fixed configurations based on heuristic auto-tuners or expert knowledge, which may not consistently deliver optimal performance. This paper introduces LITune, a novel framework for end-to-end automatic tuning of Learned Index Structures. LITune employs an adaptive training pipeline equipped with a tailor-made Deep Reinforcement Learning (DRL) approach to ensure stable and efficient tuning. To accommodate long-term dynamics arising from online tuning, we further enhance LITune with an on-the-fly updating mechanism termed the O2 system. These innovations allow LITune to effectively capture state transitions in online tuning scenarios and to adjust dynamically to changing data distributions and workloads, marking a significant improvement over other tuning methods. Our experimental results demonstrate that LITune achieves up to a 98% reduction in runtime and a 17-fold increase in throughput compared with default parameter settings for a selected Learned Index instance. These findings highlight LITune's effectiveness and its potential to facilitate broader adoption of LIS in real-world applications.
Journal ISSN: 2836-6573