Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers


Type
Conference Object
Authors
Jiang, AQ 
Li, W 
Tworkowski, S 
Czechowski, K 
Odrzygóźdź, T 
Abstract

In theorem proving, the task of selecting useful premises from a large library to unlock the proof of a given conjecture is crucially important. This presents a challenge for all theorem provers, especially the ones based on language models, due to their relative inability to reason over huge volumes of premises in text form. This paper introduces Thor, a framework integrating language models and automated theorem provers to overcome this difficulty. In Thor, a class of methods called hammers that leverage the power of automated theorem provers are used for premise selection, while all other tasks are designated to language models. Thor increases a language model’s success rate on the PISA dataset from 39% to 57%, while solving 8.2% of problems neither language models nor automated theorem provers are able to solve on their own. Furthermore, with a significantly smaller computational budget, Thor can achieve a success rate on the MiniF2F dataset that is on par with the best existing methods. Thor can be instantiated for the majority of popular interactive theorem provers via a straightforward protocol we provide.
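
The abstract describes a division of labour in which the language model proposes proof steps while premise selection is delegated to an ATP-backed hammer (Sledgehammer in Isabelle). The sketch below illustrates that interaction loop; it is not the paper's implementation, and the `prover` and `language_model` interfaces, the `HAMMER_TOKEN` placeholder, and all function names are hypothetical stand-ins.

```python
# Minimal sketch of a Thor-style proving loop.
# Assumptions: a hypothetical `prover` object wrapping an interactive theorem
# prover session (e.g. Isabelle via a PISA-like environment) and a hypothetical
# `language_model.suggest_step` function; neither API is from the paper.

HAMMER_TOKEN = "<hammer>"  # placeholder the model may emit instead of naming premises

def prove(conjecture, prover, language_model, max_steps=30):
    state = prover.init(conjecture)                # initial proof state
    for _ in range(max_steps):
        step = language_model.suggest_step(state)  # LM proposes the next proof step
        if HAMMER_TOKEN in step:
            # Delegate premise selection to the hammer (Sledgehammer in Isabelle)
            # rather than having the LM pick premises from the library itself.
            step = step.replace(HAMMER_TOKEN, "sledgehammer")
        state = prover.apply(state, step)          # execute the step in the ITP
        if state.is_finished:
            return True                            # proof completed
        if state.is_error:
            return False                           # step failed; a real system would backtrack or search
    return False
```

In practice the search is more elaborate (e.g. sampling several candidate steps and backtracking on failure), but the essential point of the protocol is that any interactive theorem prover with a hammer can slot into this loop.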

Journal Title
Advances in Neural Information Processing Systems
Conference Name
36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Journal ISSN
1049-5258