Tactic Prediction for Theorem Proving with Proof Context
By Viktor Toman
In this project we consider machine learning applications in higher-order theorem proving. Given a proof state, the crucial task in theorem proving is to select a beneficial tactic and, optionally, to parameterize the tactic with premises. We consider learning policies for this task from data in the HOL Light interactive higher-order theorem prover. Given logs of successful HOL Light proofs guided by mathematicians, we learn policies in a supervised manner that operate on the level of individual proof steps. Given a proof step, a policy predicts the tactic that was applied and computes relevance scores for premises that may be used as tactic parameters. The central objective of the project is to explore whether providing more context of a proof step as input improves the prediction. The context can include the original conjecture of the proof, the current list of assumptions, the history of goals in previous proof steps, and the history of tactic applications. The internship project aligns well with the objective of our research group to build an interface that automates the theorem-proving process in HOL Light. Given a statement to be proven, the interface will repeatedly call a fixed policy to select a parameterized tactic for the current proof state. Accurate prediction policies make the interface more powerful, as they increase the chances of successfully proving statements and thus of obtaining more proof data.
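To make the two prediction tasks concrete, the following is a minimal, purely illustrative sketch of a tactic-prediction policy. It is not the model studied in the project: the proof log, the premise names, the whitespace tokenizer, and the nearest-neighbour scoring are all simplified stand-ins for a learned policy, chosen only to show the input/output shape of the task (goal in, tactic plus ranked premises out).

```python
def tokenize(term):
    # Crude whitespace tokenizer over a printed HOL term (illustrative only;
    # a real policy would use a learned embedding of the term structure).
    return set(term.replace("(", " ").replace(")", " ").split())

def jaccard(a, b):
    # Set-overlap similarity used as a stand-in for a learned score.
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy "proof log": (goal, tactic) pairs a supervised policy would learn from.
# The goals and tactic names are hypothetical examples.
PROOF_LOG = [
    ("!x. x + 0 = x", "ARITH_TAC"),
    ("!p q. p /\\ q ==> q /\\ p", "MESON_TAC"),
    ("!n. n * 1 = n", "ARITH_TAC"),
]

# Hypothetical premise library: name -> token signature of its statement.
PREMISE_TOKENS = {
    "ADD_CLAUSES": tokenize("x + 0 = x"),
    "MULT_CLAUSES": tokenize("n * 1 = n"),
    "CONJ_SYM": tokenize("p /\\ q ==> q /\\ p"),
}

def predict_tactic(goal):
    # Tactic selection: 1-nearest-neighbour over the proof log.
    toks = tokenize(goal)
    return max(PROOF_LOG, key=lambda ex: jaccard(toks, tokenize(ex[0])))[1]

def rank_premises(goal):
    # Premise selection: rank premises by relevance score to the goal.
    toks = tokenize(goal)
    return sorted(PREMISE_TOKENS, key=lambda p: -jaccard(toks, PREMISE_TOKENS[p]))
```

For the unseen goal `!y. y + 0 = y`, this sketch predicts `ARITH_TAC` and ranks `ADD_CLAUSES` as the most relevant premise. The context features discussed above (assumptions, goal history, tactic history) would enter such a policy as additional inputs to the scoring function.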
Viktor Toman is a computer science PhD student at the Institute of Science and Technology (IST) Austria. After obtaining his Bachelor's and Master's degrees at Masaryk University in Brno, Czech Republic, he joined the research group of Krishnendu Chatterjee at IST Austria to pursue a PhD in formal methods. His research interests include model checking, program verification, and machine learning in formal methods.