Typed meta-interpretive learning

 

Sándor Bartha 

 

The present research is motivated by the goal of learning the semantics of opaque automated systems. Logic programming languages can express specifications in many different computation models, and inductive logic programming could be a tool for learning these specifications from the observed behaviour of a system. As automated systems often deal with typed data, we would also like to exploit this information.

My aim is to enrich inductive logic programming with types as a new kind of constraint. Specifically, I aim to port the meta-interpretive learning framework, a state-of-the-art approach to inductive logic programming, from Prolog to a typed logic programming language such as lambda-Prolog, alpha-Prolog, or Twelf. The key objective of the research is to find out whether type information can be used to learn more efficiently in domains where it is available.
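
To make the setting concrete, below is a minimal, illustrative sketch of meta-interpretive learning in plain Prolog, built around the standard "chain" metarule P(A,B) :- Q(A,C), R(C,B) from the MIL literature; the predicate names, background facts, and example query are hypothetical and are not part of the proposed system. In a typed host language such as lambda-Prolog, the predicate variables P, Q, and R would additionally carry types, which could prune the choice of substitutions during search.

% metarule(Name, PredVars, Head, Body): the "chain" metarule
%   P(A,B) :- Q(A,C), R(C,B), with atoms written as [Pred|Args].
metarule(chain, [P,Q,R], [P,A,B], [[Q,A,C], [R,C,B]]).

% Toy background knowledge (hypothetical example).
background(mother).
background(father).
mother(ann, amy).
father(amy, bob).

% prove(+Atoms, +ProgIn, -ProgOut): prove each atom either directly
% from the background knowledge or by instantiating a metarule,
% accumulating the hypothesised metarule substitutions in the program.
prove([], Prog, Prog).
prove([Atom|Atoms], Prog0, Prog) :-
    prove_one(Atom, Prog0, Prog1),
    prove(Atoms, Prog1, Prog).

prove_one([P|Args], Prog, Prog) :-      % background case
    background(P),
    Goal =.. [P|Args],
    call(Goal).
prove_one([P|Args], Prog0, Prog) :-     % inductive case
    metarule(Name, PredVars, [P|Args], Body),
    bind_sub(sub(Name, PredVars), Prog0, Prog1),
    prove(Body, Prog1, Prog).

% Reuse an existing substitution or hypothesise a new one.
bind_sub(Sub, Prog, Prog) :- member(Sub, Prog).
bind_sub(Sub, Prog, [Sub|Prog]) :- \+ member(Sub, Prog).

% A learned program is any set of substitutions that proves the
% positive examples, e.g.
% ?- prove([[grandparent, ann, bob]], [], Prog).
% Prog = [sub(chain, [grandparent, mother, father])].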

 

Supervisors: James Cheney & Vaishak Belle