A defeasible reasoning model of inductive concept learning from examples and communication

Santiago Ontañón, Pilar Dellunde, Lluís Godo, Enric Plaza

Research output: Contribution to journal › Article › Research › peer-review

9 Citations (Scopus)

Abstract

This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, like ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing that they correspond to a rather well-behaved non-monotonic logic. We also show that, with the addition of a preference relation on inductive theories, we can characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation) to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from the arguments exchanged in communication between them. We show that the inductive theories achieved by multiagent induction plus argumentation are sound, i.e. they are precisely the same as the inductive theories built by a single agent with all the data. © 2012 Elsevier B.V.
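To illustrate the kind of task the abstract refers to, the following is a minimal sketch of a classic inductive concept learning algorithm (Find-S-style specific-to-general generalization). It is an assumption-laden illustration of ICL in general, not the defeasible consequence relation or the argumentation protocol defined in the paper; all names and example data are hypothetical.

```python
# Illustrative sketch of inductive concept learning (Find-S style).
# A hypothesis is a tuple of attribute values, where '?' means
# "any value is acceptable" for that attribute. The learner starts
# from the first positive example (most specific hypothesis) and
# generalizes just enough to cover each further positive example.

def find_s(positive_examples):
    """Return the most specific hypothesis covering all positive examples."""
    hypothesis = list(positive_examples[0])
    for example in positive_examples[1:]:
        for i, value in enumerate(example):
            if hypothesis[i] != value:
                hypothesis[i] = '?'  # generalize the conflicting attribute
    return tuple(hypothesis)

# Hypothetical positive examples of a 'bird' concept,
# with attributes (has_feathers, lays_eggs, flies):
examples = [
    ('yes', 'yes', 'yes'),
    ('yes', 'yes', 'no'),   # e.g. a penguin
]
print(find_s(examples))  # -> ('yes', 'yes', '?')
```

The generalization step here is monotone in one direction only, which is part of what makes induction defeasible: a later counterexample can force the learner to retract a hypothesis that earlier data supported.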
Original language: English
Pages (from-to): 129-148
Journal: Artificial Intelligence
Volume: 193
DOIs
Publication status: Published - 1 Dec 2012

Keywords

  • Argumentation
  • Concept learning
  • Induction
  • Logic
  • Machine learning
