Access Restriction Subscribed

Author Kearns, Michael ♦ Li, Ming ♦ Valiant, Leslie
Source ACM Digital Library
Content type Text
Publisher Association for Computing Machinery (ACM)
File Format PDF
Copyright Year ©1994
Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Inductive inference ♦ Machine learning
Abstract Efficient distribution-free learning of Boolean formulas from positive and negative examples is considered. It is shown that classes of formulas that are efficiently learnable from only positive examples or only negative examples have certain closure properties. A new substitution technique is used to show that in the distribution-free case learning DNF (disjunctive normal form formulas) is no harder than learning monotone DNF. We prove that monomials cannot be efficiently learned from negative examples alone, even if the negative examples are uniformly distributed. It is also shown that, if the examples are drawn from uniform distributions, then the class of DNF in which each variable occurs at most once is efficiently weakly learnable (i.e., individual examples are correctly classified with a probability larger than 1/2 + 1/p, where p is a polynomial in the relevant parameters of the learning problem). We then show an equivalence between the notion of weak learning and the notion of group learning, where a group of examples of polynomial size, either all positive or all negative, must be correctly classified with high probability.
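
One direction of the weak-learning/group-learning equivalence mentioned in the abstract admits a simple illustration: a hypothesis that classifies single examples correctly with probability 1/2 + 1/p yields a group learner by taking a majority vote over the group, since by a Chernoff bound the majority over a polynomial-size group is correct with high probability. The Python sketch below simulates this direction; the weak_classify stub and the parameter values are illustrative assumptions, not the paper's construction.

import random

def weak_classify(true_label: int, gamma: float) -> int:
    # Simulated weak hypothesis: correct with probability 1/2 + gamma.
    return true_label if random.random() < 0.5 + gamma else 1 - true_label

def group_classify(true_label: int, group_size: int, gamma: float) -> int:
    # Majority vote of the weak hypothesis over a same-label group.
    votes = sum(weak_classify(true_label, gamma) for _ in range(group_size))
    return 1 if votes > group_size / 2 else 0

# With gamma = 1/p, a group of size O(p^2 log(1/delta)) is classified
# correctly with probability at least 1 - delta (Chernoff bound).
# gamma, group_size, and trials below are arbitrary illustrative values.
gamma, group_size, trials = 0.05, 2000, 1000
correct = sum(group_classify(1, group_size, gamma) == 1 for _ in range(trials))
print(f"group accuracy: {correct / trials:.3f}")

Running this, the per-example accuracy of 0.55 is amplified to a group accuracy near 1.0, which is the intuition behind treating weak learning and group learning as equivalent notions.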
ISSN 0004-5411
Age Range 18 to 22 years ♦ above 22 years
Educational Use Research
Education Level UG and PG
Learning Resource Type Article
Publisher Date 1994-11-01
Publisher Place New York
e-ISSN 1557-735X
Journal Journal of the ACM (JACM)
Volume Number 41
Issue Number 6
Page Count 31
Starting Page 1298
Ending Page 1328

