2.2 Guarantees for finite hypothesis sets — consistent case
In the case of the axis-aligned rectangles we examined, the algorithm always returns a hypothesis $h_S$ that is consistent, i.e., one that admits no error on the training sample $S$. In this section, we present a general sample complexity bound, or equivalently a generalization bound, for consistent hypotheses, expressed in terms of $|H|$ when the hypothesis set $H$ is finite. Since we consider consistent hypotheses, we will assume that the target concept $c$ is in $H$.
Theorem 2.1 Learning bound — finite $H$, consistent case
Let $H$ be a finite set of functions mapping from $X$ to $Y$. Let $A$ be an algorithm that, for any target concept $c \in H$ and i.i.d. sample $S$, returns a consistent hypothesis $h_S$: $\widehat{R}(h_S) = 0$. Then, for any $\epsilon, \delta > 0$, the inequality
$$\Pr_{S \sim D^m}\big[R(h_S) \le \epsilon\big] \ge 1 - \delta$$
holds if
$$m \ge \frac{1}{\epsilon}\left(\log |H| + \log \frac{1}{\delta}\right). \tag{2.8}$$
This sample complexity result admits the following equivalent statement as a generalization bound: for any $\epsilon, \delta > 0$, with probability at least $1 - \delta$,
$$R(h_S) \le \frac{1}{m}\left(\log |H| + \log \frac{1}{\delta}\right). \tag{2.9}$$
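As a quick numerical illustration (not part of the original text), the two bounds can be evaluated directly. The helper names `sample_complexity` and `generalization_bound` below are hypothetical; they simply compute the right-hand sides of (2.8) and (2.9) for given values of $|H|$, $\epsilon$, $\delta$, and $m$.

```python
import math

def sample_complexity(H_size, eps, delta):
    """Smallest integer m satisfying (2.8): m >= (1/eps) * (log|H| + log(1/delta))."""
    return math.ceil((math.log(H_size) + math.log(1.0 / delta)) / eps)

def generalization_bound(H_size, m, delta):
    """Right-hand side of (2.9): (1/m) * (log|H| + log(1/delta))."""
    return (math.log(H_size) + math.log(1.0 / delta)) / m

# With |H| = 2^20 hypotheses, target error eps = 0.05 and confidence 1 - delta = 0.99,
# about 370 i.i.d. examples suffice for the guarantee.
print(sample_complexity(2**20, eps=0.05, delta=0.01))

# Conversely, with m = 10,000 samples, the error of a consistent hypothesis is
# guaranteed (with probability at least 0.99) to be below roughly 0.0019.
print(generalization_bound(2**20, m=10_000, delta=0.01))
```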
Proof: Fix $\epsilon > 0$. We do not know which consistent hypothesis $h_S \in H$ is selected by the algorithm $A$; this hypothesis further depends on the training sample $S$. Therefore, we need a uniform convergence bound, that is, a bound that holds for the set of all consistent hypotheses, which in particular includes $h_S$. Thus, we bound the probability that some $h \in H$ is consistent and has error more than $\epsilon$:
$$\Pr\big[\exists h \in H : \widehat{R}_S(h) = 0 \wedge R(h) > \epsilon\big] \le \sum_{h \in H} \Pr\big[\widehat{R}_S(h) = 0 \wedge R(h) > \epsilon\big] \le \sum_{h \in H} \Pr\big[\widehat{R}_S(h) = 0 \mid R(h) > \epsilon\big],$$
where the first inequality follows from the union bound and the second from the definition of conditional probability.
Now, consider any hypothesis $h \in H$ with $R(h) > \epsilon$. Then, the probability that $h$ is consistent on a training sample $S$ drawn i.i.d., that is, that it has no error on any point of $S$, can be bounded as follows:
$$\Pr\big[\widehat{R}_S(h) = 0 \mid R(h) > \epsilon\big] \le (1 - \epsilon)^m.$$
The previous inequality implies that
$$\Pr\big[\exists h \in H : \widehat{R}_S(h) = 0 \wedge R(h) > \epsilon\big] \le |H| (1 - \epsilon)^m \le |H| e^{-m\epsilon},$$
where the last step uses the general inequality $1 - x \le e^{-x}$.
Setting the right-hand side equal to $\delta$ and solving for $\epsilon$ concludes the proof. ∎
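To spell out that last step (a small expansion not present in the original argument), setting $|H| e^{-m\epsilon} = \delta$ and solving gives
$$e^{-m\epsilon} = \frac{\delta}{|H|} \iff m\epsilon = \log \frac{|H|}{\delta} \iff \epsilon = \frac{1}{m}\left(\log |H| + \log \frac{1}{\delta}\right),$$
which is exactly the generalization bound (2.9); solving the same equation for $m$ instead yields the sample complexity condition (2.8).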
The theorem shows that when the hypothesis set $H$ is finite, a consistent algorithm $A$ is a PAC-learning algorithm, since the sample complexity given by (2.8) is dominated by a polynomial in $1/\epsilon$ and $1/\delta$. As shown by (2.9), the generalization error of consistent hypotheses is upper bounded by a term that decreases as a function of the sample size $m$. This is a general fact: as expected, learning algorithms benefit from larger labeled training samples. The decrease rate of $O(1/m)$ guaranteed by this theorem, however, is particularly favorable. The price to pay for coming up with a consistent algorithm is the use of a larger hypothesis set $H$ containing the target concept. Of course, the upper bound (2.9) increases with $|H|$; however, that dependency is only logarithmic. Note that the term $\log |H|$, or the related term $\log_2 |H|$ from which it differs only by a constant factor, can be interpreted as the number of bits needed to represent $H$. Thus, the generalization guarantee of the theorem is controlled by the ratio of this number of bits, $\log_2 |H|$, and the sample size $m$.
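To make the logarithmic dependence concrete (an illustration of mine, reusing the hypothetical helper from the earlier sketch so the snippet stays self-contained): doubling $|H|$ adds only about $\log 2 / \epsilon$ extra required samples, whereas halving $\epsilon$ roughly doubles the requirement.

```python
import math

def sample_complexity(H_size, eps, delta):
    # m >= (1/eps) * (log|H| + log(1/delta)), as in (2.8)
    return math.ceil((math.log(H_size) + math.log(1.0 / delta)) / eps)

eps, delta = 0.05, 0.01
base = sample_complexity(2**20, eps, delta)             # ~370 samples
print(sample_complexity(2**21, eps, delta) - base)      # doubling |H|: ~log(2)/eps, about 14 extra samples
print(sample_complexity(2**20, eps / 2, delta) - base)  # halving eps: ~369 extra, i.e. roughly doubles m
```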