The base case of [tex]n=1[/tex] is trivially true, since
[tex]\displaystyle P\left(\bigcup_{i=1}^1 E_i\right) = P(E_1) = \sum_{i=1}^1 P(E_i)[/tex]
but I think the case of [tex]n=2[/tex] may be more convincing in this role. By the inclusion/exclusion principle, we have
[tex]\displaystyle P\left(\bigcup_{i=1}^2 E_i\right) = P(E_1 \cup E_2) \\\\ P\left(\bigcup_{i=1}^2 E_i\right) = P(E_1) + P(E_2) - P(E_1 \cap E_2) \\\\ P\left(\bigcup_{i=1}^2 E_i\right) \le P(E_1) + P(E_2) \\\\ P\left(\bigcup_{i=1}^2 E_i\right) \le \sum_{i=1}^2 P(E_i)[/tex]
with equality if [tex]E_1\cap E_2=\emptyset[/tex].
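As a quick numerical sanity check of the [tex]n=2[/tex] step, here is a sketch on a hypothetical finite sample space (a single fair six-sided die roll, with events chosen so they overlap); the event sets `E1` and `E2` are my own illustrative choices, not from the problem:

```python
from fractions import Fraction

# Hypothetical sample space: one roll of a fair six-sided die.
omega = set(range(1, 7))

def P(event):
    # Uniform probability measure on omega, kept exact with Fraction.
    return Fraction(len(event & omega), len(omega))

E1 = {1, 2, 3}   # "roll at most 3"
E2 = {3, 4}      # "roll a 3 or 4" -- overlaps E1 at 3

# Inclusion/exclusion holds exactly:
assert P(E1 | E2) == P(E1) + P(E2) - P(E1 & E2)

# Dropping the nonnegative intersection term gives the bound:
print(P(E1 | E2), "<=", P(E1) + P(E2))  # 2/3 <= 5/6
```

With disjoint events (say `E2 = {4, 5}`) the intersection term vanishes and the bound becomes an equality, matching the remark above.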
Now assume the claim is true for [tex]n=k[/tex]; that is,
[tex]\displaystyle P\left(\bigcup_{i=1}^k E_i\right) \le \sum_{i=1}^k P(E_i)[/tex]
We want to use this to prove the claim for [tex]n=k+1[/tex], that
[tex]\displaystyle P\left(\bigcup_{i=1}^{k+1} E_i\right) \le \sum_{i=1}^{k+1} P(E_i)[/tex]
The inclusion/exclusion principle tells us
[tex]\displaystyle P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) = P\left(\left(\bigcup\limits_{i=1}^k E_i\right) \cup E_{k+1}\right) \\\\ P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) = P\left(\bigcup\limits_{i=1}^k E_i\right) + P(E_{k+1}) - P\left(\left(\bigcup\limits_{i=1}^k E_i\right) \cap E_{k+1}\right)[/tex]
and since [tex]P\left(\left(\bigcup\limits_{i=1}^k E_i\right) \cap E_{k+1}\right) \ge 0[/tex], the same argument as in the [tex]n=2[/tex] case leads to
[tex]\displaystyle P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) = P\left(\bigcup\limits_{i=1}^k E_i\right) + P(E_{k+1}) - P\left(\left(\bigcup\limits_{i=1}^k E_i\right) \cap E_{k+1}\right) \\\\ P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) \le P\left(\bigcup\limits_{i=1}^k E_i\right) + P(E_{k+1})[/tex]
By the induction hypothesis, we have an upper bound for the probability of the union of [tex]E_1[/tex] through [tex]E_k[/tex], and the result follows:
[tex]\displaystyle P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) \le P\left(\bigcup\limits_{i=1}^k E_i\right) + P(E_{k+1}) \\\\ P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) \le \sum_{i=1}^k P(E_i) + P(E_{k+1}) \\\\ P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) \le \sum_{i=1}^{k+1} P(E_i)[/tex]
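The full union bound can also be spot-checked numerically. This sketch reuses the same hypothetical die-roll sample space and an arbitrary list of overlapping events of my own choosing:

```python
from fractions import Fraction
from functools import reduce

# Hypothetical sample space: one roll of a fair six-sided die.
omega = set(range(1, 7))

def P(event):
    # Uniform probability measure on omega.
    return Fraction(len(event & omega), len(omega))

# Several deliberately overlapping events (illustrative choices).
events = [{1, 2}, {2, 3}, {3, 4, 5}, {1, 5}]

union = reduce(set.union, events, set())
lhs = P(union)                    # P(E_1 u ... u E_n)
rhs = sum(P(E) for E in events)   # sum of the individual probabilities

print(lhs, "<=", rhs)  # 5/6 <= 3/2
assert lhs <= rhs
```

Note the right-hand side can exceed 1 when the events overlap heavily, which is why the union bound is an inequality rather than an equality in general.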