
Families of Sets and ‘Generalized’ Set Operations

Standard post by ancoop on October 12, 2018
Comments are off for this post

Finite Set Operations

The operations $\cup$ and $\cap$ apply to two sets; we can, however, use them to combine any finite number of sets. If we have a finite collection of sets $A_1,A_2,\ldots,A_N$, we write

\begin{equation}\begin{aligned}\bigcup_{i=1}^N A_i&=A_1\cup A_2\cup \cdots\cup A_N\\\bigcap_{i=1}^N A_i&=A_1\cap A_2\cap \cdots\cap A_N\end{aligned}\end{equation}
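In code, the big-operator notation is just a fold of the binary operation over the collection. A minimal Python sketch, with illustrative sets:

```python
from functools import reduce

# A finite collection A_1, ..., A_N (the particular sets are illustrative)
A = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]

# Folding the two-set operations over the list is exactly what the
# big-union and big-intersection notation describes
big_union = reduce(set.union, A)                # A_1 ∪ A_2 ∪ ... ∪ A_N
big_intersection = reduce(set.intersection, A)  # A_1 ∩ A_2 ∩ ... ∩ A_N

print(big_union)         # {1, 2, 3, 4, 5}
print(big_intersection)  # {3}
```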

Activity. Give a better definition of $\displaystyle\bigcup_{i=1}^NA_i$ and $\displaystyle\bigcap_{i=1}^NA_i$ . (Hint. What did we say about definitions involving $\cdots$?)

De Morgan’s Laws. For any finite collection of sets $A_1,\ldots, A_N$,

\begin{align*}\left(\bigcup_{i=1}^N A_i\right)^c&=\bigcap_{i=1}^NA_i^c\\\left(\bigcap_{i=1}^N A_i\right)^c&=\bigcup_{i=1}^NA_i^c \end{align*}

Exercise. Prove this version of de Morgan’s Laws.
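The laws can also be sanity-checked numerically inside a finite universe; the universe and sets below are illustrative:

```python
from functools import reduce

U = set(range(10))                    # an illustrative finite universe
A = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]

def c(S):
    """Complement relative to the universe U."""
    return U - S

union = reduce(set.union, A)
intersection = reduce(set.intersection, A)

# (⋃ A_i)^c = ⋂ A_i^c  and  (⋂ A_i)^c = ⋃ A_i^c
assert c(union) == reduce(set.intersection, [c(S) for S in A])
assert c(intersection) == reduce(set.union, [c(S) for S in A])
print("De Morgan's laws check out on this example")
```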

Transfinite Operations

There is no reason to restrict our attention to {\em finite} collections of sets. For example, we might consider

the collection of intervals of the form $[n,n+1]$, where $n$ is allowed to be any natural number,

or perhaps

the set of all disks centered at the origin

each of which is an infinite collection of sets. We usually use set-builder notation to write something like

\begin{equation}\left\{[n,n+1]\middle\vert n\in\mathbb{N}\right\}\end{equation}


or

\begin{equation}\left\{D_r\middle\vert r\in(0,\infty) \right\}\end{equation}

to emphasize that what we are dealing with is a(n infinite) set of sets. (Here we have written $D_r=\left\{(x,y)\middle\vert x^2+y^2<r^2\right\}$ for the disk of radius $r$ centered at the origin.) We call the set to the right of the pipe the index set: in these collections the index sets are $\mathbb{N}$ and the interval $(0,\infty)$, respectively.

We can define unions and intersections of such infinite collections of sets by recalling that $\exists$ acts like a souped-up version of $\vee$ and $\forall$ acts like a souped-up version of $\wedge$.

Given a collection of sets $\mathcal{A}=\left\{A_\lambda\middle\vert \lambda\in\Lambda\right\}$, we set

Definition. \begin{align*}\bigcup\mathcal{A}&=\bigcup_{\lambda\in\Lambda}A_\lambda=\left\{x\middle\vert \exists\lambda\in\Lambda:x\in A_\lambda\right\}\\\bigcap\mathcal{A}&=\bigcap_{\lambda\in\Lambda}A_\lambda=\left\{x\middle\vert \forall\lambda\in\Lambda,x\in A_\lambda\right\}\end{align*}
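The correspondence between $\exists$, $\forall$ and a big disjunction or conjunction is direct in code: Python’s any and all play the roles of $\exists$ and $\forall$. A sketch with a small illustrative family (a program can only handle a finite index set, so a finite $\Lambda$ stands in):

```python
Lam = range(1, 6)        # index set Λ = {1, ..., 5} (illustrative)

def A(lam):
    """The member A_λ of the family; here A_λ = {λ, λ+1} as a toy example."""
    return {lam, lam + 1}

# Transcribing the definitions: x is in the union iff ∃λ: x ∈ A_λ,
# and in the intersection iff ∀λ: x ∈ A_λ
candidates = set().union(*(A(lam) for lam in Lam))
union = {x for x in candidates if any(x in A(lam) for lam in Lam)}
intersection = {x for x in candidates if all(x in A(lam) for lam in Lam)}

print(union)         # {1, 2, 3, 4, 5, 6}
print(intersection)  # set(): no single x lies in every A_λ
```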

E.g. $\displaystyle\bigcup_{n\in\mathbb{N}} [n,n+1]=[1,\infty)$

Proof. Let $x\in[1,\infty)$. Then there is some $n\in\mathbb{N}$ with $n\leq x\leq n+1$ (namely $n=\lfloor x\rfloor$), so $x\in[n,n+1]$. Thus $\displaystyle x\in\bigcup_{n\in\mathbb{N}} [n,n+1]$.

Now let $\displaystyle x\in\bigcup_{n\in\mathbb{N}} [n,n+1]$. Then there is some $n\in\mathbb{N}$ with $x\in[n,n+1]$. But $[n,n+1]\subseteq [1,\infty)$, so $x\in[1,\infty)$. $\Box$

E.g. $\displaystyle\bigcap_{r\in(0,\infty)} D_r=\{(0,0)\}$

Proof. Certainly the origin $(0,0)$ is in each disk $D_r$. We will show it is the only such point.

Consider any point $(x,y)$ other than the origin, and let $r_1=\sqrt{x^2+y^2}>0$ be the distance between $(x,y)$ and the origin. If we select $r=\frac{1}{2}r_1$, then $x^2+y^2=r_1^2>r^2$, so $(x,y)\notin D_r$. Thus $(x,y)$ cannot be in every $D_r$. $\Box$

De Morgan’s Laws. For any collection $\{A_\lambda|\lambda\in\Lambda\}$, we have \begin{align*}\left(\bigcup_{\lambda\in\Lambda} A_\lambda\right)^c&=\bigcap_{\lambda\in\Lambda}A_\lambda^c\\ \left(\bigcap_{\lambda\in\Lambda} A_\lambda\right)^c&=\bigcup_{\lambda\in\Lambda}A_\lambda^c \end{align*}

Exercise. Prove this version of de Morgan’s Laws.

Exercise. What is each of the following sets?

  1. $\displaystyle\bigcap_{n\in\mathbb{N}} [n,n+1]$
  2. $\displaystyle\bigcup_{r\in(0,\infty)} D_r$
  3. $\displaystyle\bigcup_{n\in\mathbb{N}}\left[\frac{1}{n},1\right]$
  4. $\displaystyle\bigcap_{n\in\mathbb{N}}\left[\frac{1}{n},1\right]$

Exercise. Express each set below as either a generalized union or a generalized intersection, and also express it in set-builder notation.

  1. $\displaystyle B\setminus\left(\bigcap_{\lambda\in\Lambda} A_\lambda\right)$
  2. $\displaystyle B\setminus\left(\bigcup_{\lambda\in\Lambda} A_\lambda\right)$
  3. $\displaystyle \left(\bigcap_{\lambda\in\Lambda} A_\lambda\right)\setminus B$
  4. $\displaystyle \left(\bigcup_{\lambda\in\Lambda} A_\lambda\right)\setminus B$

Set-Builder Notation

Standard post by ancoop on October 10, 2018

Recall that, by definition, $S$ being a set is equivalent to “$x\in S$” being an open sentence. This means that the theory of sets and the theory of logic are basically the same, differing only by notation and perspective. One can exploit this correspondence by using open sentences to define sets. Given an open sentence $P(t)$, we can define the corresponding set $S_P$, which consists of all values $t$ for which $P(t)$ is true. We write $$S_P=\left\{t\middle\vert P(t)\right\}$$ and read aloud “$S_P$ is the set of all $t$ such that $P(t)$.” In other words, $S_P$ is the set such that $x\in S_P$ means exactly $P(x)$. This way of defining sets has the unfortunately puerile name set-builder notation. It’s how most sets in most mathematics texts are defined. By way of example, we’ll rephrase some previous definitions.


  • $A^c=\left\{x\middle\vert x\notin A\right\}$
  • $A\cup B= \left\{p\middle\vert p\in A\vee p\in B\right\}$
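Set-builder notation has a close computational analogue in Python’s set comprehensions: the open sentence $P$ becomes a boolean function, and membership in $S_P$ means exactly that $P$ holds. A sketch over a finite universe of discourse (the particular predicate is illustrative):

```python
def P(t):
    """An open sentence: 't is a perfect square'."""
    return int(t ** 0.5) ** 2 == t

# S_P = { t | P(t) }, sampled over a finite universe of discourse
universe = range(100)
S_P = {t for t in universe if P(t)}

# "x ∈ S_P means exactly P(x)"
assert all((x in S_P) == P(x) for x in universe)
print(sorted(S_P))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```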

Activity. Complete the list of set operations by giving set-builder descriptions of each of the following set operations.

  • $A\cap B$
  • $A\setminus B$
  • $\left(A\setminus B\right)\cup\left(B\setminus A\right)$
  • $2^B$

Exercise. Consider two sets, $A=\left\{x\middle\vert P(x)\right\}$ and $B=\left\{t\middle\vert Q(t)\right\}$. What can we say about the open sentences $P$ and $Q$ if we know each of the following facts about the sets $A$ and $B$?

  • $A=B$
  • $A\subseteq B$
  • $A=\varnothing$
  • $B$ is the universe

The last exercise referenced the universe, which naturally leads to the question of how to specify the universe in set-builder notation. An even number is an integer which is twice some integer. So we could write

Definition. \begin{equation*}\left\{n\middle\vert n\in\mathbb{Z}\wedge\exists k:\left(k\in \mathbb{Z}\wedge n=2k\right) \right\}\end{equation*}

But the right-hand side (after the pipe symbol $\vert$) is a little crowded. Pulling the same trick as before, we can handle the conditions $n\in\mathbb{Z}$ and $k\in\mathbb{Z}$ by changing the universe. We do this as follows:

Definition. The set of even numbers is the set \begin{equation*}\left\{n\in\mathbb{Z}\middle\vert \exists k\in\mathbb{Z}:n=2k\right\}\end{equation*}

We would read this aloud “The set of all integers $n$ such that there is an integer $k$ with $n=2k$.” In this version of set-builder notation, the left-hand side (before the pipe) is about what kind of objects the elements of the set being defined are; the right-hand side gives a condition that describes the set.

There is yet another way to use set-builder notation to define a set, as exemplified:

Definition. \begin{equation*} \left\{2k\middle\vert k\in\mathbb{Z}\right\} \end{equation*}

We would read this aloud “The set of all numbers of the form $2k$, where $k$ is an integer.” Notice that $n$ has disappeared entirely from this version. In this version of set-builder notation, the left-hand side gives the general form of the elements of the set, and the right-hand side tells us which elements of that form actually occur in the set.
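Both versions can be mimicked with comprehensions. Sampling $\mathbb{Z}$ by a finite range (an unavoidable compromise in code), the condition form and the form-of-the-elements version produce the same set:

```python
Z = range(-10, 11)   # a finite window standing in for the integers

# Condition form: { n ∈ Z | ∃k ∈ Z : n = 2k }
evens_by_condition = {n for n in Z if any(n == 2 * k for k in Z)}

# Form-of-the-elements version: { 2k | k ∈ Z }
evens_by_form = {2 * k for k in range(-5, 6)}

assert evens_by_condition == evens_by_form
print(sorted(evens_by_form))  # [-10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10]
```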

Where are the quantifiers?

Learning to let go of your $x$s

An open sentence like $$E(x):\ \ \ \exists k: x=2k$$
has two variables and one quantifier. The two variables are used in quite different ways. First, notice that the only variable that’s eligible for plugging in to $E$ is $x$: we can sensibly ask Is it true that $E(2)$? and Is it true that $E(3)$? But substituting for $k$, like this: $$\exists 7: x=2\cdot 7$$ yields nonsense.

What’s going on here? Variables where we can plug in values and get something sensible (like $x$ here) are called free. The other kind of variable (like $k$ here) is called a bound variable, from the verb bind. The idea is that the quantifier $\exists$ has a binding effect on the variable. You can recognize bound variables in three ways:

  • Substituting for a bound variable results in nonsense; substituting for a free variable does not.
  • Bound variables almost always occur exactly twice; free variables occur once.
  • There is always a way to read a mathematical formula containing a bound variable in a way that makes the bound variable go away.

I’ll illustrate these checks by giving some examples of bound variables, one of which you’ve seen before:

Consider the integral $$\int_a^b x^2\ dx$$ The variables $a$ and $b$ are free; the variable $x$ is bound.

  • We could substitute in for $a$ and $b$: $\int_1^3x^2\ dx$. If we substitute for $x$, we get $\int_a^b 7^2\ d7$, which is nonsense.
  • The variable $x$ occurs twice: once in the integrand and once in the differential.
  • We could read this integral aloud as “The integral, from $a$ to $b$, of the squaring function.”

Consider the (true) statement $$\forall x, x^2<9 \Rightarrow x^3<27$$ Here, the variable $x$ is bound and there are no free variables.

  • If we tried to substitute, we’d get something like $\forall 4, 4^2<9 \Rightarrow 4^3<27$, which doesn’t make sense.
  • The variable $x$ occurs twice: once in the open sentence $x^2<9 \Rightarrow x^3<27$, and once in the quantifier.
  • We could read this aloud as “If a number’s square is less than 9, its cube is less than 27.”

One very important trick to do with a bound variable is to replace it, across the board, with another variable. I call this letting go of your $x$s. So for example, in an integral we can change the integrating variable at whim: $$\int_1^3 x^2\ dx=\frac{26}{3}=\int_1^3 y^2\ dy$$ We can do the same with quantified statements and open sentences: $$\forall x, x^2<9 \Rightarrow x^3<27\text{ means the same thing as } \forall q, q^2<9 \Rightarrow q^3<27$$

What does all this have to do with sets and set-builder notation? Observe that the variables that appear in set-builder notation are almost all bound, which we can tell because they occur twice (once on each side of the pipe). For example:$$\left\{x\middle\vert -1<x<1\right\}$$ is the same set as $$\left\{g\middle\vert -1<g<1\right\}$$ which in turn is the same set as $$\left\{t\middle\vert -1<t<1\right\}$$

The point is that the particular choice of variable $x$ or $y$ or $Z$ or $p$ or $\varphi$ has no significance at all. By way of example, if $A=\left\{ x \middle\vert x^2 \in[1,3)\right\}$, then $$t\in A\text{ means that } t^2\in[1,3)$$ $$F\in A\text{ means that } F^2\in[1,3)$$
In particular, our definition of $A$ does not create any particular link between the letter $x$ and the set $A$. We could just as well have defined $A=\left\{ z \middle\vert z^2 \in[1,3)\right\}$.
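Python set comprehensions behave the same way: the comprehension variable is bound, so renaming it across the board yields the same set. A quick sketch, with a finite sample standing in for the reals:

```python
sample = [n / 10 for n in range(-20, 21)]   # finite stand-in for the reals

# {x | -1 < x < 1} written three ways, differing only in the bound variable
A_with_x = {x for x in sample if -1 < x < 1}
A_with_g = {g for g in sample if -1 < g < 1}
A_with_t = {t for t in sample if -1 < t < 1}

assert A_with_x == A_with_g == A_with_t
print("the choice of bound variable has no significance")
```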

The Element’s-Eye View

Standard post by ancoop on October 10, 2018

There are three things to keep in mind when it comes to set-theory proofs (which most of our proofs hereafter will be):

  1. Sets are defined by their membership. Because of this, we nearly always take an element’s-eye view of any proof involving sets. Notice that in the proofs we presented above, the first thing we did to get going was say something like “Let $x\in\ldots$”.
  2. Generally, the way to go to prove two sets are equal is to show (separately) that each is a subset of the other.
  3. Keep in mind what your goal is. That is, pay attention to the consequent. Your goal is to translate from the language of the consequent into the language of the antecedent.

These three principles are not only good advice for writing a proof, but they are in fact good advice for {\em figuring out} a proof as well. The following examples will illustrate principles 1 and 3.

Theorem. If $A$ and $B$ are sets with $2^A\subseteq 2^B$, then $A\subseteq B$.

Proof. Let $A$ and $B$ be sets with $2^A\subseteq 2^B$. Since we have to show $A\subseteq B$, let $x\in A$. Then $\{x\}\subseteq A$, so $\{x\}\in 2^A$. Since $2^A\subseteq 2^B$, we have $\{x\}\in 2^B$. But this means $\{x\}\subseteq B$. Since $x\in\{x\}$, we see that $x\in B$.

So we’ve shown every $x\in A$ also has $x\in B$, i.e. $A\subseteq B$. $\Box$

Theorem. If $A$ and $B$ are sets with $A\subseteq B$, then $2^A\subseteq 2^B$.

Proof. Let $A$ and $B$ be sets with $A\subseteq B$. Since we have to show $2^A\subseteq 2^B$, let $x\in 2^A$. Then $x\subseteq A$. Since $A\subseteq B$, and set inclusion is transitive, we have $x\subseteq B$. But this is exactly the statement that $x\in 2^B$.

So we’ve shown that every $x\in 2^A$ also has $x\in 2^B$, i.e. $2^A\subseteq 2^B$. $\Box$

Notice that in both cases, we were concerned with the {\em consequent} of the conditional we’d been asked to show (principle 3), and that we proceeded by following elements (principle 1). In the first proof, we translated from the language of elements of $A$ and $B$ to the language of subsets of $A$ and $B$ to the language of elements of $2^A$ and $2^B$, applied our assumption, and translated back. In the second proof, we translated from the language of elements of $2^A$ and $2^B$ to the language of subsets of $A$ and $B$, applied our assumptions, and translated back.
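Both theorems can be spot-checked on small finite sets. The sketch below builds powersets with itertools.combinations; frozenset is used because a Python set cannot contain ordinary (mutable) sets as elements:

```python
from itertools import combinations

def powerset(S):
    """All subsets of S, as a set of frozensets."""
    elems = list(S)
    return {frozenset(c)
            for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

A, B = {1, 2}, {1, 2, 3}
assert A <= B                          # A ⊆ B ...
assert powerset(A) <= powerset(B)      # ... and 2^A ⊆ 2^B, as the theorems say

# The singleton trick from the first proof: x ∈ A gives {x} ∈ 2^A
assert frozenset({1}) in powerset(A)
print("A ⊆ B and 2^A ⊆ 2^B agree on this example")
```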

Activity. Consider the following couplet:

Heffalumps and woozles are very confuzle.
A heffalump or woozle’s very sly.

Explain why both lines are talking about the same thing as follows. First, give a mathematical name to what they describe. Then explain why each line takes a different perspective on that mathematical object, and why those perspectives are not contradictory, the difference between “and” and “or” notwithstanding.


Set Operations and Subsets

Standard post by ancoop on October 10, 2018

Just as we defined logical formulae by giving truth tables, we can define set formulae by giving a criterion for membership.

Definition. Given sets $A$, $B$, we define

  •  the complement of $A$, $A^c$, by the rule $x\in A^c$ iff $x\notin A$.
  •  the intersection of $A$ and $B$, $A\cap B$, by the rule $x\in A\cap B$ iff $x\in A$ and $x\in B$.
  •  the union of $A$ and $B$, $A\cup B$, by the rule $x\in A\cup B$ iff $x\in A$ or $x\in B$.
  •  the difference of $A$ and $B$, $A\setminus B$, by the rule $x\in A\setminus B$ iff $x\in A$ and $x\notin B$.

The set operations ${}^c, \cap, \cup, \setminus$ take in sets and produce sets. So we can ask, just as we asked for the logical operations, whether these operations commute, distribute, associate, etc. We’d like to prove the following theorem:

Theorem. For sets $A$, $B$, $C$,

  1. $\left(A\cup B\right)\cup C=A\cup\left(B\cup C\right)$ and $\left(A\cap B\right)\cap C=A\cap\left(B\cap C\right)$, i.e. union and intersection are associative.
  2. $A\cup B=B\cup A$ and $A\cap B=B\cap A$ (i.e. union and intersection are commutative).
  3. $A\cup\left(B\cap C\right)=\left(A\cup B\right)\cap \left(A\cup C\right)$ and $A\cap\left(B\cup C\right)=\left(A\cap B\right)\cup \left(A\cap C\right)$ (i.e. union and intersection each distributes over the other)
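Before proving the theorem, the laws can be checked exhaustively on a small universe; the sketch below tries every triple of subsets of $\{1,2\}$. (A finite check like this is evidence, not a proof.)

```python
from itertools import product

# Every subset of {1, 2}, as hashable frozensets
U = [frozenset(s) for s in (set(), {1}, {2}, {1, 2})]

for A, B, C in product(U, repeat=3):     # all 64 triples
    assert (A | B) | C == A | (B | C)            # 1. ∪ is associative
    assert (A & B) & C == A & (B & C)            #    ∩ is associative
    assert A | B == B | A and A & B == B & A     # 2. both are commutative
    assert A | (B & C) == (A | B) & (A | C)      # 3. ∪ distributes over ∩
    assert A & (B | C) == (A & B) | (A & C)      #    ∩ distributes over ∪
print("all three laws hold for every triple of subsets of {1, 2}")
```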

Exercise. Write a formula (in terms of ${}^c,\cup,\cap,\setminus$) which corresponds to exclusive {\em or}. That is, give an expression for $S$ so that $x\in S$ iff $x\in A$ exclusive or $x\in B$.

To prove a theorem like the one above, we need to understand what it means for two sets to equal each other. Since sets are defined by their elements, two sets should be equal if they have the same elements. That is,

Definition. $A=B$ means $x\in A$ iff $x\in B$.

The fact that $P\Leftrightarrow Q$ is equivalent to $\left(P\Rightarrow Q\right)\wedge\left(Q\Rightarrow P\right)$ means we could just as well say that

Definition. $A=B$ means $x\in A$ guarantees $x\in B$ and $x\in B$ guarantees $x\in A$.

We’ll prove the commutativity of $\cup$ as an example.

Proof that $A\cup B=B\cup A$. We need to show that $x\in \left(A\cup B\right)$ guarantees $x\in \left(B\cup A\right)$. So let $x\in\left(A\cup B\right)$. By definition of union, this means $(x\in A)\vee (x\in B)$, which is equivalent to $(x\in B)\vee (x\in A)$. By definition of union, this means $x\in \left(B\cup A\right)$.

We also need to show that $x\in \left(B\cup A\right)$ guarantees $x\in \left(A\cup B\right)$. So let $x\in\left(B\cup A\right)$. By definition of union, this means $(x\in B)\vee (x\in A)$, which  is equivalent to $(x\in A)\vee (x\in B)$. By definition of union, this means $x\in \left(A\cup B\right)$. $\Box$.

Definition. We write $A\subseteq B$ (read “$A$ is a subset of $B$”) if $x\in A\Rightarrow x\in B$.

So we finally have our most useful equivalent definition of set equality:

Definition. $A=B$ iff $A\subseteq B$ and $B\subseteq A$.

Theorem. Set inclusion is transitive. That is, if $A$, $B$, and $C$ are sets with $A\subseteq B$ and $B\subseteq C$, then $A\subseteq C$.

Proof. Let $A$ and $B$ and $C$ be sets with $A\subseteq B$ and $B\subseteq C$. To see that $A\subseteq C$, let $x\in A$. Since $A\subseteq B$, and $x\in A$, $x\in B$. Since $B\subseteq C$ and $x\in B$, $x\in C$.

Thus we’ve shown that every $x\in A$ is guaranteed to have $x\in C$, so $A\subseteq C$. $\Box$

Theorem. If $A$, $B$, and $C$ are sets with $A\subseteq B$, $B\subseteq C$, and $C\subseteq A$, then $A=B$ and $B=C$.

Proof. Let $A$, $B$, and $C$ be such sets. We’ll first show $A=B$. We have $A\subseteq B$ by assumption. For the reverse inclusion, apply the previous theorem to $B\subseteq C$ and $C\subseteq A$ to conclude $B\subseteq A$. So $A=B$.

Now we show $B=C$. We have $B\subseteq C$ by assumption, and applying the previous theorem to $C\subseteq A$ and $A\subseteq B$ gives $C\subseteq B$. So $B=C$. $\Box$

The Empty Set and Other Subsets

An important set for us will be the {\em empty set}. It is the set-theoretic equivalent of a contradiction.

Definition. The empty set is $\varnothing = \{ x\vert x\neq x\}$, the set of all things which are not equal to themselves.

Lemma. The empty set has no elements.

Proof. Suppose, for the purpose of obtaining a contradiction, that there were some $x\in\varnothing$. Then $x\neq x$. But this is absurd. So there can be no such $x$. $\Box$.

Theorem. For any set $A$, $\varnothing\subseteq A$.

Proof. To show $\varnothing \subseteq A$, we need to show that $x\in \varnothing \Rightarrow x\in A$.

So let $x\in \varnothing$. By definition of $\varnothing$, $x\neq x$. But there are no such $x$, so the statement $x\in\varnothing$ is false. Thus the conditional $x\in\varnothing\Rightarrow x\in A$ must be true. $\Box$
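The vacuous truth at the heart of this proof is visible in code: a universally quantified statement over the empty set holds because there is nothing to check.

```python
empty = set()
A = {1, 2, 3}

# "x ∈ ∅ ⇒ x ∈ A" for all x: the generator below runs zero iterations,
# so all(...) is True vacuously
assert all(x in A for x in empty)
assert empty <= A            # Python's own subset test agrees: ∅ ⊆ A
assert empty <= set()        # and ∅ ⊆ ∅
print("the empty set is a subset of every set")
```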

Theorem. The empty set is the unique set with no elements.

That is, not only does $\varnothing$ have no elements, but it’s also the case that {\em any} set with no elements must be $\varnothing$.

Proof. Let $A$ be a set with no elements. We’ll show that $\varnothing\subseteq A$ and $A\subseteq \varnothing$. The first was established by the previous theorem.

To see that $A\subseteq\varnothing$, we’ll show that $x\in A \Rightarrow x\in\varnothing$. But there are no $x\in A$, so this is a conditional with a false antecedent, hence automatically true.$\Box$

This is somewhat surprising, because it means, in particular, that the set of natural numbers with no successor and the set of married bachelors and the set of dogs with twelve million legs are all the same set (notwithstanding that one of them is a “set of numbers”, one is a “set of people”, and one is a “set of dogs”).

The Powerset

There’s one more axiom of sets we need, besides the empty set being a set. That’s

Powerset Axiom. For any set $A$, the collection of all subsets of $A$ is itself a set, called the powerset of $A$ and written $2^A$.

That is, $z\in 2^A$ means $z\subseteq A$.

E.g. Let $A=\{1,2\}$. The powerset of $A$ is $\{\varnothing, \{1\}, \{2\},\{1,2\}\}$.

Exercise. What is $2^\varnothing$? What is $2^{2^\varnothing}$?

Exercise. Can a powerset ever be empty?

Why the notation $2^A$?

Theorem. If $A$ is a finite set with $N$ elements, then $2^A$ is finite and has $2^N$ elements.

This proof is a counting proof: we need to know how many of something there are, so we just make sure we count everything exactly once. More on this later!

Proof. Each element of $2^A$ is a subset of $A$. So we really need to count the number of subsets of $A$.

Given a subset $S$ and an element $x$ of $A$, we can ask “Is $x$ in $S$?”. Choosing a subset amounts to answering this question once for each of the $N$ elements of $A$, with 2 possible answers each time, so there are $2^N$ possible sequences of answers, each of which corresponds to exactly one subset of $A$. Thus there are $2^N$ subsets of $A$, hence $2^N$ elements of $2^A$. $\Box$
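The counting argument corresponds exactly to enumerating $N$-bit binary strings: bit $i$ of a counter records the answer to “Is the $i$-th element in $S$?”. A sketch of that correspondence:

```python
def powerset_by_choices(S):
    """Build every subset of S by answering 'is x in the subset?' for
    each element x; bit i of the counter stores the i-th answer."""
    elems = list(S)
    N = len(elems)
    subsets = []
    for bits in range(2 ** N):   # N questions, 2 answers each: 2^N outcomes
        subsets.append({elems[i] for i in range(N) if bits & (1 << i)})
    return subsets

subsets = powerset_by_choices({1, 2, 3})
print(len(subsets))   # 8, i.e. 2 ** 3
```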


Standard post by ancoop on September 17, 2018


I don’t want to belong to any club that will accept people like me as a member.

Groucho Marx


In our quest to reason about quantities and relationships, we’d like to handle quantities that are a bit more general than integers (real numbers, for example). We’d also like to handle relationships. To do this we will develop a language that is essentially equivalent to logic, but is a bit more intuitive to use. That language is the language of sets.

Overview of Sets

What is a set?

The intuitive definition of a set is just a collection of things. This is, of course, not much of a definition. Many mathematicians in the late nineteenth century actually thought the precise definition of set did not much matter. But the philosopher and mathematician Bertrand Russell, writing in 1901, discovered that we have to be more careful. He considered the following:

Definition. We call a collection ordinary if it does not contain itself. Otherwise we call the collection extraordinary.

Most collections are ordinary: for example, the collection of all Bernese mountain dogs is not itself a Bernese mountain dog, and the set of all natural numbers is not itself a natural number. But some collections are extraordinary, for example:

  • The collection of abstract ideas is an abstract idea.
  • The collection of mathematical objects is a mathematical object.
  • The collection of all collections is a collection.

Russell therefore asked the question: Is the collection of ordinary collections ordinary? To make this easier, let’s write $E$ for the collection of extraordinary collections, $O$ for the collection of ordinary collections, and write $\in E$ and $\in O$ for “is extraordinary” and “is ordinary”, respectively. Thus a collection $C$ satisfies $C\in O$ exactly when $C\notin C$ and $C\in E$ exactly when $C\in C$.

There are two possibilities. If the collection of ordinary collections is ordinary ($O\in O$), then by the definition of ordinary, $O$ cannot contain itself, i.e. $O\notin O$. If the collection of ordinary collections is extraordinary ($O\in E$), then $O$ must contain itself, so $O\in O$, which means $O$ is ordinary after all. Either way, $O$ would be both ordinary and extraordinary; but a collection cannot be both.

So such a collection leads to a paradox, precisely as the sentence {\em this sentence is false} did. Russell’s paradox shows that we need to be very careful about the words “is in”.

Definition. A set $S$ is an object for which the question “Is $x$ in $S$?” has an unambiguous answer. We write $x\in S$ (read “$x$ is an element of $S$”) if the answer is yes and $x\notin S$ if the answer is no.

Another way to phrase this is: $S$ is a set exactly when $x\in S$ is an open sentence.
