The issues of the size of the House of Representatives and the apportionment of its members were not settled in 1792 and remain contentious in 2022. In the nearly century and a half between 1790 and 1930, the House grew from 105 to 435 members, and Congress used several different methods to apportion them among the states. Politics always played a role in these decisions, but Congress also became entangled in what mathematicians label the apportionment problem. The current method of determining apportionment is Huntington-Hill, or the method of equal proportions, a rigorous mathematical procedure based on the work of Joseph Adna Hill and, especially, Edward Vermilye Huntington. It was adopted by Congress in 1941 and sanctioned by the Supreme Court in 1992. However, the Court admitted that “neither mathematical analysis nor constitutional interpretation provides a conclusive answer” to what is the best method of apportionment. In today’s era of deep and evenly balanced partisan division in the United States, the methods chosen to apportion representation can determine control of the House of Representatives and the outcome of presidential elections.

Because so much is at stake in determining apportionment, disputes continue. Application of the Huntington-Hill method results in significant differences in population among Congressional districts. For instance, in 2022, Delaware’s at-large district has 897,934 residents, while Montana’s 1st District has only 494,708. This situation seems to violate the principle of “one person, one vote” enunciated in the 1964 Supreme Court decision, Wesberry v. Sanders, which held that the “by the People” language of Art. I, Sect. 2 of the U.S. Constitution required Congressional districts within a state to be as nearly equal in population as practicable.
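To make the method of equal proportions concrete, here is a minimal Python sketch of Huntington-Hill apportionment. The state names and populations below are hypothetical, and real apportionments use official census figures, but the priority-value procedure is the one the method prescribes: every state starts with one guaranteed seat, and each remaining seat goes to the state with the highest priority value, its population divided by √(n(n+1)), where n is the number of seats it already holds.

```python
import heapq
from math import sqrt

def huntington_hill(populations, seats):
    """Apportion `seats` among states by the method of equal proportions.

    populations: dict mapping state name -> population.
    Each state is first guaranteed one seat; the rest are awarded one at
    a time to the state with the highest priority value pop / sqrt(n*(n+1)),
    where n is the number of seats the state currently holds.
    """
    apportionment = {state: 1 for state in populations}
    # Max-heap simulated with negated priorities.
    heap = [(-pop / sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        apportionment[state] += 1
        n = apportionment[state]
        heapq.heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return apportionment

# Illustrative, hypothetical populations for three states:
pops = {"A": 2_700_000, "B": 1_500_000, "C": 800_000}
print(huntington_hill(pops, 10))  # → {'A': 5, 'B': 3, 'C': 2}
```

With these illustrative numbers the ten seats split 5, 3, and 2; small shifts in population can move a seat from one state to another, which is why the choice among methods matters politically.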

The method chosen for Congressional apportionment also affects the selection of the President because it determines the number of electors in each state. For instance, if the method defended by Thomas Jefferson in 1792—the largest divisor method—had been in effect in 2000, Al Gore would have defeated George W. Bush by 271 electoral votes to 267, instead of losing by the same margin. If, by comparison, the method Alexander Hamilton defended—the method of largest remainders—had been in place, the candidates would have tied at 269. Mathematicians agree that Huntington-Hill favors smaller states—which Bush disproportionately carried—by giving them more representatives. Since these same small states already benefit in the electoral college from equality of representation in the Senate, the Huntington-Hill method of apportionment helped Bush win the election.
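The two rival methods of 1792 can likewise be sketched in a few lines of Python. The populations below are hypothetical, chosen only to show that the two methods can divide the same House differently: Jefferson's largest divisor method shrinks a common divisor until the rounded-down quotas fill the House, while Hamilton's largest remainders method rounds quotas down and hands the leftover seats to the states with the biggest fractional parts.

```python
from math import floor

def jefferson(populations, seats):
    """Largest-divisor method (Jefferson, 1792): find a divisor d such that
    the rounded-down quotas pop // d sum to exactly `seats`."""
    d = sum(populations.values()) / seats  # start from the standard divisor
    # Simple linear search, adequate for small illustrative examples.
    while sum(int(pop // d) for pop in populations.values()) < seats:
        d *= 0.999  # shrink the divisor until enough seats are awarded
    return {s: int(pop // d) for s, pop in populations.items()}

def hamilton(populations, seats):
    """Largest-remainders method (Hamilton, 1792): give each state the floor
    of its exact quota, then award leftover seats to the largest remainders."""
    divisor = sum(populations.values()) / seats
    quotas = {s: pop / divisor for s, pop in populations.items()}
    result = {s: floor(q) for s, q in quotas.items()}
    leftovers = seats - sum(result.values())
    by_remainder = sorted(quotas, key=lambda s: quotas[s] - result[s], reverse=True)
    for s in by_remainder[:leftovers]:
        result[s] += 1
    return result

# Hypothetical populations; quotas are 5.4, 3.0, and 1.6 seats.
pops = {"A": 2_700_000, "B": 1_500_000, "C": 800_000}
print(jefferson(pops, 10))  # → {'A': 6, 'B': 3, 'C': 1}
print(hamilton(pops, 10))   # → {'A': 5, 'B': 3, 'C': 2}
```

Here Jefferson's method gives the largest state six seats while Hamilton's gives it only five, a small illustration of the large-state tilt that Jefferson's divisor produces.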

The size of the House of Representatives presents another problem. Since Congress first fixed it at 435 members in 1910, the population of each district has grown enormously, reaching an average of 761,952 people per representative by the census of 2020. This ratio is almost four times that of the next-highest nation (Japan) in the thirty-eight-member Organisation for Economic Co-operation and Development, a group of nations with economies comparable to that of the United States. This high ratio of population to representatives leads to the often-heard charge that elected representatives “are out of touch,” and it strains their staffs’ ability to provide constituent services.

The size of the House, as well as the distribution of representatives, also affects the electoral college vote. Congress at any time can change the size of the House from its “permanent” number of 435. By counterfactual analysis, if in 2000 the House had been 491 members or smaller and the seats apportioned by Huntington-Hill, Bush would have won. If the House had been 635 members or larger, Gore would have won. Confusingly and disturbingly, at any size between 492 and 634, the outcome would have been almost random.

Because of these issues, political reformers have proposed changes to make both representation and the electoral college fairer and more representative. Some proposals look to international comparisons. The size of a nation’s legislature is usually close to the cube root of its population (*n* = ∛*p*, where *n* is the number of representatives and *p* is the population of the nation). The *New York Times*, among others, has therefore endorsed making the size of the House of Representatives the cube root of the United States population. Critics, however, attack this as an overly wonkish method of calculation and object to the enlarged size of the House it would produce; by the census of 2020, for example, the House would grow to 690 members. A simpler change would be to increase the House size by an arbitrary number, say 150. If Huntington-Hill continued to be used for future apportionments, however, the existing distortions would return as the U.S. population increased over time. Some also advocate returning to the largest divisor method, which Jefferson defended in 1792. They believe that its large-state biases appropriately offset the small-state biases of having a minimum of one representative per state and two senators for each state, both of which affect the weight given these states in the electoral college.
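The cube root rule itself is a one-line calculation; the sketch below uses round-number populations for illustration rather than official census figures.

```python
def cube_root_size(population):
    """Cube-root rule of thumb: a legislature of about population**(1/3) members."""
    return round(population ** (1 / 3))

# A hypothetical nation of one billion people would get a chamber of 1,000 seats.
print(cube_root_size(1_000_000_000))  # → 1000
```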

The most common reform suggestion is to adopt the so-called Wyoming Rule. Under this method, the smallest state (Wyoming in 2020) gets its Constitutionally mandated one representative, and the delegations of the other states are proportionately larger based on their populations. (The catch is that some method of proportionate distribution would still have to be chosen.) This method prioritizes equality among districts because the population of the smallest state becomes the target population for Congressional districts overall; district populations nationwide would therefore cluster closely around that figure.
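The arithmetic behind the Wyoming Rule is simple enough to show directly. The figures below are round approximations (roughly 331.1 million nationally and 577 thousand for the smallest state), not official 2020 apportionment counts.

```python
def wyoming_rule_size(total_population, smallest_state_population):
    """House size under the Wyoming Rule: national population divided by
    the population of the smallest state, rounded to the nearest seat."""
    return round(total_population / smallest_state_population)

# Approximate 2020 figures (illustrative, not official census counts):
print(wyoming_rule_size(331_100_000, 577_000))  # → 574
```

The chosen apportionment method would then distribute those seats among the states, which is the unresolved catch the paragraph above notes.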

The Wyoming Rule is not really new; rather, it was, in effect, the working assumption about the distribution of representation used by the delegates during the first weeks of the Constitutional Convention in 1787. It can be seen most clearly in New Jersey delegate David Brearley’s 1785 table, which editor Max Farrand placed with the debates for July 10. Convention delegates assigned Georgia—which they assumed was the smallest state—one representative. They then apparently used the largest divisor method to distribute the 89 other representatives, for a total of 90.

If the Wyoming Rule were adopted today, there would be about 574 members of the House of Representatives, still fewer than the lower legislative chambers of the United Kingdom, Germany, and France. Some critics, however, still believe that is too large a number, just as some delegates to the Convention of 1787 said 90 was too large. (The Convention ultimately settled on 65.) Critics also object to increased operating costs and the costs of remodeling House chambers to accommodate more representatives, and they charge that floor debates would be less effective with more representatives. They argue that if the rule had been in effect in 1930, for instance, there would have been a gargantuan assembly of 1,300 representatives (the result of the premature admission of Nevada, with a tiny population, as a state in 1864). Between 1980 and 2020, however, the House size would have varied only between 562 and 574 members. Republicans also oppose the rule because models based on the 2020 election show that it would slightly, but increasingly over time, favor Democratic presidential candidates in the electoral college. Proponents of the rule, on the other hand, point out that the larger the House size, the better the likelihood that representatives would come from historically underrepresented communities. This insight was discussed at the 1787 Convention and in the ratification debates (see, for example, Alexander Hamilton in Federalist No. 35).

In the past twenty-five years mathematics educators have increasingly taught the apportionment problem as a way to show the present-day relevance of mathematics. Indeed, I first encountered the topic when, as a content supervisor of student teachers, I observed a social studies major with a mathematics minor teach a lesson on representative government to a 12th-grade civics class. Her students learned that “fair” has different meanings to different people; that there are different ways to solve problems that lead to different outcomes; and that, in the everyday world, these outcomes can have significant differences in determining who governs us. The Framers were aware of these problems and offered some solutions that remain cogent today. In the succeeding 235 years, however, historical developments—such as a larger population, additional states, and more sophisticated statistical methods—have complicated the struggle to solve the apportionment problem and distribute representation fairly and proportionately.

**For more information**: Historians who are confident of their mathematics abilities should refer to Michel L. Balinski and H. Peyton Young, *Fair Representation: Meeting the Ideal of One Man, One Vote*, 2d ed. (Washington: Brookings Institution Press, 2001), the definitive work on the apportionment problem. More accessible to most historians, but still written by a mathematician, is Charles M. Biles, *The History of Congressional Apportionment* (Arcata: Humboldt State University Press, 2017).

Robert J. Gough earned his Ph.D. in 1977 at the University of Pennsylvania, where he was a student of Lee Benson’s. He attended the informal meeting of Philadelphia-area early Americanists at Mike Zuckerman’s house in September 1973, out of which the McNeil Center eventually developed. Gough taught for 35 years at the University of Wisconsin—Eau Claire and published scholarship in early American history, 20th-century Wisconsin history, and the history of education.