Much of contemporary scientific theorizing involves modeling: the construction and analysis of mathematical models. Such models are then used to explain the properties of real-world systems, yet they almost always contain idealizations: intentional approximations and distortions. For example, Vito Volterra was able to explain anomalous fishery data from the Adriatic during WWI using a model of just two species, predators and prey, whose individuals were assumed to encounter one another at random. The practice of relying on highly idealized models raises a number of fundamental philosophical questions. Is idealization inevitable regardless of the amount of data and computational power at our disposal? Does idealization have any nonpragmatic value or justification? Will idealization persist as science progresses? In other words, as we learn more about the processes underlying fishery dynamics, or the weather, will we still need to use models that leave out certain variables or gloss over certain details?
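The model in question is the now-standard Lotka–Volterra predator–prey system; the notation below follows the common textbook presentation rather than Volterra's original papers:

$$
\begin{aligned}
\frac{dx}{dt} &= \alpha x - \beta x y,\\
\frac{dy}{dt} &= \delta x y - \gamma y,
\end{aligned}
$$

where $x$ is the prey population, $y$ the predator population, and $\alpha, \beta, \gamma, \delta$ are positive constants. The interaction terms $\beta x y$ and $\delta x y$ embody the idealization at issue: encounters between predators and prey are treated as random collisions in a well-mixed population, with all other species, age structure, and spatial detail deliberately left out.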