probability and statistics

Introduction

      The branches of mathematics concerned with the laws governing random events, including the collection, analysis, interpretation, and display of numerical data. Probability has its origin in the study of gambling and insurance in the 17th century, and it is now an indispensable tool of both the social and the natural sciences. Statistics may be said to have its origin in census counts taken thousands of years ago; as a distinct scientific discipline, however, it was developed in the early 19th century as the study of populations, economies, and moral actions and later in that century as the mathematical tool for analyzing such numbers. For technical information on these subjects, see probability theory and statistics.

Early probability

Games of chance
      The modern mathematics of chance is usually dated to a correspondence between the French mathematicians Pierre de Fermat and Blaise Pascal in 1654. Their inspiration came from a problem about games of chance, proposed by a remarkably philosophical gambler, the chevalier de Méré. De Méré inquired about the proper division of the stakes when a game of chance is interrupted. Suppose two players, A and B, are playing a three-point game, each having wagered 32 pistoles, and are interrupted after A has two points and B has one. How much should each receive?

      Fermat and Pascal proposed somewhat different solutions, though they agreed about the numerical answer. Each undertook to define a set of equal or symmetrical cases, then to answer the problem by comparing the number for A with that for B. Fermat, however, gave his answer in terms of the chances, or probabilities. He reasoned that two more games would suffice in any case to determine a victory. There are four possible outcomes, each equally likely in a fair game of chance. A might win twice, AA; or first A then B might win; or B then A; or BB. Of these four sequences, only the last would result in a victory for B. Thus, the odds for A are 3:1, implying a distribution of 48 pistoles for A and 16 pistoles for B.

      Pascal thought Fermat's solution unwieldy, and he proposed to solve the problem not in terms of chances but in terms of the quantity now called “expectation.” Suppose B had already won the next round. In that case, the positions of A and B would be equal, each having won two games, and each would be entitled to 32 pistoles. A should receive his portion in any case. B's 32, by contrast, depend on the assumption that he would have won that round. This next round can therefore be treated as a fair game for a stake of 32 pistoles, so that each player has an expectation of 16. Hence A's lot is 32 + 16, or 48, and B's is just 16.
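Both lines of reasoning are easy to mechanize. The following sketch, written in modern terms rather than anything Fermat or Pascal set down, reproduces the 48-16 division: one function enumerates Fermat's equally likely continuations, the other follows Pascal's backward chain of fair games.

```python
# A minimal sketch (modern notation, not Fermat's or Pascal's own) of the two
# solutions to the problem of points: first player to 3 points wins a pot of
# 64 pistoles, and play stops with A at 2 points, B at 1.
from fractions import Fraction
from itertools import product

POT = 64  # 32 pistoles wagered by each player

def fermat_share(a_points, b_points, target=3, pot=POT):
    """Fermat: enumerate the equally likely ways the remaining rounds could fall."""
    remaining = (target - a_points) + (target - b_points) - 1  # 2 rounds here
    outcomes = list(product("AB", repeat=remaining))
    a_wins = sum(seq.count("A") >= target - a_points for seq in outcomes)
    a_share = Fraction(a_wins, len(outcomes)) * pot
    return a_share, pot - a_share

def pascal_share(a_points, b_points, target=3, pot=POT):
    """Pascal: work backward through a chain of fair games of equal expectation."""
    if a_points == target:
        return Fraction(pot), Fraction(0)
    if b_points == target:
        return Fraction(0), Fraction(pot)
    a_if_win, _ = pascal_share(a_points + 1, b_points, target, pot)
    a_if_lose, _ = pascal_share(a_points, b_points + 1, target, pot)
    a_share = (a_if_win + a_if_lose) / 2   # a fair round: average the two futures
    return a_share, pot - a_share

print(fermat_share(2, 1))  # A gets 48 pistoles, B gets 16
print(pascal_share(2, 1))  # the same division by the expectation argument
```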

      Games of chance such as this one provided model problems for the theory of chances during its early period, and indeed they remain staples of the textbooks. A posthumous work of 1665 by Pascal on the “arithmetic triangle” now linked to his name (see binomial theorem) showed how to calculate numbers of combinations and how to group them to solve elementary gambling problems. Fermat and Pascal were not the first to give mathematical solutions to problems such as these. More than a century earlier, the Italian mathematician, physician, and gambler Girolamo Cardano calculated odds for games of luck by counting up equally probable cases. His little book, however, was not published until 1663, by which time the elements of the theory of chances were already well known to mathematicians in Europe. It will never be known what would have happened had Cardano published in the 1520s. It cannot be assumed that probability theory would have taken off in the 16th century. When it began to flourish, it did so in the context of the “new science” of the 17th-century scientific revolution, when the use of calculation to solve tricky problems had gained a new credibility. Cardano, moreover, had no great faith in his own calculations of gambling odds, since he believed also in luck, particularly in his own. In the Renaissance world of monstrosities, marvels, and similitudes, chance—allied to fate—was not readily naturalized, and sober calculation had its limits.

Risks, expectations, and fair contracts
      In the 17th century, Pascal's strategy for solving problems of chance became the standard one. It was, for example, used by the Dutch mathematician Christiaan Huygens in his short treatise on games of chance, published in 1657. Huygens refused to define equality of chances as a fundamental presumption of a fair game but derived it instead from what he saw as a more basic notion of an equal exchange. Most questions of probability in the 17th century were solved, as Pascal solved his, by redefining the problem in terms of a series of games in which all players have equal expectations. The new theory of chances was not, in fact, simply about gambling but also about the legal notion of a fair contract. A fair contract implied equality of expectations, which served as the fundamental notion in these calculations. Measures of chance or probability were derived secondarily from these expectations.

      Probability was tied up with questions of law and exchange in one other crucial respect. Chance and risk, in aleatory contracts, provided a justification for lending at interest, and hence a way of avoiding Christian prohibitions against usury. Lenders, the argument went, were like investors; having shared the risk, they deserved also to share in the gain. For this reason, ideas of chance had already been incorporated in a loose, largely nonmathematical way into theories of banking and marine insurance. From about 1670, initially in the Netherlands, probability began to be used to determine the proper rates at which to sell annuities. Jan de Witt, leader of the Netherlands from 1653 to 1672, corresponded in the 1660s with Huygens, and eventually he published a small treatise on the subject of annuities in 1671.

      Annuities in early modern Europe were often issued by states to raise money, especially in times of war. They were generally sold according to a simple formula such as “seven years purchase,” meaning that the annual payment to the annuitant, promised until the time of his or her death, would be one-seventh of the principal. This formula took no account of age at the time the annuity was purchased. De Witt lacked data on mortality rates at different ages, but he understood that the proper charge for an annuity depended on the number of years that the purchaser could be expected to live and on the presumed rate of interest. Despite his efforts and those of other mathematicians, it remained rare even in the 18th century for rulers to pay much heed to such quantitative considerations. Life insurance, too, was connected only loosely to probability calculations and mortality records, though statistical data on death became increasingly available in the course of the 18th century. The first insurance society to price its policies on the basis of probability calculations was the Equitable, founded in London in 1762.

Probability as the logic of uncertainty
      The English clergyman Joseph Butler, in his very influential Analogy of Religion (1736), called probability “the very guide of life.” The phrase, however, did not refer to mathematical calculation but merely to the judgments made where rational demonstration is impossible. The word probability was used in relation to the mathematics of chance in 1662 in the Logic of Port-Royal, written by Pascal's fellow Jansenists, Antoine Arnauld and Pierre Nicole. But from medieval times to the 18th century and even into the 19th, a probable belief was most often merely one that seemed plausible, came on good authority, or was worthy of approval. Probability, in this sense, was emphasized in England and France from the late 17th century as an answer to skepticism. Man may not be able to attain perfect knowledge but can know enough to make decisions about the problems of daily life. The new experimental natural philosophy of the later 17th century was associated with this more modest ambition, one that did not insist on logical proof.

      Almost from the beginning, however, the new mathematics of chance was invoked to suggest that decisions could after all be made more rigorous. Pascal invoked it in the most famous chapter of his Pensées, “Of the Necessity of the Wager,” in relation to the most important decision of all, whether to accept the Christian faith. One cannot know of God's existence with absolute certainty; there is no alternative but to bet (“il faut parier”). Perhaps, he supposed, the unbeliever can be persuaded by consideration of self-interest. If there is a God (Pascal assumed he must be the Christian God), then to believe in him offers the prospect of an infinite reward for infinite time. However small the probability, provided only that it be finite, the mathematical expectation of this wager is infinite. For so great a benefit, one sacrifices rather little, perhaps a few paltry pleasures during one's brief life on Earth. It seemed plain which was the more reasonable choice.
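In modern notation (a reconstruction, not Pascal's own symbolism), the arithmetic of the wager is that the expectation of belief dominates no matter how small the probability assigned to God's existence:

$$
E(\text{wager for God}) \;=\; p\cdot\infty \;+\; (1-p)\cdot(-c) \;=\; \infty
\qquad\text{for any } p > 0 \text{ and any finite stake } c,
$$

whereas the expectation of declining the wager remains finite.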

      The link between the doctrine of chance and religion remained an important one through much of the 18th century, especially in Britain. Another argument for belief in God relied on a probabilistic natural theology. The classic instance is a paper read by John Arbuthnot to the Royal Society of London in 1710 and published in its Philosophical Transactions in 1712. Arbuthnot presented there a table of christenings in London from 1629 to 1710. He observed that in every year there was a slight excess of male over female births. The proportion, approximately 14 boys for every 13 girls, was perfectly calculated, given the greater dangers to which young men are exposed in their search for food, to bring the sexes to an equality of numbers at the age of marriage. Could this excellent result have been produced by chance alone? Arbuthnot thought not, and he deployed a probability calculation to demonstrate the point. The probability that male births would by accident exceed female ones in 82 consecutive years is (0.5)^82. Considering further that this excess is found all over the world, he said, and within fixed limits of variation, the chance becomes almost infinitely small. This argument for the overwhelming probability of Divine providence was repeated by many—and refined by a few. The Dutch natural philosopher Willem 's Gravesande incorporated the limits of variation of these birth ratios into his mathematics and so attained a still more decisive vindication of Providence over chance. Nicolas Bernoulli, from the famous Swiss mathematical family, gave a more skeptical view. If the underlying probability of a male birth was assumed to be 0.5169 rather than 0.5, the data were quite in accord with probability theory. That is, no Providential direction was required.
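Stated in modern notation, Arbuthnot's calculation under the chance hypothesis of equally likely male and female excesses in each year is simply

$$
P(\text{male excess in all 82 years}) \;=\; \left(\tfrac{1}{2}\right)^{82} \;\approx\; 2.1\times10^{-25}.
$$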

      Apart from natural theology, probability came to be seen during the 18th-century Enlightenment as a mathematical version of sound reasoning. In 1677 the German mathematician Gottfried Wilhelm Leibniz imagined a utopian world in which disagreements would be met by this challenge: “Let us calculate, Sir.” The French mathematician Pierre-Simon de Laplace, in the early 19th century, called probability “good sense reduced to calculation.” This ambition, bold enough, was not quite so scientific as it may first appear. For there were some cases where a straightforward application of probability mathematics led to results that seemed to defy rationality. One example, proposed by Nicolas Bernoulli and made famous as the St. Petersburg paradox, involved a bet with an exponentially increasing payoff. A fair coin is to be tossed until the first time it comes up heads. If it comes up heads on the first toss, the payment is 2 ducats; if the first time it comes up heads is on the second toss, 4 ducats; and if on the nth toss, 2^n ducats. The mathematical expectation of this game is infinite, but no sensible person would pay a very large sum for the privilege of receiving the payoff from it. The disaccord between calculation and reasonableness created a problem, addressed by generations of mathematicians. Prominent among them was Nicolas's cousin Daniel Bernoulli, whose solution depended on the idea that a ducat added to the wealth of a rich man benefits him much less than it does a poor man (a concept now known as decreasing marginal utility; see utility and value: Theories of utility).
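In modern notation the paradox is immediate: the first head arrives on toss n with probability (1/2)^n, so

$$
E(\text{payoff}) \;=\; \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} 2^{n}
\;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty .
$$

A stylized version of Daniel Bernoulli's resolution replaces money by its logarithm: the expected log-payoff is \(\sum_{n\ge 1} 2^{-n}\, n \ln 2 = 2\ln 2\), so the game is worth only about 4 ducats to a player who values money logarithmically (ignoring initial wealth).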

      Probability arguments figured also in more practical discussions, such as debates during the 1750s and '60s about the rationality of smallpox inoculation. Smallpox was at this time widespread and deadly, infecting most and carrying off perhaps one in seven Europeans. Inoculation in these days involved the actual transmission of smallpox, not the cowpox vaccines developed in the 1790s by the English surgeon Edward Jenner, and was itself moderately risky. Was it rational to accept a small probability of an almost immediate death to reduce greatly a large probability of death by smallpox in the indefinite future? Calculations of mathematical expectation, as by Daniel Bernoulli, led unambiguously to a favourable answer. But some disagreed, most famously the eminent mathematician and perpetual thorn in the flesh of probability theorists, the French mathematician Jean Le Rond d'Alembert. One might, he argued, reasonably prefer a greater assurance of surviving in the near term to improved prospects late in life.
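The shape of the argument can be shown with illustrative figures rather than Bernoulli's actual ones (the 1-in-200 inoculation risk below is an assumption for the example; the 1-in-7 figure is the one quoted above):

$$
P(\text{death from inoculation}) \approx \tfrac{1}{200} = 0.005
\quad\text{versus}\quad
P(\text{eventual death from smallpox}) \approx \tfrac{1}{7} \approx 0.14 ,
$$

so a straightforward expectation calculation favours inoculation; d'Alembert's objection was that the two risks are not directly comparable, one being immediate and the other deferred.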

The probability of causes
 Many 18th-century ambitions for probability theory, including Arbuthnot's, involved reasoning from effects to causes. Jakob Bernoulli, uncle of Nicolas and Daniel, formulated and proved a law of large numbers to give formal structure to such reasoning. This was published in 1713 from a manuscript, the Ars conjectandi, left behind at his death in 1705. There he showed that the observed proportion of, say, tosses of heads or of male births will converge, as the number of trials increases, to the true probability p, provided that p remains the same from trial to trial. His theorem was designed to give assurance that when p is not known in advance, it can properly be inferred by someone with sufficient experience. He thought of disease and the weather as in some way like drawings from an urn. At bottom they are deterministic, but since one cannot know the causes in sufficient detail, one must be content to investigate the probabilities of events under specified conditions.
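In modern notation, with S_n the number of successes in n independent trials each of probability p, Bernoulli's theorem is the weak law of large numbers:

$$
P\!\left(\left|\tfrac{S_n}{n} - p\right| > \varepsilon\right) \;\longrightarrow\; 0
\quad\text{as } n \to \infty, \text{ for every } \varepsilon > 0 .
$$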

      The English physician and philosopher David Hartley announced in his Observations on Man (1749) that a certain “ingenious Friend” had shown him a solution of the “inverse problem” of reasoning from the occurrence of an event p times and its failure q times to the “original Ratio” of causes. But Hartley named no names, and the first publication of the formula he promised occurred in 1763 in a posthumous paper of Thomas Bayes, communicated to the Royal Society by the British philosopher Richard Price. This has come to be known as Bayes's theorem. But it was the French, especially Laplace, who put the theorem to work as a calculus of induction, and it appears that Laplace's publication of the same mathematical result in 1774 was entirely independent. The result was perhaps more consequential in theory than in practice. An exemplary application was Laplace's probability that the sun will come up tomorrow, based on 6,000 years or so of experience in which it has come up every day.
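In modern notation, the theorem gives the probability of a cause C given an observed effect E, and Laplace's sunrise calculation is its best-known corollary, the rule of succession:

$$
P(C \mid E) \;=\; \frac{P(E \mid C)\,P(C)}{P(E)} ,
\qquad
P(\text{success on trial } n{+}1 \mid n \text{ successes}) \;=\; \frac{n+1}{n+2} ,
$$

the latter assuming a uniform prior on the unknown probability of success.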

      Laplace and his more politically engaged fellow mathematicians, most notably Marie-Jean-Antoine-Nicolas de Caritat, marquis de Condorcet, hoped to make probability into the foundation of the moral sciences. This took the form principally of judicial and electoral probabilities, addressing thereby some of the central concerns of the Enlightenment philosophers and critics. Justice and elections were, for the French mathematicians, formally similar. In each, a crucial question was how to raise the probability that a jury or an electorate would decide correctly. One element involved testimonies, a classic topic of probability theory. In 1699 the British mathematician John Craig used probability to vindicate the truth of scripture and, more idiosyncratically, to forecast the end of time, when, due to the gradual attrition of truth through successive testimonies, the Christian religion would become no longer probable. The Scottish philosopher David Hume, more skeptically, argued in probabilistic but nonmathematical language beginning in 1748 that the testimonies supporting miracles were automatically suspect, deriving as they generally did from uneducated persons, lovers of the marvelous. Miracles, moreover, being violations of laws of nature, had such a low a priori probability that even excellent testimony could not make them probable. Condorcet also wrote on the probability of miracles, or at least faits extraordinaires, to the end of subduing the irrational. But he took a more sustained interest in testimonies at trials, proposing to weigh the credibility of the statements of any particular witness by considering the proportion of times that he had told the truth in the past, and then use inverse probabilities to combine the testimonies of several witnesses.
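A modern reconstruction of the simplest such case (two witnesses assumed independent, telling the truth with probabilities p_1 and p_2, and even prior odds on the event) gives the probability that an event both affirm really occurred:

$$
P(\text{event} \mid \text{both affirm}) \;=\; \frac{p_1 p_2}{p_1 p_2 + (1-p_1)(1-p_2)} .
$$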

      Laplace and Condorcet applied probability also to judgments. In contrast to English juries, French juries voted whether to convict or acquit without formal deliberations. The probabilists began by supposing that the jurors were independent and that each had a probability p greater than 1/2 of reaching a true verdict. There would be no injustice, Condorcet argued, in exposing innocent defendants to a risk of conviction equal to risks they voluntarily assume without fear, such as crossing the English Channel from Dover to Calais. Using this number and considering also the interest of the state in minimizing the number of guilty who go free, it was possible to calculate an optimal jury size and the majority required to convict. This tradition of judicial probabilities lasted into the 1830s, when Laplace's student Siméon-Denis Poisson used the new statistics of criminal justice to measure some of the parameters. But by this time the whole enterprise had come to seem gravely doubtful, in France and elsewhere. In 1843 the English philosopher John Stuart Mill called it “the opprobrium of mathematics,” arguing that one should seek more reliable knowledge rather than waste time on calculations that merely rearrange ignorance.
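The underlying calculation is what is now called Condorcet's jury theorem, stated here as a modern reconstruction rather than in Condorcet's own notation: with n independent jurors, each correct with probability p > 1/2, the probability that at least the required majority of m votes is correct is

$$
P(\text{correct verdict}) \;=\; \sum_{k=m}^{n} \binom{n}{k}\, p^{k} (1-p)^{\,n-k} ,
$$

which approaches 1 as n grows; fixing an acceptable risk of wrongful conviction then determines suitable values of n and m.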

The rise of statistics

Political arithmetic
      During the 19th century, statistics grew up as the empirical science of the state and gained preeminence as a form of social knowledge. Population and economic numbers had been collected, though often not in a systematic way, since ancient times and in many countries. In Europe the late 17th century was an important time also for quantitative studies of disease, population, and wealth. In 1662 the English statistician John Graunt published a celebrated collection of numbers and observations pertaining to mortality in London, using records that had been collected to chart the advance and decline of the plague. In the 1680s the English political economist and statistician William Petty published a series of essays on a new science of “political arithmetic,” which combined statistical records with bold—some thought fanciful—calculations, such as, for example, of the monetary value of all those living in Ireland. These studies accelerated in the 18th century and were increasingly supported by state activity, though ancien régime governments often kept the numbers secret. Administrators and savants used the numbers to assess and enhance state power but also as part of an emerging “science of man.” The most assiduous, and perhaps the most renowned, of these political arithmeticians was the Prussian pastor Johann Peter Süssmilch, whose study of the divine order in human births and deaths was first published in 1741 and grew to three fat volumes by 1765. The decisive proof of Divine Providence in these demographic affairs was their regularity and order, perfectly arranged to promote man's fulfillment of what he called God's first commandment, to be fruitful and multiply. Still, he did not leave such matters to nature and to God, but rather he offered abundant advice about how kings and princes could promote the growth of their populations. He envisioned a rather spartan order of small farmers, paying modest rents and taxes, living without luxury, and practicing the Protestant faith. Roman Catholicism was unacceptable on account of priestly celibacy.

Social numbers
 Lacking, as they did, complete counts of population, 18th-century practitioners of political arithmetic had to rely largely on conjectures and calculations. In France especially, mathematicians such as Laplace used probability to surmise the accuracy of population figures determined from samples. In the 19th century such methods of estimation fell into disuse, mainly because they were replaced by regular, systematic censuses. The census of the United States, required by the U.S. Constitution and conducted every 10 years beginning in 1790, was among the earliest. (For the role of the U.S. census in spurring the development of the computer, see computer: Herman Hollerith's census tabulator.) Sweden had begun earlier; most of the leading nations of Europe followed by the mid-19th century. They were also eager to survey the populations of their colonial possessions, which indeed were among the very first places to be counted. A variety of motives can be identified, ranging from the requirements of representative government to the need to raise armies. Some of this counting can scarcely be attributed to any purpose, and indeed the contemporary rage for numbers was by no means limited to counts of human populations. From the mid-18th century and especially after the conclusion of the Napoleonic Wars in 1815, the collection and publication of numbers proliferated in many domains, including experimental physics, land surveys, agriculture, and studies of the weather, tides, and terrestrial magnetism. Still, the management of human populations played a decisive role in the statistical enthusiasm of the early 19th century. Political instabilities associated with the French Revolution of 1789 and the economic changes of early industrialization made social science a great desideratum. A new field of moral statistics grew up to record and comprehend the problems of dirt, disease, crime, ignorance, and poverty.

 Some of these investigations were conducted by public bureaus, but much was the work of civic-minded professionals, industrialists, and, especially after midcentury, women such as Florence Nightingale. One of the first serious statistical organizations arose in 1833 as Section F of the new British Association for the Advancement of Science. The intellectual ties to natural science were uncertain at first, but there were some influential champions of statistics as a mathematical science. The most effective was the Belgian mathematician Adolphe Quetelet, who argued untiringly that mathematical probability was essential for social statistics. Quetelet hoped to create from these materials a new science, which he called at first social mechanics and later social physics. He wrote often of the analogies linking this science to the most mathematical of the natural sciences, celestial mechanics. In practice, though, his methods were more like those of geodesy or meteorology, involving massive collections of data and the effort to detect patterns that might be identified as laws. These, in fact, seemed to abound. He found them in almost every collection of social numbers, beginning with some publications of French criminal statistics from the mid-1820s. The numbers, he announced, were essentially constant from year to year, so steady that one could speak here of statistical laws. If there was something paradoxical in these “laws” of crime, it was nonetheless comforting to find regularities underlying the manifest disorder of social life.

A new kind of regularity
      Even Quetelet had been startled at first by the discovery of these statistical laws. Regularities of births and deaths belonged to the natural order and so were unsurprising, but here was constancy of moral and immoral acts, acts that would normally be attributed to human free will. Was there some mysterious fatalism that drove individuals, even against their will, to fulfill a budget of crimes? Were such actions beyond the reach of human intervention? Quetelet determined that they were not. Nevertheless, he continued to emphasize that the frequencies of such deeds should be understood in terms of causes acting at the level of society, not of choices made by individuals. His view was challenged by moralists, who insisted on complete individual responsibility for thefts, murders, and suicides. Quetelet was not so radical as to deny the legitimacy of punishment, since the system of justice was thought to help regulate crime rates. Yet he spoke of the murderer on the scaffold as himself a victim, part of the sacrifice that society requires for its own conservation. Individually, to be sure, it was perhaps within the power of the criminal to resist the inducements that drove him to his vile act. Collectively, however, crime is but trivially affected by these individual decisions. Not criminals but crime rates form the proper object of social investigation. Reducing them is to be achieved not at the level of the individual but at the level of the legislator, who can improve society by providing moral education or by improving systems of justice. Statisticians have a vital role as well. To them falls the task of studying the effects on society of legislative changes and of recommending measures that could bring about desired improvements.

      Quetelet's arguments inspired a modest debate about the consistency of statistics with human free will. This intensified after 1857, when the English historian Henry Thomas Buckle recited his favourite examples of statistical law to support an uncompromising determinism in his immensely successful History of Civilization in England. Interestingly, probability had been linked to deterministic arguments from very early in its history, at least since the time of Jakob Bernoulli. Laplace argued in his Philosophical Essay on Probabilities (1825) that man's dependence on probability was simply a consequence of imperfect knowledge. A being who could follow every particle in the universe, and who had unbounded powers of calculation, would be able to know the past and to predict the future with perfect certainty. The statistical determinism inaugurated by Quetelet had a quite different character. Now it was not necessary to know things in infinite detail. At the microlevel, indeed, knowledge often fails, for who can penetrate the human soul so fully as to comprehend why a troubled individual has chosen to take his or her own life? Yet such uncertainty about individuals somehow dissolves in light of a whole society, whose regularities are often more perfect than those of physical systems such as the weather. Not real persons but l'homme moyen, the average man, formed the basis of social physics. This contrast between individual and collective phenomena was, in fact, hard to reconcile with an absolute determinism like Buckle's. Several critics of his book pointed this out, urging that the distinctive feature of statistical knowledge was precisely its neglect of individuals in favour of mass observations.

Statistical physics
      The same issues were discussed also in physics. Statistical understandings first gained an influential role in physics at just this time, in consequence of papers by the German mathematical physicist Rudolf Clausius from the late 1850s and, especially, of one by the Scottish physicist James Clerk Maxwell published in 1860. Maxwell, at least, was familiar with the social statistical tradition, and he had been sufficiently impressed by Buckle's History and by the English astronomer John Herschel's influential essay on Quetelet's work in the Edinburgh Review (1850) to discuss them in letters. During the 1870s, Maxwell often introduced his gas theory using analogies from social statistics. The first point, a crucial one, was that statistical regularities of vast numbers of molecules were quite sufficient to derive thermodynamic laws relating the pressure, volume, and temperature in gases. Some physicists, including, for a time, the German Max Planck, were troubled by the contrast between a molecular chaos at the microlevel and the very precise laws indicated by physical instruments. They wondered if it made sense to seek a molecular, mechanical grounding for thermodynamic laws. Maxwell invoked the regularities of crime and suicide as analogies to the statistical laws of thermodynamics and as evidence that local uncertainty can give way to large-scale predictability. At the same time, he insisted that statistical physics implied a certain imperfection of knowledge. In physics, as in social science, determinism was very much an issue in the 1850s and '60s. Maxwell argued that physical determinism could only be speculative, since human knowledge of events at the molecular level is necessarily imperfect. Many of the laws of physics, he said, are like those regularities detected by census officers: they are quite sufficient as a guide to practical life, but they lack the certainty characteristic of abstract dynamics.

The spread of statistical mathematics
      Statisticians, wrote the English statistician Maurice Kendall in 1942, “have already overrun every branch of science with a rapidity of conquest rivaled only by Attila, Mohammed, and the Colorado beetle.” The spread of statistical mathematics through the sciences began, in fact, at least a century before there were any professional statisticians. Even apart from the use of probability to estimate populations and make insurance calculations, this history dates back at least to 1809. In that year, the German mathematician Carl Friedrich Gauss published a derivation of the new method of least squares incorporating a mathematical function that soon became known as the astronomer's curve of error, and later as the Gaussian or normal distribution.

 The problem of combining many astronomical observations to give the best possible estimate of one or several parameters had been discussed in the 18th century. The first publication of the method of least squares as a solution to this problem was inspired by a more practical problem, the analysis of French geodetic measures undertaken in order to fix the standard length of the metre. This was the basic measure of length in the new metric system, decreed by the French Revolution and defined as 1/40,000,000 of the Earth's circumference through the poles. In 1805 the French mathematician Adrien-Marie Legendre proposed to solve this problem by choosing values that minimize the sums of the squares of deviations of the observations from a point, line, or curve drawn through them. In the simplest case, where all observations were measures of a single point, this method was equivalent to taking an arithmetic mean.
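The equivalence claimed in the last sentence is a one-line calculus exercise in modern notation: for observations x_1, …, x_n of a single quantity, least squares picks the value a that minimizes the sum of squared deviations, and that minimizer is the arithmetic mean:

$$
Q(a) \;=\; \sum_{i=1}^{n} (x_i - a)^2, \qquad
Q'(a) \;=\; -2\sum_{i=1}^{n}(x_i - a) \;=\; 0
\;\Longrightarrow\;
a \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i .
$$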

   Gauss soon announced that he had already been using least squares since 1795, a somewhat doubtful claim. After Legendre's publication, Gauss became interested in the mathematics of least squares, and he showed in 1809 that the method gave the best possible estimate of a parameter if the errors of the measurements were assumed to follow the normal distribution. This distribution, whose importance for mathematical probability and statistics was decisive, was first shown by the French mathematician Abraham de Moivre in the 1730s to be the limit (as the number of events increases) for the binomial distribution. In particular, this meant that a continuous function (the normal distribution) and the power of calculus could be substituted for a discrete function (the binomial distribution) and laborious numerical methods. Laplace used the normal distribution extensively as part of his strategy for applying probability to very large numbers of events. The most important problem of this kind in the 18th century involved estimating populations from smaller samples. Laplace also had an important role in reformulating the method of least squares as a problem of probabilities. For much of the 19th century, least squares was overwhelmingly the most important instance of statistics in its guise as a tool of estimation and the measurement of uncertainty. It had an important role in astronomy, geodesy, and related measurement disciplines, including even quantitative psychology. Later, about 1900, it provided a mathematical basis for a broader body of statistical methods that came to be used across a wide range of fields.
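De Moivre's limit, stated in modern notation (the de Moivre-Laplace theorem), approximates the binomial probability of k successes in n trials of probability p by the normal density:

$$
\binom{n}{k} p^{k} (1-p)^{\,n-k} \;\approx\;
\frac{1}{\sqrt{2\pi\, n p(1-p)}}\,
\exp\!\left(-\frac{(k - np)^2}{2\, n p (1-p)}\right)
\qquad\text{for large } n .
$$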

Statistical theories in the sciences
 The role of probability and statistics in the sciences was not limited to estimation and measurement. Equally significant, and no less important for the formation of the mathematical field, were statistical theories of collective phenomena that bypassed the study of individuals. The social science bearing the name statistics was the prototype of this approach. Quetelet advanced its mathematical level by incorporating the normal distribution into it. He argued that human traits of every sort, from chest circumference and height to the distribution of propensities to marry or commit crimes, conformed to the astronomer's error law. The kinetic theory of gases of Clausius, Maxwell, and the Austrian physicist Ludwig Boltzmann was also a statistical one. Here it was not the imprecision or uncertainty of scientific measurements but the motions of the molecules themselves to which statistical understandings and probabilistic mathematics were applied. Once again, the error law played a crucial role. The Maxwell-Boltzmann distribution law of molecular velocities, as it has come to be known, is a three-dimensional version of this same function. In importing it into physics, Maxwell drew both on astronomical error theory and on Quetelet's social physics.
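In modern notation: each velocity component of a molecule of mass m in a gas at temperature T follows the error law with variance kT/m, and the resulting distribution of molecular speeds is

$$
f(v) \;=\; 4\pi \left(\frac{m}{2\pi k T}\right)^{3/2} v^{2}
\exp\!\left(-\frac{m v^{2}}{2 k T}\right),
$$

the three-dimensional version of the Gaussian referred to in the text (k is Boltzmann's constant).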

Biometry
 The English biometric school developed from the work of the polymath Francis Galton, cousin of Charles Darwin. Galton admired Quetelet, but he was critical of the statistician's obsession with mean values rather than variation. The normal law, as he began to call it, was for him a way to measure and analyze variability. This was especially important for studies of biological evolution, since Darwin's theory was about natural selection acting on natural diversity. A figure from Galton's 1877 paper on breeding sweet peas shows a physical model, now known as the Galton board, that he employed to explain the normal distribution of inherited characteristics; in particular, he used his model to explain how a population could preserve the same variability from one generation to the next even though offspring tended to fall back toward the average, a process he called reversion and, later, regression to the mean. Galton was also founder of the eugenics movement, which called for guiding the evolution of human populations in the same way that breeders improve chickens or cows. He developed measures of the transmission of parental characteristics to their offspring: the children of exceptional parents were generally somewhat exceptional themselves, but there was always, on average, some reversion or regression toward the population mean. He developed the elementary mathematics of regression and correlation as a theory of hereditary transmission and thus as statistical biological theory rather than as a mathematical tool. However, Galton came to recognize that these methods could be applied to data in many fields, and by 1889, when he published his Natural Inheritance, he stressed the flexibility and adaptability of his statistical tools.
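A short simulation (an illustration in modern code, not Galton's own apparatus) shows the behaviour the board was built to exhibit: each ball is deflected left or right at every row of pins with equal probability, so the bin it lands in follows a binomial law, which after many rows takes on the familiar bell shape of the error curve.

```python
# Simulate a Galton board: count how many balls land in each bin after
# n_rows independent left/right deflections of probability 1/2 each.
import random
from collections import Counter

def galton_board(n_balls=10_000, n_rows=12, seed=0):
    rng = random.Random(seed)
    bins = Counter()
    for _ in range(n_balls):
        rightward = sum(rng.random() < 0.5 for _ in range(n_rows))  # rightward deflections
        bins[rightward] += 1
    return bins

counts = galton_board()
for k in range(13):                      # bins 0..n_rows
    print(f"bin {k:2d}: {'#' * (counts[k] // 50)}")   # crude text histogram
```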

      Still, evolution and eugenics remained central to the development of statistical mathematics. The most influential site for the development of statistics was the biometric laboratory set up at University College London by Galton's admirer, the applied mathematician Karl Pearson. From about 1892 he collaborated with the English biologist Walter F.R. Weldon on quantitative studies of evolution, and he soon began to attract an assortment of students from many countries and disciplines who hoped to learn the new statistical methods. Their journal, Biometrika, was for many years the most important venue for publishing new statistical tools and for displaying their uses.

      Biometry was not the only source of new developments in statistics around the turn of the 20th century. German social statisticians such as Wilhelm Lexis had turned to more mathematical approaches some decades earlier. In England, the economist Francis Edgeworth became interested in statistical mathematics in the early 1880s. One of Pearson's earliest students, George Udny Yule, turned away from biometry and especially from eugenics in favour of the statistical investigation of social data. Nevertheless, biometry provided an important model, and many statistical techniques, for other disciplines. The 20th-century fields of psychometrics, concerned especially with mental testing, and econometrics, which focused on economic time series, reveal this relationship in their very names.

Samples and experiments
      Near the beginning of the 20th century, sampling regained its respectability in social statistics, for reasons that at first had little to do with mathematics. Early advocates, such as the first director of the Norwegian Central Bureau of Statistics, A.N. Kiaer, thought of their task primarily in terms of attaining representativeness in relation to the most important variables—for example, geographic region, urban and rural, rich and poor. The London statistician Arthur Bowley was among the first to urge that sampling should involve an element of randomness. Jerzy Neyman, a statistician from Poland who had worked for a time in Pearson's laboratory, wrote a particularly decisive mathematical paper on the topic in 1934. His method of stratified sampling incorporated a concern for representativeness across the most important variables, but it also required that the individuals sampled should be chosen randomly. This was designed to avoid selection biases but also to create populations to which probability theory could be applied to calculate expected errors. George Gallup achieved fame in 1936 when his polls, employing stratified sampling, successfully predicted the reelection of Franklin Delano Roosevelt, in defiance of the Literary Digest's much larger but uncontrolled survey, which forecast a landslide for the Republican Alfred Landon.
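A minimal sketch of the idea (proportional allocation with random draws inside each stratum; the frame, the strata, and the sampling fraction below are invented purely for illustration):

```python
# Stratified random sampling: divide the frame into strata, then draw a
# simple random sample within each stratum in proportion to its size.
import random

def stratified_sample(frame, stratum_of, fraction, seed=0):
    """frame: list of units; stratum_of: function mapping a unit to its stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(fraction * len(units)))  # proportional allocation
        sample.extend(rng.sample(units, k))
    return sample

# Hypothetical sampling frame: (household id, region)
frame = [(i, "urban" if i % 3 else "rural") for i in range(3000)]
chosen = stratified_sample(frame, stratum_of=lambda unit: unit[1], fraction=0.05)
print(len(chosen), sum(1 for _, region in chosen if region == "rural"))
```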

      The alliance of statistical tools and experimental design was also largely an achievement of the 20th century. Here, too, randomization came to be seen as central. The emerging protocol called for the establishment of experimental and control populations and for the use of chance where possible to decide which individuals would receive the experimental treatment. These experimental repertoires emerged gradually in educational psychology during the 1900s and '10s. They were codified and given a full mathematical basis in the next two decades by Ronald A. Fisher, the most influential of all the 20th-century statisticians. Through randomized, controlled experiments and statistical analysis, he argued, scientists could move beyond mere correlation to causal knowledge even in fields whose phenomena are highly complex and variable. His ideas of experimental design and analysis helped to reshape many disciplines, including psychology, ecology, and therapeutic research in medicine, especially during the triumphant era of quantification after 1945.
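A compact sketch of the logic, in the spirit of Fisher's randomized designs rather than any particular experiment of his: assign units to treatment at random, then judge the observed treatment-versus-control difference against the distribution produced by re-randomizing the labels (a permutation test). The data and the true effect size here are simulated purely for illustration.

```python
import random

def randomized_experiment(baseline, effect=2.0, seed=1):
    """Assign half the units to treatment at random; treated outcomes are shifted by `effect`."""
    rng = random.Random(seed)
    treated = set(rng.sample(range(len(baseline)), len(baseline) // 2))
    observed = [y + effect if i in treated else y for i, y in enumerate(baseline)]
    return observed, treated

def mean_difference(observed, treated):
    t = [y for i, y in enumerate(observed) if i in treated]
    c = [y for i, y in enumerate(observed) if i not in treated]
    return sum(t) / len(t) - sum(c) / len(c)

def permutation_p_value(observed, treated, n_perm=2000, seed=2):
    """Share of random re-assignments whose difference is at least the observed one."""
    rng = random.Random(seed)
    actual = mean_difference(observed, treated)
    hits = sum(
        mean_difference(observed, set(rng.sample(range(len(observed)), len(treated)))) >= actual
        for _ in range(n_perm)
    )
    return hits / n_perm

rng0 = random.Random(0)
baseline = [rng0.gauss(10, 3) for _ in range(40)]        # simulated pre-treatment outcomes
observed, treated = randomized_experiment(baseline)
print(round(permutation_p_value(observed, treated), 3))  # small value: difference unlikely to be chance
```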

The modern role of statistics
      In some ways, statistics has finally achieved the Enlightenment aspiration to create a logic of uncertainty. Statistical tools are at work in almost every area of life, including agriculture, business, engineering, medicine, law, regulation, and social policy, as well as in the physical, biological, and social sciences and even in parts of the academic humanities. The replacement of human “computers” with mechanical and then electronic ones in the 20th century greatly lightened the immense burdens of calculation that statistical analysis once required. Statistical tests are used to assess whether observed results, such as increased harvests where fertilizer is applied, or improved earnings where early childhood education is provided, give reasonable assurance that a real effect is present rather than merely a random fluctuation. Following World War II, conventional significance levels for such tests virtually came to define an acceptable result in some of the sciences and also in policy applications.

      From about 1930 there grew up in Britain and America—and a bit later in other countries—a profession of statisticians, experts in inference, who defined standards of experimentation as well as methods of analysis in many fields. To be sure, statistics in the various disciplines retained a fair degree of specificity. There were also divergent schools of statisticians, who disagreed, often vehemently, on some issues of fundamental importance. Fisher was highly critical of Karl Pearson; Jerzy Neyman and Egon Pearson, while unsympathetic to the methods of Egon's father Karl, disagreed also with Fisher's. Under the banner of Bayesianism appeared yet another school, which, in contrast to its predecessors, emphasized the need for subjective assessments of prior probabilities. The most immoderate ambitions for statistics as the royal road to scientific inference depended on unacknowledged compromises that ignored or dismissed these disputes. Despite them, statistics has thrived as a somewhat heterogeneous but powerful set of tools, methods, and forms of expertise that continues to regulate the acquisition and interpretation of quantitative data.

Theodore M. Porter

Additional Reading

General history
Ian Hacking, The Emergence of Probability: A Philosophical Study of Early Ideas About Probability, Induction, and Statistical Inference (1975, reissued 1991), discusses the history of probability and its interpretations in relation to a broad intellectual background, up to about 1750. Lorraine Daston, Classical Probability in the Enlightenment (1988, reprinted 1995), considers probability theory in the 17th and 18th centuries and how it was understood as the mathematics of good sense. Ian Hacking, The Taming of Chance (1990), covers statistical ideas of regularity and order, set against a background of scientific activity and bureaucratic intervention, in the 19th century. Theodore M. Porter, The Rise of Statistical Thinking, 1820–1900 (1986), examines statistics as a strategy for dealing with large numbers, its emergence in bureaucratic social science, and its extension to the natural sciences, and his Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (1995) scrutinizes numbers, calculation, and objectivity, understood as administrative tools. Gerd Gigerenzer et al., The Empire of Chance: How Probability Changed Science and Everyday Life (1989, reprinted 1991), contains essays on the development of probability and statistics, from its roots in gambling and insurance to its relations to science and philosophy, and on to more recent applications in polling and baseball.

Census and social statistics
Patricia Cline Cohen, A Calculating People: The Spread of Numeracy in Early America (1982, reissued 1985), is an engaging history of numbers in commerce, education, and the census in America from the colonial period to the mid-19th century. Margo J. Anderson, The American Census: A Social History (1988), discusses the census in relation to the social and political history of the United States. Alain Desrosières, The Politics of Large Numbers: A History of Statistical Reasoning (1998; originally published in French, 1993), is a general history of statistics and statistical mathematics emphasizing the development of mathematical tools for analyzing social numbers. Martin Bulmer, Kevin Bales, and Kathryn Kish Sklar (eds.), The Social Survey in Historical Perspective, 1880–1940 (1991), introduces sampling methodologies in social research and includes information on the contributions of Beatrice Webb, Florence Kelley, and W.E.B. Du Bois.

Econometrics
Judy L. Klein, Statistical Visions in Time: A History of Time Series Analysis, 1662–1938 (1997), discusses the development of tools for analyzing time series, especially in business and economics. Mary S. Morgan, The History of Econometric Ideas (1990), is an analysis of economic statistics from about 1870 to 1940, especially the attempt to infer causes from numbers.

Psychometrics and eugenics
Donald A. MacKenzie, Statistics in Britain, 1865–1930: The Social Construction of Scientific Knowledge (1981), introduces the British founders of modern statistics and the relation of their work to eugenic ambitions. Kurt Danziger, Constructing the Subject: Historical Origins of Psychological Research (1990, reissued 1994), discusses statistical psychology and intelligence testing in relation to the administration of American public schools.

Medical statistics
J. Rosser Matthews, Quantification and the Quest for Medical Certainty (1995), describes statistical ideas and methods in medicine in the 19th and early 20th centuries. Harry M. Marks, The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990 (1997, reissued 2000), describes the regulation of pharmaceuticals in the United States and the randomized clinical trial as a tool of medical administration.

Mathematical history
Anders Hald, A History of Probability and Statistics and Their Applications Before 1750 (1990), and A History of Mathematical Statistics from 1750 to 1930 (1998), provide a useful catalog of statisticians, with information about their lives and mathematical work. Stephen M. Stigler, The History of Statistics: The Measurement of Uncertainty Before 1900 (1986), is a historical introduction to statistical mathematics and methods of analyzing data, especially in astronomy and the social sciences, and his Statistics on the Table: The History of Statistical Concepts and Methods (1999) contains various essays on topics in the history of statistics from the Middle Ages to the 20th century (and is less mathematically demanding than his History of Statistics).
