/euh nal"euh sis/, n., pl. analyses /-seez'/.
1. the separating of any material or abstract entity into its constituent elements (opposed to synthesis).
2. this process as a method of studying the nature of something or of determining its essential features and their relations: the grammatical analysis of a sentence.
3. a presentation, usually in writing, of the results of this process: The paper published an analysis of the political situation.
4. a philosophical method of exhibiting complex concepts or propositions as compounds or functions of more basic ones.
5. Math.
a. an investigation based on the properties of numbers.
b. the discussion of a problem by algebra, as opposed to geometry.
c. the branch of mathematics consisting of calculus and its higher developments.
d. a system of calculation, as combinatorial analysis or vector analysis.
e. a method of proving a proposition by assuming the result and working backward to something that is known to be true. Cf. synthesis (def. 4).
6. Chem.
a. intentionally produced decomposition or separation of materials into their ingredients or elements, as to find their kind or quantity.
b. the ascertainment of the kind or amount of one or more of the constituents of materials, whether obtained in separate form or not. Cf. qualitative analysis, quantitative analysis.
7. psychoanalysis.
8. Computers. See systems analysis.
[1575-85; < NL < Gk, equiv. to analý(ein) to loosen up (ana- ANA- + lýein to loosen) + -sis -SIS]

* * *

Field of mathematics that incorporates the methods of algebra and calculus, specifically of limits, continuity, and infinite series, to analyze classes of functions and equations having general properties (e.g., differentiability).

Analysis builds on the work of G.W. Leibniz and Isaac Newton by exploring the applications of the derivative and the integral. Several distinct but related subfields have developed, including the calculus of variations, differential equations, Fourier analysis (see Fourier transform), complex analysis, vector and tensor analysis, real analysis, and functional analysis. See also numerical analysis.
In chemistry, the determination of the properties and composition of samples of materials; qualitative analysis establishes what is there, and quantitative analysis measures how much.

A large body of systematic procedures (analytical chemistry) has evolved in close association with other branches of the physical sciences since their beginnings. A sample of a single compound may be analyzed to establish its elemental composition (see element, molecular weight) or molecular structure; many measurements use spectroscopy and spectrophotometry. A mixed sample is usually analyzed by separating, detecting, and identifying its components by methods that depend on differences in their properties (e.g., volatility, mobility in an electric or gravitational field, distribution between liquids that do not mix). The many types of chromatography are increasingly useful, particularly with biological and biochemical samples.
(as used in expressions)
analysis of algorithms
cost-benefit analysis
input-output analysis

* * *


      a branch of mathematics that deals with continuous change and with certain general types of processes that have emerged from the study of continuous change, such as limits, differentiation, and integration. Since the discovery of the differential and integral calculus by Isaac Newton and Gottfried Wilhelm Leibniz at the end of the 17th century, analysis has grown into an enormous and central field of mathematical research, with applications throughout the sciences and in areas such as finance, economics, and sociology.

      The historical origins of analysis can be found in attempts to calculate spatial quantities such as the length of a curved line or the area enclosed by a curve. These problems can be stated purely as questions of mathematical technique, but they have a far wider importance because they possess a broad variety of interpretations in the physical world. The area inside a curve, for instance, is of direct interest in land measurement: how many acres does an irregularly shaped plot of land contain? But the same technique also determines the mass of a uniform sheet of material bounded by some chosen curve or the quantity of paint needed to cover an irregularly shaped surface. Less obviously, these techniques can be used to find the total distance traveled by a vehicle moving at varying speeds, the depth at which a ship will float when placed in the sea, or the total fuel consumption of a rocket.

      Similarly, the mathematical technique for finding a tangent line to a curve at a given point can also be used to calculate the steepness of a curved hill or the angle through which a moving boat must turn to avoid a collision. Less directly, it is related to the extremely important question of the calculation of instantaneous velocity or other instantaneous rates of change, such as the cooling of a warm object in a cold room or the propagation of a disease organism through a human population.

      This article begins with a brief introduction to the historical background of analysis and to basic concepts such as number systems, functions, continuity, infinite series, and limits, all of which are necessary for an understanding of analysis. Following this introduction is a full technical review, from calculus to nonstandard analysis, and then the article concludes with a complete history.

Historical background

Bridging the gap between arithmetic and geometry
      Mathematics divides phenomena into two broad classes, discrete and continuous, historically corresponding to the division between arithmetic and geometry. Discrete systems can be subdivided only so far, and they can be described in terms of whole numbers 0, 1, 2, 3, …. Continuous systems can be subdivided indefinitely, and their description requires the real numbers, numbers represented by decimal expansions such as 3.14159…, possibly going on forever. Understanding the true nature of such infinite decimals lies at the heart of analysis.

      The distinction between discrete mathematics and continuous mathematics is a central issue for mathematical modeling, the art of representing features of the natural world in mathematical form. The universe does not contain or consist of actual mathematical objects, but many aspects of the universe closely resemble mathematical concepts. For example, the number two does not exist as a physical object, but it does describe an important feature of such things as human twins and binary stars. In a similar manner, the real numbers provide satisfactory models for a variety of phenomena, even though no physical quantity can be measured accurately to more than a dozen or so decimal places. It is not the values of infinitely many decimal places that apply to the real world but the deductive structures that they embody and enable.

      Analysis came into being because many aspects of the natural world can profitably be considered as being continuous—at least, to an excellent degree of approximation. Again, this is a question of modeling, not of reality. Matter is not truly continuous; if matter is subdivided into sufficiently small pieces, then indivisible components, or atoms, will appear. But atoms are extremely small, and, for most applications, treating matter as though it were a continuum introduces negligible error while greatly simplifying the computations. For example, continuum modeling is standard engineering practice when studying the flow of fluids such as air or water, the bending of elastic materials, the distribution or flow of electric current, and the flow of heat.

Discovery of the calculus and the search for foundations
      Two major steps led to the creation of analysis. The first was the discovery of the surprising relationship, known as the fundamental theorem of calculus, between spatial problems involving the calculation of some total size or value, such as length, area, or volume (integration), and problems involving rates of change, such as slopes of tangents and velocities (differentiation). Credit for the independent discovery, about 1670, of the fundamental theorem of calculus together with the invention of techniques to apply this theorem goes jointly to Gottfried Wilhelm Leibniz and Isaac Newton.

      While the utility of calculus in explaining physical phenomena was immediately apparent, its use of infinity in calculations (through the decomposition of curves, geometric bodies, and physical motions into infinitely many small parts) generated widespread unease. In particular, the Anglican bishop George Berkeley published a famous pamphlet, The Analyst; or, A Discourse Addressed to an Infidel Mathematician (1734), pointing out that calculus—at least, as presented by Newton and Leibniz—possessed serious logical flaws. Analysis grew out of the resulting painstakingly close examination of previously loosely defined concepts such as function and limit.

      Newton's and Leibniz's approach to calculus had been primarily geometric, involving ratios with “almost zero” divisors—Newton's “fluxions” and Leibniz's “infinitesimals.” During the 18th century calculus became increasingly algebraic, as mathematicians—most notably the Swiss Leonhard Euler and the Italian-French Joseph-Louis Lagrange—began to generalize the concepts of continuity and limits from geometric curves and bodies to more abstract algebraic functions and began to extend these ideas to complex numbers. Although these developments were not entirely satisfactory from a foundational standpoint, they were fundamental to the eventual refinement of a rigorous basis for calculus by the Frenchman Augustin-Louis Cauchy, the Bohemian Bernhard Bolzano, and above all the German Karl Weierstrass in the 19th century.

Technical preliminaries

Numbers and functions
Number systems
      Throughout this article are references to a variety of number systems—that is, collections of mathematical objects (numbers) that can be operated on by some or all of the standard operations of arithmetic: addition, multiplication, subtraction, and division. Such systems have a variety of technical names (e.g., group, ring, field) that are not employed here. This article shall, however, indicate which operations are applicable in the main systems of interest. These main number systems are:
● a. The natural numbers ℕ. These numbers are the positive (and zero) whole numbers 0, 1, 2, 3, 4, 5, …. If two such numbers are added or multiplied, the result is again a natural number.
● b. The integers ℤ. These numbers are the positive and negative whole numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …. If two such numbers are added, subtracted, or multiplied, the result is again an integer.
● c. The rational numbers ℚ. These numbers are the positive and negative fractions p/q where p and q are integers and q ≠ 0. If two such numbers are added, subtracted, multiplied, or divided (except by 0), the result is again a rational number.
● d. The real numbers ℝ. These numbers are the positive and negative infinite decimals (including terminating decimals that can be considered as having an infinite sequence of zeros on the end). If two such numbers are added, subtracted, multiplied, or divided (except by 0), the result is again a real number.
● e. The complex numbers ℂ. These numbers are of the form x + iy where x and y are real numbers and i = √(−1). (For further explanation, see the section Complex analysis.) If two such numbers are added, subtracted, multiplied, or divided (except by 0), the result is again a complex number.

      In simple terms, a function f is a mathematical rule that assigns to a number x (in some number system and possibly with certain limitations on its value) another number f(x). For example, the function “square” assigns to each number x its square x². Note that it is the general rule, not specific values, that constitutes the function.

      The common functions that arise in analysis are usually definable by formulas, such as f(x) = x². They include the trigonometric functions sin (x), cos (x), tan (x), and so on; the logarithmic function log (x); the exponential function exp (x) or eˣ (where e = 2.71828… is a special constant called the base of natural logarithms); and the square root function √x. However, functions need not be defined by single formulas (indeed by any formulas). For example, the absolute value function |x| is defined to be x when x ≥ 0 but −x when x < 0 (where ≥ indicates greater than or equal to and < indicates less than).
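To make the rule-versus-formula point concrete, here is a minimal sketch in Python (an illustrative language choice, not part of the article); both the single-formula function x² and the piecewise-defined absolute value are ordinary rules that return a number for each input:

```python
def square(x):
    """The function "square": assigns to each number x its square."""
    return x * x

def absolute_value(x):
    """|x|, defined piecewise: x when x >= 0, and -x when x < 0."""
    return x if x >= 0 else -x
```

Both definitions embody the general rule itself; particular values such as square(3) = 9 are merely evaluations of that rule.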

The problem of continuity
      The logical difficulties involved in setting up calculus on a sound basis are all related to one central problem, the notion of continuity. This in turn leads to questions about the meaning of quantities that become infinitely large or infinitely small—concepts riddled with logical pitfalls. For example, a circle of radius r has circumference 2πr and area πr², where π is the famous constant 3.14159…. Establishing these two properties is not entirely straightforward, although an adequate approach was developed by the geometers of ancient Greece, especially Eudoxus of Cnidus and Archimedes. It is harder than one might expect to show that the circumference of a circle is proportional to its radius and that its area is proportional to the square of its radius. The really difficult problem, though, is to show that the constant of proportionality for the circumference is precisely twice the constant of proportionality for the area—that is, to show that the constant now called π really is the same in both formulas. This boils down to proving a theorem (first proved by Archimedes) that does not mention π explicitly at all: the area of a circle is the same as that of a rectangle, one of whose sides is equal to the circle's radius and the other to half the circle's circumference.

Approximations in geometry
 A simple geometric argument shows that such an equality must hold to a high degree of approximation. The idea is to slice the circle like a pie, into a large number of equal pieces, and to reassemble the pieces to form an approximate rectangle (see the figure). Then the area of the “rectangle” is closely approximated by its height, which equals the circle's radius, multiplied by the length of one set of curved sides—which together form one-half of the circle's circumference. Unfortunately, because of the approximations involved, this argument does not prove the theorem about the area of a circle. Further thought suggests that as the slices get very thin, the error in the approximation becomes very small. But that still does not prove the theorem, for an error, however tiny, remains an error. If it made sense to talk of the slices being infinitesimally thin, however, then the error would disappear altogether, or at least it would become infinitesimal.

      Actually, there exist subtle problems with such a construction. It might justifiably be argued that if the slices are infinitesimally thin, then each has zero area; hence, joining them together produces a rectangle with zero total area since 0 + 0 + 0 +⋯ = 0. Indeed, the very idea of an infinitesimal quantity is paradoxical because the only number that is smaller than every positive number is 0 itself.

      The same problem shows up in many different guises. When calculating the length of the circumference of a circle, it is attractive to think of the circle as a regular polygon with infinitely many straight sides, each infinitesimally long. (Indeed, a circle is the limiting case for a regular polygon as the number of its sides increases.) But while this picture makes sense for some purposes—illustrating that the circumference is proportional to the radius—for others it makes no sense at all. For example, the “sides” of the infinitely many-sided polygon must have length 0, which implies that the circumference is 0 + 0 + 0 + ⋯ = 0, clearly nonsense.

 Similar paradoxes occur in the manipulation of infinite series, such as
1/2 + 1/4 + 1/8 +⋯ (1)
continuing forever. This particular series is relatively harmless, and its value is precisely 1. To see why this should be so, consider the partial sums formed by stopping after a finite number of terms. The more terms, the closer the partial sum is to 1. It can be made as close to 1 as desired by including enough terms. Moreover, 1 is the only number for which the above statements are true. It therefore makes sense to define the infinite sum to be exactly 1. The figure illustrates this geometric series graphically by repeatedly bisecting a unit square. (Series whose successive terms differ by a common ratio, in this example by 1/2, are known as geometric series.)
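The behaviour of the partial sums can be checked directly; this short Python sketch (function name ours) sums the first n terms and shows the gap to 1 shrinking:

```python
def partial_sum(n):
    """Sum of the first n terms of 1/2 + 1/4 + 1/8 + ...."""
    return sum(1 / 2**k for k in range(1, n + 1))

# Each partial sum falls short of 1 by exactly 1/2**n, so the sums
# can be made as close to 1 as desired by taking enough terms.
```

For example, partial_sum(10) falls short of 1 by exactly 1/1024.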

      Other infinite series are less well-behaved—for example, the series

1 − 1 + 1 − 1 + 1 − 1 + ⋯ . (2)
If the terms are grouped one way,
(1 − 1) + (1 − 1) + (1 − 1) +⋯,
then the sum appears to be
0 + 0 + 0 +⋯ = 0.
But if the terms are grouped differently,
1 + (−1 + 1) + (−1 + 1) + (−1 + 1) +⋯,
then the sum appears to be
1 + 0 + 0 + 0 +⋯ = 1.
It would be foolish to conclude that 0 = 1. Instead, the conclusion is that infinite series do not always obey the traditional rules of algebra, such as those that permit the arbitrary regrouping of terms.

      The difference between series (1) and (2) is clear from their partial sums. The partial sums of (1) get closer and closer to a single fixed value—namely, 1. The partial sums of (2) alternate between 0 and 1, so that the series never settles down. A series that does settle down to some definite value, as more and more terms are added, is said to converge, and the value to which it converges is known as the limit of the partial sums; all other series are said to diverge.
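The contrast shows up immediately if the partial sums are computed; in this Python sketch (names ours), series (2) never settles down:

```python
def partial_sums_alternating(n):
    """First n partial sums of 1 - 1 + 1 - 1 + ... (series (2))."""
    sums, total = [], 0
    for k in range(n):
        total += (-1) ** k   # terms +1, -1, +1, -1, ...
        sums.append(total)
    return sums

# The partial sums alternate 1, 0, 1, 0, ... forever, so no single
# limit is approached and the series diverges.
```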

The limit of a sequence
      All the great mathematicians who contributed to the development of calculus had an intuitive concept of limits, but it was only with the work of the German mathematician Karl Weierstrass that a completely satisfactory formal definition of the limit of a sequence was obtained.

      Consider a sequence (an) of real numbers, by which is meant an infinite list

a0, a1, a2, ….
It is said that an converges to (or approaches) the limit a as n tends to infinity, if the following mathematical statement holds true: For every ε > 0, there exists a whole number N such that |an − a| < ε for all n > N. Intuitively, this statement says that, for any chosen degree of approximation (ε), there is some point in the sequence (N) such that, from that point onward (n > N), every number in the sequence (an) approximates a within an error less than the chosen amount (|an − a| < ε). Stated less formally, when n becomes large enough, an can be made as close to a as desired.

      For example, consider the sequence in which an = 1/(n + 1), that is, the sequence

1, 1/2, 1/3, 1/4, 1/5, …,
going on forever. Every number in the sequence is greater than zero, but, the farther along the sequence goes, the closer the numbers get to zero. For example, all terms from the 10th onward are less than or equal to 0.1, all terms from the 100th onward are less than or equal to 0.01, and so on. Terms smaller than 0.000000001, for instance, are found from the 1,000,000,000th term onward. In Weierstrass's terminology, this sequence converges to its limit 0 as n tends to infinity. The difference |an − 0| can be made smaller than any ε by choosing n sufficiently large. In fact, n > 1/ε suffices. So, in Weierstrass's formal definition, N is taken to be the smallest integer > 1/ε.

      This example brings out several key features of Weierstrass's idea. First, it does not involve any mystical notion of infinitesimals; all quantities involved are ordinary real numbers. Second, it is precise; if a sequence possesses a limit, then there is exactly one real number that satisfies the Weierstrass definition. Finally, although the numbers in the sequence tend to the limit 0, they need not actually reach that value.

Continuity of functions
      The same basic approach makes it possible to formalize the notion of continuity of a function. Intuitively, a function f(t) approaches a limit L as t approaches a value p if, whatever error one is prepared to tolerate, f(t) differs from L by less than that error for all t sufficiently close to p. But what exactly is meant by phrases such as “error,” “prepared to tolerate,” and “sufficiently close”?

      Just as for limits of sequences, the formalization of these ideas is achieved by assigning symbols to “tolerable error” (ε) and to “sufficiently close” (δ). Then the definition becomes: A function f(t) approaches a limit L as t approaches a value p if for all ε > 0 there exists δ > 0 such that |f(t) − L| < ε whenever |t − p| < δ. (Note carefully that first the size of the tolerable error must be decided upon; only then can it be determined what it means to be “sufficiently close.”)
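For a specific function the δ can be exhibited. This Python sketch (an illustrative example of ours, not from the article) takes f(t) = t² as t approaches p = 1 with limit L = 1 and chooses δ = min(1, ε/3), which works because |t² − 1| = |t − 1||t + 1| ≤ 3|t − 1| whenever |t − 1| < 1:

```python
def delta_for(eps):
    """A delta witnessing the limit of t**2 at p = 1 for a given eps."""
    return min(1.0, eps / 3.0)

def spot_check(eps, samples=10000):
    """Verify |f(t) - 1| < eps on a grid of t with |t - 1| < delta."""
    delta = delta_for(eps)
    return all(
        abs((1 - delta + 2 * delta * k / samples) ** 2 - 1) < eps
        for k in range(1, samples)
    )
```

Note the order of choices mirrors the definition: ε is fixed first, and only then is δ determined.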

      Having defined the notion of limit in this context, it is straightforward to define continuity of a function. Continuous functions preserve limits; that is, a function f is continuous at a point p if the limit of f(t) as t approaches p is equal to f(p). And f is continuous if it is continuous at every p for which f(p) is defined. Intuitively, continuity means that small changes in t produce small changes in f(t)—there are no sudden jumps.

Properties of the real numbers
      Earlier, the real numbers were described as infinite decimals, although such a description makes no logical sense without the formal concept of a limit. This is because an infinite decimal expansion such as 3.14159… (the value of the constant π) actually corresponds to the sum of an infinite series

3 + 1/10 + 4/100 + 1/1,000 + 5/10,000 + 9/100,000 +⋯,
and the concept of limit is required to give such a sum meaning.

      It turns out that the real numbers (unlike, say, the rational numbers) have important properties that correspond to intuitive notions of continuity. For example, consider the function x² − 2. This function takes the value −1 when x = 1 and the value +2 when x = 2. Moreover, it varies continuously with x. It seems intuitively plausible that, if a continuous function is negative at one value of x (here at x = 1) and positive at another value of x (here at x = 2), then it must equal zero for some value of x that lies between these values (here for some value between 1 and 2). This expectation is correct if x is a real number: the expression is zero when x = √2 = 1.41421…. However, it is false if x is restricted to rational values because there is no rational number x for which x² = 2. (The fact that √2 is irrational has been known since the time of the ancient Greeks. See Sidebar: Incommensurables.)
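The argument can even be made computational: repeatedly halving the interval [1, 2] while keeping the sign change of x² − 2 trapped inside produces better and better approximations to √2. A minimal bisection sketch in Python (names ours):

```python
def bisect_sqrt2(steps=50):
    """Trap the zero of x**2 - 2 between lo and hi, halving each time."""
    lo, hi = 1.0, 2.0            # f(1) = -1 < 0 and f(2) = +2 > 0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if mid * mid - 2 < 0:
            lo = mid             # sign change lies in the upper half
        else:
            hi = mid             # sign change lies in the lower half
    return (lo + hi) / 2
```

Fifty halvings shrink the bracketing interval to about 10⁻¹⁵, yielding 1.41421… to full double precision; over the rationals alone, the number being approached simply does not exist.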

      In effect, there are gaps in the system of rational numbers. By exploiting those gaps, continuously varying quantities can change sign without passing through zero. The real numbers fill in the gaps by providing additional numbers that are the limits of sequences of approximating rational numbers. Formally, this feature of the real numbers is captured by the concept of completeness.

      One awkward aspect of the concept of the limit of a sequence (an) is that it can sometimes be problematic to find what the limit a actually is. However, there is a closely related concept, attributable to the French mathematician Augustin-Louis Cauchy, in which the limit need not be specified. The intuitive idea is simple. Suppose that a sequence (an) converges to some unknown limit a. Given two sufficiently large values of n, say r and s, then both ar and as are very close to a, which in particular means that they are very close to each other. The sequence (an) is said to be a Cauchy sequence if it behaves in this manner. Specifically, (an) is Cauchy if, for every ε > 0, there exists some N such that, whenever r, s > N, |ar − as| < ε. Convergent sequences are always Cauchy, but is every Cauchy sequence convergent? The answer is yes for sequences of real numbers but no for sequences of rational numbers (in the sense that they may not have a rational limit).
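A finite spot-check of the Cauchy condition for the earlier sequence aₙ = 1/(n + 1) can be sketched as follows (Python, helper names ours):

```python
def is_cauchy_beyond(eps, N, probe=200):
    """Check |a_r - a_s| < eps for all r, s in (N, N + probe]."""
    terms = [1 / (n + 1) for n in range(N + 1, N + 1 + probe)]
    return all(abs(r - s) < eps for r in terms for s in terms)

# Beyond N = 100 all terms lie below 1/102, so any two of them differ
# by less than 0.01; a tighter eps requires a larger N.
```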

      A number system is said to be complete if every Cauchy sequence converges. The real numbers are complete; the rational numbers are not. Completeness is one of the key features of the real number system, and it is a major reason why analysis is often carried out within that system.

      The real numbers have several other features that are important for analysis. They satisfy various ordering properties associated with the relation less than (<). The simplest of these properties for real numbers x, y, and z are:
● a. Trichotomy law. One and only one of the statements x < y, x = y, and x > y is true.
● b. Transitive law. If x < y and y < z, then x < z.
● c. If x < y, then x + z < y + z for all z.
● d. If x < y and z > 0, then xz < yz.

      More subtly, the real number system is Archimedean. This means that, if x and y are real numbers and both x, y > 0, then x + x +⋯+ x > y for some finite sum of x's. The Archimedean property indicates that the real numbers contain no infinitesimals. Arithmetic, completeness, ordering, and the Archimedean property completely characterize the real number system.

      With the technical preliminaries out of the way, the two fundamental aspects of calculus may be examined:
● a. Finding the instantaneous rate of change of a variable quantity.
● b. Calculating areas, volumes, and related “totals” by adding together many small parts.

      Although it is not immediately obvious, each process is the inverse of the other, and this is why the two are brought together under the same overall heading. The first process is called differentiation, the second integration. Following a discussion of each, the relationship between them will be examined.

      Differentiation is about rates of change; for geometric curves and figures, this means determining the slope, or tangent, along a given direction. Being able to calculate rates of change also allows one to determine where maximum and minimum values occur—the title of Leibniz's first calculus publication was “Nova Methodus pro Maximis et Minimis, Itemque Tangentibus, quae nec Fractas nec Irrationales Quantitates Moratur, et Singulare pro illis Calculi Genus” (1684; “A New Method for Maxima and Minima, as Well as Tangents, Which Is Impeded Neither by Fractional nor by Irrational Quantities, and a Remarkable Type of Calculus for This”). Early applications for calculus included the study of gravity and planetary motion, fluid flow and ship design, and geometric curves and bridge engineering.

Average rates of change
 A simple illustrative example of rates of change is the speed of a moving object. An object moving at a constant speed travels a distance that is proportional to the time. For example, a car moving at 50 kilometres per hour (km/hr) travels 50 km in 1 hr, 100 km in 2 hr, 150 km in 3 hr, and so on. A graph of the distance traveled against the time elapsed looks like a straight line whose slope, or gradient, yields the speed (see the figure).

      Constant speeds pose no particular problems—in the example above, any time interval yields the same speed—but variable speeds are less straightforward. Nevertheless, a similar approach can be used to calculate the average speed of an object traveling at varying speeds: simply divide the total distance traveled by the time taken to traverse it. Thus, a car that takes 2 hr to travel 100 km moves with an average speed of 50 km/hr. However, it may not travel at the same speed for the entire period. It may slow down, stop, or even go backward for parts of the time, provided that during other parts it speeds up enough to cover the total distance of 100 km. Thus, average speeds—certainly if the average is taken over long intervals of time—do not tell us the actual speed at any given moment.

Instantaneous rates of change
      In fact, it is not so easy to make sense of the concept of “speed at a given moment.” How long is a moment? Zeno of Elea, a Greek philosopher who flourished about 450 BC, pointed out in one of his celebrated paradoxes that a moving arrow, at any instant of time, is fixed. During zero time it must travel zero distance. Another way to say this is that the instantaneous speed of a moving object cannot be calculated by dividing the distance that it travels in zero time by the time that it takes to travel that distance. This calculation leads to a fraction, 0/0, that does not possess any well-defined meaning. Normally, a fraction indicates a specific quotient. For example, 6/3 means 2, the number that, when multiplied by 3, yields 6. Similarly, 0/0 should mean the number that, when multiplied by 0, yields 0. But any number multiplied by 0 yields 0. In principle, then, 0/0 can take any value whatsoever, and in practice it is best considered meaningless.

      Despite these arguments, there is a strong feeling that a moving object does move at a well-defined speed at each instant. Passengers know when a car is traveling faster or slower. So the meaninglessness of 0/0 is by no means the end of the story. Various mathematicians—both before and after Newton and Leibniz—argued that good approximations to the instantaneous speed can be obtained by finding the average speed over short intervals of time. If a car travels 5 metres in one second, then its average speed is 18 km/hr, and, unless the speed is varying wildly, its instantaneous speed must be close to 18 km/hr. A shorter time period can be used to refine the estimate further.

      If a mathematical formula is available for the total distance traveled in a given time, then this idea can be turned into a formal calculation. For example, suppose that after time t seconds an object travels a distance t² metres. (Similar formulas occur for bodies falling freely under gravity, so this is a reasonable choice.) To determine the object's instantaneous speed after precisely one second, its average speed over successively shorter time intervals will be calculated.

      To start the calculation, observe that between time t = 1 and t = 1.1 the distance traveled is 1.1² − 1 = 0.21. The average speed over that interval is therefore 0.21/0.1 = 2.1 metres per second. For a finer approximation, the distance traveled between times t = 1 and t = 1.01 is 1.01² − 1 = 0.0201, and the average speed is 0.0201/0.01 = 2.01 metres per second. It is clear that the smaller the interval of time, the closer the average speed is to 2 metres per second.
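These computations, and finer ones, can be generated mechanically; a Python sketch (function name ours) of the same arithmetic:

```python
def average_speed(h):
    """Average speed between times 1 and 1 + h for distance t**2."""
    return ((1 + h) ** 2 - 1 ** 2) / h

for h in (0.1, 0.01, 0.001, 0.0001):
    print(h, average_speed(h))   # 2.1, 2.01, 2.001, 2.0001 (up to rounding)
```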

The structure of these successive calculations points very compellingly to an exact value for the instantaneous speed, namely 2 metres per second. Unfortunately, 2 itself never appears among them. However far the process is carried, every computed average speed looks like 2.000…0001, with perhaps a huge number of zeros, but always with a 1 on the end. Neither is there the option of choosing a time interval of 0, because then the distance traveled is also 0, which leads back to the meaningless fraction 0/0.

Formal definition of the derivative
      More generally, suppose an arbitrary time interval h starts from the time t = 1. Then the distance traveled is (1 + h)² − 1², which simplifies to give 2h + h². The time taken is h. Therefore, the average speed over that time interval is (2h + h²)/h, which equals 2 + h, provided h ≠ 0. Obviously, as h approaches zero, this average speed approaches 2. Therefore, the definition of instantaneous speed is satisfied by the value 2 and only that value. What has not been done here—indeed, what the whole procedure deliberately avoids—is to set h equal to 0. As Bishop George Berkeley pointed out in the 18th century, to replace (2h + h²)/h by 2 + h, one must assume h is not zero, and that is what the rigorous definition of a limit achieves.

      Even more generally, suppose the calculation starts from an arbitrary time t instead of a fixed t = 1. Then the distance traveled is (t + h)² − t², which simplifies to 2th + h². The time taken is again h. Therefore, the average speed over that time interval is (2th + h²)/h, or 2t + h. Obviously, as h approaches zero, this average speed approaches the limit 2t.

      This procedure is so important that it is given a special name: the derivative of t² is 2t, and this result is obtained by differentiating t² with respect to t.

      One can now go even further and replace t² by any other function f of time. The distance traveled between times t and t + h is f(t + h) − f(t). The time taken is h. So the average speed is

(f(t + h) − f(t))/h. (3)
If (3) tends to a limit as h tends to zero, then that limit is defined as the derivative of f(t), written f′(t). Another common notation for the derivative is

df/dt,

symbolizing small change in f divided by small change in t. A function is differentiable at t if its derivative exists for that specific value of t. It is differentiable if the derivative exists for all t for which f(t) is defined. A differentiable function must be continuous, but the converse is false. (Indeed, in 1872 Weierstrass produced the first example of a continuous function that cannot be differentiated at any point—a function now known as a nowhere differentiable function.)
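The limit defining f′(t) can be explored numerically. The following sketch (with illustrative names) evaluates the difference quotient of f(t) = t² at t = 3, where the exact derivative 2t equals 6:

```python
# The difference quotient (f(t + h) - f(t))/h for ever smaller h
# approaches the derivative f'(t).  Here f(t) = t**2 and t = 3,
# so the exact limit is 2t = 6.

def difference_quotient(f, t, h):
    return (f(t + h) - f(t)) / h

f = lambda t: t ** 2

for h in (0.1, 0.001, 1e-6):
    print(h, difference_quotient(f, 3.0, h))
```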

Graphical interpretation
 The above ideas have a graphical interpretation. Associated with any function f(t) is a graph in which the horizontal axis represents the variable t and the vertical axis represents the value of the function. Choose a value for t, calculate f(t), and draw the corresponding point; now repeat for all appropriate t. The result is a curve, the graph of f (see part A of the figure). For example, if f(t) = t², then f(t) = 0 when t = 0, f(t) = 1 when t = 1, f(t) = 4 when t = 2, f(t) = 9 when t = 3, and so on, leading to the curve known as a parabola.

 Expression (3), the numerical calculation of the average speed traveled between times t and t + h, also can be represented graphically. The two times can be plotted as two points on the curve, as shown in the figure, and a line can be drawn joining the two points. This line is called a secant, or chord, of the curve, and its slope corresponds to the change in distance with respect to time—that is, the average speed traveled between t and t + h. If, as h becomes smaller and smaller, this slope tends to a limiting value, then the direction of the chord stabilizes and the chord approximates more and more closely the tangent to the graph at t. Thus, the numerical notion of instantaneous rate of change of f(t) with respect to t corresponds to the geometric notion of the slope of the tangent to the graph.

      The graphical interpretation suggests a number of useful problem-solving techniques. An example is finding the maximum value of a continuously differentiable function f(x) defined in some interval a ≤ x ≤ b. Either f attains its maximum at an endpoint, x = a or x = b, or it attains a maximum for some x inside this interval. In the latter case, as x approaches the maximum value, the curve defined by f rises more and more slowly, levels out, and then starts to fall. In other words, as x increases from a to b, the derivative f′(x) is positive while the function f(x) rises to its maximum value, f′(x) is zero at the value of x for which f(x) has a maximum value, and f′(x) is negative while f(x) declines from its maximum value. Simply stated, maximum values can be located by solving the equation f′(x) = 0.

      It is necessary to check whether the resulting value genuinely is a maximum, however. First, all of the above reasoning applies at any local maximum—a place where f(x) is larger than all values of f(x) for nearby values of x. A function can have several local maxima, not all of which are overall (“global”) maxima. Moreover, the derivative f′(x) vanishes at any (local) minimum value inside the interval. Indeed, it can sometimes vanish at places where the value is neither a maximum nor a minimum. An example is f(x) = x³ for −1 ≤ x ≤ 1. Here f′(x) = 3x² so f′(0) = 0, but 0 is neither a maximum nor a minimum. For x < 0 the value of f(x) gets smaller than the value f(0) = 0, but for x > 0 it gets larger. Such a point is called a point of inflection. In general, solutions of f′(x) = 0 are called critical points of f.

      Local maxima, local minima, and points of inflection are useful features of a function f that can aid in sketching its graph. Solving the equation f′(x) = 0 provides a list of critical values of x near which the shape of the curve is determined—concave up near a local minimum, concave down near a local maximum, and changing concavity at an inflection point. Moreover, between any two adjacent critical points of f, the values of f either increase steadily or decrease steadily—that is, the direction of the slope cannot change. By combining such information, the general qualitative shape of the graph of f can often be determined.

 For example, suppose that f(x) = x³ − 3x + 2 is defined for −3 ≤ x ≤ 3. The critical points are solutions x of 0 = f′(x) = 3x² − 3; that is, x = −1 and x = 1. When x < −1 the slope is positive; for −1 < x < 1 the slope is negative; for x > 1 the slope is positive again. Thus, x = −1 is a local maximum, and x = 1 is a local minimum. Therefore, the graph of f slopes upward from left to right as x runs from −3 to −1, then slopes downward as x runs from −1 to 1, and finally slopes upward again as x runs from 1 to 3. In addition, the value of f at some representative points within these intervals can be calculated to obtain the graph shown in the figure.
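The sign analysis of the slope can be checked mechanically. This Python sketch samples the hand-computed derivative f′(x) = 3x² − 3 once inside each of the three intervals:

```python
# Qualitative shape of f(x) = x**3 - 3*x + 2 on [-3, 3], recovered from
# the sign of its derivative f'(x) = 3*x**2 - 3 between the critical
# points x = -1 and x = 1 (found by hand above).

def f(x):  return x ** 3 - 3 * x + 2
def fp(x): return 3 * x ** 2 - 3          # derivative of f

samples = [-2.0, 0.0, 2.0]                # one point inside each interval
signs = ["rising" if fp(x) > 0 else "falling" for x in samples]
print(signs)                              # slope pattern across the intervals
```

The pattern rising–falling–rising confirms the local maximum at x = −1 (where f(−1) = 4) and the local minimum at x = 1 (where f(1) = 0).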

Higher-order derivatives
      The process of differentiation can be applied several times in succession, leading in particular to the second derivative f″ of the function f, which is just the derivative of the derivative f′. The second derivative often has a useful physical interpretation. For example, if f(t) is the position of an object at time t, then f′(t) is its speed at time t and f″(t) is its acceleration at time t. Newton's laws of motion state that the acceleration of an object is proportional to the total force acting on it; so second derivatives are of central importance in dynamics. The second derivative is also useful for graphing functions, because it can quickly determine whether each critical point, c, corresponds to a local maximum (f″(c) < 0), a local minimum (f″(c) > 0), or a point needing further analysis (f″(c) = 0, in which case the test is inconclusive). Third derivatives occur in such concepts as curvature; and even fourth derivatives have their uses, notably in elasticity. The nth derivative of f(x) is denoted by

f⁽ⁿ⁾(x) or dⁿf/dxⁿ
and has important applications in power series.
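The second-derivative test just described can be sketched in Python, again for the example f(x) = x³ − 3x + 2, whose second derivative f″(x) = 6x is easy to compute by hand:

```python
# Second-derivative test for f(x) = x**3 - 3*x + 2.
# f'(x) = 3x^2 - 3 vanishes at x = -1 and x = 1, and the sign of
# f''(x) = 6x classifies each critical point.

def fpp(x):
    return 6 * x                    # second derivative of x**3 - 3x + 2

def classify(c):
    if fpp(c) < 0:  return "local maximum"
    if fpp(c) > 0:  return "local minimum"
    return "test inconclusive"

for c in (-1.0, 1.0):
    print(c, classify(c))
```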

      An infinite series of the form

a₀ + a₁x + a₂x² + ⋯,
where x and the aⱼ are real numbers, is called a power series. The aⱼ are the coefficients. The series has a legitimate meaning, provided the series converges. In general, there exists a real number R such that the series converges when −R < x < R but diverges if x < −R or x > R. The range of values −R < x < R is called the interval of convergence. The behaviour of the series at x = R or x = −R is more delicate and depends on the coefficients. If R = 0 the series has little utility, but when R > 0 the sum of the infinite series defines a function f(x). Any function f that can be defined by a convergent power series is said to be real-analytic.

      The coefficients of the power series of a real-analytic function can be expressed in terms of derivatives of that function. For values of x inside the interval of convergence, the series can be differentiated term by term; that is,

f′(x) = a₁ + 2a₂x + 3a₃x² + ⋯,
and this series also converges. Repeating this procedure and then setting x = 0 in the resulting expressions shows that a₀ = f(0), a₁ = f′(0), a₂ = f″(0)/2, a₃ = f‴(0)/6, and, in general, aⱼ = f⁽ʲ⁾(0)/j!. That is, within the interval of convergence of f,

f(x) = f(0) + f′(0)x + f″(0)x²/2! + f‴(0)x³/3! + ⋯ + f⁽ʲ⁾(0)xʲ/j! + ⋯.

      This expression is the Maclaurin series of f, otherwise known as the Taylor series of f about 0. A slight generalization leads to the Taylor series of f about a general value x:

f(x + h) = f(x) + f′(x)h + f″(x)h²/2! + ⋯ + f⁽ʲ⁾(x)hʲ/j! + ⋯.

All these series are meaningful only if they converge.

      For example, it can be shown that

eˣ = 1 + x + x²/2! + x³/3! + ⋯,
sin (x) = x − x³/3! + x⁵/5! − ⋯,
cos (x) = 1 − x²/2! + x⁴/4! − ⋯,
and these series converge for all x.
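Partial sums of these three series already give excellent approximations for moderate x, as this Python sketch shows by comparing them with the math-library values:

```python
# Partial sums of the power series for e^x, sin x, and cos x,
# compared with the math-library values.
import math

def exp_series(x, n=20):
    return sum(x ** k / math.factorial(k) for k in range(n))

def sin_series(x, n=20):
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n))

def cos_series(x, n=20):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n))

x = 1.2
print(exp_series(x), math.exp(x))
print(sin_series(x), math.sin(x))
print(cos_series(x), math.cos(x))
```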

      Like differentiation, integration has its roots in ancient problems—particularly, finding the area or volume of irregular objects and finding their centre of mass. Essentially, integration generalizes the process of summing up many small factors to determine some whole.

 Also like differentiation, integration has a geometric interpretation. The (definite) integral of the function f, between initial and final values t = a and t = b, is the area of the region enclosed by the graph of f, the horizontal axis, and the vertical lines t = a and t = b, as shown in the figure. It is denoted by the symbol

∫ab f(t)dt.
Here the symbol ∫ is an elongated s, for sum, because the integral is the limit of a particular kind of sum. The values a and b are often, confusingly, called the limits of the integral; this terminology is unrelated to the limit concept introduced in the section Technical preliminaries.

The fundamental theorem of calculus
      The process of calculating integrals is called integration. Integration is related to differentiation by the fundamental theorem of calculus, which states that (subject to the mild technical condition that the function be continuous) the derivative of the integral is the original function. In symbols, the fundamental theorem is stated as

d/dt( ∫atf(u)du) = f(t).

 The reasoning behind this theorem (see figure) can be demonstrated in a logical progression, as follows: Let A(t) be the integral of f from a to t. Then the derivative of A(t) is very closely approximated by the quotient (A(t + h) − A(t))/h. This is 1/h times the area under the graph of f between t and t + h. For continuous functions f the value of f(u), for u between t and t + h, changes only slightly, so it must be very close to f(t). The area is therefore close to hf(t), so the quotient is close to hf(t)/h = f(t). Taking the limit as h tends to zero, the result follows.

      Strict mathematical logic aside, the importance of the fundamental theorem of calculus is that it allows one to find areas by antidifferentiation—the reverse process to differentiation. To integrate a given function f, just find a function F whose derivative F′ is equal to f. Then the value of the integral is the difference F(b) − F(a) between the value of F at the two limits. For example, since the derivative of t³ is 3t², take the antiderivative of 3t² to be t³. The area of the region enclosed by the graph of the function y = 3t², the horizontal axis, and the vertical lines t = 1 and t = 2, for example, is given by the integral ∫12 3t²dt. By the fundamental theorem of calculus, this is the difference between the values of t³ when t = 2 and t = 1; that is, 2³ − 1³ = 7.
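The fundamental theorem can be checked numerically for this example: a Riemann sum with many thin strips agrees with the antiderivative difference F(2) − F(1) = 7 (an illustrative sketch; the helper names are arbitrary):

```python
# The fundamental theorem of calculus in action: the integral of 3t^2
# from t = 1 to t = 2 equals F(2) - F(1) for the antiderivative
# F(t) = t^3, and a Riemann sum with many thin strips agrees.

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum with n strips of width (b - a)/n."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda t: 3 * t ** 2
F = lambda t: t ** 3                  # an antiderivative of f

exact  = F(2) - F(1)                  # = 7 by the fundamental theorem
approx = riemann_sum(f, 1.0, 2.0, 100000)
print(exact, approx)
```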

      All the basic techniques of calculus for finding integrals work in this manner. They provide a repertoire of tricks for finding a function whose derivative is a given function. Most of what is taught in schools and colleges under the name calculus consists of rules for calculating the derivatives and integrals of functions of various forms and of particular applications of those techniques, such as finding the length of a curve or the surface area of a solid of revolution.

Table 2 lists the integrals of a small number of elementary functions. In the table, the symbol c denotes an arbitrary constant. (Because the derivative of a constant is zero, the antiderivative of a function is not unique: adding a constant makes no difference. When an integral is evaluated between two specific limits, this constant is subtracted from itself and thus cancels out. In the indefinite integral, another name for the antiderivative, the constant must be included.)

The Riemann integral
      The task of analysis is to provide not a computational method but a sound logical foundation for limiting processes. Oddly enough, when it comes to formalizing the integral, the most difficult part is to define the term area. It is easy to define the area of a shape whose edges are straight; for example, the area of a rectangle is just the product of the lengths of two adjoining sides. But the area of a shape with curved edges can be more elusive. The answer, again, is to set up a suitable limiting process that approximates the desired area with simpler regions whose areas can be calculated.

 The first successful general method for accomplishing this is usually credited to the German mathematician Bernhard Riemann in 1853, although it has many precursors (both in ancient Greece and in China). Given some function f(t), consider the area of the region enclosed by the graph of f, the horizontal axis, and the vertical lines t = a and t = b. Riemann's approach is to slice this region into thin vertical strips (see part A of the figure) and to approximate its area by sums of areas of rectangles, both from the inside and from the outside. If both of these sums converge to the same limiting value as the thickness of the slices tends to zero, then their common value is defined to be the Riemann integral of f between the limits a and b. If this limit exists for all a, b, then f is said to be (Riemann) integrable. Every continuous function is integrable.
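The inner and outer approximations can be sketched for f(t) = t² on [0, 1], where the true area is 1/3. Because this f is increasing, the inner rectangles use left endpoints and the outer rectangles use right endpoints, and the two sums squeeze together as the strips get thinner:

```python
# Riemann's inner (lower) and outer (upper) rectangle sums for
# f(t) = t^2 on [0, 1].  For this increasing function the lower sum
# uses left endpoints and the upper sum uses right endpoints; both
# converge to the exact area 1/3 as the strip width shrinks.

def lower_upper(f, a, b, n):
    h = (b - a) / n
    lower = sum(f(a + i * h) for i in range(n)) * h          # inner rectangles
    upper = sum(f(a + (i + 1) * h) for i in range(n)) * h    # outer rectangles
    return lower, upper

f = lambda t: t ** 2
for n in (10, 100, 1000):
    print(n, lower_upper(f, 0.0, 1.0, n))
```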

Ordinary differential equations

Newton and differential equations
      Analysis is one of the cornerstones of mathematics. It is important not only within mathematics itself but also because of its extensive applications to the sciences. The main vehicles for the application of analysis are differential equations, which relate the rates of change of various quantities to their current values, making it possible—in principle and often in practice—to predict future behaviour. Differential equations arose from the work of Isaac Newton on dynamics in the 17th century, and the underlying mathematical ideas will be sketched here in a modern interpretation.

Newton's laws of motion
      Imagine a body moving along a line, whose distance from some chosen point is given by the function x(t) at time t. (The symbol x is traditional here rather than the symbol f for a general function, but this is purely a notational convention.) The instantaneous velocity of the moving body is the rate of change of distance—that is, the derivative x′(t). Its instantaneous acceleration is the rate of change of velocity—that is, the second derivative x″(t). According to the most important of Newton's laws of motion, the acceleration experienced by a body of mass m is proportional to the force F applied, a principle that can be expressed by the equation

F = mx″. (4)

      Suppose that m and F (which may vary with time) are specified, and one wishes to calculate the motion of the body. Knowing its acceleration alone is not satisfactory; one wishes to know its position x at an arbitrary time t. In order to apply equation (4), one must solve for x, not for its second derivative x″. Thus, one must solve an equation for the quantity x when that equation involves derivatives of x. Such equations are called differential equations, and their solution requires techniques that go well beyond the usual methods for solving algebraic equations.

      For example, consider the simplest case, in which the mass m and force F are constant, as is the case for a body falling under terrestrial gravity. Then equation (4) can be written as

x″(t) = F/m. (5)
Integrating (5) once with respect to time gives
x′(t) = Ft/m + b (6)
where b is an arbitrary constant. Integrating (6) with respect to time yields
x(t) = Ft²/2m + bt + c
with a second constant c. The values of the constants b and c depend upon initial conditions; indeed, c is the initial position, and b is the initial velocity.
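The closed-form solution can be checked by integrating x″ = F/m numerically. This sketch uses Euler's method with illustrative values of F, m, b (initial velocity), and c (initial position):

```python
# Numerically integrating x''(t) = F/m twice (Euler's method) and
# comparing with the closed-form solution x(t) = F t^2 / 2m + b t + c.
# The constants F, m, b, and c are illustrative choices.

F, m = 9.8, 1.0          # constant force and mass
b, c = 2.0, 5.0          # initial velocity and initial position

def exact(t):
    return F * t ** 2 / (2 * m) + b * t + c

def euler(t_end, steps):
    dt, x, v = t_end / steps, c, b
    for _ in range(steps):
        x += v * dt              # position advances with current velocity
        v += (F / m) * dt        # velocity advances with the acceleration
    return x

print(exact(3.0), euler(3.0, 200000))
```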

Exponential growth and decay
      Newton's equation for the laws of motion could be solved as above, by integrating twice with respect to time, because time is the only variable term within the function x″. Not all differential equations can be solved in such a simple manner. For example, the radioactive decay of a substance is governed by the differential equation

x′(t) = −kx(t) (7)
where k is a positive constant and x(t) is the amount of substance that remains radioactive at time t. The equation can be solved by rewriting it as
x′(t)/x(t) = −k. (8)

      The left-hand side of (8) can be shown to be the derivative of ln x(t), so the equation can be integrated to yield ln x(t) + c = −kt for a constant c that is determined by initial conditions. Equivalently, x(t) = e^−(kt + c). This solution represents exponential decay: in any fixed period of time, the same proportion of the substance decays. This property of radioactivity is reflected in the concept of the half-life of a given radioactive substance—that is, the time taken for half the material to decay.
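Both the closed-form solution and the half-life can be verified numerically. In this sketch the decay constant k and initial amount x0 are illustrative values; the Euler steps follow x′ = −kx directly:

```python
# Exponential decay x'(t) = -k x(t): comparing Euler's method with the
# closed-form solution x(t) = x0 * e^(-k t), and checking that after one
# half-life (ln 2 / k) half the material remains.  k and x0 are
# illustrative constants.
import math

k, x0 = 0.3, 100.0

def exact(t):
    return x0 * math.exp(-k * t)

half_life = math.log(2) / k
print(half_life, exact(half_life))   # half of x0 remains

x, dt = x0, 1e-4
for _ in range(int(half_life / dt)):
    x -= k * x * dt                  # Euler step for x' = -k x
print(x)                             # close to x0 / 2
```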

      A surprisingly large number of natural processes display exponential decay or growth. (Change the sign from negative to positive on the right-hand side of (7) to obtain the differential equation for exponential growth.) However, this is not quite so surprising if consideration is given to the fact that the only functions whose derivatives are proportional to themselves are exponential functions. In other words, the rate of change of exponential functions directly depends upon their current value. This accounts for their ubiquity in mathematical models. For instance, the more radioactive material present, the more radiation is produced; the greater the temperature difference between a “hot body” in a “cold room,” the faster the heat loss (known as Newton's law of cooling and an essential tool in the coroner's arsenal); the larger the savings, the greater the compounded interest; and the larger the population (in an unrestricted environment), the greater the population explosion.

Dynamical systems theory and chaos
      The classical methods of analysis, such as outlined in the previous section on Newton and differential equations, have their limitations. For example, differential equations describing the motion of the solar system do not admit solutions by power series. Ultimately, this is because the dynamics of the solar system is too complicated to be captured by such simple, well-behaved objects as power series. One of the most important modern theoretical developments has been the qualitative theory of differential equations, otherwise known as dynamical systems theory, which seeks to establish general properties of solutions from general principles without writing down any explicit solutions at all. Dynamical systems theory combines local analytic information, collected in small “neighbourhoods” around points of special interest, with global geometric and topological properties of the shape and structure of the manifold in which all the possible solutions, or paths, reside—the qualitative aspect of the theory. (A manifold, also known as the state space or phase space, is the multidimensional analog of a curved surface.) This approach is especially powerful when employed in conjunction with numerical methods, which use computers to approximate the solution.

      The qualitative theory of differential equations was the brainchild of the French mathematician Henri Poincaré at the end of the 19th century. A major stimulus to the development of dynamical systems theory was a prize offered in 1885 by King Oscar II of Sweden and Norway for a solution to the problem of determining the stability of the solar system. The problem was stated essentially as follows: Will the planets of the solar system continue forever in much the same arrangement as they do at present? Or could something dramatic happen, such as a planet being flung out of the solar system entirely or colliding with the Sun? Mathematicians already knew that considerable difficulties arise in answering any such questions as soon as the number of bodies involved exceeds two. For two bodies moving under Newtonian gravitation, it is possible to solve the differential equation and deduce an exact formula for their motion: they move in ellipses about their mutual centre of gravity. Newton carried out this calculation when he showed that the inverse square law of gravitation explains Kepler's discovery that planetary orbits are elliptical. The motion of three bodies proved less tractable—indeed, nobody could solve the “three-body problem”—and here was Oscar asking for the solution to a ten-body problem (or something like a thirty-body problem if one includes the satellites of the planets and a many-thousand-body problem if one includes asteroids).

      Undaunted, Poincaré set up a general framework for the problem, but, in order to make serious progress, he was forced to specialize to three bodies and to assume that one of them has negligible mass in comparison with the other two. This approach is known as the “restricted” three-body problem, and his work on it won Poincaré the prize.

 Ironically, the prizewinning memoir contained a serious mistake, and Poincaré's biggest discovery in the area came when he hastened to put the error right (costing him more in printing expenses than the value of the prize). It turned out that even the restricted three-body problem was still too difficult to be solved. What Poincaré did manage to understand, though, was why it is so hard to solve. By ingenious geometric arguments, he showed that planetary orbits in the restricted three-body problem are too complicated to be describable by any explicit formula. He did so by introducing a novel idea, now called a Poincaré section. Suppose one knows some solution path and wants to find out how nearby solution paths behave. Imagine a surface that slices through the known path. Nearby paths will also cross this surface and may eventually return to it. By studying how this “point of first return” behaves, information is gained about these nearby solution paths. (See the illustration of a Poincaré section.)

      Today the term chaos is used to refer to Poincaré's discovery. Sporadically during the 1930s and '40s and with increasing frequency in the 1960s, mathematicians and scientists began to notice that simple differential equations can sometimes possess extremely complex solutions. The American mathematician Stephen Smale, continuing to develop Poincaré's insights on qualitative properties of differential equations, proved that in some cases the behaviour of the solutions is effectively random. Even when there is no hint of randomness in the equations, there can be genuine elements of randomness in the solutions. The Russian school of dynamicists under Andrey Kolmogorov and Vladimir Arnold developed similar ideas at much the same time.

      These discoveries challenged the classical view of determinism, the idea of a “clockwork universe” that merely works out the consequences of fixed laws of nature, starting from given initial conditions. By the end of the 20th century, Poincaré's discovery of chaos had grown into a major discipline within mathematics, connecting with many areas of applied science. Chaos was found not just in the motion of the planets but in weather, disease epidemics, ecology, fluid flow, electrochemistry, acoustics, even quantum mechanics. The most important feature of the new viewpoint on dynamics—popularly known as chaos theory but really just a subdiscipline of dynamical systems theory—is not the realization that many processes are unpredictable. Rather, it is the development of a whole series of novel techniques for extracting useful information from apparently random behaviour. Chaos theory has led to the discovery of new and more efficient ways to send space probes to the Moon or to distant comets, new kinds of solid-state lasers, new ways to forecast weather and estimate the accuracy of such forecasts, and new designs for heart pacemakers. It has even been turned into a quality-control technique for the wire- and spring-making industries.

Partial differential equations
      From the 18th century onward, huge strides were made in the application of mathematical ideas to problems arising in the physical sciences: heat, sound, light, fluid dynamics, elasticity, electricity, and magnetism. The complicated interplay between the mathematics and its applications led to many new discoveries in both. The main unifying theme in much of this work is the notion of a partial differential equation.

Musical origins
      The problem that sparked the entire development was deceptively simple, and it was surprisingly far removed from any serious practical application, coming not so much from the physical sciences as from music: What is the appropriate mathematical description of the motion of a violin string? The Pythagorean cult of ancient Greece also found inspiration in music, especially musical harmony. They experimented with the notes sounded by strings of various lengths, and one of their great discoveries was that two notes sound pleasing together, or harmonious, if the lengths of the corresponding strings are in simple numerical ratios such as 2:1 or 3:2. It took more than two millennia before mathematics could explain why these ratios arise naturally from the motion of elastic strings.

Normal modes
      Probably the earliest major result was obtained in 1714 by the English mathematician Brook Taylor, who calculated the fundamental vibrational frequency of a violin string in terms of its length, tension, and density. The ancient Greeks knew that a vibrating string can produce many different musical notes, depending on the position of the nodes, or rest-points. Today it is known that musical pitch is governed by the frequency of the vibration—the number of complete cycles of vibrations every second. The faster the string moves, the higher the frequency and the higher the note that it produces. For the fundamental frequency, only the end points are at rest. If the string has a node at its centre, then it produces a note at exactly double the frequency (heard by the human ear as one octave higher); and the more nodes there are, the higher the frequency of the note. These higher vibrations are called overtones.

      The vibrations produced are standing waves. That is, the shape of the string at any instant is the same, except that it is stretched or compressed in a direction at right angles to its length. The maximum amount of stretching is the amplitude of the wave, which physically determines how loud the note sounds. The waveforms are sinusoidal in shape—given by the sine function from trigonometry—and their amplitudes vary sinusoidally with time. Standing waves of this simple kind are called normal modes. Their frequencies are integer multiples of a single fundamental frequency—the mathematical source of the Pythagoreans' simple numerical ratios.

Partial derivatives
      In 1746 the French mathematician Jean Le Rond d'Alembert showed that the full story is not quite that simple. There are many vibrations of a violin string that are not normal modes. In fact, d'Alembert proved that the shape of the wave at time t = 0 can be arbitrary.

 Imagine a string of length l, stretched along the x-axis from (0, 0) to (l, 0), and suppose that at time t the point (x, 0) is displaced by an amount y(x, t) in the y-direction (see figure). The function y(x, t)—or, more briefly, just y—is a function of two variables; that is, it depends not on a single variable t but upon x as well. If some value for x is selected and kept fixed, it is still possible for t to vary; so a function f(t) can be defined by f(t) = y(x, t) for this fixed x. The derivative f′(t) of this function is called the partial derivative of y with respect to t; and the procedure that produces it is called partial differentiation with respect to t. The partial derivative of y with respect to t is written ∂y/∂t, where the symbol ∂ is a special form of the letter d reserved for this particular operation. An alternative, simpler notation is yₜ. Analogously, fixing t instead of x gives the partial derivative of y with respect to x, written ∂y/∂x or yₓ. In both cases, the way to calculate a partial derivative is to treat all other variables as constants and then find the usual derivative of the resulting function with respect to the chosen variable. For example, if y(x, t) = x² + t³, then yₜ = 3t² and yₓ = 2x.
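The recipe “treat all other variables as constants” translates directly into finite differences. This sketch checks the example y(x, t) = x² + t³ against the exact partials yₓ = 2x and yₜ = 3t²:

```python
# Partial derivatives of y(x, t) = x**2 + t**3 approximated by finite
# differences: hold t fixed to approximate y_x, hold x fixed to
# approximate y_t.  Exact values are y_x = 2x and y_t = 3t^2.

def y(x, t):
    return x ** 2 + t ** 3

def partial_x(y, x, t, h=1e-6):
    return (y(x + h, t) - y(x, t)) / h     # t treated as a constant

def partial_t(y, x, t, h=1e-6):
    return (y(x, t + h) - y(x, t)) / h     # x treated as a constant

print(partial_x(y, 2.0, 1.0))   # close to 2x = 4
print(partial_t(y, 2.0, 1.0))   # close to 3t^2 = 3
```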

      Both yₓ and yₜ are again functions of the two variables x and t, so they in turn can be partially differentiated with respect to either x or t. The partial derivative of yₜ with respect to t is written yₜₜ or ∂²y/∂t²; the partial derivative of yₜ with respect to x is written yₜₓ or ∂²y/∂t∂x; and so on. Henceforth the simpler subscript notation will be used.

D'Alembert's wave equation
      D'Alembert's wave equation takes the form

yₜₜ = c²yₓₓ. (9)
Here c is a constant related to the stiffness of the string. The physical interpretation of (9) is that the acceleration (yₜₜ) of a small piece of the string is proportional to the tension (yₓₓ) within it. Because the equation involves partial derivatives, it is known as a partial differential equation—in contrast to the previously described differential equations, which, involving derivatives with respect to only one variable, are called ordinary differential equations. Since partial differentiation is applied twice (for instance, to get yₜₜ from y), the equation is said to be of second order.

 In order to specify physically realistic solutions, d'Alembert's wave equation must be supplemented by boundary conditions, which express the fact that the ends of a violin string are fixed. Here the boundary conditions take the form
y(0, t) = 0 and
y(l, t) = 0 for all t. (10)
D'Alembert showed that the general solution to (10) is
y(x, t) = f(x + ct) + g(x − ct) (11)
where f and g are arbitrary functions (of one variable). The physical interpretation of this solution is that f represents the shape of a wave that travels with speed c along the x-axis in the negative direction, while g represents the shape of a wave that travels along the x-axis in the positive direction. The general solution is a superposition of two traveling waves, producing the complex waveform shown in the figure.
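That any such superposition satisfies the wave equation yₜₜ = c²yₓₓ can be checked by finite differences. In this sketch the choices f(u) = sin u and g(u) = u³ are purely illustrative stand-ins for the arbitrary functions:

```python
# A finite-difference check that y(x, t) = f(x + ct) + g(x - ct)
# satisfies the wave equation y_tt = c^2 y_xx, with illustrative
# choices f(u) = sin(u) and g(u) = u**3 for the arbitrary functions.
import math

c = 2.0
f = math.sin
g = lambda u: u ** 3

def y(x, t):
    return f(x + c * t) + g(x - c * t)

def second_partial(y, x, t, var, h=1e-4):
    """Central-difference second partial derivative in x or t."""
    if var == "t":
        return (y(x, t + h) - 2 * y(x, t) + y(x, t - h)) / h ** 2
    return (y(x + h, t) - 2 * y(x, t) + y(x - h, t)) / h ** 2

x, t = 0.7, 0.3
ytt = second_partial(y, x, t, "t")
yxx = second_partial(y, x, t, "x")
print(ytt, c ** 2 * yxx)        # the two sides nearly agree
```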

      In order to satisfy the boundary conditions given in (10), the functions f and g must be related by the equations

f(−ct) + g(ct) = 0 and
f(l − ct) + g(l + ct) = 0 for all t.
These equations imply that g = −f, that f is an odd function—one satisfying f(−u) = −f(u)—and that f is periodic with period 2l, meaning that f(u + 2l) = f(u) for all u. Notice that the part of f lying between x = 0 and x = l is arbitrary, which corresponds to the physical fact that a violin string can be started vibrating from any shape whatsoever (subject to its ends being fixed). In particular, its shape need not be sinusoidal, proving that solutions other than normal modes can occur.

Trigonometric series solutions
      In 1748, in response to d'Alembert's work, the Swiss mathematician Leonhard Euler wrote a paper, Sur la vibration des cordes (“On the Vibrations of Strings”). In it he repeated d'Alembert's derivation of the wave equation for a string, but he obtained a new solution. Euler's innovation was to permit f and g to be what he called discontinuous curves (though in modern terminology it is their derivatives that are discontinuous, not the functions themselves). To Euler, who thought in terms of formulas, this meant that the shapes of the curves were defined by different formulas in different intervals. In 1749 he went on to explain that if several normal mode solutions of the wave equation are superposed, the result is a solution of the form
y(x, t) = a1 sin(πx/l) cos(πct/l) + a2 sin(2πx/l) cos(2πct/l) + a3 sin(3πx/l) cos(3πct/l) + ⋯ (12)
where the coefficients a1, a2, a3, … are arbitrary constants. Euler did not state whether the series should be finite or infinite; but it eventually turned out that infinite series held the key to a central mystery, the relation between d'Alembert's arbitrary function solutions (11) and Euler's trigonometric series solutions (12). Every solution of Euler's type can also be written in the form of d'Alembert's solution, but is the converse true? This question was the subject of a lengthy controversy, whose final conclusion was that all possible vibrations of the string can be obtained by superposing infinitely many normal modes in suitable proportions. The normal modes are the basic components; the vibrations that can occur are all possible sums of constant multiples of finitely or infinitely many normal modes. As the Swiss mathematician Daniel Bernoulli expressed it in 1753: “All new curves given by d'Alembert and Euler are only combinations of the Taylor vibrations.”
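      For finitely many modes the connection between the two forms is just trigonometry: the identity sin A cos B = [sin(A + B) + sin(A − B)]/2 rewrites each normal mode as a pair of traveling waves. The sketch below (with arbitrary illustrative coefficients) checks numerically that a finite superposition in Euler's form (12) agrees with d'Alembert's form (11).

```python
import math

# Finite superposition of normal modes (Euler's form) versus
# f(x+ct) + g(x-ct) (d'Alembert's form), built from the same profile.
l, c = 1.0, 3.0
a = [1.0, -0.4, 0.25]   # illustrative coefficients a1, a2, a3

def euler_series(x, t):
    return sum(an * math.sin((n + 1) * math.pi * x / l)
                  * math.cos((n + 1) * math.pi * c * t / l)
               for n, an in enumerate(a))

def F(u):
    # odd, 2l-periodic profile built from the same coefficients
    return sum(an * math.sin((n + 1) * math.pi * u / l)
               for n, an in enumerate(a))

def dalembert(x, t):
    # here f = g = F/2, by the product-to-sum identity
    return 0.5 * F(x + c * t) + 0.5 * F(x - c * t)

for x in [0.1, 0.5, 0.9]:
    for t in [0.0, 0.7, 2.3]:
        assert abs(euler_series(x, t) - dalembert(x, t)) < 1e-12
```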

      The controversy was not really about the wave equation; it was about the meaning of the word function. Euler wanted it to include his discontinuous functions, but he thought—wrongly as it turned out—that a trigonometric series cannot represent a discontinuous function, because it provides a single formula valid throughout the entire interval 0 ≤ x ≤ l. Bernoulli, mostly on physical grounds, was happy with the discontinuous functions, but he thought—correctly but without much justification—that Euler was wrong about their not being representable by trigonometric series. It took roughly a century to sort out the answers—and, along the way, mathematicians were forced to take what might seem to be logical hairsplitting very seriously indeed, because it was only by being very careful about logical rigour that the problem could be resolved in a satisfactory and reliable manner.

      Mathematics did not wait for this resolution, though. It plowed ahead into the disputed territory, and every new discovery made the eventual resolution that much more important. The first development was to extend the wave equation to other kinds of vibrations—for example, the vibrations of drums. The first work here was also Euler's, in 1759; and again he derived a wave equation, describing how the displacement of the drum skin in the vertical direction varies over time. Drums differ from violin strings not only in their dimensionality—a drum is a flat two-dimensional membrane—but in having a much more interesting boundary. If z(x, y, t) denotes the displacement at time t in the z-direction of the portion of drum skin that lies at the point (x, y) in the plane, then Euler's wave equation takes the form

z_tt = c²(z_xx + z_yy) (13)
with boundary conditions
z(x, y, t) = 0 (14)
whenever (x, y) lies on the boundary of the drum. Equation (13) is strikingly similar to the wave equation for a violin string. Its physical interpretation is that the acceleration of a small piece of the drum skin is proportional to the average tension exerted on it by all nearby parts of the drum skin. Equation (14) states that the rim of the drum skin remains fixed. In this whole subject, boundaries are absolutely crucial.

      The mathematicians of the 18th century were able to solve the equations for the motion of drums of various shapes. Again they found that all vibrations can be built up from simpler ones, the normal modes. The simplest case is the rectangular drum, whose normal modes are combinations of sinusoidal ripples in the two perpendicular directions.
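      For a rectangular drum with side lengths p and q, one such normal mode is z(x, y, t) = sin(mπx/p) sin(nπy/q) cos(ωt), where equation (13) forces the frequency ω = cπ√((m/p)² + (n/q)²). The sketch below (with illustrative side lengths and mode numbers) checks one mode against equations (13) and (14), approximating the derivatives by finite differences.

```python
import math

# One normal mode of a rectangular drum, checked against the wave
# equation (13) and the boundary condition (14). All values illustrative.
p, q, c, m, n = 1.0, 2.0, 1.0, 2, 3
w = c * math.pi * math.hypot(m / p, n / q)   # frequency forced by (13)

def z(x, y, t):
    return math.sin(m*math.pi*x/p) * math.sin(n*math.pi*y/q) * math.cos(w*t)

# (14): z vanishes on the rim of the rectangle
for s in [0.0, 0.33, 0.78, 1.0]:
    assert abs(z(0.0, s*q, 0.5)) < 1e-12 and abs(z(p, s*q, 0.5)) < 1e-12
    assert abs(z(s*p, 0.0, 0.5)) < 1e-12 and abs(z(s*p, q, 0.5)) < 1e-12

# (13), checked by central finite differences at an interior point
h, x0, y0, t0 = 1e-4, 0.31, 0.57, 0.23
ztt = (z(x0, y0, t0+h) - 2*z(x0, y0, t0) + z(x0, y0, t0-h)) / h**2
zxx = (z(x0+h, y0, t0) - 2*z(x0, y0, t0) + z(x0-h, y0, t0)) / h**2
zyy = (z(x0, y0+h, t0) - 2*z(x0, y0, t0) + z(x0, y0-h, t0)) / h**2
assert abs(ztt - c**2 * (zxx + zyy)) < 1e-3
```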

Fourier analysis
      Nowadays, trigonometric series solutions (12) are called Fourier series, after Joseph Fourier, who in 1822 published one of the great mathematical classics, The Analytical Theory of Heat. Fourier began with a problem closely analogous to the vibrating violin string: the conduction of heat in a rigid rod of length l. If T(x, t) denotes the temperature at position x and time t, then it satisfies a partial differential equation

T_t = a²T_xx (15)
that differs from the wave equation only in having the first time derivative T_t instead of the second, T_tt. This apparently minor change has huge consequences, both mathematical and physical. Again there are boundary conditions, expressing the fact that the temperatures at the ends of the rod are held fixed—for example,
T(0, t) = 0 and
T(l, t) = 0, (16)
if the ends are held at zero temperature. The physical effect of the first time derivative is profound: instead of getting persistent vibrational waves, the heat spreads out more and more smoothly—it diffuses.

      Fourier showed that his heat equation can be solved using trigonometric series. He invented a method (now called Fourier analysis) of finding appropriate coefficients a1, a2, a3, … in equation (12) for any given initial temperature distribution. He did not solve the problem of providing rigorous logical foundations for such series—indeed, along with most of his contemporaries, he failed to appreciate the need for such foundations—but he provided major motivation for those who eventually did establish foundations.
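      The recipe can be sketched numerically (this is an illustration, not Fourier's own computation): each coefficient is a_n = (2/l) ∫ T0(x) sin(nπx/l) dx, and under equation (15) the nth mode then decays like exp(−(nπa/l)²t). The initial profile T0 below is an arbitrary choice, and the integral is approximated by a simple midpoint rule.

```python
import math

# Fourier-style solution of the heat equation on [0, l], sketched
# numerically with an illustrative initial temperature profile.
l = 1.0

def T0(x):
    return x * (l - x)   # initial temperature, zero at both ends

def coeff(n, steps=5000):
    # midpoint-rule quadrature for a_n = (2/l) * integral of T0 * sine mode
    h = l / steps
    return (2 / l) * h * sum(
        T0((k + 0.5) * h) * math.sin(n * math.pi * (k + 0.5) * h / l)
        for k in range(steps))

def T(x, t, a2=1.0, terms=40):
    # solution built from exponentially decaying sine modes
    return sum(coeff(n) * math.exp(-(n * math.pi / l) ** 2 * a2 * t)
               * math.sin(n * math.pi * x / l) for n in range(1, terms + 1))

# at t = 0 the truncated series reproduces the initial profile
assert abs(T(0.25, 0.0) - T0(0.25)) < 1e-3
# for t > 0 the heat has visibly diffused
assert T(0.5, 0.1) < T0(0.5)
```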

      These developments were not just of theoretical interest. The wave equation, in particular, is exceedingly important. Waves arise not only in musical instruments but in all sources of sound and in light. Euler found a three-dimensional version of the wave equation, which he applied to sound waves; it takes the form

w_tt = c²(w_xx + w_yy + w_zz) (17)
where now w(x, y, z, t) is the pressure of the sound wave at point (x, y, z) at time t. The expression w_xx + w_yy + w_zz is called the Laplacian, after the French mathematician Pierre-Simon de Laplace, and is central to classical mathematical physics. Roughly a century after Euler, the Scottish physicist James Clerk Maxwell extracted the three-dimensional wave equation from his equations for electromagnetism, and in consequence he was able to predict the existence of radio waves. It is probably fair to suggest that radio, television, and radar would not exist today without the early mathematicians' work on the analytic aspects of musical instruments.

Complex analysis
      In the 18th century a far-reaching generalization of analysis was discovered, centred on the so-called imaginary number i = √(−1). (In engineering this number is usually denoted by j.) The numbers commonly used in everyday life are known as real numbers, but in one sense this name is misleading. Numbers are abstract concepts, not objects in the physical universe. So mathematicians consider real numbers to be an abstraction on exactly the same logical level as imaginary numbers.

      The name imaginary arises because squares of real numbers are never negative. In consequence, positive numbers have two distinct square roots—one positive, one negative. Zero has a single square root—namely, zero. And negative numbers have no “real” square roots at all. However, it has proved extremely fruitful and useful to enlarge the number concept to include square roots of negative numbers. The resulting objects are numbers in the sense that arithmetic and algebra can be extended to them in a simple and natural manner; they are imaginary in the sense that their relation to the physical world is less direct than that of the real numbers. Numbers formed by combining real and imaginary components, such as 2 + 3i, are said to be complex (meaning composed of several parts rather than complicated).

      The first indications that complex numbers might prove useful emerged in the 16th century from the solution of certain algebraic equations by the Italian mathematicians Girolamo Cardano and Raphael Bombelli. By the 18th century, after a lengthy and controversial history, they became fully established as sensible mathematical concepts. They remained on the mathematical fringes until it was discovered that analysis, too, can be extended to the complex domain. The result was such a powerful extension of the mathematical tool kit that philosophical questions about the meaning of complex numbers became submerged amid the rush to exploit them. Soon the mathematical community had become so used to complex numbers that it became hard to recall that there had been a philosophical problem at all.

Formal definition of complex numbers
 The modern approach is to define a complex number x + iy as a pair of real numbers (x, y) subject to certain algebraic operations. Thus one wishes to add or subtract, (a, b) ± (c, d), and to multiply, (a, b) × (c, d), or divide, (a, b)/(c, d), these quantities. These are inspired by the wish to make (x, 0) behave like the real number x and, crucially, to arrange that (0, 1)² = (−1, 0)—all the while preserving as many of the rules of algebra as possible. This is a formal way to set up a situation which, in effect, ensures that one may operate with expressions x + iy using all the standard algebraic rules but recalling when necessary that i² may be replaced by −1. For example,
(1 + 3i)² = 1² + 2∙3i + (3i)² = 1 + 6i + 9i² = 1 + 6i − 9 = −8 + 6i.
A geometric interpretation of complex numbers is readily available, inasmuch as a pair (x, y) represents a point in the plane shown in the figure. Whereas real numbers can be described by a single number line, with negative numbers to the left and positive numbers to the right, the complex numbers require a number plane with two axes, real and imaginary.
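      The pair definition is easy to make concrete. The following sketch implements just addition and multiplication on pairs and checks that (0, 1)² = (−1, 0) and that the computation of (1 + 3i)² comes out as −8 + 6i.

```python
# Complex numbers as pairs (x, y) of real numbers, with the operations
# chosen so that (0, 1) squared is (-1, 0). A minimal illustrative sketch.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # (a0 + i*a1)(b0 + i*b1) = (a0*b0 - a1*b1) + i*(a0*b1 + a1*b0)
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

i = (0.0, 1.0)
assert mul(i, i) == (-1.0, 0.0)                        # i^2 = -1

one_plus_3i = (1.0, 3.0)
assert mul(one_plus_3i, one_plus_3i) == (-8.0, 6.0)    # (1 + 3i)^2 = -8 + 6i
```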

Extension of analytic concepts to complex numbers
      Analytic concepts such as limits, derivatives, integrals, and infinite series (all explained in the sections Technical preliminaries and Calculus) are based upon algebraic ideas, together with error estimates that define the limiting process: certain numbers must be arbitrarily well approximated by particular algebraic expressions. In order to represent the concept of an approximation, all that is needed is a well-defined way to measure how “small” a number is. For real numbers this is achieved by using the absolute value |x|. Geometrically, it is the distance along the real number line between x and the origin 0. Distances also make sense in the complex plane, and they can be calculated, using Pythagoras's theorem from elementary geometry (the square of the hypotenuse of a right triangle is equal to the sum of the squares of the other two sides), by constructing a right triangle such that its hypotenuse spans the distance between two points and its sides are drawn parallel to the coordinate axes. This line of thought leads to the idea that for complex numbers the quantity analogous to |x| is
|x + iy| = √(x² + y²).
      Since all the rules of real algebra extend to complex numbers and the absolute value is defined by an algebraic formula, it follows that analysis also extends to the complex numbers. Formal definitions are taken from the real case, real numbers are replaced by complex numbers, and the real absolute value is replaced by the complex absolute value. Indeed, this is one of the advantages of analytic rigour: without this, it would be far less obvious how to extend such notions as tangent or limit from the real case to the complex.

      In a similar vein, the Taylor series for the real exponential and trigonometric functions shows how to extend these definitions to include complex numbers—just use the same series but replace the real variable x by the complex variable z. This idea leads to complex-analytic functions as an extension of real-analytic ones.

 Because complex numbers differ in certain ways from real numbers—their structure is simpler in some respects and richer in others—there are differences in detail between real and complex analysis. Complex integration, in particular, has features of complete novelty. A real function must be integrated between limits a and b, and the Riemann integral is defined in terms of a sum involving values spread along the interval from a to b. On the real number line, the only path between two points a and b is the interval whose ends they form. But in the complex plane there are many different paths between two given points (see figure). The integral of a function between two points is therefore not defined until a path between the endpoints is specified. This done, the definition of the Riemann integral can be extended to the complex case. However, the result may depend on the path that is chosen.

      Surprisingly, this dependence is very weak. Indeed, sometimes there is no dependence at all. But when there is, the situation becomes extremely interesting. The value of the integral depends only on certain qualitative features of the path—in modern terms, on its topology. (Topology, often characterized as “rubber sheet geometry,” studies those properties of a shape that are unchanged if it is continuously deformed by being bent, stretched, and twisted but not torn.) So complex analysis possesses a new ingredient, a kind of flexible geometry, that is totally lacking in real analysis. This gives it a very different flavour.

      All this became clear in 1811 when, in a letter to the German astronomer Friedrich Bessel, the German mathematician Carl Friedrich Gauss stated the central theorem of complex analysis:

I affirm now that the integral…has only one value even if taken over different paths, provided [the function]…does not become infinite in the space enclosed by the two paths.

      A proof was published by the French mathematician Augustin-Louis Cauchy in 1825, and this result is now named Cauchy's theorem. Cauchy went on to develop a vast theory of complex analysis and its applications.

      Part of the importance of complex analysis is that it is generally better-behaved than real analysis, the many-valued nature of integrals notwithstanding. Problems in the real domain can often be solved by extending them to the complex domain, applying the powerful techniques peculiar to that area, and then restricting the results back to the real domain again. From the mid-19th century onward, the progress of complex analysis was strong and steady. A system of numbers once rejected as impossible and nonsensical led to a powerful and aesthetically satisfying theory with practical applications to aerodynamics, fluid mechanics, electric power generation, and mathematical physics. No area of mathematics has remained untouched by this far-reaching enrichment of the number concept.

      Sketched below are some of the key ideas involved in setting up the more elementary parts of complex analysis. Alternatively, the reader may proceed directly to the section Measure theory.

Some key ideas of complex analysis
      A complex number is normally denoted by z = x + iy. A complex-valued function f assigns to each z in some region Ω of the complex plane a complex number w = f(z). Usually it is assumed that the region Ω is connected (all in one piece) and open (each point of Ω can be surrounded by a small disk that lies entirely within Ω). Such a function f is differentiable at a point z0 in Ω if the limit exists as z approaches z0 of the expression
(f(z) − f(z0))/(z − z0).
This limit is the derivative f′(z0). Unlike in real analysis, if a complex function is differentiable in some region, then its derivative is also differentiable in that region, so f″(z) exists. Indeed, derivatives f(n)(z) of all orders n = 1, 2, 3, … exist. Even more strongly, f(z) has a power series expansion f(z) = c0 + c1(z − z0) + c2(z − z0)² + ⋯ with complex coefficients cj. This series converges for all z lying in some disk with centre z0. The radius of the largest such disk is called the radius of convergence of the series. Because of this power series representation, a differentiable complex function is said to be analytic.

      The elementary functions of real analysis, such as polynomials, trigonometric functions, and exponential functions, can be extended to complex numbers. For example, the exponential of a complex number is defined by

e^z = 1 + z + z²/2! + z³/3! + ⋯
where n! = n(n − 1)⋯3∙2∙1. It turns out that the trigonometric functions are related to the exponential by way of Euler's famous formula
e^(iθ) = cos(θ) + i sin(θ),
which leads to the expressions
cos(z) = (e^(iz) + e^(−iz))/2
sin(z) = (e^(iz) − e^(−iz))/2i.
Every complex number can be written in the form z = re^(iθ) for real r ≥ 0 and real θ. Here r is the absolute value (or modulus) of z, and θ is known as its argument. The value of θ is not unique, but the possible values differ only by integer multiples of 2π. In consequence, the complex logarithm is many-valued:
log(z) = log(re^(iθ)) = log r + i(θ + 2nπ)
for any integer n.
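      These identities are easy to verify numerically, for example with Python's standard cmath module (the particular numbers below are arbitrary illustrations). Note how every branch of the logarithm exponentiates back to the same z.

```python
import cmath, math

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta)
theta = 0.7
assert abs(cmath.exp(1j * theta)
           - (math.cos(theta) + 1j * math.sin(theta))) < 1e-15

# polar form z = r e^(i*theta)
z = 3 + 4j
r, arg = abs(z), cmath.phase(z)      # modulus 5, argument atan2(4, 3)
assert abs(r * cmath.exp(1j * arg) - z) < 1e-12

# the many-valued logarithm: log r + i(theta + 2*n*pi) for any integer n
for n in (-2, 0, 3):
    w = math.log(r) + 1j * (arg + 2 * n * math.pi)
    assert abs(cmath.exp(w) - z) < 1e-9   # every branch recovers z
```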

      The integral

∫_C f(z) dz
of an analytic function f along a curve (or contour) C in the complex plane is defined in a similar manner to the real Riemann integral. Cauchy's theorem, mentioned above, states that the value of such an integral is the same for two contours C1 and C2 with the same endpoints, provided both curves lie inside a simply connected region Ω—a region with no “holes.” When Ω has holes, the value of the integral depends on the topology of the curve C but not its precise form. The essential feature is how many times C winds around a given hole—a number that is related to the many-valued nature of the complex logarithm.
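      The path dependence, and its connection with winding, can be illustrated numerically for f(z) = 1/z, which becomes infinite at the hole z = 0. The two semicircular paths below (an illustrative sketch using simple midpoint Riemann sums) share the endpoints 1 and −1 but pass on opposite sides of 0, and their integrals differ by exactly 2πi, reflecting one winding around the hole.

```python
import cmath, math

def contour_integral(f, path, dpath, t0, t1, steps=2000):
    # midpoint Riemann sum for the integral of f(path(t)) * path'(t) dt
    h = (t1 - t0) / steps
    return sum(f(path(t0 + (k + 0.5) * h)) * dpath(t0 + (k + 0.5) * h)
               for k in range(steps)) * h

f = lambda z: 1 / z
# upper unit semicircle from 1 to -1
upper = contour_integral(f, lambda t: cmath.exp(1j * t),
                            lambda t: 1j * cmath.exp(1j * t), 0.0, math.pi)
# lower unit semicircle from 1 to -1
lower = contour_integral(f, lambda t: cmath.exp(-1j * t),
                            lambda t: -1j * cmath.exp(-1j * t), 0.0, math.pi)

assert abs(upper - 1j * math.pi) < 1e-6            # upper path:  i*pi
assert abs(lower + 1j * math.pi) < 1e-6            # lower path: -i*pi
assert abs((upper - lower) - 2j * math.pi) < 1e-6  # one loop around 0
```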

Measure theory
      A rigorous basis for the new discipline of analysis was achieved in the 19th century, in particular by the German mathematician Karl Weierstrass. Modern analysis, however, differs from that of Weierstrass's time in many ways, and the most obvious is the level of abstraction. Today's analysis is set in a variety of general contexts, of which the real line and the complex plane (explained in the section Complex analysis) are merely two rather simple examples. One of the most important spurs to these developments was the invention of a new—and improved—definition of the integral by the French mathematician Henri-Léon Lebesgue about 1900. Lebesgue's contribution, which made possible the subbranch of analysis known as measure theory, is described in this section.

      In Lebesgue's day, mathematicians had noticed a number of deficiencies in Riemann's way of defining the integral. (The Riemann integral is explained in the section Integration.) Many functions with reasonable properties turned out not to possess integrals in Riemann's sense. Moreover, certain limiting procedures, when applied to sequences not of numbers but of functions, behaved in very strange ways as far as integration was concerned. Several mathematicians tried to develop better ways to define the integral, and the best of all was Lebesgue's.

      Consider, for example, the function f defined by f(x) = 0 whenever x is a rational number but f(x) = 1 whenever x is irrational. What is a sensible value for
∫_a^b f(x) dx?
Using Riemann's definition, this function does not possess a well-defined integral. The reason is that within any interval it takes values both 0 and 1, so that it hops wildly up and down between those two values. Unfortunately for this example, Riemann's integral is based on the assumption that over sufficiently small intervals the value of the function changes by only a very small amount.

      However, there is a sense in which the rational numbers form a very tiny proportion of the real numbers. In fact, “almost all” real numbers are irrational. Specifically, the set of all rational numbers can be surrounded by a collection of intervals whose total length is as small as is wanted. In a well-defined sense, then, the “length” of the set of rational numbers is zero. There are good reasons why values on a set of zero length ought not to affect the integral of a function—the “rectangle” based on that set ought to have zero area in any sensible interpretation of such a statement. Granted this, if the definition of the function f is changed so that it takes value 1 on the rational numbers instead of 0, its integral should not be altered. However, the resulting function g now takes the form g(x) = 1 for all x, and this function does possess a Riemann integral. In fact,

∫_a^b g(x) dx = b − a.
Lebesgue reasoned that the same result ought to hold for f—but he knew that it would not if the integral were defined in Riemann's manner.
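      The failure can be made concrete. No floating-point computation can decide rationality, so the sketch below simply labels each sample point as rational or irrational by construction; the point is that the Riemann sums over [0, 1] disagree no matter how fine the partition, so no single limiting value exists.

```python
from fractions import Fraction

# The function from the text: 0 on rationals, 1 on irrationals.
# Rationality is supplied as a flag, since it is undecidable for floats.
def f(x, x_is_rational):
    return 0 if x_is_rational else 1

n = 1000                       # number of subintervals of [0, 1]
width = Fraction(1, n)

# tag every subinterval [k/n, (k+1)/n] at its rational left endpoint...
rational_sum = sum(f(Fraction(k, n), True) * width for k in range(n))
# ...or at an irrational point inside it (k/n plus a tiny multiple of sqrt 2)
irrational_sum = sum(f(k / n + 2**0.5 / 10**6, False) * width
                     for k in range(n))

assert rational_sum == 0       # sampled on rationals: the sum is 0
assert irrational_sum == 1     # sampled on irrationals: the sum is 1
```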

  The reason why Riemann's method failed to work for f is that the values of f oscillate wildly over arbitrarily small intervals. Riemann's approach relied upon approximating the area under a graph by slicing it, in the vertical direction, into very thin slices, as shown in the figure. The problem with his method was that vertical direction: vertical slices permit wild variation in the value of the function within a slice. So Lebesgue sliced the graph horizontally instead (see figure). The variation within such a slice is no more than the thickness of the slice, and this can be made very small. The price to be paid for keeping the variation small, though, is that the set of x for which f(x) lies in a given horizontal slice can be very complicated. For example, for the function f defined earlier, f(x) lies in a thin slice around 0 whenever x is rational and in a thin slice around 1 whenever x is irrational.

      However, it does not matter if such a set is complicated: it is sufficient that it should possess a well-defined generalization of length. Then that part of the graph of f corresponding to a given horizontal slice will have a well-defined approximate area, found by multiplying the value of the function that determines the slice by the “length” of the set of x whose functional values lie inside that slice. So the central problem faced by Lebesgue was not integration as such at all; it was to generalize the concept of length to sufficiently complicated sets. This Lebesgue managed to do. Basically, his method is to enclose the set in a collection of intervals. Since the generalized length of the set is surely smaller than the total length of the intervals, it only remains to choose the intervals that make the total length as small as possible.

      This generalized concept of length is known as the Lebesgue measure. Once the measure is established, Lebesgue's generalization of the Riemann integral can be defined, and it turns out to be far superior to Riemann's integral. The concept of a measure can be extended considerably—for example, into higher dimensions, where it generalizes such notions as area and volume—leading to the subbranch known as measure theory. One fundamental application of measure theory is to probability and statistics, a development initiated by the Russian mathematician Andrey Kolmogorov in the 1930s.

Other areas of analysis
      Modern analysis is far too broad to describe in detail. Instead, a small selection of other major areas is explored below to convey some flavour of the subject.

Functional analysis
      In the 1920s and '30s a number of apparently different areas of analysis all came together in a single generalization—or rather, in two generalizations, one more general than the other. These were the notions of a Hilbert space and a Banach space, named after the German mathematician David Hilbert and the Polish mathematician Stefan Banach, respectively. Together they laid the foundations for what is now called functional analysis.

      Functional analysis starts from the principle, explained in the section Complex analysis, that, in order to define basic analytic notions such as limits or the derivative, it is sufficient to be able to carry out certain algebraic operations and to have a suitable notion of size. For real analysis, size is measured by the absolute value |x|; for complex analysis, it is measured by the absolute value |x + iy|. Analysis of functions of several variables—that is, the theory of partial derivatives—can also be brought under the same umbrella. In the real case, the set of real numbers is replaced by the vector space ℝⁿ of all n-tuples of real numbers x = (x1, …, xn) where each xj is a real number. Used in place of the absolute value is the length of the vector x, which is defined to be
||x|| = √(x1² + x2² + ⋯ + xn²).
In fact there is a closely related notion, called an inner product, written 〈x, y〉, where x, y are vectors. It is equal to x1y1 + ⋯ + xnyn. The inner product relates not just to the sizes of x and y but to the angle between them. For example, 〈x, y〉 = 0 if and only if x and y are orthogonal—at right angles to each other. Moreover, the inner product determines the length, because ||x|| = √〈x, x〉. If F(x) = (f1(x), …, fk(x)) is a vector-valued function of a vector x = (x1, …, xn), the derivative no longer has numerical values. Instead, it is a linear operator, a special kind of function.
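      A minimal sketch of these definitions in code (the particular vectors are arbitrary illustrations):

```python
import math

# The Euclidean inner product on R^n and the length it determines.
def inner(x, y):
    return sum(xj * yj for xj, yj in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x))   # ||x|| = sqrt(<x, x>)

x, y = (3.0, 4.0), (-4.0, 3.0)
assert norm(x) == 5.0        # length from the inner product
assert inner(x, y) == 0.0    # <x, y> = 0: the vectors are orthogonal

# the inner product also determines the angle between vectors
angle = math.acos(inner(x, y) / (norm(x) * norm(y)))
assert abs(angle - math.pi / 2) < 1e-12   # a right angle, as expected
```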

      Functions of several complex variables similarly reduce to a study of the space ℂⁿ of n-tuples of complex numbers x + iy = (x1 + iy1, …, xn + iyn). Used in place of the absolute value is
||x + iy|| = √(x1² + y1² + ⋯ + xn² + yn²).
However, the correct concept of an analytic function of several complex variables is subtle and was developed only in the 20th century. Henceforth only the real case is considered here.

      Hilbert realized that these ideas could be extended from vectors—which are finite sequences of real numbers—to infinite sequences of real numbers. Define (the simplest example of) Hilbert space to consist of all infinite sequences x = (x0, x1, x2, …) of real numbers, subject to the condition that the sequence is square-summable, meaning that the infinite series x0² + x1² + x2² + ⋯ converges to a finite value. Now define the inner product of two such sequences to be

〈x, y〉 = x0y0 + x1y1 + x2y2 + ⋯.
It can be shown that this also takes a finite value. Hilbert discovered that it is possible to carry out the basic operations of analysis on Hilbert space. For example, it is possible to define convergence of a sequence b0, b1, b2, … where the bj are not numbers but elements of the Hilbert space—infinite sequences in their own right. Crucially, with this definition of convergence, Hilbert space is complete: every Cauchy sequence is convergent. The section Properties of the real numbers shows that completeness is central to analysis for real-valued functions, and the same goes for functions on a Hilbert space.
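      A truncated sketch of this inner product for two concrete square-summable sequences (the geometric sequences below are illustrative, and the infinite sums are cut off once the tails are negligible). The exact values follow from the geometric series formula, and the Cauchy-Schwarz inequality guarantees the inner product is finite.

```python
import math

# Two square-summable sequences: x_n = 1/2**n and y_n = 1/3**n.
N = 60   # truncation depth; the discarded tails are below double precision
x = [1 / 2**n for n in range(N)]
y = [1 / 3**n for n in range(N)]

norm_x_sq = sum(xn * xn for xn in x)              # sum (1/4)**n = 4/3
norm_y_sq = sum(yn * yn for yn in y)              # sum (1/9)**n = 9/8
inner_xy = sum(xn * yn for xn, yn in zip(x, y))   # sum (1/6)**n = 6/5

assert abs(norm_x_sq - 4/3) < 1e-12
assert abs(norm_y_sq - 9/8) < 1e-12
assert abs(inner_xy - 6/5) < 1e-12
# Cauchy-Schwarz: |<x, y>| <= ||x|| * ||y||, so the inner product is finite
assert inner_xy <= math.sqrt(norm_x_sq * norm_y_sq)
```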

      More generally, a Hilbert space in the broad sense can be defined to be a (real or complex) vector space with an inner product that makes it complete, as well as determining a norm—a notion of length subject to certain constraints. There are numerous examples. Furthermore, this notion is very useful because it unifies large areas of classical analysis. It makes excellent sense of Fourier analysis, providing a satisfactory setting in which convergence questions are relatively unsubtle and straightforward. Instead of resolving various delicate classical issues, it bypasses them entirely. It organizes Lebesgue's theory of measures (described in the section Measure theory). The theory of integral equations—like differential equations but with integrals instead of derivatives—was very popular in Hilbert's day, and that, too, could be brought into the same framework. What Hilbert could not anticipate, since he died before the necessary physical theories were discovered, was that Hilbert space would also turn out to be ideal for quantum mechanics. In classical physics an observable value is just a number; today a quantum mechanical observable value is defined as an operator on a Hilbert space.

      Banach extended Hilbert's ideas considerably. A Banach space is a vector space with a norm, but not necessarily given by an inner product. Again the space must be complete. The theory of Banach spaces is extremely important as a framework for studying partial differential equations, which can be viewed as algebraic equations whose variables lie in a suitable Banach space. For instance, solving the wave equation for a violin string is equivalent to finding solutions of the equation P(u) = 0, where u is a member of the Banach space of functions u(x) defined on the interval 0 ≤ x ≤ l and where P is the wave operator
P(u) = u_tt − c²u_xx.

Variational principles and global analysis
      The great mathematicians of Classical times were very interested in variational problems. An example is the famous problem of the brachistochrone: find the shape of a curve with given start and end points along which a body will fall in the shortest possible time. The answer is (part of) an upside-down cycloid, where a cycloid is the path traced by a point on the rim of a rolling circle. More important for the purposes of this article is the nature of the problem: from among a class of curves, select the one that minimizes some quantity.

      Variational problems can be put into Banach space language too. The space of curves is the Banach space, the quantity to be minimized is some functional (a function with functions, rather than simply numbers, as input) defined on the Banach space, and the methods of analysis can be used to determine the minimum. This approach can be generalized even further, leading to what is now called global analysis.

      Global analysis has many applications to mathematical physics. Euler and the French mathematician Pierre-Louis Moreau de Maupertuis discovered that the whole of Newtonian mechanics can be restated in terms of a variational principle: mechanical systems move in a manner that minimizes (or, more technically, extremizes) a functional known as action. The French mathematician Pierre de Fermat stated a similar principle for optics, known as the principle of least time: light rays follow paths that minimize the total time of travel. Later the Irish mathematician William Rowan Hamilton found a unified theory that includes both optics and mechanics under the general notion of a Hamiltonian system—nowadays subsumed into a yet more general and abstract theory known as symplectic geometry.

      An especially fascinating area of global analysis concerns the Plateau problem. The blind Belgian physicist Joseph Plateau (using an assistant as his eyes) spent many years observing the form of soap films and bubbles. He found that if a wire frame in the form of some curve is dipped in a soap solution, then the film forms beautiful curved surfaces. They are called minimal surfaces because they have minimal area subject to spanning the curve. (Their surface tension is proportional to their area, and their energy is proportional to surface tension, so they are actually energy-minimizing films.) For example, a soap bubble is spherical because a sphere has the smallest surface area, subject to enclosing a given volume of air.

      The mathematics of minimal surfaces is an exciting area of current research with many attractive unsolved problems and conjectures. One of the major triumphs of global analysis occurred in 1976 when the American mathematicians Jean Taylor and Frederick Almgren obtained the mathematical derivation of the Plateau conjecture, which states that, when several soap films join together (for example, when several bubbles meet each other along common interfaces), the angles at which the films meet are either 120 degrees (for three films meeting along an edge) or approximately 109 degrees (for four films meeting at a vertex). Plateau had conjectured this from his experiments.

Constructive analysis
      One philosophical feature of traditional analysis, which worries mathematicians whose outlook is especially concrete, is that many basic theorems assert the existence of various numbers or functions but do not specify what those numbers or functions are. For instance, the completeness property of the real numbers indicates that every Cauchy sequence converges but not what it converges to. A school of analysis initiated by the American mathematician Errett Bishop has developed a new framework for analysis in which no object can be deemed to exist unless a specific rule is given for constructing it. This school is known as constructive analysis, and its devotees have shown that it is just as rich in structure as traditional analysis and that most of the traditional theorems have analogs within the constructive framework. This philosophy has its origins in the earlier work of the Dutch mathematician-logician L.E.J. Brouwer, who criticized “mainstream” mathematical logicians for accepting proofs that mathematical objects exist without there being any specific construction of them (for example, a proof that some series converges without any specification of the limit which it converges to). Brouwer founded an entire school of mathematical logic, known as intuitionism, to advance his views.

      However, constructive analysis remains on the fringes of the mathematical mainstream, probably because most mathematicians accept classical existence proofs and see no need for the additional mathematical baggage involved in carrying out analysis constructively. Nevertheless, constructive analysis is very much in the same algorithmic spirit as computer science, and in the future there may be some fruitful interaction with this area.

Nonstandard analysis
      A very different philosophy—pretty much the exact opposite of constructive analysis—leads to nonstandard analysis, a slightly misleading name. Nonstandard analysis arose from the work of the German-born mathematician Abraham Robinson in mathematical logic, and it is best described as a variant of real analysis in which infinitesimals and infinities genuinely exist—without any paradoxes. In nonstandard analysis, for example, one can define the limit a of a sequence aₙ to be the unique real number (if any) such that |aₙ − a| is infinitesimal for all infinite integers n.

      Generations of students have spent years learning, painfully, not to think that way when studying analysis. Now it turns out that such thinking is entirely rigorous, provided that it is carried out in a rather subtle context. As well as the usual systems of real numbers ℝ and natural numbers ℕ, nonstandard analysis introduces two more extensive systems of nonstandard real numbers ℝ* and nonstandard natural numbers ℕ*. The system ℝ* includes numbers that are infinitesimal relative to ordinary real numbers ℝ. That is, nonzero nonstandard real numbers exist that are smaller than any nonzero standard real number. (What cannot be done is to have nonzero nonstandard real numbers that are smaller than any nonzero nonstandard real number, which is impossible for the same reason that no infinitesimal real numbers exist.) In a similar way, ℝ* also includes numbers that are infinite relative to ordinary real numbers.

      In a very strong sense, it can be shown that nonstandard analysis accurately mimics the whole of traditional analysis. However, it brings dramatic new methods to bear, and it has turned out, for example, to offer an interesting new approach to stochastic differential equations—like standard differential equations but subject to random noise. As with constructive analysis, nonstandard analysis sits outside the mathematical mainstream, but its prospects of joining the mainstream seem excellent.

Ian Stewart

History of analysis

The Greeks encounter continuous magnitudes
      Analysis consists of those parts of mathematics in which continuous change is important. These include the study of motion and the geometry of smooth curves and surfaces—in particular, the calculation of tangents, areas, and volumes. Ancient Greek mathematicians made great progress in both the theory and practice of analysis. Theory was forced upon them about 500 BC by the Pythagorean (Pythagoreanism) discovery of irrational magnitudes and about 450 BC by Zeno (Zeno Of Elea)'s paradoxes of motion.

The Pythagoreans and irrational numbers (irrational number)
 Initially, the Pythagoreans believed that all things could be measured by the discrete natural numbers (1, 2, 3, …) and their ratios (ordinary fractions, or the rational numbers). This belief was shaken, however, by the discovery that the diagonal of a unit square (that is, a square whose sides have a length of 1) cannot be expressed as a rational number. This discovery was brought about by their own Pythagorean theorem, which established that the square on the hypotenuse of a right triangle is equal to the sum of the squares on the other two sides—in modern notation, c² = a² + b² (see figure—>). In a unit square, the diagonal is the hypotenuse of a right triangle, with sides a = b = 1, hence its measure is √2—an irrational number. Against their own intentions, the Pythagoreans had thereby shown that rational numbers did not suffice for measuring even simple geometric objects. (See Sidebar: Incommensurables.) Their reaction was to create an arithmetic of line segments, as found in Book II of Euclid's Elements (c. 300 BC), that included a geometric interpretation of rational numbers. For the Greeks, line segments were more general than numbers because they included continuous as well as discrete magnitudes.

      Indeed, √2 can be related to the rational numbers only via an infinite process. This was realized by Euclid, who studied the arithmetic of both rational numbers and line segments. His famous Euclidean algorithm, when applied to a pair of natural numbers, leads in a finite number of steps to their greatest common divisor. However, when applied to a pair of line segments with an irrational ratio, such as √2 and 1, it fails to terminate. Euclid even used this nontermination property as a criterion for irrationality. Thus, irrationality challenged the Greek concept of number by forcing them to deal with infinite processes.
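
Euclid's algorithm, and its failure to terminate on incommensurable lengths, can be sketched numerically. The Python below is illustrative only: with floating-point lengths the process cannot literally run forever, so the second function merely counts steps up to a cap (the function names, cap, and tolerance are our own choices, not Euclid's).

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b).
    # For natural numbers this terminates in finitely many steps.
    while b:
        a, b = b, a % b
    return a

def remainder_steps(a, b, max_steps=25, tol=1e-12):
    # The same process applied to two lengths (floats), counting steps
    # until the remainder (nearly) vanishes or we give up.
    steps = 0
    while b > tol and steps < max_steps:
        a, b = b, a % b
        steps += 1
    return steps

print(gcd(1071, 462))                  # 21
print(remainder_steps(6.0, 4.0))       # terminates after 2 steps
print(remainder_steps(2 ** 0.5, 1.0))  # still running at the 25-step cap
```

For the commensurable pair (6, 4) the remainders reach zero almost at once; for the pair (√2, 1) they shrink by a factor of about 0.414 per step but never vanish, mirroring Euclid's nontermination criterion for irrationality.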

Zeno's paradoxes and the concept of motion
      Just as √2 was a challenge to the Greeks' concept of number, Zeno's paradoxes (paradoxes of Zeno) were a challenge to their concept of motion. In his Physics (c. 350 BC), Aristotle quoted Zeno as saying:

There is no motion because that which is moved must arrive at the middle [of the course] before it arrives at the end.

      Zeno's arguments are known only through Aristotle, who quoted them mainly to refute them. Presumably, Zeno meant that, to get anywhere, one must first go half way and before that one-fourth of the way and before that one-eighth of the way and so on. Because this process of halving distances would go on into infinity (a concept that the Greeks would not accept as possible), Zeno claimed to “prove” that reality consists of changeless being. Still, despite their loathing of infinity, the Greeks found that the concept was indispensable in the mathematics of continuous magnitudes. So they reasoned about infinity as finitely as possible, in a logical framework called the theory of proportions and using the method of exhaustion (exhaustion, method of).

      The theory of proportions was created by Eudoxus (Eudoxus of Cnidus) about 350 BC and preserved in Book V of Euclid's Elements. It established an exact relationship between rational magnitudes and arbitrary magnitudes by defining two magnitudes to be equal if the rational magnitudes less than them were the same. In other words, two magnitudes were different only if there was a rational magnitude strictly between them. This definition served mathematicians for two millennia and paved the way for the arithmetization of analysis (analysis) in the 19th century, in which arbitrary numbers were rigorously defined in terms of the rational numbers. The theory of proportions was the first rigorous treatment of the concept of limits, an idea that is at the core of modern analysis. In modern terms, Eudoxus' theory defined arbitrary magnitudes as limits of rational magnitudes, and basic theorems about the sum, difference, and product of magnitudes were equivalent to theorems about the sum, difference, and product of limits.

The method of exhaustion
 The method of exhaustion (exhaustion, method of), also due to Eudoxus, was a generalization of the theory of proportions. Eudoxus's idea was to measure arbitrary objects by defining them as combinations of multiple polygons or polyhedra. In this way, he could compute volumes and areas of many objects with the help of a few shapes, such as triangles and triangular prisms, of known dimensions. For example, by using stacks of prisms (see figure—>), Eudoxus was able to prove that the volume of a pyramid is one-third of the area of its base B multiplied by its height h, or in modern notation Bh/3. Loosely speaking, the volume of the pyramid is “exhausted” by stacks of prisms as the thickness of the prisms becomes progressively smaller. More precisely, what Eudoxus proved is that any volume less than Bh/3 may be exceeded by a stack of prisms inside the pyramid, and any volume greater than Bh/3 may be undercut by a stack of prisms containing the pyramid. Hence, the volume of the pyramid itself can be only Bh/3—all other possibilities have been “exhausted.” Similarly, Eudoxus proved that the area of a circular disk is proportional to the square of its radius (see Sidebar: Pi Recipes) and that the volume of a cone (obtained by exhausting it by pyramids) is also Bh/3, where B is again the area of the base and h is the height of the cone.
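
Eudoxus's exhaustion argument can be mimicked numerically: inscribe stacks of ever-thinner prisms in a pyramid and watch the total volume climb toward Bh/3. This Python sketch is a modern illustration of the idea, not Eudoxus's double-contradiction proof; the function name and sample dimensions are our own.

```python
def pyramid_by_prisms(B, h, n):
    # Stack n thin prisms inside a pyramid of base area B and height h.
    # The slab i steps down from the apex has cross-sectional area B*(i/n)**2.
    dz = h / n
    return sum(B * (i / n) ** 2 * dz for i in range(n))

exact = 9.0 * 6.0 / 3  # Bh/3 = 18 for B = 9, h = 6
for n in (10, 100, 1000):
    print(n, pyramid_by_prisms(9.0, 6.0, n))  # climbs toward 18
```

Every inscribed stack falls short of Bh/3, but by less and less as the prisms thin out—exactly the behaviour Eudoxus exploited.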

 The greatest exponent of the method of exhaustion was Archimedes (c. 285–212/211 BC). Among his discoveries using exhaustion were the area of a parabolic segment, the volume of a paraboloid, the tangent to a spiral, and a proof that the volume of a sphere is two-thirds the volume of the circumscribing cylinder. His calculation of the area of the parabolic segment (see figure—>) involved the application of infinite series (analysis) to geometry. In this case, the infinite geometric series
1 + 1/4 + 1/16 + 1/64 + ⋯ = 4/3
is obtained by successively adding a triangle with unit area, then triangles that total 1/4 unit area, then triangles of 1/16, and so forth, until the area is exhausted. Archimedes avoided actual contact with infinity, however, by showing that the series obtained by stopping after a finite number of terms could be made to exceed any number less than 4/3. In modern terms, 4/3 is the limit of the partial sums. For information on how he made his discoveries, see Sidebar: Archimedes' Lost Method.
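
The behaviour of Archimedes' partial sums can be checked with exact rational arithmetic: each extra group of triangles shrinks the gap below 4/3 by a factor of 4. An illustrative sketch (the function name is ours):

```python
from fractions import Fraction

def partial_sum(n):
    # First n terms of 1 + 1/4 + 1/16 + ..., computed exactly.
    return sum(Fraction(1, 4) ** k for k in range(n))

for n in (1, 2, 3, 10):
    s = partial_sum(n)
    print(n, s, Fraction(4, 3) - s)  # the gap shrinks by a factor of 4 each term
```

After n terms the gap is exactly 1/(3·4ⁿ⁻¹), so the partial sums exceed any number less than 4/3—Archimedes' way of handling the limit without invoking infinity.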

Models of motion in medieval Europe
      The ancient Greeks applied analysis only to static problems—either to pure geometry or to forces in equilibrium. Problems involving motion were not well understood, perhaps because of the philosophical doubts exemplified by Zeno's paradoxes or because of Aristotle's erroneous theory that motion required the continuous application of force.

      Analysis began its long and fruitful association with dynamics in the Middle Ages, when mathematicians in England and France studied motion under constant acceleration. They correctly concluded that, for a body under constant acceleration over a given time interval,

total displacement = time × velocity at the middle instant.

 This result was discovered by mathematicians at Merton College, Oxford, in the 1330s, and for that reason it is sometimes called the Merton acceleration theorem. A very simple graphical proof was given about 1361 by the French bishop and Aristotelian scholar Nicholas Oresme (Oresme, Nicholas). He observed that the graph of velocity versus time is a straight line for constant acceleration and that the total displacement of an object is represented by the area under the line. This area equals the width (length of the time interval) times the height (velocity) at the middle of the interval (see figure—>).
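
The Merton acceleration theorem is easy to verify numerically: integrate a linearly increasing velocity over the interval and compare with the width of the interval times the mid-interval velocity. A Python sketch with arbitrary sample values (the function name is ours):

```python
def displacement(v0, a, t, steps=100000):
    # Integrate velocity v(s) = v0 + a*s over [0, t] by summing thin strips,
    # sampling each strip at its midpoint.
    dt = t / steps
    return sum((v0 + a * (i + 0.5) * dt) * dt for i in range(steps))

v0, a, t = 2.0, 3.0, 4.0
print(displacement(v0, a, t))  # numerical total displacement
print(t * (v0 + a * t / 2))    # time x velocity at the middle instant = 32.0
```

For constant acceleration the velocity graph is a straight line, so the two numbers agree (up to floating-point rounding)—Oresme's area argument in computational form.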

      In making this translation of dynamics into geometry, Oresme was probably the first to explicitly use coordinates outside of cartography. He also helped to demystify dynamics by showing that the geometric equivalent of motion could be quite familiar and tractable. For example, from the Merton acceleration theorem the distance traveled in time t by a body undergoing constant acceleration from rest is proportional to t². At the time, it was not known whether such motion occurs in nature, but in 1604 the Italian mathematician and physicist Galileo discovered that this model precisely fits free-falling bodies.

      Galileo also overthrew the mistaken dogma of Aristotle that motion requires the continual application of force by asserting the principle of inertia: in the absence of external forces, a body has zero acceleration; that is, a motionless body remains at rest, and a moving body travels with constant velocity. From this he concluded that a projectile—which is subject to the vertical force of gravity but negligible horizontal forces—has constant horizontal velocity, with its horizontal displacement proportional to time t. Combining this with his knowledge that the vertical displacement of any projectile is proportional to t², Galileo discovered that a projectile's trajectory is a parabola.

      The three conic sections (conic section) ( ellipse, parabola, and hyperbola) had been studied since antiquity, and Galileo's models of motion gave further proof that dynamics could be studied with the help of geometry. In 1609 the German astronomer Johannes Kepler (Kepler, Johannes) took this idea to the cosmic level by showing that the planets orbit the Sun in ellipses. Eventually, Newton uncovered deeper reasons for the occurrence of conic sections with his theory of gravitation.

      During the period from Oresme to Galileo, there were also some remarkable discoveries concerning infinite series. Oresme summed the series

1/2 + 2/2² + 3/2³ + 4/2⁴ + ⋯ = 2,
and he also showed that the harmonic series
1 + 1/2 + 1/3 + 1/4 +⋯
does not have a finite sum, because in the successive groups of terms
1/2,  1/3 + 1/4,  1/5 + 1/6 + 1/7 + 1/8, …
each group has a sum greater than 1/2. With his use of infinite series, coordinates, and graphical interpretations of motion, Oresme was on the brink of a decisive advance beyond the discoveries of Archimedes. All that Oresme lacked was a symbolic language to unite his ideas and allow them to be manipulated mathematically. That symbolic language was to be found in the emerging mathematical discipline of algebra.
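
Both of Oresme's discoveries can be reproduced with exact fractions: the partial sums of n/2ⁿ creep toward 2, and each of his groups of harmonic terms contributes at least 1/2 (the first group is exactly 1/2; the later ones exceed it). A sketch, with our own function name:

```python
from fractions import Fraction

# Oresme's series: the partial sums of n/2**n approach 2.
print(float(sum(Fraction(n, 2 ** n) for n in range(1, 40))))

def harmonic_group(k):
    # Oresme's k-th group of harmonic terms: 1/(2**(k-1)+1) + ... + 1/2**k.
    return sum(Fraction(1, m) for m in range(2 ** (k - 1) + 1, 2 ** k + 1))

for k in (1, 2, 3, 4):
    g = harmonic_group(k)
    print(k, g, g >= Fraction(1, 2))  # every group contributes at least 1/2
```

Since there are infinitely many groups, each worth at least 1/2, the harmonic partial sums grow without bound—Oresme's divergence argument.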

      About 1630 the French mathematicians Pierre de Fermat (Fermat, Pierre de) and René Descartes (Descartes, René) independently realized that algebra was a tool of wondrous power in geometry and invented what is now known as analytic geometry. If a curve in the plane can be expressed by an equation of the form p(x, y) = 0, where p(x, y) is any polynomial in the two variables, then its basic properties can be found by algebra. (For example, the polynomial equation x² + y² = 1 describes a simple circle of radius 1 about the origin.) In particular, it is possible to find the tangent anywhere along the curve. Thus, what Archimedes could solve only with difficulty and for isolated cases, Fermat and Descartes solved in routine fashion and for a huge class of curves (now known as the algebraic curves).

      It is easy to find the tangent by algebra, but it is somewhat harder to justify the steps involved. (See the section Graphical interpretation (analysis) for an illustrated example of this procedure.) In general, the slope of any curve y = f(x) at any value of x can be found by computing the slope of the chord

[f(x + h) − f(x)]/h

and taking its limit as h tends to zero. This limit, written as f′(x), is called the derivative of the function f. Fermat's method showed that the derivative of x² is 2x and, by extension, that the derivative of xᵏ is kxᵏ⁻¹ for any natural number k.
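
The limiting process can be watched directly: compute chord slopes for shrinking h and see them settle on the derivative. This is an illustrative numerical sketch, not Fermat's own method of adequality:

```python
def chord_slope(f, x, h):
    # Slope of the chord joining (x, f(x)) and (x + h, f(x + h)).
    return (f(x + h) - f(x)) / h

cube = lambda x: x ** 3
for h in (0.1, 0.01, 0.001):
    print(h, chord_slope(cube, 2.0, h))  # tends to the derivative 3*2**2 = 12
```

The printed slopes approach 12, the value of 3x² at x = 2, in agreement with the rule that the derivative of xᵏ is kxᵏ⁻¹.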

The fundamental theorem of calculus
Differentials and integrals
 The method of Fermat and Descartes is part of what is now known as differential calculus, and indeed it deserves the name calculus, being a systematic and general method for calculating tangents. (See the section Differential calculus (analysis).) At the same time, mathematicians were trying to calculate other properties of curved figures, such as their arc length, area, and volume; these calculations are part of what is now known as integral calculus (analysis). A general method for integral problems was not immediately apparent in the 17th century, although algebraic techniques worked well in certain cases, often in combination with geometric arguments. In particular, contemporaries of Fermat and Descartes struggled to understand the properties of the cycloid, a curve not studied by the ancients. The cycloid is traced by a point on the circumference of a circle as it rolls along a straight line, as shown in the figure—>.

      The cycloid was commended to the mathematicians of Europe by Marin Mersenne (Mersenne, Marin), a French priest who directed much of the scientific research in the first half of the 17th century by coordinating correspondence between scientists. About 1634 the French mathematician Gilles Personne de Roberval (Roberval, Gilles Personne de) first took up the challenge, by proving a conjecture of Galileo that the area enclosed by one arch of the cycloid is three times the area of the generating circle.

      Roberval also found the volume of the solid formed by rotating the cycloid about the straight line through its endpoints. Because his position at the Collège Royal had to be reclaimed every three years in a mathematical contest—in which the incumbent set the questions—he was secretive about his methods. It is now known that his calculations used indivisibles (loosely speaking, “nearly” dimensionless elements) and that he found the area beneath the sine curve, a result previously obtained by Kepler. In modern language, Kepler and Roberval knew how to integrate the sine function.

      Results on the cycloid were discovered and rediscovered over the next two decades by Fermat, Descartes, and Blaise Pascal (Pascal, Blaise) in France, Evangelista Torricelli (Torricelli, Evangelista) in Italy, and John Wallis (Wallis, John) and Christopher Wren (Wren, Sir Christopher) in England. In particular, Wren found that the length (as measured along the curve) of one arch of the cycloid is eight times the radius of the generating circle, demolishing a speculation of Descartes that the lengths of curves could never be known. Such was the acrimony and national rivalry stirred up by the cycloid that it became known as the Helen of geometers because of its beauty and ability to provoke discord. Its importance in the development of mathematics was somewhat like solving the cubic equation—a small technical achievement but a large encouragement to solve more difficult problems. (See Sidebar: Algebraic Versus Transcendental Objects and Sidebar: Calculus of Variations.)

      A more elementary, but fundamental, problem was to integrate xᵏ—that is, to find the area beneath the curves y = xᵏ where k = 1, 2, 3, …. For k = 2 the curve is a parabola, and the area of this shape had been found in the 3rd century BC by Archimedes. For an arbitrary number k, the area can be found if a formula for 1ᵏ + 2ᵏ + ⋯ + nᵏ is known. One of Archimedes' approaches to the area of the parabola was, in fact, to find this sum for k = 2. The sums for k = 3 and k = 4 had been found by the Arab mathematician Abū ʿAlī al-Ḥasan ibn al-Haytham (Ibn al-Haytham) (c. 965–1040) and for k up to 13 by Johann Faulhaber in Germany in 1622. Finally, in the 1630s, the area under y = xᵏ was found for all natural numbers k. It turned out that the area between 0 and x is simply xᵏ⁺¹/(k + 1), a solution independently discovered by Fermat, Roberval, and the Italian mathematician Bonaventura Cavalieri (Cavalieri, Bonaventura).
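
The result xᵏ⁺¹/(k + 1) can be checked by brute force, summing thin rectangles under y = xᵏ much as sums of kth powers were used historically. An illustrative sketch with our own function name:

```python
def area_under_power(k, x, n=100000):
    # Add up n thin rectangles under y = t**k between t = 0 and t = x,
    # sampling each rectangle at its midpoint.
    dt = x / n
    return sum(((i + 0.5) * dt) ** k * dt for i in range(n))

for k in (1, 2, 3):
    print(k, area_under_power(k, 2.0), 2.0 ** (k + 1) / (k + 1))
```

For each k the rectangle sum and the closed formula agree to many decimal places.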

Discovery of the theorem
      This hard-won result became almost a triviality with the discovery of the fundamental theorem of calculus (analysis) a few decades later. The fundamental theorem states that the area under the curve y = f(x) is given by a function F(x) whose derivative is f(x); that is, F′(x) = f(x). The fundamental theorem reduced integration to the problem of finding a function with a given derivative; for example, xᵏ⁺¹/(k + 1) is an integral of xᵏ because its derivative equals xᵏ.

      The fundamental theorem was first discovered by James Gregory (Gregory, James) in Scotland in 1668 and by Isaac Barrow (Barrow, Isaac) (Newton's predecessor at the University of Cambridge) about 1670, but in a geometric form that concealed its computational advantages. Newton (Newton, Sir Isaac) discovered the result for himself about the same time and immediately realized its power. In fact, from his viewpoint the fundamental theorem completely solved the problem of integration. However, he failed to publish his work, and in Germany Leibniz (Leibniz, Gottfried Wilhelm) independently discovered the same theorem and published it in 1686. This led to a bitter dispute over priority and over the relative merits of Newtonian and Leibnizian methods. This dispute isolated and impoverished British mathematics until the 19th century.

 For Newton, analysis meant finding power series for functions f(x)—i.e., infinite sums of multiples of powers of x. A few examples were known before his time—for example, the geometric series for 1/(1 − x),
1/(1 − x) = 1 + x + x² + x³ + x⁴ + ⋯,
which is implicit in Greek mathematics, and series for sin (x), cos (x), and tan⁻¹ (x), discovered about 1500 in India although not communicated to Europe (see table—>). Newton created a calculus of power series by showing how to differentiate, integrate, and invert them. Thanks to the fundamental theorem, differentiation and integration were easy, as they were needed only for powers xᵏ. Newton's more difficult achievement was inversion: given y = f(x) as a sum of powers of x, find x as a sum of powers of y. This allowed him, for example, to find the sine series from the inverse sine and the exponential (exponential function) series from the logarithm. See Sidebar: Newton and Infinite Series.
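
The geometric series is the simplest case of a power series: for |x| < 1 its partial sums converge quickly to 1/(1 − x), as a short sketch shows (the function name is ours):

```python
def geometric_partial(x, n):
    # Partial sum 1 + x + x**2 + ... + x**(n - 1).
    return sum(x ** k for k in range(n))

x = 0.3
for n in (2, 5, 20):
    print(n, geometric_partial(x, n))
print(1 / (1 - x))  # the limit, 1.428571...
```

After n terms the error is xⁿ/(1 − x), so for x = 0.3 each extra term cuts the error by more than a factor of 3.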

 For Leibniz the meaning of calculus was somewhat different. He did not begin with a fixed idea about the form of functions, and so the operations he developed were quite general. In fact, modern derivative and integral symbols are derived from Leibniz's d for difference and ∫ for sum. He applied these operations to variables and functions in a calculus of infinitesimals. When applied to a variable x, the difference operator d produces dx, an infinitesimal increase in x that is somehow as small as desired without ever quite being zero. Corresponding to this infinitesimal increase, a function f(x) experiences an increase df = f′dx, which Leibniz regarded as the difference between values of the function f at two values of x a distance of dx apart. Thus the derivative f′ = df/dx was a quotient of infinitesimals. Similarly, Leibniz viewed the integral ∫f(x)dx of f(x) as a sum of infinitesimals—infinitesimal strips of area under the curve y = f(x), as shown in the figure—>—so that the fundamental theorem of calculus was for him the truism that the difference between successive sums is the last term in the sum: d∫f(x)dx = f(x)dx.

      In effect, Leibniz reasoned with continuous quantities as if they were discrete. The idea was even more dubious than indivisibles, but, because it came with a perfectly apt notation that facilitated calculations, mathematicians initially ignored any logical difficulties in their joy at being able to solve problems that until then were intractable. Both Leibniz and Newton (who also took advantage of mysterious nonzero quantities that vanished when convenient) knew the calculus was a method of unparalleled scope and power, and they both wanted the credit for inventing it. True, the underlying infinitesimals were ridiculous—as the Anglican bishop George Berkeley (Berkeley, George) remarked in his The Analyst; or, A Discourse Addressed to an Infidel Mathematician (1734):

They are neither finite quantities…nor yet nothing. May we not call them ghosts of departed quantities?

      However, results found with their help could be confirmed (given sufficient, if not quite infinite, patience) by the method of exhaustion. So calculus forged ahead, and eventually the credit for it was distributed evenly, with Newton getting his share for originality and Leibniz his share for finding an appropriate symbolism.

Calculus flourishes
      Newton had become the world's leading scientist, thanks to the publication of his Principia (1687), which explained Kepler's laws and much more with his theory of gravitation. Assuming that the gravitational force between bodies is inversely proportional to the square of the distance between them, he found that in a system of two bodies the orbit of one relative to the other must be an ellipse. Unfortunately, Newton's preference for classical geometric methods obscured the essential calculus. The result was that Newton had admirers but few followers in Britain, notable exceptions being Brook Taylor (Taylor, Brook) and Colin Maclaurin (Maclaurin, Colin). Instead, calculus flourished on the Continent, where the power of Leibniz's notation was not curbed by Newton's authority.

      For the next few decades, calculus belonged to Leibniz and the Swiss brothers Jakob (Bernoulli, Jakob) and Johann Bernoulli (Bernoulli, Johann). Between them they developed most of the standard material found in calculus courses: the rules for differentiation, the integration of rational functions, the theory of elementary functions, applications to mechanics, and the geometry of curves. To Newton's chagrin, Johann even presented a Leibniz-style proof that the inverse square law of gravitation implies elliptical orbits. He claimed, with some justice, that Newton had not been clear on this point. The first calculus textbook was also due to Johann—his lecture notes Analyse des infiniment petits (“Infinitesimal Analysis”) were published by the marquis de l'Hôpital in 1696—and calculus in the next century was dominated by his great Swiss student Leonhard Euler (Euler, Leonhard), who was invited to Russia by Catherine the Great (Catherine II) and thus helped to spread the Leibniz doctrine to all corners of Europe.

      Perhaps the only basic calculus result missed by the Leibniz school was one on Newton's specialty of power series, given by Taylor in 1715. The Taylor series neatly wraps up the power series for 1/(1 − x), sin (x), cos (x), tan⁻¹ (x), and many other functions in a single formula:

f(x) = f(a) + f′(a)(x − a) + f′′(a)(x − a)²/2! + f′′′(a)(x − a)³/3! + ⋯.

Here f′(a) is the derivative of f at x = a, f′′(a) is the derivative of the derivative (the “second derivative”) at x = a, and so on (see Higher-order derivatives (analysis)). Taylor's formula pointed toward Newton's original goal—the general study of functions by power series—but the actual meaning of this goal awaited clarification of the function concept.
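
Taylor's formula can be tried out on eˣ, whose derivatives are all eˣ, so every coefficient f⁽ⁿ⁾(a)/n! is known exactly. An illustrative sketch expanding about a = 1 (the function name is ours):

```python
import math

def taylor_exp(x, a, terms):
    # Taylor polynomial of e**x about x = a; every derivative of e**x
    # is e**x, so the n-th coefficient is e**a / n!.
    return sum(math.exp(a) * (x - a) ** n / math.factorial(n)
               for n in range(terms))

for terms in (2, 4, 8):
    print(terms, taylor_exp(1.5, 1.0, terms))
print(math.exp(1.5))  # the value being approximated, 4.481689...
```

A handful of terms already reproduces e^1.5 to several decimal places, since the series converges rapidly when x is close to a.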

Elaboration and generalization
Euler and infinite series
      The 17th-century techniques of differentiation, integration, and infinite processes were of enormous power and scope, and their use expanded in the next century. The output of Euler alone was enough to dwarf the combined discoveries of Newton, Leibniz, and the Bernoullis. Much of his work elaborated on theirs, developing the mechanics of heavenly bodies, fluids, and flexible and elastic media. For example, Euler studied the difficult problem of describing the motion of three masses under mutual gravitational attraction (now known as the three-body problem). Applied to the Sun-Moon-Earth system, Euler's work greatly increased the accuracy of the lunar tables used in navigation—for which the British Board of Longitude awarded him a monetary prize. He also applied analysis to the bending of a thin elastic beam and in the design of sails.

      Euler also took analysis in new directions. In 1734 he solved a problem in infinite series that had defeated his predecessors: the summation of the series

1/1² + 1/2² + 1/3² + 1/4² + ⋯.
Euler found the sum to be π²/6 by the bold step of comparing the series with the sum of the roots of the following infinite polynomial equation (obtained from the power series for the sine function):
sin (√x)/√x = 1 − x/3! + x²/5! − x³/7! + ⋯ = 0.
Euler was later able to generalize this result to find the values of the function
ζ(s) = 1/1ˢ + 1/2ˢ + 1/3ˢ + 1/4ˢ + ⋯
for all even natural numbers s.
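
Euler's value π²/6 can be checked against partial sums, though the convergence is slow—the tail after n terms is roughly 1/n. A quick sketch (the function name is ours):

```python
import math

def basel_partial(n):
    # Partial sum 1/1**2 + 1/2**2 + ... + 1/n**2.
    return sum(1 / k ** 2 for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, basel_partial(n))
print(math.pi ** 2 / 6)  # Euler's value, 1.644934...
```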

      The function ζ(s), later known as the Riemann zeta function, is a concept that really belongs to the 19th century. Euler caught a glimpse of the future when he discovered the fundamental property of ζ(s) in his Introduction to Analysis of the Infinite (1748): the sum over the integers 1, 2, 3, 4, … equals a product over the prime numbers 2, 3, 5, 7, 11, 13, 17, …, namely

ζ(s) = 1/1ˢ + 1/2ˢ + 1/3ˢ + ⋯ = [1/(1 − 2⁻ˢ)] [1/(1 − 3⁻ˢ)] [1/(1 − 5⁻ˢ)] [1/(1 − 7⁻ˢ)] ⋯.
      This startling formula was the first intimation that analysis—the theory of the continuous—could say something about the discrete and mysterious prime numbers. The zeta function unlocks many of the secrets of the primes—for example, that there are infinitely many of them. To see why, suppose there were only finitely many primes. Then the product for ζ(s) would have only finitely many terms and hence would have a finite value for s = 1. But for s = 1 the sum on the left would be the harmonic series, which Oresme showed to be infinite, thus producing a contradiction.
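
The product formula can be tested numerically for s = 2: a truncated sum over the integers and a truncated product over sieved primes approach the same value. An illustrative sketch; the cutoffs and function names are arbitrary choices of ours:

```python
def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, flag in enumerate(sieve) if flag]

def zeta_sum(s, n):
    # Truncated sum over the integers 1..n.
    return sum(1 / k ** s for k in range(1, n + 1))

def zeta_product(s, n):
    # Truncated Euler product over the primes up to n.
    prod = 1.0
    for p in primes_up_to(n):
        prod *= 1 / (1 - p ** (-s))
    return prod

print(zeta_sum(2, 100000), zeta_product(2, 10000))  # both near pi**2/6
```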

      Of course it was already known that there were infinitely many primes—this is a famous theorem of Euclid—but Euler's proof gave deeper insight into the result. By the end of the 20th century, prime numbers had become the key to the security of most electronic transactions, with sensitive information being “hidden” in the process of multiplying large prime numbers (see cryptology). This demands an infinite supply of primes, to avoid repeating primes used in other transactions, so that the infinitude of primes has become one of the foundations of electronic commerce (e-commerce).

Complex exponentials
      As a final example of Euler's work, consider his famous formula for complex (complex number) exponentials e^(iθ) = cos (θ) + i sin (θ), where i = √(−1). Like his formula for ζ(2), which surprisingly relates π to the squares of the natural numbers, the formula for e^(iθ) relates all the most famous numbers—e, i, and π—in a miraculously simple way. Substituting π for θ in the formula gives e^(iπ) = −1, which is surely the most remarkable formula in mathematics.
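
Both identities can be confirmed with Python's complex arithmetic; the tiny imaginary residue left in e^(iπ) is floating-point rounding, nothing more:

```python
import cmath
import math

theta = 0.75
print(cmath.exp(1j * theta))                      # e**(i*theta)
print(complex(math.cos(theta), math.sin(theta)))  # cos(theta) + i*sin(theta)
print(cmath.exp(1j * math.pi))                    # -1, up to rounding
```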

      The formula for e^(iθ) appeared in Euler's Introduction, where he proved it by comparing the Taylor series for the two sides. The formula is really a reworking of other formulas due to Newton's contemporaries in England, Roger Cotes and Abraham de Moivre (Moivre, Abraham de)—and Euler may also have been influenced by discussions with his mentor Johann Bernoulli—but it definitively shows how the sine and cosine functions are just parts of the exponential function. This, too, was a glimpse of the future, where many a pair of real functions would be fused into a single “complex” function. Before explaining what this means, more needs to be said about the evolution of the function concept in the 18th century.

Functions (function)
      Calculus introduced mathematicians to many new functions by providing new ways to define them, such as with infinite series and with integrals. More generally, functions arose as solutions of ordinary differential equations (analysis) (involving a function of one variable and its derivatives) and partial differential equations (analysis) (involving a function of several variables and derivatives with respect to these variables). Many physical quantities depend on more than one variable, so the equations of mathematical physics typically involve partial derivatives.

      In the 18th century the most fertile equation of this kind was the vibrating string equation, derived by the French mathematician Jean Le Rond d'Alembert (Alembert, Jean Le Rond d') in 1747 and relating to rates of change of quantities arising in the vibration of a taut violin string (see Musical origins (analysis)). This led to the amazing conclusion that an arbitrary continuous function f(x) can be expressed, between 0 and 2π, as a sum of sine and cosine functions in a series (later called a Fourier series (analysis)) of the form

y = f(x) = a₀/2 + (a₁ cos (x) + b₁ sin (x)) + (a₂ cos (2x) + b₂ sin (2x)) + ⋯.

      But what is an arbitrary continuous function, and is it always correctly expressed by such a series? Indeed, does such a series necessarily represent a continuous function at all? The French mathematician Joseph Fourier (Fourier, Joseph, Baron) addressed these questions in his The Analytical Theory of Heat (1822). Subsequent investigations turned up many surprises, leading not only to a better understanding of continuous functions but also of discontinuous functions, which do indeed occur as Fourier series. This in turn led to important generalizations of the concept of integral designed to integrate highly discontinuous functions—the Riemann integral of 1854 and the Lebesgue integral of 1902. (See the sections Riemann integral (analysis) and Measure theory (analysis).)
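
Fourier coefficients can be approximated by replacing the defining integrals with sums over a fine grid. The sketch below recovers the coefficients of a known trigonometric polynomial; the function name and sample count are our own choices:

```python
import math

def fourier_coeffs(f, n, samples=4096):
    # Rectangle-rule approximations to the Fourier coefficients of f on
    # [0, 2*pi]: a_n = (1/pi) * integral of f(x)*cos(n*x) dx, and b_n
    # likewise with sin.
    dx = 2 * math.pi / samples
    a = sum(f(i * dx) * math.cos(n * i * dx) for i in range(samples)) * dx / math.pi
    b = sum(f(i * dx) * math.sin(n * i * dx) for i in range(samples)) * dx / math.pi
    return a, b

# Check on f(x) = sin(x) + 0.5*cos(2x): expect b_1 = 1 and a_2 = 0.5.
f = lambda x: math.sin(x) + 0.5 * math.cos(2 * x)
print(fourier_coeffs(f, 1))
print(fourier_coeffs(f, 2))
```

For a trigonometric polynomial the equally spaced sums are essentially exact, so the known coefficients come back to near machine precision.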

Fluid flow
      Evolution in a different direction began when the French mathematicians Alexis Clairaut in 1740 and d'Alembert in 1752 discovered equations for fluid flow. Their equations govern the velocity components u and v at a point (x, y) in a steady two-dimensional flow. Like a vibrating string, the motion of a fluid is rather arbitrary, although not completely—d'Alembert was surprised to notice that a combination of the velocity components, u + iv, was a differentiable function of x + iy. Like Euler, he had discovered a function of a complex variable, with u and v its real and imaginary parts, respectively.

  This property of u + iv was rediscovered in France by Augustin-Louis Cauchy in 1827 and in Germany by Bernhard Riemann in 1851. By this time complex numbers had become an accepted part of mathematics, obeying the same algebraic rules as real numbers and having a clear geometric interpretation as points in the plane. Any complex function f(z) can be written in the form f(z) = f(x + iy) = u(x, y) + iv(x, y), where u and v are real-valued functions of x and y. Complex differentiable functions are those for which the limit f′(z) of (f(z + h) − f(z))/h exists as h tends to zero. However, unlike real numbers, which can approach zero only along the real line, complex numbers reside in the plane, and an infinite number of paths lead to zero. It turned out that, in order to give the same limit f′(z) as h tends to zero from any direction, u and v must satisfy the constraints imposed by the Clairaut and d'Alembert equations (see the section D'Alembert's wave equation).
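This direction-independence is easy to probe numerically. In the sketch below (an illustration of the idea, not from the text), the difference quotient of the differentiable function f(z) = z·z is nearly the same along several directions of approach, while for f(z) = conj(z), which violates the constraints, it depends on the direction:

```python
def difference_quotient(f, z, h):
    """(f(z + h) - f(z)) / h for a complex step h."""
    return (f(z + h) - f(z)) / h

z = 1 + 2j
step = 1e-6
# Approach zero along the real axis, the imaginary axis, and a diagonal:
directions = [1, 1j, (1 + 1j) / abs(1 + 1j)]

# f(z) = z*z is complex differentiable: every direction gives roughly 2z.
good = [difference_quotient(lambda w: w * w, z, step * d) for d in directions]

# f(z) = conjugate(z) is not: the quotient depends on the direction.
bad = [difference_quotient(lambda w: w.conjugate(), z, step * d) for d in directions]
```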

      A way to visualize differentiability is to interpret the function f as a mapping from one plane to another. For f′(z) to exist, the function f must be “similarity preserving in the small,” or conformal, meaning that infinitesimal regions are faithfully mapped to regions of the same shape, though possibly rotated and magnified by some factor. This makes differentiable complex functions useful in actual mapping problems, and they were used for this purpose even before Cauchy and Riemann recognized their theoretical importance.

      Differentiability is a much more significant property for complex functions than for real functions. Cauchy discovered that, if a function's first derivative exists, then all its derivatives exist, and therefore it can be represented by a power series in z—its Taylor series. Such a function is called analytic. In contrast to real differentiable functions, which are as “flexible” as string, complex differentiable functions are “rigid” in the sense that any region of the function determines the entire function. This is because the values of the function over any region, no matter how small, determine all its derivatives, and hence they determine its power series. Thus, it became feasible to study analytic functions via power series, a program attempted by the Italian-French mathematician Joseph-Louis Lagrange for real functions in the 18th century but first carried out successfully by the German mathematician Karl Weierstrass in the 19th century, after the appropriate subject matter of complex analytic functions had been discovered.
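The power-series viewpoint can be illustrated with the exponential function, whose Taylor series about 0 converges for every complex z (a standard example, not drawn from the text):

```python
import cmath

def exp_by_series(z, n_terms=40):
    """Sum the Taylor series exp(z) = 1 + z + z**2/2! + z**3/3! + ..."""
    term = 1.0 + 0j
    total = 0.0 + 0j
    for k in range(n_terms):
        total += term
        term *= z / (k + 1)  # next term: z**(k+1) / (k+1)!
    return total

z = 0.3 + 0.7j
by_series = exp_by_series(z)  # agrees with cmath.exp(z) to high precision
```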

Rebuilding the foundations
Arithmetization of analysis
      Before the 19th century, analysis rested on makeshift foundations of arithmetic and geometry, supporting the discrete and continuous sides of the subject, respectively. Mathematicians since the time of Eudoxus had doubted that “all is number,” and when in doubt they used geometry. This pragmatic compromise began to fall apart in 1799, when Gauss found himself obliged to use continuity in a result that seemed to be discrete—the fundamental theorem of algebra.

 The theorem says that any polynomial equation has a solution in the complex numbers. Gauss's first proof fell short (although this was not immediately recognized) because it assumed as obvious a geometric result actually harder than the theorem itself. In 1816 Gauss attempted another proof, this time relying on a weaker assumption known as the intermediate value theorem: if f(x) is a continuous function of a real variable x and if f(a) < 0 and f(b) > 0, then there is a c between a and b such that f(c) = 0.
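The intermediate value theorem is also the engine behind a practical root-finding method, bisection. The sketch below (an illustration, not Gauss's argument) repeatedly halves an interval on which a continuous function changes sign; the theorem guarantees a root survives in every half kept:

```python
def bisect(f, a, b, tol=1e-12):
    """Find c in [a, b] with f(c) = 0, assuming f is continuous
    with f(a) < 0 and f(b) > 0."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:   # sign change lies in [a, m]
            b = m
        else:                # sign change lies in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2

root = bisect(lambda x: x ** 3 - 2, 1.0, 2.0)  # the cube root of 2
```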

      The importance of proving the intermediate value theorem was recognized in 1817 by the Bohemian mathematician Bernhard Bolzano, who saw an opportunity to remove geometric assumptions from algebra. His attempted proof introduced essentially the modern condition for continuity of a function f at a point x: f(x + h) − f(x) can be made smaller than any given quantity, provided h can be made sufficiently small. Bolzano also relied on an assumption—the existence of a greatest lower bound: if a certain property M holds only for values greater than some quantity l, then there is a greatest quantity u such that M holds only for values greater than or equal to u. Bolzano could go no further than this, because in his time the notion of quantity was still too vague. Was it a number? Was it a line segment? And in any case how does one decide whether points on a line have a greatest lower bound?

      The same problem was encountered by the German mathematician Richard Dedekind when teaching calculus, and he later described his frustration with appeals to geometric intuition:

For myself this feeling of dissatisfaction was so overpowering that I made a fixed resolve to keep meditating on the question till I should find a purely arithmetic and perfectly rigorous foundation for the principles of infinitesimal analysis.…I succeeded on November 24, 1858.

      Dedekind eliminated geometry by going back to an idea of Eudoxus but taking it a step further. Eudoxus said, in effect, that a point on the line is uniquely determined by its position among the rationals. That is, two points are equal if the rationals less than them (and the rationals greater than them) are the same. Thus, each point creates a unique “cut” (L, U) in the rationals, a partition of the set of rationals into sets L and U with each member of L less than every member of U.

      Dedekind's small but crucial step was to dispense with the geometric points supposed to create the cuts. He defined the real numbers to be the cuts (L, U) just described—that is, as partitions of the rationals with each member of L less than every member of U. Cuts included representatives of all rational and irrational quantities previously considered, but now the existence of greatest lower bounds became provable and hence also the intermediate value theorem and all its consequences. In fact, all the basic theorems about limits and continuous functions followed from Dedekind's definition—an outcome called the arithmetization of analysis. (See Sidebar: Infinitesimals.)
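A Dedekind cut can be modeled directly in code: a real number is identified with a membership test that splits the rationals. The sketch below (an illustration using exact rational arithmetic; the finite sample set is an arbitrary choice of mine) realizes the cut defining the square root of 2 and checks that every member of L sits below every member of U:

```python
from fractions import Fraction

def in_lower_set(q):
    """Lower set L of the cut defining the square root of 2:
    a rational q belongs to L when q < 0 or q*q < 2."""
    return q < 0 or q * q < 2

# Partition a finite sample of rationals according to the cut.
samples = {Fraction(n, d) for n in range(-30, 31) for d in range(1, 11)}
L = [q for q in samples if in_lower_set(q)]
U = [q for q in samples if not in_lower_set(q)]
```

No rational q satisfies q*q == 2, so the test never sits on the boundary: the cut itself plays the role of the missing irrational point.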

      The full program of arithmetization, based on a different but equivalent definition of real number, is mainly due to Weierstrass in the 1870s. He relied on rigorous definitions of real numbers and limits to justify the computations previously made with infinitesimals. Bolzano's 1817 definition of continuity of a function f at a point x, mentioned above, came close to saying what it meant for the limit of f(x + h) to be f(x). The final touch of precision was added with Cauchy's “epsilon-delta” definition of 1821: for each ε > 0 there is a δ > 0 such that |f(x + h) − f(x)| < ε for all |h| < δ.

Analysis in higher dimensions
 While geometry was being purged from the foundations of analysis, its spirit was taking over the superstructure. The study of complex functions, or functions with two or more variables, became allied with the rich geometry of higher-dimensional spaces. Sometimes the geometry guided the development of concepts in analysis, and sometimes it was the reverse. A beautiful example of this interaction was the concept of a Riemann surface. The complex numbers can be viewed as a plane (as pointed out in the section Fluid flow), so a function of a complex variable can be viewed as a function on the plane. Riemann's insight was that other surfaces can also be provided with complex coordinates, and certain classes of functions belong to certain surfaces. For example, by mapping the plane stereographically onto the sphere, each point of the sphere except the north pole is given a complex coordinate, and it is natural to map the north pole to infinity, ∞. When this is done, all rational functions make sense on the sphere; for example, 1/z is defined for all points of the sphere by making the natural assumptions that 1/0 = ∞ and 1/∞ = 0. This leads to a remarkable geometric characterization of the class of rational complex functions—they are the differentiable functions on the sphere. One similarly finds that the elliptic functions (complex functions that are periodic in two directions) are the differentiable functions on the torus.
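The stereographic correspondence can be written down explicitly. The sketch below uses the standard formula for projection from the north pole (0, 0, 1) of the unit sphere (a conventional choice of normalization, not taken verbatim from the text): every complex number lands on the sphere, 0 maps to the south pole, and points of large modulus approach the north pole, matching the identification of the pole with ∞.

```python
def to_sphere(z):
    """Send complex z = x + iy to the point of the unit sphere obtained
    by stereographic projection from the north pole (0, 0, 1)."""
    x, y = z.real, z.imag
    d = 1 + x * x + y * y
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

south = to_sphere(0j)            # 0 maps to the south pole (0, 0, -1)
near_pole = to_sphere(1e9 + 0j)  # huge |z| lands next to the north pole
```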

      Functions of three, four, … variables are naturally studied with reference to spaces of three, four, … dimensions, but these are not necessarily the ordinary Euclidean spaces. The idea of differentiable functions on the sphere or torus was generalized to differentiable functions on manifolds (topological spaces of arbitrary dimension). Riemann surfaces, for example, are two-dimensional manifolds.

      Manifolds can be complicated, but it turned out that their geometry, and the nature of the functions on them, is largely controlled by their topology, the rather coarse properties invariant under one-to-one continuous mappings. In particular, Riemann observed that the topology of a Riemann surface is determined by its genus, the number of closed curves that can be drawn on the surface without splitting it into separate pieces. For example, the genus of a sphere is zero and the genus of a torus is one. Thus, a single integer controls whether the functions on the surface are rational, elliptic, or something else.

      The topology of higher-dimensional manifolds is subtle, and it became a major field of 20th-century mathematics. The first inroads were made in 1895 by the French mathematician Henri Poincaré, who was drawn into topology from complex function theory and differential equations. The concepts of topology, by virtue of their coarse and qualitative nature, are capable of detecting order where the concepts of geometry and analysis can see only chaos. Poincaré found this to be the case in studying the three-body problem, and it continues with the intense study of chaotic dynamical systems.

      The moral of these developments is perhaps the following: It may be possible and desirable to eliminate geometry from the foundations of analysis, but geometry still remains present as a higher-level concept. Continuity can be arithmetized, but the theory of continuity involves topology, which is part of geometry. Thus, the ancient complementarity between arithmetic and geometry remains the essence of analysis.

John Colin Stillwell

Additional Reading

Nontechnical works
James R. Newman (ed.), The World of Mathematics, 4 vol. (1956, reprinted 1988), a gigantic and eclectic collection of writings about mathematics and mathematicians, contains many items related to analysis. Leo Zippin, Uses of Infinity (1962, reissued 2000), covers topics such as limits and sums of infinite series. Morris Kline, Mathematical Thought from Ancient to Modern Times (1972, reprinted in 3 vol., 1990), an enormous and comprehensive history of mathematics up to the early 20th century, contains masses of material on the development of analysis and the thinking behind it. Philip J. Davis and Reuben Hersh, The Mathematical Experience (1981, reprinted 1998), tells what mathematicians do and why. Ian Stewart, From Here to Infinity (1996), follows the historical development of many areas of mathematics, including several chapters on analysis, both standard and nonstandard, and his Does God Play Dice?, 2nd ed. (1997), explains the basic underlying ideas of chaos theory. John Stillwell, Mathematics and Its History, 2nd ed. (2002), emphasizes historical developments in order to unify and motivate mathematical theory at an undergraduate level. Frederick J. Almgren, Jr., and Jean E. Taylor, “The Geometry of Soap Films and Soap Bubbles,” Scientific American, 235(1):82–93 (July 1976), is a highly illustrated introduction to the Plateau problem for the nonspecialist.

Technical works
Calculus and real analysis
E. Hairer and G. Wanner, Analysis by Its History (1996), a well-illustrated and readable account of the history of calculus from Descartes to the beginning of the 20th century, is particularly informative on the classical period of Newton, Leibniz, the Bernoullis, and Euler. Jerrold Marsden and Alan Weinstein, Calculus, 2nd ed., 3 vol. (1985, vol. 2 and 3 reprinted with corrections, 1996, 1991), a clear and well-organized calculus text, is typical of a vast literature but better than most. Tom M. Apostol, Calculus, 2nd ed., 2 vol. (1967–69), is an introduction to rigorous analysis that is directed toward the topics usually featured in calculus courses. Walter Rudin, Principles of Mathematical Analysis, 3rd ed. (1976, reissued 1987), is a typical advanced undergraduate text on analysis. Bernard R. Gelbaum and John M.H. Olmsted, Counterexamples in Analysis (1964), contains a collection of problems that demonstrate just how counterintuitive rigorous analysis can be.

E.T. Whittaker and G.N. Watson, A Course of Modern Analysis, 4th ed. (1927, reprinted 1996), is a classic text on complex analysis that turns into a remarkably detailed survey of the most interesting special functions of mathematical physics; worth reading as a period piece, it is still relevant today. John B. Conway, Functions of One Complex Variable, 2nd ed. (1978, reprinted with corrections, 1995), is a beautifully organized introduction to the analysis of complex functions at an undergraduate level. Ian Stewart and David Tall, Complex Analysis (1983, reprinted with corrections, 1985), is an undergraduate textbook that includes historical material and an unusual amount of motivating discussion to bring out the geometric ideas behind the rigorous formalism. Lars V. Ahlfors, Complex Analysis, 3rd ed. (1979), is an advanced undergraduate text by one of the subject's leading authorities.

H.S. Bear, A Primer of Lebesgue Integration, 2nd ed. (2002), is an introduction to Henri Lebesgue's theory of measure and integration at an undergraduate level.

Ordinary differential equations and dynamical systems
Martin Braun, Differential Equations and Their Applications, 4th ed. (1993), is a typical undergraduate text on differential equations that is unusually clear and readable. Morris W. Hirsch and Stephen Smale, Differential Equations, Dynamical Systems, and Linear Algebra (1974), was the first textbook to bring the qualitative theory of differential equations into the modern era for classroom use. Martin Golubitsky and Michael Dellnitz, Linear Algebra and Differential Equations Using MATLAB (1999), includes computer software, MATLAB on a CD-ROM, for carrying out symbolic calculations to develop differential equations for beginning undergraduates. John H. Hubbard and Beverly H. West, Differential Equations: A Dynamical Systems Approach, 2 vol. (1991–95; vol. 1 reprinted with corrections, 1997), uses in vol. 1 the methods of the qualitative theory of differential equations to develop traditional and modern topics within the field and is computer-oriented and highly pictorial in its approach; vol. 2 presents the qualitative theory of differential equations when many variables are present. Robert L. Devaney, An Introduction to Chaotic Dynamical Systems, 2nd ed. (1989, reissued 1998), introduces rigorous mathematics of chaos theory in the setting of discrete-time dynamics in order to minimize technicalities.

Partial differential equations and Fourier analysis
Michael Renardy and Robert C. Rogers, An Introduction to Partial Differential Equations (1993, reprinted with corrections, 1996), on the theory and applications of partial differential equations, is a good starting point for serious mathematicians. T.W. Körner, Fourier Analysis (1988, reissued with corrections, 1989), is a clear and simple introduction to Fourier analysis, leading into more advanced topics.

Other areas of analysis
John B. Conway, A Course in Functional Analysis, 2nd ed. (1990, reprinted with corrections, 1997), an excellent textbook; and Lawrence W. Baggett, Functional Analysis: A Primer (1992), a thorough introduction, are suitable for advanced undergraduates. Stefan Hildebrandt and Anthony Tromba, The Parsimonious Universe: Shape and Form in the Natural World (1996), is a popular account of the classical problems in the calculus of variations—the isoperimetric problem, shortest paths, brachistochrone, least action, and soap films—with magnificent illustrations. U. Brechtken-Manderscheid, Introduction to the Calculus of Variations (1991; originally published in German, 1983), is an undergraduate text on the calculus of variations and its uses in science. Frank Morgan, Geometric Measure Theory: A Beginner's Guide, 3rd ed. (2000), presents the Plateau problem from the modern geometric viewpoint, an excellent introduction to global analysis as applied to a classic variational problem. Errett Bishop and Douglas Bridges, Constructive Analysis (1985), offers a fairly accessible introduction to the ideas and methods of constructive analysis. Abraham Robinson, Non-Standard Analysis, rev. ed. (1974, reissued 1996), is a readable account by the mathematician who made the field of nonstandard analysis respectable.
Ian Stewart
John Colin Stillwell

▪ physics and chemistry

      in physics and chemistry, determination of the physical properties or chemical composition of samples of matter or, particularly in modern physics, of the energy and other properties of subatomic particles produced in nuclear interactions. A large body of systematic procedures intended for these purposes has been continuously evolving in close association with the development of other branches of the physical sciences since their beginnings.

      Chemical analysis, which relies on the use of measurements, is divided into two categories depending on the manner in which the assays are performed. Classical analysis, also termed wet chemical analysis, consists of those analytical techniques that use no mechanical or electronic instruments other than a balance. The method usually relies on chemical reactions between the material being analyzed (the analyte) and a reagent that is added to the analyte. Wet techniques often depend on the formation of a product of the chemical reaction that is easily detected and measured. For example, the product could be coloured or could be a solid that precipitates from a solution.

      Most chemical analysis falls into the second category, which is instrumental analysis. It involves the use of an instrument, other than a balance, to perform the analysis. A wide assortment of instrumentation is available to the analyst. In some cases, the instrument is used to characterize a chemical reaction between the analyte and an added reagent; in others, it is used to measure a property of the analyte. Instrumental analysis is subdivided into categories on the basis of the type of instrumentation employed.

      Both classical and instrumental quantitative analyses can be divided into gravimetric and volumetric analyses. Gravimetric analysis relies on a critical mass measurement. As an example, solutions containing chloride ions can be assayed by adding an excess of silver nitrate. The reaction product, a silver chloride precipitate, is filtered from the solution, dried, and weighed. Because the product was formed by an exhaustive chemical reaction with the analyte (i.e., virtually all of the analyte was precipitated), the mass of the precipitate can be used to calculate the amount of analyte initially present.
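The chloride example reduces to simple mole arithmetic: each mole of AgCl precipitate carries one mole of chloride, so the analyte mass follows from the precipitate mass and the molar masses. The sketch below uses rounded standard atomic masses, and the weighing is an invented illustration:

```python
# Molar masses in grams per mole (standard values, rounded).
M_AG = 107.87
M_CL = 35.45
M_AGCL = M_AG + M_CL

def chloride_mass(agcl_precipitate_g):
    """Mass of chloride in the original sample, inferred from the mass
    of the dried AgCl precipitate (1:1 mole ratio of Cl- to AgCl)."""
    moles = agcl_precipitate_g / M_AGCL
    return moles * M_CL

cl_mass = chloride_mass(0.2866)  # a hypothetical weighing, in grams
```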

      Volumetric analysis relies on a critical volume measurement. Usually a liquid solution of a chemical reagent (a titrant) of known concentration is placed in a buret, which is a glass tube with calibrated volume graduations. The titrant is added gradually, in a procedure termed a titration, to the analyte until the chemical reaction is completed. The added titrant volume that is just sufficient to react with all of the analyte is the equivalence point and can be used to calculate the amount or concentration of the analyte that was originally present.
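The equivalence-point calculation is a mole balance. In the sketch below (a generic 1:1 acid-base example with made-up numbers), the moles of titrant delivered at the equivalence point equal the moles of analyte in the measured sample volume:

```python
def analyte_molarity(titrant_molarity, titrant_volume_ml,
                     analyte_volume_ml, mole_ratio=1.0):
    """Analyte concentration (mol/L) from the titrant volume at the
    equivalence point; mole_ratio is moles of analyte per mole of titrant."""
    moles_titrant = titrant_molarity * titrant_volume_ml / 1000.0
    moles_analyte = mole_ratio * moles_titrant
    return moles_analyte * 1000.0 / analyte_volume_ml

# 25.00 mL of 0.1000 M NaOH neutralizes a 20.00 mL HCl sample:
hcl_molarity = analyte_molarity(0.1000, 25.00, 20.00)
```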

      Since the advent of chemistry, investigators have needed to know the identity and quantity of the materials with which they are working. Consequently, the development of chemical analysis parallels the development of chemistry. The 18th-century Swedish scientist Torbern Bergman is usually regarded as the founder of inorganic qualitative and quantitative chemical analysis. Prior to the 20th century nearly all assays were performed by classical methods. Although simple instruments (such as photometers and electrogravimetric analysis apparatus) were available at the end of the 19th century, instrumental analysis did not flourish until well into the 20th century. The development of electronics during World War II and the subsequent widespread availability of digital computers have hastened the change from classical to instrumental analysis in most laboratories. Although most assays currently are performed instrumentally, there remains a need for some classical analyses.

Principal stages
      The main steps that are performed during a chemical analysis are the following: (1) sampling, (2) field sample pretreatment, (3) laboratory treatment, (4) laboratory assay, (5) calculations, and (6) results presentation. Each must be executed correctly in order for the analytical result to be accurate. Some analytical chemists distinguish between an analysis, which involves all the steps, and an assay, which is the laboratory portion of the analysis.

Sampling
      During this initial step of analysis, a portion of a bulk material is removed in order to be assayed. The portion should be chosen so that it is representative of the bulk material. To assist in this, statistics is used as a guide to determine the sample size and the number of samples. When selecting a sampling program, it is important that the analyst have a detailed description of the information required from the analysis, an estimate of the accuracy to be achieved, and an estimate of the amount of time and money that can be spent on sampling. It is worthwhile to discuss with the users of the analytical results the type of data that is desired. Results may provide needless or insufficient information if the sampling procedure is either excessive or inadequate.

      Generally the accuracy of an analysis is increased by obtaining multiple samples at varying locations (and times) within the bulk material. As an example, analysis of a lake for a chemical pollutant will likely yield inaccurate results if the lake is sampled only in the centre and at the surface. It is preferable to sample the lake at several locations around its periphery as well as at several depths near its centre. The homogeneity of the bulk material influences the number of samples needed. If the material is homogeneous, only a single sample is required. More samples are needed to obtain an accurate analytical result when the bulk material is heterogeneous. The disadvantages of taking a larger number of samples are the added time and expense. Few laboratories can afford massive sampling programs.

Sample preparation
      After the sample has been collected, it may be necessary to chemically or physically treat it at the sampling site. Normally this treatment is done immediately after the sample has been collected. The nature of the treatment is dependent on the sample and the substances for which it is being analyzed. For example, natural water samples that are assayed for dissolved oxygen generally are placed in containers that are sealed, stored, and transported in a refrigerated compartment. Sealing prevents a change in oxygen concentration owing to exposure to the atmosphere, and refrigeration slows changes in oxygen levels caused by microscopic organisms within the sample. Similarly, samples that are to be assayed for trace levels of metallic pollutants are pretreated in order to prevent a decrease in the concentration of the pollutant that is caused by adsorption on the walls of the sample vessel. Metallic adsorption can be minimized by adding nitric acid to the sample and by washing the walls of the vessel with the acid.

      After the samples arrive at the laboratory, additional operations might be required prior to performing the assay. In some cases, multiple samples simply are combined into a composite sample which is made homogeneous and then assayed. This process eliminates the need to assay each of the individual specimens. In other instances, the sample must be chemically or physically treated in order to place it in a form that can be assayed. For example, ore samples normally must be first dissolved in acidic solutions. Sometimes it is necessary to change the concentration of the analyte prior to performing the assay so that it will fall within the range of the analytical method. Once the specimen is prepared, enough laboratory assays are completed to allow the analyst to estimate the amount of random error. Typically a minimum of three assays are performed on each sample.

Evaluation of results
      After the assays have been completed, quantitative results are mathematically manipulated, and both qualitative and quantitative results are presented in a meaningful manner. In most cases, two values are reported for quantitative analyses. The first value is an estimate of the correct value for the analysis, and the second value indicates the amount of random error in the analysis. The most common way of reporting the best value is to give the mean (average) of the results of the laboratory assays. In specific cases, however, it is better to report either the median (central value when the results are arranged in order of size) or the mode (the value obtained most often).

      Accuracy is the degree of agreement between the experimental result and the true value. Precision is the degree of agreement among a series of measurements of the same quantity; it is a measure of the reproducibility of results rather than their correctness. Errors may be either systematic (determinate) or random (indeterminate). Systematic errors cause the results to vary from the correct value in a predictable manner and can often be identified and corrected. An example of a systematic error is improper calibration of an instrument. Random errors are the small fluctuations introduced in nearly all analyses. These errors can be minimized but not eliminated. They can be treated, however, using statistical methods. Statistics is used to estimate the random error that occurs during each step of an analysis, and, upon completion of the analysis, the estimates for the individual steps can be combined to obtain an estimate of the total experimental error.

      The most frequently reported error estimate is the standard deviation of the results; however, other values, such as the variance, the range, the average deviation, or confidence limits at a specified probability level, are sometimes reported. For the relatively small number of replicate samples that are used during chemical assays, the standard deviation (s) is calculated from

s = √[Σ(xi − a)²/(N − 1)],

where Σ represents summation, xi represents each of the individual analytical results, a is the average of the results, and N is the number of replicate assays.
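In code, the calculation is a direct transcription of the formula (the three replicate results below are invented for illustration):

```python
import math

def sample_std_dev(results):
    """Standard deviation s of replicate assay results, with the
    N - 1 divisor used for small numbers of replicates."""
    n = len(results)
    a = sum(results) / n  # the mean of the results
    return math.sqrt(sum((x - a) ** 2 for x in results) / (n - 1))

s = sample_std_dev([10.10, 10.15, 10.05])  # three replicate assays
```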

      The standard deviation is a popular estimate of the error in an analysis because it has statistical significance whenever the results are normally distributed. Most analytical results exhibit normal (Gaussian) behaviour, following the characteristic bell-shaped curve. If the results are normally distributed, 68.3 percent of the results can be expected to fall within the range of plus or minus one standard deviation of the mean as a result of random error. The units of standard deviation are identical to those of the individual analytical results.

      The variance (V) is the square of the standard deviation and is useful because, in many cases, it is additive throughout the several steps of the chemical analysis. Consequently, an estimate of the total random error in the analysis can be obtained by adding the variances for each of the individual steps in the analysis. The standard deviation for the overall analysis can then be calculated by taking the square root of the sum of the variances.
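A short sketch of this combination rule (the per-step uncertainties are illustrative numbers, not from the text):

```python
import math

def overall_std_dev(step_std_devs):
    """Standard deviation of the complete analysis from the standard
    deviations of its independent steps: variances add, so the total
    is the square root of the sum of the squares."""
    return math.sqrt(sum(s * s for s in step_std_devs))

# e.g. uncertainties from the sampling, preparation, and assay steps:
total = overall_std_dev([0.03, 0.04, 0.12])
```

Note how the largest step dominates: reducing the smaller uncertainties barely changes the total, which is why effort is best spent on the noisiest step.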

      A simple measure of variability is the range, given as the difference between the largest and the smallest results. It has no statistical significance, however, for small data sets. Another statistical term, the average deviation, is calculated by adding the differences, while ignoring the sign, between each result and the average of all the results, and then dividing the sum by the number of results. Confidence limits at a given probability level are values greater than and less than the average, between which the results are statistically expected to fall a given percentage of the time.

Preliminary laboratory methods
      A summary, though not comprehensive, of the common laboratory measurements that can be performed to supplement information obtained by another analytical procedure is provided in this section. Many of the methods can be used in the field or in process control apparatus as well as in the laboratory.

      Some physical measurements that do not require instrumentation other than an accurate balance can be useful in selected circumstances. Density, specific gravity, viscosity, and pH measurements are among the more useful measurements in this category.

Density measurements
      This property is defined as the ratio of mass to volume of a substance. Generally the mass is measured in grams and the volume in millilitres or cubic centimetres. Density measurements of liquids are straightforward and sometimes can aid in identifying pure substances or mixtures that contain two or three known components; they are most useful in assays of simple mixtures whose components differ significantly in their individual densities. Densities can be used, for example, as an aid in the quantitative analysis of aqueous sugar solutions. Liquid densities usually are measured by using a calibrated glass vessel called a pycnometer, which typically has a volume of about 10 millilitres. The vessel is weighed by using an analytical balance with an accuracy of at least 0.0001 gram and is subsequently filled to the calibration mark with the liquid. After the filled vessel has been weighed, the mass of the liquid is determined by subtracting the mass of the empty vessel. The density is calculated by dividing the mass of the liquid by the volume of the pycnometer.
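The pycnometer computation reduces to a subtraction and a division (the weighings below are invented for illustration):

```python
def density_from_pycnometer(empty_mass_g, filled_mass_g, volume_ml):
    """Density (g/mL) of a liquid from pycnometer weighings: the liquid
    mass is the filled mass minus the empty mass, divided by the
    calibrated volume of the vessel."""
    return (filled_mass_g - empty_mass_g) / volume_ml

# A 10.000 mL pycnometer weighing 15.2132 g empty and 25.1983 g filled:
rho = density_from_pycnometer(15.2132, 25.1983, 10.000)
```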

Specific gravity measurements
      Specific gravity is a related quantity that is defined as the ratio of the density of the analyte to the density of water at a specified temperature. The procedure used to measure specific gravity is similar to that used to measure density, although it does not require accurate knowledge of the volume of the vessel that contains the liquid. After the weight of the vessel when empty has been obtained, the vessel is filled to the calibration mark with distilled water at a specified temperature (often 4°, 20°, or 25° C [39°, 68°, or 77° F, respectively]) and weighed. From the difference between the weights, the mass of the water is determined. The vessel is emptied and then filled with the analyte and reweighed. The mass of the analyte is determined as during density measurements (i.e., by subtracting the mass of the empty vessel), and the ratio of the analyte mass to the water mass is calculated. The resultant ratio is the specific gravity of the analyte. It is not necessary to know accurately the volume of the container, because it and the volume of the analyte cancel one another while the ratio of the densities is obtained. Density and specific gravity measurements rarely provide sufficient information to qualitatively identify a pure analyte. They can be used as supporting evidence, however, when an assay is performed by another procedure.

viscosity measurements
      Measurements of this kind also provide limited analytical information. Viscosity is a measure of the resistance of a substance to change of shape. Often it is defined as the resistance to flow of a fluid. It is measured in units of poises (dyne-seconds per square centimetre) or a subdivision of poises. For liquids viscosity is measured in a calibrated glass vessel known as a viscometer, of which there are various types. After inversion, the upper glass bulb is filled to the lower calibration mark by applying suction with a rubber bulb and drawing the liquid analyte into the apparatus. The device is stoppered at the end near the lower bulb, inverted to its upright position, and placed in a constant-temperature bath. After temperature equilibrium has been established, the stopper is removed. The time required for the volume of liquid between the two marks to drain from the bulb is measured. The time elapsed is used in conjunction with a table supplied by the manufacturer of the bulb to determine the viscosity. The tube at the lower end of the upper bulb has a fixed length and radius that is used along with the pressure differential between the upper and lower ends of the apparatus to measure the viscosity. Viscosity measurements are common in industries that produce oils or other relatively slow-flowing liquids. They often are employed in oil refineries to determine the viscosities of refined oils.
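
The manufacturer's table for a capillary viscometer usually reduces to a calibration constant multiplied by the efflux time; note that this directly yields kinematic viscosity, which can then be converted to the dynamic viscosity (in poises) mentioned above. The constant and time here are hypothetical:

```python
# Kinematic viscosity from the efflux time of a capillary viscometer.
# The manufacturer's table typically reduces to viscosity = C * t;
# C and t below are hypothetical values.
calibration_constant = 0.01   # hypothetical, centistokes per second
efflux_time_s = 245.3         # time for the liquid to drain between the marks

kinematic_viscosity_cst = calibration_constant * efflux_time_s
print(round(kinematic_viscosity_cst, 3))
```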

pH determinations
      The pH of a solution is the negative logarithm (base 10) of the activity (the product of the molar concentration and the activity coefficient) of the hydrogen ions (H+) in the solution. In solutions of low ionic strength, pH can be defined as the negative logarithm of the molar concentration of the hydrogen ions, because activity and concentration are nearly identical in these solutions. One method for determining pH is the use of a chemical acid-base indicator, which consists of a dye that is either a weak acid or a weak base. The dye has one colour in its acidic form and a second colour in its basic form. Because different dyes change from the acidic to the basic form at different pH values, it is possible to use a series of dyes to determine the pH of a solution. A small portion of the dye or dye mixture is added to the analyte, or a portion of the analyte is added to the dye mixture (often on a piece of paper that is permeated with the indicator). By comparing the colour of the indicator or indicator mixture that is in contact with the sample to the colours of the dyes in their acidic and basic forms, it is possible to determine the pH of the solution. Although this method is rapid and inexpensive, it rarely is used to determine pH with an accuracy better than about 0.5 pH unit. More accurate measurements are performed instrumentally as described below (see Instrumental methods: Electroanalysis: Potentiometry).
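
The definition translates directly into a one-line calculation; the activity value is illustrative:

```python
import math

# pH as the negative base-10 logarithm of the hydrogen-ion activity.
# In solutions of low ionic strength, activity ~ molar concentration.
hydrogen_ion_activity = 2.5e-5   # illustrative value, mol/L

pH = -math.log10(hydrogen_ion_activity)
print(round(pH, 2))
```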

Interference removal
      Regardless of whether a classical or instrumental method is used, it may be necessary to remove interferences from an analyte prior to an assay. An interference is a substance, other than the assayed material, that can be measured by the chosen analytical method or that can prevent the assayed material from being measured. Interferences cause erroneous analytical results. Several methods have been devised to enable their removal. The most popular of such separatory methods include distillation, selective precipitation, filtration, complexation, osmosis, reverse osmosis, extraction, electrogravimetry, and chromatography. Some of these methods can be used not only to remove interferences but also to perform the assay.

Distillation
      During distillation a mixture of either liquid or liquid and solid components is placed in a glass vessel, called a pot (or boiling flask), and heated. The more volatile components—i.e., those with the lower boiling points—are converted to a gaseous state and exit the pot through a cooling tube, called a condenser, that is located above the pot. The condensed liquids, termed the distillate, are collected in a receiving flask and thereby separated from the less volatile components. Separation is based on the relative boiling points of the components. Normally the efficiency of the separation is increased by inserting a column between the pot and the condenser. A distillation column is a tube that provides surfaces on which condensations and vaporizations can occur before the gas enters the condenser, thereby concentrating the more volatile components in the first fractions and the less volatile components in the later fractions. The analyte typically goes through several vaporization-condensation steps prior to arriving at the condenser.

Selective precipitation
      In some cases, selective precipitation can be used to remove interferences from a mixture. A chemical reagent is added to the solution, and it selectively reacts with the interference to form a precipitate. The precipitate can then be physically separated from the mixture by filtration or centrifugation. The use of precipitation in gravimetric analysis is described below (see Classical methods: Classical quantitative analysis).

Filtration
      Filtration can be used to separate particles according to their dimensions. One application is the removal of the precipitate after selective precipitation. Such solid-liquid laboratory filtrations are performed through various grades of filter paper (i.e., those differing in pore size). The mixture is poured either onto a filter paper that rests in a funnel or onto another filtering device. The liquid passes through the filter while the precipitate is trapped. When the filter has a small pore size, the normal filtration rate is slow but can be increased by filtering into a flask that is maintained under a partial vacuum. In that instance, fritted glass or glass-fibre filters often are used in place of paper filters. Solid-gas filtrations are carried out in the laboratory as well.

Complexation
      Complexation is another method used to prevent a substance from interfering with an assay. A chemical complexing agent is added to the analyte mixture for the purpose of selectively forming a complex with the interference. A complex is a combination of the two substances and normally remains dissolved. Because the chemical nature of the complex is different from that of the original interference, the complex does not interfere with the assay.

Osmosis
      Osmosis is a separation technique in which a semipermeable membrane is placed between two solutions containing the same solvent. The membrane allows passage of small solution components (usually the solvent) while preventing passage of larger molecules. The natural tendency is for the solvent to flow from the side where its concentration is higher to the side where its concentration is lower. Reverse osmosis occurs when pressure is applied to the solution on the side of the membrane that contains the lower solvent concentration. The pressure forces the solvent to flow from a region of low concentration to one of high concentration. Reverse osmosis often is used for water purification. Osmosis or reverse osmosis can be utilized in certain instances to perform separations prior to a chemical assay.

Extraction
      Extraction takes advantage of the relative solubilities of solutes in immiscible solvents. If the solutes are in an aqueous solution, an organic solvent that is immiscible with water is added. The solutes will dissolve either in the water or in the organic solvent. If the relative solubilities of the solutes differ in the two solvents, a partial separation occurs. The upper, less dense solvent layer is physically separated from the lower layer. The separation is enhanced if the process is repeated on each of the separated layers. It is possible to perform the extractions in a continuous procedure, called countercurrent extraction, as well as in the batch process described here.
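
The improvement from repeated extraction follows from the standard distribution-coefficient relation; the coefficient and volumes below are hypothetical:

```python
# Fraction of a solute left in the aqueous layer after repeated batch
# extractions. K is the distribution (partition) coefficient; all values
# are hypothetical.
K = 4.0               # solubility ratio, organic : aqueous
v_aqueous_ml = 50.0
v_organic_ml = 25.0   # fresh organic solvent used in each extraction

def fraction_remaining(n_extractions):
    # Each step leaves the same fraction behind, so the fractions multiply.
    per_step = v_aqueous_ml / (v_aqueous_ml + K * v_organic_ml)
    return per_step ** n_extractions

print(round(fraction_remaining(1), 4))  # one extraction
print(round(fraction_remaining(3), 4))  # three successive extractions
```

Three extractions with fresh solvent leave far less solute behind than a single extraction with the same total solvent volume, which is why repetition enhances the separation.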

Electrogravimetry
      Electrogravimetry employs an electric current to deposit a solid on an electrode from a solution. Normally the deposit is a metallic plate that has formed from the corresponding metallic ions in the solution; however, other electrode coatings also can be formed. The use of electrogravimetry as an instrumental analytical method is described below (see Instrumental methods: Electroanalysis: Electrogravimetry).

Chromatography
      Chromatography consists of a large group of separatory methods in which the components of a mixture are separated by the relative attraction of the components for a stationary phase (a solid or liquid) as a mobile phase (a liquid or gas) passes over the stationary phase. Chromatography usually is divided into two categories depending on the type of mobile phase that is used. If the mobile phase is a liquid, the technique is liquid chromatography; if it is a gas, the technique is gas chromatography.

      In a simple liquid chromatographic apparatus the stationary phase is held in place either in a column or on a plane (such as a plate of glass, metal, or plastic or a sheet of paper). In the case of a column, the lower end is loosely plugged, often with glass wool or a sintered glass disk. Prior to the separation, the column is filled with the mobile phase to a level that is slightly above the level of the stationary phase. The mixture to be separated is added to the top of the column and is allowed to drain onto the stationary phase.

      In the most common form of chromatography, known as elution chromatography, the mobile phase is continuously added to the top of the column as solution flows from the bottom. The stationary phase must be continuously immersed in the mobile phase to prevent air bubbles from entering the column and impeding the mobile-phase flow. As the components of the mixture are flushed through the column, they are partitioned between the two phases depending on their attractions to the stationary phase. Because different mixture components have different attractions for the stationary phase, a separation occurs. The components that are more attracted to the stationary phase remain in the column longer, while those components that are less attracted are flushed more rapidly from the column. The separated components are collected as they exit the column.

      A similar process occurs during separations that are performed on a plane. In such a case, however, the separations occur in space after a fixed time period rather than in time at a fixed location as was described for column chromatography. The separated components appear as spots on the plane.

Classical methods
      The majority of the classical analytical methods rely on chemical reactions to perform an analysis. In contrast, instrumental methods typically depend on the measurement of a physical property of the analyte.

Classical qualitative analysis
      Classical qualitative analysis is performed by adding one or a series of chemical reagents to the analyte. By observing the chemical reactions and their products, one can deduce the identity of the analyte. The added reagents are chosen so that they selectively react with one or a single class of chemical compounds to form a distinctive reaction product. Normally the reaction product is a precipitate or a gas, or it is coloured. Take, for example, copper(II), which reacts with ammonia to form a copper-ammonia complex that is characteristically deep blue. Similarly, dissolved lead(II) reacts with solutions containing chromate to form a yellow lead chromate precipitate. Negative ions (anions) as well as positive ions (cations) can be qualitatively analyzed using the same approach. The reaction between carbonates and strong acids to form bubbles of carbon dioxide gas is a typical example.

      Prior to the qualitative analysis of any given compound, the analyte generally has been identified as either organic or inorganic. Consequently, qualitative analysis is divided into organic and inorganic categories. Organic compounds consist of carbon compounds, whereas inorganic compounds primarily contain elements other than carbon. Sugar (C12H22O11) is an example of an organic compound, while table salt (NaCl) is inorganic.

      Classical organic qualitative analysis usually involves chemical reactions between added chemical reagents and functional groups of the organic molecules. As a consequence, the result of the assay provides information about a portion of the organic molecule but usually does not yield sufficient information to identify it completely. Other measurements, including those of boiling points, melting points, and densities, are used in conjunction with a functional group analysis to identify the entire molecule. An example of a chemical reaction that can be used to identify organic functional groups is the reaction between bromine in a carbon tetrachloride solution and organic compounds containing carbon-carbon double bonds. The disappearance of the characteristic red-brown colour of bromine, due to the addition of bromine across the double bonds, is a positive test for the presence of a carbon-carbon double bond. Similarly, the reaction between silver nitrate and certain organic halides (those compounds containing chlorine, bromine, or iodine) results in the formation of a silver halide precipitate as a positive test for organic halides.

      Classical qualitative analyses can be complex owing to the large number of possible chemical species in the mixture. Fortunately, analytical schemes have been carefully worked out for all the common inorganic ions and organic functional groups. Detailed information about inorganic and organic qualitative analysis can be found in some of the texts listed in the Bibliography at the end of this article.

Classical quantitative analysis
      Classical quantitative analysis can be divided into gravimetric analysis and volumetric analysis. Both methods utilize exhaustive chemical reactions between the analyte and added reagents. As discussed above, during gravimetric analysis an excess of added reagent reacts with the analyte to form a precipitate. The precipitate is filtered, dried, and weighed. Its mass is used to calculate the concentration or amount of the assayed substance in the analyte.

      Volumetric analysis is also known as titrimetric analysis. The reagent (the titrant) is added gradually or stepwise to the analyte from a buret. The key to performing a successful titrimetric analysis is to recognize the equivalence point of the titration (the point at which the quantities of the two reacting species are equivalent), typically observed as a colour change. If no spontaneous colour change occurs during the titration, a small amount of a chemical indicator is added to the analyte prior to the titration. Chemical indicators are available that change colour at or near the equivalence point of acid-base, oxidation-reduction, complexation, and precipitation titrations. The volume of added titrant corresponding to the indicator colour change is the end point of the titration. The end point is used as an approximation of the equivalence point and is employed, with the known concentration of the titrant, to calculate the amount or concentration of the analyte.
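
The end-point calculation can be sketched as follows, assuming a simple 1:1 reaction between titrant and analyte; all numeric values are illustrative:

```python
# Analyte concentration from a titration end point, assuming a 1:1 reaction.
# All numeric values are illustrative.
titrant_concentration_M = 0.1000   # mol/L of titrant in the buret
end_point_volume_ml = 24.35        # buret reading at the indicator colour change
sample_volume_ml = 25.00           # initial volume of the analyte solution

moles_titrant = titrant_concentration_M * end_point_volume_ml / 1000.0
# At the equivalence point, moles of analyte = moles of titrant (1:1 reaction).
analyte_concentration_M = moles_titrant / (sample_volume_ml / 1000.0)
print(round(analyte_concentration_M, 4))
```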

Instrumental methods
      The instrumental methods of chemical analysis are divided into categories according to the property of the analyte that is to be measured. Many of the methods can be used for both qualitative and quantitative analysis. The major categories of instrumental methods are the spectral, electroanalytical, and separatory methods.

Spectral methods
      Spectral methods measure the electromagnetic radiation that is absorbed, scattered, or emitted by the analyte. Because the types of radiation that can be monitored are multitudinous and the manner in which the radiation is measured can significantly vary from one method to another, the spectral methods constitute the largest category of instrumental methods. Since a detailed description of the spectral methods of analysis is included in a later section, only an introduction is provided here.

      In the most often used spectral method, the electromagnetic radiation that is provided by the instrument is absorbed by the analyte, and the amount of the absorption is measured. Absorption occurs when a quantum of electromagnetic radiation, known as a photon, strikes a molecule and raises it to some excited (high-energy) state. The intensity (i.e., the energy, in the form of electromagnetic radiation, transferred across a unit area per unit time) of the incident radiation decreases as it passes through the sample. The techniques that measure absorption in order to perform an assay are known as absorptiometry or absorption spectrophotometry.

      Normally absorptiometry is subdivided into categories depending on the energy or wavelength region of the incident radiation. In order of increasingly energetic radiation, the types of absorptiometry are radiowave absorptiometry (called nuclear magnetic resonance spectrometry), microwave absorptiometry (including electron spin resonance spectrometry), thermal absorptiometry (thermal analysis), infrared absorptiometry, ultraviolet-visible absorptiometry, and X-ray absorptiometry. The instruments that provide and measure the radiation vary from one spectral region to another, but their operating principles are the same. Each instrument consists of at least three essential components: (1) a source of electromagnetic radiation in the proper energy region, (2) a cell that is transparent to the radiation and that can contain the sample, and (3) a detector that can accurately measure the intensity of the radiation after it has passed through the cell and the sample.

      Essentially, the amount of absorbed radiation increases with the concentration of the analyte and with the distance through the analyte that the radiation must travel (the cell path length). As radiation is absorbed in the sample, the intensity of the radiative beam decreases. By measuring the decreased intensity through a fixed-path-length cell containing the sample, it is possible to determine the concentration of the sample. Because different substances absorb at different wavelengths (or energies), the instruments must be capable of controlling the wavelength of the incident electromagnetic radiation. In most instruments, this is accomplished with a monochromator. In other instruments, it is done by use of radiative filters or by use of sources that emit radiation within a narrow wavelength band.
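
The relation described here, between absorbance, path length, and concentration, is the Beer-Lambert law, which the passage implies without naming. A sketch with hypothetical values:

```python
# Beer-Lambert law: absorbance A = epsilon * b * c, with transmitted
# intensity I = I0 * 10**(-A). Epsilon and c are hypothetical values.
epsilon = 1.10e4   # molar absorptivity, L mol^-1 cm^-1
b_cm = 1.0         # fixed cell path length
c_molar = 5.0e-5   # analyte concentration

absorbance = epsilon * b_cm * c_molar
transmittance = 10 ** (-absorbance)   # fraction of incident light transmitted

# Inverting the law: concentration from a measured absorbance.
c_back = absorbance / (epsilon * b_cm)
print(round(absorbance, 2), round(transmittance, 3))
```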

      Because the wavelength at which substances absorb radiation depends on their chemical makeup, absorptiometry can also be used for qualitative analysis. The analyte is placed in the cell, and the wavelength of the incident radiation is scanned throughout a spectral region while the absorption is measured. The resulting plot of radiative intensity or absorption as a function of wavelength or energy of the incident radiation is a spectrum. The wavelengths at which peaks are observed are used to identify components of the analyte.

Radiowave absorptiometry
      The absorption that occurs in different spectral regions corresponds to different physical processes that occur within the analyte. Absorption of energy in the radiofrequency region is sufficient to cause a spinning nucleus in some atoms to move to a different spin state in the presence of a magnetic field. Consequently, nuclear magnetic resonance spectrometry is useful for examining atomic nuclei and the transitions between their possible spin states. Because nuclei from different atoms have different possible spin states that are separated from each other by different amounts of energy, nuclear magnetic resonance spectrometry can be used to identify the type of atoms in the analyte. The spin states can be observed only in the presence of an externally applied magnetic field.

      The energy at which absorption occurs depends on the strength of the magnetic field. Any factors that change the magnetic field strength experienced by the nucleus affect the energy at which absorption occurs. Since spinning nuclei of other atoms in the vicinity of the nucleus studied can affect the magnetic field strength, those neighbouring nuclei cause the absorption to be shifted to slightly different energies. As a result, nuclear magnetic resonance spectrometry can be used to deduce the number and types of different nuclei of the groups attached to the atom containing the nucleus studied. It is particularly useful for qualitative analysis of organic compounds.

Microwave absorptiometry
      In a manner that is similar to that described for nuclear magnetic resonance spectrometry, electron spin resonance spectrometry is used to study spinning electrons. The absorbed radiation falls in the microwave spectral region and induces transitions in the spin states of the electrons. An externally applied magnetic field is required. The technique is effective for studying structures and reactions of materials that contain unpaired electrons.

      Absorbed microwave radiation can cause changes in rotational energy levels within molecules, making it useful for other purposes. The rotational energy levels within a molecule correspond to the different possible ways in which a portion of a molecule can revolve around the chemical bond that binds it to the remainder of the molecule. Because the permitted rotational levels depend on the natures of the bonded atoms (e.g., their masses), microwave radiation can be used for qualitative analysis of some organic molecules.

Thermal analysis
      During thermal analysis heat is added to an analyte while some property of the analyte is measured. Often the temperature of the sample is monitored during the addition of heat. The manner in which the temperature changes is compared to the way in which the temperature of a completely inert material changes while being exposed to the same heating program. The results are employed for qualitative and quantitative analysis and for determining decomposition mechanisms of the analyte. For example, compounds that contain water exhibit a constant temperature region as the water is stripped from the compound even though heat is continuously added. If the manner in which a compound responds to a heating program is known, the technique can be used for quantitative analysis by measuring the time necessary for a particular change within the analyte to occur.

Infrared spectrophotometry
      Absorbed infrared radiation causes rotational changes in molecules, as described for microwave absorption above, and also causes vibrational changes. The vibrational energy levels within a molecule correspond to the ways in which the individual atoms or groups of atoms vibrate relative to the remainder of the molecule. Because vibrational energy levels are dependent on the types of atoms and functional groups, infrared absorption spectrophotometry is primarily used for organic qualitative analysis. It can be used for quantitative analysis, however, by monitoring the amount of absorbed radiation at a given energy corresponding to one of the peaks in the spectrum of the molecule.

Ultraviolet-visible spectrophotometry
      Absorption in the ultraviolet-visible region of the spectrum causes electrons in the outermost occupied orbital of an atom or molecule to be moved to a higher (i.e., farther from the nucleus) unoccupied orbital. Ultraviolet-visible absorptiometry is principally used for quantitative analysis of atoms or molecules. It is a useful method in this respect because the height of the absorption peaks in the ultraviolet-visible region of the spectra of many organic and inorganic compounds is large in comparison to the peak heights observed in other spectral regions. Small analyte concentrations can be more easily measured when the peaks are high. If the analyte consists of discrete atoms (which exist only in the gaseous state), the method is termed atomic absorption spectrophotometry.

      Some ions and molecules do not absorb strongly in the ultraviolet-visible spectral region. Methods have been developed to apply ultraviolet-visible absorptiometry to those substances. Normally a chemical reagent is added that reacts with the analyte to form a reaction product that strongly absorbs. The absorption of the product of the chemical reaction is measured and related to the concentration of the nonabsorbing analyte. When a nonabsorbing metallic ion is assayed, the added reagent generally is a complexing agent. For example, 1,10-phenanthroline is added to solutions that are assayed for iron(II). The complex that forms between the iron and the reagent is red and is suitable for determining even very small amounts of iron. When a chemical reagent is used in a spectrophotometric assay, the procedure is called a spectrochemical analysis.

      Spectrophotometric titrations are another example of spectrochemical analyses. The titrant (reagent) is placed in a buret and is added stepwise to the assayed substance. After each addition, the absorption of the solution in the reaction vessel is measured. A titration curve is prepared by plotting the amount of absorption as a function of the volume of added reagent. The shape of the titration curve depends on the absorbances of the titrant, analyte, and reaction product; from the shape of the curve, it is possible to determine the end point. The end-point volume is used with the concentration of the reagent and the initial volume of the sample solution to calculate the concentration of the analyte.

      The detectors that are used in ultraviolet-visible spectrophotometry measure photons. If these photon detectors are replaced by a detector that measures pressure waves, the technique is known as photoacoustic, or optoacoustic, spectrometry. Photoacoustic spectrometers typically employ microphones or piezoelectric transducers as detectors. Pressure waves result when the analyte expands and contracts as it absorbs chopped electromagnetic radiation.

X-ray absorption
      Absorbed X rays cause excitation of electrons from inner orbitals (those near the nucleus) to unoccupied outer orbitals. In some cases, the energy of the incident X ray is sufficient to ionize the analyte by completely removing the electron from the atom or molecule. The energy required to excite the electron from an inner orbital is greater than that which is available in the ultraviolet-visible region. Because the inner shell electrons that are excited during X-ray absorption are associated with atoms in molecules rather than with the molecule as a whole, the information that is provided from a study of X-ray absorption spectra relates to the atoms within a molecule rather than to the entire molecule. X-ray absorption is used for qualitative analysis by comparing the spectrum of the analyte to spectra of known substances. Quantitative analysis also is performed in a manner similar to that used in other spectral regions. X-ray absorption spectra differ in shape from those observed in other regions, but the same measurement principles are applied during the assays.

Scattered radiation
      Radiative scattering is utilized in the second major spectral method of analysis. In this technique some radiation that passes through a sample strikes particles of the analyte and is scattered in a different direction. A detector is used to measure either the intensity of the scattered radiation or the decreased intensity of the incident radiation. Depending on the scattering mechanism, the method can be employed for either qualitative or quantitative analysis. If the intensity of the scattered radiation is measured, quantitative analysis is performed by preparing a working curve of intensity as a function of concentration of a series of standard solutions (i.e., solutions containing known concentrations of the component being analyzed). Working curves also are used with other analytical methods, including absorptiometry. The intensity of the scattered radiation in the analyte is measured and compared to the working curve. The concentration of the analyte corresponds to the concentration on the curve that has an intensity identical to that of the analyte.
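
A working curve is commonly a least-squares line fitted through the standard readings, from which the unknown is read back. The standards and intensities below are illustrative:

```python
# Working (calibration) curve: fit scattered-radiation intensity against
# known standard concentrations, then read the unknown off the line.
# Standards and readings are illustrative.
standards = [0.0, 1.0, 2.0, 3.0, 4.0]      # known concentrations
intensities = [0.1, 2.1, 4.0, 6.1, 8.0]    # detector readings

n = len(standards)
mean_x = sum(standards) / n
mean_y = sum(intensities) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(standards, intensities)) \
        / sum((x - mean_x) ** 2 for x in standards)
intercept = mean_y - slope * mean_x

# Invert the fitted line for an unknown sample's measured intensity.
unknown_intensity = 5.0
unknown_concentration = (unknown_intensity - intercept) / slope
print(round(unknown_concentration, 3))
```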

      For chemical analysis three forms of radiative scattering are important—namely, Tyndall, Raman, and Rayleigh scattering. Tyndall scattering occurs when the dimensions of the particles that are causing the scattering are larger than the wavelength of the scattered radiation. It is caused by reflection of the incident radiation from the surfaces of the particles, reflection from the interior walls of the particles, and refraction and diffraction of the radiation as it passes through the particles.

      Raman and Rayleigh scattering occur when the dimensions of the scattering particles are less than 5 percent of the wavelength of the incident radiation. Both Rayleigh and Raman scattering are caused by the effect on the analyte of the fluctuating electromagnetic field that is associated with the passing incident radiation. The fluctuating field induces an electric dipole (separation of charges equal in size but opposite in sign) within the scattering particles that oscillates at the same frequency as the incident radiation. The oscillating dipole behaves as a point source of emitted radiation.

Turbidimetry and nephelometry
      Scattered radiation can be used to perform quantitative analysis in either of two ways. If the apparatus is designed so that the detector is aligned with the cell and the radiative source, the detector responds to the decreased intensity of the incident radiation that is caused by scattering in the cell. Measurements of the decreased intensity are turbidimetric measurements; the technique is called turbidimetry. The measurements are completely analogous to absorption measurements. The only difference is in the phenomenon that causes the decreased radiative intensity. As with absorption measurements, the decreased intensity is related to the concentration of the scattering species in the cell at a constant wavelength. In both Tyndall scattering and Rayleigh scattering, the wavelength of the scattered radiation is identical to that of the incident radiation. Consequently, neither type provides information that is useful for qualitative analysis.

      If the intensity of the scattered radiation is measured, rather than the decrease in intensity of the incident radiation, the method is known as nephelometry. The apparatus used for nephelometric measurements differs from that used for turbidimetric measurements in the placement of the detector. In nephelometry the detector is not aligned with the radiation source and the cell; normally it is placed perpendicular to the path of the incident radiation. Placing the detector out of the path of the incident radiation eliminates the possibility of measuring its intensity. Both nephelometry and turbidimetry are used with Tyndall scattering to quantitatively assay turbid solutions.

      As mentioned above, Raman and Rayleigh scattering are caused by induced dipoles that are formed as the electromagnetic radiation passes the scattering particles. Raman scattering differs from Rayleigh scattering in that in the former the induced dipole relaxes to a different vibrational level than it originally had. Accordingly, the wavelength of the scattered radiation differs from the wavelength of the incident radiation by an amount corresponding to the difference between the particle's original and final vibrational levels. Shifts between the wavelengths of the incident radiation and the scattered radiation correspond to differences in vibrational levels within the scattering molecule and therefore can be used for qualitative analysis in much the same way that infrared spectrophotometry is used.
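
The shift between incident and scattered wavelengths is conventionally expressed as a wavenumber difference, which equals a vibrational spacing in the molecule. The wavelengths below are hypothetical:

```python
# Raman shift: the wavenumber difference between incident and scattered
# radiation corresponds to a vibrational-level difference in the molecule.
# Wavelengths are hypothetical.
incident_nm = 532.0     # e.g., a green laser line
scattered_nm = 563.0    # a Stokes-shifted Raman line

def wavenumber_cm1(wavelength_nm):
    # Convert a wavelength in nanometres to a wavenumber in cm^-1.
    return 1.0e7 / wavelength_nm

raman_shift_cm1 = wavenumber_cm1(incident_nm) - wavenumber_cm1(scattered_nm)
print(round(raman_shift_cm1))
```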

      Another category of spectral analysis in which the incident radiation changes direction is refractometry. The refractive index of a substance is defined as the ratio of the velocity of electromagnetic radiation in a vacuum to its velocity in the medium of interest. Because it is difficult to accurately measure velocities as large as those of electromagnetic radiation, the refractive index is determined from the extent to which the radiation changes direction, owing to the decrease in velocity, as it passes from one medium into another. This phenomenon is refraction. Measurements of refractive index are used to qualitatively analyze pure substances because each substance has a constant and unique refractive index that can be determined with great accuracy. Quantitative analysis of simple mixtures containing known components is possible because the refractive index changes with the composition of the mixture.
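
The definition of refractive index (n = c/v) and the change of direction it produces (Snell's law) can be sketched as follows; the velocity in the medium and the angle of incidence are illustrative values:

```python
import math

C_VACUUM = 2.998e8   # speed of light in a vacuum, m/s

def refractive_index(velocity_in_medium):
    """n = c / v; at least 1 for ordinary media because v <= c."""
    return C_VACUUM / velocity_in_medium

def refraction_angle_deg(n1, n2, incidence_deg):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

n_medium = refractive_index(2.254e8)                   # about 1.33 (water-like)
theta2 = refraction_angle_deg(1.000, n_medium, 45.0)   # bends toward the normal
```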

Emitted radiation
      The spectroanalytical methods in the final major category utilize measurements of emitted radiation. Except for a few radionuclides that spontaneously emit radiation, emission occurs only after initial excitation of the analyte by an external source of energy.

      In the most common case excitation occurs after the absorption of electromagnetic radiation. The absorption process is identical to that which occurs during absorptiometric measurements. After ultraviolet-visible absorption, an electron in the analyte molecule or atom resides in an upper electron orbital with one or more vacant orbitals nearer to the nucleus. Emission occurs when the excited electron returns to a lower electron orbital. The emitted radiation is termed luminescence. Luminescence is observed at energies that are equal to or less than the energy corresponding to the absorbed radiation.

      After initial absorption, emission can occur by either of two mechanisms. In the most common form of luminescence, the excited electron returns to the lower electron orbital without inverting its spin—i.e., without changing the direction in which the electron rotates in the presence of a magnetic field. This phenomenon, known as fluorescence, occurs immediately after absorption. When absorption ceases, fluorescence also immediately ceases.

      Although it occurs with low probability, the excited electron sometimes returns to a lower electron orbital by a path in which the electron first inverts its spin while moving to a slightly lower energy state and then inverts the spin again while returning to the original spin state in the unexcited electron orbital. Emission of ultraviolet-visible radiation occurs during the transition from the excited, inverted spin state to the unexcited electron orbital. Because inversion of the spinning electron during the last transition can require a relatively long time, the emission does not immediately cease when the absorption ceases. The resulting luminescence is called phosphorescence. Both fluorescence and phosphorescence can be used for analysis. Fluorescence can be distinguished from phosphorescence by the time delay in emission that occurs during the latter. If the luminescence immediately stops when the exciting radiation is cut off, it is fluorescence; if the luminescence continues, it is phosphorescence.

      Owing to the arrangement of electron orbitals in molecules and atoms, phosphorescence is observed only in polyatomic species, whereas fluorescence can be observed in atoms as well as in polyatomic species. When fluorescence is observed in discrete, gaseous atoms, it is termed atomic fluorescence.

      The apparatus used to make fluorescent and phosphorescent measurements is similar to that used to make measurements of scattered radiation. The detector is usually placed perpendicular to the path of the incident radiation in order to eliminate the possibility of monitoring the incident radiation. Devices that are used to measure fluorescence are fluorometers, and those that are employed to measure phosphorescence are phosphorimeters. Phosphorimeters differ from fluorometers in that they monitor luminescent intensity while the exciting radiation is not striking the cell.

      At dilute concentrations, the intensity of the luminesced radiation is directly proportional to the concentration of the emitting species. As with other spectral methods, qualitative analysis is performed by comparing the spectrum of the analyte (a plot of the intensity of emitted radiation as a function of wavelength) with spectra of known substances.
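
The working-curve procedure amounts to a linear least-squares fit to a series of standards followed by inversion of the fitted line for the unknown; a sketch with hypothetical luminescence intensities:

```python
def fit_working_curve(concentrations, intensities):
    """Least-squares slope and intercept for I = m*c + b (valid in the
    dilute region where intensity is proportional to concentration)."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(intensities) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, intensities))
    sxx = sum((x - mx) ** 2 for x in concentrations)
    m = sxy / sxx
    return m, my - m * mx

# Hypothetical standards (concentration units arbitrary):
m, b = fit_working_curve([1.0, 2.0, 4.0, 8.0], [10.2, 19.8, 40.1, 80.0])
c_unknown = (55.0 - b) / m    # invert the curve for an unknown reading of 55.0
```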

      Luminescence can be initiated by a process other than absorption of electromagnetic radiation. Some atoms can be sufficiently excited to emit radiation when exposed to the heat in a flame. The analytical technique that measures the wavelength and/or the intensity of emitted radiation from a flame is flame emission spectrometry. If electrical energy in the form of a spark or an arc is used to excite the analyte prior to measuring the intensity of emitted radiation, the method is atomic emission spectrometry. If a chemical reaction is used to initiate the luminescence, the technique is chemiluminescence; if an electrochemical reaction causes the luminescence, it is electrochemiluminescence.

X-ray emission
      X-ray emission spectrometry is the group of analytical methods in which emitted X-ray radiation is monitored. X rays are emitted when an electron in an outer orbital falls into a vacancy in an inner orbital. The vacancy is created by bombarding the atom with electrons, protons, alpha particles, or another type of particle. The vacancy also can be created by absorption of X-ray radiation or by nuclear capture of an inner-shell electron as it approaches the nucleus. Often the bombardment is sufficiently energetic to cause the inner orbital electron to be completely removed from the atom, thereby forming an ion with a vacant inner orbital.

      Emitted X rays are used for qualitative and quantitative analysis in much the same way that emitted ultraviolet-visible radiation is employed in fluorometry. X-ray fluorescence is used more often for chemical analysis than the other X-ray methods. The diffraction pattern of X rays that are passed through solid crystalline materials is useful for determining the crystalline structure of solids. The analytical method that measures the diffraction patterns for the purpose of determining structure is termed X-ray diffraction analysis.

      Several methods of surface analysis utilize X rays. Particle-induced X-ray emission (PIXE) is the method in which a small area on the surface of a sample is bombarded with accelerated particles and the resulting fluoresced X rays are monitored. If the bombarding particles are protons and the analytical technique is used to obtain an elemental map of a surface, the apparatus utilized is a proton microprobe. An electron microprobe functions in much the same manner. The scanning electron microscope utilizes electrons to bombard a surface, but the intensity of either backscattered (deflected through angles greater than 90°) or transmitted electrons is measured rather than the intensity of X rays. Electron microscopes are often used in conjunction with X-ray spectrometers to obtain information about surfaces.

      Electron spectroscopy comprises a group of analytical methods that measure the kinetic energy of expelled electrons after initial bombardment of the analyte with X rays, ultraviolet radiation, ions, or electrons. When X rays are used for the bombardment, the analytical method is called either electron spectroscopy for chemical analysis (ESCA) or X-ray photoelectron spectroscopy (XPS). If the incident radiation is ultraviolet radiation, the method is termed ultraviolet photoelectron spectroscopy (UPS) or photoelectron spectroscopy (PES). When the bombarding particles are electrons and different emitted electrons are monitored, the method is Auger electron spectroscopy (AES). Other forms of less frequently used electron spectroscopy are available as well.

Radiochemical methods
      During use of the radiochemical methods, spontaneous emissions of particles or electromagnetic radiation from unstable atomic nuclei are monitored. The intensity of the emitted particles or electromagnetic radiation is used for quantitative analysis, and the energy of the emissions is used for qualitative analysis. Emissions of alpha particles, electrons (negatrons and positrons), neutrons, protons, and gamma rays can be useful. Gamma rays are energetically identical to X rays; however, they are emitted as a result of nuclear transformations rather than electron orbital transitions.

      A radioisotope is an isotope of an element that spontaneously emits particles or radiation. Radioisotopes can be assayed using a radioanalytical method. In other cases, it is possible to bombard a nonradioactive sample with a particle or with radiation in order to transform temporarily all or part of the sample into a radioactive material that can be assayed. Sometimes it is possible to dilute a sample with a radioactive isotope of the assayed element. If the amount of the dilution can be deduced, the intensity of the emissions from the added radioisotope can be used to assay the nonradioactive analyte. This method is called isotope dilution analysis.
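
The arithmetic of isotope dilution can be sketched as follows, assuming the spike's specific activity (counts per minute per gram) is known before mixing and re-measured after a pure portion of the mixed element is isolated; all numbers are hypothetical:

```python
def analyte_mass_by_isotope_dilution(spike_mass_g, a_spike, a_mixed):
    """Mixing spike_mass_g of radioactive spike with inactive analyte dilutes
    the specific activity from a_spike to a_mixed, so the total mass is
    spike_mass_g * a_spike / a_mixed; subtracting the spike gives the analyte."""
    total_mass = spike_mass_g * a_spike / a_mixed
    return total_mass - spike_mass_g

# 0.10 g of spike at 5000 counts/min/g diluted to 1000 counts/min/g:
m_analyte = analyte_mass_by_isotope_dilution(0.10, 5000.0, 1000.0)  # 0.40 g
```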

      The second major category of instrumental analysis is electroanalysis. The electroanalytical methods use electrically conductive probes, called electrodes, to make electrical contact with the analyte solution. The electrodes are used in conjunction with electric or electronic devices to which they are attached to measure an electrical parameter of the solution. The measured parameter is related to the identity of the analyte or to the quantity of the analyte in the solution.

      The electroanalytical methods are divided into categories according to the electric parameters that are measured. The major electroanalytical methods include potentiometry, amperometry, conductometry, electrogravimetry, voltammetry (and polarography), and coulometry. The names of the methods reflect the measured electric property or its units. Potentiometry measures electric potential (or voltage) while maintaining a constant (normally nearly zero) electric current between the electrodes. Amperometry monitors electric current (amperes) while keeping the potential constant. Conductometry measures conductance (the ability of a solution to carry an electric current) while a constant alternating-current (AC) potential is maintained between the electrodes. Electrogravimetry is a gravimetric technique similar to the classical gravimetric methods that were described above, in which the solid that is weighed is deposited on one of the electrodes. Voltammetry is a technique in which the potential is varied in a regular manner while the current is monitored. Polarography is a subtype of voltammetry that utilizes a liquid metal electrode. Coulometry is a method that monitors the quantity of electricity (coulombs) consumed during an electrochemical reaction involving the analyte.

      Most of the electroanalytical methods rely on the flow of electrons between one or more of the electrodes and the analyte. The analyte must be capable of either accepting one or more electrons (known as reduction) from the electrode or donating one or more electrons (oxidation) to the electrode. As an example, ferric iron (Fe3+) can be assayed because it can undergo a reduction to ferrous iron (Fe2+) by accepting an electron from the electrode as shown in the following reaction:

Fe3+ + e- → Fe2+

Conductometry
      Conductometry is the method in which the ability of the analyte solution to conduct an electric current is monitored. From Ohm's law (E = IR) it is apparent that the electric current (I) is inversely proportional to the resistance (R), where E represents potential difference. The inverse of the resistance is the conductance (G = 1/R). As the conductance of a solution increases, its ability to conduct an electric current increases.
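
These relations are direct to compute; a sketch with illustrative values:

```python
def conductance_siemens(resistance_ohm):
    """G = 1/R; the unit of conductance is the siemens (inverse ohm)."""
    return 1.0 / resistance_ohm

def current_amperes(potential_volts, resistance_ohm):
    """Ohm's law rearranged: I = E/R = E*G."""
    return potential_volts * conductance_siemens(resistance_ohm)

g = conductance_siemens(250.0)     # a 250-ohm cell has G = 0.004 S
i = current_amperes(0.5, 250.0)    # 0.5 V across it drives 0.002 A
```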

      In liquid solutions current is conducted between the electrodes by dissolved ions. The conductance of a solution depends on the number and types of ions in the solution. Generally small ions and highly charged ions conduct current better than large ions and ions with a small charge. The size of the ions is important because it determines the speed with which the ions can travel through the solution. Small ions can move more rapidly than larger ones. The charge is significant because it determines the amount of electrostatic attraction between the electrode and the ions.

      Because conductometric measurements require the presence of ions, conductometry is not useful for the analysis of undissociated molecules. The measured conductance is the total conductance of all the ions in the solution. Since all ions contribute to the conductivity of a solution, the method is not particularly useful for qualitative analysis—i.e., the method is not selective. The two major uses of conductometry are to monitor the total conductance of a solution and to determine the end points of titrations that involve ions. Conductivity meters are used in conjunction with water purification systems, such as stills or deionizers, to verify that the product water is essentially free of ions.

      Conductometric titration curves are prepared by plotting the conductance as a function of the volume of added titrant. The curves consist of linear regions prior to and after the end point. The two linear portions are extrapolated to their point of intersection at the end point. As in other titrations, the end-point volume is used to calculate the amount or concentration of analyte that was originally present.
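
Locating the end point by extrapolating the two linear branches to their intersection can be sketched as follows (the conductance readings on either side of the end point are hypothetical):

```python
def line_fit(xs, ys):
    """Least-squares slope and intercept of one linear branch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

def end_point_volume(v_before, g_before, v_after, g_after):
    """Intersect the extrapolated before- and after-end-point lines."""
    m1, b1 = line_fit(v_before, g_before)
    m2, b2 = line_fit(v_after, g_after)
    return (b2 - b1) / (m1 - m2)

# Conductance falls as titrant consumes mobile ions, then rises past the
# end point as excess titrant accumulates (all values hypothetical):
v_end = end_point_volume([0, 2, 4], [10.0, 8.0, 6.0],
                         [8, 10, 12], [7.0, 9.0, 11.0])   # 5.5 mL
```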

Voltammetry
      Voltammetry can be used for both qualitative and quantitative analysis of a wide variety of molecular and ionic materials. In this method, a set of two or three electrodes is dipped into the analyte solution, and a regularly varying potential is applied to the indicator electrode relative to the reference electrode. The analyte electrochemically reacts at the indicator electrode. The reference electrode is constructed so that its potential is constant regardless of the solution into which it is dipped. Usually a third electrode (an auxiliary or counter electrode) is placed in the solution for the purpose of carrying most of the current. The potential is controlled between the indicator electrode and the reference electrode, but the current flows between the auxiliary electrode and the indicator electrode.

      The several forms of voltammetry differ in the type of varying potential that is applied to the indicator electrode. Polarography is voltammetry in which the indicator electrode is made of mercury or, rarely, another liquid metal. In classic polarography, mercury drops from a capillary tube. The surface of the mercury drop is the site of the electrochemical reaction with the analyte. The manner in which the direct-current (DC) potential of the indicator electrode varies with time is a potential (or voltage) ramp. In the most common case, the potential varies linearly with time, and the analytical method is known as linear sweep voltammetry (LSV).

      Typically the potential is initially adjusted to a value at which no electrochemical reaction occurs at the indicator electrode. The potential is scanned in a direction that makes an electrochemical reaction more favourable. If reduction reactions are studied, the electrode is made more cathodic (negative); if oxidations are studied, the electrode is made more anodic (positive). Initially the current that is measured, before the electrochemical reaction begins, is small. As the electrode potential is changed, however, sufficient energy is applied to the indicator electrode to cause the reaction to take place. As the reaction occurs, electrons are withdrawn from the electrode (for electrochemical reductions) or donated to the electrode (for oxidations), and a current flows in the external electrical circuit. A voltammogram is a plot of the current as a function of the applied potential. The shape of a voltammogram depends on the type of indicator electrode and the potential ramp that are used. In nearly all cases, the voltammogram has a current wave as shown in Figure 1 or a current peak as shown in Figure 2.

      This technique can be used for qualitative analysis because substances exhibit characteristic peaks or waves at different potentials. The height (current) of the wave or the peak, as measured by extrapolating the linear portion of the curve prior to the wave or peak and taking the difference between this extrapolated line and the current peak or plateau, is directly proportional to the concentration of the analyte and can be used for quantitative analysis. Normally the concentration corresponding to the peak or wave height of the analyte is determined from a working curve.

Triangular wave voltammetry
      Triangular wave voltammetry (TWV) is a method in which the potential is linearly scanned to a value past the potential at which an electrochemical reaction occurs and is then immediately scanned back to its original potential. A triangular wave voltammogram usually has a current peak on the forward scan and a second, inverted peak on the reverse scan representing the opposite reaction (oxidation or reduction) to that observed on the forward scan. Cyclic voltammetry is identical to TWV except that more than one cycle of forward and reverse scans is completed in succession.
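
The triangular potential program itself is easy to generate; a sketch in which the scan limits and number of steps are illustrative:

```python
def triangular_ramp(e_start, e_switch, n_steps):
    """Potential program for triangular wave voltammetry: n_steps + 1 points
    linearly from e_start to e_switch, then back again (switch point shared)."""
    forward = [e_start + (e_switch - e_start) * i / n_steps
               for i in range(n_steps + 1)]
    reverse = forward[-2::-1]          # retrace the forward points
    return forward + reverse

# Scan 0 V -> -0.5 V -> 0 V in 10 steps each way:
ramp = triangular_ramp(0.0, -0.5, 10)
```

Cyclic voltammetry would simply repeat this program for each additional cycle.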

AC (alternating-current) voltammetry
      During AC voltammetry an alternating potential is added to the DC potential ramp used for LSV. Only the AC portion of the total current is measured and plotted as a function of the DC potential portion of the potential ramp. Because flow of an alternating current requires the electrochemical reaction to occur in the forward and reverse directions, AC voltammetry is particularly useful for studying the extent to which electrochemical reactions are reversible.

Pulse and differential pulse voltammetry
      Differential pulse voltammetry adds a periodically applied potential pulse (temporary increase in potential) to the voltage ramp used for LSV. The current is measured just prior to application of the pulse and at the end of the applied pulse. The difference between the two currents is plotted as a function of the LSV ramp potential. Pulse voltammetry utilizes a regularly increasing pulse height that is applied at periodic intervals. In pulse and differential pulse polarography the pulses are applied just before the mercury drop falls from the electrode. Typically the pulse is applied for about 50–60 milliseconds, and the current is measured during the last 17 milliseconds of each pulse. The voltammogram is a plot of the measured current as a function of the potential of the pulse. Many other variations of voltammetry also are available but are not as commonly used. Sketches showing the various potential ramps that are applied to the indicator electrode during the various types of polarography, along with the typical corresponding polarograms, are shown in Figure 3.

Electrogravimetry
      Electrogravimetry was briefly described above as an interference removal technique. This method employs two or three electrodes, just as in voltammetry. Either a constant current or a constant potential is applied to the preweighed working electrode. The working electrode corresponds to the indicator electrode in voltammetry and most other electroanalytical methods. A solid product of the electrochemical reaction of the analyte coats the electrode during application of the electric current or potential. After the assayed substance has been completely removed from the solution by the electrochemical reaction, the working electrode is removed, rinsed, dried, and weighed. The increased mass of the electrode due to the presence of the reaction product is used to calculate the initial concentration of the analyte.
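
The final calculation from the weighed deposit is a simple stoichiometric conversion; a sketch assuming complete deposition of copper (the mass gain and sample volume are hypothetical):

```python
def molarity_from_deposit(mass_gain_g, molar_mass_g_per_mol, sample_volume_l):
    """Electrogravimetry: moles deposited = mass gain / molar mass, and the
    original concentration follows from the sample volume (assuming the
    analyte was completely removed from solution)."""
    return (mass_gain_g / molar_mass_g_per_mol) / sample_volume_l

# 0.3177 g of copper (molar mass 63.55 g/mol) plated from 100.0 mL:
c_copper = molarity_from_deposit(0.3177, 63.55, 0.1000)   # about 0.050 M
```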

      Assays done by using constant-current electrogravimetry can be completed more rapidly (typically 30 minutes per assay) than assays done by using constant-potential electrogravimetry (typically one hour per assay), but the constant-current assays are subject to more interferences. If only one component in the solution can react to form a deposit on the electrode, constant-current electrogravimetry is the preferred method. In constant-potential electrogravimetry the potential at the working electrode is controlled so that only a single electrochemical reaction can occur. The applied potential corresponds to the potential on the plateau of a voltammetric wave of the assayed material.

Coulometry
      Coulometry is similar to electrogravimetry in that it can be used in the constant-current or in the constant-potential modes. It differs from electrogravimetry, however, in that the total quantity of electricity (coulombs) required to cause the analyte to completely react is measured rather than the mass of the electrochemical reaction product. It is not necessary for the reaction product to deposit on the electrode in order to perform a coulometric assay; however, it is necessary that the current that flows through the electrode be ultimately used for a single electrochemical reaction. This requirement can be met in constant-current coulometry by using the current to perform a coulometric titration. In a coulometric titration, the current generates a titrant that chemically reacts with the analyte. By keeping the precursor to the titrant in excess, it is possible to ensure that all of the current is used to form the chemical reactant. Because the electrochemically formed titrant reacts completely with the analyte, it is possible to perform a quantitative analysis. Constant-potential coulometry is not subject to the effects of interferences, because the potential of the working electrode is controlled at a value at which only a single electrochemical reaction can occur.
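
The conversion from coulombs to amount of analyte rests on Faraday's law; a sketch of a constant-current coulometric titration, assuming 100 percent current efficiency and a one-electron reaction (the current and time are illustrative):

```python
FARADAY = 96485.0    # faraday constant, coulombs per mole of electrons

def coulombs_at_constant_current(current_a, time_s):
    """At constant current, Q = I*t, so only an accurate current source
    and clock are needed."""
    return current_a * time_s

def moles_from_coulombs(charge_c, n_electrons):
    """Faraday's law: moles of analyte = Q / (n*F)."""
    return charge_c / (n_electrons * FARADAY)

q = coulombs_at_constant_current(0.0100, 193.0)   # 10.0 mA for 193.0 s
n_analyte = moles_from_coulombs(q, 1)             # about 2.0e-5 mol
```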

Amperometry
      During amperometric assays the potential of the indicator electrode is adjusted to a value on the plateau of the voltammetric wave, as during controlled-potential electrogravimetry and coulometry (see above). The current that flows between the indicator electrode and a second electrode in the solution is measured and related to the concentration of the analyte. Amperometry is commonly employed in two ways, both of which take advantage of the linear variation in current at constant potential with the concentration of an electroactive species. A working curve of current as a function of concentration of a series of standard solutions is prepared, and the concentration of the analyte is determined from the curve, or amperometry is used to locate the end point in an amperometric titration. An amperometric titration curve is a plot of current as a function of titrant volume. The shape of the curve varies depending on which chemical species (the titrant, the analyte, or the product of the reaction) is electroactive. In each case the curve consists of linear regions before and after the end point that are extrapolated to intersection at the end point.

Potentiometry
      Potentiometry is the method in which the potential between two electrodes is measured while the electric current (usually nearly zero) between the electrodes is controlled. In the most common forms of potentiometry, two different types of electrodes are used. The potential of the indicator electrode varies, depending on the concentration of the analyte, while the potential of the reference electrode is constant. Potentiometry is probably the most frequently used electroanalytical method. It can be divided into two categories on the basis of the nature of the indicator electrode. If the electrode is a metal or other conductive material that is chemically and physically inert when placed in the analyte, it reflects the potential of the bulk solution into which it is dipped. Electrode materials that are commonly used for this type of potentiometry include platinum, gold, silver, graphite, and glassy carbon.

Inert-indicator-electrode potentiometry
      Inert-indicator-electrode potentiometry utilizes oxidation-reduction reactions. The potential of a solution that contains an oxidation-reduction couple (e.g., Fe3+ and Fe2+) is dependent on the identity of the couple and on the activities of the oxidized and reduced chemical species in the couple. For a general reduction half reaction of the form Ox + ne- → Red, where Ox is the oxidized form of the chemical species, Red is the reduced form, and n is the number of electrons (e) transferred during the reaction, the potential can be calculated by using the Nernst equation (equation 2). In the Nernst equation E is the potential at the indicator electrode, E° is the standard potential of the electrochemical reduction (a value that changes as the chemical identity of the couple changes), R is the gas law constant, T is the absolute temperature of the solution, n is the number of electrons transferred in the reduction (the value in the half reaction), F is the faraday constant, and the aOx and aRed terms are the activities of the oxidized and reduced chemical species, respectively, in the solution. The activities can be replaced by concentrations of the ionic species if the solution is sufficiently dilute.
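
The Nernst equation described above can be evaluated directly; a sketch for the Fe3+/Fe2+ couple, using its tabulated standard potential of about +0.771 V and illustrative activities:

```python
import math

R_GAS = 8.314      # gas law constant, J/(mol*K)
FARADAY = 96485.0  # faraday constant, C/mol

def nernst_potential(e_standard, n, a_ox, a_red, temp_k=298.15):
    """E = E0 - (R*T/(n*F)) * ln(a_red / a_ox) for Ox + n e- -> Red."""
    return e_standard - (R_GAS * temp_k / (n * FARADAY)) * math.log(a_red / a_ox)

# Equal activities of Fe3+ and Fe2+ give E = E0:
e_equal = nernst_potential(0.771, 1, a_ox=0.010, a_red=0.010)
# A tenfold excess of the reduced form lowers the potential by about 59 mV:
e_reduced = nernst_potential(0.771, 1, a_ox=0.010, a_red=0.100)
```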

      The most common use for potentiometry with inert-indicator electrodes is determining the end points of oxidation-reduction titrations. A potentiometric titration curve is a plot of potential as a function of the volume of added titrant. The curves have an “S” or backward “S” shape, where the end point of the titration corresponds to the inflection point.

Ion-selective electrodes
      The second category of potentiometric indicator electrodes is the ion-selective electrode. Ion-selective electrodes preferentially respond to a single chemical species. The potential between the indicator electrode and the reference electrode varies as the concentration or activity of that particular species varies. Unlike the inert indicator electrodes, ion-selective electrodes do not respond to all species in the solution. The electrodes usually are constructed as illustrated in Figure 4. An internal reference electrode dips into a reference solution containing the assayed species and constant concentrations of the species to which the internal electrode responds. The internal reference electrode and reference solution are separated from the analyte solution by a membrane that is chosen to respond to the analyte. As usual, a second external reference electrode is also dipped into the analyte solution.

      The selectivity of the ion-selective electrodes results from the selective interaction between the membrane and the analyte. The electrodes are categorized according to the nature of the membrane. The most common types of ion-selective electrodes are the glass, liquid-ion-exchanger, solid-state, neutral-carrier, coated-wire, field-effect transistor, gas-sensing, and biomembrane electrodes. The glass membranes in glass electrodes are designed to allow partial penetration by the analyte ion. They are most often used for pH measurements, where the hydrogen ion is the measured species.

      Liquid-ion-exchanger electrodes utilize a liquid ion exchanger that is held in place in an inert, porous hydrophobic membrane. The electrodes are selective because the ion exchangers selectively exchange a single analyte ion. Solid-state ion-selective electrodes use a sparingly soluble, ionically conducting solid, either alone or suspended in an organic polymeric material, as the membrane. One of the ions in the solid generally is identical to the analyte ion; e.g., membranes that are composed of silver sulfide respond to silver ions and to sulfide ions. Neutral-carrier ion-selective electrodes are similar in design to the liquid-ion-exchanger electrodes. The liquid ion exchanger, however, is replaced by a complexing agent that selectively complexes the analyte ion and thereby draws it into the membrane.

      Coated-wire electrodes were designed in an attempt to decrease the response time of ion-selective electrodes. They dispense with the internal reference solution by using a polymeric membrane that is directly coated onto the internal reference electrode. Field-effect transistor electrodes place the membrane over the gate of a field-effect transistor. The current flow through the transistor, rather than the potential across the transistor, is monitored. The current flow is controlled by the charge applied to the gate, which is determined by the concentration of analyte in the membrane on the gate.

      Gas-sensing electrodes are designed to monitor dissolved gases. Typically they consist of an internal ion-selective electrode of one of the designs previously described (usually a glass electrode), which has a second, gas-permeable membrane wrapped around the membrane of the internal electrode. Between the membranes is an electrolyte solution containing ions that correspond to a reaction product of the analyte gas. For example, an ammonia-selective electrode can be constructed by using an internal glass pH electrode and an ammonium chloride solution between the membranes. The ammonia from the sample diffuses into the ammonium chloride solution between the membranes and partially dissociates in the aqueous solution to form ammonium ions and hydroxide ions. The internal pH electrode responds to the altered pH of the solution caused by the formation of hydroxide ions.

      Biomembrane electrodes are similar in design to gas-sensing electrodes. The outer permeable membrane is used to hold a gel between the two membranes. The gel contains an enzyme that selectively catalyzes the reaction of the analyte. The internal ion-selective electrode is chosen to respond to one of the products of the catalyzed reaction. Internal pH electrodes are commonly used.

      In the absence of electrode interferences from other ions, ion-selective electrodes usually obey equation (3), where E is the potential measured between the electrode and a reference electrode, z is the charge on the analyte ion, ai is the activity of the ion, and the other terms represent the same terms as given above for the Nernst equation.
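
For a singly charged ion at 25 °C, this Nernstian response predicts the familiar change of about 59 mV per tenfold change in activity; a sketch (the constant term is arbitrary here):

```python
import math

R_GAS = 8.314      # gas law constant, J/(mol*K)
FARADAY = 96485.0  # faraday constant, C/mol

def ise_potential(e_const, z, activity, temp_k=298.15):
    """Nernstian ion-selective electrode response:
    E = const + (R*T/(z*F)) * ln(a_i)."""
    return e_const + (R_GAS * temp_k / (z * FARADAY)) * math.log(activity)

# Change in potential per decade of activity for a z = +1 ion at 25 C:
slope = ise_potential(0.0, 1, 10.0) - ise_potential(0.0, 1, 1.0)  # ~0.0592 V
```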

      Quantitative analysis of all ions except hydrogen generally is performed by using the working curve method. A working curve is prepared by plotting the potential of a series of standard solutions as a function of the logarithm or natural logarithm (ln) of the activities or concentrations of the solutions. The activity or concentration of the analyte is determined from the curve.

      Normally pH measurements are performed with a modified voltmeter called a pH meter. Buffer solutions of known pH are used to standardize the instrument. After standardization, the electrodes are dipped into the analyte and the pH of the solution is displayed. A similar approach can be used in place of the working curve method to determine the concentration of ions other than the hydrogen ion by using standard solutions to adjust the meter.

Separatory methods
      The final major category of instrumental methods is the separatory methods. Chromatography and mass spectrometry are two such methods that are particularly important for chemical analysis. Because both are described in detail below, they are only introduced in this section.

      Chromatography was described earlier as a method for removing interferences prior to an analysis. Both gas and liquid chromatographic methods can be used for chemical analysis.

      In gas chromatography the stationary phase is contained in a column. The column generally is a coiled metallic or glass tube. An injector near the entrance to the column is used to add the analyte. The mobile phase gas usually is contained in a high-pressure gas cylinder that is attached by metallic tubing to the injector and the column. A detector, placed at the exit from the column, responds to the separated components of the analyte. The detector is electrically attached to a recorder or other readout device (e.g., a computer) that displays the detector response as a function of time. The plot of the detector response as a function of time is a chromatogram. Each separated component of the analyte appears as a peak on the chromatogram.

      Qualitative analysis is performed by comparing the time required for the component to pass through the column with the corresponding times for known substances. The interval between the instant of injection and the detection of the component is known as the retention time. Because retention times vary with the identity of the component, they are utilized for qualitative analysis. Quantitative analysis is performed by preparing a working curve, at a specific retention time, by plotting the peak height or peak area of a series of standards as a function of the concentration of the component being assayed. The concentration of the component in the analyte is determined from the chromatographic peak height or area of the component and the working curve.
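Both chromatographic steps can be sketched in a few lines of Python. The retention times, tolerance, standards, and peak areas below are all hypothetical; the quantitative working curve is forced through the origin, a common simplification when the blank signal is zero.

```python
# Hypothetical retention-time library (seconds) for one column and set of conditions.
library = {"hexane": 95.2, "benzene": 142.7, "toluene": 188.3}

def identify(retention_time, tolerance=1.0):
    """Qualitative step: list compounds whose retention times match within tolerance."""
    return [name for name, t in library.items() if abs(t - retention_time) <= tolerance]

# Quantitative step: working curve of peak area vs. concentration for one component,
# built from hypothetical (concentration in ppm, peak area) standard pairs.
standards = [(10.0, 1520.0), (20.0, 3050.0), (40.0, 6080.0)]
slope = sum(area for _, area in standards) / sum(conc for conc, _ in standards)

def concentration(peak_area):
    return peak_area / slope

matches = identify(142.5)          # a peak observed near 142.7 s
c_unknown = concentration(4560.0)  # peak area of the unknown at that retention time
```

Here the observed peak matches benzene's retention time, and its area corresponds to roughly 30 ppm on the working curve.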

Liquid chromatography
      This procedure can be performed either in a column (column chromatography) or on a plane. Columnar liquid chromatography is used for qualitative and quantitative analysis in a manner similar to the way in which gas chromatography is employed. Sometimes retention volumes, rather than retention times, are used for qualitative analysis. For chemical analysis the most popular category of columnar liquid chromatography is high-performance liquid chromatography (HPLC). The method uses a pump to force one or more mobile phase solvents through high-efficiency, tightly packed columns. As with gas chromatography, an injection system is used to insert the sample into the entrance to the column, and a detector at the end of the column monitors the separated analyte components.

      The stationary phase that is used for plane chromatography is physically held in place in or on a plane. Typically the stationary phase is attached to a plastic, metallic, or glass plate. Occasionally, a sheet of high-quality filter paper is used as the stationary phase. The sample is added as a spot or a thin strip at one end of the plane. The mobile phase flows over the spot by capillary action during ascending development or as a result of the force of gravity during descending development. During ascending development, the end of the plane near and below the sample spot is dipped into the mobile phase, and the mobile phase moves up and through the spot. During descending development, the mobile phase is added to the top of the plane and flows downward through the spot.

      Qualitative analysis is performed by comparing the retardation factor (Rf) of the analyte components with the retardation factors of known substances. The retardation factor is defined as the distance from the original sample spot that the component has moved divided by the distance that the mobile phase front has moved and is constant for a solute in a given solvent. Quantitative analysis is performed by measuring the sizes of the developed spots, by measuring some physical property of the spots (such as fluorescence), or by removing the spots from the plane and assaying them by another procedure.
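The retardation factor defined above is a simple ratio; the following sketch (with invented plate measurements) computes it directly.

```python
def retardation_factor(component_distance, front_distance):
    """Rf = distance moved by the component / distance moved by the mobile-phase front."""
    if not 0 <= component_distance <= front_distance:
        raise ValueError("a component cannot travel farther than the solvent front")
    return component_distance / front_distance

# Hypothetical plate: the solvent front moved 8.0 cm; two spots moved 2.0 cm and 5.6 cm.
rf_a = retardation_factor(2.0, 8.0)  # Rf = 0.25
rf_b = retardation_factor(5.6, 8.0)  # Rf = 0.70
```

Because Rf is constant for a given solute and solvent system, the computed values can be compared directly with tabulated values for known substances.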

Mass spectrometry
      Mass spectrometry is the analytical method in which ions or ionic fragments of an analyte are separated based on mass-to-charge ratios (m/z). Most mass spectrometers have four major components: an inlet system, an ion source, a mass analyzer, and a detector. The inlet system is used to introduce the analyte and to convert it to a gas at reduced pressure. The gaseous analyte flows from the inlet system into the ion source of the instrument, where the analyte is converted to ions or ionic fragments. That is often accomplished by bombarding the analyte with electrons or by allowing the analyte to undergo collisions with other ions.

      The ions that are formed in the ion source are accelerated into the mass analyzer by a system of electrostatic slits. In the analyzer the ions are subjected to an electric or magnetic field that is used to alter their paths. In the most common mass analyzers the ions are separated in space according to their mass-to-charge ratios. In time-of-flight mass analyzers, however, no separating electric or magnetic field is employed; instead, the instrument measures the time required for ions of varying m/z, accelerated to the same kinetic energy, to pass through a flight tube. The detector is placed at the end of the mass analyzer and measures the intensity of the ionic beam. A mass spectrum is a plot of the ionic beam intensity as a function of the mass-to-charge ratio of the ionic fragment.
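The time-of-flight principle follows from equating the kinetic energy gained during acceleration, z·e·V, with ½mv², which gives a drift time t = L·√(m / (2·z·e·V)). The sketch below uses a hypothetical instrument (1.0 m tube, 2000 V accelerating potential) to show that heavier ions arrive later.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def flight_time(mass_amu, charge, accel_voltage, tube_length):
    """Drift time through a field-free flight tube.
    Each ion gains kinetic energy z*e*V during acceleration, so
    0.5*m*v**2 = z*e*V  and  t = L / v = L * sqrt(m / (2*z*e*V))."""
    m = mass_amu * AMU
    return tube_length * math.sqrt(m / (2 * charge * E_CHARGE * accel_voltage))

# Hypothetical spectrometer: 1.0 m flight tube, 2000 V accelerating potential.
t_28 = flight_time(28.0, 1, 2000.0, 1.0)    # a light singly charged ion, m/z = 28
t_128 = flight_time(128.0, 1, 2000.0, 1.0)  # a heavier singly charged ion, m/z = 128
```

For singly charged ions the flight times scale as the square root of the mass, so the m/z = 128 ion takes about 2.1 times as long as the m/z = 28 ion to reach the detector.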

      Mass spectrometry is used for quantitative analysis by relating the height of a specific mass spectrometric peak to the concentration of the analyte; the peak heights vary linearly with concentration. Qualitative analysis is performed by using the entire spectrum. Generally the major peak with the largest m/z is the molecular ion peak, which has a charge of +1 corresponding to the loss of a single electron; consequently, the m/z of that peak corresponds to the molecular weight of the analyte. The spacing between peaks is used to deduce the manner in which the analyte has fragmented in the ion source. By carefully examining the fragmentation pattern, it is possible to deduce the structure of the analyte molecule. Computerized comparison of an analyte's mass spectrum with the mass spectra of known materials is commonly used to identify the analyte.
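One simple way such a computerized comparison can work is to score each library spectrum against the analyte's spectrum with a cosine similarity on the peak intensities. This is only an illustrative sketch: the compound names are real, but the intensities are invented, and production library-search algorithms use more elaborate match factors.

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Match factor for two spectra given as {m/z: intensity} dicts (1.0 = identical)."""
    peaks = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(p, 0.0) * spec_b.get(p, 0.0) for p in peaks)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b)

# Hypothetical analyte spectrum and a two-entry "library" (intensities invented).
analyte = {78: 100.0, 52: 19.0, 51: 15.0}
library = {
    "benzene":  {78: 100.0, 52: 20.0, 51: 17.0, 50: 14.0},
    "pyridine": {79: 100.0, 52: 60.0, 51: 30.0, 26: 20.0},
}
best_match = max(library, key=lambda name: cosine_similarity(analyte, library[name]))
```

With these invented intensities the analyte scores far higher against the benzene entry than against the pyridine entry, so the search returns benzene as the best match.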

Robert Denton Braun

Additional Reading
Herbert A. Laitinen and Galen W. Ewing (eds.), A History of Analytical Chemistry (1977), provides a historical overview. General works on analytical chemistry include Larry G. Hargis, Analytical Chemistry: Principles and Techniques (1988); Douglas A. Skoog, Donald M. West, and F. James Holler, Fundamentals of Analytical Chemistry, 5th ed. (1988), also available in an abbreviated version, Analytical Chemistry: An Introduction, 5th ed. (1990); Kenneth A. Rubinson, Chemical Analysis (1987); Daniel C. Harris, Quantitative Chemical Analysis, 3rd ed. (1991); John H. Kennedy, Analytical Chemistry: Principles, 2nd ed. (1990); and Stanley E. Manahan, Quantitative Chemical Analysis (1986). Analytical Chemistry (semimonthly) and Analytical Biochemistry (16/yr.) are useful periodicals.

The following are useful texts on qualitative analysis: Daniel J. Pasto and Carl R. Johnston, Organic Structure Determination (1969); Ralph L. Shriner et al., The Systematic Identification of Organic Compounds, 6th ed. (1980), a laboratory manual; John W. Lehman, Operational Organic Chemistry: A Laboratory Course, 2nd ed. (1988); and J.J. Lagowski and C.H. Sorum, Introduction to Semimicro Qualitative Analysis, 7th ed. (1991).

Instrumental analysis is the focus of Robert D. Braun, Introduction to Instrumental Analysis (1987); Hobart H. Willard et al., Instrumental Methods of Analysis, 7th ed. (1988); Gary D. Christian and James E. O'Reilly (eds.), Instrumental Analysis, 2nd ed. (1986); J.D. Winefordner (ed.), Spectrochemical Methods of Analysis (1971); Joseph B. Lambert et al., Organic Structural Analysis (1976); James D. Ingle, Jr., and Stanley R. Crouch, Spectrochemical Analysis (1988); Allen J. Bard and Larry R. Faulkner, Electrochemical Methods (1980); E.P. Serjeant, Potentiometry and Potentiometric Titrations (1984); A.M. Bond, Modern Polarographic Methods in Analytical Chemistry (1980); and R. Belcher (ed.), Instrumental Organic Elemental Analysis (1977).

* * *

Universalium. 2010.
