/fiz"iks/, n. (used with a sing. v.)
the science that deals with matter, energy, motion, and force.
[1580-90; see PHYSIC, -ICS]

* * *

Science that deals with the structure of matter and the interactions between the fundamental constituents of the observable universe.

Long called natural philosophy (from the Greek physikos), physics is concerned with all aspects of nature, covering the behaviour of objects under the action of given forces and the nature and origin of gravitational, electromagnetic, and nuclear force fields. The goal of physics is to formulate comprehensive principles that bring together and explain all discernible phenomena. See also aerodynamics; astrophysics; atomic physics; biophysics; mechanics; nuclear physics; particle physics; quantum mechanics; solid-state physics; statistical mechanics.
(as used in expressions)
high energy physics
solid state physics

* * *

▪ 1995

      Physicists in 1994 continued to be fascinated by the behaviour of matter on the largest and smallest scales. Both areas of interest were indulged by the announcement in June that a detector at the Los Alamos (N.M.) National Laboratory had monitored eight events that could represent the first direct evidence for a conjectured property of neutrinos called oscillation.

      Neutrinos are extremely light particles—for decades they had been thought to be entirely massless—that are produced in abundance in nuclear reactions (as in the Sun). They travel through both space and solid matter at almost the speed of light. Neutrinos come in three types and, according to the oscillation theory, can change from one type to another as they go on their way. To make these transformations, however, neutrinos are required to have a tiny amount of mass. Apart from the intrinsic interest of observing such a curious phenomenon, physicists are intrigued by neutrino oscillation for two reasons. First, if neutrinos do oscillate, the phenomenon could explain why scientists detect fewer neutrinos coming from the Sun than theory predicts. This so-called solar neutrino problem has persisted for more than two decades, since the first solar neutrino detectors were built. But the detectors designed to catch solar neutrinos passing through the Earth are sensitive to only one type of neutrino. If that type is produced inside the Sun as theory predicts but then oscillates into a mixture of types en route to Earth, the behaviour could explain the deficit measured for the original type.

      The second reason for interest in neutrino oscillation is the requirement that neutrinos have mass. If they do, they exert gravitational effects and thus could account for at least some of the "dark matter" that is now thought to make up at least 90% of the mass of the universe. There is clear astronomical evidence, from the way that galaxies rotate and move about in clusters, that the bright stars and galaxies observable with telescopes or other instruments are embedded in much larger quantities of dark, or nonluminous, matter. The universe contains so many neutrinos—about a billion for every atom of ordinary matter—that even a small mass for each neutrino would add up to a lot of mass over the whole universe. Neutrinos, however, cannot be the only kind of dark matter in the universe. The year also brought further support for a finding announced in 1993 that galaxies like the Milky Way are embedded in dark halos made up of massive, comparatively small objects (dubbed MACHOs, for massive compact halo objects), rather like very faint stars or giant planets. The MACHOs were revealed by the way in which their gravitational influence magnified light from distant stars, making the stars brighten briefly as the MACHOs passed in front of them.

      Particle physicists were cautiously optimistic that high-energy experiments at the Fermi National Accelerator Laboratory near Chicago had detected the long-sought top quark. The researchers would say only that the evidence was "persuasive," although it seemed to fit the last piece into the jigsaw puzzle of particle physics. Everyday matter is thought to be made up of just 12 types of particles in two families of six particles each. One family, the leptons, consists of the electron and its partner, the electron neutrino, together with two successively heavier equivalents of the electron, the muon and the tau particle, plus their own neutrino partners. The other family, the quarks, also consists of three pairs, dubbed up and down, charmed and strange, and top and bottom. All of these particles, except the top quark, have already been detected in particle accelerator experiments. The as-yet-tentative addition of the top quark completes the set and suggests that physicists' model of what matter is made of really is valid and complete.

      The next task would be to use the completion of the particle puzzle to develop a better understanding of what went on during the conditions of extremely high density, pressure, and temperature that existed shortly after the birth of the universe, in the big bang itself. Physicists from several universities around the world were trying to re-create the conditions that existed in the big bang by smashing beams of heavy ions together head on at enormous speeds.

      Ions are atoms given a positive charge by being stripped of one or more negatively charged electrons. The positive charge provides a "handle" by which the ions can be gripped in magnetic fields and accelerated to speeds that are a sizable fraction of the speed of light. The aim is to produce a transient sea of free-moving quarks and gluons called the quark-gluon plasma. (Gluons are the entities that carry the strong force that binds quarks together into particles such as protons and neutrons, in a way analogous to that in which an exchange of photons—quantum packets of electromagnetic energy—between charged particles generates the electromagnetic force between the particles.)

      Preliminary experiments were under way at CERN (European Laboratory for Particle Physics) in Geneva and at Brookhaven National Laboratory, Upton, N.Y. To get a feel for the extreme conditions involved, one may consider what happens when the nucleus of an atom of gold that has been accelerated to 0.999957 times the speed of light collides head on with another gold nucleus moving at the same speed. The greatest naturally occurring density of matter in the universe today is in the atomic nucleus. Albert Einstein's special theory of relativity shows, and many experiments have confirmed, that at this speed the mass of the nucleus is increased to 108 times the mass that it has when stationary. At the same time, it contracts to just 1/108 of its rest length along the line of flight. In round terms the nucleus is 100 times heavier and 100 times shorter than when it is at rest, so it has increased in density 10,000 times. In the collision the two overlapping gold nuclei briefly create a density 20,000 times greater than that of an ordinary atomic nucleus.
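A quick back-of-the-envelope check of these figures, using the special-relativity formula γ = 1/√(1 − v²/c²) with the speed quoted above (the code itself is purely illustrative):

```python
import math

# Lorentz factor for a gold nucleus at the speed quoted in the text.
beta = 0.999957                         # speed as a fraction of the speed of light
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# Relativistic mass grows by a factor of gamma, while the length along the
# line of flight contracts by 1/gamma, so the density seen by an observer
# at rest grows by roughly gamma squared.
density_factor = gamma**2

print(gamma)           # ~107.8, i.e. about 108
print(density_factor)  # ~11,600, about 10,000 in round terms
```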

      Each nucleus is made up of protons and neutrons, and each proton and neutron is made up of three quarks. Under the extreme conditions generated in the collision—conditions that once existed in the big bang itself—the quarks from one nucleus interact directly with quarks from the other nucleus. The quarks are ripped from their nuclei, and new particles are created out of the pure kinetic energy associated with the collision, in line with Einstein's famous equation E = mc² (or, rather, m = E/c²).

      As of 1994, experimenters had gone an estimated one-fifth of the way toward reaching these extreme conditions, using collisions involving nuclei of sulfur instead of the heavier gold. The first, relatively low-energy, experiments with gold beams were carried out during the year. As particle physicists study successively bigger "little bangs" of this kind, they hope to unravel the secrets of how the universe exploded out of the big bang.

      Even as experiments advanced the understanding of how the world works, 1994 saw a revival of the debate about what it all means, reminiscent of the great debates of the quantum pioneers 60 years earlier. Quantum mechanics is the theory that describes the behaviour of matter on the very smallest scale. Among its many curious features, the theory says that an entity such as a photon or an electron can be described either as a solid particle, like a tiny billiard ball, or as a wave moving through space, like ripples on a pond. Einstein received his Nobel Prize for work demonstrating that photons of light exist as particles. Yet Geoff Jones, a British physicist, claimed in 1994 that it is "wrong and unnecessary" to describe light in terms of small, localized particles.

      In a paper published in the European Journal of Physics, Jones showed a way in which the behaviour of light and other electromagnetic radiation can be explained entirely in terms of waves and claimed that the entity that physicists call a photon is simply the addition or extraction of one unit of energy from the electromagnetic field (the "pond") that the waves move through. On the other hand, Jones said that such entities as electrons really are particles and should not be treated as waves. Echoing once-unfashionable ideas of the quantum theorist David Bohm, Jones said that the apparent waviness of electrons is caused by a separate wave associated with electrons, which guides their behaviour.

      All this might be just an esoteric curiosity for theorists and philosophers to debate were it confined to the world of electrons and photons. In 1994, however, researchers at the University of Paris-North, Villetaneuse, France, carried out experiments that seemed to show entire iodine molecules (I2), each with a mass 254 times that of a neutron, behaving as waves. These were the largest objects for which the quantum "wave-particle duality" had been observed. With detectable quantum effects edging into the everyday world, the debate about the real meaning of quantum mechanics seemed set for a major revival at year's end. (JOHN GRIBBIN)

▪ 1994

      The year 1993 began with a note of excitement for astrophysicists and cosmologists following release of results of new observations indicating that the stars, dust, and other observable matter in space represent less than 10% of all the mass in the universe. The results, which augmented other recent findings, supported a long-held belief among cosmologists that the universe holds a great deal of undetected "dark matter" and spurred the search for answers to what that matter could be.

      The idea that as much as 90% of all matter is nonluminous is founded mainly on measurements of the rate at which galaxies rotate and on analyses of the way in which they move about in clusters. The new evidence emerged from satellite data, taken by the Earth-orbiting ROSAT X-ray observatory, of the distribution and temperature of intergalactic gas clouds in a small cluster of galaxies known as NGC 2300. This information, together with the assumption that the gas is confined by gravity to remain in the vicinity of the group, allowed ROSAT's team of scientists to estimate the total mass of NGC 2300. They concluded that visible matter could account for only about 4% (with an upper limit of 15%) of the total mass. Previous estimates had given much higher values but had been based on observations of gas clouds in rich galaxy clusters where additional gas ejected as jets from the galaxies themselves complicates the interpretation.

      The new results rekindled much speculation as to the physical nature of the dark matter. One idea was that the missing mass may be hidden in starlike or planetlike objects that reside mainly in a halo of matter surrounding a galaxy and that, for various reasons, do not emit enough light to be detectable. Black holes may be an example since they are collapsed stars so massive that the gravitational attraction near them is too great to allow light to escape. Large stray planets and stellar remnants that have ceased to shine are other possibilities. The term MACHO (for massive compact halo object) gained popularity in some quarters to describe this candidate class of dark matter.

      Some physicists preferred a less prosaic explanation for dark matter. Guided by predictions from the big bang theory of the birth of the universe and the present rate of cosmic expansion, they proposed that ordinary matter, such as that which forms planets, stars, and other cosmic objects, accounts for only a small fraction of the total mass of the universe and that a sea of hitherto undetected elementary particles filling the cosmos provides the remainder. A wide variety of particles with different exotic properties were suggested, often with correspondingly bizarre names. Axions, magnetic monopoles, and WIMPs (for weakly interacting massive particles) fell into a category known as "cold" dark matter, which would clump together readily, while at the other extreme lay "hot" dark matter, which would be dispersed more uniformly throughout the universe.

      The one thing on which dark-matter researchers were agreed was that any resolution of the problem would have to come from experimental observation. Accordingly, three teams of researchers began an intensive search for MACHOs by a method first suggested by Princeton University astrophysicist Bohdan Paczynski. The technique involves studying the systematic variations in the light intensity of millions of distant bright stars over several years. The principle of the technique is that, were a MACHO to pass through the line of sight to a distant star, the object's gravitational field would focus the light from the star, rather like a lens, and terrestrial observers should see a momentary enhancement in the star's brightness.

      Meanwhile, the search for dark-matter particles also began, but closer to home. For example, an experiment was set up in a tunnel at the Stanford High Energy Physics Laboratory that used a large germanium detector sensitive to the ionization produced when an atomic nucleus is struck by a WIMP or other dark-matter particle. A great deal of attention was focused on these searches, and with good reason: dark matter enters into many of the theories of the origin of the universe and its present large-scale structure, and also into models of gravity and other fundamental forces between particles. Thus, the dark-matter hunters were poised to shed light into many a murky corner of theoretical physics. (See Astronomy .)

      The year saw a revival of interest in superconductivity, the strange property possessed by a small number of materials whereby, below a certain transition temperature, typically only a few degrees above absolute zero (zero kelvin, or 0 K), they lose all resistance to the flow of electric current. (To convert kelvins to degrees Celsius, subtract 273; thus, 0 K = -273° C. To convert Celsius to Fahrenheit, multiply by 1.8 and add 32.) Superconductors had taken centre stage in physics several years earlier with the discovery of a new class of superconducting ceramic compounds—mixed metal oxides characterized by crystal structures containing sheets of copper and oxygen atoms—that become superconducting at temperatures as high as five times the previous record. These new so-called high-temperature superconductors appeared to hold great technological promise, as they could function as resistance-free conductors at temperatures maintained by liquid nitrogen (which boils at 77 K), a coolant that is relatively easy and cheap to obtain. After the initial discovery there ensued a period of frantic research during which the superconducting transition temperature was quickly pushed up to 125 K, but thereafter scientists made no further progress in the quest for higher transition temperatures. Worse, theoretical efforts failed to yield any consensus on what mechanism caused the superconductivity in the new materials, and attempts to make practical devices out of them ran into serious difficulties because of their brittle, ceramic nature.
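The conversions mentioned parenthetically above can be captured in two one-line helpers (a minimal sketch using the article's rounded offset of 273; the exact figure is 273.15):

```python
def kelvin_to_celsius(k):
    # The article's rounded rule: subtract 273 (exact value: 273.15).
    return k - 273

def celsius_to_fahrenheit(c):
    return c * 1.8 + 32

# Liquid nitrogen boils at 77 K:
c = kelvin_to_celsius(77)       # -196 degrees Celsius
f = celsius_to_fahrenheit(c)    # about -321 degrees Fahrenheit
```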

      During 1993 encouraging progress was made in each of these problematic areas, and it seemed that the high-temperature-superconductor wagon once again had started to roll. A new compound in the family, incorporating mercury atoms, was discovered that becomes superconducting near 135 K at atmospheric pressure and at temperatures around 150 K when subjected to high pressures. The crystal structure of the new compound is relatively simple, suggesting that it may be a better material to use for fundamental investigations of the physical properties of high-temperature superconductors. Furthermore, in preliminary work the material appeared to perform well when subjected to magnetic fields, a behaviour encouraging for applications in the superconducting-magnet industry. (See Chemistry .)

      Rapid improvements in techniques for manufacturing high-temperature superconductors into forms suitable for practical devices were made in 1993. Methods were developed for converting the brittle materials into flexible wires, and several companies began selling wires 100 m (330 ft) long for use as underground power-transmission cables. Even more promising results came in the area of thin films, in which high-temperature superconductors offer great potential for faster and smaller electronic circuits and highly sensitive detectors of magnetic fields. The use of conventional metal conductors in such devices is limited by the amount of heat generated in the metal films: the smaller the conducting channels are made, the greater is their resistance to current flow. Superconductors avoid problems of heating because they have zero resistance, so designers can pack channels more closely together and thereby reduce the size of microelectronic components. Because circuits are smaller, a signal takes less time to travel from one point to another, and so operation of the device is faster. Several companies began marketing devices incorporating high-temperature superconductors, including high-frequency microwave circuitry and detectors of very weak magnetic fields.

      Numerous theories had been proposed to explain high-temperature superconductivity, most of which were too unspecific or abstract to be tested directly by experiment. The only feature common to all the ideas was that superconductivity occurs when the electrons responsible for electrical conduction become bound in pairs. In high-temperature superconductors the binding force responsible for this coupling remained a mystery. By 1993 only a small number of serious contenders remained among the theories and, more significantly, the architects of these theories had begun to build into them sufficient detail that predictions could be made for measurable properties, allowing a direct evaluation of the models. This evolution of theoretical work brought greater focus to the experimental measurements, and many physicists believed that a solution to the problem was close at hand. The less optimistic, however, pointed to historical precedent, noting that all the major advances in the field of superconductivity had occurred through either chance or intuition. Whatever their perspective, most scientists agreed that the year had been a turning point for the field. (See Chemistry .)


      This updates the articles Cosmos; low-temperature phenomena; subatomic particle; physics.

* * *


      science that deals with the structure of matter and the interactions between the fundamental constituents of the observable universe. In the broadest sense, physics (from the Greek physikos) is concerned with all aspects of nature on both the macroscopic and submicroscopic levels. Its scope of study encompasses not only the behaviour of objects under the action of given forces but also the nature and origin of gravitational, electromagnetic, and nuclear force fields. Its ultimate objective is the formulation of a few comprehensive principles that bring together and explain all such disparate phenomena.

      Physics is the basic physical science. Until rather recent times the terms physics and natural philosophy were used interchangeably for the science whose aim is the discovery and formulation of the fundamental laws of nature. As the modern sciences developed and became increasingly specialized, physics came to denote that part of physical science not included in astronomy, chemistry, geology, and engineering. Physics plays an important role in all the natural sciences, however, and all such fields have branches in which physical laws and measurements receive special emphasis, bearing such names as astrophysics, geophysics, biophysics, and even psychophysics. Physics can, at base, be defined as the science of matter, motion, and energy. Its laws are typically expressed with economy and precision in the language of mathematics.

      Both experiment, the observation of phenomena under conditions that are controlled as precisely as possible, and theory, the formulation of a unified conceptual framework, play essential and complementary roles in the advancement of physics. Physical experiments result in measurements, which are compared with the outcome predicted by theory. A theory that reliably predicts the results of experiments to which it is applicable is said to embody a law of physics. However, a law is always subject to modification, replacement, or restriction to a more limited domain, if a later experiment makes it necessary.

      The ultimate aim of physics is to find a unified set of laws governing matter, motion, and energy at small (microscopic) subatomic distances, at the human (macroscopic) scale of everyday life, and out to the largest distances (e.g., those on the extragalactic scale). This ambitious goal has been realized to a notable extent. Although a completely unified theory of physical phenomena has not yet been achieved (and possibly never will be), a remarkably small set of fundamental physical laws appears able to account for all known phenomena. The body of physics developed up to about the turn of the 20th century, known as classical physics, can largely account for the motions of macroscopic objects that move slowly with respect to the speed of light and for such phenomena as heat, sound, electricity, magnetism, and light. The modern developments of relativity and quantum theory modify these laws insofar as they apply to higher speeds, to very massive objects, and to the tiny elementary constituents of matter, such as electrons, protons, and neutrons.

The scope of physics
      The traditionally organized branches or fields of classical and modern physics are delineated below.

      Mechanics is generally taken to mean the study of the motion of objects (or their lack of motion) under the action of given forces. Classical mechanics is sometimes considered a branch of applied mathematics. It consists of kinematics, the description of motion, and dynamics, the study of the action of forces in producing either motion or static equilibrium (the latter constituting the science of statics). The 20th-century subjects of quantum mechanics, crucial to treating the structure of matter, subatomic particles, superfluidity, superconductivity, neutron stars, and other major phenomena, and relativistic mechanics, important when speeds approach that of light, are forms of mechanics that will be discussed later in this section.

      In classical mechanics the laws are initially formulated for point particles in which the dimensions, shapes, and other intrinsic properties of bodies are ignored. Thus in the first approximation even objects as large as the Earth and the Sun are treated as pointlike—e.g., in calculating planetary orbital motion. In rigid-body dynamics, the extension of bodies and their mass distributions are considered as well, but they are imagined to be incapable of deformation. The mechanics of deformable solids is elasticity; hydrostatics and hydrodynamics treat, respectively, fluids at rest and in motion.

      The three laws of motion set forth by Isaac Newton form the foundation of classical mechanics, together with the recognition that forces are directed quantities (vectors) and combine accordingly. The first law, also called the law of inertia, states that, unless acted upon by an external force, an object at rest remains at rest, or if in motion, it continues to move in a straight line with constant speed. Uniform motion therefore does not require a cause. Accordingly, mechanics concentrates not on motion as such but on the change in the state of motion of an object that results from the net force acting upon it. Newton's second law equates the net force on an object to the rate of change of its momentum, the latter being the product of the mass of a body and its velocity. Newton's third law, that of action and reaction, states that when two particles interact, the forces each exerts on the other are equal in magnitude and opposite in direction. Taken together, these mechanical laws in principle permit the determination of the future motions of a set of particles, provided that their state of motion is known at some instant, as well as the forces that act between them and upon them from the outside. From this deterministic character of the laws of classical mechanics, profound (and probably incorrect) philosophical conclusions have been drawn in the past and even applied to human history.
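For constant mass, the second law reduces to F = ma, and the future motion it determines can be sketched with a simple step-by-step integration (all values below are arbitrary illustrative choices, not from the text):

```python
# Euler integration of Newton's second law for a constant force.
mass = 2.0        # kg (illustrative)
force = 10.0      # N, constant (illustrative)
dt = 0.001        # time step, s
velocity = 0.0    # m/s
position = 0.0    # m

for _ in range(1000):             # integrate over one second
    acceleration = force / mass   # second law: a = F/m
    velocity += acceleration * dt
    position += velocity * dt

# With a = 5 m/s^2 held for 1 s, velocity approaches a*t = 5 m/s
# and position approaches a*t^2/2 = 2.5 m.
```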

      Lying at the most basic level of physics, the laws of mechanics are characterized by certain symmetry properties, as exemplified in the aforementioned symmetry between action and reaction forces. Other symmetries, such as the invariance (i.e., unchanging form) of the laws under reflections and rotations carried out in space, reversal of time, or transformation to a different part of space or to a different epoch of time, are present both in classical mechanics and in relativistic mechanics, and with certain restrictions, also in quantum mechanics. The symmetry properties of the theory can be shown to have as mathematical consequences basic principles known as conservation laws, which assert the constancy in time of the values of certain physical quantities under prescribed conditions. The conserved quantities are the most important ones in physics; included among them are mass and energy (in relativity theory, mass and energy are equivalent and are conserved together), momentum, angular momentum, and electric charge.

The study of gravitation
      This field of inquiry has traditionally been placed within classical mechanics, both because Newton brought the two fields to a high state of perfection and because gravitation is universal in character. Newton's gravitational law states that every material particle in the universe attracts every other one with a force that acts along the line joining them and whose strength is directly proportional to the product of their masses and inversely proportional to the square of their separation. Newton's detailed accounting, through this fundamental force, for the orbits of the planets and the Moon, as well as for such subtle gravitational effects as the tides and the precession of the equinoxes (a slow cyclical change in direction of the Earth's axis of rotation), was the first triumph of classical mechanics. No further principles are required to understand the principal aspects of rocketry and space flight (although, of course, a formidable technology is needed to carry them out).
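In symbols the law reads F = Gm₁m₂/r². One familiar consequence, sketched below with standard textbook values (not figures from the text), is the acceleration g at the Earth's surface:

```python
G = 6.674e-11        # gravitational constant, N m^2 kg^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

# Newton's law gives the force per unit mass at the surface,
# which by the second law is the acceleration g.
g = G * M_EARTH / R_EARTH**2
print(g)   # ~9.82 m/s^2
```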

      The modern theory of gravitation was formulated by Albert Einstein and is called the general theory of relativity. From the long-known equality of the quantity “mass” in Newton's second law of motion and that in his gravitational law, Einstein was struck by the fact that acceleration can locally annul a gravitational force (as occurs in the so-called weightlessness of astronauts in an Earth-orbiting spacecraft) and was led thereby to the concept of curved space-time. Completed in 1915, the theory was valued for many years mainly for its mathematical beauty and for correctly predicting a small number of phenomena, such as the gravitational bending of light around a massive object. Only in recent years, however, has it become a vital subject for both theoretical and experimental research. (Relativistic mechanics, discussed below, refers to Einstein's special theory of relativity, which is not a theory of gravitation.)

The study of heat and thermodynamics
      Heat is a form of internal energy associated with the random motion of the molecular constituents of matter or with radiation. Temperature is a measure of the average of part of the internal energy present in a body (it does not include the energy of molecular binding or of molecular rotation). The lowest possible energy state of a substance is defined as the absolute zero of temperature. An isolated body eventually reaches uniform temperature, a state known as thermal equilibrium, as do two or more bodies placed in contact. The formal study of states of matter at (or near) thermal equilibrium is called thermodynamics; it is capable of analyzing a large variety of thermal systems without considering their detailed microstructures.

First law
      The first law of thermodynamics is the energy conservation principle of mechanics (i.e., for all changes in an isolated system, the energy remains constant) generalized to include heat.

Second law
      The second law of thermodynamics asserts that heat will not flow from a place of lower temperature to one of higher temperature without the intervention of an external device (e.g., a refrigerator). Entropy is a measure of the disorder of the particles making up a system. For example, if tossing a coin many times yields a random-appearing sequence of heads and tails, the result has a higher entropy than if heads and tails tend to appear in clusters. Another formulation of the second law is that the entropy of an isolated system never decreases with time.
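The coin example can be made quantitative with a toy, Boltzmann-style count of microstates, S = ln W, where W is the number of distinct sequences with a given number of heads (this sketch is illustrative, not a calculation from the text):

```python
import math

def sequence_entropy(n, k):
    """Boltzmann-style entropy S = ln W, where W is the number of
    n-toss sequences containing exactly k heads (n choose k)."""
    return math.log(math.comb(n, k))

n = 100
s_clustered = sequence_entropy(n, 0)   # all tails: W = 1, so S = ln 1 = 0
s_mixed = sequence_entropy(n, 50)      # 50/50 mix: W is astronomically large

print(s_clustered)   # 0.0
print(s_mixed)       # ~66.8
```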

Third law
      The third law of thermodynamics states that the entropy at the absolute zero of temperature is zero, corresponding to the most ordered possible state.

      The science of statistical mechanics derives bulk properties of systems from the mechanical properties of their molecular constituents, assuming molecular chaos and applying the laws of probability. If each possible configuration of the particles is regarded as equally likely, the chaotic state (the state of maximum entropy) is so enormously more likely than ordered states that an isolated system will evolve to it, as stated in the second law of thermodynamics. Such reasoning, placed in mathematically precise form, is typical of statistical mechanics, which is capable of deriving the laws of thermodynamics but goes beyond them in describing fluctuations (i.e., temporary departures) from the thermodynamic laws that describe only average behaviour. An example of a fluctuation phenomenon is the random motion of small particles suspended in a fluid, known as Brownian motion.

      Quantum statistical mechanics plays a major role in many other modern fields of science, as, for example, in plasma physics (the study of fully ionized gases), in solid-state physics, and in the study of stellar structure. From a microscopic point of view the laws of thermodynamics imply that, whereas the total quantity of energy of any isolated system is constant, what might be called the quality of this energy is degraded as the system moves inexorably, through the operation of the laws of chance, to states of increasing disorder until it finally reaches the state of maximum disorder (maximum entropy), in which all parts of the system are at the same temperature and none of its energy may be usefully employed. When applied to the universe as a whole, considered as an isolated system, this ultimate chaotic condition has been called the “heat death.”

The study of electricity and magnetism
      Although conceived of as distinct phenomena until the 19th century, electricity and magnetism are now known to be components of the unified field of electromagnetism. Particles with electric charge interact by an electric force, while charged particles in motion produce and respond to magnetic forces as well. Many subatomic particles, including the electrically charged electron and proton and the electrically neutral neutron, behave like elementary magnets. On the other hand, despite systematic searches, no magnetic monopoles, which would be the magnetic analogues of electric charges, have ever been found.

      The field concept plays a central role in the classical formulation of electromagnetism, as well as in many other areas of classical and contemporary physics. Einstein's gravitational field, for example, replaces Newton's concept of gravitational action at a distance. The field describing the electric force between a pair of charged particles works in the following manner: each particle creates an electric field in the space surrounding it, and so also at the position occupied by the other particle; each particle responds to the force exerted upon it by the electric field at its own position.

      Classical electromagnetism is summarized by the laws of action of electric and magnetic fields upon electric charges and upon magnets and by four remarkable equations (Maxwell's equations) formulated in the latter part of the 19th century by James Clerk Maxwell. The latter equations describe the manner in which electric charges and currents produce electric and magnetic fields, as well as the manner in which changing magnetic fields produce electric fields, and vice versa. From these relations Maxwell inferred the existence of electromagnetic waves—associated electric and magnetic fields in space, detached from the charges that created them, traveling at the speed of light, and endowed with such “mechanical” properties as energy, momentum, and angular momentum. The light to which the human eye is sensitive is but one small segment of an electromagnetic spectrum that extends from long-wavelength radio waves to short-wavelength gamma rays and includes X rays, microwaves, and infrared (or heat) radiation.
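
Maxwell's inference can be illustrated numerically: the wave speed predicted by his equations, 1/√(μ₀ε₀), is built from two constants of electricity and magnetism and comes out equal to the measured speed of light. A sketch in Python (SI values; μ₀ is given its classical defined value for simplicity):

```python
from math import pi, sqrt

mu0 = 4 * pi * 1e-7        # vacuum permeability, T·m/A (classical defined value)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

c = 1 / sqrt(mu0 * eps0)   # speed of Maxwell's electromagnetic waves
print(c)                   # about 2.998e8 m/s, the measured speed of light
```

This numerical coincidence was Maxwell's chief evidence that light itself is an electromagnetic wave.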

      Because light consists of electromagnetic waves, the propagation of light can be regarded as merely a branch of electromagnetism. However, it is usually dealt with as a separate subject called optics: the part that deals with the tracing of light rays is known as geometrical optics, while the part that treats the distinctive wave phenomena of light is called physical optics. More recently, a new and vital branch, quantum optics, has developed; it is concerned with the theory and application of the laser, a device that produces an intense coherent beam of unidirectional radiation with many uses.

      The formation of images by lenses, microscopes, telescopes, and other optical devices is described by ray optics, which assumes that the passage of light can be represented by straight lines, that is, rays. The subtler effects attributable to the wave property of visible light, however, require the explanations of physical optics. One basic wave effect is interference, whereby two waves present in a region of space combine at certain points to yield an enhanced resultant effect (e.g., the crests of the component waves adding together); at the other extreme, the two waves can annul each other, the crests of one wave filling in the troughs of the other. Another wave effect is diffraction, which causes light to spread into regions of the geometric shadow and causes the image produced by any optical device to be fuzzy to a degree dependent on the wavelength of the light. Optical instruments such as the interferometer and the diffraction grating can be used for measuring the wavelength of light precisely (about 0.5 micrometre, or 500 nanometres, for visible light) and for measuring distances to a small fraction of that length.
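
A diffraction grating measures wavelength through the relation d·sin θ = mλ, where d is the line spacing and m the diffraction order. A short numerical illustration (the grating density and measured angle are assumed, illustrative values):

```python
from math import sin, radians

# Grating equation: d * sin(theta) = m * wavelength
d = 1e-3 / 600           # spacing of a 600-lines-per-mm grating, in metres
m = 1                    # first-order maximum
theta = radians(17.46)   # assumed measured diffraction angle

wavelength = d * sin(theta) / m
print(wavelength)        # about 5.0e-7 m, i.e. roughly 500 nanometres (green light)
```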

Atomic and chemical physics
      One of the great achievements of the 20th century has been the establishment of the validity of the atomic hypothesis, first proposed in ancient times, that matter is made up of relatively few kinds of small, identical parts—namely, atoms. However, unlike the indivisible atom of Democritus and other ancients, the atom, as it is conceived today, can be separated into constituent electrons and nucleus. Atoms combine to form molecules, whose structure is studied by chemistry and chemical physics; they also form other types of compounds, such as crystals, studied in the field of condensed-matter physics. Such disciplines study the most important attributes of matter (not excluding biologic matter) that are encountered in normal experience—namely, those that depend almost entirely on the outer parts of the electronic structure of atoms. Only the mass of the atomic nucleus and its charge, which is equal to the total charge of the electrons in the neutral atom, affect the chemical and physical properties of matter.

      Although there are some analogies between the solar system and the atom due to the fact that the strengths of gravitational and electrostatic forces both fall off as the inverse square of the distance, the classical forms of electromagnetism and mechanics fail when applied to tiny, rapidly moving atomic constituents. Atomic structure is comprehensible only on the basis of quantum mechanics, and its finer details require as well the use of quantum electrodynamics (QED) (see below).

      Atomic properties are inferred mostly by the use of indirect experiments. Of greatest importance has been spectroscopy, which is concerned with the measurement and interpretation of the electromagnetic radiations either emitted or absorbed by materials. These radiations have a distinctive character, which quantum mechanics relates quantitatively to the structures that produce and absorb them. It is truly remarkable that these structures are in principle, and often in practice, amenable to precise calculation in terms of a few basic physical constants: the mass and charge of the electron, the speed of light, and Planck's constant ℏ, the fundamental constant of the quantum theory named for the German physicist Max Planck.
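
As an illustration of such a calculation (a sketch using the non-relativistic Bohr/Schrödinger result, with fine-structure corrections ignored), the wavelength of hydrogen's red Balmer line follows from the constants listed above:

```python
m_e = 9.1093837e-31      # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
h = 6.62607015e-34       # Planck's constant, J·s
c = 2.99792458e8         # speed of light, m/s

def level(n):
    """Energy of the nth hydrogen level, E_n = -m e^4 / (8 eps0^2 h^2 n^2), in joules."""
    return -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)

photon = level(3) - level(2)   # Balmer-alpha transition, n = 3 -> 2
wavelength = h * c / photon
print(wavelength)              # about 6.56e-7 m: the observed red H-alpha line
```

That a handful of constants reproduces a measured spectral line to this accuracy is exactly the "truly remarkable" agreement described above.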

Condensed-matter physics
      This field, which treats the thermal, elastic, electrical, magnetic, and optical properties of solid and liquid substances, has grown at an explosive rate in recent years and has scored numerous important scientific and technical achievements, including the transistor. Among solid materials, the greatest theoretical advances have been in the study of crystalline materials whose simple repetitive geometric arrays of atoms are multiple-particle systems that allow treatment by quantum mechanics. Because the atoms in a solid are coordinated with each other over large distances, the theory must go beyond that appropriate for atoms and molecules. Thus conductors, such as metals, contain some so-called free (or conduction) electrons, which are responsible for the electrical and most of the thermal conductivity of the material and which belong collectively to the whole solid rather than to individual atoms. Semiconductors and insulators, either crystalline or amorphous, are other materials studied in this field of physics.

      Other aspects of condensed matter involve the properties of the ordinary liquid state, of liquid crystals, and, at temperatures near absolute zero, of the so-called quantum liquids. The latter exhibit a property known as superfluidity (completely frictionless flow), which is an example of macroscopic quantum phenomena. Such phenomena are also exemplified by superconductivity (completely resistance-less flow of electricity), a low-temperature property of certain metallic and ceramic materials. Besides their significance to technology, macroscopic liquid and solid quantum states are important in astrophysical theories of stellar structure in, for example, neutron stars.

Nuclear physics
      This branch of physics deals with the structure of the atomic nucleus and the radiation from unstable nuclei. The nucleus is about 10,000 times smaller than the atom, and its constituent particles, protons and neutrons, attract one another so strongly by the nuclear forces that nuclear energies are approximately 1,000,000 times larger than typical atomic energies. Quantum theory is needed for understanding nuclear structure.

      Like excited atoms, unstable radioactive nuclei (either naturally occurring or artificially produced) can emit electromagnetic radiation. The energetic nuclear photons are called gamma rays. Radioactive nuclei also emit other particles: negative and positive electrons (beta rays), accompanied by neutrinos, and helium nuclei (alpha rays).

      A principal research tool of nuclear physics involves the use of beams of particles (e.g., protons or electrons) directed as projectiles against nuclear targets. Recoiling particles and any resultant nuclear fragments are detected, and their directions and energies are analyzed to reveal details of nuclear structure and to learn more about the strong nuclear force. A much weaker nuclear force, the so-called weak interaction, is responsible for the emission of beta rays. Nuclear collision experiments use beams of higher-energy particles, including those of unstable particles called mesons produced by primary nuclear collisions in accelerators dubbed meson factories. Exchange of mesons between protons and neutrons is directly responsible for the strong nuclear force. (For the mechanism underlying mesons, see below Fundamental forces and fields.)

      In radioactivity and in collisions leading to nuclear breakup, the chemical identity of the nuclear target is altered whenever there is a change in the nuclear charge. In fission and fusion nuclear reactions in which unstable nuclei are, respectively, split into smaller nuclei or amalgamated into larger ones, the energy release far exceeds that of any chemical reaction.

Particle physics
      One of the most significant branches of contemporary physics is the study of the fundamental subatomic constituents of matter, the elementary particles. This field, also called high-energy physics, emerged in the 1930s out of the developing experimental areas of nuclear and cosmic-ray physics. Initially investigators studied cosmic rays, the very-high-energy extraterrestrial radiations that fall upon the Earth and interact in the atmosphere (see below The methodology of physics). However, after World War II, scientists gradually began using high-energy particle accelerators to provide subatomic particles for study. Quantum field theory, a generalization of QED to other types of force fields, is essential for the analysis of high-energy physics.

      During recent decades a coherent picture has evolved of the underlying strata of matter involving three types of particles called leptons, quarks, and field quanta, for which the evidence is good. (Other types of particles have been hypothesized but have not yet been detected.) Subatomic particles cannot be visualized as tiny analogues of ordinary material objects such as billiard balls, for they have properties that appear contradictory from the classical viewpoint. That is to say, while they possess charge, spin, mass, magnetism, and other complex characteristics, they are nonetheless regarded as pointlike. Leptons and quarks occur in pairs (e.g., one lepton pair consists of the electron and the neutrino). Each quark and each lepton has an antiparticle with properties that mirror those of its partner (the antiparticle of the negatively charged electron is the positive electron, or positron; that of the neutrino is the antineutrino). In addition to their electric and magnetic properties, quarks participate in the strong nuclear force and in the weak nuclear interaction, while leptons take part in only the weak interaction.

      Ordinary matter consists of electrons surrounding the nucleus, which is composed of neutrons and protons, each of which is believed to contain three quarks. Quarks carry charges of either +2/3 or −1/3 of the magnitude of the electron's charge, while antiquarks have the opposite charges. Mesons, responsible for the nuclear binding force, are composed of one quark and one antiquark. In addition to the particles in ordinary matter and their antiparticles, which are referred to as first-generation, there are probably two or more additional generations of quarks and leptons, more massive than the first. Evidence exists at present for the second generation and all but one quark of the third, namely the t (or top) quark, which may be so massive that a new higher-energy accelerator may be needed to produce it.
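
The charge arithmetic can be checked in a few lines (the quark labels u and d and the compositions shown are the standard first-generation assignments):

```python
from fractions import Fraction

# Quark charges in units of the magnitude of the electron's charge
charge = {'u': Fraction(2, 3), 'd': Fraction(-1, 3)}

def total(quarks):
    """Sum the charges of a quark combination, e.g. 'uud' for the proton."""
    return sum(charge[q] for q in quarks)

print(total('uud'))             # proton: 1
print(total('udd'))             # neutron: 0
# A meson pairs a quark with an antiquark, whose charge is opposite:
print(charge['u'] - charge['d'])  # e.g. u with anti-d gives charge 1
```

Exact fractions make plain why every observable composite carries an integral multiple of the electron's charge.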

      The quantum fields through which quarks and leptons interact with each other and with themselves consist of particle-like objects called quanta (from which quantum mechanics derives its name). The first known quanta were those of the electromagnetic field; they are also called photons because light consists of them. A modern unified theory of weak and electromagnetic interactions, known as the electroweak theory, proposes that the weak nuclear interaction involves the exchange of particles about 100 times as massive as protons. These massive quanta have been observed—namely, two charged particles, W+ and W−, and a neutral one, Z0.

      In the theory of strong nuclear interactions known as quantum chromodynamics (QCD), eight quanta, called gluons, bind quarks to form protons and neutrons and also bind quarks to antiquarks to form mesons, the force itself being dubbed the “colour force.” (This unusual use of the term colour is a somewhat forced analogue of ordinary colour mixing.) Quarks are said to come in three colours—red, blue, and green. (The opposites of these imaginary colours, minus-red, minus-blue, and minus-green, are ascribed to antiquarks.) Only certain colour combinations, namely colour-neutral, or “white” (i.e., equal mixtures of the above colours cancel out one another, resulting in no net colour), are conjectured to exist in nature in an observable form. The gluons and quarks themselves, being coloured, are permanently confined (deeply bound within the particles of which they are a part), while the colour-neutral composites such as protons can be directly observed. One consequence of colour confinement is that the observable particles are either electrically neutral or have charges that are integral multiples of the charge of the electron. A number of specific predictions of QCD have been experimentally tested and found correct.

      Although the various branches of physics differ in their experimental methods and theoretical approaches, certain general principles apply to all of them. The forefront of contemporary advances in physics lies in the submicroscopic regime, whether it be in atomic, nuclear, condensed-matter, plasma, or particle physics, or in quantum optics, or even in the study of stellar structure. All are based upon quantum theory (i.e., quantum mechanics and quantum field theory) and relativity, which together form the theoretical foundations of modern physics. Many physical quantities whose classical counterparts vary continuously over a range of possible values are in quantum theory constrained to have discontinuous, or discrete, values. The intrinsically deterministic character of classical physics is replaced in quantum theory by intrinsic uncertainty.

      According to quantum theory, electromagnetic radiation does not always consist of continuous waves; instead it must be viewed under some circumstances as a collection of particle-like photons, the energy and momentum of each being directly proportional to its frequency (or inversely proportional to its wavelength, the photons still possessing some wavelike characteristics). Conversely, electrons and other objects that appear as particles in classical physics are endowed by quantum theory with wavelike properties as well, such a particle's quantum wavelength being inversely proportional to its momentum. In both instances, the proportionality constant is the characteristic quantum of action (action being defined as energy × time)—that is to say, Planck's constant h.
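
Both proportionalities involve the same constant. A short numerical illustration (the photon frequency and electron speed are assumed values chosen for convenience):

```python
h = 6.62607015e-34     # Planck's constant, J·s

# Energy of a photon of green light, E = h * f:
f = 6.0e14             # Hz (illustrative)
E_photon = h * f
print(E_photon)        # about 4.0e-19 J, roughly 2.5 electron volts

# de Broglie wavelength of an electron, lambda = h / p:
m_e = 9.109e-31        # electron mass, kg
wavelength = h / (m_e * 1.0e6)   # electron at an assumed 1e6 m/s
print(wavelength)      # about 7.3e-10 m, comparable to atomic dimensions
```

The electron's wavelength coming out atom-sized is why wave effects dominate atomic structure while remaining invisible for everyday objects.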

      In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behaviour, as well as the spectroscopic, electrical, and other physical properties of atoms, molecules, and condensed matter, can be accounted for by quantum mechanics. Roughly speaking, the electrons in the atom must fit around the nucleus as some sort of standing wave (the Schrödinger wave function; see above) analogous to the waves on a plucked violin or guitar string. As the fit determines the wavelength of the quantum wave, it necessarily determines its energy state. Consequently, atomic systems are restricted to certain discrete, or quantized, energies. When an atom undergoes a discontinuous transition, or quantum jump, its energy changes abruptly by a sharply defined amount, and a photon of that energy is emitted when the energy of the atom decreases, or is absorbed in the opposite case.

      Although atomic energies can be sharply defined, the positions of the electrons within the atom cannot be, quantum mechanics giving only the probability for the electrons to have certain locations. This is a consequence of the feature that distinguishes quantum theory from all other approaches to physics, the indeterminacy (or uncertainty) principle of Werner Heisenberg. As was explained earlier in the article, this principle holds that measuring a particle's position with increasing precision necessarily increases the uncertainty as to the particle's momentum, and conversely. The ultimate degree of uncertainty is controlled by the magnitude of Planck's constant, which is so small as to have no apparent effects except in the world of microstructures. In the latter case, however, because both a particle's position and its velocity or momentum must be known precisely at some instant in order to predict its future history, quantum theory precludes such certain prediction and thus escapes determinism.
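
The scale of the effect is easy to estimate from the relation Δx·Δp ≥ ℏ/2 (the one-angstrom confinement length below is an assumed, atom-sized value):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J·s
m_e = 9.109e-31          # electron mass, kg

delta_x = 1e-10                  # confine an electron to about one atomic diameter
delta_p = hbar / (2 * delta_x)   # minimum momentum uncertainty
print(delta_p / m_e)             # minimum velocity spread, about 5.8e5 m/s
```

For an electron the spread is enormous, whereas the same calculation for a macroscopic body gives an utterly negligible figure, which is why the principle has no apparent everyday effects.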

      The complementary wave and particle aspects, or wave–particle duality, of electromagnetic radiation and of material particles furnish another illustration of the uncertainty principle. When an electron exhibits wavelike behaviour, as in the phenomenon of electron diffraction, this excludes its exhibiting particle-like behaviour in the same observation. Similarly, when electromagnetic radiation in the form of photons interacts with matter, as in the Compton effect in which X-ray photons collide with electrons, the result resembles a particle-like collision and the wave nature of electromagnetic radiation is precluded. The principle of complementarity, asserted by Niels Bohr, who pioneered the theory of atomic structure, states that the physical world presents itself in the form of various complementary pictures, no one of which is by itself complete, all of these pictures being essential for our total understanding. Thus both wave and particle pictures are needed for understanding either the electron or the photon.

      Although it deals with probabilities and uncertainties, the quantum theory has been spectacularly successful in explaining otherwise inaccessible atomic phenomena and in thus far meeting every experimental test. Its predictions, especially those of QED, are the most precise and the best checked of any in physics; some of them have been tested and found accurate to better than one part per billion.

      In classical physics, space is conceived as having the absolute character of an empty stage in which events in nature unfold as time flows onward independently; events occurring simultaneously for one observer are presumed to be simultaneous for any other; mass is taken as impossible to create or destroy; and a particle given sufficient energy acquires a velocity that can increase without limit. The special theory of relativity, developed principally by Einstein in 1905 and now so adequately confirmed by experiment as to have the status of physical law, shows that all these, as well as other apparently obvious assumptions, are false.

      Specific and unusual relativistic effects flow directly from Einstein's two basic postulates, which are formulated in terms of so-called inertial reference frames. These are reference systems that move in such a way that in them Newton's first law, the law of inertia, is valid. The set of inertial frames consists of all those that move with constant velocity with respect to each other (accelerating frames therefore being excluded). Einstein's postulates are: (1) All observers, whatever their state of motion relative to a light source, measure the same speed for light; and (2) The laws of physics are the same in all inertial frames.

      The first postulate, the constancy of the speed of light, is an experimental fact from which follow the distinctive relativistic phenomena of space contraction, time dilation, and the relativity of simultaneity: as measured by an observer assumed to be at rest, an object in motion is contracted along the direction of its motion, and moving clocks run slow; two spatially separated events that are simultaneous for a stationary observer occur sequentially for a moving observer. As a consequence, space intervals in three-dimensional space are related to time intervals, thus forming so-called four-dimensional space-time.
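
The size of these effects is governed by the Lorentz factor 1/√(1 − v²/c²). A brief sketch (the 99.5%-of-c speed and 2.2-microsecond lifetime are illustrative values, reminiscent of the cosmic-ray muon):

```python
from math import sqrt

c = 2.99792458e8   # speed of light, m/s

def gamma(v):
    """Lorentz factor: moving clocks run slow, and moving lengths contract, by this factor."""
    return 1 / sqrt(1 - (v / c) ** 2)

g = gamma(0.995 * c)
print(g)            # about 10: time dilation factor at 99.5% of c
print(2.2e-6 * g)   # an assumed 2.2-microsecond lifetime stretches to about 22 microseconds
```

At everyday speeds the factor differs from 1 by parts in 10^14 or less, which is why the effects went unnoticed before Einstein.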

      The second postulate is called the principle of relativity. It is equally valid in classical mechanics (but not in classical electrodynamics until Einstein reinterpreted it). This postulate implies, for example, that table tennis played on a train moving with constant velocity is just like table tennis played with the train at rest, the states of rest and motion being physically indistinguishable. In relativity theory, mechanical quantities such as momentum and energy have forms that are different from their classical counterparts but give the same values for speeds that are small compared to the speed of light, the maximum permissible speed in nature (about 300,000 kilometres per second, or 186,000 miles per second). According to relativity, mass and energy are equivalent and interchangeable quantities, the equivalence being expressed by Einstein's famous equation E = mc², where m is an object's mass and c is the speed of light.
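
The import of the mass-energy equation is easiest to see numerically (the one-gram mass is an arbitrary illustrative choice):

```python
c = 2.99792458e8   # speed of light, m/s

# E = m * c**2 for one gram of matter:
m = 1.0e-3         # kg
E = m * c ** 2
print(E)           # about 9.0e13 joules
```

Because c² is so large, even a tiny mass is equivalent to an enormous energy, which is why nuclear reactions, converting a small fraction of their mass, dwarf chemical ones.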

      The general theory of relativity, as discussed above, is Einstein's theory of gravitation, which uses the principle of the equivalence of gravitation and locally accelerating frames of reference. Einstein's theory has special mathematical beauty; it generalizes the “flat” space-time concept of special relativity to one of curvature. It forms the background of all modern cosmological theories (see Cosmos: Relativistic cosmologies). In contrast to some vulgarized popular notions of it, which confuse it with moral and other forms of relativism, Einstein's theory does not argue that “all is relative.” On the contrary, it is largely a theory based upon those physical attributes that do not change, or, in the language of the theory, that are invariant.

Conservation laws and symmetry
      Since the early period of modern physics, there have been conservation laws, which state that certain physical quantities, such as the total electric charge of an isolated system of bodies, do not change in the course of time. In the 20th century it has been proved mathematically that such laws follow from the symmetry properties of nature, as expressed in the laws of physics. The conservation of mass-energy of an isolated system, for example, follows from the assumption that the laws of physics may depend upon time intervals but not upon the specific time at which the laws are applied. The symmetries and the conservation laws that follow from them are regarded by modern physicists as being even more fundamental than the laws themselves, since they are able to limit the possible forms of laws that may be proposed in the future.

      Conservation laws are valid in classical, relativistic, and quantum theory for mass-energy, momentum, angular momentum, and electric charge. (In nonrelativistic physics, mass and energy are separately conserved.) Momentum, a directed quantity equal to the mass of a body multiplied by its velocity or to the total mass of two or more bodies multiplied by the velocity of their centre of mass, is conserved when, and only when, no external force acts. Similarly angular momentum, which is related to spinning motions, is conserved in a system upon which no net turning force, called torque, acts. External forces and torques break the symmetry conditions from which the respective conservation laws follow.
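
A one-dimensional worked example (with illustrative masses and velocities): when no external force acts, the total momentum before and after a collision is the same, whatever happens internally.

```python
# Two bodies colliding head-on, no external force (all values illustrative):
m1, v1 = 2.0, 3.0     # kg, m/s
m2, v2 = 1.0, -1.0

p_before = m1 * v1 + m2 * v2

# Perfectly inelastic case: the bodies stick together and share one velocity.
v_after = p_before / (m1 + m2)
p_after = (m1 + m2) * v_after

print(p_before, p_after)   # both about 5.0 kg·m/s: total momentum is unchanged
```

Kinetic energy, by contrast, is not conserved in this inelastic case; only the quantities protected by a symmetry survive the collision unchanged.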

      In quantum theory, and especially in the theory of elementary particles, there are additional symmetries and conservation laws, some exact and others only approximately valid, which play no significant role in classical physics. Among these are the conservation of so-called quantum numbers related to left-right reflection symmetry of space (called parity) and to the reversal symmetry of motion (called time reversal). These quantum numbers are conserved in all processes other than the weak nuclear interaction.

      Other symmetry properties not obviously related to space and time (and referred to as internal symmetries) characterize the different families of elementary particles and, by extension, their composites. Quarks, for example, have a property called baryon number, as do protons, neutrons, nuclei, and unstable quark composites. All of these except the quarks are known as baryons. A failure of baryon-number conservation would exhibit itself, for instance, by a proton decaying into lighter non-baryonic particles. Indeed, intensive search for such proton decay has been conducted, but so far it has been fruitless. Similar symmetries and conservation laws hold for an analogously defined lepton number, and they also appear, as does the law of baryon conservation, to hold absolutely.

Fundamental forces and fields
      The four basic forces of nature, in order of increasing strength, are thought to be: (1) the gravitational force between particles with mass; (2) the electromagnetic force between particles with charge or magnetism or both; (3) the colour force between quarks; and (4) the weak nuclear interaction by which, for example, quarks can change their type, so that a neutron decays into a proton, an electron, and an antineutrino. The strong nuclear interaction that binds protons and neutrons into nuclei and is responsible for fission, fusion, and other nuclear reactions is in principle derived from the colour force. Nuclear physics is thus related to QCD as chemistry is to atomic physics.

      According to quantum field theory, each of the four fundamental interactions is mediated by the exchange of quanta, called vector gauge bosons, which share certain common characteristics. All have an intrinsic spin of one unit, measured in terms of Planck's constant ℏ. (Leptons and quarks each have one-half unit of spin.) The term gauge refers to a special type of symmetry they possess. This symmetry was first seen in the equations for electromagnetic potentials, quantities from which electromagnetic fields can be derived. It is possessed in pure form by the eight massless gluons of QCD, but in the electroweak theory, the unified theory of electromagnetic and weak interactions, gauge symmetry is partially broken, so that only the photon remains massless, with the other gauge bosons (W+, W−, and Z) acquiring large masses. At present, theoretical physicists are trying to produce a further unification of QCD with the electroweak theory and, more ambitiously still, to unify them with a quantum version of gravity in which the force would be transmitted by massless quanta of two units of spin called gravitons.

The methodology of physics
      Physics has evolved and continues to evolve without any single strategy. It is essentially an experimental science, and refined measurements can reveal unexpected behaviour. On the other hand, mathematical extrapolation of existing theories into new theoretical areas, critical reexamination of apparently obvious but untested assumptions, argument by symmetry or analogy, aesthetic judgment, pure accident, and hunch—each of these plays a role (as in all of science). Thus, for example, the quantum hypothesis proposed by Planck was based on observed departures of the character of blackbody radiation (radiation emitted by a heated body that absorbs all radiant energy incident upon it) from that predicted by classical electromagnetism. P.A.M. Dirac predicted the existence of the positron in making a relativistic extension of the quantum theory of the electron. The elusive neutrino, without mass or charge, was hypothesized by Wolfgang Pauli as an alternative to abandoning the conservation laws in the beta-decay process. Maxwell conjectured that if changing magnetic fields create electric fields (which was known to be so), then changing electric fields might create magnetic fields, leading him to the electromagnetic theory of light. Einstein's special theory of relativity was based on a critical reexamination of the meaning of simultaneity, while his general theory rests on the equivalence of inertial and gravitational mass.

      Although the tactics may vary from problem to problem, the physicist invariably tries to make unsolved problems more tractable by constructing a series of idealized models, with each successive model being a more realistic representation of the actual physical situation. Thus, in the theory of gases, the molecules are at first imagined to be particles that are as structureless as billiard balls with vanishingly small dimensions. This ideal picture is then improved on step by step.

      The correspondence principle, a useful guiding principle for extending theoretical interpretations, was formulated by Niels Bohr in the context of the quantum theory. It asserts that when a valid theory is generalized to a broader arena, the new theory's predictions must agree with the old one in the overlapping region in which both are applicable. For example, the more comprehensive theory of physical optics must yield the same result as the more restrictive theory of ray optics whenever wave effects proportional to the wavelength of light are negligible on account of the smallness of that wavelength. Similarly, quantum mechanics must yield the same results as classical mechanics in circumstances when Planck's constant can be considered as negligibly small. Likewise, for speeds small compared to the speed of light (as for baseballs in play), relativistic mechanics must coincide with Newtonian classical mechanics.
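
The correspondence between relativistic and classical momentum can be seen directly (the baseball mass and speeds are illustrative):

```python
from math import sqrt

c = 2.99792458e8   # speed of light, m/s

def p_classical(m, v):
    return m * v

def p_relativistic(m, v):
    return m * v / sqrt(1 - (v / c) ** 2)

m = 0.145   # roughly a baseball's mass, kg
for v in (40.0, 0.5 * c, 0.99 * c):
    ratio = p_relativistic(m, v) / p_classical(m, v)
    print(v, ratio)   # essentially 1 at 40 m/s; about 1.15 at c/2; about 7.1 at 0.99c
```

At a pitcher's 40 m/s the two formulas agree to about one part in 10^14, exactly the overlap the correspondence principle demands.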

      Some ways in which experimental and theoretical physicists attack their problems are illustrated by the following examples.

      The modern experimental study of elementary particles began with the detection of new types of unstable particles produced in the atmosphere by primary radiation, the latter consisting mainly of high-energy protons arriving from space. The new particles were detected in Geiger counters and identified by the tracks they left in instruments called cloud chambers and in photographic plates. After World War II, particle physics, then known as high-energy nuclear physics, became a major field of science. Today's high-energy particle accelerators can be several kilometres in length, cost hundreds (or even thousands) of millions of dollars, and accelerate particles to enormous energies (trillions of electron volts). Experimental teams, such as those that discovered the W+, W−, and Z quanta of the weak force at the European Laboratory for Particle Physics (CERN) in Geneva, can have 100 or more physicists from many countries, along with a larger number of technical workers serving as support personnel. A variety of visual and electronic techniques are used to interpret and sort the huge amounts of data produced by their efforts, and particle-physics laboratories are major users of the most advanced technology, be it superconductive magnets or supercomputers.

      Theoretical physicists use mathematics both as a logical tool for the development of theory and for calculating predictions of the theory to be compared with experiment. Newton, for one, invented integral calculus to solve the following problem, which was essential to his formulation of the law of universal gravitation: Assuming that the attractive force between any pair of point particles is inversely proportional to the square of the distance separating them, how does a spherical distribution of particles, such as the Earth, attract another nearby object? Integral calculus, a procedure for summing many small contributions, yields the simple solution that the Earth itself acts as a point particle with all its mass concentrated at the centre. In modern physics, Dirac predicted the existence of the then-unknown positive electron (or positron) by finding an equation for the electron that would combine quantum mechanics and the special theory of relativity.
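Newton's result, the shell theorem, can also be checked numerically in the spirit of integral calculus, by summing many small contributions. The sketch below (the function names, the Fibonacci-lattice discretization, and the choice of units with G = 1 are illustrative assumptions, not from the original text) spreads a unit mass over points on a spherical shell and compares the net attraction on an external test mass with the point-particle formula.

```python
import math

def fibonacci_sphere(n):
    """Roughly uniform sample of n points on the unit sphere (Fibonacci lattice)."""
    pts = []
    golden_angle = math.pi * (3 - math.sqrt(5))
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n          # evenly spaced heights in (-1, 1)
        r = math.sqrt(1 - z * z)           # radius of the circle at height z
        theta = golden_angle * i
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

def net_force_z(points, d, total_mass=1.0):
    """z-component of the gravitational force (G = 1) on a unit test mass at
    (0, 0, d) from equal point masses placed at the given points."""
    m = total_mass / len(points)           # mass per point particle
    fz = 0.0
    for (x, y, z) in points:
        rx, ry, rz = x, y, z - d           # vector from test mass to source
        r = math.sqrt(rx * rx + ry * ry + rz * rz)
        fz += m * rz / r**3                # inverse-square attraction
    return fz

shell = fibonacci_sphere(2000)
# Shell theorem: the sum should be close to -total_mass/d**2 = -0.25 for d = 2,
# as if the whole mass were concentrated at the centre.
print(net_force_z(shell, 2.0))
```

By symmetry the transverse components cancel, and as the number of sample points grows the sum converges to the point-particle value, which is the content of Newton's theorem.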

Relations between physics and other disciplines and society

Influence of physics on related disciplines
      Because physics elucidates the simplest fundamental questions in nature on which there can be a consensus, it is hardly surprising that it has had a profound impact on other fields of science, on philosophy, on the worldview of the developed world, and, of course, on technology.

      Indeed, whenever a branch of physics has reached such a degree of maturity that its basic elements are comprehended in general principles, it has moved from basic to applied physics and thence to technology. Thus almost all current activity in classical physics consists of applied physics, and its contents form the core of many branches of engineering. Discoveries in modern physics are converted with increasing rapidity into technical innovations and analytical tools for associated disciplines. There are, for example, such nascent fields as nuclear and biomedical engineering, quantum chemistry and quantum optics, and radio, X-ray, and gamma-ray astronomy, as well as such analytic tools as radioisotopes, spectroscopy, and lasers, which all stem directly from basic physics.

      Apart from its specific applications, physics—especially Newtonian mechanics—has become the prototype of the scientific method, its experimental and analytic methods sometimes being imitated (and sometimes inappropriately so) in fields far from the related physical sciences. Some of the organizational aspects of physics, based partly on the successes of the radar and atomic-bomb projects of World War II, also have been imitated in large-scale scientific projects, as, for example, in astronomy and space research.

      The great influence of physics on the branches of philosophy concerned with the conceptual basis of human perceptions and understanding of nature, such as epistemology, is evidenced by the earlier designation of physics itself as natural philosophy. Present-day philosophy of science deals largely, though not exclusively, with the foundations of physics. Determinism, the philosophical doctrine that the universe is a vast machine operating with strict causality, whose future is determined in all detail by its present state, is rooted in Newtonian mechanics, which obeys that principle. Moreover, the schools of materialism, naturalism, and empiricism have in large degree considered physics to be a model for philosophical inquiry. An extreme position is taken by the logical positivists, whose radical distrust of the reality of anything not directly observable leads them to demand that all significant statements be formulated in the language of physics.

      The uncertainty principle of quantum theory has prompted a reexamination of the question of determinism, and its other philosophical implications remain in doubt. Particularly problematic is the matter of the meaning of measurement, for which recent theories and experiments confirm some apparently noncausal predictions of standard quantum theory. It is fair to say that though physicists agree that quantum theory works, they still differ as to what it means.

Influence of related disciplines on physics
      The relationship of physics to its bordering disciplines is a reciprocal one. Just as technology feeds on fundamental science for new practical innovations, so physics appropriates the techniques and instrumentation of modern technology for advancing itself. Thus experimental physicists utilize increasingly refined and precise electronic devices. Moreover, they work closely with engineers in designing basic scientific equipment, such as high-energy particle accelerators. Mathematics has always been the primary tool of the theoretical physicist, and even abstruse fields of mathematics such as group theory and differential geometry have become invaluable to the theoretician classifying subatomic particles or investigating the symmetry characteristics of atoms and molecules. Much of contemporary research in physics depends on the high-speed computer. It allows the theoretician to perform computations that are too lengthy or complicated to be done with paper and pencil. Also, it allows experimentalists to incorporate the computer into their apparatus, so that the results of measurements can be provided nearly instantaneously on-line as summarized data while an experiment is in progress.

The physicist in society
      Because of the remoteness of much of contemporary physics from ordinary experience and its reliance on advanced mathematics, physicists have sometimes seemed to the public to be initiates in a latter-day secular priesthood who speak an arcane language and can communicate their findings to laymen only with great difficulty. Yet, the physicist has come to play an increasingly significant role in society, particularly since World War II. Governments have supplied substantial funds for research at academic institutions and at government laboratories through such agencies as the National Science Foundation and the Department of Energy in the United States, which has also established a number of national laboratories, including the Fermi National Accelerator Laboratory in Batavia, Ill., with the world's largest particle accelerator. CERN, mentioned above, is composed of 14 European countries and operates a large accelerator at the Swiss–French border. Physics research is supported in the Federal Republic of Germany by the Max Planck Society for the Advancement of Science and in Japan by the Japan Society for the Promotion of Science. In Trieste, Italy, there is the International Center for Theoretical Physics, which has strong ties to developing countries. These are only a few examples of the widespread international interest in fundamental physics.

      Basic research in physics is obviously dependent on public support and funding, and with this development has come, albeit slowly, a growing recognition within the physics community of the social responsibility of scientists for the consequences of their work and for the more general problems of science and society.

Richard Tilghman Weidner Laurie M. Brown

Additional Reading
I. Bernard Cohen, The Birth of a New Physics, rev. ed. (1985), is an account of the work of Galileo, Newton, and other 17th-century scientists. See also Emilio Segrè, From Falling Bodies to Radio Waves: Classical Physicists and Their Discoveries (1984), and From X-Rays to Quarks: Modern Physicists and Their Discoveries (1980; originally published in Italian, 1976). Henry A. Boorse and Lloyd Motz (eds.), The World of the Atom, 2 vol. (1966), is a comprehensive anthology of historical sources on 19th- and 20th-century developments in atomic physics. For more recent history, see Stephen G. Brush, “Resource Letter HP-1: History of Physics,” American Journal of Physics, 55:683–691 (August 1987). An interesting collection of writings by physicists is presented in Jefferson Hane Weaver (ed.), The World of Physics: A Small Library of the Literature of Physics from Antiquity to the Present, 3 vol. (1987).

Reference works surveying the scope and methodology of physics include Robert M. Besançon (ed.), The Encyclopedia of Physics, 3rd ed. (1985); and Cesare Emiliani, Dictionary of the Physical Sciences: Terms, Formulas, Data (1987). Other works include David Halliday and Robert Resnick, Fundamentals of Physics, 3rd extended ed., 2 vol. (1988), a good standard text; Gerald Holton, Introduction to Concepts and Theories in Physical Science, 2nd ed., rev. by Stephen G. Brush (1973, reprinted 1985), analyzing physical theories from a historical standpoint; Richard P. Feynman, Robert B. Leighton, and Matthew Sands, The Feynman Lectures on Physics, 3 vol. (1963–65), and Richard P. Feynman, QED: The Strange Theory of Light and Matter (1985), works by a modern master; Frank Close, Michael Marten, and Christine Sutton, The Particle Explosion (1987), a discussion of the latest developments in fundamental physics, written for the general reader; Steven Weinberg, The Discovery of Subatomic Particles (1983), and The First Three Minutes: A Modern View of the Origin of the Universe, updated ed. (1988); P.C.W. Davies, Space and Time in the Modern Universe (1977); and Peter G. Bergmann, The Riddle of Gravitation, rev. ed. (1987). An unusual social history of the U.S. scientific community is presented in Daniel J. Kevles, The Physicists (1978, reprinted 1987).

* * *

Universalium. 2010.
