semantics /si man"tiks/, n. (used with a sing. v.)
1. Ling. a. the study of meaning. b. the study of linguistic development by classifying and examining changes in meaning and form.
2. Also called significs. the branch of semiotics dealing with the relations between signs and what they denote.
3. the meaning, or an interpretation of the meaning, of a word, sign, sentence, etc.: Let's not argue about semantics.
4. See general semantics.
[1895-1900; see SEMANTIC, -ICS]
—semanticist /si man"teuh sist/, semantician /see'man tish"euhn/, n.
* * *

Study of meaning, one of the major areas of linguistic study (see linguistics). Linguists have approached it in a variety of ways. Members of the school of interpretive semantics study the structures of language independent of their conditions of use. In contrast, the advocates of generative semantics insist that the meaning of sentences is a function of their use. Still another group maintains that semantics will not advance until theorists take into account the psychological questions of how people form concepts and how these relate to word meanings.
* * *

Introduction

The philosophical and scientific study of meaning. The term is one of a group of English words formed from the various derivatives of the Greek verb sēmainō (“to mean” or “to signify”). The noun semantics and the adjective semantic are derived from sēmantikos (“significant”); semiotics (adjective and noun) comes from sēmeiōtikos (“pertaining to signs”); semology from sēma (“sign”) + logos (“account”); and semasiology from sēmasia (“signification”) + logos (“account”).

It is difficult to formulate a distinct definition for each of these terms, because their use largely overlaps in the literature despite individual preferences. Semantics is a relatively new field of study, and its originators, often working independently of one another, felt the need to coin a new name for the new discipline; hence the variety of terms denoting the same subject. The word semantics has ultimately prevailed as a name for the doctrine of meaning, in particular of linguistic meaning. Semiotics is still used, however, to denote a broader field: the study of sign-using behaviour in general.

Modern development of semantics

The concern with meaning, always present for philosophers and linguists, greatly increased in the decades following World War II. The sudden rise of interest in meaning can be attributed to an interaction of several lines of development in various disciplines. From the middle of the 19th century onward, logic, the formal study of reasoning, underwent a period of growth unparalleled since the time of Aristotle. Although the main motivation for a renewed interest in logic was a search for the foundations of mathematics, the chief protagonists of this effort—notably the German mathematician Gottlob Frege and the English philosopher Bertrand Russell—extended their inquiry into the domain of the natural languages, which are the original media of human reasoning.
The influence of mathematical thinking, and of mathematical logic in particular, however, left a permanent mark on the subsequent study of semantics.

Positivist theory

This mark is nowhere more obvious than in the semantic theories offered by the Neopositivists of the Vienna Circle, which flourished in the 1920s and 1930s and which was composed of philosophers, mathematicians, and scientists who discussed the methodology and epistemology of science. To such “logical” Positivists as the German-born philosopher Rudolf Carnap, for instance, the symbolism of modern logic represented the grammar (syntax) of an “ideal” language. Because the Logical Positivists were, at the same time, radical Empiricists (observationalists) in their philosophy, the semantics of their ideal language was given in terms of a tie connecting the symbols of this language with observable entities in the world, or the data of one's sense experience, or both. Against such a rigid ideal as logic, natural language appeared to these philosophers as something primitive, vague, inaccurate, and confused. Moreover, since a large part of ordinary and philosophical discourse, particularly that concerning metaphysical and moral issues, could not be captured by the ideal language, the Positivistic approach provided a way to brand all such talk as nonsensical, or at least as “cognitively” meaningless. Accordingly, the Positivists engaged in a prolonged, and largely unsuccessful, effort to formulate a criterion of meaningfulness in terms of empirical verifiability with respect to the sentences formed in natural language.

Whorfian views

Another source of dissatisfaction with the vernacular was made apparent shortly before World War II by the work of the American anthropological linguist Benjamin Lee Whorf.
Whorf's famous thesis of linguistic relativity implied that the particular language a person learns and uses determines the framework of his perception and thought. If that language is vague and inaccurate, as the Positivists suggested, or is burdened with the prejudices and superstitions of an ignorant past, as some cultural anthropologists averred, then it is bound to render the user's thinking—and his mental life itself—confused, prejudiced, and superstitious. The Polish-American semanticist Alfred Korzybski, the founder of the movement called General Semantics, believed that the cure for such vague and superstition-laden language lay in a radical revision of linguistic habits in the light of modern science.

School of natural language

Natural language did not remain without champions in the face of this combined onslaught from the logicians and the Whorfians. A reaction started in England, first in Cambridge, then in Oxford. Influenced by the English philosopher George Edward Moore, but more so by the “converted” Vienna-born Positivist Ludwig Wittgenstein, the philosophy of “ordinary language” (also known as the Oxford school) came into its own in the 1940s. According to the philosophers of this group, natural language, far from being the crude instrument the Positivists alleged it to be, provides the basic and unavoidable matrix of all thought, including philosophical reflections. Any “ideal” language, therefore, can make sense only as a parasitical extension of, and never as a substitute for, the natural language. Philosophical problems arise as a result of a failure to see the workings of man's language; they are bound to “dissolve” with improved understanding. These assumptions provided a mighty impetus to reflect upon the vernacular language, including its minute points of grammar and fine nuances of meaning.
Indeed, some of the later representatives of this approach, particularly the English philosopher John L. Austin, became renowned as much among linguists as among philosophers.

Modern grammatical influences

In the 1950s the science of linguistics itself rose to the challenges that had been coming chiefly from philosophical quarters. The development of transformational, or generative, grammar, initiated by the work of the U.S. linguists Zellig S. Harris and Noam Chomsky, opened a deeper insight into the syntax of the natural languages. Instead of merely providing a structural description (parsing) of sentences, this approach demonstrates how sentences are built up, step by step, from some basic ingredients. In the hands of the philosopher, this powerful new grammar not only served to counter the positivistic charge of imprecision laid against natural language but aided him in his own work of conceptual clarification. Moreover, the generative approach promised further results: since the late 1960s some steps have been taken to develop a generative semantics for natural languages, in addition to a generative syntax.

Philosophical views on meaning

Meaning and reference

On a rather unsophisticated level the problem of meaning can be approached through the following steps. The perception of certain physical entities (objects, marks, sounds, and so on) might lead an intelligent being to the thought of another thing with some regularity. For example, the sight of smoke evokes the idea of fire, and footprints on the sand make one think of the man who must have passed by. The smoke and the footprints are thus signs of something else. They are natural signs, inasmuch as the connection between the sign and the thing signified is a causal link, established by nature and learned from experience.
These can be compared with road signs, for example, or such symbols as the outline of a heart pierced by an arrow. The connection between the symbol and the thing signified in these cases is not a natural one; it is established by human tradition or convention and is learned from these sources. These nonnatural signs, or symbols, are widely used in human communication.

In this framework the elements of language appear to be nonnatural signs. The interest in words and phrases reaches beyond their physical appearance: their perception is likely to direct attention or thought to something else. Words, in fact, are the chief media of human communication, and, as the diversity of languages clearly shows, the link between words and what they signify cannot be a natural one. Words and sentences are like symbols: they point beyond themselves; they mean something. Smoke means fire, the pierced heart means love. Words mean the thing they make us think of; the meaning of the word is the tie that connects it with that thing.

There are some words for which this approach seems to work very straightforwardly. The name Paris means (signifies, stands for, refers to, denotes) the city of Paris, the name Aristotle means that philosopher, and so forth. The initial plausibility of such examples created an obsession in the minds of many thinkers, beginning with Plato. Regarding proper names as words par excellence, they tried to extend the referential model of meaning to all of the other classes of words and phrases. Plato's theory of “forms” may be viewed as an attempt to find a referent for such common nouns as “dog” or for abstract nouns like “whiteness” or “justice.” As the word Socrates in the sentence “Socrates is wise” refers to Socrates, for example, so the word wise refers to the form of wisdom. Unfortunately, whereas Socrates was a real person in this world, the form of wisdom is not something to be encountered anywhere, at any time, in the world.
The difficulty represented by “Platonic” entities of this kind increases as one tries to find appropriate referents for verbs, prepositions, connectives, and so forth. Discussions of abstract entities such as classes (e.g., the class of all running things) and relations (e.g., the relation of being greater than . . .) abound in the philosophical literature; Gottlob Frege even postulated “the True” and “the False” as referents for complete propositions.

There are many more serious problems besetting the referential theory of meaning. The first one, eloquently pointed out by Frege, is that two expressions may have the same referent without having the same meaning. For example, “the Morning Star” and “the Evening Star” denote the same planet, yet, clearly, the two phrases do not have the same meaning. If they had, then the identity of the Morning Star and the Evening Star would be as obvious to anybody who understands these phrases as the identity of a vixen with a female fox, or of a bachelor with an unmarried man, is obvious to speakers of English. As it is, the identity of the Morning Star with the Evening Star is a scientific and not a linguistic matter. Thus, even in the case of names, or expressions equivalent to names, one has to distinguish between the denotation (reference, extension) of the name—i.e., the object (or group of objects) it refers to—and its connotation (sense, intension)—i.e., its meaning.

The second problem with the referential theory of meaning arises from phrases that, though meaningful, pretend to refer but, in fact, do not. For example, in the case of such a definite description as “the present king of France,” the phrase is meaningful although there is no such person. If the phrase were not meaningful, one would not even know that it has no actual referent. Russell's analysis of these phrases, and the U.S. philosopher Willard V.
Quine's similar treatment of such names as Cerberus, effectively detached meaning from reference by claiming that these expressions, when used in sentences, are equivalent to a set of existential propositions; i.e., propositions without definite reference. For example, “The present king of France is bald” comes out as “There is at least, and at most, one person that rules over France, and whoever rules over France is bald.” These propositions are meaningful, true or false, without definite reference.

Names, in fact, are very untypical words. The name of the third Secretary-General of the United Nations, U Thant, has no meaning in English. Whether it means anything in Burmese does not matter either; the reference is not affected by the meaning, or the lack of meaning, of the name. Names, as such, do not belong to the vocabulary of a language; most dictionaries do not list them. Thus, in spite of its initial plausibility, the idea of reference does not help in understanding the nature of linguistic meaning.

Meaning and truth

Despite the failure of referential meaning, many philosophers were quite unwilling to give up the idea that the meaning of linguistic expressions has something to do with objects, events, and states of affairs in the world. They reasoned that if language is used to talk about the physical environment, then there must be some connection between man's words and the things around him. If reference fails to provide the link, something else must.

In the face of referential failure Russell fell back on truth. The Positivists suggested verifiability as the criterion of empirical meaning. Indeed, at least in many cases, it stands to reason that to understand a sentence is to know what state of affairs would make it true or false.
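The Russellian paraphrase of “The present king of France is bald” can be made fully explicit in first-order notation. The following symbolization is a standard textbook rendering, not Russell's own typography; the predicate letters K and B are labels chosen here for illustration:

```latex
% Existence, uniqueness, and baldness of the satisfier, asserted jointly:
\exists x \, \bigl( K(x) \;\land\; \forall y \,( K(y) \rightarrow y = x ) \;\land\; B(x) \bigr)
% K(x): x presently rules over France;   B(x): x is bald.
% Since nothing satisfies K(x), the whole sentence is meaningful but false:
% it has a truth value without requiring a definite referent.
```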
Such considerations motivate the aletheic (Greek alētheia, “truth”) semantic theories, which claim that the notion of meaning is best explained in terms of truth rather than reference.

The most influential discussion of the notion of truth was offered by the Polish-born mathematician and logician Alfred Tarski in the 1930s. His semantic definition of truth is contained in the following formula (which he called [T]):

(T) X is true if, and only if, p

in which “p” is a variable representing any sentence and “X” is a variable representing the name, or unique description, of that sentence. The easiest way to obtain such a unique description is to put the sentence in quotation marks. Thus, we get such instances of (T) as

“Snow is white” is true if, and only if, snow is white.

The above formula implies a distinction between the object language and the metalanguage. “X” represents the name of a sentence in the object language—i.e., roughly, the language used to talk about things in the world. The instances of (T) themselves, however, are in the metalanguage, the language in which one can talk about both things in the world and sentences of the object language. Tarski claimed that no language can contain its own truth predicate, for if it did then it would permit the formation of such sentences as:

(S) The sentence (S) is false.

Is the sentence (S) true or false? Clearly, it is true if, and only if, it is false, which is an intolerable paradox. Consequently, for any language the predicate “. . . is true” and other semantical predicates must belong to a language of a higher order (a metalanguage).

For this reason Tarski restricted his theory to clearly formalized, artificial languages, a decision that was very much in line with the positivistic tendencies of the 1930s. Nevertheless, the Tarski formula remained attractive even to some semanticists concerned with meaning in natural languages.
For one thing, it seemed to succeed in pairing linguistic entities (named by the values of “X”; e.g., the sentence “Snow is white”) and nonlinguistic entities (named by the values of “p”; e.g., the fact, or possible state of affairs, that snow is white). This correlation, however, is not very helpful because each one of the nearly infinite number of sentences one may form would have its “fact” as a counterpart, identifiable only by means of that very same sentence. Consequently, if linguistic meaning consisted in these correlations, no one could learn the meaning of any sentence at all and certainly not the meaning of all the sentences the speakers of a language are able to understand.

What was needed was a theory explaining the contribution of individual words—a clearly finite set—to the truth of sentences. Tarski himself, as well as other writers, suggested a repeatable procedure based on the notion of satisfaction. Snow, for example, satisfies the sentential function “x is white” because “Snow is white” is true. In much the same way, 3 satisfies the function “2 · x = 6” because “2 · 3 = 6” is true. Simply stated, the meaning of the predicate “. . . is white” is determined—and is learned—in terms of the set of objects of which it is true.

As this approach is extended to cover the wide variety of words that exist in a natural language, however, its initial simplicity—and thereby its attractiveness—becomes progressively lost. This can be illustrated by “egocentric” words like “I,” “you,” “here,” and “now”; by connectives like “since,” “however,” and “nevertheless”; and, if these appear trivial, by such crucial words as “believe,” “know,” and “intend,” on the one hand, and “good” and “beautiful” on the other.
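Before turning to these difficult cases, the simple core of the satisfaction idea can be sketched in a few lines of code. This is only a toy model, assuming that a predicate's meaning is exhausted by its extension (the set of objects of which it is true); the extensions stipulated below are illustrative, not data:

```python
# Toy model of Tarskian satisfaction: each sentential function (open
# formula) is paired with its extension, the set of objects of which
# the resulting closed sentence is true.
extensions = {
    "x is white": {"snow", "milk"},   # stipulated for illustration
    "2 * x = 6": {3},
}

def satisfies(obj, sentential_function):
    """An object satisfies a sentential function exactly when
    substituting it for 'x' yields a true sentence, i.e. when the
    object belongs to the function's extension."""
    return obj in extensions[sentential_function]

print(satisfies("snow", "x is white"))  # snow satisfies "x is white"
print(satisfies(3, "2 * x = 6"))        # 3 satisfies "2 * x = 6"
print(satisfies("coal", "x is white"))  # coal does not
```

On this picture, learning the meaning of “. . . is white” is learning which objects fall in its extension; the words discussed next (“believe,” “good,” indexicals) are precisely those for which no such set can be straightforwardly given.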
Whereas it is plausible to say that, for instance, Joe and Mary satisfy the function “X loves Y,” provided Joe loves Mary, it is more complicated to determine what would satisfy such functions as “X believes Y,” “X knows Y,” or “X intends Y.” “X” is satisfiable by people, but the satisfaction of “Y” poses a problem. If one suggests such things as propositions, facts, and possibilities, one is confronted with abstract entities of a kind similar to those encountered in the Platonic theory of referential meaning. Again, can it be said that John and a unicorn will satisfy the function “X looks for Y”—if it is true that John looks for a unicorn?

Another difficulty arises concerning “good,” “beautiful,” and other words of moral or aesthetic judgment. If, for example, beauty is indeed “in the eye of the beholder,” then what one person calls beautiful might not similarly impress another, yet two people might keep arguing as to whether the thing is beautiful or is not. Thus, people may seem to agree on the meaning of the word, yet remain at odds about its proper application. The meaning of such “emotive” words cannot be decided in terms of truth alone.

A more serious objection to the aletheic (truth) theory arises from the fact that many significant utterances of natural language are not true or false at all. Whereas statements, testimonies, and reports are true or false, orders, promises, laws, regulations, proposals, prayers, curses, and so forth are not assessed in terms of truth or falsity. It is not obvious that the employment of words in these speech acts is less relevant to their meaning than their use in speech acts of the truth-bearing kind.

Meaning and use

The difficulties just mentioned lead to another view concerning the notion of meaning, a theory that may be called the use theory. This view admits that not all words refer to something, and not all utterances are true or false.
What is common to all words and all sentences, without exception, is that people use them in speech. Consequently, their meaning may be nothing more than the restrictions, rules, and regularities that govern their employment.

The use theory has several sources. First, in trying to understand the nature of moral and aesthetic discourse, certain authors suggested that such words as “good” and “beautiful” have an emotive meaning instead of (or in addition to) the descriptive meaning other words have; in using them one expresses approval or commendation. If one says, for instance, that helping the poor is good, one does not describe that action but says, in effect, something like “I approve of helping the poor; do so as well.” Such is the role of these words, according to these thinkers, and to understand this role is to know their meaning.

The second, and more important, stimulus for the use theory was provided by the work of Ludwig Wittgenstein. This philosopher not only pointed out the wide variety of linguistic moves mentioned above but, in order to show that none of these moves enjoys a privileged status, proposed the idea of certain language games in which one or another of these moves plays a dominant or even an exclusive role. One can imagine, for instance, a tribe whose language consists of requests only. Members of the tribe make requests, and the other members comply or refuse. There is no truth in this language, yet the words used to make requests would have meaning. Human language as it exists in reality is more complex; it is a combination of a great many language games. Yet the principle of meaning, according to this theory, is the same: the meaning of a word is the function of its employment in these games. To Wittgenstein the question “What is a word really?” is analogous to “What is a piece in chess?”

Finally, John L. Austin offered a systematic classification of the variety of speech acts.
According to him, to say something is to do something, and what one does in saying something is typically indicated by a particular performative verb prefixing the “normal form” of the utterance. These verbs, such as “state,” “declare,” “judge,” “order,” “request,” “promise,” “warn,” “apologize,” “call,” and so on, mark the illocutionary force of the utterance in question. If one says, for instance, “I shall be there,” then, depending on the circumstances, this utterance may amount to a prediction, a promise, or a warning. Similarly, the words of the commanding officer, “You will retreat,” may have the force of a simple forecast or of an order. If the circumstances are not clear, the speaker always can be more explicit and use the normal form; e.g., “I promise that I shall be there” or “I order you to retreat.”

To rephrase the conclusion already stated: the dimension of truth and falsity is not invoked by all the utterances of the language; therefore, it cannot provide an exclusive source of meaning. There are other dimensions, such as feasibility (in the case of orders and promises), utility (in the case of regulations and prescriptions), and moral worth (in the case of advice and laws). These dimensions may be as much involved in the understanding of what one said, and consequently in the meaning of the words the speaker used, as the dimension of truth.

As previously mentioned, philosophers professing the aletheic theory claimed that the meaning of a word should be explained in terms of its contribution to the truth or falsity of the sentences in which it can occur. The latest form of the use theory is an appropriate extension of the same idea. According to some exponents, the meaning of a word is nothing but its illocutionary act potential—i.e., its contribution to the nature of the speech acts that can be performed by using that word. One difficulty with this view is that the definition is too broad, to the extent of being unilluminating or useless.
Given this definition, nobody would know what any word means without knowing the entire language completely, because the possibilities of employing a given word are not only without limit but extend to every conceivable context and circumstance. As Wittgenstein stated so forcefully,

The sign (the sentence) gets its significance from the system of signs, from the language to which it belongs. Roughly: understanding a sentence means understanding a language.

If this be the case, how can one account for the obviously gradual and prolonged process of learning a language? Indeed, the definition of the meaning of a word as illocutionary act potential seems to overstate the case. The obvious truth that the meaning of performative verbs, and of other words closely tied to one illocutionary aspect or another, cannot be divorced from the nature of that type of speech act does not entail that the meaning of an ordinary word, like “cat” or “running,” is affected by any illocutionary force. Such words can occur in utterances bearing all kinds of illocutionary forces, so the contribution of these forces, as it were, cancels out. Nevertheless, what remains is the fact that all words are used to say something, in one way or another. The use theory would put a strong emphasis on the word “used” in the previous sentence. The next, and final, approach to meaning would stress the word “say.”

Meaning and thought

In Wittgenstein's chess example, moves are made by moving the pieces. In a language, moves (saying something) are made by using words. And, according to the use theory, as a piece is defined by its move potential, so (the meaning of) a word is defined by its “saying” potential.

This analogy works only up to a certain limit. Whereas chess is only a game, the use of language is much more. One plays chess—or any other game—for its own sake; one speaks, however, with other ends in mind. Games, as it were, do not point beyond themselves; speech does.
In order to see this, compare an ordinary conversation with such word games as children play—e.g., exchanging words that rhyme, or words that begin with the same letter. These word games are language games and nothing more, because the children use words according to certain rules. In doing so, however, they do not say anything except in the trivial sense of uttering words; nor are they expected to understand what the other children say beyond the minimal feat of recognizing the words.

In real speech the situation is radically different. The point of using words in a real speech act is to be understood. If someone says, “It will rain tomorrow,” his aim is to make the hearer believe that it will rain tomorrow. It is possible, of course, that the hearer will not believe the speaker. Nevertheless, if the hearer understands what the speaker says, he will at least know that this is what the other person wants him to believe by saying what he says. Similarly, if one says, “Go home,” and the listener understands what is said, then, whether or not the listener will actually go, he will at least know that this is what the speaker wants to bring about by using these words. Thus, the notion of saying something is inseparably tied to such concepts as belief, intention, knowledge, and understanding.

The view just outlined is a reformulation of a very traditional idea, namely, that speech is essentially the expression of thought. Words are used not to play a game with fixed rules but to express beliefs and judgments, intentions and desires; that is, to make others know, by the use of words according to fixed rules, that one has certain beliefs, desires, and so forth, and that one invites others to share them.

“Expression of thought” sounds rather vague. For one thing, what is a thought? Suppose John believes that Joe has stolen his watch.
John can express this belief by saying, “Joe has stolen my watch” or “My watch has been stolen by Joe” or “It is Joe who has stolen my watch,” and so on. Moreover, if John is a multilingual person, he can express the same belief in German, French, and so forth. These variants, called paraphrases and translations respectively, will express the same belief, the same thought. But whereas it makes sense to ask for the exact words of John's statement or to ask about the language in which it was made, it would be foolish to ask for the exact words of John's belief or to ask about the language in which it is framed. The alternative in “Do you believe that Joe has stolen the watch, or that the watch was stolen by Joe?” does not make sense. Consequently, the same thought—the same proposition, as some philosophers prefer to call it—can be expressed by using various linguistic media. In other words, the same thought can be encoded in various codes (languages) and in various ways in the same code (paraphrases), in much the same way as the same idea can be expressed in speech or in writing and the same numbers can be written by using Roman or Arabic numerals.

From this point of view, it appears that saying something involves encoding a thought and that understanding what one said involves decoding and recovering the same thought. The meaning of a sentence will consist in its relation to the thought it is used to encode. This may be viewed as the fundamental thesis of the psychological theory of meaning.

As previously explained, no theory of meaning can be adequate as long as it treats sentences as indivisible units. For, in the first place, the potentially infinite number of sentences would defy any attempt to learn their meaning one by one, and, second, such a theory could not account for the obvious ability of fluent speakers to understand entirely novel sentences.
There must be, therefore, a correlation between certain recurring elements of sentences (roughly, words) and certain recurring elements of thoughts (roughly, concepts or ideas). Accordingly, the learning of the semantic component of a language will consist in the learning of these connections.

In this learning process two notions play a prominent role: synonymy and analyticity. As the sentences that express the same thought stand in the relation of paraphrase (or translation), so words or phrases that code the same idea are related as synonyms—e.g., “vixen” and “female fox.” Again, because one concept may include another, the sentence expressing this relation will record a conceptual truth, or analytic proposition; e.g., “A dog is an animal.” A definition, finally, will exhibit all parts of a concept by a combination of such propositions.

What concepts are and how they are related to words are topics that have been discussed throughout the history of philosophy. The following problems related to concepts pertain to the core of philosophical psychology: whether all concepts are derived from experience, as Aristotle and the Empiricists believed, or whether some of them at least are innate, as Plato and the Rationalists maintained; whether concepts exist prior to and independent of their verbal encodings, as the Realists and Conceptualists claimed, or whether they are nothing but a certain “field of force” accompanying the words, as the Nominalists thought.

It should be noted that these disputes, in a modern garb, still continue with undiminished force. Contemporary Empiricists still try to reduce most concepts to a configuration of sense data, or a pattern of nerve stimulation, while the Behaviourists attempt to explain understanding in terms of overt behaviour.
Modern-day Rationalists reply by insisting on the unique spontaneity of human speech and by reviving the theory of innate ideas.

Meaning in linguistics

Semantics in the theory of language

The science of linguistics is concerned with the theory of language expressed in terms of linguistic universals—i.e., features that are common to all natural languages. According to the widely adopted schema of the U.S. scholar Charles W. Morris, this theory must embrace three domains: pragmatics, the study of the language user as such; semantics, the study of the elements of a language from the point of view of meaning; and syntax, the study of the formal interrelations that exist between the elements of a language (i.e., sounds, words) themselves. Subsequently, certain authors spoke of three levels: the phonetic, the syntactic (the phonetic and syntactic together are often called grammatical), and the semantic level. On each of these levels a language may be studied in isolation or in comparison with other languages. In another dimension, the investigation might be restricted to the state of a language (or languages) at a given time (synchronic study), or it might be concerned with the development of a language (or languages) through a period of time (diachronic study).

Semantics, then, is one of the main fields of linguistic science. Yet, except for borderline investigations, the linguist's interest in semantic matters is quite distinct from the philosopher's concern. Whereas the philosopher asks the question “What is meaning?”, the typical questions the linguist is likely to ask include: “How is the meaning of words encoded in a language?” “How is this meaning to be determined?” “What are the laws governing change of meaning?” and “How can the meaning of a word be given, expressed, or defined?”

A few examples will suffice to illustrate some of these problems, and to show how the linguist's approach differs from that of the philosopher.
In the matter of encoding, words are arbitrary signs; to some authors, particularly to the Swiss linguist Ferdinand de Saussure, this feature of arbitrariness represents an essential characteristic of all real languages. Nevertheless, in all languages there are clear cases of onomatopoeia—i.e., the occurrence of imitative words, such as “whisper,” “snore,” “slap,” and, more remotely, “cuckoo.”

There are several other issues that pertain to the question of encoding. Certain languages show a marked preference for very specific words, at least in certain domains, while lacking the corresponding general terms, which are the only ones occurring in other languages. The Eskimos, for instance, have a number of words denoting various kinds of snow but no single word for snow. Similarly, in English, although there are distinct names for hundreds of animal species, there is no name for the very familiar animal species of which the female member is called cow and the male member bull.

There are also languages, such as English or Chinese, that for the most part prefer single words to the compounded phrases that other languages (e.g., German) seem to favour. Accordingly, whereas the English vocabulary is larger, the German words are more pliable, capable of entering into compounds often of great length and complexity. Such differences support Saussure's distinction between lexicological and grammatical languages.

Another distinction can be drawn concerning the relative frequency and importance of context-bound and context-free words. The meaning of such English words as “take,” “put,” and “get” depends almost entirely on the context—e.g., putting up with somebody has very little to do with putting off something or other. These can be compared with verbs like “canter” or “promulgate,” which, by their very specific meaning, almost determine the context, rather than having their meaning determined by the context.
Clearly, the context-bound type of word, such as “take” or “put,” lends itself to idiomatic use, rather than the context-free word.

There are some obvious regularities in the change of meaning that are of interest to the linguist. One such regularity is the extension or transference of meaning based upon some similarities—i.e., the phenomenon of metaphor. For example, one can speak of the leg of the table, the mouth of the river, the eye of the needle, and the crown of the tree. These are anthropomorphic metaphors: the transfer goes from something belonging to an individual or close to him (his body, garments) to something more remote. The same principle operates in the extension of meaning from a domain close in interest rather than in physical proximity. Baseball-minded people are apt to speak of “not getting to first base,” “striking out,” or “scoring a hit” in contexts often remote from baseball. For a similar reason, many abstract concepts are denoted by words transplanted from the concrete domain. Such phrases as “grasping ideas,” “seeing the point of a joke,” “body of knowledge,” “in the back of my mind,” and many others, are the result of this very important move from the concrete to the abstract.

Meaning changes of another type are the result of emotive factors. The word democracy, for instance, has all but lost its original meaning and has become a word applicable to any system the speaker wants to praise. The contrary development is exhibited in the recent history of such words as “Fascist” and “aggression.” In order to avoid derogatory connotations one is often forced, by social pressure, to use euphemisms, often to the detriment of clarity. Examples of this include the switch from “underdeveloped nations” to “developing nations,” from “retarded children” to “exceptional children,” and from “old people” to “senior citizens.”

The preceding are but a few examples concerning coding and meaning change.
The questions about the ways of finding out what a word means and about the manner of giving an adequate definition of a word deserve a more detailed account.

Meaning, structure, and context

Foreigners in a strange country and linguists are often confronted with the task of learning a new language. It is important to realize that in doing so they do not set out with a completely blank mind: they expect to learn a language (i.e., a system of communication describable in terms of a large set of linguistic universals). They expect—consciously in the case of the linguist and unconsciously in the case of the layman—to find words and sentences, grammatical structures, and illocutionary forces in that language. And, on the semantic level, they expect words that will fit into the familiar semantic classes. They are confident, in other words, that the language they intend to master will be intertranslatable with their own.

Therefore, although at the very beginning their learning remains on the ostensive level (trying to find out the name of this or that kind of object), very soon they proceed to the level of first guessing, then establishing, the meaning of words from the contexts in which they occur. This has to be the case, for words that in any way can be viewed as “names” of objects (i.e., that could be learned ostensively) form but a fraction of the vocabulary of any language. Anyone who doubts this need only try to list the words from this paragraph that could be learned ostensively.
Moreover, linguists find no great difficulties in learning dead languages—e.g., that of the ancient Egyptians—without any contact with any speaker, provided that a sufficiently large corpus of texts is available and that some clues are provided to the meaning of at least some words.

If any more evidence concerning this point is needed, one should remember that “pictorial” dictionaries are bound to remain on the kindergarten level, and that the mark of a good dictionary is the abundance of appropriate contexts. Thus, the contexts show the concept.

These intuitions are behind the U.S. philosopher Paul Ziff's semantic theory. According to Ziff, the meaning of a word is a function, first, of its complementary set, which consists of all the acceptable sentences in which the word can occur, and, second, of its contrastive set, which consists of all of the words that can replace that word in all of these sentences without rendering the sentences deviant. Clearly, the elaboration of the contrastive set will produce words more and more similar in meaning to the word in question, the limiting case being synonyms that can occur wherever the word in question can occur.

This theory is in need of further refinement. In the sentence “The cat sleeps,” the fact that “cat” can co-occur with “sleep” undoubtedly casts some light on the meaning of these words (a cat is a kind of thing that can sleep). But there are a great number of sequences that could complete the frames “The cat sleeps and . . . ,” “. . . said that the cat sleeps,” and so forth. Clearly, the near infinity of the resulting sentences will not contribute anything to the meaning of “cat” beyond what the segment “the cat sleeps” already contributes.

Transformational grammar can be of assistance at this point.
According to this approach, the sentences just considered are simply surface forms, each corresponding to an underlying structure, in which “cat” and “sleep” appear as forming an elementary, or kernel, sentence (roughly: “a cat sleeps”). The essence of Ziff's insight can be reinterpreted in terms of the notions developed by Zellig S. Harris: co-occurrence (instead of complementary set) and co-occurrence difference (instead of contrastive set), both restricted to kernels. Because the vocabulary of a language is limited and the number of kernel structures is very small, the meaning of a word can be determined on the basis of a finite set of elementary sentences.

The contribution of grammar to semantic theory is by no means exhausted by this step. For the grammatical restrictions on a word represent, as it were, the “skeleton” of its meaning before the “flesh” is put on by the co-occurrences. The very first step in giving the meaning of a word is to specify its grammatical category—noun, verb, adjective, adverb, connective, and so forth—not to speak of grammatical constants (such as the first, but not the second, “to” in “I want to go to Paris”), the meaning of which, if any, is entirely determined by their grammatical role. A refined grammar yields much more: the fact that the adjective “good,” for example, unlike adjectives like “yellow” or “fat,” can occur in the frames “(He is) good at (playing chess)”; “(The root is) good to (eat)”; “It is good that (it is raining)”; “It was good of (him) to (come)” says a great deal about the meaning of that word. The co-occurrences then complete the picture.

Lexical entries

Good dictionaries offer a variety of contexts for the items listed, but, obviously, this is not enough. For one thing, no dictionary can list all the co-occurrences. There must be devices to sum up, as it were, the information revealed by the contexts. This is the role of dictionary definitions.
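Harris's kernel-restricted reformulation of Ziff's contrastive set lends itself to a small computational sketch. The toy "corpus" of acceptable kernel sentences below is invented purely for illustration, as are the function names; a real account would require genuine acceptability data, not a handful of triples.

```python
# Toy illustration of Ziff's contrastive set in Harris's kernel-based
# reformulation.  The "corpus" of acceptable kernel sentences is invented
# for the sketch; a real account would need actual acceptability data.

ACCEPTABLE_KERNELS = {
    ("the", "cat", "sleeps"),
    ("the", "dog", "sleeps"),
    ("the", "cat", "purrs"),
    ("the", "idea", "spreads"),
}

def cooccurrence_frames(word):
    """Every kernel frame (with a blank slot) in which `word` occurs."""
    frames = set()
    for kernel in ACCEPTABLE_KERNELS:
        for i, w in enumerate(kernel):
            if w == word:
                frames.add(kernel[:i] + ("_",) + kernel[i + 1:])
    return frames

def contrastive_set(word):
    """Words that fit every frame of `word`: candidates close in meaning."""
    frames = cooccurrence_frames(word)
    vocabulary = {w for kernel in ACCEPTABLE_KERNELS for w in kernel}
    return {other for other in vocabulary - {word}
            if frames and frames <= cooccurrence_frames(other)}

print(contrastive_set("dog"))   # -> {'cat'}: "cat" also fits "the _ sleeps"
print(contrastive_set("idea"))  # -> set(): nothing else fits "the _ spreads"
```

Even on this tiny scale the sketch shows the intended behaviour: the more frames two words share, the closer the theory counts their meanings, with full mutual substitutability as the limiting case of synonymy.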
The branch of scientific semantics that is concerned with the form and adequacy conditions of dictionary entries is called lexicography.

A systematic study of dictionary entries was presented in the 1960s by the U.S. philosophers Jerrold J. Katz and Jerry A. Fodor. According to them, the standard form of a dictionary entry comprises three kinds of ingredients: grammatical markers, semantic markers, and distinguishers. The grammatical markers describe the syntactic behaviour of the item in question in terms of a refined system of grammatical categories. The traditional division of words into nouns, adjectives, verbs, adverbs, and so on is but the first step in this direction. The class of nouns, for example, has to be subdivided into count nouns (like “cat”), mass nouns (like “water”), abstract nouns (like “love”), and so forth. The class of adjectives must be classified into subclasses that are fine enough to capture the grammatical peculiarities of such adjectives as “unlikely” or “good.” The traditional subdivision of verbs into transitives and intransitives has to be completed to account for such verbs as “compare” or “order,” which obviously involve three noun phrases (“someone compares something to something”), often of a particular kind (“human” nouns or noun clauses).

The idea of a semantic marker is merely a further elaboration of the traditional notions of genus and species. The result is a system of semantic markers that comprises such items as “physical object,” “animate,” “human,” “male,” “young” (in the case of the entry for “boy”), and others. Katz claims that the problems of synonymy, analyticity, and contradiction can be handled, at least in part, in terms of lexical items sharing some or all of their semantic markers.

Finally, the distinguisher completes the dictionary entry by giving, as it were, the leftover, if any, of the semantic information.
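The three-part entry just described can be pictured as a small record structure. The sketch below is only a schematic rendering under invented names (`LexicalEntry`, `synonymous`) and an invented marker inventory; Katz and Fodor's own formalism is considerably richer.

```python
from dataclasses import dataclass

# Sketch of a Katz-Fodor style dictionary entry with its three ingredients:
# grammatical markers, semantic markers, and a residual distinguisher.
# The particular markers used below are invented for the example.

@dataclass(frozen=True)
class LexicalEntry:
    word: str
    grammatical_markers: frozenset  # e.g. noun, count
    semantic_markers: frozenset     # e.g. physical object, animate, human
    distinguisher: str = ""         # unsystematized leftover information

boy = LexicalEntry(
    "boy",
    frozenset({"noun", "count"}),
    frozenset({"physical object", "animate", "human", "male", "young"}),
)
lad = LexicalEntry(
    "lad",
    frozenset({"noun", "count"}),
    frozenset({"physical object", "animate", "human", "male", "young"}),
)

def synonymous(a, b):
    """Partial test in Katz's spirit: entries sharing all their markers."""
    return (a.grammatical_markers == b.grammatical_markers
            and a.semantic_markers == b.semantic_markers)

print(synonymous(boy, lad))  # -> True
```

Analyticity and contradiction could be treated in the same spirit, as subset and disjointness relations over the marker sets, which is what the claim about "shared markers" amounts to in this toy form.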
There is no general form for the distinguisher; it may give the atomic weight (for elements), purpose (for tools), concise description (for animals), and so forth.

Generative semantics

According to the original formulation of generative or transformational grammar, the semantic and the syntactic components were regarded as distinct elements in the deep structure of a sentence. The syntactic component consisted of a relation of phrase markers giving the transformational structure of the sentence, usually represented in terms of “tree” diagrams with such nodes as “S” for sentence and “NP” for noun phrase. The semantic content entered through the process of lexical insertion—i.e., the replacement of some of the nodes by words, the carriers of meaning. Lexical insertion was supposed to take place at the very beginning of the series of transformations leading up to the surface form of the sentence. The original input of meaning, as it were, was carried through the transformations yielding the semantic reading, or sense, of the whole sentence.

Several modern studies have attempted to demonstrate that this separation between syntax and semantics cannot be maintained. It appears that certain words in themselves indicate a structure analogous to syntactic structures. For example, consider “harden” and “break.” To harden something is to cause that thing to become hard (or harder); to break something is to cause something to become broken. Because “harden” consists of two elements, “hard-” and “-en,” it could be argued that the word itself is structured; “break” does not indicate any structure, yet its meaning clearly involves one.
“Broken,” therefore, carries a more basic semantic unit than “break.” Again, in the case of such verbs as “remind,” “allege,” “blame,” or “forgive,” one feels that their meaning is highly structured, and, with some thinking, one can articulate the presuppositions of these words, which involve a great number of semantic units in a very complex relationship.

These and similar reasons support the theory of generative semantics, which denies a clear distinction between the semantic and the syntactic components. The transformations, in this theory, connect the surface structure of the sentence with its semantic representation (or, according to some linguists, its logical form). Words, then, can encode either a semantic primitive (such as “blue”) or a whole structure (such as “forgive”) within the semantic representation. Thus, there is no definite point in the transformational history of the sentence at which lexical insertion must occur. The ultimate conclusion of this view is that, instead of the threefold division of semantics, syntax, and phonetics, all that is needed is the simple distinction between semantics and phonetics, corresponding to the distinction between meaning (as structured) and its verbal encoding. How much, finally, of the semantic structure can be attributed to a particular language, and how much can be ascribed to common (and possibly innate) elements of the human mind, remains a fascinating problem for continued study.

Zeno Vendler

The study of writing

This section deals with the various fields of study relating to written sources and writing systems. Of these, the most important are philology, which deals mainly with written and oral source materials; and linguistics and grammatology, which deal mainly with systems of signs, which in turn are based on primary source materials.
The use of the word “mainly” implies that, while certain fields deal predominantly with certain aspects of study, there are no sharp divisions between them, and some areas of study can be treated by both disciplines.

The scope of the study of writing

Sources of information

In a study of any system of writing, a basic distinction must be made between the raw materials (subject matter) that are studied and the systems deduced from them. Spoken (or written) utterances represent the raw materials of a language; a grammar or a lexicon presents a linguistic system abstracted and reconstructed from these materials. Similarly, written texts represent the raw materials of writing from which an alphabet can be reconstructed.

Manifold sources of information bear on systems of writing: (1) Written sources proper include texts, inscriptions, books, and manuscripts. The ability to understand these sources can be handed down traditionally from generation to generation, as in the case of the Hebrew or Chinese writings, or it must be recovered by a process of decipherment, as in the case of the Egyptian hieroglyphic. (2) Lists of signs, alphabets, syllabaries, and so on can be devised by the “inventors” of a writing at the time when the writing is first introduced; or they can be reconstructed in later times from the written sources by teachers or scholars for didactic or scholarly purposes. (3) Studies are based on written sources or lists of signs, alphabets, syllabaries, and so on. Such studies can be primary, when they are based directly on sources, or secondary, when they must rely on primary studies for information.

A partial study, which is delimited by the subject matter, times, and area, must be distinguished from a general, comprehensive study of the subject.
Thus an analysis of the writing of Middle English, Egyptian hieroglyphic writing, or the Greek alphabet may be contrasted with an analysis of the structure of alphabetic writings or a study of cursive or monumental writing.

Definitions of terms

As has been noted above, men communicate with each other by means of various systems of signs, of which the most universal are language, a system of auditory communication; and gesture language and writing, two systems of visual communication. For the general science of signs, several terms have been proposed, of which the term semiotic, or semiotics, as here preferred, may perhaps be the most appropriate. Semiotics covers a much broader area than the term semantics, which deals with the meaning of linguistic elements.

Philology, involved mainly in the study of the linguistic sources of a people or a group of peoples, forms the basic means for the comprehension of their respective cultures. It deals less with oral sources than with written sources, mainly literature (whatever its exact meaning). Philology deals with the formal aspects of writing under the topic of epigraphy and paleography. Linguistics is concerned with the study of linguistic systems as reconstructed mainly from oral sources. Pursued less than the study of “oral language,” the study of the “written language”—that is, of the language as it is used in written sources—is also a matter of linguistics. Linguistics deals with the structural aspects of writing under the heading of graphemics.

The field of study that deals with writing in the broadest sense is called grammatology. Equally appropriate terms for this subject are grammatonomy and graphonomy.

Grammatology

Three main approaches

Three main approaches to grammatology can be distinguished: descriptive-historical, typological-structural, and formal.

Descriptive-historical

The traditional descriptive-historical approach to the study of writing is by far the most common.
This is a simple narrative approach to the description of writing in its historical evolution. The apparent shortcoming of descriptive-historical texts is the general lack of systematic typology—that is, systematic classification by type. Good studies on individual writings, such as hieroglyphic Egyptian and the Greek alphabet, are not wanting. What is entirely missing is theoretical and comparative evaluation of the various types of writing, such as discussions of various types of syllabaries, alphabets, word signs, and logo-syllabic writings.

The historical approach is further vitiated through confusion with considerations of a geographic nature, as evidenced by such chapter titles as “Asiatic Writings” and “American Writings,” or “Writings of Asia” and “Writings of America,” which are frequently found in the standard manuals on writing.

Typological-structural

The typological-structural approach is based on the realization of the importance of structure and typology in the study of writing. In contrast to the traditional approach, according to which the writings of the world are described in their evolutionary progress, the new approach requires first a thorough analysis of the structure of the individual writing systems and then their classification by type within the framework of writing in general.

Under “structure” is meant here primarily what is sometimes called the “inner structure,” which is concerned with the function of a writing system, in contrast to the “outer structure,” which involves its formal characteristics. Considerations of function are based on such points as word writing (logography), syllable writing (syllabography), and letter writing (alphabetography), and the typology of logographic, syllabic, and alphabetic systems of signs. Considerations of form involve such points as the contrast between the pictorial and linear systems or between the monumental and cursive writings.
While, theoretically, both approaches are acceptable, the emphasis placed here on function, rather than form, may be illustrated by considering the following case. From the point of view of function, the Morse alphabet represents the same type of alphabet as the Latin writing and its descendants; from the point of view of form, the Morse alphabet is formally independent of the Latin writing. Based on considerations of function, the Morse alphabet is considered as being of the Latin type, despite its different formal, outer structure.

Formal

There are two kinds of formal approach to the field of grammatology: the traditional approach, as practiced mainly in the philological disciplines under the topic of epigraphy and paleography (see below), and the formal approach to sign analysis recently initiated in the United States. The aim of the latter is to provide a scientific account of cursive writing as practiced in that country. Its procedure consists first of breaking up the letters of the cursive writing into their component segments, which appear in the form of bars, hooks, arches, and loops, and then of providing formation rules governing the linking together of these segments into letters and into larger strings corresponding to words of the language. It is said that 18 such segments are sufficient to describe all English lower- and upper-case letters.

Subdivisions of grammatology

Three main subdivisions of grammatology can be distinguished: subgraphemics, graphemics, and metagraphemics. They are treated mainly by scholars versed in philological and linguistic disciplines.

Subgraphemics

The field of subgraphemics deals mainly with primitive forerunners of writing that utilize visual marks having no set correspondences in language.
In its sublinguistic aspects, subgraphemics can be compared to kinesics, the study of the various communicational aspects of learned, patterned body motion behaviour; and cherology, the study of gesture language.

Graphemics

The field of graphemics deals with full writing or phonography, as represented in systems of writing in which written signs generally have set correspondences in elements of language. The field of graphemics thus deals with writing after it became a secondary transfer of the language, a vehicle by which elements of the spoken language were expressed in a more or less exact form by means of visual signs used conventionally. This took place for the first time about 5,000 years ago in the Sumerian and Egyptian writings.

Instead of “graphemics,” other scholars use the terms “graphics” or “graphic linguistics.” All three terms are frequently misused by scholars who limit the terms to the study of alphabetic writings, overlooking or paying scant attention to all other types of writing, such as the logosyllabic and syllabic systems.

Little work has been done in the field of the relations of writing to language. Philologists have been concerned mainly with the historical evolution of writing and have paid little attention to the interrelations between writing and language. Linguists have been more concerned with the spoken language than with the written language. When interested in written languages, they have often limited their study to living written languages, neglecting the rich sources of information that can be culled from ancient written languages and from pre-alphabetic systems. The question of the relationship of writing to language has been pursued in recent years mainly by scholars with a background in linguistics. Because of their interest in modern languages and writings, their work has generally concerned relations between the alphabet and language. A general treatment of the subject can be found in the respective chapters of the introductory manuals to linguistics.
Linguists generally have stressed the independent character of writing and have studied it as an independent system rather than as a system ultimately based on and related to the underlying language.

While the connections between language and writing are close, there has never been a one-to-one correspondence between the elements of language and the signs of writing. The “fit” (i.e., the correspondence) between language and writing is generally stronger in the earlier stages of a certain system of writing and weaker in its later stages. This is because a writing system when first introduced generally reproduces rather faithfully the underlying phonemic structure (structure of sounds). In the course of time, writing, more conservative than language, generally fails to keep up with the continuous changes of language and, as time progresses, diverges more and more from its linguistic counterpart. A good example is the old Latin writing, with its relatively good “fit” between graphemes (the written letters or groups of letters that represent one phoneme or individual sound) and phonemes, as compared with present-day French or English writing, with their tremendous divergences between graphemes and phonemes. In some cases, recent spelling reforms have helped to remedy the existing discrepancies between writing and language. The best “fit” between phonemes and graphemes has been achieved in the Korean writing of the 15th century and in the Finnish and Czech writings of modern times.

Families of writings are not related to families of languages. Note, for example, that English and Finnish are written in the Latin writing but belong to two different families of languages, and that the cuneiform writing was used in antiquity by peoples speaking many different languages.

The temporal primacy of language over writing has been taken for granted by most scholars, especially the American linguists.
It has been contested by some European scholars, who claim that writing is as old as oral language and gesture language. The fact is that full writing, expressing linguistic elements, originated only about 5,000 years ago in Mesopotamia and Egypt and that full writing is therefore much younger than language. Only if the semasiographic stage is included under writing can the assumption of equal temporal hierarchy of writing and language be admitted. (Semasiography is the use of marks to convey meaning without the presence of linguistic elements.) As noted elsewhere, however, the semasiographic stage should not be treated as full writing, but as a forerunner of writing.

Metagraphemics

A study of the various metagraphic devices (e.g., punctuation marks, capital or italic letter forms) that are used besides or in addition to writing proper may be called metagraphemics or paragraphemics. This is still an obscure field, and its relationship to both subgraphemics and graphemics needs a thorough investigation.

Epigraphy and paleography

The investigation of writing from the formal point of view has been traditionally the prime domain of the epigrapher and paleographer. Epigraphy is concerned mainly with inscriptions written in characters that are incised or scratched with a sharp tool on hard material, such as stone or metal; paleography deals mainly with manuscripts written in characters that are drawn or painted with pen, pencil, or brush on soft material, such as leather, papyrus, or paper. Since epigraphy means “writing upon something” and paleography means “old writing,” it is clear that the distinction made above between epigraphy and paleography cannot be justified on etymological grounds. The distinction has grown artificially over the years, as one scholar or another began to apply one or the other term to his own branch of study of written sources.
Because of the close interrelations between epigraphy and paleography, some scholars refuse to admit any distinction between the two and prefer to use only the term paleography.

The main characteristics of epigraphy and paleography as listed above may be applied, with some leeway, to the ancient Near East (Mesopotamia, Egypt, Anatolia), the classical world, China, India, the Islāmic world, and, in general, to the Western writings from the Middle Ages down to the introduction of the printing press. But there are some difficulties: Mesopotamian and Aegean clay tablets are soft, and the writing is cursively executed, both points characteristic of paleography, but the tablets are also durable and bulky and have incised, concave characters executed with a stylus, all points characteristic of epigraphy. Similarly, the wax tablets of the classical world are soft, perishable, and cursively executed, characteristics of paleography, but are written with concave characters, incised with a stylus, characteristic of epigraphy. There are likewise some difficulties in the classification of tablets of wood; they are soft and perishable, and the writing on them is generally cursive, characteristic of paleography, but they have characters incised or scratched with a sharp implement, characteristic of epigraphy.

Paleography and epigraphy are involved in the study of written sources from two points of view: the purely formal aspect and the hermeneutical (interpretive) aspect. The study of the purely formal aspect, possible without any understanding of the contents or without an extended study of the contents, is concerned, for example, with the kind, form, and size of the materials; the technique of writing; and the form, order, and direction of writing.
Hermeneutics, possible only with study of the contents, is concerned, for example, with the dating and localizing of written sources, their authorship, linguistic interpretation, and content evaluation.

A general scientific discipline of epigraphy and paleography does not exist. There are no studies that treat of the subject from a general, theoretical point of view, encompassing all the written sources, wherever they may be found. There are, for example, no treatises listing and discussing the various materials, or shapes and sizes of materials, used for writing throughout the world, just as there are no structural-typological studies that treat of the formal evolution of signs from pictorial to linear or from round to angular. Among other potential topics that await investigation are trends in ductus (hand; the general shape and style of letters), such as individual, national, and regional; the direction of writing; indication of prosodic features (quantity, stress, and tone); and names of signs (letters). The narrow fields that are represented are, for example, West Semitic epigraphy, Arabic paleography, Greek and Latin epigraphy and paleography, or Chinese epigraphy and paleography. In all cases, these narrow fields of study form subdivisions of wider but still linguistically or geographically defined fields of study, such as Semitic or Arabic philology, classical philology, Assyriology, and Sinology.

History of the study of writing

The first students of writing were doubtless the very originators (“inventors”) of a new writing system. By “writing system” is meant here a full writing in which the individual signs of the writing stand for the corresponding elements of the language, which is to be contrasted with the forerunners of writing, in which the individual signs have but loose connection with language.
As a result of a discrete analysis of the language for which a writing system was devised, lists of the elements of a language and their proposed written counterparts must have been first compiled and experimented with in actual practice. The establishment of a full system of writing also required conventionalization of forms and principles. Forms of signs had to be standardized so that the users would draw the signs in approximately the same way. Regulation of the system had to take place in the matter of the orientation of signs and the direction, form, and order of the lines, columns, and the sides of a text. Correspondences established between signs and words were paralleled by those between signs and definite syllabic values. After the initial period of trial and error, the established correspondences were conventionalized by being taught in schools.

Studies prior to the 18th century

The activities involved in setting up a full system of writing are indirectly attested in the Sumerian school texts, known almost from the beginnings of the Sumerian writing, which appear in the form of lists of signs and words, and scribal and literary exercises. The scribal activities of the Sumerians and, in the later periods, Akkadians are matched, albeit to a smaller degree, as far as actual attestation is concerned, by those of other peoples of the ancient Near East, such as the Egyptians and Hittites.

The study of the language and the corresponding writing was highly developed among the Chinese and Indic peoples, as best exemplified by the great Indic grammarian Pāṇini (about the 4th century BC) and his school, as well as among the Greeks and, to a lesser degree, among the Romans. Beginning in the Middle Ages, the Arabs and Jews showed great interest in matters pertaining to their languages and writings. Important contributions were made by the Arab scholars Sībawayh (8th century) and az-Zamakhsharī (1075–1143) and by the Jews Rashi (1040–1105) and David Kimhi (c. 1160–c. 1235).
Interesting, but largely fantastic, is a collection of several dozen alphabets and sign lists put together by the Arab scholar Aḥmad ibn Abū Bakr ibn Waḥshīyah (c. 800). Among the early European students of writing were the Spanish bishop Diego de Landa (1524–79), with his analysis of the Maya script, and the German scholar Athanasius Kircher (1602–80), with his frequently mystical ideas about writing, especially the Egyptian hieroglyphs.

Modern Western studies

Modern general studies of writing in the West began in the second half of the 18th century. This first group of studies was based almost exclusively on Greco-Latin writing, with its further developments in the Middle Ages and modern times, supplemented by a scattering of Semitic alphabets, such as the Hebrew, Arabic, and Syriac. The books of this first period are of no more than historical interest today.

In contrast to these early studies, the second group consists of the larger and much more serious undertakings of François Lenormant (1872), Heinrich Wuttke (1872), Carl Faulmann (1880), Isaac Taylor (1883), and Philippe Berger (1891). These books became standard manuals in the field of writing and served that function until the first half of the 20th century.

Next in time is a group of studies generally smaller in size and more limited in coverage than those just discussed, represented by Walter James Hoffmann (1895), R. Stübe (1907), Theodore Wilhelm Danzel (1912), Karl Weule (1915), and William A. Mason (1920). On the basis of such monumental works on the American Indian writings as those of Henry R.
Schoolcraft (1851), Garrick Mallery (1886 and 1893), and other similar works, the authors of this group of studies emphasized, much more than did the earlier scholars, the importance of the forerunners of writing in the whole field of the study of writing.

The main characteristic of the next period, the first half of the 20th century, during which the great manuals on writing were produced, was the descriptive-historical approach, as it had been of all the preceding groups of studies on writing. The typological-structural approach to the study of writing appeared in the latter half of the 20th century, along with interest in the relationship of writing to society. Representatives of this and of the previous group are briefly characterized in the bibliography.

Ignace J. Gelb, Ed.

Additional Reading

W.P. Alston, Philosophy of Language (1964), is a good introduction to philosophical semantics. Stephen Ullmann, Semantics: An Introduction to the Science of Meaning (1962), has become a classic work. M. Black, Language and Philosophy (1949), discusses some earlier views. L. Bloomfield, Language (1933), contains a classic discussion of scientific semantics. B.L. Whorf, Language, Thought and Reality, ed. by J.B. Carroll (1956), raises the issue of linguistic relativism. J.J. Katz, The Philosophy of Language (1966), offers a semantic theory tied to generative grammar, the best expression of which is found in N. Chomsky, Aspects of the Theory of Syntax (1965). W.V. Quine, Word and Object (1960), and P. Ziff, Semantic Analysis (1960), represent two different but influential semantic theories.