Compositionality from a “Use-Theoretic” Perspective

Abstract

What does it mean to say that language is a potential-infinite object, as opposed to an actual-infinite one? I want to inquire into the notion of the infinite that could be attributed to language, in such a way that we move closer to an inquiry into language-understanding. Philosophers have attributed to language the property of actual infinity. This has motivated the study of compositional theories of meaning. Compositionality is also supposed to solve the problem of the productivity of language. I will argue that the assumption that languages are actual-infinite objects leads to insurmountable obstacles to putting the notion of understanding back into the picture, and therefore that compositionality doesn't do the job it was hired for. I will follow instead the idea that it is fruitful to analyze the problem of productivity in a way parallel to Wittgenstein's discussion of the infinity of the natural numbers.


    1. Introduction

    What does it mean to say that language is a potential-infinite object, as opposed to an actual-infinite one? What is at stake here? It is not only that our conceptualization of language differs depending on which notion of the infinite we choose. In fact, it is our very understanding in language that changes. Let me elaborate. For the sake of the argument, if one accepts the metaphor that thought takes place in language, then ascribing to language either kind of infinite has consequences for the notion of thought. For thinking as a process is closer to the notion of a potential-infinite language, whereas thinking as a way of perception relates better to the idea of an actual-infinite language. But this is only a metaphor, to be sure, and a misleading one. Its only purpose here is to direct our attention to how we conceptualize language, because this might have consequences for how we put thought back into the picture.

    I am interested in the understanding that takes place when one understands language. So I must distinguish the subject of my inquiry from a study of language as an object, as it occurs in, e.g., typology. It might be convenient to use an expression such as “language-understanding” to refer unambiguously to the understanding that is characteristic of what goes on when we read a book, conduct a conversation, give a speech, write a letter, etc. This understanding is clearly dependent on how one defines its object, namely language. But we must be clear that the definition of language is subsidiary to that which we understand.

    This task is formidable. However, for the purposes of this paper, I only need to show what it does not consist in. To bring home the point, let me resort to an analogy with the case of perception. In this area it is clear, I believe, that it is one thing to investigate our experience of colors and shapes, and quite another to provide an algorithm for mapping 2D arrays of intensity vectors into 3D matrices. For one thing, a 3D matrix is as much in need of interpretation as the original 2D array. An explanation of the mapping just doesn't count as an explanation of the perceptual experience. In the case of language, too, there seems to be a difference between our language-understanding and the non-introspectable mechanisms which are supposed to constitute the language faculty.

    With these clarifications in mind, I want to examine the claim that natural languages are infinite objects. I want to inquire into the notion of the infinite that could be attributed to language, in such a way that we move closer to an inquiry into language-understanding. I will show in the next section that there is no compelling reason for the claim that languages are actual-infinite objects. After a systematic argument to this effect, I will argue in section 3 that the argument for the infinity of language can be “re-analyzed” so as to still throw light on the problem of “language productivity”, but without the negative effects mentioned. To this effect, I will (a) discuss the parallel between language’s (purported) recursive syntax and the successor function on natural numbers; (b) discuss our understanding of natural numbers and their infinity in the light of Wittgenstein’s late philosophy of mathematics; and (c) draw consequences for language-understanding from the discussion in (b).

    2. Languages and Recursive Syntax

    The claim to be discussed here is that language is an actual-infinite object, i.e., an infinite set of sentences. This claim is both surprising and unsurprising. It is unsurprising when it comes to formal languages. The recursion of the syntax with which most formal languages are defined is on a par with the recursion of the successor function on the natural numbers, so the same type of infinity is associated with both cases: traditionally, the actual infinite. However, the claim that language is an infinite object, in the sense of infinity that evokes actuality, is surprising in the case of natural languages. What would support such an ontological claim? Language is infinite, so the received view goes, because it is generated by a recursive syntax. This would explain how, with finitely many resources, language can be infinite. That a natural language such as English is infinite is a "fact" that follows, for instance, from rule (1):

        (1) If S is a sentence of English, then "I believe that S" is a sentence of English
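
    For concreteness, here is a minimal sketch of what rule (1) does when applied mechanically. It is only an illustration: sentences are represented as plain strings, and the function names are mine, not part of any grammar formalism.

        # Illustrative sketch of rule (1); sentences are plain strings.
        def embed_in_belief(sentence):
            """From S, form 'I believe that S' (rule (1))."""
            return "I believe that " + sentence

        def apply_rule(seed, times):
            """Apply rule (1) repeatedly, one new string per application."""
            result = [seed]
            for _ in range(times):
                result.append(embed_in_belief(result[-1]))
            return result

        print(apply_rule("it rains", 2))
        # ['it rains', 'I believe that it rains',
        #  'I believe that I believe that it rains']

    Note that the sketch only shows that the rule yields ever-longer strings; whether each output is thereby a sentence of English is precisely what is at issue below.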

    It must be clear that we are dealing with two different kinds of entities here: rules and languages. But if there is a difference between them, and it is languages that we are interested in, the idea that we analyze languages by means of rules raises the problem of the adequacy of rules: how do we know that these rules are the rules of this language? The only way to answer this question is to have an independent specification of the language (and one that shows that it is infinite) that the rules have to conform to. But since it is such a specification that we are after, an analysis of language in terms of rules only pushes the problem one step back. A move here could be to abandon languages altogether in favor of rules. But this is not a viable move if what we are investigating is language-understanding. For we should ask ourselves what comes first in language-understanding: sentences or "tacitly known" rules? Thus, we are not compelled to accept this argument for the infinity of language. For even if certain recursive rules can generate an unbounded supply of sentences, nothing guarantees that these sentences are sentences of English, and so that English is infinite.

    The adequacy of rules is not the only problem for this argument that language is infinite. Another problem is the far-reaching constraints that we need to place on the notion of a sentence if the argument is to make sense. First, only if we have a theory-independent notion of a sentence can we say that (1) is a fact of language. Second, the notion of a sentence should also be independent of what people actually utter or write. Otherwise the idea of infinitely many sentences is meaningless. But what could be a notion of a sentence that is both theory- and use-independent? Only the notion of a sentence either as a material object or as a platonic object will do. However, there are not infinitely many material objects, so sentences must be platonic objects. But if sentences are platonic objects, how do we understand them? How do we know there are infinitely many of them? What would an argument to this effect look like? At the very least, the argument could not be an empirical one.

    Despite this, philosophers have attributed to language the property of infinity in the actual sense. This has also provided motivation to come up with a compositional theory of meaning. In particular, one of the main issues in (formal) semantics is to "explain" how the meanings of sentences depend on the meanings of words and the way they are put together. Compositionality is also supposed to solve the following, related problem. Along with the observation that people develop mastery of a language, consisting in their ability to understand its sentences, the presumed infinity of language gives rise to the "observation" that people can understand and use infinitely many sentences, in particular, sentences they have never heard before.

    However, the problem of productivity, i.e., how to explain that people can understand and use sentences they have never heard before, is independent of the claim of the infinity of language. This becomes clear from the fact that productivity as such cannot be an argument for the infinity of language. Actually, "productivity" is a misleading term. It is classified as a claim about language, whereas it is a claim about language users (Groenendijk and Stokhof 2005). It says that language users are able to understand and use sentences they haven't heard before. But does this mean that no one uttered or wrote these sentences? Does it mean that there are infinitely many sentences? Since productivity is a claim about language users, it is not clear how it can be transformed into a claim attributing a property to language.

    In the next section I will argue that the puzzlement about recursive syntax that gave rise to the idea of the infinity of language can be studied in a quite illuminating way. It will be illuminating because it will throw light on the notion of productivity.

    3. The Infinity of Natural Numbers

    The methodological strategy suggested is not to use mathematics as an uncritical source of understanding, but as a place where the kind of understanding that we want to conceptualize can be fruitfully discussed. The aim is to conceptualize language in such a way that it becomes perspicuous how we understand it. In particular, we need an account of the fact that we are able to understand sentences we have never heard before. To this effect, we will ask how natural numbers should be conceptualized so that it becomes perspicuous how we understand them. To be sure, the cases of numbers and language are not prima facie on a par. But the analogy might be interesting since it might suggest an improved methodology for the study of language-understanding.

    We start our conceptualization of natural numbers in terms of the ability to write down numerals. The technique is easier to explain with strokes as numerals. Once a stroke for 1 is agreed upon, say |, we define it as the numeral for the number one. The numeral for the successor of a number represented by a given numeral can be obtained by putting another stroke to the right of this numeral. In this way we can construct all the numerals, each of them corresponding to a natural number.
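
    The technique can be rendered as a small sketch, again only for illustration and assuming the strokes are represented as characters in a string:

        # Illustrative sketch of the stroke-numeral technique.
        ONE = "|"                      # the agreed numeral for the number one

        def successor(numeral):
            """Obtain the next numeral by putting one more stroke to the right."""
            return numeral + "|"

        n = ONE
        for _ in range(4):
            n = successor(n)
        print(n)                       # '|||||', the numeral for five

    The loop has to be run anew for each numeral one wants; the technique yields a given numeral only by actually being applied.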

    Two things are important to note. First, this conceptualization does not commit us to actual-infinite entities such as the set of all natural numbers. A technical reason can be found in the existence of strictly finitistic approaches to mathematics (for example van Bendegem 1987). Another reason is manifest in the intelligibility of the distinction between the actual and the potential infinite, which dates back to Aristotle (Physics, book 3, chapter 6; cf. Moore 1991 for discussion).

    The conceptualization of the natural numbers as explained above can be analyzed in the following way (cf. Wittgenstein 1976, p. 31). One may ask how many numerals one has learned to write down. The answer could not be other than ℵ0. For clearly, any technique for writing down numerals that only yields a limited number of numerals is different from our own technique, which is unbounded. Our experience of the technique is that it doesn't get exhausted: numbers are infinite precisely because of this! This shows that we do not survey the totality of numbers a priori; numbers are unlimited in the sense that their totality is not epistemically accessible a priori. If the natural numbers were conceptualized as an actual-infinite set in some platonic realm, our chances of explaining how we know them would grow thinner. For how do we grasp them? How do we find out which properties they have? But, even more importantly, the actual infinite is not the way in which we experience them. The fact that we cannot actually finish the process is what gives us the experience of there being infinitely many of them. We do not survey the totality of the natural numbers in our minds. We have a technique for constructing more and more, but each time this technique has to be applied.

    Now, any explanation of our understanding of natural numbers requires, besides showing how to write down numerals, also showing that one can operate with them, that we can find relations between pairs or tuples of them (e.g., being less than or equal to), etc. But it is clear that the bigger the numbers, i.e., the more strokes their numerals have, the smaller the possibility of doing operations with them (and this is so even for machines, but that is beside the point). This also shows that positing a rule of understanding which is parallel to the successor function is not a fruitful strategy. For one thing, the rule would predict that we understand very big natural numbers in the same way as smaller ones. In fact, the rule would predict that we understand all numbers in the same way. But this just runs against our earlier observation that such similarity breaks down at some point.
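
    To illustrate what "operating" with stroke numerals involves, here is a sketch of one such relation, being less than or equal to, under the same assumed representation (the name leq is mine):

        # Illustrative sketch: compare stroke numerals by crossing out strokes pairwise.
        def leq(a, b):
            """'Less than or equal to' for stroke numerals."""
            while a and b:
                a, b = a[:-1], b[:-1]  # cross out one stroke from each numeral
            return a == ""             # a ran out first, or both ran out together

        print(leq("|||", "|||||"))     # True:  three strokes against five
        print(leq("||||", "||"))       # False: four strokes against two

    The operation works stroke by stroke on the numerals themselves, which is why it becomes less and less surveyable as the numerals grow.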

    The way to bring these observations concerning numbers back to an observation of language is clear (cf. Baker and Hacker 1984; Groenendijk and Stokhof 2005). As masters of language we have a technique for constructing and using sentences. But this technique does not give us a way to survey the totality of sentences in an a priori manner. In each case, when a sentence is presented to a hearer/reader, he can apply his ability without already having understood the sentence beforehand. The hearer can even say "I don't understand that S", where S is what he just heard/read. And this will have a clear meaning in English. But this does not mean that there is a rule of understanding attached to this way of responding, let alone one that applies to the construction of all sentences. The reason is similar to the case of the natural numbers. To understand a sentence requires, among other things, the ability to operate with it, for example, drawing inferential relations. As was the case with numbers, the longer the sentence, the smaller the possibility of operating with it. Accordingly, our language-understanding is not uniform across sentences. Compositionality delivers a wrong "explanation" of our language-understanding, for it asserts that we have an a priori understanding of all of language. As in the case of numbers, this is a wrong prediction.*

    Literature

    1. Baker, G.P. and Hacker, P.M.S. 1984 Language, Sense and Nonsense: A critical investigation into modern theories of language, Oxford: Basil Blackwell.
    2. Van Bendegem, Jean Paul 1987 Finite, Empirical Mathematics: Outline of a Model, Rijksuniversiteit te Gent.
    3. Moore, Adrian 1991 The Infinite, London and New York: Routledge.
    4. Groenendijk, Jeroen and Stokhof, Martin 2005 “Why compositionality?” in: G. Carlson and J. Pelletier (eds.), Reference and Quantification: The Partee Effect, Stanford: CSLI Press, 83-106.
    5. Wittgenstein, Ludwig 1976 Wittgenstein's Lectures on the Foundations of Mathematics, Cambridge, 1939, Cora Diamond (ed.), Sussex: Harvester Press.
    Notes
    * With thanks to Martin Stokhof for his useful guidance in the preparation of this paper.
    Edgar José Andrade-Lotero
