Posted by: Alexandre Borovik | December 1, 2011

Expressive power of natural languages

I will follow the discussion of this question on the Foundations of Mathematics (FOM) mailing list with great interest:

Alex Nowak:

I was wondering if there is any (at least semi-)conclusive view about the expressive power of a natural language like English, resulting in a statement like “whatever it is, it is a language of at least 2nd order”. Of course, I know of Tarski’s comment suspecting natural languages to be somehow (semantically) universal. But what I’m interested in is a hint pointing me in a direction of what to look for, i.e. is the fact that one quantifies over classes in a natural language enough to label it higher-order? Can there be anything wrong with taking it to be at least a many-sorted first-order language?

Robert Black

Well, there are reasonably natural and provably non-first-orderizable sentences like ‘Some critics admire only one another’ (taking this to mean that there are some critics such that anyone admired by one of them is another one of them) or, even more famously, ‘Napoleon is not one of my ancestors’ (assuming that our only access to the concept of ancestor is via the concept of parent). The classic reference on these matters is George Boolos’s ‘To Be is to Be a Value of a Variable (or to Be Some Values of Some Variables)’, J.Phil. 1984.
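[For concreteness, one standard monadic second-order rendering of the Geach–Kaplan sentence (my transcription, with Cx for “x is a critic” and Axy for “x admires y”) is:

\exists X\,[\exists x\,Xx \wedge \forall x\,(Xx \rightarrow Cx) \wedge \forall x\,\forall y\,((Xx \wedge Axy) \rightarrow (x \neq y \wedge Xy))]

which is provably not equivalent to any first-order sentence in C and A alone.]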

Monroe Eskew:

It seems to me that English cannot be classified as any type of formal language.

First, how would many-sorted first-order logic be enough to capture things like tenses, subjunctive moods, commands and exclamations, gerunds, prepositions, adverbs, reference to English itself, etc. in a way that at all resembles the actual structure of spoken English?

Second, the rules of English grammar are somewhat fluid and the language changes over time. Without well-defined syntax, how could it be formal?

Third, unique readability fails: “Bob said Joe saw his friend” — “his” can refer to Bob or to Joe.

Fourth, it is often vague.

Fifth, what rules of grammar or semantics prevent the Berry paradox in English? Nothing; the paradox makes us realize that the intuitive semantics don’t work.
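[For readers who don’t have it to hand, a common formulation of the Berry paradox: let

B = the least natural number not definable in English in fewer than twenty words.

Only finitely many numbers are definable in fewer than twenty words, so B exists; but the phrase just used defines B in thirteen words — contradiction. Nothing in English grammar or semantics blocks the defining phrase.]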

Richard L. “Arf” Epstein:

Ordinary language is not first-order, nor is it second-order. Fragments of it can be formalized in first-order logic and in second-order logic, but only with very strong metaphysical assumptions. Those assumptions are more suspect than any evidence we may have that the use of the language commits us to belief in classes. In my *Predicate Logic* I discuss formalizations of ordinary language in predicate logic and show how the use of second-order quantification appears to be necessary for many of them. But it should be noted that this is against a background of metaphysics of predicate logic, and in any case it is not a demonstration that there cannot be formalizations of those examples in first-order logic, only that there cannot be such formalizations that respect the metaphysics of predicate logic. In my *The Internal Structure of Predicates and Names with an Analysis of Reasoning about Process* (a draft of which is available at www.AdvancedReasoningForum.org) I give strong evidence that English and other European languages are not universal, for they cannot express simple claims about the world as process.

Richard Heck:

Most linguists nowadays take it to be patently obvious that natural language contains the resources for so-called “plural” quantification, as in the so-called Geach-Kaplan sentence, “Some critics admire only one another”, which can be shown not to be first-order expressible. And, as George Boolos observed, plural quantification is inter-translatable with monadic second-order quantification (his own interest being in second-order set theories). In that sense, it is clear that natural languages can express monadic second-order quantification.

In some sense, it’s of course also obvious that any second-order language is inter-translatable with a many-sorted first-order language. Issues in that vicinity are not likely to be empirically resolvable, however. That said, the difference between these perspectives presumably comes down to one’s attitude towards the comprehension axioms: whether one regards them, as in the second-order setting, as logically true, or, as in the first-order setting, as non-logical axioms with no special claim on our credence. Here again, Boolos thinks it is just obvious, and logically so, that, e.g., there are some sets that are not members of themselves, where that is meant to be a plural comprehension axiom: \exists xx\,\forall y\,[y \text{ is among } xx \leftrightarrow y \notin y]. Here “is among” is a logical relation among pluralities and the objects that constitute them.
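[For reference, the plural comprehension schema at issue can be written, with “\prec” abbreviating “is among”, as:

\exists x\,\phi(x) \rightarrow \exists xx\,\forall y\,(y \prec xx \leftrightarrow \phi(y))

On Boolos’s reading each instance is a logical truth about pluralities; on the many-sorted first-order reading the same schema is a non-logical class-existence axiom. This gloss is mine, paraphrasing the paragraph above.]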

A good place to start with this is the Stanford Encyclopedia article on plural quantification: http://plato.stanford.edu/entries/plural-quant/ Boolos’s papers, collected in his /Logic, Logic, and Logic/, are the source for much of this literature. An early work in linguistics is Barry Schein’s /Plurals and Events/.

W. Taylor:

Surely you’re going to have to put *some* restriction on natural language for this to make sense? English with a truth predicate (which natural English most certainly has) is known to be inconsistent (Russell set), so it is, in some sense, of every possible order.

John Kadvany:

Though I’m not a linguist, here’s what I’ve gleaned from some of the literature on mathematical computation and natural language:

– In the ’50s, Chomsky already noted that natural language grammars would not need the full power of general Turing machines. So one framing of the issue is in terms of computational power rather than set-theoretic power, as suggested in the earlier FOM email.

– When Chomsky introduced transformations, that machinery was shown by Rice and others to have potentially the power of a universal Turing machine, hence to be too powerful for linguistic models. There may have been miscommunication here between linguists and mathematicians about what counts as a natural language grammar and about the role of transformations.

– Generally, the complexity of natural languages derives from linguistic intricacies (anaphora, ‘long-distance’ reference, ‘movement’, etc.), not computational complexity. Pullum (of ‘Eskimo snow’ fame), Culicover and others have suggested that there isn’t much need for complex logical machinery to account for linguistic generativity. Context-sensitive rules (already understood and formalized by ancient Indian linguists, cf. Staal) look to be about the maximum complexity level needed; see the toy sketch after this list. Culicover argued in a review that Huddleston and Pullum’s Cambridge Grammar of the English Language (~1700pp, 2002) could effectively be representative of sufficient implicit knowledge of the language. I take this to mean that this large compendium of generic constructions simply gets ‘cut and pasted’ (‘Merge’ may be the term of art) to yield all possible sentences. So the process is ultimately recursive and decidable, but of such messy complexity that a single ‘master formula’ for grammaticality is virtually impossible. My sense is also that Jackendoff, long of the generative school, sees much of linguistic complexity as being handled by human cognition rather than being coded up via some mathematical representation.

– As an observation, many languages can be used to directly formulate additive number words, akin to Roman numerals with a highest unit (e.g. ’50s’ or ‘100s’). Positional, multiplicative place value was a big discovery, not obvious, and perhaps requiring inscription rather than speech alone. So that’s more evidence that at least the ‘natural’ level of natural-language complexity is roughly that of additive arithmetic, and therefore falls short of general computation, including even a single unbounded multiplication operation; a second sketch after this list contrasts the two numeral schemes.

– The linguist Dan Everett argued a couple of years back that the Pirahã (Amazon) language is not even quite recursive. This discussion received a lot of attention and there is literature on both sides of the debate.
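[As a toy illustration of the complexity levels invoked above (mine, not from the thread): { a^n b^n } is context-free but not regular, and { a^n b^n c^n } is context-sensitive but not context-free, yet both are trivially decidable. ‘Context-sensitive’ thus bounds complexity far below universal Turing-machine power.

    # Toy recognizers for the two textbook languages; both run in linear time.
    def is_anbn(s: str) -> bool:
        """Context-free (not regular): { a^n b^n : n >= 0 }."""
        n = len(s) // 2
        return len(s) == 2 * n and s == "a" * n + "b" * n

    def is_anbncn(s: str) -> bool:
        """Context-sensitive (not context-free): { a^n b^n c^n : n >= 0 }."""
        n = len(s) // 3
        return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

    assert is_anbn("aabb") and not is_anbn("abab")
    assert is_anbncn("aabbcc") and not is_anbncn("aabbc")
]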
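[And a sketch of the additive-versus-positional contrast (again my own construction; the unit-word list is a hypothetical simplification): an additive system just sums fixed unit words up to its highest unit, so expressible numbers grow linearly with utterance length, while a positional system multiplies by powers of the base and grows exponentially.

    # Additive numerals, Roman-numeral style: value is the plain sum of the
    # unit words, with a fixed highest unit ("hundred" here).
    ADDITIVE_UNITS = {"hundred": 100, "fifty": 50, "ten": 10, "one": 1}

    def additive_value(words):
        return sum(ADDITIVE_UNITS[w] for w in words)

    # Positional numerals: each digit is scaled by a power of the base.
    def positional_value(digits, base=10):
        value = 0
        for d in digits:
            value = value * base + d
        return value

    assert additive_value(["hundred", "fifty", "ten", "ten", "one"]) == 171
    assert positional_value([1, 7, 1]) == 171
]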

Mark Lance:

This is a follow-up on the line of thought introduced by Monroe. The first question is what one means by “a natural language”. Some mathematicians and some formal linguists try to see a natural language as a fixed structure. But language as a phenomenon – as a social practice – as Monroe notes – involves constant revision. In particular, it is absolutely central to the power and social function of natural language that one can introduce new concepts. This can range from something as mundane as naming a new baby – thereby introducing a new name into circulation – to postulating a theoretical entity to explain some elaborate natural phenomenon. In between – or maybe to the side – we have such wildly impredicative phenomena as characterization of social movements, assignments and criticisms of social identities, descriptions of linguistic habits and practices themselves, etc. If this is right, if this is all essential to language, then the answer to the question whether natural language has the expressive power of some formal language is literally obvious. The “formal language” – that is, all the talk we engage in when we do mathematics – *is* natural language. We say “let x be a structure such that …”. How could any of that noise/marks on paper/etc. mean anything if it weren’t introduced in natural language?

Now on the one hand this might seem merely a verbal quibble: what the original question meant to ask was whether natural language prior to the introduction of formal methods is a language of second order. But there is a substantive issue. It is crucial not to obscure what makes thinking about the power of natural language so hard: precisely the fact that it incorporates everything, in the sense of holding open the potentiality for any expansion whatsoever. In supposing that one can wall off some well-defined sub-practice – “natural language without formal methods” – and (a) treat it as a determinate totality about which questions like “what is its expressive power?” can be asked, while (b) assuming that we have not thereby turned it into something utterly unlike natural language, we may well be obscuring precisely the most interesting and important feature of natural language.

Patrik Eklund:

Is it really a question about the expressive power of (natural) language, or about the expressive power of individuals using a language of their choice?

I believe (at least) in the following two things:

I. Every individual uses a language of their choice. If the language is logical, individuals can provide reasoning, and potentially also dialectics. If not, no reasoning is possible, and everything is basically individual rhetoric (and all operators used in the underlying signature are basically of arity zero).

To be a bit more formal, suppose these individually adopted languages are logical, and two individuals communicate, each using their own individual logic. One individual arrives at or derives a sentence and delivers it as true (in the sense of the semantics as understood or selected by that individual for her/his logic) to the other individual. There must now be some kind of transformation between these two logics, so that the sentence is transformed from one to the other. The transformed sentence so received by the other individual is up for evaluation in the theory embraced by that individual. It may even happen that the transformed sentence is not valid as a sentence in the other logic, so the other individual would reply “I don’t even understand what you are trying to say”. If the transformed sentence is indeed valid as a sentence, but not necessarily as true in the other logic as it is, untransformed, in the first logic, a dialectic may be initiated, in which sentences, transformed back and forth, are delivered within some communication pattern.
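[A minimal sketch of this communication pattern — entirely my own reading of the paragraph above; the names and types are hypothetical, not Eklund’s formalism:

    # Each individual owns a "logic": a test for sentencehood and a notion
    # of truth. A transformation maps a sender's sentence into the
    # receiver's logic, as in the exchange described above.
    from typing import Callable

    class Logic:
        def __init__(self, is_sentence: Callable[[str], bool],
                     holds: Callable[[str], bool]):
            self.is_sentence = is_sentence
            self.holds = holds

    def deliver(sentence: str, transform: Callable[[str], str],
                receiver: Logic) -> str:
        t = transform(sentence)
        if not receiver.is_sentence(t):
            return "I don't even understand what you are trying to say"
        return "accepted" if receiver.holds(t) else "dialectic initiated"

    # Toy receiver: sentences are lowercase strings; truths end in "!".
    toy = Logic(lambda s: s.islower(), lambda s: s.endswith("!"))
    assert deliver("AGREED!", str.lower, toy) == "accepted"
]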

The interesting thing here is the question of who owns the transformation. The respective logic is clearly owned by each individual, but the transformations used are subject to dynamics, as are the respective logics. Basically nothing says anybody has to stay within a logic adopted at some time in the past. Quite the contrary: we usually do whatever it takes to maintain our views as the valid and true ones.

Social choice, for instance, can be extended within this view.

II. Now one may claim everything is basically some-order from the start, so that whatever we do can be boiled down to some-order expressions. Hilbert tried it, and some (!) still believe in his thesis. I neither do nor don’t. I believe in metalanguage and object language, and indeed in the respectful relation between the two, and that we should not be allowed to switch back and forth (some call it being self-referential) or even bypass it as we please. There is a some-order for mathematics, and there is a set theory for mathematicians. From a formal-theory point of view, they developed hand in hand, as Hilbert also said.

Now, on this apparatus we can place e.g. category theory as a language, making it a metalanguage for logic as an object language. My favourite is then the Goguen and Meseguer approach to institutions and entailment systems, a formalism that, by the way, can be further generalized.

Developing signatures and terms, and then closing that shop; then developing sentences based on terms, and closing that shop; and so on, step by step defining your logic, places some interesting restrictions on sentence creation — restrictions that e.g. Gödel wouldn’t like at all.

Not encapsulating “first-order for mathematics and set theory” into what it is really intended for, and allowing it instead to embrace everything and enable self-referentiality, is a huge mistake of mankind.

On Chomsky, I would say it’s not logical. It’s language in the logically restricted sense of involving regular expressions, so basically there are terms, and some may say there are sentences. But there are no inference calculi in these views of “language”.

Connecting “natural language”, as ‘language’ only in the sense of being part of “automata and languages”, is historically a huge mistake in computer science.

Jakub Szymanik:

There is a bunch of expressibility results about quantifier fragments of natural language; for example, “more than half” is not definable in elementary logic, even if you restrict attention to finite universes. One question here is to decide which generalized quantifiers (GQs) are realized in natural language (NL). There has also been a lot of focus on the problem of how much logic is needed to formalize particular fragments of NL, for example some multi-quantifier sentences. A classic example is the debate revolving around so-called Hintikka sentences, like “Some relative of every townsman and some relative of every villager hate each other” (Hintikka) or “Most villagers and most townsmen hate each other” (Barwise). Their branching reading is \Sigma_1^1. A good starting point is the paper by Dag Westerståhl in the Stanford Encyclopedia of Philosophy, see:
http://plato.stanford.edu/entries/generalized-quantifiers/

Another related question might be how many resources are needed for processing. Most quantifier constructions in NL are tractable, but there are still examples of intractable quantifier combinations, see:
http://dx.doi.org/10.1007/s10988-010-9076-z
Moreover, there is research classifying various syllogistic fragments of NL w.r.t. computational complexity, see, e.g.:
http://dx.doi.org/10.1023/B:JLLI.0000024735.97006.5a
As far as collective quantification is concerned, it has recently been observed that the second-order generalized quantifier MOST is not definable in second-order logic, see:
http://dx.doi.org/10.1007/978-3-642-20920-8_20
Our earlier paper discusses some consequences for NL semantics:
http://dx.doi.org/10.1007/s10849-007-9055-0
Finally, there is some research trying to connect mathematical predictions with linguistic and cognitive reality, see e.g.:
http://dx.doi.org/10.1111/j.1551-6709.2009.01078.x
http://dx.doi.org/10.1016/j.neuropsychologia.2005.02.012
http://dx.doi.org/10.1016/j.jcomdis.2011.07.005
http://dx.doi.org/10.1093/jos/ffp008
You may also be interested in two recent survey papers:
http://dx.doi.org/10.1016/B978-0-444-53726-3.00019-0
http://dx.doi.org/10.1016/B978-0-444-53726-3.00020-7
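[A concrete illustration of the first result mentioned above — my own sketch, not from the post: generalized quantifiers can be modelled as relations between finite sets, and the proportional quantifier “more than half” is then trivially computable, even though no fixed first-order sentence over the two predicates defines it on all finite universes.

    # Generalized quantifiers over a finite universe, as set relations.
    def every(A: set, B: set) -> bool:
        return A <= B                      # "All A are B"

    def more_than_half(A: set, B: set) -> bool:
        return len(A & B) > len(A - B)     # |A intersect B| > |A \ B|

    villagers = {"v1", "v2", "v3"}
    assert more_than_half(villagers, {"v1", "v2"})   # 2 > 1
    assert not more_than_half(villagers, {"v1"})     # 1 > 2 fails
]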


Responses

  1. The language of second-order logic extends the language of first-order logic by allowing quantification over predicate symbols and function symbols. For example, in a second-order language for arithmetic, we can say that the natural numbers are well ordered. We know that the well-ordering property is not expressible by any first-order sentence, because the non-standard models of the (first-order) theory of (N; 0, S, <, +, ×) are never well ordered. So going to second-order logic is a genuine extension. That is, we can translate some natural-language sentences, such as “The relation < is a well-ordering,” into the language of second-order logic that are not translatable into the language of first-order logic.
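    [Concretely, one standard second-order formulation (my rendering, with y ≤ z abbreviating y < z ∨ y = z) says that every nonempty set of numbers has a least element:

    \forall X\,[\exists y\,Xy \rightarrow \exists y\,(Xy \wedge \forall z\,(Xz \rightarrow y \leq z))]

    and the non-standard models mentioned above all contain nonempty sets with no least element.]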

  2. For this discussion to make sense at all, at least two important (and related) aspects of a natural language have to be excluded: self-referentiality and not being well defined. However, I would be interested to understand whether the language used in maths books is first-order …

    E.g. Gromov writes:

    Self-referentiality, such as in noun⤾noun, is rather technical, but expressions like “It has been already said that…” (abhorred by logicians) are ubiquitous in all (?) human languages. The presence of self-referential entries in a dictionary is an indicator of it being a language dictionary rather than an “ergo-dictionary” of chess, for example. Also, “understanding” self-referential phrases marks a certain developmental level of an ergosystem. (A computer program which cannot correctly answer the question “Have we already spoken about it?” fails the Turing test.)

    Even the evaluation of a string [as an English sentence] is not straightforward, since every sentence, no matter how ridiculous, becomes palatable within an additional (self-)explanatory framework. Thus, “colorless green ideas sleep furiously” appears on 30,000 Google pages, and while you do not expect to meet “an experienced dentist at the wheel” in print anywhere (except in the pages of an advanced course in English), “an experienced dentist but a novice at the wheel” is a bona fide cartoon character. (This feature is characteristic of natural languages, which have a built-in self-reflection/self-reference ability. An absurd sequence of moves, for example, cannot be included in a true chess game.)
    ….

