There are many benefits to knowing more than one language. For example, it has been shown that aging adults who speak more than one language are less likely to develop dementia.
Additionally, the bilingual brain becomes better at filtering out distractions, and learning multiple languages improves creativity. Evidence also shows that learning subsequent languages is easier than learning the first foreign language.
Why is foreign language study important at the university level?
As an applied linguist, I study how learning multiple languages can have cognitive and emotional benefits. One of these benefits that’s not obvious is that language learning improves tolerance.
This happens in two important ways.
The first is that it opens people’s eyes to ways of doing things that are different from their own, which is called “cultural competence.”
The second is related to the comfort level of a person when dealing with unfamiliar situations, or “tolerance of ambiguity.”
Gaining cross-cultural understanding
Cultural competence is key to thriving in our increasingly globalized world. How specifically does language learning improve cultural competence? The answer can be illuminated by examining different types of intelligence.
Psychologist Robert Sternberg’s research on intelligence describes different types of intelligence and how they are related to adult language learning. What he refers to as “practical intelligence” is similar to social intelligence in that it helps individuals learn nonexplicit information from their environments, including meaningful gestures or other social cues.
Language learning inevitably involves learning about different cultures. Students pick up clues about the culture both in language classes and through meaningful immersion experiences.
Researchers Hanh Thi Nguyen and Guy Kellogg have shown that when students learn another language, they develop new ways of understanding culture through analyzing cultural stereotypes. They explain that “learning a second language involves the acquisition not only of linguistic forms but also ways of thinking and behaving.”
With the help of an instructor, students can critically think about stereotypes of different cultures related to food, appearance and conversation styles.
Dealing with the unknown
The second way that adult language learning increases tolerance is related to the comfort level of a person when dealing with unfamiliar situations, or “tolerance of ambiguity.”
Someone with a high tolerance of ambiguity finds unfamiliar situations exciting, rather than frightening. My research on motivation, anxiety and beliefs indicates that language learning improves people’s tolerance of ambiguity, especially when more than one foreign language is involved.
It’s not difficult to see why this may be so. Conversations in a foreign language will inevitably involve unknown words. It wouldn’t be a successful conversation if one of the speakers constantly stopped to say, “Hang on – I don’t know that word. Let me look it up in the dictionary.” Those with a high tolerance of ambiguity would feel comfortable maintaining the conversation despite the unfamiliar words involved.
Applied linguists Jean-Marc Dewaele and Li Wei also study tolerance of ambiguity and have indicated that those with experience learning more than one foreign language in an instructed setting have more tolerance of ambiguity.
Individuals with higher levels of tolerance of ambiguity have also been found to be more entrepreneurial (i.e., are more optimistic, innovative and don’t mind taking risks).
In the current climate, universities are frequently judged by the salaries of their graduates. Taking it one step further, given the relationship between tolerance of ambiguity and entrepreneurial intention, increased tolerance of ambiguity could lead to higher salaries for graduates, which in turn, I believe, could help increase funding for those universities that require foreign language study.
Those who have devoted their lives to theorizing about and teaching languages would say, “It’s not about the money.” But perhaps it is.
Language learning in higher ed
Most American universities have a minimal language requirement that often varies depending on the student’s major. However, students can typically opt out of the requirement by taking a placement test or providing some other proof of competency.
In contrast to this trend, Princeton recently announced that all students, regardless of their competency when entering the university, would be required to study an additional language.
I’d argue that more universities should follow Princeton’s lead, as language study at the university level could lead to an increased tolerance of the different cultural norms represented in American society, which is desperately needed in the current political climate with the wave of hate crimes sweeping university campuses nationwide.
Knowledge of different languages is crucial to becoming global citizens. As former Secretary of Education Arne Duncan noted,
“Our country needs to create a future in which all Americans understand that by speaking more than one language, they are enabling our country to compete successfully and work collaboratively with partners across the globe.”
Considering the evidence that studying languages as adults increases tolerance in two important ways, the question shouldn’t be “Why should universities require foreign language study?” but rather “Why in the world wouldn’t they?”
Do you remember being taught you should never start your sentences with “And” or “But”?
What if I told you that your teachers were wrong and there are lots of other so-called grammar rules that we’ve probably been getting wrong in our English classrooms for years?
How did grammar rules come about?
To understand why we’ve been getting it wrong, we need to know a little about the history of grammar teaching.
Grammar is how we organise our sentences in order to communicate meaning to others.
Those who say there is one correct way to organise a sentence are called prescriptivists. Prescriptivist grammarians prescribe how sentences must be structured.
Prescriptivists had their day in the sun in the 18th century. As books became more accessible to the everyday person, prescriptivists wrote the first grammar books to tell everyone how they must write.
These self-appointed guardians of the language just made up grammar rules for English, and put them in books that they sold. It was a way of ensuring that literacy stayed out of reach of the working classes.
They took their newly concocted rules from Latin. This was, presumably, to keep literate English out of reach of anyone who wasn’t rich or posh enough to attend a grammar school, which was a school where you were taught Latin.
And yes, that is the origin of today’s grammar schools.
The other camp of grammarians are the descriptivists. They write grammar guides that describe how English is used by different people, and for different purposes. They recognise that language isn’t static, and it isn’t one-size-fits-all.
1. You can’t start a sentence with a conjunction
Let’s start with the grammatical sin I have already committed in this article. You can’t start a sentence with a conjunction.
Obviously you can, because I did. And I expect I will do it again before the end of this article. There, I knew I would!
Those who say it is always incorrect to start a sentence with a conjunction, like “and” or “but”, sit in the prescriptivist camp.
However, according to the descriptivists, at this point in our linguistic history, it is fine to start a sentence with a conjunction in an op-ed article like this, or in a novel or a poem.
It is less acceptable to start a sentence with a conjunction in an academic journal article, or in an essay for my son’s high school economics teacher, as it turns out. But times are changing.
2. You can’t end a sentence with a preposition
Well, in Latin you can’t. In English you can, and we do all the time.
Admittedly, a lot of the younger generation don’t even know what a preposition is, so this rule is already obsolete. But let’s have a look at it anyway, for old times’ sake.
According to this rule, it is wrong to say “Who did you go to the movies with?”
Instead, the prescriptivists would have me say “With whom did you go to the movies?”
I’m saving that structure for when I’m making polite chat with the Queen on my next visit to the palace.
That’s not a sarcastic comment, just a fanciful one. I’m glad I know how to structure my sentences for different audiences. It is a powerful tool. It means I usually feel comfortable in whatever social circumstances I find myself in, and I can change my writing style according to purpose and audience.
That is why we should teach grammar in schools. We need to give our children a full repertoire of language so that they can make grammatical choices that will allow them to speak and write for a wide range of audiences.
3. Put a comma when you need to take a breath
It’s a novel idea, synchronising your writing with your breathing, but the two have nothing to do with one another. If this is the instruction we give our children, it is little wonder commas are so poorly used.
Commas provide demarcation between like grammatical structures. When adjectives, nouns, phrases or clauses are butting up against each other in a sentence, we separate them with a comma. That’s why I put commas between the three nouns and the two clauses in that last sentence.
Commas also provide demarcation for words, phrases or clauses that are embedded in a sentence for effect. The sentence would still be a sentence even if we took those words away. See, for example, the use of commas in this sentence.
4. To make your writing more descriptive, use more adjectives
William Shakespeare died on 23 April 1616, 400 years ago, in the small Warwickshire town of his birth. He was 52 years of age: still young (or youngish, at least) by modern reckonings, though his death mightn’t have seemed to his contemporaries like an early departure from the world.
Most of the population who survived childhood in England at this time were apt to die before the age of 60, and old age was a state one entered at what today might be thought a surprisingly youthful age.
Many of Shakespeare’s fellow-writers had died, or were soon to do so, at a younger age than he: Christopher Marlowe, in a violent brawl, at 29; Francis Beaumont, following a stroke, at 31 (also in 1616: just 48 days, as it happened, before Shakespeare’s own death); Robert Greene, penitent and impoverished, of a fever, in the garret of a shoemaker’s house, at 34; Thomas Kyd, after “bitter times and privy broken passions”, at 35; George Herbert, of consumption, at 39; John Fletcher, from the plague, at 46; Edmund Spenser, “for lack of bread” (so it was rumoured), at 47; and Thomas Middleton, also at 47, from causes unknown.
The cause or causes of Shakespeare’s death are similarly unknown, though in recent years they have become a topic of persistent speculation. Syphilis contracted by visits to the brothels of Turnbull Street, mercury or arsenic poisoning following treatment for this infection, alcoholism, obesity, cardiac failure, a sudden stroke brought on by the alarming news of a family disgrace – that Shakespeare’s son-in-law, Thomas Quiney, husband of his younger daughter, Judith, had been responsible for the pregnancy and death of a young local woman named Margaret Wheeler – have all been advanced as possible factors leading to Shakespeare’s death.
Francis Thackeray, Director of the Institute for Human Evolution at the University of Witwatersrand, believes that cannabis was the ultimate cause of Shakespeare’s death, and has been hoping – in defiance of the famous ban on Shakespeare’s tomb (“Curst be he that moves my bones”, etc.) – to inspect the poet’s teeth in order to confirm this theory. (“Teeth are not bones”, Dr Thackeray somewhat controversially insists.) No convincing evidence, alas, has yet been produced to support any of these theories.
More intriguing than the actual pathology of Shakespeare’s death, however, may be another set of problems that have largely evaded the eye of biographers, though they seem at times – in a wider, more general sense – to have held the poet’s own sometimes playful attention. They turn on the question of fame: how it is constituted; how slowly and indirectly it’s often achieved; how easily it may be delayed, diverted, or lost altogether from view.
No memorial gathering
On 25 April 1616, two days after his death, Shakespeare was buried in the chancel of Holy Trinity Church at Stratford, having earned this modest place of honour as much (it would seem) through his local reputation as a respected citizen as from any deep sense of his wider professional achievements.
No memorial gatherings were held in the nation’s capital, where he had made his career, or, it would seem, elsewhere in the country. The company of players that he had led for so long did not pause (so far as we know) to acknowledge his passing, nor did his patron and protector, King James, whom he had loyally served.
Only one writer, a minor Oxfordshire poet named William Basse, felt moved to offer, at some unknown date following his death, a few lines to the memory of Shakespeare, with whom he may not have been personally acquainted. Hoping that Shakespeare might be interred at Westminster but foreseeing problems of crowding at the Abbey, Basse began by urging other distinguished English poets to roll over in their tombs, in order to make room for the new arrival.
Renownèd Spenser, lie a thought more nigh.
To learned Chaucer; and rare Beaumont, lie
A little nearer Spenser, to make room
For Shakespeare in your threefold, fourfold tomb.
None of these poets responded to Basse’s injunctions, however, and Shakespeare was not to win his place in the Abbey for more than a hundred years, when Richard Boyle, third Earl of Burlington, commissioned William Kent to design and Peter Scheemakers to sculpt a life-size white marble statue of the poet – standing cross-legged, leaning thoughtfully on a pile of books – to adorn Poets’ Corner.
On the wall behind this statue, erected in the Abbey in January 1741, is a tablet with a Latin inscription (perhaps contributed by the poet Alexander Pope) conceding the belated arrival of the memorial: “William Shakespeare,/124 years after his death/ erected by public love”.
Basse’s verses were in early circulation, but not published until 1633. No other poem to Shakespeare’s memory is known to have been written before the appearance of the First Folio in 1623. No effort appears to have been made in the months and years following the poet’s death to assemble a tributary volume, honouring the man and his works. None of Shakespeare’s other contemporaries noted the immediate fact of his passing in any surviving letter, journal, or record. No dispatches, private or diplomatic, carried the news of his death beyond Britain to the wider world.
Why did the death of Shakespeare cause so little public grief, so little public excitement, in and beyond the country of his birth? Why wasn’t his passing an occasion for widespread mourning, and widespread celebration of his prodigious achievements? What does this curious silence tell us about Shakespeare’s reputation in 1616; about the status of his profession and the state of letters more generally in Britain at this time?
A very quiet death
Shakespeare’s death occurred upon St George’s Day. That day was famous for the annual rites of prayer, procession, and feasting at Windsor by members of the Order of the Garter, England’s leading chivalric institution, founded in 1348 by Edward III. Marking as it did the anniversary of the supposed martyrdom in AD 303 of St George of Cappadocia, St George’s Day was celebrated in numerous countries in and beyond Europe, as it is today, but had emerged somewhat bizarrely in late mediaeval times as a day of national significance in England.
On St George’s Day 1616, as Shakespeare lay dying in far-off Warwickshire, King James – seemingly untroubled by prior knowledge of this event – was entertained in London by a poet of a rather different order named William Fennor.
Fennor was something of a royal favourite, famed for his facetious contests in verse, often in the King’s presence, with the Thames bargeman, John Taylor, the so-called Water Poet: a man whom James – as Ben Jonson despairingly reported to William Drummond – reckoned to be the finest poet in the kingdom.
In the days and weeks that followed, as the news of the poet’s death (one must assume) filtered gradually through to the capital, there is no recorded mention in private correspondence or official documents of Shakespeare’s name. Other more pressing matters were now absorbing the nation. Shakespeare had made a remarkably modest exit from the theatre of the world: largely un-applauded, largely unobserved. It was a very quiet death.
An age of public mourning
The silence that followed the death of Shakespeare is the more remarkable coming as it did in an age that had developed such elaborate rituals of public mourning, panegyric, and commemoration, most lavishly displayed at the death of a monarch or peer of the realm, but also occasionally set in train by the death of an exceptional commoner.
Consider the tributes paid to another great writer of the period, William Camden, antiquarian scholar and Clarenceux herald of arms, who died in London in late November 1623; a couple of weeks, as chance would have it, after the publication of Shakespeare’s First Folio.
Portrait of William Camden by Marcus Gheeraerts the Younger (1609). Wikimedia Commons
Camden was a man of quite humble social origins – like Shakespeare himself, whose father was a maker of gloves and leather goods in Stratford. Camden’s father was a painter-stainer, whose job it was to decorate coats of arms and other heraldic devices. By the time of his death Camden was widely recognized, in Britain and abroad, as one of the country’s outstanding scholars.
Eulogies were delivered at Oxford and published along with other tributes in a memorial volume soon after his death. At Westminster his body was escorted to the Abbey on 19 November by a large retinue of mourners, led by 26 poor men wearing gowns, followed by soberly attired gentlemen, esquires, knights, and members of the College of Arms, the hearse being flanked by earls, barons, and other peers of the realm, together with the Lord Keeper, Bishop John Williams, and other divines. Camden’s imposing funeral mirrored on a smaller scale the huge procession of 1,600 mourners which in 1603 had accompanied the body of Elizabeth I to its final resting place in the Abbey.
There were particular reasons, then, why Camden should have been accorded a rather grand funeral of his own. But mightn’t there have been good reasons for Shakespeare, likewise – whom we see today as the outstanding writer of his age – to have been honoured at his death in a suitably ceremonious fashion? It’s curious to realize, however, that Shakespeare at the time of his death wasn’t yet universally seen as the outstanding writer of his age.
At this quite extraordinary moment in the history of English letters and intellectual exchange there was more than one contender for that title. William Camden himself – an admired poet in addition to his other talents, and friend and mentor of other poets of the day – had included Shakespeare’s name in a list, published in 1614, of “the most pregnant wits of these our times, whom succeeding ages may justly admire”, placing him, without differentiation, alongside Edmund Spenser, John Owen, Thomas Campion, Michael Drayton, George Chapman, John Marston, Hugh Holland and Ben Jonson, the last two of whom he had taught at Westminster School.
But it was another poet, Sir Philip Sidney, whom Camden had befriended during his student days at Oxford, that he most passionately admired, and continued to regard – following Sidney’s early death at the age of 32 in 1586 – as the country’s supreme writer. “Our Britain is the glory of earth and its precious jewel,/ But Sidney was the precious jewel of Britain”, Camden had written in a memorial poem in Latin mourning his friend’s death.
No commoner poet in England had ever been escorted to his grave with such pomp as was furnished for Sidney’s funeral at St Paul’s Cathedral, London, on 16 February 1587.
The 700-man procession was headed by 32 poor men, representing the number of years that Sidney had lived, with fifes and drums “playing softly” beside them. They were followed by trumpeters and gentlemen and yeomen servants, physicians, surgeons, chaplains, knights and esquires, heralds bearing aloft Sidney’s spurs and gauntlet, his helm and crest, his sword and targe, his coat of arms. Then came the hearse containing Sidney’s body. Behind them walked the chief mourner, Philip’s young brother, Robert, accompanied by the Earls of Leicester, Pembroke, Huntingdon, and Essex, followed by representatives from the states of Holland and Zealand. Next came the Lord Mayor and Aldermen of the City of London, with 120 members of the Company of Grocers, and, at the rear of the procession, “citizens of London practised in arms, about 300, who marched three by three”.
Sidney’s funeral was a moving salute to a man who was widely admired not just for his military, civic and diplomatic virtues, but as the outstanding writer of his day. He fulfilled in exemplary fashion, as Shakespeare curiously did not, the Renaissance ideal of what a poet should strive to be.
In an extraordinary act of homage not before seen in England, but soon to be commonly followed at the death of distinguished writers, the Universities of Oxford and Cambridge produced three volumes of Latin verse lauding Sidney’s achievements, while a fourth volume of similar tributes was published by the University of Leiden. The collection from Cambridge presented contributions from 63 Cambridge men, together with a sonnet in English by King James VI of Scotland, the future King James I of Britain.
Earlier English poets had been mourned at their passing, if not in these terms and not on this scale, then with more enthusiasm than was evident at the death of Shakespeare. Edmund Spenser at his death in 1599 was buried in Westminster Abbey next to Chaucer, “this hearse being attended by poets, and mournful elegies and poems with the pens that wrote them thrown into his tomb”. The deaths of Thomas Wyatt and Michael Drayton were similarly lamented.
When, 21 years after Shakespeare’s death, his former friend and colleague Ben Jonson came at last to die, the crowd that gathered at his house in Westminster to accompany his body to his grave in the Abbey included “all or the greatest part of the nobility and gentry then in the town”. Within months of his death a volume of 33 poems was in preparation and a dozen additional elegies had appeared in print. Jonson was hailed at his death as “king of English poetry”, as England’s “rare arch-poet”. With his death, as more than one memorialist declared, English poetry itself now seemed also to have died. No one had spoken in these terms at the death of Shakespeare.
To take one last example: at the death in 1643 of the dramatist William Cartwright – whose works and whose very name are barely known to most people today – Charles I elected to wear black, remarking that
since the muses had so much mourned for the loss of such a son, it would be a shame for him not to appear in mourning for the loss of such a subject.
At the death of Shakespeare in 1616 James had shown no such minimal courtesy.
Why should Shakespeare at his death have been so neglected? One simple answer is that King James, unlike his son, Charles, had no great passion for the theatre, and no very evident regard for Shakespeare’s genius. Early in his reign, so Dudley Carleton reported,
The first holy days we had every night a public play in the great hall, at which the King was ever present, and liked or disliked as he saw cause: but it seems he takes no extraordinary pleasure in them.
But Shakespeare and his company were not merely royal servants, bound to provide a steady supply of dramatic entertainment at court; they also catered for the London public who flocked to see their plays at Blackfriars and the Globe, and who had their own ways of expressing their pleasure, their frustrations, and – at the death of a player – their grief.
When Richard Burbage, the principal actor for the King’s Men, died on 9 March 1619, just seven days after the death of Queen Anne, the London public were altogether more upset by that event than they had been over the death of the Queen, as one contemporary writer – quoting, ironically, the opening lines of Shakespeare’s 1 Henry VI – tartly observed.
So it’s necessary, I think, to pose a further question. Why should the death of Burbage have affected the London public more profoundly than the death not merely of the Queen but of the dramatist whose work he so skilfully interpreted?
I believe the answer lies, partly at least, in the status of the profession to which Shakespeare belonged, a profession which didn’t yet have a regular name: the very words playwright and dramatist not entering the language until half a century after Shakespeare’s death.
Prominent actors at this time were far better known to the public than the writers who provided their livelihood. The writers were on the whole invisible people, who worked as backroom boys, often anonymously and in small teams; playgoers had no easy way of discovering their identity. Theatre programmes didn’t yet exist. Playbills often announced the names of leading actors, but not until the very last decade of the 17th century did they include the names of authors.
Only a fraction of the large number of plays performed in this period moreover found their way into print, and those that were published didn’t always disclose the names of their authors.
At the time of Shakespeare’s death half of his plays weren’t yet available in print, and there were no known plans to produce a collected edition of his works. The total size and shape of the canon were therefore still imperfectly known. Shakespeare was not yet fully visible.
In 1616 the world didn’t yet realise what it had got, or who it was that it had lost. Hence, I believe, the otherwise inexplicable silence at his passing.
To the Memory of My Beloved
At the time of Shakespeare’s death another English writer was arguably better known to the general public than Shakespeare himself, and more highly esteemed by the brokers of power at King James’s court. That writer was Shakespeare’s friend and colleague Ben Jonson, who early in 1616 had been awarded a pension of one hundred marks to serve as King James’s laureate poet.
A first folio edition of Shakespeare’s collected plays was finally published in London with Jonson’s assistance and oversight in 1623. This monumental volume at last gave readers in England some sense of the wider reach of Shakespeare’s theatrical achievement, and laid the essential foundations of his modern reputation.
At the head of this volume stand two poems by Ben Jonson: the second, To the Memory of My Beloved, the Author, Mr William Shakespeare, and What He Hath Left Us, assesses the achievement of this extraordinary writer. Shakespeare had been praised during his lifetime as a “sweet”, “mellifluous”, “honey-tongued”, “honey-flowing”, “pleasing” writer. No one until this moment had presented him in the astounding terms that Jonson here proposes: as the pre-eminent figure, the “soul” and the “star” of his age; and as something even more than that: as one who could be confidently ranked with the greatest writers of antiquity and of the modern era.
Triumph, my Britain, thou hast one to show
To whom all scenes of Europe homage owe,
He was not of an age, but for all time!
Today, 400 years on, that last line sounds like a truism, for Shakespeare’s fame has indeed endured. He is without doubt the most famous writer the world has ever seen. But in 1623 this was a bold and startling prediction. No one before that date had described Shakespeare’s achievement in such terms as these.
This is an edited version of a public lecture given at the University of Melbourne.
On the 400th anniversary of Shakespeare’s death, the Faculty of Arts at the University of Melbourne is establishing the Shakespeare 400 Trust to raise funds to support the teaching of Shakespeare at the University into the future. For more information, or if you would like to support the Shakespeare 400 Trust, please contact Julie du Plessis at firstname.lastname@example.org
Unlikely as it sounds, the topic of adjective use has gone “viral”. The furore centres on the claim, taken from Mark Forsyth’s book The Elements of Eloquence, that adjectives appearing before a noun must appear in the following strict sequence: opinion, size, age, shape, colour, origin, material, purpose, Noun. Even the slightest attempt to disrupt this sequence, according to Forsyth, will result in the speaker sounding like a maniac. To illustrate this point, Forsyth offers the following example: “a lovely little old rectangular green French silver whittling knife”.
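Forsyth’s claim is, in effect, a sorting rule: give each adjective category a rank, then order adjectives by ascending rank before the noun. As a minimal sketch (the tiny lexicon of adjective-to-category mappings here is my own illustrative assumption, not Forsyth’s), it might look like this in Python:

```python
# Forsyth's proposed sequence of prenominal adjectives:
# opinion < size < age < shape < colour < origin < material < purpose
CATEGORY_RANK = {
    "opinion": 0, "size": 1, "age": 2, "shape": 3,
    "colour": 4, "origin": 5, "material": 6, "purpose": 7,
}

# Toy lexicon for illustration only -- a real system would need
# category information for every adjective it might encounter.
LEXICON = {
    "lovely": "opinion", "little": "size", "old": "age",
    "rectangular": "shape", "green": "colour", "french": "origin",
    "silver": "material", "whittling": "purpose",
}

def order_adjectives(adjectives):
    """Return the adjectives sorted into Forsyth's proposed sequence."""
    return sorted(adjectives, key=lambda a: CATEGORY_RANK[LEXICON[a.lower()]])

print(order_adjectives(["green", "old", "lovely", "silver",
                        "rectangular", "little", "French", "whittling"]))
# → ['lovely', 'little', 'old', 'rectangular', 'green', 'French',
#    'silver', 'whittling']
```

Scrambled in any order, the eight adjectives come out as Forsyth’s “lovely little old rectangular green French silver whittling knife”.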
But is the “rule” worthy of an internet storm – or is it more of a ripple in a teacup? Well, certainly the example is a rather unlikely sentence, and not simply because whittling knives are not in much demand these days – ignoring the question of whether they can be both green and silver. This is because it is unusual to have a string of attributive adjectives (ones that appear before the noun they describe) like this.
More usually, speakers of English break up the sequence by placing some of the adjectives in predicative position – after the noun. Not all adjectives, however, can be placed in either position. I can refer to “that man who is asleep” but it would sound odd to refer to him as “that asleep man”; we can talk about the “Eastern counties” but not the “counties that are Eastern”. Indeed, our distribution of adjectives both before and after the noun reveals another constraint on adjective use in English – a preference for no more than three before a noun. An “old brown dog” sounds fine, a “little old brown dog” sounds acceptable, but a “mischievous little old brown dog” sounds plain wrong.
Rules, rules, rules
Nevertheless, however many adjectives we choose to employ, they do indeed tend to follow a predictable pattern. While native speakers intuitively follow this rule, most are unaware that they are doing so; we agree that the “red big dog” sounds wrong, but don’t know why. In order to test this intuition, linguists have analysed large corpora of electronic data, to see how frequently pairs of adjectives like “big red” are preferred to “red big”. The results confirm our native intuition, although the figures are not as overwhelming as we might expect – the rule accounts for 78% of the data.
But while linguists have been able to confirm that there are strong preferences in the ordering of pairs of adjectives, no such statistics have been produced for longer strings. Consequently, while Forsyth’s rule appears to make sense, it remains an untested, hypothetical, large, sweeping (sorry) claim.
In fact, even if we stick to just two adjectives it is possible to find examples that appear to break the rule. The “big bad wolf” of fairy tale, for instance, shows the size adjective preceding the opinion one; similarly, “big stupid” is more common than “stupid big”. Examples like these are instead witness to the “Pollyanna Principle”, by which speakers prefer to present positive, or indifferent, values before negative ones.
Another problem with Forsyth’s proposed ordering sequence is that it makes no reference to other constraints that influence adjective order, such as when we use two adjectives that fall into the same category. Little Richard’s song “Long Tall Sally” would have sounded strange if he had called it “Tall Long Sally”, but these are both adjectives of size.
Similarly, we might describe a meal as “nice and spicy” but never “spicy and nice” – reflecting a preference for the placement of general opinions before more specific ones. We also need to bear in mind the tendency for noun phrases to become lexicalised – forming words in their own right. Just as a blackbird is not any kind of bird that is black, a little black dress does not refer to any small black dress but one that is suitable for particular kinds of social engagement.
Since speakers view a “little black dress” as a single entity, its order is fixed; as a result, modifying adjectives must precede “little” – a “polyester little black dress”. This means that an adjective specifying its material appears before those referring to size and colour, once again contravening Forsyth’s rule.
Making sense of language
Of course, the rule is a fair reflection of much general usage – although the reasons behind this complex set of constraints in adjective order remain disputed. Some linguists have suggested that it reflects the “nouniness” of an adjective; since colour adjectives are commonly used as nouns – “red is my favourite colour” – they appear close to that slot.
Another conditioning factor may be the degree to which an adjective reflects a subjective opinion rather than an objective description – therefore, subjective adjectives that are harder to quantify (boring, massive, middle-aged) tend to appear further away from the noun than more concrete ones (red, round, French).
Prosody, the rhythm and sound of speech, is likely to play a role, too – as there is a tendency for speakers to place longer adjectives after shorter ones. But probably the most compelling theory links adjective position with semantic closeness to the noun being described; adjectives that are closely related to the noun in meaning, and are therefore likely to appear frequently in combination with it, are placed closest, while those that are less closely related appear further away.
In Forsyth’s example, it is the knife’s whittling capabilities that are most significant – distinguishing it from a carving, fruit or butter knife – while its loveliness is hardest to define (what are the standards for judging the loveliness of a whittling knife?) and thus most subjective. Whether any slight reorganisation of the other adjectives would really prompt your friends to view you as a knife-wielding maniac is harder to determine – but then, at least it’s just a whittling knife.
For the first time in history a truly global language has emerged. English enables international communication par excellence, with a far wider reach than other possible candidates for this position – like Latin in the past, and French, Spanish and Mandarin in the present.
In a memorable phrase, former Tanzanian statesman Julius Nyerere once characterised English as the Kiswahili of the world. In Africa, English is more widely spoken than other important lingua francas like Kiswahili, Arabic, French and Portuguese, with at least 26 countries using English as one of their official languages.
But English in Africa comes in many different shapes and forms. It has taken root in an exceptionally multilingual context, with well over a thousand languages spoken on the continent. The influence of this multilingualism tends to be largely erased at the most formal levels of use – for example, in the national media and in higher educational contexts. But at an everyday level, the Queen’s English has had to defer to the continent’s rich abundance of languages. Pidgin, creole, second-language and first-language English all flourish alongside them.
The birth of new languages
English did not enter Africa as an innocent language. Its history is tied up with trade and exploitation, capitalist expansion, slavery and colonisation.
As the need for communication arose and increased under these circumstances, forms of English, known as pidgins and creoles, developed. This took place within a context of unequal encounters, a lack of sustained contact with speakers of English and an absence of formal education. Under these conditions, English words were learnt and attached to an emerging grammar that owed more to African languages than to English.
A pidgin is defined by linguists as an initially simple form of communication that arises from contact between speakers of disparate languages who have no other means of communication in common. Pidgins, therefore, do not have mother-tongue speakers. The existence of pidgins in the early period of West African–European contact is not well documented, and some linguists, like Salikoko Mufwene, judge their early significance to be overestimated.
Pidgins can become more complex if they take on new functions. They are relabelled creoles if, over time and under specific circumstances, they become fully developed as the first language of a group of speakers.
Ultimately, pidgins and creoles develop grammatical norms that are far removed from the colonial forms that partially spawned them: to a British English speaker listening to a pidgin or creole, the words may seem familiar in form, but not always in meaning.
Linguists pay particular attention to these languages because they afford them the opportunity to observe creativity at first hand: the birth of new languages.
The creoles of West Africa
West Africa’s creoles are of two types: those that developed outside Africa; and those that first developed from within the continent.
The West African creoles that developed outside Africa emerged out of the multilingual and oppressive slave experience in the New World. They were then brought to West Africa after 1787 by freed slaves repatriated from Britain, North America and the Caribbean. “Krio” was the name given to the English-based creole of slaves freed from Britain who were returned to Sierra Leone, where they were joined by slaves released from Nova Scotia and Jamaica.
Some years after that, in 1821, Liberia was established as an African homeland for freed slaves from the US. These men and women brought with them what some linguists call “Liberian settler English”. This particular creole continues to make Liberia somewhat special on the continent, with American rather than British forms of English dominating there.
These languages from the New World were very influential in their new environments, especially over the developing West African pidgin English.
A more recent, homegrown type of West African creole has emerged in the region. This West African creole is spreading in the context of urban multilingualism and changing youth identities. Over the past 50 years, it has grown spectacularly in Ghana, Cameroon, Equatorial Guinea and Sierra Leone, and it is believed to be the fastest-growing language in Nigeria. In this process pidgin English has been expanded into a creole, used as one of the languages of the home. For such speakers, the designation “pidgin” is now a misnomer, although it remains widely used.
In East Africa, in contrast, the strength and historicity of Kiswahili as a lingua franca prevented the rapid development of pidgins based on colonial languages. There, traders and colonists had to learn Kiswahili for successful everyday communication. This gave locals more time to master English as a fully-fledged second language.
Other varieties of English
Africa, mirroring the trend in the rest of the world, has a large and increasing number of second-language English speakers. Second-language varieties of English are mutually intelligible with first-language versions, while showing varying degrees of difference in accent, grammar and nuance of vocabulary. Formal colonisation and the educational system from the 19th century onwards account for the wide spread of second-language English.
What about first-language varieties of English on the continent? The South African variety looms large in this history, showing similarities with English in Australia and New Zealand, especially in details of accent.
In post-apartheid South Africa many young black people from middle-class backgrounds now speak this variety either as a dominant language or as a “second first-language”. But for most South Africans English is a second language – a very important one for education, business and international communication.
For family and cultural matters, African languages remain of inestimable value throughout the continent.
As a young adult in college, I decided to learn Japanese. My father’s family is from Japan, and I wanted to travel there someday.
However, many of my classmates and I found it difficult to learn a language in adulthood. We struggled to connect new sounds and a dramatically different writing system to the familiar objects around us.
It wasn’t so for everyone. There were some students in our class who were able to acquire the new language much more easily than others.
So, what makes some individuals “good language learners”? And do such individuals have a “second language aptitude”?
What we know about second language aptitude
Past research on second language aptitude has focused on how people perceive sounds in a particular language and on more general cognitive processes such as memory and learning abilities. Most of this work has used paper-and-pencil and computerized tests to determine language-learning abilities and predict future learning.
Researchers have also studied brain activity as a way of measuring linguistic and cognitive abilities. However, much less is known about how brain activity predicts second language learning.
Is there a way to predict aptitude for second language learning?
In a recently published study, Chantel Prat, associate professor of psychology at the Institute for Learning and Brain Sciences at the University of Washington, and I explored how brain activity recorded at rest – while a person is relaxed with their eyes closed – could predict the rate at which a second language is learned among adults who spoke only one language.
Studying the resting brain
Resting brain activity is thought to reflect the organization of the brain, and it has been linked to intelligence – the general ability to reason and solve problems.
We measured brain activity obtained from a “resting state” to predict individual differences in the ability to learn a second language in adulthood.
To do that, we recorded five minutes of eyes-closed resting-state electroencephalography, a method that detects electrical activity in the brain, in young adults. We also administered two hours of paper-and-pencil and computerized tasks.
We then had 19 participants complete eight weeks of French language training using a computer program. This software was developed by the U.S. armed forces with the goal of getting military personnel functionally proficient in a language as quickly as possible.
The software combined reading, listening and speaking practice with game-like virtual reality scenarios. Participants moved through the content in levels organized around different goals, such as being able to communicate with a virtual cab driver by finding out if the driver was available, telling the driver where their bags were and thanking the driver.
Nineteen adult participants (18-31 years of age) completed two 30-minute training sessions per week for a total of 16 sessions. After each training session, we recorded the level that each participant had reached. At the end of the experiment, we used that level information to calculate each individual’s learning rate across the eight-week training.
As expected, there was large variability in the learning rate, with the best learner moving through the program more than twice as quickly as the slowest learner. Our goal was to figure out which (if any) of the measures recorded initially predicted those differences.
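The article does not say exactly how learning rate was computed from the level records. One natural sketch – a hypothetical formula, not necessarily the study’s – is the least-squares slope of each participant’s level against session number:

```python
def learning_rate(levels):
    """Least-squares slope of level against session number:
    levels gained per training session, on average."""
    n = len(levels)
    xs = range(1, n + 1)
    mx = (n + 1) / 2            # mean session number
    my = sum(levels) / n        # mean level reached
    num = sum((x - mx) * (y - my) for x, y in zip(xs, levels))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical level records over the study's 16 sessions
# for a fast and a slow learner.
fast = [1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23]
slow = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8]

print(learning_rate(fast) / learning_rate(slow))  # fast learner ~3x quicker
```

A ratio above 2 between the fastest and slowest learner is what the paragraph above describes as “more than twice as quickly”.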
A new brain measure for language aptitude
When we correlated our measures with learning rate, we found that patterns of brain activity that have been linked to linguistic processes predicted how easily people could learn a second language.
Patterns of activity over the right side of the brain predicted upwards of 60 percent of the differences in second language learning across individuals. This finding is consistent with previous research showing that the right half of the brain is more frequently used with a second language.
Our results suggest that the majority of the language learning differences between participants could be explained by the way their brain was organized before they even started learning.
Implications for learning a new language
Does this mean that if you, like me, don’t have a “quick second language learning” brain you should forget about learning a second language?
First, it is important to remember that 40 percent of the difference in language learning rate still remains unexplained. Some of this is certainly related to factors like attention and motivation, which are known to be reliable predictors of learning in general, and of second language learning in particular.
Second, we know that people can change their resting-state brain activity. So training may help to shape the brain into a state in which it is more ready to learn. This could be an exciting future research direction.
Second language learning in adulthood is difficult, but the benefits are large for those who, like myself, are motivated by the desire to communicate with others who do not speak their native tongue.
The British Council is to open a bilingual pre-school in Hong Kong in August. The International Pre-School, which will teach English and Cantonese and have specific times set aside for Mandarin, will follow the UK-based International Primary Curriculum.
The British Council already has bilingual pre-schools in Singapore and Madrid. The adoption of a bilingual model of early years learning, rather than a purely English-medium one, is supported by much of the research on this age group. In a randomised control trial in the US state of New Jersey, for example, three- and four-year-olds from both Spanish- and English-speaking backgrounds were assigned by lottery to either an all-English or English–Spanish pre-school programme which used an identical curriculum. The study found that children from the bilingual programme emerged with the same level of English as those in the English-medium one, but both the Spanish-speaking and anglophone children had a much higher level of Spanish.
When an army deploys in a foreign country, there are clear advantages if the soldiers are able to speak the local language or dialect. But what if your recruits are no good at other languages? In the UK, where language learning in schools and universities is facing a real crisis, the British army began to see this as a serious problem.
In a new report on the value of languages, my colleagues and I showcased how a new language policy instituted last year within the British Army was triggered by a growing appreciation of the risks of language shortages for national security.
Following the conflicts in Iraq and Afghanistan, the military sought to implement language skills training as a core competence. Speakers of other languages are encouraged to take examinations to register their language skills, whether they are language learners or speakers of heritage or community languages.
The UK Ministry of Defence’s Defence Centre for Language and Culture also offers training to NATO standards across the four language skills – listening, speaking, reading and writing. Core languages taught are Arabic, Dari, Farsi, French, Russian, Spanish and English as a foreign language. Cultural training that provides regional knowledge and cross-cultural skills is still embryonic, but developing fast.
There are two reasons why this is working. The change was directed by the vice chief of the defence staff, and therefore had a high-level champion. There are also financial incentives for army personnel to have their linguistic skills recorded, ranging from £360 for a lower-level western European language to £11,700 for a high-level, operationally vital linguist. Currently, any army officer must have a basic language skill to command a sub-unit.
We should not, of course, overstate the progress made. The numbers of Ministry of Defence linguists for certain languages, including Arabic, are still precariously low and, according to recent statistics, there are no speakers of Ukrainian or Estonian classed at level three or above in the armed forces. But, crucially, the organisational culture has changed and languages are now viewed as an asset.
The British military’s new approach is a good example of how an institution can change the culture of the way it thinks about languages. It’s also clear that language policy can no longer simply be a matter for the Department for Education: champions for language both within and outside government are vital for issues such as national security.
This is particularly important because of the fragmentation of language learning policy within the UK government, despite an informal cross-Whitehall language focus group.
Experience on the ground illustrates the value of cooperation when it comes to security. For example, in January, the West Midlands Counter Terrorism Unit urgently needed a speaker of a particular language dialect to assist with translating communications in an ongoing investigation. The MOD was approached and was able to source a speaker within another department.
There is a growing body of research demonstrating the cost to business of the UK’s lack of language skills. Much less is known about their value to national security, defence and diplomacy, conflict resolution and social cohesion. Yet language skills have to be seen as an asset, and appreciation is needed across government for their wider value to society and security.
After Brexit, there are various things that some in the EU hope to see and hear less in the future. One is Nigel Farage. Another is the English language.
In the early hours of June 24, as the referendum outcome was becoming clear, Jean-Luc Mélenchon, left-wing MEP and French presidential candidate, tweeted that “English cannot be the third working language of the European parliament”.
This is not the first time that French and German opinion has weighed in against alleged disproportionate use of English in EU business. In 2012, for example, a similar point was made about key eurozone recommendations from the European Commission being published initially “in a language which [as far as the Euro goes] is only spoken by less than 5m Irish”. With the number of native speakers of English in the EU set to drop from 14% to around 1% of the bloc’s total with the departure of the UK, this point just got a bit sharper.
Official EU language policy is multilingualism with equal rights for all languages used in member states. It recommends that “every European citizen should master two other languages in addition to their mother tongue” – Britain’s abject failure to achieve this should make it skulk away in shame.
The EU recognises 24 “official and working” languages, a number that has mushroomed from the original four (Dutch, French, German and Italian) as more countries have joined. All EU citizens have a right to access EU documents in any of those languages. This calls for a translation team numbering around 2,500, not to mention a further 600 full-time interpreters. In practice most day-to-day business is transacted in either English, French or German and then translated, but it is true that English dominates to a considerable extent.
The preponderance of English has nothing to do with the influence of Britain or even Britain’s membership of the EU. Historically, the expansion of the British empire, the impact of the industrial revolution and the emergence of the US as a world power have embedded English in the language repertoire of speakers across the globe.
Unlike Latin, which outlived the Roman empire as the lingua franca of medieval and renaissance Europe, English of course has native speakers (who may be unfairly advantaged), but it is those who have learned English as a foreign language – “Euro-English” or “English as a lingua franca” – who now constitute the majority of users.
According to the 2012 Special Eurobarometer on Europeans and their Languages, English is the most widely spoken foreign language in 19 of the member states where it is not an official language. Across Europe, 38% of people speak English well enough as a foreign language to have a conversation, compared with 12% for French and 11% for German.
The report also found that 67% of Europeans consider English the most useful foreign language, and that the numbers favouring German (17%) or French (16%) have declined. As a result, 79% of Europeans want their children to learn English, compared to 20% for French and German.
Too much invested in English
Huge sums have been invested in English teaching by both national governments and private enterprise. As the demand for learning English has increased, so has the supply. English language learning worldwide was estimated to be worth US$63.3 billion (£47.5 billion) in 2012, and it is expected that this market will rise to US$193.2 billion (£145.6 billion) by 2017. The value of English for speakers of other languages is not going to diminish any time soon. There is simply too much invested in it.
Speakers of English as a second language outnumber first-language English speakers by 2:1 both in Europe and globally. For many Europeans, and especially those employed in the EU, English is a useful piece in a toolbox of languages to be pressed into service when needed – a point which was evident in a recent project on whether the use of English in Europe was an opportunity or a threat. So in the majority of cases using English has precisely nothing to do with the UK or Britishness. The EU needs practical solutions and English provides one.
English is unchallenged as the lingua franca of Europe. It has even been suggested that in some countries of northern Europe it has become a second rather than a foreign language. Jan Paternotte, D66 party leader in Amsterdam, has proposed that English should be decreed the official second language of that city.
English has not always held its current privileged status. French and German have both functioned as common languages for high-profile fields such as philosophy, science and technology, politics and diplomacy, not to mention Church Slavonic, Russian, Portuguese and other languages in different times and places.
We can assume that English will not maintain its privileged position forever. Those who benefit now, however, are not the predominantly monolingual British, but European anglocrats whose multilingualism provides them with a key to international education and employment.
Much about the EU may be about to change, but right now an anti-English language policy so dramatically out of step with practice would simply make the post-Brexit hangover more painful.
The limits of our language are said to define the boundaries of our world. This is because in our everyday lives, we can only really register and make sense of what we can name. We are restricted by the words we know, which shape what we can and cannot experience.
It is true that sometimes we may have fleeting sensations and feelings that we don’t quite have a name for – akin to words on the “tip of our tongue”. But without a word to label these sensations or feelings they are often overlooked, never to be fully acknowledged, articulated or even remembered. Instead, they are often lumped together with more generalised emotions, such as “happiness” or “joy”. This applies to all aspects of life – and not least to that most sought-after and cherished of feelings, happiness. Clearly, most people know and understand happiness, at least vaguely. But they are hindered by their “lexical limitations” – the words at their disposal.
As English speakers, we inherit, rather haphazardly, a set of words and phrases to represent and describe our world around us. Whatever vocabulary we have managed to acquire in relation to happiness will influence the types of feelings we can enjoy. If we lack a word for a particular positive emotion, we are far less likely to experience it. And even if we do somehow experience it, we are unlikely to perceive it with much clarity, think about it with much understanding, talk about it with much insight, or remember it with much vividness.
Speaking of happiness
While this recognition is sobering, it is also exciting, because it means by learning new words and concepts, we can enrich our emotional world. So, in theory, we can actually enhance our experience of happiness simply through exploring language. Prompted by this enthralling possibility, I recently embarked on a project to discover “new” words and concepts relating to happiness.
I did this by searching for so-called “untranslatable” words from across the world’s languages. These are words for which no exact equivalent word or phrase exists in English. As such, they suggest the possibility that other cultures have stumbled upon phenomena that English-speaking places have somehow overlooked.
Perhaps the most famous example is “Schadenfreude”, the German term describing pleasure at the misfortunes of others. Such words pique our curiosity, as they appear to reveal something specific about the culture that created them – as if German people are potentially especially liable to feelings of Schadenfreude (though I don’t believe that’s the case).
However, these words actually may be far more significant than that. Consider the fact that Schadenfreude has been imported wholesale into English. Evidently, English speakers had at least a passing familiarity with this kind of feeling, but lacked the word to articulate it (although I suppose “gloating” comes close) – hence, the grateful borrowing of the German term. As a result, their emotional landscape has been enlivened and enriched, able to give voice to feelings that might previously have remained unconceptualised and unexpressed.
My research searched for these kinds of “untranslatable words” – ones that specifically related to happiness and well-being. And so I trawled the internet looking for relevant websites, blogs, books and academic papers, and gathered a respectable haul of 216 such words. Now, the list has expanded – partly due to the generous feedback of visitors to my website – to more than 600 words.
When analysing these “untranslatable words”, I divide them into three categories based on my subjective reaction to them. Firstly, there are those that immediately resonate with me as something I have definitely experienced, but just haven’t previously been able to articulate. For instance, I love the strange German noun “Waldeinsamkeit”, which captures that eerie, mysterious feeling that often descends when you’re alone in the woods.
A second group are words that strike me as somewhat familiar, but not entirely, as if I can’t quite grasp their layers of complexity. For instance, I’m hugely intrigued by various Japanese aesthetic concepts, such as “aware” (哀れ), which evokes the bitter-sweetness of a brief, fading moment of transcendent beauty. This is symbolised by the cherry blossom – and as spring bloomed in England I found myself reflecting at length on this powerful yet intangible notion.
Finally, there is a mysterious set of words which completely elude my grasp, but which for precisely that reason are totally captivating. These mainly hail from Eastern religions – terms such as “Nirvana” or “Brahman”, the latter of which translates roughly as the ultimate reality underlying all phenomena in the Hindu scriptures. It feels like it would require a lifetime of study to even begin to grasp the meaning – which is probably exactly the point of these types of words.
I believe these words offer a unique window onto the world’s cultures, revealing diversity in the way people in different places experience and understand life. People are naturally curious about other ways of living, about new possibilities in life, and so are drawn to ideas – like these untranslatable words – that reveal such possibilities.
There is huge potential for these words to enrich and expand people’s own emotional worlds: with each of these words comes a tantalising glimpse into unfamiliar and new positive feelings and experiences. And at the end of the day, who wouldn’t be interested in adding a bit more happiness to their own lives?
by Gianfranco Conti, PhD. Co-author of “The Language Teacher Toolkit” and “Breaking the Sound Barrier: Teaching Learners How to Listen”, winner of the 2015 TES best resource contributor award and founder of www.language-gym.com