Things you were taught at school that are wrong


Misty Adoniou, University of Canberra

Do you remember being taught you should never start your sentences with “And” or “But”?

What if I told you that your teachers were wrong and there are lots of other so-called grammar rules that we’ve probably been getting wrong in our English classrooms for years?

How did grammar rules come about?

To understand why we’ve been getting it wrong, we need to know a little about the history of grammar teaching.

Grammar is how we organise our sentences in order to communicate meaning to others.

Those who say there is one correct way to organise a sentence are called prescriptivists. Prescriptivist grammarians prescribe how sentences must be structured.

Prescriptivists had their day in the sun in the 18th century. As books became more accessible to the everyday person, prescriptivists wrote the first grammar books to tell everyone how they must write.

These self-appointed guardians of the language just made up grammar rules for English, and put them in books that they sold. It was a way of ensuring that literacy stayed out of reach of the working classes.

They took their newly concocted rules from Latin. This was, presumably, to keep literate English out of reach of anyone who wasn’t rich or posh enough to attend a grammar school, which was a school where you were taught Latin.

And yes, that is the origin of today’s grammar schools.

The other camp of grammarians are the descriptivists. They write grammar guides that describe how English is used by different people, and for different purposes. They recognise that language isn’t static, and it isn’t one-size-fits-all.

1. You can’t start a sentence with a conjunction

Let’s start with the grammatical sin I have already committed in this article. You can’t start a sentence with a conjunction.

Obviously you can, because I did. And I expect I will do it again before the end of this article. There, I knew I would!

Those who say it is always incorrect to start a sentence with a conjunction, like “and” or “but”, sit in the prescriptivist camp.

However, according to the descriptivists, at this point in our linguistic history, it is fine to start a sentence with a conjunction in an op-ed article like this, or in a novel or a poem.

It is less acceptable to start a sentence with a conjunction in an academic journal article, or in an essay for my son’s high school economics teacher, as it turns out. But times are changing.

2. You can’t end a sentence with a preposition

Well, in Latin you can’t. In English you can, and we do all the time.

Admittedly, a lot of the younger generation don’t even know what a preposition is, so this rule is already obsolete. But let’s have a look at it anyway, for old times’ sake.

According to this rule, it is wrong to say “Who did you go to the movies with?”

Instead, the prescriptivists would have me say “With whom did you go to the movies?”

I’m saving that structure for when I’m making polite chat with the Queen on my next visit to the palace.

That’s not a sarcastic comment, just a fanciful one. I’m glad I know how to structure my sentences for different audiences. It is a powerful tool. It means I usually feel comfortable in whatever social circumstances I find myself in, and I can change my writing style according to purpose and audience.

That is why we should teach grammar in schools. We need to give our children a full repertoire of language so that they can make grammatical choices that will allow them to speak and write for a wide range of audiences.

3. Put a comma when you need to take a breath

It’s a novel idea, synchronising your writing with your breathing, but the two have nothing to do with one another, and if this is the instruction we give our children, it is little wonder commas are so poorly used.

Punctuation is a minefield and I don’t want to risk blowing up the internet. So here is a basic description of what commas do; more comprehensive guides are easy to find.

Commas provide demarcation between like grammatical structures. When adjectives, nouns, phrases or clauses are butting up against each other in a sentence, we separate them with a comma. That’s why I put commas between the three nouns and the two clauses in that last sentence.

Commas also provide demarcation for words, phrases or clauses that are embedded in a sentence for effect. The sentence would still be a sentence even if we took those words away. See, for example, the use of commas in this sentence.

4. To make your writing more descriptive, use more adjectives

American writer Mark Twain had it right.

“When you catch an adjective, kill it. No, I don’t mean utterly, but kill most of them – then the rest will be valuable.”

If you want your writing to be more descriptive, play with your sentence structure.

Consider this sentence from Liz Lofthouse’s beautiful children’s book Ziba came on a boat. It comes at a key turning point in the book, the story of a refugee’s escape.

“Clutching her mother’s hand, Ziba ran on and on, through the night, far away from the madness until there was only darkness and quiet.”

A beautifully descriptive sentence, and not an adjective in sight.

5. Adverbs are the words that end in ‘ly’

Lots of adverbs end in “ly”, but lots don’t.

Adverbs give more information about verbs. They tell us when, where, how and why the verb happened. So that means words like “tomorrow”, “there” and “deep” can be adverbs.

I say they can be adverbs because, actually, a word is just a word. It becomes an adverb, or a noun, or an adjective, or a verb when it is doing that job in a sentence.

Deep into the night, and the word deep is an adverb. Down a deep, dark hole and it is an adjective. When I dive into the deep, it is doing the work of a noun.

Time to take those word lists of adjectives, verbs and nouns off the classroom walls.

Time, also, to ditch those old Englishmen who wrote a grammar for their times, not ours.

If you want to understand what our language can do and how to use it well, read widely, think deeply and listen carefully. And remember, neither time nor language stands still – for any of us.

Misty Adoniou, Associate Professor in Language, Literacy and TESL, University of Canberra

This article was originally published on The Conversation. Read the original article.

Why was Shakespeare’s death such a non-event at the time?


Ian Donaldson, University of Melbourne

William Shakespeare died on 23 April 1616, 400 years ago, in the small Warwickshire town of his birth. He was 52 years of age: still young (or youngish, at least) by modern reckonings, though his death mightn’t have seemed to his contemporaries like an early departure from the world.

Most of the population who survived childhood in England at this time were apt to die before the age of 60, and old age was a state one entered at what today might be thought a surprisingly youthful age.

Many of Shakespeare’s fellow-writers had died, or were soon to do so, at a younger age than he: Christopher Marlowe, in a violent brawl, at 29; Francis Beaumont, following a stroke, at 31 (also in 1616: just 48 days, as it happened, before Shakespeare’s own death); Robert Greene, penitent and impoverished, of a fever, in the garret of a shoemaker’s house, at 34; Thomas Kyd, after “bitter times and privy broken passions”, at 35; George Herbert, of consumption, at 39; John Fletcher, from the plague, at 46; Edmund Spenser, “for lack of bread” (so it was rumoured), at 47; and Thomas Middleton, also at 47, from causes unknown.

The cause or causes of Shakespeare’s death are similarly unknown, though in recent years they have become a topic of persistent speculation. Syphilis contracted by visits to the brothels of Turnbull Street, mercury or arsenic poisoning following treatment for this infection, alcoholism, obesity, cardiac failure, a sudden stroke brought on by the alarming news of a family disgrace – that Shakespeare’s son-in-law, Thomas Quiney, husband of his younger daughter, Judith, had been responsible for the pregnancy and death of a young local woman named Margaret Wheeler – have all been advanced as possible factors leading to Shakespeare’s death.

A portrait of Shakespeare from the First Folio of his plays.

Francis Thackeray, Director of the Institute for Human Evolution at the University of the Witwatersrand, believes that cannabis was the ultimate cause of Shakespeare’s death, and has been hoping – in defiance of the famous ban on Shakespeare’s tomb (“Curst be he that moves my bones”, etc.) – to inspect the poet’s teeth in order to confirm this theory. (“Teeth are not bones”, Dr Thackeray somewhat controversially insists.) No convincing evidence, alas, has yet been produced to support any of these theories.

More intriguing than the actual pathology of Shakespeare’s death, however, may be another set of problems that have largely evaded the eye of biographers, though they seem at times – in a wider, more general sense – to have held the poet’s own sometimes playful attention. They turn on the question of fame: how it is constituted; how slowly and indirectly it’s often achieved, how easily it may be delayed, diverted, or lost altogether from view.

No memorial gathering

On 25 April 1616, two days after his death, Shakespeare was buried in the chancel of Holy Trinity Church at Stratford, having earned this modest place of honour as much (it would seem) through his local reputation as a respected citizen as from any deep sense of his wider professional achievements.

No memorial gatherings were held in the nation’s capital, where he had made his career, or, it would seem, elsewhere in the country. The company of players that he had led for so long did not pause (so far as we know) to acknowledge his passing, nor did his patron and protector, King James, whom he had loyally served.

Only one writer, a minor Oxfordshire poet named William Basse, felt moved to offer, at some unknown date following his death, a few lines to the memory of Shakespeare, with whom he may not have been personally acquainted. Hoping that Shakespeare might be interred at Westminster but foreseeing problems of crowding at the Abbey, Basse began by urging other distinguished English poets to roll over in their tombs, in order to make room for the new arrival.

Renownèd Spenser, lie a thought more nigh.

To learned Chaucer; and rare Beaumont, lie

A little nearer Spenser, to make room

For Shakespeare in your threefold, fourfold tomb.

None of these poets responded to Basse’s injunctions, however, and Shakespeare was not to win his place in the Abbey for more than a hundred years, when Richard Boyle, third Earl of Burlington, commissioned William Kent to design and Peter Scheemakers to sculpt a life-size white marble statue of the poet – standing cross-legged, leaning thoughtfully on a pile of books – to adorn Poets’ Corner.

A Derby porcelain figure of Shakespeare modelled after the statue of 1741 by Peter Scheemakers in Poets’ Corner.
Wikimedia images

On the wall behind this statue, erected in the Abbey in January 1741, is a tablet with a Latin inscription (perhaps contributed by the poet Alexander Pope) conceding the belated arrival of the memorial: “William Shakespeare,/124 years after his death/ erected by public love”.

Basse’s verses were in early circulation, but not published until 1633. No other poem to Shakespeare’s memory is known to have been written before the appearance of the First Folio in 1623. No effort appears to have been made in the months and years following the poet’s death to assemble a tributary volume, honouring the man and his works. None of Shakespeare’s other contemporaries noted the immediate fact of his passing in any surviving letter, journal, or record. No dispatches, private or diplomatic, carried the news of his death beyond Britain to the wider world.

Why did the death of Shakespeare cause so little public grief, so little public excitement, in and beyond the country of his birth? Why wasn’t his passing an occasion for widespread mourning, and widespread celebration of his prodigious achievements? What does this curious silence tell us about Shakespeare’s reputation in 1616; about the status of his profession and the state of letters more generally in Britain at this time?

A very quiet death

Shakespeare’s death occurred upon St George’s Day. That day was famous for the annual rites of prayer, procession, and feasting at Windsor by members of the Order of the Garter, England’s leading chivalric institution, founded in 1348 by Edward III. Marking as it did the anniversary of the supposed martyrdom in AD 303 of St George of Cappadocia, St George’s Day was celebrated in numerous countries in and beyond Europe, as it is today, but had emerged somewhat bizarrely in late mediaeval times as a day of national significance in England.

Tourists watch actors perform at the house where William Shakespeare was born during celebrations to mark the 400th anniversary of his death.
Dylan Martinez/Reuters

On St George’s Day 1616, as Shakespeare lay dying in far-off Warwickshire, King James – seemingly untroubled by prior knowledge of this event – was entertained in London by a poet of a rather different order named William Fennor.

Fennor was something of a royal favourite, famed for his facetious contests in verse, often in the King’s presence, with the Thames bargeman, John Taylor, the so-called Water Poet: a man whom James – as Ben Jonson despairingly reported to William Drummond – reckoned to be the finest poet in the kingdom.

In the days and weeks that followed, as the news of the poet’s death (one must assume) filtered gradually through to the capital, there is no recorded mention in private correspondence or official documents of Shakespeare’s name. Other more pressing matters were now absorbing the nation. Shakespeare had made a remarkably modest exit from the theatre of the world: largely un-applauded, largely unobserved. It was a very quiet death.

An age of public mourning

The silence that followed the death of Shakespeare is the more remarkable coming as it did in an age that had developed such elaborate rituals of public mourning, panegyric, and commemoration, most lavishly displayed at the death of a monarch or peer of the realm, but also occasionally set in train by the death of an exceptional commoner.

Consider the tributes paid to another great writer of the period, William Camden, antiquarian scholar and Clarenceux herald of arms, who died in London in late November 1623; a couple of weeks, as chance would have it, after the publication of Shakespeare’s First Folio.

Portrait of William Camden by Marcus Gheeraerts the Younger (1609).
Wikimedia commons

Camden was a man of quite humble social origins – like Shakespeare himself, whose father was a maker of gloves and leather goods in Stratford. Camden’s father was a painter-stainer, whose job it was to decorate coats of arms and other heraldic devices. By the time of his death Camden was widely recognized, in Britain and abroad, as one of the country’s outstanding scholars.

Eulogies were delivered at Oxford and published along with other tributes in a memorial volume soon after his death. At Westminster his body was escorted to the Abbey on 19 November by a large retinue of mourners, led by 26 poor men wearing gowns, followed by soberly attired gentlemen, esquires, knights, and members of the College of Arms, the hearse being flanked by earls, barons, and other peers of the realm, together with the Lord Keeper, Bishop John Williams, and other divines. Camden’s imposing funeral mirrored on a smaller scale the huge procession of 1,600 mourners which in 1603 had accompanied the body of Elizabeth I to its final resting place in the Abbey.

There were particular reasons, then, why Camden should have been accorded a rather grand funeral of his own. But mightn’t there have been good reasons for Shakespeare, likewise – whom we see today as the outstanding writer of his age – to have been honoured at his death in a suitably ceremonious fashion? It’s curious to realize, however, that Shakespeare at the time of his death wasn’t yet universally seen as the outstanding writer of his age.

Ben Jonson by George Vertue (1684-1786) after Gerard van Honthorst (1590-1656).
Wikimedia images

At this quite extraordinary moment in the history of English letters and intellectual exchange there was more than one contender for that title. William Camden himself – an admired poet in addition to his other talents, and friend and mentor of other poets of the day – had included Shakespeare’s name in a list, published in 1614, of “the most pregnant wits of these our times, whom succeeding ages may justly admire”, placing him, without differentiation, alongside Edmund Spenser, John Owen, Thomas Campion, Michael Drayton, George Chapman, John Marston, Hugh Holland and Ben Jonson, the last two of whom he had taught at Westminster School.

But it was another poet, Sir Philip Sidney, whom Camden had befriended during his student days at Oxford, that he most passionately admired, and continued to regard – following Sidney’s early death at the age of 32 in 1586 – as the country’s supreme writer. “Our Britain is the glory of earth and its precious jewel,/ But Sidney was the precious jewel of Britain”, Camden had written in a memorial poem in Latin mourning his friend’s death.

No commoner poet in England had ever been escorted to his grave with such pomp as was furnished for Sidney’s funeral at St Paul’s Cathedral, London, on 16 February 1587.

The 700-man procession was headed by 32 poor men, representing the number of years that Sidney had lived, with fifes and drums “playing softly” beside them. They were followed by trumpeters and gentlemen and yeomen servants, physicians, surgeons, chaplains, knights and esquires, heralds bearing aloft Sidney’s spurs and gauntlet, his helm and crest, his sword and targe, his coat of arms. Then came the hearse containing Sidney’s body. Behind them walked the chief mourner, Philip’s young brother, Robert, accompanied by the Earls of Leicester, Pembroke, Huntingdon, and Essex, followed by representatives from the states of Holland and Zealand. Next came the Lord Mayor and Aldermen of the City of London, with 120 members of the Company of Grocers, and, at the rear of the procession, “citizens of London practised in arms, about 300, who marched three by three”.

A 1587 engraving by Theodor de Bry showing the casket of Sir Philip Sidney carried by pallbearers.
Wikimedia images

Sidney’s funeral was a moving salute to a man who was widely admired not just for his military, civic and diplomatic virtues, but as the outstanding writer of his day. He fulfilled in exemplary fashion, as Shakespeare curiously did not, the Renaissance ideal of what a poet should strive to be.

In an extraordinary act of homage not before seen in England, but soon to be commonly followed at the death of distinguished writers, the Universities of Oxford and Cambridge produced three volumes of Latin verse lauding Sidney’s achievements, while a fourth volume of similar tributes was published by the University of Leiden. The collection from Cambridge presented contributions from 63 Cambridge men, together with a sonnet in English by King James VI of Scotland, the future King James I of Britain.

Earlier English poets had been mourned at their passing, if not in these terms and not on this scale, then with more enthusiasm than was evident at the death of Shakespeare. Edmund Spenser at his death in 1599 was buried in Westminster Abbey next to Chaucer, “this hearse being attended by poets, and mournful elegies and poems with the pens that wrote them thrown into his tomb”. The deaths of Thomas Wyatt and Michael Drayton were similarly lamented.

When, 21 years after Shakespeare’s death, his former friend and colleague Ben Jonson came at last to die, the crowd that gathered at his house in Westminster to accompany his body to his grave in the Abbey included “all or the greatest part of the nobility and gentry then in the town”. Within months of his death a volume of 33 poems was in preparation and a dozen additional elegies had appeared in print. Jonson was hailed at his death as “king of English poetry”, as England’s “rare arch-poet”. With his death, as more than one memorialist declared, English poetry itself now seemed also to have died. No one had spoken in these terms at the death of Shakespeare.

To take one last example: at the death in 1643 of the dramatist William Cartwright – whose works and whose very name are barely known to most people today – Charles I elected to wear black, remarking that

since the muses had so much mourned for the loss of such a son, it would be a shame for him not to appear in mourning for the loss of such a subject.

At the death of Shakespeare in 1616 James had shown no such minimal courtesy.

Backroom boys

Why should Shakespeare at his death have been so neglected? One simple answer is that King James, unlike his son, Charles, had no great passion for the theatre, and no very evident regard for Shakespeare’s genius. Early in his reign, so Dudley Carleton reported,

The first holy days we had every night a public play in the great hall, at which the King was ever present, and liked or disliked as he saw cause: but it seems he takes no extraordinary pleasure in them.

But Shakespeare and his company were not merely royal servants, bound to provide a steady supply of dramatic entertainment at court; they also catered for the London public who flocked to see their plays at Blackfriars and the Globe, and who had their own ways of expressing their pleasure, their frustrations, and – at the death of a player – their grief.

A portrait of the actor Richard Burbage.
Wikimedia images

When Richard Burbage, the principal actor for the King’s Men, died on 9 March 1619, just seven days after the death of Queen Anne, the London public were altogether more upset by that event than they had been over the death of the Queen, as one contemporary writer – quoting, ironically, the opening lines of Shakespeare’s 1 Henry VI – tartly observed.

So it’s necessary, I think, to pose a further question. Why should the death of Burbage have affected the London public more profoundly than the death not merely of the Queen but of the dramatist whose work he so skilfully interpreted?

I believe the answer lies, partly at least, in the status of the profession to which Shakespeare belonged, a profession which didn’t yet have a regular name: the very words playwright and dramatist not entering the language until half a century after Shakespeare’s death.

Prominent actors at this time were far better known to the public than the writers who provided their livelihood. The writers were on the whole invisible people, who worked as backroom boys, often anonymously and in small teams; playgoers had no easy way of discovering their identity. Theatre programmes didn’t yet exist. Playbills often announced the names of leading actors, but not until the very last decade of the 17th century did they include the names of authors.

Only a fraction of the large number of plays performed in this period moreover found their way into print, and those that were published didn’t always disclose the names of their authors.

At the time of Shakespeare’s death half of his plays weren’t yet available in print, and there were no known plans to produce a collected edition of his works. The total size and shape of the canon were therefore still imperfectly known. Shakespeare was not yet fully visible.

In 1616 the world didn’t yet realise what they had got, or who it was that they’d lost. Hence, I believe, the otherwise inexplicable silence at his passing.

To the Memory of My Beloved

At the time of Shakespeare’s death another English writer was arguably better known to the general public than Shakespeare himself, and more highly esteemed by the brokers of power at King James’s court. That writer was Shakespeare’s friend and colleague Ben Jonson, who early in 1616 had been awarded a pension of one hundred marks to serve as King James’s laureate poet.

A 1623 copy of the calf-bound First Folio edition of William Shakespeare’s plays.
Dylan Martinez/Reuters

A first folio edition of Shakespeare’s collected plays was finally published in London with Jonson’s assistance and oversight in 1623. This monumental volume at last gave readers in England some sense of the wider reach of Shakespeare’s theatrical achievement, and laid the essential foundations of his modern reputation.

At the head of this volume stand two poems by Ben Jonson: the second, To the Memory of My Beloved, the Author, Mr William Shakespeare, and What He Hath Left Us assesses the achievement of this extraordinary writer. Shakespeare had been praised during his lifetime as a “sweet”, “mellifluous”, “honey-tongued”, “honey-flowing”, “pleasing” writer. No one until this moment had presented him in the astounding terms that Jonson here proposes: as the pre-eminent figure, the “soul” and the “star” of his age; and as something even more than that: as one who could be confidently ranked with the greatest writers of antiquity and of the modern era.

Triumph, my Britain, thou hast one to show

To whom all scenes of Europe homage owe,

He was not of an age, but for all time!

Today, 400 years on, that last line sounds like a truism, for Shakespeare’s fame has indeed endured. He is without doubt the most famous writer the world has ever seen. But in 1623 this was a bold and startling prediction. No one before that date had described Shakespeare’s achievement in such terms as these.

This is an edited version of a public lecture given at the University of Melbourne.

On the 400th anniversary of Shakespeare’s death, the Faculty of Arts at the University of Melbourne is establishing the Shakespeare 400 Trust to raise funds to support the teaching of Shakespeare at the University into the future. For more information, or if you would like to support the Shakespeare 400 Trust, please contact Julie du Plessis at julie.dp@unimelb.edu.au

Ian Donaldson, Honorary Professorial Fellow, School of Culture and Communication, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Beware the bad big wolf: why you need to put your adjectives in the right order


Simon Horobin, University of Oxford

Unlikely as it sounds, the topic of adjective use has gone “viral”. The furore centres on the claim, taken from Mark Forsyth’s book The Elements of Eloquence, that adjectives appearing before a noun must appear in the following strict sequence: opinion, size, age, shape, colour, origin, material, purpose, Noun. Even the slightest attempt to disrupt this sequence, according to Forsyth, will result in the speaker sounding like a maniac. To illustrate this point, Forsyth offers the following example: “a lovely little old rectangular green French silver whittling knife”.

But is the “rule” worthy of an internet storm – or is it more of a ripple in a teacup? Well, certainly the example is a rather unlikely sentence, and not simply because whittling knives are not in much demand these days – ignoring the question of whether they can be both green and silver. This is because it is unusual to have a string of attributive adjectives (ones that appear before the noun they describe) like this.

More usually, speakers of English break up the sequence by placing some of the adjectives in predicative position – after the noun. Not all adjectives, however, can be placed in either position. I can refer to “that man who is asleep” but it would sound odd to refer to him as “that asleep man”; we can talk about the “Eastern counties” but not the “counties that are Eastern”. Indeed, our distribution of adjectives both before and after the noun reveals another constraint on adjective use in English – a preference for no more than three before a noun. An “old brown dog” sounds fine, a “little old brown dog” sounds acceptable, but a “mischievous little old brown dog” sounds plain wrong.

Rules, rules, rules

Nevertheless, however many adjectives we choose to employ, they do indeed tend to follow a predictable pattern. While native speakers intuitively follow this rule, most are unaware that they are doing so; we agree that the “red big dog” sounds wrong, but don’t know why. In order to test this intuition, linguists have analysed large corpora of electronic data to see how frequently pairs of adjectives like “big red” are preferred to “red big”. The results confirm our native intuition, although the figures are not as comprehensive as we might expect – the rule accounts for 78% of the data.
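To make the corpus method concrete, here is a minimal sketch – not code from the studies described here – of how one might count the two orderings of a pair such as “big red” versus “red big” in a plain-text corpus. The file name and the word pair are placeholders.

```python
import re
from itertools import permutations

def count_orderings(path, first, second):
    """Count both adjacent orderings of an adjective pair in a text file."""
    text = open(path, encoding="utf-8").read().lower()
    counts = {}
    for a, b in permutations((first, second)):
        pattern = r"\b{}\s+{}\b".format(re.escape(a), re.escape(b))
        counts[a + " " + b] = len(re.findall(pattern, text))
    return counts

# Hypothetical usage: "corpus.txt" stands in for a real corpus file.
if __name__ == "__main__":
    counts = count_orderings("corpus.txt", "big", "red")
    total = sum(counts.values()) or 1
    for pair, n in counts.items():
        print(f"{pair}: {n} ({100 * n / total:.1f}%)")
```

Run over a large enough corpus, the relative frequencies of the two orderings give exactly the kind of preference figure quoted above.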

We know how to use them … without even being aware of it.
Shutterstock

But while linguists have been able to confirm that there are strong preferences in the ordering of pairs of adjectives, no such statistics have been produced for longer strings. Consequently, while Forsyth’s rule appears to make sense, it remains an untested, hypothetical, large, sweeping (sorry) claim.

In fact, even if we stick to just two adjectives it is possible to find examples that appear to break the rule. The “big bad wolf” of fairy tale, for instance, shows the size adjective preceding the opinion one; similarly, “big stupid” is more common than “stupid big”. Examples like these are instead witness to the “Pollyanna Principle”, by which speakers prefer to present positive, or indifferent, values before negative ones.

Another problem with Forsyth’s proposed ordering sequence is that it makes no reference to other constraints that influence adjective order, such as what happens when we use two adjectives that fall into the same category. Little Richard’s song “Long Tall Sally” would have sounded strange if he had called it “Tall Long Sally”, but these are both adjectives of size.

Definitely not Tall Long Sally.

Similarly, we might describe a meal as “nice and spicy” but never “spicy and nice” – reflecting a preference for the placement of general opinions before more specific ones. We also need to bear in mind the tendency for noun phrases to become lexicalised – forming words in their own right. Just as a blackbird is not any kind of bird that is black, a little black dress does not refer to any small black dress but one that is suitable for particular kinds of social engagement.

Since speakers view a “little black dress” as a single entity, its order is fixed; as a result, modifying adjectives must precede little – a “polyester little black dress”. This means that an adjective specifying its material appears before those referring to size and colour, once again contravening Forsyth’s rule.

Making sense of language

Of course, the rule is a fair reflection of much general usage – although the reasons behind this complex set of constraints in adjective order remain disputed. Some linguists have suggested that it reflects the “nouniness” of an adjective; since colour adjectives are commonly used as nouns – “red is my favourite colour” – they appear close to that slot.

Another conditioning factor may be the degree to which an adjective reflects a subjective opinion rather than an objective description – therefore, subjective adjectives that are harder to quantify (boring, massive, middle-aged) tend to appear further away from the noun than more concrete ones (red, round, French).

Prosody, the rhythm and sound of poetry, is likely to play a role, too – as there is a tendency for speakers to place longer adjectives after shorter ones. But probably the most compelling theory links adjective position with semantic closeness to the noun being described; adjectives that are closely related to the noun in meaning, and are therefore likely to appear frequently in combination with it, are placed closest, while those that are less closely related appear further away.

In Forsyth’s example, it is the knife’s whittling capabilities that are most significant – distinguishing it from a carving, fruit or butter knife – while its loveliness is hardest to define (what are the standards for judging the loveliness of a whittling knife?) and thus most subjective. Whether any slight reorganisation of the other adjectives would really prompt your friends to view you as a knife-wielding maniac is harder to determine – but then, at least it’s just a whittling knife.

Simon Horobin, Professor of English Language and Literature, University of Oxford

This article was originally published on The Conversation. Read the original article.

How the Queen’s English has had to defer to Africa’s rich multilingualism


Rajend Mesthrie, University of Cape Town

For the first time in history a truly global language has emerged. English enables international communication par excellence, with a far wider reach than other possible candidates for this position – like Latin in the past, and French, Spanish and Mandarin in the present.

In a memorable phrase, former Tanzanian statesman Julius Nyerere once characterised English as the Kiswahili of the world. In Africa, English is more widely spoken than other important lingua francas like Kiswahili, Arabic, French and Portuguese, with at least 26 countries using English as one of their official languages.

But English in Africa comes in many different shapes and forms. It has taken root in an exceptionally multilingual context, with well over a thousand languages spoken on the continent. The influence of this multilingualism tends to be largely erased at the most formal levels of use – for example, in the national media and in higher educational contexts. But at an everyday level, the Queen’s English has had to defer to the continent’s rich abundance of languages. Pidgin, creole, second-language and first-language English all flourish alongside them.

The birth of new languages

English did not enter Africa as an innocent language. Its history is tied up with trade and exploitation, capitalist expansion, slavery and colonisation.

The history of English is tied up with trade, capitalist expansion, slavery and colonialism.
Shutterstock

As the need for communication arose and increased under these circumstances, forms of English, known as pidgins and creoles, developed. This took place within a context of unequal encounters, a lack of sustained contact with speakers of English and an absence of formal education. Under these conditions, English words were learnt and attached to an emerging grammar that owed more to African languages than to English.

A pidgin is defined by linguists as an initially simple form of communication that arises from contact between speakers of disparate languages who have no other means of communication in common. Pidgins, therefore, do not have mother-tongue speakers. The existence of pidgins in the early period of West African-European contact is not well documented, and some linguists like Salikoko Mufwene judge their early significance to be overestimated.

Pidgins can become more complex if they take on new functions. They are relabelled creoles if, over time and under specific circumstances, they become fully developed as the first language of a group of speakers.

Ultimately, pidgins and creoles develop grammatical norms that are far removed from the colonial forms that partially spawned them: to a British English speaker listening to a pidgin or creole, the words may seem familiar in form, but not always in meaning.

Linguists pay particular attention to these languages because they afford them the opportunity to observe creativity at first hand: the birth of new languages.

The creoles of West Africa

West Africa’s creoles are of two types: those that developed outside Africa; and those that first developed from within the continent.

The West African creoles that developed outside Africa emerged out of the multilingual and oppressive slave experience in the New World. They were then brought to West Africa after 1787 by freed slaves repatriated from Britain, North America and the Caribbean. “Krio” was the name given to the English-based creole of slaves freed from Britain who were returned to Sierra Leone, where they were joined by slaves released from Nova Scotia and Jamaica.

Some years after that, in 1821, Liberia was established as an African homeland for freed slaves from the US. These men and women brought with them what some linguists call “Liberian settler English”. This particular creole continues to make Liberia somewhat special on the continent, with American rather than British forms of English dominating there.

These languages from the New World were very influential in their new environments, especially over the developing West African pidgin English.

A more recent, homegrown type of West African creole has emerged in the region. This West African creole is spreading in the context of urban multilingualism and changing youth identities. Over the past 50 years, it has grown spectacularly in Ghana, Cameroon, Equatorial Guinea and Sierra Leone, and it is believed to be the fastest-growing language in Nigeria. In this process pidgin English has been expanded into a creole, used as one of the languages of the home. For such speakers, the designation “pidgin” is now a misnomer, although it remains widely used.

In East Africa, in contrast, the strength and historicity of Kiswahili as a lingua franca prevented the rapid development of pidgins based on colonial languages. There, traders and colonists had to learn Kiswahili for successful everyday communication. This gave locals more time to master English as a fully-fledged second language.

Other varieties of English

Africa, mirroring the trend in the rest of the world, has a large and increasing number of second-language English speakers. Second-language varieties of English are mutually intelligible with first-language versions, while showing varying degrees of difference in accent, grammar and nuance of vocabulary. Formal colonisation and the educational system from the 19th century onwards account for the wide spread of second-language English.

What about first-language varieties of English on the continent? The South African variety looms large in this history, showing similarities with English in Australia and New Zealand, especially in details of accent.

In post-apartheid South Africa many young black people from middle-class backgrounds now speak this variety either as a dominant language or as a “second first-language”. But for most South Africans English is a second language – a very important one for education, business and international communication.

For family and cultural matters, African languages remain of inestimable value throughout the continent.

Rajend Mesthrie, Professor of Linguistics, University of Cape Town

This article was originally published on The Conversation. Read the original article.

Why it’s hard for adults to learn a second language


Brianna Yamasaki, University of Washington

As a young adult in college, I decided to learn Japanese. My father’s family is from Japan, and I wanted to travel there someday.

However, many of my classmates and I found it difficult to learn a language in adulthood. We struggled to connect new sounds and a dramatically different writing system to the familiar objects around us.

It wasn’t so for everyone. There were some students in our class who were able to acquire the new language much more easily than others.

So, what makes some individuals “good language learners”? And do such individuals have a “second language aptitude”?

What we know about second language aptitude

Past research on second language aptitude has focused on how people perceive sounds in a particular language and on more general cognitive processes such as memory and learning abilities. Most of this work has used paper-and-pencil and computerized tests to determine language-learning abilities and predict future learning.

Researchers have also studied brain activity as a way of measuring linguistic and cognitive abilities. However, much less is known about how brain activity predicts second language learning.

Is there a way to predict the aptitude of second language learning?

How does brain activity change while learning languages?
Brain image via www.shutterstock.com

In a recently published study, Chantel Prat, associate professor of psychology at the Institute for Learning and Brain Sciences at the University of Washington, and I explored how brain activity recorded at rest – while a person is relaxed with their eyes closed – could predict the rate at which a second language is learned among adults who spoke only one language.

Studying the resting brain

Resting brain activity is thought to reflect the organization of the brain and it has been linked to intelligence, or the general ability used to reason and problem-solve.

We measured brain activity obtained from a “resting state” to predict individual differences in the ability to learn a second language in adulthood.

To do that, we recorded five minutes of eyes-closed resting-state electroencephalography, a method that detects electrical activity in the brain, in young adults. We also administered two hours of paper-and-pencil and computerized tasks.

We then had 19 participants complete eight weeks of French language training using a computer program. This software was developed by the U.S. armed forces with the goal of getting military personnel functionally proficient in a language as quickly as possible.

The software combined reading, listening and speaking practice with game-like virtual reality scenarios. Participants moved through the content in levels organized around different goals, such as being able to communicate with a virtual cab driver by finding out if the driver was available, telling the driver where their bags were and thanking the driver.

Nineteen adult participants (18-31 years of age) completed two 30-minute training sessions per week for a total of 16 sessions. After each training session, we recorded the level that each participant had reached. At the end of the experiment, we used that level information to calculate each individual’s learning rate across the eight-week training.

As expected, there was large variability in the learning rate, with the best learner moving through the program more than twice as quickly as the slowest learner. Our goal was to figure out which (if any) of the measures recorded initially predicted those differences.
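The article doesn’t spell out the exact formula, but one simple way to turn the recorded session levels into a per-participant learning rate is to fit a straight line through the level-by-session data and take its slope. The sketch below is illustrative only; the level sequences are invented.

```python
import numpy as np

# Invented level-by-session records for two illustrative participants
# (16 training sessions each); real data would come from the software logs.
levels = {
    "fast_learner": [1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23],
    "slow_learner": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8],
}

def learning_rate(level_by_session):
    """Least-squares slope: average levels gained per training session."""
    sessions = np.arange(1, len(level_by_session) + 1)
    slope, _intercept = np.polyfit(sessions, level_by_session, 1)
    return slope

for name, seq in levels.items():
    print(f"{name}: {learning_rate(seq):.2f} levels per session")
```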

A new brain measure for language aptitude

When we correlated our measures with learning rate, we found that patterns of brain activity that have been linked to linguistic processes predicted how easily people could learn a second language.

Patterns of activity over the right side of the brain predicted upwards of 60 percent of the differences in second language learning across individuals. This finding is consistent with previous research showing that the right half of the brain is more frequently used with a second language.

Our results suggest that the majority of the language learning differences between participants could be explained by the way their brain was organized before they even started learning.
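As a rough illustration of what “predicted upwards of 60 percent of the differences” means statistically, the snippet below correlates an invented resting-state EEG measure with invented learning rates and reports the squared correlation, i.e. the share of variance explained. None of these numbers come from the study.

```python
import numpy as np

# All values are simulated: one resting-state EEG measure per participant
# (e.g. power over right-hemisphere electrodes) and their learning rate.
rng = np.random.default_rng(seed=1)
eeg_measure = rng.normal(size=19)                  # 19 participants
learning_rate = 0.8 * eeg_measure + rng.normal(scale=0.6, size=19)

r = np.corrcoef(eeg_measure, learning_rate)[0, 1]  # Pearson correlation
print(f"correlation r = {r:.2f}")
print(f"variance explained r^2 = {r ** 2:.0%}")
```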

Implications for learning a new language

Does this mean that if you, like me, don’t have a “quick second language learning” brain you should forget about learning a second language?

Not quite.

Language learning can depend on many factors.
Child image via www.shutterstock.com

First, it is important to remember that 40 percent of the difference in language learning rate still remains unexplained. Some of this is certainly related to factors like attention and motivation, which are known to be reliable predictors of learning in general, and of second language learning in particular.

Second, we know that people can change their resting-state brain activity. So training may help to shape the brain into a state in which it is more ready to learn. This could be an exciting future research direction.

Second language learning in adulthood is difficult, but the benefits are large for those who, like myself, are motivated by the desire to communicate with others who do not speak their native tongue.

Brianna Yamasaki, Ph.D. Student, University of Washington

This article was originally published on The Conversation. Read the original article.

British Council Backs Bilingual Babies


The British Council is to open a bilingual pre-school in Hong Kong in August. The International Pre-School, which will teach English and Cantonese and have specific times set aside for Mandarin, will follow the UK-based International Primary Curriculum.

The British Council already has bilingual pre-schools in Singapore (pictured above) and Madrid. The adoption of a bilingual model of early years learning, rather than a purely English-medium one, is supported by much of the research on this age group. In a randomised control trial in the US state of New Jersey, for example, three- and four-year-olds from both Spanish- and English-speaking backgrounds were assigned by lottery to either an all-English or English–Spanish pre-school programme which used an identical curriculum. The study found that children from the bilingual programme emerged with the same level of English as those in the English-medium one, but both the Spanish-speaking and anglophone children had a much higher level of Spanish.

http://www.elgazette.com/item/281-british-council-backs-bilingual-babies.html

How the British military became a champion for language learning


Wendy Ayres-Bennett, University of Cambridge

When an army deploys in a foreign country, there are clear advantages if the soldiers are able to speak the local language or dialect. But what if your recruits are no good at other languages? In the UK, where language learning in schools and universities is facing a real crisis, the British army began to see this as a serious problem.

In a new report on the value of languages, my colleagues and I showcased how a new language policy, instituted last year within the British Army, was triggered by a growing appreciation of the risks of language shortages for national security.

Following the conflicts in Iraq and Afghanistan, the military sought to implement language skills training as a core competence. Speakers of other languages are encouraged to take examinations to register their language skills, whether they are language learners or speakers of heritage or community languages.

The UK Ministry of Defence’s Defence Centre for Language and Culture also offers training to NATO standards across the four language skills – listening, speaking, reading and writing. Core languages taught are Arabic, Dari, Farsi, French, Russian, Spanish and English as a foreign language. Cultural training that provides regional knowledge and cross-cultural skills is still embryonic, but developing fast.

Cash incentives

There are two reasons why this is working. The change was directed by the vice chief of the defence staff, and therefore had a high-level champion. There are also financial incentives for army personnel to have their linguistic skills recorded, ranging from £360 for a lower-level western European language, to £11,700 for a high-level, operationally vital linguist. Currently, any army officer must have a basic language skill to be able to command a sub-unit.

A British army sergeant visits a school in Helmand, Afghanistan.
Defence Images/flickr.com, CC BY-NC

We should not, of course, overstate the progress made. The numbers of Ministry of Defence linguists for certain languages, including Arabic, are still precariously low and, according to recent statistics, there are no speakers of Ukrainian or Estonian classed at level three or above in the armed forces. But, crucially, the organisational culture has changed and languages are now viewed as an asset.

Too fragmented

The British military’s new approach is a good example of how an institution can change the culture of the way it thinks about languages. It’s also clear that language policy can no longer simply be a matter for the Department for Education: champions for language both within and outside government are vital for issues such as national security.

This is particularly important because of the fragmentation of language learning policy within the UK government, despite an informal cross-Whitehall language focus group.

Experience on the ground illustrates the value of cooperation when it comes to security. For example, in January, the West Midlands Counter Terrorism Unit urgently needed a speaker of a particular language dialect to assist with translating communications in an ongoing investigation. The MOD was approached and was able to source a speaker within another department.

There is a growing body of research demonstrating the cost to business of the UK’s lack of language skills. Much less is known about their value to national security, defence and diplomacy, conflict resolution and social cohesion. Yet language skills have to be seen as an asset, and appreciation is needed across government for their wider value to society and security.

Wendy Ayres-Bennett, Professor of French Philology and Linguistics, University of Cambridge

This article was originally published on The Conversation. Read the original article.