Textbooks are a crucial part of any child’s learning. A large body of research has proved this many times and in many very different contexts. Textbooks are a physical representation of the curriculum in a classroom setting. They are powerful in shaping the minds of children and young people.
UNESCO has recognised this power and called for every child to have a textbook for every subject. The organisation argues that
next to an engaged and prepared teacher, well-designed textbooks in sufficient quantities are the most effective way to improve instruction and learning.
But there’s an elephant in the room when it comes to textbooks in African countries’ classrooms: language.
Rwanda is one of many African countries that’s adopted a language instruction policy which sees children learning in local or mother tongue languages for the first three years of primary school. They then transition in upper primary and secondary school into a dominant, so-called “international” language. This might be French or Portuguese. In Rwanda, it has been English since 2008.
Evidence from across the continent suggests that at this transition point, many learners have not developed basic literacy and numeracy skills. And, significantly, they have not acquired anywhere near enough of the language they are about to learn in to be able to engage in learning effectively.
I do not wish to advocate for English medium instruction, and the arguments for mother-tongue based education are compelling. But it’s important to consider strategies for supporting learners within existing policy priorities. Using appropriate learning and teaching materials – such as textbooks – could be one such strategy.
A different approach
It’s not enough to just hand out textbooks in every classroom. The books need to tick two boxes: learners must be able to read them and teachers must feel enabled to teach with them.
Existing textbooks tend not to take these concerns into consideration. The language is too difficult and the sentence structures too complex. The paragraphs are too long, and there are no glossaries to define unfamiliar words. And while textbooks are widely available to those in the basic education system, they are rarely used systematically. Teachers cite the books’ inaccessibility as one of the main reasons for not using them.
A recent initiative in Rwanda has sought to address this through the development of “language supportive” textbooks for primary 4 learners who are around 11 years old. These were specifically designed in collaboration with local publishers, editors and writers.
There are two key elements to a “language supportive” textbook.
Firstly, they are written at a language level appropriate for the learner. As can be seen in Figure 1, the new concept is introduced in English that is as simple as possible. Sentences are kept short and structurally simple, and paragraphs are brief. The key word (here, “soil”) is also repeated numerous times so that the learner becomes accustomed to it.
Secondly, they include features – activities, visuals, clear signposting and vocabulary support – that enable learners to practise and develop their language proficiency while learning the key elements of the curriculum.
The books are full of relevant activities that encourage learners to regularly practise their listening, speaking, reading and writing of English in every lesson. This enables language development.
Crucially, all of these activities are made accessible to learners – and teachers – by offering support in the learners’ first language. In this case, the language used was Kinyarwanda, which is the first language for the vast majority of Rwandan people. However, it’s important to note that initially many teachers were hesitant about incorporating Kinyarwanda into their classroom practice because of the government’s English-only policy.
Improved test scores
The initiative was introduced with 1075 students at eight schools across four Rwandan districts. The evidence from our initiative suggests that learners in classrooms where these books were systematically used learnt more across the curriculum.
When these learners sat tests before using the books, they scored similar results to those in other comparable schools. After using the materials for four months, their test scores were significantly higher. Crucially, both learners and teachers pointed out how important it was that the books sanctioned the use of Kinyarwanda. The classrooms became bilingual spaces and this increased teachers’ and learners’ confidence and competence.
All of this supports the importance of textbooks as effective learning and teaching materials in the classroom and shows that they can help all learners. But authorities mustn’t assume that textbooks are being used or that the existing books are empowering teachers and learners.
Textbooks can matter – but it’s only when consideration is made for the ways they can help all learners that we can say that they can contribute to quality education for all.
It’s often thought that it is better to start learning a second language at a young age. But research shows that this is not necessarily true. In fact, the best age to start learning a second language can vary significantly, depending on how the language is being learned.
The belief that younger children are better language learners is based on the observation that children learn to speak their first language with remarkable skill at a very early age.
Before they can add two small numbers or tie their own shoelaces, most children develop a fluency in their first language that is the envy of adult language learners.
Why younger may not always be better
Two theories from the 1960s continue to have a significant influence on how we explain this phenomenon.
The theory of “universal grammar” proposes that children are born with an instinctive knowledge of the language rules common to all humans. Upon exposure to a specific language, such as English or Arabic, children simply fill in the details around those rules, making the process of learning a language fast and effective.
The other theory, known as the “critical period hypothesis”, posits that at around the age of puberty most of us lose access to the mechanism that made us such effective language learners as children. These theories have been contested, but nevertheless they continue to be influential.
Despite what these theories would suggest, however, research into language learning outcomes demonstrates that younger may not always be better.
In some language learning and teaching contexts, older learners can be more successful than younger children. It all depends on how the language is being learned.
Language immersion environment best for young children
Living, learning and playing in a second language environment on a regular basis is an ideal learning context for young children. Research clearly shows that young children are able to become fluent in more than one language at the same time, provided there is sufficient engagement with rich input in each language. In this context, it is better to start as young as possible.
Learning in classroom best for early teens
Learning in language classes at school is an entirely different context. The normal pattern of these classes is one or more hour-long lessons per week.
To succeed at learning with such little exposure to rich language input requires meta-cognitive skills that do not usually develop until early adolescence.
For this style of language learning, the later years of primary school are an ideal time to start, balancing meta-cognitive skill development against the number of consecutive years of study available before the end of school.
Self-guided learning best for adults
There are, of course, some adults who decide to start to learn a second language on their own. They may buy a study book, sign up for an online course, purchase an app or join face-to-face or virtual conversation classes.
To succeed in this learning context requires a range of skills that are not usually developed until reaching adulthood, including the ability to remain self-motivated. Therefore, self-directed second language learning is more likely to be effective for adults than younger learners.
How we can apply this to education
What does this tell us about when we should start teaching second languages to children? In terms of the development of language proficiency, the message is fairly clear.
If we are able to provide lots of exposure to rich language use, early childhood is better. If the only opportunity for second language learning is through more traditional language classes, then late primary school is likely to be just as good as early childhood.
However, if language learning relies on being self-directed, it is more likely to be successful after the learner has reached adulthood.
Every December, lexicographers around the world choose their “words of the year”, and this year, perhaps more than ever, the stories these tell provide a fascinating insight into how we’ve experienced the drama and trauma of the last 12 months.
There was much potential in 2016. It was 500 years ago that Thomas More wrote his Utopia, and January saw the launch of a year’s celebrations under the slogan “A Year of Imagination and Possibility” – but as 2017 looms, this slogan rings hollow. Instead of utopian dreams, we’ve had a year of “post-truth” and “paranoia”, of “refugee” crises, “xenophobia” and a close shave with “fascism”.
Earlier in the year, a campaign was launched to have “Essex Girl” removed from the Oxford English Dictionary (OED). Those behind the campaign were upset at the derogatory definition – a young woman “characterised as unintelligent, promiscuous, and materialistic” – so wanted it to be expunged from the official record of the language.
The OED turned down the request, a spokeswoman explaining that since the OED is a historical dictionary, nothing is ever removed; its purpose, she said, is to describe the language as people use it, and to stand as a catalogue of the trends and preoccupations of the time.
The words of the year tradition began with the German Wort des Jahres in the 1970s. It has since spread to other languages, and become increasingly popular the world over. Those in charge of the choices are getting more innovative: in 2015, for the first time, Oxford Dictionaries chose a pictograph as their “word”: the emoji for “Face with Tears of Joy”.
In 2016, however, the verbal was very much back in fashion. The results speak volumes.
In English, there is a range of competing words, with all the major dictionaries making their own choices. Having heralded a post-language era last year, Oxford Dictionaries decided on “post-truth” this time, defining it as the situation when “objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”. In a year of evidence-light Brexit promises and Donald Trump’s persistent lies and obfuscations, this has a definite resonance. In the same dystopian vein, the Cambridge Dictionary chose “paranoid”, while Dictionary.com went for “xenophobia”.
Merriam-Webster valiantly tried to turn back the tide of pessimism. When “fascism” looked set to win its online poll, it tweeted its readers imploring them to get behind something – anything – else. The plea apparently worked, and in the end “surreal” won the day. Apt enough for a year in which events time and again almost defied belief.
Collins, meanwhile, chose “Brexit”, a term which its spokesperson suggested has become as flexible and influential in political discourse as “Watergate”.
Just as the latter spawned hundreds of portmanteau words whenever a political scandal broke, so Brexit begat “Bremain”, “Bremorse” and “Brexperts” – and will likely be adapted for other upcoming political rifts for many years to come. It nearly won out in Australia in fact, where “Ausexit” (severing ties with the British monarchy or the United Nations) was on the shortlist. Instead, the Australian National Dictionary went for “democracy sausage” – the tradition of eating a barbecued sausage on election day.
Switzerland’s Deaf Association, meanwhile, chose a Sign of the Year for the first time. Its choice was “Trump”, consisting of a gesture made by placing an open palm on the top of the head, mimicking the president-elect’s extravagant hairstyle.
Trump’s hair also featured in Japan’s choice for this year. Rather than a word, Japan chooses a kanji (Chinese character); 2016’s choice is “金” (gold). This represented a number of different topical issues: Japan’s haul of medals at the Rio Olympics, fluctuating interest rates, the gold shirt worn by singer and YouTube sensation Piko Taro, and, inevitably, the colour of Trump’s hair.
And then there’s Austria, whose word is 51 letters long: “Bundespräsidentenstichwahlwiederholungsverschiebung”. It means “the repeated postponement of the runoff vote for Federal President”. Referring to the seven months of votes, legal challenges and delays over the country’s presidential election, this again references an event that flirted with extreme nationalism and exposed the convoluted nature of democracy. As a new coinage, it also illustrates language’s endless ability to creatively grapple with unfolding events.
Which brings us, finally, to “unpresidented”, a neologism Donald Trump inadvertently created when trying to spell “unprecedented” in a tweet attacking the Chinese. At the moment, it’s a word in search of a meaning, but the possibilities it suggests seem to speak perfectly to the history of the present moment. And depending on what competitors 2017 throws up, it could well emerge as a future candidate.
William Shakespeare died on 23 April 1616, 400 years ago, in the small Warwickshire town of his birth. He was 52 years of age: still young (or youngish, at least) by modern reckonings, though his death mightn’t have seemed to his contemporaries like an early departure from the world.
Most of the population who survived childhood in England at this time were apt to die before the age of 60, and old age was a state one entered at what today might be thought a surprisingly youthful age.
Many of Shakespeare’s fellow-writers had died, or were soon to do so, at a younger age than he: Christopher Marlowe, in a violent brawl, at 29; Francis Beaumont, following a stroke, at 31 (also in 1616: just 48 days, as it happened, before Shakespeare’s own death); Robert Greene, penitent and impoverished, of a fever, in the garret of a shoemaker’s house, at 34; Thomas Kyd, after “bitter times and privy broken passions”, at 35; George Herbert, of consumption, at 39; John Fletcher, from the plague, at 46; Edmund Spenser, “for lack of bread” (so it was rumoured), at 47; and Thomas Middleton, also at 47, from causes unknown.
The cause or causes of Shakespeare’s death are similarly unknown, though in recent years they have become a topic of persistent speculation. Syphilis contracted by visits to the brothels of Turnbull Street, mercury or arsenic poisoning following treatment for this infection, alcoholism, obesity, cardiac failure, a sudden stroke brought on by the alarming news of a family disgrace – that Shakespeare’s son-in-law, Thomas Quiney, husband of his younger daughter, Judith, had been responsible for the pregnancy and death of a young local woman named Margaret Wheeler – have all been advanced as possible factors leading to Shakespeare’s death.
Francis Thackeray, Director of the Institute for Human Evolution at the University of Witwatersrand, believes that cannabis was the ultimate cause of Shakespeare’s death, and has been hoping – in defiance of the famous ban on Shakespeare’s tomb (“Curst be he that moves my bones”, etc.) – to inspect the poet’s teeth in order to confirm this theory. (“Teeth are not bones”, Dr Thackeray somewhat controversially insists.) No convincing evidence, alas, has yet been produced to support any of these theories.
More intriguing than the actual pathology of Shakespeare’s death, however, may be another set of problems that have largely evaded the eye of biographers, though they seem at times – in a wider, more general sense – to have held the poet’s own sometimes playful attention. They turn on the question of fame: how it is constituted; how slowly and indirectly it’s often achieved, how easily it may be delayed, diverted, or lost altogether from view.
No memorial gathering
On 25 April 1616, two days after his death, Shakespeare was buried in the chancel of Holy Trinity Church at Stratford, having earned this modest place of honour as much (it would seem) through his local reputation as a respected citizen as from any deep sense of his wider professional achievements.
No memorial gatherings were held in the nation’s capital, where he had made his career, or, it would seem, elsewhere in the country. The company of players that he had led for so long did not pause (so far as we know) to acknowledge his passing, nor did his patron and protector, King James, whom he had loyally served.
Only one writer, a minor Oxfordshire poet named William Basse, felt moved to offer, at some unknown date following his death, a few lines to the memory of Shakespeare, with whom he may not have been personally acquainted. Hoping that Shakespeare might be interred at Westminster but foreseeing problems of crowding at the Abbey, Basse began by urging other distinguished English poets to roll over in their tombs, in order to make room for the new arrival.
Renownèd Spenser, lie a thought more nigh.
To learned Chaucer; and rare Beaumont, lie
A little nearer Spenser, to make room
For Shakespeare in your threefold, fourfold tomb.
None of these poets responded to Basse’s injunctions, however, and Shakespeare was not to win his place in the Abbey for more than a hundred years, when Richard Boyle, third Earl of Burlington, commissioned William Kent to design and Peter Scheemakers to sculpt a life-size white marble statue of the poet – standing cross-legged, leaning thoughtfully on a pile of books – to adorn Poets’ Corner.
On the wall behind this statue, erected in the Abbey in January 1741, is a tablet with a Latin inscription (perhaps contributed by the poet Alexander Pope) conceding the belated arrival of the memorial: “William Shakespeare,/124 years after his death/ erected by public love”.
Basse’s verses were in early circulation, but not published until 1633. No other poem to Shakespeare’s memory is known to have been written before the appearance of the First Folio in 1623. No effort appears to have been made in the months and years following the poet’s death to assemble a tributary volume, honouring the man and his works. None of Shakespeare’s other contemporaries noted the immediate fact of his passing in any surviving letter, journal, or record. No dispatches, private or diplomatic, carried the news of his death beyond Britain to the wider world.
Why did the death of Shakespeare cause so little public grief, so little public excitement, in and beyond the country of his birth? Why wasn’t his passing an occasion for widespread mourning, and widespread celebration of his prodigious achievements? What does this curious silence tell us about Shakespeare’s reputation in 1616; about the status of his profession and the state of letters more generally in Britain at this time?
A very quiet death
Shakespeare’s death occurred upon St George’s Day. That day was famous for the annual rites of prayer, procession, and feasting at Windsor by members of the Order of the Garter, England’s leading chivalric institution, founded in 1348 by Edward III. Marking as it did the anniversary of the supposed martyrdom in AD 303 of St George of Cappadocia, St George’s Day was celebrated in numerous countries in and beyond Europe, as it is today, but had emerged somewhat bizarrely in late mediaeval times as a day of national significance in England.
On St George’s Day 1616, as Shakespeare lay dying in far-off Warwickshire, King James – seemingly untroubled by prior knowledge of this event – was entertained in London by a poet of a rather different order named William Fennor.
Fennor was something of a royal favourite, famed for his facetious contests in verse, often in the King’s presence, with the Thames bargeman, John Taylor, the so-called Water Poet: a man whom James – as Ben Jonson despairingly reported to William Drummond – reckoned to be the finest poet in the kingdom.
In the days and weeks that followed, as the news of the poet’s death (one must assume) filtered gradually through to the capital, there is no recorded mention in private correspondence or official documents of Shakespeare’s name. Other more pressing matters were now absorbing the nation. Shakespeare had made a remarkably modest exit from the theatre of the world: largely un-applauded, largely unobserved. It was a very quiet death.
An age of public mourning
The silence that followed the death of Shakespeare is the more remarkable coming as it did in an age that had developed such elaborate rituals of public mourning, panegyric, and commemoration, most lavishly displayed at the death of a monarch or peer of the realm, but also occasionally set in train by the death of an exceptional commoner.
Consider the tributes paid to another great writer of the period, William Camden, antiquarian scholar and Clarenceux herald of arms, who died in London in late November 1623; a couple of weeks, as chance would have it, after the publication of Shakespeare’s First Folio.
Portrait of William Camden by Marcus Gheeraerts the Younger (1609). Wikimedia Commons
Camden was a man of quite humble social origins – like Shakespeare himself, whose father was a maker of gloves and leather goods in Stratford. Camden’s father was a painter-stainer, whose job it was to decorate coats of arms and other heraldic devices. By the time of his death Camden was widely recognized, in Britain and abroad, as one of the country’s outstanding scholars.
Eulogies were delivered at Oxford and published along with other tributes in a memorial volume soon after his death. At Westminster his body was escorted to the Abbey on 19 November by a large retinue of mourners, led by 26 poor men wearing gowns, followed by soberly attired gentlemen, esquires, knights, and members of the College of Arms, the hearse being flanked by earls, barons, and other peers of the realm, together with the Lord Keeper, Bishop John Williams, and other divines. Camden’s imposing funeral mirrored on a smaller scale the huge procession of 1,600 mourners which in 1603 had accompanied the body of Elizabeth I to its final resting place in the Abbey.
There were particular reasons, then, why Camden should have been accorded a rather grand funeral of his own. But mightn’t there have been good reasons for Shakespeare, likewise – whom we see today as the outstanding writer of his age – to have been honoured at his death in a suitably ceremonious fashion? It’s curious to realize, however, that Shakespeare at the time of his death wasn’t yet universally seen as the outstanding writer of his age.
At this quite extraordinary moment in the history of English letters and intellectual exchange there was more than one contender for that title. William Camden himself – an admired poet in addition to his other talents, and friend and mentor of other poets of the day – had included Shakespeare’s name in a list, published in 1614, of “the most pregnant wits of these our times, whom succeeding ages may justly admire”, placing him, without differentiation, alongside Edmund Spenser, John Owen, Thomas Campion, Michael Drayton, George Chapman, John Marston, Hugh Holland and Ben Jonson, the last two of whom he had taught at Westminster School.
But it was another poet, Sir Philip Sidney, whom Camden had befriended during his student days at Oxford, that he most passionately admired, and continued to regard – following Sidney’s early death at the age of 32 in 1586 – as the country’s supreme writer. “Our Britain is the glory of earth and its precious jewel,/ But Sidney was the precious jewel of Britain”, Camden had written in a memorial poem in Latin mourning his friend’s death.
No commoner poet in England had ever been escorted to his grave with such pomp as was furnished for Sidney’s funeral at St Paul’s Cathedral, London, on 16 February 1587.
The 700-man procession was headed by 32 poor men, representing the number of years that Sidney had lived, with fifes and drums “playing softly” beside them. They were followed by trumpeters and gentlemen and yeomen servants, physicians, surgeons, chaplains, knights and esquires, heralds bearing aloft Sidney’s spurs and gauntlet, his helm and crest, his sword and targe, his coat of arms. Then came the hearse containing Sidney’s body. Behind them walked the chief mourner, Philip’s young brother, Robert, accompanied by the Earls of Leicester, Pembroke, Huntingdon, and Essex, followed by representatives from the states of Holland and Zealand. Next came the Lord Mayor and Aldermen of the City of London, with 120 members of the Company of Grocers, and, at the rear of the procession, “citizens of London practised in arms, about 300, who marched three by three”.
Sidney’s funeral was a moving salute to a man who was widely admired not just for his military, civic and diplomatic virtues, but as the outstanding writer of his day. He fulfilled in exemplary fashion, as Shakespeare curiously did not, the Renaissance ideal of what a poet should strive to be.
In an extraordinary act of homage not before seen in England, but soon to be commonly followed at the death of distinguished writers, the Universities of Oxford and Cambridge produced three volumes of Latin verse lauding Sidney’s achievements, while a fourth volume of similar tributes was published by the University of Leiden. The collection from Cambridge presented contributions from 63 Cambridge men, together with a sonnet in English by King James VI of Scotland, the future King James I of Britain.
Earlier English poets had been mourned at their passing, if not in these terms and not on this scale, then with more enthusiasm than was evident at the death of Shakespeare. Edmund Spenser at his death in 1599 was buried in Westminster Abbey next to Chaucer, “this hearse being attended by poets, and mournful elegies and poems with the pens that wrote them thrown into his tomb”. The deaths of Thomas Wyatt and Michael Drayton were similarly lamented.
When, 21 years after Shakespeare’s death, his former friend and colleague Ben Jonson came at last to die, the crowd that gathered at his house in Westminster to accompany his body to his grave in the Abbey included “all or the greatest part of the nobility and gentry then in the town”. Within months of his death a volume of 33 poems was in preparation and a dozen additional elegies had appeared in print. Jonson was hailed at his death as “king of English poetry”, as England’s “rare arch-poet”. With his death, as more than one memorialist declared, English poetry itself now seemed also to have died. No one had spoken in these terms at the death of Shakespeare.
To take one last example: at the death in 1643 of the dramatist William Cartwright – whose works and whose very name are barely known to most people today – Charles I elected to wear black, remarking that
since the muses had so much mourned for the loss of such a son, it would be a shame for him not to appear in mourning for the loss of such a subject.
At the death of Shakespeare in 1616 James had shown no such minimal courtesy.
Why should Shakespeare at his death have been so neglected? One simple answer is that King James, unlike his son, Charles, had no great passion for the theatre, and no very evident regard for Shakespeare’s genius. Early in his reign, so Dudley Carleton reported,
The first holy days we had every night a public play in the great hall, at which the King was ever present, and liked or disliked as he saw cause: but it seems he takes no extraordinary pleasure in them.
But Shakespeare and his company were not merely royal servants, bound to provide a steady supply of dramatic entertainment at court; they also catered for the London public who flocked to see their plays at Blackfriars and the Globe, and who had their own ways of expressing their pleasure, their frustrations, and – at the death of a player – their grief.
When Richard Burbage, the principal actor for the King’s Men, died on 9 March 1619, just seven days after the death of Queen Anne, the London public were altogether more upset by that event than they had been over the death of the Queen, as one contemporary writer – quoting, ironically, the opening lines of Shakespeare’s 1 Henry VI – tartly observed.
So it’s necessary, I think, to pose a further question. Why should the death of Burbage have affected the London public more profoundly than the death not merely of the Queen but of the dramatist whose work he so skilfully interpreted?
I believe the answer lies, partly at least, in the status of the profession to which Shakespeare belonged, a profession which didn’t yet have a regular name: the very words playwright and dramatist not entering the language until half a century after Shakespeare’s death.
Prominent actors at this time were far better known to the public than the writers who provided their livelihood. The writers were on the whole invisible people, who worked as backroom boys, often anonymously and in small teams; playgoers had no easy way of discovering their identity. Theatre programmes didn’t yet exist. Playbills often announced the names of leading actors, but not until the very last decade of the 17th century did they include the names of authors.
Only a fraction of the large number of plays performed in this period moreover found their way into print, and those that were published didn’t always disclose the names of their authors.
At the time of Shakespeare’s death half of his plays weren’t yet available in print, and there were no known plans to produce a collected edition of his works. The total size and shape of the canon were therefore still imperfectly known. Shakespeare was not yet fully visible.
In 1616 the world didn’t yet realise what they had got, or who it was that they’d lost. Hence, I believe, the otherwise inexplicable silence at his passing.
To the Memory of My Beloved
At the time of Shakespeare’s death another English writer was arguably better known to the general public than Shakespeare himself, and more highly esteemed by the brokers of power at King James’s court. That writer was Shakespeare’s friend and colleague Ben Jonson, who early in 1616 had been awarded a pension of one hundred marks to serve as King James’s laureate poet.
A first folio edition of Shakespeare’s collected plays was finally published in London with Jonson’s assistance and oversight in 1623. This monumental volume at last gave readers in England some sense of the wider reach of Shakespeare’s theatrical achievement, and laid the essential foundations of his modern reputation.
At the head of this volume stand two poems by Ben Jonson: the second, To the Memory of My Beloved, the Author, Mr William Shakespeare, and What He Hath Left Us assesses the achievement of this extraordinary writer. Shakespeare had been praised during his lifetime as a “sweet”, “mellifluous”, “honey-tongued”, “honey-flowing”, “pleasing” writer. No one until this moment had presented him in the astounding terms that Jonson here proposes: as the pre-eminent figure, the “soul” and the “star” of his age; and as something even more than that: as one who could be confidently ranked with the greatest writers of antiquity and of the modern era.
Triumph, my Britain, thou hast one to show
To whom all scenes of Europe homage owe,
He was not of an age, but for all time!
Today, 400 years on, that last line sounds like a truism, for Shakespeare’s fame has indeed endured. He is without doubt the most famous writer the world has ever seen. But in 1623 this was a bold and startling prediction. No one before that date had described Shakespeare’s achievement in such terms as these.
This is an edited version of a public lecture given at the University of Melbourne.
The most dangerous part of flying is driving to the airport.
That’s a standard joke among pilots, who know even better than the flying public that aviation is the safest mode of transportation.
But there are still those headlines and TV shows about airline crashes, and those statistics people like to repeat, such as:
Between 1976 and 2000, more than 1,100 passengers and crew lost their lives in accidents in which investigators determined that language had played a contributory role.
True enough, 80% of all air incidents and accidents occur because of human error. Miscommunication, combined with other human factors such as fatigue, cognitive workload, noise, or forgetfulness, has played a role in some of the deadliest accidents.
The best known, and most widely discussed, is the 1977 collision on the ground of two Boeing 747 aircraft in Tenerife, which resulted in 583 fatalities. The accident was due in part to difficult communications between the pilot, whose native language was Dutch, and the Spanish air traffic controller.
In such a high-stakes environment as commercial aviation, where the lives of hundreds of passengers and innocent people on the ground are involved, communication is critical to safety.
So it was decided that Aviation English would be the international language of aviation and that all aviation professionals – pilots and air traffic controllers (ATC) – would need to be proficient in it. Highly structured and codified, it is a language designed to minimise ambiguity and misunderstanding.
Pilots and ATC expect to hear certain bits of information in certain ways and in a given order. The “phraseology”, with its particular pronunciation (for example, “fife” and “niner” instead of “five” and “nine”, so they’re not confused with each other), specific words (“Cleared to land”), international alphabet (“Mike Hotel Foxtrot”) and strict conversation rules (you must repeat, or “read back”, an instruction), needs to be learned and practised.
In spite of globalisation and the spread of English, most people around the world are not native English speakers, and an increasing number of aviation professionals do not speak English as their first language.
Native speakers have an advantage when they learn Aviation English, since they already speak English at home and in their daily lives. But they encounter many pilots or ATC who learned English as a second or even third language.
Whose responsibility is it to ensure that communication is successful? Can native speakers simply speak the way they do at home and expect to be understood? Or do they also have the responsibility to make themselves understood and to learn how to understand pilots or ATC who are not native English speakers?
As a linguist, I analyse aviation language from a linguistics perspective. I have noted the restricted meaning of its few verbs and adjectives; that the only pronouns are “you” and sometimes “we” (“How do you read?”; “We’re overhead Camden”); how few questions there are, most utterances being imperatives (“Maintain heading 180”); and that the syntax is so simple (no complement clauses, no relative clauses, no recursion) it might not even count as a human language for Chomsky.
But, as a pilot and a flight instructor, I look at it from the point of view of student pilots learning to use it in the cockpit while also learning to fly the airplane and navigate around the airfield.
How much harder is it to remember what to say when the workload goes up, and how much more difficult to speak over the radio when you know everyone else on the frequency is listening and will notice every little mistake you make?
Imagine, then, how much more difficult this is for pilots with English as a second language.
Everyone learning another language knows it’s suddenly more challenging to hold a conversation over the phone than face-to-face, even with someone you already know. When it’s over the radio, with someone you don’t know, against the noise of the engine, static noise in the headphones, and while trying to make the plane do what you want it to do, it can be quite daunting.
No wonder student pilots who are not native English speakers sometimes prefer to stay silent, and even some experienced native English speakers will too, when the workload is too great.
This is one of the results of my research conducted in collaboration with UNSW’s Brett Molesworth, combining linguistics and aviation human factors.
Experiments in a flight simulator with pilots of diverse language backgrounds and flying experience explored the conditions likely to result in pilots making mistakes or misunderstanding ATC instructions. Not surprisingly, increased workload, too much information and rapid ATC speech caused mistakes.
Also not surprisingly, less experienced pilots, no matter their English proficiency, made more mistakes. But surprisingly, it was the level of training, rather than number of flying hours or language background, that predicted better communication.
Once we understand the factors contributing to miscommunication in aviation, we can propose solutions to prevent them. For example, technologies such as Automatic Speech Recognition and Natural Language Understanding may help catch errors in pilot readbacks that ATC did not notice and might complement training for pilots and ATC.
It is vital that they understand each other, whatever their native language.
As a young adult in college, I decided to learn Japanese. My father’s family is from Japan, and I wanted to travel there someday.
However, many of my classmates and I found it difficult to learn a language in adulthood. We struggled to connect new sounds and a dramatically different writing system to the familiar objects around us.
It wasn’t so for everyone. There were some students in our class who were able to acquire the new language much more easily than others.
So, what makes some individuals “good language learners”? And do such individuals have a “second language aptitude”?
What we know about second language aptitude
Past research on second language aptitude has focused on how people perceive sounds in a particular language and on more general cognitive processes such as memory and learning abilities. Most of this work has used paper-and-pencil and computerized tests to determine language-learning abilities and predict future learning.
Researchers have also studied brain activity as a way of measuring linguistic and cognitive abilities. However, much less is known about how brain activity predicts second language learning.
Is there a way to predict aptitude for second language learning?
In a recently published study, Chantel Prat, associate professor of psychology at the Institute for Learning and Brain Sciences at the University of Washington, and I explored how brain activity recorded at rest – while a person is relaxed with their eyes closed – could predict the rate at which a second language is learned among adults who spoke only one language.
Studying the resting brain
Resting brain activity is thought to reflect the organization of the brain and it has been linked to intelligence, or the general ability used to reason and problem-solve.
We measured brain activity obtained from a “resting state” to predict individual differences in the ability to learn a second language in adulthood.
To do that, we recorded five minutes of eyes-closed resting-state electroencephalography, a method that detects electrical activity in the brain, in young adults. We also collected two hours of paper-and-pencil and computerized tasks.
We then had 19 participants complete eight weeks of French language training using a computer program. This software was developed by the U.S. armed forces with the goal of getting military personnel functionally proficient in a language as quickly as possible.
The software combined reading, listening and speaking practice with game-like virtual reality scenarios. Participants moved through the content in levels organized around different goals, such as being able to communicate with a virtual cab driver by finding out if the driver was available, telling the driver where their bags were and thanking the driver.
Nineteen adult participants (18-31 years of age) completed two 30-minute training sessions per week for a total of 16 sessions. After each training session, we recorded the level that each participant had reached. At the end of the experiment, we used that level information to calculate each individual’s learning rate across the eight-week training.
As expected, there was large variability in the learning rate, with the best learner moving through the program more than twice as quickly as the slowest learner. Our goal was to figure out which (if any) of the measures recorded initially predicted those differences.
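The learning-rate measure described above can be approximated with a simple sketch: fit a least-squares line to each participant’s level-per-session record and take the slope as the rate. The session data below are invented for illustration; the study’s actual scoring may well have differed.

```python
# Hedged sketch: a per-participant learning rate as the slope of the
# best-fit line through (session_index, level) points. All data are
# invented for illustration only.

def learning_rate(levels):
    """Slope of the least-squares line through (session_index, level)."""
    n = len(levels)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(levels) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, levels))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Two hypothetical participants across the 16 training sessions.
fast = [i * 1.0 for i in range(16)]   # reaches level 15
slow = [i * 0.4 for i in range(16)]   # reaches level 6

print(learning_rate(fast) / learning_rate(slow))  # fast learner moves ~2.5x quicker
```

On numbers like these, the “best learner more than twice as quick as the slowest” observation is simply a ratio of two such slopes.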
A new brain measure for language aptitude
When we correlated our measures with learning rate, we found that patterns of brain activity that have been linked to linguistic processes predicted how easily people could learn a second language.
Patterns of activity over the right side of the brain predicted upwards of 60 percent of the differences in second language learning across individuals. This finding is consistent with previous research showing that the right half of the brain is more frequently used with a second language.
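The “60 percent of the differences” figure refers to variance explained, which is the square of the correlation coefficient: explaining 60 percent of the variance corresponds to a correlation of about 0.77. A minimal sketch of that relationship, with an invented perfectly linear dataset to show the correlation function at work:

```python
import math

# Hedged sketch: variance explained equals the squared Pearson
# correlation, so r^2 = 0.60 implies |r| = sqrt(0.60) ~= 0.775.
# The sample data below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

variance_explained = 0.60
r_needed = math.sqrt(variance_explained)
print(round(r_needed, 3))  # ~0.775

# A perfectly linear relationship gives r = 1.0, i.e. 100% explained.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```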
Our results suggest that the majority of the language learning differences between participants could be explained by the way their brain was organized before they even started learning.
Implications for learning a new language
Does this mean that if you, like me, don’t have a “quick second language learning” brain you should forget about learning a second language?
First, it is important to remember that 40 percent of the difference in language learning rate still remains unexplained. Some of this is certainly related to factors like attention and motivation, which are known to be reliable predictors of learning in general, and of second language learning in particular.
Second, we know that people can change their resting-state brain activity. So training may help to shape the brain into a state in which it is more ready to learn. This could be an exciting future research direction.
Second language learning in adulthood is difficult, but the benefits are large for those who, like myself, are motivated by the desire to communicate with others who do not speak their native tongue.
The British Council is to open a bilingual pre-school in Hong Kong in August. The International Pre-School, which will teach English and Cantonese and have specific times set aside for Mandarin, will follow the UK-based International Primary Curriculum.
The British Council already has bilingual pre-schools in Singapore (pictured above) and Madrid. The adoption of a bilingual model of early years learning, rather than a purely English-medium one, is supported by much of the research on this age group. In a randomised control trial in the US state of New Jersey, for example, three- and four-year-olds from both Spanish- and English-speaking backgrounds were assigned by lottery to either an all-English or English–Spanish pre-school programme which used an identical curriculum. The study found that children from the bilingual programme emerged with the same level of English as those in the English-medium one, but both the Spanish-speaking and anglophone children had a much higher level of Spanish.
After Brexit, there are various things that some in the EU hope to see and hear less in the future. One is Nigel Farage. Another is the English language.
In the early hours of June 24, as the referendum outcome was becoming clear, Jean-Luc Mélenchon, left-wing MEP and French presidential candidate, tweeted that “English cannot be the third working language of the European parliament”.
This is not the first time that French and German opinion has weighed in against alleged disproportionate use of English in EU business. In 2012, for example, a similar point was made about key eurozone recommendations from the European Commission being published initially “in a language which [as far as the Euro goes] is only spoken by less than 5m Irish”. With the number of native speakers of English in the EU set to drop from 14% to around 1% of the bloc’s total with the departure of the UK, this point just got a bit sharper.
Official EU language policy is multilingualism with equal rights for all languages used in member states. It recommends that “every European citizen should master two other languages in addition to their mother tongue” – Britain’s abject failure to achieve this should make it skulk away in shame.
The EU recognises 24 “official and working” languages, a number that has mushroomed from the original four (Dutch, French, German and Italian) as more countries have joined. All EU citizens have a right to access EU documents in any of those languages. This calls for a translation team numbering around 2,500, not to mention a further 600 full-time interpreters. In practice most day-to-day business is transacted in either English, French or German and then translated, but it is true that English dominates to a considerable extent.
The preponderance of English has nothing to do with the influence of Britain or even Britain’s membership of the EU. Historically, the expansion of the British empire, the impact of the industrial revolution and the emergence of the US as a world power have embedded English in the language repertoire of speakers across the globe.
Unlike Latin, which outlived the Roman empire as the lingua franca of medieval and renaissance Europe, English of course has native speakers (who may be unfairly advantaged), but it is those who have learned English as a foreign language – “Euro-English” or “English as a lingua franca” – who now constitute the majority of users.
According to the 2012 Special Eurobarometer on Europeans and their Languages, English is the most widely spoken foreign language in 19 of the member states where it is not an official language. Across Europe, 38% of people speak English well enough as a foreign language to have a conversation, compared with 12% for French and 11% for German.
The report also found that 67% of Europeans consider English the most useful foreign language, and that the numbers favouring German (17%) or French (16%) have declined. As a result, 79% of Europeans want their children to learn English, compared to 20% for French and German.
Too much invested in English
Huge sums have been invested in English teaching by both national governments and private enterprise. As the demand for learning English has increased, so has the supply. English language learning worldwide was estimated to be worth US$63.3 billion (£47.5 billion) in 2012, and it is expected that this market will rise to US$193.2 billion (£145.6 billion) by 2017. The value of English for speakers of other languages is not going to diminish any time soon. There is simply too much invested in it.
Speakers of English as a second language outnumber first-language English speakers by 2:1 both in Europe and globally. For many Europeans, and especially those employed in the EU, English is a useful piece in a toolbox of languages to be pressed into service when needed – a point which was evident in a recent project on whether the use of English in Europe was an opportunity or a threat. So in the majority of cases using English has precisely nothing to do with the UK or Britishness. The EU needs practical solutions and English provides one.
English is unchallenged as the lingua franca of Europe. It has even been suggested that in some countries of northern Europe it has become a second rather than a foreign language. Jan Paternotte, D66 party leader in Amsterdam, has proposed that English should be decreed the official second language of that city.
English has not always held its current privileged status. French and German have both functioned as common languages for high-profile fields such as philosophy, science and technology, politics and diplomacy, not to mention Church Slavonic, Russian, Portuguese and other languages in different times and places.
We can assume that English will not maintain its privileged position forever. Who benefits now, however, are not the predominantly monolingual British, but European anglocrats whose multilingualism provides them with a key to international education and employment.
Much about the EU may be about to change, but right now an anti-English language policy so dramatically out of step with practice would simply make the post-Brexit hangover more painful.
The EFL industry in Spain enjoyed a mini boom during the early years of the global economic crisis, as many adult students rushed to improve their English language skills, either to get themselves back into the job market or to hang on to the job they had. As we reached the new decade, the boom slowed and then started to tail off. But no one expected the sudden and significant drop in adult student numbers that hit the industry at the start of the current academic year.
The drop wasn’t school, city or even region specific; it was the same story all over Spain. And the numbers were eye-watering. Depending on who you talk to (and/or who you believe), adult student numbers fell by between 10% and 20%. Enough to make any school owner or manager wince.
What happened? Where did all these students go? Well, as is normally the case, there is no single, simple answer. There has been a slight upturn in in-company teaching, so it may be that some students who were previously paying for their own courses in our schools are now studying in their company (if they’re fortunate enough to have a job in the first place; Spanish unemployment is still well over 20%).
The standard of English teaching in mainstream education is also getting better, slowly, so it may be that more school leavers are achieving a basic level of communicative competence.
Some adult students – especially the younger ones – may also have decided to switch from a traditional, bricks and mortar language school to a Web-based classroom.
My own theory is that it’s the free movement of labour in the European Union that is having the greatest effect on our market. In other words, as there are so few jobs available in Spain, hundreds of thousands of young adults – many of whom may previously have been our students – have simply upped sticks and gone abroad to find work.
A recent survey conducted in the UK indicates that migrants from Spain rose to 137,000 in 2015 (up from 63,000 in 2011). Most of them are probably working in relatively unskilled jobs in hotels, bars and restaurants, but at least they’re working – and they’re improving their English language skills as they go.
A similar number probably emigrated to other countries in the north of Europe and another significant number emigrated to Latin America. Add up all these emigrants and we could be looking at a total of well over 300,000 migrants – just in 2015.
On a recent trip to Oxford I met a young Spanish guy, working in a hotel, who had previously been a student at our school in Barcelona. He’s a typical example. Will he ever move back to Spain, I asked him? Perhaps, in the future, he said, but only if the situation in Spain changes and he can find a decent job. His new fluency in English, learnt by living and working in Oxford, might just help him with that.
So where does that leave Spanish language schools? Will adult students come back to our schools in the same numbers as before? Probably not. But that doesn’t mean we have to give up on this market. If adult students won’t come to us, we can use the Internet to take our services to them. Even those living and working abroad.
Geezers and girls literally ain’t allowed to use slang words like “emosh” (emotional) anymore. The head teacher and staff of an academy in Essex, England appear to have taken great pleasure in banning the type of slang used in reality television series TOWIE, including many of the words in the above sentence, in a bid to improve the job prospects of their students.
Head teacher David Grant reportedly believes that by outlawing certain words and phrases and forcing students to use “proper English”, they will be in a better position to compete for jobs with non-native English speakers who may have a better command of the language. The way forward, he believes, is for young people to be using “the Queen’s English”, and not wasting time getting totes emosh about some bird or some bloke.
While nobody would doubt the good intentions behind such a scheme, it simply isn’t the way to go about achieving the desired aims. Of course, there’s always the possibility that this is all part of some clever plan to raise awareness and generate debate among the students about the language they use; in which case, great. Unfortunately, phrases such as “proper English”, “wrong usage” and “Queen’s English” suggest a very different and alarmingly narrow-minded approach to language.
Indeed, banning slang in schools is a short-sighted and inefficient way of trying to produce young people who are confident and adaptable communicators. What we should be doing is encouraging students to explore the fluidity, richness, and contextual appropriateness of an ever-changing language.
The fact is, there really is no such thing as “proper English”; there is simply English that is more or less appropriate in a given situation. Most of us would agree that “well jel” (very jealous) or “innit” have no place in most job interviews, but they do have a place elsewhere. Similarly, some people might get annoyed at what they see as the overuse of “like”, but it’s as much a part of young people’s language as “cool”, “yeah”, or “dude” might have been to their parents in their day.
This isn’t the first time a school has gone down this particular route in the quest to create more employable school leavers. In 2013, Harris Academy in south London produced a list of banned slang words and phrases including “bare” (a lot), “innit” and “we woz” in a bid to improve their pupils’ chances. Fast forward to 2015 and the policy was hailed a success, with the “special measures” school now being rated “outstanding”. But are we really to believe that this turnaround was purely due to eager staff policing children’s use of a few slang words? Isn’t it perhaps more likely that the new leadership team brought with them rather more than a naughty words list?
Which is why a ban is so pointless. All it can possibly achieve is to make young people self-conscious about the way they speak, thus stifling creativity and expression. Do we really want the shy 13-year-old who has finally plucked up the courage to speak in class to be immediately silenced when the first word he or she utters is “Like…”? Or would we rather the teacher listens to what they have to say, then explores how the use of language can change the message, depending on the context? In other words, celebrate language diversity rather than restrict it.
And this is precisely what English language teachers do every day in their classes. Learning about language variation, about accents, dialects, and slang is all part of the curriculum, especially as they head towards A level. I can only imagine how frustrated they must be when their senior staff then seek to publicly undo their good work by insisting on outdated, class-based, culturally-biased notions of correct and incorrect usage.
In an English language class, students are taught how the ways in which we use language are part of how we construct and perform our social identities. Unfortunately, their break-times are then patrolled by some kind of language police who are tasked with ensuring those identities aren’t expressed (unless, presumably, they happen to be performing an acceptably middle-class job applicant identity at the time).
Different language is appropriate for different contexts. Yes, using TOWIE slang is inappropriate in a job interview, but no more inappropriate than using the Queen’s English in the playground. Unless you’re the Queen, obvs.
by Gianfranco Conti, PhD. Co-author of “The Language Teacher Toolkit” and “Breaking the Sound Barrier: Teaching Learners How to Listen”, winner of the 2015 TES best resource contributor award and founder of www.language-gym.com