The institution will be renamed CES Toronto and will undergo a “slow rebranding,” CES managing director Justin Quinn told The PIE News.
The group has acquired a 100% shareholding from the previous owner and president of Global Village Toronto, Genevieve Bouchard, who has recently retired.
Robin Adams, formerly Global Village's director of marketing, has been appointed president of the newly named centre, but the rest of the staff will remain the same.
“[Canada] is a very exciting market to be present in”
The move is the result of CES’ long-time interest in the Canadian market, which Quinn said he had been observing closely, waiting for the right opportunity.
“My interest in Canada has been there for a number of years, I have been watching the market very carefully and it’s a very exciting market to be present in,” he said.
“I was just waiting for the right opportunity to come along.”
CES plans to further grow and develop the school, with a view to introducing new programs, including teacher training. Quinn also hopes the school will become an Eaquals member within the next 12 months; he is the current chair of that accreditation and membership body for language schools.
“One of the things that attracted me to the school is that there is growth potential, and I certainly feel we could probably be more aggressive in our growth strategy,” Quinn explained.
The school will undertake the process to maintain its Languages Canada membership, he added.
“It’s part of the conditions – [Languages Canada] will do due diligence on us as well. It’s an impressive accreditation system,” Quinn told The PIE.
“They look at the owners, and our strategy, and our plans going forward, rather than just looking at how the school is operating at a snapshot in time.”
In a statement, CES and Global Village (both IALC accredited) said they will jointly promote their locations, which include GV Calgary, CES Dublin, CES Edinburgh, CES Harrogate, GV Hawaii, CES Leeds, CES London, CES Oxford, CES Toronto, GV Vancouver, GV Victoria, and CES Worthing.
Recent global developments have sharply polarised communities in many countries around the world. A new politics of exclusion has drawn urgent attention to the ways in which structural inequality has marginalised and silenced certain sectors of society. And yet, as a recent report shows, diversity and inclusion in fact “benefit the common good”. A more diverse group is a stronger, more creative and productive group.
In the world of literary writing, we find similar gaps and exclusions. But these are counterbalanced in some respects by new positive initiatives.
In 2015, a study revealed that literature by writers of colour had been consistently under-represented by the predominantly white British book industry. Statistics in The Bookseller show that out of thousands of books published in 2016 in the UK, fewer than 100 were by British authors of a non-white background. And out of 400 authors identified by the British public in a 2017 Royal Society of Literature survey, only 7% were black, Asian or of mixed race (compared to 13% of the population).
A similar marginalisation takes place in the curricula in schools and universities, mirroring exclusions in wider society. In most English literature courses of whatever period, the writers taught are white, largely English and largely male.
A fundamental inequality arises in which, though British culture at large is diverse, syllabuses are not. Indeed, many British readers and students find little to recognise or to identify with when they read and study mainstream British literature.
But it’s not just a case of under-representation. It’s also a case of misrepresentation.
Black and Asian writers who have been published within the mainstream British system describe the pressure they have felt to conform to cultural stereotypes in their work. Their books are often packaged and presented in ways that focus on their ethnicity, regularly using cliches. At the same time, more universal aspects of their writing are overlooked. For example, the covers of novels by Asian British writers usually stick to a limited colour palette of yellows, reds, and purples, accented by “exotic” images.
These writers bristle at the sense that they are read not as crafters of words and worlds, but as spokespeople for their communities or cultures. At its worst, this process turns these writers and their books into objects of anthropological curiosity rather than works inviting serious literary study or simply pleasurable reading. The message is that black and Asian literature is other than or outside mainstream British writing.
Against these exclusions, leading British authors such as Bernardine Evaristo and others have called for a broader, more inclusive approach. They recognise that what and how we read shapes our sense of ourselves, our communities and the world.
Reframing the narrative
The Postcolonial Writers Make Worlds research project, based in the Oxford English Faculty and The Oxford Research Centre in the Humanities, set out to ask what it means to read contemporary fiction as British readers. Working with reading groups and in discussion with writers, we found that readers of all ages entered the relatively unfamiliar worlds created by BAME authors with interest.
For many, finding points of familiarity along gender, age, geographical or other lines was important for their ability to enjoy stories from communities different from their own. Identifying in this way gave some readers new perspectives on their own contexts. At the same time, unfamiliarity was not a barrier to identification. In some cases, universal human stories, like falling in love, acted as a bridge. This suggests that how literature is presented to readers, whether it is framed as other or not, can be as significant as what is represented.
Contemporary black and Asian writing from the UK is British writing. And this means that the work of writers such as Evaristo, Nadifa Mohamed and Daljit Nagra should be placed on the same library shelf, reading list and section of the bookshop as work by Ian McEwan, Julian Barnes and Ali Smith – not exclusively in “world interest” or “global literature”.
Equally, much can be gained by thinking of white British writers like Alan Hollinghurst or Hilary Mantel as having as much of a cross-cultural or even postcolonial outlook as Aminatta Forna and Kamila Shamsie.
There are positive signs. A new Edexcel/Pearson A-level teaching resource on Contemporary Black British Literature has been developed. The Why is My Curriculum White? campaign continues to make inroads in university syllabuses. And the Jhalak Prize is raising the profile of BAME writing in Britain. Against this background, the Postcolonial Writers Make Worlds website offers a multimedia hub of resources on black and Asian British writing, providing points of departure for more inclusive, wide-ranging courses. Yet there is still much to be done.
All literature written in English in the British Isles is densely entangled with other histories, cultures, and pathways of experience, both within the country and far beyond. Our syllabuses, publishing practices, and conversations about books must reflect this.
There are many benefits to knowing more than one language. For example, it has been shown that aging adults who speak more than one language are less likely to develop dementia.
Additionally, the bilingual brain becomes better at filtering out distractions, and learning multiple languages improves creativity. Evidence also shows that learning subsequent languages is easier than learning the first foreign language.
Why is foreign language study important at the university level?
As an applied linguist, I study how learning multiple languages can have cognitive and emotional benefits. One of these benefits that’s not obvious is that language learning improves tolerance.
This happens in two important ways.
The first is that it opens people’s eyes to ways of doing things that are different from their own, which is called “cultural competence.”
The second is related to the comfort level of a person when dealing with unfamiliar situations, or “tolerance of ambiguity.”
Gaining cross-cultural understanding
Cultural competence is key to thriving in our increasingly globalized world. How specifically does language learning improve cultural competence? The answer can be illuminated by examining different types of intelligence.
Psychologist Robert Sternberg’s research on intelligence describes different types of intelligence and how they are related to adult language learning. What he refers to as “practical intelligence” is similar to social intelligence in that it helps individuals learn nonexplicit information from their environments, including meaningful gestures or other social cues.
Language learning inevitably involves learning about different cultures. Students pick up clues about the culture both in language classes and through meaningful immersion experiences.
Researchers Hanh Thi Nguyen and Guy Kellogg have shown that when students learn another language, they develop new ways of understanding culture through analyzing cultural stereotypes. They explain that “learning a second language involves the acquisition not only of linguistic forms but also ways of thinking and behaving.”
With the help of an instructor, students can critically think about stereotypes of different cultures related to food, appearance and conversation styles.
Dealing with the unknown
The second way that adult language learning increases tolerance relates to how comfortable a person is when dealing with unfamiliar situations – that is, their “tolerance of ambiguity.”
Someone with a high tolerance of ambiguity finds unfamiliar situations exciting, rather than frightening. My research on motivation, anxiety and beliefs indicates that language learning improves people’s tolerance of ambiguity, especially when more than one foreign language is involved.
It’s not difficult to see why this may be so. Conversations in a foreign language will inevitably involve unknown words. It wouldn’t be a successful conversation if one of the speakers constantly stopped to say, “Hang on – I don’t know that word. Let me look it up in the dictionary.” Those with a high tolerance of ambiguity would feel comfortable maintaining the conversation despite the unfamiliar words involved.
Applied linguists Jean-Marc Dewaele and Li Wei also study tolerance of ambiguity and have indicated that those with experience learning more than one foreign language in an instructed setting have more tolerance of ambiguity.
Individuals with higher levels of tolerance of ambiguity have also been found to be more entrepreneurial (i.e., are more optimistic, innovative and don’t mind taking risks).
In the current climate, universities are frequently judged by the salaries of their graduates. Taking it one step further, given the relationship between tolerance of ambiguity and entrepreneurial intention, increased tolerance of ambiguity could lead to higher salaries for graduates – which, in turn, I believe, could help increase funding for those universities that require foreign language study.
Those who have devoted their lives to theorizing about and teaching languages would say, “It’s not about the money.” But perhaps it is.
Language learning in higher ed
Most American universities have a minimal language requirement that often varies depending on the student’s major. However, students can typically opt out of the requirement by taking a placement test or providing some other proof of competency.
In contrast to this trend, Princeton recently announced that all students, regardless of their competency when entering the university, would be required to study an additional language.
I’d argue that more universities should follow Princeton’s lead. Language study at the university level could increase tolerance of the different cultural norms represented in American society – something desperately needed in the current political climate, with a wave of hate crimes sweeping university campuses nationwide.
Knowledge of different languages is crucial to becoming global citizens. As former Secretary of Education Arne Duncan noted,
“Our country needs to create a future in which all Americans understand that by speaking more than one language, they are enabling our country to compete successfully and work collaboratively with partners across the globe.”
Considering the evidence that studying languages as adults increases tolerance in two important ways, the question shouldn’t be “Why should universities require foreign language study?” but rather “Why in the world wouldn’t they?”
We’re all aware that there are stereotypes. The British are sharply sarcastic, the Americans are great at physical comedy, and the Japanese love puns. But is humour actually driven by culture to any meaningful extent? Couldn’t it be more universal – or depend largely on the individual?
There are some good reasons to believe that there is such a thing as a national sense of humour. But let’s start with what we actually have in common, by looking at the kinds of humour that most easily transcend borders.
Certain kinds of humour are more commonly used in circumstances that are international and multicultural in nature – such as airports. When it comes to onboard entertainment, airlines in particular are fond of humour that transcends cultural and linguistic boundaries, for obvious reasons. Slapstick humour and the bland but almost universally tolerable social transgressions and faux pas of Mr Bean offer a safe, gentle humour that we can all relate to. Likewise, the silent situational dilemmas of the Canadian hidden-camera show Just for Laughs have been a staple option for airlines for many years.
These have a broad reach and are probably unlikely to offend most people. Of course, an important component in their broad appeal is that they are not really based on language.
Language and culture
Most humour, and certainly humour that involves greater cognitive effort, is deeply embedded in language and culture. It relies on a shared language or set of culturally based constructs to function. Puns and idioms are obvious examples.
Indeed, most modern theories of humour suggest that some form of shared knowledge is one of the key foundations of humour – that is, after all, what a culture is.
Some research has demonstrated this. One study measured humour in Singaporean college students and compared it with that of North American and Israeli students. This was done using a questionnaire asking participants to describe jokes they found funny, among other things. The researchers found that the Americans were more likely to tell sex jokes than the Singaporeans. The Singaporean jokes, on the other hand, were slightly more often focused on violence. The researchers interpreted the lack of sex jokes among Singaporean students to be a reflection of a more conservative society. Aggressive jokes may be explained by a cultural emphasis on strength for survival.
Another study compared Japanese and Taiwanese students’ appreciation of English jokes. It found that the Taiwanese generally enjoyed jokes more than the Japanese and were also more eager to understand incomprehensible jokes. The authors argued that this could be down to a more hierarchical culture in Japan, leaving less room for humour.
Denigration and self-deprecation
There are many overarching themes that can be used to define a nation’s humour. A nation that laughs together is one that can show a strong allegiance between its citizens. Laughter is one of our main social signals, and combined with humour it can emphasise social bonding – albeit sometimes at the cost of denigrating other groups. This can be seen across many countries: the French tend to enjoy a joke about the Belgians, while Swedes make fun of Norwegians. Indeed, most nations have a preferred country that serves as the traditional butt of their jokes.
Sexist and racist humour are also examples of this sort of denigration. The types of jokes used can vary across cultures, but the phenomenon itself can boost social bonding. Knowledge of acceptable social boundaries is therefore crucial and reinforces social cohesion. Because denigration is usually not the principal aim of the interaction, people often fail to realise that they are being offensive when they are “only joking”. However, as the world becomes more global and tolerant of difference, this type of humour is much less acceptable in cultures that welcome diversity.
Self-denigration or self-deprecation is also important – if it is relatively mild and remains within acceptable social norms. Benign violation theory argues that something that threatens social or cultural norms can also result in humour.
Importantly, what constitutes a benign level of harm is strongly culturally bound and differs from nation to nation, between social groups within nations and over the course of a nation’s history. What was once tolerable as national humour can now seem very unacceptable. For the British, it may be acceptable to make fun of Britons being overly polite, orderly or reluctant to talk to strangers. However, jokes about the nature of Britain’s colonial past would be much more contentious – they would probably violate social norms without being emotionally benign.
Another factor is our need to demonstrate that we understand the person we are joking with. My own ideas suggest we even have a desire to display skills of knowing what another person thinks – mind-reading in the scientific sense. For this, cultural alignment and an ability to display it are key elements in humour production and appreciation – it can make us joke differently with people from our own country than with people from other cultures.
For example, most people in the UK know that the popular phrase “don’t mention the war” refers to a Fawlty Towers sketch. Knowing that “fork handles” is funny also marks you as a UK citizen. Similarly, knowledge of “I Love Lucy” or quotes from Seinfeld create affiliation among many in the US, while references to “Chavo del Ocho” or “Chapulín Colorado” do the same for Mexicans and most Latin Americans.
These shared cultural motifs – here drawn mostly from television – are one important aspect of a national sense of humour. They create a sense of belonging and camaraderie. They make us feel more confident about our humour and can be used to build further jokes on.
A broadly shared sense of humour is probably one of our best indicators for how assimilated we are as a nation. Indeed, a nation’s humour is more likely to show unity within a country than to display a nation as being different from other nations in any meaningful way.
Textbooks are a crucial part of any child’s learning. A large body of research has proved this many times and in many very different contexts. Textbooks are a physical representation of the curriculum in a classroom setting. They are powerful in shaping the minds of children and young people.
UNESCO has recognised this power and called for every child to have a textbook for every subject. The organisation argues that
next to an engaged and prepared teacher, well-designed textbooks in sufficient quantities are the most effective way to improve instruction and learning.
But there’s an elephant in the room when it comes to textbooks in African countries’ classrooms: language.
Rwanda is one of many African countries that’s adopted a language instruction policy which sees children learning in local or mother tongue languages for the first three years of primary school. They then transition in upper primary and secondary school into a dominant, so-called “international” language. This might be French or Portuguese. In Rwanda, it has been English since 2008.
Evidence from across the continent suggests that at this transition point, many learners have not developed basic literacy and numeracy skills. And, significantly, they have not acquired anywhere near enough of the language they are about to learn in to be able to engage in learning effectively.
I do not wish to advocate for English medium instruction, and the arguments for mother-tongue based education are compelling. But it’s important to consider strategies for supporting learners within existing policy priorities. Using appropriate learning and teaching materials – such as textbooks – could be one such strategy.
A different approach
It’s not enough to just hand out textbooks in every classroom. The books need to tick two boxes: learners must be able to read them and teachers must feel enabled to teach with them.
Existing textbooks tend not to take these concerns into consideration. The language is too difficult and the sentence structures too complex. The paragraphs are too long, and there are no glossaries to define unfamiliar words. And while textbooks are widely available to those in the basic education system, they are rarely used systematically. Teachers cite the books’ inaccessibility as one of the main reasons for not using them.
A recent initiative in Rwanda has sought to address this through the development of “language supportive” textbooks for primary 4 learners who are around 11 years old. These were specifically designed in collaboration with local publishers, editors and writers.
There are two key elements to a “language supportive” textbook.
Firstly, they are written at a language level appropriate for the learner. As can be seen in Figure 1, each new concept is introduced in the simplest English possible. Sentences and paragraphs are kept short and simple, and the key word (here, “soil”) is repeated numerous times so that the learner becomes accustomed to it.
Secondly, they include features – activities, visuals, clear signposting and vocabulary support – that enable learners to practice and develop their language proficiency while learning the key elements of the curriculum.
The books are full of relevant activities that encourage learners to regularly practice their listening, speaking, reading and writing of English in every lesson. This enables language development.
Crucially, all of these activities are made accessible to learners – and teachers – by offering support in the learners’ first language. In this case, the language used was Kinyarwanda, which is the first language for the vast majority of Rwandan people. However, it’s important to note that initially many teachers were hesitant about incorporating Kinyarwanda into their classroom practice because of the government’s English-only policy.
Improved test scores
The initiative was introduced with 1,075 students at eight schools across four Rwandan districts. The evidence from our initiative suggests that learners in classrooms where these books were systematically used learnt more across the curriculum.
When these learners sat tests before using the books, they scored similar results to those in other comparable schools. After using the materials for four months, their test scores were significantly higher. Crucially, both learners and teachers pointed out how important it was that the books sanctioned the use of Kinyarwanda. The classrooms became bilingual spaces and this increased teachers’ and learners’ confidence and competence.
All of this supports the importance of textbooks as effective learning and teaching materials in the classroom and shows that they can help all learners. But authorities mustn’t assume that textbooks are being used or that the existing books are empowering teachers and learners.
Textbooks can matter – but only when they are designed with all learners in mind can they contribute to quality education for all.
It’s often thought that it is better to start learning a second language at a young age. But research shows that this is not necessarily true. In fact, the best age to start learning a second language can vary significantly, depending on how the language is being learned.
The belief that younger children are better language learners is based on the observation that children learn to speak their first language with remarkable skill at a very early age.
Before they can add two small numbers or tie their own shoelaces, most children develop a fluency in their first language that is the envy of adult language learners.
Why younger may not always be better
Two theories from the 1960s continue to have a significant influence on how we explain this phenomenon.
The theory of “universal grammar” proposes that children are born with an instinctive knowledge of the language rules common to all humans. Upon exposure to a specific language, such as English or Arabic, children simply fill in the details around those rules, making the process of learning a language fast and effective.
The other theory, known as the “critical period hypothesis”, posits that at around the age of puberty most of us lose access to the mechanism that made us such effective language learners as children. These theories have been contested, but nevertheless they continue to be influential.
Despite what these theories would suggest, however, research into language learning outcomes demonstrates that younger may not always be better.
In some language learning and teaching contexts, older learners can be more successful than younger children. It all depends on how the language is being learned.
Language immersion environment best for young children
Living, learning and playing in a second language environment on a regular basis is an ideal learning context for young children. Research clearly shows that young children are able to become fluent in more than one language at the same time, provided there is sufficient engagement with rich input in each language. In this context, it is better to start as young as possible.
Learning in classroom best for early teens
Learning in language classes at school is an entirely different context. The normal pattern of these classes is to have one or more hourly lessons per week.
To succeed at learning with such little exposure to rich language input requires meta-cognitive skills that do not usually develop until early adolescence.
For this style of language learning, the later years of primary school are an ideal time to start, maximising the balance between meta-cognitive skill development and the number of consecutive years of study available before the end of school.
Self-guided learning best for adults
There are, of course, some adults who decide to start to learn a second language on their own. They may buy a study book, sign up for an online course, purchase an app or join face-to-face or virtual conversation classes.
To succeed in this learning context requires a range of skills that are not usually developed until reaching adulthood, including the ability to remain self-motivated. Therefore, self-directed second language learning is more likely to be effective for adults than younger learners.
How we can apply this to education
What does this tell us about when we should start teaching second languages to children? In terms of the development of language proficiency, the message is fairly clear.
If we are able to provide lots of exposure to rich language use, early childhood is better. If the only opportunity for second language learning is through more traditional language classes, then late primary school is likely to be just as good as early childhood.
However, if language learning relies on being self-directed, it is more likely to be successful after the learner has reached adulthood.
Every year around this time, dictionaries across the English-speaking world announce their “Word of the Year”. These are expressions (some newly minted and some golden oldies too) that for some reason have shot into prominence during the year.
Dictionaries make their selections in different ways, but usually it involves a combination of suggestions from the public and the editorial team (who have been meticulously tracking these words throughout the year). The Macquarie Dictionary has two selections – the Committee’s Choice made by the Word of the Year Committee, and the People’s Choice made by the public (so make sure you have your say on January 24 for the 2016 People’s Choice winner).
It’s probably not surprising that these words of note draw overwhelmingly from slang language, or “slanguage” – a fall-out of the increasing colloquialisation of English usage worldwide. In Australia this love affair with the vernacular goes back to the earliest settlements of English speakers.
And now there’s the internet, especially social networking – a particularly fertile breeding ground for slang.
People enjoy playing with language, and when communicating electronically they have free rein. “Twitterholic”, “twaddiction”, “celebritweet/twit”, “twitterati” are just some of the “tweologisms” that Twitter has spawned of late. And with a reported average of 500 million tweets each day, Twitter has considerable capacity not only to create new expressions, but to spread them (as do Facebook, Instagram and other social networking platforms).
But what happens when slang terms like these make it into the dictionary? Early dictionaries give us a clue, particularly the entries that are stamped unfit for general use. Branded entries were certainly plentiful in Samuel Johnson’s 18th-century work, and many are now wholly respectable: abominably “a word of low or familiar language”, nowadays “barbarous usage”, fun “a low cant word” (what would Johnson have thought of very fun and funner?).
Since the point of slang is to mark an in-group, to amuse and perhaps even to shock outsiders with novelty, most slang expressions are short-lived. Those that survive become part of the mainstream and mundane. Quite simply, time drains them of their vibrancy and energy. J.M. Wattie put it more poetically back in 1930:
Slang terms are the mayflies of language; by the time they get themselves recorded in a dictionary, they are already museum specimens.
But, then again, expressions occasionally do sneak through the net. Not only do they survive, they stay slangy – and sometimes over centuries. Judge for yourselves. Here are some entries from A New and Comprehensive Vocabulary of the Flash Language. Written by British convict James Hardy Vaux in 1812, this is the first dictionary compiled in Australia.
croak “to die”
nuts on “to have a strong inclination towards something or someone”
on the sly “secretly”
racket “particular kind of fraud”
snitch “to betray”
stink “an uproar”
spin a yarn “tell a tale of great adventure”
These were originally terms of flash – or, as Vaux put it, “the cant language used by the family”. In other words, they belonged to underworld slang. The term slang itself meant something similar at this time; it broadened to mean highly colloquial language in the 1800s.
Vaux went on to point out that “to speak good flash is to be well versed in cant terms” — and, having been transported to New South Wales on three separate occasions during his “checkered and eventful life” (his words), Vaux himself was clearly well versed in the world of villainy and cant.
True, the majority of the slang terms here have fallen by the wayside (barnacles “spectacles”; lush “to drink”), and the handful that survive are now quite standard (grab “to seize”; dollop “large quantity”). But there are a few that have not only lasted, they’ve remained remarkably contemporary-sounding – some still even a little “disgraceful” (as Vaux described them).
The shelf-life of slang is a bit of a mystery. Certainly some areas fray faster than others. Vaux’s prime, plummy and rum (meaning “excellent”) have well and truly bitten the dust. Cool might have made a comeback (also from the 1800s), but intensifiers generally wear out.
Far out and ace have been replaced by awesome, and there are plenty of new “awesome” words lurking in the wings. Some of these are already appearing on lists for “Most Irritating Word of the Year” – it’s almost as if their success does them in. Amazeballs, awesomesauce and phat are among the walking dead.
But as long as sausage sizzles continue to support Australian voters on election day, democracy sausages will have a place – and if adopted elsewhere, might even entice the politically uninterested into polling booths.
Do you remember being taught you should never start your sentences with “And” or “But”?
What if I told you that your teachers were wrong and there are lots of other so-called grammar rules that we’ve probably been getting wrong in our English classrooms for years?
How did grammar rules come about?
To understand why we’ve been getting it wrong, we need to know a little about the history of grammar teaching.
Grammar is how we organise our sentences in order to communicate meaning to others.
Those who say there is one correct way to organise a sentence are called prescriptivists. Prescriptivist grammarians prescribe how sentences must be structured.
Prescriptivists had their day in the sun in the 18th century. As books became more accessible to the everyday person, prescriptivists wrote the first grammar books to tell everyone how they must write.
These self-appointed guardians of the language just made up grammar rules for English, and put them in books that they sold. It was a way of ensuring that literacy stayed out of reach of the working classes.
They took their newly concocted rules from Latin. This was, presumably, to keep literate English out of reach of anyone who wasn’t rich or posh enough to attend a grammar school, which was a school where you were taught Latin.
And yes, that is the origin of today’s grammar schools.
The other camp of grammarians are the descriptivists. They write grammar guides that describe how English is used by different people, and for different purposes. They recognise that language isn’t static, and it isn’t one-size-fits-all.
1. You can’t start a sentence with a conjunction
Let’s start with the grammatical sin I have already committed in this article. You can’t start a sentence with a conjunction.
Obviously you can, because I did. And I expect I will do it again before the end of this article. There, I knew I would!
Those who say it is always incorrect to start a sentence with a conjunction, like “and” or “but”, sit in the prescriptivist camp.
However, according to the descriptivists, at this point in our linguistic history, it is fine to start a sentence with a conjunction in an op-ed article like this, or in a novel or a poem.
It is less acceptable to start a sentence with a conjunction in an academic journal article, or in an essay for my son’s high school economics teacher, as it turns out. But times are changing.
2. You can’t end a sentence with a preposition
Well, in Latin you can’t. In English you can, and we do all the time.
Admittedly a lot of the younger generation don’t even know what a preposition is, so this rule is already obsolete. But let’s have a look at it anyway, for old times’ sake.
According to this rule, it is wrong to say “Who did you go to the movies with?”
Instead, the prescriptivists would have me say “With whom did you go to the movies?”
I’m saving that structure for when I’m making polite chat with the Queen on my next visit to the palace.
That’s not a sarcastic comment, just a fanciful one. I’m glad I know how to structure my sentences for different audiences. It is a powerful tool. It means I usually feel comfortable in whatever social circumstances I find myself in, and I can change my writing style according to purpose and audience.
That is why we should teach grammar in schools. We need to give our children a full repertoire of language so that they can make grammatical choices that will allow them to speak and write for a wide range of audiences.
3. Put a comma when you need to take a breath
It’s a novel idea, synchronising your writing with your breathing, but the two have nothing to do with one another, and if this is the instruction we give our children, it is little wonder commas are so poorly used.
Commas provide demarcation between like grammatical structures. When adjectives, nouns, phrases or clauses are butting up against each other in a sentence, we separate them with a comma. That’s why I put commas between the three nouns and the two clauses in that last sentence.
Commas also provide demarcation for words, phrases or clauses that are embedded in a sentence for effect. The sentence would still be a sentence even if we took those words away. See, for example, the use of commas in this sentence.
4. To make your writing more descriptive, use more adjectives
William Shakespeare died on 23 April 1616, 400 years ago, in the small Warwickshire town of his birth. He was 52 years of age: still young (or youngish, at least) by modern reckonings, though his death mightn’t have seemed to his contemporaries like an early departure from the world.
Most of the population who survived childhood in England at this time were apt to die before the age of 60, and old age was a state one entered at what today might be thought a surprisingly youthful age.
Many of Shakespeare’s fellow-writers had died, or were soon to do so, at a younger age than he: Christopher Marlowe, in a violent brawl, at 29; Francis Beaumont, following a stroke, at 31 (also in 1616: just 48 days, as it happened, before Shakespeare’s own death); Robert Greene, penitent and impoverished, of a fever, in the garret of a shoemaker’s house, at 34; Thomas Kyd, after “bitter times and privy broken passions”, at 35; George Herbert, of consumption, at 39; John Fletcher, from the plague, at 46; Edmund Spenser, “for lack of bread” (so it was rumoured), at 47; and Thomas Middleton, also at 47, from causes unknown.
The cause or causes of Shakespeare’s death are similarly unknown, though in recent years they have become a topic of persistent speculation. Syphilis contracted by visits to the brothels of Turnbull Street, mercury or arsenic poisoning following treatment for this infection, alcoholism, obesity, cardiac failure, a sudden stroke brought on by the alarming news of a family disgrace – that Shakespeare’s son-in-law, Thomas Quiney, husband of his younger daughter, Judith, had been responsible for the pregnancy and death of a young local woman named Margaret Wheeler – have all been advanced as possible factors leading to Shakespeare’s death.
Francis Thackeray, Director of the Institute for Human Evolution at the University of Witwatersrand, believes that cannabis was the ultimate cause of Shakespeare’s death, and has been hoping – in defiance of the famous ban on Shakespeare’s tomb (“Curst be he that moves my bones”, etc.) – to inspect the poet’s teeth in order to confirm this theory. (“Teeth are not bones”, Dr Thackeray somewhat controversially insists.) No convincing evidence, alas, has yet been produced to support any of these theories.
More intriguing than the actual pathology of Shakespeare’s death, however, may be another set of problems that have largely evaded the eye of biographers, though they seem at times – in a wider, more general sense – to have held the poet’s own sometimes playful attention. They turn on the question of fame: how it is constituted; how slowly and indirectly it’s often achieved; how easily it may be delayed, diverted, or lost altogether from view.
No memorial gathering
On 25 April 1616, two days after his death, Shakespeare was buried in the chancel of Holy Trinity Church at Stratford, having earned this modest place of honour as much (it would seem) through his local reputation as a respected citizen as from any deep sense of his wider professional achievements.
No memorial gatherings were held in the nation’s capital, where he had made his career, or, it would seem, elsewhere in the country. The company of players that he had led for so long did not pause (so far as we know) to acknowledge his passing, nor did his patron and protector, King James, whom he had loyally served.
Only one writer, a minor Oxfordshire poet named William Basse, felt moved to offer, at some unknown date following his death, a few lines to the memory of Shakespeare, with whom he may not have been personally acquainted. Hoping that Shakespeare might be interred at Westminster but foreseeing problems of crowding at the Abbey, Basse began by urging other distinguished English poets to roll over in their tombs, in order to make room for the new arrival.
Renownèd Spenser, lie a thought more nigh,
To learned Chaucer; and rare Beaumont, lie
A little nearer Spenser, to make room
For Shakespeare in your threefold, fourfold tomb.
None of these poets responded to Basse’s injunctions, however, and Shakespeare was not to win his place in the Abbey for more than a hundred years, when Richard Boyle, third Earl of Burlington, commissioned William Kent to design and Peter Scheemakers to sculpt a life-size white marble statue of the poet – standing cross-legged, leaning thoughtfully on a pile of books – to adorn Poets’ Corner.
On the wall behind this statue, erected in the Abbey in January 1741, is a tablet with a Latin inscription (perhaps contributed by the poet Alexander Pope) conceding the belated arrival of the memorial: “William Shakespeare,/124 years after his death/ erected by public love”.
Basse’s verses were in early circulation, but not published until 1633. No other poem to Shakespeare’s memory is known to have been written before the appearance of the First Folio in 1623. No effort appears to have been made in the months and years following the poet’s death to assemble a tributary volume, honouring the man and his works. None of Shakespeare’s other contemporaries noted the immediate fact of his passing in any surviving letter, journal, or record. No dispatches, private or diplomatic, carried the news of his death beyond Britain to the wider world.
Why did the death of Shakespeare cause so little public grief, so little public excitement, in and beyond the country of his birth? Why wasn’t his passing an occasion for widespread mourning, and widespread celebration of his prodigious achievements? What does this curious silence tell us about Shakespeare’s reputation in 1616; about the status of his profession and the state of letters more generally in Britain at this time?
A very quiet death
Shakespeare’s death occurred upon St George’s Day. That day was famous for the annual rites of prayer, procession, and feasting at Windsor by members of the Order of the Garter, England’s leading chivalric institution, founded in 1348 by Edward III. Marking as it did the anniversary of the supposed martyrdom in AD 303 of St George of Cappadocia, St George’s Day was celebrated in numerous countries in and beyond Europe, as it is today, but had emerged somewhat bizarrely in late mediaeval times as a day of national significance in England.
On St George’s Day 1616, as Shakespeare lay dying in far-off Warwickshire, King James – seemingly untroubled by prior knowledge of this event – was entertained in London by a poet of a rather different order named William Fennor.
Fennor was something of a royal favourite, famed for his facetious contests in verse, often in the King’s presence, with the Thames bargeman, John Taylor, the so-called Water Poet: a man whom James – as Ben Jonson despairingly reported to William Drummond – reckoned to be the finest poet in the kingdom.
In the days and weeks that followed, as the news of the poet’s death (one must assume) filtered gradually through to the capital, there is no recorded mention in private correspondence or official documents of Shakespeare’s name. Other more pressing matters were now absorbing the nation. Shakespeare had made a remarkably modest exit from the theatre of the world: largely un-applauded, largely unobserved. It was a very quiet death.
An age of public mourning
The silence that followed the death of Shakespeare is the more remarkable coming as it did in an age that had developed such elaborate rituals of public mourning, panegyric, and commemoration, most lavishly displayed at the death of a monarch or peer of the realm, but also occasionally set in train by the death of an exceptional commoner.
Consider the tributes paid to another great writer of the period, William Camden, antiquarian scholar and Clarenceux herald of arms, who died in London in late November 1623; a couple of weeks, as chance would have it, after the publication of Shakespeare’s First Folio.
Portrait of William Camden by Marcus Gheeraerts the Younger (1609). Wikimedia Commons
Camden was a man of quite humble social origins – like Shakespeare himself, whose father was a maker of gloves and leather goods in Stratford. Camden’s father was a painter-stainer, whose job it was to decorate coats of arms and other heraldic devices. By the time of his death Camden was widely recognised, in Britain and abroad, as one of the country’s outstanding scholars.
Eulogies were delivered at Oxford and published along with other tributes in a memorial volume soon after his death. At Westminster his body was escorted to the Abbey on 19 November by a large retinue of mourners, led by 26 poor men wearing gowns, followed by soberly attired gentlemen, esquires, knights, and members of the College of Arms, the hearse being flanked by earls, barons, and other peers of the realm, together with the Lord Keeper, Bishop John Williams, and other divines. Camden’s imposing funeral mirrored on a smaller scale the huge procession of 1,600 mourners which in 1603 had accompanied the body of Elizabeth I to its final resting place in the Abbey.
There were particular reasons, then, why Camden should have been accorded a rather grand funeral of his own. But mightn’t there have been good reasons for Shakespeare, likewise – whom we see today as the outstanding writer of his age – to have been honoured at his death in a suitably ceremonious fashion? It’s curious to realize, however, that Shakespeare at the time of his death wasn’t yet universally seen as the outstanding writer of his age.
At this quite extraordinary moment in the history of English letters and intellectual exchange there was more than one contender for that title. William Camden himself – an admired poet in addition to his other talents, and friend and mentor of other poets of the day – had included Shakespeare’s name in a list, published in 1614, of “the most pregnant wits of these our times, whom succeeding ages may justly admire”, placing him, without differentiation, alongside Edmund Spenser, John Owen, Thomas Campion, Michael Drayton, George Chapman, John Marston, Hugh Holland and Ben Jonson, the last two of whom he had taught at Westminster School.
But it was another poet, Sir Philip Sidney, whom Camden had befriended during his student days at Oxford, that he most passionately admired, and continued to regard – following Sidney’s early death at the age of 32 in 1586 – as the country’s supreme writer. “Our Britain is the glory of earth and its precious jewel,/ But Sidney was the precious jewel of Britain”, Camden had written in a memorial poem in Latin mourning his friend’s death.
No commoner poet in England had ever been escorted to his grave with such pomp as was furnished for Sidney’s funeral at St Paul’s Cathedral, London, on 16 February 1587.
The 700-man procession was headed by 32 poor men, representing the number of years that Sidney had lived, with fifes and drums “playing softly” beside them. They were followed by trumpeters and gentlemen and yeomen servants, physicians, surgeons, chaplains, knights and esquires, heralds bearing aloft Sidney’s spurs and gauntlet, his helm and crest, his sword and targe, his coat of arms. Then came the hearse containing Sidney’s body. Behind them walked the chief mourner, Philip’s young brother, Robert, accompanied by the Earls of Leicester, Pembroke, Huntingdon, and Essex, followed by representatives from the states of Holland and Zealand. Next came the Lord Mayor and Aldermen of the City of London, with 120 members of the Company of Grocers, and, at the rear of the procession, “citizens of London practised in arms, about 300, who marched three by three”.
Sidney’s funeral was a moving salute to a man who was widely admired not just for his military, civic and diplomatic virtues, but as the outstanding writer of his day. He fulfilled in exemplary fashion, as Shakespeare curiously did not, the Renaissance ideal of what a poet should strive to be.
In an extraordinary act of homage not before seen in England, but soon to be commonly followed at the death of distinguished writers, the Universities of Oxford and Cambridge produced three volumes of Latin verse lauding Sidney’s achievements, while a fourth volume of similar tributes was published by the University of Leiden. The collection from Cambridge presented contributions from 63 Cambridge men, together with a sonnet in English by King James VI of Scotland, the future King James I of Britain.
Earlier English poets had been mourned at their passing, if not in these terms and not on this scale, then with more enthusiasm than was evident at the death of Shakespeare. Edmund Spenser at his death in 1599 was buried in Westminster Abbey next to Chaucer, “this hearse being attended by poets, and mournful elegies and poems with the pens that wrote them thrown into his tomb”. The deaths of Thomas Wyatt and Michael Drayton were similarly lamented.
When, 21 years after Shakespeare’s death, his former friend and colleague Ben Jonson came at last to die, the crowd that gathered at his house in Westminster to accompany his body to his grave in the Abbey included “all or the greatest part of the nobility and gentry then in the town”. Within months of his death a volume of 33 poems was in preparation and a dozen additional elegies had appeared in print. Jonson was hailed at his death as “king of English poetry”, as England’s “rare arch-poet”. With his death, as more than one memorialist declared, English poetry itself now seemed also to have died. No one had spoken in these terms at the death of Shakespeare.
To take one last example: at the death in 1643 of the dramatist William Cartwright – whose works and whose very name are barely known to most people today – Charles I elected to wear black, remarking that
since the muses had so much mourned for the loss of such a son, it would be a shame for him not to appear in mourning for the loss of such a subject.
At the death of Shakespeare in 1616 James had shown no such minimal courtesy.
Why should Shakespeare at his death have been so neglected? One simple answer is that King James, unlike his son, Charles, had no great passion for the theatre, and no very evident regard for Shakespeare’s genius. Early in his reign, so Dudley Carleton reported,
The first holy days we had every night a public play in the great hall, at which the King was ever present, and liked or disliked as he saw cause: but it seems he takes no extraordinary pleasure in them.
But Shakespeare and his company were not merely royal servants, bound to provide a steady supply of dramatic entertainment at court; they also catered for the London public who flocked to see their plays at Blackfriars and the Globe, and who had their own ways of expressing their pleasure, their frustrations, and – at the death of a player – their grief.
When Richard Burbage, the principal actor for the King’s Men, died on 9 March 1619, just seven days after the death of Queen Anne, the London public were altogether more upset by that event than they had been over the death of the Queen, as one contemporary writer – quoting, ironically, the opening lines of Shakespeare’s 1 Henry VI – tartly observed.
So it’s necessary, I think, to pose a further question. Why should the death of Burbage have affected the London public more profoundly than the death not merely of the Queen but of the dramatist whose work he so skilfully interpreted?
I believe the answer lies, partly at least, in the status of the profession to which Shakespeare belonged, a profession which didn’t yet have a regular name: the very words playwright and dramatist not entering the language until half a century after Shakespeare’s death.
Prominent actors at this time were far better known to the public than the writers who provided their livelihood. The writers were on the whole invisible people, who worked as backroom boys, often anonymously and in small teams; playgoers had no easy way of discovering their identity. Theatre programmes didn’t yet exist. Playbills often announced the names of leading actors, but not until the very last decade of the 17th century did they include the names of authors.
Only a fraction of the large number of plays performed in this period, moreover, found their way into print, and those that were published didn’t always disclose the names of their authors.
At the time of Shakespeare’s death half of his plays weren’t yet available in print, and there were no known plans to produce a collected edition of his works. The total size and shape of the canon were therefore still imperfectly known. Shakespeare was not yet fully visible.
In 1616 the world didn’t yet realise what it had got, or who it was that it had lost. Hence, I believe, the otherwise inexplicable silence at his passing.
To the Memory of My Beloved
At the time of Shakespeare’s death another English writer was arguably better known to the general public than Shakespeare himself, and more highly esteemed by the brokers of power at King James’s court. That writer was Shakespeare’s friend and colleague Ben Jonson, who early in 1616 had been awarded a pension of one hundred marks to serve as King James’s laureate poet.
A first folio edition of Shakespeare’s collected plays was finally published in London with Jonson’s assistance and oversight in 1623. This monumental volume at last gave readers in England some sense of the wider reach of Shakespeare’s theatrical achievement, and laid the essential foundations of his modern reputation.
At the head of this volume stand two poems by Ben Jonson: the second, To the Memory of My Beloved, the Author, Mr William Shakespeare, and What He Hath Left Us, assesses the achievement of this extraordinary writer. Shakespeare had been praised during his lifetime as a “sweet”, “mellifluous”, “honey-tongued”, “honey-flowing”, “pleasing” writer. No one until this moment had presented him in the astounding terms that Jonson here proposes: as the pre-eminent figure, the “soul” and the “star” of his age; and as something even more than that: as one who could be confidently ranked with the greatest writers of antiquity and of the modern era.
Triumph, my Britain, thou hast one to show
To whom all scenes of Europe homage owe,
He was not of an age, but for all time!
Today, 400 years on, that last line sounds like a truism, for Shakespeare’s fame has indeed endured. He is without doubt the most famous writer the world has ever seen. But in 1623 this was a bold and startling prediction. No one before that date had described Shakespeare’s achievement in such terms as these.
This is an edited version of a public lecture given at the University of Melbourne.
On the 400th anniversary of Shakespeare’s death, the Faculty of Arts at the University of Melbourne is establishing the Shakespeare 400 Trust to raise funds to support the teaching of Shakespeare at the University into the future. For more information, or if you would like to support the Shakespeare 400 Trust, please contact Julie du Plessis at email@example.com
Teachers have been shaping lives for centuries. Everyone remembers their favourite (and of course their least favourite) teachers. This important group of people even has its own special day, marked each October by the United Nations.
Teachers are at the coal face when it comes to watching societies change. South Africa’s classrooms, for instance, look vastly different today than they did two decades ago. They bring together children from different racial, cultural, economic and social backgrounds. This can sometimes cause conflict as varied ways of understanding the world bump up against each other.
How can teachers develop the skills to work with these differences in productive ways? What practical support do they need to bring the values of the Constitution to life in their classes?
To answer these questions, my colleagues and I in the Faculty of Education at Stellenbosch University have put together four examples from modules within our faculty’s teacher education programme. These ideas are by no means exhaustive; other institutions also tackle these issues. What we present here is based on our own research, teaching and experience and is open to further discussion.
1. Working with multilingualism
English is only South Africa’s fifth most spoken home language. Teachers must remember this: even if their pupils are speaking English in the classroom, their home languages may be far more diverse.
Trainee teachers can benefit enormously from a course on multilingual education. In our faculty, for instance, students are given the chance to place multilingual education in a South African policy framework. They model multilingual classroom strategies like code switching and translation. They visit schools to observe how such strategies are applied in the real classroom. Students then report back on whether this approach helps learners from different language backgrounds to participate actively in the lesson.
There’s also great value in introducing student teachers to the notion of “World Englishes”. This focuses on the role of English in multilingual communities, where it is seen as being used for communication and academic purposes rather than as a way for someone to be integrated into an English community.
2. Supporting diverse learning needs
Student teachers must be trained to identify and support pupils’ diverse learning needs. This helps teachers to identify and address barriers to learning and development and encourages linkages between the home and the school.
This is even more meaningful when it is embedded in experiential learning. For instance, in guided exercises with their own class groups, our students engage with their feelings, experiences and thinking about their own backgrounds and identities. Other activities may be based on real scenarios, such as discussing the case of a boy who was sanctioned by his school for wearing his hair in a way prescribed by his religion.
In these modules we focus on language, culture, race, socioeconomic conditions, disability, sexual orientation, learning differences and behavioural, health or emotional difficulties. The students also learn how to help vulnerable learners who are being bullied.
And these areas are constantly expanding. At Stellenbosch University, we’ve recently noted that we need to prepare teachers to deal with the bullying of LGBT learners. They also need to be equipped with the tools to support pupils who’ve immigrated from elsewhere in Africa.
3. Advancing a democratic classroom
Courses that deal with the philosophy of education are an important element of teacher education. These explore notions of diversity, human dignity, social justice and democratic citizenship.
In these classes, student teachers are encouraged to see their own lecture rooms as spaces for open and equal engagement, with regard and respect for different ways of being. They’re given opportunities to express and engage with controversial views. This stands them in good stead to create such spaces in their own classrooms.
Most importantly, students are invited to critically reconsider commonly held beliefs – and to disrupt their ideas of the world – so that they might encounter the other as they are and not as they desire them to be. In such a classroom, a teacher promotes discussion and debate. She cultivates respect and regard for the other by listening to different accounts and perspectives. Ultimately, the teacher accepts that she is just one voice in the classroom.
4. Understanding constitutional rights in the classroom
All the approaches to teacher education described here are underpinned by the Constitution.
The idea is that teacher education programmes should develop teachers who understand notions of justice, citizenship and social cohesion. Any good teacher needs to be able to reflect critically on their own role as leader and manager within the contexts of classrooms, schools and the broader society. This includes promoting values of democracy, social justice and equality, and building attitudes of respect and reciprocity.
A critical reflective ethos is encouraged. Students get numerous opportunities to interrogate, debate, research, express and reflect upon educational challenges, theories and policies, from different perspectives, as these apply to practice. This is all aimed at building a positive school environment for everyone.
Moving into teaching
What about when students become teachers themselves?
For many new teachers these inclusive practices are not easy to implement in schools. One lecturer in our faculty has been approached by former students who report that as beginner teachers, they don’t have “the status or voice to change existing discriminatory practices and what some experience as the resistance to inclusive education”. This suggests that ongoing discussion and training in both pre-service and in-service education is needed.
At the same time, however, there are signs that these modules are having a positive impact. Students post comments and ideas on social media and lecturers regularly hear from first-time teachers about how useful their acquired knowledge is in different contexts. Many are also eager to study further so they can explore the issues more deeply.
Everything I’ve described here is part of one faculty’s attempts to provide safe spaces where student teachers can learn to work constructively with the issues pertaining to diversity in education. In doing so, we hope they’ll become part of building a country based on respect for all.
Author’s note: I am grateful to my colleagues Lynette Collair, Nuraan Davids, Jerome Joorst and Christa van der Walt for the ideas contained in this article.
by Gianfranco Conti, PhD. Co-author of “The Language Teacher Toolkit” and “Breaking the Sound Barrier: Teaching Learners How to Listen”, winner of the 2015 TES best resource contributor award and founder of www.language-gym.com