Citizenship applicants will need to demonstrate a higher level of English proficiency if the government’s proposed changes to the Australian citizenship test go ahead.
Applicants will be required to reach the equivalent of Band 6 proficiency of the International English Language Testing System (IELTS).
To achieve Band 6, applicants must correctly answer 30 out of 40 questions in the reading paper and 23 out of 40 in the listening paper, while the writing paper rewards language used “accurately and appropriately”. A candidate whose writing has “frequent” inaccuracies in grammar and spelling cannot achieve Band 6.
Success in IELTS requires proficiency not only in the English language but also in how to take – and pass – a test. The proposed changes will therefore make it harder for people with fragmented educational backgrounds, such as many refugees, to become citizens.
How do the tests currently work?
The current citizenship test consists of 20 multiple-choice questions in English concerning Australia’s political system, history, and citizen responsibilities.
While the test does not require demonstration of English proficiency per se, it acts as an indirect assessment of language.
For example, the question: “Which official symbol of Australia identifies Commonwealth property?” demonstrates the level of linguistic complexity required.
The IELTS test is commonly taken for immigration purposes as a requirement for certain visa categories; however, IELTS’s designer argues that it was never intended for this purpose. Researchers have argued that the growing strength of English as the language of politics and economics has resulted in its widespread use for immigration purposes.
For many adult refugees – who have minimal first-language literacy, fragmented educational experiences and limited opportunities to gain feedback on their written English – the required “competency” may put citizenship out of reach. This is also more likely to affect refugee women, who are less likely to have had formal schooling and more likely to assume caring duties.
There are a number of questions to clarify regarding the proposed language proficiency test:
Will those dealing with trauma-related experiences gain exemption from a high-stakes, time-pressured examination?
What support mechanisms will be provided to assist applicants to study for the test?
Will financially-disadvantaged members of the community be expected to pay for classes/materials in order to prepare for the citizenship test?
The IELTS test costs A$330, with no subsidies available. Will the IELTS-based citizenship/language test attract similar fees?
There are also questions about the fairness of requiring applicants to demonstrate a specific type and level of English under examination conditions that is not required of all citizens. Those born in Australia are not required to pass an academic test of language in order to retain their citizenship.
Recognising diversity of experiences
There are a few things the government should consider before introducing a language test:
1) Community consultation is essential. Input from community/migrant groups, educators, and language assessment specialists will ensure the test functions as a valid evaluation of progression towards English language proficiency. The government is currently calling for submissions related to the new citizenship test.
2) Design the test to value different forms and varieties of English that demonstrate progression in learning rather than adherence to prescriptive standards.
3) Provide educational opportunities that build on existing linguistic strengths that help people to prepare for the test.
Equating a particular type of language proficiency with a commitment to Australian citizenship is a complex and ideologically-loaded notion. The government must engage in careful consideration before potentially further disadvantaging those most in need of citizenship.
It’s often thought that it is better to start learning a second language at a young age. But research shows that this is not necessarily true. In fact, the best age to start learning a second language can vary significantly, depending on how the language is being learned.
The belief that younger children are better language learners is based on the observation that children learn to speak their first language with remarkable skill at a very early age.
Before they can add two small numbers or tie their own shoelaces, most children develop a fluency in their first language that is the envy of adult language learners.
Why younger may not always be better
Two theories from the 1960s continue to have a significant influence on how we explain this phenomenon.
The theory of “universal grammar” proposes that children are born with an instinctive knowledge of the language rules common to all humans. Upon exposure to a specific language, such as English or Arabic, children simply fill in the details around those rules, making the process of learning a language fast and effective.
The other theory, known as the “critical period hypothesis”, posits that at around the age of puberty most of us lose access to the mechanism that made us such effective language learners as children. These theories have been contested, but nevertheless they continue to be influential.
Despite what these theories would suggest, however, research into language learning outcomes demonstrates that younger may not always be better.
In some language learning and teaching contexts, older learners can be more successful than younger children. It all depends on how the language is being learned.
Language immersion environment best for young children
Living, learning and playing in a second language environment on a regular basis is an ideal learning context for young children. Research clearly shows that young children are able to become fluent in more than one language at the same time, provided there is sufficient engagement with rich input in each language. In this context, it is better to start as young as possible.
Learning in classroom best for early teens
Learning in language classes at school is an entirely different context. The normal pattern of these classes is to have one or more hourly lessons per week.
To succeed at learning with so little exposure to rich language input requires meta-cognitive skills that do not usually develop until early adolescence.
For this style of language learning, the later years of primary school are an ideal time to start, maximising the balance between meta-cognitive skill development and the number of consecutive years of study available before the end of school.
Self-guided learning best for adults
There are, of course, some adults who decide to start to learn a second language on their own. They may buy a study book, sign up for an online course, purchase an app or join face-to-face or virtual conversation classes.
To succeed in this learning context requires a range of skills that are not usually developed until reaching adulthood, including the ability to remain self-motivated. Therefore, self-directed second language learning is more likely to be effective for adults than younger learners.
How we can apply this to education
What does this tell us about when we should start teaching second languages to children? In terms of the development of language proficiency, the message is fairly clear.
If we are able to provide lots of exposure to rich language use, early childhood is better. If the only opportunity for second language learning is through more traditional language classes, then late primary school is likely to be just as good as early childhood.
However, if language learning relies on being self-directed, it is more likely to be successful after the learner has reached adulthood.
Do you remember being taught you should never start your sentences with “And” or “But”?
What if I told you that your teachers were wrong and there are lots of other so-called grammar rules that we’ve probably been getting wrong in our English classrooms for years?
How did grammar rules come about?
To understand why we’ve been getting it wrong, we need to know a little about the history of grammar teaching.
Grammar is how we organise our sentences in order to communicate meaning to others.
Those who say there is one correct way to organise a sentence are called prescriptivists. Prescriptivist grammarians prescribe how sentences must be structured.
Prescriptivists had their day in the sun in the 18th century. As books became more accessible to the everyday person, prescriptivists wrote the first grammar books to tell everyone how they must write.
These self-appointed guardians of the language just made up grammar rules for English, and put them in books that they sold. It was a way of ensuring that literacy stayed out of reach of the working classes.
They took their newly concocted rules from Latin. This was, presumably, to keep literate English out of reach of anyone who wasn’t rich or posh enough to attend a grammar school, which was a school where you were taught Latin.
And yes, that is the origin of today’s grammar schools.
The other camp of grammarians are the descriptivists. They write grammar guides that describe how English is used by different people, and for different purposes. They recognise that language isn’t static, and it isn’t one-size-fits-all.
1. You can’t start a sentence with a conjunction
Let’s start with the grammatical sin I have already committed in this article. You can’t start a sentence with a conjunction.
Obviously you can, because I did. And I expect I will do it again before the end of this article. There, I knew I would!
Those who say it is always incorrect to start a sentence with a conjunction, like “and” or “but”, sit in the prescriptivist camp.
However, according to the descriptivists, at this point in our linguistic history,
it is fine to start a sentence with a conjunction in an op-ed article like this, or in a novel or a poem.
It is less acceptable to start a sentence with a conjunction in an academic journal article, or in an essay for my son’s high school economics teacher, as it turns out. But times are changing.
2. You can’t end a sentence with a preposition
Well, in Latin you can’t. In English you can, and we do all the time.
Admittedly a lot of the younger generation don’t even know what a preposition is, so this rule is already obsolete. But let’s have a look at it anyway, for old times’ sake.
According to this rule, it is wrong to say “Who did you go to the movies with?”
Instead, the prescriptivists would have me say “With whom did you go to the movies?”
I’m saving that structure for when I’m making polite chat with the Queen on my next visit to the palace.
That’s not a sarcastic comment, just a fanciful one. I’m glad I know how to structure my sentences for different audiences. It is a powerful tool. It means I usually feel comfortable in whatever social circumstances I find myself in, and I can change my writing style according to purpose and audience.
That is why we should teach grammar in schools. We need to give our children a full repertoire of language so that they can make grammatical choices that will allow them to speak and write for a wide range of audiences.
3. Put a comma when you need to take a breath
It’s a novel idea, synchronising your writing with your breathing, but the two have nothing to do with one another. If this is the instruction we give our children, it is little wonder commas are so poorly used.
Commas provide demarcation between like grammatical structures. When adjectives, nouns, phrases or clauses are butting up against each other in a sentence, we separate them with a comma. That’s why I put commas between the three nouns and the two clauses in that last sentence.
Commas also provide demarcation for words, phrases or clauses that are embedded in a sentence for effect. The sentence would still be a sentence even if we took those words away. See, for example, the use of commas in this sentence.
4. To make your writing more descriptive, use more adjectives
Teachers have been shaping lives for centuries. Everyone remembers their favourite (and of course their least favourite) teachers. This important group of people even has its own special day, marked each October by the United Nations.
Teachers are at the coal face when it comes to watching societies change. South Africa’s classrooms, for instance, look vastly different today than they did two decades ago. They bring together children from different racial, cultural, economic and social backgrounds. This can sometimes cause conflict as varied ways of understanding the world bump up against each other.
How can teachers develop the skills to work with these differences in productive ways? What practical support do they need to bring the values of the Constitution to life in their classes?
To answer these questions, my colleagues and I in the Faculty of Education at Stellenbosch University have put together four examples from modules within our faculty’s teacher education programme. These ideas are by no means exhaustive; other institutions also tackle these issues. What we present here is based on our own research, teaching and experience and is open to further discussion.
1. Working with multilingualism
English is only South Africa’s fifth most spoken home language. Teachers must remember this: even if their pupils are speaking English in the classroom, their home languages may be far more diverse.
Trainee teachers can benefit enormously from a course on multilingual education. In our faculty, for instance, students are given the chance to place multilingual education in a South African policy framework. They model multilingual classroom strategies like code switching and translation. They visit schools to observe how such strategies are applied in the real classroom. Students then report back on whether this approach helps learners from different language backgrounds to participate actively in the lesson.
There’s also great value in introducing student teachers to the notion of “World Englishes”. This focuses on the role of English in multilingual communities, where it is seen as being used for communication and academic purposes rather than as a way for someone to be integrated into an English community.
2. Supporting diverse learning needs
Student teachers must be trained to identify and support pupils’ diverse learning needs. This helps teachers to identify and address barriers to learning and development and encourages linkages between the home and the school.
This is even more meaningful when it is embedded in experiential learning. For instance, in guided exercises with their own class groups, our students engage with their feelings, experiences and thinking about their own backgrounds and identities. Other activities may be based on real scenarios, such as discussing the case of a boy who was sanctioned by his school for wearing his hair in a way prescribed by his religion.
In these modules we focus on language, culture, race, socioeconomic conditions, disability, sexual orientation, learning differences and behavioural, health or emotional difficulties. The students also learn how to help vulnerable learners who are being bullied.
And these areas are constantly expanding. At Stellenbosch University, we’ve recently noted that we need to prepare teachers to deal with the bullying of LGBT learners. They also need to be equipped with the tools to support pupils who’ve immigrated from elsewhere in Africa.
3. Advancing a democratic classroom
Courses that deal with the philosophy of education are an important element of teacher education. These explore notions of diversity, human dignity, social justice and democratic citizenship.
In these classes, student teachers are encouraged to see their own lecture rooms as spaces for open and equal engagement, with regard and respect for different ways of being. They’re given opportunities to express and engage with controversial views. This stands them in good stead to create such spaces in their own classrooms.
Most importantly, students are invited to critically reconsider commonly held beliefs – and to disrupt their ideas of the world – so that they might encounter the other as they are and not as they desire them to be. In such a classroom, a teacher promotes discussion and debate. She cultivates respect and regard for the other by listening to different accounts and perspectives. Ultimately, the teacher accepts that she is just one voice in the classroom.
4. Understanding constitutional rights in the classroom
All the approaches to teacher education described here are underpinned by the Constitution.
The idea is that teacher education programmes should develop teachers who understand notions of justice, citizenship and social cohesion. Any good teacher needs to be able to reflect critically on their own role as leader and manager within the contexts of classrooms, schools and the broader society. This includes promoting values of democracy, social justice and equality, and building attitudes of respect and reciprocity.
A critical reflective ethos is encouraged. Students get numerous opportunities to interrogate, debate, research, express and reflect upon educational challenges, theories and policies, from different perspectives, as these apply to practice. This is all aimed at building a positive school environment for everyone.
Moving into teaching
What about when students become teachers themselves?
For many new teachers these inclusive practices are not easy to implement in schools. One lecturer in our faculty has been approached by former students who report that as beginner teachers, they don’t have “the status or voice to change existing discriminatory practices and what some experience as the resistance to inclusive education”. This suggests that ongoing discussion and training in both pre-service and in-service education is needed.
At the same time, however, there are signs that these modules are having a positive impact. Students post comments and ideas on social media and lecturers regularly hear from first-time teachers about how useful their acquired knowledge is in different contexts. Many are also eager to study further so they can explore the issues more deeply.
Everything I’ve described here is part of one faculty’s attempts to provide safe spaces where student teachers can learn to work constructively with the issues pertaining to diversity in education. In doing so, we hope they’ll become part of building a country based on respect for all.
Author’s note: I am grateful to my colleagues Lynette Collair, Nuraan Davids, Jerome Joorst and Christa van der Walt for the ideas contained in this article.
The most dangerous part of flying is driving to the airport.
That’s a standard joke among pilots, who know even better than the flying public that aviation is the safest mode of transportation.
But there are still those headlines and TV shows about airline crashes, and those statistics people like to repeat, such as:
Between 1976 and 2000, more than 1,100 passengers and crew lost their lives in accidents in which investigators determined that language had played a contributory role.
True enough, 80% of all air incidents and accidents occur because of human error. Miscommunication, combined with other human factors such as fatigue, cognitive workload, noise, or forgetfulness, has played a role in some of the deadliest accidents.
The most well-known, and widely discussed, is the collision on the ground of two Boeing 747 aircraft in 1977 in Tenerife, which resulted in 583 fatalities. The incident was due in part to difficult communications between the pilot, whose native language was Dutch, and the Spanish air traffic controller.
In such a high-stakes environment as commercial aviation, where the lives of hundreds of passengers and innocent people on the ground are involved, communication is critical to safety.
So it was decided that Aviation English would be the international language of aviation and that all aviation professionals – pilots and air traffic controllers (ATC) – would need to be proficient in it. It is a highly structured and codified language, designed to minimise ambiguity and misunderstanding.
Pilots and ATC expect to hear certain bits of information in certain ways and in a given order. The “phraseology”, with its particular pronunciation (for example, “fife” and “niner” instead of “five” and “nine”, so they’re not confused with each other), specific words (“Cleared to land”), international alphabet (“Mike Hotel Foxtrot”) and strict conversation rules (you must repeat, or “read back”, an instruction), needs to be learned and practised.
In spite of globalisation and the spread of English, most people around the world are not native English speakers, and an increasing number of aviation professionals do not speak English as their first language.
Native speakers have an advantage when they learn Aviation English, since they already speak English at home and in their daily lives. But they encounter many pilots or ATC who learned English as a second or even third language.
Whose responsibility is it to ensure that communication is successful? Can native speakers simply speak the way they do at home and expect to be understood? Or do they also have the responsibility to make themselves understood and to learn how to understand pilots or ATC who are not native English speakers?
As a linguist, I analyse aviation language from a linguistics perspective. I have noted the restricted meaning of the few verbs and adjectives; that the only pronouns are “you” and sometimes “we” (“How do you read?”; “We’re overhead Camden”); how few questions there are, mostly imperatives (“Maintain heading 180”); and that the syntax is so simple (no complement clauses, no relative clauses, no recursion), it might not even count as a human language for Chomsky.
But, as a pilot and a flight instructor, I look at it from the point of view of student pilots learning to use it in the cockpit while also learning to fly the airplane and navigate around the airfield.
How much harder is it to remember what to say when the workload goes up, and to speak over the radio when you know everyone else on the frequency is listening and will notice every little mistake you make?
Imagine, then, how much more difficult this is for pilots with English as a second language.
Everyone learning another language knows it’s suddenly more challenging to hold a conversation over the phone than face-to-face, even with someone you already know. When it’s over the radio, with someone you don’t know, against the noise of the engine, static noise in the headphones, and while trying to make the plane do what you want it to do, it can be quite daunting.
No wonder student pilots who are not native English speakers sometimes prefer to stay silent, and even some experienced native English speakers will too, when the workload is too great.
This is one of the results of my research conducted in collaboration with UNSW’s Brett Molesworth, combining linguistics and aviation human factors.
Experiments in a flight simulator with pilots of diverse language backgrounds and flying experience explored conditions likely to result in pilots making mistakes or misunderstanding ATC instructions. Not surprisingly, increased workload, too much information and rapid ATC speech caused mistakes.
Also not surprisingly, less experienced pilots, no matter their English proficiency, made more mistakes. But surprisingly, it was the level of training, rather than number of flying hours or language background, that predicted better communication.
Once we understand the factors contributing to miscommunication in aviation, we can propose solutions to prevent them. For example, technologies such as Automatic Speech Recognition and Natural Language Understanding may help catch errors in pilot readbacks that ATC did not notice and might complement training for pilots and ATC.
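To make the idea of an automated readback check concrete, here is a minimal sketch in Python. It is an illustration only: the phraseology handling, the digit pronunciations and the matching rule are simplified assumptions, not how a real ATC system or any particular ASR product works.

```python
# Illustrative readback check: compare an ATC instruction with a
# pilot's readback after normalising aviation pronunciations.
# Deliberately simplified; real systems work on recognised speech.

AVIATION_DIGITS = {"five": "fife", "nine": "niner"}

def normalise(transmission):
    """Lowercase, strip commas, and map digits to their aviation
    pronunciations so 'five' and 'fife' compare as equal."""
    tokens = transmission.lower().replace(",", "").split()
    return [AVIATION_DIGITS.get(t, t) for t in tokens]

def readback_matches(instruction, readback):
    """Flag a readback that omits or alters part of the instruction:
    every instruction token must appear, in order, in the readback."""
    heard = iter(normalise(readback))
    return all(token in heard for token in normalise(instruction))

print(readback_matches("Maintain heading one eight zero",
                       "Maintain heading one eight zero"))  # True
print(readback_matches("Descend flight level niner zero",
                       "Descend flight level five zero"))   # False
```

Note the in-order subsequence test: a readback that swaps “niner zero” for “five zero” is caught, while a pilot adding a callsign around the instruction would still pass.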
It is vital that they understand each other, whatever their native language.
Writing an article like this is just asking for trouble. Already, I can hear one reader asking “Why do you need just?” Another suggesting that like should be replaced by such as. And yet another saying “fancy using a cliché like asking for trouble!”
Another will mutter: “Where’s your evidence?”
My evidence lies in the vehement protestations that I face when going through solutions to an editing test or grammar quiz with on-campus students in my writing courses at The University of Queensland, and no, that’s not deferential capitalisation. It is capital ‘T’.
Confirming evidence lies in the querulous discussion-board posts from dozens of students when they see the answers to quizzes on the English Grammar and Style massive open online course that I designed.
Further evidence lies in the fervour with which people comment about articles such as the one that you are currently reading. For instance, a 2013 article, “10 grammar rules you can forget: how to stop worrying and write proper”, by the style editor of The Guardian, David Marsh, prompted 956 comments. Marsh loves breaking “real” rules. The title of his recent book is For Who the Bell Tolls. I’d prefer properly to proper and whom to who, but not everybody else would.
Marsh’s 10 forgettable rules are ones that my favourite grammarian, Professor Geoffrey Pullum, co-author of The Cambridge Grammar of the English Language calls zombie rules: “though dead, they shamble mindlessly on”. A list of zombie rules invariably includes never beginning a sentence with “and”, “but”, or “because”, as well as the strictures that are a hangover from Latin: never split an infinitive and never end a sentence with a preposition. It (should it be they?) couldn’t be done in Latin, but it (they?) can be done in English. Just covering my bases here.
So, what’s my stance on adhering to Standard English? I’m certainly not a grammar Nazi, nor even a grammando – a portmanteau term, hardly any softer, that first appeared in The New York Times in 2012. Am I a vigilante, a pedant, a per(s)nickety person? Am I a snoot? Snoot is the acronym that the late David Foster Wallace and his mother — both English teachers — coined from Sprachgefühl Necessitates Our Ongoing Tendance or, for those with neither German nor a cache of obsolete words in their vocabulary, Syntax Nudniks of Our Time.
Foster Wallace reserves snoot for a “really extreme usage fanatic”, the sort of person whose idea of Sunday fun would have been to find mistakes in the late William Safire’s On Language column in the New York Times magazine. Safire was a style maven who wrote articles with intriguing opening lines such as this: “A sinister force for solecism exists on Madison Avenue. It is the work of the copywrongers”.
Growing up with a mother who would stage a “pretend” coughing fit when her children made a grammar error clearly contributed to Foster Wallace’s SNOOTitude. His 50-page essay “Authority and American Usage”, published in 2005, constitutes a brilliant, if somewhat eccentric, coverage of English grammar.
I need to be a bit of a snoot because part of my brief as a writing educator is to prepare graduates for their utilitarian need to function as writing workers in a writing-reliant workplace where professional standards are crucial and errors erode credibility. (I see the other part of my brief as fostering a love of language that will provide them with lifelong recreational pleasure.)
How do I teach students to avoid grammar errors, ambiguous syntax, and infelicities and gaucheries in style? In the closing chapter of my new book on effective writing, I list around 80 potential problems in grammar, punctuation, style, and syntax.
My hateful eight
My brief for this article is to highlight eight of these problems. Should I identify ones that peeve me the most or ones that cause most dissonance for readers? What’s the peevishness threshold of readers of The Conversation? Let’s go with mine, for now; they may also be yours. They are in no particular order and they depend on the writing context in which they are set: academic, corporate, creative, or journalistic.
Archaic language: amongst, whilst. Replace them with among and while.
Resistance to the singular “they”. Here’s an unbearably tedious example from a book published in 2016 in London: “The four victims each found a small book like this in his or her home, or among his or her possessions, several weeks before the murder occurred in each case”. Replace his or her with their.
In January this year, The American Dialect Society announced the singular “they” as their Word of the Year for 2015, decades after Australia welcomed and widely adopted it.
Placement of modifiers. Modifiers need to have a clear, direct relationship with the word/s that they modify. The title of Rob Lowe’s autobiography should be Stories I Tell Only My Friends, not Stories I Only Tell My Friends. However, I’ll leave Brian Wilson alone with “God only knows what I’d be without you”, though I know that he meant “Only God knows what I’d be without you”.
And how amusing is this commentary, which appeared in The Times on 18 April 2015? “A longboat full of Vikings, promoting the new British Museum exhibition, was seen sailing past the Palace of Westminster yesterday. Famously uncivilised, destructive and rapacious, with an almost insatiable appetite for rough sex and heavy drinking, the MPs nevertheless looked up for a bit to admire the vessel”.
Incorrect pronouns. The irritating genteelism of “They asked Agatha and myself to dinner” and the grammatically incorrect “They asked Agatha and I to dinner”, when in both instances it should be me.
Ambiguity/obfuscation. “Few Bordeaux give as much pleasure at this price”. How ethical is that on a bottle of red wine of unidentified origin?
The wrong preposition. The rich are very different to you and me. (Change “to” to “from” to make sense.) Not to be mistaken with. (Change “with” to “for”.) No qualms with. (Change “with” to “about”.)
The wrong word. There are dozens of “confusable” words that a spell checker won’t necessarily help with: “Yes, it is likely that working off campus may effect what you are trying to do”. Ironically, this could be correct, but I know that that wasn’t the writer’s intended message. And how about practice/practise, principal/principle, lead/led, and many more.
Worryingly equivocal language. After the Easter strike some time ago, the CEO of QANTAS, Alan Joyce, sent out an apologetic letter that included the sentence: “Despite some sensational coverage recently, safety was never an issue … We always respond conservatively to any mechanical or performance issue”. I hoped at the time that that’s not what he meant because I felt far from reassured by the message.
Alert readers will have noticed that I haven’t railed against poorly punctuated sentences. I’ll do that next time. A poorly punctuated sentence cannot be grammatically correct.
Unlikely as it sounds, the topic of adjective use has gone “viral”. The furore centres on the claim, taken from Mark Forsyth’s book The Elements of Eloquence, that adjectives appearing before a noun must appear in the following strict sequence: opinion, size, age, shape, colour, origin, material, purpose – and then the noun. Even the slightest attempt to disrupt this sequence, according to Forsyth, will result in the speaker sounding like a maniac. To illustrate this point, Forsyth offers the following example: “a lovely little old rectangular green French silver whittling knife”.
But is the “rule” worthy of an internet storm – or is it more of a ripple in a teacup? Well, certainly the example is a rather unlikely sentence, and not simply because whittling knives are not in much demand these days – ignoring the question of whether they can be both green and silver. This is because it is unusual to have a string of attributive adjectives (ones that appear before the noun they describe) like this.
More usually, speakers of English break up the sequence by placing some of the adjectives in predicative position – after the noun. Not all adjectives, however, can be placed in either position. I can refer to “that man who is asleep” but it would sound odd to refer to him as “that asleep man”; we can talk about the “Eastern counties” but not the “counties that are Eastern”. Indeed, our distribution of adjectives both before and after the noun reveals another constraint on adjective use in English – a preference for no more than three before a noun. An “old brown dog” sounds fine, a “little old brown dog” sounds acceptable, but a “mischievous little old brown dog” sounds plain wrong.
Rules, rules, rules
Nevertheless, however many adjectives we choose to employ, they do indeed tend to follow a predictable pattern. While native speakers intuitively follow this rule, most are unaware that they are doing so; we agree that the “red big dog” sounds wrong, but don’t know why. To test this intuition, linguists have analysed large corpora of electronic data to see how frequently pairs of adjectives like “big red” are preferred to “red big”. The results confirm our native intuition, although the figures are not as comprehensive as we might expect – the rule accounts for 78% of the data.
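The corpus count described above can be sketched in a few lines of Python. The toy corpus below is an invented stand-in for the large electronic corpora linguists actually use; the counting logic is the point:

```python
from collections import Counter

# Toy corpus standing in for a large electronic corpus (illustrative only).
corpus = (
    "the big red dog chased a big red ball past the old wooden gate "
    "a red big truck is rare in writing but a big red truck is common "
    "she wore a little black dress near the big red barn"
).split()

# Count how often each ordering of the adjective pair occurs.
pair = {"big", "red"}
counts = Counter()
for first, second in zip(corpus, corpus[1:]):
    if {first, second} == pair:
        counts[(first, second)] += 1

preferred = counts[("big", "red")]
dispreferred = counts[("red", "big")]
total = preferred + dispreferred
print(f"big red: {preferred}, red big: {dispreferred}, "
      f"preference: {preferred / total:.0%}")
```

Run over a real corpus of billions of words, a tally like this is what yields figures such as the 78% preference mentioned above.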
But while linguists have been able to confirm that there are strong preferences in the ordering of pairs of adjectives, no such statistics have been produced for longer strings. Consequently, while Forsyth’s rule appears to make sense, it remains an untested, hypothetical, large, sweeping (sorry) claim.
In fact, even if we stick to just two adjectives it is possible to find examples that appear to break the rule. The “big bad wolf” of fairy tale, for instance, shows the size adjective preceding the opinion one; similarly, “big stupid” is more common than “stupid big”. Examples like these are instead witness to the “Pollyanna Principle”, by which speakers prefer to present positive, or indifferent, values before negative ones.
Another problem with Forsyth’s proposed ordering sequence is that it makes no reference to other constraints that influence adjective order, such as what happens when we use two adjectives from the same category. Little Richard’s song “Long Tall Sally” would have sounded strange if he had called it “Tall Long Sally”, yet both are adjectives of size.
Similarly, we might describe a meal as “nice and spicy” but never “spicy and nice” – reflecting a preference for the placement of general opinions before more specific ones. We also need to bear in mind the tendency for noun phrases to become lexicalised – forming words in their own right. Just as a blackbird is not any kind of bird that is black, a little black dress does not refer to any small black dress but one that is suitable for particular kinds of social engagement.
Since speakers view a “little black dress” as a single entity, its order is fixed; as a result, any modifying adjective must precede “little” – a “polyester little black dress”. This means that an adjective specifying its material appears before those referring to size and colour, once again contravening Forsyth’s rule.
Making sense of language
Of course, the rule is a fair reflection of much general usage – although the reasons behind this complex set of constraints in adjective order remain disputed. Some linguists have suggested that it reflects the “nouniness” of an adjective; since colour adjectives are commonly used as nouns – “red is my favourite colour” – they appear close to that slot.
Another conditioning factor may be the degree to which an adjective reflects a subjective opinion rather than an objective description – therefore, subjective adjectives that are harder to quantify (boring, massive, middle-aged) tend to appear further away from the noun than more concrete ones (red, round, French).
Prosody – the rhythm and sound of speech – is likely to play a role, too, as there is a tendency for speakers to place longer adjectives after shorter ones. But probably the most compelling theory links adjective position with semantic closeness to the noun being described; adjectives that are closely related to the noun in meaning, and are therefore likely to appear frequently in combination with it, are placed closest, while those that are less closely related appear further away.
In Forsyth’s example, it is the knife’s whittling capabilities that are most significant – distinguishing it from a carving, fruit or butter knife – while its loveliness is hardest to define (what are the standards for judging the loveliness of a whittling knife?) and thus most subjective. Whether any slight reorganisation of the other adjectives would really prompt your friends to view you as a knife-wielding maniac is harder to determine – but then, at least it’s just a whittling knife.
As a young adult in college, I decided to learn Japanese. My father’s family is from Japan, and I wanted to travel there someday.
However, many of my classmates and I found it difficult to learn a language in adulthood. We struggled to connect new sounds and a dramatically different writing system to the familiar objects around us.
It wasn’t so for everyone. There were some students in our class who were able to acquire the new language much more easily than others.
So, what makes some individuals “good language learners?” And do such individuals have a “second language aptitude?”
What we know about second language aptitude
Past research on second language aptitude has focused on how people perceive sounds in a particular language and on more general cognitive processes such as memory and learning abilities. Most of this work has used paper-and-pencil and computerized tests to determine language-learning abilities and predict future learning.
Researchers have also studied brain activity as a way of measuring linguistic and cognitive abilities. However, much less is known about how brain activity predicts second language learning.
Is there a way to predict someone’s aptitude for second language learning?
In a recently published study, Chantel Prat, associate professor of psychology at the Institute for Learning and Brain Sciences at the University of Washington, and I explored how brain activity recorded at rest – while a person is relaxed with their eyes closed – could predict the rate at which a second language is learned among adults who spoke only one language.
Studying the resting brain
Resting brain activity is thought to reflect the organization of the brain, and it has been linked to intelligence – the general ability to reason and solve problems.
We measured brain activity obtained from a “resting state” to predict individual differences in the ability to learn a second language in adulthood.
To do that, we recorded five minutes of eyes-closed resting-state electroencephalography, a method that detects electrical activity in the brain, in young adults. We also collected two hours of paper-and-pencil and computerized tasks.
We then had 19 participants complete eight weeks of French language training using a computer program. This software was developed by the U.S. armed forces with the goal of getting military personnel functionally proficient in a language as quickly as possible.
The software combined reading, listening and speaking practice with game-like virtual reality scenarios. Participants moved through the content in levels organized around different goals, such as being able to communicate with a virtual cab driver by finding out if the driver was available, telling the driver where their bags were and thanking the driver.
Nineteen adult participants (18-31 years of age) completed two 30-minute training sessions per week for a total of 16 sessions. After each training session, we recorded the level that each participant had reached. At the end of the experiment, we used that level information to calculate each individual’s learning rate across the eight-week training.
As expected, there was large variability in the learning rate, with the best learner moving through the program more than twice as quickly as the slowest learner. Our goal was to figure out which (if any) of the measures recorded initially predicted those differences.
A new brain measure for language aptitude
When we correlated our measures with learning rate, we found that patterns of brain activity that have been linked to linguistic processes predicted how easily people could learn a second language.
Patterns of activity over the right side of the brain predicted upwards of 60 percent of the differences in second language learning across individuals. This finding is consistent with previous research showing that the right half of the brain is more frequently used with a second language.
Our results suggest that the majority of the language learning differences between participants could be explained by the way their brain was organized before they even started learning.
Implications for learning a new language
Does this mean that if you, like me, don’t have a “quick second language learning” brain you should forget about learning a second language?
First, it is important to remember that 40 percent of the difference in language learning rate remains unexplained. Some of this is certainly related to factors like attention and motivation, which are known to be reliable predictors of learning in general, and of second language learning in particular.
Second, we know that people can change their resting-state brain activity. So training may help to shape the brain into a state in which it is more ready to learn. This could be an exciting future research direction.
Second language learning in adulthood is difficult, but the benefits are large for those who, like myself, are motivated by the desire to communicate with others who do not speak their native tongue.
The limits of our language are said to define the boundaries of our world. This is because in our everyday lives, we can only really register and make sense of what we can name. We are restricted by the words we know, which shape what we can and cannot experience.
It is true that sometimes we may have fleeting sensations and feelings that we don’t quite have a name for – akin to words on the “tip of our tongue”. But without a word to label these sensations or feelings they are often overlooked, never to be fully acknowledged, articulated or even remembered. And instead, they are often lumped together with more generalised emotions, such as “happiness” or “joy”. This applies to all aspects of life – and not least to that most sought-after and cherished of feelings, happiness. Clearly, most people know and understand happiness, at least vaguely. But they are hindered by their “lexical limitations” and the words at their disposal.
As English speakers, we inherit, rather haphazardly, a set of words and phrases to represent and describe our world around us. Whatever vocabulary we have managed to acquire in relation to happiness will influence the types of feelings we can enjoy. If we lack a word for a particular positive emotion, we are far less likely to experience it. And even if we do somehow experience it, we are unlikely to perceive it with much clarity, think about it with much understanding, talk about it with much insight, or remember it with much vividness.
Speaking of happiness
While this recognition is sobering, it is also exciting, because it means by learning new words and concepts, we can enrich our emotional world. So, in theory, we can actually enhance our experience of happiness simply through exploring language. Prompted by this enthralling possibility, I recently embarked on a project to discover “new” words and concepts relating to happiness.
I did this by searching for so-called “untranslatable” words from across the world’s languages. These are words for which no exact equivalent word or phrase exists in English. As such, they suggest the possibility that other cultures have stumbled upon phenomena that English-speaking places have somehow overlooked.
Perhaps the most famous example is “Schadenfreude”, the German term describing pleasure at the misfortunes of others. Such words pique our curiosity, as they appear to reveal something specific about the culture that created them – as if German people are potentially especially liable to feelings of Schadenfreude (though I don’t believe that’s the case).
However, these words actually may be far more significant than that. Consider the fact that Schadenfreude has been imported wholesale into English. Evidently, English speakers had at least a passing familiarity with this kind of feeling, but lacked the word to articulate it (although I suppose “gloating” comes close) – hence, the grateful borrowing of the German term. As a result, their emotional landscape has been enlivened and enriched, able to give voice to feelings that might previously have remained unconceptualised and unexpressed.
My research searched for these kinds of “untranslatable” words – ones that specifically related to happiness and well-being. I trawled the internet looking for relevant websites, blogs, books and academic papers, and gathered a respectable haul of 216 such words. The list has since expanded – partly thanks to the generous feedback of visitors to my website – to more than 600 words.
When analysing these “untranslatable words”, I divide them into three categories based on my subjective reaction to them. Firstly, there are those that immediately resonate with me as something I have definitely experienced, but just haven’t previously been able to articulate. For instance, I love the strange German noun “Waldeinsamkeit”, which captures that eerie, mysterious feeling that often descends when you’re alone in the woods.
A second group are words that strike me as somewhat familiar, but not entirely, as if I can’t quite grasp their layers of complexity. For instance, I’m hugely intrigued by various Japanese aesthetic concepts, such as “aware” (哀れ), which evokes the bitter-sweetness of a brief, fading moment of transcendent beauty. This is symbolised by the cherry blossom – and as spring bloomed in England I found myself reflecting at length on this powerful yet intangible notion.
Finally, there is a mysterious set of words which completely elude my grasp, but which for precisely that reason are totally captivating. These mainly hail from Eastern religions – terms such as “Nirvana” or “Brahman”, the latter of which translates roughly as “the ultimate reality underlying all phenomena” in the Hindu scriptures. It feels as if it would require a lifetime of study even to begin to grasp their meaning – which is probably exactly the point of these types of words.
I believe these words offer a unique window onto the world’s cultures, revealing diversity in the way people in different places experience and understand life. People are naturally curious about other ways of living, about new possibilities in life, and so are drawn to ideas – like these untranslatable words – that reveal such possibilities.
There is huge potential for these words to enrich and expand people’s own emotional worlds: each of these words offers a tantalising glimpse into unfamiliar and new positive feelings and experiences. And at the end of the day, who wouldn’t be interested in adding a bit more happiness to their own lives?
This year’s winners – Jairam Hathwar from Painted Post, New York and Nihar Janga from Austin, Texas – present a familiar combination of co-champions. Jairam is the younger brother of 2013 co-champion Sriram, who also dueled with a Texan to ultimately share the trophy.
As a topic of intense speculation on broadcast and social media, the wins have elicited comments that range from curiosity to bafflement and at times outright racism. This curiosity is different from past speculation about “whether home-schooled spellers have an advantage.”
The range of responses offers a moment to consider some of the factors underlying the Indian-American success at the bee, as well as how spelling as a sport has changed. Immediately following the 2016 bee, for instance, much of the coverage focused on the exceedingly high level of competition and drama that characterized the 25-round championship battle that ultimately resulted in a tie.
Since 2013, I have been conducting research on competitive spelling at regional and national bees with officials, spellers and their families, and media producers.
My interviews and observations reveal the changing nature of spelling as a “brain sport” and the rigorous regimens of preparation that competitive spellers engage in year-round. Being an “elite speller” is a major childhood commitment that has intensified as the bee has become more competitive in recent years.
Let’s first look at history
South Asian-American spelling success is connected to the history of this ethnic community’s immigration to the United States.
For instance, the 1965 Hart-Cellar Act solicited highly trained immigrants to meet America’s need for scientists, engineers and medical professionals and opened the door to skilled immigration from Asia and other regions. In subsequent decades, skilled migration from South Asia continued alongside the sponsorship of family members.
Today, along with smaller, older communities of Punjabi Sikhs and other South Asian ethnic groups primarily on the West Coast, South Asian-Americans constitute a diverse population that features a disproportionately high professional class, although with differences of class, languages, ethnicities and nationalities – differences that are often overlooked in favor of a narrative of Indian-American educational and professional success.
The question is, what gives the community an edge?
For upwardly mobile South Asian-Americans, success is in part due to moving from one socially and economically advantageous societal position in the subcontinent to another in the United States.
Moreover, the English-speaking abilities of most educated South Asian-Americans clearly give them an edge over immigrants from other countries. My research indicates that fluency developed in English-medium schools – a legacy of British colonialism – makes them ideal spelling interlocutors for their children, despite their use of British spelling variants. Members of this population with elite educational qualifications have likewise emphasized the importance of academic achievement with their children.
Over the past few years spelling bees have been established exclusively for children of South Asian parentage.
For instance, the North South Foundation holds a range of educational contests, such as spelling bees, math contests, geography bees and essay writing, among others, whose proceeds contribute to promoting literacy efforts in India. The South Asian Spelling Bee, partnering with the insurance company MetLife, offers a highly competitive bee as well.
Taken together, this “minor league” circuit gives South Asian-American spellers far more opportunities to compete, as well as a longer “bee season” to train and practice.
This is particularly helpful because, as past champions confirm, ongoing practice and training are the key to winning.
Another factor to note here is the parental ability to dedicate time to education and extracurricular activities. Predictably, families with greater socioeconomic means are able to devote more resources and time.
These parents are as invested in spelling bees and academic competitions as families with star athletes or musicians might be in their children’s matches or performances. As several parents explained to me, spelling bees are the “brain sports” equivalent of travel soccer or Little League.
Of the 30 families I interviewed, the majority had a stay-at-home parent (usually the mother) dedicated to working with children on all activities, including spelling. In dual-income households, spelling training occurred on weeknights and weekends.
Like elite spellers of any race or ethnicity, South Asian-American spellers I spoke with studied word lists daily if possible, logging in several hours on weekends with parents or paid coaches to help them develop strategies and quiz them on words.
A few parents have been so invested in helping their children prepare that they have now started training and tutoring other aspiring spellers as well.
Like any national championship, a competition on the scale of the National Spelling Bee puts intense pressure on all spellers. South Asian-American children, already expected to live up to the model minority stereotype, find no reprieve here.
This is especially important to consider when South Asian-American spellers come from lower socioeconomic classes, but nonetheless succeed at spelling bees.
Among the 2015 finalists, for instance, one was the son of motel owners and a crowd favorite, as I observed. He had competed in the bee several times, and his older sister was also a speller, having made it to nationals once. Remarkably, they prepared for competitions by themselves, with no stay-at-home parent or paid coach.
Another 2015 semifinalist was featured in a broadcast segment living in the crowded immigrant neighborhood of Flushing, New York. When I visited this three-time National Spelling Bee participant in 2014, I realized that she lived in the very same apartment complex that my family did in the 1970s. This Queens neighborhood continues to be a receiving area for Indian-Americans who may not have the economic means to live in wealthier sections of New York City or its suburbs.
Many possible explanations
The point is that the reasons that Indian-American spellers are succeeding at the bee are not easily reducible to one answer.
South Asian-Americans, like other Asian immigrants, comprise varying class backgrounds and immigration histories. Yet it is noteworthy that even within this range of South Asian-American spellers, it is children of Indian-American immigrants from professional backgrounds who tend to become champions.
The time and resources Indian-American families devote to this brain sport, as I have observed, appear to be raising this competition to previously unseen levels of difficulty.
This can take a toll on elite spellers, who have to invest far more time studying spelling than in the past. With more difficult words appearing in earlier rounds of competition, spelling preparation can take up much of their time outside of school.
Nonetheless, they emphasize the perseverance they develop from competitive spelling. They learn to handle increasing levels of pressure, and alongside this, what they identify as important life skills of focus, poise and concentration.
Ultimately, what makes Indian-American children successful at spelling is the same as what makes children of any other ethnicity successful. They come from families who believe in the value of education and also have the financial means to support their children through every stage of their schooling. And they are highly intelligent individuals who devote their childhood to the study of American English.
Are they American?
Some comments on social media, however, seem to discount these factors and years of intense preparation to instead focus on race and ethnicity as sole factors for spelling success.
In a refreshing shift in tone, this year’s topics also included the ferocity of Janga’s competition style and the inspiration he drew from his football hero Dez Bryant.
Nonetheless, such comments, directed toward nonwhite children when they win this distinctly American contest, do push us to reflect: what does it mean to be an American now?
In alleging that only “Americans” should win this contest, Twitter racists ignore that these spellers too have been born and raised in the United States. Recent winners hail from suburbs or small towns in upstate New York, Kansas, Missouri and Texas. They express regional pride in these locations by mentioning regional sports teams and other distinctive features in their on-air profiles.
With their American-accented English and distinctly American comportment, it is merely their skin color and names that set them apart from a white mainstream.
Like generations of white Americans and European immigrants, Indian-American parents spend countless hours preparing word lists, quizzing their children and creating ways for their children to learn. They encourage their children in whatever they are good at, including spelling.
As a result, they have elevated this American contest to a new level of competition. Clearly, this is an apt moment to expand our definition of what it means to be an American.
This is an updated version of an article first published on June 4, 2015.
by Gianfranco Conti, PhD. Co-author of ‘The Language Teacher Toolkit’ and ‘Breaking the Sound Barrier: Teaching Learners How to Listen’, winner of the 2015 TES best resource contributor award and founder of www.language-gym.com