How learning a new language improves tolerance


Why learn a new language?
Timothy Vollmer, CC BY

Amy Thompson, University of South Florida

There are many benefits to knowing more than one language. For example, it has been shown that aging adults who speak more than one language are less likely to develop dementia.

Additionally, the bilingual brain becomes better at filtering out distractions, and learning multiple languages improves creativity. Evidence also shows that learning subsequent languages is easier than learning the first foreign language.

Unfortunately, not all American universities consider learning foreign languages a worthwhile investment.

Why is foreign language study important at the university level?

As an applied linguist, I study how learning multiple languages can have cognitive and emotional benefits. One of these benefits that’s not obvious is that language learning improves tolerance.

This happens in two important ways.

The first is that it opens people’s eyes to ways of doing things that differ from their own, which is called “cultural competence.”

The second is related to the comfort level of a person when dealing with unfamiliar situations, or “tolerance of ambiguity.”

Gaining cross-cultural understanding

Cultural competence is key to thriving in our increasingly globalized world. How specifically does language learning improve cultural competence? The answer can be illuminated by examining different types of intelligence.

Psychologist Robert Sternberg’s research on intelligence describes different types of intelligence and how they are related to adult language learning. What he refers to as “practical intelligence” is similar to social intelligence in that it helps individuals learn nonexplicit information from their environments, including meaningful gestures or other social cues.

Learning a foreign language reduces social anxiety.
COD Newsroom, CC BY

Language learning inevitably involves learning about different cultures. Students pick up clues about the culture both in language classes and through meaningful immersion experiences.

Researchers Hanh Thi Nguyen and Guy Kellogg have shown that when students learn another language, they develop new ways of understanding culture through analyzing cultural stereotypes. They explain that “learning a second language involves the acquisition not only of linguistic forms but also ways of thinking and behaving.”

With the help of an instructor, students can critically think about stereotypes of different cultures related to food, appearance and conversation styles.

Dealing with the unknown

The second way that adult language learning increases tolerance is related to a person’s comfort level in unfamiliar situations, or “tolerance of ambiguity.”

Someone with a high tolerance of ambiguity finds unfamiliar situations exciting, rather than frightening. My research on motivation, anxiety and beliefs indicates that language learning improves people’s tolerance of ambiguity, especially when more than one foreign language is involved.

It’s not difficult to see why this may be so. Conversations in a foreign language will inevitably involve unknown words. It wouldn’t be a successful conversation if one of the speakers constantly stopped to say, “Hang on – I don’t know that word. Let me look it up in the dictionary.” Those with a high tolerance of ambiguity would feel comfortable maintaining the conversation despite the unfamiliar words involved.

Applied linguists Jean-Marc Dewaele and Li Wei also study tolerance of ambiguity and have indicated that those with experience learning more than one foreign language in an instructed setting have more tolerance of ambiguity.

What changes with this understanding

A high tolerance of ambiguity brings many advantages. It helps students become less anxious in social interactions and in subsequent language learning experiences. Not surprisingly, the more experience a person has with language learning, the more comfortable the person gets with this ambiguity.

And that’s not all.

Individuals with higher levels of tolerance of ambiguity have also been found to be more entrepreneurial (i.e., are more optimistic, innovative and don’t mind taking risks).

In the current climate, universities are frequently judged by the salaries of their graduates. Taking this one step further: given the relationship between tolerance of ambiguity and entrepreneurial intention, increased tolerance of ambiguity could lead to higher salaries for graduates, which in turn, I believe, could help increase funding for those universities that require foreign language study.

Those who have devoted their lives to theorizing about and teaching languages would say, “It’s not about the money.” But perhaps it is.

Language learning in higher ed

Most American universities have a minimal language requirement that often varies depending on the student’s major. However, students can typically opt out of the requirement by taking a placement test or providing some other proof of competency.

Why more universities should teach a foreign language.
sarspri, CC BY-NC

In contrast to this trend, Princeton recently announced that all students, regardless of their competency when entering the university, would be required to study an additional language.

I’d argue that more universities should follow Princeton’s lead, as language study at the university level could lead to an increased tolerance of the different cultural norms represented in American society, which is desperately needed in the current political climate with the wave of hate crimes sweeping university campuses nationwide.

Knowledge of different languages is crucial to becoming global citizens. As former Secretary of Education Arne Duncan noted,

“Our country needs to create a future in which all Americans understand that by speaking more than one language, they are enabling our country to compete successfully and work collaboratively with partners across the globe.”

Considering the evidence that studying languages as adults increases tolerance in two important ways, the question shouldn’t be “Why should universities require foreign language study?” but rather “Why in the world wouldn’t they?”

Amy Thompson, Associate Professor of Applied Linguistics, University of South Florida

This article was originally published on The Conversation. Read the original article.


Younger is not always better when it comes to learning a second language


Learning a language in a classroom is best for early teenagers.
from http://www.shutterstock.com

Warren Midgley, University of Southern Queensland

It’s often thought that it is better to start learning a second language at a young age. But research shows that this is not necessarily true. In fact, the best age to start learning a second language can vary significantly, depending on how the language is being learned.

The belief that younger children are better language learners is based on the observation that children learn to speak their first language with remarkable skill at a very early age.

Before they can add two small numbers or tie their own shoelaces, most children develop a fluency in their first language that is the envy of adult language learners.

Why younger may not always be better

Two theories from the 1960s continue to have a significant influence on how we explain this phenomenon.

The theory of “universal grammar” proposes that children are born with an instinctive knowledge of the language rules common to all humans. Upon exposure to a specific language, such as English or Arabic, children simply fill in the details around those rules, making the process of learning a language fast and effective.

The other theory, known as the “critical period hypothesis”, posits that at around the age of puberty most of us lose access to the mechanism that made us such effective language learners as children. These theories have been contested, but nevertheless they continue to be influential.

Despite what these theories would suggest, however, research into language learning outcomes demonstrates that younger may not always be better.

In some language learning and teaching contexts, older learners can be more successful than younger children. It all depends on how the language is being learned.

Language immersion environment best for young children

Living, learning and playing in a second language environment on a regular basis is an ideal learning context for young children. Research clearly shows that young children are able to become fluent in more than one language at the same time, provided there is sufficient engagement with rich input in each language. In this context, it is better to start as young as possible.

Learning in classroom best for early teens

Learning in language classes at school is an entirely different context. The normal pattern of these classes is to have one or more hourly lessons per week.

To succeed at learning with such little exposure to rich language input requires meta-cognitive skills that do not usually develop until early adolescence.

For this style of language learning, the later years of primary school are an ideal time to start, maximising the balance between meta-cognitive skill development and the number of consecutive years of study available before the end of school.

Self-guided learning best for adults

There are, of course, some adults who decide to start to learn a second language on their own. They may buy a study book, sign up for an online course, purchase an app or join face-to-face or virtual conversation classes.

To succeed in this learning context requires a range of skills that are not usually developed until reaching adulthood, including the ability to remain self-motivated. Therefore, self-directed second language learning is more likely to be effective for adults than younger learners.

How we can apply this to education

What does this tell us about when we should start teaching second languages to children? In terms of the development of language proficiency, the message is fairly clear.

If we are able to provide lots of exposure to rich language use, early childhood is better. If the only opportunity for second language learning is through more traditional language classes, then late primary school is likely to be just as good as early childhood.

However, if language learning relies on being self-directed, it is more likely to be successful after the learner has reached adulthood.

Warren Midgley, Associate Professor of Applied Linguistics, University of Southern Queensland

This article was originally published on The Conversation. Read the original article.

The shelf-life of slang – what will happen to those ‘democracy sausages’?



Kate Burridge, Monash University

Every year around this time, dictionaries across the English-speaking world announce their “Word of the Year”. These are expressions (some newly minted and some golden oldies too) that for some reason have shot into prominence during the year.

Earlier this month The Australian National Dictionary Centre declared its winner “democracy sausage” – the barbecued snag that on election day makes compulsory voting so much easier to swallow.

Dictionaries make their selections in different ways, but usually it involves a combination of suggestions from the public and the editorial team (who have been meticulously tracking these words throughout the year). The Macquarie Dictionary has two selections – the Committee’s Choice made by the Word of the Year Committee, and the People’s Choice made by the public (so make sure you have your say on January 24 for the People’s Choice winner 2016).

It’s probably not surprising that these words of note draw overwhelmingly from slang language, or “slanguage” – a fall-out of the increasing colloquialisation of English usage worldwide. In Australia this love affair with the vernacular goes back to the earliest settlements of English speakers.

And now there’s the internet, especially social networking – a particularly fertile breeding ground for slang.

People enjoy playing with language, and when communicating electronically they have free rein. “Twitterholic”, “twaddiction”, “celebritweet/twit”, “twitterati” are just some of the “tweologisms” that Twitter has spawned of late. And with a reported average of 500 million tweets each day, Twitter has considerable capacity not only to create new expressions, but to spread them (as do Facebook, Instagram and other social networking platforms).

But what happens when slang terms like these make it into the dictionary? Early dictionaries give us a clue, particularly the entries that are stamped unfit for general use. Branded entries were certainly plentiful in Samuel Johnson’s 18th-century work, and many are now wholly respectable: abominably “a word of low or familiar language”, nowadays “barbarous usage”, fun “a low cant word” (what would Johnson have thought of very fun and funner?).

Since the point of slang is to mark an in-group, to amuse and perhaps even to shock outsiders with novelty, most slang expressions are short-lived. Those that survive become part of the mainstream and mundane. Quite simply, time drains them of their vibrancy and energy. J.M. Wattie put it more poetically back in 1930:

Slang terms are the mayflies of language; by the time they get themselves recorded in a dictionary, they are already museum specimens.

But, then again, expressions occasionally do sneak through the net. Not only do they survive, they stay slangy – and sometimes over centuries. Judge for yourselves. Here are some entries from A New and Comprehensive Vocabulary of the Flash Language. Written by British convict James Hardy Vaux in 1812, this is the first dictionary compiled in Australia.

croak “to die”

grub “food”

kid “deceive”

mug “face”

nuts on “to have a strong inclination towards something or someone”

on the sly “secretly”

racket “particular kind of fraud”

snitch “to betray”

stink “an uproar”

spin a yarn “tell a tale of great adventure”

These were originally terms of flash – or, as Vaux put it, “the cant language used by the family”. In other words, they belonged to underworld slang. The term slang itself meant something similar at this time; it broadened to highly colloquial language in the 1800s.

Vaux went on to point out that “to speak good flash is to be well versed in cant terms” — and, having been transported to New South Wales on three separate occasions during his “checkered and eventful life” (his words), Vaux himself was clearly well versed in the world of villainy and cant.

True, the majority of the slang terms here have dropped by the wayside (barnacles “spectacles”; lush “to drink”), and the handful that survive are now quite standard (grab “to seize”; dollop “large quantity”). But there are a few that have not only lasted, they’ve remained remarkably contemporary-sounding – some still even a little “disgraceful” (as Vaux described them).

The shelf-life of slang is a bit of a mystery. Certainly some areas fray faster than others. Vaux’s prime, plummy and rum (meaning “excellent”) have well and truly bitten the dust. Cool might have made a comeback (also from the 1800s), but intensifiers generally wear out.

Far out and ace have been replaced by awesome, and there are plenty of new “awesome” words lurking in the wings. Some of these are already appearing on lists for “Most Irritating Word of the Year” – it’s almost as if their success does them in. Amazeballs, awesomesauce and phat are among the walking dead.

But as long as sausage sizzles continue to support Australian voters on election day, democracy sausages will have a place – and if adopted elsewhere, might even entice the politically uninterested into polling booths.


Kate Burridge, Professor of Linguistics, Monash University

This article was originally published on The Conversation. Read the original article.

Beware the bad big wolf: why you need to put your adjectives in the right order



Simon Horobin, University of Oxford

Unlikely as it sounds, the topic of adjective use has gone “viral”. The furore centres on the claim, taken from Mark Forsyth’s book The Elements of Eloquence, that adjectives appearing before a noun must appear in the following strict sequence: opinion, size, age, shape, colour, origin, material, purpose, Noun. Even the slightest attempt to disrupt this sequence, according to Forsyth, will result in the speaker sounding like a maniac. To illustrate this point, Forsyth offers the following example: “a lovely little old rectangular green French silver whittling knife”.


But is the “rule” worthy of an internet storm – or is it more of a ripple in a teacup? Well, certainly the example is a rather unlikely sentence, and not simply because whittling knives are not in much demand these days – ignoring the question of whether they can be both green and silver. This is because it is unusual to have a string of attributive adjectives (ones that appear before the noun they describe) like this.

More usually, speakers of English break up the sequence by placing some of the adjectives in predicative position – after the noun. Not all adjectives, however, can be placed in either position. I can refer to “that man who is asleep” but it would sound odd to refer to him as “that asleep man”; we can talk about the “Eastern counties” but not the “counties that are Eastern”. Indeed, our distribution of adjectives both before and after the noun reveals another constraint on adjective use in English – a preference for no more than three before a noun. An “old brown dog” sounds fine, a “little old brown dog” sounds acceptable, but a “mischievous little old brown dog” sounds plain wrong.

Rules, rules, rules

Nevertheless, however many adjectives we choose to employ, they do indeed tend to follow a predictable pattern. While native speakers intuitively follow this rule, most are unaware that they are doing so; we agree that the “red big dog” sounds wrong, but don’t know why. In order to test this intuition, linguists have analysed large corpora of electronic data to see how frequently pairs of adjectives like “big red” are preferred to “red big”. The results confirm our native intuition, although the figures are not as clear-cut as we might expect – the rule accounts for 78% of the data.
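The corpus test described above can be sketched in a few lines of code: scan a text for both orderings of an adjective pair and count which comes out ahead. The tiny corpus and the function below are invented for illustration; real studies run this kind of count over corpora of many millions of words.

```python
from itertools import permutations

# Toy corpus standing in for a large electronic text collection.
corpus = ("the big red dog chased a small green ball "
          "while the big red truck passed a red big sign").split()

def pair_preference(corpus, adj_a, adj_b):
    """Count how often each ordering of an adjective pair occurs
    in adjacent positions in the corpus."""
    counts = {}
    for first, second in permutations((adj_a, adj_b)):
        n = sum(1 for w1, w2 in zip(corpus, corpus[1:])
                if (w1, w2) == (first, second))
        counts[f"{first} {second}"] = n
    return counts

print(pair_preference(corpus, "big", "red"))
# → {'big red': 2, 'red big': 1}
```

Dividing the larger count by the total gives the kind of preference figure cited above – here “big red” wins 2 out of 3 occurrences, or 67%.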

We know how to use them … without even being aware of it.
Shutterstock

But while linguists have been able to confirm that there are strong preferences in the ordering of pairs of adjectives, no such statistics have been produced for longer strings. Consequently, while Forsyth’s rule appears to make sense, it remains an untested, hypothetical, large, sweeping (sorry) claim.

In fact, even if we stick to just two adjectives it is possible to find examples that appear to break the rule. The “big bad wolf” of fairy tale, for instance, shows the size adjective preceding the opinion one; similarly, “big stupid” is more common than “stupid big”. Examples like these are instead witness to the “Pollyanna Principle”, by which speakers prefer to present positive, or indifferent, values before negative ones.

Another limitation of Forsyth’s proposed ordering sequence is that it makes no reference to other constraints that influence adjective order, such as when we use two adjectives that fall into the same category. Little Richard’s song “Long Tall Sally” would have sounded strange if he had called it “Tall Long Sally”, but these are both adjectives of size.

Definitely not Tall Long Sally.

Similarly, we might describe a meal as “nice and spicy” but never “spicy and nice” – reflecting a preference for the placement of general opinions before more specific ones. We also need to bear in mind the tendency for noun phrases to become lexicalised – forming words in their own right. Just as a blackbird is not any kind of bird that is black, a little black dress does not refer to any small black dress but one that is suitable for particular kinds of social engagement.

Since speakers view a “little black dress” as a single entity, its order is fixed; as a result, modifying adjectives must precede little – a “polyester little black dress”. This means that an adjective specifying its material appears before those referring to size and colour, once again contravening Forsyth’s rule.

Making sense of language

Of course, the rule is a fair reflection of much general usage – although the reasons behind this complex set of constraints in adjective order remain disputed. Some linguists have suggested that it reflects the “nouniness” of an adjective; since colour adjectives are commonly used as nouns – “red is my favourite colour” – they appear close to that slot.

Another conditioning factor may be the degree to which an adjective reflects a subjective opinion rather than an objective description – therefore, subjective adjectives that are harder to quantify (boring, massive, middle-aged) tend to appear further away from the noun than more concrete ones (red, round, French).

Prosody, the rhythm and sound of poetry, is likely to play a role, too – as there is a tendency for speakers to place longer adjectives after shorter ones. But probably the most compelling theory links adjective position with semantic closeness to the noun being described; adjectives that are closely related to the noun in meaning, and are therefore likely to appear frequently in combination with it, are placed closest, while those that are less closely related appear further away.

In Forsyth’s example, it is the knife’s whittling capabilities that are most significant – distinguishing it from a carving, fruit or butter knife – while its loveliness is hardest to define (what are the standards for judging the loveliness of a whittling knife?) and thus most subjective. Whether any slight reorganisation of the other adjectives would really prompt your friends to view you as a knife-wielding maniac is harder to determine – but then, at least it’s just a whittling knife.


Simon Horobin, Professor of English Language and Literature, University of Oxford

This article was originally published on The Conversation. Read the original article.

How the Queen’s English has had to defer to Africa’s rich multilingualism


Rajend Mesthrie, University of Cape Town

For the first time in history a truly global language has emerged. English enables international communication par excellence, with a far wider reach than other possible candidates for this position – like Latin in the past, and French, Spanish and Mandarin in the present.

In a memorable phrase, former Tanzanian statesman Julius Nyerere once characterised English as the Kiswahili of the world. In Africa, English is more widely spoken than other important lingua francas like Kiswahili, Arabic, French and Portuguese, with at least 26 countries using English as one of their official languages.

But English in Africa comes in many different shapes and forms. It has taken root in an exceptionally multilingual context, with well over a thousand languages spoken on the continent. The influence of this multilingualism tends to be largely erased at the most formal levels of use – for example, in the national media and in higher educational contexts. But at an everyday level, the Queen’s English has had to defer to the continent’s rich abundance of languages. Pidgin, creole, second-language and first-language English all flourish alongside them.

The birth of new languages

English did not enter Africa as an innocent language. Its history is tied up with trade and exploitation, capitalist expansion, slavery and colonisation.

The history of English is tied up with trade, capitalist expansion, slavery and colonialism.
Shutterstock

As the need for communication arose and increased under these circumstances, forms of English, known as pidgins and creoles, developed. This took place within a context of unequal encounters, a lack of sustained contact with speakers of English and an absence of formal education. Under these conditions, English words were learnt and attached to an emerging grammar that owed more to African languages than to English.

A pidgin is defined by linguists as an initially simple form of communication that arises from contact between speakers of disparate languages who have no other means of communication in common. Pidgins, therefore, do not have mother-tongue speakers. The existence of pidgins in the early period of West African-European contact is not well documented, and some linguists like Salikoko Mufwene judge their early significance to be overestimated.

Pidgins can become more complex if they take on new functions. They are relabelled creoles if, over time and under specific circumstances, they become fully developed as the first language of a group of speakers.

Ultimately, pidgins and creoles develop grammatical norms that are far removed from the colonial forms that partially spawned them: to a British English speaker listening to a pidgin or creole, the words may seem familiar in form, but not always in meaning.

Linguists pay particular attention to these languages because they afford them the opportunity to observe creativity at first hand: the birth of new languages.

The creoles of West Africa

West Africa’s creoles are of two types: those that developed outside Africa; and those that first developed from within the continent.

The West African creoles that developed outside Africa emerged out of the multilingual and oppressive slave experience in the New World. They were then brought to West Africa after 1787 by freed slaves repatriated from Britain, North America and the Caribbean. “Krio” was the name given to the English-based creole of slaves freed from Britain who were returned to Sierra Leone, where they were joined by slaves released from Nova Scotia and Jamaica.

Some years after that, in 1821, Liberia was established as an African homeland for freed slaves from the US. These men and women brought with them what some linguists call “Liberian settler English”. This particular creole continues to make Liberia somewhat special on the continent, with American rather than British forms of English dominating there.

These languages from the New World were very influential in their new environments, especially over the developing West African pidgin English.

A more recent, homegrown type of West African creole has emerged in the region. This West African creole is spreading in the context of urban multilingualism and changing youth identities. Over the past 50 years, it has grown spectacularly in Ghana, Cameroon, Equatorial Guinea and Sierra Leone, and it is believed to be the fastest-growing language in Nigeria. In this process pidgin English has been expanded into a creole, used as one of the languages of the home. For such speakers, the designation “pidgin” is now a misnomer, although it remains widely used.

In East Africa, in contrast, the strength and historicity of Kiswahili as a lingua franca prevented the rapid development of pidgins based on colonial languages. There, traders and colonists had to learn Kiswahili for successful everyday communication. This gave locals more time to master English as a fully-fledged second language.

Other varieties of English

Africa, mirroring the trend in the rest of the world, has a large and increasing number of second-language English speakers. Second-language varieties of English are mutually intelligible with first-language versions, while showing varying degrees of difference in accent, grammar and nuance of vocabulary. Formal colonisation and the educational system from the 19th century onwards account for the wide spread of second-language English.

What about first-language varieties of English on the continent? The South African variety looms large in this history, showing similarities with English in Australia and New Zealand, especially in details of accent.

In post-apartheid South Africa many young black people from middle-class backgrounds now speak this variety either as a dominant language or as a “second first-language”. But for most South Africans English is a second language – a very important one for education, business and international communication.

For family and cultural matters, African languages remain of inestimable value throughout the continent.


Rajend Mesthrie, Professor of Linguistics, University of Cape Town

This article was originally published on The Conversation. Read the original article.

Why it’s hard for adults to learn a second language



Brianna Yamasaki, University of Washington

As a young adult in college, I decided to learn Japanese. My father’s family is from Japan, and I wanted to travel there someday.

However, many of my classmates and I found it difficult to learn a language in adulthood. We struggled to connect new sounds and a dramatically different writing system to the familiar objects around us.

It wasn’t so for everyone. There were some students in our class who were able to acquire the new language much more easily than others.

So, what makes some individuals “good language learners”? And do such individuals have a “second language aptitude”?

What we know about second language aptitude

Past research on second language aptitude has focused on how people perceive sounds in a particular language and on more general cognitive processes such as memory and learning abilities. Most of this work has used paper-and-pencil and computerized tests to determine language-learning abilities and predict future learning.

Researchers have also studied brain activity as a way of measuring linguistic and cognitive abilities. However, much less is known about how brain activity predicts second language learning.

Is there a way to predict someone’s aptitude for second language learning?

How does brain activity change while learning languages?
Brain image via www.shutterstock.com

In a recently published study, Chantel Prat, associate professor of psychology at the Institute for Learning and Brain Sciences at the University of Washington, and I explored how brain activity recorded at rest – while a person is relaxed with their eyes closed – could predict the rate at which a second language is learned among adults who spoke only one language.

Studying the resting brain

Resting brain activity is thought to reflect the organization of the brain and it has been linked to intelligence, or the general ability used to reason and problem-solve.

We measured brain activity obtained from a “resting state” to predict individual differences in the ability to learn a second language in adulthood.

To do that, we recorded five minutes of eyes-closed resting-state electroencephalography, a method that detects electrical activity in the brain, in young adults. We also collected two hours of paper-and-pencil and computerized tasks.

We then had 19 participants complete eight weeks of French language training using a computer program. This software was developed by the U.S. armed forces with the goal of getting military personnel functionally proficient in a language as quickly as possible.

The software combined reading, listening and speaking practice with game-like virtual reality scenarios. Participants moved through the content in levels organized around different goals, such as being able to communicate with a virtual cab driver by finding out if the driver was available, telling the driver where their bags were and thanking the driver.


Nineteen adult participants (18-31 years of age) completed two 30-minute training sessions per week for a total of 16 sessions. After each training session, we recorded the level that each participant had reached. At the end of the experiment, we used that level information to calculate each individual’s learning rate across the eight-week training.

As expected, there was large variability in the learning rate, with the best learner moving through the program more than twice as quickly as the slowest learner. Our goal was to figure out which (if any) of the measures recorded initially predicted those differences.
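The learning-rate calculation described above can be sketched in a few lines of Python. This is a hypothetical illustration: the level values below are invented, not taken from the study.

```python
# Hypothetical sketch: estimating each participant's learning rate
# from the cumulative level reached after each of the 16 sessions.
# All numbers are invented for illustration only.

def learning_rate(levels_per_session):
    """Average number of levels gained per training session."""
    return levels_per_session[-1] / len(levels_per_session)

# Invented data: cumulative level reached after each session
fast_learner = [2, 4, 6, 9, 11, 13, 16, 18, 20, 22, 25, 27, 29, 31, 34, 36]
slow_learner = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]

fast_rate = learning_rate(fast_learner)   # 36 / 16 = 2.25 levels per session
slow_rate = learning_rate(slow_learner)   # 16 / 16 = 1.0 levels per session

# In this invented example, the fastest learner moved more than twice
# as quickly as the slowest - the kind of spread the study reports.
print(fast_rate / slow_rate)  # 2.25
```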

A new brain measure for language aptitude

When we correlated our measures with learning rate, we found that patterns of brain activity that have been linked to linguistic processes predicted how easily people could learn a second language.

Patterns of activity over the right side of the brain predicted upwards of 60 percent of the differences in second language learning across individuals. This finding is consistent with previous research showing that the right hemisphere is more heavily involved in second language use.

Our results suggest that the majority of the language learning differences between participants could be explained by the way their brain was organized before they even started learning.
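As a rough guide to what "predicted upwards of 60 percent of the differences" means: squaring a correlation coefficient gives the fraction of variance explained, so a correlation of about 0.77 corresponds to roughly 60 percent. A minimal sketch with invented data (the study's actual measures and analysis are not reproduced here):

```python
# Hypothetical sketch: turning a correlation between a resting-state
# brain measure and learning rate into "percent of variance explained".
# All data points are invented for illustration.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

brain_measure = [0.2, 0.5, 0.3, 0.8, 0.6, 0.9, 0.4, 0.7]   # invented
rates = [1.0, 1.6, 1.2, 2.1, 1.8, 2.3, 1.3, 1.9]           # invented

r = pearson_r(brain_measure, rates)
variance_explained = r ** 2  # fraction of learning-rate differences "explained"
print(round(variance_explained * 100), "percent")
```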

Implications for learning a new language

Does this mean that if you, like me, don’t have a “quick second language learning” brain you should forget about learning a second language?

Not quite.

Language learning can depend on many factors.
Child image via www.shutterstock.com

First, it is important to remember that 40 percent of the difference in language learning rate remains unexplained. Some of this is certainly related to factors like attention and motivation, which are known to be reliable predictors of learning in general, and of second language learning in particular.

Second, we know that people can change their resting-state brain activity. So training may help to shape the brain into a state in which it is more ready to learn. This could be an exciting future research direction.

Second language learning in adulthood is difficult, but the benefits are large for those who, like me, are motivated by the desire to communicate with others who do not speak their native tongue.

The Conversation

Brianna Yamasaki, Ph.D. Student, University of Washington

This article was originally published on The Conversation. Read the original article.

British Council Backs Bilingual Babies



The British Council is to open a bilingual pre-school in Hong Kong in August. The International Pre-School, which will teach English and Cantonese and have specific times set aside for Mandarin, will follow the UK-based International Primary Curriculum.

The British Council already has bilingual pre-schools in Singapore (pictured above) and Madrid. The adoption of a bilingual model of early years learning, rather than a purely English-medium one, is supported by much of the research on this age group. In a randomised controlled trial in the US state of New Jersey, for example, three- and four-year-olds from both Spanish- and English-speaking backgrounds were assigned by lottery to either an all-English or English–Spanish pre-school programme which used an identical curriculum. The study found that children from the bilingual programme emerged with the same level of English as those in the English-medium one, but both the Spanish-speaking and anglophone children had a much higher level of Spanish.

http://www.elgazette.com/item/281-british-council-backs-bilingual-babies.html

Britain may be leaving the EU, but English is going nowhere



Andrew Linn, University of Westminster

After Brexit, there are various things that some in the EU hope to see and hear less in the future. One is Nigel Farage. Another is the English language.

In the early hours of June 24, as the referendum outcome was becoming clear, Jean-Luc Mélenchon, left-wing MEP and French presidential candidate, tweeted that “English cannot be the third working language of the European parliament”.

This is not the first time that French and German opinion has weighed in against alleged disproportionate use of English in EU business. In 2012, for example, a similar point was made about key eurozone recommendations from the European Commission being published initially “in a language which [as far as the Euro goes] is only spoken by less than 5m Irish”. With the number of native speakers of English in the EU set to drop from 14% to around 1% of the bloc’s total with the departure of the UK, this point just got a bit sharper.

Translation overload

Official EU language policy is multilingualism with equal rights for all languages used in member states. It recommends that “every European citizen should master two other languages in addition to their mother tongue” – Britain’s abject failure to achieve this should make it skulk away in shame.

The EU recognises 24 “official and working” languages, a number that has mushroomed from the original four (Dutch, French, German and Italian) as more countries have joined. All EU citizens have a right to access EU documents in any of those languages. This calls for a translation team numbering around 2,500, not to mention a further 600 full-time interpreters. In practice most day-to-day business is transacted in either English, French or German and then translated, but it is true that English dominates to a considerable extent.

Lots of work still to do.
Etienne Ansotte/EPA

The preponderance of English has nothing to do with the influence of Britain or even Britain’s membership of the EU. Historically, the expansion of the British empire, the impact of the industrial revolution and the emergence of the US as a world power have embedded English in the language repertoire of speakers across the globe.

Unlike Latin, which outlived the Roman empire as the lingua franca of medieval and renaissance Europe, English of course has native speakers (who may be unfairly advantaged), but it is those who have learned English as a foreign language – “Euro-English” or “English as a lingua franca” – who now constitute the majority of users.

According to the 2012 Special Eurobarometer on Europeans and their Languages, English is the most widely spoken foreign language in 19 of the member states where it is not an official language. Across Europe, 38% of people speak English well enough as a foreign language to have a conversation, compared with 12% for French and 11% for German.

The report also found that 67% of Europeans consider English the most useful foreign language, and that the numbers favouring German (17%) or French (16%) have declined. As a result, 79% of Europeans want their children to learn English, compared to 20% for French and German.

Too much invested in English

Huge sums have been invested in English teaching by both national governments and private enterprise. As the demand for learning English has increased, so has the supply. English language learning worldwide was estimated to be worth US$63.3 billion (£47.5 billion) in 2012, and it is expected that this market will rise to US$193.2 billion (£145.6 billion) by 2017. The value of English for speakers of other languages is not going to diminish any time soon. There is simply too much invested in it.

Speakers of English as a second language outnumber first-language English speakers by 2:1 both in Europe and globally. For many Europeans, and especially those employed in the EU, English is a useful piece in a toolbox of languages to be pressed into service when needed – a point which was evident in a recent project on whether the use of English in Europe was an opportunity or a threat. So in the majority of cases using English has precisely nothing to do with the UK or Britishness. The EU needs practical solutions and English provides one.

English is unchallenged as the lingua franca of Europe. It has even been suggested that in some countries of northern Europe it has become a second rather than a foreign language. Jan Paternotte, D66 party leader in Amsterdam, has proposed that English should be decreed the official second language of that city.

English has not always held its current privileged status. French and German have both functioned as common languages for high-profile fields such as philosophy, science and technology, politics and diplomacy, not to mention Church Slavonic, Russian, Portuguese and other languages in different times and places.

We can assume that English will not maintain its privileged position forever. Those who benefit now, however, are not the predominantly monolingual British, but European anglocrats whose multilingualism provides them with a key to international education and employment.

Much about the EU may be about to change, but right now an anti-English language policy so dramatically out of step with practice would simply make the post-Brexit hangover more painful.

The Conversation

Andrew Linn, Pro-Vice-Chancellor and Dean of Social Sciences and Humanities, University of Westminster

This article was originally published on The Conversation. Read the original article.

How other languages can reveal the secrets to happiness



Tim Lomas, University of East London

The limits of our language are said to define the boundaries of our world. This is because in our everyday lives, we can only really register and make sense of what we can name. We are restricted by the words we know, which shape what we can and cannot experience.

It is true that sometimes we may have fleeting sensations and feelings that we don’t quite have a name for – akin to words on the “tip of our tongue”. But without a word to label these sensations or feelings they are often overlooked, never to be fully acknowledged, articulated or even remembered. And instead, they are often lumped together with more generalised emotions, such as “happiness” or “joy”. This applies to all aspects of life – and not least to that most sought-after and cherished of feelings, happiness. Clearly, most people know and understand happiness, at least vaguely. But they are hindered by their “lexical limitations” and the words at their disposal.

As English speakers, we inherit, rather haphazardly, a set of words and phrases to represent and describe our world around us. Whatever vocabulary we have managed to acquire in relation to happiness will influence the types of feelings we can enjoy. If we lack a word for a particular positive emotion, we are far less likely to experience it. And even if we do somehow experience it, we are unlikely to perceive it with much clarity, think about it with much understanding, talk about it with much insight, or remember it with much vividness.

Speaking of happiness

While this recognition is sobering, it is also exciting, because it means by learning new words and concepts, we can enrich our emotional world. So, in theory, we can actually enhance our experience of happiness simply through exploring language. Prompted by this enthralling possibility, I recently embarked on a project to discover “new” words and concepts relating to happiness.

I did this by searching for so-called “untranslatable” words from across the world’s languages – words for which no exact equivalent word or phrase exists in English. As such, they suggest the possibility that other cultures have stumbled upon phenomena that English-speaking places have somehow overlooked.

Perhaps the most famous example is “Schadenfreude”, the German term describing pleasure at the misfortunes of others. Such words pique our curiosity, as they appear to reveal something specific about the culture that created them – as if German people are potentially especially liable to feelings of Schadenfreude (though I don’t believe that’s the case).

Germans are no more likely to experience Schadenfreude than they are to drink steins of beer in Bavarian costume.
Kzenon/Shutterstock

However, these words actually may be far more significant than that. Consider the fact that Schadenfreude has been imported wholesale into English. Evidently, English speakers had at least a passing familiarity with this kind of feeling, but lacked the word to articulate it (although I suppose “gloating” comes close) – hence, the grateful borrowing of the German term. As a result, their emotional landscape has been enlivened and enriched, able to give voice to feelings that might previously have remained unconceptualised and unexpressed.

My research searched for these kinds of “untranslatable words” – ones that specifically related to happiness and well-being. And so I trawled the internet looking for relevant websites, blogs, books and academic papers, and gathered a respectable haul of 216 such words. Now, the list has expanded – partly due to the generous feedback of visitors to my website – to more than 600 words.

Enriching emotions

When analysing these “untranslatable words”, I divide them into three categories based on my subjective reaction to them. Firstly, there are those that immediately resonate with me as something I have definitely experienced, but just haven’t previously been able to articulate. For instance, I love the strange German noun “Waldeinsamkeit”, which captures that eerie, mysterious feeling that often descends when you’re alone in the woods.

A second group are words that strike me as somewhat familiar, but not entirely, as if I can’t quite grasp their layers of complexity. For instance, I’m hugely intrigued by various Japanese aesthetic concepts, such as “aware” (哀れ), which evokes the bitter-sweetness of a brief, fading moment of transcendent beauty. This is symbolised by the cherry blossom – and as spring bloomed in England I found myself reflecting at length on this powerful yet intangible notion.

Finally, there is a mysterious set of words which completely elude my grasp, but which for precisely that reason are totally captivating. These mainly hail from Eastern religions – terms such as “Nirvana” or “Brahman”, the latter translating roughly as the ultimate reality underlying all phenomena in the Hindu scriptures. It feels like it would require a lifetime of study to even begin to grasp the meaning – which is probably exactly the point of these types of words.

Now we can all ‘tepils’ like the Norwegians – that’s drink beer outside on a hot day, to you and me
Africa Studio/Shutterstock

I believe these words offer a unique window onto the world’s cultures, revealing diversity in the way people in different places experience and understand life. People are naturally curious about other ways of living, about new possibilities in life, and so are drawn to ideas – like these untranslatable words – that reveal such possibilities.

There is huge potential for these words to enrich and expand people’s own emotional worlds: with each of these words comes a tantalising glimpse into unfamiliar and new positive feelings and experiences. And at the end of the day, who wouldn’t be interested in adding a bit more happiness to their own lives?

The Conversation

Tim Lomas, Lecturer in Applied Positive Psychology, University of East London

This article was originally published on The Conversation. Read the original article.

Where have all the adult students gone?



The EFL industry in Spain enjoyed a mini boom during the early years of the global economic crisis, as many adult students rushed to improve their English language skills, either to get themselves back into the job market or to hang on to the job they had. As we reached the new decade, the boom slowed down and then started to tail off. But no one expected the sudden and significant drop in adult student numbers that hit the industry at the start of the current academic year.

The drop wasn’t school-, city-, or even region-specific; it was the same story all over Spain. And the numbers were eye-watering. Depending on who you talk to (and/or who you believe), adult student numbers fell by between 10% and 20% – enough to make any school owner or manager wince.

What happened? Where did all these students go? Well, as is normally the case, there is no single, simple answer. There has been a slight upturn in in-company teaching, so it may be that some students who were previously paying for their own courses in our schools are now studying in their company (if they’re fortunate enough to have a job in the first place; Spanish unemployment is still well over 20%).

The standard of English teaching in mainstream education is also getting better, slowly, so it may be that more school leavers are achieving a basic level of communicative competence.

Some adult students – especially the younger ones – may also have decided to switch from a traditional, bricks-and-mortar language school to a web-based classroom.

My own theory is that it’s the free movement of labour in the European Union that is having the greatest effect on our market. In other words, as there are so few jobs available in Spain, hundreds of thousands of young adults – many of whom may previously have been our students – have simply upped sticks and gone abroad to find work.

A recent survey conducted in the UK indicates that migrants from Spain rose to 137,000 in 2015 (up from 63,000 in 2011). Most of them are probably working in relatively unskilled jobs in hotels, bars and restaurants, but at least they’re working – and they’re improving their English language skills as they go.

A similar number probably emigrated to other countries in the north of Europe and another significant number emigrated to Latin America. Add up all these emigrants and we could be looking at a total of well over 300,000 migrants – just in 2015.

On a recent trip to Oxford I met a young Spanish guy, working in a hotel, who had previously been a student at our school in Barcelona. He’s a typical example. Would he ever move back to Spain, I asked him? Perhaps, in the future, he said, but only if the situation in Spain changes and he can find a decent job. His new fluency in English, learnt by living and working in Oxford, might just help him with that.

So where does that leave Spanish language schools? Will adult students come back to our schools in the same numbers as before? Probably not. But that doesn’t mean we have to give up on this market. If adult students won’t come to us, we can use the Internet to take our services to them. Even those living and working abroad.

 

This article was written by Jonathan Dykes. His blog can be found here: https://jonathandykesblog.wordpress.com/2016/06/10/where-have-all-the-adult-students-gone/