Beware the bad big wolf: why you need to put your adjectives in the right order


Simon Horobin, University of Oxford

Unlikely as it sounds, the topic of adjective use has gone “viral”. The furore centres on the claim, taken from Mark Forsyth’s book The Elements of Eloquence, that adjectives appearing before a noun must appear in the following strict sequence: opinion, size, age, shape, colour, origin, material, purpose, Noun. Even the slightest attempt to disrupt this sequence, according to Forsyth, will result in the speaker sounding like a maniac. To illustrate this point, Forsyth offers the following example: “a lovely little old rectangular green French silver whittling knife”.
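Forsyth's sequence can be treated as a simple sort key. Here is a minimal Python sketch of that idea – the category assignments in the lookup table are hand-made for this one example, not drawn from any real lexicon:

```python
# Forsyth's proposed sequence, read as a ranking of adjective categories.
CATEGORY_ORDER = ["opinion", "size", "age", "shape", "colour",
                  "origin", "material", "purpose"]

# Illustrative category assignments for the adjectives in Forsyth's example
# (hand-made for this sketch, not from a real lexicon).
CATEGORIES = {
    "lovely": "opinion", "little": "size", "old": "age",
    "rectangular": "shape", "green": "colour", "french": "origin",
    "silver": "material", "whittling": "purpose",
}

def forsyth_order(adjectives):
    """Sort adjectives by their category's position in Forsyth's sequence."""
    return sorted(adjectives,
                  key=lambda a: CATEGORY_ORDER.index(CATEGORIES[a.lower()]))

print(" ".join(forsyth_order(["green", "little", "old", "french",
                              "lovely", "silver", "rectangular", "whittling"])))
# → lovely little old rectangular green french silver whittling
```

Running it on the scrambled adjectives recovers Forsyth's order for the whittling-knife example.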


But is the “rule” worthy of an internet storm – or is it more of a ripple in a teacup? Well, the example is certainly a rather unlikely sentence, and not simply because whittling knives are not in much demand these days (leaving aside the question of whether a knife can be both green and silver). It is unlikely because it is unusual to have a string of attributive adjectives (ones that appear before the noun they describe) like this.

More usually, speakers of English break up the sequence by placing some of the adjectives in predicative position – after the noun. Not all adjectives, however, can be placed in either position. I can refer to “that man who is asleep” but it would sound odd to refer to him as “that asleep man”; we can talk about the “Eastern counties” but not the “counties that are Eastern”. Indeed, our distribution of adjectives both before and after the noun reveals another constraint on adjective use in English – a preference for no more than three before a noun. An “old brown dog” sounds fine, a “little old brown dog” sounds acceptable, but a “mischievous little old brown dog” sounds plain wrong.

Rules, rules, rules

Nevertheless, however many adjectives we choose to employ, they do indeed tend to follow a predictable pattern. While native speakers intuitively follow this rule, most are unaware that they are doing so; we agree that the “red big dog” sounds wrong, but don’t know why. To test this intuition, linguists have analysed large corpora of electronic data to see how frequently pairs of adjectives like “big red” are preferred to “red big”. The results confirm our native intuition, although the figures are not as overwhelming as we might expect – the rule accounts for 78% of the data.
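The corpus test described above boils down to counting how often each ordering of an adjective pair occurs. A toy illustration in Python – the three-sentence “corpus” is invented for demonstration, not real data:

```python
from collections import Counter

# A stand-in corpus; real studies use millions of words of electronic text.
corpus = [
    "the big red dog barked",
    "a big red balloon drifted by",
    "she bought a red big hat",   # the dispreferred order, for contrast
]

def pair_order_counts(corpus, adj1, adj2):
    """Count how often each ordering of two adjacent adjectives occurs."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for w1, w2 in zip(words, words[1:]):
            if (w1, w2) == (adj1, adj2):
                counts[f"{adj1} {adj2}"] += 1
            elif (w1, w2) == (adj2, adj1):
                counts[f"{adj2} {adj1}"] += 1
    return counts

print(pair_order_counts(corpus, "big", "red"))
# → Counter({'big red': 2, 'red big': 1})
```

Scaled up to a real corpus, ratios like this are what let linguists say the rule holds for 78% of the data rather than all of it.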

We know how to use them … without even being aware of it.

But while linguists have been able to confirm that there are strong preferences in the ordering of pairs of adjectives, no such statistics have been produced for longer strings. Consequently, while Forsyth’s rule appears to make sense, it remains an untested, hypothetical, large, sweeping (sorry) claim.

In fact, even if we stick to just two adjectives it is possible to find examples that appear to break the rule. The “big bad wolf” of fairy tale, for instance, shows the size adjective preceding the opinion one; similarly, “big stupid” is more common than “stupid big”. Examples like these are instead witness to the “Pollyanna Principle”, by which speakers prefer to present positive, or indifferent, values before negative ones.

Another problem with Forsyth’s proposed ordering sequence is that it makes no reference to other constraints that influence adjective order – such as what happens when we use two adjectives from the same category. Little Richard’s song “Long Tall Sally” would have sounded strange if he had called it “Tall Long Sally”, yet both are adjectives of size.

Definitely not Tall Long Sally.

Similarly, we might describe a meal as “nice and spicy” but never “spicy and nice” – reflecting a preference for the placement of general opinions before more specific ones. We also need to bear in mind the tendency for noun phrases to become lexicalised – forming words in their own right. Just as a blackbird is not any kind of bird that is black, a little black dress does not refer to any small black dress but one that is suitable for particular kinds of social engagement.

Since speakers view a “little black dress” as a single entity, its order is fixed; as a result, modifying adjectives must precede little – a “polyester little black dress”. This means that an adjective specifying its material appears before those referring to size and colour, once again contravening Forsyth’s rule.

Making sense of language

Of course, the rule is a fair reflection of much general usage – although the reasons behind this complex set of constraints on adjective order remain disputed. Some linguists have suggested that it reflects the “nouniness” of an adjective: since colour adjectives are commonly used as nouns – “red is my favourite colour” – they appear closest to the noun slot.

Another conditioning factor may be the degree to which an adjective reflects a subjective opinion rather than an objective description – therefore, subjective adjectives that are harder to quantify (boring, massive, middle-aged) tend to appear further away from the noun than more concrete ones (red, round, French).

Prosody – the rhythm and sound patterns of speech – is likely to play a role, too, as there is a tendency for speakers to place longer adjectives after shorter ones. But probably the most compelling theory links adjective position with semantic closeness to the noun being described; adjectives that are closely related to the noun in meaning, and are therefore likely to appear frequently in combination with it, are placed closest, while those that are less closely related appear further away.

In Forsyth’s example, it is the knife’s whittling capabilities that are most significant – distinguishing it from a carving, fruit or butter knife – while its loveliness is hardest to define (what are the standards for judging the loveliness of a whittling knife?) and thus most subjective. Whether any slight reorganisation of the other adjectives would really prompt your friends to view you as a knife-wielding maniac is harder to determine – but then, at least it’s just a whittling knife.

The Conversation

Simon Horobin, Professor of English Language and Literature, University of Oxford

This article was originally published on The Conversation. Read the original article.

Why it’s hard for adults to learn a second language


Brianna Yamasaki, University of Washington

As a young adult in college, I decided to learn Japanese. My father’s family is from Japan, and I wanted to travel there someday.

However, many of my classmates and I found it difficult to learn a language in adulthood. We struggled to connect new sounds and a dramatically different writing system to the familiar objects around us.

It wasn’t so for everyone. There were some students in our class who were able to acquire the new language much more easily than others.

So, what makes some individuals “good language learners”? And do such individuals have a “second language aptitude”?

What we know about second language aptitude

Past research on second language aptitude has focused on how people perceive sounds in a particular language and on more general cognitive processes such as memory and learning abilities. Most of this work has used paper-and-pencil and computerized tests to determine language-learning abilities and predict future learning.

Researchers have also studied brain activity as a way of measuring linguistic and cognitive abilities. However, much less is known about how brain activity predicts second language learning.

Is there a way to predict someone’s aptitude for learning a second language?

How does brain activity change while learning languages?

In a recently published study, Chantel Prat, associate professor of psychology at the Institute for Learning and Brain Sciences at the University of Washington, and I explored how brain activity recorded at rest – while a person is relaxed with their eyes closed – could predict the rate at which a second language is learned among adults who spoke only one language.

Studying the resting brain

Resting brain activity is thought to reflect the organization of the brain, and it has been linked to intelligence – the general ability to reason and solve problems.

We measured brain activity obtained from a “resting state” to predict individual differences in the ability to learn a second language in adulthood.

To do that, we recorded five minutes of eyes-closed resting-state electroencephalography, a method that detects electrical activity in the brain, in young adults. We also collected two hours of paper-and-pencil and computerized tasks.

We then had 19 participants complete eight weeks of French language training using a computer program. This software was developed by the U.S. armed forces with the goal of getting military personnel functionally proficient in a language as quickly as possible.

The software combined reading, listening and speaking practice with game-like virtual reality scenarios. Participants moved through the content in levels organized around different goals, such as being able to communicate with a virtual cab driver by finding out if the driver was available, telling the driver where their bags were and thanking the driver.


Nineteen adult participants (18-31 years of age) completed two 30-minute training sessions per week for a total of 16 sessions. After each training session, we recorded the level that each participant had reached. At the end of the experiment, we used that level information to calculate each individual’s learning rate across the eight-week training.

As expected, there was large variability in the learning rate, with the best learner moving through the program more than twice as quickly as the slowest learner. Our goal was to figure out which (if any) of the measures recorded initially predicted those differences.

A new brain measure for language aptitude

When we correlated our measures with learning rate, we found that patterns of brain activity that have been linked to linguistic processes predicted how easily people could learn a second language.

Patterns of activity over the right side of the brain predicted upwards of 60 percent of the differences in second language learning across individuals. This finding is consistent with previous research showing that the right half of the brain is more frequently used with a second language.

Our results suggest that the majority of the language learning differences between participants could be explained by the way their brains were organized before they even started learning.

Implications for learning a new language

Does this mean that if you, like me, don’t have a “quick second language learning” brain you should forget about learning a second language?

Not quite.

Language learning can depend on many factors.

First, it is important to remember that 40 percent of the difference in language learning rate remains unexplained. Some of this is certainly related to factors like attention and motivation, which are known to be reliable predictors of learning in general, and of second language learning in particular.

Second, we know that people can change their resting-state brain activity. So training may help to shape the brain into a state in which it is more ready to learn. This could be an exciting future research direction.

Second language learning in adulthood is difficult, but the benefits are large for those who, like myself, are motivated by the desire to communicate with others who do not speak their native tongue.

The Conversation

Brianna Yamasaki, Ph.D. Student, University of Washington

This article was originally published on The Conversation. Read the original article.

How the British military became a champion for language learning


Wendy Ayres-Bennett, University of Cambridge

When an army deploys in a foreign country, there are clear advantages if the soldiers are able to speak the local language or dialect. But what if your recruits are no good at other languages? In the UK, where language learning in schools and universities is facing a real crisis, the British army began to see this as a serious problem.

In a new report on the value of languages, my colleagues and I showcased how a new language policy instituted last year within the British Army was triggered by a growing appreciation of the risks that language shortages pose for national security.

Following the conflicts in Iraq and Afghanistan, the military sought to establish language skills as a core competence. Speakers of other languages are encouraged to take examinations to register their language skills, whether they are language learners or speakers of heritage or community languages.

The UK Ministry of Defence’s Defence Centre for Language and Culture also offers training to NATO standards across the four language skills – listening, speaking, reading and writing. Core languages taught are Arabic, Dari, Farsi, French, Russian, Spanish and English as a foreign language. Cultural training that provides regional knowledge and cross-cultural skills is still embryonic, but developing fast.

Cash incentives

There are two reasons why this is working. First, the change was directed by the vice chief of the defence staff, and therefore had a high-level champion. Second, there are financial incentives for army personnel to have their linguistic skills recorded, ranging from £360 for a lower-level western European language to £11,700 for a high-level, operationally vital linguist. Currently, any army officer must have a basic language skill to be able to command a sub-unit.

A British army sergeant visits a school in Helmand, Afghanistan.
Defence Images, CC BY-NC

We should not, of course, overstate the progress made. The numbers of Ministry of Defence linguists for certain languages, including Arabic, are still precariously low and, according to recent statistics, there are no speakers of Ukrainian or Estonian classed at level three or above in the armed forces. But, crucially, the organisational culture has changed and languages are now viewed as an asset.

Too fragmented

The British military’s new approach is a good example of how an institution can change the culture of the way it thinks about languages. It’s also clear that language policy can no longer simply be a matter for the Department for Education: champions for language both within and outside government are vital for issues such as national security.

This is particularly important because of the fragmentation of language learning policy within the UK government, despite an informal cross-Whitehall language focus group.

Experience on the ground illustrates the value of cooperation when it comes to security. For example, in January, the West Midlands Counter Terrorism Unit urgently needed a speaker of a particular language dialect to assist with translating communications in an ongoing investigation. The MOD was approached and was able to source a speaker within another department.

There is a growing body of research demonstrating the cost to business of the UK’s lack of language skills. Much less is known about their value to national security, defence and diplomacy, conflict resolution and social cohesion. Yet language skills have to be seen as an asset, and appreciation is needed across government for their wider value to society and security.

The Conversation

Wendy Ayres-Bennett, Professor of French Philology and Linguistics, University of Cambridge

This article was originally published on The Conversation. Read the original article.

How other languages can reveal the secrets to happiness


Tim Lomas, University of East London

The limits of our language are said to define the boundaries of our world. This is because in our everyday lives, we can only really register and make sense of what we can name. We are restricted by the words we know, which shape what we can and cannot experience.

It is true that sometimes we may have fleeting sensations and feelings that we don’t quite have a name for – akin to words on the “tip of our tongue”. But without a word to label these sensations or feelings they are often overlooked, never to be fully acknowledged, articulated or even remembered. And instead, they are often lumped together with more generalised emotions, such as “happiness” or “joy”. This applies to all aspects of life – and not least to that most sought-after and cherished of feelings, happiness. Clearly, most people know and understand happiness, at least vaguely. But they are hindered by their “lexical limitations” and the words at their disposal.

As English speakers, we inherit, rather haphazardly, a set of words and phrases to represent and describe our world around us. Whatever vocabulary we have managed to acquire in relation to happiness will influence the types of feelings we can enjoy. If we lack a word for a particular positive emotion, we are far less likely to experience it. And even if we do somehow experience it, we are unlikely to perceive it with much clarity, think about it with much understanding, talk about it with much insight, or remember it with much vividness.

Speaking of happiness

While this recognition is sobering, it is also exciting, because it means by learning new words and concepts, we can enrich our emotional world. So, in theory, we can actually enhance our experience of happiness simply through exploring language. Prompted by this enthralling possibility, I recently embarked on a project to discover “new” words and concepts relating to happiness.

I did this by searching for so-called “untranslatable” words from across the world’s languages – words for which no exact equivalent word or phrase exists in English. As such, they suggest the possibility that other cultures have stumbled upon phenomena that English-speaking cultures have somehow overlooked.

Perhaps the most famous example is “Schadenfreude”, the German term describing pleasure at the misfortunes of others. Such words pique our curiosity, as they appear to reveal something specific about the culture that created them – as if German people are potentially especially liable to feelings of Schadenfreude (though I don’t believe that’s the case).

Germans are no more likely to experience Schadenfreude than they are to drink steins of beer in Bavarian costume.

However, these words actually may be far more significant than that. Consider the fact that Schadenfreude has been imported wholesale into English. Evidently, English speakers had at least a passing familiarity with this kind of feeling, but lacked the word to articulate it (although I suppose “gloating” comes close) – hence, the grateful borrowing of the German term. As a result, their emotional landscape has been enlivened and enriched, able to give voice to feelings that might previously have remained unconceptualised and unexpressed.

My research searched for these kinds of “untranslatable words” – ones that relate specifically to happiness and well-being. I trawled the internet looking for relevant websites, blogs, books and academic papers, and gathered a respectable haul of 216 such words. The list has since expanded – partly thanks to the generous feedback of visitors to my website – to more than 600 words.

Enriching emotions

When analysing these “untranslatable words”, I divide them into three categories based on my subjective reaction to them. Firstly, there are those that immediately resonate with me as something I have definitely experienced, but just haven’t previously been able to articulate. For instance, I love the strange German noun “Waldeinsamkeit”, which captures that eerie, mysterious feeling that often descends when you’re alone in the woods.

A second group are words that strike me as somewhat familiar, but not entirely, as if I can’t quite grasp their layers of complexity. For instance, I’m hugely intrigued by various Japanese aesthetic concepts, such as “aware” (哀れ), which evokes the bitter-sweetness of a brief, fading moment of transcendent beauty. This is symbolised by the cherry blossom – and as spring bloomed in England I found myself reflecting at length on this powerful yet intangible notion.

Finally, there is a mysterious set of words which completely elude my grasp, but which for precisely that reason are totally captivating. These mainly hail from Eastern religions – terms such as “Nirvana” or “Brahman”, the latter of which translates roughly as the ultimate reality underlying all phenomena in the Hindu scriptures. It feels as if it would require a lifetime of study even to begin to grasp such meanings – which is probably exactly the point of these types of words.

Now we can all ‘utepils’ like the Norwegians – that’s drinking beer outside on a hot day, to you and me
Africa Studio/Shutterstock

I believe these words offer a unique window onto the world’s cultures, revealing diversity in the way people in different places experience and understand life. People are naturally curious about other ways of living, about new possibilities in life, and so are drawn to ideas – like these untranslatable words – that reveal such possibilities.

There is huge potential for these words to enrich and expand people’s own emotional worlds: with each of them comes a tantalising glimpse into unfamiliar, new positive feelings and experiences. And at the end of the day, who wouldn’t be interested in adding a bit more happiness to their own life?

The Conversation

Tim Lomas, Lecturer in Applied Positive Psychology , University of East London

This article was originally published on The Conversation. Read the original article.

Could early music training help babies learn language?


Christina Zhao, University of Washington

Growing up in China, I started playing piano when I was nine years old and learning English when I was 12. Later, when I was a college student, it struck me how similar language and music are to each other.

Language and music both require rhythm; otherwise they don’t make any sense. They’re also both built from smaller units – syllables and musical beats. And the process of mastering them is remarkably similar, including precise movements, repetitive practice and focused attention. I also noticed that my musician peers were particularly good at learning new languages.

All of this made me wonder if music shapes how the brain perceives sounds other than musical notes. And if so, could learning music help us learn languages?

Music experience and speech

Music training early in life (before the age of seven) can have a wide range of benefits beyond musical ability.

For instance, school-age children (six to eight years old) who took part in music classes for four hours each week over two years showed better brain responses to consonants than peers who started a year later. This suggests that musical experience helped the children hear speech sounds.

Music may have a range of benefits.
Breezy Baldwin, CC BY

But what about babies who aren’t talking yet? Can music training this early give babies a boost in the steps it takes to learn language?

The first year of life is the best time in the lifespan to learn speech sounds; yet no studies have looked at whether musical experience during infancy can improve speech learning.

I sought to answer this question with Patricia K. Kuhl, an expert in early childhood learning. We set out to study whether musical experience at nine months of age can help infants learn speech.

Nine months is within the peak period for infants’ speech sound learning. During this time, they’re learning to pay attention to the differences among the different speech sounds that they hear in their environment. Being able to differentiate these sounds is key for learning to speak later. A better ability to tell speech sounds apart at this age is associated with producing more words at 30 months of age.

Here is how we did our study

In our study, we randomly assigned 47 nine-month-old infants to either a music group or a control group; each infant then completed 12 15-minute sessions of activities designed for that group.

Babies in the music group sat with their parents, who guided them through the sessions by tapping out beats in time with the music with the goal of helping them learn a difficult musical rhythm.


Infants in the control group played with toy cars, blocks and other objects that required coordinated movements in social play, but without music.

After the sessions, we measured the babies’ brain responses to musical and speech rhythms using magnetoencephalography (MEG), a brain imaging technique.

New music and speech sounds were presented in rhythmic sequences, but the rhythms were occasionally disrupted by skipping a beat.

These rhythmic disruptions help us measure how well the babies’ brains were honed to rhythms. The brain gives a specific response pattern when detecting an unexpected change. A bigger response indicates that the baby was following rhythms better.

Babies in the music group had stronger brain responses to both music and speech sounds than babies in the control group. This shows that musical experience, as early as nine months of age, improved infants’ ability to process both musical and speech rhythms.

These skills are important building blocks for learning to speak.

Other benefits from music experience

Language is just one example of a skill that can be improved through music training. Music can help with social-emotional development, too. An earlier study by the researchers Tal-Chen Rabinowitch and Ariel Knafo-Noam showed that pairs of eight-year-olds who didn’t know each other reported feeling closer and more connected to one another after a short exercise of tapping out beats in sync with each other.

Music helps children bond better.

Another researcher, Laura Cirelli, showed that 14-month-old babies were more likely to show helping behaviors toward an adult after the babies had been bounced in sync with the adult who was also moving rhythmically.

There are many more exciting questions that remain to be answered as researchers continue to study the effects of music experience on early development.

For instance, does the music experience need to be in a social setting? Could babies get the benefits of music from simply listening to music? And, how much experience do babies need over time to sustain this language-boosting benefit?

Music is an essential part of being human. It has existed in human cultures for thousands of years, and it is one of the most fun and powerful ways for people to connect with each other. Through scientific research, I hope we can continue to reveal how music experience influences brain development and language learning of babies.

The Conversation

Christina Zhao, Postdoctoral Fellow, University of Washington

This article was originally published on The Conversation. Read the original article.

Norwegians using ‘Texas’ to mean ‘crazy’ actually isn’t so crazy

Laurel Stvan, University of Texas Arlington

If you haven’t heard by now, the American press recently picked up on an interesting linguistic phenomenon in Norway, where the word “Texas” is slang for “crazy.”

Indeed, it turns out that for several years Norwegians have used the word to describe a situation that is chaotic, out of control or excitingly unpredictable (The crowd at the concert last night was totally Texas!).

While this may seem like a bit of a stretch to many American English speakers, when examined through the lens of linguistics it’s actually a pretty natural extension of the word Texas.

How new meanings emerge

It’s fairly common for a word’s meaning to shift over time. Speakers will often use a word in a new way that applies to just one aspect of the term’s earlier connotations, and emphasizing this single aspect will eventually narrow the word’s meaning, depending on its context.

In fact, “crazy” itself is currently undergoing multiple meaning changes. Traditionally, it was used broadly to convey insane or aberrant thinking.

However, English speakers have since split apart these aspects, emphasizing just one to create a new meaning:

  • crazy = fast-paced, frantic (I’ve been crazy busy this week.)
  • crazy = bizarre, odd (Mustard on your taco? That’s crazy!)
  • crazy = dangerous, lethal (What’s your plan for when a crazy gunman breaks into your school?)

This last usage, in particular, has annoyed mental health activists. (Even though people with mental illness aren’t usually dangerous, expressions like “psycho killer” and “crazed gunman” continue to pair mental illness with violence.)

It’s the first meaning of crazy, however, that Norwegians are invoking when referring to situations that are “totally Texas”: the kind of crazy that is wild, frantic or chaotic.

You used to call me on my…handy?

The story of Norwegians using Texas also demonstrates that words don’t simply get refashioned within the same language; they can also be borrowed from one language into another, which often changes their meaning in the new setting. This, too, has a long history, visible wherever words cross geographic and cultural boundaries.

German Chancellor Angela Merkel texts on her ‘handy.’
Tobias Schwarz/Reuters

Beyond Texas, other English words have changed meaning when borrowed by other languages. For example, the Japanese now use the word feminisuto, adapted from feminist. In Japanese it means a chivalrous man, one who “does things like being polite to women.”

Another shift shows up in the word handy, which Germans borrowed from the English language. There, it refers to a cellphone.

Words can also change meaning when absorbed into English. For instance, poncho has become narrower in meaning. Borrowed from South American Spanish, it originally meant “woolen fabric”; now it describes a particular piece of clothing, often a plastic one used in the rain. And tycoon has shifted, too. Borrowed from Chinese (via Japanese), it originally meant “high official” or “great nobleman”; today it primarily describes a businessman who has made lots of money.

Dreams of the American West

The borrowing of words isn’t a modern phenomenon. According to Diane Nicholls of MacMillan English Dictionaries, it often takes place when “different language communities come into contact with each other.”

And settlers did come from Norway to Texas. The town of Clifton, Texas, where a third of the population is of Norwegian descent, has been dubbed the Norwegian Capital of Texas. (However, this New World outpost of Norway uses a different, older dialect of Norwegian, so Texans from Clifton are unfamiliar with this new bit of slang.)

It turns out that communities can come into contact in ways that are not actually physical. In the case of Norwegians’ use of Texas, the borrowing may originate not from physical contact but from cultural aspiration. In fact, throughout much of Europe the image of the American Wild West taps into a set of beliefs (perhaps stereotypical or false) about the apparent freedom and lawlessness of the West during the 19th century.

A cowboy-themed amusement park in Sweden.
Naomi Harris/Feature Shoot

These enthusiastic ideas about American frontier life can be seen in places like Sweden’s Wild West theme parks. And Germany has been fascinated with cowboys and the American frontier from as early as the 1890s, when Buffalo Bill Cody toured Germany. Twentieth-century movies, novels and TV shows continue to promote myths of the Wild West, while prominently featuring Texas.

Ultimately, the Norwegian use of Texas makes sense because it follows some recognized linguistic principles: it indicates the narrowing of meaning over time, it reflects a change in meaning when applied to a new cultural context, and it represents a glamorized (if stereotypical) view of another culture.

So why did Norwegians settle on the term Texas to describe something fast-paced and frantic?

Given the portrayal of Texas in 19th- and 20th-century popular culture, they’d be crazy not to.

The Conversation

Laurel Stvan, Associate Professor and Chair of Linguistics, University of Texas Arlington

This article was originally published on The Conversation. Read the original article.

Here’s how much public funding for university students varies across the UK


David Eiser, University of Stirling

An English student, a Scottish student, a Welsh student and a Northern Irish student walk into a bar. Who should buy the round? Across the four territories of the UK, different types and levels of grant and loan are available to students. This means that students from some parts of the UK are getting more support from their devolved government than others.

A new report by London Economics for the University and College Union has benchmarked this funding per student across the UK. It has done this for different types of student – such as full-time or part-time and undergraduate or post-graduate – and by funding source. The fact that the funding systems are so different makes this kind of comparison tricky, particularly as there are differences in the type and level of data that is available on each education system.

For full-time undergraduate students at university, there was a wide variation in the total level of resource per student in 2013-14, ranging from £13,983 in England to £11,310 in Scotland, with £13,441 in Wales and £11,358 in Northern Ireland. This includes funding from all sources, including government grants paid to universities directly for teaching and research, and grants and loans paid to students to fund tuition and maintenance.

The total level of funding per university student is higher in England than in Scotland because of the higher tuition fees in England that students are expected to pay themselves. But the fees paid by students in England are largely supported by tuition fee loans from the government to students. Since a proportion of these loans will never be repaid, calculating the total level of public funding per student requires assumptions about how large that unrepaid proportion will be.

Who gets what

Because students only have to repay their loans once they earn above a particular threshold, a proportion of each loan will never be paid back. The proportion that is not expected to be repaid is known as the RAB (Resource Accounting and Budgeting) charge.

Assumptions about the RAB charge are critical in determining the value of the total public subsidy per university student, and the public-private split in total funding. The assumed RAB charges used in the report vary depending on the type and value of loan. RAB charges for tuition fee loans in England, for example, are assumed to be 45% (implying that 45% of the total value of student loans will not be repaid), whereas the RAB charge is lower for Northern Irish and Welsh students, who face lower tuition fee caps and therefore borrow less.
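The arithmetic behind this is simple enough to sketch. The snippet below is purely illustrative: the 45% RAB assumption comes from the report, but the per-student grant and loan figures are hypothetical round numbers, not figures from the report.

```python
# Illustrative sketch of the RAB-charge calculation described above.
# Only the 45% English RAB assumption comes from the text; the per-student
# grant and loan amounts below are made-up round numbers.

def public_funding_per_student(direct_grants, loans_issued, rab_charge):
    """Public cost = direct grants plus the share of loans never repaid."""
    return direct_grants + rab_charge * loans_issued

# Hypothetical per-student figures (GBP)
direct_grants = 2_000   # teaching/research grants paid straight to the university
loans_issued = 12_000   # tuition fee and maintenance loans advanced to the student
rab_charge = 0.45       # assumed share of loan value never repaid (England)

public = public_funding_per_student(direct_grants, loans_issued, rab_charge)
total = direct_grants + loans_issued
print(f"Public funding per student: £{public:,.0f}")          # £7,400
print(f"Public share of total resource: {public / total:.0%}")  # 53%
```

A lower RAB charge (as assumed for Welsh and Northern Irish loans) shrinks the public contribution without changing the total resource, which is why the public-private split varies so much across the four nations.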

Once RAB charges for fee and maintenance loan repayment are factored in, as the graph above shows, the level of public funding per full-time undergraduate is broadly similar in England, Wales and Scotland – £8,900, £9,500 and £9,000 respectively – although lower in NI where it is £7,700. But the proportion of this funding provided per full-time undergraduate by the public sector varies from 80% in Scotland to 63% in England.

The report also highlights discrepancies between the level of resources available to those students domiciled in each of the four countries who decide to go and study in another country. For example, relatively few Scottish students choose to study in England, given the increase in costs to them of doing so, and the fact that relatively little support funding is available to them if they do.

In contrast, English students face no differential cost in where they enrol. The result, according to the report, is that approximately £10m of student support resources flow from Scotland to England, while £73m of resources flow from England to Scotland.

Learning from different policies

The report highlights a number of funding inequities, particularly in terms of the total resource per full-time undergraduate. In itself, however, it is not immediately apparent that such differences are undesirable. After all, the objective of devolution is to allow devolved governments to allocate spending as they (and their electorates) see fit.

But what is clear is that balancing the need to maintain (or increase) total funding for higher education, while ensuring accessibility to university by students from all backgrounds, is likely to become increasingly challenging.

Important questions to be debated include where the public-private funding split should lie across the UK, and how this split should be distributed across students of different financial means. There’s also a question of whether private contributions should be paid upfront (as they currently are through loans) or, for example, through a higher graduate tax payable throughout graduates’ working lives.

The core case for devolving policy responsibility in areas such as education is that it enables the devolved governments to tailor policy to the needs and preferences of their electorates. The resulting variation in policies informs understanding of which approaches work best under different circumstances, enabling “policy learning” to take place more quickly than if policy approaches were uniform. We need more comparative benchmarking studies such as this one if we are to benefit from the “policy learning” that devolution is hoped to provide.

The Conversation

David Eiser, Research Fellow, Economics, University of Stirling

This article was originally published on The Conversation. Read the original article.

Why 1904 testing methods should not be used for today’s students


Robert Sternberg, Cornell University

When I was an elementary school student, schools in my hometown administered IQ tests every couple of years. I felt very scared of the psychologist who came in to give those tests.

I also performed terribly. As a result, at one point, I was moved to a lower-grade classroom so I could take a test more suitable to my IQ level.

Consequently, I believed that my teachers considered me stupid. I, of course, thought I was stupid. I also thought my teachers expected low-quality work from a child of such low IQ. So, I gave them what they expected.

Had it not been for my fourth grade teacher, who thought there was more to a person than an IQ test score, I almost certainly would not be a professor today.

You might think things have gotten better. Not quite. I have two generations of children (from different marriages), and something similar happened to both my sons: Seth, age 36, now a successful entrepreneur in Silicon Valley, and Sammy, age four.

Some children as young as Sammy take preschool tests. And almost all our students – at least those wanting to go on to college – take what one might call proxies for IQ tests – the SAT and ACT – which are IQ tests by another name.

Testing is compromising the future of many of our able students. Today’s testing comes at the expense of validity (strong prediction of future success), equity (ensuring that members of various groups have an equal shot), and common sense in identifying those students who think deeply and reflectively rather than those who are good at answering shallow multiple-choice questions.

How should today’s students be assessed?

Intelligence tests in Halloween costumes

Psychology professor Douglas Detterman and his colleagues have shown that the SAT and the ACT are little more than disguised IQ tests.

They may look slightly different from IQ tests, but they closely resemble the intelligence tests developed by Charles Spearman (1904) and by Alfred Binet and Theodore Simon (1916), famous psychologists in Great Britain and France, respectively, who created the first IQ tests a century ago.

While these tests may have been at the cutting edge at the turn of the 20th century, today they are archaic. Imagine using medical tests designed at the beginning of the 20th century to diagnose, say, cancer or heart disease.

Multiple choice questions don’t teach life skills.
biologycorner, CC BY-NC

People’s success today scarcely hinges on solving simple, pat academic problems with unique solutions conveniently presented as multiple-choice options.

When your kids (or colleagues) misbehave, does anyone give you five options, one of which is uniquely correct, to solve the problem of how to get them to behave properly?

Or, are there any multiple-choice answers for how to solve serious crises, whether in international affairs (eg, in Syria), in business (eg, at Volkswagen) or in education (eg, skyrocketing college tuitions)?

How do we test for success?

The odd thing is that we can do much better. That would mean taking into account that academic and life success involves much more than IQ.

In my research conducted with my colleagues who include Florida State University professor Richard Wagner and a former professor at the US Military Academy at West Point, George Forsythe, we found that success in managerial, military and other leadership jobs can be predicted independent of IQ levels.

More generally, we have found that practical intelligence, or common sense, is itself largely independent of IQ. Moreover, my research with Todd Lubart, now a professor at the University of Paris V, has shown that creative intelligence also is distinct from IQ.

My colleagues and I, including Professor Elena Grigorenko at Yale, have shown in studies on five continents that children from diverse cultures, such as Yup’ik Eskimos in Alaska, Latino-American students in San Jose, California, and rural Kenyan schoolchildren, may have practical adaptive skills that far exceed those of their teachers (such as how to hunt in the frozen tundra, ice-fish, or treat parasitic illnesses such as malaria with natural herbal medicines).

Yet teachers – and IQ tests – may view these children as intellectually challenged.

What are we testing, anyway?

Our theory of “successful intelligence” can help predict the academic, extracurricular and leadership success of college students. In addition, it could increase applications from qualified applicants and decrease differences among ethnic groups, such as between African-American and Euro-American students, that are found in the SAT/ACT.

The idea behind “successful intelligence” is not only to measure analytical skills, as is done by the SAT/ACT, but also other skills that are important to college and life success. Although this does mean additional testing, these are assessments of strength-based skills that are actually fun to take.

What are these other skills and assessments, exactly?

The truth is, you can’t get by in life only on analytical skills – you also need to come up with your own new ideas (creativity), know how to apply your ideas (practical common sense), and ensure they benefit others beside yourself (wisdom).

So, assessments of “successful intelligence” would measure creativity, common sense and wisdom/ethics, in addition to analytical skills, as measured by the SAT/ACT.

Here is how measurement of successful intelligence works:

Creative skills can be measured by having students write or tell a creative story, design a scientific experiment, draw something new, caption a cartoon or suggest what the world might be like today if some past event (such as the defeat of the Nazis in World War II) had turned out differently.

Practical skills can be measured by having students watch several videos of college students facing practical problems – and then solving the problems for the students in the videos, or by having students comment on how they persuaded a friend of some ideas that the friend did not initially accept.

Wisdom-based and ethical skills can be measured by problems such as what to do upon observing a student cheating, or commenting on how one could, in the future, make a positive and meaningful difference to the world, at some level.

A new way to test

My collaborators and I first tested our ideas between 2000 and 2005 when I was IBM professor of psychology and education and professor of management at Yale. We found (in our “Rainbow Project”) that we could double prediction of freshman-year grades over that obtained from the SAT.

Also, relative to the SAT, we reduced by more than half ethnic-group differences between Euro-Americans, Asian-Americans, African-Americans, Latino-Americans and American Indians.

Later, in 2011, I collaborated with Lee Coffin, dean of undergraduate admissions at Tufts University, on a project called Kaleidoscope. At the time, I was dean of arts and sciences at Tufts. Kaleidoscope was optional for all undergraduate applicants to Tufts; tens of thousands completed it over the years.

We increased prediction not only of academic success, but also of extracurricular and leadership success, while greatly reducing ethnic-group differences.

Later, as provost and senior vice president of Oklahoma State University (OSU), I collaborated with Kyle Wray, VP for enrollment management, to implement a similar program at OSU (called the “Panorama Project”) that was also available to all applicants.

The measures are still being used at Tufts and at Oklahoma State. These projects have resulted in students being admitted to Tufts and OSU who never would have made it on the basis of their high school GPAs and SAT scores.

On our assessments, the students displayed potential that was hidden by traditional standardized tests and even by high school grades.

The problem of being stuck

So why don’t colleges move on?

There are several reasons, but the most potent is sheer inertia and fear of change.

College and university presidents and admissions deans around the country have revealed to me in informal conversations that they want change but are afraid to rock the boat.

There are other ways of testing kids.
BarbaraLN, CC BY-SA

Moreover, because the SAT, unlike our assessment, is highly correlated with socioeconomic status, colleges like it. College tuition brings in big money, and anything that could affect the dollars is viewed with fear. Students who do well on standardized tests are more likely to be full-pay students, an attraction to institutions of higher learning.

As I know only too well, colleges mostly do what they did before, and changes often require approval of many different stakeholders. The effort to effect change can be daunting.

Finally, there is the problem of self-fulfilling prophecy. We use conventional standardized tests to select students. We then give those high-scoring students better opportunities not only in college but for jobs in our society.

As a result, the tests often make their predictions come true. Given my family history, I know all too well how real the problem of self-fulfilling prophecies is.

The Conversation

Robert Sternberg, Professor of Human Development, Cornell University

This article was originally published on The Conversation. Read the original article.

Signs of our times: why emoji can be even more powerful than words


Vyvyan Evans, Bangor University

Each year, Oxford Dictionaries – one of the world’s leading arbiters on the English language – selects a word that has risen to prominence over the past 12 months as its “Word of the Year”. The word is carefully chosen, based on a close analysis of how often it is used and what it reveals about the times we live in. Past examples include such classics as “vape”, “selfie” and “omnishambles”.

But the 2015 word of the year is not a word at all. It’s an emoji – the “face with tears of joy” emoji, to be precise.

Formerly regarded with disdain as the textual equivalent of an adolescent grunt, emoji, it appears, has now gone mainstream. Even if it’s not a fully-fledged language, then it is – at the very least – something that most of us use, most of the time. In fact, more than 80% of all adult smartphone users in the UK regularly use emoji, a finding based on a study I reported in an earlier article.

Yet predictably, Oxford Dictionaries’ selection has raised eyebrows in some quarters. Writing in The Guardian, Hannah Jane Parkinson brands the decision “ridiculous”. For Parkinson, and I’m sure for many other language mavens out there, it’s “ridiculous” because the emoji is not even a word. Surely this is a stunt, they’ll say, dreamt up by clever marketing executives bent on demonstrating just how hip Oxford Dictionaries actually is.

But Parkinson also objects on the basis that there are many other emojis which would make a better word of the year. She suggests the nail painting emoji and the aubergine (or eggplant) emoji as just two examples which have a stronger claim to the title.

Missing the point

But both these complaints miss the point. Emoji – from the Japanese meaning “picture character” (a word which only entered the Oxford Dictionaries in 2013) – is in many respects language-like. Spoken or signed language enables us to convey a message, influence the mental states and behaviours of others and enact changes to our civil and social status. We use language to propose marriage and confirm it, to quarrel, make up and get divorced. Yet emoji has similar functions – it can even get you arrested!

Consider an unusual case from earlier this year: a 17-year-old African American teenager posted a public status update on his Facebook page, featuring a police officer emoji with handgun emojis pointing towards it. This landed him in hot water: the New York District Attorney issued an arrest warrant, for an alleged “terroristic threat”, claiming that the emojis amounted to a threat to harm, or incite others to cause harm, to New York’s finest.

A grand jury ultimately declined to indict the teenager for what is arguably the world’s first alleged emoji terror offence. But the point is that emojis, like language, can both convey a message and provide a means of enacting it – in this case, an alleged call to arms against the NYPD.

Like our treasured English words, emojis are powerful instruments of thought and, potentially, persuasion. Just like language, they can and will be used as evidence against you in a court of law. In short, those who dismiss the language-like nature of emoji fundamentally misunderstand how human communication works in our brave new digital world.

Evolution of the emoji

The second complaint – that there are other emojis more deserving of Oxford Dictionaries’ esteem – also misunderstands how language is evolving in the digital domain.

Emoji perfection

For one thing, recent research suggests that just under 60% of the world’s daily emoji use is made up of smiling or sad faces, of various kinds. And this particular emoji now accounts for around 20% of all emoji usage in the UK (representing a fourfold increase in use over the past 12 months). It is arguably one of the most frequently used emojis today. In this sense, the “face with tears of joy” emoji is a perfectly appropriate representation of the main ways we use emoji in our everyday digital lives.

Yet this specific emoji is apt for a deeper reason, too. Emoji is to text-speak what intonation, facial expression and body language are to spoken interaction. While emoji are not conventional words, they nevertheless provide an important contextualisation cue, which enables us to punctuate the otherwise emotionally arid landscape of digital text with personal expression.

Importantly, emoji helps us to elicit empathy from the person we’re addressing – a central requirement of effective communication. It allows us to influence the way our text is interpreted and better express our emotional selves.

One could even argue that, in some ways, emojis are more powerful than words. The “laughing face with tears of joy” emoji effectively conveys a complex emotional spectrum – which would otherwise require several words to convey – in a single, relatively simple glyph. It manages to evoke an immediate emotional resonance, which might otherwise be lost in a string of words.

Occasionally, emoji can even replace words – this is what linguists refer to as code-switching. In more extreme examples – such as translations of literary works like Alice in Wonderland – they function exclusively as words and are even given grammatical structure. There’s truly no arguing with the expressive power of emoji.

So while some will unkindly accuse Oxford Dictionaries of a marketing stunt, I applaud them. We are increasingly living in an age of emoji: they are, quite literally, a sign of our times. There’s no doubt that language is here to stay – the great English word is not in peril, and won’t be any time soon. But emoji fills a gap in digital communication – and makes us better at it in the process.

The Conversation

Vyvyan Evans, Professor of Linguistics, Bangor University

This article was originally published on The Conversation. Read the original article.

What will the English language be like in 100 years?


Simon Horobin, University of Oxford

One way of predicting the future is to look back at the past. The global role English plays today as a lingua franca – used as a means of communication by speakers of different languages – has parallels in the Latin of pre-modern Europe.

Having been spread by the success of the Roman Empire, Classical Latin was kept alive as a standard written medium throughout Europe long after the fall of Rome. But the Vulgar Latin used in speech continued to change, forming new dialects, which in time gave rise to the modern Romance languages: French, Spanish, Portuguese, Romanian and Italian.

Similar developments may be traced today in the use of English around the globe, especially in countries where it functions as a second language. New “interlanguages” are emerging, in which features of English are mingled with those of other native tongues and their pronunciations.

Despite the Singaporean government’s attempts to promote the use of Standard British English through the Speak Good English Movement, the mixed language known as “Singlish” remains the variety spoken on the street and in the home.

Spanglish, a mixture of English and Spanish, is the native tongue of millions of speakers in the United States, suggesting that this variety is emerging as a language in its own right.

Meanwhile, automatic translation software such as Google Translate will continue to improve, and may eventually replace English as the preferred means of communication in the boardrooms of international corporations and government agencies.

So the future for English is one of multiple Englishes.

Looking back to the early 20th century, it was the Standard English used in England, spoken with the accent known as “Received Pronunciation”, that carried prestige.

But today the largest concentration of native speakers is in the US, and the influence of US English can be heard throughout the world: can I get a cookie, I’m good, did you eat, the movies, “skedule” rather than “shedule”. In the future, to speak English will be to speak US English.

US spellings such as disk and program are already preferred to British equivalents disc and programme in computing. The dominance of US usage in the digital world will lead to the wider acceptance of further American preferences, such as favorite, donut, dialog, center.

What is being lost?

In the 20th century, it was feared that English dialects were dying out with their speakers. Projects such as the Survey of English Dialects (1950-61) were launched at the time to collect and preserve endangered words before they were lost forever. A similar study undertaken by the BBC’s Voices Project in 2004 turned up a rich range of local accents and regional terms which are available online, demonstrating the vibrancy and longevity of dialect vocabulary.

But while numerous dialect words were collected for “young person in cheap trendy clothes and jewellery” – pikey, charva, ned, scally – the word chav was found throughout England, demonstrating how features of the Estuary English spoken in the Greater London area are displacing local dialects, especially among younger generations.

The turn of the 20th century was a period of regulation and fixity – the rules of Standard English were established and codified in grammar books and in the New (Oxford) English Dictionary on Historical Principles, published as a series of volumes from 1884-1928. Today we are witnessing a process of de-standardisation, and the emergence of competing norms of usage.

In the online world, attitudes to consistency and correctness are considerably more relaxed: variant spellings are accepted and punctuation marks omitted, or repurposed to convey a range of attitudes. Research has shown that in electronic discourse exclamation marks can carry a range of exclamatory functions, including apologising, challenging, thanking, agreeing, and showing solidarity.

Capital letters are used to show anger, misspellings convey humour and establish group identity, and smiley-faces or emoticons express a range of reactions.

Getting shorter

Some have questioned whether the increasing development and adoption of emoji pictograms, which allow speakers to communicate without the need for language, mean that we will cease to communicate in English at all? 😉

The fast-changing world of social media is also responsible for the coining and spreading of neologisms, or “new words”. Recent updates to Oxford Dictionaries give a flavour: mansplaining, awesomesauce, rly, bants, TL;DR (too long; didn’t read).

How Oxford Dictionaries choose which new words to include.

Clipped forms, acronyms, blends and abbreviations have long been productive methods of word formation in English (think of bus, smog and scuba) but the huge increase in such coinages means that they will be far more prominent in the English of 2115.

Whether you 👍 or h8 such words, think they are NBD or meh, they are undoubtedly here to stay.

The Conversation

Simon Horobin, Professor of English Language and Literature, University of Oxford

This article was originally published on The Conversation. Read the original article.