Beware the bad big wolf: why you need to put your adjectives in the right order



Simon Horobin, University of Oxford

Unlikely as it sounds, the topic of adjective use has gone “viral”. The furore centres on the claim, taken from Mark Forsyth’s book The Elements of Eloquence, that adjectives appearing before a noun must appear in the following strict sequence: opinion, size, age, shape, colour, origin, material, purpose, Noun. Even the slightest attempt to disrupt this sequence, according to Forsyth, will result in the speaker sounding like a maniac. To illustrate this point, Forsyth offers the following example: “a lovely little old rectangular green French silver whittling knife”.


But is the “rule” worthy of an internet storm – or is it more of a ripple in a teacup? Well, the example is certainly a rather unlikely sentence, and not simply because whittling knives are not in much demand these days (setting aside the question of whether a knife can be both green and silver). It is unlikely because it is unusual to have a string of attributive adjectives (ones that appear before the noun they describe) like this.

More usually, speakers of English break up the sequence by placing some of the adjectives in predicative position – after the noun. Not all adjectives, however, can be placed in either position. I can refer to “that man who is asleep” but it would sound odd to refer to him as “that asleep man”; we can talk about the “Eastern counties” but not the “counties that are Eastern”. Indeed, our distribution of adjectives both before and after the noun reveals another constraint on adjective use in English – a preference for no more than three before a noun. An “old brown dog” sounds fine, a “little old brown dog” sounds acceptable, but a “mischievous little old brown dog” sounds plain wrong.

Rules, rules, rules

Nevertheless, however many adjectives we choose to employ, they do indeed tend to follow a predictable pattern. While native speakers intuitively follow this rule, most are unaware that they are doing so; we agree that the “red big dog” sounds wrong, but don’t know why. To test this intuition, linguists have analysed large corpora of electronic data to see how frequently pairs of adjectives like “big red” are preferred to “red big”. The results confirm our native intuition, although the preference is not as clear-cut as we might expect – the rule accounts for 78% of the data.
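To make the method concrete, here is a minimal sketch, in Python, of the kind of corpus count involved: tally how often each ordering of an adjacent adjective pair occurs, then work out which order is preferred. The adjective set and the toy “corpus” are purely illustrative, not the data or tools the researchers actually used.

```python
from collections import Counter

# Illustrative adjective set; a real study would use a part-of-speech-tagged
# corpus and a much larger lexicon.
ADJECTIVES = {"big", "red", "little", "old", "brown"}

def pair_order_counts(tokens):
    """Count adjacent adjective pairs, keyed by the order in which they occur."""
    counts = Counter()
    for first, second in zip(tokens, tokens[1:]):
        if first in ADJECTIVES and second in ADJECTIVES:
            counts[(first, second)] += 1
    return counts

def preference(counts, a, b):
    """Share of occurrences in which adjective a precedes adjective b."""
    ab, ba = counts[(a, b)], counts[(b, a)]
    return ab / (ab + ba) if (ab + ba) else None

# Usage with a toy "corpus":
tokens = "the big red dog chased a red big ball past the big red barn".split()
print(preference(pair_order_counts(tokens), "big", "red"))  # 0.67 - "big red" preferred
```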

We know how to use them … without even being aware of it.
Shutterstock

But while linguists have been able to confirm that there are strong preferences in the ordering of pairs of adjectives, no such statistics have been produced for longer strings. Consequently, while Forsyth’s rule appears to make sense, it remains an untested, hypothetical, large, sweeping (sorry) claim.

In fact, even if we stick to just two adjectives it is possible to find examples that appear to break the rule. The “big bad wolf” of fairy tale, for instance, shows the size adjective preceding the opinion one; similarly, “big stupid” is more common than “stupid big”. Examples like these are instead witness to the “Pollyanna Principle”, by which speakers prefer to present positive, or indifferent, values before negative ones.

A further problem with Forsyth’s proposed ordering sequence is that it makes no reference to other constraints that influence adjective order, such as what happens when two adjectives fall into the same category. Little Richard’s song “Long Tall Sally” would have sounded strange if he had called it “Tall Long Sally”, but these are both adjectives of size.

Definitely not Tall Long Sally.

Similarly, we might describe a meal as “nice and spicy” but never “spicy and nice” – reflecting a preference for the placement of general opinions before more specific ones. We also need to bear in mind the tendency for noun phrases to become lexicalised – forming words in their own right. Just as a blackbird is not any kind of bird that is black, a little black dress does not refer to any small black dress but one that is suitable for particular kinds of social engagement.

Since speakers view a “little black dress” as a single entity, its internal order is fixed; as a result, any modifying adjective must precede “little” – a “polyester little black dress”. This means that an adjective specifying the material appears before those referring to size and colour, once again contravening Forsyth’s rule.

Making sense of language

Of course, the rule is a fair reflection of much general usage – although the reasons behind this complex set of constraints in adjective order remain disputed. Some linguists have suggested that it reflects the “nouniness” of an adjective; since colour adjectives are commonly used as nouns – “red is my favourite colour” – they appear closest to the noun slot.

Another conditioning factor may be the degree to which an adjective reflects a subjective opinion rather than an objective description – therefore, subjective adjectives that are harder to quantify (boring, massive, middle-aged) tend to appear further away from the noun than more concrete ones (red, round, French).

Prosody, the rhythm and sound of poetry, is likely to play a role, too – as there is a tendency for speakers to place longer adjectives after shorter ones. But probably the most compelling theory links adjective position with semantic closeness to the noun being described; adjectives that are closely related to the noun in meaning, and are therefore likely to appear frequently in combination with it, are placed closest, while those that are less closely related appear further away.

In Forsyth’s example, it is the knife’s whittling capabilities that are most significant – distinguishing it from a carving, fruit or butter knife – while its loveliness is hardest to define (what are the standards for judging the loveliness of a whittling knife?) and thus most subjective. Whether any slight reorganisation of the other adjectives would really prompt your friends to view you as a knife-wielding maniac is harder to determine – but then, at least it’s just a whittling knife.


Simon Horobin, Professor of English Language and Literature, University of Oxford

This article was originally published on The Conversation. Read the original article.

Why it’s hard for adults to learn a second language



Brianna Yamasaki, University of Washington

As a young adult in college, I decided to learn Japanese. My father’s family is from Japan, and I wanted to travel there someday.

However, many of my classmates and I found it difficult to learn a language in adulthood. We struggled to connect new sounds and a dramatically different writing system to the familiar objects around us.

It wasn’t so for everyone. There were some students in our class who were able to acquire the new language much more easily than others.

So, what makes some individuals “good language learners?” And do such individuals have a “second language aptitude?”

What we know about second language aptitude

Past research on second language aptitude has focused on how people perceive sounds in a particular language and on more general cognitive processes such as memory and learning abilities. Most of this work has used paper-and-pencil and computerized tests to determine language-learning abilities and predict future learning.

Researchers have also studied brain activity as a way of measuring linguistic and cognitive abilities. However, much less is known about how brain activity predicts second language learning.

Is there a way to predict the aptitude of second language learning?

How does brain activity change while learning languages?
Brain image via www.shutterstock.com

In a recently published study, Chantel Prat, associate professor of psychology at the Institute for Learning and Brain Sciences at the University of Washington, and I explored how brain activity recorded at rest – while a person is relaxed with their eyes closed – could predict the rate at which a second language is learned among adults who spoke only one language.

Studying the resting brain

Resting brain activity is thought to reflect the organization of the brain and it has been linked to intelligence, or the general ability used to reason and problem-solve.

We measured brain activity obtained from a “resting state” to predict individual differences in the ability to learn a second language in adulthood.

To do that, we recorded five minutes of eyes-closed resting-state electroencephalography, a method that detects electrical activity in the brain, in young adults. We also had them complete two hours of paper-and-pencil and computerized tasks.
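For readers curious what can be extracted from such a recording, here is a minimal sketch, using simulated data, of one common way to summarise resting-state EEG: average spectral power in standard frequency bands, estimated with Welch’s method. The sampling rate, band boundaries and random signal below are assumptions for illustration; the measures used in the study itself may differ.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed for illustration)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 50)}

def band_power(signal, fs=FS):
    """Mean spectral power per frequency band for one EEG channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 2-second windows
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Usage with five minutes of simulated single-channel "EEG":
rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 60 * 5)
print(band_power(eeg))
```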

We then had 19 participants complete eight weeks of French language training using a computer program. This software was developed by the U.S. armed forces with the goal of getting military personnel functionally proficient in a language as quickly as possible.

The software combined reading, listening and speaking practice with game-like virtual reality scenarios. Participants moved through the content in levels organized around different goals, such as being able to communicate with a virtual cab driver by finding out if the driver was available, telling the driver where their bags were and thanking the driver.

Here’s a video demonstration:

Nineteen adult participants (18-31 years of age) completed two 30-minute training sessions per week for a total of 16 sessions. After each training session, we recorded the level that each participant had reached. At the end of the experiment, we used that level information to calculate each individual’s learning rate across the eight-week training.
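One straightforward way to turn those per-session level records into a single learning rate is to fit a line of level against session number and take its slope. The sketch below, with made-up numbers, illustrates that idea; the exact definition used in the study may differ.

```python
import numpy as np

def learning_rate(levels):
    """Slope of program level against session number (levels gained per session)."""
    sessions = np.arange(1, len(levels) + 1)
    slope, _intercept = np.polyfit(sessions, levels, deg=1)
    return slope

# Usage: level reached after each of 16 sessions for two hypothetical learners.
fast_learner = [1, 2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, 18, 20, 21, 23]
slow_learner = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8]
print(learning_rate(fast_learner) / learning_rate(slow_learner))  # roughly 3x faster
```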

As expected, there was large variability in the learning rate, with the best learner moving through the program more than twice as quickly as the slowest learner. Our goal was to figure out which (if any) of the measures recorded initially predicted those differences.

A new brain measure for language aptitude

When we correlated our measures with learning rate, we found that patterns of brain activity that have been linked to linguistic processes predicted how easily people could learn a second language.

Patterns of activity over the right side of the brain predicted upwards of 60 percent of the differences in second language learning across individuals. This finding is consistent with previous research showing that the right half of the brain is more frequently used with a second language.
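In code, the core of this kind of individual-differences analysis can be sketched as a simple correlation between a brain measure and learning rate, with the squared correlation giving the share of variance explained. All the values below are simulated, not the study’s data.

```python
import numpy as np

rng = np.random.default_rng(1)
brain_measure = rng.standard_normal(19)        # e.g. one resting-state value per participant
learning_rates = 0.8 * brain_measure + 0.6 * rng.standard_normal(19)  # simulated outcomes

r = np.corrcoef(brain_measure, learning_rates)[0, 1]
print(f"r = {r:.2f}, variance explained = {r**2:.0%}")
# A value around 60% would correspond to the finding described above.
```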

Our results suggest that the majority of the language learning differences between participants could be explained by the way their brain was organized before they even started learning.

Implications for learning a new language

Does this mean that if you, like me, don’t have a “quick second language learning” brain you should forget about learning a second language?

Not quite.

Language learning can depend on many factors.
Child image via www.shutterstock.com

First, it is important to remember that 40 percent of the difference in language learning rate still remains unexplained. Some of this is certainly related to factors like attention and motivation, which are known to be reliable predictors of learning in general, and of second language learning in particular.

Second, we know that people can change their resting-state brain activity. So training may help to shape the brain into a state in which it is more ready to learn. This could be an exciting future research direction.

Second language learning in adulthood is difficult, but the benefits are large for those who, like myself, are motivated by the desire to communicate with others who do not speak their native tongue.


Brianna Yamasaki, Ph.D. Student, University of Washington

This article was originally published on The Conversation. Read the original article.

How the British military became a champion for language learning



Wendy Ayres-Bennett, University of Cambridge

When an army deploys in a foreign country, there are clear advantages if the soldiers are able to speak the local language or dialect. But what if your recruits are no good at other languages? In the UK, where language learning in schools and universities is facing a real crisis, the British army began to see this as a serious problem.

In a new report on the value of languages, my colleagues and I showcased how a new language policy, instituted last year within the British Army, was triggered by a growing appreciation of the risks that language shortages pose for national security.

Following the conflicts in Iraq and Afghanistan, the military sought to implement language skills training as a core competence. Speakers of other languages are encouraged to take examinations to register their language skills, whether they are language learners or speakers of heritage or community languages.

The UK Ministry of Defence’s Defence Centre for Language and Culture also offers training to NATO standards across the four language skills – listening, speaking, reading and writing. Core languages taught are Arabic, Dari, Farsi, French, Russian, Spanish and English as a foreign language. Cultural training that provides regional knowledge and cross-cultural skills is still embryonic, but developing fast.

Cash incentives

There are two reasons why this is working. First, the change was directed by the vice chief of the defence staff, and therefore had a high-level champion. Second, there are financial incentives for army personnel to have their linguistic skills recorded, ranging from £360 for a lower-level western European language to £11,700 for a high-level, operationally vital linguist. Currently, any army officer must have a basic language skill to be able to command a sub-unit.

A British army sergeant visits a school in Helmand, Afghanistan.
Defence Images/flickr.com, CC BY-NC

We should not, of course, overstate the progress made. The numbers of Ministry of Defence linguists for certain languages, including Arabic, are still precariously low and, according to recent statistics, there are no speakers of Ukrainian or Estonian classed at level three or above in the armed forces. But, crucially, the organisational culture has changed and languages are now viewed as an asset.

Too fragmented

The British military’s new approach is a good example of how an institution can change the culture of the way it thinks about languages. It’s also clear that language policy can no longer simply be a matter for the Department for Education: champions for language both within and outside government are vital for issues such as national security.

This is particularly important because of the fragmentation of language learning policy within the UK government, despite an informal cross-Whitehall language focus group.

Experience on the ground illustrates the value of cooperation when it comes to security. For example, in January, the West Midlands Counter Terrorism Unit urgently needed a speaker of a particular language dialect to assist with translating communications in an ongoing investigation. The MOD was approached and was able to source a speaker within another department.

There is a growing body of research demonstrating the cost to business of the UK’s lack of language skills. Much less is known about their value to national security, defence and diplomacy, conflict resolution and social cohesion. Yet language skills have to be seen as an asset, and appreciation is needed across government for their wider value to society and security.


Wendy Ayres-Bennett, Professor of French Philology and Linguistics, University of Cambridge

This article was originally published on The Conversation. Read the original article.

How other languages can reveal the secrets to happiness



Tim Lomas, University of East London

The limits of our language are said to define the boundaries of our world. This is because in our everyday lives, we can only really register and make sense of what we can name. We are restricted by the words we know, which shape what we can and cannot experience.

It is true that sometimes we may have fleeting sensations and feelings that we don’t quite have a name for – akin to words on the “tip of our tongue”. But without a word to label these sensations or feelings they are often overlooked, never to be fully acknowledged, articulated or even remembered. And instead, they are often lumped together with more generalised emotions, such as “happiness” or “joy”. This applies to all aspects of life – and not least to that most sought-after and cherished of feelings, happiness. Clearly, most people know and understand happiness, at least vaguely. But they are hindered by their “lexical limitations” and the words at their disposal.

As English speakers, we inherit, rather haphazardly, a set of words and phrases to represent and describe our world around us. Whatever vocabulary we have managed to acquire in relation to happiness will influence the types of feelings we can enjoy. If we lack a word for a particular positive emotion, we are far less likely to experience it. And even if we do somehow experience it, we are unlikely to perceive it with much clarity, think about it with much understanding, talk about it with much insight, or remember it with much vividness.

Speaking of happiness

While this recognition is sobering, it is also exciting, because it means by learning new words and concepts, we can enrich our emotional world. So, in theory, we can actually enhance our experience of happiness simply through exploring language. Prompted by this enthralling possibility, I recently embarked on a project to discover “new” words and concepts relating to happiness.

I did this by searching for so-called “untranslatable” words from across the world’s languages. These are words for which no exact equivalent word or phrase exists in English. As such, they suggest the possibility that other cultures have stumbled upon phenomena that English-speaking places have somehow overlooked.

Perhaps the most famous example is “Schadenfreude”, the German term describing pleasure at the misfortunes of others. Such words pique our curiosity, as they appear to reveal something specific about the culture that created them – as if German people are potentially especially liable to feelings of Schadenfreude (though I don’t believe that’s the case).

Germans are no more likely to experience Schadenfreude than they are to drink steins of beer in Bavarian costume.
Kzenon/Shutterstock

However, these words actually may be far more significant than that. Consider the fact that Schadenfreude has been imported wholesale into English. Evidently, English speakers had at least a passing familiarity with this kind of feeling, but lacked the word to articulate it (although I suppose “gloating” comes close) – hence, the grateful borrowing of the German term. As a result, their emotional landscape has been enlivened and enriched, able to give voice to feelings that might previously have remained unconceptualised and unexpressed.

My research searched for these kinds of “untranslatable words” – ones that relate specifically to happiness and well-being. I trawled the internet looking for relevant websites, blogs, books and academic papers, and gathered a respectable haul of 216 such words. The list has since expanded – partly due to the generous feedback of visitors to my website – to more than 600 words.

Enriching emotions

When analysing these “untranslatable words”, I divide them into three categories based on my subjective reaction to them. Firstly, there are those that immediately resonate with me as something I have definitely experienced, but just haven’t previously been able to articulate. For instance, I love the strange German noun “Waldeinsamkeit”, which captures that eerie, mysterious feeling that often descends when you’re alone in the woods.

A second group are words that strike me as somewhat familiar, but not entirely, as if I can’t quite grasp their layers of complexity. For instance, I’m hugely intrigued by various Japanese aesthetic concepts, such as “aware” (哀れ), which evokes the bitter-sweetness of a brief, fading moment of transcendent beauty. This is symbolised by the cherry blossom – and as spring bloomed in England I found myself reflecting at length on this powerful yet intangible notion.

Finally, there is a mysterious set of words which completely elude my grasp, but which for precisely that reason are totally captivating. These mainly hail from Eastern religions – terms such as “Nirvana” or “Brahman”, the latter of which translates roughly as the ultimate reality underlying all phenomena in the Hindu scriptures. It feels like it would require a lifetime of study to even begin to grasp the meaning – which is probably exactly the point of these types of words.

Now we can all ‘utepils’ like the Norwegians – that’s drink beer outside on a hot day, to you and me
Africa Studio/Shutterstock

I believe these words offer a unique window onto the world’s cultures, revealing diversity in the way people in different places experience and understand life. People are naturally curious about other ways of living, about new possibilities in life, and so are drawn to ideas – like these untranslatable words – that reveal such possibilities.

There is huge potential for these words to enrich and expand people’s own emotional worlds: each of them offers a tantalising glimpse into unfamiliar and new positive feelings and experiences. And at the end of the day, who wouldn’t be interested in adding a bit more happiness to their own lives?


Tim Lomas, Lecturer in Applied Positive Psychology, University of East London

This article was originally published on The Conversation. Read the original article.

Could early music training help babies learn language?



Christina Zhao, University of Washington

Growing up in China, I started playing piano when I was nine years old and learning English when I was 12. Later, when I was a college student, it struck me how similar language and music are to each other.

Language and music both require rhythm; otherwise they don’t make any sense. They’re also both built from smaller units – syllables and musical beats. And the process of mastering them is remarkably similar, including precise movements, repetitive practice and focused attention. I also noticed that my musician peers were particularly good at learning new languages.

All of this made me wonder if music shapes how the brain perceives sounds other than musical notes. And if so, could learning music help us learn languages?

Music experience and speech

Music training early in life (before the age of seven) can have a wide range of benefits beyond musical ability.

For instance, school-age children (six to eight years old) who participated in two years of music classes for four hours each week showed better brain responses to consonants than their peers who started one year later. This suggests that music experience helped the children hear speech sounds.

Music may have a range of benefits.
Breezy Baldwin, CC BY

But what about babies who aren’t talking yet? Can music training this early give babies a boost in the steps it takes to learn language?

The first year of life is the best time in the lifespan to learn speech sounds; yet no studies have looked at whether musical experience during infancy can improve speech learning.

I sought to answer this question with Patricia K. Kuhl, an expert in early childhood learning. We set out to study whether musical experience at nine months of age can help infants learn speech.

Nine months is within the peak period for infants’ speech sound learning. During this time, they’re learning to pay attention to the differences among the different speech sounds that they hear in their environment. Being able to differentiate these sounds is key for learning to speak later. A better ability to tell speech sounds apart at this age is associated with producing more words at 30 months of age.

Here is how we did our study

In our study, we randomly assigned 47 nine-month-old infants to either a music group or a control group; each infant then completed 12 15-minute sessions of activities designed for their group.

Babies in the music group sat with their parents, who guided them through the sessions by tapping out beats in time with the music with the goal of helping them learn a difficult musical rhythm.

Here is a short video demonstration of what a music session looked like.

Infants in the control group played with toy cars, blocks and other objects that required coordinated movements in social play, but without music.

After the sessions, we measured the babies’ brain responses to musical and speech rhythms using magnetoencephalography (MEG), a brain imaging technique.

New music and speech sounds were presented in rhythmic sequences, but the rhythms were occasionally disrupted by skipping a beat.

These rhythmic disruptions help us measure how well the babies’ brains were honed to rhythms. The brain gives a specific response pattern when detecting an unexpected change. A bigger response indicates that the baby was following rhythms better.
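The logic of that measurement can be sketched as comparing the average response to occasional skipped beats against the average response to regular beats, and taking the size of the difference. The arrays below are simulated stand-ins for trial-by-time MEG epochs; the real analysis pipeline is considerably more involved.

```python
import numpy as np

def mismatch_response(standard_epochs, deviant_epochs):
    """Peak absolute difference between the deviant and standard average responses."""
    standard_avg = standard_epochs.mean(axis=0)  # average across trials
    deviant_avg = deviant_epochs.mean(axis=0)
    return np.max(np.abs(deviant_avg - standard_avg))

# Usage with simulated epochs (trials x time samples); deviant trials carry an
# extra "surprise" bump around sample 60.
rng = np.random.default_rng(2)
standard = 0.1 * rng.standard_normal((100, 120))
deviant = 0.1 * rng.standard_normal((20, 120))
deviant[:, 55:65] += 0.5
print(mismatch_response(standard, deviant))  # larger value = stronger response to the skipped beat
```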

Babies in the music group had stronger brain responses to both music and speech sounds compared with babies in the control group. This shows that musical experience, as early as nine months of age, improved infants’ ability to process both musical and speech rhythms.

These skills are important building blocks for learning to speak.

Other benefits from music experience

Language is just one example of a skill that can be improved through music training. Music can help with social-emotional development, too. An earlier study by researchers Tal-Chen Rabinowitch and Ariel Knafo-Noam showed that pairs of eight-year-olds who didn’t know each other reported feeling closer and more connected to one another after a short exercise of tapping out beats in sync with each other.

Music helps children bond better.
Boy image via www.shutterstock.com

Another researcher, Laura Cirelli, showed that 14-month-old babies were more likely to show helping behaviors toward an adult after the babies had been bounced in sync with the adult who was also moving rhythmically.

There are many more exciting questions that remain to be answered as researchers continue to study the effects of music experience on early development.

For instance, does the music experience need to be in a social setting? Could babies get the benefits of music from simply listening to music? And, how much experience do babies need over time to sustain this language-boosting benefit?

Music is an essential part of being human. It has existed in human cultures for thousands of years, and it is one of the most fun and powerful ways for people to connect with each other. Through scientific research, I hope we can continue to reveal how music experience influences brain development and language learning of babies.


Christina Zhao, Postdoctoral Fellow, University of Washington

This article was originally published on The Conversation. Read the original article.

Norwegians using ‘Texas’ to mean ‘crazy’ actually isn’t so crazy


Laurel Stvan, University of Texas Arlington

If you haven’t heard by now, the American press recently picked up on an interesting linguistic phenomenon in Norway, where the word “Texas” is slang for “crazy.”

Indeed, it turns out that for several years Norwegians have used the word to describe a situation that is chaotic, out of control or excitingly unpredictable (The crowd at the concert last night was totally Texas!).

While this may seem like a bit of a stretch to many American English speakers, when examined through the lens of linguistics it’s actually a pretty natural extension of the word Texas.

How new meanings emerge

It’s fairly common for a word’s meaning to shift over time. Speakers will often use a word in a new way that picks out just one aspect of the term’s earlier connotations, and emphasizing this single aspect can eventually narrow the word’s meaning in particular contexts.

In fact, “crazy” itself is currently undergoing multiple meaning changes. Traditionally, it was used broadly to convey insane or aberrant thinking.

However, English speakers have since split apart these aspects, emphasizing just one to create a new meaning:

  • crazy = fast-paced, frantic (I’ve been crazy busy this week.)
  • crazy = bizarre, odd (Mustard on your taco? That’s crazy!)
  • crazy = dangerous, lethal (What’s your plan for when a crazy gunman breaks into your school?)

This last usage, in particular, has annoyed mental health activists. (Even though people with mental illness aren’t usually dangerous, expressions like “psycho killer” and “crazed gunman” often appear together.)

It’s the first meaning of crazy, however, that Norwegians are invoking when referring to situations that are “totally Texas”: the kind of crazy that is wild, frantic or chaotic.

You used to call me on my…handy?

The story of Norwegians using Texas also demonstrates that words don’t simply get refashioned within the same language; rather, words can be borrowed from one and applied to another, which often results in a changed meaning in the new setting. This, too, has a long history that can be seen as words cross geographic and cultural boundaries.

German Chancellor Angela Merkel texts on her ‘handy.’
Tobias Schwarz/Reuters

Beyond Texas, other English words have changed meaning when borrowed by other languages. For example, the Japanese now use the word feminisuto, adapted from feminist. In Japanese it means a chivalrous man, one who “does things like being polite to women.”

Another shift shows up in the word handy, which Germans borrowed from the English language. There, it refers to a cellphone.

Words can also change meaning when absorbed into English. For instance, poncho has become narrower in meaning. Borrowed from South American Spanish, it originally meant “woolen fabric”; now it describes a particular piece of clothing, often a plastic one used in the rain. And tycoon has shifted, too. It’s borrowed from Chinese (via Japanese) and originally meant “high official” or “great nobleman.” Today it’s primarily used to describe a businessman who’s made lots of money.

Dreams of the American West

The borrowing of words isn’t a modern phenomenon. According to Diane Nicholls of MacMillan English Dictionaries, it’ll often take place when “different language communities come into contact with each other.”

And settlers did come from Norway to Texas. The town of Clifton, Texas, where a third of the population is of Norwegian descent, has been dubbed the Norwegian Capital of Texas. (However, this New World outpost of Norway uses a different, older dialect of Norwegian, so Texans from Clifton are unfamiliar with this new bit of slang.)

It turns out that communities can come into contact in ways that are not actually physical. In the case of Norwegians’ use of Texas, the borrowing may not originate from physical contact, but through cultural aspiration. In fact, throughout much of Europe the image of the American Wild West appeals to a set of beliefs (perhaps stereotypical or false) about the apparent freedom and lawlessness in the West during the 19th century.

A cowboy-themed amusement park in Sweden.
Naomi Harris/Feature Shoot

These enthusiastic ideas about American frontier life can be seen in places like Sweden’s Wild West theme parks. And Germany has been fascinated with cowboys and the American frontier from as early as the 1890s, when Buffalo Bill Cody toured Germany. Twentieth-century movies, novels and TV shows continue to promote myths of the Wild West, while prominently featuring Texas.

Ultimately, the Norwegian use of Texas makes sense because it follows some recognized linguistic principles: it indicates the narrowing of meaning over time, it reflects a change in meaning when applied to a new cultural context, and it represents a glamorized (if stereotypical) view of another culture.

So why did Norwegians settle on the term Texas to describe something fast-paced and frantic?

Given the portrayal of Texas in 19th- and 20th-century popular culture, they’d be crazy not to.


Laurel Stvan, Associate Professor and Chair of Linguistics, University of Texas Arlington

This article was originally published on The Conversation. Read the original article.

Here’s how much public funding for university students varies across the UK




David Eiser, University of Stirling

An English student, a Scottish student, a Welsh student and a Northern Irish student walk into a bar. Who should buy the round? Across the four territories of the UK, different types and levels of grant and loan are available to students. This means that students from some parts of the UK are getting more support from their devolved government than others.

A new report by London Economics for the University and College Union has benchmarked this funding per student across the UK. It has done this for different types of student – such as full-time or part-time and undergraduate or post-graduate – and by funding source. The fact that the funding systems are so different makes this kind of comparison tricky, particularly as there are differences in the type and level of data that is available on each education system.

For full-time undergraduate students at university, there was wide variation in the total level of resource per student in 2013-14 – ranging from £13,983 in England to £11,310 in Scotland, with £13,441 in Wales and £11,358 in Northern Ireland. This includes funding from all sources: government grants paid directly to universities for teaching and research, and grants and loans paid to students to fund tuition and maintenance.

The total level of funding per university student is higher in England than in Scotland because of the higher tuition fees in England that students are expected to pay themselves. But the fees paid by students in England are largely covered by tuition fee loans from the government. Since a proportion of these loans will never be repaid, calculating the total level of public funding per student requires assumptions about how large that unrepaid proportion will be.

Who gets what

The proportion of a loan that will not be paid back is known as the RAB charge (from Resource Accounting and Budgeting). Loans go partly unrepaid because students only have to pay them back once they earn above a particular threshold.

Assumptions about the RAB charge are critical in determining the value of the total public subsidy per university student, and the public-private split in total funding. The assumed RAB charges used in the report vary depending on the type and value of the loan. RAB charges for tuition fee loans in England, for example, are assumed to be 45% (implying that 45% of the total value of these loans will not be repaid), whereas the RAB charge is lower for Northern Irish and Welsh students, who face lower tuition fee caps and therefore borrow less.
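As a back-of-the-envelope illustration of how a RAB charge feeds into the public subsidy per student, the sketch below applies the 45% figure quoted above to some assumed loan and grant amounts. Only the RAB charge comes from the report; the other numbers are illustrative, not the report’s inputs.

```python
RAB_CHARGE = 0.45  # share of loan value assumed never to be repaid (England, fee loans)

def public_subsidy(direct_grants, fee_loan, maintenance_loan, rab=RAB_CHARGE):
    """Public cost per student: direct grants plus the unrepaid share of loans."""
    return direct_grants + rab * (fee_loan + maintenance_loan)

# Illustrative figures (assumed): a £9,000 fee loan, a £4,000 maintenance loan
# and £1,500 of direct teaching/research grant per student.
print(public_subsidy(1_500, 9_000, 4_000))  # 7350.0 - of the same order as the figures below
```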

[Interactive chart: public funding per full-time undergraduate student across the UK – https://datawrapper.dwcdn.net/yriOc/2/]

Once RAB charges for fee and maintenance loans are factored in, as the graph above shows, the level of public funding per full-time undergraduate is broadly similar in England, Wales and Scotland – £8,900, £9,500 and £9,000 respectively – although lower in Northern Ireland, where it is £7,700. But the proportion of total funding per full-time undergraduate provided by the public sector varies from 80% in Scotland to 63% in England.

The report also highlights discrepancies between the level of resources available to those students domiciled in each of the four countries who decide to go and study in another country. For example, relatively few Scottish students choose to study in England, given the increase in costs to them of doing so, and the fact that relatively little support funding is available to them if they did.

In contrast, English students face no differential cost in where they enrol. The result, according to the report, is that approximately £10m of student support resources flow from Scotland to England, while £73m of resources flow from England to Scotland.

Learning from different policies

The report highlights a number of funding inequities, particularly in terms of the total resource per full-time undergraduate. In itself, however, it is not immediately apparent that such differences are undesirable. After all, the objective of devolution is to allow devolved governments to allocate spending as they (and their electorates) see fit.

But what is clear is that balancing the need to maintain (or increase) total funding for higher education, while ensuring accessibility to university by students from all backgrounds, is likely to become increasingly challenging.

Important questions to be debated include where the public-private funding split should lie across the UK and how this split should be distributed across students of different financial means. There’s also a question on whether private contributions should be paid upfront (as they currently are through loans), or, for example, through a higher graduate tax payable throughout their working lives.

The core case for devolving policy responsibility in areas such as education is that it enables the devolved governments to tailor policy to the needs and preferences of their electorates. The resulting variation in policies informs understanding of which approaches work best under different circumstances, enabling “policy learning” to take place more quickly than if policy approaches were uniform. We need more comparative benchmarking studies such as this one if we are to benefit from the “policy learning” that devolution is hoped to provide.


David Eiser, Research Fellow in Economics, University of Stirling

This article was originally published on The Conversation. Read the original article.