Teaching and learning in a post-truth world

As alt-right ideology spreads worldwide, teachers and students develop skills to learn about respect and diversity of thought.
Peggy_Marco/pixabay

Michelle Mielly, Grenoble École de Management (GEM)

In today’s post-truth environment, university educators face new challenges. Their students are surrounded by a broader spectrum of ideologies and beliefs than ever. Some are fuelled by the US alt-right movement and the growth of similar identitarian movements across Europe.

With Brexit and Trump as its mascots, 2016 ushered in what David Simas calls a whole new “permission structure”, enabled by social media. This structure allows individuals to bypass the traditional authorities who once served as the standard-bearers of acceptable political discourse: spiritual leaders, political party elders and influential journalists.

The US elections marked a turning point, emboldening previously behind-the-scenes supporters of totalitarian thinking. One common trick in the reactionary playbook is to invert traditional roles: those usually stereotyped as racists are depicted as oppressed victims, while those once labelled as liberal progressives are transformed into politically correct thought police.

Conservative writer David Horowitz, for example, has made a living out of this role inversion principle, targeting “dangerous professors” or “progressive racism” in his books.

New tools

So teachers and scholars have plenty of work to do. How can we respond to alt-right rhetoric? What are the core skills needed to foster engaged citizenship in democratic societies and global workplaces?

Some answers can be found in a 2013 British Council study, Culture at Work, which set out to understand which qualities matter most to employers.

The researchers surveyed hundreds of global HR managers, asking them to rank the skills, values and attitudes most needed in new recruits to their organisations, and identified some key elements, highlighted in yellow in the graph below. These include not only emotional intelligence but also cultural intelligence: an “outsider’s seemingly natural ability to interpret someone’s unfamiliar and ambiguous gestures in just the way that person’s compatriots would”.

Excerpt from ‘Culture at Work’, British Council, 2013.

As educators, we strive for greater cultural intelligence in our students by exposing them to a variety of situations: group projects in multicultural teams, remote global team assignments in multiple time zones, and collaborating in cross-border MBA projects. And yet none of this work can take place effectively without one key ingredient.

If we look at the top of the above scale, one quality is clearly ranked higher than any other on the continuum: demonstrating respect for others.

There are a number of behaviours and attitudes which indicate respect: openness to others, willingness to listen to different opinions, and the ability to include as many perspectives as possible in decision-making processes. Below I highlight three especially useful activities in the current ideologically charged environment.

Building respect for others

Don’t underestimate your implicit bias. We live with differences within our own families and then, in a series of concentric circles moving outwards from the nuclear family, deal with people who think, look, live and act differently from ourselves.

A central aspiration of instructors in the humanities and social sciences is to raise awareness of the collectively imagined, socially constructed nature of differences in general. Identity, race, religion and gender are human constructs, and our judgments about them are subject to human error. This error-prone thinking is known as cognitive bias, and entire disciplines such as behavioural economics have been founded on its premises.

Making students aware of their own preconceived ideas, or “mindbugs”, should lead them to see their own blind spots: those grey areas in which we remain ignorant and unaware of our own inherent biases.

An exploration of this principle can be found in Harvard’s Project Implicit, which gathers data on implicit social cognition and the unconscious mental associations that humans make. The project’s goal is to educate the public about hidden biases and to provide a virtual laboratory for collecting data. Individuals can discover the unconscious associations their mind makes by taking one of the many implicit association tests available in multiple languages.

This way, anyone can learn about his or her implicit biases on topics like gender, religion, politics, obesity, skin colour or sexuality. The objective is not to guilt-trip anyone but to have them experience the test and reflect on its implications for daily life.

Not underestimating our bias means knowing ourselves a little better. This fosters greater awareness of those blind spots we all have, especially when it comes to how we perceive our “others”.

Challenge your original bias

Students need to reflect on the many assumptions they make about social issues (race, gender, religion), and the “original position” from John Rawls’ A Theory of Justice can help them do that. The original position is unadulterated by societal forces and lies behind a veil of ignorance where, according to Rawls:

No one knows his place in society, his class position or social status, nor does anyone know his fortune in the distribution of natural assets and abilities, his intelligence, strength, and the like.

Working in small groups, students are placed in the original position:

“I have to state my ideas on slavery, but I do not know if I am black or white, rich landowner or poor slave, in the USA or Saudi Arabia.”

“I have to decide on how to redistribute resources in an economy, but I do not know if I am a billionaire or a homeless person.”

Making decisions behind the veil of ignorance forces – even if only very briefly – empathy and respect for the position of others.

Groupwork on Veil of Ignorance debate: this one was on wearing the burqa in a secular society.

Learn from history

Fostering a historical consciousness and a sense of historical urgency (a strong sense of the “now”) can help students unpack cultural identity questions in three crucial ways. It can give them an understanding of their place in the world now vis-à-vis the past, help them understand the role their ancestors and respective histories have played in getting us to today, and help them grasp the irretrievable nature of the past and its errors in trying to build the future.

To do this we have a multitude of tools at our disposal. In the contemporary context, Walter Benjamin’s Theses on the Philosophy of History (1940) are quite useful for conceptualising where we stand historically.

I assign specific theses to a group of students and ask them to draw out the connections between what is written and where we are today, and then to make inferences about the implications for their own identity questions.

These theses are actually short aphorisms or statements written at a time of great historical urgency in Europe. As Benjamin wrote, “The emergency situation in which we live is the rule.” Every moment is history in the making, and carries a crucial message that we must attempt to grasp.

Students react in contrasting ways to these activities and debates. The goal is not to gain their agreement or adhesion to a given agenda, but to push them to develop, more than ever, their critical thinking skills, and challenge their understanding of how they receive information.

Debates rage over how students will manage working with minority members in their teams, how they will react to radically different work environments and colleagues, or the big one: just who’s supposed to be doing the adaptation work, and how far should a person go when adapting?

Our job is to protect everyone’s right to free expression in the classroom, even those with whom we disagree.

This can be done if two important conditions are met up front by students: they speak from a place of critically informed thinking, and their intention is benevolent.

Michelle Mielly, Associate Professor in People, Organizations, Society, Grenoble École de Management (GEM)

This article was originally published on The Conversation. Read the original article.

The way we teach most children to read sets them up to fail


Pamela Snow, Monash University

A new batch of Australian five-year-olds has just started school, eager to learn to read and write. Unfortunately for them, English has one of the most difficult spelling systems of any language, thanks to the way it developed.

A patchwork of many languages

Words from Germanic Anglo-Saxon (woman, Wednesday) and Old Norse (thrust, give) were mixed with words from the church’s Latin (annual, bishop), and Norman French (beef, war). Pronunciation changed dramatically in England between 1350 and 1700 (The Great Vowel Shift), and scribes paid by the character added letters to words.

Science, technology and The Enlightenment added words, often based on Latin or Greek (anthropology, phone, school), wars and globalisation added even more, like “verandah” from Hindi, “tomato” from Nahuatl (Aztec) via Spanish, and “yakka” from Yagara (an Australian Indigenous language). Words are also continually being invented and added to contemporary dictionaries.

Words from other languages typically carry their spelling patterns into English. So, for example, the spelling “ch” represents different sounds in words drawn from Germanic (cheap, rich, such), Greek (chemist, anchor, echo) and French (chef, brochure, parachute).

English has 26 characters, but many more sounds.
Shutterstock

Our originally Latin alphabet has only 26 letters for the 44 sounds in modern Australian English. To master our spelling system, children must grasp that words are made of sounds represented by letters, that sometimes we use two, three or four letters for a single sound (f-ee-t, bri-dge, c-augh-t), that most sounds have several spellings (the same vowel sound is written five ways in “Her first nurse works early”), and that one spelling can represent several different sounds (the “oo” in food, look, flood and brooch).
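To make that many-to-many mapping concrete, here is a small illustrative sketch in Python; the word lists are simply the examples above, not a complete phonics inventory:

```python
# Illustrative only: the same sound can be written many ways,
# and the same spelling can stand for several different sounds.

# One vowel sound, five spellings (er, ir, ur, or, ear):
spellings_of_er_sound = ["her", "first", "nurse", "works", "early"]

# One spelling ("oo"), four different sounds:
sounds_of_oo_spelling = {
    "food":   "long 'oo' as in 'boot'",
    "look":   "short 'oo' as in 'put'",
    "flood":  "'u' as in 'cup'",
    "brooch": "'oa' as in 'boat'",
}

for word, sound in sounds_of_oo_spelling.items():
    print(f"{word}: 'oo' says {sound}")
```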

How should children be taught this complex code?

In his internationally acclaimed analysis of the effectiveness of teaching methods, Professor John Hattie assigns “effect sizes” ranging from 1.44 (highly effective) to -0.34 (harmful). Effect sizes above 0.4 indicate methods worth serious attention.

There are two main schools of thought about how to teach children to read and write, one focused on meaning (whole language) and one focused on word structure (phonics). Hattie’s meta-analysis gives whole language an effect size of 0.06, and phonics an effect size of 0.54.
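For readers unfamiliar with effect sizes, here is a minimal sketch of one common way such a number is computed (Cohen’s d, on made-up scores). Hattie’s figures come from aggregating many studies, so this single comparison only illustrates the underlying idea:

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(treatment, control):
    """Standardised mean difference between two groups,
    expressed in pooled standard deviations."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical reading scores, not data from any real study:
phonics_group = [72, 80, 68, 75, 83, 77, 70, 79]
control_group = [65, 71, 60, 73, 69, 66, 74, 62]

# About 1.55 for this made-up data; 0.4 or above is "worth serious attention".
print(round(cohens_d(phonics_group, control_group), 2))
```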

But which type of phonics works best? The Clackmannanshire study provides convincing evidence for synthetic phonics. This starts from just a few sounds and letters in short words, and systematically adds and practises more sounds, spellings and syllable types, until children can read well enough to independently tackle the “real books” adults have been reading them.

Clackmannanshire is a disadvantaged area of Scotland, but by the end of primary school the children using this program were three years ahead of the national average on word reading, 21 months ahead on spelling and five months ahead on reading comprehension.

In 2005, Australia’s National Inquiry into Teaching Reading recommended that young children should be provided with systematic, explicit and direct phonics instruction, and that teachers be equipped to provide this. Similar inquiries in the US and UK agreed.

Are children being taught this way?

The short answer is no. The main reason is that few teachers are trained or equipped to teach synthetic phonics. They’re often taught at university by academics whose careers, publication records and reputations are based on whole-language teaching approaches, considered modern, progressive and child-centred. Phonics, conversely, is framed as old-fashioned, reactionary and teacher-centred, so is used less.

Children are typically encouraged to read “real books” containing long words and difficult spellings, and to guess unknown words from first letters and pictures. They try to write words that are too hard for them, and often the resulting spelling mistakes are put up on the wall for everyone to learn. They memorise lists of high-frequency words.

Phonics work in Australian classrooms typically focuses on initial letters and a few basic strategies, not sounds and their spellings in all word positions. There is little systematic instruction in word blending or segmenting (breaking words into parts, such as syllables), or in many of English’s 170 or so major spelling patterns. Australian curriculum requirements for English reinforce this mess-of-methods approach.

Many confused children learn to guess and memorise words rather than sounding them out. This seems to work at first, but by their third year of schooling lack of visual memory (disk full!) means they start to fail. The well-intended Reading Recovery program, about 80% whole language and 20% phonics, often fails to provide the boost these learners need.

Children who can’t read much by age nine are in serious trouble. By then, teachers expect them to have finished learning to read and to start seriously reading to learn. Yet the 2011 Progress in International Reading Literacy Study found that a quarter of Australian Year 4 students fell below international benchmarks in reading, with 7% scoring “very low”.

Using evidence in education

If large numbers of children were contracting a serious, preventable illness and you asked your doctor how to protect your child, you’d be rightly angry if the doctor didn’t understand the current medical research and thus recommended what s/he learnt at university, or had used before and preferred. You might contact the Medical Board to make a complaint or, if you had followed bad health advice, lodge a malpractice suit in the courts.

Evidence-based practice is deeply embedded in the culture of health professionals. Graduates are taught to read and understand the language of rigorous research and to turn to peer-reviewed academic journals and properly controlled experimental designs as the best sources of evidence. This doesn’t happen nearly enough in education.

Children’s opportunities are seriously compromised if they don’t learn to read and spell. They are much more likely to drop out of school early, be unemployed, suffer ill health and get on the wrong side of the law.

The vast majority of children will only learn to read and spell in the right developmental window when teachers are equipped with the best available methods, based on the best available evidence.


Alison Clarke co-authored this article. Alison is a speech pathologist at the Clifton Hill Child and Adolescent Therapy Group in Melbourne and is on Learning Difficulties Australia's Council. Disclosure Statement: Alison Clarke is a speech pathologist in private practice who also runs the website http://www.spelfabet.com.au

Pamela Snow, Associate Professor of Psychology, Monash University

This article was originally published on The Conversation. Read the original article.

What brain regions control our language? And how do we know this?


Our language abilities are enabled by a co-ordinated network of brain regions that have evolved to give humans a sophisticated ability to communicate.
[bastian.]/Flickr, CC BY

David Abbott, Florey Institute of Neuroscience and Mental Health

The brain is key to our existence, but there’s a long way to go before neuroscience can truly capture its staggering capacity. For now, though, our Brain Control series explores what we do know about the brain’s command of six central functions: language, mood, memory, vision, personality and motor skills – and what happens when things go wrong.


When you read something, you first need to detect the words and then to interpret them by determining context and meaning. This complex process involves many brain regions.

Detecting text usually involves the optic nerve and other nerve bundles delivering signals from the eyes to the visual cortex at the back of the brain. If you are reading in Braille, you use the sensory cortex towards the top of the brain. If you listen to someone else reading, then you use the auditory cortex not far from your ears.

A system of regions towards the back and middle of your brain helps you interpret the text. These include the angular gyrus in the parietal lobe, Wernicke’s area (comprising mainly the top rear portion of the temporal lobe), the insular cortex, the basal ganglia and the cerebellum.


The Conversation, CC BY-ND

These regions work together as a network to process words and word sequences to determine context and meaning. This enables our receptive language abilities, which means the ability to understand language. Complementary to this is expressive language, which is the ability to produce language.

To speak sensibly, you must think of words to convey an idea or message, formulate them into a sentence according to grammatical rules and then use your lungs, vocal cords and mouth to create sounds. Regions in your frontal, temporal and parietal lobes formulate what you want to say and the motor cortex, in your frontal lobe, enables you to speak the words.

Most of this language-related brain activity is likely occurring in the left side of your brain. But some people use an even mix of both sides and, rarely, some have right dominance for language. There is an evolutionary view that specialisation of certain functions to one side or the other may be an advantage, as many animals, especially vertebrates, exhibit brain function with prominence on one side.

Why the left side is favoured for language isn’t known. But we do know that injury or conditions such as epilepsy, if it affects the left side of the brain early in a child’s development, can increase the chances language will develop on the right side. The chance of the person being left-handed is also increased. This makes sense, because the left side of the body is controlled by the motor cortex on the right side of the brain.

To speak sensibly, you must think of words to convey an idea or message, formulate them into a sentence according to grammatical rules and then use your lungs, vocal cords and mouth to create sounds.
paul pod/Flickr, CC BY

Selective problems

In 1861, French neurologist Pierre Paul Broca described a patient unable to speak who had no motor impairments to account for the inability. A postmortem examination showed a lesion in a large area towards the lower middle of his left frontal lobe that is particularly important in language formulation. This is now known as Broca’s area.

The clinical symptom of being unable to speak despite having the motor skills is known as expressive aphasia, or Broca’s aphasia.

In 1874, Carl Wernicke described an opposite phenomenon: a patient who was able to speak but not to understand language. This is known as receptive aphasia, or Wernicke’s aphasia. The damaged region, as you might correctly guess, is the Wernicke’s area mentioned above.

Scientists have also observed injured patients with other selective problems, such as an inability to understand most words except nouns, or difficulty with words that have unusual spellings, such as those with silent consonants, like “reign”.

These difficulties are thought to arise from damage to selective areas or connections between regions in the brain’s language network. However, precise localisation can often be difficult given the complexity of individuals’ symptoms and the uncontrolled nature of their brain injury.

We also know the brain’s language regions work together as a co-ordinated network, with some parts involved in multiple functions and a level of redundancy in some processing pathways. So it’s not simply a matter of one brain region doing one thing in isolation.

Broca’s area is named after French neurologist Pierre Paul Broca.
Wikimedia Commons

How do we know all this?

Before advanced medical imaging, most of our knowledge came from observing unfortunate patients with injuries to particular brain parts. One could relate the approximate region of damage to their specific symptoms. Broca’s and Wernicke’s observations are well-known examples.

Other knowledge was inferred from brain-stimulation studies. Weak electrical stimulation of the brain while a patient is awake is sometimes performed in patients undergoing surgery to remove a lesion such as a tumour. The stimulation causes that part of the brain to stop working for a few seconds, which can enable the surgeon to identify areas of critically important function to avoid damaging during surgery.

In the mid-20th century, this helped neurosurgeons discover more about the localisation of language function in the brain. It was clearly demonstrated that while most people have language originating on the left side of their brain, some could have language originating on the right.

Towards the later part of the 20th century, if a surgeon needed to find out which side of your brain was responsible for language – so he didn’t do any damage – he would put to sleep one side of your brain with an anaesthetic. The doctor would then ask you a series of questions, determining your language side from your ability or inability to answer them. This invasive test (which is less often used today due to the availability of functional brain imaging) is known as the Wada test, named after Juhn Wada, who first described it just after the second world war.

Brain imaging

Today, we can get a much better view of brain function by using imaging techniques, especially magnetic resonance imaging (MRI), a safe procedure that uses magnetic fields to take pictures of your brain.

When we see activity in a region of the brain, that’s when there is an increase in freshly oxygenated blood flow.
from shutterstock.com

Using MRI to measure brain function is called functional MRI (fMRI), which detects signals from magnetic properties of blood in vessels supplying oxygen to brain cells. The fMRI signal changes depending on whether the blood is carrying oxygen, which means it slightly reduces the magnetic field, or has delivered up its oxygen, which slightly increases the magnetic field.

A few seconds after brain neurons become active in a brain region, there is an increase in freshly oxygenated blood flow to that brain part, much more than required to satisfy the oxygen demand of the neurons. This is what we see when we say a brain region is activated during certain functions.
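As a rough illustration of that last point, the sketch below convolves a brief burst of simulated neural activity with a standard double-gamma haemodynamic response function. The shape and parameters are generic textbook defaults assumed for illustration, not anything specific to the studies discussed here:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma haemodynamic response function:
    a positive BOLD peak a few seconds after neural activity,
    followed by a small undershoot."""
    peak = gamma.pdf(t, 6)         # peaks around 5-6 seconds
    undershoot = gamma.pdf(t, 16)  # later, smaller dip
    return peak - 0.35 * undershoot

t = np.arange(0, 30, 1.0)          # 30 seconds sampled once per second

# Hypothetical stimulus: neurons fire for 2 seconds starting at t = 5 s.
neural = np.zeros(60)
neural[5:7] = 1.0

# Predicted BOLD signal: neural activity convolved with the HRF.
bold = np.convolve(neural, hrf(t))[: len(neural)]
print(int(np.argmax(bold)))        # the peak arrives several seconds after onset
```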

Brain-imaging methods have revealed that much more of our brain is involved in language processing than previously thought. We now know that numerous regions in every major lobe (frontal, parietal, occipital and temporal lobes; and the cerebellum, an area at the bottom of the brain) are involved in our ability to produce and comprehend language.

Functional MRI is also becoming a useful clinical tool. In some centres it has replaced the Wada test to determine where language is in the brain.

Scientists are also using fMRI to build up a finer picture of how the brain processes language by designing experiments that compare which areas are active during various tasks. For instance, researchers have observed differences in brain language regions of dyslexic children compared to those without dyslexia.

Researchers compared fMRI images of groups of children with and without dyslexia while they performed language-related tasks. They found that dyslexic children had, on average, less activity in Broca’s area mainly on the left during this task. They also had less activity in or near Wernicke’s area on the left and right, and a portion of the front of the temporal lobe on the right.

Could this type of brain imaging provide a diagnostic signature of dyslexia? This is a work-in-progress, but we hope further study will one day lead to a robust, objective and early brain-imaging test for dyslexia and other disorders.


Want to know how the brain controls your mood? Read today’s accompanying piece here.

David Abbott, Senior Research Fellow and Head of the Epilepsy Neuroinformatics Laboratory, Florey Institute of Neuroscience and Mental Health

This article was originally published on The Conversation. Read the original article.

Amber Rudd gives us another ill-informed and imprudent attack on international students


Johanna Waters, University of Oxford

The home secretary, Amber Rudd, has outlined plans for a new student immigration system that would make it harder for graduating students to work in the UK. In her speech at the Conservative Party conference Rudd revealed government plans to create “two-tier visa rules” which would affect poorer quality universities and courses. This would essentially mean that “lesser” UK universities will be discouraged from recruiting international students.

This is not only yet another misguided and myopic attack on overseas students, it is also an insult to the rich diversity of universities on display within UK higher education. Because the fact is, universities excel in different academic areas. Yes, a few are outstanding across the board, but many post-1992 institutions which converted from polytechnics provide exceptional teaching in particular subject areas – and excellent international students are attracted to those programmes.

Then there is also the small issue of finances. A recent briefing from the University of Oxford’s Migration Observatory revealed that in 2014-2015, tuition fee income from non-EU students made up almost 13% of UK universities’ total income.

There is no limit on how much universities can charge non-EU students for their courses – but it has been estimated that the average fee for a classroom-based undergraduate degree in the 2014-15 academic year was £12,100 for a non-EU student. And many post-1992 universities rely on international students as a significant source of revenue. Just how the government proposes universities should replace the income generated by international student tuition fees is as yet unclear.

International economy

What is clear is that the government has fundamentally failed to understand the value of international students to British society. Non-EU students in the UK are thought to generate around £11 billion annually in export revenues alone. This includes tuition fees and other personal expenditure – international students often spend a lot on food and goods while residing in the UK.

The government’s proposal also fails to recognise the longer-term link between international student mobility and a successful domestic “knowledge economy” – because international students are tomorrow’s knowledge workers.

It is also a fact that creativity in industry relies fundamentally on international mobility. Just look at the success of Silicon Valley’s multi-billion dollar technology industry, which is dependent upon immigration. Many of its workers first arrived as international students before being headhunted by particular firms.

International students are good for the economy.
Shutterstock

The success of British industry is no different – it relies on creativity and knowledge transfer and exchange. And it is very shortsighted to think British schools and pupils will produce all the knowledge, creativity and insight that we will ever need.

Cultural diversity

International students’ diverse backgrounds and experiences also enrich the entire student body, not to mention society more broadly. They engage in a two-way cultural exchange that is of mutual benefit to both international students and domestic students – and to wider communities.

Although it is not their primary task, British universities are nevertheless trying to create citizens who are cosmopolitan and open-minded in outlook. And there are immeasurable benefits to be had from interacting with students from diverse cultural and linguistic backgrounds. If British universities are to be “world class”, then they have also to be “in the world”, in the fullest possible way. And they need international students to fulfil that potential in both a practical and philosophical sense.

Of course, it is not the case that international students are an unquestionable “good”. If we take a global perspective, there are some compelling social reasons for at least reflecting on what happens to international students when they return home. International students are nearly always the most privileged members of their home societies – and being educated in the UK only enhances and reinforces that privilege.

Consequently, British universities are rarely a force for “social mobility” in students’ home countries, and from a “development” perspective, we should be aware of the undermining and devaluing impact that UK qualifications might have on local education systems overseas. There are also neo-colonial implications of educating the next generation of leaders in other parts of the world. However, from a purely UK standpoint, we must continue to encourage and support applications from overseas applicants to our universities.

Fixing the figures

When it comes to immigration figures, the University and College Union and Universities UK have called for the government to stop counting international students in its statistics. The Australian government, for example, makes this separation and classifies international students as “temporary migrants”, who, unlike “permanent” migrants, are not subject to caps or quotas but are “demand driven”.

Just another way of fixing the figures?
Shutterstock

In a recent survey, 59% of people agreed that the government should not reduce international student numbers – even if that limits its ability to cut immigration numbers overall – while only 22% took the opposing view. The study also found that the majority of people did not understand why international students would even be included in total immigration figures.

So given that there is no public desire to reduce the number of international students in the UK, it would instead seem they have become a target – because the government has no better ideas for reducing immigration. It feels like a “quick fix” and is not, I would suggest, the way to go.

Johanna Waters, Associate Professor in Human Geography, University of Oxford

This article was originally published on The Conversation. Read the original article.

Could the language barrier actually fall within the next 10 years?


Pieter Bruegel the Elder’s ‘The Tower of Babel’ (1563).
Wikimedia Commons

David Arbesú, University of South Florida

Wouldn’t it be wonderful to travel to a foreign country without having to worry about the nuisance of communicating in a different language?

In a recent Wall Street Journal article, technology policy expert Alec Ross argued that, within a decade or so, we’ll be able to communicate with one another via small earpieces with built-in microphones.

No more trying to remember your high school French when checking into a hotel in Paris. Your earpiece will automatically translate “Good evening, I have a reservation” to “Bonsoir, j’ai une réservation” – while immediately translating the receptionist’s unintelligible babble to “I am sorry, Sir, but your credit card has been declined.”

Ross argues that because technological progress is exponential, it’s only a matter of time.

Indeed, some parents are so convinced that this technology is imminent that they’re wondering if their kids should even learn a second language.

Max Ventilla, one of AltSchool Brooklyn’s founders, recently told The New Yorker:

…if the reason you are having your child learn a foreign language is so that they can communicate with someone in a different language twenty years from now – well, the relative value of that is changed, surely, by the fact that everyone is going to be walking around with live-translation apps.

Needless to say, communication is only one of the many advantages of learning another language (and I would argue that it’s not even the most important one).

Furthermore, while it’s undeniable that translation tools like Bing Translator, Babelfish or Google Translate have improved dramatically in recent years, prognosticators like Ross could be getting ahead of themselves.

As a language professor and translator, I understand the complicated nature of language’s relationship with technology and computers. In fact, language contains nuances that are impossible for computers to ever learn how to interpret.

Language rules are special

I still remember grading assignments in Spanish where someone had accidentally written that he’d sawed his parents in half, or where a student and his brother had acquired a well that was both long and pretty. Obviously, what was meant was “I saw my parents” and “my brother and I get along pretty well.” But leave it to a computer to navigate the intricacies of human languages, and there are bound to be blunders.

Earlier this month, when asked about Twitter’s translation feature for foreign-language tweets, the company’s CEO Jack Dorsey conceded that it does not happen in “real time, and the translation is not great.”

Still, anything a computer can “learn,” it will learn. And it’s safe to assume that any finite set of data (like every single work of literature ever written) will eventually make its way into the cloud.

So why not log all the rules by which languages govern themselves?

Simply put: because this is not how languages work. Even if the Florida State Senate has recently ruled that studying computer code is equivalent to learning a foreign language, the two could not be more different.

Programming is a constructed, formal language. Italian, Russian or Chinese – to name a few of the estimated 7,000 languages in the world – are natural, breathing languages which rely as much on social convention as on syntactic, phonetic or semantic rules.

Words don’t indicate meaning

As long as one is dealing with a simple written text, online translation tools will get better at replacing one “signifier” – the name Swiss linguist Ferdinand de Saussure gave to the idea that a sign’s physical form is distinct from its meaning – with another.

Or, in other words, an increase in the quantity and accuracy of the data logged into computers will make them more capable of translating “No es bueno dormir mucho” as “It’s not good to sleep too much,” instead of the faulty “Not good sleep much,” as Google Translate still does.
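As a toy illustration of why swapping signifiers one at a time falls short, here is a naive word-for-word lookup in Python. The mini-dictionary is invented for this example and bears no resemblance to how real translation systems work:

```python
# Naive signifier-for-signifier replacement, Spanish to English.
word_dict = {
    "no": "not",
    "es": "is",
    "bueno": "good",
    "dormir": "sleep",
    "mucho": "much",
}

sentence = "No es bueno dormir mucho"
naive = " ".join(word_dict.get(word.lower(), word) for word in sentence.split())

# Prints "not is good sleep much" -- each word is "translated",
# but the structure and meaning of the sentence are lost.
print(naive)
```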

Replacing a word with its equivalent in the target language is actually the “easy part” of a translator’s job. But even this seems to be a daunting task for computers.

So why do programs continue to stumble on what seem like easy translations?

It’s so difficult for computers because translation doesn’t – or shouldn’t – involve simply translating words, sentences or paragraphs. Rather, it’s about translating meaning.

And in order to infer meaning from a specific utterance, humans have to interpret a multitude of elements at the same time.

Think about all the contextual clues that go into understanding an utterance: volume, pitch, situation, even your culture – all are as likely to convey as much meaning as the words you use. Certainly, a mother’s soft-spoken advice to “be careful” elicits a much different response than someone yelling “Be careful!” from the passenger’s seat of your car.

So can computers really interpret?

As the now-classic book Metaphors We Live By has shown, languages are more metaphorical than factual in nature. Language acquisition often relies on learning abstract and figurative concepts that are very hard – if not impossible – to “explain” to a computer.

Since the way we speak often has nothing to do with the reality that surrounds us, machines are – and will continue to be – puzzled by the metaphorical nature of human communications.

This is why even a promising newcomer to the translation game like the website Unbabel, which defines itself as an “AI-powered human-quality translation,” has to rely on an army of 42,000 translators around the world to fine-tune acceptable translations.

You need a human to tell the computer that “I’m seeing red” has little to do with colors, or that “I’m going to change” probably refers to your clothes and not your personality or your self.

If interpreting the intended meaning of a written word is already overwhelming for computers, imagine a world where a machine is in charge of translating what you say out loud in specific situations.

The translation paradox

Nonetheless, technology seems to be trending in that direction. Just as “intelligent personal assistants” like Siri or Alexa are getting better at understanding what you say, there is no reason to think that the future will not bring “personal assistant translators.”

But translating is an altogether different task than finding the nearest Starbucks, because machines aim for perfection and rationality, while languages – and humans – are always imperfect and irrational.

This is the paradox of computers and languages.

If machines become too sophisticated and logical, they’ll never be able to correctly interpret human speech. If they don’t, they’ll never be able to fully interpret all the elements that come into play when two humans communicate.

Therefore, we should be very wary of a device that is incapable of interpreting the world around us. If people from different cultures can offend each other without realizing it, how can we expect a machine to do better?

Will this device be able to detect sarcasm? In Spanish-speaking countries, will it know when to use “tú” or “usted” (the informal and formal personal pronouns for “you”)? Will it be able to sort through the many different forms of address used in Japanese? How will it interpret jokes, puns and other figures of speech?

Unless engineers actually find a way to breathe a soul into a computer – pardon my figurative speech – rest assured that, when it comes to conveying and interpreting meaning using a natural language, a machine will never fully take our place.

David Arbesú, Assistant Professor of Spanish, University of South Florida

This article was originally published on The Conversation. Read the original article.

How strong academic support can change university students’ lives




Black South African students need fewer excuses and more support from universities.
Kim Ludbrook/EPA

Savo Heleta, Nelson Mandela Metropolitan University

In South Africa tens of thousands of students leave universities each year without completing their degrees. They are largely being pushed out of the system due to funding issues and a lack of academic support.

Funding is a national problem. But what about the lack of comprehensive academic support for students who really need it? The fault here lies squarely with universities.

Universities blame the country’s disastrous public schooling system for the fact that many students enter higher education unprepared.

Public schooling is definitely a massive problem. Research suggests that of one million children who enter Grade 1 in South Africa each year, half do not go on to complete secondary school. Only 100,000 get to university and only 53,000 graduate from university after six years in the tertiary system.

We must stop expecting first-year students – many of whom come from public schools and whose first language isn’t English – to somehow figure out how to cope with the rigorous demands of any university degree without genuine, committed support.

There are some programmes in place to ease the transition. But many students at my own institution have confided in me that these programmes are often inadequate. Most classes to improve second language speakers’ grasp of English are optional, as are workshops on academic preparedness. Some students attend them; others struggle to find time due to packed class schedules.

My institution has a writing centre to support students with essay and assignment writing. The problem is that it’s understaffed and students often have to wait weeks for an appointment.

But there’s a fascinating and troubling contradiction at play: this very same institution offers comprehensive and compulsory programmes to help students who don’t speak English as a first language – as long as they’re international students from outside South Africa. And these programmes work very well, helping students cope with university demands and go on to graduate.

These programmes must be adapted, broadened and rolled out to ensure that South African students who are struggling with English and the demands of university education don’t get left behind.

I’m speaking from experience. Fifteen years ago I barely spoke any English but managed to earn a scholarship to a university in the United States. The support I received there made a world of difference. Similar support can change South African students’ university experience – and their lives, too.

Comprehensive and dedicated support

In 2002 I received a scholarship to study at the College of St Benedict and St John’s University in Minnesota. I’m from Bosnia and Herzegovina, and English isn’t my first language. I learned a bit of English in primary school. Then the war interrupted my primary school education for two years. After the war, the education system was dysfunctional.

When I got to the US in 2002 I could hardly speak, read or write English.

I spent two months in a school for students learning English as a second language, then headed to university. This helped a bit but I needed so much more.

The first year at university was hell, academically speaking. I struggled to understand what was going on around me. I could hardly express myself or write my assignments. Often, I doubted myself and my choice to accept the scholarship. I doubted my own intelligence.

Over the years in South Africa, I have heard many accounts of similar struggles experienced by South African students whose first language isn’t English. They all speak about their inability to engage in English, to cope or to follow lectures. They, too, often think that they are not good enough to be at university.

The best thing about my first year was the English language class I attended with other international students. Our professor taught us to read, write, speak and present in English. There were three classes a week, but she supported us way beyond those set times.

Without her, I probably would have quit my studies. Instead, my marks improved dramatically and my confidence grew. In 2005 I was persuaded by my American friends to write a book about my wartime experiences. I wrote it in English. It was published in 2008.

I’ve been in South Africa since 2007, obtaining a Masters and PhD. Today I write, do research, publish, lecture, present at national and international conferences. All in English.

I didn’t accomplish any of this because I was special. The support I received at the start of my university education made all the difference.

Becoming student-ready institutions

In South Africa, the lack of comprehensive academic support for all who need it is excused by the lack of capacity and the price tag. But surely investing in programmes that bolster student success makes sense? After all, universities receive government funding partly based on their graduate numbers. And more graduates can boost the economy.

In 2013, the Council on Higher Education proposed that university studies and “qualifications should accord with the learning needs of the majority of the student intake”. This, the council argued, would entail extending undergraduate programmes by a year. The first year would become foundational, with students spending a considerable amount of time on compulsory academic preparedness and development.

This has not yet been implemented.

Byron White, vice president for university engagement at Cleveland State University, argues that universities need to stop complaining that their first-year students aren’t prepared for academic life. This approach, White says,

has allowed higher education to deflect accountability. It’s time that we fully embrace the burden of being student-ready institutions … It turns out the problem was not as much about the students as we thought. It was largely us, uninformed about what it takes to help them succeed or unwilling to allocate the resources necessary to put it into practice.

Universities must ditch the excuses and do more. Extensive academic support changes lives. It’s time we got to work.

Savo Heleta, Manager, Internationalisation at Home and Research, Nelson Mandela Metropolitan University

This article was originally published on The Conversation. Read the original article.

Accessible, engaging textbooks could improve children’s learning


It’s not enough for textbooks just to be present in a classroom. They must support learning.
Global Partnership for Education/Flickr, CC BY-NC-ND

Lizzi O. Milligan, University of Bath

Textbooks are a crucial part of any child’s learning. A large body of research has proved this many times and in many very different contexts. Textbooks are a physical representation of the curriculum in a classroom setting. They are powerful in shaping the minds of children and young people.

UNESCO has recognised this power and called for every child to have a textbook for every subject. The organisation argues that

next to an engaged and prepared teacher, well-designed textbooks in sufficient quantities are the most effective way to improve instruction and learning.

But there’s an elephant in the room when it comes to textbooks in African countries’ classrooms: language.

Rwanda is one of many African countries that’s adopted a language instruction policy which sees children learning in local or mother tongue languages for the first three years of primary school. They then transition in upper primary and secondary school into a dominant, so-called “international” language. This might be French or Portuguese. In Rwanda, it has been English since 2008.

Evidence from across the continent suggests that at this transition point, many learners have not developed basic literacy and numeracy skills. And, significantly, they have not acquired anywhere near enough of the language they are about to learn in to be able to engage in learning effectively.

I do not wish to advocate for English medium instruction, and the arguments for mother-tongue based education are compelling. But it’s important to consider strategies for supporting learners within existing policy priorities. Using appropriate learning and teaching materials – such as textbooks – could be one such strategy.

A different approach

It’s not enough to just hand out textbooks in every classroom. The books need to tick two boxes: learners must be able to read them and teachers must feel enabled to teach with them.

Existing textbooks tend not to take these concerns into consideration. The language is too difficult and the sentence structures too complex. The paragraphs are too long, and there are no glossaries to define unfamiliar words. And while textbooks are widely available to those in the basic education system, they are rarely used systematically. Teachers cite the books’ inaccessibility as one of the main reasons for not using them.

A recent initiative in Rwanda has sought to address this through the development of “language supportive” textbooks for primary 4 learners who are around 11 years old. These were specifically designed in collaboration with local publishers, editors and writers.

Language supportive textbooks have been shown to make a difference in some Rwandan classrooms.

There are two key elements to a “language supportive” textbook.

Firstly, they are written at a language level appropriate for the learner. As can be seen in Figure 1, each new concept is introduced in English that is as simple as possible. Sentences and paragraphs are kept short and simple, and the key word (here, “soil”) is repeated numerous times so that the learner becomes accustomed to it.

University of Bristol and the British Council

Secondly, they include features – activities, visuals, clear signposting and vocabulary support – that enable learners to practise and develop their language proficiency while learning the key elements of the curriculum.

The books are full of relevant activities that encourage learners to regularly practise their listening, speaking, reading and writing of English in every lesson. This enables language development.

Crucially, all of these activities are made accessible to learners – and teachers – by offering support in the learners’ first language. In this case, the language used was Kinyarwanda, which is the first language for the vast majority of Rwandan people. However, it’s important to note that initially many teachers were hesitant about incorporating Kinyarwanda into their classroom practice because of the government’s English-only policy.
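To make the first of these elements a little more concrete, here is a rough sketch of the kind of surface features an author or editor might check: short sentences and repetition of the key word. The sample passage and the checks themselves are purely illustrative and are not the project’s actual authoring guidelines:

```python
import re

def readability_check(text, key_word):
    """Report average sentence length (in words) and how often
    the key word is repeated -- two features described above."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    avg_sentence_length = len(words) / max(len(sentences), 1)
    key_word_repeats = words.count(key_word.lower())
    return avg_sentence_length, key_word_repeats

# Invented sample passage in the spirit of the "soil" example:
passage = ("Soil is the top layer of the earth. Plants grow in soil. "
           "Soil holds water for plants. Good soil helps plants grow.")

print(readability_check(passage, "soil"))  # (5.5 words per sentence, 4 repetitions)
```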

Improved test scores

The initiative was introduced with 1075 students at eight schools across four Rwandan districts. The evidence from our initiative suggests that learners in classrooms where these books were systematically used learnt more across the curriculum.

When these learners sat tests before using the books, they scored similar results to those in other comparable schools. After using the materials for four months, their test scores were significantly higher. Crucially, both learners and teachers pointed out how important it was that the books sanctioned the use of Kinyarwanda. The classrooms became bilingual spaces and this increased teachers’ and learners’ confidence and competence.
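For readers curious what “significantly higher” means in practice, the sketch below runs a paired comparison on invented pre- and post-test scores. It illustrates the general logic only; it is not the study’s data or its actual analysis:

```python
from scipy import stats

# Hypothetical scores for the same eight learners before and after
# four months with the new books (invented for illustration):
pre  = [41, 38, 45, 50, 36, 47, 43, 39]
post = [52, 49, 55, 58, 48, 60, 51, 50]

# A paired t-test asks whether the average within-learner gain is
# larger than chance variation alone would plausibly produce.
t_stat, p_value = stats.ttest_rel(post, pre)
print(round(t_stat, 2), round(p_value, 4))
```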

All of this supports the importance of textbooks as effective learning and teaching materials in the classroom and shows that they can help all learners. But authorities mustn’t assume that textbooks are being used or that the existing books are empowering teachers and learners.

Textbooks can matter – but only when consideration is given to the ways they can help all learners can we say that they contribute to quality education for all.

Lizzi O. Milligan, Lecturer in International Education, University of Bath

This article was originally published on The Conversation. Read the original article.