After five months of continuous design and testing across Zambia, the Philippines and Somalia, and with other colleagues around the world, Loop has learnt a lot!
One barrier to feeding back for many people is language. If they do not speak one of the primary languages of the country or service provider, they miss out on vital information and on opportunities to input and engage. For this reason, it was essential for Loop to be accessible to underserved populations, in multiple languages, in near real time, including languages for which machine translation does not yet exist.
In our learning series we highlight what we learnt, our takeaways as a result, and what we will continue to monitor and enquire about:
Loop learning #1: the potential for machine translation is enormous and growing every day.
As a result of the digital revolution, machine translation has an incredible ability to bring more people into conversations and to reduce barriers to access. Over 100 languages are already supported, and new ones are added every month. Translation is far from perfect, but it enables access that otherwise doesn't exist, and it dramatically reduces the cost and time of making content available in multiple languages, where this was previously considered at all. The opportunity this brings for Loop, and for wider development and humanitarian approaches, is significant and only just beginning. However, gaps remain.
Loop's takeaway #1: Integrate machine and human translation systems to adapt as machine translation improves.
Loop has developed a flexible approach using a sliding scale system based on the effectiveness and availability of machine translation. Languages graduate from one approach to the next:
- translated by a human only,
- translated by machine, then checked and improved by a human,
- translated by machine automatically.
We have also built in the ability to benefit from crowdsourced improvements. Anyone can identify a poor translation, propose an improved translation or ask for a conversation thread to be translated into a non-machine translated language.
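To make the sliding scale concrete, here is a minimal sketch in Python of how such tiered routing might work. The tier names, the per-language assignments and the `machine_translate` stub are all hypothetical illustrations under our own assumptions, not Loop's actual implementation.

```python
from enum import Enum

class Tier(Enum):
    HUMAN_ONLY = 1           # no usable machine translation exists yet
    MACHINE_PLUS_REVIEW = 2  # machine draft, checked and improved by a human
    MACHINE_ONLY = 3         # machine translation is good enough to publish

# Hypothetical per-language settings; real tiers would come from field testing.
LANGUAGE_TIERS = {
    "nyanja": Tier.HUMAN_ONLY,
    "somali": Tier.MACHINE_PLUS_REVIEW,
    "french": Tier.MACHINE_ONLY,
}

def machine_translate(text: str, language: str) -> str:
    # Stand-in for a call to a real machine translation service.
    return f"[{language}] {text}"

def translate(text: str, language: str) -> dict:
    """Route a message through the workflow for its language's current tier."""
    tier = LANGUAGE_TIERS.get(language, Tier.HUMAN_ONLY)
    if tier is Tier.HUMAN_ONLY:
        # Queue for a human translator; nothing is published automatically.
        return {"text": None, "needs_human": True, "published": False}
    draft = machine_translate(text, language)
    if tier is Tier.MACHINE_PLUS_REVIEW:
        # Publish only after a human has checked and improved the draft.
        return {"text": draft, "needs_human": True, "published": False}
    return {"text": draft, "needs_human": False, "published": True}

def graduate(language: str) -> None:
    """Move a language up one tier as machine translation improves."""
    tier = LANGUAGE_TIERS.get(language, Tier.HUMAN_ONLY)
    if tier.value < Tier.MACHINE_ONLY.value:
        LANGUAGE_TIERS[language] = Tier(tier.value + 1)
```

The key design point is that the tier lives in configuration, not code, so a language can graduate from one approach to the next without any change to the surrounding system.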
Loop's continued enquiry: We will be watching to learn how accurately machine translation can handle context-specific short feedback.
Loop learning #2: Many primarily oral languages have so many dialects that there is no common written approach.
Through our multiple field tests and numerous prototypes, we learnt that in many primarily oral languages, the dialects are so diverse that often a written version is not understood by all speakers.
For example, we tested Nyanja and Bemba in Zambia with local people who do not speak English confidently. Every time the written version was tested in a new environment, people suggested changes to how it should be written. This became circular, and no common approach for all native speakers was found.
Loop’s takeaway #2: Switch to Interactive Voice Technology as soon as possible – see below
Loop's continued enquiry: We have invested in professional translators to provide much of the static translation of key documents and text. We will monitor the analytics to see how many people read these in underserved languages versus the majority-language versions (English, Spanish, French, Arabic, etc.) in each context. This will teach us whether those who want to read a one-page guideline are only those who are also confident and literate in majority languages.
Loop learning #3: Automated reading of written text breaks down few barriers.
To help remove literacy obstacles, we also built a 'Read to me' function into Loop: the phone reads out any of the text at the click of a button. This function is only available on smartphones, however, and many of the people we wanted to reach who were not confident readers also did not have a smartphone. Thus, the function still misses much of its target population.
In addition, those in Zambia who did have a smartphone were often unfamiliar with the 'Read to me' function and could not easily identify its presence or how to use it.
Loop’s takeaway #3: The 'Read to me' function adds value to a subsection of people.
We will have a 'Read to me' function available on smartphones in the languages we are able to serve.
Loop's continued enquiry: We will monitor the analytics to see whether some subsections of the population benefit from this function, how many people use 'Read to me', and in what languages. This will teach us where to prioritise translations and approaches that promote inclusion and informed access to feedback and engagement through Loop.
Loop learning #4: People who speak underserved languages tend to face other barriers to engagement too.
People who speak only underserved languages were primarily those affected by the digital divide, and many were also not confident readers and writers in any language. They face multiple barriers to feeding back and raising their voices, including:
- literacy,
- access to electricity,
- access to smartphones, and
- access to internet connectivity.
Loop's takeaway #4: Pivot to building Interactive Voice technology.
Loop's technology and approach reflect our aim of being as local and accessible as possible, so we have now prioritised the development of Interactive Voice technology over the phone network (GSM). We can add interactive voice over the internet afterwards, but that does not reach the people facing multiple barriers to engagement.
We have designed, and are testing in Somalia, the following flows:
A) Customer of Aid submits feedback by voice over phone:
- You phone a 3- or 4-digit short code number for free.
- Say the language you speak (e.g. Somali, Mai).
- A recorded message in your selected language (Somali) gives you instructions and records your voice messages.
- Hang up.
- The audio file is transcribed, translated, tagged and posted on Loop.
B) Customer of Aid above receives a reply to their feedback on their phone:
- A person replies to their story in any language, from anywhere.
- The typed reply is translated into Somali.
- The text is recorded onto an audio file.
- The audio file is sent to the phone number.
- The customer of Aid picks up the phone, hears the recording and is invited to reply.
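The two flows above can be sketched as a simple pipeline. Everything below is a hypothetical illustration: the helpers (`transcribe`, `translate_text`, `synthesise_speech`, `place_call`) are stand-ins for real speech-recognition, machine-translation, text-to-speech and telephony services, not Loop's actual code.

```python
# Stand-ins for real ASR, MT, TTS and GSM telephony services (all hypothetical).
def transcribe(audio: bytes, language: str) -> str:
    return audio.decode("utf-8")  # speech to text

def translate_text(text: str, source: str, target: str) -> str:
    return f"[{source}->{target}] {text}"  # machine translation

def synthesise_speech(text: str, language: str) -> bytes:
    return text.encode("utf-8")  # text to speech

CALL_LOG = []  # records outbound calls instead of dialling a real number

def place_call(number: str, audio: bytes) -> None:
    CALL_LOG.append((number, audio))

def handle_inbound_call(audio: bytes, language: str) -> dict:
    """Flow A: a caller's voice message is transcribed, translated and posted."""
    original = transcribe(audio, language)
    return {
        "language": language,
        "original": original,
        "translated": translate_text(original, language, "english"),
    }  # this record would then be tagged and posted on Loop

def deliver_reply(reply: str, reply_language: str,
                  caller_language: str, number: str) -> None:
    """Flow B: a typed reply is translated, recorded as audio and called out."""
    local = translate_text(reply, reply_language, caller_language)
    place_call(number, synthesise_speech(local, caller_language))
```

The point of the sketch is the symmetry: each flow is a short chain of interchangeable services, so any one stage (say, the translation step) can be upgraded independently as the underlying technology improves.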
Loop's continued enquiry: Interactive Voice will help to close the digital divide. Technology continues to improve; how can we best use it to build in efficiencies?
Many of us are familiar with these technologies in our day-to-day lives:
- Automatic speech recognition - Alexa
- Speech to text - WhatsApp
- Text to speech - Read to me buttons on search engines
- Machine translations - Google translate
These can all be linked, in appropriate, safe ways, as they evolve and improve, to revolutionise the Aid sector's ability to listen to, and speak directly with, people who are currently excluded from technology at scale. The possibilities are limitless.
This could be one significant contribution to closing the digital divide.