How Are LLMs Shaping the Future of Translation Technology? 


Large language models (LLMs) are trending in the translation industry, and it’s easy to see why. Models such as OpenAI’s GPT-4 can generate human-like translations quickly, accurately, and with minimal intervention, building on the language understanding pioneered by models such as Google’s BERT.

Compared to conventional machine translation (MT), LLMs rely less on bilingual data during pre-training for major languages and handle long sentences better. They excel at translating sentences of fewer than 80 words and perform consistently well on documents of about 500 words, a length that has long been a hurdle for MT.

LLMs are still in their infancy and are prone to mistakes such as terminology mismatches, style discrepancies, and hallucinations. As they improve, their use in translation will no doubt increase. The question is: which tasks will they handle, and which ones will be left to humans?

Use Cases in Localization  

To predict how LLMs will impact translation technology, we must consider their potential uses. Here are four ways the translation industry can integrate these models.  

Multilingual customer support 

LLM-powered chatbots already respond to routine customer questions in multiple languages. In addition to providing instant answers, they gather information and escalate complex issues to human support. 

Chatbots may even deliver more empathetic responses than humans when provided with the appropriate prompts. For example, ChatGPT scored an unprecedented 100% on standardized empathy tests, outperforming the average human score of 70%.

However, linguists continue to play a crucial role in their development as they assist with: 

  • Intent recognition: Linguists help identify the intent and purpose behind user inputs. They create a taxonomy of intents the chatbot should recognize, allowing the system to provide relevant responses based on user queries. 
  • Language generation: Linguists craft linguistically accurate responses that are contextually appropriate and aligned with the chatbot’s brand or purpose, ensuring the chatbot’s language is natural and reflects the desired tone.
  • User experience optimization: Linguists optimize the overall user experience by refining conversation flow. This ensures that chatbot interactions are coherent, engaging, and meet user expectations. 
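The intent-recognition work described above can be sketched as a simple routing layer. The taxonomy, keyword rules, and handoff logic below are illustrative placeholders only; a production system would use an LLM or a trained classifier rather than keyword matching:

```python
# Minimal sketch of intent routing in a multilingual support bot.
# The intent taxonomy and keyword rules are hypothetical examples.

INTENT_TAXONOMY = {
    "order_status": ["where is my order", "track", "delivery"],
    "refund": ["refund", "money back", "return"],
    "escalate": ["speak to a human", "agent", "complaint"],
}

def classify_intent(user_message: str) -> str:
    """Match a message against the intent taxonomy the linguists defined."""
    text = user_message.lower()
    for intent, keywords in INTENT_TAXONOMY.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"

def route(user_message: str) -> str:
    """Answer routine intents; escalate complex or unrecognized ones."""
    intent = classify_intent(user_message)
    if intent in ("escalate", "unknown"):
        return "handoff_to_human"
    return f"answer:{intent}"
```

In this sketch, anything the taxonomy doesn’t cover is escalated to a human agent, mirroring how chatbots gather information and hand off complex issues.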

Content personalization  

LLMs can also assist with AI-powered personalization by generating content such as emails, in-app messages, and recommendations based on user interactions, preferences, and historical data. This process allows companies to develop highly targeted, personalized campaigns that increase customer engagement and conversion rates. For example, an online retailer could send special offers based on past purchases and browsing behavior. 

As with chatbots, linguists are involved in key areas of the process, including:

  • Content curation. Linguists curate and categorize content. Using their linguistic and cultural expertise, they assess the relevance, quality, and appropriateness of different pieces of content.
  • Fine-tuning algorithms. Linguists work with data scientists and machine learning engineers to fine-tune algorithms. They provide insights into linguistic patterns, sentiment analysis, and other language-related features that can improve content recommendation accuracy and relevance.
  • User feedback analysis. Linguists analyze user feedback to understand how well personalized content aligns with user expectations. This feedback loop is valuable for continuous improvement in personalization algorithms.
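The retailer example above can be sketched as a simple offer-selection step. The catalog mapping and purchase data here are hypothetical; in a real pipeline these signals would feed an LLM or recommender model, not a lookup table:

```python
# Toy sketch of history-based personalization: map past purchases to offers.
# RELATED_OFFERS is an invented example catalog, not a real API.

RELATED_OFFERS = {
    "running shoes": "10% off running socks",
    "coffee maker": "discount on coffee beans",
}

def pick_offers(purchase_history: list[str]) -> list[str]:
    """Return targeted offers for items the user has already bought."""
    return [RELATED_OFFERS[item] for item in purchase_history
            if item in RELATED_OFFERS]
```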

Terminology consistency and glossary management  

LLMs can even help maintain consistent terminology across localized content. Localization often involves industry-specific or brand-specific terms. LLMs can analyze and suggest translations for such terms, helping localization teams adhere to established glossaries.

Linguists help train and improve the model’s understanding of domain-specific terminology through:

  • Glossary creation and maintenance. Linguists create and maintain glossaries containing industry-specific and brand-specific terms and other specialized vocabulary.
  • Contextual understanding. Linguists improve the contextual understanding of terminology usage, which helps models distinguish between multiple meanings and choose the best translation depending on the context.
  • Fine-tuning for industry-specific terms. Linguists fine-tune LLMs to understand better and generate content related to specific domains. This process involves training the models with additional data, which ensures more accurate and contextually appropriate translations.
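One way glossary adherence is enforced in practice is to inject required terms into the prompt and then verify the output against the glossary. This is a minimal sketch with an invented one-entry English-to-French glossary; real glossaries and matching logic are far richer:

```python
# Sketch: prompting with a glossary, then checking the output against it.
# The glossary entry is an illustrative example.

GLOSSARY = {"cloud storage": "stockage en nuage"}  # EN -> FR

def build_prompt(source: str, glossary: dict[str, str]) -> str:
    """Embed required glossary terms in the translation prompt."""
    terms = "\n".join(f'- "{en}" -> "{fr}"' for en, fr in glossary.items())
    return (f"Translate to French, using these required terms:\n"
            f"{terms}\n\nText: {source}")

def check_glossary(source: str, translation: str,
                   glossary: dict[str, str]) -> list[str]:
    """Return glossary terms present in the source but missing from the translation."""
    return [en for en, fr in glossary.items()
            if en in source.lower() and fr not in translation.lower()]
```

Any terms the check returns would be flagged for a linguist to correct, closing the loop between the glossary and the model’s output.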

Human-in-the-loop (HITL) systems

HITL systems are another approach. In this setup, LLMs provide initial translations or suggestions, which human translators then refine. Pairing human translators with the model can overcome LLM limitations and biases and ensure higher-quality, more accurate translations.
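The HITL workflow can be sketched as a simple data structure: each segment carries the machine draft and the translator’s final decision. This is an assumed minimal design, not any particular tool’s schema:

```python
# Minimal sketch of a human-in-the-loop translation segment.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    source: str
    machine_draft: str       # initial LLM suggestion
    human_final: str = ""    # translator's refined version
    reviewed: bool = False

def review(segment: Segment, human_edit: Optional[str] = None) -> Segment:
    """The translator either accepts the machine draft or supplies a correction."""
    segment.human_final = human_edit if human_edit is not None else segment.machine_draft
    segment.reviewed = True
    return segment
```

Recording both versions also yields edit-distance data that can be used to measure and improve the model over time.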

The Translation Challenges of LLMs   

Yet, despite the potential of LLM translation technology, its use is largely experimental. These models remain limited due to a variety of challenges. Here are five issues LLMs must address before they become the norm in translation.   

Translation accuracy 

While LLMs can produce translations, they weren’t designed for the task. English-language corpora dominate training data, so translations into other languages may be less accurate. This is especially true for languages with fewer resources, such as languages of lesser diffusion.

One possible exception is CroissantLLM, which claims to be a truly bilingual French-English language model. Building multilingual models and training English-language models on more data could improve accuracy.

Pre-trained data sets

The next challenge lies in training. LLMs are trained using fixed data representing knowledge up to a given point in time. And problems can arise from inaccurate or outdated information. 

Although pre-trained data sets can provide valuable insights into language nuances, they may not consider a domain’s specific needs. As a result, they’re less relevant for dynamic fields such as technology, finance, and medicine. 

Humans are still better at translating specialized content. They’re better at capturing the nuances of the language and have a deeper understanding of these domains. LLMs need additional training in specific fields to achieve the same accuracy and quality. 

Bias in training data

However, training LLMs on additional data can also cause problems. These models tend to assimilate bias from their training data, resulting in sexist, racist, and ableist tendencies. As they become more powerful, bias increases. For example, a model with 280 billion parameters showed 29% more toxicity than one with 117 million parameters. 

To combat this problem, linguists must continually monitor training data to identify and eliminate bias. That includes employing regular data reviews and processes to detect training data bias.

Capacity limitations 

Besides data challenges, LLMs have capacity limitations that prevent them from translating larger, more complex texts. 

Models can process only a maximum number of input tokens, which prevents LLMs from comprehending or producing outputs beyond that threshold. GPT-3.5, for example, has a 2,048-token limit (approximately 1,500 words), while GPT-4 extends its capacity to about 25,000 words. While this is a major improvement, LLMs must continue expanding their context windows before they can be widely used in the translation industry.
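In practice, documents that exceed the context window are split into chunks before translation. The sketch below uses the common rough heuristic of about 0.75 words per token for English; real systems count tokens with the model’s own tokenizer (e.g. tiktoken) rather than this approximation:

```python
# Sketch: split a document into chunks that fit a model's token budget.
# The 0.75 words-per-token ratio is a rule of thumb, not an exact figure.

def chunk_by_token_budget(words: list[str], max_tokens: int = 2048,
                          words_per_token: float = 0.75) -> list[list[str]]:
    """Greedily slice a word list into context-window-sized chunks."""
    max_words = int(max_tokens * words_per_token)  # ~1,500 words for 2,048 tokens
    return [words[i:i + max_words] for i in range(0, len(words), max_words)]
```

Chunking keeps each request within the limit, but it also severs cross-chunk context, which is one reason long-document translation remains hard.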

Prohibitive Costs 

Lastly, LLMs can be prohibitively expensive, which poses a major challenge when developing software for a broader audience.

Models use large amounts of computing power, making their size challenging to scale. Integrating them into existing translation systems may require significant software architecture changes. 

Their size can also slow translation. On average, MT models return a translation in about 0.3 seconds, while LLMs can take 30 seconds. This delay raises concerns about real-time deployments. Until LLMs can match the speed of MT, they may not be a viable option for certain projects.

What’s Next for Large Language Models? 

Fortunately, researchers are already exploring ways to solve common LLM issues. Here are a few ways they’re tackling inaccuracies, limited data sets, and inefficiency. 

Fact-Checking 

We noted that pre-trained data sets limit an LLM’s ability to provide accurate, up-to-date information. To overcome this challenge, models need access to external sources for reference. Google’s REALM and Facebook’s RAG are two examples that use citations and references, similar to human researchers.

OpenAI’s WebGPT is another promising option. A fine-tuned version of its GPT model, WebGPT uses Microsoft Bing to generate precise answers. Research shows that WebGPT outperforms GPT-3 in accuracy and informativeness.
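The common thread in REALM, RAG, and WebGPT is retrieve-then-generate: fetch supporting passages first, then condition the answer on them. This toy sketch uses keyword overlap over an invented two-passage corpus; real systems use dense vector retrieval over large indexes:

```python
# Toy retrieve-then-generate sketch. CORPUS and the overlap scoring are
# illustrative stand-ins for a real retriever.

CORPUS = [
    "GLaM uses a mixture-of-experts architecture.",
    "WebGPT answers questions using web search results.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Condition generation on retrieved evidence instead of parametric memory."""
    evidence = "\n".join(retrieve(query, CORPUS))
    return f"Answer using only this evidence:\n{evidence}\n\nQuestion: {query}"
```

Grounding the prompt in retrieved text is what lets these systems cite sources and stay current, rather than relying solely on what was frozen into the weights at training time.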

Synthetic training data 

Next up is training data. Research is underway into models that generate their own large-scale training data, solving the problem of limited data sets. In fact, a Google study created a model capable of:

  • Creating questions,
  • Generating comprehensive answers,
  • Filtering responses, 
  • Fine-tuning itself.

The result? Superior performance across multiple language tasks. By optimizing themselves, models can reduce the bias and toxicity of their outputs and fine-tune their performance with desired data sets.

Sparse expertise models 

Lastly, sparse expertise models offer an efficient alternative to dense models that slow performance. 

Several language models use densely activated transformers, including OpenAI’s GPT-3, NVIDIA and Microsoft’s Megatron-Turing, and Google’s BERT. Dense models activate all of their parameters for every input, which makes them less efficient and more costly to run.

Sparse expertise models, such as Google’s GLaM, activate only a subset of their parameters for each input. Despite being roughly seven times larger than GPT-3, GLaM consumes two-thirds less energy for training and inference, and it outperforms GPT-3 on natural language tasks. Future language models can benefit from this approach because it is more efficient and environmentally friendly.

LLMs and the Future of Translation 

In short, LLMs have the potential to significantly enhance translation and localization technology. However, they’ll likely require ongoing refinement to improve accuracy and efficiency. Linguists, translators, and other localization experts will continue to contribute to translation. LLMs are valuable tools, but they won’t replace human expertise. 

At Vistatec, we combine cutting-edge technology with human expertise to provide the highest quality translations. Contact us today to learn more.