Many quality standards and practices evaluate the quality of products and services by measuring actual or probable defects, and translation is much the same as other service industries. The majority of translation quality assessment methods, evaluation metrics, and technologies follow the same concept: counting actual defects (errors), then categorizing them and assigning each a severity level. Errors are counted either across the entire translated content or in randomly extracted representative samples.
From the LISA (Localization Industry Standards Association) QA Model to other models like SAE J2450 Translation Quality Metric, TAUS DQF, and MQM, metrics and frameworks are nearly all based on measuring errors, with the American Translators Association focusing on defining and standardizing error categories.
Some quality management systems and standards, such as ISO 17100:2015 (and previously EN15038:2008), focus on the translation management process, including project management, linguist qualifications, and requirements. Those standards consider whether:
- Right inputs are provided,
- Scope is well defined,
- Qualified linguists are involved,
- Specific steps are followed,
- Roles are well defined, and
- Quality checks are performed.
Follow these steps and the most likely outcome is a successful translation project at the desired quality level.
Vocalink believes in simplification and flexibility. As such, we designed our own quality management system combining the “sweet spots” from ISO 17100:2015 and other quality evaluation metrics.
Although customizable workflows are designed around client requirements and the nature of each project, the fundamental quality procedures include:
- Assessing the source content,
- Defining the target audience,
- Defining and using domain terminology,
- Engaging the right linguists,
- Collaborating with the client’s reviewers (ICRs) and product experts,
- Using the right technology, and
- Conducting human and automated quality checks throughout the translation process.
Quality Assurance professionals—namely Reviewers and Proofreaders—must have access to the same resources as the translator and must ensure that all project instructions are followed. These professionals review the translated content in two stages: first, comparing the translated content to the source material; and second, reviewing the content on its own, separate from its source. They record all errors they find in each stage.
Working in a transparent, collaborative platform, QA professionals (including ICRs) record errors in a list that includes suggested changes and the reasoning behind those changes. Each recorded error is then categorized twice: once by the type of error, and once by its severity.
A numeric value is calculated for each recorded error, combining the weight of the error category with the severity of the error. The evaluation metric then calculates the total weighted value of all recorded errors and compares it to the maximum allowed value, which is itself calculated based on the evaluated sample size.
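The calculation above can be sketched in a few lines of code. Note that the category weights, severity multipliers, and allowed error rate below are purely illustrative assumptions for the sake of the example, not values from Vocalink's metric or any published standard.

```python
# Illustrative sketch of a weighted error-scoring metric.
# All weights and thresholds are hypothetical example values.

CATEGORY_WEIGHTS = {"accuracy": 3, "terminology": 2, "style": 1}
SEVERITY_MULTIPLIERS = {"minor": 1, "major": 3, "critical": 5}

def score_errors(errors, sample_word_count, allowed_per_1000_words=10):
    """Sum the weighted value of all recorded errors and compare it
    to a maximum allowed value scaled by the evaluated sample size.

    errors: list of (category, severity) tuples.
    Returns (total_score, passed).
    """
    total = sum(
        CATEGORY_WEIGHTS[category] * SEVERITY_MULTIPLIERS[severity]
        for category, severity in errors
    )
    # Maximum allowed value grows with the size of the sample.
    max_allowed = allowed_per_1000_words * sample_word_count / 1000
    return total, total <= max_allowed

# One major accuracy error (3 * 3) and one minor style error (1 * 1)
# in a 2,000-word sample: score 10 against a maximum of 20.
errors = [("accuracy", "major"), ("style", "minor")]
total, passed = score_errors(errors, sample_word_count=2000)
```

In practice the weights would differ per content type, as discussed below, but the structure of the calculation stays the same.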
Error categories vary from one content type (software, documentation, marketing, web…) to another. A given error category may be more or less relevant depending on the content involved, and may not be counted at all. For example, functionality is very important in software localization, but it is generally not measured in marketing translation or Transcreation. Style, on the other hand, is very important in translating a marketing piece—giving it a higher weight there—whereas style carries less weight in technical content.
Next week we’ll continue with the third installment in our series on Translation Quality Evaluation. If you missed part one, you can find it here. To speak to someone about Vocalink’s language solutions, please call us at 877.492.7754.