locAGI is the AI-augmented translation QA pipeline our linguists use every day. It turns any messy post-MT bilingual XLIFF — from memoQ, Trados, XTM, Phrase, or any CAT tool that exports standard XLIFF — into a scored, commented, client-ready deliverable. Humans stay in charge of every final call.
Before any AI touches a segment, we centralise every confirmed term, acronym, and forbidden phrase in a termbase. That way every downstream step — MT, AI post-editing, QA — speaks the same vocabulary as the client.
| EN | DE | Note |
|---|---|---|
| contraindication | Gegenanzeige | § regulatory |
| active ingredient | Wirkstoff | — |
| DO-NOT-TRANSLATE: MedPortal | MedPortal | 🔒 brand |
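A termbase like the one above can be enforced mechanically before any AI step runs. A minimal sketch, assuming a simple in-memory term list — the field names `source`, `target`, `locked` are illustrative, not locAGI's actual schema:

```python
# Illustrative termbase rows mirroring the table above.
TERMBASE = [
    {"source": "contraindication", "target": "Gegenanzeige", "locked": False},
    {"source": "active ingredient", "target": "Wirkstoff", "locked": False},
    {"source": "MedPortal", "target": "MedPortal", "locked": True},  # do-not-translate brand
]

def term_violations(source_text: str, target_text: str) -> list[str]:
    """Return confirmed terms that appear in the source segment but whose
    approved target rendering is missing from the translation."""
    issues = []
    for term in TERMBASE:
        if term["source"].lower() in source_text.lower():
            if term["target"] not in target_text:
                issues.append(term["source"])
    return issues
```

A locked brand term that gets "translated" anyway is exactly the kind of violation this surfaces before a linguist ever sees the file.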
A template is the client's style guide in one payload: free-text instructions, linked termbases, acronym handling, date / number / currency localisation, register, tone. Link a template once — every AI step after that behaves like the client expects.
MT/AI reads the XLIFF TM match score for each segment and adapts its editing intensity: high matches are barely touched, low matches get a full rewrite. 100% matches stay locked by default. You keep the leverage your linguists already earned in their CAT tool.
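The score-to-intensity mapping behaves roughly like the sketch below; the thresholds here are illustrative assumptions, not locAGI's actual defaults:

```python
def edit_mode(match_score: int) -> str:
    """Map a TM match score (0-100) to a post-editing intensity.
    Threshold values are assumptions for illustration only."""
    if match_score >= 100:
        return "locked"   # exact matches stay untouched by default
    if match_score >= 85:
        return "light"    # high fuzzies: terminology and mechanics only
    return "full"         # low fuzzies: complete AI rewrite
```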
When you want every segment rewritten to a target register — not just the low matches — AIPE runs the whole file through an AI model you pick, in chunked batches, building a per-file glossary as it goes so terminology stays consistent across the entire document.
Linguists shouldn't waste time on problems a machine can catch first. The Post-Processing stage applies deterministic fixes that need no judgement: tag balance, number consistency with the source, leading / trailing space, consistent punctuation.
| Check | Segments | Verdict |
|---|---|---|
| Tag balance | 3,412 | ✓ clean |
| Numbers match | 3,412 | ⚠ 4 to review |
| Leading/trailing whitespace | 3,412 | ✓ clean |
| Smart quotes | 3,412 | ⚙ 27 normalised |
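Two of the deterministic checks above can be sketched in a few lines — a source/target number-consistency check and a balance check for XLIFF-style inline `<g>` tags. Locale-aware separator handling (e.g. `2.5` vs `2,5`) is deliberately left out of this sketch:

```python
import re

def numbers_match(source: str, target: str) -> bool:
    """Deterministic check: the multiset of number tokens in the source
    must also appear in the target (locale separators not handled here)."""
    nums = lambda s: sorted(re.findall(r"\d+(?:[.,]\d+)*", s))
    return nums(source) == nums(target)

def g_tags_balanced(text: str) -> bool:
    """Check that inline <g>...</g> tags open and close in matched pairs."""
    depth = 0
    for m in re.finditer(r"<(/?)g\b[^>]*>", text):
        depth += -1 if m.group(1) else 1
        if depth < 0:          # a close before its open
            return False
    return depth == 0
```

Checks like these need no judgement call, which is why they run before a human ever opens the file.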
AI drafts, humans decide. The cleaned bilingual XLIFF opens in whatever CAT tool the linguist prefers — memoQ, Trados Studio, XTM, Phrase, Wordfast, Smartcat — where they confirm, override, or rewrite the AI output with full TM leverage, concordance, and inline term highlighting. This step is outside locAGI by design: we don't lock you into any one vendor.
| # | Source | AI draft | Linguist |
|---|---|---|---|
| 142 | The patient must take the medication before meals. | Der Patient muss das Medikament vor dem Essen einnehmen. | ✓ confirmed |
| 143 | Consult your doctor. | Konsultieren Sie Ihren Arzt. | ✎ edited |
| 144 | Store at room temperature. | Bei Raumtemperatur lagern. | ✓ confirmed |
Once the linguist signs off, a second AI pass reads the finished translation as a reviewer would. It flags potential issues using the industry-standard MQM framework — typed errors with a 1-5 severity, a suggested revised target, and a comment. You approve, override, or mark each finding as a false positive, then export the results as Excel and as an annotated bilingual XLIFF.
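A finding from this pass might carry a shape like the following sketch; the field names are assumptions that mirror the export columns described above, not locAGI's documented schema:

```python
from dataclasses import dataclass

@dataclass
class MQMFinding:
    """Illustrative shape of one AI-QA finding."""
    segment_id: int
    category: str          # MQM error type, e.g. "Accuracy/Mistranslation"
    severity: int          # 1 (minor) .. 5 (critical)
    suggested_target: str  # the reviewer's proposed revision
    comment: str
    status: str = "open"   # open | approved | overridden | false_positive

finding = MQMFinding(
    segment_id=143,
    category="Accuracy/Mistranslation",
    severity=2,
    suggested_target="Fragen Sie Ihren Arzt.",
    comment="Register: style guide prefers a simpler imperative here.",
)
```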
The final XLIFF carries the linguist's confirmed translation, the QA reviewer's findings baked in as CAT-native comments (so they show up in memoQ, Trados, XTM — wherever the client opens the file), and the quality score as metadata. The Excel export gives the project manager the same findings in a format they can share, filter, and archive.
Six things that turn locAGI from a clever tool into a platform you can run a real localisation business on.
Every AI job is split into 150-segment batches. Each batch is a single LLM call with its own prompt, the linked termbase, and a running glossary harvested from earlier batches so terminology and tone stay consistent across a 10,000-segment file. Partial failures retry segment-by-segment — nothing is dropped silently.
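The batch-plus-fallback flow above can be sketched as follows, with `translate_batch` and `translate_one` standing in for the actual LLM calls (hypothetical function names, not locAGI internals):

```python
BATCH_SIZE = 150  # one LLM call per batch, as described above

def batches(segments: list, size: int = BATCH_SIZE):
    """Split a segment list into fixed-size batches."""
    for i in range(0, len(segments), size):
        yield segments[i:i + size]

def run_aipe(segments, translate_batch, translate_one):
    """Illustrative retry policy: a failed batch falls back to
    segment-by-segment calls so nothing is dropped silently."""
    glossary, out = {}, []
    for batch in batches(segments):
        try:
            results, new_terms = translate_batch(batch, glossary)
        except Exception:
            results, new_terms = [], {}
            for seg in batch:
                result, terms = translate_one(seg, glossary)
                results.append(result)
                new_terms.update(terms)
        glossary.update(new_terms)  # harvested terms feed later batches
        out.extend(results)
    return out
```

The running glossary is the piece that keeps segment 9,800 using the same terminology as segment 12.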
Any provider, not a fixed list. locAGI is a gateway: plug in any provider that speaks a chat-completions API and it slots into every step. Swap models per project, per stage, or let the platform rank them by past performance on your domain. Your compliance team — not us — decides which providers are allowed for which data.
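The gateway idea amounts to a routing table from pipeline stage to provider endpoint. A minimal sketch — every provider name, URL, and model string below is invented for illustration:

```python
# Hypothetical provider registry: any endpoint speaking the
# chat-completions wire format can be slotted into a stage.
PROVIDERS = {
    "cloud-gw":   {"base_url": "https://llm.example.com/v1", "model": "frontier-large"},
    "local-vllm": {"base_url": "http://127.0.0.1:8000/v1",   "model": "open-70b"},
}

STAGE_ROUTING = {        # swap models per project or per stage
    "aipe":      "cloud-gw",
    "qa_review": "local-vllm",   # e.g. compliance keeps QA on-prem
}

def provider_for(stage: str) -> dict:
    """Resolve the provider config a given pipeline stage should call."""
    return PROVIDERS[STAGE_ROUTING[stage]]
```

Because routing is data, a compliance team can restrict a stage to an on-prem endpoint without touching the pipeline itself.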
Need a new rule, a new export format, a new MQM category? Plugin hooks for import, export, QA checks, and MT providers. Customer suggestions that benefit the whole platform are implemented free of charge, typically within a week. You're not paying twice for the roadmap.
Share findings with an external linguist via a secure signed link — they edit directly in the browser, no install, no account — or hand them the commented bilingual XLIFF + Excel package for offline work in their own CAT tool. Their edits flow back into the project automatically either way.
Every action the UI can perform is available over HTTP: create a project, upload an XLIFF, launch AIPE, poll QA status, fetch findings, trigger exports. Wire locAGI into your TMS, your CI, or your project-management tool.
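Since the actual routes aren't documented here, the sketch below uses hypothetical paths purely to illustrate the shape of an automated flow against such an API:

```python
# Assumed base URL and endpoint paths — illustrative, not locAGI's
# documented API.
BASE = "https://locagi.example.com/api/v1"

def endpoint(action: str, project_id=None) -> str:
    """Build the URL for one of the pipeline actions named above."""
    routes = {
        "create_project": f"{BASE}/projects",
        "upload_xliff":   f"{BASE}/projects/{project_id}/files",
        "launch_aipe":    f"{BASE}/projects/{project_id}/aipe",
        "qa_status":      f"{BASE}/projects/{project_id}/qa",
        "export":         f"{BASE}/projects/{project_id}/exports",
    }
    return routes[action]

# A typical integration: POST create_project, POST upload_xliff,
# POST launch_aipe, poll GET qa_status until done, then GET export.
```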
Same product, three delivery modes. Use our managed cloud for fastest start; run it on your own private server when data has to stay inside your firewall; or install locally on a linguist's workstation for on-device privacy. Full feature parity in all three.
The pipeline is designed to be used end-to-end, but every stage stands on its own. Run an AI-only project with AIPE, use just the QA module on a file someone else translated, or drive the whole flow from termbase to delivery.