The core of my app is an SQLite database containing over 16,000 strings.
My goal is to translate them into multiple languages in the most performant, memory-efficient, and maintainable way.
I understand how to handle standard UI strings using different strings.xml files. My question is specifically about the large, static dataset in the database.
I have explored two approaches, and each seems to have a significant drawback:
Option 1: Using XML String Resources (getIdentifier()):
Move all 16,000 meanings into res/values/strings.xml with names like <string name="word_1">text</string>. Then, in my code, fetch the string dynamically using context.getResources().getIdentifier("word_" + id, "string", context.getPackageName()).
The official documentation discourages getIdentifier() because looking up resources by name is much less efficient than using the generated IDs directly, and I'm worried about that cost across this many strings. A rough sketch of what I mean is below.
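For concreteness, here is a minimal sketch of the lookup I have in mind; the helper class, the ID cache, and the word_<id> naming scheme are my own placeholders, not anything the framework prescribes:

```java
import android.content.Context;
import android.util.SparseArray;

// Placeholder helper for the resource-based lookup; the class name and the
// "word_" + id naming convention are my own, not a framework API.
public class ResourceWordLookup {

    private final Context context;
    // Cache resolved resource IDs so getIdentifier() runs at most once per word.
    private final SparseArray<Integer> resIdCache = new SparseArray<>();

    public ResourceWordLookup(Context context) {
        this.context = context.getApplicationContext();
    }

    public String getMeaning(int wordId) {
        Integer resId = resIdCache.get(wordId);
        if (resId == null) {
            resId = context.getResources().getIdentifier(
                    "word_" + wordId, "string", context.getPackageName());
            resIdCache.put(wordId, resId);
        }
        // getIdentifier() returns 0 when no resource matches the name.
        return resId == 0 ? null : context.getString(resId);
    }
}
```

Caching the resolved IDs would presumably soften the getIdentifier() cost, but it still means 16,000+ entries in every strings.xml, which is the part I'm unsure about.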
Option 2: Normalized Database (separate translations table):
Create a word_translations table with columns like word_id, language_code, and translated_meaning, then JOIN against it filtered by the user's current language (a rough sketch follows below).
This seems like the "correct" database design, but I'm trying to avoid adding complexity to the database schema and in-app migration logic if possible. I'm hoping to leverage Android's resource system more directly for easier management.
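For comparison, this is roughly what the normalized approach might look like; the base words(id, meaning) table, the DAO class, and the English-fallback behavior are assumptions I'm using for illustration, not my existing schema:

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

// Sketch of the normalized approach; the table and column names follow the
// ones mentioned above and are working names only.
public class TranslationDao {

    static final String CREATE_TRANSLATIONS =
            "CREATE TABLE word_translations (" +
            "  word_id INTEGER NOT NULL REFERENCES words(id)," +
            "  language_code TEXT NOT NULL," +
            "  translated_meaning TEXT NOT NULL," +
            "  PRIMARY KEY (word_id, language_code))";

    private final SQLiteDatabase db;

    public TranslationDao(SQLiteDatabase db) {
        this.db = db;
    }

    // Returns the translation for the requested language, falling back to the
    // original meaning in the words table when no translation row exists.
    public String getMeaning(long wordId, String languageCode) {
        Cursor c = db.rawQuery(
                "SELECT COALESCE(t.translated_meaning, w.meaning) " +
                "FROM words w " +
                "LEFT JOIN word_translations t " +
                "  ON t.word_id = w.id AND t.language_code = ? " +
                "WHERE w.id = ?",
                new String[]{languageCode, String.valueOf(wordId)});
        try {
            return c.moveToFirst() ? c.getString(0) : null;
        } finally {
            c.close();
        }
    }
}
```

The composite primary key on (word_id, language_code) keeps lookups indexed, but it also means shipping and migrating the translations inside the database rather than through the resource system, which is exactly the complexity I was hoping to avoid.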
Given a large, static dataset of 16,000+ items, what is the industry-standard, most robust pattern for localizing it in an Android app?