Update translations

Cohee 2024-09-14 16:31:04 +03:00
parent 1cc935796f
commit 33e6ffd36e
15 changed files with 15 additions and 15 deletions

@@ -215,7 +215,7 @@
 <label class="checkbox_label">
 <input id="max_context_unlocked" type="checkbox" />
 <small><span data-i18n="unlocked">Unlocked</span>
-<div id="max_context_unlocked_warning" class="fa-solid fa-circle-info opacity50p " data-i18n="[title]Only enable this if your model supports context sizes greater than 8192 tokens" title="Only enable this if your model supports context sizes greater than 4096 tokens.&#13;Increase only if you know what you're doing."></div>
+<div id="max_context_unlocked_warning" class="fa-solid fa-circle-info opacity50p " data-i18n="[title]Only enable this if your model supports context sizes greater than 8192 tokens" title="Only enable this if your model supports context sizes greater than 8192 tokens.&#13;Increase only if you know what you're doing."></div>
 </small>
 </label>
 </div>
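The hunk above relies on the `data-i18n` convention: a bare value translates the element's text, while a `[title]key` prefix targets the named attribute instead, with the English key doubling as the fallback string. A minimal sketch of that lookup, assuming hypothetical helper names (`parseI18nSpec`, `translate`) rather than SillyTavern's actual implementation:

```javascript
// Parse a data-i18n value: "[title]Some key" targets the "title"
// attribute; a bare value targets the element's text content.
// (Illustrative names; not the project's real API.)
function parseI18nSpec(spec) {
  const match = spec.match(/^\[([^\]]+)\](.*)$/);
  return match ? { attr: match[1], key: match[2] } : { attr: null, key: spec };
}

// Look the key up in a locale dictionary (one of the JSON files in
// this diff); fall back to the English key itself when missing.
function translate(spec, dictionary) {
  const { key } = parseI18nSpec(spec);
  return dictionary[key] ?? key;
}
```

This fallback behavior is why the English source string itself is used as the JSON key in every locale file below: an outdated or missing entry degrades to English instead of breaking the UI.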

@@ -32,7 +32,7 @@
 "Streaming_desc": "عرض الاستجابة لحظيا كما يتم إنشاؤها.",
 "context size(tokens)": "حجم الاحرف (بعدد الاحرف او الرموز)",
 "unlocked": "مفتوح",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "قم بتمكين هذا فقط إذا كانت نموذجك يدعم مقاطع السياق بأحجام أكبر من 4096 رمزًا.",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "قم بتمكين هذا فقط إذا كانت نموذجك يدعم مقاطع السياق بأحجام أكبر من 8192 رمزًا.",
 "Max prompt cost:": "أقصى تكلفة فورية:",
 "Display the response bit by bit as it is generated.": "عرض الاستجابة بتدريج كما يتم إنشاؤها.",
 "When this is off, responses will be displayed all at once when they are complete.": "عند إيقاف هذا الخيار، سيتم عرض الردود جميعها دفعة واحدة عند اكتمالها.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Zeige die Antwort Stück für Stück an, während sie generiert wird.",
 "context size(tokens)": "Größe des Zusammenhangs (Tokens)",
 "unlocked": "Freigeschaltet",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Aktiviere dies nur, wenn dein Modell Kontextgrößen von mehr als 4096 Tokens unterstützt.",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Aktiviere dies nur, wenn dein Modell Kontextgrößen von mehr als 8192 Tokens unterstützt.",
 "Max prompt cost:": "Maximale Sofortkosten:",
 "Display the response bit by bit as it is generated.": "Zeige die Antwort Stück für Stück, während sie generiert wird.",
 "When this is off, responses will be displayed all at once when they are complete.": "Wenn dies ausgeschaltet ist, werden Antworten angezeigt, sobald sie vollständig sind.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Mostrar la respuesta poco a poco según se genera",
 "context size(tokens)": "Tamaño de contexto (tokens)",
 "unlocked": "Desbloqueado",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Habilita esto solo si tu modelo admite tamaños de contexto mayores de 4096 tokens",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Habilita esto solo si tu modelo admite tamaños de contexto mayores de 8192 tokens",
 "Max prompt cost:": "Costo inmediato máximo:",
 "Display the response bit by bit as it is generated.": "Mostrar la respuesta poco a poco a medida que se genera.",
 "When this is off, responses will be displayed all at once when they are complete.": "Cuando esto está apagado, las respuestas se mostrarán de una vez cuando estén completas.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Afficher la réponse bit par bit au fur et à mesure de sa génération",
 "context size(tokens)": "Taille du contexte (en tokens)",
 "unlocked": "Déverrouillé",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Activez cela uniquement si votre modèle prend en charge des tailles de contexte supérieures à 4096 tokens",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Activez cela uniquement si votre modèle prend en charge des tailles de contexte supérieures à 8192 tokens",
 "Max prompt cost:": "Coût rapide maximum :",
 "Display the response bit by bit as it is generated.": "Afficher la réponse morceau par morceau au fur et à mesure de sa génération.",
 "When this is off, responses will be displayed all at once when they are complete.": "Lorsque cette fonction est désactivée, les réponses s'affichent toutes en une fois lorsqu'elles sont complètes.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Birta svarið bita fyrir bita þegar það er myndað.",
 "context size(tokens)": "Stærð samhengis (í táknum eða stöfum)",
 "unlocked": "Opinn",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Virkjið þetta aðeins ef stærð samhengis styður model meira en 4096 tákn.",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Virkjið þetta aðeins ef stærð samhengis styður model meira en 8192 tákn.",
 "Max prompt cost:": "Hámarks skyndikostnaður:",
 "Display the response bit by bit as it is generated.": "Birta svarid bita fyrir bita þegar það er búið til.",
 "When this is off, responses will be displayed all at once when they are complete.": "Þegar þetta er slökkt verða svör birt allt í einu þegar þau eru búin.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Mostra la risposta pezzo per pezzo man mano che viene generata",
 "context size(tokens)": "Dimensione del contesto (token)",
 "unlocked": "Sbloccato",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Abilita solo se il tuo modello supporta dimensioni del contesto superiori a 4096 token",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Abilita solo se il tuo modello supporta dimensioni del contesto superiori a 8192 token",
 "Max prompt cost:": "Costo massimo immediato:",
 "Display the response bit by bit as it is generated.": "Visualizza la risposta pezzo per pezzo mentre viene generata.",
 "When this is off, responses will be displayed all at once when they are complete.": "Quando questo è disattivato, le risposte verranno visualizzate tutte in una volta quando sono complete.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "生成された応答を逐次表示します。",
 "context size(tokens)": "コンテキストのサイズ(トークン数)",
 "unlocked": "ロック解除",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "モデルが4096トークンを超えるコンテキストサイズをサポートしている場合にのみ有効にします",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "モデルが8192トークンを超えるコンテキストサイズをサポートしている場合にのみ有効にします",
 "Max prompt cost:": "最大プロンプトコスト:",
 "Display the response bit by bit as it is generated.": "生成されるたびに、応答を逐次表示します。",
 "When this is off, responses will be displayed all at once when they are complete.": "この機能がオフの場合、応答は完全に生成されたときに一度ですべて表示されます。",

@@ -32,7 +32,7 @@
 "Streaming_desc": "생성되는대로 응답을 조금씩 표시하십시오",
 "context size(tokens)": "컨텍스트 크기 (토큰)",
 "unlocked": "잠금 해제됨",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "모델이 4096 토큰보다 큰 컨텍스트 크기를 지원하는 경우에만 활성화하십시오",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "모델이 8192 토큰보다 큰 컨텍스트 크기를 지원하는 경우에만 활성화하십시오",
 "Max prompt cost:": "최대 프롬프트 비용:",
 "Display the response bit by bit as it is generated.": "생성되는 대답을 조금씩 표시합니다.",
 "When this is off, responses will be displayed all at once when they are complete.": "이 기능이 꺼져 있으면 대답은 완료되면 한 번에 모두 표시됩니다.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Toon de reactie beetje bij beetje zoals deze wordt gegenereerd",
 "context size(tokens)": "Contextgrootte (tokens)",
 "unlocked": "Ontgrendeld",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Schakel dit alleen in als uw model contextgroottes ondersteunt groter dan 4096 tokens",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Schakel dit alleen in als uw model contextgroottes ondersteunt groter dan 8192 tokens",
 "Max prompt cost:": "Maximale promptkosten:",
 "Display the response bit by bit as it is generated.": "Toon het antwoord stuk voor stuk terwijl het wordt gegenereerd.",
 "When this is off, responses will be displayed all at once when they are complete.": "Als dit uit staat, worden reacties in één keer weergegeven wanneer ze compleet zijn.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Exibir a resposta pouco a pouco conforme ela é gerada",
 "context size(tokens)": "Tamanho do contexto (tokens)",
 "unlocked": "Desbloqueado",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Ative isso apenas se seu modelo suportar tamanhos de contexto maiores que 4096 tokens",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Ative isso apenas se seu modelo suportar tamanhos de contexto maiores que 8192 tokens",
 "Max prompt cost:": "Custo imediato máximo:",
 "Display the response bit by bit as it is generated.": "Exibir a resposta bit a bit conforme é gerada.",
 "When this is off, responses will be displayed all at once when they are complete.": "Quando isso estiver desligado, as respostas serão exibidas de uma vez quando estiverem completas.",

@@ -89,7 +89,7 @@
 "Text Completion presets": "Пресеты для Text Completion",
 "Documentation on sampling parameters": "Документация по параметрам сэмплеров",
 "Set all samplers to their neutral/disabled state.": "Установить все сэмплеры в нейтральное/отключенное состояние.",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Включайте эту опцию, только если ваша модель поддерживает размер контекста более 4096 токенов.\nУвеличивайте только если вы знаете, что делаете.",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Включайте эту опцию, только если ваша модель поддерживает размер контекста более 8192 токенов.\nУвеличивайте только если вы знаете, что делаете.",
 "Wrap in Quotes": "Заключать в кавычки",
 "Wrap entire user message in quotes before sending.": "Перед отправкой заключать всё сообщение пользователя в кавычки.",
 "Leave off if you use quotes manually for speech.": "Оставьте выключенным, если вручную выставляете кавычки для прямой речи.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "Поступово відображати відповідь по мірі її створення",
 "context size(tokens)": "Контекст (токени)",
 "unlocked": "Розблоковано",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "Увімкніть це лише в разі підтримки моделлю розмірів контексту більше 4096 токенів",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "Увімкніть це лише в разі підтримки моделлю розмірів контексту більше 8192 токенів",
 "Max prompt cost:": "Максимальна оперативна вартість:",
 "Display the response bit by bit as it is generated.": "Показувати відповідь по бітах по мірі її генерації.",
 "When this is off, responses will be displayed all at once when they are complete.": "Коли це вимкнено, відповіді будуть відображатися разом, коли вони будуть завершені.",

@@ -32,7 +32,7 @@
 "Streaming_desc": "逐位显示生成的回复",
 "context size(tokens)": "上下文长度(以词符数计)",
 "unlocked": "解锁",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "仅在您的模型支持大于4096个词符的上下文大小时启用此选项",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "仅在您的模型支持大于8192个词符的上下文大小时启用此选项",
 "Max prompt cost:": "最大提示词费用:",
 "Display the response bit by bit as it is generated.": "随着回复的生成,逐位显示结果。",
 "When this is off, responses will be displayed all at once when they are complete.": "当此选项关闭时,回复将在完成后一次性显示。",

@@ -32,7 +32,7 @@
 "Streaming_desc": "生成時逐位顯示回應。當此功能關閉時,回應將在完成後一次顯示。",
 "context size(tokens)": "上下文大小(符記數)",
 "unlocked": "解鎖",
-"Only enable this if your model supports context sizes greater than 4096 tokens": "僅在您的模型支援超過4096個符記的上下文大小時啟用此功能",
+"Only enable this if your model supports context sizes greater than 8192 tokens": "僅在您的模型支援超過8192個符記的上下文大小時啟用此功能",
 "Max prompt cost:": "最多提示詞費用",
 "Display the response bit by bit as it is generated.": "生成時逐位顯示回應。",
 "When this is off, responses will be displayed all at once when they are complete.": "關閉時,回應將在完成後一次性顯示。",