In a joint effort with DiscoResearch, we release a set of new German language models, available on Hugging Face. All models are based on Llama-3-8B and were continually pre-trained on 65B high-quality German tokens from our occiglot-fineweb dataset. As with our prior releases, we provide both base and instruction-tuned versions of the model. In addition to these variants, which were trained solely with an 8k context, we also release a long-context variant (DiscoResearch/Llama3_German_8B_32k). Lastly, we release an experimental checkpoint obtained through a DARE-TIES merge between our instruction-tuned model and the original instruct model provided by Meta. Compared to prior releases, we made several improvements that result in overall stronger models.

  1. Llama-3 as a stronger base model.
    Llama-3 is widely considered to be significantly stronger than comparably sized Mistral models. Additionally, Llama-3 exhibits basic multilingual capabilities that we build upon.
  2. Higher quality data for continual pre-training.
    We utilized 65B tokens that were additionally cleaned and deduplicated (see our latest dataset announcement).
  3. More efficient sample packing during pre-training.
    We ensured that no documents were truncated in the pre-training data while maintaining around 99% packing efficiency. Our benchmark results align with observations from prior research that this step alone can significantly improve performance.
  4. Updated fine-tuning dataset.
    We additionally augmented the instruction-tuning dataset from DiscoResearch, which we also used for our initial German models, to now contain dedicated examples for retrieval-augmented generation (RAG).
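
For reference, the released checkpoints can be loaded with the Hugging Face transformers library like any other Llama-3 model. The snippet below is a minimal sketch, assuming a recent transformers version (with accelerate installed), a GPU with bfloat16 support, and the chat template that ships with the instruction-tuned tokenizer; swap in the model ID of whichever variant from the table below you want to use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Instruction-tuned DiscoLeo checkpoint (see the evaluation table below for all variants).
model_id = "DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",           # requires accelerate
)

# Use the chat template shipped with the tokenizer instead of hand-crafting prompts.
messages = [
    {"role": "user", "content": "Erkläre in zwei Sätzen, was kontinuierliches Pre-Training ist."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```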

Evaluation Results

Preliminary evaluation results can be found below. Please note that the non-English results are based on partially machine-translated datasets and should therefore be interpreted with caution. Additionally, we observed scores to differ widely across different translations of the same benchmarks. Consequently, meaningful comparisons to evaluation results that are, for example, based on the okapi translations are not possible. We are currently working on more suitable multilingual benchmarks. German evaluation results were calculated using GermanBench.

| Model | truthful_qa_de | truthfulqa_mc | arc_challenge | arc_challenge_de | hellaswag | hellaswag_de | MMLU | MMLU-DE | mean |
|---|---|---|---|---|---|---|---|---|---|
| meta-llama/Meta-Llama-3-8B-Instruct | 0.47498 | 0.43923 | 0.59642 | 0.47952 | 0.82025 | 0.60008 | 0.66658 | 0.53541 | 0.57656 |
| DiscoResearch/Llama3_German_8B | 0.49499 | 0.44838 | 0.55802 | 0.49829 | 0.79924 | 0.65395 | 0.62240 | 0.54413 | 0.57743 |
| DiscoResearch/Llama3_German_8B_32k | 0.48920 | 0.45138 | 0.54437 | 0.49232 | 0.79078 | 0.64310 | 0.58774 | 0.47971 | 0.55982 |
| DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1 | 0.53042 | 0.52867 | 0.59556 | 0.53839 | 0.80721 | 0.66440 | 0.61898 | 0.56053 | 0.60552 |
| DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1 | 0.52749 | 0.53245 | 0.58788 | 0.53754 | 0.80770 | 0.66709 | 0.62123 | 0.56238 | 0.60547 |

Document Packing

We also evaluated our packing implementation against the naive approach of concatenating documents and truncating them at the context length. The results below are from initial experiments with a learning rate of 3e-5 and 12k training steps and show improvements comparable to those reported in the original paper.

| Task | Naive Packing | Fewer Truncations Packing | Percentage Increase |
|---|---|---|---|
| truthfulqa_mc | 0.452648 | 0.467687 | 3.32% |
| arc_challenge | 0.517918 | 0.528157 | 1.98% |
| truthful_qa_de | 0.485529 | 0.492979 | 1.53% |
| arc_challenge_de | 0.480375 | 0.493174 | 2.66% |
| hellaswag | 0.776041 | 0.773352 | -0.35% |
| hellaswag_de | 0.655248 | 0.653356 | -0.29% |
| MMLU | 0.573719 | 0.579802 | 1.06% |
| MMLU-DE | 0.504509 | 0.503863 | -0.13% |
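
For illustration, the difference between the two strategies can be sketched as follows. This is a simplified, hypothetical example rather than our actual training code: naive packing concatenates everything and cuts the token stream at the context length, while fewer-truncations packing greedily places whole documents into context windows (first-fit decreasing) and never splits a document; documents longer than the context window would still require separate handling.

```python
from typing import List

CONTEXT_LEN = 8192  # tokens per training sequence (assumed; matches the 8k context used above)


def naive_packing(docs: List[List[int]]) -> List[List[int]]:
    """Concatenate all documents into one stream and cut it into fixed-size
    blocks. Any document spanning a block boundary gets truncated/split."""
    stream = [tok for doc in docs for tok in doc]
    return [stream[i:i + CONTEXT_LEN] for i in range(0, len(stream), CONTEXT_LEN)]


def fewer_truncations_packing(docs: List[List[int]]) -> List[List[int]]:
    """Greedy first-fit-decreasing bin packing: place each whole document into
    the first block with enough remaining space, never splitting a document.
    Documents longer than CONTEXT_LEN would need separate handling."""
    blocks: List[List[int]] = []
    for doc in sorted(docs, key=len, reverse=True):
        for block in blocks:
            if len(block) + len(doc) <= CONTEXT_LEN:
                block.extend(doc)
                break
        else:  # no existing block has room -> open a new one
            blocks.append(list(doc))
    return blocks


def packing_efficiency(blocks: List[List[int]]) -> float:
    """Fraction of positions filled with real tokens; the remainder is padding."""
    return sum(len(b) for b in blocks) / (len(blocks) * CONTEXT_LEN)
```

The packing_efficiency helper indicates how a figure like the roughly 99% efficiency mentioned above can be measured: it is simply the fraction of context-window positions occupied by real tokens rather than padding.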

Thanks and Accreditation

These models are the result of a joint effort between DiscoResearch and Occiglot with support from the DFKI (German Research Center for Artificial Intelligence) and hessian.AI. Occiglot handled data preprocessing and filtering as part of our latest dataset release and shared our compute allocation on hessian.AI's 42 supercomputer. The models were trained and evaluated by Björn Plüster (DiscoResearch, ellamind), with data preparation and project supervision by Manuel Brack (DFKI, TU Darmstadt). Instruction tuning was done with the DiscoLM German dataset created by Jan-Philipp Harries and Daniel Auras (DiscoResearch, ellamind).