metadata
annotations_creators:
  - no-annotation
language:
  - de
  - fr
  - el
  - et
  - fi
  - hr
  - ji
  - pl
  - ru
  - sr
  - sv
  - uk
language_creators:
  - machine-generated
multilinguality:
  - multilingual
pretty_name: Europeana Newspapers
size_categories:
  - 1M<n<10M
source_datasets: []
tags:
  - newspapers
  - lam
  - OCR
task_categories:
  - text-generation
task_ids:
  - language-modeling
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.parquet
  - config_name: de
    data_files:
      - split: train
        path: data/de-*.parquet
  - config_name: el
    data_files:
      - split: train
        path: data/el-*.parquet
  - config_name: et
    data_files:
      - split: train
        path: data/et-*.parquet
  - config_name: fi
    data_files:
      - split: train
        path: data/fi-*.parquet
  - config_name: fr
    data_files:
      - split: train
        path: data/fr-*.parquet
  - config_name: hr
    data_files:
      - split: train
        path: data/hr-*.parquet
  - config_name: ji
    data_files:
      - split: train
        path: data/ji-*.parquet
  - config_name: multi_language
    data_files:
      - split: train
        path: data/multi_language-*.parquet
  - config_name: no_language_found
    data_files:
      - split: train
        path: data/no_language_found-*.parquet
  - config_name: pl
    data_files:
      - split: train
        path: data/pl-*.parquet
  - config_name: ru
    data_files:
      - split: train
        path: data/ru-*.parquet
  - config_name: sr
    data_files:
      - split: train
        path: data/sr-*.parquet
  - config_name: sv
    data_files:
      - split: train
        path: data/sv-*.parquet
  - config_name: uk
    data_files:
      - split: train
        path: data/uk-*.parquet

Dataset Card for Europeana Newspapers

Dataset Overview

This dataset contains historic newspapers from Europeana, processed and converted to a format more suitable for machine learning and digital humanities research. In total, the collection contains approximately 32 billion tokens across multiple European languages, spanning from the 18th to the early 20th century.

Created by the BigLAM initiative, this unofficial version extracts text content from ALTO XML and converts it into a parquet format, making it more accessible for ML/AI-based work and large-scale digital humanities/history research.

Key Features

  • Massive historical corpus: One of the largest collections of historical text data available in a machine-learning friendly format
  • Cross-lingual coverage: Includes 12 European languages with varying degrees of representation
  • OCR quality metrics: Contains confidence scores to allow filtering based on text quality
  • Rich metadata: Preserves publication information, dates, and links to original materials
  • Structured format: Organized by language and decade for efficient access to specific subsets
  • Illustration data: Includes bounding box coordinates for visual elements on newspaper pages
  • IIIF integration: Direct links to high-quality images of the original documents

Dataset Details

Dataset Description

  • Curated by: BigLAM initiative
  • Language(s): German (de), French (fr), Greek (el), Estonian (et), Finnish (fi), Croatian (hr), Yiddish (ji), Polish (pl), Russian (ru), Serbian (sr), Swedish (sv), Ukrainian (uk)
  • License: [More Information Needed]

Dataset Structure

Each record in the dataset contains the following fields:

  • text (string): Extracted text content from each newspaper page
  • mean_ocr (float): Mean OCR confidence score (0-1 scale; higher values indicate higher confidence)
  • std_ocr (float): Standard deviation of the OCR confidence scores (indicates consistency of recognition quality)
  • bounding_boxes (list of lists): Coordinates of illustrations on the page, in the format [HEIGHT, WIDTH, VPOS, HPOS]
  • title (string): Newspaper title
  • date (string): Publication date in ISO format (YYYY-MM-DD)
  • language (list): Language codes of the content (supports multi-language detection)
  • item_iiif_url (string): IIIF URL for accessing the original digitized image
  • multi_language (boolean): Flag indicating whether the page contains multiple languages
  • issue_uri (string): Persistent URI for the newspaper issue in Europeana
  • id (string): Unique identifier combining the issue URI and page number
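
For a quick look at these fields, the snippet below streams a single record from the French configuration (one of the per-language configs defined in the metadata above) and prints a few values; streaming avoids downloading the full subset first.

from datasets import load_dataset

# Stream the "fr" config so nothing is downloaded up front
ds = load_dataset("biglam/europeana_newspapers", "fr", split="train", streaming=True)

record = next(iter(ds))
print(record["title"], record["date"], record["language"])
print(record["mean_ocr"], record["std_ocr"])
print(record["text"][:200])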

Data Splits

The dataset is organized into files by:

  • Language (e.g., 'fr' for French, 'de' for German)
  • Decade (e.g., '1770' for newspapers from the 1770s)

This organization allows researchers to easily access specific subsets of the data relevant to their research questions.
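
To see this layout concretely, you can list the Parquet files in the repository and group them by language and decade; the sketch below assumes the data/<language>-<decade>.parquet naming used by the configs above.

from collections import defaultdict
from huggingface_hub import list_repo_files

files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
decades_by_language = defaultdict(set)
for f in files:
    if f.startswith("data/") and f.endswith(".parquet"):
        name = f.removeprefix("data/").removesuffix(".parquet")
        language, _, decade = name.partition("-")
        decades_by_language[language].add(decade)

for language, decades in sorted(decades_by_language.items()):
    print(language, sorted(decades))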

Uses

Direct Use

To download the full dataset using the Datasets library:

from datasets import load_dataset

dataset = load_dataset("biglam/europeana_newspapers")
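
Because the full collection is very large, you may prefer to load a single language configuration by name (the config names match the language codes listed in the metadata above), or to stream the data instead of downloading it all:

# Load only the German subset via its named config
ds_de = load_dataset("biglam/europeana_newspapers", "de")

# Or iterate over the data without downloading everything up front
ds_stream = load_dataset("biglam/europeana_newspapers", split="train", streaming=True)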

You can also access a subset based on language or year ranges using the following function:

from typing import List, Optional, Literal
from huggingface_hub import hf_hub_url, list_repo_files

LanguageOption = Literal[
    "et",  # Estonian
    "pl",  # Polish
    "sr",  # Serbian
    "ru",  # Russian
    "sv",  # Swedish
    "no_language_found",
    "ji",  # Yiddish
    "hr",  # Croatian
    "el",  # Greek
    "uk",  # Ukrainian
    "fr",  # French
    "fi",  # Finnish
    "de",  # German
    "multi_language"
]


def get_files_for_lang_and_years(
    languages: Optional[List[LanguageOption]] = None,
    min_year: Optional[int] = None,
    max_year: Optional[int] = None,
):
    """
    Get dataset file URLs filtered by language and/or year range.
    
    Args:
        languages: List of language codes to include
        min_year: Minimum year to include (inclusive)
        max_year: Maximum year to include (inclusive)
        
    Returns:
        List of file URLs that can be passed to load_dataset
    """
    # List all files in the repository
    files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
    parquet_files = [f for f in files if f.endswith(".parquet")]
    
    # Filter by language if specified (match the language prefix of the file
    # name, e.g. "data/fr-1900.parquet" -> "fr", so that short codes such as
    # "et" do not accidentally match the ".parquet" extension)
    if languages:
        parquet_files = [
            f
            for f in parquet_files
            if any(f.split("/")[-1].startswith(f"{lang}-") for lang in languages)
        ]
    
    # Filter by year range if specified
    if min_year is not None or max_year is not None:
        filtered_files = []
        for f in parquet_files:
            parts = f.split("-")
            if len(parts) > 1:
                year_part = parts[1].split(".")[0]
                if year_part.isdigit():
                    year = int(year_part)
                    if (min_year is None or min_year <= year) and (max_year is None or year <= max_year):
                        filtered_files.append(f)
        parquet_files = filtered_files

    # Convert local paths to full URLs
    return [
        hf_hub_url("biglam/europeana_newspapers", f, repo_type="dataset")
        for f in parquet_files
    ]

You can use this function to get the URLs for files you want to download from the Hub:

# Example 1: Load French newspaper data
french_files = get_files_for_lang_and_years(['fr'])
ds_french = load_dataset("parquet", data_files=french_files, num_proc=4)

# Example 2: Load Ukrainian and French newspapers between 1900 and 1950
historical_files = get_files_for_lang_and_years(
    languages=['uk', 'fr'], 
    min_year=1900, 
    max_year=1950
)
ds_historical = load_dataset("parquet", data_files=historical_files, num_proc=4)

# Example 3: Load all German newspapers from the 19th century
german_19th_century = get_files_for_lang_and_years(
    languages=['de'], 
    min_year=1800, 
    max_year=1899
)
ds_german_historical = load_dataset("parquet", data_files=german_19th_century, num_proc=4)

Use Cases

This dataset is particularly valuable for:

Machine Learning Applications

  • Training large language models on historical texts
  • Fine-tuning models for historical language understanding
  • Developing OCR post-correction models using the confidence scores
  • Training layout analysis models using the bounding box information
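
For layout-related work, the bounding_boxes field stores illustration regions as [HEIGHT, WIDTH, VPOS, HPOS] (see the field list above); a minimal sketch for converting a record's boxes to the more common (x, y, width, height) convention, using an illustrative helper name, might look like this:

def to_xywh(box):
    """Convert a [HEIGHT, WIDTH, VPOS, HPOS] box to (x, y, width, height)."""
    height, width, vpos, hpos = box
    return (hpos, vpos, width, height)

# `record` is assumed to be a row loaded from this dataset
regions = [to_xywh(box) for box in record["bounding_boxes"]]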

Digital Humanities Research

  • Cross-lingual analysis of historical newspapers
  • Studying information spread across European regions
  • Tracking cultural and political developments over time
  • Analyzing language evolution and shifts in terminology
  • Topic modeling of historical discourse
  • Named entity recognition in historical contexts

Historical Research

  • Comparative analysis of news reporting across different countries
  • Studying historical events from multiple contemporary perspectives
  • Tracking the evolution of public discourse on specific topics
  • Analyzing changes in journalistic style and content over centuries

OCR Development

  • Using the mean_ocr and std_ocr fields to assess OCR quality
  • Filtering content based on quality thresholds for specific applications
  • Benchmarking OCR improvement techniques against historical materials
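
As a starting point, a simple quality filter on the mean_ocr field might look like the sketch below; the 0.9 threshold is arbitrary and should be tuned to your application.

from datasets import load_dataset

ds = load_dataset("biglam/europeana_newspapers", "fi", split="train")

# Keep only pages where the OCR engine was, on average, fairly confident
# (the guard handles any rows that might lack a confidence value)
high_quality = ds.filter(lambda row: row["mean_ocr"] is not None and row["mean_ocr"] >= 0.9)
print(f"kept {len(high_quality)} of {len(ds)} pages")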

Institutional Uses

  • Enabling libraries and archives to provide computational access to their collections
  • Supporting searchable interfaces for digital historical collections
  • Creating teaching resources for historical linguistics and discourse analysis

Dataset Creation

Source Data

The dataset is derived from the Europeana Newspapers collection, which contains digitized historical newspapers from various European countries. The original data is in ALTO XML format, which includes OCR text along with layout and metadata information.

Data Collection and Processing

The BigLAM initiative developed a comprehensive processing pipeline to convert the Europeana newspaper collections from their original ALTO XML format into a structured dataset format suitable for machine learning and digital humanities research:

  1. ALTO XML Parsing: Custom parsers handle various ALTO schema versions (1-5 and BnF dialect) to ensure compatibility across the entire collection.

  2. Text Extraction: The pipeline extracts full-text content while preserving reading order and handling special cases like hyphenated words.

  3. OCR Quality Assessment: For each page, the system calculates (see the illustrative sketch at the end of this section):

    • mean_ocr: Average confidence score of the OCR engine
    • std_ocr: Standard deviation of confidence scores to indicate consistency
  4. Visual Element Extraction: The pipeline captures bounding box coordinates for illustrations and visual elements, stored in the bounding_boxes field.

  5. Metadata Integration: Each page is enriched with corresponding metadata from separate XML files:

    • Publication title and date
    • Language identification (including multi-language detection)
    • IIIF URLs for accessing the original digitized images
    • Persistent identifiers linking back to the source material
  6. Parallel Processing: The system uses multiprocessing to handle the massive collection (approximately 32 billion tokens) efficiently.

  7. Dataset Creation: The processed data is converted to Hugging Face's Dataset format and saved as parquet files, organized by language and decade for easier access.

This processing approach preserves the valuable structure and metadata of the original collection while making it significantly more accessible for computational analysis and machine learning applications.
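
To illustrate the quality metrics from step 3, the sketch below computes mean_ocr and std_ocr from the per-word confidence (WC) attributes of an ALTO page in the way the card describes. It is not the actual BigLAM pipeline, and attribute and element names can vary slightly across ALTO schema versions.

import statistics
import xml.etree.ElementTree as ET

def ocr_confidence_stats(alto_xml: str):
    """Return (mean_ocr, std_ocr) from the WC attributes of ALTO String elements."""
    root = ET.fromstring(alto_xml)
    confidences = [
        float(el.attrib["WC"])
        for el in root.iter()
        if el.tag.endswith("String") and "WC" in el.attrib
    ]
    if not confidences:
        return None, None
    mean_ocr = statistics.fmean(confidences)
    # Population standard deviation; the original pipeline may differ
    std_ocr = statistics.pstdev(confidences) if len(confidences) > 1 else 0.0
    return mean_ocr, std_ocr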

Bias, Risks, and Limitations

  • OCR Quality: The dataset is based on OCR'd historical documents, which may contain errors, especially in older newspapers or those printed in non-standard fonts.
  • Historical Bias: Historical newspapers reflect the biases, prejudices, and perspectives of their time periods, which may include content that would be considered offensive by modern standards.
  • Temporal and Geographic Coverage: The coverage across languages, time periods, and geographic regions may be uneven.
  • Data Completeness: Some newspaper issues or pages may be missing or incomplete in the original Europeana collection.

Recommendations

  • Users should consider the OCR confidence scores (mean_ocr and std_ocr) when working with this data, possibly filtering out low-quality content depending on their use case.
  • Researchers studying historical social trends should be aware of the potential biases in the source material and interpret findings accordingly.
  • For applications requiring high text accuracy, additional validation or correction may be necessary.

More Information

For more information about the original data source, visit Europeana Newspapers.

Dataset Card Contact

Daniel van Strien (daniel [at] hf [dot] co)

For questions about this processed version of the Europeana Newspapers dataset, please contact the BigLAM initiative representative above.