---
library_name: gliner
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: token-classification
language:
- en
tags:
- nvidia
- pytorch
- PII
- PHI
- GLiNER
- information extraction
- entity recognition
- privacy
---

# GLiNER-PII Model Overview

### Description:
GLiNER-PII is inspired by the Gretel GLiNER PII/PHI models. Built on the GLiNER large-v2.1 base, it detects and classifies a broad range of Personally Identifiable Information (PII) and Protected Health Information (PHI) in structured and unstructured text. It is non-generative and produces span-level entity annotations with confidence scores across 55+ categories.

This model was developed by NVIDIA.

This model is ready for commercial/non-commercial use.
### License/Terms of Use
Use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
### Deployment Geography:
Global
### Use Case:
GLiNER-PII supports detection and redaction of sensitive information across regulated and enterprise scenarios:

- **Healthcare**: Redact PHI in clinical notes, reports, and medical documents.
- **Finance**: Identify account numbers, SSNs, and transaction details in banking and insurance documents.
- **Legal**: Protect client information in contracts, filings, and discovery materials.
- **Enterprise Data Governance**: Scan documents, emails, and data stores for sensitive information.
- **Data Privacy Compliance**: Support GDPR, HIPAA, and CCPA workflows across varied document types.
- **Cybersecurity**: Detect sensitive data in logs, security reports, and incident records.
- **Content Moderation**: Flag personal information in user-generated content.

Note: performance varies by domain, format, and threshold, so validation and human review are recommended for high-stakes deployments. A minimal redaction sketch is shown below.
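The snippet below is a minimal, illustrative redaction sketch built on the same `GLiNER.from_pretrained` / `predict_entities` calls shown in the Usage Recommendation section; the example text, label set, and `[LABEL]` masking convention are assumptions for illustration only, not part of the model's API.

```
from gliner import GLiNER

# Load the PII model (same entry point as in the Usage Recommendation below).
model = GLiNER.from_pretrained("nvidia/gliner-pii")

# Hypothetical input and label set for this example; pick labels that match your use case.
text = "Patient Jane Roe (DOB 04/12/1981) can be reached at jane.roe@example.com."
labels = ["name", "date_of_birth", "email"]

entities = model.predict_entities(text, labels, threshold=0.5)

# Replace each detected span with a [LABEL] placeholder, working right to left
# so earlier character offsets stay valid.
redacted = text
for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
    redacted = redacted[:ent["start"]] + f"[{ent['label'].upper()}]" + redacted[ent["end"]:]

print(redacted)
```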
### Release Date:
Hugging Face 10/28/2025 via https://huggingface.co/nvidia/gliner-pii
## References:
- GLiNER base (Hugging Face): https://huggingface.co/urchade/gliner_large-v2.1
- Gretel GLiNER PII/PHI models: https://huggingface.co/gretelai/gretel-gliner-bi-large-v1.0
- Training dataset: https://huggingface.co/datasets/nvidia/nemotron-pii
- GLiNER library: https://pypi.org/project/gliner/

## Model Architecture:
**Architecture Type:** Transformer
**Network Architecture:** GLiNER
**This model was developed based on urchade/gliner_large-v2.1**
**Number of Model Parameters:** 5.7 × 10^8
## Input:
**Input Type(s):** Text
**Input Format:** UTF-8 string(s)
**Input Parameters:** One-Dimensional (1D)
**Other Properties Related to Input:** supports structured and unstructured text
## Output:
**Output Type(s):** Text
**Output Format:** String
**Output Parameters:** One-Dimensional (1D)
**Other Properties Related to Output:** List of dictionaries with keys {text, label, start, end, score}
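To make the output shape above concrete, here is a small, hedged post-processing sketch; the entity values and the 0.7 confidence cutoff are illustrative assumptions, not recommended settings.

```
# Illustrative post-processing of the output format described above.
# `entities` stands in for the list of dicts returned by model.predict_entities(...).
entities = [
    {"text": "johnd@example.com", "label": "email", "start": 177, "end": 194, "score": 0.99},
    {"text": "maybe-a-name", "label": "name", "start": 10, "end": 22, "score": 0.42},
]

# Keep only high-confidence spans; 0.7 is an example cutoff, not a recommended value.
high_confidence = [e for e in entities if e["score"] >= 0.7]
for e in high_confidence:
    print(f"{e['label']}: {e['text']} ({e['start']}-{e['end']}, score={e['score']:.2f})")
```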
## Software Integration:
**Runtime Engine(s):**
* PyTorch, GLiNER Python library
**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Ampere
* NVIDIA Blackwell
* NVIDIA Hopper
* NVIDIA Lovelace
* NVIDIA Pascal
* NVIDIA Turing
* NVIDIA Volta
* CPU (x86_64)
**Preferred/Supported Operating System(s):**
* Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
## Model Version(s):
- nvidia/gliner-pii
- Version: v1.0

## Training and Evaluation Datasets:

### Training Dataset
**Link:** [nvidia/nemotron-pii](https://huggingface.co/datasets/nvidia/nemotron-pii)
**Data Modality:** Text
**Text Training Data Size:** \~100k records (\~10^5, <1B tokens)
**Data Collection Method:** Synthetic
**Labeling Method:** Synthetic
**Properties:** Synthetic persona-grounded dataset generated with NVIDIA NeMo Data Designer, spanning 50+ industries and 55+ entity types (U.S. and international formats). Includes both structured and unstructured records. Labels were automatically injected during generation.

### Evaluation Datasets
* [Argilla PII](https://huggingface.co/argilla)
* [AI4Privacy](https://huggingface.co/ai4privacy)
* [Gretel PII Dataset V1/V2](https://huggingface.co/datasets/gretelai/gretel-pii-masking-en-v1)

**Data Collection Method:** Hybrid: Automated, Human
**Labeling Method:** Hybrid: Automated, Human
**Evaluation Results**
From the combined evaluation across Argilla, AI4Privacy, and Gretel PII datasets:

| Benchmark           | Strict F1 |
| ------------------- | --------: |
| Argilla PII         |      0.70 |
| AI4Privacy          |      0.64 |
| nvidia/Nemotron-PII |      0.87 |

We evaluated the model using `threshold=0.3`.
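For reference, "strict" F1 here means span-level matching in which a prediction counts as correct only if its character span and label both match the gold annotation exactly. The sketch below illustrates that metric definition under this assumption; it is not the evaluation harness used to produce the numbers above, and the example spans are made up.

```
# Illustrative strict span-level F1, assuming exact (start, end, label) matching.
def strict_f1(predicted, gold):
    # Each entity is represented as a (start, end, label) tuple.
    pred_set, gold_set = set(predicted), set(gold)
    tp = len(pred_set & gold_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: one exact match, one boundary mismatch.
gold = [(52, 61, "user_name"), (177, 194, "email")]
pred = [(52, 61, "user_name"), (176, 194, "email")]
print(strict_f1(pred, gold))  # 0.5
```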
## Inference:
**Acceleration Engine:** PyTorch (via Hugging Face Transformers)
**Test Hardware:** NVIDIA A100 (Ampere, PCIe/SXM)
## Usage Recommendation

First, make sure you have the gliner library installed:

```
pip install gliner
```

Now, let's try to find an email address, username, and phone number in a messy block of text.

```
from gliner import GLiNER

# 1. Define our new text
text = "Hi support, I can't log in! My account username is 'johndoe88'. Every time I try, it says 'invalid credentials'. Please reset my password. You can reach me at (555) 123-4567 or johnd@example.com"

# 2. Define the labels we're hunting for.
labels = ["email", "phone_number", "user_name"]

# 3. Load the PII model
model = GLiNER.from_pretrained("nvidia/gliner-pii")

# 4. Run the prediction at the given threshold
entities = model.predict_entities(text, labels, threshold=0.5)
print(entities)
```

Sample output:

```
[
  {
    "start": 52,
    "end": 61,
    "text": "johndoe88",
    "label": "user_name",
    "score": 0.99
  },
  {
    "start": 159,
    "end": 173,
    "text": "(555) 123-4567",
    "label": "phone_number",
    "score": 0.99
  },
  {
    "start": 177,
    "end": 194,
    "text": "johnd@example.com",
    "label": "email",
    "score": 0.99
  }
]
```

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Bias, Explainability, Safety & Security, and Privacy Subcards.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).