CLI tool for GDPR-compliant pseudonymization of French text documents using NLP-based entity detection and reversible mapping
GDPR Pseudonymizer
AI-Assisted Pseudonymization for French Documents with Human Verification
Transform sensitive French documents for safe AI analysis with local processing, mandatory human review, and GDPR compliance.
🎯 Overview
GDPR Pseudonymizer is a privacy-first CLI tool that combines AI efficiency with human accuracy to pseudonymize French text documents. Unlike fully automatic tools or cloud services, we prioritize zero false negatives and legal defensibility through mandatory validation workflows.
Perfect for:
- 🏛️ Privacy-conscious organizations needing GDPR-compliant AI analysis
- 🎓 Academic researchers with ethics board requirements
- ⚖️ Legal/HR teams requiring defensible pseudonymization
- 🤖 LLM users who want to analyze confidential documents safely
✨ Key Features
🔒 Privacy-First Architecture
- ✅ 100% local processing - Your data never leaves your machine
- ✅ No cloud dependencies - Works completely offline after installation
- ✅ Encrypted mapping tables - AES-256-SIV encryption with PBKDF2 key derivation (210K iterations), passphrase-protected reversible pseudonymization
- ✅ Zero telemetry - No analytics, crash reporting, or external communication
🤖 AI + Human Verification
- ✅ Hybrid detection - AI pre-detects 40-50% of entities (NLP + regex patterns)
- ✅ Mandatory validation - You review and confirm all entities (ensures 100% accuracy)
- ✅ Fast validation UI - Rich CLI interface with keyboard shortcuts, <2 min per document
- ✅ Smart workflow - Entity-by-type grouping (PERSON → ORG → LOCATION) with context display
- ✅ Entity variant grouping - Related forms ("Marie Dubois", "Pr. Dubois", "Dubois") merged into one validation item with "Also appears as:" display
- ✅ Batch actions - Confirm/reject multiple entities efficiently
📦 Batch Processing
- ✅ Consistent pseudonyms - Same entity = same pseudonym across 10-100+ documents
- ✅ Compositional matching - "Marie Dubois" → "Leia Organa", "Marie" alone → "Leia"
- ✅ Smart name handling - Title stripping ("Dr. Marie Dubois" = "Marie Dubois"), compound names ("Jean-Pierre" treated as atomic)
- ✅ Selective entity processing - `--entity-types` flag to filter by type (e.g., `--entity-types PERSON,LOCATION`)
- ✅ 50%+ time savings vs manual redaction (AI pre-detection + validation)
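The compositional matching and title stripping described above can be sketched in a few lines. This is a hypothetical illustration: the title list, pseudonym pool, and function names are invented, not the tool's API:

```python
import re

# Honorific titles to strip before matching (illustrative list)
TITLE = re.compile(r"^(?:Dr|Pr|M|Mme|Me)\.?\s+")


def components(name: str) -> list[str]:
    """Split a name into components; 'Jean-Pierre' stays one atomic token."""
    return TITLE.sub("", name.strip()).split()


def pseudonymize(name: str, mapping: dict[str, str], pool: list[str]) -> str:
    """Map each name component consistently, reusing earlier assignments."""
    out = []
    for part in components(name):
        if part not in mapping:
            mapping[part] = pool.pop(0)  # assign a fresh themed component
        out.append(mapping[part])
    return " ".join(out)


mapping: dict[str, str] = {}
pool = ["Leia", "Organa", "Han", "Solo"]
print(pseudonymize("Dr. Marie Dubois", mapping, pool))  # Leia Organa
print(pseudonymize("Marie", mapping, pool))             # Leia
```

Because components are mapped individually, "Marie" on its own reuses the "Leia" component assigned when "Marie Dubois" was first seen, which is the consistency guarantee batch processing relies on.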
🎭 Themed Pseudonyms
- ✅ Readable output - Star Wars, LOTR, or generic French names
- ✅ Maintains context - LLM analysis preserves 85% document utility (validated: 4.27/5.0)
- ✅ Gender-preserving - When NER provides gender classification (PERSON entities)
- ✅ Full entity support - PERSON, LOCATION, and ORGANIZATION pseudonyms for all themes
🚀 Quick Start
Status: 🚀 v1.0.0 – Public Release (February 2026)
Realistic Expectations for v1.0
What v1.0 delivers:
- 🤖 AI-assisted detection - Hybrid NLP + regex detects ~40-50% of entities automatically
- ✅ Mandatory human verification - You review and confirm all entities (2-3 min per document)
- 🎯 100% accuracy guarantee - Human validation ensures zero false negatives
- ⚡ 50%+ faster than manual - Pre-detection saves time vs pure manual redaction
What v1.0 does NOT deliver:
- ❌ Fully automatic "set and forget" processing
- ❌ 85%+ AI accuracy (current: 40-50% with hybrid approach)
- ❌ Optional validation mode (validation is mandatory)
Roadmap
v1.0 (MVP - Q2 2026): AI-assisted CLI with mandatory validation
- Target: Privacy-conscious early adopters who value human oversight
- 100% local processing, encrypted mapping tables, audit trails
v1.1 (Q2-Q3 2026): Quick wins & GDPR compliance
- GDPR Right to Erasure: selective entity deletion (`delete-mapping` command, Article 17)
- Gender-aware pseudonym assignment for French names
- Beta feedback bug fixes and UX improvements
v2.0 (Q3-Q4 2026): GUI & broader accessibility
- Desktop GUI wrapping CLI core (drag-and-drop, visual entity review)
- Standalone executables (.exe for Windows, .app for macOS) – no Python required
- French-first UI with i18n architecture (multi-language ready)
- WCAG AA accessibility for professional/academic contexts
- Target: Non-technical users (HR, legal, compliance teams)
v3.0 (2027+): NLP accuracy & automation
- Fine-tuned French NER model (70-85% F1 target, up from 40-50%)
- Optional `--no-validate` flag for high-confidence workflows
- Confidence-based auto-processing (85%+ F1 target)
- Multi-language support (English, Spanish, German)
⚙️ Installation
See Installation Guide for detailed platform-specific instructions.
Prerequisites
- Python 3.10-3.12 (validated in CI/CD)
Install from PyPI (Recommended)

```bash
pip install gdpr-pseudonymizer

# Download spaCy French model (required - 571MB)
python -m spacy download fr_core_news_lg

# Verify installation
gdpr-pseudo --help
```
Install from Source (Developer)

```bash
# Clone repository
git clone https://github.com/LioChanDaYo/RGPDpseudonymizer.git
cd RGPDpseudonymizer

# Install dependencies via Poetry (quote the spec so the shell doesn't treat > as redirection)
pip install "poetry>=1.7.0"
poetry install

# Install spaCy French model
poetry run python -m spacy download fr_core_news_lg

# Verify installation
poetry run gdpr-pseudo --help
```
Quick Test

```bash
# Test on sample document
echo "Marie Dubois travaille à Paris pour Acme SA." > test.txt
gdpr-pseudo process test.txt

# Or specify custom output file
gdpr-pseudo process test.txt -o output.txt
```

Expected output: "Leia Organa travaille à Coruscant pour Rebel Alliance."
Configuration File (Optional)
Generate a config template to customize default settings:

```bash
# Generate .gdpr-pseudo.yaml template in current directory
poetry run gdpr-pseudo config --init

# View current effective configuration
poetry run gdpr-pseudo config
```

Example .gdpr-pseudo.yaml:

```yaml
database:
  path: mappings.db
pseudonymization:
  theme: star_wars  # neutral, star_wars, lotr
  model: spacy
batch:
  workers: 4  # 1-8 (use 1 for interactive validation)
  output_dir: null
logging:
  level: INFO
```
Note: Passphrase is never stored in config files (security). Use GDPR_PSEUDO_PASSPHRASE env var or interactive prompt. Minimum 12 characters required (NFR12).
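For scripted or batch runs, the passphrase can be supplied through the environment variable mentioned above. The value below is invented, and the length check simply mirrors the 12-character minimum (NFR12) before a long run starts:

```shell
# Supply the passphrase non-interactively (never store it in config files)
export GDPR_PSEUDO_PASSPHRASE='correct-horse-battery-staple'

# Fail fast if the passphrase is below the 12-character minimum (NFR12)
if [ "${#GDPR_PSEUDO_PASSPHRASE}" -lt 12 ]; then
  echo "passphrase too short (NFR12 requires >= 12 chars)" >&2
  exit 1
fi
echo "passphrase length OK"
# gdpr-pseudo process test.txt   # would now run without an interactive prompt
```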
📚 Documentation
Documentation Site: https://liochandayo.github.io/RGPDpseudonymizer/
For Users:
- 📦 Installation Guide - Platform-specific installation instructions
- 📖 Usage Tutorial - Step-by-step usage tutorials
- 📘 CLI Reference - Complete command documentation
- 🎓 Methodology & Academic Citation - Technical approach and GDPR compliance
- ❓ FAQ - Common questions and answers
- 🔧 Troubleshooting - Error reference and solutions
For Developers:
- 📚 API Reference - Module documentation and extension points
- 🏗️ Architecture Documentation - Technical design
- 📊 NLP Benchmark Report - NER accuracy analysis
- 📈 Performance Report - NFR performance validation results
For Stakeholders:
- 🎨 Positioning & Messaging
- 📋 Deliverables Summary
🔬 Technical Details
NLP Library Selection (Story 1.2 - Completed)
After comprehensive benchmarking on 25 French interview/business documents (1,855 entities):

| Library | F1 Score | Precision | Recall | Decision |
|---|---|---|---|---|
| spaCy fr_core_news_lg | 29.5% | 27.0% | 32.7% | ✅ Selected |
| Stanza fr_default | 11.9% | 10.3% | 14.1% | ❌ Rejected |

Why both missed the 85% target:
- Pre-trained models are optimized for news text (not interview/business docs)
- Domain-specific language patterns (conversational, mixed registers)
- ORG detection was catastrophic (3.8% precision = 96% false positives)
Approved Solution:
- ✅ Hybrid approach (NLP + regex) targets 40-50% F1
- ✅ Mandatory validation ensures 100% final accuracy
- 📋 Fine-tuning deferred to v3.0 (70-85% F1 target, requires training data from v1.x/v2.x user validations)
See full analysis: docs/nlp-benchmark-report.md
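The regex half of the hybrid approach can be sketched as below. The patterns and labels here are illustrative only, not the tool's actual rule set:

```python
import re

# Illustrative pattern layer for entities NER tends to miss in French text
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE_FR": re.compile(r"\b0[1-9](?:[ .-]?\d{2}){4}\b"),
}


def regex_detect(text: str) -> list[tuple[str, int, int, str]]:
    """Return (label, start, end, surface form) for every pattern hit."""
    hits = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((label, m.start(), m.end(), m.group()))
    return sorted(hits, key=lambda h: h[1])


hits = regex_detect("Écrivez à marie.dubois@example.fr ou appelez le 06 12 34 56 78.")
```

In a hybrid pipeline, spans like these would be merged with spaCy's NER output and overlaps resolved before the validation step.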
Validation Workflow (Story 1.7 - Complete)
The validation UI provides an intuitive keyboard-driven interface for reviewing detected entities:
Features:
- ✅ Entity-by-type grouping - Review PERSON → ORG → LOCATION in logical order
- ✅ Context display - See 10 words before/after each entity with highlighting
- ✅ Confidence scores - Color-coded confidence from spaCy NER (green >80%, yellow 60-80%, red <60%)
- ✅ Keyboard shortcuts - Single-key actions: [Space] Confirm, [R] Reject, [E] Modify, [A] Add, [C] Change pseudonym
- ✅ Batch operations - Accept/reject all entities of a type at once (Shift+A/R)
- ✅ Help overlay - Press [H] for full command reference
- ✅ Performance - <2 minutes for typical 20-30 entity documents
Workflow Steps:
- Summary screen (entity counts by type)
- Review entities by type with context
- Flag ambiguous entities for careful review
- Final confirmation with summary of changes
- Process document with validated entities
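The review loop above can be pictured as a small per-entity state machine. This is a sketch only; the state and key names are assumptions, not the tool's internals:

```python
from enum import Enum


class EntityState(Enum):
    PENDING = "pending"
    CONFIRMED = "confirmed"
    REJECTED = "rejected"
    MODIFIED = "modified"


# Single-key actions from the validation UI and the states they produce
KEY_ACTIONS = {
    " ": EntityState.CONFIRMED,  # Space: confirm
    "r": EntityState.REJECTED,   # R: reject
    "e": EntityState.MODIFIED,   # E: modify the surface form
}


def review(state: EntityState, key: str) -> EntityState:
    """Apply one keypress; only pending entities may transition."""
    if state is not EntityState.PENDING:
        raise ValueError(f"entity already reviewed: {state.value}")
    try:
        return KEY_ACTIONS[key]
    except KeyError:
        return state  # unrecognized keys leave the entity pending


state = review(EntityState.PENDING, " ")
```

Only entities that end the session as CONFIRMED or MODIFIED would be pseudonymized; REJECTED ones are left untouched in the output.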
Deduplication Feature (Story 1.9): Duplicate entities grouped together - validate once, apply to all occurrences (66% time reduction for large docs)
Entity Variant Grouping (Story 4.6): Related entity forms automatically merged into single validation items. "Marie Dubois", "Pr. Dubois", and "Dubois" appear as one item with "Also appears as:" showing variant forms. Prevents Union-Find transitive bridging for ambiguous surnames shared by different people.
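A minimal sketch of that grouping idea: a full name anchors a group, and titled or bare-surname mentions attach to it by shared surname. The helper names and title list are invented for illustration:

```python
import re

# Honorific titles to strip before comparing surnames (illustrative list)
TITLE = re.compile(r"^(?:Dr|Pr|M|Mme|Me)\.?\s+")


def bare(name: str) -> str:
    return TITLE.sub("", name.strip())


def group_variants(mentions: list[str]) -> dict[str, list[str]]:
    """Attach short variants to the full name that shares their surname."""
    groups: dict[str, list[str]] = {}
    full_names = [m for m in mentions if len(bare(m).split()) > 1]
    for anchor in full_names:
        surname = bare(anchor).split()[-1]
        groups[anchor] = [m for m in mentions
                         if m != anchor and bare(m).split()[-1] == surname]
    return groups


groups = group_variants(["Marie Dubois", "Pr. Dubois", "Dubois"])
# {"Marie Dubois": ["Pr. Dubois", "Dubois"]}
```

Note this naive version would attach an ambiguous surname to every full name that matches; the point of avoiding transitive bridging is precisely that such cases need disambiguation rather than blind merging.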
Technology Stack
| Component | Technology | Version | Purpose |
|---|---|---|---|
| Runtime | Python | 3.10-3.12 | Validated in CI/CD (3.13+ not yet tested) |
| NLP Library | spaCy | 3.8.0 | French entity detection (fr_core_news_lg) |
| CLI Framework | Typer | 0.9+ | Command-line interface |
| Database | SQLite | 3.35+ | Local mapping table storage with WAL mode |
| Encryption | cryptography (AESSIV) | 44.0+ | AES-256-SIV encryption for sensitive fields (PBKDF2 key derivation, passphrase-protected) |
| ORM | SQLAlchemy | 2.0+ | Database abstraction and session management |
| Validation UI | rich | 13.7+ | Interactive CLI entity review |
| Keyboard Input | readchar | 4.2+ | Single-keypress capture for validation UI |
| Testing | pytest | 7.4+ | Unit & integration testing |
| CI/CD | GitHub Actions | N/A | Automated testing (Windows/Mac/Linux) |
🤖 Why AI-Assisted Instead of Automatic?
Short answer: Privacy and compliance require human oversight.
Long answer:
- GDPR defensibility - Human verification provides legal audit trail
- Zero false negatives - AI misses entities, humans catch them (100% coverage)
- Current NLP limitations - French models on interview/business docs: 29.5% F1 out-of-box
- Better than alternatives:
- ✅ vs Manual redaction: 50%+ faster (AI pre-detection)
- ✅ vs Cloud services: 100% local processing (no data leakage)
- ✅ vs Fully automatic tools: 100% accuracy (human verification)
User Perspective:
"I WANT human review for compliance reasons. The AI saves me time by pre-flagging entities, but I control the final decision." - Compliance Officer
🎯 Use Cases
1. Research Ethics Compliance
Scenario: Academic researcher with 50 interview transcripts needing IRB approval
Without GDPR Pseudonymizer:
- ❌ Manual redaction: 16-25 hours
- ❌ Destroys document coherence for analysis
- ❌ Error-prone (human fatigue)
With GDPR Pseudonymizer:
- ✅ AI pre-detection: ~30 min processing
- ✅ Human validation: ~90 min review (50 docs × ~2 min each)
- ✅ Total: 2-3 hours (85%+ time savings)
- ✅ Audit trail for ethics board
2. HR Document Analysis
Scenario: HR team analyzing employee feedback with ChatGPT
Without GDPR Pseudonymizer:
- ❌ Can't use ChatGPT (GDPR violation - employee names exposed)
- ❌ Manual analysis only (slow, limited insights)
With GDPR Pseudonymizer:
- ✅ Pseudonymize locally (employee names → pseudonyms)
- ✅ Send to ChatGPT safely (no personal data exposed)
- ✅ Get AI insights while staying GDPR-compliant
3. Legal Document Preparation
Scenario: Law firm preparing case materials for AI legal research
Without GDPR Pseudonymizer:
- ❌ Cloud pseudonymization service (third-party risk)
- ❌ Manual redaction (expensive billable hours)
With GDPR Pseudonymizer:
- ✅ 100% local processing (client confidentiality)
- ✅ Human-verified accuracy (legal defensibility)
- ✅ Reversible mappings (can de-pseudonymize if needed)
⚖️ GDPR Compliance
How GDPR Pseudonymizer Supports Compliance
| GDPR Requirement | Implementation |
|---|---|
| Art. 25 - Data Protection by Design | Local processing, no cloud dependencies, encrypted storage |
| Art. 30 - Processing Records | Comprehensive audit logs (Story 2.5): operations table tracks timestamp, files processed, entity count, model version, theme, success/failure, processing time; JSON/CSV export for compliance reporting |
| Art. 32 - Security Measures | AES-256-SIV encryption with PBKDF2 key derivation (210,000 iterations), passphrase-protected storage, column-level encryption for sensitive fields |
| Art. 35 - Privacy Impact Assessment | Transparent methodology, cite-able approach for DPIA documentation |
| Recital 26 - Pseudonymization | Consistent pseudonym mapping, reversibility with passphrase |
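The Article 30 operations record described in the table can be pictured with a few lines of stdlib Python. The schema and field names below are a guess at the shape, not the tool's actual schema:

```python
import datetime
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE operations (
        ts TEXT, files_processed INTEGER, entity_count INTEGER,
        model_version TEXT, theme TEXT, success INTEGER, duration_s REAL
    )
""")

# One audit row per pseudonymization operation
conn.execute(
    "INSERT INTO operations VALUES (?, ?, ?, ?, ?, ?, ?)",
    (datetime.datetime.now(datetime.timezone.utc).isoformat(),
     1, 23, "fr_core_news_lg-3.8.0", "star_wars", 1, 6.2),
)

# JSON export for a compliance report
cur = conn.execute("SELECT * FROM operations")
cols = [c[0] for c in cur.description]
report = json.dumps([dict(zip(cols, row)) for row in cur])
```

The same rows could be written out as CSV with `csv.DictWriter` for spreadsheet-based compliance reviews.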
What Pseudonymization Means (Legally)
According to GDPR Article 4(5):
"Pseudonymization means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately."
GDPR Pseudonymizer approach:
- ✅ Personal data replaced: Names, locations, organizations → pseudonyms
- ✅ Separate storage: Mapping table encrypted with passphrase (separate from documents)
- ✅ Reversibility: Authorized users can de-pseudonymize with passphrase
- ⚠️ Note: Pseudonymization reduces risk but does NOT make data anonymous
Recommendation: Consult your Data Protection Officer (DPO) for specific compliance guidance.
🛠️ Development Status
All 4 MVP Epics Complete – v1.0.0 released February 2026.
- ✅ Epic 1: Foundation & NLP Validation (9 stories) – spaCy integration, validation UI, hybrid detection, entity deduplication
- ✅ Epic 2: Core Pseudonymization Engine (9 stories) – pseudonym libraries, encryption, audit logging, batch processing, GDPR 1:1 mapping
- ✅ Epic 3: CLI Interface & Batch Processing (7 stories) – 8 CLI commands, progress reporting, config files, parallel batch, UX polish
- ✅ Epic 4: Launch Readiness (8 stories) – LLM utility validation, cross-platform testing, documentation, NER accuracy suite, performance validation, beta feedback integration, codebase refactoring, launch preparation
- Total: 33 stories, 1077+ tests, 86%+ coverage, all quality gates green
🤝 Contributing
We welcome contributions! See CONTRIBUTING.md for details on:
- Bug reports and feature requests
- Development setup and code quality requirements
- PR process and commit message format
Please read our Code of Conduct before participating.
📧 Contact & Support
Project Lead: Lionel Deveaux - @LioChanDaYo
For questions and support:
- 💬 GitHub Discussions – General questions, use cases
- 🐛 GitHub Issues – Bug reports, feature requests
- 📖 SUPPORT.md – Full support process and self-help checklist
📄 License
This project is licensed under the MIT License.
🙏 Acknowledgments
Built with:
- spaCy - Industrial-strength NLP library
- Typer - Modern CLI framework
- rich - Beautiful CLI formatting
Inspired by:
- GDPR privacy-by-design principles
- Academic research ethics requirements
- Real-world need for safe AI document analysis
Methodology:
- Developed using BMAD-METHOD™ framework
- Interactive elicitation and multi-perspective validation
⚠️ Disclaimer
GDPR Pseudonymizer is a tool to assist with GDPR compliance. It does NOT provide legal advice.
Important notes:
- ⚠️ Pseudonymization reduces risk but is NOT anonymization
- ⚠️ You remain the data controller under GDPR
- ⚠️ Consult your DPO or legal counsel for compliance guidance
- ⚠️ Human validation is MANDATORY - do not skip review steps
- ⚠️ Test thoroughly before production use
v1.0 MVP limitations:
- AI detection: 40-50% baseline (not 85%+)
- Validation required for ALL documents (not optional)
- French language only (English, Spanish, etc. in future versions)
- Text formats only (.txt, .md - no PDF/DOCX in v1.0)
🧪 Testing
Running Tests
The project includes comprehensive unit and integration tests covering the validation workflow, NLP detection, and core functionality.
Note for Windows users: Due to known spaCy access violations on Windows (spaCy issue #12659), Windows CI runs non-spaCy tests only. Full test suite runs on Linux/macOS.
Run all tests:

```bash
poetry run pytest -v
```

Run only unit tests:

```bash
poetry run pytest tests/unit/ -v
```

Run only integration tests:

```bash
poetry run pytest tests/integration/ -v
```

Run accuracy validation tests (requires spaCy model):

```bash
poetry run pytest tests/accuracy/ -v -m accuracy -s
```

Run performance & stability tests (requires spaCy model):

```bash
# All performance tests (stability, memory, startup, stress)
poetry run pytest tests/performance/ -v -s -p no:benchmark --timeout=600

# Benchmark tests only (pytest-benchmark)
poetry run pytest tests/performance/ --benchmark-only -v -s
```

Run with coverage report:

```bash
poetry run pytest --cov=gdpr_pseudonymizer --cov-report=term-missing --cov-report=html
```

Run validation workflow integration tests specifically:

```bash
poetry run pytest tests/integration/test_validation_workflow_integration.py -v
```

Run quality checks:

```bash
# Code formatting check
poetry run black --check gdpr_pseudonymizer tests

# Format code automatically
poetry run black gdpr_pseudonymizer tests

# Linting check
poetry run ruff check gdpr_pseudonymizer tests

# Type checking
poetry run mypy gdpr_pseudonymizer
```

Run Windows-safe tests only (excludes spaCy-dependent tests):

```bash
# Run non-spaCy unit tests (follows Windows CI pattern)
poetry run pytest tests/unit/test_benchmark_nlp.py tests/unit/test_config_manager.py tests/unit/test_data_models.py tests/unit/test_file_handler.py tests/unit/test_logger.py tests/unit/test_naive_processor.py tests/unit/test_name_dictionary.py tests/unit/test_process_command.py tests/unit/test_project_config.py tests/unit/test_regex_matcher.py tests/unit/test_validation_models.py tests/unit/test_validation_stub.py -v

# Run validation workflow integration tests (Windows-safe)
poetry run pytest tests/integration/test_validation_workflow_integration.py -v
```
Test Coverage
- Unit tests: 777 tests covering validation models, UI components, encryption, database operations, audit logging, progress tracking, and core logic
- Integration tests: 90 tests for end-to-end workflows including validation (Story 2.0.1), encrypted database operations (Story 2.4), compositional logic, and hybrid detection
- Accuracy tests: 22 tests validating NER accuracy against 25-document ground-truth corpus (Story 4.4)
- Performance tests: 15 tests validating all NFR targets โ single-document benchmarks (NFR1), batch performance (NFR2), memory profiling (NFR4), startup time (NFR5), stability/error rate (NFR6), stress testing (Story 4.5)
- Current coverage: 86%+ across all modules (100% for progress module, 91.41% for AuditRepository)
- Total tests: 1077+ tests (post-refactoring baseline)
- CI/CD: Tests run on Python 3.10-3.12 across Windows, macOS, and Linux
- Quality gates: All pass (Black, Ruff, mypy, pytest)
Key Integration Test Scenarios
The integration test suite covers:
Validation Workflow (19 tests):
- ✅ Full workflow: entity detection → summary → review → confirmation
- ✅ User actions: confirm (Space), reject (R), modify (E), add entity (A), change pseudonym (C), context cycling (X)
- ✅ State transitions: PENDING → CONFIRMED/REJECTED/MODIFIED
- ✅ Entity deduplication with grouped review
- ✅ Edge cases: empty documents, large documents (320+ entities), Ctrl+C interruption, invalid input
- ✅ Batch operations: Accept All Type (Shift+A), Reject All Type (Shift+R) with confirmation prompts
- ✅ Mock user input: Full simulation of keyboard interactions and prompts
Encrypted Database (9 tests):
- ✅ End-to-end workflow: init → open → save → query → close
- ✅ Cross-session consistency: Same passphrase retrieves same data
- ✅ Idempotency: Multiple queries return same results
- ✅ Encrypted data at rest: Sensitive fields stored encrypted in SQLite
- ✅ Compositional logic integration: Encrypted component queries
- ✅ Repository integration: All repositories (mapping, audit, metadata) work with encrypted session
- ✅ Concurrent reads: WAL mode enables multiple readers
- ✅ Database indexes: Query performance optimization verified
- ✅ Batch save rollback: Transaction integrity on errors
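The WAL-mode concurrent-read behavior exercised by those tests can be demonstrated with stdlib Python alone. The file path here is a temp location, not the tool's default database:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mappings.db")

writer = sqlite3.connect(path)
# Switching to WAL returns the journal mode actually in effect
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]

writer.execute("CREATE TABLE mappings (entity TEXT, pseudonym TEXT)")
writer.execute("INSERT INTO mappings VALUES ('Marie Dubois', 'Leia Organa')")
writer.commit()

# A second connection reads concurrently while the writer stays open
reader = sqlite3.connect(path)
row = reader.execute("SELECT pseudonym FROM mappings").fetchone()
```

Under the default rollback journal, a reader could block a writer (and vice versa); WAL decouples them, which is what makes parallel batch workers practical against one mapping database.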
📊 Project Metrics (As of 2026-02-11)
| Metric | Value | Status |
|---|---|---|
| Development Progress | v1.0.0 Released | ✅ All 4 Epics Complete |
| Stories Complete | 33 (Epics 1-4) | ✅ All stories including 4.6.1 refactoring + 4.7 launch prep |
| LLM Utility (NFR10) | 4.27/5.0 (85.4%) | ✅ PASSED (threshold: 80%) |
| Installation Success (NFR3) | 87.5% (7/8 platforms) | ✅ PASSED (threshold: 85%) |
| First Pseudonymization (NFR14) | 100% within 30 min | ✅ PASSED (threshold: 80%) |
| Critical Bugs Found | 1 (Story 2.8) | ✅ RESOLVED - Epic 3 Unblocked |
| Test Corpus Size | 25 docs, 1,855 entities | ✅ Complete |
| NLP Accuracy (Baseline) | 29.5% F1 (spaCy) | ✅ Measured |
| Hybrid Accuracy (NLP+Regex) | 35.3% F1 (+52.2% PERSON) | ✅ Story 1.8 Complete |
| Final Accuracy (AI+Human) | 100% (validated) | 🎯 By Design |
| Pseudonym Libraries | 3 themes (2,426 names + 240 locations + 588 orgs) | ✅ Stories 2.1, 3.0, 4.6 Complete |
| Compositional Matching | Operational (component reuse + title stripping + compound names) | ✅ Stories 2.2, 2.3 Complete |
| Batch Processing | Architecture validated (multiprocessing.Pool, 1.17x-2.5x speedup) | ✅ Story 2.7 Complete |
| Encrypted Storage | AES-256-SIV with passphrase protection (PBKDF2, 210K iterations) | ✅ Story 2.4 Complete |
| Audit Logging | GDPR Article 30 compliance (operations table + JSON/CSV export) | ✅ Story 2.5 Complete |
| Validation UI | Operational with deduplication | ✅ Stories 1.7, 1.9 Complete |
| Validation Time | <2 min (20-30 entities), <5 min (100 entities) | ✅ Targets Met |
| Single-Doc Performance (NFR1) | ~6s mean for 3.5K words | ✅ PASSED (<30s threshold, 80% headroom) |
| Batch Performance (NFR2) | ~5 min for 50 docs | ✅ PASSED (<30min threshold, 83% headroom) |
| Memory Usage (NFR4) | ~1 GB Python-tracked peak | ✅ PASSED (<8GB threshold) |
| CLI Startup (NFR5) | 0.56s (help), 6.0s (cold start w/ model) | ✅ PASSED (<5s for CLI startup) |
| Error Rate (NFR6) | ~0% unexpected errors | ✅ PASSED (<10% threshold) |
| Test Coverage | 1077+ tests (post-refactoring baseline), 86%+ coverage | ✅ All Quality Checks Pass |
| Quality Gates | Ruff, mypy, pytest | ✅ All Pass (0 issues) |
| Supported Languages | French | 🇫🇷 v1.0 only |
| Supported Formats | .txt, .md | 📝 v1.0 scope |
🔗 Quick Links
- 📋 Full PRD - Complete product requirements
- 📊 Benchmark Report - NLP accuracy analysis
- 🎨 Positioning Strategy - Marketing & messaging
- 🏗️ Architecture Docs - Technical design
- ✅ Approval Checklist - PM decision tracker
Last Updated: 2026-02-11 (v1.0.0 Public Release – all 4 MVP epics complete)