
Validly

A powerful and extensible data validation and comparison tool designed for developers and testers. Easily integrate into your automation projects to ensure JSON data integrity.

Features

  • Deep, Recursive Comparison: Validates nested JSON structures seamlessly.
  • Flexible Options: Control validation with a rich set of options for every use case.
  • Order-Agnostic Lists: Intelligently compares lists of objects regardless of their order.
  • Domain-Specific Validations: Built-in checks for common data formats like UUIDs, PAN, and Aadhaar numbers.
  • Referencing Capabilities: Use a dynamic template to compare a field's value to another field in the actual JSON.
  • Custom Validators: Extend validation logic with your own Python methods from an external file.
  • Numeric Comparisons: Validate fields with operators like greater than (gt), less than (lt), and more.
  • Wildcard Matching: Use placeholders to ignore values that are dynamic or unpredictable.
  • JSON Filtering: Filter JSON data based on JSON paths and regex patterns with include/exclude options.
  • JSON Transformation: Transform JSON data with built-in and custom transformation functions.
  • API Contract Validation: Validate JSON data against API contracts with type checking and format validation.
  • OpenAPI/Swagger Validation: Validate JSON data against OpenAPI/Swagger specifications.
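
The referencing feature above amounts to a dotted-path lookup into the actual JSON. The following is a minimal illustrative sketch of the idea (not Validly's implementation; the function name is hypothetical):

```python
def resolve_reference(template, actual):
    """Resolve a '{ACTUAL_VALUE:dotted.path}' template against the actual JSON."""
    if not (isinstance(template, str)
            and template.startswith("{ACTUAL_VALUE:")
            and template.endswith("}")):
        return template  # not a reference; compare the literal value as-is
    path = template[len("{ACTUAL_VALUE:"):-1]
    node = actual
    for part in path.split("."):
        node = node[part]
    return node

actual = {"user": {"id": 1234}}
resolve_reference("{ACTUAL_VALUE:user.id}", actual)  # 1234
```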

Installation

Validly is available on PyPI. Install it with pip:

pip install Validly

Available Tools

1. JSON Difference (json_difference)

The json_difference function compares two JSON objects and identifies any differences between them.

Basic Usage

Use json_difference to compare two JSON objects. It returns a result dictionary with a boolean result flag and a list of errors describing any differences found.

from Validly import json_difference

expected = {"id": 100, "name": "test"}
actual = {"id": 101, "name": "test"}

differences = json_difference(expected, actual)

# Output:
# {
#   'result': False,
#   'errors': [
#     {
#       'field': 'id',
#       'jsonpath': 'id',
#       'message': "Value mismatch: expected '100', got '101'"
#     }
#   ]
# }

Advanced Usage with Options

Pass a dictionary of options to customize the validation behavior.

from Validly import json_difference

# --- Sample Data ---
expected_data = {
    "user_id": "{ACTUAL_VALUE:user.id}",
    "user": {
        "id": 1234,
        "name": "Jane Doe",
        "age": 30
    },
    "uuid_field": "{ACTUAL_VALUE:user.uuid}",
    "pan_field": "{ACTUAL_VALUE:user.pan}",
    "login_count": 5
}
actual_data = {
    "user_id": 1234,
    "user": {
        "id": 1234,
        "name": "John Doe",
        "age": 32,
        "email": "test@example.com",
        "uuid": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
        "pan": "ABCDE1234F"
    },
    "uuid_field": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
    "pan_field": "ABCDE1234F",
    "login_count": 6
}

# --- Validation Options ---
options = {
    "wildcard_keys": ["user.name"],
    "numeric_validations": {
        "user.age": {"operator": "gt", "value": 30},
        "login_count": {"operator": "le", "value": 5}
    },
    "is_uuid_keys": ["user.uuid", "uuid_field"],
    "is_pan_keys": ["user.pan", "pan_field"],
    "is_aadhar_keys": ["user.aadhar"],
    "custom_validators": {"user.email": "validate_email_format"},
    "custom_validator_path": "custom_validators.py",
    "skip_keys": ["user_id"]
}

# --- Running the comparison ---
differences = json_difference(expected_data, actual_data, options=options)

# Expected differences:
# {
#   'result': False,
#   'errors': [
#     {
#       'field': 'login_count',
#       'jsonpath': 'login_count',
#       'message': "Numeric validation failed: Value is not less than or equal to 5"
#     },
#     {
#       'field': 'email',
#       'jsonpath': 'user.email',
#       'message': "Extra key in actual: user.email"
#     }
#   ]
# }
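
The numeric_validations operators correspond to Python's standard comparisons. Conceptually (a sketch of the idea, not Validly's internals):

```python
import operator

# Hypothetical mapping from operator names to comparison functions
OPS = {"gt": operator.gt, "ge": operator.ge,
       "lt": operator.lt, "le": operator.le, "eq": operator.eq}

def check_numeric(value, rule):
    """Return True if `value` satisfies a rule like {"operator": "gt", "value": 30}."""
    return OPS[rule["operator"]](value, rule["value"])

check_numeric(32, {"operator": "gt", "value": 30})  # True: user.age passes
check_numeric(6, {"operator": "le", "value": 5})    # False: login_count is reported
```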

List Validation Modes

Validly offers two ways to compare lists, controlled by the list_validation_type option.

1. Unordered (Default)

This mode is designed for lists of objects where the order doesn't matter. It intelligently matches objects based on a set of common keys such as "name", "id", and "qId".

from Validly import json_difference

expected_list = [
    {"id": 1, "value": "a"},
    {"id": 2, "value": "b"}
]

actual_list = [
    {"id": 2, "value": "b"},
    {"id": 1, "value": "a"}
]

# The default behavior is 'unordered', so no option is needed here.
results = json_difference(expected_list, actual_list)

# { 'result': True, 'errors': [] }
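
Unordered comparison needs a way to pair items before comparing them field by field. A minimal sketch of key-based pairing (illustrative only, assuming the objects share an identifying key such as "id"):

```python
def pair_by_key(expected_list, actual_list, key="id"):
    """Pair expected and actual objects on a shared identifying key."""
    actual_by_key = {item[key]: item for item in actual_list}
    return [(item, actual_by_key.get(item[key])) for item in expected_list]

expected_list = [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]
actual_list = [{"id": 2, "value": "b"}, {"id": 1, "value": "a"}]

pairs = pair_by_key(expected_list, actual_list)
all(exp == act for exp, act in pairs)  # True: no differences regardless of order
```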

2. Symmetric

This mode is for lists where the order of items is critical. It performs a direct, index-based comparison.

options = { "list_validation_type": "symmetric" }
results = json_difference(expected_list, actual_list, options=options)

# Expected result (failure due to different order):
# {
#   'result': False,
#   'errors': [
#     {
#       'field': '0',
#       'jsonpath': '[0]',
#       'message': "Value mismatch: expected {'id': 1, 'value': 'a'}, got {'id': 2, 'value': 'b'}"
#     },
#     {
#       'field': '1',
#       'jsonpath': '[1]',
#       'message': "Value mismatch: expected {'id': 2, 'value': 'b'}, got {'id': 1, 'value': 'a'}"
#     }
#   ]
# }

Custom Validators

Create a Python file (e.g., custom_validators.py) with your custom logic. Your validator methods should accept expected and actual values and return a (bool, str) tuple.

# custom_validators.py
import re
from typing import Any, Tuple

def validate_email_format(expected: Any, actual: Any) -> Tuple[bool, str]:
    """Validates if the actual value is a properly formatted email address."""
    if not isinstance(actual, str):
        return False, "Value is not a string."
    
    email_pattern = re.compile(r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$")
    if email_pattern.match(actual):
        return True, ""
    
    return False, "Value is not a valid email format."

def validate_phone_number(expected: Any, actual: Any) -> Tuple[bool, str]:
    """Validates if the actual value is a properly formatted phone number."""
    if not isinstance(actual, str):
        return False, "Value is not a string."
    
    # Remove any non-digit characters for comparison
    digits_only = re.sub(r'\D', '', actual)
    
    # Check if it's a valid length for a phone number (adjust as needed)
    if 10 <= len(digits_only) <= 15:
        return True, ""
    
    return False, f"Value '{actual}' is not a valid phone number format."

def validate_date_format(expected: Any, actual: Any) -> Tuple[bool, str]:
    """Validates if the actual value matches the expected date format."""
    if not isinstance(actual, str):
        return False, "Value is not a string."
    
    # Expected should be a format directive like "format:YYYY-MM-DD"
    if isinstance(expected, str) and expected.startswith("format:"):
        format_str = expected.split(":", 1)[1].strip()
        
        # Simple validation for common formats
        if format_str == "YYYY-MM-DD":
            pattern = r"^\d{4}-\d{2}-\d{2}$"
        elif format_str == "MM/DD/YYYY":
            pattern = r"^\d{2}/\d{2}/\d{4}$"
        else:
            return False, f"Unknown date format: {format_str}"
            
        if re.match(pattern, actual):
            return True, ""
        return False, f"Value does not match the {format_str} format."
    
    # If no format specified, just do direct comparison
    return expected == actual, f"Expected {expected}, got {actual}"

Then, configure the validator in your options dictionary:

options = {
    "custom_validators": {
        "user.email": "validate_email_format",
        "user.phone": "validate_phone_number",
        "user.birthdate": "validate_date_format"
    },
    "custom_validator_path": "path/to/custom_validators.py"
}

Custom Validator Use Cases

1. Complex Format Validation

Validate complex formats that aren't covered by built-in validators:

# In custom_validators.py
def validate_credit_card(expected: Any, actual: Any) -> Tuple[bool, str]:
    """Validates credit card numbers using the Luhn algorithm."""
    if not isinstance(actual, str):
        return False, "Value is not a string."
    
    # Remove spaces and dashes
    digits = re.sub(r'[\s-]', '', actual)
    if not digits.isdigit():
        return False, "Credit card contains non-digit characters."
    
    # Luhn algorithm implementation
    checksum = 0
    for i, digit in enumerate(reversed(digits)):
        n = int(digit)
        if i % 2 == 1:  # Odd position (0-indexed from right)
            n *= 2
            if n > 9:
                n -= 9
        checksum += n
    
    if checksum % 10 == 0:
        return True, ""
    return False, "Invalid credit card number (failed Luhn check)."

2. Conditional Validation

Validate fields based on the values of other fields:

# In custom_validators.py
def validate_shipping_address(expected: Any, actual: Any) -> Tuple[bool, str]:
    """Validates shipping address based on country-specific rules."""
    if not isinstance(actual, dict):
        return False, "Value is not an object."
    
    country = actual.get('country', '')
    postal_code = actual.get('postalCode', '')
    
    # Different validation rules per country
    if country == 'US':
        if not re.match(r'^\d{5}(-\d{4})?$', postal_code):
            return False, "Invalid US ZIP code format."
    elif country == 'UK':
        if not re.match(r'^[A-Z]{1,2}[0-9][A-Z0-9]? ?[0-9][A-Z]{2}$', postal_code, re.I):
            return False, "Invalid UK postal code format."
    
    return True, ""

3. Integration with External Services

Validate data against external APIs or databases:

# In custom_validators.py
import requests

def validate_against_api(expected: Any, actual: Any) -> Tuple[bool, str]:
    """Validates data against an external API."""
    try:
        # Make API call to validate the data
        response = requests.post(
            "https://api.example.com/validate",
            json={"value": actual}
        )
        
        if response.status_code == 200:
            result = response.json()
            if result.get("valid"):
                return True, ""
            return False, result.get("message", "API validation failed.")
        
        return False, f"API validation error: {response.status_code}"
    except Exception as e:
        return False, f"API validation exception: {str(e)}"

CLI Usage

The Validly CLI allows you to perform validations from the command line without writing a Python script, making it ideal for CI/CD pipelines and automated testing.

# Basic usage
python -m Validly expected.json actual.json

# With options file
python -m Validly expected.json actual.json options.json

Example options.json with custom validators:

{
  "list_validation_type": "symmetric",
  "wildcard_keys": ["user.name"],
  "numeric_validations": {
    "user.age": {"operator": "gt", "value": 30},
    "login_count": {"operator": "le", "value": 5}
  },
  "is_uuid_keys": ["user.uuid"],
  "is_pan_keys": ["user.pan"],
  "custom_validators": {
    "user.email": "validate_email_format",
    "user.phone": "validate_phone_number"
  },
  "custom_validator_path": "./custom_validators.py",
  "skip_keys": ["user_id", "id"]
}

2. JSON Filtering (jsonfilter)

Validly provides powerful JSON filtering capabilities through two main functions: jsonfilter and jsonfilter_file.

Basic Filtering

Filter JSON data using JSON paths and regex patterns:

from Validly import jsonfilter

# Sample data
data = {
    "user": {
        "id": 1234,
        "name": "John Doe",
        "contact": {
            "email": "john@example.com",
            "phone": "555-1234"
        }
    },
    "orders": [
        {"id": 101, "product": "Laptop", "price": 999.99},
        {"id": 102, "product": "Mouse", "price": 24.99}
    ],
    "metadata": {
        "version": "1.0",
        "timestamp": "2025-09-06T06:00:00Z"
    }
}

# Filter options (include mode is default)
options = {
    "jsonpath": ["user.name", "user.contact.email", "orders"]
}

# Apply filtering
filtered_data = jsonfilter(data, options)

# Result:
# {
#     "user": {
#         "name": "John Doe",
#         "contact": {
#             "email": "john@example.com"
#         }
#     },
#     "orders": [
#         {"id": 101, "product": "Laptop", "price": 999.99},
#         {"id": 102, "product": "Mouse", "price": 24.99}
#     ]
# }

Include vs Exclude Filtering

Choose between including or excluding the matched paths:

# Include mode (default)
options = {
    "jsonpath": ["user.id", "metadata.version"],
    "filter_type": "include"  # Only keep matched paths
}

# Result:
# {
#     "user": {
#         "id": 1234
#     },
#     "metadata": {
#         "version": "1.0"
#     }
# }

# Exclude mode
options = {
    "jsonpath": ["user.id", "metadata.version"],
    "filter_type": "exclude"  # Remove matched paths, keep everything else
}

# Result:
# {
#     "user": {
#         "name": "John Doe",
#         "contact": {
#             "email": "john@example.com",
#             "phone": "555-1234"
#         }
#     },
#     "orders": [
#         {"id": 101, "product": "Laptop", "price": 999.99},
#         {"id": 102, "product": "Mouse", "price": 24.99}
#     ],
#     "metadata": {
#         "timestamp": "2025-09-06T06:00:00Z"
#     }
# }

Wildcard Filtering

Use wildcards to include multiple fields matching a pattern:

# Filter with wildcards
options = {
    "jsonpath": ["user.*", "metadata.version"]
}

filtered_data = jsonfilter(data, options)

# Result:
# {
#     "user": {
#         "id": 1234,
#         "name": "John Doe",
#         "contact": {
#             "email": "john@example.com",
#             "phone": "555-1234"
#         }
#     },
#     "metadata": {
#         "version": "1.0"
#     }
# }

Regex-based Filtering

Filter keys that match a regular expression pattern:

# Filter with regex
options = {
    "regex": "id"
}

filtered_data = jsonfilter(data, options)

# Result:
# {
#     "user": {
#         "id": 1234
#     },
#     "orders": [
#         {"id": 101},
#         {"id": 102}
#     ]
# }

Key-based Filtering Across All Levels

Filter keys exactly matching specified names at any level in the JSON structure:

# Filter by exact key names at any level
options = {
    "keys": ["id", "email"],
    "filter_type": "include"  # Only keep matched keys
}

filtered_data = jsonfilter(data, options)

# Result:
# {
#     "user": {
#         "id": 1234,
#         "contact": {
#             "email": "john@example.com"
#         }
#     },
#     "orders": [
#         {"id": 101},
#         {"id": 102}
#     ]
# }

# Exclude specific keys at any level
options = {
    "keys": ["email", "price"],
    "filter_type": "exclude"  # Remove matched keys
}

filtered_data = jsonfilter(data, options)

# Result:
# {
#     "user": {
#         "id": 1234,
#         "name": "John Doe",
#         "contact": {
#             "phone": "555-1234"
#         }
#     },
#     "orders": [
#         {"id": 101, "product": "Laptop"},
#         {"id": 102, "product": "Mouse"}
#     ],
#     "metadata": {
#         "version": "1.0",
#         "timestamp": "2025-09-06T06:00:00Z"
#     }
# }
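
Key-based filtering walks the structure recursively, keeping or dropping matching keys at every level. The core idea can be sketched in plain Python (illustrative only, not Validly's implementation):

```python
def filter_keys(node, keys, mode="include"):
    """Recursively keep (include) or drop (exclude) dict keys named in `keys`."""
    if isinstance(node, dict):
        out = {}
        for k, v in node.items():
            if mode == "exclude":
                if k in keys:
                    continue  # drop this key and everything under it
                out[k] = filter_keys(v, keys, mode)
            else:  # include
                if k in keys:
                    out[k] = v  # matched: keep the value whole
                elif isinstance(v, (dict, list)):
                    child = filter_keys(v, keys, mode)
                    if child:  # keep containers that hold matches deeper down
                        out[k] = child
        return out
    if isinstance(node, list):
        items = [filter_keys(item, keys, mode) for item in node]
        if mode == "include":
            items = [i for i in items if i not in ({}, [])]
        return items
    return node
```

Run against the sample data above, `filter_keys(data, ["id", "email"], "include")` and `filter_keys(data, ["email", "price"], "exclude")` reproduce the two results shown.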

Filtering from Files

Filter JSON data directly from files:

from Validly import jsonfilter_file

# Filter JSON from a file
options = {
    "jsonpath": ["user", "metadata.version"]
}

filtered_data = jsonfilter_file("data.json", options)

# Process the filtered data
print(filtered_data)

3. JSON Transformation (json_transform)

Validly provides powerful JSON transformation capabilities through two main functions: json_transform and json_transform_file.

Basic Transformation

Transform JSON data using built-in transformation methods:

from Validly import json_transform

# Sample data
data = {
    "user": {
        "id": "1234",  # String that needs to be converted to integer
        "name": "john doe",  # Needs to be capitalized
        "active": 1  # Needs to be converted to boolean
    },
    "price": "99.99"  # String that needs to be converted to float
}

# Transform options
options = {
    "transforms": {
        "user.id": {"method": "to_int"},
        "user.name": {"method": "format", "args": {"format": "{0.title()}"}},
        "user.active": {"method": "to_bool"},
        "price": {"method": "to_float"}
    }
}

# Apply transformation
transformed_data = json_transform(data, options)

# Result:
# {
#     "user": {
#         "id": 1234,  # Now an integer
#         "name": "John Doe",  # Now capitalized
#         "active": True  # Now a boolean
#     },
#     "price": 99.99  # Now a float
# }

Adding New Fields

Add new fields to the JSON structure:

# Add new fields
options = {
    "add_fields": {
        "user.full_name": {
            "value": "John Smith Doe",
            "parent": "user"
        },
        "metadata": {
            "value": {"created_at": "2025-09-06", "version": "1.0"},
            "parent": ""
        }
    }
}

transformed_data = json_transform(data, options)

# Result:
# {
#     "user": {
#         "id": "1234",
#         "name": "john doe",
#         "active": 1,
#         "full_name": "John Smith Doe"  # New field added
#     },
#     "price": "99.99",
#     "metadata": {  # New field added at root level
#         "created_at": "2025-09-06",
#         "version": "1.0"
#     }
# }

Custom Transformers

Create a Python file with custom transformation functions:

# custom_transformers.py
def capitalize_name(value, args, root_data):
    """Capitalize each word in a name."""
    if not isinstance(value, str):
        return value
    return value.title()

def calculate_total(value, args, root_data):
    """Calculate total price based on price and quantity."""
    price = float(root_data.get("price", 0))
    quantity = args.get("quantity", 1)
    return price * quantity

Then use these custom transformers in your code:

options = {
    "transforms": {
        "user.name": {"method": "capitalize_name"},
        "total": {"method": "calculate_total", "args": {"quantity": 3}}
    },
    "add_fields": {
        "total": {
            "value": 0,  # This will be replaced by the transformer
            "parent": ""
        }
    },
    "custom_transform_path": "custom_transformers.py"
}

transformed_data = json_transform(data, options)

# Result:
# {
#     "user": {
#         "id": "1234",
#         "name": "John Doe",  # Capitalized using custom transformer
#         "active": 1
#     },
#     "price": "99.99",
#     "total": 299.97  # Calculated using custom transformer (99.99 * 3)
# }

Transforming from Files

Transform JSON data directly from files:

from Validly import json_transform_file

# Transform JSON from a file
options = {
    "transforms": {
        "user.id": {"method": "to_int"},
        "price": {"method": "to_float"}
    }
}

transformed_data = json_transform_file("data.json", options)

# Process the transformed data
print(transformed_data)

4. JSON Validation (json_validate)

Validly provides a powerful way to validate JSON data against API contracts using the json_validate function.

Basic Validation

Validate JSON data against a contract schema:

from Validly import json_validate

# Sample data
data = {
    "user": {
        "id": "1234",
        "name": "John Doe",
        "age": 30,
        "email": "john@example.com"
    },
    "orders": [
        {"id": 101, "product": "Laptop", "price": 999.99}
    ]
}

# Contract schema
contract = {
    "user": {
        "id": "",
        "name": "",
        "age": 0,
        "email": ""
    },
    "orders": [
        {"id": 0, "product": "", "price": 0.0}
    ]
}

# Validate data against contract
result = json_validate(data, contract)

# Result:
# {
#     "result": True,
#     "errors": []
# }

Type Validation

Validate that fields have the correct data types:

options = {
    "type_validations": {
        "user.id": "string",
        "user.age": "number",
        "user.active": "boolean",
        "orders": "array"
    }
}

result = json_validate(data, contract, options)

Supported types include: string, number, boolean, array, object, and any.
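
A type check of this kind boils down to mapping contract type names onto Python types. A minimal sketch of the concept (assumed names, not Validly's internals):

```python
# Hypothetical mapping from contract type names to Python types
TYPE_MAP = {
    "string": str,
    "number": (int, float),
    "boolean": bool,
    "array": list,
    "object": dict,
}

def check_type(value, type_name):
    """Return True if `value` matches the named contract type ('any' matches everything)."""
    if type_name == "any":
        return True
    # bool is a subclass of int in Python, so exclude it from 'number'
    if type_name == "number" and isinstance(value, bool):
        return False
    return isinstance(value, TYPE_MAP[type_name])

check_type("1234", "string")  # True
check_type(True, "number")    # False: booleans are not numbers here
```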

Format Validation

Validate that fields match specific formats:

options = {
    "is_uuid_keys": ["user.uuid"],
    "is_pan_keys": ["user.pan"],
    "is_aadhar_keys": ["user.aadhar"],
    "regex_keys": {
        "user.email": r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$"
    }
}

result = json_validate(data, contract, options)

Required Fields Validation

Specify fields that must be present in the data:

options = {
    "required_keys": [
        "user.id",
        "user.name",
        "user.email",
        "orders"
    ]
}

result = json_validate(data, contract, options)
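
Checking required dotted keys is essentially a path walk through nested dicts. A sketch of the idea (illustrative only; the helper names are hypothetical):

```python
def has_path(data, dotted):
    """Return True if the dotted path exists in nested dicts."""
    node = data
    for part in dotted.split("."):
        if not isinstance(node, dict) or part not in node:
            return False
        node = node[part]
    return True

def missing_required(data, required_keys):
    """List the required dotted paths that are absent from the data."""
    return [k for k in required_keys if not has_path(data, k)]

data = {"user": {"id": "1234", "name": "John Doe"}}
missing_required(data, ["user.id", "user.email"])  # ["user.email"]
```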

Custom Validation

Use custom validators for complex validation logic:

# First, create a custom validator file
with open('custom_validators.py', 'w') as f:
    f.write(r"""
def validate_email(expected, actual):
    import re
    if not isinstance(actual, str):
        return False, "Value is not a string"
    
    email_pattern = re.compile(r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$")
    if email_pattern.match(actual):
        return True, ""
    
    return False, f"'{actual}' is not a valid email format"
""")

# Then use the custom validator
options = {
    "custom_validators": {
        "user.email": "validate_email"
    },
    "custom_validator_path": "custom_validators.py"
}

result = json_validate(data, contract, options)

Strict Mode

Enforce that the data doesn't contain any fields not defined in the contract:

options = {
    "strict_mode": True
}

result = json_validate(data, contract, options)

5. OpenAPI Validation (validate_openapi)

Validly provides powerful validation against OpenAPI/Swagger specifications using the validate_openapi function.

Basic OpenAPI Validation

Validate JSON data against an OpenAPI schema:

import json

from Validly import validate_openapi, validate_openapi_file, validate_openapi_url, load_openapi_schema

# Sample data to validate
data = {
    "name": "John Doe",
    "email": "john@example.com",
    "age": 30
}

# Method 1: Load schema from file and validate
result = validate_openapi_file(data, 'openapi.json')

# Method 2: Load schema from URL and validate
result = validate_openapi_url(data, 'https://example.com/api/openapi.json')

# Method 3: Load schema manually and validate
with open('openapi.json', 'r') as f:
    openapi_schema = json.load(f)

# Or load from URL
openapi_schema = load_openapi_schema('https://example.com/api/openapi.json')

# Validate against a specific schema component
user_schema = openapi_schema["components"]["schemas"]["User"]
result = validate_openapi(data, user_schema)

# Result:
# {
#     "result": True,
#     "errors": []
# }

Validating Request/Response

Validate request or response data against OpenAPI path definitions:

# Extract request schema from OpenAPI spec
request_schema = openapi_schema["paths"]["/users"]["post"]["requestBody"]["content"]["application/json"]["schema"]

# Validate request data
request_data = {
    "name": "John Doe",
    "email": "john@example.com",
    "age": 30
}

result = validate_openapi(request_data, request_schema)

# Extract response schema for a 200 response
response_schema = openapi_schema["paths"]["/users"]["post"]["responses"]["200"]["content"]["application/json"]["schema"]

# Validate response data
response_data = {
    "id": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
    "name": "John Doe",
    "email": "john@example.com",
    "created_at": "2025-09-06T06:00:00Z"
}

result = validate_openapi(response_data, response_schema)

OpenAPI Schema Components

The validate_openapi function automatically handles OpenAPI schema features:

  • Data Types: string, number, integer, boolean, array, object
  • Formats: uuid, email, uri, date, date-time
  • Validations: required fields, minimum/maximum values, patterns
  • Schema Structures: oneOf, anyOf, allOf
  • Nested References: $ref references to other schema components

For example, nested $ref references are resolved automatically:

# OpenAPI schema with nested references
schema = {
    "openapi": "3.0.0",
    "components": {
        "schemas": {
            "User": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "address": {"$ref": "#/components/schemas/Address"}
                }
            },
            "Address": {
                "type": "object",
                "properties": {
                    "street": {"type": "string"},
                    "city": {"type": "string"}
                }
            }
        }
    }
}

# Data with nested structure
data = {
    "name": "John Doe",
    "address": {
        "street": "123 Main St",
        "city": "New York"
    }
}

# Validate against the schema with nested references
result = validate_openapi(data, schema["components"]["schemas"]["User"])

Schema-level constraints such as required fields, minimums, and formats are also enforced:

# OpenAPI schema with various validations
schema = {
    "type": "object",
    "required": ["name", "email"],
    "properties": {
        "name": {
            "type": "string",
            "minLength": 1
        },
        "email": {
            "type": "string",
            "format": "email"
        },
        "age": {
            "type": "integer",
            "minimum": 18
        }
    }
}

# Validate data against the schema
result = validate_openapi(data, schema)

Custom OpenAPI Validation

You can extend OpenAPI validation with custom validators:

# First, create a custom validator file
with open('custom_validators.py', 'w') as f:
    f.write(r"""
def validate_complex_email(expected, actual):
    import re
    if not isinstance(actual, str):
        return False, "Value is not a string"
    
    # More complex email validation than the standard format
    email_pattern = re.compile(r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$")
    if not email_pattern.match(actual):
        return False, f"'{actual}' is not a valid email format"
    
    # Additional validation rules
    if actual.endswith('.test'):
        return False, "Test domains are not allowed"
    
    return True, ""
""")

# Then use the custom validator with OpenAPI validation
options = {
    "custom_validators": {
        "email": "validate_complex_email"
    },
    "custom_validator_path": "custom_validators.py"
}

result = validate_openapi(data, schema, options)

Contributing

We welcome contributions! If you have a feature idea or find a bug, please open an issue or submit a pull request on GitHub.

License

This project is licensed under the MIT License.
