Casting Expert

A comprehensive Python package for type casting, string-to-dictionary conversion, and data validation with advanced features like type inference, serialization, and schema validation.

🌟 Features

  • String to Dictionary Conversion

    • Multiple format support (JSON, Query String, Key-Value, YAML-like)
    • Auto-format detection
    • Nested structure support
  • Type Inference

    • Automatic type detection and conversion
    • Support for common data types (int, float, bool, datetime)
    • List parsing (comma-separated values)
    • Nested type inference
  • Data Serialization

    • Convert dictionaries to various string formats
    • Pretty printing options
    • Customizable delimiters and formatting
  • Advanced Validation

    • Nested schema validation
    • Custom error messages
    • Type checking
    • Pattern matching
    • Range validation
    • Length constraints
    • Custom validators
    • Field transformations
    • Conditional validation

📦 Installation

pip install casting-expert

🚀 Quick Start

Basic Usage

from casting_expert import parse_string_to_dict

# Auto-detect format and convert
data = parse_string_to_dict('{"name": "John", "age": 30}')
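
For intuition, auto-format detection boils down to inspecting the string's shape before parsing. The sketch below is an illustration using only the standard library, not the package's actual implementation:

```python
import json
from urllib.parse import parse_qsl

def detect_and_parse(s: str) -> dict:
    """Rough sketch of auto-format detection: try JSON first, then a
    query string, then "key: value" lines. Illustrative only."""
    s = s.strip()
    if s.startswith("{"):                      # looks like a JSON object
        return json.loads(s)
    if "=" in s and "\n" not in s:             # single-line query string
        return dict(parse_qsl(s))
    # Fall back to "key: value" lines
    return {k.strip(): v.strip()
            for k, _, v in (line.partition(":")
                            for line in s.splitlines() if ":" in line)}

detect_and_parse('{"name": "John", "age": 30}')  # {'name': 'John', 'age': 30}
detect_and_parse("name=John&age=30")             # {'name': 'John', 'age': '30'}
```

Note that only the JSON branch yields typed values; the other formats return strings, which is where type inference (below) comes in.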

Type Inference

from casting_expert import TypeInference

# Infer types for single values
value1 = TypeInference.infer_type("123")         # Returns int(123)
value2 = TypeInference.infer_type("true")        # Returns bool(True)
value3 = TypeInference.infer_type("2024-01-01")  # Returns datetime object
value4 = TypeInference.infer_type("1.23")        # Returns float(1.23)
value5 = TypeInference.infer_type("a,b,c")       # Returns ["a", "b", "c"]

# Infer types for entire dictionary
data = {
    "id": "123",
    "active": "true",
    "score": "98.6",
    "tags": "python,coding,dev"
}

typed_data = TypeInference.infer_types_in_dict(data)
# Result:
# {
#     "id": 123,
#     "active": True,
#     "score": 98.6,
#     "tags": ["python", "coding", "dev"]
# }
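
The kind of inference shown above can be sketched in plain Python as a try-each-type cascade. This is a simplified illustration of the technique, not the package's actual code:

```python
from datetime import datetime
from typing import Any

def infer_type(value: str) -> Any:
    """Try bool, int, float, ISO date, then comma-separated list;
    fall back to the original string."""
    lowered = value.lower()
    if lowered in ("true", "false"):
        return lowered == "true"
    try:
        return int(value)
    except ValueError:
        pass
    try:
        return float(value)
    except ValueError:
        pass
    try:
        return datetime.strptime(value, "%Y-%m-%d")
    except ValueError:
        pass
    if "," in value:                         # recurse into list elements
        return [infer_type(part) for part in value.split(",")]
    return value

def infer_types_in_dict(d: dict) -> dict:
    return {k: infer_type(v) for k, v in d.items()}
```

The ordering matters: `int` must be tried before `float` so `"123"` stays an integer, and the list check comes last so `"1,2"` splits into typed elements.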

Serialization

from casting_expert import DictSerializer

data = {
    "name": "John",
    "age": 30,
    "scores": [95, 87, 91]
}

# Convert to different formats
json_str = DictSerializer.to_json(data, pretty=True)
# {
#     "name": "John",
#     "age": 30,
#     "scores": [95, 87, 91]
# }

query_str = DictSerializer.to_query_string(data, prefix='?')
# ?name=John&age=30&scores=[95,87,91]

kv_str = DictSerializer.to_key_value(data, delimiter=': ')
# name: John
# age: 30
# scores: [95, 87, 91]

yaml_str = DictSerializer.to_yaml_like(data)
# name: John
# age: 30
# scores: 
#   - 95
#   - 87
#   - 91
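
For comparison, similar output shapes can be approximated with the standard library alone. The query-string handling of lists below (compact-JSON-encoding them, which percent-encodes the brackets) is an illustrative choice and may differ from what DictSerializer actually emits:

```python
import json
from urllib.parse import urlencode

data = {"name": "John", "age": 30, "scores": [95, 87, 91]}

json_str = json.dumps(data, indent=4)                     # pretty JSON
kv_str = "\n".join(f"{k}: {v}" for k, v in data.items())  # key-value lines

# Query string; lists are compact-JSON-encoded so each key appears once
query_str = "?" + urlencode(
    {k: json.dumps(v, separators=(",", ":")) if isinstance(v, list) else v
     for k, v in data.items()}
)
```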

Validation

Basic Validation

from casting_expert import DictValidator

# Create validation schema
schema = {
    "name": DictValidator.create_field(
        str,
        required=True,
        min_length=2,
        pattern=r'^[A-Za-z\s]+$',
        error_messages={
            "pattern": "Name should contain only letters and spaces",
            "required": "Name is required"
        }
    ),
    "age": DictValidator.create_field(
        int,
        min_value=0,
        max_value=150,
        error_messages={
            "min_value": "Age cannot be negative",
            "max_value": "Age cannot be greater than 150"
        }
    ),
    "email": DictValidator.create_field(
        str,
        required=True,
        pattern=r'^[\w\.-]+@[\w\.-]+\.\w+$',
        error_messages={"pattern": "Invalid email format"}
    )
}

# Validate data
data = {
    "name": "John Doe",
    "age": 30,
    "email": "john@example.com"
}

result = DictValidator.validate(data, schema)
if result.is_valid:
    print("Validation passed!")
else:
    for issue in result.issues:
        print(f"{issue.severity}: {issue.field} - {issue.message}")

Nested Validation

# Create nested schema
address_schema = {
    "street": DictValidator.create_field(
        str,
        required=True,
        min_length=5,
        error_messages={"min_length": "Street name is too short"}
    ),
    "city": DictValidator.create_field(
        str,
        required=True,
        choices=["New York", "Los Angeles", "Chicago"]
    ),
    "zip_code": DictValidator.create_field(
        str,
        pattern=r'^\d{5}(-\d{4})?$',
        error_messages={"pattern": "Invalid ZIP code format"}
    )
}

# Main schema with nested address
schema = {
    "name": DictValidator.create_field(str, required=True),
    "address": DictValidator.create_field(
        dict,
        required=True,
        schema=address_schema
    )
}

# Validate nested data
data = {
    "name": "John Doe",
    "address": {
        "street": "123 Main St",
        "city": "New York",
        "zip_code": "12345"
    }
}

result = DictValidator.validate(data, schema)

Custom Validators and Transformations

# Custom validator
def validate_domain(email: str) -> bool:
    return not email.endswith(('.temp', '.test'))

email_field = DictValidator.create_field(
    str,
    pattern=r'^[\w\.-]+@[\w\.-]+\.\w+$'
).add_validator(
    validate_domain,
    "Temporary email domains are not allowed"
)

# Transform before validation
username_field = DictValidator.create_field(
    str
).add_transform(
    lambda x: x.lower().strip()
)

# Combined schema
schema = {
    "email": email_field,
    "username": username_field
}
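
The chaining above follows a common transform-then-validate pattern: each `add_*` call returns the field so calls compose, and transforms run before any checks. A minimal standalone sketch of that pattern (the `Field` class below is hypothetical, not the package's implementation):

```python
import re
from typing import Any, Callable, Optional

class Field:
    """Toy field: transforms run first, then pattern and custom checks."""
    def __init__(self, pattern: Optional[str] = None):
        self.pattern = pattern
        self.transforms: list = []
        self.validators: list = []

    def add_transform(self, fn: Callable[[Any], Any]) -> "Field":
        self.transforms.append(fn)
        return self                          # return self to allow chaining

    def add_validator(self, fn: Callable[[Any], bool], message: str) -> "Field":
        self.validators.append((fn, message))
        return self

    def run(self, value: Any):
        for fn in self.transforms:           # 1. normalize the input
            value = fn(value)
        errors = []
        if self.pattern and not re.match(self.pattern, value):
            errors.append("pattern mismatch")
        for fn, message in self.validators:  # 2. apply custom checks
            if not fn(value):
                errors.append(message)
        return value, errors

email = Field(pattern=r'^[\w\.-]+@[\w\.-]+\.\w+$') \
    .add_transform(str.strip) \
    .add_validator(lambda e: not e.endswith(('.temp', '.test')),
                   "Temporary email domains are not allowed")
```

Running `email.run("  user@spam.test")` would strip the whitespace, pass the pattern check, and still collect the domain error.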

🧪 Testing

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

📝 Common Use Cases

1. Form Data Processing

from casting_expert import parse_string_to_dict, TypeInference, DictValidator

# Process form data
form_data = "name=John+Doe&age=30&email=john@example.com"
data = parse_string_to_dict(form_data, format='query')
typed_data = TypeInference.infer_types_in_dict(data)

# Validate
schema = {
    "name": DictValidator.create_field(str, required=True),
    "age": DictValidator.create_field(int, min_value=0),
    "email": DictValidator.create_field(str, pattern=r'^[\w\.-]+@[\w\.-]+\.\w+$')
}

result = DictValidator.validate(typed_data, schema)

2. Configuration File Processing

# Process YAML-like config
config_str = """
database:
  host: localhost
  port: 5432
  username: admin
settings:
  debug: true
  max_connections: 100
"""

config = parse_string_to_dict(config_str, format='yaml_like')
typed_config = TypeInference.infer_types_in_dict(config)
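
For a two-level config like this, a "yaml_like" parse amounts to indentation-based sectioning. A rough standard-library sketch, assuming only the simple shape above (no real YAML features):

```python
def parse_yaml_like(text: str) -> dict:
    """Two-level 'key: value' parser keyed on indentation. Leaf values
    come back as strings, which is why a type-inference pass follows."""
    result: dict = {}
    section = result
    for line in text.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(":")
        if line.startswith("  "):            # indented: field of current section
            section[key.strip()] = value.strip()
        else:                                # top level: open a new section
            section = result.setdefault(key.strip(), {})
    return result

config = parse_yaml_like("database:\n  host: localhost\n  port: 5432\n")
# {'database': {'host': 'localhost', 'port': '5432'}}
```

Note that `port` is still the string `'5432'` here; the type-inference step turns it into an `int`.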

3. API Response Validation

# Define API response schema
response_schema = {
    "status": DictValidator.create_field(
        str,
        choices=["success", "error"]
    ),
    "data": DictValidator.create_field(
        dict,
        nullable=True,
        schema={
            "id": DictValidator.create_field(int, required=True),
            "name": DictValidator.create_field(str, required=True)
        }
    ),
    "message": DictValidator.create_field(str, required=True)
}

# Validate API response
response_data = {
    "status": "success",
    "data": {
        "id": 123,
        "name": "John"
    },
    "message": "Data retrieved successfully"
}

result = DictValidator.validate(response_data, response_schema)

📄 License

MIT License - feel free to use this package in your projects.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📬 Contact
