AutoMapper for Spark
SparkAutoMapper
Fluent API to map data from one view to another in Spark.
Uses native Spark functions under the hood, so it is just as fast as writing the transformations by hand.
Since this is just Python, you can use any Python editor. And since everything is typed using Python type hints, most editors will auto-complete and warn you when you do something wrong.
Usage
pip install sparkautomapper
SparkAutoMapper input and output
You can pass either a dataframe to SparkAutoMapper or specify the name of a Spark view to read from.
You can receive the result as a dataframe or (optionally) pass in the name of a view where you want the result.
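For example, here is a minimal sketch of the two styles (assuming an active SparkSession and a "patients" view that is already registered; the full, runnable setup is shown under "Executing the AutoMapper" below):
from spark_auto_mapper.automappers.automapper import AutoMapper

# Read from the "patients" view and also register the result as the "members" view
mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst1="hello"
)

# transform() also returns the result as a DataFrame;
# here df is a destination DataFrame seeded with the key columns
result_df = mapper.transform(df=df)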
Dynamic Typing Examples
Set a column in destination to a text value (read from a passed-in DataFrame and return the result in a new DataFrame)
from spark_auto_mapper.automappers.automapper import AutoMapper

mapper = AutoMapper(
    keys=["member_id"]
).columns(
    dst1="hello"
)
Set a column in destination to a text value (read from a Spark view and put the result in another Spark view)
from spark_auto_mapper.automappers.automapper import AutoMapper

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst1="hello"
)
Set a column in destination to an int value
from spark_auto_mapper.automappers.automapper import AutoMapper

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst1=1050
)
Copy a column (src1) from source_view to destination view column (dst1)
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst1=A.column("src1")
)
Or you can use the shortcut for specifying a column (wrap the column name in [])
from spark_auto_mapper.automappers.automapper import AutoMapper

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst1="[src1]"
)
Convert the data type of a column (or string literal)
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    birthDate=A.date(A.column("date_of_birth"))
)
Use a Spark SQL expression (any valid Spark SQL expression can be used)
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    gender=A.expression(
        """
        CASE
            WHEN `Member Sex` = 'F' THEN 'female'
            WHEN `Member Sex` = 'M' THEN 'male'
            ELSE 'other'
        END
        """
    )
)
Specify multiple transformations
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst1="[src1]",
    birthDate=A.date("[date_of_birth]"),
    gender=A.expression(
        """
        CASE
            WHEN `Member Sex` = 'F' THEN 'female'
            WHEN `Member Sex` = 'M' THEN 'male'
            ELSE 'other'
        END
        """
    )
)
Use variables or parameters
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

def mapping(parameters: dict):
    mapper = AutoMapper(
        view="members",
        source_view="patients",
        keys=["member_id"]
    ).columns(
        dst1=A.column(parameters["my_column_name"])
    )
    return mapper
Use conditional logic
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

def mapping(parameters: dict):
    mapper = AutoMapper(
        view="members",
        source_view="patients",
        keys=["member_id"]
    ).columns(
        dst1=A.column(parameters["my_column_name"])
    )

    if parameters["customer"] == "Microsoft":
        mapper = mapper.columns(
            important_customer=1,
            customer_name=parameters["customer"]
        )

    return mapper
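A hypothetical call site (the parameter values here are just for illustration):
mapper = mapping(parameters={"my_column_name": "src1", "customer": "Microsoft"})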
Using nested array columns
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst2=A.list(
        [
            "address1",
            "address2"
        ]
    )
)
Using nested struct columns
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst2=A.complex(
        use="usual",
        family="imran"
    )
)
Using lists of structs
from spark_auto_mapper.automappers.automapper import AutoMapper
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A

mapper = AutoMapper(
    view="members",
    source_view="patients",
    keys=["member_id"]
).columns(
    dst2=A.list(
        [
            A.complex(
                use="usual",
                family="imran"
            ),
            A.complex(
                use="usual",
                family="[last_name]"
            )
        ]
    )
)
Executing the AutoMapper
# Assumes an active SparkSession named `spark`
from pyspark.sql import DataFrame

spark.createDataFrame(
    [
        (1, 'Qureshi', 'Imran'),
        (2, 'Vidal', 'Michael'),
    ],
    ['member_id', 'last_name', 'first_name']
).createOrReplaceTempView("patients")

source_df: DataFrame = spark.table("patients")

# Seed the destination view with just the key columns
df = source_df.select("member_id")
df.createOrReplaceTempView("members")

result_df: DataFrame = mapper.transform(df=df)
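You can then inspect the result as you would any DataFrame (the exact columns depend on which mapper you ran):
result_df.printSchema()
result_df.show()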
Statically Typed Examples
To improve the auto-complete and syntax checking even more, you can define Complex types:
Define a custom data type:
from spark_auto_mapper.type_definitions.automapper_defined_types import AutoMapperTextInputType
from spark_auto_mapper.helpers.automapper_value_parser import AutoMapperValueParser
from spark_auto_mapper.data_types.date import AutoMapperDateDataType
from spark_auto_mapper.data_types.list import AutoMapperList
from spark_auto_mapper_fhir.fhir_types.automapper_fhir_data_type_complex_base import AutoMapperFhirDataTypeComplexBase


class AutoMapperFhirDataTypePatient(AutoMapperFhirDataTypeComplexBase):
    # noinspection PyPep8Naming
    def __init__(self,
                 id_: AutoMapperTextInputType,
                 birthDate: AutoMapperDateDataType,
                 name: AutoMapperList,
                 gender: AutoMapperTextInputType
                 ) -> None:
        super().__init__()
        self.value = dict(
            id=AutoMapperValueParser.parse_value(id_),
            birthDate=AutoMapperValueParser.parse_value(birthDate),
            name=AutoMapperValueParser.parse_value(name),
            gender=AutoMapperValueParser.parse_value(gender)
        )
Now you get auto-complete and syntax checking:
from spark_auto_mapper.helpers.automapper_helpers import AutoMapperHelpers as A
# AutoMapperFhir and the FHIR helpers alias F come from the companion
# spark_auto_mapper_fhir package

mapper = AutoMapperFhir(
    view="members",
    source_view="patients",
    keys=["member_id"]
).withResource(
    resource=F.patient(
        id_=A.column("a.member_id"),
        birthDate=A.date(
            A.column("date_of_birth")
        ),
        name=A.list(
            F.human_name(
                use="usual",
                family=A.column("last_name")
            )
        ),
        gender="female"
    )
)
Publishing a new package
- Edit VERSION to increment the version
- Create a new release
- The GitHub Action should automatically kick in and publish the package
- You can see the status in the Actions tab