
NielsenIQ Retail Reader facilitates the processing of NielsenIQ Retail Scanner data.

Project description

NielsenIQ Retail Reader


Overview:

NielsenIQ Retail Reader is a special-purpose library that facilitates processing of the NielsenIQ Retail Scanner data distributed by the Kilts Center for Marketing for academic research only. Its striking feature is its use of Dask as the underlying framework, which lets users read NielsenIQ data with limited on-device resources by processing larger-than-memory data in chunks and in a distributed fashion. The library understands the Kilts/Nielsen directory structure.

Data:

Information about the Retail Scanner data can be found here: Kilts Center for Marketing

IMPORTANT:

Access to NielsenIQ Retail Data:

Please note that NielsenIQ Retail data is proprietary and access is restricted to individuals whose institutions have an existing subscription or agreement with NielsenIQ. If you intend to use this library for accessing and analyzing NielsenIQ data, you must first ensure that you are authorized to do so by your institution. Unauthorized access or use of this data may violate terms of use and could have legal implications. The Nielsen dataset must strictly follow the standard naming convention laid out by Nielsen and the Kilts Center for Marketing; under no circumstances should the naming convention be changed.

NielsenIQRetail processes Retail Scanner Data.


Main Features

Here are just a few of the things that NielsenIQRetail does well:

  • Efficiently manages the NielsenIQ directory hierarchy, simplifying the process for researchers and significantly reducing the time needed to navigate the NielsenIQ documentation.
  • Larger-than-memory processing: handles dataframes larger than memory on a single machine through batch processing.
  • Distributed computing for terabyte-sized datasets, improving overall data reading speed by utilizing Dask's low-latency scheduling (see the sketch after this list).
  • Provides simple yet distinct commands for separating sales, stores, and products data for analysis.
  • Excellent compatibility with NumPy, pandas, and the wider PyData ecosystem.
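The larger-than-memory behaviour comes from Dask's partitioned dataframes. As an illustrative sketch (not the library's own API), this is how Dask reads a large tab-separated file in chunks; the file path and block size below are placeholders:

import dask.dataframe as dd

# Hypothetical path to a large Kilts/Nielsen movement file (edit for your data)
movement = dd.read_csv('path/to/movement_file.tsv', sep='\t', blocksize='256MB')

# Each partition is processed independently, so the whole file never has to fit in memory
print(movement.npartitions)
print(movement.head())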

Where to get it

The source code is currently hosted on GitHub at: https://github.com/pratikrelekar/NielsenIQDSRS

Binary installers are available at the Python Package Index (PyPI).

For PyPI install:

pip install NielsenIQRetail

For GitHub pip install:

pip install git+https://github.com/pratikrelekar/NielsenDSRS

To install the requirements with pip:

python -m pip install -r requirements.txt

Dependencies

Before using NielsenIQRetail, ensure that all dependencies are correctly installed. Additionally, verify that the client hosting the Python environment, the scheduler, and the worker nodes all have the same versions of the package and its dependencies installed.
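A quick way to verify matching versions once a Dask client is connected (see "How to use" below) is Dask's built-in version check; this sketch assumes a Client object named client:

# Compares package versions on the client, scheduler, and workers;
# with check=True a mismatch raises an error instead of just being reported
versions = client.get_versions(check=True)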

How to use

For local processing on system memory:

from dask.distributed import Client

# Calculate memory per worker based on total system memory
total_memory_gb = 16  # Your system's total RAM in GB (edit as per system memory)
n_workers = 4         # Number of workers (edit as per the cores you want to use)
memory_per_worker_gb = int(total_memory_gb / n_workers)  # Memory per worker in GB

# Start the client with the given specifications
client = Client(n_workers=n_workers, threads_per_worker=1,
                memory_limit=f'{memory_per_worker_gb}GB')
print(client)
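Once the client is running, Dask's diagnostic dashboard is a convenient way to monitor memory use and task progress; the link is exposed on the client object:

# URL of the Dask dashboard for monitoring workers, memory, and tasks
print(client.dashboard_link)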

To utilize the power of Dask with an auxiliary memory cluster for large data processing:

# You can only connect to the cluster from inside the Python client environment
from dask.distributed import Client
client = Client('dask-scheduler.default.svc.cluster.local:address')  # Replace with the actual address of your memory cluster
client
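To confirm that the client is actually connected to the cluster, the standard distributed API can list the known workers (a short sketch using the client created above):

# Addresses of the workers registered with the scheduler
print(list(client.scheduler_info()['workers']))

# Number of threads available on each worker
print(client.nthreads())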

Debug

Make sure the NielsenDSRS module and all of its dependencies are installed on the Dask client, scheduler, and worker nodes, and that the versions match on all of them. The following code helps debug errors related to version mismatches:

For worker nodes:

def check_module():
    try:
        import NielsenIQDSRS
        return "Installed"
    except ImportError:
        return "Not Installed"

# Run the check across all workers
results = client.run(check_module)
for worker, result in results.items():
    print(f"{worker}: {result}")

For Scheduler:

scheduler_result = client.run_on_scheduler(check_module)
print(f"Scheduler: {scheduler_result}")

For Client:

try:
    import NielsenDSRS
    print("NielsenDSRS is installed on the client.")
except ImportError:
    print("NielsenDSRS is not installed on the client.")

If there is a mismatch, or if NielsenIQRetail is not correctly installed, follow these steps:

# Function to install NielsenDSRS
def install_nielsendsrs():
    import subprocess, sys
    subprocess.check_call([sys.executable, "-m", "pip", "install", "NielsenIQRetail"])

# Install on all workers
client.run(install_nielsendsrs)

# Install on the scheduler
client.run_on_scheduler(install_nielsendsrs)
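After installing the package on the workers, it may also be necessary to restart them so that the running worker processes pick up the new installation; client.restart() is the standard distributed call for this:

# Restart all workers so newly installed packages are importable
client.restart()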

License

MIT License

Background

This library was developed at Data Science Research Services (University of Illinois Urbana-Champaign) in 2024 and has been under active development since then. It currently supports NielsenIQ Retail Scanner data from 2006 to 2020.

Getting help

For general questions and discussions, visit the DSRS mailing list.

