Intelligent Market Monitoring

open-veanu/fraudcrawler

The market-monitoring pipeline has the following main steps:

  1. search for a given term using SerpAPI
  2. get product information using ZyteAPI
  3. assess the relevance of the found products using the OpenAI API

Installation

python3.11 -m venv .venv
source .venv/bin/activate
pip install fraudcrawler

Usage

.env file

Make sure to create a .env file with the necessary API keys and credentials (cf. the .env.example file).
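
A minimal sketch of such a file is shown below; the variable names are illustrative assumptions, so copy the exact names from .env.example:

# Illustrative only - use the variable names from .env.example
SERPAPI_API_KEY=your-serpapi-key
ZYTE_API_KEY=your-zyte-key
OPENAI_API_KEY=your-openai-key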

Run demo pipeline

python -m fraudcrawler.launch_demo_pipeline

Customize the pipeline

Start by initializing the client

from fraudcrawler import FraudCrawlerClient

# Initialize the client
client = FraudCrawlerClient()

Setting up the search requires five main objects:

search_term: str

The search term for the query (similar to search terms used within major search providers).

language: Language

The language used for SerpAPI (the 'hl' parameter) and for the optional search term enrichment (e.g. finding similar and related search terms). language=Language('German') creates an object with a language name and a language code: Language(name='German', code='de').

location: Location

The location used for SerpAPI (the 'gl' parameter). location=Location('Switzerland') creates an object with a location name and a location code: Location(name='Switzerland', code='ch').

deepness: Deepness

Defines the search depth with the number of results to retrieve and optional enrichment parameters.

prompts: List[Prompt]

The list of prompts to classify a given product with (multiple) LLM calls. Each prompt object has a name, a context (used for defining the user prompt), a system_prompt (for defining the classification task), and allowed_classes (a list of possible classes).

from fraudcrawler import Language, Location, Deepness, Prompt
# Setup the search
search_term = "sildenafil"
language = Language(name="German")
location = Location(name="Switzerland")
deepness = Deepness(num_results=50)
prompts = [
    Prompt(
        name="relevance",
        system_prompt=(
            "You are a helpful and intelligent assistant. Your task is to classify any given product "
            "as either relevant (1) or not relevant (0), strictly based on the context and product details provided by the user. "
            "You must consider all aspects of the given context and make a binary decision accordingly. "
            "If the product aligns with the user's needs, classify it as 1 (relevant); otherwise, classify it as 0 (not relevant). "
            "Respond only with the number 1 or 0."
        ),
        allowed_classes=[0, 1],
    )
]
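
The example above omits the optional context field described earlier. A minimal sketch of a prompt that also supplies a context (used to build the user prompt) could look as follows; the exact keyword argument is assumed from the description above:

# Sketch only: the 'context' keyword is assumed from the Prompt description above
prompts.append(
    Prompt(
        name="prescription_required",
        context=(
            "We monitor offers of prescription-only medicines that are "
            "sold to customers without a prescription."
        ),
        system_prompt=(
            "Classify the product as 1 (relevant) or 0 (not relevant) given the "
            "context and product details. Respond only with the number 1 or 0."
        ),
        allowed_classes=[0, 1],
    )
)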

(Optional) Add search term enrichment. This finds related search terms (in a given language) and searches for those as well.

from fraudcrawler import Enrichment
deepness.enrichment = Enrichment(
    additional_terms=5,
    additional_urls_per_term=10
)

(Optional) Add marketplaces you explicitly want to search on (this focuses your search, like the site: parameter in a Google search).

from fraudcrawler import Host
marketplaces = [
    Host(name="International", domains="zavamed.com,apomeds.com"),
    Host(name="National", domains="netdoktor.ch, nobelpharma.ch"),
]

(Optional) Exclude URLs where you don't want to find products.

excluded_urls = [
    Host(name="Compendium", domains="compendium.ch"),
]

(Optional) Exclude previously collected URLs (this is intended to save credits).

previously_collected_urls = [
    "https://pharmaciedelabateliere.ch/shop/sante/douleurs-inflammations/dafalgan-cpr-eff-500-mg-16-pce/",
    "https://eiche.ch/product/schmerzmittel-52cd81d5d206a/dafalgan-brausetabletten-1336653",
]

And finally, run the pipeline:

# Execute the pipeline
client.execute(
    search_term=search_term,
    language=language,
    location=location,
    deepness=deepness,
    prompts=prompts,
    # marketplaces=marketplaces,    # Uncomment this for using marketplaces
    # excluded_urls=excluded_urls   # Uncomment this for using excluded_urls
    # previously_collected_urls=previously_collected_urls    # Uncomment this for using previously_collected_urls
)

This creates a file named <search_term>_<language.code>_<location.code>_<datetime[%Y%m%d%H%M%S]>.csv (e.g. sildenafil_de_ch_20250102153045.csv) inside the folder data/results/.

Once the pipeline has terminated, the results can be loaded and examined as follows:

df = client.load_results()
print(df.head(n=10))
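
Each prompt contributes a classification to the results. Assuming the corresponding column is named after the prompt (here "relevance"; check df.columns for the actual names), the relevant hits could be filtered like this:

# Hypothetical column name: the prompt above was named "relevance"
relevant = df[df["relevance"] == 1]
print(f"{len(relevant)} of {len(df)} products classified as relevant")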

If the client has been used to run multiple pipelines, an overview of the available results (for a given instance of FraudCrawlerClient) can be obtained with

client.print_available_results()

Contributing

See CONTRIBUTING.md.

Async Setup

The Orchestrator class in src/base/orchestrator.py is designed to coordinate multiple services that may have interdependencies, allowing them to run in a semi-iterative manner. This means, for example, that product A can be at stage III of the pipeline while product B is still at stage I.

This behavior is enabled through an asynchronous pipeline setup. The three main steps, Search, Context Extraction, and Processing, all utilize httpx.AsyncClient. It is both possible and highly recommended to manage a single AsyncClient instance per application for efficiency. We provide a HttpxAsyncClient class that you can pass in for this purpose. For more details, see the httpx documentation.
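
As a generic illustration of this pattern (plain httpx, not the package's HttpxAsyncClient wrapper), a single AsyncClient can be shared by several concurrent tasks, so one item can already be in a later stage while another is still being fetched:

import asyncio
import httpx

async def fetch_status(client: httpx.AsyncClient, url: str) -> int:
    # Every task reuses the same client and its connection pool
    response = await client.get(url)
    return response.status_code

async def main() -> None:
    urls = ["https://example.com", "https://example.org"]
    # One AsyncClient per application, shared across all requests
    async with httpx.AsyncClient() as client:
        statuses = await asyncio.gather(*(fetch_status(client, url) for url in urls))
    print(statuses)

asyncio.run(main())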

A schematic representation of the package's async setup is shown in the Async Setup diagram.
