Project description

vianu-fraudcrawler

Intelligent Market Monitoring

The pipeline for monitoring the market has the following main steps:

  1. search for a given term using SerpAPI
  2. get product information using ZyteAPI
  3. assess the relevance of the found products using the OpenAI API
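
Independent of this package's internals, the three stages correspond to calls against the public SerpAPI, Zyte API, and OpenAI API. The sketch below illustrates that flow with plain HTTP requests and the official openai client; the endpoints are the providers' documented ones, while the environment variable names and the model choice are assumptions for illustration, not fraudcrawler code:

import os

import requests
from openai import OpenAI

# 1. Search for a given term via SerpAPI ('hl' = language, 'gl' = location).
search = requests.get(
    "https://serpapi.com/search.json",
    params={
        "engine": "google",
        "q": "sildenafil",
        "hl": "de",
        "gl": "ch",
        "api_key": os.environ["SERPAPI_API_KEY"],  # assumed variable name
    },
    timeout=30,
).json()
urls = [result["link"] for result in search.get("organic_results", [])]

# 2. Extract product information for each found URL via the Zyte API.
products = []
for url in urls:
    extraction = requests.post(
        "https://api.zyte.com/v1/extract",
        auth=(os.environ["ZYTE_API_KEY"], ""),  # assumed variable name
        json={"url": url, "product": True},
        timeout=60,
    ).json()
    products.append(extraction.get("product", {}))

# 3. Assess the relevance of each product with an LLM call.
llm = OpenAI()  # reads OPENAI_API_KEY from the environment
for product in products:
    answer = llm.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {
                "role": "system",
                "content": "Classify the product as relevant (1) or not relevant (0). "
                           "Respond only with the number 1 or 0.",
            },
            {"role": "user", "content": str(product)},
        ],
    )
    print(product.get("name"), answer.choices[0].message.content)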

Installation

python3.11 -m venv .venv
source .venv/bin/activate
pip install vianu-fraudcrawler

Usage

.env file

Make sure to create a .env file with the necessary API keys and credentials (cf. the .env.example file). A minimal example is sketched below.
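
The exact variable names are defined by .env.example; as an illustrative sketch (with names assumed to follow the three APIs used by the pipeline), the file might look like:

# Variable names below are assumptions (check .env.example for the exact names)
SERPAPI_API_KEY=your-serpapi-key
ZYTE_API_KEY=your-zyte-key
OPENAI_API_KEY=sk-your-openai-key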

Run demo pipeline

python -m fraudcrawler.launch_demo_pipeline

Customize the pipeline

Start by initializing the client

from fraudcrawler import FraudCrawlerClient

# Initialize the client
client = FraudCrawlerClient()

Setting up the search requires five main objects:

search_term: str

The search term for the query (similar to search terms used within major search providers).

language: Language

The language used in SerpAPI (the 'hl' parameter), as well as for the optional search term enrichment (e.g. finding similar and related search terms). language=Language('German') creates an object with a language name and a language code: Language(name='German', code='de').

location: Location

The location used in SerpAPI (the 'gl' parameter). location=Location('Switzerland') creates an object with a location name and a location code: Location(name='Switzerland', code='ch').

deepness: Deepness

Defines the search depth with the number of results to retrieve and optional enrichment parameters.

prompts: List[Prompt]

The list of prompts used to classify a given product with (multiple) LLM calls. Each prompt object has a name, a context (used for defining the user prompt), a system_prompt (defining the classification task), allowed_classes (a list of possible classes), and optionally default_if_missing (a default class used if anything goes wrong).

from fraudcrawler import Language, Location, Deepness, Prompt
# Setup the search
search_term = "sildenafil"
language = Language(name="German")
location = Location(name="Switzerland")
deepness = Deepness(num_results=50)
prompts = [
    Prompt(
        name="relevance",
        context="This organization is interested in medical products and drugs.",
        system_prompt=(
            "You are a helpful and intelligent assistant. Your task is to classify any given product "
            "as either relevant (1) or not relevant (0), strictly based on the context and product details provided by the user. "
            "You must consider all aspects of the given context and make a binary decision accordingly. "
            "If the product aligns with the user's needs, classify it as 1 (relevant); otherwise, classify it as 0 (not relevant). "
            "Respond only with the number 1 or 0."
        ),
        allowed_classes=[0, 1],
    )
]
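
Since each Prompt optionally takes default_if_missing, a second classification prompt might use it as a fallback. This variant is an illustrative sketch, not an example shipped with the package:

# Hypothetical second prompt demonstrating default_if_missing.
prompts.append(
    Prompt(
        name="prescription_only",
        context="This organization monitors prescription-only medicines.",
        system_prompt=(
            "Classify the given product as prescription-only (1) or over-the-counter (0). "
            "Respond only with the number 1 or 0."
        ),
        allowed_classes=[0, 1],
        default_if_missing=0,  # fallback class if anything goes wrong
    )
)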

(Optional) Add search term enrichment. This will find related search terms (in the given language) and search for these as well.

from fraudcrawler import Enrichment
deepness.enrichment = Enrichment(
    additional_terms=5,
    additional_urls_per_term=10
)

(Optional) Add marketplaces where you explicitly want to look (this focuses the search, like the site: parameter in a Google search)

from fraudcrawler import Host
marketplaces = [
    Host(name="International", domains="zavamed.com,apomeds.com"),
    Host(name="National", domains="netdoktor.ch, nobelpharma.ch"),
]

(Optional) Exclude URLs where you don't want to find products

excluded_urls = [
    Host(name="Compendium", domains="compendium.ch"),
]

And finally, run the pipeline

# Execute the pipeline
client.execute(
    search_term=search_term,
    language=language,
    location=location,
    deepness=deepness,
    prompts=prompts,
    # marketplaces=marketplaces,    # Uncomment to restrict the search to these marketplaces
    # excluded_urls=excluded_urls   # Uncomment to exclude these URLs
)

This creates a file with the name pattern <search_term>_<language.code>_<location.code>_<datetime[%Y%m%d%H%M%S]>.csv inside the folder data/results/.
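
For the example above, the documented pattern would expand roughly like this (a sketch of the naming scheme, not the package's internal code):

from datetime import datetime

# <search_term>_<language.code>_<location.code>_<datetime[%Y%m%d%H%M%S]>.csv
filename = f"{search_term}_{language.code}_{location.code}_{datetime.now():%Y%m%d%H%M%S}.csv"
# e.g. "sildenafil_de_ch_20250101120000.csv" inside data/results/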

Once the pipeline has terminated, the results can be loaded and examined as follows:

df = client.load_results()
print(df.head(n=10))
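
Since the results are returned as a DataFrame, standard pandas operations apply. For instance, to keep only the products classified as relevant by the "relevance" prompt defined above (the column name is an assumption; inspect df.columns for the actual one):

# The column name "relevance" is an assumption based on the prompt name above.
relevant = df[df["relevance"] == 1]
print(f"{len(relevant)} of {len(df)} products classified as relevant")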

If the client has been used to run multiple pipelines, an overview of the available results (for a given instance of FraudCrawlerClient) can be obtained with

client.print_available_results()

Contributing

See CONTRIBUTING.md.

Async Setup

The following image provides a schematic representation of the package's async setup.

[Image: Async Setup]

Download files

Download the file for your platform.

Source Distribution

vianu_fraudcrawler-0.3.0.tar.gz (976.7 kB)


Built Distribution


vianu_fraudcrawler-0.3.0-py3-none-any.whl (1.0 MB)


File details

Details for the file vianu_fraudcrawler-0.3.0.tar.gz.

File metadata

  • Download URL: vianu_fraudcrawler-0.3.0.tar.gz
  • Upload date:
  • Size: 976.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.0 CPython/3.10.12 Linux/6.12.10-76061203-generic

File hashes

Hashes for vianu_fraudcrawler-0.3.0.tar.gz:

  • SHA256: b8358dc0c7e7b992a3919fa92aafb40980f7855a6a55cc9521528c745536d20f
  • MD5: d7ef5ab3c1f7bfaef1ac72ceb1e684ca
  • BLAKE2b-256: 721649c9c3c057b9bda1075a37f8c15e4d45574398c25ccf7ac0fbbd37f42deb


File details

Details for the file vianu_fraudcrawler-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: vianu_fraudcrawler-0.3.0-py3-none-any.whl
  • Upload date:
  • Size: 1.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.0 CPython/3.10.12 Linux/6.12.10-76061203-generic

File hashes

Hashes for vianu_fraudcrawler-0.3.0-py3-none-any.whl:

  • SHA256: c2eda10fd357c9a3dc302a1d5c5b3557ef8535b6264f2a28ead2f185e2d938bf
  • MD5: ffac9914f62c91251137caa11ffb4831
  • BLAKE2b-256: 1aeb91a1b0b7aa9aa75925f1a3b0d50b16e9e2913892322d87712b28f3e5a12b

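As a usage note, these published digests can be pinned with pip's hash-checking mode, which refuses to install anything whose hash does not match. The requirements file below uses the two SHA256 digests from this page (in this mode, every transitive dependency must also be pinned with a hash):

# requirements.txt
vianu-fraudcrawler==0.3.0 \
    --hash=sha256:b8358dc0c7e7b992a3919fa92aafb40980f7855a6a55cc9521528c745536d20f \
    --hash=sha256:c2eda10fd357c9a3dc302a1d5c5b3557ef8535b6264f2a28ead2f185e2d938bf

pip install --require-hashes -r requirements.txt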
