ggpyscraper

General-use parser for esports Liquipedia data.

ggpyscraper is a Python package designed to query and scrape esports data from liquipedia.net. With Liquipedia’s standardized formatting and multitude of supported esports, this package can parse data for more than 55 different video games.

Installation

We can use pip to install ggpyscraper.

pip install ggpyscraper

Usage

Liquipedia Pages

There are three different types of liquipedia pages that we can parse: tournaments, teams, and players. To parse any of these objects, the following page details are required:

game – the video game title the page belongs to. The needed string appears in the URL https://liquipedia.net/{game}/{page_name}; for example, the game string for Counter-Strike 2 is counterstrike, as found in https://liquipedia.net/counterstrike/Autimatic.

name – the name of the page of interest; this also comes from https://liquipedia.net/{game}/{page_name}. For example, the page name for https://liquipedia.net/counterstrike/Autimatic is Autimatic.

user – a user-agent string identifying you, as requested by the Liquipedia API Terms of Use; it should describe the project and include contact information.

action – For these pages, ggpyscraper currently only supports wikicode parsing (wikicode is the markup language of the site), action = "wikicode". Because some pages are automatically generated, so that their wikicode does not yield relevant results, an HTML parse (action = "html") is in development; it is currently available only for Counter-Strike pages.
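Both game and name can be read straight off the page URL. As a quick illustration (split_liquipedia_url is a hypothetical helper for this README, not part of ggpyscraper), the split works like this:

```python
from urllib.parse import urlparse

def split_liquipedia_url(url: str):
    """Split a liquipedia.net URL into (game, page_name).

    Hypothetical helper for illustration; not part of ggpyscraper.
    """
    path = urlparse(url).path.strip("/")
    # The first path segment is the game; everything after it is the page name,
    # which may itself contain slashes (e.g. "ELEAGUE/2018/Major").
    game, _, page_name = path.partition("/")
    return game, page_name

split_liquipedia_url("https://liquipedia.net/counterstrike/Autimatic")
# → ("counterstrike", "Autimatic")
```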

Tournament

from ggpyscraper.liquipedia_objects import tournament
t = tournament.Tournament(game="counterstrike", name="ELEAGUE/2018/Major",
                          user="ggpyparser-example(github.com/lou-zhou)",
                          action="wikicode")  # action defaults to "wikicode"
#get all matches - N.B. since only the playoffs appear on this page,
# only the playoff matches will be returned; to look at other stages, use
# name = "ELEAGUE/2018/Major/{Stage Name}"
t.get_results()

#get relevant information about the tournament
t.get_info()

#get the participants in the tournament
t.get_participants()

# get the prize pool of the tournament
t.get_prizes()

# get the talent of the tournament (announcers, commentators, etc.)
t.get_talent()

Player

from ggpyscraper.liquipedia_objects import player
t = player.Player(game="counterstrike", name="autimatic",
                  user="ggpyparser-example(github.com/lou-zhou)",
                  action="wikicode")  # action defaults to "wikicode"
#get the gear used by the player
t.get_gear()

#get relevant information about the player
t.get_info()

Team

from ggpyscraper.liquipedia_objects import team
t = team.Team(game="counterstrike", name="Cloud9",
              user="ggpyparser-example(github.com/lou-zhou)",
              action="wikicode")  # action defaults to "wikicode"
#get the news around the team (e.g. transfers)
t.get_news()

#get relevant information about the team
t.get_info()

#get members of the organization (e.g. CEO)
t.get_organization()

#get historical list of players
t.get_players()

Parsing "General" Liquipedia Pages

To facilitate getting page names, ggpyscraper is also able to parse pages displaying lists of teams, players, or tournaments (e.g. liquipedia.net/counterstrike/S-Tier_Tournaments).

N.B. These functions use an HTML parse, for which the Liquipedia API rate limit is significantly more stringent than for wikicode parsing. Avoid calling these HTML parses at high volume.

from ggpyscraper.parse_liquipedia import parse_general_pages
#parsing tournament pages ex: https://liquipedia.net/counterstrike/S-Tier_Tournaments
parse_general_pages.parse_tournaments(name = "S-Tier_Tournaments", 
                    game =  "counterstrike",
                    user =  "ggpyparser-example(github.com/lou-zhou)")

#parsing transfer pages ex: https://liquipedia.net/counterstrike/Transfers/2025
parse_general_pages.parse_transfers(name = "Transfers/2025",  
                game =  "counterstrike",
                user =  "ggpyparser-example(github.com/lou-zhou)")

#parsing team pages ex: https://liquipedia.net/counterstrike/Portal:Teams/Europe
parse_general_pages.parse_teams(region= "Europe", 
            game = "counterstrike", 
            user =  "ggpyparser-example(github.com/lou-zhou)")

#parsing player pages ex: https://liquipedia.net/counterstrike/Portal:Players/Europe
parse_general_pages.parse_players(region= "Europe", 
        game = "counterstrike", user =  "ggpyparser-example(github.com/lou-zhou)")
#parsing banned-player pages ex: https://liquipedia.net/counterstrike/Banned_Players/Valve
#N.B. for games where there are no tournament-specific bans, company is set to None
parse_general_pages.parse_banned_players(game = "counterstrike", 
            user = "ggpyparser-example(github.com/lou-zhou)",
            company = "Valve")

Parsing Multiple Liquipedia Pages

To avoid being blocked under the rate limits described in the Liquipedia API Terms of Use, ggpyscraper allows multiple wikicode pages to be returned from a single request using parse_liquipedia.parse_multiple_liquipedia_pages. This function builds a dictionary mapping each page name to the corresponding Python liquipedia_object.

#Ex: parsing IEM Cologne and Autimatic's Page
from ggpyscraper.parse_liquipedia import parse_multiple_liquipedia_pages

parse_multiple_liquipedia_pages.create_multiple_pages(game = "counterstrike",
        page_names = ["Intel_Extreme_Masters/2025/Cologne", "Autimatic"],
        user = "ggpyparser-example(github.com/lou-zhou)",
        page_ts = ["tournament", "player"])
#page_names is a list of strings giving the name of each page
#page_ts is a list of strings of page types; valid elements are "tournament", "player", and "team"
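The two lists are positional: the i-th entry of page_ts describes the i-th entry of page_names. A small sanity check along these lines (check_page_args is illustrative only, not part of the library) can catch a mismatch before a rate-limited request is spent:

```python
VALID_TYPES = {"tournament", "player", "team"}

def check_page_args(page_names, page_ts):
    """Validate that names and types pair up one-to-one and that every
    type is one the library accepts. Illustrative; not part of ggpyscraper."""
    if len(page_names) != len(page_ts):
        raise ValueError("page_names and page_ts must be the same length")
    bad = [t for t in page_ts if t not in VALID_TYPES]
    if bad:
        raise ValueError(f"unsupported page types: {bad}")
    # Pair each page name with its declared type.
    return dict(zip(page_names, page_ts))

check_page_args(["Intel_Extreme_Masters/2025/Cologne", "Autimatic"],
                ["tournament", "player"])
# → {"Intel_Extreme_Masters/2025/Cologne": "tournament", "Autimatic": "player"}
```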

Important Note

Because this library relies on the Liquipedia API, calls are subject to the Liquipedia API Terms of Use, including rate limits of 1 call per 2 seconds for wikicode requests and 1 call per 30 seconds for HTML requests.

For reference, a call occurs whenever a Tournament, Team, or Player object is created (the exception being parse_multiple_liquipedia_pages, where multiple pages are generated from one call). I strongly recommend reviewing the Terms of Use and implementing throttling between requests to avoid exceeding these limits and risking an IP ban.
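One way to implement such throttling is a small helper that enforces a minimum interval between successive calls. The sketch below is illustrative only and not part of ggpyscraper; the 2-second and 30-second intervals are the Terms of Use figures quoted above:

```python
import time

class Throttle:
    """Enforce a minimum interval between successive calls.

    Illustrative sketch; not part of ggpyscraper.
    """
    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = None

    def wait(self):
        # Sleep just long enough that at least min_interval seconds
        # separate this call from the previous one.
        if self._last is not None:
            remaining = self.min_interval - (time.monotonic() - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()

wiki_throttle = Throttle(2.0)   # wikicode requests: 1 call per 2 seconds
html_throttle = Throttle(30.0)  # HTML requests: 1 call per 30 seconds

# call wiki_throttle.wait() before creating each wikicode-backed object
```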

Issues and Bugs

This library is intended as a general solution for parsing data across a wide variety of esports, each with different tournament formats and prize pool structures. Combined with the fact that it is maintained by an undergraduate student with limited non-research software development experience and that this is a very early version, bugs are to be expected. Feedback and bug reports are welcome and can be submitted in Issues.

Contributing

Contributions are more than welcome! If you're interested in contributing to this library, please open a Pull Request.

Author and Acknowledgement

Data used by this project is sourced from Liquipedia, which is licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC BY-SA 3.0). In compliance with the license terms, please attribute Liquipedia as the source when using or redistributing this data.
