
Library for extracting ECHR data


echr extractor

This library contains functions to get ECHR data.

Version

Python 3.9

Contributors

brodriguesdemiranda
Benjamin Rodrigues de Miranda
ChloeCro
Chloe Crombach
Cloud956
Piotr Lewandowski
pranavnbapat
Pranav Bapat
running-machin
running-machin
shashankmc
shashankmc
gijsvd
gijsvd

How to install?

pip install echr-extractor

What are the functions?

  1. get_echr
     Gets all of the available metadata for ECHR cases from the HUDOC database. Can be saved to a file or returned in-memory.
  2. get_echr_extra
     Gets all of the available metadata for ECHR cases from the HUDOC database and, in addition, downloads the full text of each case. Can be saved to files or returned in-memory.
  3. get_nodes_edges
     Gets all of the available nodes and edges for ECHR cases for the given metadata from the HUDOC database.
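
A minimal usage sketch of the three calls, based on the parameters documented below (the argument values are arbitrary examples):

import echr_extractor as echr

# Metadata only; with the default save_file='y' a csv file is written to the data folder.
df = echr.get_echr(count=50)

# Metadata plus the full text of each case, returned in-memory.
df, full_texts = echr.get_echr_extra(count=50, save_file='n')

# Nodes and edges derived from previously downloaded metadata.
nodes, edges = echr.get_nodes_edges(metadata_path='data/echr_metadata.csv')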

What are the parameters?

  1. get_echr
    • start_id: int, optional, default: 0
    • The id of the first case to be downloaded.
    • end_id: int, optional, default: The maximum number of cases available
    • The id of the last case to be downloaded.
    • count: int, optional, default: None
    • The number of cases per language to be downloaded, starting from the start_id.
      !NOTICE!
      If count is provided, the end_id will be set to start_id+count, overwriting any given end_id value.
    • start_date: date, optional, default None
    • The start publication date (yyyy-mm-dd)
    • end_date: date, optional, default current date
    • The end publication date (yyyy-mm-dd)
    • verbose: boolean, optional, default False
    • This option allows for additional printing, showing live progress of the extraction process.
    • fields: list of strings, optional, default all available fields
    • This argument can be provided to limit the metadata to be downloaded. These fields will appear as columns in the csv file / DataFrame object. The full list of fields is given in the Appendix.
    • save_file: ['y', 'n'], optional, default 'y'
    • Save metadata as a csv file in the data folder, or return it as a Pandas DataFrame object in-memory.
    • link: string, optional, default None
    • Allows the user to download the results of a search from the HUDOC website. Since HUDOC does not provide proper API documentation, this method attempts to recreate an API call based on the observed relationship between the browser link and the API call. This method may encounter errors, as some behaviours have not been tested. If this argument is provided, all other arguments are ignored, except for 'fields'. Further information on proper usage is in the Appendix.
    • query_payload: string, optional, default None
    • Allows the user to download the results of a search from the HUDOC website. If this argument is provided, it takes priority over the 'link' parameter. This method is much more robust than using the 'link' parameter. It requires the user to access the Network tab in their browser - full information on proper usage is in the Appendix.
    • language: list of strings, optional, default ['ENG']
    • The language of the metadata to be downloaded from the available languages.
      !NOTICE!
      If link or query_payload is provided, the language argument is not used, as the language is already part of the link or query.
  2. get_echr_extra
    • start_id: int, optional, default: 0
    • The id of the first case to be downloaded.
    • end_id: int, optional, default: The maximum number of cases available
    • The id of the last case to be downloaded.
    • count: int, optional, default: None
    • The number of cases per language to be downloaded, starting from the start_id.
      !NOTICE!
      If count is provided, the end_id will be set to start_id+count, overwriting any given end_id value.
    • start_date: date, optional, default None
    • The start publication date (yyyy-mm-dd)
    • end_date: date, optional, default current date
    • The end publication date (yyyy-mm-dd)
    • verbose: boolean, optional, default False
    • This option allows for additional printing, showing live progress of the extraction process.
    • skip_missing_dates: boolean, optional, default False
    • When enabled, the extraction skips cases for which no judgement date is provided.
    • fields: list of strings, optional, default all available fields
    • This argument can be provided to limit the metadata to be downloaded. These fields will appear as columns in the csv file / DataFrame object. The full list of fields is given in the Appendix.
    • save_file: ['y', 'n'], optional, default 'y'
    • Save metadata as a csv file in the data folder and the full text as a json file, or return a Pandas DataFrame object and a list of dictionaries in-memory.
    • language: list of strings, optional, default ['ENG']
    • The language of the metadata to be downloaded from the available languages.
      !NOTICE!
      If link or query_payload is provided, the language argument is not used, as the language is already part of the link or query.
    • link: string, optional, default None
    • Allows the user to download the results of a search from the HUDOC website. Since HUDOC does not provide proper API documentation, this method attempts to recreate an API call based on the observed relationship between the browser link and the API call. This method may encounter errors, as some behaviours have not been tested. If this argument is provided, all other arguments are ignored, except for 'fields'. Further information on proper usage is in the Appendix.
    • query_payload: string, optional, default None
    • Allows the user to download the results of a search from the HUDOC website. If this argument is provided, it takes priority over the 'link' parameter. This method is much more robust than using the 'link' parameter. It requires the user to access the Network tab in their browser - full information on proper usage is in the Appendix.
    • threads: int, optional, default: 10
    • The full text download is a parallelizable process. This parameter determines the number of threads to be used in the download.
  3. get_nodes_edges
    • metadata_path
    • The path to the metadata file to read.
    • df
    • As an alternative to metadata_path, the user can provide a Pandas DataFrame object. If both are given, df is ignored.
    • save_file: ['y', 'n'], optional, default 'y'
    • Save the nodes and edges of the cases in the metadata as csv files in the data folder, or return them as Pandas DataFrame objects in-memory.

Examples

import echr_extractor as echr

Below are examples of saving results to files:

df, json = echr.get_echr_extra(count=100, save_file='y', threads=10)
df = echr.get_echr(start_id=1, save_file='y', skip_missing_dates=True)

Below are examples of returning results in-memory:

df, json = echr.get_echr_extra(start_id=20, end_id=3000, save_file='n')
    
df = echr.get_echr(start_id=1000, count=2000, save_file='n', verbose=True)

nodes, edges = echr.get_nodes_edges(metadata_path='data/echr_metadata.csv', save_file='n')
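
The date, language, and verbosity options documented above can be combined in the same call, and the resulting DataFrame can be fed straight into get_nodes_edges through the df parameter. A sketch (it is assumed here that dates are passed as yyyy-mm-dd strings and that 'FRE' is the HUDOC code for French):

df = echr.get_echr(start_date='2020-01-01', end_date='2022-12-31', language=['ENG', 'FRE'], save_file='n', verbose=True)

nodes, edges = echr.get_nodes_edges(df=df, save_file='n')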

## License
[![License: Apache 2.0](https://img.shields.io/github/license/maastrichtlawtech/extraction_libraries)](https://opensource.org/licenses/Apache-2.0)

Previously under the [MIT License](https://opensource.org/licenses/MIT), as of 28/10/2022 this work is licensed under the [Apache License, Version 2.0](https://opensource.org/licenses/Apache-2.0).

Apache License, Version 2.0

Copyright (c) 2022 Maastricht Law & Tech Lab

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
    
    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Appendix

To properly use the 'link' parameter of the extraction methods, the user should head to 

https://hudoc.echr.coe.int/eng#%20

There, the user can use HUDOC's Advanced Search tools to search for specific cases.
Afterwards*, the user can copy the link from the address bar and pass it on to the extraction methods.

Known issues with the 'link' method:

- Using the " character in your searches will cause the extraction to fail. It will only work if that character is in the
Text section, where it is essential for proper use of the search. In all other search fields, please do not use the " character.
If it is essential for your work, please raise an issue on GitHub, and we can try to manually fix another field.


* It should be noted that the link only updates after the 'search' button of the Advanced Search is clicked.
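
Once copied, the link can be passed directly to either extraction function. A sketch; the URL below is only a placeholder standing in for whatever your own Advanced Search produces:

link = 'https://hudoc.echr.coe.int/eng#...your-copied-search-link...'

# Only 'fields' is honoured alongside 'link'; all other filter arguments are ignored.
df = echr.get_echr(link=link, save_file='n')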



The full list of fields is as follows:

fields = ['itemid', 'applicability', 'application', 'appno', 'article', 'conclusion', 'decisiondate', 'docname',
'documentcollectionid', 'documentcollectionid2', 'doctype', 'doctypebranch', 'ecli', 'externalsources', 'extractedappno',
'importance', 'introductiondate', 'isplaceholder', 'issue', 'judgementdate', 'kpdate', 'kpdateAsText', 'kpthesaurus',
'languageisocode', 'meetingnumber', 'originatingbody', 'publishedby', 'Rank', 'referencedate', 'reportdate', 'representedby',
'resolutiondate', 'resolutionnumber', 'respondent', 'respondentOrderEng', 'rulesofcourt', 'separateopinion', 'scl',
'sharepointid', 'typedescription', 'nonviolation', 'violation']

These fields can take different values; for more information, head to https://hudoc.echr.coe.int.
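
To keep the output small, a subset of these names can be passed through the fields parameter. A sketch with an arbitrary selection of columns:

df = echr.get_echr(count=200, fields=['itemid', 'appno', 'docname', 'judgementdate', 'article', 'conclusion'], save_file='n')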

Query_payload Parameter

This section describes, step by step, the proper usage of the 'query_payload' parameter.

  1. Go to the HUDOC website.
  2. Input your search parameters.
  3. Right-click on the page and inspect the page elements.
  4. After the panel on the right side has opened, open the Network section, which records the requests made by the website.
  5. You might see some requests already present. If that is the case, press the record button twice to clear the history. Otherwise, continue to step 6.
  6. Once you are recording new requests and the history has been cleared, click the search button on the website to execute your search.
  7. In the Network tab, you should see new request records appear. Click the one at the top to inspect it.
  8. A tab with request information should appear. Open the Payload section.
  9. The query payload should be shown there. Copy its value and use it as the 'query_payload' parameter in code. Note that the value should be passed as a string wrapped in single quotation marks ( ' ), as the payload itself may contain double quotation mark ( " ) characters.
  10. Now you know how to use the query_payload parameter!
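
With the payload copied from the Payload section, the call looks like the sketch below. The payload string shown here is a shortened placeholder, not a working query; paste your own copied value in its place:

payload = 'contentsitename=ECHR&select=itemid,docname,judgementdate&sort=&start=0&length=500'

df = echr.get_echr(query_payload=payload, save_file='n')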
