
Reason this release was yanked:

Readme not updated

Project description

ZJU Crawler

Introduction

A crawler for Zhejiang University student and teacher information, fetching data from the academic affairs site (教务网) and the Chalaoshi (查老师) website. It can be combined with a QQ bot or similar tools.
The package currently integrates two crawlers:

  • Student personal information crawler

    • Weighted GPA
    • Weighted 4-point-scale GPA (for studying abroad, old)
    • Weighted 4.3-point-scale GPA (for studying abroad, new)
    • Weighted percentage score
    • Exam information (time, exam room, …)
  • Chalaoshi teacher information crawler (ratings, associated courses, etc.)
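For reference, "weighted" in the list above means credit-weighted: each course grade is weighted by its credit count. A minimal illustration of the standard formula (the package's exact rounding and filtering rules are not documented here):

```python
def weighted_gpa(courses):
    """Credit-weighted mean. courses is a list of (credits, grade_points) pairs."""
    total_credits = sum(c for c, _ in courses)
    if total_credits == 0:
        return 0.0
    return sum(c * g for c, g in courses) / total_credits

# A 4-credit course at 4.0 plus a 2-credit course at 3.0:
print(weighted_gpa([(4, 4.0), (2, 3.0)]))
```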

A crawler for Learning at ZJU (学在浙大) will be added in the future (if that site gets fixed).

安装

PyPI

TODO

Manual installation

After downloading, change into the folder containing setup.py and run pip install . .
You can also download and install the .whl file directly.

Usage examples

Import

import zjucrawler
# or:
from zjucrawler import chalaoshi  # Chalaoshi website (unofficial)
from zjucrawler import zju  # fetch from official websites

Teacher

import asyncio

from zjucrawler import chalaoshi  # Chalaoshi website (unofficial)

async def main():
    teacher = input("teacher ID >>>")
    print(await chalaoshi.get_teacher_info(int(teacher)))  # fetch teacher info
    # search_teachers: look up teachers by name or abbreviation
    # get_course_info: fetch a course's average GPA and standard deviation

asyncio.run(main())

Student

import asyncio

from zjucrawler import zju  # fetch from official websites

async def main():
    username = input("username>>>")
    pwd = input("pwd>>>")
    # simulated controls whether login is performed via a simulated browser
    test = zju.Fetcher(username, pwd, simulated=False)
    print(await test.get_GPA())  # overall GPA, excluding withdrawn courses
    # get_avg_score: weighted average percentage score
    # get_abroad_GPA_old: old study-abroad GPA (4-point scale), classes up to 2021
    # get_abroad_GPA_new: new study-abroad GPA (4.3-point scale), classes from 2022
    exams = list(await test.get_all_exams())  # fetch all exam information
    print(exams)
    print(test.__dict__)

asyncio.run(main())
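The two scripts above each drive a single coroutine. Because the package's fetchers are async, independent requests can also run concurrently with asyncio.gather. A minimal sketch using stand-in coroutines (the real chalaoshi and zju calls are network-bound, so they are simulated here):

```python
import asyncio

async def fetch_teacher_info(teacher_id: int) -> dict:
    # Stand-in for chalaoshi.get_teacher_info: simulate an I/O-bound fetch.
    await asyncio.sleep(0.01)
    return {"id": teacher_id, "rating": 4.5}

async def fetch_gpa(username: str) -> float:
    # Stand-in for zju.Fetcher.get_GPA.
    await asyncio.sleep(0.01)
    return 3.9

async def main():
    # Run both fetches concurrently instead of awaiting them one by one.
    return await asyncio.gather(fetch_teacher_info(42), fetch_gpa("alice"))

teacher, gpa = asyncio.run(main())
print(teacher, gpa)
```

The same pattern applies to batches of teacher IDs: build a list of coroutines and pass them all to asyncio.gather.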

Notes

Make sure the package contains security.js.
Due to certain well-known mysterious factors, the Chalaoshi site may be unreachable (it requires the campus intranet); please work around this yourself :)
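A quick way to verify the first point is to check for security.js inside the installed package directory. This is a generic sketch; the exact location of security.js within the package is an assumption, so adjust the path if the package stores it elsewhere:

```python
from pathlib import Path

def has_security_js(package_dir) -> bool:
    # True if security.js is present directly inside the given directory.
    return (Path(package_dir) / "security.js").is_file()

# Example with a stand-in directory; point this at the installed package instead:
print(has_security_js("."))
```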

Download files

Download the file for your platform.

Source Distribution

zjucrawler-1.0.0.tar.gz (18.9 kB)

Uploaded Source

Built Distribution


zjucrawler-1.0.0-py3-none-any.whl (19.0 kB)

Uploaded Python 3

File details

Details for the file zjucrawler-1.0.0.tar.gz.

File metadata

  • Download URL: zjucrawler-1.0.0.tar.gz
  • Upload date:
  • Size: 18.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for zjucrawler-1.0.0.tar.gz

  • SHA256: b3b767d8cd4449fca79a376204b54989e2f3b405a8366fd1f2925ae9214f4f04
  • MD5: 43053b79af33563472c87076077b3e34
  • BLAKE2b-256: 423f8428f94c8055df3802ea2f61446ac698e48c8d018a716376e825194a87e6


File details

Details for the file zjucrawler-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: zjucrawler-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 19.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for zjucrawler-1.0.0-py3-none-any.whl

  • SHA256: 452de236cb40a2791e32826e2c60def5ea9667b6ff91a7eb0f9c698c70f894b4
  • MD5: b373b3c139446e4388d1d87efd39c118
  • BLAKE2b-256: b254ac4a39713a39638a6070b55a0b93fe08660b11de4195fb1082ff5c3de9d7

