diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index d1aac5f..b05e80e 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -34,23 +34,23 @@ community looks forward to your contributions. 🎉 ## Code of Conduct This project and everyone participating in it is governed by the -[django-tasks-scheduler Code of Conduct](https://github.com/dsoftwareinc/django-tasks-scheduler/blob/main/CODE_OF_CONDUCT.md). +[django-tasks-scheduler Code of Conduct](https://github.com/django-commons/django-tasks-scheduler/blob/main/CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to . ## I Have a Question > If you want to ask a question, we assume that you have read the -> available [Documentation](https://github.com/dsoftwareinc/django-tasks-scheduler). +> available [Documentation](https://github.com/django-commons/django-tasks-scheduler). Before you ask a question, it is best to search for -existing [Issues](https://github.com/dsoftwareinc/django-tasks-scheduler/issues) that might help you. In case you have +existing [Issues](https://github.com/django-commons/django-tasks-scheduler/issues) that might help you. In case you have found a suitable issue and still need clarification, you can write your question in this issue. It is also advisable to search the internet for answers first. If you then still feel the need to ask a question and need clarification, we recommend the following: -- Open an [Issue](https://github.com/dsoftwareinc/django-tasks-scheduler/issues/new). +- Open an [Issue](https://github.com/django-commons/django-tasks-scheduler/issues/new). - Provide as much context as you can about what you're running into. - Provide project and platform versions (nodejs, npm, etc), depending on what seems relevant. @@ -90,11 +90,11 @@ following steps in advance to help us fix any potential bug as fast as possible. - Make sure that you are using the latest version. - Determine if your bug is really a bug and not an error on your side, e.g., using incompatible environment components/versions (Make sure that you have read - the [documentation](https://github.com/dsoftwareinc/django-tasks-scheduler). If you are looking for support, you might + the [documentation](https://github.com/django-commons/django-tasks-scheduler). If you are looking for support, you might want to check [this section](#i-have-a-question)). - To see if other users have experienced (and potentially already solved) the same issue you are having, check if there is not already a bug report existing for your bug or error in - the [bug tracker](https://github.com/dsoftwareinc/django-tasks-scheduler/issues?q=label%3Abug). + the [bug tracker](https://github.com/django-commons/django-tasks-scheduler/issues?q=label%3Abug). - Also make sure to search the internet (including Stack Overflow) to see if users outside the GitHub community have discussed the issue. - Collect information about the bug: @@ -114,7 +114,7 @@ following steps in advance to help us fix any potential bug as fast as possible. We use GitHub issues to track bugs and errors. If you run into an issue with the project: -- Open an [Issue](https://github.com/dsoftwareinc/django-tasks-scheduler/issues/new). +- Open an [Issue](https://github.com/django-commons/django-tasks-scheduler/issues/new). (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.) - Explain the behavior you would expect and the actual behavior. 
@@ -146,9 +146,9 @@ community to understand your suggestion and find related suggestions. #### Before Submitting an Enhancement - Make sure that you are using the latest version. -- Read the [documentation](https://github.com/dsoftwareinc/django-tasks-scheduler) carefully and find out if the +- Read the [documentation](https://github.com/django-commons/django-tasks-scheduler) carefully and find out if the functionality is already covered, maybe by an individual configuration. -- Perform a [search](https://github.com/dsoftwareinc/django-tasks-scheduler/issues) to see if the enhancement has +- Perform a [search](https://github.com/django-commons/django-tasks-scheduler/issues) to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one. - Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to @@ -160,7 +160,7 @@ community to understand your suggestion and find related suggestions. #### How Do I Submit a Good Enhancement Suggestion? -Enhancement suggestions are tracked as [GitHub issues](https://github.com/dsoftwareinc/django-tasks-scheduler/issues). +Enhancement suggestions are tracked as [GitHub issues](https://github.com/django-commons/django-tasks-scheduler/issues). - Use a **clear and descriptive title** for the issue to identify the suggestion. - Provide a **step-by-step description of the suggested enhancement** in as many details as possible. @@ -180,7 +180,7 @@ Enhancement suggestions are tracked as [GitHub issues](https://github.com/dsoftw ### Your First Code Contribution Unsure where to begin contributing? You can start by looking through -[help-wanted issues](https://github.com/dsoftwareinc/wiwik/labels/help%20wanted). +[help-wanted issues](https://github.com/django-commons/django-tasks-scheduler/labels/help%20wanted). Never contributed to open source before? Here are a couple of friendly tutorials: diff --git a/.github/actions/test-coverage/action.yml b/.github/actions/test-coverage/action.yml index ef2ad18..59f958e 100644 --- a/.github/actions/test-coverage/action.yml +++ b/.github/actions/test-coverage/action.yml @@ -17,26 +17,26 @@ outputs: runs: using: "composite" steps: - - name: Run Tests with coverage + - name: Run regular tests with coverage shell: bash run: | cd testproject - poetry run coverage run manage.py test scheduler + uv run coverage run manage.py test --exclude-tag multiprocess scheduler - name: Coverage report id: coverage_report shell: bash run: | mv testproject/.coverage . 
          echo 'REPORT<<EOF' >> $GITHUB_ENV
-          poetry run coverage report >> $GITHUB_ENV
+          uv run coverage report >> $GITHUB_ENV
          echo 'EOF' >> $GITHUB_ENV
    - name: json report
      id: json-report
      shell: bash
      run: |
-        poetry run coverage json
+        uv run coverage json
        echo "COVERAGE=$(jq '.totals.percent_covered_display|tonumber' coverage.json)" >> $GITHUB_ENV
-    - uses: mshick/add-pr-comment@v2
+    - uses: mshick/add-pr-comment@dd126dd8c253650d181ad9538d8b4fa218fc31e8
      if: ${{ github.event_name == 'pull_request' }}
      with:
        message: |
@@ -45,5 +45,5 @@ runs:
          ${{ env.REPORT }}
          ```
        repo-token: ${{ inputs.repoToken }}
-      repo-token-user-login: 'github-actions[bot]'
        allow-repeats: true
+      update-only: true
diff --git a/.github/workflows/publish-documentation.yml b/.github/workflows/publish-documentation.yml
index f1dc2a9..74f9cbb 100644
--- a/.github/workflows/publish-documentation.yml
+++ b/.github/workflows/publish-documentation.yml
@@ -17,10 +17,12 @@ jobs:
      url: https://pypi.org/p/fakeredis
    steps:
      - uses: actions/checkout@v4
+        with:
+          persist-credentials: false
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
-          python-version: "3.11"
+          python-version: "3.13"
      - name: Configure Git Credentials
        run: |
          git config user.name github-actions[bot]
diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml
index 3059827..108d77f 100644
--- a/.github/workflows/publish.yml
+++ b/.github/workflows/publish.yml
@@ -1,40 +1,81 @@
-# This workflow will upload a Python Package using Twine when a release is created
-# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
-
-# This workflow uses actions that are not certified by GitHub.
-# They are provided by a third-party and are governed by
-# separate terms of service, privacy policy, and support
-# documentation.
- -name: Upload Python Package +name: Release on: release: types: [published] +env: + # Change these for your project's URLs + PYPI_URL: https://pypi.org/p/django-tasks-scheduler + PYPI_TEST_URL: https://test.pypi.org/p/django-tasks-scheduler + jobs: - publish: + build: + name: Build distribution 📦 runs-on: ubuntu-latest permissions: id-token: write # IMPORTANT: this permission is mandatory for trusted publishing - steps: - uses: actions/checkout@v4 + with: + persist-credentials: false - name: Set up Python uses: actions/setup-python@v5 with: - python-version: '3.11' - cache-dependency-path: poetry.lock + python-version: "3.13" + - name: Install pypa/build + run: + python3 -m pip install build --user + - name: Build a binary wheel and a source tarball + run: python3 -m build + - name: Store the distribution packages + uses: actions/upload-artifact@v4 + with: + name: python-package-distributions + path: dist/ + + publish-to-pypi: + name: >- + Publish Python 🐍 distribution 📦 to PyPI + if: startsWith(github.ref, 'refs/tags/') # only publish to PyPI on tag pushes + needs: + - build + runs-on: ubuntu-latest + environment: + name: pypi + url: ${{ env.PYPI_URL }} + permissions: + id-token: write # IMPORTANT: mandatory for trusted publishing + steps: + - name: Download all the dists + uses: actions/download-artifact@v4 + with: + name: python-package-distributions + path: dist/ + - name: Publish distribution 📦 to PyPI + uses: pypa/gh-action-pypi-publish@v1.12.4 + + publish-to-testpypi: + name: Publish Python 🐍 distribution 📦 to TestPyPI + needs: + - build + runs-on: ubuntu-latest - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install build + environment: + name: testpypi + url: ${{ env.PYPI_TEST_URL }} - - name: Build package - run: python -m build + permissions: + id-token: write # IMPORTANT: mandatory for trusted publishing - - name: Publish package to pypi - uses: pypa/gh-action-pypi-publish@v1.8.14 + steps: + - name: Download all the dists + uses: actions/download-artifact@v4 + with: + name: python-package-distributions + path: dist/ + - name: Publish distribution 📦 to TestPyPI + uses: pypa/gh-action-pypi-publish@v1.12.4 with: - print-hash: true \ No newline at end of file + repository-url: https://test.pypi.org/legacy/ + skip-existing: true \ No newline at end of file diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml index 11ffc86..e416acb 100644 --- a/.github/workflows/test.yml +++ b/.github/workflows/test.yml @@ -1,7 +1,7 @@ name: Django CI on: - pull_request_target: + pull_request: branches: - master push: @@ -10,42 +10,43 @@ on: workflow_dispatch: jobs: - flake8: + ruff: runs-on: ubuntu-latest - name: "flake8 on code" + name: "ruff on code" + permissions: + contents: read steps: - uses: actions/checkout@v4 - - name: Setup Python - uses: actions/setup-python@v5 with: - python-version: "3.11" - cache-dependency-path: poetry.lock - - name: Install poetry and dependencies - shell: bash - run: | - python -m pip --quiet install poetry - echo "$HOME/.poetry/bin" >> $GITHUB_PATH - poetry install - - name: Run flake8 + persist-credentials: false + - name: Install uv + uses: astral-sh/setup-uv@v6 + - uses: actions/setup-python@v5 + with: + cache-dependency-path: uv.lock + python-version: "3.13" + - name: Run ruff shell: bash run: | - poetry run flake8 scheduler/ + uv run ruff check - testRedis: - needs: [ 'flake8' ] + test-regular: + needs: [ 'ruff' ] runs-on: ubuntu-latest + name: "Tests py${{ matrix.python-version }}/dj${{ 
matrix.django-version }}/${{ matrix.broker }}" strategy: max-parallel: 6 matrix: - python-version: [ '3.9', '3.10', '3.11', '3.12' ] - django-version: [ '4.2.13', '5.0.6' ] - exclude: - - python-version: '3.9' - django-version: '5.0.6' + python-version: [ '3.11', '3.12', '3.13' ] + django-version: [ '5.1.8', '5.2' ] + broker: [ 'redis', 'fakeredis', 'valkey' ] include: - - python-version: '3.11' - django-version: '4.2.13' + - python-version: '3.13' + django-version: '5.2' + broker: 'redis' coverage: yes + permissions: + pull-requests: write services: redis: image: redis:7.2.2 @@ -56,38 +57,64 @@ jobs: --health-interval 10s --health-timeout 5s --health-retries 5 + + valkey: + image: valkey/valkey:8.0 + ports: + - 6380:6379 + options: >- + --health-cmd "redis-cli ping" + --health-interval 10s + --health-timeout 5s + --health-retries 5 + outputs: version: ${{ steps.getVersion.outputs.VERSION }} + steps: - uses: actions/checkout@v4 - - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v5 with: - python-version: ${{ matrix.python-version }} - cache-dependency-path: poetry.lock - - name: Install poetry and dependencies + persist-credentials: false + - name: Install uv + uses: astral-sh/setup-uv@v6 + - uses: actions/setup-python@v5 + with: + cache-dependency-path: uv.lock + python-version: "${{ matrix.python-version }}" + + - name: Install django version shell: bash run: | - python -m pip --quiet install poetry - echo "$HOME/.poetry/bin" >> $GITHUB_PATH - poetry install -E yaml - poetry run pip install django==${{ matrix.django-version }} + if [ ${{ matrix.broker == 'valkey' }} == true ]; then + additional_args="--extra valkey" + fi + uv sync --extra yaml $additional_args + uv pip install django==${{ matrix.django-version }} - name: Get version id: getVersion shell: bash run: | - VERSION=$(poetry version -s --no-ansi -n) + VERSION=$(uv version --short) echo "VERSION=$VERSION" >> $GITHUB_OUTPUT + - name: Check for missing migrations run: | cd testproject - poetry run python manage.py makemigrations --check + uv run python manage.py makemigrations --check + - name: Run Tests without coverage if: ${{ matrix.coverage != 'yes' }} run: | cd testproject - poetry run python manage.py test scheduler + export FAKEREDIS=${{ matrix.broker == 'fakeredis' }} + if [ ${{ matrix.broker == 'valkey' }} == true ]; then + export BROKER_PORT=6380 + else + export BROKER_PORT=6379 + fi + uv run python manage.py test --exclude-tag multiprocess scheduler + # Steps for coverage check - name: Run tests with coverage uses: ./.github/actions/test-coverage @@ -96,9 +123,10 @@ jobs: pythonVer: ${{ matrix.python-version }} djangoVer: ${{ matrix.django-version }} repoToken: ${{ secrets.GITHUB_TOKEN }} + - name: Create coverage badge if: ${{ matrix.coverage == 'yes' && github.event_name == 'push' }} - uses: schneegans/dynamic-badges-action@v1.7.0 + uses: schneegans/dynamic-badges-action@7142847813c746736c986b42dec98541e49a2cea with: auth: ${{ secrets.GIST_SECRET }} gistID: b756396efb895f0e34558c980f1ca0c7 @@ -116,59 +144,9 @@ jobs: # write permission is required for auto-labeler # otherwise, read permission is required at least pull-requests: write - needs: testRedis + needs: test-regular runs-on: ubuntu-latest steps: - - uses: release-drafter/release-drafter@v6 + - uses: release-drafter/release-drafter@b1476f6e6eb133afa41ed8589daba6dc69b4d3f5 env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - - - testFakeRedis: - needs: [ 'flake8' ] - runs-on: ubuntu-latest - strategy: - max-parallel: 6 - matrix: - 
python-version: [ '3.9', '3.10', '3.11', '3.12' ] - django-version: [ '4.2.13', '5.0.6' ] - exclude: - - python-version: '3.9' - django-version: '5.0.6' - include: - - python-version: '3.11' - django-version: '4.2.13' - coverage: yes - - outputs: - version: ${{ steps.getVersion.outputs.VERSION }} - steps: - - uses: actions/checkout@v4 - - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v5 - with: - python-version: ${{ matrix.python-version }} - cache-dependency-path: poetry.lock - - name: Install poetry and dependencies - shell: bash - run: | - python -m pip --quiet install poetry - echo "$HOME/.poetry/bin" >> $GITHUB_PATH - poetry install -E yaml - poetry run pip install django==${{ matrix.django-version }} - - - name: Get version - id: getVersion - shell: bash - run: | - VERSION=$(poetry version -s --no-ansi -n) - echo "VERSION=$VERSION" >> $GITHUB_OUTPUT - - name: Check for missing migrations - run: | - cd testproject - poetry run python manage.py makemigrations --check - - name: Run Tests without coverage - run: | - cd testproject - export FAKEREDIS=True - poetry run python manage.py test scheduler + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} \ No newline at end of file diff --git a/.github/zizmor.yml b/.github/zizmor.yml new file mode 100644 index 0000000..c3a3542 --- /dev/null +++ b/.github/zizmor.yml @@ -0,0 +1,15 @@ +rules: + unpinned-images: + ignore: + - 'test.yml' + - 'test-dragonfly.yml' + unpinned-uses: + config: + policies: + actions/*: any + astral-sh/*: any + pypa/gh-action-pypi-publish: any + github-env: + ignore: + - 'action.yml:36:7' + - 'action.yml:28:7' \ No newline at end of file diff --git a/.gitignore b/.gitignore index 82e020f..7b41118 100644 --- a/.gitignore +++ b/.gitignore @@ -1,7 +1,7 @@ __pycache__/ *.py[cod] *$py.class - +tags *.so .Python diff --git a/.readthedocs.yaml b/.readthedocs.yaml index 0b64ab9..5a412a0 100644 --- a/.readthedocs.yaml +++ b/.readthedocs.yaml @@ -2,7 +2,7 @@ version: 2 build: os: "ubuntu-20.04" tools: - python: "3.11" + python: "3.12" mkdocs: configuration: mkdocs.yml diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index 4eb609e..c931a69 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -1,13 +1,18 @@ # Code of Conduct - django-tasks-scheduler +Adapted from [Django-Commons Code of Conduct](https://github.com/django-commons/membership/blob/main/CODE_OF_CONDUCT.md) + ## Our Pledge -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socioeconomic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, caste, color, religion, or sexual +identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. 
## Our Standards @@ -19,44 +24,45 @@ community include: * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience -* Focusing on what is best not just for us as individuals, but for the - overall community +* Focusing on what is best not just for us as individuals, but for the overall + community Examples of unacceptable behavior include: -* The use of sexualized language or imagery, and sexual attention or - advances +* The use of sexualized language or imagery, and sexual attention or advances of + any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment -* Publishing others' private information, such as a physical or email - address, without their explicit permission +* Publishing others' private information, such as a physical or email address, + without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting -## Our Responsibilities +## Enforcement Responsibilities -Project maintainers are responsible for clarifying and enforcing our standards of +Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in -response to any behavior that they deem inappropriate, -threatening, offensive, or harmful. +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. -Project maintainers have the right and responsibility to remove, edit, or reject +Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are -not aligned to this Code of Conduct, and will -communicate reasons for moderation decisions when appropriate. +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. -Examples of representing our community include using an official e-mail address, +Examples of representing our community include using an official email address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement at . +reported to the community leaders responsible for enforcement at +[django-commons-coc@googlegroups.com](mailto:django-commons-coc@googlegroups.com). All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the @@ -67,28 +73,19 @@ reporter of any incident. Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: -### 1. Correction +### 1. Warning **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. -**Consequence**: A private, written warning from community leaders, providing -clarity around the nature of the violation and an explanation of why the -behavior was inappropriate. A public apology may be requested. - -### 2. 
Warning - -**Community Impact**: A violation through a single incident or series -of actions. - **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels -like social media. Violating these terms may lead to a temporary or -permanent ban. +like social media. Violating these terms may lead to a temporary or permanent +ban. -### 3. Temporary Ban +### 2. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. @@ -99,18 +96,34 @@ private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. -### 4. Permanent Ban +### 3. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an +standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. -**Consequence**: A permanent ban from any sort of public interaction within -the community. +**Consequence**: A permanent ban from any sort of public interaction within the +community. ## Attribution -This Code of Conduct is adapted from the [Contributor Covenant](https://contributor-covenant.org/), version -[1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct/code_of_conduct.md) and -[2.0](https://www.contributor-covenant.org/version/2/0/code_of_conduct/code_of_conduct.md), -and was generated by [contributing-gen](https://github.com/bttger/contributing-gen). \ No newline at end of file +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.1, available at +[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. + +Community Impact Guidelines were inspired by +[Mozilla's code of conduct enforcement ladder][Mozilla CoC]. + +For answers to common questions about this code of conduct, see the FAQ at +[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at +[https://www.contributor-covenant.org/translations][translations]. 
+ +[homepage]: https://www.contributor-covenant.org + +[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html + +[Mozilla CoC]: https://github.com/mozilla/diversity + +[FAQ]: https://www.contributor-covenant.org/faq + +[translations]: https://www.contributor-covenant.org/translations diff --git a/README.md b/README.md index 81a2c93..f0ef295 100644 --- a/README.md +++ b/README.md @@ -1,10 +1,65 @@ Django Tasks Scheduler =================== -[![Django CI](https://github.com/dsoftwareinc/django-tasks-scheduler/actions/workflows/test.yml/badge.svg)](https://github.com/dsoftwareinc/django-tasks-scheduler/actions/workflows/test.yml) +[![Django CI](https://github.com/django-commons/django-tasks-scheduler/actions/workflows/test.yml/badge.svg)](https://github.com/django-commons/django-tasks-scheduler/actions/workflows/test.yml) ![badge](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/cunla/b756396efb895f0e34558c980f1ca0c7/raw/django-tasks-scheduler-4.json) [![badge](https://img.shields.io/pypi/dm/django-tasks-scheduler)](https://pypi.org/project/django-tasks-scheduler/) -Documentation can be found in https://django-tasks-scheduler.readthedocs.io/en/latest/ +Documentation can be found in https://django-tasks-scheduler.readthedocs.io/ + +# Usage + +1. Update `settings.py` to include scheduler configuration: + +```python +import os +from typing import Dict +from scheduler.types import SchedulerConfiguration, Broker, QueueConfiguration + +INSTALLED_APPS = [ + # ... + 'scheduler', + # ... +] +SCHEDULER_CONFIG = SchedulerConfiguration( + EXECUTIONS_IN_PAGE=20, + SCHEDULER_INTERVAL=10, + BROKER=Broker.REDIS, + CALLBACK_TIMEOUT=60, # Callback timeout in seconds (success/failure/stopped) + # Default values, can be overriden per task/job + DEFAULT_SUCCESS_TTL=10 * 60, # Time To Live (TTL) in seconds to keep successful job results + DEFAULT_FAILURE_TTL=365 * 24 * 60 * 60, # Time To Live (TTL) in seconds to keep job failure information + DEFAULT_JOB_TTL=10 * 60, # Time To Live (TTL) in seconds to keep job information + DEFAULT_JOB_TIMEOUT=5 * 60, # timeout (seconds) for a job + # General configuration values + DEFAULT_WORKER_TTL=10 * 60, # Time To Live (TTL) in seconds to keep worker information after last heartbeat + DEFAULT_MAINTENANCE_TASK_INTERVAL=10 * 60, # The interval to run maintenance tasks in seconds. 10 minutes. + DEFAULT_JOB_MONITORING_INTERVAL=30, # The interval to monitor jobs in seconds. + SCHEDULER_FALLBACK_PERIOD_SECS=120, # Period (secs) to wait before requiring to reacquire locks +) +SCHEDULER_QUEUES: Dict[str, QueueConfiguration] = { + 'default': QueueConfiguration(URL='redis://localhost:6379/0'), +} +``` + +2. Update `urls.py` to include scheduler urls: + +```python +from django.urls import path, include + +urlpatterns = [ + # ... + path('scheduler/', include('scheduler.urls')), +] +``` + +3. Run migrations: + +```bash +python manage.py migrate +``` + +4. Check out the admin views: + ![](./docs/media/admin-tasks-list.jpg) # Sponsor @@ -12,4 +67,7 @@ django-tasks-scheduler is developed for free. You can support this project by becoming a sponsor using [this link](https://github.com/sponsors/cunla). +# Contributing +Interested in contributing, providing suggestions, or submitting bugs? See +guidelines [at this link](.github/CONTRIBUTING.md). 
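A minimal sketch of what the README's usage steps come down to, using the `@job()` decorator and `.delay()` call that this diff documents in `docs/usage.md`; the module path `myapp/jobs.py` and the function name are illustrative, not part of the package API:

```python
# myapp/jobs.py -- illustrative module; any importable app module works
from scheduler import job


@job()  # assumption: with no arguments, the job is enqueued on the 'default' queue
def count_to(limit: int) -> int:
    # Stand-in for real work; executed later by a scheduler_worker process
    return sum(range(limit))


# e.g., from a view or a management command:
# count_to.delay(10_000)
```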
diff --git a/SECURITY.md b/SECURITY.md index 806683f..6b2bf97 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -2,9 +2,9 @@ ## Supported Versions -| Version | Supported | -|-------------|--------------------| -| 2023.latest | :white_check_mark: | +| Version | Supported | +|----------|--------------------| +| 4.latest | :white_check_mark: | ## Reporting a Vulnerability diff --git a/docs/changelog.md b/docs/changelog.md index c4bc542..9938783 100644 --- a/docs/changelog.md +++ b/docs/changelog.md @@ -1,5 +1,164 @@ # Changelog +## v4.0.5 🌈 + +### 🐛 Bug Fixes + +- fix:repeatable task without start date #276 +- fix:admin list of tasks showing local datetime #280 +- fix:wait for job child process using os.waitpid #281 + +### 🧰 Maintenance + +- refactor some tests + +## v4.0.4 🌈 + +### 🐛 Bug Fixes + +- Issue when `SCHEDULER_CONFIG` is a `dict` #273 +- Do not warn about _non_serializable_fields #274 + +### 🧰 Maintenance + +- Fix gha zizmor findings +- Update dependencies to latest versions + +## v4.0.3 🌈 + +### 🐛 Bug Fixes + +- Updated `scheduler_worker` management command argument to `--without-scheduler` since the worker has a scheduler by + default. + +## v4.0.2 🌈 + +### 🐛 Bug Fixes + +- Add type hint for `JOB_METHODS_LIST` +- Fix issue creating new `ONCE` task without a scheduled time #270 + +### 🧰 Maintenance + +- Update dependencies to latest versions +- Migrate to use `uv` instead of `poetry` for package management + +## v4.0.0 🌈 + +See breaking changes in 4.0.0 beta versions. + +### 🐛 Bug Fixes + +- Fix issue with non-primitive parameters for @job #249 + +## v4.0.0b3 🌈 + +Refactor the code to make it more organized and easier to maintain. This includes: + +- All types are under `types` instead of separated to `broker_types` and `settings_types`. +- Added `__all__` to `models`, and other packages. + +## v4.0.0b2 🌈 + +### 🐛 Bug Fixes + +- Fix bug when `SCHEDULER_CONFIG` is `SchedulerConfiguration` + +## v4.0.0b1 🌈 + +### Breaking Changes + +This version is a full revamp of the package. The main changes are related to removing the RQ dependency. +Worker/Queue/Job are all implemented in the package itself. This change allows for more flexibility and control over +the tasks. + +Management commands: + +- `rqstats` => `scheduler_stats` +- `rqworker` => `scheduler_worker` + +Settings: + +- `SCHEDULER_CONFIG` is now a `SchedulerConfiguration` object to help IDE guide settings. +- `SCHEDULER_QUEUES` is now a list of `QueueConfiguration` objects to help IDE guide settings. +- Configuring queue to use `SSL`/`SSL_CERT_REQS`/`SOCKET_TIMEOUT` is now done using `CONNECTION_KWARGS` in + `QueueConfiguration` + ```python + SCHEDULER_QUEUES: Dict[str, QueueConfiguration] = { + 'default': QueueConfiguration( + HOST='localhost', + PORT=6379, + USERNAME='some-user', + PASSWORD='some-password', + CONNECTION_KWARGS={ # Eventual additional Broker connection arguments + 'ssl_cert_reqs': 'required', + 'ssl':True, + }, + ), + # ... + } + ``` +- For how to configure in `settings.py`, please see the [settings documentation](./configuration.md). + +## v3.0.2 🌈 + +### 🐛 Bug Fixes + +- Fix issue updating wrong field #233 + +## v3.0.1 🌈 + +### 🐛 Bug Fixes + +- Lookup error issue #228. + +### 🧰 Maintenance + +- Migrated to use ruff instead of flake8/black + +## v3.0.0 🌈 + +### Breaking Changes + +- Renamed `REDIS_CLIENT_KWARGS` configuration to `CLIENT_KWARGS`. + +### 🚀 Features + +- Created a new `Task` model representing all kind of scheduled tasks. 
+ - In future versions, `CronTask`, `ScheduledTask` and `RepeatableTask` will be removed. + - `Task` model has a `task_type` field to differentiate between the types of tasks. + - Old tasks in the database will be migrated to the new `Task` model automatically. + +### 🧰 Maintenance + +- Update dependencies to latest versions. + +## v2.1.1 🌈 + +### 🐛 Bug Fixes + +- Support for valkey sentinel configuration @amirreza8002 (#191) +- Fix issue with task being scheduled despite being already scheduled #202 + +## v2.1.0 🌈 + +### 🚀 Features + +- Support for custom job-class for every worker, using `--job-class` option in `rqworker` command. @gabriels1234 (#160) +- Support for integrating with sentry, using `--sentry-dsn`, `--sentry-debug`, and `--sentry-ca-certs` options in + `rqworker` command. +- Support for using ValKey as broker instead of redis. + +### 🧰 Maintenance + +- Refactor settings module. + +## v2.0.0 🌈 + +### Breaking Changes + +- Remove support for django 3.* and 4.*. Only support django 5.0 and above. + ## v1.3.4 🌈 ### 🧰 Maintenance diff --git a/docs/commands.md b/docs/commands.md index cb55187..7ea9e19 100644 --- a/docs/commands.md +++ b/docs/commands.md @@ -1,6 +1,6 @@ # Management commands -## rqworker +## `scheduler_worker` - Create a worker Create a new worker with a scheduler for specific queues by order of priority. If no queues are specified, will run on default queue only. @@ -8,11 +8,51 @@ If no queues are specified, will run on default queue only. All queues must have the same redis settings on `SCHEDULER_QUEUES`. ```shell -python manage.py rqworker queue1 queue2 queue3 - +usage: manage.py scheduler_worker [-h] [--pid PIDFILE] [--name NAME] [--worker-ttl WORKER_TTL] + [--fork-job-execution FORK_JOB_EXECUTION] [--sentry-dsn SENTRY_DSN] [--sentry-debug] + [--sentry-ca-certs SENTRY_CA_CERTS] [--burst] [--max-jobs MAX_JOBS] + [--max-idle-time MAX_IDLE_TIME] [--without-scheduler] [--version] [-v {0,1,2,3}] + [--settings SETTINGS] [--pythonpath PYTHONPATH] [--traceback] [--no-color] [--force-color] + [--skip-checks] + [queues ...] + +positional arguments: + queues The queues to work on, separated by space, all queues should be using the same redis + +options: + -h, --help show this help message and exit + --pid PIDFILE file to write the worker`s pid into + --name NAME Name of the worker + --worker-ttl WORKER_TTL + Default worker timeout to be used + --fork-job-execution FORK_JOB_EXECUTION + Fork job execution to another process + --sentry-dsn SENTRY_DSN + Sentry DSN to use + --sentry-debug Enable Sentry debug mode + --sentry-ca-certs SENTRY_CA_CERTS + Path to CA certs file + --burst Run worker in burst mode + --max-jobs MAX_JOBS Maximum number of jobs to execute before terminating worker + --max-idle-time MAX_IDLE_TIME + Maximum number of seconds to wait for new job before terminating worker + --without-scheduler Run worker without scheduler, default to with scheduler + --version Show program's version number and exit. + -v, --verbosity {0,1,2,3} + Verbosity level; 0=minimal output, 1=normal output, 2=verbose output, 3=very verbose output + --settings SETTINGS The Python path to a settings module, e.g. "myproject.settings.main". If this isn't provided, the + DJANGO_SETTINGS_MODULE environment variable will be used. + --pythonpath PYTHONPATH + A directory to add to the Python path, e.g. "/home/djangoprojects/myproject". + --traceback Display a full stack trace on CommandError exceptions. + --no-color Don't colorize the command output. 
+ --force-color Force colorization of the command output. + --skip-checks Skip system checks. ``` -## export + + +## `export` - Export scheduled tasks Export all scheduled tasks from django db to json/yaml format. @@ -25,7 +65,7 @@ Result should be (for json): ```json [ { - "model": "ScheduledJob", + "model": "CronTaskType", "name": "Scheduled Task 1", "callable": "scheduler.tests.test_job", "callable_args": [ @@ -46,7 +86,7 @@ Result should be (for json): ] ``` -## import +## `import` - Import scheduled tasks A json/yaml that was exported using the `export` command can be imported to django. @@ -59,7 +99,7 @@ can be imported to django. python manage.py import -f {yaml,json} --filename {SOURCE-FILE} ``` -## run_job +## `run_job` - Run a job immediately Run a method in a queue immediately. @@ -67,10 +107,54 @@ Run a method in a queue immediately. python manage.py run_job {callable} {callable args ...} ``` -## delete failed jobs +## `delete_failed_jobs` - delete failed jobs Run this to empty failed jobs registry from a queue. ```shell python manage.py delete_failed_jobs ``` + +## `scheduler_stats` - Show scheduler stats + +Prints scheduler stats as a table, json, or yaml, example: + +```shell +$ python manage.py scheduler_stats + +Django-Scheduler CLI Dashboard + +-------------------------------------------------------------------------------- +| Name | Queued | Active | Finished | Canceled | Workers | +-------------------------------------------------------------------------------- +| default | 0 | 0 | 0 | 0 | 0 | +| low | 0 | 0 | 0 | 0 | 0 | +| high | 0 | 0 | 0 | 0 | 0 | +| medium | 0 | 0 | 0 | 0 | 0 | +| another | 0 | 0 | 0 | 0 | 0 | +-------------------------------------------------------------------------------- +``` + +```shell +usage: manage.py scheduler_stats [-h] [-j] [-y] [-i INTERVAL] [--version] [-v {0,1,2,3}] [--settings SETTINGS] [--pythonpath PYTHONPATH] [--traceback] [--no-color] [--force-color] [--skip-checks] + +Print statistics + +options: + -h, --help show this help message and exit + -j, --json Output statistics as JSON + -y, --yaml Output statistics as YAML + -i INTERVAL, --interval INTERVAL + Poll statistics every N seconds + --version Show program's version number and exit. + -v {0,1,2,3}, --verbosity {0,1,2,3} + Verbosity level; 0=minimal output, 1=normal output, 2=verbose output, 3=very verbose output + --settings SETTINGS The Python path to a settings module, e.g. "myproject.settings.main". If this isn't provided, the DJANGO_SETTINGS_MODULE environment variable will be used. + --pythonpath PYTHONPATH + A directory to add to the Python path, e.g. "/home/djangoprojects/myproject". + --traceback Raise on CommandError exceptions. + --no-color Don't colorize the command output. + --force-color Force colorization of the command output. + --skip-checks Skip system checks. 
+
+```
\ No newline at end of file
diff --git a/docs/configuration.md b/docs/configuration.md
index fb83b6b..00ae6e3 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -5,34 +5,39 @@
 All default settings for scheduler can be in one dictionary in `settings.py`:
 
 ```python
-SCHEDULER_CONFIG = {
-    'EXECUTIONS_IN_PAGE': 20,
-    'DEFAULT_RESULT_TTL': 500,
-    'DEFAULT_TIMEOUT': 300,  # 5 minutes
-    'SCHEDULER_INTERVAL': 10,  # 10 seconds
-}
-SCHEDULER_QUEUES = {
-    'default': {
-        'HOST': 'localhost',
-        'PORT': 6379,
-        'DB': 0,
-        'USERNAME': 'some-user',
-        'PASSWORD': 'some-password',
-        'DEFAULT_TIMEOUT': 360,
-        'REDIS_CLIENT_KWARGS': {  # Eventual additional Redis connection arguments
-            'ssl_cert_reqs': None,
+import os
+from typing import Dict
+from scheduler.types import SchedulerConfiguration, Broker, QueueConfiguration
+
+SCHEDULER_CONFIG = SchedulerConfiguration(
+    EXECUTIONS_IN_PAGE=20,
+    SCHEDULER_INTERVAL=10,
+    BROKER=Broker.REDIS,
+    CALLBACK_TIMEOUT=60,  # Callback timeout in seconds (success/failure/stopped)
+    # Default values, can be overridden per task/job
+    DEFAULT_SUCCESS_TTL=10 * 60,  # Time To Live (TTL) in seconds to keep successful job results
+    DEFAULT_FAILURE_TTL=365 * 24 * 60 * 60,  # Time To Live (TTL) in seconds to keep job failure information
+    DEFAULT_JOB_TTL=10 * 60,  # Time To Live (TTL) in seconds to keep job information
+    DEFAULT_JOB_TIMEOUT=5 * 60,  # Timeout (seconds) for a job
+    # General configuration values
+    DEFAULT_WORKER_TTL=10 * 60,  # Time To Live (TTL) in seconds to keep worker information after last heartbeat
+    DEFAULT_MAINTENANCE_TASK_INTERVAL=10 * 60,  # The interval to run maintenance tasks in seconds. 10 minutes.
+    DEFAULT_JOB_MONITORING_INTERVAL=30,  # The interval to monitor jobs in seconds.
+    SCHEDULER_FALLBACK_PERIOD_SECS=120,  # Period (secs) to wait before locks need to be reacquired
+)
+SCHEDULER_QUEUES: Dict[str, QueueConfiguration] = {
+    'default': QueueConfiguration(
+        HOST='localhost',
+        PORT=6379,
+        USERNAME='some-user',
+        PASSWORD='some-password',
+        CONNECTION_KWARGS={  # Eventual additional Broker connection arguments
+            'ssl_cert_reqs': 'required',
+            'ssl': True,
         },
-        'TOKEN_VALIDATION_METHOD': None,  # Method to validate auth-header
-    },
-    'high': {
-        'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0'),  # If you're on Heroku
-        'DEFAULT_TIMEOUT': 500,
-    },
-    'low': {
-        'HOST': 'localhost',
-        'PORT': 6379,
-        'DB': 0,
-    }
+    ),
+    'high': QueueConfiguration(URL=os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0')),
+    'low': QueueConfiguration(HOST='localhost', PORT=6379, DB=0, ASYNC=False),
 }
 ```
@@ -42,23 +47,58 @@
 Number of job executions to show in a page in a ScheduledJob admin view.
 
 Default: `20`.
 
-### SCHEDULER_CONFIG: `DEFAULT_RESULT_TTL`
+### SCHEDULER_CONFIG: `SCHEDULER_INTERVAL`
+
+Default scheduler interval; the scheduler is a subprocess of a worker and
+checks which job executions are pending.
+
+Default: `10` (10 seconds).
+
+### SCHEDULER_CONFIG: `BROKER`
+
+The broker to use for queues. Default: `Broker.REDIS`.
 
-Default time to live for job execution result.
+### SCHEDULER_CONFIG: `CALLBACK_TIMEOUT`
+
+Timeout (in seconds) for job callbacks (success/failure/stopped). Default: `60`.
+
+### SCHEDULER_CONFIG: `DEFAULT_SUCCESS_TTL`
+
+Default time to live for a job execution result when it is successful.
 
 Default: `600` (10 minutes).
 
-### SCHEDULER_CONFIG: `DEFAULT_TIMEOUT`
+### SCHEDULER_CONFIG: `DEFAULT_FAILURE_TTL`
+
+Default time to live for a job execution result when it failed.
+
+Default: `600` (10 minutes).
+
+### SCHEDULER_CONFIG: `DEFAULT_JOB_TTL`
+
+Default time to live for job information.
 
-Default timeout for job when it is not mentioned in queue.
 Default: `300` (5 minutes).
 
-### SCHEDULER_CONFIG: `SCHEDULER_INTERVAL`
+### SCHEDULER_CONFIG: `DEFAULT_JOB_TIMEOUT`
 
-Default scheduler interval, a scheduler is a subprocess of a worker and
-will check which job executions are pending.
+Timeout (in seconds) for a job.
 
-Default: `10` (10 seconds).
+Default: `300` (5 minutes).
+
+### SCHEDULER_CONFIG: `DEFAULT_WORKER_TTL`
+
+Time To Live (TTL) in seconds to keep worker information after the last heartbeat.
+Default: `600` (10 minutes).
+
+### SCHEDULER_CONFIG: `DEFAULT_MAINTENANCE_TASK_INTERVAL`
+
+The interval to run worker maintenance tasks, in seconds.
+Default: `600` (10 minutes).
+
+### SCHEDULER_CONFIG: `DEFAULT_JOB_MONITORING_INTERVAL`
+
+The interval to monitor jobs, in seconds.
+
+Default: `30` (30 seconds).
+
+### SCHEDULER_CONFIG: `SCHEDULER_FALLBACK_PERIOD_SECS`
+
+Period (in seconds) to wait before scheduler locks need to be reacquired.
 
 ### SCHEDULER_CONFIG: `TOKEN_VALIDATION_METHOD`
diff --git a/docs/drt-model.md b/docs/drt-model.md
index d11c328..545658e 100644
--- a/docs/drt-model.md
+++ b/docs/drt-model.md
@@ -1,6 +1,6 @@
 # Worker related flows
 
-Running `python manage.py startworker --name 'X' --queues high default low`
+Running `python manage.py scheduler_worker --name 'X' --queues high default low`
 
 ## Register new worker for queues
 ```mermaid
@@ -48,8 +48,8 @@ sequenceDiagram
     note over worker,job: Find next job
     loop over queueKeys until job to run is found or all queues are empty
-        worker ->>+ queue: get next job id and remove it or None (zrange+zpop)
-        queue -->>- worker: job id / nothing
+        worker ->>+ queue: get next job name and remove it or None (zrange+zpop)
+        queue -->>- worker: job name / nothing
     end
     note over worker,job: Execute job or sleep
diff --git a/docs/index.md b/docs/index.md
index e9672d6..c4d74c6 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,83 +1,153 @@
 # Django tasks Scheduler
 
-[![Django CI][1]][2]
-![badge][3]
-[![badge][4]][5]
-
+[![Django CI][badge]][2]
+![badge][coverage]
+[![badge](https://img.shields.io/pypi/dm/django-tasks-scheduler)](https://pypi.org/project/django-tasks-scheduler/)
 ---
 
 A database backed asynchronous tasks scheduler for django.
 This allows remembering scheduled tasks, their parameters, etc.
 
-## Terminology
+!!! Important
+    Version 3.0.0 introduced a major design change. Instead of three separate models, there is one new `Task` model.
+    The goal is to simplify.
+    Make sure to follow [the migration guide](migrate_to_v3.md).
+
+## Architecture and terminology
+
+```mermaid
+flowchart TD
+    subgraph Django Process
+        task[Scheduled Task<br>django-model]
+    end
+    db[(Relational<br>Database)]
+    subgraph Worker
+        worker[Worker<br>Queue listener<br>Job Execution]
+        commands[Worker<br>commands<br>Listener]
+        scheduler[Scheduler]
+        scheduler ~~~ commands ~~~ worker
+    end
+
+    subgraph Broker
+        job[Job]
+        commandsChannel[Workers<br>Commands<br>Channel]
+        subgraph Queue
+            direction TB
+            scheduled[Scheduled Jobs]
+            queued[Queued jobs]
+            active[Active jobs]
+            finished[Finished jobs]
+            failed[Failed jobs]
+            canceled[Canceled jobs]
+            scheduled ~~~ queued ~~~ active
+            active ~~~ finished
+            active ~~~ failed
+            queued ~~~ canceled
+        end
+        job ~~~ commandsChannel
+    end
+
+    task --> db
+    task -->|Create instance of executing a task| job
+    job -->|Queuing a job to be executed| scheduled
+    scheduled -.->|Queue jobs| scheduler -.-> queued
+    queued -.->|Worker picking up job to execute| worker
+    worker -.->|Moves it to active jobs| active
+    active -.->|Once terminated successfully| finished
+    active -.->|Once terminated unsuccessfully or stopped| failed
+    queued -...->|In case job is stopped before starting| canceled
+```
+
+### Scheduled Task
+
+django-tasks-scheduler uses a single `Task` django-model with different task types; the task types
+are:
+
+- `ONCE` - Run the task once at a scheduled time.
+- `REPEATABLE` - Run the task multiple times (a limited number of times or infinitely) based on a time interval.
+- `CRON` - Run a task indefinitely based on a cron string schedule.
+
+This enables one admin view for all scheduled tasks, and keeping all tasks in one database table
+reduces the overall number of queries.
+A `Task` instance contains all relevant information about a task, enabling users to schedule it using django-admin
+and track its status.
+
+### Job
+
+A job is a record in the broker, containing all information required to execute a piece of code, usually representing a
+task, but not necessarily.
+
+It contains the following information:
+
+- Name of the job (unique, passed between the different queues).
+- Link to the task.
+- Reference to the method to be executed.
+- Callbacks (in case of success/failure/stopped).
+- Timeout details (for the method to be executed, and for callbacks).
+- Successful/Failed result time-to-live.
 
 ### Queue
 
 A queue of messages between processes (main django-app process and worker usually).
-This is implemented in `rq` package.
+It is a collection of different registries for different purposes:
 
-* A queue contains multiple registries for scheduled tasks, finished jobs, failed jobs, etc.
+- Scheduled jobs: Jobs that are scheduled to run.
+- Queued jobs: Jobs waiting to be picked up by a worker to run.
+- Active jobs: Jobs that are currently being executed.
+- Finished jobs: Jobs that have been successfully executed.
+- Failed jobs: Jobs that have failed to execute or have been stopped.
+- Canceled jobs: Jobs that have been stopped/canceled before they were executed.
 
 ### Worker
 
-A process listening to one or more queues **for jobs to be executed**, and executing jobs queued to be
-executed.
+A process listening to one or more queues **for jobs to be executed**, and executing jobs queued to be executed.
 
-### Scheduler
+- A worker has a thread listening to a channel where it can get specific commands.
+- A worker can have, by default, a subprocess for the scheduler.
 
-A process listening to one or more queues for **jobs to be scheduled for execution**, and schedule them
-to be executed by a worker.
+### Scheduler (Worker sub-process)
 
-This is a subprocess of worker.
+A process listening to one or more queues for **jobs to be scheduled for execution**, and scheduling them to be
+executed by a worker (i.e., moving them from the scheduled-jobs registry to the queued-jobs registry).
 
-### Queued Job Execution
+This is a sub-process of the worker. 
-Once a worker listening to the queue becomes available, the job will be executed +### Job -### Scheduled Task Execution +Once a worker listening to the queue becomes available, the job will be executed. A scheduler checking the queue periodically will check whether the time the job should be executed has come, and if so, -it will queue it. +it will queue it, i.e., add it to the queued-jobs registry. * A job is considered scheduled if it is queued to be executed, or scheduled to be executed. * If there is no scheduler, the job will not be queued to run. -### Scheduled Task - -django models storing information about jobs. So it is possible to schedule using -django-admin and track their status. - -There are three types of ScheduledTask. - -* `Scheduled Task` - Run a job once, on a specific time (can be immediate). -* `Repeatable Task` - Run a job multiple times (limited number of times or infinite times) based on an interval -* `Cron Task` - Run a job multiple times (limited number of times or infinite times) based on a cron string - -Scheduled jobs are scheduled when the django application starts, and after a scheduled task is executed. - ## Scheduler sequence diagram ```mermaid sequenceDiagram autonumber + box DB + participant db as Database + end box Worker participant scheduler as Scheduler Process end - box DB - participant db as Database - + box Broker + participant job as Job end - box Redis queue - participant queue as Queue - participant schedule as Queue scheduled tasks + box Broker Queue + participant schedule as Scheduled jobs + participant queue as Queued jobs end loop Scheduler process - loop forever - note over scheduler, schedule: Database interaction + note over db, schedule: Database interaction scheduler ->> db: Check for enabled tasks that should be scheduled critical There are tasks to be scheduled - scheduler ->> schedule: Create a job for task that should be scheduled + scheduler ->> job: Create job for task that should be scheduled + scheduler ->> schedule: Add the job to the scheduled-jobs registry end - note over scheduler, schedule: Redis queues interaction + note over scheduler, queue: Broker queues interaction scheduler ->> schedule: check whether there are scheduled tasks that should be executed critical there are jobs that are scheduled to be executed scheduler ->> schedule: remove jobs to be scheduled @@ -95,23 +165,35 @@ sequenceDiagram box Worker participant worker as Worker Process end - box Redis queue - participant queue as Queue - participant finished as Queue finished jobs - participant failed as Queue failed jobs + box Queue + participant queue as Queued jobs + participant finished as Finished jobs + participant failed as Failed jobs + end + box Broker + participant job as Job + participant result as Result end loop Worker process - loop forever worker ->>+ queue: get the first job to be executed queue -->>- worker: A job to be executed or nothing critical There is a job to be executed - worker ->> queue: Remove job from queue + note over worker, result: There is a job to be executed + worker ->> queue: Remove job from queued registry worker ->> worker: Execute job critical Job ended successfully - worker ->> finished: Write job result + worker ->> worker: Execute successful callbacks + worker ->> finished: Move job to finished-jobs registry + worker ->> job: Update job details + worker ->> result: Write result option Job ended unsuccessfully - worker ->> failed: Write job result + worker ->> worker: Execute failure callbacks + worker ->> failed: Move 
job to failed-jobs registry
+            worker ->> job: Update job details
+            worker ->> result: Write result
         end
     option No job to be executed
+        note over worker, result: No job to be executed
         worker ->> worker: sleep
     end
 end
@@ -121,16 +203,27 @@ sequenceDiagram
 
 ## Reporting issues or Features requests
 
-Please report issues via [GitHub Issues](https://github.com/dsoftwareinc/django-tasks-scheduler/issues) .
+Please report issues via [GitHub Issues][issues].
 
 ---
 
 ## Acknowledgements
 
-A lot of django-admin views and their tests were adopted from [django-rq](https://github.com/rq/django-rq).
+- Some django-admin views and their tests were adopted from [django-rq][django-rq].
+- Worker and Queue implementation was inspired by [rq][rq].
+
+[badge]:https://github.com/django-commons/django-tasks-scheduler/actions/workflows/test.yml/badge.svg
+
+[2]:https://github.com/django-commons/django-tasks-scheduler/actions/workflows/test.yml
+
+[coverage]:https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/cunla/b756396efb895f0e34558c980f1ca0c7/raw/django-tasks-scheduler-4.json
+
+[pypi-downloads]:https://img.shields.io/pypi/dm/django-tasks-scheduler
+
+[pypi]:https://pypi.org/project/django-tasks-scheduler/
+
+[issues]:https://github.com/django-commons/django-tasks-scheduler/issues
+
+[django-rq]:https://github.com/rq/django-rq
 
-[1]:https://github.com/dsoftwareinc/django-tasks-scheduler/actions/workflows/test.yml/badge.svg
-[2]:https://github.com/dsoftwareinc/django-tasks-scheduler/actions/workflows/test.yml
-[3]:https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/cunla/b756396efb895f0e34558c980f1ca0c7/raw/django-tasks-scheduler-4.json
-[4]:https://img.shields.io/pypi/dm/django-tasks-scheduler
-[5]:https://pypi.org/project/django-tasks-scheduler/
+[rq]:https://github.com/rq/rq
\ No newline at end of file
diff --git a/docs/installation.md b/docs/installation.md
index 14b4269..e1edcab 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -15,60 +15,64 @@
    ```
 
 3. Configure your queues.
-   Add at least one Redis Queue to your `settings.py`:
+   Add at least one Redis Queue to your `settings.py`.
+   Note that using `QueueConfiguration` is optional; you can use a plain dictionary, but `QueueConfiguration`
+   helps prevent configuration errors. 
   ```python
-   import os
-   SCHEDULER_QUEUES = {
-       'default': {
-           'HOST': 'localhost',
-           'PORT': 6379,
-           'DB': 0,
-           'USERNAME': 'some-user',
-           'PASSWORD': 'some-password',
-           'DEFAULT_TIMEOUT': 360,
-           'REDIS_CLIENT_KWARGS': {  # Eventual additional Redis connection arguments
-               'ssl_cert_reqs': None,
-           },
-       },
-       'with-sentinel': {
-           'SENTINELS': [('localhost', 26736), ('localhost', 26737)],
-           'MASTER_NAME': 'redismaster',
-           'DB': 0,
-           # Redis username/password
-           'USERNAME': 'redis-user',
-           'PASSWORD': 'secret',
-           'SOCKET_TIMEOUT': 0.3,
-           'CONNECTION_KWARGS': {  # Eventual additional Redis connection arguments
-               'ssl': True
-           },
-           'SENTINEL_KWARGS': {  # Eventual Sentinel connection arguments
-               # If Sentinel also has auth, username/password can be passed here
-               'username': 'sentinel-user',
-               'password': 'secret',
-           },
-       },
-       'high': {
-           'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0'),  # If you're on Heroku
-           'DEFAULT_TIMEOUT': 500,
-       },
-       'low': {
-           'HOST': 'localhost',
-           'PORT': 6379,
-           'DB': 0,
-       }
-   }
+   import os
+   from typing import Dict
+   from scheduler.types import QueueConfiguration
+
+   SCHEDULER_QUEUES: Dict[str, QueueConfiguration] = {
+       'default': QueueConfiguration(
+           HOST='localhost',
+           PORT=6379,
+           USERNAME='some-user',
+           PASSWORD='some-password',
+           CONNECTION_KWARGS={  # Eventual additional Broker connection arguments
+               'ssl_cert_reqs': 'required',
+               'ssl': True,
+           },
+       ),
+       'with-sentinel': QueueConfiguration(
+           SENTINELS=[('localhost', 26736), ('localhost', 26737)],
+           MASTER_NAME='redismaster',
+           DB=0,
+           USERNAME='redis-user',
+           PASSWORD='secret',
+           CONNECTION_KWARGS={  # Eventual additional Redis connection arguments
+               'ssl': True,
+           },
+           SENTINEL_KWARGS={  # Eventual Sentinel connection arguments
+               # If Sentinel also has auth, username/password can be passed here
+               'username': 'sentinel-user',
+               'password': 'secret',
+           },
+       ),
+       'high': QueueConfiguration(URL=os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0')),
+       'low': QueueConfiguration(HOST='localhost', PORT=6379, DB=0, ASYNC=False),
+   }
   ```
-
+
 4. Optional: Configure default values for queuing jobs from code:
   ```python
-   SCHEDULER_CONFIG = {
-       'EXECUTIONS_IN_PAGE': 20,
-       'DEFAULT_RESULT_TTL': 500,
-       'DEFAULT_TIMEOUT': 300,  # 5 minutes
-       'SCHEDULER_INTERVAL': 10,  # 10 seconds
-   }
+   from scheduler.types import SchedulerConfiguration, Broker
+
+   SCHEDULER_CONFIG = SchedulerConfiguration(
+       EXECUTIONS_IN_PAGE=20,
+       SCHEDULER_INTERVAL=10,
+       BROKER=Broker.REDIS,
+       CALLBACK_TIMEOUT=60,  # Callback timeout in seconds (success/failure/stopped)
+       # Default values, can be overridden per task/job
+       DEFAULT_SUCCESS_TTL=10 * 60,  # Time To Live (TTL) in seconds to keep successful job results
+       DEFAULT_FAILURE_TTL=365 * 24 * 60 * 60,  # Time To Live (TTL) in seconds to keep job failure information
+       DEFAULT_JOB_TTL=10 * 60,  # Time To Live (TTL) in seconds to keep job information
+       DEFAULT_JOB_TIMEOUT=5 * 60,  # Timeout (seconds) for a job
+       # General configuration values
+       DEFAULT_WORKER_TTL=10 * 60,  # Time To Live (TTL) in seconds to keep worker information after last heartbeat
+       DEFAULT_MAINTENANCE_TASK_INTERVAL=10 * 60,  # The interval to run maintenance tasks in seconds. 10 minutes.
+       DEFAULT_JOB_MONITORING_INTERVAL=30,  # The interval to monitor jobs in seconds.
+       SCHEDULER_FALLBACK_PERIOD_SECS=120,  # Period (secs) to wait before locks need to be reacquired
+   )
   ```
-
+
 5. 
Add `scheduler.urls` to your django application `urls.py`: ```python from django.urls import path, include diff --git a/docs/media/add-scheduled-job.jpg b/docs/media/add-scheduled-job.jpg deleted file mode 100644 index 3783e7a..0000000 Binary files a/docs/media/add-scheduled-job.jpg and /dev/null differ diff --git a/docs/media/add-scheduled-task.jpg b/docs/media/add-scheduled-task.jpg new file mode 100644 index 0000000..abb355f Binary files /dev/null and b/docs/media/add-scheduled-task.jpg differ diff --git a/docs/media/admin-job-details.jpg b/docs/media/admin-job-details.jpg new file mode 100644 index 0000000..9c5b617 Binary files /dev/null and b/docs/media/admin-job-details.jpg differ diff --git a/docs/media/admin-queue-registry.jpg b/docs/media/admin-queue-registry.jpg new file mode 100644 index 0000000..32c1981 Binary files /dev/null and b/docs/media/admin-queue-registry.jpg differ diff --git a/docs/media/admin-queues-list.jpg b/docs/media/admin-queues-list.jpg new file mode 100644 index 0000000..0bb5791 Binary files /dev/null and b/docs/media/admin-queues-list.jpg differ diff --git a/docs/media/admin-task-details.jpg b/docs/media/admin-task-details.jpg new file mode 100644 index 0000000..4bf88c1 Binary files /dev/null and b/docs/media/admin-task-details.jpg differ diff --git a/docs/media/admin-tasks-list.jpg b/docs/media/admin-tasks-list.jpg new file mode 100644 index 0000000..52feeed Binary files /dev/null and b/docs/media/admin-tasks-list.jpg differ diff --git a/docs/media/admin-worker-details.jpg b/docs/media/admin-worker-details.jpg new file mode 100644 index 0000000..d1c9529 Binary files /dev/null and b/docs/media/admin-worker-details.jpg differ diff --git a/docs/media/admin-workers-list.jpg b/docs/media/admin-workers-list.jpg new file mode 100644 index 0000000..5b1ef02 Binary files /dev/null and b/docs/media/admin-workers-list.jpg differ diff --git a/docs/migrate_to_v3.md b/docs/migrate_to_v3.md new file mode 100644 index 0000000..ed4cf3b --- /dev/null +++ b/docs/migrate_to_v3.md @@ -0,0 +1,36 @@ +Migration from v2 to v3 +======================= + +Version 3.0.0 introduced a major design change. Instead of three separate models, there is one new `Task` model. The +goal is to have one centralized admin view for all your scheduled tasks, regardless of the scheduling type. + +You need to migrate the scheduled tasks using the old models (`ScheduledTask`, `RepeatableTask`, `CronTask`) to the new +model. It can be done using the export/import commands provided. + +After upgrading to django-tasks-scheduler v3.0.0, you will notice you are not able to create new scheduled tasks in the +old models, that is intentional. In the next version of django-tasks-scheduler (v3.1), the old models will be deleted, +so make sure you migrate your old models. + +!!! Note + While we tested different scenarios heavily and left the code for old tasks, we could not account for all different + use cases, therefore, please [open an issue][issues] if you encounter any. + +There are two ways to migrate your existing scheduled tasks: + +# Using the admin views of the old models + +If you go to the admin view of the old models, you will notice there is a new action in the actions drop down menu for +migrating the selected tasks. Use it, and you will also have a link to the new task to compare the migration result. + +Note once you migrate using this method, the old task will be disabled automatically. 
+
+## Export/Import management commands
+
+Run in your project directory:
+
+```shell
+python manage.py export > scheduled_tasks.json
+python manage.py import --filename scheduled_tasks.json
+```
+
+[issues]: https://github.com/django-commons/django-tasks-scheduler/issues
\ No newline at end of file
diff --git a/docs/requirements.txt b/docs/requirements.txt
index 69da99e..948c9be 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -1,2 +1,2 @@
-mkdocs==1.6.0
-mkdocs-material==9.5.27
+mkdocs==1.6.1
+mkdocs-material==9.6.14
diff --git a/docs/usage.md b/docs/usage.md
index 4957961..e310a89 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -6,7 +6,7 @@
from scheduler import job

-@job
+@job()
def long_running_func():
    pass
@@ -39,30 +39,51 @@ def long_running_func():
long_running_func.delay()  # Enqueue function with a timeout of 3600 seconds.
```

-You can set in `settings.py` a default value for `DEFAULT_RESULT_TTL` and `DEFAULT_TIMEOUT`.
+You can set default values for `DEFAULT_JOB_TTL` and `DEFAULT_JOB_TIMEOUT` in `settings.py`.

```python
# settings.py
-RQ = {
-    'DEFAULT_RESULT_TTL': 360,
-    'DEFAULT_TIMEOUT': 60,
-}
+from scheduler.types import SchedulerConfiguration
+
+SCHEDULER_CONFIG = SchedulerConfiguration(
+    DEFAULT_SUCCESS_TTL=10 * 60,  # Time To Live (TTL) in seconds to keep successful job results
+    DEFAULT_FAILURE_TTL=365 * 24 * 60 * 60,  # TTL in seconds to keep job failure information
+    DEFAULT_JOB_TTL=10 * 60,  # TTL in seconds to keep job information
+    DEFAULT_JOB_TIMEOUT=5 * 60,  # Timeout (in seconds) for a job
+)
```

-## Scheduling a job Through django-admin
+## Managing tasks through the Django Admin

-* Sign in to the Django Admin site (e.g., http://localhost:8000/admin/) and locate the
-  **Tasks Scheduler** section.
-* Click on the **Add** link for the type of job you want to add (`Scheduled Task` - run once, `Repeatable Task` - run
-  multiple times, `Cron Task` - Run based on cron schedule).
-* Enter a unique name for the job in the **Name** field.
+### Viewing the list of scheduled tasks
+
+![](media/admin-tasks-list.jpg)
+
+### Viewing details of a scheduled task
+
+It is possible to view the list of executions of a task, as well as the details of a specific execution.
+![](media/admin-task-details.jpg)
+
+### Scheduling a task through django-admin
+
+* Sign in to the Django Admin site (e.g., http://localhost:8000/admin/) and locate the `Tasks Scheduler` section.
+* Click on **Add** next to `Tasks`.
+* Enter a unique name for the task in the **Name** field.
+* Select the task type; the form will change to show the scheduling details relevant to that type.
+    * For a `Repeatable task`:
+        * Enter an Interval, and choose the Interval unit. This determines how long to wait before the function is
+          called again.
+        * In the Repeat field, enter the number of times the job is to be run. Leaving the field empty means the job
+          will be scheduled to run forever.
+    * For a `Cron task`:
+        * In the Repeat field, enter the number of times the job is to be run. Leaving the field empty means the job
+          will be scheduled to run forever.
+        * In the cron string field, enter a cron string describing how often the job should run.
* In the **Callable** field, enter a Python dot notation path to the method that defines the job. For the example
  above, that would be `myapp.jobs.count`
* Choose your **Queue**. The queues listed are defined in your app `settings.py` under `SCHEDULER_QUEUES`.
* Enter the time in UTC the job is to be executed in the **Scheduled time** field.
-![](media/add-scheduled-job.jpg)
+![](media/add-scheduled-task.jpg)

#### Optional fields:
@@ -76,56 +97,30 @@ RQ = {

Once you are done, click **Save** and your job will be persisted to the django database.

-### Support for arguments for jobs
+#### Support for arguments for tasks

-django-tasks-scheduler supports scheduling jobs calling methods with arguments, as well as arguments that should be
-calculated in runtime.
+django-tasks-scheduler supports scheduling tasks that call methods with arguments, as well as arguments that should be
+calculated at runtime.

![](media/add-args.jpg)

-### Scheduled Task - run once
-
-No additional steps required.
-
-### Repeatable Task - Run a job multiple time based on interval
+### Viewing queue statistics

-Additional fields required:
+![](media/admin-queues-list.jpg)

-* Enter an **Interval**, and choose the **Interval unit**. This will calculate the time before the function is called
-  again.
-* In the **Repeat** field, enter the number of time the job is to be run. Leaving the field empty, means the job will
-  be scheduled to run forever.
+### Viewing queue-specific registry jobs

-### Cron Task - Run a job multiple time based on cron
+![](media/admin-queue-registry.jpg)

-Additional fields required:
+### Viewing the workers list

-* In the **Repeat** field, enter the number of time the job is to be run. Leaving the field empty, means the job will be
-  scheduled to run forever.
-* In the **cron string** field, enter a cron string describing how often the job should run.
+![](media/admin-workers-list.jpg)

-### Scheduled Task - run once
+### Viewing worker details

-No additional steps required.
+![](media/admin-worker-details.jpg)

-### Repeatable Task - Run a job multiple time based on interval
-
-Additional fields required:
-
-* Enter an **Interval**, and choose the **Interval unit**. This will calculate the time before the function is called
-  again.
-* In the **Repeat** field, enter the number of time the job is to be run. Leaving the field empty, means the job will
-  be scheduled to run forever.
-
-### Cron Task - Run a job multiple time based on cron
-
-Additional fields required:
-
-* In the **Repeat** field, enter the number of time the job is to be run. Leaving the field empty, means the job will be
-  scheduled to run forever.
-* In the **cron string** field, enter a cron string describing how often the job should run.
-
-## Enqueue jobs through command line
+## Enqueue jobs using the command line

It is possible to queue a job to be executed from the command line
using django management command:
@@ -134,40 +129,45 @@ using django management command:
python manage.py run_job -q {queue} -t {timeout} -r {result_ttl} {callable} {args}
```

-## Running a worker
+## Running a worker to process queued jobs in the background

Create a worker to execute queued jobs on specific queues using:

```shell
-python manage.py rqworker [queues ...]
+usage: manage.py scheduler_worker [-h] [--pid PIDFILE] [--name NAME] [--worker-ttl WORKER_TTL]
+                                  [--fork-job-execution FORK_JOB_EXECUTION] [--sentry-dsn SENTRY_DSN]
+                                  [--sentry-debug] [--sentry-ca-certs SENTRY_CA_CERTS] [--burst]
+                                  [--max-jobs MAX_JOBS] [--max-idle-time MAX_IDLE_TIME] [--with-scheduler]
+                                  [--version] [-v {0,1,2,3}] [--settings SETTINGS] [--pythonpath PYTHONPATH]
+                                  [--traceback] [--no-color] [--force-color] [--skip-checks]
+                                  [queues ...]
```

+More information about the different parameters can be found in the [commands documentation](commands.md).
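+
+For local development, it can also be handy to start a worker from Python rather than a separate shell. A minimal
+sketch using Django's standard `call_command`; the `scheduler_worker` command name and the `--burst` flag (exit once
+the queues are empty) are taken from the usage text above:
+
+```python
+# Minimal sketch: run a burst-mode worker on the 'default' queue from code,
+# e.g. inside a test or a development script.
+import django
+
+django.setup()  # assumes DJANGO_SETTINGS_MODULE is already set
+
+from django.core.management import call_command
+
+call_command('scheduler_worker', 'default', '--burst')
+```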
+
### Running multiple workers as unix/linux services using systemd

You can have multiple workers running as system services.
-In order to have multiple rqworkers, edit the `/etc/systemd/system/rqworker@.service`
+To have multiple scheduler workers, edit the `/etc/systemd/system/scheduler_worker@.service`
file; make sure it ends with `@.service`. The following is an example:

```ini
-# /etc/systemd/system/rqworker@.service
+# /etc/systemd/system/scheduler_worker@.service
[Unit]
-Description = rqworker daemon
+Description = scheduler_worker daemon
After = network.target

[Service]
WorkingDirectory = {{ path_to_your_project_folder }}
ExecStart = /home/ubuntu/.virtualenv/{{ your_virtualenv }}/bin/python \
    {{ path_to_your_project_folder }}/manage.py \
-    rqworker high default low
+    scheduler_worker high default low
# Optional
-# {{user to run rqworker as}}
+# {{user to run scheduler_worker as}}
User = ubuntu
-# {{group to run rqworker as}}
+# {{group to run scheduler_worker as}}
Group = www-data
# Redirect logs to syslog
StandardOutput = syslog
StandardError = syslog
-SyslogIdentifier = rqworker
+SyslogIdentifier = scheduler_worker
Environment = OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
Environment = LC_ALL=en_US.UTF-8
Environment = LANG=en_US.UTF-8
@@ -180,11 +180,11 @@ After you are done editing the file, reload the settings and start the new worke

```shell
sudo systemctl daemon-reload
-sudo systemctl start rqworker@{1..3}
+sudo systemctl start scheduler_worker@{1..3}
```

You can target a specific worker using its number:

```shell
-sudo systemctl stop rqworker@2
+sudo systemctl stop scheduler_worker@2
```
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index 524e2e5..ece8bed 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -5,7 +5,7 @@ site_description: >-
  Documentation for django-tasks-scheduler django library
# Repository
repo_name: dsoftwareinc/django-tasks-scheduler
-repo_url: https://github.com/dsoftwareinc/django-tasks-scheduler
+repo_url: https://github.com/django-commons/django-tasks-scheduler

# Copyright
copyright: Copyright © 2022 - 2023 Daniel Moran
@@ -30,8 +30,8 @@ markdown_extensions:
  - pymdownx.caret
  - pymdownx.details
  - pymdownx.emoji:
-      emoji_generator: !!python/name:materialx.emoji.to_svg
-      emoji_index: !!python/name:materialx.emoji.twemoji
+      emoji_generator: !!python/name:material.extensions.emoji.to_svg
+      emoji_index: !!python/name:material.extensions.emoji.twemoji
  - pymdownx.highlight:
      anchor_linenums: true
  - pymdownx.inlinehilite
@@ -101,6 +101,7 @@ theme:
nav:
  - Home: index.md
+  - Migrate v2 to v3: migrate_to_v3.md
  - Installation: installation.md
  - Configuration: configuration.md
  - Usage: usage.md
diff --git a/poetry.lock b/poetry.lock
deleted file mode 100644
index 5c5cdb3..0000000
--- a/poetry.lock
+++ /dev/null
@@ -1,1624 +0,0 @@
-# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
- -[[package]] -name = "asgiref" -version = "3.8.1" -description = "ASGI specs, helper code, and adapters" -optional = false -python-versions = ">=3.8" -files = [ - {file = "asgiref-3.8.1-py3-none-any.whl", hash = "sha256:3e1e3ecc849832fe52ccf2cb6686b7a55f82bb1d6aee72a58826471390335e47"}, - {file = "asgiref-3.8.1.tar.gz", hash = "sha256:c343bd80a0bec947a9860adb4c432ffa7db769836c64238fc34bdc3fec84d590"}, -] - -[package.dependencies] -typing-extensions = {version = ">=4", markers = "python_version < \"3.11\""} - -[package.extras] -tests = ["mypy (>=0.800)", "pytest", "pytest-asyncio"] - -[[package]] -name = "async-timeout" -version = "4.0.3" -description = "Timeout context manager for asyncio programs" -optional = false -python-versions = ">=3.7" -files = [ - {file = "async-timeout-4.0.3.tar.gz", hash = "sha256:4640d96be84d82d02ed59ea2b7105a0f7b33abe8703703cd0ab0bf87c427522f"}, - {file = "async_timeout-4.0.3-py3-none-any.whl", hash = "sha256:7405140ff1230c310e51dc27b3145b9092d659ce68ff733fb0cefe3ee42be028"}, -] - -[[package]] -name = "build" -version = "1.2.1" -description = "A simple, correct Python build frontend" -optional = false -python-versions = ">=3.8" -files = [ - {file = "build-1.2.1-py3-none-any.whl", hash = "sha256:75e10f767a433d9a86e50d83f418e83efc18ede923ee5ff7df93b6cb0306c5d4"}, - {file = "build-1.2.1.tar.gz", hash = "sha256:526263f4870c26f26c433545579475377b2b7588b6f1eac76a001e873ae3e19d"}, -] - -[package.dependencies] -colorama = {version = "*", markers = "os_name == \"nt\""} -importlib-metadata = {version = ">=4.6", markers = "python_full_version < \"3.10.2\""} -packaging = ">=19.1" -pyproject_hooks = "*" -tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""} - -[package.extras] -docs = ["furo (>=2023.08.17)", "sphinx (>=7.0,<8.0)", "sphinx-argparse-cli (>=1.5)", "sphinx-autodoc-typehints (>=1.10)", "sphinx-issues (>=3.0.0)"] -test = ["build[uv,virtualenv]", "filelock (>=3)", "pytest (>=6.2.4)", "pytest-cov (>=2.12)", "pytest-mock (>=2)", "pytest-rerunfailures (>=9.1)", "pytest-xdist (>=1.34)", "setuptools (>=42.0.0)", "setuptools (>=56.0.0)", "setuptools (>=56.0.0)", "setuptools (>=67.8.0)", "wheel (>=0.36.0)"] -typing = ["build[uv]", "importlib-metadata (>=5.1)", "mypy (>=1.9.0,<1.10.0)", "tomli", "typing-extensions (>=3.7.4.3)"] -uv = ["uv (>=0.1.18)"] -virtualenv = ["virtualenv (>=20.0.35)"] - -[[package]] -name = "cachecontrol" -version = "0.14.0" -description = "httplib2 caching for requests" -optional = false -python-versions = ">=3.7" -files = [ - {file = "cachecontrol-0.14.0-py3-none-any.whl", hash = "sha256:f5bf3f0620c38db2e5122c0726bdebb0d16869de966ea6a2befe92470b740ea0"}, - {file = "cachecontrol-0.14.0.tar.gz", hash = "sha256:7db1195b41c81f8274a7bbd97c956f44e8348265a1bc7641c37dfebc39f0c938"}, -] - -[package.dependencies] -filelock = {version = ">=3.8.0", optional = true, markers = "extra == \"filecache\""} -msgpack = ">=0.5.2,<2.0.0" -requests = ">=2.16.0" - -[package.extras] -dev = ["CacheControl[filecache,redis]", "black", "build", "cherrypy", "furo", "mypy", "pytest", "pytest-cov", "sphinx", "sphinx-copybutton", "tox", "types-redis", "types-requests"] -filecache = ["filelock (>=3.8.0)"] -redis = ["redis (>=2.10.5)"] - -[[package]] -name = "certifi" -version = "2024.6.2" -description = "Python package for providing Mozilla's CA Bundle." 
-optional = false -python-versions = ">=3.6" -files = [ - {file = "certifi-2024.6.2-py3-none-any.whl", hash = "sha256:ddc6c8ce995e6987e7faf5e3f1b02b302836a0e5d98ece18392cb1a36c72ad56"}, - {file = "certifi-2024.6.2.tar.gz", hash = "sha256:3cd43f1c6fa7dedc5899d69d3ad0398fd018ad1a17fba83ddaf78aa46c747516"}, -] - -[[package]] -name = "cffi" -version = "1.16.0" -description = "Foreign Function Interface for Python calling C code." -optional = false -python-versions = ">=3.8" -files = [ - {file = "cffi-1.16.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6b3d6606d369fc1da4fd8c357d026317fbb9c9b75d36dc16e90e84c26854b088"}, - {file = "cffi-1.16.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ac0f5edd2360eea2f1daa9e26a41db02dd4b0451b48f7c318e217ee092a213e9"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7e61e3e4fa664a8588aa25c883eab612a188c725755afff6289454d6362b9673"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a72e8961a86d19bdb45851d8f1f08b041ea37d2bd8d4fd19903bc3083d80c896"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5b50bf3f55561dac5438f8e70bfcdfd74543fd60df5fa5f62d94e5867deca684"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7651c50c8c5ef7bdb41108b7b8c5a83013bfaa8a935590c5d74627c047a583c7"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4108df7fe9b707191e55f33efbcb2d81928e10cea45527879a4749cbe472614"}, - {file = "cffi-1.16.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:32c68ef735dbe5857c810328cb2481e24722a59a2003018885514d4c09af9743"}, - {file = "cffi-1.16.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:673739cb539f8cdaa07d92d02efa93c9ccf87e345b9a0b556e3ecc666718468d"}, - {file = "cffi-1.16.0-cp310-cp310-win32.whl", hash = "sha256:9f90389693731ff1f659e55c7d1640e2ec43ff725cc61b04b2f9c6d8d017df6a"}, - {file = "cffi-1.16.0-cp310-cp310-win_amd64.whl", hash = "sha256:e6024675e67af929088fda399b2094574609396b1decb609c55fa58b028a32a1"}, - {file = "cffi-1.16.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b84834d0cf97e7d27dd5b7f3aca7b6e9263c56308ab9dc8aae9784abb774d404"}, - {file = "cffi-1.16.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1b8ebc27c014c59692bb2664c7d13ce7a6e9a629be20e54e7271fa696ff2b417"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ee07e47c12890ef248766a6e55bd38ebfb2bb8edd4142d56db91b21ea68b7627"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8a9d3ebe49f084ad71f9269834ceccbf398253c9fac910c4fd7053ff1386936"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e70f54f1796669ef691ca07d046cd81a29cb4deb1e5f942003f401c0c4a2695d"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5bf44d66cdf9e893637896c7faa22298baebcd18d1ddb6d2626a6e39793a1d56"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b78010e7b97fef4bee1e896df8a4bbb6712b7f05b7ef630f9d1da00f6444d2e"}, - {file = "cffi-1.16.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:c6a164aa47843fb1b01e941d385aab7215563bb8816d80ff3a363a9f8448a8dc"}, - {file = 
"cffi-1.16.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e09f3ff613345df5e8c3667da1d918f9149bd623cd9070c983c013792a9a62eb"}, - {file = "cffi-1.16.0-cp311-cp311-win32.whl", hash = "sha256:2c56b361916f390cd758a57f2e16233eb4f64bcbeee88a4881ea90fca14dc6ab"}, - {file = "cffi-1.16.0-cp311-cp311-win_amd64.whl", hash = "sha256:db8e577c19c0fda0beb7e0d4e09e0ba74b1e4c092e0e40bfa12fe05b6f6d75ba"}, - {file = "cffi-1.16.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:fa3a0128b152627161ce47201262d3140edb5a5c3da88d73a1b790a959126956"}, - {file = "cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:68e7c44931cc171c54ccb702482e9fc723192e88d25a0e133edd7aff8fcd1f6e"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd808f9c129ba2beda4cfc53bde801e5bcf9d6e0f22f095e45327c038bfe68e"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88e2b3c14bdb32e440be531ade29d3c50a1a59cd4e51b1dd8b0865c54ea5d2e2"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcc8eb6d5902bb1cf6dc4f187ee3ea80a1eba0a89aba40a5cb20a5087d961357"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b7be2d771cdba2942e13215c4e340bfd76398e9227ad10402a8767ab1865d2e6"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e715596e683d2ce000574bae5d07bd522c781a822866c20495e52520564f0969"}, - {file = "cffi-1.16.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:2d92b25dbf6cae33f65005baf472d2c245c050b1ce709cc4588cdcdd5495b520"}, - {file = "cffi-1.16.0-cp312-cp312-win32.whl", hash = "sha256:b2ca4e77f9f47c55c194982e10f058db063937845bb2b7a86c84a6cfe0aefa8b"}, - {file = "cffi-1.16.0-cp312-cp312-win_amd64.whl", hash = "sha256:68678abf380b42ce21a5f2abde8efee05c114c2fdb2e9eef2efdb0257fba1235"}, - {file = "cffi-1.16.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0c9ef6ff37e974b73c25eecc13952c55bceed9112be2d9d938ded8e856138bcc"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a09582f178759ee8128d9270cd1344154fd473bb77d94ce0aeb2a93ebf0feaf0"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e760191dd42581e023a68b758769e2da259b5d52e3103c6060ddc02c9edb8d7b"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:80876338e19c951fdfed6198e70bc88f1c9758b94578d5a7c4c91a87af3cf31c"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a6a14b17d7e17fa0d207ac08642c8820f84f25ce17a442fd15e27ea18d67c59b"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6602bc8dc6f3a9e02b6c22c4fc1e47aa50f8f8e6d3f78a5e16ac33ef5fefa324"}, - {file = "cffi-1.16.0-cp38-cp38-win32.whl", hash = "sha256:131fd094d1065b19540c3d72594260f118b231090295d8c34e19a7bbcf2e860a"}, - {file = "cffi-1.16.0-cp38-cp38-win_amd64.whl", hash = "sha256:31d13b0f99e0836b7ff893d37af07366ebc90b678b6664c955b54561fc36ef36"}, - {file = "cffi-1.16.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:582215a0e9adbe0e379761260553ba11c58943e4bbe9c36430c4ca6ac74b15ed"}, - {file = "cffi-1.16.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b29ebffcf550f9da55bec9e02ad430c992a87e5f512cd63388abb76f1036d8d2"}, - {file = 
"cffi-1.16.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc9b18bf40cc75f66f40a7379f6a9513244fe33c0e8aa72e2d56b0196a7ef872"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cb4a35b3642fc5c005a6755a5d17c6c8b6bcb6981baf81cea8bfbc8903e8ba8"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b86851a328eedc692acf81fb05444bdf1891747c25af7529e39ddafaf68a4f3f"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c0f31130ebc2d37cdd8e44605fb5fa7ad59049298b3f745c74fa74c62fbfcfc4"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f8e709127c6c77446a8c0a8c8bf3c8ee706a06cd44b1e827c3e6a2ee6b8c098"}, - {file = "cffi-1.16.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:748dcd1e3d3d7cd5443ef03ce8685043294ad6bd7c02a38d1bd367cfd968e000"}, - {file = "cffi-1.16.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8895613bcc094d4a1b2dbe179d88d7fb4a15cee43c052e8885783fac397d91fe"}, - {file = "cffi-1.16.0-cp39-cp39-win32.whl", hash = "sha256:ed86a35631f7bfbb28e108dd96773b9d5a6ce4811cf6ea468bb6a359b256b1e4"}, - {file = "cffi-1.16.0-cp39-cp39-win_amd64.whl", hash = "sha256:3686dffb02459559c74dd3d81748269ffb0eb027c39a6fc99502de37d501faa8"}, - {file = "cffi-1.16.0.tar.gz", hash = "sha256:bcb3ef43e58665bbda2fb198698fcae6776483e0c4a631aa5647806c25e02cc0"}, -] - -[package.dependencies] -pycparser = "*" - -[[package]] -name = "charset-normalizer" -version = "3.3.2" -description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." -optional = false -python-versions = ">=3.7.0" -files = [ - {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"}, - {file = 
"charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"}, - {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"}, - {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"}, - 
{file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"}, - {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = 
"sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"}, - {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"}, - {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = 
"sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"}, - {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"}, - {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"}, -] - -[[package]] -name = "cleo" -version = "2.1.0" -description = "Cleo allows you to create beautiful and testable command-line interfaces." 
-optional = false -python-versions = ">=3.7,<4.0" -files = [ - {file = "cleo-2.1.0-py3-none-any.whl", hash = "sha256:4a31bd4dd45695a64ee3c4758f583f134267c2bc518d8ae9a29cf237d009b07e"}, - {file = "cleo-2.1.0.tar.gz", hash = "sha256:0b2c880b5d13660a7ea651001fb4acb527696c01f15c9ee650f377aa543fd523"}, -] - -[package.dependencies] -crashtest = ">=0.4.1,<0.5.0" -rapidfuzz = ">=3.0.0,<4.0.0" - -[[package]] -name = "click" -version = "8.1.7" -description = "Composable command line interface toolkit" -optional = false -python-versions = ">=3.7" -files = [ - {file = "click-8.1.7-py3-none-any.whl", hash = "sha256:ae74fb96c20a0277a1d615f1e4d73c8414f5a98db8b799a7931d1582f3390c28"}, - {file = "click-8.1.7.tar.gz", hash = "sha256:ca9853ad459e787e2192211578cc907e7594e294c7ccc834310722b41b9ca6de"}, -] - -[package.dependencies] -colorama = {version = "*", markers = "platform_system == \"Windows\""} - -[[package]] -name = "colorama" -version = "0.4.6" -description = "Cross-platform colored terminal text." -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" -files = [ - {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, - {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, -] - -[[package]] -name = "coverage" -version = "7.5.4" -description = "Code coverage measurement for Python" -optional = false -python-versions = ">=3.8" -files = [ - {file = "coverage-7.5.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6cfb5a4f556bb51aba274588200a46e4dd6b505fb1a5f8c5ae408222eb416f99"}, - {file = "coverage-7.5.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2174e7c23e0a454ffe12267a10732c273243b4f2d50d07544a91198f05c48f47"}, - {file = "coverage-7.5.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2214ee920787d85db1b6a0bd9da5f8503ccc8fcd5814d90796c2f2493a2f4d2e"}, - {file = "coverage-7.5.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1137f46adb28e3813dec8c01fefadcb8c614f33576f672962e323b5128d9a68d"}, - {file = "coverage-7.5.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b385d49609f8e9efc885790a5a0e89f2e3ae042cdf12958b6034cc442de428d3"}, - {file = "coverage-7.5.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:b4a474f799456e0eb46d78ab07303286a84a3140e9700b9e154cfebc8f527016"}, - {file = "coverage-7.5.4-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:5cd64adedf3be66f8ccee418473c2916492d53cbafbfcff851cbec5a8454b136"}, - {file = "coverage-7.5.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e564c2cf45d2f44a9da56f4e3a26b2236504a496eb4cb0ca7221cd4cc7a9aca9"}, - {file = "coverage-7.5.4-cp310-cp310-win32.whl", hash = "sha256:7076b4b3a5f6d2b5d7f1185fde25b1e54eb66e647a1dfef0e2c2bfaf9b4c88c8"}, - {file = "coverage-7.5.4-cp310-cp310-win_amd64.whl", hash = "sha256:018a12985185038a5b2bcafab04ab833a9a0f2c59995b3cec07e10074c78635f"}, - {file = "coverage-7.5.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:db14f552ac38f10758ad14dd7b983dbab424e731588d300c7db25b6f89e335b5"}, - {file = "coverage-7.5.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3257fdd8e574805f27bb5342b77bc65578e98cbc004a92232106344053f319ba"}, - {file = "coverage-7.5.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:3a6612c99081d8d6134005b1354191e103ec9705d7ba2754e848211ac8cacc6b"}, - {file = "coverage-7.5.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d45d3cbd94159c468b9b8c5a556e3f6b81a8d1af2a92b77320e887c3e7a5d080"}, - {file = "coverage-7.5.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ed550e7442f278af76d9d65af48069f1fb84c9f745ae249c1a183c1e9d1b025c"}, - {file = "coverage-7.5.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7a892be37ca35eb5019ec85402c3371b0f7cda5ab5056023a7f13da0961e60da"}, - {file = "coverage-7.5.4-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:8192794d120167e2a64721d88dbd688584675e86e15d0569599257566dec9bf0"}, - {file = "coverage-7.5.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:820bc841faa502e727a48311948e0461132a9c8baa42f6b2b84a29ced24cc078"}, - {file = "coverage-7.5.4-cp311-cp311-win32.whl", hash = "sha256:6aae5cce399a0f065da65c7bb1e8abd5c7a3043da9dceb429ebe1b289bc07806"}, - {file = "coverage-7.5.4-cp311-cp311-win_amd64.whl", hash = "sha256:d2e344d6adc8ef81c5a233d3a57b3c7d5181f40e79e05e1c143da143ccb6377d"}, - {file = "coverage-7.5.4-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:54317c2b806354cbb2dc7ac27e2b93f97096912cc16b18289c5d4e44fc663233"}, - {file = "coverage-7.5.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:042183de01f8b6d531e10c197f7f0315a61e8d805ab29c5f7b51a01d62782747"}, - {file = "coverage-7.5.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a6bb74ed465d5fb204b2ec41d79bcd28afccf817de721e8a807d5141c3426638"}, - {file = "coverage-7.5.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b3d45ff86efb129c599a3b287ae2e44c1e281ae0f9a9bad0edc202179bcc3a2e"}, - {file = "coverage-7.5.4-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5013ed890dc917cef2c9f765c4c6a8ae9df983cd60dbb635df8ed9f4ebc9f555"}, - {file = "coverage-7.5.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:1014fbf665fef86cdfd6cb5b7371496ce35e4d2a00cda501cf9f5b9e6fced69f"}, - {file = "coverage-7.5.4-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:3684bc2ff328f935981847082ba4fdc950d58906a40eafa93510d1b54c08a66c"}, - {file = "coverage-7.5.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:581ea96f92bf71a5ec0974001f900db495488434a6928a2ca7f01eee20c23805"}, - {file = "coverage-7.5.4-cp312-cp312-win32.whl", hash = "sha256:73ca8fbc5bc622e54627314c1a6f1dfdd8db69788f3443e752c215f29fa87a0b"}, - {file = "coverage-7.5.4-cp312-cp312-win_amd64.whl", hash = "sha256:cef4649ec906ea7ea5e9e796e68b987f83fa9a718514fe147f538cfeda76d7a7"}, - {file = "coverage-7.5.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:cdd31315fc20868c194130de9ee6bfd99755cc9565edff98ecc12585b90be882"}, - {file = "coverage-7.5.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:02ff6e898197cc1e9fa375581382b72498eb2e6d5fc0b53f03e496cfee3fac6d"}, - {file = "coverage-7.5.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d05c16cf4b4c2fc880cb12ba4c9b526e9e5d5bb1d81313d4d732a5b9fe2b9d53"}, - {file = "coverage-7.5.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c5986ee7ea0795a4095ac4d113cbb3448601efca7f158ec7f7087a6c705304e4"}, - {file = 
"coverage-7.5.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5df54843b88901fdc2f598ac06737f03d71168fd1175728054c8f5a2739ac3e4"}, - {file = "coverage-7.5.4-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:ab73b35e8d109bffbda9a3e91c64e29fe26e03e49addf5b43d85fc426dde11f9"}, - {file = "coverage-7.5.4-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:aea072a941b033813f5e4814541fc265a5c12ed9720daef11ca516aeacd3bd7f"}, - {file = "coverage-7.5.4-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:16852febd96acd953b0d55fc842ce2dac1710f26729b31c80b940b9afcd9896f"}, - {file = "coverage-7.5.4-cp38-cp38-win32.whl", hash = "sha256:8f894208794b164e6bd4bba61fc98bf6b06be4d390cf2daacfa6eca0a6d2bb4f"}, - {file = "coverage-7.5.4-cp38-cp38-win_amd64.whl", hash = "sha256:e2afe743289273209c992075a5a4913e8d007d569a406ffed0bd080ea02b0633"}, - {file = "coverage-7.5.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b95c3a8cb0463ba9f77383d0fa8c9194cf91f64445a63fc26fb2327e1e1eb088"}, - {file = "coverage-7.5.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3d7564cc09dd91b5a6001754a5b3c6ecc4aba6323baf33a12bd751036c998be4"}, - {file = "coverage-7.5.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:44da56a2589b684813f86d07597fdf8a9c6ce77f58976727329272f5a01f99f7"}, - {file = "coverage-7.5.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e16f3d6b491c48c5ae726308e6ab1e18ee830b4cdd6913f2d7f77354b33f91c8"}, - {file = "coverage-7.5.4-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dbc5958cb471e5a5af41b0ddaea96a37e74ed289535e8deca404811f6cb0bc3d"}, - {file = "coverage-7.5.4-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:a04e990a2a41740b02d6182b498ee9796cf60eefe40cf859b016650147908029"}, - {file = "coverage-7.5.4-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:ddbd2f9713a79e8e7242d7c51f1929611e991d855f414ca9996c20e44a895f7c"}, - {file = "coverage-7.5.4-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b1ccf5e728ccf83acd313c89f07c22d70d6c375a9c6f339233dcf792094bcbf7"}, - {file = "coverage-7.5.4-cp39-cp39-win32.whl", hash = "sha256:56b4eafa21c6c175b3ede004ca12c653a88b6f922494b023aeb1e836df953ace"}, - {file = "coverage-7.5.4-cp39-cp39-win_amd64.whl", hash = "sha256:65e528e2e921ba8fd67d9055e6b9f9e34b21ebd6768ae1c1723f4ea6ace1234d"}, - {file = "coverage-7.5.4-pp38.pp39.pp310-none-any.whl", hash = "sha256:79b356f3dd5b26f3ad23b35c75dbdaf1f9e2450b6bcefc6d0825ea0aa3f86ca5"}, - {file = "coverage-7.5.4.tar.gz", hash = "sha256:a44963520b069e12789d0faea4e9fdb1e410cdc4aab89d94f7f55cbb7fef0353"}, -] - -[package.extras] -toml = ["tomli"] - -[[package]] -name = "crashtest" -version = "0.4.1" -description = "Manage Python errors with ease" -optional = false -python-versions = ">=3.7,<4.0" -files = [ - {file = "crashtest-0.4.1-py3-none-any.whl", hash = "sha256:8d23eac5fa660409f57472e3851dab7ac18aba459a8d19cbbba86d3d5aecd2a5"}, - {file = "crashtest-0.4.1.tar.gz", hash = "sha256:80d7b1f316ebfbd429f648076d6275c877ba30ba48979de4191714a75266f0ce"}, -] - -[[package]] -name = "croniter" -version = "2.0.5" -description = "croniter provides iteration for datetime object with cron like format" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.6" -files = [ - {file = "croniter-2.0.5-py2.py3-none-any.whl", hash = "sha256:fdbb44920944045cc323db54599b321325141d82d14fa7453bc0699826bbe9ed"}, - {file = 
"croniter-2.0.5.tar.gz", hash = "sha256:f1f8ca0af64212fbe99b1bee125ee5a1b53a9c1b433968d8bca8817b79d237f3"}, -] - -[package.dependencies] -python-dateutil = "*" -pytz = ">2021.1" - -[[package]] -name = "cryptography" -version = "42.0.8" -description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." -optional = false -python-versions = ">=3.7" -files = [ - {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:81d8a521705787afe7a18d5bfb47ea9d9cc068206270aad0b96a725022e18d2e"}, - {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:961e61cefdcb06e0c6d7e3a1b22ebe8b996eb2bf50614e89384be54c48c6b63d"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e3ec3672626e1b9e55afd0df6d774ff0e953452886e06e0f1eb7eb0c832e8902"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e599b53fd95357d92304510fb7bda8523ed1f79ca98dce2f43c115950aa78801"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5226d5d21ab681f432a9c1cf8b658c0cb02533eece706b155e5fbd8a0cdd3949"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:6b7c4f03ce01afd3b76cf69a5455caa9cfa3de8c8f493e0d3ab7d20611c8dae9"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:2346b911eb349ab547076f47f2e035fc8ff2c02380a7cbbf8d87114fa0f1c583"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:ad803773e9df0b92e0a817d22fd8a3675493f690b96130a5e24f1b8fabbea9c7"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2f66d9cd9147ee495a8374a45ca445819f8929a3efcd2e3df6428e46c3cbb10b"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d45b940883a03e19e944456a558b67a41160e367a719833c53de6911cabba2b7"}, - {file = "cryptography-42.0.8-cp37-abi3-win32.whl", hash = "sha256:a0c5b2b0585b6af82d7e385f55a8bc568abff8923af147ee3c07bd8b42cda8b2"}, - {file = "cryptography-42.0.8-cp37-abi3-win_amd64.whl", hash = "sha256:57080dee41209e556a9a4ce60d229244f7a66ef52750f813bfbe18959770cfba"}, - {file = "cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:dea567d1b0e8bc5764b9443858b673b734100c2871dc93163f58c46a97a83d28"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4783183f7cb757b73b2ae9aed6599b96338eb957233c58ca8f49a49cc32fd5e"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0608251135d0e03111152e41f0cc2392d1e74e35703960d4190b2e0f4ca9c70"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dc0fdf6787f37b1c6b08e6dfc892d9d068b5bdb671198c72072828b80bd5fe4c"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9c0c1716c8447ee7dbf08d6db2e5c41c688544c61074b54fc4564196f55c25a7"}, - {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:fff12c88a672ab9c9c1cf7b0c80e3ad9e2ebd9d828d955c126be4fd3e5578c9e"}, - {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:cafb92b2bc622cd1aa6a1dce4b93307792633f4c5fe1f46c6b97cf67073ec961"}, - {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:31f721658a29331f895a5a54e7e82075554ccfb8b163a18719d342f5ffe5ecb1"}, - {file = 
"cryptography-42.0.8-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b297f90c5723d04bcc8265fc2a0f86d4ea2e0f7ab4b6994459548d3a6b992a14"}, - {file = "cryptography-42.0.8-cp39-abi3-win32.whl", hash = "sha256:2f88d197e66c65be5e42cd72e5c18afbfae3f741742070e3019ac8f4ac57262c"}, - {file = "cryptography-42.0.8-cp39-abi3-win_amd64.whl", hash = "sha256:fa76fbb7596cc5839320000cdd5d0955313696d9511debab7ee7278fc8b5c84a"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba4f0a211697362e89ad822e667d8d340b4d8d55fae72cdd619389fb5912eefe"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:81884c4d096c272f00aeb1f11cf62ccd39763581645b0812e99a91505fa48e0c"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c9bb2ae11bfbab395bdd072985abde58ea9860ed84e59dbc0463a5d0159f5b71"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7016f837e15b0a1c119d27ecd89b3515f01f90a8615ed5e9427e30d9cdbfed3d"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5a94eccb2a81a309806027e1670a358b99b8fe8bfe9f8d329f27d72c094dde8c"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:dec9b018df185f08483f294cae6ccac29e7a6e0678996587363dc352dc65c842"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:343728aac38decfdeecf55ecab3264b015be68fc2816ca800db649607aeee648"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:013629ae70b40af70c9a7a5db40abe5d9054e6f4380e50ce769947b73bf3caad"}, - {file = "cryptography-42.0.8.tar.gz", hash = "sha256:8d09d05439ce7baa8e9e95b07ec5b6c886f548deb7e0f69ef25f64b3bce842f2"}, -] - -[package.dependencies] -cffi = {version = ">=1.12", markers = "platform_python_implementation != \"PyPy\""} - -[package.extras] -docs = ["sphinx (>=5.3.0)", "sphinx-rtd-theme (>=1.1.1)"] -docstest = ["pyenchant (>=1.6.11)", "readme-renderer", "sphinxcontrib-spelling (>=4.0.1)"] -nox = ["nox"] -pep8test = ["check-sdist", "click", "mypy", "ruff"] -sdist = ["build"] -ssh = ["bcrypt (>=3.1.5)"] -test = ["certifi", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"] -test-randomorder = ["pytest-randomly"] - -[[package]] -name = "distlib" -version = "0.3.8" -description = "Distribution utilities" -optional = false -python-versions = "*" -files = [ - {file = "distlib-0.3.8-py2.py3-none-any.whl", hash = "sha256:034db59a0b96f8ca18035f36290806a9a6e6bd9d1ff91e45a7f172eb17e51784"}, - {file = "distlib-0.3.8.tar.gz", hash = "sha256:1530ea13e350031b6312d8580ddb6b27a104275a31106523b8f123787f494f64"}, -] - -[[package]] -name = "django" -version = "4.2.13" -description = "A high-level Python web framework that encourages rapid development and clean, pragmatic design." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "Django-4.2.13-py3-none-any.whl", hash = "sha256:a17fcba2aad3fc7d46fdb23215095dbbd64e6174bf4589171e732b18b07e426a"}, - {file = "Django-4.2.13.tar.gz", hash = "sha256:837e3cf1f6c31347a1396a3f6b65688f2b4bb4a11c580dcb628b5afe527b68a5"}, -] - -[package.dependencies] -asgiref = ">=3.6.0,<4" -sqlparse = ">=0.3.1" -tzdata = {version = "*", markers = "sys_platform == \"win32\""} - -[package.extras] -argon2 = ["argon2-cffi (>=19.1.0)"] -bcrypt = ["bcrypt"] - -[[package]] -name = "dulwich" -version = "0.21.7" -description = "Python Git Library" -optional = false -python-versions = ">=3.7" -files = [ - {file = "dulwich-0.21.7-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d4c0110798099bb7d36a110090f2688050703065448895c4f53ade808d889dd3"}, - {file = "dulwich-0.21.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2bc12697f0918bee324c18836053644035362bb3983dc1b210318f2fed1d7132"}, - {file = "dulwich-0.21.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:471305af74790827fcbafe330fc2e8bdcee4fb56ca1177c8c481b1c8f806c4a4"}, - {file = "dulwich-0.21.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d54c9d0e845be26f65f954dff13a1cd3f2b9739820c19064257b8fd7435ab263"}, - {file = "dulwich-0.21.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:12d61334a575474e707614f2e93d6ed4cdae9eb47214f9277076d9e5615171d3"}, - {file = "dulwich-0.21.7-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e274cebaf345f0b1e3b70197f2651de92b652386b68020cfd3bf61bc30f6eaaa"}, - {file = "dulwich-0.21.7-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:817822f970e196e757ae01281ecbf21369383285b9f4a83496312204cf889b8c"}, - {file = "dulwich-0.21.7-cp310-cp310-win32.whl", hash = "sha256:7836da3f4110ce684dcd53489015fb7fa94ed33c5276e3318b8b1cbcb5b71e08"}, - {file = "dulwich-0.21.7-cp310-cp310-win_amd64.whl", hash = "sha256:4a043b90958cec866b4edc6aef5fe3c2c96a664d0b357e1682a46f6c477273c4"}, - {file = "dulwich-0.21.7-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ce8db196e79c1f381469410d26fb1d8b89c6b87a4e7f00ff418c22a35121405c"}, - {file = "dulwich-0.21.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:62bfb26bdce869cd40be443dfd93143caea7089b165d2dcc33de40f6ac9d812a"}, - {file = "dulwich-0.21.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c01a735b9a171dcb634a97a3cec1b174cfbfa8e840156870384b633da0460f18"}, - {file = "dulwich-0.21.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fa4d14767cf7a49c9231c2e52cb2a3e90d0c83f843eb6a2ca2b5d81d254cf6b9"}, - {file = "dulwich-0.21.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bca4b86e96d6ef18c5bc39828ea349efb5be2f9b1f6ac9863f90589bac1084d"}, - {file = "dulwich-0.21.7-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a7b5624b02ef808cdc62dabd47eb10cd4ac15e8ac6df9e2e88b6ac6b40133673"}, - {file = "dulwich-0.21.7-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c3a539b4696a42fbdb7412cb7b66a4d4d332761299d3613d90a642923c7560e1"}, - {file = "dulwich-0.21.7-cp311-cp311-win32.whl", hash = "sha256:675a612ce913081beb0f37b286891e795d905691dfccfb9bf73721dca6757cde"}, - {file = "dulwich-0.21.7-cp311-cp311-win_amd64.whl", hash = "sha256:460ba74bdb19f8d498786ae7776745875059b1178066208c0fd509792d7f7bfc"}, - {file = "dulwich-0.21.7-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:4c51058ec4c0b45dc5189225b9e0c671b96ca9713c1daf71d622c13b0ab07681"}, - {file = 
"dulwich-0.21.7-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:4bc4c5366eaf26dda3fdffe160a3b515666ed27c2419f1d483da285ac1411de0"}, - {file = "dulwich-0.21.7-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a0650ec77d89cb947e3e4bbd4841c96f74e52b4650830112c3057a8ca891dc2f"}, - {file = "dulwich-0.21.7-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4f18f0a311fb7734b033a3101292b932158cade54b74d1c44db519e42825e5a2"}, - {file = "dulwich-0.21.7-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c589468e5c0cd84e97eb7ec209ab005a2cb69399e8c5861c3edfe38989ac3a8"}, - {file = "dulwich-0.21.7-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:d62446797163317a397a10080c6397ffaaca51a7804c0120b334f8165736c56a"}, - {file = "dulwich-0.21.7-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e84cc606b1f581733df4350ca4070e6a8b30be3662bbb81a590b177d0c996c91"}, - {file = "dulwich-0.21.7-cp312-cp312-win32.whl", hash = "sha256:c3d1685f320907a52c40fd5890627945c51f3a5fa4bcfe10edb24fec79caadec"}, - {file = "dulwich-0.21.7-cp312-cp312-win_amd64.whl", hash = "sha256:6bd69921fdd813b7469a3c77bc75c1783cc1d8d72ab15a406598e5a3ba1a1503"}, - {file = "dulwich-0.21.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7d8ab29c660125db52106775caa1f8f7f77a69ed1fe8bc4b42bdf115731a25bf"}, - {file = "dulwich-0.21.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b0d2e4485b98695bf95350ce9d38b1bb0aaac2c34ad00a0df789aa33c934469b"}, - {file = "dulwich-0.21.7-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e138d516baa6b5bafbe8f030eccc544d0d486d6819b82387fc0e285e62ef5261"}, - {file = "dulwich-0.21.7-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:f34bf9b9fa9308376263fd9ac43143c7c09da9bc75037bb75c6c2423a151b92c"}, - {file = "dulwich-0.21.7-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2e2c66888207b71cd1daa2acb06d3984a6bc13787b837397a64117aa9fc5936a"}, - {file = "dulwich-0.21.7-cp37-cp37m-win32.whl", hash = "sha256:10893105c6566fc95bc2a67b61df7cc1e8f9126d02a1df6a8b2b82eb59db8ab9"}, - {file = "dulwich-0.21.7-cp37-cp37m-win_amd64.whl", hash = "sha256:460b3849d5c3d3818a80743b4f7a0094c893c559f678e56a02fff570b49a644a"}, - {file = "dulwich-0.21.7-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:74700e4c7d532877355743336c36f51b414d01e92ba7d304c4f8d9a5946dbc81"}, - {file = "dulwich-0.21.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c92e72c43c9e9e936b01a57167e0ea77d3fd2d82416edf9489faa87278a1cdf7"}, - {file = "dulwich-0.21.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d097e963eb6b9fa53266146471531ad9c6765bf390849230311514546ed64db2"}, - {file = "dulwich-0.21.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:808e8b9cc0aa9ac74870b49db4f9f39a52fb61694573f84b9c0613c928d4caf8"}, - {file = "dulwich-0.21.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1957b65f96e36c301e419d7adaadcff47647c30eb072468901bb683b1000bc5"}, - {file = "dulwich-0.21.7-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:4b09bc3a64fb70132ec14326ecbe6e0555381108caff3496898962c4136a48c6"}, - {file = "dulwich-0.21.7-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d5882e70b74ac3c736a42d3fdd4f5f2e6570637f59ad5d3e684760290b58f041"}, - {file = "dulwich-0.21.7-cp38-cp38-win32.whl", hash = "sha256:29bb5c1d70eba155ded41ed8a62be2f72edbb3c77b08f65b89c03976292f6d1b"}, - {file = "dulwich-0.21.7-cp38-cp38-win_amd64.whl", hash = 
"sha256:25c3ab8fb2e201ad2031ddd32e4c68b7c03cb34b24a5ff477b7a7dcef86372f5"}, - {file = "dulwich-0.21.7-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8929c37986c83deb4eb500c766ee28b6670285b512402647ee02a857320e377c"}, - {file = "dulwich-0.21.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:cc1e11be527ac06316539b57a7688bcb1b6a3e53933bc2f844397bc50734e9ae"}, - {file = "dulwich-0.21.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0fc3078a1ba04c588fabb0969d3530efd5cd1ce2cf248eefb6baf7cbc15fc285"}, - {file = "dulwich-0.21.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40dcbd29ba30ba2c5bfbab07a61a5f20095541d5ac66d813056c122244df4ac0"}, - {file = "dulwich-0.21.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8869fc8ec3dda743e03d06d698ad489b3705775fe62825e00fa95aa158097fc0"}, - {file = "dulwich-0.21.7-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d96ca5e0dde49376fbcb44f10eddb6c30284a87bd03bb577c59bb0a1f63903fa"}, - {file = "dulwich-0.21.7-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e0064363bd5e814359657ae32517fa8001e8573d9d040bd997908d488ab886ed"}, - {file = "dulwich-0.21.7-cp39-cp39-win32.whl", hash = "sha256:869eb7be48243e695673b07905d18b73d1054a85e1f6e298fe63ba2843bb2ca1"}, - {file = "dulwich-0.21.7-cp39-cp39-win_amd64.whl", hash = "sha256:404b8edeb3c3a86c47c0a498699fc064c93fa1f8bab2ffe919e8ab03eafaaad3"}, - {file = "dulwich-0.21.7-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:e598d743c6c0548ebcd2baf94aa9c8bfacb787ea671eeeb5828cfbd7d56b552f"}, - {file = "dulwich-0.21.7-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4a2d76c96426e791556836ef43542b639def81be4f1d6d4322cd886c115eae1"}, - {file = "dulwich-0.21.7-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6c88acb60a1f4d31bd6d13bfba465853b3df940ee4a0f2a3d6c7a0778c705b7"}, - {file = "dulwich-0.21.7-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:ecd315847dea406a4decfa39d388a2521e4e31acde3bd9c2609c989e817c6d62"}, - {file = "dulwich-0.21.7-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d05d3c781bc74e2c2a2a8f4e4e2ed693540fbe88e6ac36df81deac574a6dad99"}, - {file = "dulwich-0.21.7-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6de6f8de4a453fdbae8062a6faa652255d22a3d8bce0cd6d2d6701305c75f2b3"}, - {file = "dulwich-0.21.7-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e25953c7acbbe4e19650d0225af1c0c0e6882f8bddd2056f75c1cc2b109b88ad"}, - {file = "dulwich-0.21.7-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:4637cbd8ed1012f67e1068aaed19fcc8b649bcf3e9e26649826a303298c89b9d"}, - {file = "dulwich-0.21.7-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:858842b30ad6486aacaa607d60bab9c9a29e7c59dc2d9cb77ae5a94053878c08"}, - {file = "dulwich-0.21.7-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:739b191f61e1c4ce18ac7d520e7a7cbda00e182c3489552408237200ce8411ad"}, - {file = "dulwich-0.21.7-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:274c18ec3599a92a9b67abaf110e4f181a4f779ee1aaab9e23a72e89d71b2bd9"}, - {file = "dulwich-0.21.7-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:2590e9b431efa94fc356ae33b38f5e64f1834ec3a94a6ac3a64283b206d07aa3"}, - {file = "dulwich-0.21.7-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ed60d1f610ef6437586f7768254c2a93820ccbd4cfdac7d182cf2d6e615969bb"}, - {file = 
"dulwich-0.21.7-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8278835e168dd097089f9e53088c7a69c6ca0841aef580d9603eafe9aea8c358"}, - {file = "dulwich-0.21.7-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ffc27fb063f740712e02b4d2f826aee8bbed737ed799962fef625e2ce56e2d29"}, - {file = "dulwich-0.21.7-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:61e3451bd3d3844f2dca53f131982553be4d1b1e1ebd9db701843dd76c4dba31"}, - {file = "dulwich-0.21.7.tar.gz", hash = "sha256:a9e9c66833cea580c3ac12927e4b9711985d76afca98da971405d414de60e968"}, -] - -[package.dependencies] -urllib3 = ">=1.25" - -[package.extras] -fastimport = ["fastimport"] -https = ["urllib3 (>=1.24.1)"] -paramiko = ["paramiko"] -pgp = ["gpg"] - -[[package]] -name = "fakeredis" -version = "2.23.2" -description = "Python implementation of redis API, can be used for testing purposes." -optional = false -python-versions = "<4.0,>=3.7" -files = [ - {file = "fakeredis-2.23.2-py3-none-any.whl", hash = "sha256:3721946b955930c065231befd24a9cdc68b339746e93848ef01a010d98e4eb4f"}, - {file = "fakeredis-2.23.2.tar.gz", hash = "sha256:d649c409abe46c63690b6c35d3c460e4ce64c69a52cea3f02daff2649378f878"}, -] - -[package.dependencies] -lupa = {version = ">=2.1,<3.0", optional = true, markers = "extra == \"lua\""} -redis = ">=4" -sortedcontainers = ">=2,<3" -typing_extensions = {version = ">=4.7,<5.0", markers = "python_version < \"3.11\""} - -[package.extras] -bf = ["pyprobables (>=0.6,<0.7)"] -cf = ["pyprobables (>=0.6,<0.7)"] -json = ["jsonpath-ng (>=1.6,<2.0)"] -lua = ["lupa (>=2.1,<3.0)"] -probabilistic = ["pyprobables (>=0.6,<0.7)"] - -[[package]] -name = "fastjsonschema" -version = "2.20.0" -description = "Fastest Python implementation of JSON schema" -optional = false -python-versions = "*" -files = [ - {file = "fastjsonschema-2.20.0-py3-none-any.whl", hash = "sha256:5875f0b0fa7a0043a91e93a9b8f793bcbbba9691e7fd83dca95c28ba26d21f0a"}, - {file = "fastjsonschema-2.20.0.tar.gz", hash = "sha256:3d48fc5300ee96f5d116f10fe6f28d938e6008f59a6a025c2649475b87f76a23"}, -] - -[package.extras] -devel = ["colorama", "json-spec", "jsonschema", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"] - -[[package]] -name = "filelock" -version = "3.15.4" -description = "A platform independent file lock." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "filelock-3.15.4-py3-none-any.whl", hash = "sha256:6ca1fffae96225dab4c6eaf1c4f4f28cd2568d3ec2a44e15a08520504de468e7"}, - {file = "filelock-3.15.4.tar.gz", hash = "sha256:2207938cbc1844345cb01a5a95524dae30f0ce089eba5b00378295a17e3e90cb"}, -] - -[package.extras] -docs = ["furo (>=2023.9.10)", "sphinx (>=7.2.6)", "sphinx-autodoc-typehints (>=1.25.2)"] -testing = ["covdefaults (>=2.3)", "coverage (>=7.3.2)", "diff-cover (>=8.0.1)", "pytest (>=7.4.3)", "pytest-asyncio (>=0.21)", "pytest-cov (>=4.1)", "pytest-mock (>=3.12)", "pytest-timeout (>=2.2)", "virtualenv (>=20.26.2)"] -typing = ["typing-extensions (>=4.8)"] - -[[package]] -name = "flake8" -version = "7.1.0" -description = "the modular source code checker: pep8 pyflakes and co" -optional = false -python-versions = ">=3.8.1" -files = [ - {file = "flake8-7.1.0-py2.py3-none-any.whl", hash = "sha256:2e416edcc62471a64cea09353f4e7bdba32aeb079b6e360554c659a122b1bc6a"}, - {file = "flake8-7.1.0.tar.gz", hash = "sha256:48a07b626b55236e0fb4784ee69a465fbf59d79eec1f5b4785c3d3bc57d17aa5"}, -] - -[package.dependencies] -mccabe = ">=0.7.0,<0.8.0" -pycodestyle = ">=2.12.0,<2.13.0" -pyflakes = ">=3.2.0,<3.3.0" - -[[package]] -name = "flake8-pyproject" -version = "1.2.3" -description = "Flake8 plug-in loading the configuration from pyproject.toml" -optional = false -python-versions = ">= 3.6" -files = [ - {file = "flake8_pyproject-1.2.3-py3-none-any.whl", hash = "sha256:6249fe53545205af5e76837644dc80b4c10037e73a0e5db87ff562d75fb5bd4a"}, -] - -[package.dependencies] -Flake8 = ">=5" -TOMLi = {version = "*", markers = "python_version < \"3.11\""} - -[package.extras] -dev = ["pyTest", "pyTest-cov"] - -[[package]] -name = "freezegun" -version = "1.5.1" -description = "Let your Python tests travel through time" -optional = false -python-versions = ">=3.7" -files = [ - {file = "freezegun-1.5.1-py3-none-any.whl", hash = "sha256:bf111d7138a8abe55ab48a71755673dbaa4ab87f4cff5634a4442dfec34c15f1"}, - {file = "freezegun-1.5.1.tar.gz", hash = "sha256:b29dedfcda6d5e8e083ce71b2b542753ad48cfec44037b3fc79702e2980a89e9"}, -] - -[package.dependencies] -python-dateutil = ">=2.7" - -[[package]] -name = "idna" -version = "3.7" -description = "Internationalized Domain Names in Applications (IDNA)" -optional = false -python-versions = ">=3.5" -files = [ - {file = "idna-3.7-py3-none-any.whl", hash = "sha256:82fee1fc78add43492d3a1898bfa6d8a904cc97d8427f683ed8e798d07761aa0"}, - {file = "idna-3.7.tar.gz", hash = "sha256:028ff3aadf0609c1fd278d8ea3089299412a7a8b9bd005dd08b9f8285bcb5cfc"}, -] - -[[package]] -name = "importlib-metadata" -version = "8.0.0" -description = "Read metadata from Python packages" -optional = false -python-versions = ">=3.8" -files = [ - {file = "importlib_metadata-8.0.0-py3-none-any.whl", hash = "sha256:15584cf2b1bf449d98ff8a6ff1abef57bf20f3ac6454f431736cd3e660921b2f"}, - {file = "importlib_metadata-8.0.0.tar.gz", hash = "sha256:188bd24e4c346d3f0a933f275c2fec67050326a856b9a359881d7c2a697e8812"}, -] - -[package.dependencies] -zipp = ">=0.5" - -[package.extras] -doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] -perf = ["ipython"] -test = ["flufl.flake8", "importlib-resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-perf (>=0.9.2)", "pytest-ruff (>=0.2.1)"] - -[[package]] -name = "installer" 
-version = "0.7.0" -description = "A library for installing Python wheels." -optional = false -python-versions = ">=3.7" -files = [ - {file = "installer-0.7.0-py3-none-any.whl", hash = "sha256:05d1933f0a5ba7d8d6296bb6d5018e7c94fa473ceb10cf198a92ccea19c27b53"}, - {file = "installer-0.7.0.tar.gz", hash = "sha256:a26d3e3116289bb08216e0d0f7d925fcef0b0194eedfa0c944bcaaa106c4b631"}, -] - -[[package]] -name = "jaraco-classes" -version = "3.4.0" -description = "Utility functions for Python class constructs" -optional = false -python-versions = ">=3.8" -files = [ - {file = "jaraco.classes-3.4.0-py3-none-any.whl", hash = "sha256:f662826b6bed8cace05e7ff873ce0f9283b5c924470fe664fff1c2f00f581790"}, - {file = "jaraco.classes-3.4.0.tar.gz", hash = "sha256:47a024b51d0239c0dd8c8540c6c7f484be3b8fcf0b2d85c13825780d3b3f3acd"}, -] - -[package.dependencies] -more-itertools = "*" - -[package.extras] -docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] -testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-ruff (>=0.2.1)"] - -[[package]] -name = "jeepney" -version = "0.8.0" -description = "Low-level, pure Python DBus protocol wrapper." -optional = false -python-versions = ">=3.7" -files = [ - {file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"}, - {file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"}, -] - -[package.extras] -test = ["async-timeout", "pytest", "pytest-asyncio (>=0.17)", "pytest-trio", "testpath", "trio"] -trio = ["async_generator", "trio"] - -[[package]] -name = "keyring" -version = "24.3.1" -description = "Store and access your passwords safely." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "keyring-24.3.1-py3-none-any.whl", hash = "sha256:df38a4d7419a6a60fea5cef1e45a948a3e8430dd12ad88b0f423c5c143906218"}, - {file = "keyring-24.3.1.tar.gz", hash = "sha256:c3327b6ffafc0e8befbdb597cacdb4928ffe5c1212f7645f186e6d9957a898db"}, -] - -[package.dependencies] -importlib-metadata = {version = ">=4.11.4", markers = "python_version < \"3.12\""} -"jaraco.classes" = "*" -jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""} -pywin32-ctypes = {version = ">=0.2.0", markers = "sys_platform == \"win32\""} -SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""} - -[package.extras] -completion = ["shtab (>=1.1.0)"] -docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"] -testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-ruff (>=0.2.1)"] - -[[package]] -name = "lupa" -version = "2.2" -description = "Python wrapper around Lua and LuaJIT" -optional = false -python-versions = "*" -files = [ - {file = "lupa-2.2-cp27-cp27m-macosx_11_0_x86_64.whl", hash = "sha256:4bb05e3fc8f794b4a1b8a38229c3b4ae47f83cfbe7f6b172032f66d3308a0934"}, - {file = "lupa-2.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:13062395e716cebe25dfc6dc3738a9eb514bb052b52af25cf502c1fd74affd21"}, - {file = "lupa-2.2-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:e673443dd7f7f0510bb9f4b0dc6bad6932d271b0afdbdc492fa71e9b9eab638d"}, - {file = "lupa-2.2-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:3b47702b94e9e391052118cbde253f69a0af96ec776f48af74e72f30d740ccc9"}, - {file = "lupa-2.2-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2242884a5078cd2507f15a162b5faf6f39a1f27654a1cc7db09cdb65b0b599b3"}, - {file = "lupa-2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8555526f03bb41d5aef16d105e8f51da1000d833e90d846448cf745ca6cd72e8"}, - {file = "lupa-2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a50807c6cc11d3ecf568d964be6708e26d4669d435c76fcb568a98d1dd6e8ae9"}, - {file = "lupa-2.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c140dd19614e43b76b84295945878cea3cdf7ed34e133b1a8c0e3fa7efc9c6ac"}, - {file = "lupa-2.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:c725c1832b0c6095583a6a57273e6f33a6b55230f90bcacdf06934ce21ef04e9"}, - {file = "lupa-2.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:18a302810735da688d21e8397c696e68b89dbe3c45a3fdc3406f5c0e55887467"}, - {file = "lupa-2.2-cp310-cp310-win32.whl", hash = "sha256:a4f03aa308d949a3f2e4e755ffc6a698d3ea02fccd34014fab496efb99b3d4f4"}, - {file = "lupa-2.2-cp310-cp310-win_amd64.whl", hash = "sha256:8494802f789174cd26176e6b408e60e468cda348d4f767562d06991604813f61"}, - {file = "lupa-2.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:95ee903ab71c3e6498bcd3bca60938a961c84fae47cdf23389a48c73e15dbad2"}, - {file = "lupa-2.2-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:011dbc81a790693b5457a0d761b032a8acdcc2945e32ca6ef34a7698bda0b09a"}, - {file = "lupa-2.2-cp311-cp311-macosx_11_0_x86_64.whl", hash = "sha256:8c89d8e99f684dfedccbf2f0dbdcc28deb73c4ff0545452f43ec02330dacfe0c"}, - {file = "lupa-2.2-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:26c3edea3ce6465364af6cc1c134b7f23a3ff919e5e499720acbff01b14b9931"}, - 
{file = "lupa-2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cd6afa3f6c998ac55f90b0665266c19100387de55d25af25ef4a35197d29d52"}, - {file = "lupa-2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5b79bef7f48696bf70eff165afa49778470607dce6420b497eb82cfae1af6947"}, - {file = "lupa-2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:08e2bfa98725f7495cef30d42d87fff82795b9b9e76b740521828784b778ade7"}, - {file = "lupa-2.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:0318ceb4d1782776bae7495a3bd3d50e57f80115ecbeff1e95d87a4e9411acf2"}, - {file = "lupa-2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9180dc7ee5c580cee41d9afac0b7c738cf7f6badf4a1398a6e1921dff155619c"}, - {file = "lupa-2.2-cp311-cp311-win32.whl", hash = "sha256:82077fe962c6e9ae1652e826f58e6250d1daa13c446ba1f4d6b68f16df65db0b"}, - {file = "lupa-2.2-cp311-cp311-win_amd64.whl", hash = "sha256:e2d2b9a6a4ef109b75668e26204f122196f33907ce3ccc80322ca70f84f81598"}, - {file = "lupa-2.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8cd872e16e736a3ecb800e70b4f36a66c794b7d339247712244a515561da4ff5"}, - {file = "lupa-2.2-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:6e8027ad53daa511e4a049eb0eb9f71b46fd2c5be6897fc68d75288b04086d4d"}, - {file = "lupa-2.2-cp312-cp312-macosx_11_0_x86_64.whl", hash = "sha256:0a7bd2841fd41b718d415162ec53b7d00079c27b1c5c1a2f2d0fb8080dd64d73"}, - {file = "lupa-2.2-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:63eff3aa68791b5c9a400f89f18018f4f63b8619adaa603fcd09392b87ca6b9b"}, - {file = "lupa-2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8ab43356bb269ca4f03d25200b7559581cd791fbc631104c3e7d186d3c37221f"}, - {file = "lupa-2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:556779c0c28a2948749817ffd62dec882c834a6445aeff5d31ae862e14eebb21"}, - {file = "lupa-2.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:42fd611a099ab1804a8d23154d4c7b2221557c94d34f8964da0dc03760f15d3d"}, - {file = "lupa-2.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:63d5ae8ccbafe0aa0034da32f18fc692963df1b5e1ebf91e76f504de1d5aecff"}, - {file = "lupa-2.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:3d3d9e5991861d8ee28709d94e673b89bdea10188b34a155835ba2dbbc7d26a7"}, - {file = "lupa-2.2-cp312-cp312-win32.whl", hash = "sha256:58a3621579b26ad5a524c1c41623ec551160653e915cf4aa41453f4339821b89"}, - {file = "lupa-2.2-cp312-cp312-win_amd64.whl", hash = "sha256:8e8ff117eca26f5cedcd2b2467cf56d0c64cfcb804b5083a36d818b57edc4036"}, - {file = "lupa-2.2-cp36-cp36m-macosx_11_0_x86_64.whl", hash = "sha256:afe2b90c65f61f7d5ad55cdbfbb89cb50e5ab4d6184ea975befc51ffdc20dc8f"}, - {file = "lupa-2.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c597ea2dc203767dcb5a853cf885a7238b0639f5b7cb5c6ad5dbe5d2b39e25c6"}, - {file = "lupa-2.2-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8149dcbe9953e8cad991949dec41bf6dbaa8a2d613e4b024f98e510b0aab4fa4"}, - {file = "lupa-2.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:92e1c6a1f380bc829618d0e95c15612b6e2604baa8ffd42547451e9d842837ae"}, - {file = "lupa-2.2-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:56be246cf7126f980c13b79a03ad43361dee5a65f8be8c4e2feb58a2bdcc5a2a"}, - {file = "lupa-2.2-cp36-cp36m-musllinux_1_1_i686.whl", hash = 
"sha256:da3460b920d4520ae8a3927b92c22402592fe2e31f08492c3c0ba9b8eadee302"}, - {file = "lupa-2.2-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:211d3371d9836d87b2097f520492241cd5e06b29ca8777739c4fe30a1df4c76c"}, - {file = "lupa-2.2-cp36-cp36m-win32.whl", hash = "sha256:617fc3532f224619e15d45adb9c9af8f4690e36cad332d68d49e78463e51d528"}, - {file = "lupa-2.2-cp36-cp36m-win_amd64.whl", hash = "sha256:50b2f0f8bfcacd68c9ae0a2872ff4b90c2df0490f193253c922283a295f23b6a"}, - {file = "lupa-2.2-cp37-cp37m-macosx_11_0_x86_64.whl", hash = "sha256:f3de07b7f19296a702c8710f44b221aefe6563461e209198862cd1f06401b13d"}, - {file = "lupa-2.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eed6529c89ea475cbc403ed6e8670f1adf9eb2eb34b7610690d9827d35759a3c"}, - {file = "lupa-2.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1cc171b352c187a012bbc5c20692236843e8c123c60569be872cb72bb7edcbd4"}, - {file = "lupa-2.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:704ed8f5a91133a8d62cba2d6fe4f2e43c7ee6f3998484d31abcfc4a57bedd1e"}, - {file = "lupa-2.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d2aa0fba09a045f5bcc638ede0f614fcd36339da58b7415a1e66e3590781a4a5"}, - {file = "lupa-2.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:b2b911d3890fa93ae3f83c5d806008c3b551941813b39e7605def137a9b9b064"}, - {file = "lupa-2.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:00bcae88a2123f0cfd34f7206cc2d88008d905ebc065d41797827d046404b09e"}, - {file = "lupa-2.2-cp37-cp37m-win32.whl", hash = "sha256:225bbe9e58881bb92f96c6b43587168ed329b2b37c3236a9883efa681aec9f5a"}, - {file = "lupa-2.2-cp37-cp37m-win_amd64.whl", hash = "sha256:57662d9653e157872caeaa622d966aa1da7bb8fe8646b63fb1194a3cdb98c417"}, - {file = "lupa-2.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:cc728fbe6d4e668ad8bec979ef86675387ca640e319ec029e0fc8f2bc9c3d224"}, - {file = "lupa-2.2-cp38-cp38-macosx_11_0_x86_64.whl", hash = "sha256:33a2beebe078e13770eff5d12a22d98a425fff89f87af2155c32769adc0114f1"}, - {file = "lupa-2.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fd1e95d8a399ff379d09358490171965aaa25007ed06488b972df08f1b3df509"}, - {file = "lupa-2.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a63d1bc6a473813c707cf5badbfba081bf7cfbd761d58e1812c9a65a477146f9"}, - {file = "lupa-2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:df6e1bdd13f6fbdab2212bf08c24c232653832673c21c10ba576f89770e58686"}, - {file = "lupa-2.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:26f2617544e4b8cf2a4c1873e6f4feb7e547f4c06bfd088a24547d37f68a3945"}, - {file = "lupa-2.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:189856225402eab6dc467b77190c5beddc5c004a9cdc5855e7517206f3b380ca"}, - {file = "lupa-2.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2563d55538ebecab1d8768c77e1972f7768440b8e41aff4466352b942aa50dd1"}, - {file = "lupa-2.2-cp38-cp38-win32.whl", hash = "sha256:6c7e418bd39b9e2717654ed52ea55b681247d95139da958603e0766ed138b190"}, - {file = "lupa-2.2-cp38-cp38-win_amd64.whl", hash = "sha256:3facbd310fc73d3bcdb8cb363df80524ee52ac25b7566d0f0fb8b300b04c3bdb"}, - {file = "lupa-2.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cda04e655af89824a92b4ca168524e0f526b78da5f39f66103cc3b6a924ef60c"}, - {file = "lupa-2.2-cp39-cp39-macosx_11_0_universal2.whl", hash = 
"sha256:c49d1962478fa6a94b468e0dd6f725034ee690f41ae03217ff4672f370a7a099"}, - {file = "lupa-2.2-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:6bddf06f4f4b2257701e12690c5e951eb6a02b88633b7a43cc160172ff3a88b5"}, - {file = "lupa-2.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:10c3bb414fc3a4ba9ac3e57a17ffd4c3d0db6da78c53b6792de5a964b5539e42"}, - {file = "lupa-2.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:10c2c81bc96f2091210aaf046ef22f920581a3e161b3961121171e02595ca6fb"}, - {file = "lupa-2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11193c9e7fe1b82d921991c68a33f5b08c8e0c16d67d173768fc80f8c75d9d52"}, - {file = "lupa-2.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9e149fafd20e748818a0b718abc42f099a3cc6debc7c6932564d7e475291f0e2"}, - {file = "lupa-2.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:2518128f38a4608bbc5375404082a3c22c86037639842fb7b1fc2b4f5d2a41e3"}, - {file = "lupa-2.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:756fc6aa5ca3a6b7764c474ef061760c5d38e2dd96c21567ab3c7d4f5ed2c3a7"}, - {file = "lupa-2.2-cp39-cp39-win32.whl", hash = "sha256:9b2b7148a77f60b7b193aec2bd820e89c1ecaab9838ca81c8212e2f972df1a1d"}, - {file = "lupa-2.2-cp39-cp39-win_amd64.whl", hash = "sha256:93216d7ae8bb373a8a388b058960a00eaaa6a01e5e2306a13e65db1024181a62"}, - {file = "lupa-2.2-pp310-pypy310_pp73-macosx_11_0_x86_64.whl", hash = "sha256:e4cd8c6f725a5629551ac08979d0631af6bed2564cf87dcae489bcb53bdab808"}, - {file = "lupa-2.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95d712728d36262e0bcffea2ad4b1c3ee6122e4eb16f5a70c2f4750f34580148"}, - {file = "lupa-2.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:47eb46153810e868c543ffc53a3369700998a3e617cfcebf49133a79e6f56432"}, - {file = "lupa-2.2-pp37-pypy37_pp73-macosx_11_0_x86_64.whl", hash = "sha256:283066c6ef9141a66924854a78619ff16bc2efd324484807be58ca9a8e9b617a"}, - {file = "lupa-2.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7141e395325f150321c3caa69178dc70224512e0483e2165d3d1ca375608abb7"}, - {file = "lupa-2.2-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:502248085d3d2dc74e642f97773367a1929daa24fcf039dd5048acdd5b49a8f9"}, - {file = "lupa-2.2-pp38-pypy38_pp73-macosx_11_0_x86_64.whl", hash = "sha256:4cdeb4a942068882c9e3751520b6de1b6c21d7c2526a2040755b62c7cb46308f"}, - {file = "lupa-2.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfd7e62f3149d10fa3485f4d5143f74b295787708b1974f7fad74b65fb911fa1"}, - {file = "lupa-2.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:4c78b3b7137212a9ef881adca3168a376445da3a7dc322b2416c90a73c81db2c"}, - {file = "lupa-2.2-pp39-pypy39_pp73-macosx_11_0_x86_64.whl", hash = "sha256:ecd1b3a4d8db553c4eaed742843f4b7d77bca795ec9f4292385709bcf691e8a3"}, - {file = "lupa-2.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36db930207c15656b9989721ea41ba8c039abd088cc7242bb690aa72a4978e68"}, - {file = "lupa-2.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:8ccba6f5cd8bdecf4000531298e6edd803547340752b80fe5b74911fa6119cc8"}, - {file = "lupa-2.2.tar.gz", hash = "sha256:665a006bcf8d9aacdfdb953824b929d06a0c55910a662b59be2f157ab4c8924d"}, -] - -[[package]] -name = "mccabe" -version = "0.7.0" -description = "McCabe checker, plugin for flake8" -optional = false -python-versions = ">=3.6" -files = [ - {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = 
"sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"}, - {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"}, -] - -[[package]] -name = "more-itertools" -version = "10.3.0" -description = "More routines for operating on iterables, beyond itertools" -optional = false -python-versions = ">=3.8" -files = [ - {file = "more-itertools-10.3.0.tar.gz", hash = "sha256:e5d93ef411224fbcef366a6e8ddc4c5781bc6359d43412a65dd5964e46111463"}, - {file = "more_itertools-10.3.0-py3-none-any.whl", hash = "sha256:ea6a02e24a9161e51faad17a8782b92a0df82c12c1c8886fec7f0c3fa1a1b320"}, -] - -[[package]] -name = "msgpack" -version = "1.0.8" -description = "MessagePack serializer" -optional = false -python-versions = ">=3.8" -files = [ - {file = "msgpack-1.0.8-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:505fe3d03856ac7d215dbe005414bc28505d26f0c128906037e66d98c4e95868"}, - {file = "msgpack-1.0.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e6b7842518a63a9f17107eb176320960ec095a8ee3b4420b5f688e24bf50c53c"}, - {file = "msgpack-1.0.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:376081f471a2ef24828b83a641a02c575d6103a3ad7fd7dade5486cad10ea659"}, - {file = "msgpack-1.0.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e390971d082dba073c05dbd56322427d3280b7cc8b53484c9377adfbae67dc2"}, - {file = "msgpack-1.0.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00e073efcba9ea99db5acef3959efa45b52bc67b61b00823d2a1a6944bf45982"}, - {file = "msgpack-1.0.8-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:82d92c773fbc6942a7a8b520d22c11cfc8fd83bba86116bfcf962c2f5c2ecdaa"}, - {file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9ee32dcb8e531adae1f1ca568822e9b3a738369b3b686d1477cbc643c4a9c128"}, - {file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e3aa7e51d738e0ec0afbed661261513b38b3014754c9459508399baf14ae0c9d"}, - {file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:69284049d07fce531c17404fcba2bb1df472bc2dcdac642ae71a2d079d950653"}, - {file = "msgpack-1.0.8-cp310-cp310-win32.whl", hash = "sha256:13577ec9e247f8741c84d06b9ece5f654920d8365a4b636ce0e44f15e07ec693"}, - {file = "msgpack-1.0.8-cp310-cp310-win_amd64.whl", hash = "sha256:e532dbd6ddfe13946de050d7474e3f5fb6ec774fbb1a188aaf469b08cf04189a"}, - {file = "msgpack-1.0.8-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9517004e21664f2b5a5fd6333b0731b9cf0817403a941b393d89a2f1dc2bd836"}, - {file = "msgpack-1.0.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d16a786905034e7e34098634b184a7d81f91d4c3d246edc6bd7aefb2fd8ea6ad"}, - {file = "msgpack-1.0.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e2872993e209f7ed04d963e4b4fbae72d034844ec66bc4ca403329db2074377b"}, - {file = "msgpack-1.0.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c330eace3dd100bdb54b5653b966de7f51c26ec4a7d4e87132d9b4f738220ba"}, - {file = "msgpack-1.0.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83b5c044f3eff2a6534768ccfd50425939e7a8b5cf9a7261c385de1e20dcfc85"}, - {file = "msgpack-1.0.8-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1876b0b653a808fcd50123b953af170c535027bf1d053b59790eebb0aeb38950"}, - {file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_aarch64.whl", hash = 
"sha256:dfe1f0f0ed5785c187144c46a292b8c34c1295c01da12e10ccddfc16def4448a"}, - {file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3528807cbbb7f315bb81959d5961855e7ba52aa60a3097151cb21956fbc7502b"}, - {file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e2f879ab92ce502a1e65fce390eab619774dda6a6ff719718069ac94084098ce"}, - {file = "msgpack-1.0.8-cp311-cp311-win32.whl", hash = "sha256:26ee97a8261e6e35885c2ecd2fd4a6d38252246f94a2aec23665a4e66d066305"}, - {file = "msgpack-1.0.8-cp311-cp311-win_amd64.whl", hash = "sha256:eadb9f826c138e6cf3c49d6f8de88225a3c0ab181a9b4ba792e006e5292d150e"}, - {file = "msgpack-1.0.8-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:114be227f5213ef8b215c22dde19532f5da9652e56e8ce969bf0a26d7c419fee"}, - {file = "msgpack-1.0.8-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:d661dc4785affa9d0edfdd1e59ec056a58b3dbb9f196fa43587f3ddac654ac7b"}, - {file = "msgpack-1.0.8-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d56fd9f1f1cdc8227d7b7918f55091349741904d9520c65f0139a9755952c9e8"}, - {file = "msgpack-1.0.8-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0726c282d188e204281ebd8de31724b7d749adebc086873a59efb8cf7ae27df3"}, - {file = "msgpack-1.0.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8db8e423192303ed77cff4dce3a4b88dbfaf43979d280181558af5e2c3c71afc"}, - {file = "msgpack-1.0.8-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:99881222f4a8c2f641f25703963a5cefb076adffd959e0558dc9f803a52d6a58"}, - {file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b5505774ea2a73a86ea176e8a9a4a7c8bf5d521050f0f6f8426afe798689243f"}, - {file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:ef254a06bcea461e65ff0373d8a0dd1ed3aa004af48839f002a0c994a6f72d04"}, - {file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e1dd7839443592d00e96db831eddb4111a2a81a46b028f0facd60a09ebbdd543"}, - {file = "msgpack-1.0.8-cp312-cp312-win32.whl", hash = "sha256:64d0fcd436c5683fdd7c907eeae5e2cbb5eb872fafbc03a43609d7941840995c"}, - {file = "msgpack-1.0.8-cp312-cp312-win_amd64.whl", hash = "sha256:74398a4cf19de42e1498368c36eed45d9528f5fd0155241e82c4082b7e16cffd"}, - {file = "msgpack-1.0.8-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0ceea77719d45c839fd73abcb190b8390412a890df2f83fb8cf49b2a4b5c2f40"}, - {file = "msgpack-1.0.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1ab0bbcd4d1f7b6991ee7c753655b481c50084294218de69365f8f1970d4c151"}, - {file = "msgpack-1.0.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:1cce488457370ffd1f953846f82323cb6b2ad2190987cd4d70b2713e17268d24"}, - {file = "msgpack-1.0.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3923a1778f7e5ef31865893fdca12a8d7dc03a44b33e2a5f3295416314c09f5d"}, - {file = "msgpack-1.0.8-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a22e47578b30a3e199ab067a4d43d790249b3c0587d9a771921f86250c8435db"}, - {file = "msgpack-1.0.8-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bd739c9251d01e0279ce729e37b39d49a08c0420d3fee7f2a4968c0576678f77"}, - {file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d3420522057ebab1728b21ad473aa950026d07cb09da41103f8e597dfbfaeb13"}, - {file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_i686.whl", hash = 
"sha256:5845fdf5e5d5b78a49b826fcdc0eb2e2aa7191980e3d2cfd2a30303a74f212e2"}, - {file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a0e76621f6e1f908ae52860bdcb58e1ca85231a9b0545e64509c931dd34275a"}, - {file = "msgpack-1.0.8-cp38-cp38-win32.whl", hash = "sha256:374a8e88ddab84b9ada695d255679fb99c53513c0a51778796fcf0944d6c789c"}, - {file = "msgpack-1.0.8-cp38-cp38-win_amd64.whl", hash = "sha256:f3709997b228685fe53e8c433e2df9f0cdb5f4542bd5114ed17ac3c0129b0480"}, - {file = "msgpack-1.0.8-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f51bab98d52739c50c56658cc303f190785f9a2cd97b823357e7aeae54c8f68a"}, - {file = "msgpack-1.0.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:73ee792784d48aa338bba28063e19a27e8d989344f34aad14ea6e1b9bd83f596"}, - {file = "msgpack-1.0.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f9904e24646570539a8950400602d66d2b2c492b9010ea7e965025cb71d0c86d"}, - {file = "msgpack-1.0.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e75753aeda0ddc4c28dce4c32ba2f6ec30b1b02f6c0b14e547841ba5b24f753f"}, - {file = "msgpack-1.0.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5dbf059fb4b7c240c873c1245ee112505be27497e90f7c6591261c7d3c3a8228"}, - {file = "msgpack-1.0.8-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4916727e31c28be8beaf11cf117d6f6f188dcc36daae4e851fee88646f5b6b18"}, - {file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7938111ed1358f536daf311be244f34df7bf3cdedb3ed883787aca97778b28d8"}, - {file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:493c5c5e44b06d6c9268ce21b302c9ca055c1fd3484c25ba41d34476c76ee746"}, - {file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5fbb160554e319f7b22ecf530a80a3ff496d38e8e07ae763b9e82fadfe96f273"}, - {file = "msgpack-1.0.8-cp39-cp39-win32.whl", hash = "sha256:f9af38a89b6a5c04b7d18c492c8ccf2aee7048aff1ce8437c4683bb5a1df893d"}, - {file = "msgpack-1.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:ed59dd52075f8fc91da6053b12e8c89e37aa043f8986efd89e61fae69dc1b011"}, - {file = "msgpack-1.0.8.tar.gz", hash = "sha256:95c02b0e27e706e48d0e5426d1710ca78e0f0628d6e89d5b5a5b91a5f12274f3"}, -] - -[[package]] -name = "packaging" -version = "24.1" -description = "Core utilities for Python packages" -optional = false -python-versions = ">=3.8" -files = [ - {file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"}, - {file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"}, -] - -[[package]] -name = "pexpect" -version = "4.9.0" -description = "Pexpect allows easy control of interactive console applications." -optional = false -python-versions = "*" -files = [ - {file = "pexpect-4.9.0-py2.py3-none-any.whl", hash = "sha256:7236d1e080e4936be2dc3e326cec0af72acf9212a7e1d060210e70a47e253523"}, - {file = "pexpect-4.9.0.tar.gz", hash = "sha256:ee7d41123f3c9911050ea2c2dac107568dc43b2d3b0c7557a33212c398ead30f"}, -] - -[package.dependencies] -ptyprocess = ">=0.5" - -[[package]] -name = "pkginfo" -version = "1.11.1" -description = "Query metadata from sdists / bdists / installed packages." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "pkginfo-1.11.1-py3-none-any.whl", hash = "sha256:bfa76a714fdfc18a045fcd684dbfc3816b603d9d075febef17cb6582bea29573"}, - {file = "pkginfo-1.11.1.tar.gz", hash = "sha256:2e0dca1cf4c8e39644eed32408ea9966ee15e0d324c62ba899a393b3c6b467aa"}, -] - -[package.extras] -testing = ["pytest", "pytest-cov", "wheel"] - -[[package]] -name = "platformdirs" -version = "4.2.2" -description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`." -optional = false -python-versions = ">=3.8" -files = [ - {file = "platformdirs-4.2.2-py3-none-any.whl", hash = "sha256:2d7a1657e36a80ea911db832a8a6ece5ee53d8de21edd5cc5879af6530b1bfee"}, - {file = "platformdirs-4.2.2.tar.gz", hash = "sha256:38b7b51f512eed9e84a22788b4bce1de17c0adb134d6becb09836e37d8654cd3"}, -] - -[package.extras] -docs = ["furo (>=2023.9.10)", "proselint (>=0.13)", "sphinx (>=7.2.6)", "sphinx-autodoc-typehints (>=1.25.2)"] -test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.4.3)", "pytest-cov (>=4.1)", "pytest-mock (>=3.12)"] -type = ["mypy (>=1.8)"] - -[[package]] -name = "poetry" -version = "1.8.3" -description = "Python dependency management and packaging made easy." -optional = false -python-versions = "<4.0,>=3.8" -files = [ - {file = "poetry-1.8.3-py3-none-any.whl", hash = "sha256:88191c69b08d06f9db671b793d68f40048e8904c0718404b63dcc2b5aec62d13"}, - {file = "poetry-1.8.3.tar.gz", hash = "sha256:67f4eb68288eab41e841cc71a00d26cf6bdda9533022d0189a145a34d0a35f48"}, -] - -[package.dependencies] -build = ">=1.0.3,<2.0.0" -cachecontrol = {version = ">=0.14.0,<0.15.0", extras = ["filecache"]} -cleo = ">=2.1.0,<3.0.0" -crashtest = ">=0.4.1,<0.5.0" -dulwich = ">=0.21.2,<0.22.0" -fastjsonschema = ">=2.18.0,<3.0.0" -importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""} -installer = ">=0.7.0,<0.8.0" -keyring = ">=24.0.0,<25.0.0" -packaging = ">=23.1" -pexpect = ">=4.7.0,<5.0.0" -pkginfo = ">=1.10,<2.0" -platformdirs = ">=3.0.0,<5" -poetry-core = "1.9.0" -poetry-plugin-export = ">=1.6.0,<2.0.0" -pyproject-hooks = ">=1.0.0,<2.0.0" -requests = ">=2.26,<3.0" -requests-toolbelt = ">=1.0.0,<2.0.0" -shellingham = ">=1.5,<2.0" -tomli = {version = ">=2.0.1,<3.0.0", markers = "python_version < \"3.11\""} -tomlkit = ">=0.11.4,<1.0.0" -trove-classifiers = ">=2022.5.19" -virtualenv = ">=20.23.0,<21.0.0" -xattr = {version = ">=1.0.0,<2.0.0", markers = "sys_platform == \"darwin\""} - -[[package]] -name = "poetry-core" -version = "1.9.0" -description = "Poetry PEP 517 Build Backend" -optional = false -python-versions = ">=3.8,<4.0" -files = [ - {file = "poetry_core-1.9.0-py3-none-any.whl", hash = "sha256:4e0c9c6ad8cf89956f03b308736d84ea6ddb44089d16f2adc94050108ec1f5a1"}, - {file = "poetry_core-1.9.0.tar.gz", hash = "sha256:fa7a4001eae8aa572ee84f35feb510b321bd652e5cf9293249d62853e1f935a2"}, -] - -[[package]] -name = "poetry-plugin-export" -version = "1.8.0" -description = "Poetry plugin to export the dependencies to various formats" -optional = false -python-versions = "<4.0,>=3.8" -files = [ - {file = "poetry_plugin_export-1.8.0-py3-none-any.whl", hash = "sha256:adbe232cfa0cc04991ea3680c865cf748bff27593b9abcb1f35fb50ed7ba2c22"}, - {file = "poetry_plugin_export-1.8.0.tar.gz", hash = "sha256:1fa6168a85d59395d835ca564bc19862a7c76061e60c3e7dfaec70d50937fc61"}, -] - -[package.dependencies] -poetry = ">=1.8.0,<3.0.0" -poetry-core = ">=1.7.0,<3.0.0" - -[[package]] -name = "ptyprocess" -version = "0.7.0" -description = "Run a 
subprocess in a pseudo terminal" -optional = false -python-versions = "*" -files = [ - {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"}, - {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"}, -] - -[[package]] -name = "pycodestyle" -version = "2.12.0" -description = "Python style guide checker" -optional = false -python-versions = ">=3.8" -files = [ - {file = "pycodestyle-2.12.0-py2.py3-none-any.whl", hash = "sha256:949a39f6b86c3e1515ba1787c2022131d165a8ad271b11370a8819aa070269e4"}, - {file = "pycodestyle-2.12.0.tar.gz", hash = "sha256:442f950141b4f43df752dd303511ffded3a04c2b6fb7f65980574f0c31e6e79c"}, -] - -[[package]] -name = "pycparser" -version = "2.22" -description = "C parser in Python" -optional = false -python-versions = ">=3.8" -files = [ - {file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"}, - {file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"}, -] - -[[package]] -name = "pyflakes" -version = "3.2.0" -description = "passive checker of Python programs" -optional = false -python-versions = ">=3.8" -files = [ - {file = "pyflakes-3.2.0-py2.py3-none-any.whl", hash = "sha256:84b5be138a2dfbb40689ca07e2152deb896a65c3a3e24c251c5c62489568074a"}, - {file = "pyflakes-3.2.0.tar.gz", hash = "sha256:1c61603ff154621fb2a9172037d84dca3500def8c8b630657d1701f026f8af3f"}, -] - -[[package]] -name = "pyproject-hooks" -version = "1.1.0" -description = "Wrappers to call pyproject.toml-based build backend hooks." -optional = false -python-versions = ">=3.7" -files = [ - {file = "pyproject_hooks-1.1.0-py3-none-any.whl", hash = "sha256:7ceeefe9aec63a1064c18d939bdc3adf2d8aa1988a510afec15151578b232aa2"}, - {file = "pyproject_hooks-1.1.0.tar.gz", hash = "sha256:4b37730834edbd6bd37f26ece6b44802fb1c1ee2ece0e54ddff8bfc06db86965"}, -] - -[[package]] -name = "python-dateutil" -version = "2.9.0.post0" -description = "Extensions to the standard Python datetime module" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" -files = [ - {file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"}, - {file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"}, -] - -[package.dependencies] -six = ">=1.5" - -[[package]] -name = "pytz" -version = "2024.1" -description = "World timezone definitions, modern and historical" -optional = false -python-versions = "*" -files = [ - {file = "pytz-2024.1-py2.py3-none-any.whl", hash = "sha256:328171f4e3623139da4983451950b28e95ac706e13f3f2630a879749e7a8b319"}, - {file = "pytz-2024.1.tar.gz", hash = "sha256:2a29735ea9c18baf14b448846bde5a48030ed267578472d8955cd0e7443a9812"}, -] - -[[package]] -name = "pywin32-ctypes" -version = "0.2.2" -description = "A (partial) reimplementation of pywin32 using ctypes/cffi" -optional = false -python-versions = ">=3.6" -files = [ - {file = "pywin32-ctypes-0.2.2.tar.gz", hash = "sha256:3426e063bdd5fd4df74a14fa3cf80a0b42845a87e1d1e81f6549f9daec593a60"}, - {file = "pywin32_ctypes-0.2.2-py3-none-any.whl", hash = "sha256:bf490a1a709baf35d688fe0ecf980ed4de11d2b3e37b51e5442587a75d9957e7"}, -] - -[[package]] -name = "pyyaml" -version = "6.0.1" -description = "YAML parser and emitter for Python" -optional = false 
-python-versions = ">=3.6" -files = [ - {file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"}, - {file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"}, - {file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"}, - {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"}, - {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"}, - {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"}, - {file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"}, - {file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"}, - {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"}, - {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"}, - {file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"}, - {file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"}, - {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a08c6f0fe150303c1c6b71ebcd7213c2858041a7e01975da3a99aed1e7a378ef"}, - {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"}, - {file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"}, - {file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"}, - {file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"}, - {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = 
"sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"}, - {file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"}, - {file = "PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"}, - {file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"}, - {file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"}, - {file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"}, - {file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"}, - {file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"}, - {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"}, - {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"}, - {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"}, - {file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"}, - {file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"}, - {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"}, - {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"}, - {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"}, -] - -[[package]] -name = "rapidfuzz" -version = "3.9.3" -description = "rapid fuzzy string matching" -optional = false -python-versions = ">=3.8" -files = [ - {file = "rapidfuzz-3.9.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bdb8c5b8e29238ec80727c2ba3b301efd45aa30c6a7001123a6647b8e6f77ea4"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b3bd0d9632088c63a241f217742b1cf86e2e8ae573e01354775bd5016d12138c"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:153f23c03d4917f6a1fc2fb56d279cc6537d1929237ff08ee7429d0e40464a18"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a96c5225e840f1587f1bac8fa6f67562b38e095341576e82b728a82021f26d62"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b777cd910ceecd738adc58593d6ed42e73f60ad04ecdb4a841ae410b51c92e0e"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:53e06e4b81f552da04940aa41fc556ba39dee5513d1861144300c36c33265b76"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c7ca5b6050f18fdcacdada2dc5fb7619ff998cd9aba82aed2414eee74ebe6cd"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:87bb8d84cb41446a808c4b5f746e29d8a53499381ed72f6c4e456fe0f81c80a8"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:959a15186d18425d19811bea86a8ffbe19fd48644004d29008e636631420a9b7"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:a24603dd05fb4e3c09d636b881ce347e5f55f925a6b1b4115527308a323b9f8e"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:0d055da0e801c71dd74ba81d72d41b2fa32afa182b9fea6b4b199d2ce937450d"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:875b581afb29a7213cf9d98cb0f98df862f1020bce9d9b2e6199b60e78a41d14"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-win32.whl", hash = "sha256:6073a46f61479a89802e3f04655267caa6c14eb8ac9d81a635a13805f735ebc1"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-win_amd64.whl", hash = "sha256:119c010e20e561249b99ca2627f769fdc8305b07193f63dbc07bca0a6c27e892"}, - {file = "rapidfuzz-3.9.3-cp310-cp310-win_arm64.whl", hash = "sha256:790b0b244f3213581d42baa2fed8875f9ee2b2f9b91f94f100ec80d15b140ba9"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f57e8305c281e8c8bc720515540e0580355100c0a7a541105c6cafc5de71daae"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a4fc7b784cf987dbddc300cef70e09a92ed1bce136f7bb723ea79d7e297fe76d"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b422c0a6fe139d5447a0766268e68e6a2a8c2611519f894b1f31f0a392b9167"}, - {file = 
"rapidfuzz-3.9.3-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f50fed4a9b0c9825ff37cf0bccafd51ff5792090618f7846a7650f21f85579c9"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b80eb7cbe62348c61d3e67e17057cddfd6defab168863028146e07d5a8b24a89"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f45be77ec82da32ce5709a362e236ccf801615cc7163b136d1778cf9e31b14"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd84b7f652a5610733400307dc732f57c4a907080bef9520412e6d9b55bc9adc"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:3e6d27dad8c990218b8cd4a5c99cbc8834f82bb46ab965a7265d5aa69fc7ced7"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:05ee0696ebf0dfe8f7c17f364d70617616afc7dafe366532730ca34056065b8a"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:2bc8391749e5022cd9e514ede5316f86e332ffd3cfceeabdc0b17b7e45198a8c"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:93981895602cf5944d89d317ae3b1b4cc684d175a8ae2a80ce5b65615e72ddd0"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:754b719a4990735f66653c9e9261dcf52fd4d925597e43d6b9069afcae700d21"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-win32.whl", hash = "sha256:14c9f268ade4c88cf77ab007ad0fdf63699af071ee69378de89fff7aa3cae134"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-win_amd64.whl", hash = "sha256:bc1991b4cde6c9d3c0bbcb83d5581dc7621bec8c666c095c65b4277233265a82"}, - {file = "rapidfuzz-3.9.3-cp311-cp311-win_arm64.whl", hash = "sha256:0c34139df09a61b1b557ab65782ada971b4a3bce7081d1b2bee45b0a52231adb"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:5d6a210347d6e71234af5c76d55eeb0348b026c9bb98fe7c1cca89bac50fb734"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b300708c917ce52f6075bdc6e05b07c51a085733650f14b732c087dc26e0aaad"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83ea7ca577d76778250421de61fb55a719e45b841deb769351fc2b1740763050"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8319838fb5b7b5f088d12187d91d152b9386ce3979ed7660daa0ed1bff953791"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:505d99131afd21529293a9a7b91dfc661b7e889680b95534756134dc1cc2cd86"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c52970f7784518d7c82b07a62a26e345d2de8c2bd8ed4774e13342e4b3ff4200"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:143caf7247449055ecc3c1e874b69e42f403dfc049fc2f3d5f70e1daf21c1318"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b8ab0fa653d9225195a8ff924f992f4249c1e6fa0aea563f685e71b81b9fcccf"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:57e7c5bf7b61c7320cfa5dde1e60e678d954ede9bb7da8e763959b2138391401"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:51fa1ba84653ab480a2e2044e2277bd7f0123d6693051729755addc0d015c44f"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-musllinux_1_2_s390x.whl", hash = 
"sha256:17ff7f7eecdb169f9236e3b872c96dbbaf116f7787f4d490abd34b0116e3e9c8"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:afe7c72d3f917b066257f7ff48562e5d462d865a25fbcabf40fca303a9fa8d35"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-win32.whl", hash = "sha256:e53ed2e9b32674ce96eed80b3b572db9fd87aae6742941fb8e4705e541d861ce"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-win_amd64.whl", hash = "sha256:35b7286f177e4d8ba1e48b03612f928a3c4bdac78e5651379cec59f95d8651e6"}, - {file = "rapidfuzz-3.9.3-cp312-cp312-win_arm64.whl", hash = "sha256:e6e4b9380ed4758d0cb578b0d1970c3f32dd9e87119378729a5340cb3169f879"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a39890013f6d5b056cc4bfdedc093e322462ece1027a57ef0c636537bdde7531"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b5bc0fdbf419493163c5c9cb147c5fbe95b8e25844a74a8807dcb1a125e630cf"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:efe6e200a75a792d37b960457904c4fce7c928a96ae9e5d21d2bd382fe39066e"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:de077c468c225d4c18f7188c47d955a16d65f21aab121cbdd98e3e2011002c37"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8f917eaadf5388466a95f6a236f678a1588d231e52eda85374077101842e794e"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:858ba57c05afd720db8088a8707079e8d024afe4644001fe0dbd26ef7ca74a65"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d36447d21b05f90282a6f98c5a33771805f9222e5d0441d03eb8824e33e5bbb4"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:acbe4b6f1ccd5b90c29d428e849aa4242e51bb6cab0448d5f3c022eb9a25f7b1"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:53c7f27cdf899e94712972237bda48cfd427646aa6f5d939bf45d084780e4c16"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:6175682a829c6dea4d35ed707f1dadc16513270ef64436568d03b81ccb6bdb74"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:5276df395bd8497397197fca2b5c85f052d2e6a66ffc3eb0544dd9664d661f95"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:77b5c4f3e72924d7845f0e189c304270066d0f49635cf8a3938e122c437e58de"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-win32.whl", hash = "sha256:8add34061e5cd561c72ed4febb5c15969e7b25bda2bb5102d02afc3abc1f52d0"}, - {file = "rapidfuzz-3.9.3-cp38-cp38-win_amd64.whl", hash = "sha256:604e0502a39cf8e67fa9ad239394dddad4cdef6d7008fdb037553817d420e108"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:21047f55d674614eb4b0ab34e35c3dc66f36403b9fbfae645199c4a19d4ed447"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a56da3aff97cb56fe85d9ca957d1f55dbac7c27da927a86a2a86d8a7e17f80aa"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:964c08481aec2fe574f0062e342924db2c6b321391aeb73d68853ed42420fd6d"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5e2b827258beefbe5d3f958243caa5a44cf46187eff0c20e0b2ab62d1550327a"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:c6e65a301fcd19fbfbee3a514cc0014ff3f3b254b9fd65886e8a9d6957fb7bca"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cbe93ba1725a8d47d2b9dca6c1f435174859427fbc054d83de52aea5adc65729"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aca21c0a34adee582775da997a600283e012a608a107398d80a42f9a57ad323d"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:256e07d3465173b2a91c35715a2277b1ee3ae0b9bbab4e519df6af78570741d0"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:802ca2cc8aa6b8b34c6fdafb9e32540c1ba05fca7ad60b3bbd7ec89ed1797a87"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:dd789100fc852cffac1449f82af0da139d36d84fd9faa4f79fc4140a88778343"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:5d0abbacdb06e27ff803d7ae0bd0624020096802758068ebdcab9bd49cf53115"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:378d1744828e27490a823fc6fe6ebfb98c15228d54826bf4e49e4b76eb5f5579"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-win32.whl", hash = "sha256:5d0cb272d43e6d3c0dedefdcd9d00007471f77b52d2787a4695e9dd319bb39d2"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-win_amd64.whl", hash = "sha256:15e4158ac4b3fb58108072ec35b8a69165f651ba1c8f43559a36d518dbf9fb3f"}, - {file = "rapidfuzz-3.9.3-cp39-cp39-win_arm64.whl", hash = "sha256:58c6a4936190c558d5626b79fc9e16497e5df7098589a7e80d8bff68148ff096"}, - {file = "rapidfuzz-3.9.3-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:5410dc848c947a603792f4f51b904a3331cf1dc60621586bfbe7a6de72da1091"}, - {file = "rapidfuzz-3.9.3-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:282d55700a1a3d3a7980746eb2fcd48c9bbc1572ebe0840d0340d548a54d01fe"}, - {file = "rapidfuzz-3.9.3-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc1037507810833646481f5729901a154523f98cbebb1157ba3a821012e16402"}, - {file = "rapidfuzz-3.9.3-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5e33f779391caedcba2ba3089fb6e8e557feab540e9149a5c3f7fea7a3a7df37"}, - {file = "rapidfuzz-3.9.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:41a81a9f311dc83d22661f9b1a1de983b201322df0c4554042ffffd0f2040c37"}, - {file = "rapidfuzz-3.9.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a93250bd8fae996350c251e1752f2c03335bb8a0a5b0c7e910a593849121a435"}, - {file = "rapidfuzz-3.9.3-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3617d1aa7716c57d120b6adc8f7c989f2d65bc2b0cbd5f9288f1fc7bf469da11"}, - {file = "rapidfuzz-3.9.3-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:ad04a3f5384b82933213bba2459f6424decc2823df40098920856bdee5fd6e88"}, - {file = "rapidfuzz-3.9.3-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8709918da8a88ad73c9d4dd0ecf24179a4f0ceba0bee21efc6ea21a8b5290349"}, - {file = "rapidfuzz-3.9.3-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b770f85eab24034e6ef7df04b2bfd9a45048e24f8a808e903441aa5abde8ecdd"}, - {file = "rapidfuzz-3.9.3-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:930b4e6fdb4d914390141a2b99a6f77a52beacf1d06aa4e170cba3a98e24c1bc"}, - {file = "rapidfuzz-3.9.3-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:c8444e921bfc3757c475c4f4d7416a7aa69b2d992d5114fe55af21411187ab0d"}, - {file = 
"rapidfuzz-3.9.3-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2c1d3ef3878f871abe6826e386c3d61b5292ef5f7946fe646f4206b85836b5da"}, - {file = "rapidfuzz-3.9.3-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:d861bf326ee7dabc35c532a40384541578cd1ec1e1b7db9f9ecbba56eb76ca22"}, - {file = "rapidfuzz-3.9.3-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cde6b9d9ba5007077ee321ec722fa714ebc0cbd9a32ccf0f4dd3cc3f20952d71"}, - {file = "rapidfuzz-3.9.3-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3bb6546e7b6bed1aefbe24f68a5fb9b891cc5aef61bca6c1a7b1054b7f0359bb"}, - {file = "rapidfuzz-3.9.3-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d8a57261ef7996d5ced7c8cba9189ada3fbeffd1815f70f635e4558d93766cb"}, - {file = "rapidfuzz-3.9.3-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:67201c02efc596923ad950519e0b75ceb78d524177ea557134d6567b9ac2c283"}, - {file = "rapidfuzz-3.9.3.tar.gz", hash = "sha256:b398ea66e8ed50451bce5997c430197d5e4b06ac4aa74602717f792d8d8d06e2"}, -] - -[package.extras] -full = ["numpy"] - -[[package]] -name = "redis" -version = "5.0.7" -description = "Python client for Redis database and key-value store" -optional = false -python-versions = ">=3.7" -files = [ - {file = "redis-5.0.7-py3-none-any.whl", hash = "sha256:0e479e24da960c690be5d9b96d21f7b918a98c0cf49af3b6fafaa0753f93a0db"}, - {file = "redis-5.0.7.tar.gz", hash = "sha256:8f611490b93c8109b50adc317b31bfd84fff31def3475b92e7e80bf39f48175b"}, -] - -[package.dependencies] -async-timeout = {version = ">=4.0.3", markers = "python_full_version < \"3.11.3\""} - -[package.extras] -hiredis = ["hiredis (>=1.0.0)"] -ocsp = ["cryptography (>=36.0.1)", "pyopenssl (==20.0.1)", "requests (>=2.26.0)"] - -[[package]] -name = "requests" -version = "2.32.3" -description = "Python HTTP for Humans." -optional = false -python-versions = ">=3.8" -files = [ - {file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"}, - {file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"}, -] - -[package.dependencies] -certifi = ">=2017.4.17" -charset-normalizer = ">=2,<4" -idna = ">=2.5,<4" -urllib3 = ">=1.21.1,<3" - -[package.extras] -socks = ["PySocks (>=1.5.6,!=1.5.7)"] -use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] - -[[package]] -name = "requests-toolbelt" -version = "1.0.0" -description = "A utility belt for advanced users of python-requests" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -files = [ - {file = "requests-toolbelt-1.0.0.tar.gz", hash = "sha256:7681a0a3d047012b5bdc0ee37d7f8f07ebe76ab08caeccfc3921ce23c88d5bc6"}, - {file = "requests_toolbelt-1.0.0-py2.py3-none-any.whl", hash = "sha256:cccfdd665f0a24fcf4726e690f65639d272bb0637b9b92dfd91a5568ccf6bd06"}, -] - -[package.dependencies] -requests = ">=2.0.1,<3.0.0" - -[[package]] -name = "rq" -version = "1.16.2" -description = "RQ is a simple, lightweight, library for creating background jobs, and processing them." 
-optional = false -python-versions = ">=3.7" -files = [ - {file = "rq-1.16.2-py3-none-any.whl", hash = "sha256:52e619f6cb469b00e04da74305045d244b75fecb2ecaa4f26422add57d3c5f09"}, - {file = "rq-1.16.2.tar.gz", hash = "sha256:5c5b9ad5fbaf792b8fada25cc7627f4d206a9a4455aced371d4f501cc3f13b34"}, -] - -[package.dependencies] -click = ">=5" -redis = ">=3.5" - -[[package]] -name = "secretstorage" -version = "3.3.3" -description = "Python bindings to FreeDesktop.org Secret Service API" -optional = false -python-versions = ">=3.6" -files = [ - {file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"}, - {file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"}, -] - -[package.dependencies] -cryptography = ">=2.0" -jeepney = ">=0.6" - -[[package]] -name = "shellingham" -version = "1.5.4" -description = "Tool to Detect Surrounding Shell" -optional = false -python-versions = ">=3.7" -files = [ - {file = "shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686"}, - {file = "shellingham-1.5.4.tar.gz", hash = "sha256:8dbca0739d487e5bd35ab3ca4b36e11c4078f3a234bfce294b0a0291363404de"}, -] - -[[package]] -name = "six" -version = "1.16.0" -description = "Python 2 and 3 compatibility utilities" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" -files = [ - {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, - {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, -] - -[[package]] -name = "sortedcontainers" -version = "2.4.0" -description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set" -optional = false -python-versions = "*" -files = [ - {file = "sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0"}, - {file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"}, -] - -[[package]] -name = "sqlparse" -version = "0.5.0" -description = "A non-validating SQL parser." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "sqlparse-0.5.0-py3-none-any.whl", hash = "sha256:c204494cd97479d0e39f28c93d46c0b2d5959c7b9ab904762ea6c7af211c8663"}, - {file = "sqlparse-0.5.0.tar.gz", hash = "sha256:714d0a4932c059d16189f58ef5411ec2287a4360f17cdd0edd2d09d4c5087c93"}, -] - -[package.extras] -dev = ["build", "hatch"] -doc = ["sphinx"] - -[[package]] -name = "tomli" -version = "2.0.1" -description = "A lil' TOML parser" -optional = false -python-versions = ">=3.7" -files = [ - {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"}, - {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"}, -] - -[[package]] -name = "tomlkit" -version = "0.12.5" -description = "Style preserving TOML library" -optional = false -python-versions = ">=3.7" -files = [ - {file = "tomlkit-0.12.5-py3-none-any.whl", hash = "sha256:af914f5a9c59ed9d0762c7b64d3b5d5df007448eb9cd2edc8a46b1eafead172f"}, - {file = "tomlkit-0.12.5.tar.gz", hash = "sha256:eef34fba39834d4d6b73c9ba7f3e4d1c417a4e56f89a7e96e090dd0d24b8fb3c"}, -] - -[[package]] -name = "trove-classifiers" -version = "2024.5.22" -description = "Canonical source for classifiers on PyPI (pypi.org)." -optional = false -python-versions = "*" -files = [ - {file = "trove_classifiers-2024.5.22-py3-none-any.whl", hash = "sha256:c43ade18704823e4afa3d9db7083294bc4708a5e02afbcefacd0e9d03a7a24ef"}, - {file = "trove_classifiers-2024.5.22.tar.gz", hash = "sha256:8a6242bbb5c9ae88d34cf665e816b287d2212973c8777dfaef5ec18d72ac1d03"}, -] - -[[package]] -name = "typing-extensions" -version = "4.12.2" -description = "Backported and Experimental Type Hints for Python 3.8+" -optional = false -python-versions = ">=3.8" -files = [ - {file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"}, - {file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"}, -] - -[[package]] -name = "tzdata" -version = "2024.1" -description = "Provider of IANA time zone data" -optional = false -python-versions = ">=2" -files = [ - {file = "tzdata-2024.1-py2.py3-none-any.whl", hash = "sha256:9068bc196136463f5245e51efda838afa15aaeca9903f49050dfa2679db4d252"}, - {file = "tzdata-2024.1.tar.gz", hash = "sha256:2674120f8d891909751c38abcdfd386ac0a5a1127954fbc332af6b5ceae07efd"}, -] - -[[package]] -name = "urllib3" -version = "2.2.2" -description = "HTTP library with thread-safe connection pooling, file post, and more." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "urllib3-2.2.2-py3-none-any.whl", hash = "sha256:a448b2f64d686155468037e1ace9f2d2199776e17f0a46610480d311f73e3472"}, - {file = "urllib3-2.2.2.tar.gz", hash = "sha256:dd505485549a7a552833da5e6063639d0d177c04f23bc3864e41e5dc5f612168"}, -] - -[package.extras] -brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] -h2 = ["h2 (>=4,<5)"] -socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] -zstd = ["zstandard (>=0.18.0)"] - -[[package]] -name = "virtualenv" -version = "20.26.3" -description = "Virtual Python Environment builder" -optional = false -python-versions = ">=3.7" -files = [ - {file = "virtualenv-20.26.3-py3-none-any.whl", hash = "sha256:8cc4a31139e796e9a7de2cd5cf2489de1217193116a8fd42328f1bd65f434589"}, - {file = "virtualenv-20.26.3.tar.gz", hash = "sha256:4c43a2a236279d9ea36a0d76f98d84bd6ca94ac4e0f4a3b9d46d05e10fea542a"}, -] - -[package.dependencies] -distlib = ">=0.3.7,<1" -filelock = ">=3.12.2,<4" -platformdirs = ">=3.9.1,<5" - -[package.extras] -docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.2,!=7.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.6)"] -test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8)", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10)"] - -[[package]] -name = "xattr" -version = "1.1.0" -description = "Python wrapper for extended filesystem attributes" -optional = false -python-versions = ">=3.8" -files = [ - {file = "xattr-1.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ef2fa0f85458736178fd3dcfeb09c3cf423f0843313e25391db2cfd1acec8888"}, - {file = "xattr-1.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ccab735d0632fe71f7d72e72adf886f45c18b7787430467ce0070207882cfe25"}, - {file = "xattr-1.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9013f290387f1ac90bccbb1926555ca9aef75651271098d99217284d9e010f7c"}, - {file = "xattr-1.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcd5dfbcee73c7be057676ecb900cabb46c691aff4397bf48c579ffb30bb963"}, - {file = "xattr-1.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6480589c1dac7785d1f851347a32c4a97305937bf7b488b857fe8b28a25de9e9"}, - {file = "xattr-1.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:08f61cbed52dc6f7c181455826a9ff1e375ad86f67dd9d5eb7663574abb32451"}, - {file = "xattr-1.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:918e1f83f2e8a072da2671eac710871ee5af337e9bf8554b5ce7f20cdb113186"}, - {file = "xattr-1.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:0f06e0c1e4d06b4e0e49aaa1184b6f0e81c3758c2e8365597918054890763b53"}, - {file = "xattr-1.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:46a641ac038a9f53d2f696716147ca4dbd6a01998dc9cd4bc628801bc0df7f4d"}, - {file = "xattr-1.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:7e4ca0956fd11679bb2e0c0d6b9cdc0f25470cc00d8da173bb7656cc9a9cf104"}, - {file = "xattr-1.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6881b120f9a4b36ccd8a28d933bc0f6e1de67218b6ce6e66874e0280fc006844"}, - {file = "xattr-1.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = 
"sha256:dab29d9288aa28e68a6f355ddfc3f0a7342b40c9012798829f3e7bd765e85c2c"}, - {file = "xattr-1.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0c80bbf55339c93770fc294b4b6586b5bf8e85ec00a4c2d585c33dbd84b5006"}, - {file = "xattr-1.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d1418705f253b6b6a7224b69773842cac83fcbcd12870354b6e11dd1cd54630f"}, - {file = "xattr-1.1.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:687e7d18611ef8d84a6ecd8f4d1ab6757500c1302f4c2046ce0aa3585e13da3f"}, - {file = "xattr-1.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b6ceb9efe0657a982ccb8b8a2efe96b690891779584c901d2f920784e5d20ae3"}, - {file = "xattr-1.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:b489b7916f239100956ea0b39c504f3c3a00258ba65677e4c8ba1bd0b5513446"}, - {file = "xattr-1.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0a9c431b0e66516a078125e9a273251d4b8e5ba84fe644b619f2725050d688a0"}, - {file = "xattr-1.1.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:1a5921ea3313cc1c57f2f53b63ea8ca9a91e48f4cc7ebec057d2447ec82c7efe"}, - {file = "xattr-1.1.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f6ad2a7bd5e6cf71d4a862413234a067cf158ca0ae94a40d4b87b98b62808498"}, - {file = "xattr-1.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0683dae7609f7280b0c89774d00b5957e6ffcb181c6019c46632b389706b77e6"}, - {file = "xattr-1.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:54cb15cd94e5ef8a0ef02309f1bf973ba0e13c11e87686e983f371948cfee6af"}, - {file = "xattr-1.1.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ff6223a854229055e803c2ad0c0ea9a6da50c6be30d92c198cf5f9f28819a921"}, - {file = "xattr-1.1.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d44e8f955218638c9ab222eed21e9bd9ab430d296caf2176fb37abe69a714e5c"}, - {file = "xattr-1.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:caab2c2986c30f92301f12e9c50415d324412e8e6a739a52a603c3e6a54b3610"}, - {file = "xattr-1.1.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:d6eb7d5f281014cd44e2d847a9107491af1bf3087f5afeded75ed3e37ec87239"}, - {file = "xattr-1.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:47a3bdfe034b4fdb70e5941d97037405e3904accc28e10dbef6d1c9061fb6fd7"}, - {file = "xattr-1.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:00d2b415cf9d6a24112d019e721aa2a85652f7bbc9f3b9574b2d1cd8668eb491"}, - {file = "xattr-1.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:78b377832dd0ee408f9f121a354082c6346960f7b6b1480483ed0618b1912120"}, - {file = "xattr-1.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6461a43b585e5f2e049b39bcbfcb6391bfef3c5118231f1b15d10bdb89ef17fe"}, - {file = "xattr-1.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:24d97f0d28f63695e3344ffdabca9fcc30c33e5c8ccc198c7524361a98d526f2"}, - {file = "xattr-1.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ad47d89968c9097900607457a0c89160b4771601d813e769f68263755516065"}, - {file = "xattr-1.1.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc53cab265f6e8449bd683d5ee3bc5a191e6dd940736f3de1a188e6da66b0653"}, - {file = 
"xattr-1.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:cd11e917f5b89f2a0ad639d9875943806c6c9309a3dd02da5a3e8ef92db7bed9"}, - {file = "xattr-1.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9c5a78c7558989492c4cb7242e490ffb03482437bf782967dfff114e44242343"}, - {file = "xattr-1.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:cebcf8a303a44fbc439b68321408af7267507c0d8643229dbb107f6c132d389c"}, - {file = "xattr-1.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b0d73150f2f9655b4da01c2369eb33a294b7f9d56eccb089819eafdbeb99f896"}, - {file = "xattr-1.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:793c01deaadac50926c0e1481702133260c7cb5e62116762f6fe1543d07b826f"}, - {file = "xattr-1.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e189e440bcd04ccaad0474720abee6ee64890823ec0db361fb0a4fb5e843a1bf"}, - {file = "xattr-1.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afacebbc1fa519f41728f8746a92da891c7755e6745164bd0d5739face318e86"}, - {file = "xattr-1.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9b1664edf003153ac8d1911e83a0fc60db1b1b374ee8ac943f215f93754a1102"}, - {file = "xattr-1.1.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dda2684228798e937a7c29b0e1c7ef3d70e2b85390a69b42a1c61b2039ba81de"}, - {file = "xattr-1.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b735ac2625a4fc2c9343b19f806793db6494336338537d2911c8ee4c390dda46"}, - {file = "xattr-1.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fa6a7af7a4ada43f15ccc58b6f9adcdbff4c36ba040013d2681e589e07ae280a"}, - {file = "xattr-1.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d1059b2f726e2702c8bbf9bbf369acfc042202a4cc576c2dec6791234ad5e948"}, - {file = "xattr-1.1.0-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:e2255f36ebf2cb2dbf772a7437ad870836b7396e60517211834cf66ce678b595"}, - {file = "xattr-1.1.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dba4f80b9855cc98513ddf22b7ad8551bc448c70d3147799ea4f6c0b758fb466"}, - {file = "xattr-1.1.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4cb70c16e7c3ae6ba0ab6c6835c8448c61d8caf43ea63b813af1f4dbe83dd156"}, - {file = "xattr-1.1.0-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83652910ef6a368b77b00825ad67815e5c92bfab551a848ca66e9981d14a7519"}, - {file = "xattr-1.1.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7a92aff66c43fa3e44cbeab7cbeee66266c91178a0f595e044bf3ce51485743b"}, - {file = "xattr-1.1.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9d4f71b673339aeaae1f6ea9ef8ea6c9643c8cd0df5003b9a0eaa75403e2e06c"}, - {file = "xattr-1.1.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a20de1c47b5cd7b47da61799a3b34e11e5815d716299351f82a88627a43f9a96"}, - {file = "xattr-1.1.0-pp38-pypy38_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23705c7079b05761ff2fa778ad17396e7599c8759401abc05b312dfb3bc99f69"}, - {file = "xattr-1.1.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:27272afeba8422f2a9d27e1080a9a7b807394e88cce73db9ed8d2dde3afcfb87"}, - {file = 
"xattr-1.1.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dd43978966de3baf4aea367c99ffa102b289d6c2ea5f3d9ce34a203dc2f2ab73"}, - {file = "xattr-1.1.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ded771eaf27bb4eb3c64c0d09866460ee8801d81dc21097269cf495b3cac8657"}, - {file = "xattr-1.1.0-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96ca300c0acca4f0cddd2332bb860ef58e1465d376364f0e72a1823fdd58e90d"}, - {file = "xattr-1.1.0.tar.gz", hash = "sha256:fecbf3b05043ed3487a28190dec3e4c4d879b2fcec0e30bafd8ec5d4b6043630"}, -] - -[package.dependencies] -cffi = ">=1.16.0" - -[package.extras] -test = ["pytest"] - -[[package]] -name = "zipp" -version = "3.19.2" -description = "Backport of pathlib-compatible object wrapper for zip files" -optional = false -python-versions = ">=3.8" -files = [ - {file = "zipp-3.19.2-py3-none-any.whl", hash = "sha256:f091755f667055f2d02b32c53771a7a6c8b47e1fdbc4b72a8b9072b3eef8015c"}, - {file = "zipp-3.19.2.tar.gz", hash = "sha256:bf1dcf6450f873a13e952a29504887c89e6de7506209e5b1bcc3460135d4de19"}, -] - -[package.extras] -doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] -test = ["big-O", "importlib-resources", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more-itertools", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-ignore-flaky", "pytest-mypy", "pytest-ruff (>=0.2.1)"] - -[extras] -yaml = ["pyyaml"] - -[metadata] -lock-version = "2.0" -python-versions = "^3.9" -content-hash = "cc83cb61408e5e8c7c884a42cf1538e141c32c4a3f194f09098a074251b1cc98" diff --git a/pyproject.toml b/pyproject.toml index 544389e..3222a37 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,75 +1,86 @@ [build-system] -requires = ["poetry-core"] -build-backend = "poetry.core.masonry.api" +requires = ["hatchling"] +build-backend = "hatchling.build" -[tool.poetry] +[project] name = "django-tasks-scheduler" -packages = [ - { include = "scheduler" }, -] -version = "1.3.4" -description = "An async job scheduler for django using redis" +version = "4.0.5" +description = "An async job scheduler for django using redis/valkey brokers" +authors = [{ name = "Daniel Moran", email = "daniel@moransoftware.ca" }] +requires-python = ">=3.10" readme = "README.md" -keywords = ["redis", "django", "background-jobs", "job-queue", "task-queue", "redis-queue", "scheduled-jobs"] -authors = [ - "Daniel Moran ", -] -maintainers = [ - "Daniel Moran ", -] license = "MIT" +maintainers = [{ name = "Daniel Moran", email = "daniel@moransoftware.ca" }] +keywords = [ + "redis", + "valkey", + "django", + "background-jobs", + "job-queue", + "task-queue", + "redis-queue", + "scheduled-jobs", +] classifiers = [ - 'Development Status :: 5 - Production/Stable', - 'Environment :: Web Environment', - 'Intended Audience :: Developers', - 'License :: OSI Approved :: MIT License', - 'Operating System :: OS Independent', - 'Programming Language :: Python', - 'Programming Language :: Python :: 3.9', - 'Programming Language :: Python :: 3.10', - 'Programming Language :: Python :: 3.11', - 'Programming Language :: Python :: 3.12', - 'Framework :: Django', - 'Framework :: Django :: 5.0', - 'Framework :: Django :: 4', - 'Framework :: Django :: 4.0', - 'Framework :: Django :: 4.1', - 'Framework :: Django :: 4.2', - 'Framework :: Django :: 3', - 
'Framework :: Django :: 3.2',
+    "Development Status :: 5 - Production/Stable",
+    "Environment :: Web Environment",
+    "Intended Audience :: Developers",
+    "License :: OSI Approved :: MIT License",
+    "Operating System :: OS Independent",
+    "Programming Language :: Python",
+    "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
+    "Programming Language :: Python :: 3.12",
+    "Programming Language :: Python :: 3.13",
+    "Framework :: Django",
+    "Framework :: Django :: 5.0",
+    "Framework :: Django :: 5.1",
+    "Framework :: Django :: 5.2",
+]
+dependencies = [
+    "django>=5",
+    "croniter>=2.0",
+    "click~=8.2",
+]
-homepage = "https://github.com/dsoftwareinc/django-tasks-scheduler"
-documentation = "https://django-tasks-scheduler.readthedocs.io/en/latest/"
-[tool.poetry.urls]
-"Bug Tracker" = "https://github.com/dsoftwareinc/django-tasks-scheduler/issues"
-"Funding" = "https://github.com/sponsors/cunla"
+[project.optional-dependencies]
+yaml = ["pyyaml~=6.0"]
+valkey = ["valkey>=6.0.2,<7"]
+sentry = ["sentry-sdk~=2.19"]
-[tool.poetry.dependencies]
-python = "^3.9"
-django = ">=3.2"
-croniter = "^2.0"
-click = "^8.1"
-rq = "^1.16"
-pyyaml = { version = "^6.0", optional = true }
+[project.urls]
+Homepage = "https://github.com/django-commons/django-tasks-scheduler"
+Documentation = "https://django-tasks-scheduler.readthedocs.io/"
+"Bug Tracker" = "https://github.com/django-commons/django-tasks-scheduler/issues"
+Funding = "https://github.com/sponsors/cunla"
-[tool.poetry.dev-dependencies]
-poetry = "^1.8.2"
-coverage = "^7.5"
-fakeredis = { version = "^2.21.1", extras = ['lua'] }
-Flake8-pyproject = "^1.2"
-pyyaml = "^6"
-freezegun = "^1.5"
+[dependency-groups]
+dev = [
+    "time-machine>=2.16.0,<3",
+    "ruff>=0.11",
+    "coverage~=7.6",
+    "fakeredis~=2.28",
+    "pyyaml>=6,<7",
+]
+[tool.hatch.build.targets.sdist]
+include = ["scheduler"]
-[tool.poetry.extras]
-yaml = ["pyyaml"]
+[tool.hatch.build.targets.wheel]
+include = ["scheduler"]
-[tool.flake8]
-max-line-length = 119
+[tool.ruff]
+line-length = 120
 exclude = [
     'scheduler/migrations',
+    'testproject',
     '.venv',
     '.github',
-    '__pycache',
+    '__pycache__',
 ]
+
+[tool.ruff.format]
+quote-style = "double"
+indent-style = "space"
+skip-magic-trailing-comma = false
+line-ending = "auto"
diff --git a/scheduler/__init__.py b/scheduler/__init__.py
index 4675745..81ea954 100644
--- a/scheduler/__init__.py
+++ b/scheduler/__init__.py
@@ -1,5 +1,9 @@
 import importlib.metadata

-__version__ = importlib.metadata.version('django-tasks-scheduler')
+__version__ = importlib.metadata.version("django-tasks-scheduler")

-from .decorators import job  # noqa: F401
+__all__ = [
+    "job",
+]
+
+from scheduler.decorators import job
diff --git a/scheduler/admin/__init__.py b/scheduler/admin/__init__.py
index 5b0fa97..2e7a700 100644
--- a/scheduler/admin/__init__.py
+++ b/scheduler/admin/__init__.py
@@ -1,2 +1,8 @@
-from .task_models import TaskAdmin  # noqa: F401
-from .redis_models import QueueAdmin, WorkerAdmin  # noqa: F401
+from .ephemeral_models import QueueAdmin, WorkerAdmin
+from .task_admin import TaskAdmin
+
+__all__ = [
+    "QueueAdmin",
+    "WorkerAdmin",
+    "TaskAdmin",
+]
diff --git a/scheduler/admin/redis_models.py b/scheduler/admin/ephemeral_models.py
similarity index 59%
rename from scheduler/admin/redis_models.py
rename to scheduler/admin/ephemeral_models.py
index 846130a..15fddd1 100644
--- a/scheduler/admin/redis_models.py
+++ b/scheduler/admin/ephemeral_models.py
@@ -1,8 +1,7 @@
 from django.contrib import admin

 from scheduler import views
-from scheduler.models import Queue
-from scheduler.models.worker import Worker
+from scheduler.models.ephemeral_models import Queue, Worker


 class ImmutableAdmin(admin.ModelAdmin):
@@ -13,17 +12,14 @@ def has_change_permission(self, request, obj=None):
         return True

     def has_module_permission(self, request):
+        """Returns True if the given request has any permission in the given app label.
+
+        Can be overridden by the user in subclasses. In such case, it should return True if the given request has
+        permission to view the module on the admin index page and access the module's index page. Overriding it does
+        not restrict access to the add, change or delete views. Use `ModelAdmin.has_(add|change|delete)_permission` for
+        that.
         """
-        return True if the given request has any permission in the given
-        app label.
-
-        Can be overridden by the user in subclasses. In such case it should
-        return True if the given request has permission to view the module on
-        the admin index page and access the module's index page. Overriding it
-        does not restrict access to the add, change or delete views. Use
-        `ModelAdmin.has_(add|change|delete)_permission` for that.
-        """
-        return request.user.has_module_perms('django-tasks-scheduler')
+        return request.user.has_module_perms("django-tasks-scheduler")


 @admin.register(Queue)
@@ -41,4 +37,4 @@ class WorkerAdmin(ImmutableAdmin):
     def changelist_view(self, request, extra_context=None):
         """The 'change list' admin view for this model."""
-        return views.workers(request)
+        return views.workers_list(request)
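The renamed module keeps the same read-only admin pattern: broker-backed Queue and Worker objects get admin pages whose changelist is rendered by a custom view rather than the ORM. A minimal standalone sketch of that pattern; the `Stat` view and `myapp` module are hypothetical stand-ins, not part of this diff:

from django.contrib import admin


class ReadOnlyAdmin(admin.ModelAdmin):
    """Admin for objects that live in the broker, not the database."""

    def has_add_permission(self, request):
        return False  # nothing is persisted through the ORM, so adding makes no sense

    def has_change_permission(self, request, obj=None):
        return True  # required so the changelist renders at all

    def changelist_view(self, request, extra_context=None):
        # Delegate rendering entirely to a custom view, as WorkerAdmin does above
        from myapp import views  # hypothetical app

        return views.stats_view(request)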
"admin/scheduler/change_form.html" + actions = ["disable_selected", "enable_selected", "enqueue_job_now"] + inlines = [JobArgInline, JobKwargInline] + list_filter = ("enabled", "task_type", "queue") + list_display = ( + "enabled", + "name", + "function_string", + "is_scheduled", + "queue", + "task_schedule", + "next_run", + "successful_runs", + "last_successful_run", + "failed_runs", + "last_failed_run", + ) + list_display_links = ("name",) + readonly_fields = ( + "job_name", + "successful_runs", + "last_successful_run", + "failed_runs", + "last_failed_run", + ) + # radio_fields = {"task_type": admin.HORIZONTAL} + fieldsets = ( + ( + None, + dict( + fields=( + "name", + "callable", + ("enabled", "timeout", "result_ttl"), + "task_type", + ) + ), + ), + ( + None, + dict(fields=("scheduled_time",), classes=("tasktype-OnceTaskType",)), + ), + ( + None, + dict(fields=("cron_string",), classes=("tasktype-CronTaskType",)), + ), + ( + None, + dict( + fields=( + ( + "interval", + "interval_unit", + ), + "repeat", + ), + classes=("tasktype-RepeatableTaskType",), + ), + ), + (_("Queue settings"), dict(fields=(("queue", "at_front"), "job_name"))), + ( + _("Previous runs info"), + dict(fields=(("successful_runs", "last_successful_run"), ("failed_runs", "last_failed_run"))), + ), + ) + + @admin.display(description="Schedule") + def task_schedule(self, o: Task) -> str: + if o.task_type == TaskType.ONCE.value: + if timezone.is_naive(o.scheduled_time): + local_time = timezone.make_aware(o.scheduled_time, timezone.get_current_timezone()) + else: + local_time = timezone.localtime(o.scheduled_time) + return f"Run once: {formats.date_format(local_time, 'DATETIME_FORMAT')}" + elif o.task_type == TaskType.CRON.value: + return f"Cron: {o.cron_string}" + else: # if o.task_type == TaskType.REPEATABLE.value: + if o.interval is None or o.interval_unit is None: + return "" + return f"Repeatable: {o.interval} {o.get_interval_unit_display()}" + + @admin.display(description="Next run") + def next_run(self, o: Task) -> str: + return get_next_cron_time(o.cron_string) + + def change_view(self, request: HttpRequest, object_id, form_url="", extra_context=None): + extra = extra_context or {} + obj = self.get_object(request, object_id) + try: + execution_list = get_job_executions_for_task(obj.queue, obj) + except ConnectionErrorTypes as e: + logger.warn(f"Could not get job executions: {e}") + execution_list = list() + paginator = self.get_paginator(request, execution_list, SCHEDULER_CONFIG.EXECUTIONS_IN_PAGE) + page_number = request.GET.get("p", 1) + page_obj = paginator.get_page(page_number) + page_range = paginator.get_elided_page_range(page_obj.number) + + extra.update( + { + "pagination_required": paginator.count > SCHEDULER_CONFIG.EXECUTIONS_IN_PAGE, + "executions": page_obj, + "page_range": page_range, + "page_var": "p", + } + ) + + return super(TaskAdmin, self).change_view(request, object_id, form_url, extra_context=extra) + + def delete_queryset(self, request: HttpRequest, queryset: QuerySet) -> None: + for job in queryset: + job.unschedule() + super(TaskAdmin, self).delete_queryset(request, queryset) + + def delete_model(self, request: HttpRequest, obj: Task) -> None: + obj.unschedule() + super(TaskAdmin, self).delete_model(request, obj) + + @admin.action(description=_("Disable selected %(verbose_name_plural)s"), permissions=("change",)) + def disable_selected(self, request: HttpRequest, queryset: QuerySet) -> None: + rows_updated = 0 + for obj in queryset.filter(enabled=True).iterator(): + obj.enabled = False + 
+            obj.unschedule()
+            rows_updated += 1
+
+        level = messages.WARNING if not rows_updated else messages.INFO
+        self.message_user(
+            request, f"{get_message_bit(rows_updated)} successfully disabled and unscheduled.", level=level
+        )
+
+    @admin.action(description=_("Enable selected %(verbose_name_plural)s"), permissions=("change",))
+    def enable_selected(self, request: HttpRequest, queryset: QuerySet) -> None:
+        rows_updated = 0
+        for obj in queryset.filter(enabled=False).iterator():
+            obj.enabled = True
+            obj.save()
+            rows_updated += 1
+
+        level = messages.WARNING if not rows_updated else messages.INFO
+        self.message_user(request, f"{get_message_bit(rows_updated)} successfully enabled and scheduled.", level=level)
+
+    @admin.action(description="Enqueue now", permissions=("change",))
+    def enqueue_job_now(self, request: HttpRequest, queryset: QuerySet) -> None:
+        task_names = []
+        for task in queryset:
+            task.enqueue_to_run()
+            task_names.append(task.name)
+        self.message_user(
+            request,
+            f"The following jobs have been enqueued: {', '.join(task_names)}",
+        )
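The same helper the admin uses in `change_view` can also be used to inspect a task's execution history outside the admin, e.g. from a Django shell. A sketch assuming a configured "default" queue and an existing Task row; the task name is hypothetical:

from scheduler.admin.task_admin import get_job_executions_for_task
from scheduler.models import Task

task = Task.objects.get(name="nightly-report")  # hypothetical task name
executions = get_job_executions_for_task("default", task)
# The helper returns this task's JobModel entries, newest first
for job in executions:
    print(job.name, job.created_at)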
- """ - - save_on_top = True - change_form_template = 'admin/scheduler/change_form.html' - actions = ['disable_selected', 'enable_selected', 'enqueue_job_now', ] - inlines = [JobArgInline, JobKwargInline, ] - list_filter = ('enabled',) - list_display = ('enabled', 'name', 'job_id', 'function_string', 'is_scheduled', 'queue',) - list_display_links = ('name',) - readonly_fields = ('job_id',) - fieldsets = ( - (None, { - 'fields': ('name', 'callable', 'enabled', 'at_front',), - }), - (_('RQ Settings'), { - 'fields': ('queue', 'job_id',), - }), - ) - - def get_list_display(self, request): - if self.model.__name__ not in _LIST_DISPLAY_EXTRA: - raise ValueError(f'Unrecognized model {self.model}') - return TaskAdmin.list_display + _LIST_DISPLAY_EXTRA[self.model.__name__] - - def get_fieldsets(self, request, obj=None): - if self.model.__name__ not in _FIELDSET_EXTRA: - raise ValueError(f'Unrecognized model {self.model}') - return TaskAdmin.fieldsets + ((_('Scheduling'), { - 'fields': _FIELDSET_EXTRA[self.model.__name__], - }),) - - @admin.display(description='Next run') - def next_run(self, o: CronTask): - return tools.get_next_cron_time(o.cron_string) - - def change_view(self, request, object_id, form_url='', extra_context=None): - extra = extra_context or {} - obj = self.get_object(request, object_id) - try: - execution_list = get_job_executions(obj.queue, obj) - except redis.ConnectionError as e: - logger.warn(f'Could not get job executions: {e}') - execution_list = list() - paginator = self.get_paginator(request, execution_list, SCHEDULER_CONFIG['EXECUTIONS_IN_PAGE']) - page_number = request.GET.get('p', 1) - page_obj = paginator.get_page(page_number) - page_range = paginator.get_elided_page_range(page_obj.number) - - extra.update({ - "pagination_required": paginator.count > SCHEDULER_CONFIG['EXECUTIONS_IN_PAGE'], - 'executions': page_obj, - 'page_range': page_range, - 'page_var': 'p', - }) - - return super(TaskAdmin, self).change_view( - request, object_id, form_url, extra_context=extra) - - def delete_queryset(self, request, queryset): - for job in queryset: - job.unschedule() - super(TaskAdmin, self).delete_queryset(request, queryset) - - def delete_model(self, request, obj): - obj.unschedule() - super(TaskAdmin, self).delete_model(request, obj) - - @admin.action(description=_("Disable selected %(verbose_name_plural)s"), permissions=('change',)) - def disable_selected(self, request, queryset): - rows_updated = 0 - for obj in queryset.filter(enabled=True).iterator(): - obj.enabled = False - obj.unschedule() - rows_updated += 1 - - message_bit = "1 job was" if rows_updated == 1 else f"{rows_updated} jobs were" - - level = messages.WARNING if not rows_updated else messages.INFO - self.message_user(request, f"{message_bit} successfully disabled and unscheduled.", level=level) - - @admin.action(description=_("Enable selected %(verbose_name_plural)s"), permissions=('change',)) - def enable_selected(self, request, queryset): - rows_updated = 0 - for obj in queryset.filter(enabled=False).iterator(): - obj.enabled = True - obj.save() - rows_updated += 1 - - message_bit = "1 job was" if rows_updated == 1 else f"{rows_updated} jobs were" - level = messages.WARNING if not rows_updated else messages.INFO - self.message_user(request, f"{message_bit} successfully enabled and scheduled.", level=level) - - @admin.action(description="Enqueue now", permissions=('change',)) - def enqueue_job_now(self, request, queryset): - task_names = [] - for task in queryset: - task.enqueue_to_run() - 
-            task_names.append(task.name)
-        self.message_user(request, f"The following jobs have been enqueued: {', '.join(task_names)}", )
diff --git a/scheduler/apps.py b/scheduler/apps.py
index 39efa9c..3032280 100644
--- a/scheduler/apps.py
+++ b/scheduler/apps.py
@@ -3,12 +3,9 @@ class SchedulerConfig(AppConfig):
-    default_auto_field = 'django.db.models.AutoField'
-    name = 'scheduler'
-    verbose_name = _('Tasks Scheduler')
+    default_auto_field = "django.db.models.AutoField"
+    name = "scheduler"
+    verbose_name = _("Tasks Scheduler")

     def ready(self):
-        from scheduler.models import BaseTask
-        from scheduler.settings import QUEUES
-
-        BaseTask.QUEUES = [(queue, queue) for queue in QUEUES.keys()]
+        pass
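With the per-flavor models deleted above, a single `Task` model with a `task_type` discriminator covers once/cron/repeatable schedules. A sketch of creating tasks programmatically, using the field names visible in the new admin's fieldsets (`scheduled_time`, `cron_string`, `interval`/`interval_unit`); exact model defaults and required fields may differ, and the callables are hypothetical dotted paths:

from datetime import timedelta

from django.utils import timezone
from scheduler.models import Task, TaskType

# A cron-style schedule: runs every day at 02:00
Task.objects.create(
    name="nightly-report",
    callable="reports.jobs.build_report",  # hypothetical dotted path
    task_type=TaskType.CRON.value,
    cron_string="0 2 * * *",
    queue="default",
)

# A one-shot schedule: runs once, an hour from now
Task.objects.create(
    name="one-off-cleanup",
    callable="reports.jobs.cleanup",  # hypothetical dotted path
    task_type=TaskType.ONCE.value,
    scheduled_time=timezone.now() + timedelta(hours=1),
    queue="default",
)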
diff --git a/scheduler/decorators.py b/scheduler/decorators.py
index f78b467..72e15cb 100644
--- a/scheduler/decorators.py
+++ b/scheduler/decorators.py
@@ -1,42 +1,98 @@
-from scheduler import settings
-from .queues import get_queue, QueueNotFoundError
-from .rq_classes import rq_job_decorator
-
-
-def job(*args, **kwargs):
-    """
-    The same as rq package's job decorator, but it automatically works out
-    the ``connection`` argument from SCHEDULER_QUEUES.
-
-    And also, it allows simplified ``@job`` syntax to put a job into the default queue.
-
-    """
-    if len(args) == 0:
-        func = None
-        queue = 'default'
-    else:
-        if callable(args[0]):
-            func = args[0]
-            queue = 'default'
-        else:
-            func = None
-            queue = args[0]
-            args = args[1:]
-
-    if isinstance(queue, str):
-        try:
-            queue = get_queue(queue)
-            if 'connection' not in kwargs:
-                kwargs['connection'] = queue.connection
-        except KeyError:
-            raise QueueNotFoundError(f'Queue {queue} does not exist')
-
-    config = settings.SCHEDULER_CONFIG
-
-    kwargs.setdefault('result_ttl', config.get('DEFAULT_RESULT_TTL'))
-    kwargs.setdefault('timeout', config.get('DEFAULT_TIMEOUT'))
-
-    decorator = rq_job_decorator(queue, *args, **kwargs)
-    if func:
-        return decorator(func)
-    return decorator
+from functools import wraps
+from typing import Any, Callable, Dict, Optional, Union, List
+
+from scheduler.helpers.callback import Callback
+from scheduler.types import ConnectionType
+
+JOB_METHODS_LIST: List[str] = list()
+
+
+class job:
+    def __init__(
+        self,
+        queue: Union["Queue", str, None] = None,  # noqa: F821
+        connection: Optional[ConnectionType] = None,
+        timeout: Optional[int] = None,
+        result_ttl: Optional[int] = None,
+        job_info_ttl: Optional[int] = None,
+        at_front: bool = False,
+        meta: Optional[Dict[Any, Any]] = None,
+        description: Optional[str] = None,
+        on_failure: Optional[Union["Callback", Callable[..., Any]]] = None,
+        on_success: Optional[Union["Callback", Callable[..., Any]]] = None,
+        on_stopped: Optional[Union["Callback", Callable[..., Any]]] = None,
+    ):
+        """A decorator that adds a ``delay`` method to the decorated function, which in turn creates a job when
+        called. Accepts an optional ``queue`` argument that can be either a ``Queue`` instance or a string
+        denoting the queue name (defaulting to the ``default`` queue). For example::
+
+            >>> @job(queue='default')
+            >>> def simple_add(x, y):
+            >>>     return x + y
+            >>> ...
+            >>> # Puts `simple_add` function into queue
+            >>> simple_add.delay(1, 2)
+
+        :param queue: The queue to use, can be the Queue class itself, or the queue name (str)
+        :type queue: Union['Queue', str]
+        :param connection: Broker Connection
+        :param timeout: Job timeout
+        :param result_ttl: Result time to live
+        :param job_info_ttl: Time to live for job info
+        :param at_front: Whether to enqueue the job at front of the queue
+        :param meta: Arbitrary metadata about the job
+        :param description: Job description
+        :param on_failure: Callable to run on failure
+        :param on_success: Callable to run on success
+        :param on_stopped: Callable to run when stopped
+        """
+        from scheduler.helpers.queues import get_queue
+
+        if queue is None:
+            queue = "default"
+        self.queue = get_queue(queue) if isinstance(queue, str) else queue
+        self.connection = connection
+        self.timeout = timeout
+        self.result_ttl = result_ttl
+        self.job_info_ttl = job_info_ttl
+        self.meta = meta
+        self.at_front = at_front
+        self.description = description
+        self.on_success = on_success
+        self.on_failure = on_failure
+        self.on_stopped = on_stopped
+
+    def __call__(self, f):
+        @wraps(f)
+        def delay(*args, **kwargs):
+            from scheduler.helpers.queues import get_queue
+
+            queue = get_queue(self.queue) if isinstance(self.queue, str) else self.queue
+
+            job_name = kwargs.pop("job_name", None)
+            at_front = kwargs.pop("at_front", False)
+
+            if not at_front:
+                at_front = self.at_front
+
+            return queue.create_and_enqueue_job(
+                f,
+                args=args,
+                kwargs=kwargs,
+                timeout=self.timeout,
+                result_ttl=self.result_ttl,
+                job_info_ttl=self.job_info_ttl,
+                name=job_name,
+                at_front=at_front,
+                meta=self.meta,
+                description=self.description,
+                on_failure=self.on_failure,
+                on_success=self.on_success,
+                on_stopped=self.on_stopped,
+                when=None,
+            )
+
+        JOB_METHODS_LIST.append(f"{f.__module__}.{f.__name__}")
+        f.delay = delay
+        return f
diff --git a/scheduler/helpers/__init__.py b/scheduler/helpers/__init__.py
new file mode 100644
index 0000000..e69de29
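Usage sketch for the rewritten decorator: options are fixed at decoration time, while `delay()` additionally accepts per-call `job_name` and `at_front` overrides (popped from kwargs in `__call__` above). Assumes a configured "default" queue:

from scheduler import job


@job(queue="default", result_ttl=300)
def send_email(address: str) -> None:
    ...


send_email.delay("user@example.com")  # plain enqueue
send_email.delay("vip@example.com", at_front=True)  # jump to the front of the queue
send_email.delay("ops@example.com", job_name="email-ops")  # explicit job name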
diff --git a/scheduler/helpers/callback.py b/scheduler/helpers/callback.py
new file mode 100644
index 0000000..4b3c96d
--- /dev/null
+++ b/scheduler/helpers/callback.py
@@ -0,0 +1,37 @@
+import inspect
+from typing import Union, Callable, Any, Optional
+
+from scheduler.helpers.utils import callable_func
+from scheduler.timeouts import JobTimeoutException
+
+
+class CallbackSetupError(Exception):
+    pass
+
+
+class Callback:
+    def __init__(self, func: Union[str, Callable[..., Any]], timeout: Optional[int] = None):
+        from scheduler.settings import SCHEDULER_CONFIG
+
+        self.timeout = timeout or SCHEDULER_CONFIG.CALLBACK_TIMEOUT
+        if not isinstance(self.timeout, int) or self.timeout < 0:
+            raise CallbackSetupError(f"Callback `timeout` must be a positive int, but received {self.timeout}")
+        if not isinstance(func, str) and not inspect.isfunction(func) and not inspect.isbuiltin(func):
+            raise CallbackSetupError(f"Callback `func` must be a string or function, received {func}")
+        if isinstance(func, str):
+            try:
+                func_str = func
+                func = callable_func(func)
+            except (TypeError, AttributeError, ModuleNotFoundError, ValueError):
+                raise CallbackSetupError(f"Callback `func` is not callable: {func_str}")
+        self.func: Callable[..., Any] = func
+
+    @property
+    def name(self) -> str:
+        return f"{self.func.__module__}.{self.func.__qualname__}"
+
+    def __call__(self, *args, **kwargs):
+        from scheduler.settings import SCHEDULER_CONFIG
+
+        with SCHEDULER_CONFIG.DEATH_PENALTY_CLASS(self.timeout, JobTimeoutException):
+            return self.func(*args, **kwargs)
diff --git a/scheduler/helpers/queues/__init__.py b/scheduler/helpers/queues/__init__.py
new file mode 100644
index 0000000..4a77cf3
--- /dev/null
+++ b/scheduler/helpers/queues/__init__.py
@@ -0,0 +1,10 @@
+__all__ = [
+    "Queue",
+    "InvalidJobOperation",
+    "get_queue",
+    "get_all_workers",
+    "perform_job",
+]
+
+from .getters import get_queue, get_all_workers
+from .queue_logic import Queue, InvalidJobOperation, perform_job
diff --git a/scheduler/helpers/queues/getters.py b/scheduler/helpers/queues/getters.py
new file mode 100644
index 0000000..d491a72
--- /dev/null
+++ b/scheduler/helpers/queues/getters.py
@@ -0,0 +1,72 @@
+from typing import Set
+
+from scheduler.redis_models.worker import WorkerModel
+from scheduler.settings import SCHEDULER_CONFIG, get_queue_names, get_queue_configuration, QueueConfiguration, logger
+from scheduler.types import ConnectionErrorTypes, BrokerMetaData, Broker
+from .queue_logic import Queue
+
+
+_BAD_QUEUE_CONFIGURATION = set()
+
+
+def _get_connection(config: QueueConfiguration, use_strict_broker=False):
+    """Returns a Broker connection to use based on parameters in SCHEDULER_QUEUES"""
+    if SCHEDULER_CONFIG.BROKER == Broker.FAKEREDIS:
+        import fakeredis
+
+        broker_cls = fakeredis.FakeRedis if not use_strict_broker else fakeredis.FakeStrictRedis
+    else:
+        broker_cls = BrokerMetaData[(SCHEDULER_CONFIG.BROKER, use_strict_broker)].connection_type
+    if config.URL:
+        return broker_cls.from_url(config.URL, db=config.DB, **(config.CONNECTION_KWARGS or {}))
+    if config.UNIX_SOCKET_PATH:
+        return broker_cls(unix_socket_path=config.UNIX_SOCKET_PATH, db=config.DB)
+
+    if config.SENTINELS:
+        connection_kwargs = {
+            "db": config.DB,
+            "password": config.PASSWORD,
+            "username": config.USERNAME,
+        }
+        connection_kwargs.update(config.CONNECTION_KWARGS or {})
+        sentinel_kwargs = config.SENTINEL_KWARGS or {}
+        SentinelClass = BrokerMetaData[(SCHEDULER_CONFIG.BROKER, use_strict_broker)].sentinel_type
+        sentinel = SentinelClass(config.SENTINELS, sentinel_kwargs=sentinel_kwargs, **connection_kwargs)
+        return sentinel.master_for(
+            service_name=config.MASTER_NAME,
+            redis_class=broker_cls,
+        )
+
+    return broker_cls(
+        host=config.HOST,
+        port=config.PORT,
+        db=config.DB,
+        username=config.USERNAME,
+        password=config.PASSWORD,
+        **(config.CONNECTION_KWARGS or {}),
+    )
+
+
+def get_queue(name="default") -> Queue:
+    """Returns a `Queue` using parameters defined in `SCHEDULER_QUEUES`"""
+    queue_settings = get_queue_configuration(name)
+    is_async = queue_settings.ASYNC
+    connection = _get_connection(queue_settings)
+    return Queue(name=name, connection=connection, is_async=is_async)
+
+
+def get_all_workers() -> Set[WorkerModel]:
+    queue_names = get_queue_names()
+
+    workers_set: Set[WorkerModel] = set()
+    for queue_name in queue_names:
+        if queue_name in _BAD_QUEUE_CONFIGURATION:
+            continue
+        connection = _get_connection(get_queue_configuration(queue_name))
+        try:
+            curr_workers: Set[WorkerModel] = set(WorkerModel.all(connection=connection))
+            workers_set.update(curr_workers)
+        except ConnectionErrorTypes as e:
+            logger.error(f"Could not connect for queue {queue_name}: {e}")
+            _BAD_QUEUE_CONFIGURATION.add(queue_name)
+    return workers_set
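A short sketch of how the getters are meant to be used: `get_queue` builds a `Queue` from the `SCHEDULER_QUEUES` configuration, and `get_all_workers` aggregates `WorkerModel` entries across all configured queues, remembering queues whose connection failed so they are skipped next time. The `name` attribute on workers is assumed from typical usage, not shown in this diff:

from scheduler.helpers.queues import get_queue, get_all_workers

queue = get_queue("default")  # assumes a "default" queue is configured
print(len(queue))  # __len__ sums the counts of all job registries

for worker in get_all_workers():
    print(worker.name)  # assumed WorkerModel attribute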
diff --git a/scheduler/helpers/queues/queue_logic.py b/scheduler/helpers/queues/queue_logic.py
new file mode 100644
index 0000000..d18c3d8
--- /dev/null
+++ b/scheduler/helpers/queues/queue_logic.py
@@ -0,0 +1,447 @@
+import asyncio
+import sys
+import traceback
+from datetime import datetime
+from typing import Dict, List, Optional, Tuple, Union, Any
+
+from redis import WatchError
+
+from scheduler.helpers.callback import Callback
+from scheduler.helpers.utils import utcnow, current_timestamp
+from scheduler.redis_models import (
+    JobNamesRegistry,
+    FinishedJobRegistry,
+    ActiveJobRegistry,
+    FailedJobRegistry,
+    CanceledJobRegistry,
+    ScheduledJobRegistry,
+    QueuedJobRegistry,
+)
+from scheduler.redis_models import JobStatus, SchedulerLock, Result, ResultType, JobModel
+from scheduler.settings import logger, SCHEDULER_CONFIG
+from scheduler.types import ConnectionType, FunctionReferenceType, Self
+
+
+class InvalidJobOperation(Exception):
+    pass
+
+
+class NoSuchJobError(Exception):
+    pass
+
+
+def perform_job(job_model: JobModel, connection: ConnectionType) -> Any:  # noqa
+    """The main execution method. Invokes the job function with the job arguments.
+
+    :returns: The job's return value
+    """
+    job_model.persist(connection=connection)
+    _job_stack.append(job_model)
+
+    try:
+        result = job_model.func(*job_model.args, **job_model.kwargs)
+        if asyncio.iscoroutine(result):
+            loop = asyncio.new_event_loop()
+            coro_result = loop.run_until_complete(result)
+            result = coro_result
+        if job_model.success_callback:
+            job_model.success_callback(job_model, connection, result)  # type: ignore
+        return result
+    except:
+        if job_model.failure_callback:
+            job_model.failure_callback(job_model, connection, *sys.exc_info())  # type: ignore
+        raise
+    finally:
+        assert job_model is _job_stack.pop()
+
+
+_job_stack = []
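Since `perform_job` above runs coroutine results to completion on a fresh event loop, `async def` functions can be enqueued like any other callable. A sketch, assuming a configured "default" queue; the URL is hypothetical:

import asyncio

from scheduler import job


@job()  # defaults to the "default" queue
async def poll_service(url: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for real async I/O
    return f"polled {url}"


poll_service.delay("https://example.com/health")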
+ """ + before_score = timestamp or current_timestamp() + self.queued_job_registry.compact() + started_jobs: List[Tuple[str, float]] = self.active_job_registry.get_job_names_before( + self.connection, before_score + ) + + with self.connection.pipeline() as pipeline: + for job_name, job_score in started_jobs: + job = JobModel.get(job_name, connection=self.connection) + if job is None or job.failure_callback is None or job_score + job.timeout > before_score: + continue + + logger.debug(f"Running failure callbacks for {job.name}") + try: + job.failure_callback(job, self.connection, traceback.extract_stack()) + except Exception: # noqa + logger.exception(f"Job {self.name}: error while executing failure callback") + raise + + else: + logger.warning( + f"Queue cleanup: Moving job to {self.failed_job_registry.key} (due to AbandonedJobError)" + ) + exc_string = ( + f"Moved to {self.failed_job_registry.key}, due to AbandonedJobError, at {datetime.now()}" + ) + job.status = JobStatus.FAILED + score = current_timestamp() + SCHEDULER_CONFIG.DEFAULT_FAILURE_TTL + Result.create( + connection=pipeline, + job_name=job.name, + worker_name=job.worker_name, + _type=ResultType.FAILED, + ttl=SCHEDULER_CONFIG.DEFAULT_FAILURE_TTL, + exc_string=exc_string, + ) + self.failed_job_registry.add(pipeline, job.name, score) + job.expire(connection=pipeline, ttl=SCHEDULER_CONFIG.DEFAULT_FAILURE_TTL) + job.save(connection=pipeline) + + for registry in self.REGISTRIES.values(): + getattr(self, registry).cleanup(connection=self.connection, timestamp=before_score) + pipeline.execute() + + def first_queued_job_name(self) -> Optional[str]: + return self.queued_job_registry.get_first() + + @property + def count(self) -> int: + """Returns a count of all messages in the queue.""" + res = 0 + for registry in self.REGISTRIES.values(): + res += getattr(self, registry).count(connection=self.connection) + return res + + def get_registry(self, name: str) -> Union[None, JobNamesRegistry]: + name = name.lower() + if name in Queue.REGISTRIES: + return getattr(self, Queue.REGISTRIES[name]) + return None + + def get_all_job_names(self) -> List[str]: + res = list() + res.extend(self.queued_job_registry.all()) + res.extend(self.finished_job_registry.all()) + res.extend(self.active_job_registry.all()) + res.extend(self.failed_job_registry.all()) + res.extend(self.scheduled_job_registry.all()) + res.extend(self.canceled_job_registry.all()) + return res + + def get_all_jobs(self) -> List[JobModel]: + job_names = self.get_all_job_names() + return JobModel.get_many(job_names, connection=self.connection) + + def create_and_enqueue_job( + self, + func: FunctionReferenceType, + args: Union[Tuple, List, None] = None, + kwargs: Optional[Dict] = None, + when: Optional[datetime] = None, + timeout: Optional[int] = None, + result_ttl: Optional[int] = None, + job_info_ttl: Optional[int] = None, + description: Optional[str] = None, + name: Optional[str] = None, + at_front: bool = False, + meta: Optional[Dict] = None, + on_success: Optional[Callback] = None, + on_failure: Optional[Callback] = None, + on_stopped: Optional[Callback] = None, + task_type: Optional[str] = None, + scheduled_task_id: Optional[int] = None, + pipeline: Optional[ConnectionType] = None, + ) -> JobModel: + """Creates a job to represent the delayed function call and enqueues it. 
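A usage sketch for this API (send_report and its module are hypothetical; any real callable must be importable by the worker):

    from datetime import datetime, timedelta, timezone

    from scheduler.helpers.queues import get_queue

    from myapp.tasks import send_report  # hypothetical importable task function

    queue = get_queue("default")
    # when=None enqueues the job for immediate pickup by a worker
    job = queue.create_and_enqueue_job(send_report, args=("admin@example.com",))
    # A datetime routes the job through the scheduled-jobs registry instead
    when = datetime.now(timezone.utc) + timedelta(hours=1)
    scheduled = queue.create_and_enqueue_job(send_report, args=("admin@example.com",), when=when)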
+ :param when: When to schedule the job (None to enqueue immediately) + :param func: The reference to the function + :param args: The `*args` to pass to the function + :param kwargs: The `**kwargs` to pass to the function + :param timeout: Function timeout + :param result_ttl: Result time to live + :param job_info_ttl: Time to live + :param description: The job description + :param name: The job name + :param at_front: Whether to enqueue the job at the front + :param meta: Metadata to attach to the job + :param on_success: Callback for on success + :param on_failure: Callback for on failure + :param on_stopped: Callback for on stopped + :param task_type: The task type + :param scheduled_task_id: The scheduled task id + :param pipeline: The Broker Pipeline + :returns: The enqueued Job + """ + status = JobStatus.QUEUED if when is None else JobStatus.SCHEDULED + job_model = JobModel.create( + connection=self.connection, + func=func, + args=args, + kwargs=kwargs, + result_ttl=result_ttl, + job_info_ttl=job_info_ttl, + description=description, + name=name, + meta=meta, + status=status, + timeout=timeout, + on_success=on_success, + on_failure=on_failure, + on_stopped=on_stopped, + queue_name=self.name, + task_type=task_type, + scheduled_task_id=scheduled_task_id, + ) + if when is None: + job_model = self.enqueue_job(job_model, connection=pipeline, at_front=at_front) + elif isinstance(when, datetime): + job_model.save(connection=self.connection) + self.scheduled_job_registry.schedule(self.connection, job_model.name, when) + else: + raise TypeError(f"Invalid type for when=`{when}`") + return job_model + + def job_handle_success( + self, job: JobModel, result: Any, job_info_ttl: int, result_ttl: int, connection: ConnectionType + ): + """Saves and cleanup job after successful execution""" + job.after_execution( + job_info_ttl, + JobStatus.FINISHED, + prev_registry=self.active_job_registry, + new_registry=self.finished_job_registry, + connection=connection, + ) + Result.create( + connection, + job_name=job.name, + worker_name=job.worker_name, + _type=ResultType.SUCCESSFUL, + return_value=result, + ttl=result_ttl, + ) + + def job_handle_failure(self, status: JobStatus, job: JobModel, exc_string: str, connection: ConnectionType): + # Does not set job status since the job might be stopped + job.after_execution( + SCHEDULER_CONFIG.DEFAULT_FAILURE_TTL, + status, + prev_registry=self.active_job_registry, + new_registry=self.failed_job_registry, + connection=connection, + ) + Result.create( + connection, + job.name, + job.worker_name, + ResultType.FAILED, + SCHEDULER_CONFIG.DEFAULT_FAILURE_TTL, + exc_string=exc_string, + ) + + def run_sync(self, job: JobModel) -> JobModel: + """Run a job synchronously, meaning on the same process the method was called.""" + job.prepare_for_execution("sync", self.active_job_registry, self.connection) + try: + result = perform_job(job, self.connection) + + with self.connection.pipeline() as pipeline: + self.job_handle_success( + job, result=result, job_info_ttl=job.job_info_ttl, result_ttl=job.success_ttl, connection=pipeline + ) + + pipeline.execute() + except Exception as e: # noqa + logger.warning(f"Job {job.name} failed with exception: {e}") + with self.connection.pipeline() as pipeline: + exc_string = "".join(traceback.format_exception(*sys.exc_info())) + self.job_handle_failure(JobStatus.FAILED, job, exc_string, pipeline) + pipeline.execute() + return job + + @classmethod + def dequeue_any( + cls, + queues: List[Self], + timeout: Optional[int], + connection: 
Optional[ConnectionType] = None,
+    ) -> Tuple[Optional[JobModel], Optional[Self]]:
+        """Class method returning a Job instance at the front of the given set of Queues, where the order of the queues
+        is important.
+
+        When all the Queues are empty, depending on the `timeout` argument, either blocks execution of this function
+        for the duration of the timeout or until new messages arrive on any of the queues, or returns None.
+
+        :param queues: List of Queue objects
+        :param timeout: Timeout for the pop operation
+        :param connection: Broker Connection
+        :returns: Tuple of Job, Queue
+        """
+
+        while True:
+            registries = [q.queued_job_registry for q in queues]
+            for registry in registries:
+                registry.compact()
+
+            registry_key, job_name = QueuedJobRegistry.pop(connection, registries, timeout)
+            if job_name is None:
+                return None, None
+
+            queue = next(filter(lambda q: q.queued_job_registry.key == registry_key, queues), None)
+            if queue is None:
+                logger.warning(f"Could not find queue for registry key {registry_key} in queues")
+                return None, None
+
+            job = JobModel.get(job_name, connection=connection)
+            if job is None:
+                continue
+            return job, queue
+
+    def __repr__(self) -> str:
+        return f"{self.__class__.__name__}({self.name!r})"
+
+    def __str__(self) -> str:
+        return f"<{self.__class__.__name__} {self.name}>"
+
+    def _remove_from_registries(self, job_name: str, connection: ConnectionType) -> None:
+        """Removes the job from all registries besides failed_job_registry"""
+        self.finished_job_registry.delete(connection=connection, job_name=job_name)
+        self.scheduled_job_registry.delete(connection=connection, job_name=job_name)
+        self.active_job_registry.delete(connection=connection, job_name=job_name)
+        self.canceled_job_registry.delete(connection=connection, job_name=job_name)
+        self.queued_job_registry.delete(connection=connection, job_name=job_name)
+
+    def cancel_job(self, job_name: str) -> None:
+        """Cancels the given job, which will prevent the job from ever running (or being inspected).
+
+        This method merely exists as a high-level API call to cancel jobs without worrying about the internals required
+        to implement job cancellation.
+
+        :param job_name: The job name to cancel.
+        :raises NoSuchJobError: If the job does not exist.
+        :raises InvalidJobOperation: If the job has already been canceled.
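A sketch of the cancellation flow described here, using the exported InvalidJobOperation (the job name is hypothetical):

    from scheduler.helpers.queues import InvalidJobOperation, get_queue

    queue = get_queue("default")
    try:
        queue.cancel_job("default:my_task:0a1b2c3d4e")  # hypothetical job name
    except InvalidJobOperation:
        pass  # already canceled; nothing left to do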
+ """ + job = JobModel.get(job_name, connection=self.connection) + if job is None: + raise NoSuchJobError(f"No such job: {job_name}") + if job.status == JobStatus.CANCELED: + raise InvalidJobOperation(f"Cannot cancel already canceled job: {job.name}") + + pipe = self.connection.pipeline() + new_status = JobStatus.CANCELED if job.status == JobStatus.QUEUED else JobStatus.STOPPED + + while True: + try: + job.set_field("status", new_status, connection=pipe) + self._remove_from_registries(job_name, connection=pipe) + pipe.execute() + if new_status == JobStatus.CANCELED: + self.canceled_job_registry.add(pipe, job_name, 0) + else: + self.finished_job_registry.add( + pipe, job_name, current_timestamp() + SCHEDULER_CONFIG.DEFAULT_FAILURE_TTL + ) + pipe.execute() + break + except WatchError: + # if the pipeline comes from the caller, we re-raise the exception as it is the responsibility of the + # caller to handle it + raise + + def delete_job(self, job_name: str, expire_job_model: bool = True) -> None: + """Deletes the given job from the queue and all its registries""" + pipe = self.connection.pipeline() + + while True: + try: + self._remove_from_registries(job_name, connection=pipe) + self.failed_job_registry.delete(connection=pipe, job_name=job_name) + if expire_job_model: + job_model = JobModel.get(job_name, connection=self.connection) + if job_model is not None: + job_model.expire(ttl=job_model.job_info_ttl, connection=pipe) + pipe.execute() + break + except WatchError: + pass + + def enqueue_job( + self, job_model: JobModel, connection: Optional[ConnectionType] = None, at_front: bool = False + ) -> JobModel: + """Enqueues a job for delayed execution without checking dependencies. + + If Queue is instantiated with is_async=False, job is executed immediately. 
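A self-contained sketch of the is_async=False path, assuming a fakeredis connection (mirroring the FAKEREDIS broker option in getters.py); add is a stand-in task function:

    import fakeredis

    from scheduler.helpers.queues import Queue

    def add(a: int, b: int) -> int:  # stand-in; a real task callable must be importable
        return a + b

    queue = Queue(connection=fakeredis.FakeRedis(), name="default", is_async=False)
    # With is_async=False, enqueue_job() runs the job inline via run_sync(),
    # so the job has finished by the time create_and_enqueue_job() returns.
    job = queue.create_and_enqueue_job(add, args=(1, 2))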
+ :param job_model: The job redis model + :param connection: The Redis Pipeline + :param at_front: Whether to enqueue the job at the front + + :returns: The enqueued JobModel + """ + + pipe = connection if connection is not None else self.connection.pipeline() + job_model.started_at = None + job_model.ended_at = None + job_model.status = JobStatus.QUEUED + job_model.enqueued_at = utcnow() + job_model.save(connection=pipe) + + if self._is_async: + if at_front: + score = current_timestamp() + else: + score = self.queued_job_registry.get_last_timestamp() or current_timestamp() + self.scheduled_job_registry.delete(connection=pipe, job_name=job_model.name) + self.queued_job_registry.add(connection=pipe, score=score, job_name=job_model.name) + pipe.execute() + logger.debug(f"Pushed job {job_model.name} into {self.name} queued-jobs registry") + else: # sync mode + pipe.execute() + job_model = self.run_sync(job_model) + job_model.expire(ttl=job_model.job_info_ttl, connection=pipe) + pipe.execute() + + return job_model diff --git a/scheduler/helpers/utils.py b/scheduler/helpers/utils.py new file mode 100644 index 0000000..dae312c --- /dev/null +++ b/scheduler/helpers/utils.py @@ -0,0 +1,23 @@ +import datetime +import importlib +import time +from typing import Callable + + +def current_timestamp() -> int: + """Returns current UTC timestamp in secs""" + return int(time.time()) + + +def utcnow() -> datetime.datetime: + """Return now in UTC""" + return datetime.datetime.now(datetime.timezone.utc) + + +def callable_func(callable_str: str) -> Callable: + path = callable_str.split(".") + module = importlib.import_module(".".join(path[:-1])) + func = getattr(module, path[-1]) + if callable(func) is False: + raise TypeError(f"'{callable_str}' is not callable") + return func diff --git a/scheduler/management/commands/delete_failed_executions.py b/scheduler/management/commands/delete_failed_executions.py index cf3f59e..6f41980 100644 --- a/scheduler/management/commands/delete_failed_executions.py +++ b/scheduler/management/commands/delete_failed_executions.py @@ -1,31 +1,29 @@ import click from django.core.management.base import BaseCommand -from scheduler.queues import get_queue -from scheduler.rq_classes import JobExecution +from scheduler.helpers.queues import get_queue +from scheduler.redis_models import JobModel class Command(BaseCommand): - help = 'Delete failed jobs from Django queue.' + help = "Delete failed jobs from Django queue." def add_arguments(self, parser): - parser.add_argument( - '--queue', '-q', dest='queue', default='default', - help='Specify the queue [default]') - parser.add_argument('-f', '--func', help='optional job function name, e.g. "app.tasks.func"') - parser.add_argument('--dry-run', action='store_true', help='Do not actually delete failed jobs') + parser.add_argument("--queue", "-q", dest="queue", default="default", help="Specify the queue [default]") + parser.add_argument("-f", "--func", help='optional job function name, e.g. 
"app.tasks.func"') + parser.add_argument("--dry-run", action="store_true", help="Do not actually delete failed jobs") def handle(self, *args, **options): - queue = get_queue(options.get('queue', 'default')) - job_ids = queue.failed_job_registry.get_job_ids() - jobs = JobExecution.fetch_many(job_ids, connection=queue.connection) - func_name = options.get('func', None) + queue = get_queue(options.get("queue", "default")) + job_names = queue.failed_job_registry.all() + jobs = JobModel.get_many(job_names, connection=queue.connection) + func_name = options.get("func", None) if func_name is not None: jobs = [job for job in jobs if job.func_name == func_name] - dry_run = options.get('dry_run', False) - click.echo(f'Found {len(jobs)} failed jobs') - for job in jobs: - click.echo(f'Deleting {job.id}') + dry_run = options.get("dry_run", False) + click.echo(f"Found {len(jobs)} failed jobs") + for job in job_names: + click.echo(f"Deleting {job}") if not dry_run: - job.delete() - click.echo(f'Deleted {len(jobs)} failed jobs') + queue.delete_job(job) + click.echo(f"Deleted {len(jobs)} failed jobs") diff --git a/scheduler/management/commands/export.py b/scheduler/management/commands/export.py index 594d853..85c3c9d 100644 --- a/scheduler/management/commands/export.py +++ b/scheduler/management/commands/export.py @@ -1,58 +1,59 @@ import sys import click -from django.apps import apps from django.core.management.base import BaseCommand -from scheduler.tools import MODEL_NAMES +from scheduler.models import Task class Command(BaseCommand): - """ - Export all scheduled jobs - """ + """Export all scheduled jobs""" + help = __doc__ def add_arguments(self, parser): parser.add_argument( - '-o', '--output', - action='store', - choices=['json', 'yaml'], - default='json', - dest='format', - help='format of output', + "-o", + "--output", + action="store", + choices=["json", "yaml"], + default="json", + dest="format", + help="format of output", ) parser.add_argument( - '-e', '--enabled', - action='store_true', - dest='enabled', - help='Export only enabled jobs', + "-e", + "--enabled", + action="store_true", + dest="enabled", + help="Export only enabled jobs", ) parser.add_argument( - '-f', '--filename', - action='store', - dest='filename', - help='File name to load (otherwise writes to standard output)', + "-f", + "--filename", + action="store", + dest="filename", + help="File name to load (otherwise writes to standard output)", ) def handle(self, *args, **options): - file = open(options.get('filename'), 'w') if options.get("filename") else sys.stdout + file = open(options.get("filename"), "w") if options.get("filename") else sys.stdout res = list() - for model_name in MODEL_NAMES: - model = apps.get_model(app_label='scheduler', model_name=model_name) - jobs = model.objects.all() - if options.get('enabled'): - jobs = jobs.filter(enabled=True) - for job in jobs: - res.append(job.to_dict()) - if options.get("format") == 'json': + tasks = Task.objects.all() + if options.get("enabled"): + tasks = tasks.filter(enabled=True) + for task in tasks: + res.append(task.to_dict()) + + if options.get("format") == "json": import json - click.echo(json.dumps(res, indent=2), file=file) + + click.echo(json.dumps(res, indent=2, default=str), file=file) return - if options.get("format") == 'yaml': + if options.get("format") == "yaml": try: import yaml except ImportError: diff --git a/scheduler/management/commands/import.py b/scheduler/management/commands/import.py index c0ad01b..28007a2 100644 --- 
a/scheduler/management/commands/import.py
+++ b/scheduler/management/commands/import.py
@@ -1,104 +1,132 @@
 import sys
-from typing import Dict, Any
+from typing import Dict, Any, Optional

 import click
-from django.apps import apps
 from django.conf import settings
 from django.contrib.contenttypes.models import ContentType
 from django.core.management.base import BaseCommand
 from django.utils import timezone

-from scheduler.models import TaskArg, TaskKwarg
-from scheduler.tools import MODEL_NAMES
+from scheduler.models import TaskArg, TaskKwarg, Task
+from scheduler.models import TaskType


 def job_model_str(model_str: str) -> str:
-    if model_str.find('Job') == len(model_str) - 3:
-        return model_str[:-3] + 'Task'
+    if model_str.find("Job") == len(model_str) - 3:
+        return model_str[:-3] + "Task"
     return model_str


-def create_job_from_dict(job_dict: Dict[str, Any], update):
-    model = apps.get_model(app_label='scheduler', model_name=job_model_str(job_dict['model']))
-    existing_job = model.objects.filter(name=job_dict['name']).first()
-    if existing_job:
+def get_task_type(model_str: str) -> TaskType:
+    model_str = job_model_str(model_str)
+    try:
+        return TaskType(model_str)
+    except ValueError:
+        pass
+    if model_str == "CronTask":
+        return TaskType.CRON
+    elif model_str == "RepeatableTask":
+        return TaskType.REPEATABLE
+    elif model_str in {"ScheduledTask", "OnceTask"}:
+        return TaskType.ONCE
+    raise ValueError(f"Invalid model {model_str}")
+
+
+def create_task_from_dict(task_dict: Dict[str, Any], update: bool) -> Optional[Task]:
+    existing_task = Task.objects.filter(name=task_dict["name"]).first()
+    task_type = get_task_type(task_dict["model"])
+    if existing_task:
         if update:
-            click.echo(f'Found existing job "{existing_job}, removing it to be reinserted"')
-            existing_job.delete()
+            click.echo(f'Found existing task "{existing_task}", removing it to be reinserted')
+            existing_task.delete()
         else:
-            click.echo(f'Found existing job "{existing_job}", skipping')
-            return
-    kwargs = dict(job_dict)
-    del kwargs['model']
-    del kwargs['callable_args']
-    del kwargs['callable_kwargs']
-    if kwargs.get('scheduled_time', None):
-        target = timezone.datetime.fromisoformat(kwargs['scheduled_time'])
+            click.echo(f'Found existing task "{existing_task}", skipping')
+            return None
+    kwargs = dict(task_dict)
+    kwargs["task_type"] = task_type
+    del kwargs["model"]
+    del kwargs["callable_args"]
+    del kwargs["callable_kwargs"]
+    if kwargs.get("scheduled_time", None):
+        target = timezone.datetime.fromisoformat(kwargs["scheduled_time"])
         if not settings.USE_TZ and not timezone.is_naive(target):
             target = timezone.make_naive(target)
-        kwargs['scheduled_time'] = target
-    model_fields = set(map(lambda field: field.attname, model._meta.get_fields()))
-    keys_to_ignore = list(filter(lambda k: k not in model_fields, kwargs.keys()))
+        kwargs["scheduled_time"] = target
+    model_fields = filter(lambda field: hasattr(field, "attname"), Task._meta.get_fields())
+    model_fields = set(map(lambda field: field.attname, model_fields))
+    keys_to_ignore = list(filter(lambda _k: _k not in model_fields, kwargs.keys()))
     for k in keys_to_ignore:
         del kwargs[k]
-    scheduled_job = model.objects.create(**kwargs)
-    click.echo(f'Created job {scheduled_job}')
-    content_type = ContentType.objects.get_for_model(scheduled_job)
+    task = Task.objects.create(**kwargs)
+    click.echo(f"Created task {task}")
+    content_type = ContentType.objects.get_for_model(task)

-    for arg in job_dict['callable_args']:
+    for arg in task_dict["callable_args"]:
         TaskArg.objects.create(
-
content_type=content_type, object_id=scheduled_job.id, **arg, ) - for arg in job_dict['callable_kwargs']: + content_type=content_type, + object_id=task.id, + **arg, + ) + for arg in task_dict["callable_kwargs"]: TaskKwarg.objects.create( - content_type=content_type, object_id=scheduled_job.id, **arg, ) + content_type=content_type, + object_id=task.id, + **arg, + ) + return task class Command(BaseCommand): """ Import scheduled jobs """ + help = __doc__ def add_arguments(self, parser): parser.add_argument( - '-f', '--format', - action='store', - choices=['json', 'yaml'], - default='json', - dest='format', - help='format of input', + "-f", + "--format", + action="store", + choices=["json", "yaml"], + default="json", + dest="format", + help="format of input", ) parser.add_argument( - '--filename', - action='store', - dest='filename', - help='File name to load (otherwise loads from standard input)', + "--filename", + action="store", + dest="filename", + help="File name to load (otherwise loads from standard input)", ) parser.add_argument( - '-r', '--reset', - action='store_true', - dest='reset', - help='Remove all currently scheduled jobs before importing', + "-r", + "--reset", + action="store_true", + dest="reset", + help="Remove all currently scheduled jobs before importing", ) parser.add_argument( - '-u', '--update', - action='store_true', - dest='update', - help='Update existing records', + "-u", + "--update", + action="store_true", + dest="update", + help="Update existing records", ) def handle(self, *args, **options): - file = open(options.get('filename')) if options.get("filename") else sys.stdin + file = open(options.get("filename")) if options.get("filename") else sys.stdin jobs = list() - if options.get("format") == 'json': + if options.get("format") == "json": import json + try: jobs = json.load(file) except json.decoder.JSONDecodeError: - click.echo('Error decoding json', err=True) + click.echo("Error decoding json", err=True) exit(1) - elif options.get("format") == 'yaml': + elif options.get("format") == "yaml": try: import yaml except ImportError: @@ -108,10 +136,8 @@ def handle(self, *args, **options): yaml.Dumper.ignore_aliases = lambda *x: True jobs = yaml.load(file, yaml.SafeLoader) - if options.get('reset'): - for model_name in MODEL_NAMES: - model = apps.get_model(app_label='scheduler', model_name=model_name) - model.objects.all().delete() + if options.get("reset"): + Task.objects.all().delete() for job in jobs: - create_job_from_dict(job, update=options.get('update')) + create_task_from_dict(job, update=options.get("update")) diff --git a/scheduler/management/commands/rqworker.py b/scheduler/management/commands/rqworker.py deleted file mode 100644 index cd812e2..0000000 --- a/scheduler/management/commands/rqworker.py +++ /dev/null @@ -1,85 +0,0 @@ -import logging -import os -import sys - -import click -from django.core.management.base import BaseCommand -from django.db import connections -from redis.exceptions import ConnectionError -from rq.logutils import setup_loghandlers - -from scheduler.tools import create_worker - -VERBOSITY_TO_LOG_LEVEL = { - 0: logging.CRITICAL, - 1: logging.WARNING, - 2: logging.INFO, - 3: logging.DEBUG, -} - - -def reset_db_connections(): - for c in connections.all(): - c.close() - - -class Command(BaseCommand): - """ - Runs RQ workers on specified queues. Note that all queues passed into a - single rqworker command must share the same connection. 
- - Example usage: - python manage.py rqworker high medium low - """ - - args = '' - - def add_arguments(self, parser): - parser.add_argument('--pid', action='store', dest='pidfile', - default=None, help='file to write the worker`s pid into') - parser.add_argument('--burst', action='store_true', dest='burst', - default=False, help='Run worker in burst mode') - parser.add_argument('--name', action='store', dest='name', - default=None, help='Name of the worker') - parser.add_argument('--worker-ttl', action='store', type=int, dest='worker_ttl', default=420, - help='Default worker timeout to be used') - parser.add_argument('--max-jobs', action='store', default=None, dest='max_jobs', type=int, - help='Maximum number of jobs to execute before terminating worker') - parser.add_argument('--fork-job-execution', action='store', default=True, dest='fork_job_execution', type=bool, - help='Fork job execution to another process') - parser.add_argument( - 'queues', nargs='*', type=str, - help='The queues to work on, separated by space, all queues should be using the same redis') - - def handle(self, **options): - queues = options.get('queues', []) - if not queues: - queues = ['default', ] - click.echo(f'Starting worker for queues {queues}') - pidfile = options.get('pidfile') - if pidfile: - with open(os.path.expanduser(pidfile), "w") as fp: - fp.write(str(os.getpid())) - - # Verbosity is defined by default in BaseCommand for all commands - verbosity = options.get('verbosity', 1) - log_level = VERBOSITY_TO_LOG_LEVEL.get(verbosity, logging.INFO) - setup_loghandlers(log_level) - - try: - # Instantiate a worker - w = create_worker( - *queues, - name=options['name'], - default_worker_ttl=options['worker_ttl'], - fork_job_execution=options['fork_job_execution'], ) - - # Close any opened DB connection before any fork - reset_db_connections() - - w.work(burst=options.get('burst', False), - logging_level=log_level, - max_jobs=options['max_jobs'], ) - except ConnectionError as e: - click.echo(str(e), err=True) - sys.exit(1) diff --git a/scheduler/management/commands/run_job.py b/scheduler/management/commands/run_job.py index 467e084..2420c87 100644 --- a/scheduler/management/commands/run_job.py +++ b/scheduler/management/commands/run_job.py @@ -1,7 +1,7 @@ import click from django.core.management.base import BaseCommand -from scheduler.queues import get_queue +from scheduler.helpers.queues import get_queue class Command(BaseCommand): @@ -9,29 +9,29 @@ class Command(BaseCommand): Queues the function given with the first argument with the parameters given with the rest of the argument list. 
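For reference, a sketch of invoking this command programmatically via Django's call_command (the dotted task path is hypothetical):

    from django.core.management import call_command

    # Equivalent to: python manage.py run_job -q default -t 300 app.tasks.func arg1
    call_command("run_job", "app.tasks.func", "arg1", queue="default", timeout=300)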
""" + help = __doc__ - args = '' + args = "" def add_arguments(self, parser): + parser.add_argument("--queue", "-q", dest="queue", default="default", help="Specify the queue [default]") + parser.add_argument("--timeout", "-t", type=int, dest="timeout", help="A timeout in seconds") parser.add_argument( - '--queue', '-q', dest='queue', default='default', - help='Specify the queue [default]') - parser.add_argument( - '--timeout', '-t', type=int, dest='timeout', - help='A timeout in seconds') + "--result-ttl", "-r", type=int, dest="result_ttl", help="Time to store job results in seconds" + ) parser.add_argument( - '--result-ttl', '-r', type=int, dest='result_ttl', - help='Time to store job results in seconds') - parser.add_argument('callable', help='Method to call', ) - parser.add_argument('args', nargs='*', help='Args for callable') + "callable", + help="Method to call", + ) + parser.add_argument("args", nargs="*", help="Args for callable") def handle(self, **options): - verbosity = int(options.get('verbosity', 1)) - timeout = options.get('timeout') - result_ttl = options.get('result_ttl') - queue = get_queue(options.get('queue')) - func = options.get('callable') - args = options.get('args') - job = queue.enqueue_call(func, args=args, timeout=timeout, result_ttl=result_ttl) + verbosity = int(options.get("verbosity", 1)) + timeout = options.get("timeout") + result_ttl = options.get("result_ttl") + queue = get_queue(options.get("queue")) + func = options.get("callable") + args = options.get("args") + job = queue.create_and_enqueue_job(func, args=args, timeout=timeout, result_ttl=result_ttl, when=None) if verbosity: - click.echo(f'Job {job.id} created') + click.echo(f"Job {job.name} created") diff --git a/scheduler/management/commands/rqstats.py b/scheduler/management/commands/scheduler_stats.py similarity index 53% rename from scheduler/management/commands/rqstats.py rename to scheduler/management/commands/scheduler_stats.py index fa9c810..a52e646 100644 --- a/scheduler/management/commands/rqstats.py +++ b/scheduler/management/commands/scheduler_stats.py @@ -9,13 +9,12 @@ ANSI_LIGHT_WHITE = "\033[1;37m" ANSI_RESET = "\033[0m" -KEYS = ('jobs', 'started_jobs', 'deferred_jobs', 'finished_jobs', 'canceled_jobs', 'workers') +KEYS = ("queued_jobs", "started_jobs", "finished_jobs", "canceled_jobs", "workers") class Command(BaseCommand): - """ - Print statistics - """ + """Print statistics""" + help = __doc__ def __init__(self, *args, **kwargs): @@ -25,48 +24,56 @@ def __init__(self, *args, **kwargs): def add_arguments(self, parser): parser.add_argument( - '-j', '--json', action='store_true', dest='json', - help='Output statistics as JSON', ) + "-j", + "--json", + action="store_true", + dest="json", + help="Output statistics as JSON", + ) parser.add_argument( - '-y', '--yaml', action='store_true', dest='yaml', - help='Output statistics as YAML', + "-y", + "--yaml", + action="store_true", + dest="yaml", + help="Output statistics as YAML", ) parser.add_argument( - '-i', '--interval', dest='interval', type=float, - help='Poll statistics every N seconds', + "-i", + "--interval", + dest="interval", + type=float, + help="Poll statistics every N seconds", ) def _print_separator(self): - click.echo('-' * self.table_width) + click.echo("-" * self.table_width) - def _print_stats_dashboard(self, statistics, prev_stats=None): + def _print_stats_dashboard(self, statistics, prev_stats=None, with_color: bool = True): if self.interval: click.clear() click.echo() click.echo("Django-Scheduler CLI Dashboard") 
click.echo() self._print_separator() - click.echo(f'| {"Name":<16} | Queued | Active | Deferred |' - f' Finished |' - f' Canceled |' - f' Workers |') + click.echo(f"| {'Name':<16} | Queued | Active | Finished | Canceled | Workers |") self._print_separator() for ind, queue in enumerate(statistics["queues"]): vals = list((queue[k] for k in KEYS)) # Deal with colors - if prev_stats and len(prev_stats['queues']) > ind: + if not with_color: + colors = ["" for _ in KEYS] + if prev_stats and len(prev_stats["queues"]) > ind: prev = prev_stats["queues"][ind] - prev_vals = (prev[k] for k in KEYS) - colors = [ANSI_LIGHT_GREEN - if vals[i] != prev_vals[i] else ANSI_LIGHT_WHITE - for i in range(len(prev_vals)) - ] + prev_vals = tuple(prev[k] for k in KEYS) + colors = [ + ANSI_LIGHT_GREEN if vals[i] != prev_vals[i] else ANSI_LIGHT_WHITE for i in range(len(prev_vals)) + ] else: colors = [ANSI_LIGHT_WHITE for _ in range(len(vals))] - to_print = ' | '.join([f'{colors[i]}{vals[i]:9}{ANSI_RESET}' for i in range(len(vals))]) - click.echo(f'| {queue["name"]:<16} | {to_print} |', color=True) + to_print = " | ".join([f"{colors[i]}{vals[i]:9}{ANSI_RESET}" for i in range(len(vals))]) + click.echo(f"| {queue['name']:<16} | {to_print} |", color=with_color) self._print_separator() @@ -75,33 +82,38 @@ def _print_stats_dashboard(self, statistics, prev_stats=None): click.echo("Press 'Ctrl+c' to quit") def handle(self, *args, **options): - + if options.get("json") and options.get("yaml"): + click.secho("Aborting. Cannot output as both json and yaml", err=True, fg="red") + exit(1) if options.get("json"): import json - click.secho(json.dumps(get_statistics(), indent=2), ) + + click.secho( + json.dumps(get_statistics(), indent=2), + ) return if options.get("yaml"): try: import yaml except ImportError: - click.secho("Aborting. yaml not supported", err=True, fg='red') + click.secho("Aborting. 
yaml not supported", err=True, fg="red") return - click.secho(yaml.dump(get_statistics(), default_flow_style=False), ) + click.secho(yaml.dump(get_statistics(), default_flow_style=False)) return self.interval = options.get("interval") if not self.interval or self.interval < 0: - self._print_stats_dashboard(get_statistics()) + self._print_stats_dashboard(get_statistics(), with_color=not options.get("no_color")) return try: prev = None while True: statistics = get_statistics() - self._print_stats_dashboard(statistics, prev) + self._print_stats_dashboard(statistics, prev, with_color=not options.get("no_color")) prev = statistics time.sleep(self.interval) except KeyboardInterrupt: diff --git a/scheduler/management/commands/scheduler_worker.py b/scheduler/management/commands/scheduler_worker.py new file mode 100644 index 0000000..ab122d6 --- /dev/null +++ b/scheduler/management/commands/scheduler_worker.py @@ -0,0 +1,160 @@ +import logging +import os +import sys + +import click +from django.core.management.base import BaseCommand +from django.db import connections + +from scheduler.types import ConnectionErrorTypes +from scheduler.worker import create_worker +from scheduler.settings import logger + +VERBOSITY_TO_LOG_LEVEL = { + 0: logging.CRITICAL, + 1: logging.WARNING, + 2: logging.INFO, + 3: logging.DEBUG, +} + +WORKER_ARGUMENTS = { + "queues", + "name", + "connection", + "maintenance_interval", + "job_monitoring_interval", + "dequeue_strategy", + "disable_default_exception_handler", + "fork_job_execution", + "with_scheduler", + "burst", +} + + +def reset_db_connections(): + for c in connections.all(): + c.close() + + +def register_sentry(sentry_dsn, **opts): + try: + import sentry_sdk + from sentry_sdk.integrations.rq import RqIntegration + except ImportError: + logger.error("Sentry SDK not installed. Skipping Sentry Integration") + return + + sentry_sdk.init(sentry_dsn, integrations=[RqIntegration()], **opts) + + +class Command(BaseCommand): + """Runs scheduler workers on specified queues. + Note that all queues passed into a single scheduler_worker command must share the same connection. 
+ + Example usage: + python manage.py scheduler_worker high medium low + """ + + args = "" + + def _add_sentry_args(self, parser): + parser.add_argument("--sentry-dsn", action="store", dest="sentry_dsn", help="Sentry DSN to use") + parser.add_argument("--sentry-debug", action="store_true", dest="sentry_debug", help="Enable Sentry debug mode") + parser.add_argument("--sentry-ca-certs", action="store", dest="sentry_ca_certs", help="Path to CA certs file") + + def _add_work_args(self, parser): + parser.add_argument( + "--burst", action="store_true", dest="burst", default=False, help="Run worker in burst mode" + ) + parser.add_argument( + "--max-jobs", + action="store", + default=None, + dest="max_jobs", + type=int, + help="Maximum number of jobs to execute before terminating worker", + ) + parser.add_argument( + "--max-idle-time", + action="store", + default=None, + dest="max_idle_time", + type=int, + help="Maximum number of seconds to wait for new job before terminating worker", + ) + parser.add_argument( + "--without-scheduler", + action="store_false", + default=True, + dest="with_scheduler", + help="Run worker without scheduler, default to with scheduler", + ) + + def add_arguments(self, parser): + parser.add_argument( + "--pid", action="store", dest="pidfile", default=None, help="file to write the worker`s pid into" + ) + parser.add_argument("--name", action="store", dest="name", default=None, help="Name of the worker") + parser.add_argument( + "--worker-ttl", + action="store", + type=int, + dest="worker_ttl", + default=420, + help="Default worker timeout to be used", + ) + parser.add_argument( + "--fork-job-execution", + action="store", + default=True, + dest="fork_job_execution", + type=bool, + help="Fork job execution to another process", + ) + parser.add_argument( + "queues", + nargs="*", + type=str, + help="The queues to work on, separated by space, all queues should be using the same redis", + ) + self._add_sentry_args(parser) + self._add_work_args(parser) + + def handle(self, **options): + queues = options.pop("queues", []) + if not queues: + queues = [ + "default", + ] + click.echo(f"Starting worker for queues {queues}") + pidfile = options.pop("pidfile") + if pidfile: + with open(os.path.expanduser(pidfile), "w") as fp: + fp.write(str(os.getpid())) + + # Verbosity is defined by default in BaseCommand for all commands + verbosity = options.pop("verbosity", 3) + log_level = VERBOSITY_TO_LOG_LEVEL.get(verbosity, logging.INFO) + logger.setLevel(log_level) + + init_options = {k: v for k, v in options.items() if k in WORKER_ARGUMENTS} + + try: + # Instantiate a worker + w = create_worker(*queues, **init_options) + + # Close any opened DB connection before any fork + reset_db_connections() + + # Check whether sentry is enabled + if options.get("sentry_dsn") is not None: + sentry_opts = dict(ca_certs=options.get("sentry_ca_certs"), debug=options.get("sentry_debug")) + register_sentry(options.get("sentry_dsn"), **sentry_opts) + + w.work( + max_jobs=options["max_jobs"], + max_idle_time=options.get("max_idle_time", None), + ) + except ConnectionErrorTypes as e: + click.echo(str(e), err=True) + sys.exit(1) diff --git a/scheduler/migrations/0018_alter_crontask_queue_alter_repeatabletask_queue_and_more.py b/scheduler/migrations/0018_alter_crontask_queue_alter_repeatabletask_queue_and_more.py new file mode 100644 index 0000000..1ffdae5 --- /dev/null +++ b/scheduler/migrations/0018_alter_crontask_queue_alter_repeatabletask_queue_and_more.py @@ -0,0 +1,28 @@ +# Generated by Django 5.1b1 on 
2024-06-29 14:21 + +from django.db import migrations, models + + +class Migration(migrations.Migration): + + dependencies = [ + ('scheduler', '0017_remove_crontask_repeat_crontask_failed_runs_and_more'), + ] + + operations = [ + # migrations.AlterField( + # model_name='crontask', + # name='queue', + # field=models.CharField(choices=scheduler.models.old_scheduled_task.get_queue_choices, help_text='Queue name', max_length=255, verbose_name='queue'), + # ), + # migrations.AlterField( + # model_name='repeatabletask', + # name='queue', + # field=models.CharField(choices=scheduler.models.old_scheduled_task.get_queue_choices, help_text='Queue name', max_length=255, verbose_name='queue'), + # ), + # migrations.AlterField( + # model_name='scheduledtask', + # name='queue', + # field=models.CharField(choices=scheduler.models.old_scheduled_task.get_queue_choices, help_text='Queue name', max_length=255, verbose_name='queue'), + # ), + ] diff --git a/scheduler/migrations/0019_task_crontask_new_task_id_repeatabletask_new_task_id_and_more.py b/scheduler/migrations/0019_task_crontask_new_task_id_repeatabletask_new_task_id_and_more.py new file mode 100644 index 0000000..bfdbcc1 --- /dev/null +++ b/scheduler/migrations/0019_task_crontask_new_task_id_repeatabletask_new_task_id_and_more.py @@ -0,0 +1,186 @@ +# Generated by Django 5.1.3 on 2024-11-20 20:32 + +import django.db.models.deletion +import scheduler.models.task +from django.db import migrations, models + + +class Migration(migrations.Migration): + + dependencies = [ + ("scheduler", "0018_alter_crontask_queue_alter_repeatabletask_queue_and_more"), + ] + + operations = [ + migrations.CreateModel( + name="Task", + fields=[ + ("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")), + ("created_at", models.DateTimeField(auto_now_add=True)), + ("updated_at", models.DateTimeField(auto_now=True)), + ( + "name", + models.CharField(help_text="Name of the job", max_length=128, unique=True, verbose_name="name"), + ), + ( + "task_type", + models.CharField( + choices=[ + ("CronTaskType", "Cron Task"), + ("RepeatableTaskType", "Repeatable Task"), + ("OnceTaskType", "Run once"), + ], + default="OnceTaskType", + max_length=32, + verbose_name="Task type", + ), + ), + ("callable", models.CharField(max_length=2048, verbose_name="callable")), + ( + "enabled", + models.BooleanField( + default=True, + help_text="Should job be scheduled? This field is useful to keep past jobs that should no longer be scheduled", + verbose_name="enabled", + ), + ), + ( + "queue", + models.CharField( + choices=scheduler.models.task.get_queue_choices, + help_text="Queue name", + max_length=255, + verbose_name="queue", + ), + ), + ( + "job_id", + models.CharField( + blank=True, + editable=False, + help_text="Current job_id on queue", + max_length=128, + null=True, + verbose_name="job id", + ), + ), + ( + "at_front", + models.BooleanField( + default=False, + help_text="When queuing the job, add it in the front of the queue", + verbose_name="At front", + ), + ), + ( + "timeout", + models.IntegerField( + blank=True, + help_text="Timeout specifies the maximum runtime, in seconds, for the job before it'll be considered 'lost'. Blank uses the default timeout.", + null=True, + verbose_name="timeout", + ), + ), + ( + "result_ttl", + models.IntegerField( + blank=True, + help_text="The TTL value (in seconds) of the job result.
<br/>\n -1: Result never expires, you should delete jobs manually.<br/>\n 0: Result gets deleted immediately.<br/>
\n >0: Result expires after n seconds.", + null=True, + verbose_name="result ttl", + ), + ), + ( + "failed_runs", + models.PositiveIntegerField( + default=0, help_text="Number of times the task has failed", verbose_name="failed runs" + ), + ), + ( + "successful_runs", + models.PositiveIntegerField( + default=0, help_text="Number of times the task has succeeded", verbose_name="successful runs" + ), + ), + ( + "last_successful_run", + models.DateTimeField( + blank=True, + help_text="Last time the task has succeeded", + null=True, + verbose_name="last successful run", + ), + ), + ( + "last_failed_run", + models.DateTimeField( + blank=True, help_text="Last time the task has failed", null=True, verbose_name="last failed run" + ), + ), + ( + "interval", + models.PositiveIntegerField( + blank=True, help_text="Interval for repeatable task", null=True, verbose_name="interval" + ), + ), + ( + "interval_unit", + models.CharField( + blank=True, + choices=[ + ("seconds", "seconds"), + ("minutes", "minutes"), + ("hours", "hours"), + ("days", "days"), + ("weeks", "weeks"), + ], + default="hours", + max_length=12, + null=True, + verbose_name="interval unit", + ), + ), + ( + "repeat", + models.PositiveIntegerField( + blank=True, + help_text="Number of times to run the job. Leaving this blank means it will run forever.", + null=True, + verbose_name="repeat", + ), + ), + ("scheduled_time", models.DateTimeField(blank=True, null=True, verbose_name="scheduled time")), + ( + "cron_string", + models.CharField( + blank=True, + help_text='Define the schedule in a crontab like syntax.\n Times are in UTC. Use crontab.guru to create a cron string.', + max_length=64, + null=True, + verbose_name="cron string", + ), + ), + ], + ), + migrations.AddField( + model_name="crontask", + name="new_task_id", + field=models.ForeignKey( + blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to="scheduler.task" + ), + ), + migrations.AddField( + model_name="repeatabletask", + name="new_task_id", + field=models.ForeignKey( + blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to="scheduler.task" + ), + ), + migrations.AddField( + model_name="scheduledtask", + name="new_task_id", + field=models.ForeignKey( + blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to="scheduler.task" + ), + ), + ] diff --git a/scheduler/migrations/0020_remove_repeatabletask_new_task_id_and_more.py b/scheduler/migrations/0020_remove_repeatabletask_new_task_id_and_more.py new file mode 100644 index 0000000..4134b89 --- /dev/null +++ b/scheduler/migrations/0020_remove_repeatabletask_new_task_id_and_more.py @@ -0,0 +1,30 @@ +# Generated by Django 5.1.6 on 2025-02-05 15:40 + +from django.db import migrations + + +class Migration(migrations.Migration): + + dependencies = [ + ("scheduler", "0019_task_crontask_new_task_id_repeatabletask_new_task_id_and_more"), + ] + + operations = [ + migrations.RemoveField( + model_name="repeatabletask", + name="new_task_id", + ), + migrations.RemoveField( + model_name="scheduledtask", + name="new_task_id", + ), + migrations.DeleteModel( + name="CronTask", + ), + migrations.DeleteModel( + name="RepeatableTask", + ), + migrations.DeleteModel( + name="ScheduledTask", + ), + ] diff --git a/scheduler/migrations/0021_remove_task_job_id_task_job_name.py b/scheduler/migrations/0021_remove_task_job_id_task_job_name.py new file mode 100644 index 0000000..3c03f51 --- /dev/null +++ b/scheduler/migrations/0021_remove_task_job_id_task_job_name.py @@ -0,0 +1,22 @@ +# Generated by Django 5.1.7 on 
2025-03-24 14:30 + +from django.db import migrations, models + + +class Migration(migrations.Migration): + + dependencies = [ + ('scheduler', '0020_remove_repeatabletask_new_task_id_and_more'), + ] + + operations = [ + migrations.RemoveField( + model_name='task', + name='job_id', + ), + migrations.AddField( + model_name='task', + name='job_name', + field=models.CharField(blank=True, editable=False, help_text='Current job_name on queue', max_length=128, null=True, verbose_name='job name'), + ), + ] diff --git a/scheduler/models/__init__.py b/scheduler/models/__init__.py index b05c19a..1b4625a 100644 --- a/scheduler/models/__init__.py +++ b/scheduler/models/__init__.py @@ -1,3 +1,12 @@ -from .args import TaskKwarg, TaskArg, BaseTaskArg # noqa: F401 -from .queue import Queue # noqa: F401 -from .scheduled_task import BaseTask, ScheduledTask, RepeatableTask, CronTask # noqa: F401 +__all__ = [ + "Task", + "TaskType", + "TaskArg", + "TaskKwarg", + "get_scheduled_task", + "run_task", + "get_next_cron_time", +] + +from .args import TaskArg, TaskKwarg +from .task import TaskType, Task, get_scheduled_task, run_task, get_next_cron_time diff --git a/scheduler/models/args.py b/scheduler/models/args.py index 532b0c6..ac2d700 100644 --- a/scheduler/models/args.py +++ b/scheduler/models/args.py @@ -7,54 +7,59 @@ from django.db import models from django.utils.translation import gettext_lazy as _ -from scheduler import tools +from scheduler.helpers import utils ARG_TYPE_TYPES_DICT = { - 'str': str, - 'int': int, - 'bool': bool, - 'datetime': datetime, - 'callable': Callable, + "str": str, + "int": int, + "bool": bool, + "datetime": datetime, + "callable": Callable, } class BaseTaskArg(models.Model): class ArgType(models.TextChoices): - STR = 'str', _('string') - INT = 'int', _('int') - BOOL = 'bool', _('boolean') - DATETIME = 'datetime', _('datetime') - CALLABLE = 'callable', _('callable') + STR = "str", _("string") + INT = "int", _("int") + BOOL = "bool", _("boolean") + DATETIME = "datetime", _("datetime") + CALLABLE = "callable", _("callable") arg_type = models.CharField( - _('Argument Type'), max_length=12, choices=ArgType.choices, default=ArgType.STR, + _("Argument Type"), + max_length=12, + choices=ArgType.choices, + default=ArgType.STR, ) - val = models.CharField(_('Argument Value'), blank=True, max_length=255) + val = models.CharField(_("Argument Value"), blank=True, max_length=255) content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) object_id = models.PositiveIntegerField() content_object = GenericForeignKey() def clean(self): if self.arg_type not in ARG_TYPE_TYPES_DICT: - raise ValidationError({ - 'arg_type': ValidationError( - _(f'Could not parse {self.arg_type}, options are: {ARG_TYPE_TYPES_DICT.keys()}'), code='invalid') - }) + raise ValidationError( + { + "arg_type": ValidationError( + _(f"Could not parse {self.arg_type}, options are: {ARG_TYPE_TYPES_DICT.keys()}"), code="invalid" + ) + } + ) try: - if self.arg_type == 'callable': - tools.callable_func(self.val) - elif self.arg_type == 'datetime': + if self.arg_type == "callable": + utils.callable_func(self.val) + elif self.arg_type == "datetime": datetime.fromisoformat(self.val) - elif self.arg_type == 'bool': - if self.val.lower() not in {'true', 'false'}: + elif self.arg_type == "bool": + if self.val.lower() not in {"true", "false"}: raise ValidationError - elif self.arg_type == 'int': + elif self.arg_type == "int": int(self.val) except Exception: - raise ValidationError({ - 'arg_type': ValidationError( - _(f'Could not 
parse {self.val} as {self.arg_type}'), code='invalid') - }) + raise ValidationError( + {"arg_type": ValidationError(_(f"Could not parse {self.val} as {self.arg_type}"), code="invalid")} + ) def save(self, **kwargs): super(BaseTaskArg, self).save(**kwargs) @@ -65,24 +70,24 @@ def delete(self, **kwargs): self.content_object.save() def value(self): - if self.arg_type == 'callable': - res = tools.callable_func(self.val)() - elif self.arg_type == 'datetime': + if self.arg_type == "callable": + res = utils.callable_func(self.val)() + elif self.arg_type == "datetime": res = datetime.fromisoformat(self.val) - elif self.arg_type == 'bool': - res = self.val.lower() == 'true' + elif self.arg_type == "bool": + res = self.val.lower() == "true" else: res = ARG_TYPE_TYPES_DICT[self.arg_type](self.val) return res class Meta: abstract = True - ordering = ['id'] + ordering = ["id"] class TaskArg(BaseTaskArg): def __str__(self): - return f'TaskArg[arg_type={self.arg_type},value={self.value()}]' + return f"TaskArg[arg_type={self.arg_type},value={self.value()}]" class TaskKwarg(BaseTaskArg): @@ -90,7 +95,7 @@ class TaskKwarg(BaseTaskArg): def __str__(self): key, value = self.value() - return f'TaskKwarg[key={key},arg_type={self.arg_type},value={self.val}]' + return f"TaskKwarg[key={key},arg_type={self.arg_type},value={self.val}]" def value(self): return self.key, super(TaskKwarg, self).value() diff --git a/scheduler/models/ephemeral_models.py b/scheduler/models/ephemeral_models.py new file mode 100644 index 0000000..ca24a73 --- /dev/null +++ b/scheduler/models/ephemeral_models.py @@ -0,0 +1,21 @@ +from django.db import models + + +class Queue(models.Model): + """Placeholder model with no database table, but with django admin page and contenttype permission""" + + class Meta: + managed = False # not in Django's database + default_permissions = () + permissions = [["view", "Access admin page"]] + verbose_name_plural = " Queues" + + +class Worker(models.Model): + """Placeholder model with no database table, but with django admin page and contenttype permission""" + + class Meta: + managed = False # not in Django's database + default_permissions = () + permissions = [["view", "Access admin page"]] + verbose_name_plural = " Workers" diff --git a/scheduler/models/queue.py b/scheduler/models/queue.py deleted file mode 100644 index 5c03689..0000000 --- a/scheduler/models/queue.py +++ /dev/null @@ -1,12 +0,0 @@ -from django.db import models - - -class Queue(models.Model): - """Placeholder model with no database table, but with django admin page - and contenttype permission""" - - class Meta: - managed = False # not in Django's database - default_permissions = () - permissions = [['view', 'Access admin page']] - verbose_name_plural = " Queues" diff --git a/scheduler/models/scheduled_task.py b/scheduler/models/scheduled_task.py deleted file mode 100644 index 6255f59..0000000 --- a/scheduler/models/scheduled_task.py +++ /dev/null @@ -1,466 +0,0 @@ -import math -import uuid -from datetime import timedelta -from typing import Dict - -import croniter -from django.apps import apps -from django.conf import settings as django_settings -from django.contrib import admin -from django.contrib.contenttypes.fields import GenericRelation -from django.core.exceptions import ValidationError -from django.core.mail import mail_admins -from django.db import models -from django.templatetags.tz import utc -from django.urls import reverse -from django.utils import timezone -from django.utils.safestring import mark_safe -from 
django.utils.translation import gettext_lazy as _ - -from scheduler import settings -from scheduler import tools -from scheduler.models.args import TaskArg, TaskKwarg -from scheduler.queues import get_queue -from scheduler.rq_classes import DjangoQueue -from scheduler.settings import logger - -SCHEDULER_INTERVAL = settings.SCHEDULER_CONFIG['SCHEDULER_INTERVAL'] - - -def failure_callback(job, connection, result, *args, **kwargs): - model_name = job.meta.get('task_type', None) - if model_name is None: - return - model = apps.get_model(app_label='scheduler', model_name=model_name) - task = model.objects.filter(job_id=job.id).first() - if task is None: - logger.warn(f'Could not find {model_name} task for job {job.id}') - return - mail_admins(f'Task {task.id}/{task.name} has failed', - 'See django-admin for logs', ) - task.job_id = None - if isinstance(task, (CronTask, RepeatableTask)): - task.failed_runs += 1 - task.last_failed_run = timezone.now() - task.save(schedule_job=True) - - -def success_callback(job, connection, result, *args, **kwargs): - model_name = job.meta.get('task_type', None) - if model_name is None: - return - model = apps.get_model(app_label='scheduler', model_name=model_name) - task = model.objects.filter(job_id=job.id).first() - if task is None: - return - task.job_id = None - if isinstance(task, (CronTask, RepeatableTask)): - task.successful_runs += 1 - task.last_successful_run = timezone.now() - task.save(schedule_job=True) - - -class BaseTask(models.Model): - created = models.DateTimeField(auto_now_add=True) - modified = models.DateTimeField(auto_now=True) - QUEUES = [("default", "default"), ("low", "low"), ("high", "high")] - TASK_TYPE = 'BaseTask' - name = models.CharField( - _('name'), max_length=128, unique=True, - help_text='Name of the job.', ) - callable = models.CharField(_('callable'), max_length=2048) - callable_args = GenericRelation(TaskArg, related_query_name='args') - callable_kwargs = GenericRelation(TaskKwarg, related_query_name='kwargs') - enabled = models.BooleanField( - _('enabled'), default=True, - help_text=_('Should job be scheduled? This field is useful to keep ' - 'past jobs that should no longer be scheduled'), - ) - queue = models.CharField( - _('queue'), max_length=255, choices=QUEUES, - help_text=_('Queue name'), ) - job_id = models.CharField( - _('job id'), max_length=128, editable=False, blank=True, null=True, - help_text=_('Current job_id on queue')) - at_front = models.BooleanField( - _('At front'), default=False, blank=True, null=True, - help_text=_('When queuing the job, add it in the front of the queue'), ) - timeout = models.IntegerField( - _('timeout'), blank=True, null=True, - help_text=_("Timeout specifies the maximum runtime, in seconds, for the job " - "before it'll be considered 'lost'. Blank uses the default " - "timeout."), ) - result_ttl = models.IntegerField( - _('result ttl'), blank=True, null=True, - help_text=mark_safe( - """The TTL value (in seconds) of the job result.
- -1: Result never expires, you should delete jobs manually.<br/>
- 0: Result gets deleted immediately.<br/>
- >0: Result expires after n seconds."""), ) - - def callable_func(self): - """Translate callable string to callable""" - return tools.callable_func(self.callable) - - @admin.display(boolean=True, description=_('is scheduled?')) - def is_scheduled(self) -> bool: - """Check whether a next job for this task is queued/scheduled to be executed""" - if self.job_id is None: # no job_id => is not scheduled - return False - # check whether job_id is in scheduled/queued/active jobs - scheduled_jobs = self.rqueue.scheduled_job_registry.get_job_ids() - enqueued_jobs = self.rqueue.get_job_ids() - active_jobs = self.rqueue.started_job_registry.get_job_ids() - res = ((self.job_id in scheduled_jobs) - or (self.job_id in enqueued_jobs) - or (self.job_id in active_jobs)) - # If the job_id is not scheduled/queued/started, - # update the job_id to None. (The job_id belongs to a previous run which is completed) - if not res: - self.job_id = None - super(BaseTask, self).save() - return res - - @admin.display(description='Callable') - def function_string(self) -> str: - args = self.parse_args() - args_list = [repr(arg) for arg in args] - kwargs = self.parse_kwargs() - kwargs_list = [k + '=' + repr(v) for (k, v) in kwargs.items()] - return self.callable + f"({', '.join(args_list + kwargs_list)})" - - def parse_args(self): - """Parse args for running the job""" - args = self.callable_args.all() - return [arg.value() for arg in args] - - def parse_kwargs(self): - """Parse kwargs for running the job""" - kwargs = self.callable_kwargs.all() - return dict([kwarg.value() for kwarg in kwargs]) - - def _next_job_id(self): - addition = uuid.uuid4().hex[-10:] - name = self.name.replace('/', '.') - return f'{self.queue}:{name}:{addition}' - - def _enqueue_args(self) -> Dict: - """Args for DjangoQueue.enqueue. - Set all arguments for DjangoQueue.enqueue/enqueue_at. - Particularly: - - set job timeout and ttl - - ensure a callback to reschedule the job next iteration. - - Set job-id to proper format - - set job meta - """ - res = dict( - meta=dict( - task_type=self.TASK_TYPE, - scheduled_task_id=self.id, - ), - on_success=success_callback, - on_failure=failure_callback, - job_id=self._next_job_id(), - ) - if self.at_front: - res['at_front'] = self.at_front - if self.timeout: - res['job_timeout'] = self.timeout - if self.result_ttl is not None: - res['result_ttl'] = self.result_ttl - return res - - @property - def rqueue(self) -> DjangoQueue: - """Returns redis-queue for job""" - return get_queue(self.queue) - - def ready_for_schedule(self) -> bool: - """Is the task ready to be scheduled? - - If the task is already scheduled or disabled, then it is not - ready to be scheduled. - - :returns: True if the task is ready to be scheduled. - """ - if self.is_scheduled(): - logger.debug(f'Task {self.name} already scheduled') - return False - if not self.enabled: - logger.debug(f'Task {str(self)} disabled, enable task before scheduling') - return False - return True - - def schedule(self) -> bool: - """Schedule the next execution for the task to run. - :returns: True if a job was scheduled, False otherwise. 
- """ - if not self.ready_for_schedule(): - return False - schedule_time = self._schedule_time() - kwargs = self._enqueue_args() - job = self.rqueue.enqueue_at( - schedule_time, - tools.run_task, - args=(self.TASK_TYPE, self.id), - **kwargs, ) - self.job_id = job.id - super(BaseTask, self).save() - return True - - def enqueue_to_run(self) -> bool: - """Enqueue job to run now.""" - kwargs = self._enqueue_args() - job = self.rqueue.enqueue( - tools.run_task, - args=(self.TASK_TYPE, self.id), - **kwargs, - ) - self.job_id = job.id - self.save(schedule_job=False) - return True - - def unschedule(self) -> bool: - """Remove a job from django-queue. - - If a job is queued to be executed or scheduled to be executed, it will remove it. - """ - queue = self.rqueue - if self.job_id is None: - return True - queue.remove(self.job_id) - queue.scheduled_job_registry.remove(self.job_id) - self.job_id = None - self.save(schedule_job=False) - return True - - def _schedule_time(self): - return utc(self.scheduled_time) if django_settings.USE_TZ else self.scheduled_time - - def to_dict(self) -> Dict: - """Export model to dictionary, so it can be saved as external file backup""" - res = dict( - model=self.TASK_TYPE, - name=self.name, - callable=self.callable, - callable_args=[ - dict(arg_type=arg.arg_type, val=arg.val, ) - for arg in self.callable_args.all()], - callable_kwargs=[ - dict(arg_type=arg.arg_type, key=arg.key, val=arg.val, ) - for arg in self.callable_kwargs.all()], - enabled=self.enabled, - queue=self.queue, - repeat=getattr(self, 'repeat', None), - at_front=self.at_front, - timeout=self.timeout, - result_ttl=self.result_ttl, - cron_string=getattr(self, 'cron_string', None), - scheduled_time=self._schedule_time().isoformat(), - interval=getattr(self, 'interval', None), - interval_unit=getattr(self, 'interval_unit', None), - successful_runs=getattr(self, 'successful_runs', None), - failed_runs=getattr(self, 'failed_runs', None), - last_successful_run=getattr(self, 'last_successful_run', None), - last_failed_run=getattr(self, 'last_failed_run', None), - ) - return res - - def get_absolute_url(self): - model = self._meta.model.__name__.lower() - return reverse(f'admin:scheduler_{model}_change', args=[self.id, ]) - - def __str__(self): - func = self.function_string() - return f'{self.TASK_TYPE}[{self.name}={func}]' - - def save(self, **kwargs): - schedule_job = kwargs.pop('schedule_job', True) - update_fields = kwargs.get('update_fields', None) - if update_fields: - kwargs['update_fields'] = set(update_fields).union({'modified'}) - super(BaseTask, self).save(**kwargs) - if schedule_job: - self.schedule() - super(BaseTask, self).save() - - def delete(self, **kwargs): - self.unschedule() - super(BaseTask, self).delete(**kwargs) - - def clean_callable(self): - try: - tools.callable_func(self.callable) - except Exception: - raise ValidationError({ - 'callable': ValidationError( - _('Invalid callable, must be importable'), code='invalid') - }) - - def clean_queue(self): - queue_keys = settings.QUEUES.keys() - if self.queue not in queue_keys: - raise ValidationError({ - 'queue': ValidationError( - _('Invalid queue, must be one of: {}'.format( - ', '.join(queue_keys))), code='invalid') - }) - - def clean(self): - self.clean_queue() - self.clean_callable() - - class Meta: - abstract = True - - -class ScheduledTimeMixin(models.Model): - scheduled_time = models.DateTimeField(_('scheduled time')) - - class Meta: - abstract = True - - -class RepeatableMixin(models.Model): - failed_runs = 
models.PositiveIntegerField( - _('failed runs'), default=0, - help_text=_('Number of times the task has failed'), ) - successful_runs = models.PositiveIntegerField( - _('successful runs'), default=0, - help_text=_('Number of times the task has succeeded'), ) - last_successful_run = models.DateTimeField( - _('last successful run'), blank=True, null=True, - help_text=_('Last time the task has succeeded'), ) - last_failed_run = models.DateTimeField( - _('last failed run'), blank=True, null=True, - help_text=_('Last time the task has failed'), ) - - class Meta: - abstract = True - - -class ScheduledTask(ScheduledTimeMixin, BaseTask): - TASK_TYPE = 'ScheduledTask' - - def ready_for_schedule(self) -> bool: - return (super(ScheduledTask, self).ready_for_schedule() - and (self.scheduled_time is None - or self.scheduled_time >= timezone.now())) - - class Meta: - verbose_name = _('Scheduled Task') - verbose_name_plural = _('Scheduled Tasks') - ordering = ('name',) - - -class RepeatableTask(RepeatableMixin, ScheduledTimeMixin, BaseTask): - class TimeUnits(models.TextChoices): - SECONDS = 'seconds', _('seconds') - MINUTES = 'minutes', _('minutes') - HOURS = 'hours', _('hours') - DAYS = 'days', _('days') - WEEKS = 'weeks', _('weeks') - - interval = models.PositiveIntegerField(_('interval')) - interval_unit = models.CharField( - _('interval unit'), max_length=12, choices=TimeUnits.choices, default=TimeUnits.HOURS - ) - repeat = models.PositiveIntegerField( - _('repeat'), blank=True, null=True, - help_text=_('Number of times to run the job. Leaving this blank means it will run forever.'), ) - TASK_TYPE = 'RepeatableTask' - - def clean(self): - super(RepeatableTask, self).clean() - self.clean_interval_unit() - self.clean_result_ttl() - - def clean_interval_unit(self): - if SCHEDULER_INTERVAL > self.interval_seconds(): - raise ValidationError( - _("Job interval is set lower than %(queue)r queue's interval. " - "minimum interval is %(interval)"), - code='invalid', - params={'queue': self.queue, 'interval': SCHEDULER_INTERVAL}) - if self.interval_seconds() % SCHEDULER_INTERVAL: - raise ValidationError( - _("Job interval is not a multiple of rq_scheduler's interval frequency: %(interval)ss"), - code='invalid', - params={'interval': SCHEDULER_INTERVAL}) - - def clean_result_ttl(self) -> None: - """ - Throws an error if there are repeats left to run and the result_ttl won't last until the next scheduled time. 
- :return: None - """ - if self.result_ttl and self.result_ttl != -1 and self.result_ttl < self.interval_seconds() and self.repeat: - raise ValidationError( - _("Job result_ttl must be either indefinite (-1) or " - "longer than the interval, %(interval)s seconds, to ensure rescheduling."), - code='invalid', - params={'interval': self.interval_seconds()}, ) - - def interval_display(self): - return '{} {}'.format(self.interval, self.get_interval_unit_display()) - - def interval_seconds(self): - kwargs = {self.interval_unit: self.interval, } - return timedelta(**kwargs).total_seconds() - - def _enqueue_args(self): - res = super(RepeatableTask, self)._enqueue_args() - res['meta']['interval'] = self.interval_seconds() - res['meta']['repeat'] = self.repeat - return res - - def _schedule_time(self): - _now = timezone.now() - if self.scheduled_time >= _now: - return super()._schedule_time() - gap = math.ceil((_now.timestamp() - self.scheduled_time.timestamp()) / self.interval_seconds()) - if self.repeat is None or self.repeat >= gap: - self.scheduled_time += timedelta(seconds=self.interval_seconds() * gap) - self.repeat = (self.repeat - gap) if self.repeat is not None else None - return super()._schedule_time() - - def ready_for_schedule(self): - if super(RepeatableTask, self).ready_for_schedule() is False: - return False - if self._schedule_time() < timezone.now(): - return False - return True - - class Meta: - verbose_name = _('Repeatable Task') - verbose_name_plural = _('Repeatable Tasks') - ordering = ('name',) - - -class CronTask(RepeatableMixin, BaseTask): - TASK_TYPE = 'CronTask' - - cron_string = models.CharField( - _('cron string'), max_length=64, - help_text=mark_safe( - '''Define the schedule in a crontab like syntax. - Times are in UTC. Use crontab.guru to create a cron string.''') - ) - - def clean(self): - super(CronTask, self).clean() - self.clean_cron_string() - - def clean_cron_string(self): - try: - croniter.croniter(self.cron_string) - except ValueError as e: - raise ValidationError({'cron_string': ValidationError(_(str(e)), code='invalid')}) - - def _schedule_time(self): - self.scheduled_time = tools.get_next_cron_time(self.cron_string) - return super()._schedule_time() - - class Meta: - verbose_name = _('Cron Task') - verbose_name_plural = _('Cron Tasks') - ordering = ('name',) diff --git a/scheduler/models/task.py b/scheduler/models/task.py new file mode 100644 index 0000000..352f1f0 --- /dev/null +++ b/scheduler/models/task.py @@ -0,0 +1,488 @@ +import math +from datetime import timedelta, datetime +from typing import Dict, Any, Optional + +import croniter +from django.conf import settings as django_settings +from django.contrib import admin +from django.contrib.contenttypes.fields import GenericRelation +from django.core.exceptions import ValidationError +from django.core.mail import mail_admins +from django.db import models +from django.templatetags.tz import utc +from django.urls import reverse +from django.utils import timezone +from django.utils.safestring import mark_safe +from django.utils.translation import gettext_lazy as _ + +from scheduler import settings +from scheduler.helpers.callback import Callback +from scheduler.helpers.queues import Queue +from scheduler.helpers.queues import get_queue +from scheduler.redis_models import JobModel +from scheduler.settings import logger, get_queue_names +from scheduler.types import ConnectionType, TASK_TYPES +from .args import TaskArg, TaskKwarg +from ..helpers import utils + + +def _get_task_for_job(job: JobModel) -> 
Optional["Task"]: + if job.task_type is None or job.scheduled_task_id is None: + return None + task = Task.objects.filter(id=job.scheduled_task_id).first() + return task + + +def failure_callback(job: JobModel, connection, result, *args, **kwargs): + task = _get_task_for_job(job) + if task is None: + logger.warn(f"Could not find task for job {job.name}") + return + mail_admins( + f"Task {task.id}/{task.name} has failed", + "See django-admin for logs", + ) + task.job_name = None + task.failed_runs += 1 + task.last_failed_run = timezone.now() + task.save(schedule_job=True, clean=False) + + +def success_callback(job: JobModel, connection: ConnectionType, result: Any, *args, **kwargs): + task = _get_task_for_job(job) + if task is None: + logger.warn(f"Could not find task for job {job.name}") + return + task.job_name = None + task.successful_runs += 1 + task.last_successful_run = timezone.now() + task.save(schedule_job=True, clean=False) + + +def get_queue_choices(): + queue_names = get_queue_names() + return [(queue, queue) for queue in queue_names] + + +class TaskType(models.TextChoices): + CRON = "CronTaskType", _("Cron Task") + REPEATABLE = "RepeatableTaskType", _("Repeatable Task") + ONCE = "OnceTaskType", _("Run once") + + +class Task(models.Model): + class TimeUnits(models.TextChoices): + SECONDS = "seconds", _("seconds") + MINUTES = "minutes", _("minutes") + HOURS = "hours", _("hours") + DAYS = "days", _("days") + WEEKS = "weeks", _("weeks") + + created_at = models.DateTimeField(auto_now_add=True) + updated_at = models.DateTimeField(auto_now=True) + name = models.CharField(_("name"), max_length=128, unique=True, help_text=_("Name of the job")) + task_type = models.CharField(_("Task type"), max_length=32, choices=TaskType.choices, default=TaskType.ONCE) + callable = models.CharField(_("callable"), max_length=2048) + callable_args = GenericRelation(TaskArg, related_query_name="args") + callable_kwargs = GenericRelation(TaskKwarg, related_query_name="kwargs") + enabled = models.BooleanField( + _("enabled"), + default=True, + help_text=_( + "Should job be scheduled? This field is useful to keep past jobs that should no longer be scheduled" + ), + ) + queue = models.CharField(_("queue"), max_length=255, choices=get_queue_choices, help_text=_("Queue name")) + job_name = models.CharField( + _("job name"), max_length=128, editable=False, blank=True, null=True, help_text=_("Current job_name on queue") + ) + at_front = models.BooleanField( + _("At front"), + default=False, + help_text=_("When queuing the job, add it in the front of the queue"), + ) + timeout = models.IntegerField( + _("timeout"), + blank=True, + null=True, + help_text=_( + "Timeout specifies the maximum runtime, in seconds, for the job " + "before it'll be considered 'lost'. Blank uses the default " + "timeout." + ), + ) + result_ttl = models.IntegerField( + _("result ttl"), + blank=True, + null=True, + help_text=mark_safe( + """The TTL value (in seconds) of the job result.
+ -1: Result never expires, you should delete jobs manually.
+ 0: Result gets deleted immediately.
+ >0: Result expires after n seconds.""" + ), + ) + failed_runs = models.PositiveIntegerField( + _("failed runs"), + default=0, + help_text=_("Number of times the task has failed"), + ) + successful_runs = models.PositiveIntegerField( + _("successful runs"), + default=0, + help_text=_("Number of times the task has succeeded"), + ) + last_successful_run = models.DateTimeField( + _("last successful run"), + blank=True, + null=True, + help_text=_("Last time the task has succeeded"), + ) + last_failed_run = models.DateTimeField( + _("last failed run"), + blank=True, + null=True, + help_text=_("Last time the task has failed"), + ) + interval = models.PositiveIntegerField( + _("interval"), + blank=True, + null=True, + help_text=_("Interval for repeatable task"), + ) + interval_unit = models.CharField( + _("interval unit"), + max_length=12, + choices=TimeUnits.choices, + default=TimeUnits.HOURS, + blank=True, + null=True, + ) + repeat = models.PositiveIntegerField( + _("repeat"), + blank=True, + null=True, + help_text=_("Number of times to run the job. Leaving this blank means it will run forever."), + ) + scheduled_time = models.DateTimeField(_("scheduled time"), blank=True, null=True) + cron_string = models.CharField( + _("cron string"), + max_length=64, + blank=True, + null=True, + help_text=mark_safe( + """Define the schedule in a crontab-like syntax. + Times are in UTC. Use crontab.guru to create a cron string.""" + ), + ) + + def callable_func(self): + """Translate the callable string to a callable""" + return utils.callable_func(self.callable) + + @admin.display(boolean=True, description=_("is scheduled?")) + def is_scheduled(self) -> bool: + """Check whether a next job for this task is queued/scheduled to be executed""" + if self.job_name is None: # no job_name => not scheduled + return False + # check whether job_name is in the scheduled/queued/active registries + res = ( + (self.job_name in self.rqueue.scheduled_job_registry.all()) + or (self.job_name in self.rqueue.queued_job_registry.all()) + or (self.job_name in self.rqueue.active_job_registry.all()) + ) + # If the job_name is not scheduled/queued/started, + # reset job_name to None. (It belongs to a previous run which is completed) + if not res: + self.job_name = None + super(Task, self).save() + return res + + @admin.display(description="Callable") + def function_string(self) -> str: + args = self.parse_args() + args_list = [repr(arg) for arg in args] + kwargs = self.parse_kwargs() + kwargs_list = [k + "=" + repr(v) for (k, v) in kwargs.items()] + return self.callable + f"({', '.join(args_list + kwargs_list)})" + + def parse_args(self): + """Parse args for running the job""" + args = self.callable_args.all() + return [arg.value() for arg in args] + + def parse_kwargs(self): + """Parse kwargs for running the job""" + kwargs = self.callable_kwargs.all() + return dict([kwarg.value() for kwarg in kwargs]) + + def _next_job_id(self): + addition = timezone.now().strftime("%Y%m%d%H%M%S%f") + return f"{self.queue}:{self.id}:{addition}" + + def _enqueue_args(self) -> Dict: + """Kwargs for Queue.create_and_enqueue_job. + Set all arguments for creating and enqueuing the job. Particularly: + - set job timeout and ttl + - ensure a callback to reschedule the job next iteration.
+ - Set the job name to the proper format + - set job meta + """ + res = dict( + meta=dict(), + task_type=self.task_type, + scheduled_task_id=self.id, + on_success=Callback(success_callback), + on_failure=Callback(failure_callback), + name=self._next_job_id(), + ) + if self.at_front: + res["at_front"] = self.at_front + if self.timeout: + res["timeout"] = self.timeout + if self.result_ttl is not None: + res["result_ttl"] = self.result_ttl + if self.task_type == TaskType.REPEATABLE: + res["meta"]["interval"] = self.interval_seconds() + res["meta"]["repeat"] = self.repeat + return res + + @property + def rqueue(self) -> Queue: + """Returns the queue the task is scheduled on""" + return get_queue(self.queue) + + def enqueue_to_run(self) -> bool: + """Enqueue the task to run now, as a separate job instance from the scheduled one.""" + kwargs = self._enqueue_args() + self.rqueue.create_and_enqueue_job(run_task, args=(self.task_type, self.id), when=None, **kwargs) + return True + + def unschedule(self) -> bool: + """Remove the task's job from the queue. + + If a job is queued to be executed or scheduled to be executed, it will remove it. + """ + if self.job_name is not None: + self.rqueue.delete_job(self.job_name) + self.job_name = None + self.save(schedule_job=False, clean=False) + return True + + def _schedule_time(self) -> datetime: + if self.task_type == TaskType.CRON: + self.scheduled_time = get_next_cron_time(self.cron_string) + elif self.task_type == TaskType.REPEATABLE: + _now = timezone.now() + if self.scheduled_time >= _now: + return utc(self.scheduled_time) if django_settings.USE_TZ else self.scheduled_time + gap = math.ceil((_now.timestamp() - self.scheduled_time.timestamp()) / self.interval_seconds()) + if self.repeat is None or self.repeat >= gap: + self.scheduled_time += timedelta(seconds=self.interval_seconds() * gap) + self.repeat = (self.repeat - gap) if self.repeat is not None else None + return utc(self.scheduled_time) if django_settings.USE_TZ else self.scheduled_time + + def to_dict(self) -> Dict: + """Export model to dictionary, so it can be saved as external file backup""" + interval_unit = str(self.interval_unit) if self.interval_unit else None + res = dict( + model=str(self.task_type), + name=self.name, + callable=self.callable, + callable_args=[ + dict( + arg_type=arg.arg_type, + val=arg.val, + ) + for arg in self.callable_args.all() + ], + callable_kwargs=[ + dict( + arg_type=arg.arg_type, + key=arg.key, + val=arg.val, + ) + for arg in self.callable_kwargs.all() + ], + enabled=self.enabled, + queue=self.queue, + repeat=getattr(self, "repeat", None), + at_front=self.at_front, + timeout=self.timeout, + result_ttl=self.result_ttl, + cron_string=getattr(self, "cron_string", None), + scheduled_time=self._schedule_time().isoformat(), + interval=getattr(self, "interval", None), + interval_unit=interval_unit, + successful_runs=getattr(self, "successful_runs", None), + failed_runs=getattr(self, "failed_runs", None), + last_successful_run=getattr(self, "last_successful_run", None), + last_failed_run=getattr(self, "last_failed_run", None), + ) + return res + + def get_absolute_url(self): + model = self._meta.model.__name__.lower() + return reverse( + f"admin:scheduler_{model}_change", + args=[ + self.id, + ], + ) + + def __str__(self): + func = self.function_string() + return f"{self.task_type}[{self.name}={func}]" + + def _schedule(self) -> bool: + """Schedule the next execution for the task to run. + :returns: True if a job was scheduled, False otherwise.
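For illustration, a minimal sketch (not part of the diff; it assumes a queue named "default" is configured and that "myapp.jobs.send_report" is a hypothetical importable callable): saving an enabled task schedules its next run as a side effect, and is_scheduled() then reflects the queue registries:

    task = Task.objects.create(
        name="nightly-report",
        callable="myapp.jobs.send_report",
        task_type=TaskType.CRON,
        cron_string="0 2 * * *",  # every day at 02:00 UTC
        queue="default",
    )
    assert task.is_scheduled()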
+ """ + self.refresh_from_db() + if self.is_scheduled(): + logger.debug(f"Task {self.name} already scheduled") + return False + if not self.enabled: + logger.debug(f"Task {str(self)} disabled, enable task before scheduling") + return False + schedule_time = self._schedule_time() + if self.task_type in {TaskType.REPEATABLE, TaskType.ONCE} and schedule_time < timezone.now(): + logger.debug(f"Task {str(self)} scheduled time is in the past, not scheduling") + return False + kwargs = self._enqueue_args() + job = self.rqueue.create_and_enqueue_job( + run_task, + args=(self.task_type, self.id), + when=schedule_time, + **kwargs, + ) + self.job_name = job.name + return True + + def save(self, **kwargs): + should_clean = kwargs.pop("clean", True) + if should_clean: + self.clean() + schedule_job = kwargs.pop("schedule_job", True) + update_fields = kwargs.get("update_fields", None) + if update_fields is not None: + kwargs["update_fields"] = set(update_fields).union({"updated_at"}) + super(Task, self).save(**kwargs) + if schedule_job: + self._schedule() + super(Task, self).save() + + def delete(self, **kwargs): + self.unschedule() + super(Task, self).delete(**kwargs) + + def interval_seconds(self): + kwargs = { + self.interval_unit: self.interval, + } + return timedelta(**kwargs).total_seconds() + + def clean_callable(self): + try: + utils.callable_func(self.callable) + except Exception: + raise ValidationError( + {"callable": ValidationError(_("Invalid callable, must be importable"), code="invalid")} + ) + + def clean_queue(self): + queue_names = get_queue_names() + if self.queue not in queue_names: + raise ValidationError( + { + "queue": ValidationError( + "Invalid queue, must be one of: {}".format(", ".join(queue_names)), code="invalid" + ) + } + ) + + def clean_interval_unit(self): + config = settings.SCHEDULER_CONFIG + if config.SCHEDULER_INTERVAL > self.interval_seconds(): + raise ValidationError( + _("Job interval is set lower than %(queue)r queue's interval. minimum interval is %(interval)"), + code="invalid", + params={"queue": self.queue, "interval": config.SCHEDULER_INTERVAL}, + ) + + def clean_result_ttl(self) -> None: + """Throws an error if there are repeats left to run and the result_ttl won't last until the next scheduled time. + :return: None + """ + if self.result_ttl and self.result_ttl != -1 and self.result_ttl < self.interval_seconds() and self.repeat: + raise ValidationError( + _( + "Job result_ttl must be either indefinite (-1) or " + "longer than the interval, %(interval)s seconds, to ensure rescheduling." 
+ ), + code="invalid", + params={"interval": self.interval_seconds()}, + ) + + def clean_cron_string(self): + try: + croniter.croniter(self.cron_string) + except ValueError as e: + raise ValidationError({"cron_string": ValidationError(_(str(e)), code="invalid")}) + + def clean(self): + if self.task_type not in TaskType.values: + raise ValidationError( + {"task_type": ValidationError(_("Invalid task type"), code="invalid")}, + ) + self.clean_queue() + self.clean_callable() + if self.task_type == TaskType.CRON: + self.clean_cron_string() + if self.task_type == TaskType.REPEATABLE: + self.clean_interval_unit() + self.clean_result_ttl() + if self.task_type == TaskType.REPEATABLE and self.scheduled_time is None: + self.scheduled_time = timezone.now() + timedelta(seconds=2) + if self.task_type == TaskType.ONCE and self.scheduled_time is None: + raise ValidationError({"scheduled_time": ValidationError(_("Scheduled time is required"), code="invalid")}) + if self.task_type == TaskType.ONCE and self.scheduled_time < timezone.now(): + raise ValidationError( + {"scheduled_time": ValidationError(_("Scheduled time must be in the future"), code="invalid")} + ) + + +def get_next_cron_time(cron_string: Optional[str]) -> Optional[timezone.datetime]: + """Calculate the next scheduled time by creating a crontab object with a cron string""" + if cron_string is None: + return None + now = timezone.now() + itr = croniter.croniter(cron_string, now) + next_itr = itr.get_next(timezone.datetime) + return next_itr + + +def get_scheduled_task(task_type_str: str, task_id: int) -> Task: + # Try with new model names + if task_type_str in TASK_TYPES: + try: + task_type = TaskType(task_type_str) + task = Task.objects.filter(task_type=task_type, id=task_id).first() + if task is None: + raise ValueError(f"Job {task_type}:{task_id} does not exit") + return task + except ValueError: + raise ValueError(f"Invalid task type {task_type_str}") + raise ValueError(f"Job Model {task_type_str} does not exist, choices are {TASK_TYPES}") + + +def run_task(task_model: str, task_id: int) -> Any: + """Run a scheduled job""" + if isinstance(task_id, str): + task_id = int(task_id) + scheduled_task = get_scheduled_task(task_model, task_id) + logger.debug(f"Running task {str(scheduled_task)}") + args = scheduled_task.parse_args() + kwargs = scheduled_task.parse_kwargs() + res = scheduled_task.callable_func()(*args, **kwargs) + return res diff --git a/scheduler/models/worker.py b/scheduler/models/worker.py deleted file mode 100644 index f34181e..0000000 --- a/scheduler/models/worker.py +++ /dev/null @@ -1,12 +0,0 @@ -from django.db import models - - -class Worker(models.Model): - """Placeholder model with no database table, but with django admin page - and contenttype permission""" - - class Meta: - managed = False # not in Django's database - default_permissions = () - permissions = [['view', 'Access admin page']] - verbose_name_plural = " Workers" diff --git a/scheduler/queues.py b/scheduler/queues.py deleted file mode 100644 index c8b88b7..0000000 --- a/scheduler/queues.py +++ /dev/null @@ -1,158 +0,0 @@ -from typing import List, Dict - -import redis -from redis.sentinel import Sentinel - -from .rq_classes import JobExecution, DjangoQueue, DjangoWorker -from .settings import get_config -from .settings import logger - -_CONNECTION_PARAMS = { - 'URL', - 'DB', - 'USE_REDIS_CACHE', - 'UNIX_SOCKET_PATH', - 'HOST', - 'PORT', - 'PASSWORD', - 'SENTINELS', - 'MASTER_NAME', - 'SOCKET_TIMEOUT', - 'SSL', - 'CONNECTION_KWARGS', -} - - -class 
QueueNotFoundError(Exception): - pass - - -def _get_redis_connection(config, use_strict_redis=False): - """ - Returns a redis connection from a connection config - """ - if get_config('FAKEREDIS'): - import fakeredis - redis_cls = fakeredis.FakeRedis if use_strict_redis else fakeredis.FakeStrictRedis - else: - redis_cls = redis.StrictRedis if use_strict_redis else redis.Redis - logger.debug(f'Getting connection for {config}') - if 'URL' in config: - if config.get('SSL') or config.get('URL').startswith('rediss://'): - return redis_cls.from_url( - config['URL'], - db=config.get('DB'), - ssl_cert_reqs=config.get('SSL_CERT_REQS', 'required'), - ) - else: - return redis_cls.from_url( - config['URL'], - db=config.get('DB'), - ) - if 'UNIX_SOCKET_PATH' in config: - return redis_cls(unix_socket_path=config['UNIX_SOCKET_PATH'], db=config['DB']) - - if 'SENTINELS' in config: - connection_kwargs = { - 'db': config.get('DB'), - 'password': config.get('PASSWORD'), - 'username': config.get('USERNAME'), - 'socket_timeout': config.get('SOCKET_TIMEOUT'), - } - connection_kwargs.update(config.get('CONNECTION_KWARGS', {})) - sentinel_kwargs = config.get('SENTINEL_KWARGS', {}) - sentinel = Sentinel(config['SENTINELS'], sentinel_kwargs=sentinel_kwargs, **connection_kwargs) - return sentinel.master_for( - service_name=config['MASTER_NAME'], - redis_class=redis_cls, - ) - - return redis_cls( - host=config['HOST'], - port=config['PORT'], - db=config.get('DB', 0), - username=config.get('USERNAME', None), - password=config.get('PASSWORD'), - ssl=config.get('SSL', False), - ssl_cert_reqs=config.get('SSL_CERT_REQS', 'required'), - **config.get('REDIS_CLIENT_KWARGS', {}) - ) - - -def get_connection(queue_settings, use_strict_redis=False): - """Returns a Redis connection to use based on parameters in SCHEDULER_QUEUES - """ - return _get_redis_connection(queue_settings, use_strict_redis) - - -def get_queue( - name='default', - default_timeout=None, is_async=None, - autocommit=None, - connection=None, - **kwargs -) -> DjangoQueue: - """Returns an DjangoQueue using parameters defined in `SCHEDULER_QUEUES` - """ - from .settings import QUEUES - if name not in QUEUES: - raise QueueNotFoundError(f'Queue {name} not found, queues={QUEUES.keys()}') - queue_settings = QUEUES[name] - if is_async is None: - is_async = queue_settings.get('ASYNC', True) - - if default_timeout is None: - default_timeout = queue_settings.get('DEFAULT_TIMEOUT') - if connection is None: - connection = get_connection(queue_settings) - return DjangoQueue( - name, - default_timeout=default_timeout, - connection=connection, - is_async=is_async, - autocommit=autocommit, - **kwargs - ) - - -def get_all_workers(): - from .settings import QUEUES - workers = set() - for queue_name in QUEUES: - connection = get_connection(QUEUES[queue_name]) - try: - curr_workers = set(DjangoWorker.all(connection=connection)) - workers.update(curr_workers) - except redis.ConnectionError as e: - logger.error(f'Could not connect for queue {queue_name}: {e}') - return workers - - -def _queues_share_connection_params(q1_params: Dict, q2_params: Dict): - """Check that both queues share the same connection parameters - """ - return all( - ((p not in q1_params and p not in q2_params) - or (q1_params.get(p, None) == q2_params.get(p, None))) - for p in _CONNECTION_PARAMS) - - -def get_queues(*queue_names, **kwargs) -> List[DjangoQueue]: - """Return queue instances from specified queue names. - All instances must use the same Redis connection. 
- """ - from .settings import QUEUES - - kwargs['job_class'] = JobExecution - queue_params = QUEUES[queue_names[0]] - queues = [get_queue(queue_names[0], **kwargs)] - # perform consistency checks while building return list - for name in queue_names[1:]: - if not _queues_share_connection_params(queue_params, QUEUES[name]): - raise ValueError( - f'Queues must have the same redis connection. "{name}" and' - f' "{queue_names[0]}" have different connections') - queue = get_queue(name, **kwargs) - queues.append(queue) - - return queues diff --git a/scheduler/redis_models/__init__.py b/scheduler/redis_models/__init__.py new file mode 100644 index 0000000..2c1e269 --- /dev/null +++ b/scheduler/redis_models/__init__.py @@ -0,0 +1,33 @@ +__all__ = [ + "Result", + "ResultType", + "as_str", + "SchedulerLock", + "WorkerModel", + "DequeueTimeout", + "KvLock", + "JobStatus", + "JobModel", + "JobNamesRegistry", + "FinishedJobRegistry", + "ActiveJobRegistry", + "FailedJobRegistry", + "CanceledJobRegistry", + "ScheduledJobRegistry", + "QueuedJobRegistry", +] + +from .base import as_str +from .job import JobStatus, JobModel +from .lock import SchedulerLock, KvLock +from .registry.base_registry import DequeueTimeout, JobNamesRegistry +from .registry.queue_registries import ( + FinishedJobRegistry, + ActiveJobRegistry, + FailedJobRegistry, + CanceledJobRegistry, + ScheduledJobRegistry, + QueuedJobRegistry, +) +from .result import Result, ResultType +from .worker import WorkerModel diff --git a/scheduler/redis_models/base.py b/scheduler/redis_models/base.py new file mode 100644 index 0000000..9ff0f39 --- /dev/null +++ b/scheduler/redis_models/base.py @@ -0,0 +1,250 @@ +import dataclasses +import json +from collections.abc import Sequence +from datetime import datetime, timezone +from enum import Enum +from typing import List, Optional, Union, Dict, Collection, Any, ClassVar, Set, Type + +from redis import Redis + +from scheduler.settings import logger +from scheduler.types import ConnectionType, Self + +MAX_KEYS = 1000 + + +def as_str(v: Union[bytes, str]) -> Optional[str]: + """Converts a `bytes` value to a string using `utf-8`. 
+ + :param v: The value (None/bytes/str) + :raises: ValueError: If the value is not `bytes` or `str` + :returns: Either the decoded string or None + """ + if v is None or isinstance(v, str): + return v + if isinstance(v, bytes): + return v.decode("utf-8") + raise ValueError(f"Unknown type {type(v)} for `{v}`.") + + +def decode_dict(d: Dict[bytes, bytes], exclude_keys: Set[str]) -> Dict[str, str]: + return {k.decode(): v.decode() for (k, v) in d.items() if k.decode() not in exclude_keys} + + +def _serialize(value: Any) -> Optional[Any]: + if value is None: + return None + if isinstance(value, bool): + value = int(value) + elif isinstance(value, Enum): + value = value.value + elif isinstance(value, datetime): + value = value.isoformat() + elif isinstance(value, dict): + value = json.dumps(value) + elif isinstance(value, (int, float)): + return value + elif isinstance(value, (list, set, tuple)): + return json.dumps(value, default=str) + return str(value) + + +def _deserialize(value: str, _type: Type) -> Any: + if value is None: + return None + try: + if _type is str or _type == Optional[str]: + return as_str(value) + if _type is datetime or _type == Optional[datetime]: + return datetime.fromisoformat(as_str(value)) + elif _type is bool: + return bool(int(value)) + elif _type is int or _type == Optional[int]: + return int(value) + elif _type is float or _type == Optional[float]: + return float(value) + elif _type in {List[str], Dict[str, str]}: + return json.loads(value) + elif _type == Optional[Any]: + return json.loads(value) + elif issubclass(_type, Enum): + return _type(as_str(value)) + except (ValueError, TypeError) as e: + logger.warning(f"Failed to deserialize {value} as {_type}: {e}") + return value + + +@dataclasses.dataclass(slots=True, kw_only=True) +class BaseModel: + name: str + _element_key_template: ClassVar[str] = ":element:{}" + # fields that are not serializable using method above and should be dealt with in the subclass + # e.g. 
args/kwargs for a job + _non_serializable_fields: ClassVar[Set[str]] = set() + + @classmethod + def key_for(cls, name: str) -> str: + return cls._element_key_template.format(name) + + @property + def _key(self) -> str: + return self._element_key_template.format(self.name) + + def serialize(self, with_nones: bool = False) -> Dict[str, str]: + data = dataclasses.asdict( + self, dict_factory=lambda fields: {key: value for (key, value) in fields if not key.startswith("_")} + ) + if not with_nones: + data = {k: v for k, v in data.items() if v is not None and k not in self._non_serializable_fields} + for k in data: + data[k] = _serialize(data[k]) + return data + + @classmethod + def deserialize(cls, data: Dict[str, Any]) -> Self: + types = {f.name: f.type for f in dataclasses.fields(cls) if f.name not in cls._non_serializable_fields} + for k in data: + if k in cls._non_serializable_fields: + continue + if k not in types: + logger.warning(f"Unknown field {k} in {cls.__name__}") + continue + data[k] = _deserialize(data[k], types[k]) + return cls(**data) + + +@dataclasses.dataclass(slots=True, kw_only=True) +class HashModel(BaseModel): + created_at: Optional[datetime] = None + parent: Optional[str] = None + _dirty_fields: Set[str] = dataclasses.field(default_factory=set) # fields that were changed + _save_all: bool = True # Save all fields to broker, after init, or after delete + _list_key: ClassVar[str] = ":list_all:" + _children_key_template: ClassVar[str] = ":children:{}:" + + def __post_init__(self): + self._dirty_fields = set() + self._save_all = True + + def __setattr__(self, key, value): + if key != "_dirty_fields" and hasattr(self, "_dirty_fields"): + self._dirty_fields.add(key) + super(HashModel, self).__setattr__(key, value) + + @property + def _parent_key(self) -> Optional[str]: + if self.parent is None: + return None + return self._children_key_template.format(self.parent) + + @classmethod + def all_names(cls, connection: Redis, parent: Optional[str] = None) -> Collection[str]: + collection_key = cls._children_key_template.format(parent) if parent else cls._list_key + collection_members = connection.smembers(collection_key) + return [r.decode() for r in collection_members] + + @classmethod + def all(cls, connection: Redis, parent: Optional[str] = None) -> List[Self]: + keys = cls.all_names(connection, parent) + items = [cls.get(k, connection) for k in keys] + return [w for w in items if w is not None] + + @classmethod + def exists(cls, name: str, connection: ConnectionType) -> bool: + if name is None: + return False + return connection.exists(cls._element_key_template.format(name)) > 0 + + @classmethod + def delete_many(cls, names: List[str], connection: ConnectionType) -> None: + for name in names: + connection.delete(cls._element_key_template.format(name)) + + @classmethod + def get(cls, name: str, connection: ConnectionType) -> Optional[Self]: + res = connection.hgetall(cls._element_key_template.format(name)) + if not res: + return None + try: + return cls.deserialize(decode_dict(res, set())) + except Exception as e: + logger.warning(f"Failed to deserialize {name}: {e}") + return None + + @classmethod + def get_many(cls, names: Sequence[str], connection: ConnectionType) -> List[Self]: + pipeline = connection.pipeline() + for name in names: + pipeline.hgetall(cls._element_key_template.format(name)) + values = pipeline.execute() + return [(cls.deserialize(decode_dict(v, set())) if v else None) for v in values] + + def save(self, connection: ConnectionType) -> None: + 
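        # Annotated flow of the method below (a reading aid, not in the original diff):
        # 1. register the model name in the global list set and, when a parent is set,
        #    in the parent's children set;
        # 2. serialize every field on the first save, and only the dirty fields afterwards;
        # 3. remove hash keys whose value serialized to None, and write the rest with a
        #    single HSET.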
connection.sadd(self._list_key, self.name) + if self._parent_key is not None: + connection.sadd(self._parent_key, self.name) + mapping = self.serialize(with_nones=True) + if not self._save_all and len(self._dirty_fields) > 0: + mapping = {k: v for k, v in mapping.items() if k in self._dirty_fields} + none_values = {k for k, v in mapping.items() if v is None} + if none_values: + connection.hdel(self._key, *none_values) + mapping = {k: v for k, v in mapping.items() if v is not None} + if mapping: + connection.hset(self._key, mapping=mapping) + self._dirty_fields = set() + self._save_all = False + + def delete(self, connection: ConnectionType) -> None: + # members are stored by name in save(), so remove them by name here as well + connection.srem(self._list_key, self.name) + if self._parent_key is not None: + connection.srem(self._parent_key, self.name) + connection.delete(self._key) + self._save_all = True + + @classmethod + def count(cls, connection: ConnectionType, parent: Optional[str] = None) -> int: + if parent is not None: + result = connection.scard(cls._children_key_template.format(parent)) + else: + result = connection.scard(cls._list_key) + return result + + def get_field(self, field: str, connection: ConnectionType) -> Any: + types = {f.name: f.type for f in dataclasses.fields(self)} + res = connection.hget(self._key, field) + return _deserialize(res, types[field]) + + def set_field(self, field: str, value: Any, connection: ConnectionType, set_attribute: bool = True) -> None: + if not hasattr(self, field): + raise AttributeError(f"Field {field} does not exist") + if set_attribute: + setattr(self, field, value) + if value is None: + connection.hdel(self._key, field) + return + value = _serialize(value) + connection.hset(self._key, field, value) + + +@dataclasses.dataclass(slots=True, kw_only=True) +class StreamModel(BaseModel): + _children_key_template: ClassVar[str] = ":children:{}:" + + def __init__(self, name: str, parent: str, created_at: Optional[datetime] = None): + self.name = name + self.created_at: datetime = created_at or datetime.now(timezone.utc) + self.parent: str = parent + + @property + def _parent_key(self) -> str: + return self._children_key_template.format(self.parent) + + @classmethod + def all(cls, connection: ConnectionType, parent: str) -> List[Self]: + results = connection.xrevrange(cls._children_key_template.format(parent), "+", "-") + return [cls.deserialize(decode_dict(result[1], exclude_keys=set())) for result in results] + + def save(self, connection: ConnectionType) -> bool: + result = connection.xadd(self._parent_key, self.serialize(), maxlen=10) + return bool(result) diff --git a/scheduler/redis_models/job.py b/scheduler/redis_models/job.py new file mode 100644 index 0000000..90c5302 --- /dev/null +++ b/scheduler/redis_models/job.py @@ -0,0 +1,328 @@ +import base64 +import dataclasses +import inspect +import numbers +import pickle +from datetime import datetime +from enum import Enum +from typing import ClassVar, Dict, Optional, List, Callable, Any, Union, Tuple + +from scheduler.helpers import utils +from scheduler.helpers.callback import Callback +from scheduler.redis_models.base import HashModel, as_str +from scheduler.settings import SCHEDULER_CONFIG, logger +from scheduler.types import ConnectionType, Self, FunctionReferenceType +from .registry.base_registry import JobNamesRegistry +from ..helpers.utils import current_timestamp + + +class TimeoutFormatError(Exception): + pass + + +class JobStatus(str, Enum): + """The status of a job within its lifecycle at any given time.""" + + QUEUED = "queued" + FINISHED = "finished" +
FAILED = "failed" + STARTED = "started" + SCHEDULED = "scheduled" + STOPPED = "stopped" + CANCELED = "canceled" + + +@dataclasses.dataclass(slots=True, kw_only=True) +class JobModel(HashModel): + _list_key: ClassVar[str] = ":jobs:ALL:" + _children_key_template: ClassVar[str] = ":{}:jobs:" + _element_key_template: ClassVar[str] = ":jobs:{}" + _non_serializable_fields = {"args", "kwargs"} + + args: List[Any] + kwargs: Dict[str, str] + + queue_name: str + description: str + func_name: str + + timeout: int = SCHEDULER_CONFIG.DEFAULT_JOB_TIMEOUT + success_ttl: int = SCHEDULER_CONFIG.DEFAULT_SUCCESS_TTL + job_info_ttl: int = SCHEDULER_CONFIG.DEFAULT_JOB_TTL + status: JobStatus + created_at: datetime + meta: Dict[str, str] + at_front: bool = False + last_heartbeat: Optional[datetime] = None + worker_name: Optional[str] = None + started_at: Optional[datetime] = None + enqueued_at: Optional[datetime] = None + ended_at: Optional[datetime] = None + success_callback_name: Optional[str] = None + success_callback_timeout: int = SCHEDULER_CONFIG.CALLBACK_TIMEOUT + failure_callback_name: Optional[str] = None + failure_callback_timeout: int = SCHEDULER_CONFIG.CALLBACK_TIMEOUT + stopped_callback_name: Optional[str] = None + stopped_callback_timeout: int = SCHEDULER_CONFIG.CALLBACK_TIMEOUT + task_type: Optional[str] = None + scheduled_task_id: Optional[int] = None + + def __hash__(self): + return hash(self.name) + + def __eq__(self, other): # noqa + return isinstance(other, self.__class__) and self.name == other.name + + def __str__(self): + return f"{self.name}: {self.description}" + + def get_status(self, connection: ConnectionType) -> JobStatus: + return self.get_field("status", connection=connection) + + @property + def is_queued(self) -> bool: + return self.status == JobStatus.QUEUED + + @property + def is_canceled(self) -> bool: + return self.status == JobStatus.CANCELED + + @property + def is_failed(self) -> bool: + return self.status == JobStatus.FAILED + + @property + def func(self) -> Callable[[Any], Any]: + return utils.callable_func(self.func_name) + + @property + def is_scheduled_task(self) -> bool: + return self.scheduled_task_id is not None + + def expire(self, ttl: int, connection: ConnectionType) -> None: + """Expire the Job Model if ttl >= 0""" + if ttl == 0: + self.delete(connection=connection) + elif ttl > 0: + connection.expire(self._key, ttl) + + def persist(self, connection: ConnectionType) -> None: + connection.persist(self._key) + + def prepare_for_execution(self, worker_name: str, registry: JobNamesRegistry, connection: ConnectionType) -> None: + """Prepares the job for execution, setting the worker name, + heartbeat information, status and other metadata before execution begins. 
+ :param worker_name: The name of the worker + :param registry: The registry to add the job to + :param connection: The connection to the broker + """ + self.worker_name = worker_name + self.last_heartbeat = utils.utcnow() + self.started_at = self.last_heartbeat + self.status = JobStatus.STARTED + registry.add(connection, self.name, self.last_heartbeat.timestamp()) + self.save(connection=connection) + + def after_execution( + self, + job_info_ttl: int, + status: JobStatus, + connection: ConnectionType, + prev_registry: Optional[JobNamesRegistry] = None, + new_registry: Optional[JobNamesRegistry] = None, + ) -> None: + """After the job is executed, update the status, heartbeat, and other metadata.""" + self.status = status + self.ended_at = utils.utcnow() + self.last_heartbeat = self.ended_at + if prev_registry is not None: + prev_registry.delete(connection, self.name) + if new_registry is not None and job_info_ttl != 0: + new_registry.add(connection, self.name, current_timestamp() + job_info_ttl) + self.save(connection=connection) + + @property + def failure_callback(self) -> Optional[Callback]: + if self.failure_callback_name is None: + return None + logger.debug(f"Running failure callbacks for {self.name}") + return Callback(self.failure_callback_name, self.failure_callback_timeout) + + @property + def success_callback(self) -> Optional[Callable[..., Any]]: + if self.success_callback_name is None: + return None + logger.debug(f"Running success callbacks for {self.name}") + return Callback(self.success_callback_name, self.success_callback_timeout) + + @property + def stopped_callback(self) -> Optional[Callable[..., Any]]: + if self.stopped_callback_name is None: + return None + logger.debug(f"Running stopped callbacks for {self.name}") + return Callback(self.stopped_callback_name, self.stopped_callback_timeout) + + def get_call_string(self): + return _get_call_string(self.func_name, self.args, self.kwargs) + + def serialize(self, with_nones: bool = False) -> Dict[str, str]: + """Serialize the job model to a dictionary.""" + res = super(JobModel, self).serialize(with_nones=with_nones) + res["args"] = base64.encodebytes(pickle.dumps(self.args)).decode("utf-8") + res["kwargs"] = base64.encodebytes(pickle.dumps(self.kwargs)).decode("utf-8") + return res + + @classmethod + def deserialize(cls, data: Dict[str, Any]) -> Self: + """Deserialize the job model from a dictionary.""" + res = super(JobModel, cls).deserialize(data) + res.args = pickle.loads(base64.decodebytes(data.get("args").encode("utf-8"))) + res.kwargs = pickle.loads(base64.decodebytes(data.get("kwargs").encode("utf-8"))) + return res + + @classmethod + def create( + cls, + connection: ConnectionType, + func: FunctionReferenceType, + queue_name: str, + args: Union[List[Any], Optional[Tuple]] = None, + kwargs: Optional[Dict[str, Any]] = None, + result_ttl: Optional[int] = None, + job_info_ttl: Optional[int] = None, + status: Optional[JobStatus] = None, + description: Optional[str] = None, + timeout: Optional[int] = None, + name: Optional[str] = None, + task_type: Optional[str] = None, + scheduled_task_id: Optional[int] = None, + meta: Optional[Dict[str, Any]] = None, + *, + on_success: Optional[Callback] = None, + on_failure: Optional[Callback] = None, + on_stopped: Optional[Callback] = None, + at_front: Optional[bool] = None, + ) -> Self: + """Creates a new job-model for the given function, arguments, and keyword arguments. + :returns: A job-model instance.
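For illustration, a minimal sketch (redis_conn, the "default" queue, and the callable path are assumed):

    job = JobModel.create(
        connection=redis_conn,
        func="myapp.jobs.send_report",
        queue_name="default",
        args=[2024],
        kwargs={"fmt": "pdf"},
        timeout="5m",  # parsed to 300 seconds
        status=JobStatus.QUEUED,
    )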
+ """ + args = args or [] + kwargs = kwargs or {} + timeout = _parse_timeout(timeout) or SCHEDULER_CONFIG.DEFAULT_JOB_TIMEOUT + if timeout == 0: + raise ValueError("0 timeout is not allowed. Use -1 for infinite timeout") + job_info_ttl = _parse_timeout(job_info_ttl if job_info_ttl is not None else SCHEDULER_CONFIG.DEFAULT_JOB_TTL) + result_ttl = _parse_timeout(result_ttl) + if not isinstance(args, (tuple, list)): + raise TypeError(f"{args!r} is not a valid args list") + if not isinstance(kwargs, dict): + raise TypeError(f"{kwargs!r} is not a valid kwargs dict") + if on_success and not isinstance(on_success, Callback): + raise ValueError("on_success must be a Callback object") + if on_failure and not isinstance(on_failure, Callback): + raise ValueError("on_failure must be a Callback object") + if on_stopped and not isinstance(on_stopped, Callback): + raise ValueError("on_stopped must be a Callback object") + if name is not None and JobModel.exists(name, connection=connection): + raise ValueError(f"Job with name {name} already exists") + if name is None: + date_str = utils.utcnow().strftime("%Y%m%d%H%M%S%f") + name = f"{queue_name}:{scheduled_task_id or ''}:{date_str}" + + if inspect.ismethod(func): + _func_name = func.__name__ + + elif inspect.isfunction(func) or inspect.isbuiltin(func): + _func_name = f"{func.__module__}.{func.__qualname__}" + elif isinstance(func, str): + _func_name = as_str(func) + elif not inspect.isclass(func) and hasattr(func, "__call__"): # a callable class instance + _func_name = "__call__" + else: + raise TypeError(f"Expected a callable or a string, but got: {func}") + description = description or _get_call_string(func, args or [], kwargs or {}, max_length=75) + job_info_ttl = job_info_ttl if job_info_ttl is not None else SCHEDULER_CONFIG.DEFAULT_JOB_TTL + model = JobModel( + created_at=utils.utcnow(), + name=name, + queue_name=queue_name, + description=description, + func_name=_func_name, + args=args or [], + kwargs=kwargs or {}, + at_front=at_front, + task_type=task_type, + scheduled_task_id=scheduled_task_id, + success_callback_name=on_success.name if on_success else None, + success_callback_timeout=on_success.timeout if on_success else None, + failure_callback_name=on_failure.name if on_failure else None, + failure_callback_timeout=on_failure.timeout if on_failure else None, + stopped_callback_name=on_stopped.name if on_stopped else None, + stopped_callback_timeout=on_stopped.timeout if on_stopped else None, + success_ttl=result_ttl, + job_info_ttl=job_info_ttl, + timeout=timeout, + status=status, + last_heartbeat=None, + meta=meta or {}, + worker_name=None, + enqueued_at=None, + started_at=None, + ended_at=None, + ) + model.save(connection=connection) + return model + + +def _get_call_string( + func_name: Optional[str], args: Any, kwargs: Dict[Any, Any], max_length: Optional[int] = None +) -> Optional[str]: + """ + Returns a string representation of the call, formatted as a regular + Python function invocation statement. If max_length is not None, truncate + arguments with representation longer than max_length. 
+ + :param func_name: The function name + :param args: The function arguments + :param kwargs: The function kwargs + :param max_length: The max length of the return string + :return: A string representation of the function call + """ + if func_name is None: + return None + + arg_list = [as_str(_truncate_long_string(repr(arg), max_length)) for arg in args] + + list_kwargs = [f"{k}={as_str(_truncate_long_string(repr(v), max_length))}" for k, v in kwargs.items()] + arg_list += sorted(list_kwargs) + args = ", ".join(arg_list) + + return f"{func_name}({args})" + + +def _truncate_long_string(data: str, max_length: Optional[int] = None) -> str: + """Truncate arguments with representation longer than max_length""" + if max_length is None: + return data + return (data[:max_length] + "...") if len(data) > max_length else data + + +def _parse_timeout(timeout: Union[int, float, str]) -> int: + """Transfer all kinds of timeout format to an integer representing seconds""" + if not isinstance(timeout, numbers.Integral) and timeout is not None: + try: + timeout = int(timeout) + except ValueError: + digit, unit = timeout[:-1], (timeout[-1:]).lower() + unit_second = {"d": 86400, "h": 3600, "m": 60, "s": 1} + try: + timeout = int(digit) * unit_second[unit] + except (ValueError, KeyError): + raise TimeoutFormatError( + "Timeout must be an integer or a string representing an integer, or " + 'a string with format: digits + unit, unit can be "d", "h", "m", "s", ' + 'such as "1h", "23m".' + ) + + return timeout diff --git a/scheduler/redis_models/lock.py b/scheduler/redis_models/lock.py new file mode 100644 index 0000000..aa060f0 --- /dev/null +++ b/scheduler/redis_models/lock.py @@ -0,0 +1,36 @@ +from typing import Optional, Any + +from scheduler.types import ConnectionType + + +class KvLock(object): + def __init__(self, name: str) -> None: + self.name = name + self.acquired = False + + @property + def _locking_key(self) -> str: + return f"_lock:{self.name}" + + def acquire(self, val: Any, connection: ConnectionType, expire: Optional[int] = None) -> bool: + self.acquired = connection.set(self._locking_key, val, nx=True, ex=expire) + return self.acquired + + def expire(self, connection: ConnectionType, expire: Optional[int] = None) -> bool: + return connection.expire(self._locking_key, expire) + + def release(self, connection: ConnectionType): + connection.delete(self._locking_key) + + def value(self, connection: ConnectionType) -> Any: + return connection.get(self._locking_key) + + +class SchedulerLock(KvLock): + def __init__(self, queue_name: str) -> None: + super().__init__(f"lock:scheduler:{queue_name}") + + +class QueueLock(KvLock): + def __init__(self, queue_name: str) -> None: + super().__init__(f"queue:{queue_name}") diff --git a/scheduler/redis_models/registry/__init__.py b/scheduler/redis_models/registry/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/scheduler/redis_models/registry/base_registry.py b/scheduler/redis_models/registry/base_registry.py new file mode 100644 index 0000000..af9b211 --- /dev/null +++ b/scheduler/redis_models/registry/base_registry.py @@ -0,0 +1,118 @@ +import dataclasses +from collections.abc import Sequence +from typing import ClassVar, Optional, List, Tuple, Any + +from scheduler.helpers.utils import current_timestamp +from scheduler.redis_models.base import as_str, BaseModel +from scheduler.settings import logger +from scheduler.types import ConnectionType, Self + + +class DequeueTimeout(Exception): + pass + + +@dataclasses.dataclass(slots=True, 
kw_only=True) +class ZSetModel(BaseModel): + def cleanup(self, connection: ConnectionType, timestamp: Optional[float] = None) -> None: + """Remove expired jobs from registry.""" + score = timestamp or current_timestamp() + connection.zremrangebyscore(self._key, 0, score) + + def count(self, connection: ConnectionType) -> int: + """Returns the number of jobs in this registry""" + self.cleanup(connection=connection) + return connection.zcard(self._key) + + def add(self, connection: ConnectionType, job_name: str, score: float, update_existing_only: bool = False) -> int: + return connection.zadd(self._key, {job_name: float(score)}, xx=update_existing_only) + + def delete(self, connection: ConnectionType, job_name: str) -> None: + connection.zrem(self._key, job_name) + + +class JobNamesRegistry(ZSetModel): + _element_key_template: ClassVar[str] = ":registry:{}" + + def __init__(self, connection: ConnectionType, name: str) -> None: + super().__init__(name=name) + self.connection = connection + + def __len__(self) -> int: + return self.count(self.connection) + + def __contains__(self, item: str) -> bool: + return self.connection.zrank(self._key, item) is not None + + def all(self, start: int = 0, end: int = -1) -> List[str]: + """Returns list of all job names. + + :param start: Start index, defaults to 0. + :param end: End index (inclusive), defaults to -1 (i.e., the last element). + :returns: List of all job names between start and end + """ + self.cleanup(self.connection) + res = [as_str(job_name) for job_name in self.connection.zrange(self._key, start, end)] + logger.debug(f"Getting jobs for registry {self._key}: {len(res)} found.") + return res + + def all_with_timestamps(self, start: int = 0, end: int = -1) -> List[Tuple[str, float]]: + """Returns list of all job names with their timestamps. + + :param start: Start index, defaults to 0. + :param end: End index (inclusive), defaults to -1 (i.e., the last element). + :returns: List of all job names with their timestamp/score between start and end + """ + self.cleanup(self.connection) + res = self.connection.zrange(self._key, start, end, withscores=True) + logger.debug(f"Getting jobs for registry {self._key}: {len(res)} found.") + return [(as_str(job_name), timestamp) for job_name, timestamp in res] + + def get_first(self) -> Optional[str]: + """Returns the first job in the registry.""" + self.cleanup(self.connection) + first_job = self.connection.zrange(self._key, 0, 0) + return first_job[0].decode() if first_job else None + + def get_last_timestamp(self) -> Optional[float]: + """Returns the last timestamp in the registry.""" + self.cleanup(self.connection) + last_timestamp = self.connection.zrange(self._key, -1, -1, withscores=True) + return last_timestamp[0][1] if last_timestamp else None + + @property + def key(self) -> str: + return self._key + + @classmethod + def pop( + cls, connection: ConnectionType, registries: Sequence[Self], timeout: Optional[int] + ) -> Tuple[Optional[str], Optional[str]]: + """Helper method to abstract away from some Redis API details + + :param connection: Broker connection + :param registries: List of registries to pop from + :param timeout: Timeout in seconds + :raises ValueError: If timeout of 0 was passed + :raises DequeueTimeout: BZPOPMIN timeout + :returns: Tuple of registry key and job name + """ + if timeout == 0: + raise ValueError("Indefinite timeout not supported.
Please pick a timeout value > 0") + registry_keys = [r.key for r in registries] + if timeout is not None: # blocking variant + registries_str = ", ".join(registry_keys) + logger.debug(f"Starting BZPOPMIN operation for queues {registries_str} with timeout of {timeout}") + result = connection.bzpopmin(registry_keys, timeout) + if not result: + logger.debug(f"BZPOPMIN timeout, no jobs found on queues {registries_str}") + raise DequeueTimeout(timeout, registry_keys) + registry_key, job_name, timestamp = result + return as_str(registry_key), as_str(job_name) + else: # non-blocking variant + for registry_key in registry_keys: + results: List[Any] = connection.zpopmin(registry_key) + if results: + job_name, timestamp = results[0] + return as_str(registry_key), as_str(job_name) + return None, None diff --git a/scheduler/redis_models/registry/queue_registries.py b/scheduler/redis_models/registry/queue_registries.py new file mode 100644 index 0000000..9a0d87e --- /dev/null +++ b/scheduler/redis_models/registry/queue_registries.py @@ -0,0 +1,117 @@ +import time +from datetime import datetime, timedelta, timezone +from typing import ClassVar, Optional, List, Tuple + +from scheduler.helpers.utils import current_timestamp +from scheduler.types import ConnectionType +from .base_registry import JobNamesRegistry +from .. import as_str +from ..job import JobModel + + +class QueuedJobRegistry(JobNamesRegistry): + _element_key_template: ClassVar[str] = ":registry:{}:queued_jobs" + + def cleanup(self, connection: ConnectionType, timestamp: Optional[float] = None) -> None: + """No-op. This method is only here to prevent errors, since it is called automatically by the `count()` + and `all()` methods implemented in JobNamesRegistry.""" + pass + + def compact(self) -> None: + """Removes all "dead" jobs from the queue by cycling through it, while guaranteeing FIFO semantics.""" + jobs_with_ts = self.all_with_timestamps() + for job_name, timestamp in jobs_with_ts: + if not JobModel.exists(job_name, self.connection): + self.delete(connection=self.connection, job_name=job_name) + + def empty(self) -> None: + queued_jobs_count = self.count(connection=self.connection) + with self.connection.pipeline() as pipe: + for offset in range(0, queued_jobs_count, 1000): + job_names = self.all(offset, offset + 999) + for job_name in job_names: + self.delete(connection=pipe, job_name=job_name) + JobModel.delete_many(job_names, connection=pipe) + pipe.execute() + + +class FinishedJobRegistry(JobNamesRegistry): + _element_key_template: ClassVar[str] = ":registry:{}:finished_jobs" + + +class FailedJobRegistry(JobNamesRegistry): + _element_key_template: ClassVar[str] = ":registry:{}:failed_jobs" + + +class CanceledJobRegistry(JobNamesRegistry): + _element_key_template: ClassVar[str] = ":registry:{}:canceled_jobs" + + def cleanup(self, connection: ConnectionType, timestamp: Optional[float] = None) -> None: + """No-op. This method is only here to prevent errors, since it is called automatically by the `count()` + and `all()` methods implemented in JobNamesRegistry.""" + pass + + +class ScheduledJobRegistry(JobNamesRegistry): + _element_key_template: ClassVar[str] = ":registry:{}:scheduled_jobs" + + def cleanup(self, connection: ConnectionType, timestamp: Optional[float] = None) -> None: + """No-op. This method is only here to prevent errors, since it is called automatically by the `count()` + and `all()` methods implemented in JobNamesRegistry.""" + pass + + def schedule(self, connection: ConnectionType,
diff --git a/scheduler/redis_models/registry/queue_registries.py b/scheduler/redis_models/registry/queue_registries.py new file mode 100644 index 0000000..9a0d87e --- /dev/null +++ b/scheduler/redis_models/registry/queue_registries.py @@ -0,0 +1,117 @@
+import time
+from datetime import datetime, timedelta, timezone
+from typing import ClassVar, Optional, List, Tuple
+
+from scheduler.helpers.utils import current_timestamp
+from scheduler.types import ConnectionType
+from .base_registry import JobNamesRegistry
+from .. import as_str
+from ..job import JobModel
+
+
+class QueuedJobRegistry(JobNamesRegistry):
+    _element_key_template: ClassVar[str] = ":registry:{}:queued_jobs"
+
+    def cleanup(self, connection: ConnectionType, timestamp: Optional[float] = None) -> None:
+        """No-op. Exists only because `count()` and `all()`, implemented in
+        JobNamesRegistry, call `cleanup()` automatically."""
+        pass
+
+    def compact(self) -> None:
+        """Removes all "dead" jobs from the queue by cycling through it, while guaranteeing FIFO semantics."""
+        jobs_with_ts = self.all_with_timestamps()
+        for job_name, timestamp in jobs_with_ts:
+            if not JobModel.exists(job_name, self.connection):
+                self.delete(connection=self.connection, job_name=job_name)
+
+    def empty(self) -> None:
+        queued_jobs_count = self.count(connection=self.connection)
+        with self.connection.pipeline() as pipe:
+            for offset in range(0, queued_jobs_count, 1000):
+                job_names = self.all(offset, offset + 999)  # zrange indices are inclusive
+                for job_name in job_names:
+                    self.delete(connection=pipe, job_name=job_name)
+                JobModel.delete_many(job_names, connection=pipe)
+            pipe.execute()
+
+
+class FinishedJobRegistry(JobNamesRegistry):
+    _element_key_template: ClassVar[str] = ":registry:{}:finished_jobs"
+
+
+class FailedJobRegistry(JobNamesRegistry):
+    _element_key_template: ClassVar[str] = ":registry:{}:failed_jobs"
+
+
+class CanceledJobRegistry(JobNamesRegistry):
+    _element_key_template: ClassVar[str] = ":registry:{}:canceled_jobs"
+
+    def cleanup(self, connection: ConnectionType, timestamp: Optional[float] = None) -> None:
+        """No-op. Exists only because `count()` and `all()`, implemented in
+        JobNamesRegistry, call `cleanup()` automatically."""
+        pass
+
+
+class ScheduledJobRegistry(JobNamesRegistry):
+    _element_key_template: ClassVar[str] = ":registry:{}:scheduled_jobs"
+
+    def cleanup(self, connection: ConnectionType, timestamp: Optional[float] = None) -> None:
+        """No-op. Exists only because `count()` and `all()`, implemented in
+        JobNamesRegistry, call `cleanup()` automatically."""
+        pass
+
+    def schedule(self, connection: ConnectionType, job_name: str, scheduled_datetime: datetime) -> int:
+        """Adds job_name to the registry, scored by its execution time (in UTC).
+        If the datetime has no tzinfo, the server's local timezone is assumed.
+
+        :param connection: Broker connection
+        :param job_name: Job name to schedule
+        :param scheduled_datetime: datetime at which to schedule the job
+        """
+        # If datetime has no timezone, assume server's local timezone
+        if not scheduled_datetime.tzinfo:
+            tz = timezone(timedelta(seconds=-(time.timezone if time.daylight == 0 else time.altzone)))
+            scheduled_datetime = scheduled_datetime.replace(tzinfo=tz)
+
+        timestamp = scheduled_datetime.timestamp()
+        return self.add(connection=connection, job_name=job_name, score=timestamp)
+
+    def get_jobs_to_schedule(self, timestamp: int, chunk_size: int = 1000) -> List[str]:
+        """Gets a list of job names that should be scheduled.
+
+        :param timestamp: Maximum timestamp/score of jobs to return.
+        :param chunk_size: Max results to return.
+        :returns: A list of job names
+        """
+        jobs_to_schedule = self.connection.zrangebyscore(self._key, min=0, max=timestamp, start=0, num=chunk_size)
+        return [as_str(job_name) for job_name in jobs_to_schedule]
+
+    def get_scheduled_time(self, job_name: str) -> Optional[datetime]:
+        """Returns the datetime (UTC) at which the job is scheduled to be enqueued.
+
+        :param job_name: Job name
+        :returns: The scheduled time as a datetime object, or None if the job is not found
+        """
+
+        score: Optional[float] = self.connection.zscore(self._key, job_name)
+        if score is None:
+            return None
+
+        return datetime.fromtimestamp(score, tz=timezone.utc)
+
+
+class ActiveJobRegistry(JobNamesRegistry):
+    """Registry of currently executing jobs. Each queue maintains an ActiveJobRegistry."""
+
+    _element_key_template: ClassVar[str] = ":registry:{}:active"
+
+    def get_job_names_before(self, connection: ConnectionType, timestamp: Optional[float]) -> List[Tuple[str, float]]:
+        """Returns names of jobs whose score is lower than the given timestamp.
+
+        Returns names for jobs with an expiry time earlier than timestamp,
+        specified as seconds since the Unix epoch.
+        timestamp defaults to call time if unspecified.
+        """
+        score = timestamp or current_timestamp()
+        jobs_before = connection.zrangebyscore(self._key, 0, score, withscores=True)
+        return [(as_str(job_name), job_score) for (job_name, job_score) in jobs_before]
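Taken together, `schedule()` writes the job into the sorted set with its UTC timestamp as the score, and `get_jobs_to_schedule()` reads back everything whose score is at or below a cutoff. A minimal usage sketch, assuming a local Redis; the queue name and job name are illustrative only:

    from datetime import datetime, timedelta, timezone

    from redis import Redis

    from scheduler.helpers.utils import current_timestamp
    from scheduler.redis_models.registry.queue_registries import ScheduledJobRegistry

    connection = Redis()
    registry = ScheduledJobRegistry(connection=connection, name="default")

    # Score the job by its execution time, five minutes from now.
    run_at = datetime.now(timezone.utc) + timedelta(minutes=5)
    registry.schedule(connection, job_name="job-1234", scheduled_datetime=run_at)

    # A scheduler loop would periodically collect everything that is due.
    due_jobs = registry.get_jobs_to_schedule(timestamp=current_timestamp())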
+ """ + score = timestamp or current_timestamp() + jobs_before = connection.zrangebyscore(self._key, 0, score, withscores=True) + return [(as_str(job_name), score) for (job_name, score) in jobs_before] diff --git a/scheduler/redis_models/result.py b/scheduler/redis_models/result.py new file mode 100644 index 0000000..a89af18 --- /dev/null +++ b/scheduler/redis_models/result.py @@ -0,0 +1,67 @@ +import dataclasses +from datetime import datetime +from enum import Enum +from typing import Optional, Any, ClassVar, List + +from scheduler.helpers.utils import utcnow +from scheduler.redis_models.base import StreamModel, decode_dict +from scheduler.types import ConnectionType, Self + + +class ResultType(Enum): + SUCCESSFUL = "successful" + FAILED = "failed" + STOPPED = "stopped" + + +@dataclasses.dataclass(slots=True, kw_only=True) +class Result(StreamModel): + parent: str + type: ResultType + worker_name: str + ttl: Optional[int] = 0 + name: Optional[str] = None + created_at: datetime = dataclasses.field(default_factory=utcnow) + return_value: Optional[Any] = None + exc_string: Optional[str] = None + + _list_key: ClassVar[str] = ":job-results:" + _children_key_template: ClassVar[str] = ":job-results:{}:" + _element_key_template: ClassVar[str] = ":job-results:{}" + + @classmethod + def create( + cls, + connection: ConnectionType, + job_name: str, + worker_name: str, + _type: ResultType, + ttl: int, + return_value: Any = None, + exc_string: Optional[str] = None, + ) -> Self: + result = cls( + parent=job_name, + ttl=ttl, + type=_type, + return_value=return_value, + exc_string=exc_string, + worker_name=worker_name, + ) + result.save(connection) + return result + + @classmethod + def fetch_latest(cls, connection: ConnectionType, job_name: str) -> Optional["Result"]: + """Returns the latest result for given job_name. + + :param connection: Broker connection. + :param job_name: Job name. + :return: Result instance or None if no result is available. 
+ """ + response: List[Any] = connection.xrevrange(cls._children_key_template.format(job_name), "+", "-", count=1) + if not response: + return None + result_id, payload = response[0] + res = cls.deserialize(decode_dict(payload, set())) + return res diff --git a/scheduler/redis_models/worker.py b/scheduler/redis_models/worker.py new file mode 100644 index 0000000..5d31600 --- /dev/null +++ b/scheduler/redis_models/worker.py @@ -0,0 +1,121 @@ +import dataclasses +from datetime import datetime +from enum import Enum +from typing import List, Optional, ClassVar, Any, Generator + +from scheduler.helpers.utils import utcnow +from scheduler.redis_models.base import HashModel, MAX_KEYS +from scheduler.settings import logger +from scheduler.types import ConnectionType, Self + +DEFAULT_WORKER_TTL = 420 + + +class WorkerStatus(str, Enum): + CREATED = "created" + STARTING = "starting" + STARTED = "started" + SUSPENDED = "suspended" + BUSY = "busy" + IDLE = "idle" + + +@dataclasses.dataclass(slots=True, kw_only=True) +class WorkerModel(HashModel): + name: str + queue_names: List[str] + pid: int + hostname: str + ip_address: str + version: str + python_version: str + state: WorkerStatus + job_execution_process_pid: int = 0 + successful_job_count: int = 0 + failed_job_count: int = 0 + completed_jobs: int = 0 + birth: Optional[datetime] = None + last_heartbeat: Optional[datetime] = None + is_suspended: bool = False + current_job_name: Optional[str] = None + stopped_job_name: Optional[str] = None + total_working_time_ms: float = 0.0 + current_job_working_time: float = 0 + last_cleaned_at: Optional[datetime] = None + shutdown_requested_date: Optional[datetime] = None + has_scheduler: bool = False + death: Optional[datetime] = None + + _list_key: ClassVar[str] = ":workers:ALL:" + _children_key_template: ClassVar[str] = ":queue-workers:{}:" + _element_key_template: ClassVar[str] = ":workers:{}" + + def save(self, connection: ConnectionType) -> None: + pipeline = connection.pipeline() + super(WorkerModel, self).save(pipeline) + for queue_name in self.queue_names: + pipeline.sadd(self._children_key_template.format(queue_name), self.name) + pipeline.expire(self._key, DEFAULT_WORKER_TTL + 60) + pipeline.execute() + + def delete(self, connection: ConnectionType) -> None: + logger.debug(f"Deleting worker {self.name}") + pipeline = connection.pipeline() + now = utcnow() + self.death = now + pipeline.hset(self._key, "death", now.isoformat()) + pipeline.expire(self._key, 60) + pipeline.srem(self._list_key, self.name) + for queue_name in self.queue_names: + pipeline.srem(self._children_key_template.format(queue_name), self.name) + pipeline.execute() + + def __eq__(self, other: Self) -> bool: + if not isinstance(other, self.__class__): + raise TypeError("Cannot compare workers to other types (of workers)") + return self._key == other._key + + def __hash__(self): + """The hash does not take the database/connection into account""" + return hash((self._key, ",".join(self.queue_names))) + + def set_current_job_working_time(self, job_execution_time: int, connection: ConnectionType) -> None: + self.set_field("current_job_working_time", job_execution_time, connection=connection) + + def heartbeat(self, connection: ConnectionType, timeout: Optional[int] = None) -> None: + timeout = timeout or DEFAULT_WORKER_TTL + 60 + connection.expire(self._key, timeout) + now = utcnow() + self.set_field("last_heartbeat", now, connection=connection) + logger.debug(f"Next heartbeat for worker {self._key} should arrive in {timeout} seconds.") + 
diff --git a/scheduler/rq_classes.py b/scheduler/rq_classes.py deleted file mode 100644 index d8b9238..0000000 --- a/scheduler/rq_classes.py +++ /dev/null @@ -1,255 +0,0 @@
-from typing import List, Any, Optional, Union
-
-import django
-from django.apps import apps
-from redis import Redis
-from redis.client import Pipeline
-from rq import Worker
-from rq.command import send_stop_job_command
-from rq.decorators import job
-from rq.exceptions import InvalidJobOperation
-from rq.job import Job, JobStatus
-from rq.job import get_current_job # noqa
-from rq.queue import Queue, logger
-from rq.registry import (
-    DeferredJobRegistry, FailedJobRegistry, FinishedJobRegistry,
-    ScheduledJobRegistry, StartedJobRegistry, CanceledJobRegistry, BaseRegistry,
-)
-from rq.scheduler import RQScheduler
-from rq.worker import WorkerStatus
-
-from scheduler import settings
-
-MODEL_NAMES = ['ScheduledTask', 'RepeatableTask', 'CronTask']
-
-rq_job_decorator = job
-ExecutionStatus = JobStatus
-InvalidJobOperation = InvalidJobOperation
-
-
-def as_text(v: Union[bytes, str]) -> Optional[str]:
-    """Converts a bytes value to a string using `utf-8`.
-
-    :param v: The value (bytes or string)
-    :raises: ValueError: If the value is not bytes or string
-    :returns: Either the decoded string or None
-    """
-    if v is None:
-        return None
-    elif isinstance(v, bytes):
-        return v.decode('utf-8')
-    elif isinstance(v, str):
-        return v
-    else:
-        raise ValueError('Unknown type %r' % type(v))
-
-
-def compact(lst: List[Any]) -> List[Any]:
-    """Remove `None` values from an iterable object.
- :param lst: A list (or list-like) object - :returns: The list without None values - """ - return [item for item in lst if item is not None] - - -class JobExecution(Job): - def __eq__(self, other): - return isinstance(other, Job) and self.id == other.id - - @property - def is_scheduled_task(self): - return self.meta.get('scheduled_task_id', None) is not None - - def is_execution_of(self, scheduled_job): - return (self.meta.get('task_type', None) == scheduled_job.TASK_TYPE - and self.meta.get('scheduled_task_id', None) == scheduled_job.id) - - def stop_execution(self, connection: Redis): - send_stop_job_command(connection, self.id) - - -class DjangoWorker(Worker): - def __init__(self, *args, **kwargs): - self.fork_job_execution = kwargs.pop('fork_job_execution', True) - kwargs['job_class'] = JobExecution - kwargs['queue_class'] = DjangoQueue - super(DjangoWorker, self).__init__(*args, **kwargs) - - def __eq__(self, other): - return (isinstance(other, Worker) - and self.key == other.key - and self.name == other.name) - - def __hash__(self): - return hash((self.name, self.key, ','.join(self.queue_names()))) - - def __str__(self): - return f"{self.name}/{','.join(self.queue_names())}" - - def _start_scheduler( - self, - burst: bool = False, - logging_level: str = "INFO", - date_format: str = '%H:%M:%S', - log_format: str = '%(asctime)s %(message)s', - ) -> None: - """Starts the scheduler process. - This is specifically designed to be run by the worker when running the `work()` method. - Instantiates the DjangoScheduler and tries to acquire a lock. - If the lock is acquired, start scheduler. - If worker is on burst mode just enqueues scheduled jobs and quits, - otherwise, starts the scheduler in a separate process. - - - :param burst (bool, optional): Whether to work on burst mode. Defaults to False. - :param logging_level (str, optional): Logging level to use. Defaults to "INFO". - :param date_format (str, optional): Date Format. Defaults to DEFAULT_LOGGING_DATE_FORMAT. - :param log_format (str, optional): Log Format. Defaults to DEFAULT_LOGGING_FORMAT. 
- """ - self.scheduler = DjangoScheduler( - self.queues, - connection=self.connection, - logging_level=logging_level, - date_format=date_format, - log_format=log_format, - serializer=self.serializer, - ) - self.scheduler.acquire_locks() - if self.scheduler.acquired_locks: - if burst: - self.scheduler.enqueue_scheduled_jobs() - self.scheduler.release_locks() - else: - proc = self.scheduler.start() - self._set_property('scheduler_pid', proc.pid) - - def execute_job(self, job: 'Job', queue: 'Queue'): - if self.fork_job_execution: - super(DjangoWorker, self).execute_job(job, queue) - else: - self.set_state(WorkerStatus.BUSY) - self.perform_job(job, queue) - self.set_state(WorkerStatus.IDLE) - - def work(self, **kwargs) -> bool: - kwargs.setdefault('with_scheduler', True) - return super(DjangoWorker, self).work(**kwargs) - - def _set_property(self, prop_name: str, val, pipeline: Optional[Pipeline] = None): - connection = pipeline if pipeline is not None else self.connection - if val is None: - connection.hdel(self.key, prop_name) - else: - connection.hset(self.key, prop_name, val) - - def _get_property(self, prop_name: str, pipeline: Optional[Pipeline] = None): - connection = pipeline if pipeline is not None else self.connection - return as_text(connection.hget(self.key, prop_name)) - - def scheduler_pid(self) -> Optional[int]: - if len(self.queues) == 0: - logger.warning("No queues to get scheduler pid from") - return None - pid = self.connection.get(DjangoScheduler.get_locking_key(self.queues[0].name)) - return int(pid.decode()) if pid is not None else None - - -class DjangoQueue(Queue): - REGISTRIES = dict( - finished='finished_job_registry', - failed='failed_job_registry', - scheduled='scheduled_job_registry', - started='started_job_registry', - deferred='deferred_job_registry', - canceled='canceled_job_registry', - ) - """ - A subclass of RQ's QUEUE that allows jobs to be stored temporarily to be - enqueued later at the end of Django's request/response cycle. 
- """ - - def __init__(self, *args, **kwargs): - kwargs['job_class'] = JobExecution - super(DjangoQueue, self).__init__(*args, **kwargs) - - def get_registry(self, name: str) -> Union[None, BaseRegistry, 'DjangoQueue']: - name = name.lower() - if name == 'queued': - return self - elif name in DjangoQueue.REGISTRIES: - return getattr(self, DjangoQueue.REGISTRIES[name]) - return None - - @property - def finished_job_registry(self): - return FinishedJobRegistry(self.name, self.connection) - - @property - def started_job_registry(self): - return StartedJobRegistry(self.name, self.connection, job_class=JobExecution, ) - - @property - def deferred_job_registry(self): - return DeferredJobRegistry(self.name, self.connection, job_class=JobExecution, ) - - @property - def failed_job_registry(self): - return FailedJobRegistry(self.name, self.connection, job_class=JobExecution, ) - - @property - def scheduled_job_registry(self): - return ScheduledJobRegistry(self.name, self.connection, job_class=JobExecution, ) - - @property - def canceled_job_registry(self): - return CanceledJobRegistry(self.name, self.connection, job_class=JobExecution, ) - - def get_all_job_ids(self) -> List[str]: - res = list() - res.extend(self.get_job_ids()) - res.extend(self.finished_job_registry.get_job_ids()) - res.extend(self.started_job_registry.get_job_ids()) - res.extend(self.deferred_job_registry.get_job_ids()) - res.extend(self.failed_job_registry.get_job_ids()) - res.extend(self.scheduled_job_registry.get_job_ids()) - res.extend(self.canceled_job_registry.get_job_ids()) - return res - - def get_all_jobs(self): - job_ids = self.get_all_job_ids() - return compact([self.fetch_job(job_id) for job_id in job_ids]) - - def clean_registries(self): - self.started_job_registry.cleanup() - self.failed_job_registry.cleanup() - self.finished_job_registry.cleanup() - - def remove_job_id(self, job_id: str): - self.connection.lrem(self.key, 0, job_id) - - def last_job_id(self): - return self.connection.lindex(self.key, 0) - - -class DjangoScheduler(RQScheduler): - def __init__(self, *args, **kwargs): - kwargs.setdefault('interval', settings.SCHEDULER_CONFIG['SCHEDULER_INTERVAL']) - super(DjangoScheduler, self).__init__(*args, **kwargs) - - @staticmethod - def reschedule_all_jobs(): - for model_name in MODEL_NAMES: - model = apps.get_model(app_label='scheduler', model_name=model_name) - enabled_jobs = model.objects.filter(enabled=True) - unscheduled_jobs = filter(lambda j: j.ready_for_schedule(), enabled_jobs) - for item in unscheduled_jobs: - logger.debug(f"Rescheduling {str(item)}") - item.save() - - def work(self): - django.setup() - super(DjangoScheduler, self).work() - - def enqueue_scheduled_jobs(self): - self.reschedule_all_jobs() - super(DjangoScheduler, self).enqueue_scheduled_jobs() diff --git a/scheduler/settings.py b/scheduler/settings.py index 57c254a..405c5ab 100644 --- a/scheduler/settings.py +++ b/scheduler/settings.py @@ -1,43 +1,58 @@ import logging +from typing import List, Dict from django.conf import settings from django.core.exceptions import ImproperlyConfigured -logger = logging.getLogger(__package__) +from scheduler.types import SchedulerConfiguration, QueueConfiguration -QUEUES = dict() -SCHEDULER_CONFIG = dict() +logger = logging.getLogger("scheduler") +logging.basicConfig(level=logging.DEBUG) +_QUEUES: Dict[str, QueueConfiguration] = dict() +SCHEDULER_CONFIG: SchedulerConfiguration = SchedulerConfiguration() -def _token_validation(token: str) -> bool: - return False + +class QueueNotFoundError(Exception): + 
pass def conf_settings(): - global QUEUES + global _QUEUES global SCHEDULER_CONFIG - QUEUES = getattr(settings, 'SCHEDULER_QUEUES', None) - if QUEUES is None: - logger.warning('Configuration using RQ_QUEUES is deprecated. Use SCHEDULER_QUEUES instead') - QUEUES = getattr(settings, 'RQ_QUEUES', None) - if QUEUES is None: - raise ImproperlyConfigured("You have to define SCHEDULER_QUEUES in settings.py") - - SCHEDULER_CONFIG = { - 'EXECUTIONS_IN_PAGE': 20, - 'DEFAULT_RESULT_TTL': 600, # 10 minutes - 'DEFAULT_TIMEOUT': 300, # 5 minutes - 'SCHEDULER_INTERVAL': 10, # 10 seconds - 'FAKEREDIS': False, # For testing purposes - 'TOKEN_VALIDATION_METHOD': _token_validation, # Access stats from another application using API tokens - } - user_settings = getattr(settings, 'SCHEDULER_CONFIG', {}) - SCHEDULER_CONFIG.update(user_settings) + app_queues = getattr(settings, "SCHEDULER_QUEUES", None) + if app_queues is None or not isinstance(app_queues, dict): + raise ImproperlyConfigured("You have to define SCHEDULER_QUEUES in settings.py as dict") + + for queue_name, queue_config in app_queues.items(): + if isinstance(queue_config, QueueConfiguration): + _QUEUES[queue_name] = queue_config + elif isinstance(queue_config, dict): + _QUEUES[queue_name] = QueueConfiguration(**queue_config) + else: + raise ImproperlyConfigured(f"Queue {queue_name} configuration should be a QueueConfiguration or dict") + + user_settings = getattr(settings, "SCHEDULER_CONFIG", {}) + if isinstance(user_settings, SchedulerConfiguration): + SCHEDULER_CONFIG = user_settings # type: ignore + return + if not isinstance(user_settings, dict): + raise ImproperlyConfigured("SCHEDULER_CONFIG should be a SchedulerConfiguration or dict") + for k, v in user_settings.items(): + if k not in SCHEDULER_CONFIG.__annotations__: + raise ImproperlyConfigured(f"Unknown setting {k} in SCHEDULER_CONFIG") + setattr(SCHEDULER_CONFIG, k, v) conf_settings() -def get_config(key: str, default=None): - return SCHEDULER_CONFIG.get(key, None) +def get_queue_names() -> List[str]: + return list(_QUEUES.keys()) + + +def get_queue_configuration(queue_name: str) -> QueueConfiguration: + if queue_name not in _QUEUES: + raise QueueNotFoundError(f"Queue {queue_name} not found, queues={_QUEUES.keys()}") + return _QUEUES[queue_name] diff --git a/scheduler/static/admin/js/select-fields.js b/scheduler/static/admin/js/select-fields.js new file mode 100644 index 0000000..50ed38f --- /dev/null +++ b/scheduler/static/admin/js/select-fields.js @@ -0,0 +1,27 @@ +(function ($) { + $(function () { + const tasktypes = { + "CronTaskType": $(".tasktype-CronTaskType"), + "RepeatableTaskType": $(".tasktype-RepeatableTaskType"), + "OnceTaskType": $(".tasktype-OnceTaskType"), + }; + var taskTypeField = $('#id_task_type'); + + function toggleVerified(value) { + console.log(value); + for (const [k, v] of Object.entries(tasktypes)) { + if (k === value) { + v.show(); + } else { + v.hide(); + } + } + } + + toggleVerified(taskTypeField.val()); + + taskTypeField.change(function () { + toggleVerified($(this).val()); + }); + }); +})(django.jQuery); \ No newline at end of file diff --git a/scheduler/templates/admin/scheduler/confirm_action.html b/scheduler/templates/admin/scheduler/confirm_action.html index c61b8bf..69dd45c 100644 --- a/scheduler/templates/admin/scheduler/confirm_action.html +++ b/scheduler/templates/admin/scheduler/confirm_action.html @@ -22,7 +22,7 @@
    {% for job in jobs %}
  • - {{ job.id }} + {{ job.name }} {{ job | show_func_name }}
  • {% endfor %} @@ -31,7 +31,7 @@ {% csrf_token %}
    {% for job in jobs %} - + {% endfor %} diff --git a/scheduler/templates/admin/scheduler/job_detail.html b/scheduler/templates/admin/scheduler/job_detail.html index 6892844..5263c73 100644 --- a/scheduler/templates/admin/scheduler/job_detail.html +++ b/scheduler/templates/admin/scheduler/job_detail.html @@ -8,15 +8,15 @@ HomeQueues{{ queue.name }} › - {{ job.id }} + {{ job.name }}
    {% endblock %} {% block content_title %} -

    Job {{ job.id }} +

    Job {{ job.name }} {% if job.is_scheduled_task %} - Link to scheduled job + Link to scheduled job {% endif %}

    @@ -24,123 +24,136 @@

    Job {{ job.id }} {% block content %}
    -
    +
    -
    {{ job.origin }}
    +
    {{ job.queue_name }}
    +
    +
    +
    + +
    + {% if data_is_valid %} + {{ job.func_name }}( + {% if job.args %} + {% for arg in job.args %} + {{ arg|force_escape }}, + {% endfor %} + {% endif %} + {% for key, value in job.kwargs.items %} + {{ key }}={{ value|force_escape }}, + {% endfor %}) + {% else %} + Unpickling Error + {% endif %} +
    +
    +
    + +
    {{ job | show_func_name }}
    +
    -
    - -
    {{ job.timeout }}
    -
    - -
    - -
    {{ job.result_ttl }}
    -
    - -
    - -
    {{ job.created_at|date:"Y-m-d, H:i:s"|default:"-" }}
    -
    - - -
    - -
    {{ job.enqueued_at|date:"Y-m-d, H:i:s"|default:"-" }}
    -
    - -
    - -
    {{ job.started_at|date:"Y-m-d, H:i:s"|default:"-" }}
    +
    + +
    + {% for k in job.meta %} +
    + +
    {{ job.meta | get_item:k }}
    +
    + {% endfor %} +
    +
    +
    -
    - -
    {{ job.ended_at|date:"Y-m-d, H:i:s"|default:"-" }}
    +
    +
    +
    + +
    {{ job.timeout }}
    +
    +
    + +
    {{ job.success_ttl }}
    +
    +
    +
    +
    +

Job queue interaction

    +
    +
    +
    + +
    {{ job.status.value | capfirst }}
    +
    +
    + +
    {{ job.created_at|date:"Y-m-d, H:i:s"|default:"-" }}
    +
    +
    + +
    {{ job.enqueued_at|date:"Y-m-d, H:i:s"|default:"-" }}
    +
    -
    - -
    {{ job.get_status }}
    -
    +
    + +
    {{ job.started_at|date:"Y-m-d, H:i:s"|default:"-" }}
    +
    -
    - -
    - {% if data_is_valid %} - {{ job.func_name }}( - {% if job.args %} - {% for arg in job.args %} - {{ arg|force_escape }}, - {% endfor %} - {% endif %} - {% for key, value in job.kwargs.items %} - {{ key }}={{ value|force_escape }}, - {% endfor %}) - {% else %} - Unpickling Error - {% endif %} +
    + +
    {{ job.ended_at|date:"Y-m-d, H:i:s"|default:"-" }}
    +
    -
    - -
    {{ job | show_func_name }}
    -
    - -
    - -
    - {% for k in job.meta %} -
    - -
    {{ job.meta | get_item:k }}
    -
    - {% endfor %} +
    +
    +

    Last result

    +
    +
    + +
    {{ last_result.created_at|date:"Y-m-d, H:i:s" }}
    -
    - - -
    -
    - +
    +
    - {% if dependency_id %} - - {{ dependency_id }} - + {% if last_result.worker_name %} + {{ last_result.worker_name }} + {% else %} + - {% endif %}
    - {% if exc_info %} + + {% if last_result.exc_string %}
    -
    {% if job.exc_info %}{{ job.exc_info|linebreaks }}{% endif %}
    +
    {{ last_result.exc_string|default:"-"|linebreaks }}
    -
    {% endif %}
    - -
    {{ job.result | default:'-' }}
    + +
    +
    {{ last_result.return_value|default:'-'|linebreaks }}
    +
    - -
    {% if job.is_started %} - -
    - - {% for result in job.results %} -

    Result {{ result.id }}

    - + {% endif %}
    {% endfor %}
    -
    {% endblock %} diff --git a/scheduler/templates/admin/scheduler/jobs-list-with-tasks.partial.html b/scheduler/templates/admin/scheduler/jobs-list-with-tasks.partial.html new file mode 100644 index 0000000..8b5499a --- /dev/null +++ b/scheduler/templates/admin/scheduler/jobs-list-with-tasks.partial.html @@ -0,0 +1,76 @@ +{% load scheduler_tags i18n %} +{% if not add %} +
    +

    Job executions

    +
    + + + + + + + + + + + + + + + + {% for exec in executions %} + + + + + + + + + + + + {% endfor %} + +
IDScheduled TaskStatusCreated atEnqueued atStarted atRan forWorker nameResult
    + {{ exec.name }} + + {% if exec.scheduled_task_id %} + + {{ exec|job_scheduled_task }} + + {% endif %} + + {{ exec|job_status }} + + {{ exec.created_at|date:"Y-m-d, H:i:s"|default:"-" }} + + {{ exec.enqueued_at|date:"Y-m-d, H:i:s"|default:"-" }} + + {{ exec.started_at|date:"Y-m-d, H:i:s"|default:"-" }} + + {{ exec|job_runtime }} + + {{ exec.worker_name|default:"-" }} + + {{ exec|job_result|default:"-" }} +
    +
    +

    + {% if pagination_required %} + {% for i in page_range %} + {% if i == executions.paginator.ELLIPSIS %} + {{ executions.paginator.ELLIPSIS }} + {% elif i == executions.number %} + {{ i }} + {% else %} + {{ i }} + {% endif %} + {% endfor %} + {{ executions.paginator.count }} {% blocktranslate count counter=executions.paginator.count %}entry + {% plural %}entries{% endblocktranslate %} + {% endif %} +
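The `page_range` iterated above, with its ELLIPSIS sentinel, matches the output of Django's paginator. A sketch of how the view might build this context; the function name and variables are assumptions, only `Paginator.get_elided_page_range` and the old `EXECUTIONS_IN_PAGE` default of 20 come from Django and this package:

    from django.core.paginator import Paginator

    def executions_context(job_names, page_number: int):
        # 20 executions per page mirrors the EXECUTIONS_IN_PAGE default.
        paginator = Paginator(job_names, per_page=20)
        page = paginator.get_page(page_number)
        return {
            "executions": page,
            "pagination_required": paginator.num_pages > 1,
            # get_elided_page_range yields page numbers plus Paginator.ELLIPSIS.
            "page_range": paginator.get_elided_page_range(page.number),
        }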

    +
    +{% endif %} \ No newline at end of file diff --git a/scheduler/templates/admin/scheduler/jobs-list.partial.html b/scheduler/templates/admin/scheduler/jobs-list.partial.html index 8186242..b3d7bb4 100644 --- a/scheduler/templates/admin/scheduler/jobs-list.partial.html +++ b/scheduler/templates/admin/scheduler/jobs-list.partial.html @@ -20,7 +20,12 @@

    Job executions

    {% for exec in executions %} - {{ exec.id }} + {{ exec.name }} + {% if exec.scheduled_task_id %} + + Go to scheduled task + + {% endif %} {{ exec|job_status }} diff --git a/scheduler/templates/admin/scheduler/jobs.html b/scheduler/templates/admin/scheduler/jobs.html index 5651d82..7fb9855 100644 --- a/scheduler/templates/admin/scheduler/jobs.html +++ b/scheduler/templates/admin/scheduler/jobs.html @@ -25,13 +25,13 @@
    -
    + {% csrf_token %}
    {% endblock %} -{% block content_title %}

    Workers in {{ queue.name }}

    {% endblock %} +{% block content_title %}

    Queue {{ queue.name }} workers

    {% endblock %} {% block content %} diff --git a/scheduler/templates/admin/scheduler/single_job_action.html b/scheduler/templates/admin/scheduler/single_job_action.html index 53f9089..b6adad5 100644 --- a/scheduler/templates/admin/scheduler/single_job_action.html +++ b/scheduler/templates/admin/scheduler/single_job_action.html @@ -6,7 +6,7 @@ HomeQueues{{ queue.name }} › - {{ job.id }} › + {{ job.name }} › Delete
    {% endblock %} @@ -18,8 +18,8 @@

    Are you sure you want to {{ action }} - - {{ job.id }} ({{ job|show_func_name }}) + + {{ job.name }} ({{ job|show_func_name }}) from {{ queue.name }}? diff --git a/scheduler/templates/admin/scheduler/stats.html b/scheduler/templates/admin/scheduler/stats.html index 369e3a5..bb7e41c 100644 --- a/scheduler/templates/admin/scheduler/stats.html +++ b/scheduler/templates/admin/scheduler/stats.html @@ -9,7 +9,7 @@ } {% endblock %} -{% block content_title %}

    RQ Queues

    {% endblock %} +{% block content_title %}

Task Queues

    {% endblock %} {% block breadcrumbs %}