
Dadosfera library for data processing and analysis


Dadosfera Library

The Dadosfera Library is a Python library that provides tools and utilities for data processing and analysis, including functionality for sending emails, manipulating data, and integrating with a variety of services.

Features

The library offers a range of features for data processing and analysis:

  • EmailSender: Send emails with support for multiple recipients, attachments, and HTML/plain-text bodies
  • Data Processing: Tools for data manipulation and transformation
  • Integrations: Connectors for a variety of services and APIs
  • Data Analysis: Utilities for data analysis and visualization
  • Machine Learning: Support for ML models and ML data preparation

For the complete list of features and detailed documentation, see our official documentation.

Installation

Via pip

pip install dadosfera

Via Poetry

poetry add dadosfera

Usage

Basic Example

from dadosfera import EmailSender

# Configure the sender
sender = EmailSender(
    smtp_server="smtp.gmail.com",
    port=587,
    from_email="your@email.com",
    password="your_password",
    use_ssl=True
)

# Create the email message
message = sender.create_message(
    to_email="recipient@email.com",
    subject="Test",
    body="Hello, this is a test!",
    mimetype="html"  # or "plain" for plain text
)

# Send the email
sender.send_email("recipient@email.com", message)
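Under the hood, the html/plain choice corresponds to the MIME subtype of the message body. A minimal sketch of building such a message with Python's standard library (illustrative only; this is not the Dadosfera EmailSender API, and `build_message` is a hypothetical helper):

```python
from email.mime.text import MIMEText

def build_message(to_email: str, subject: str, body: str, mimetype: str = "plain") -> MIMEText:
    """Build a MIME message; mimetype is "plain" or "html"."""
    msg = MIMEText(body, mimetype)  # sets Content-Type: text/<mimetype>
    msg["To"] = to_email
    msg["Subject"] = subject
    return msg

msg = build_message("recipient@email.com", "Test", "<b>Hello!</b>", mimetype="html")
print(msg.get_content_type())  # text/html
```

A sender like the one above would then hand the serialized message to an SMTP connection (e.g. `smtplib`).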

For more examples and use cases, see our official documentation.

Development

Prerequisites

  • Python 3.6.2 or later
  • Docker and Docker Compose
  • Poetry (for dependency management)

Environment Setup

  1. Clone the repository:
git clone https://github.com/dadosfera/dadosfera-library.git
cd dadosfera-library
  2. Install the dependencies:
poetry install

Running the Tests

  1. Set up the environment variables:
cp test.env.example test.env
# Edit the test.env file with your settings
  2. Run the tests:
docker-compose -f test.local.docker-compose.yml build
docker-compose -f test.local.docker-compose.yml run test_suite

Contributing

  1. Fork the project
  2. Create a feature branch (git checkout -b feature/nova-feature)
  3. Commit your changes (git commit -m 'feat: adiciona nova feature')
  4. Push to the branch (git push origin feature/nova-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Writing Tests for dadosfera-lib

General Guidelines:

  • Using pytest: All tests should be written using the pytest framework, which provides many helpful features out of the box and is the preferred testing tool for this library.

  • Using Mocks: Whenever possible, try to use mocks in your tests. This helps in isolating the functionality and ensures that you're testing only the part of the code that you intend to.

  • Test Structure: The structure of the tests should mirror the structure of the library. This makes it easier to locate tests and understand which parts of the codebase they correspond to.

Example:

If the library structure is:

dadosfera
├── components
│   ├── brand.py
│   ├── hamburger_menu.py

Then, the corresponding test structure should be:

tests
├── components
│   ├── test_brand.py
│   ├── test_hamburger_menu.py

pytest Hints:

  • Test Functions: Every test function should start with test_. This is how pytest recognizes which methods to run as tests. For instance: test_function_name.

  • Test Modules: Similarly, modules (python files) containing tests should also start with test_. This allows pytest to recognize and collect them.

  • Fixtures: If you're using fixtures to set up some recurring prerequisites for your tests, the fixture function should start with fixture_ and use the name argument to specify a more intuitive fixture name.

    Example:

    @pytest.fixture(name='database_connection')
    def fixture_database_connection():
        # setup logic (e.g. open the connection)
        connection = ...
        yield connection
        # teardown logic (e.g. close the connection)
        ...
    
    

    When referencing the fixture in your tests, you should use its given name:

    def test_database_functionality(database_connection):
        # Your test logic here
        ...
    
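As a concrete instance of this pattern, here is a sketch using an in-memory SQLite database (hypothetical example code, not part of dadosfera-lib):

```python
# tests/test_database.py -- module name starts with test_ so pytest collects it
import sqlite3

import pytest

@pytest.fixture(name="database_connection")
def fixture_database_connection():
    # setup: open a throwaway in-memory database
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (name TEXT)")
    yield connection
    # teardown: close the connection after the test finishes
    connection.close()

def test_insert_and_count(database_connection):
    database_connection.execute("INSERT INTO users VALUES ('ada')")
    count = database_connection.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
```

Each test gets a fresh connection, and teardown runs even if the test fails.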

In Summary:

When writing tests for the dadosfera-lib, ensure that they are clear, concise, and thoroughly cover the functionality you're testing. By following the conventions and guidelines outlined above, you'll ensure that your tests are not only effective but also well-organized and easy to navigate.
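Putting the naming conventions together, a minimal test module might look like this (the `add` function is a hypothetical stand-in for library code):

```python
# tests/test_math_utils.py -- the test_ prefix on the module and on each
# function is what lets pytest discover and collect them automatically.

def add(a, b):
    # Hypothetical function under test, shown here for illustration
    return a + b

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_is_commutative():
    assert add(2, 3) == add(3, 2)
```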


Running Tests for dadosfera-lib

Pre-requisites:

  • Docker and Docker Compose should be installed on your machine.
  • You should have cloned the dadosfera-lib repository to your local system.

Instructions:

  1. Setting up Environment Variables:

Before you can run the tests, the environment variables required for the test suite need to be set up. For your convenience, an example .env file is provided.

  • Navigate to the root directory of dadosfera-lib.

  • Copy test.env.example to test.env:

cp test.env.example test.env
  • Open the test.env file with your favorite text editor. Replace <openai_api_key> with your actual OpenAI API key:
OPENAI_API_KEY=your_actual_openai_api_key
  • Save and close the file.
  2. Building the Docker Container:

Build the Docker container with the provided Docker configuration. From the root directory of dadosfera-lib, execute:

docker-compose -f test.local.docker-compose.yml build
  3. Running the Test Suite:

After building the Docker container, run the test suite using:

docker-compose -f test.local.docker-compose.yml run test_suite

The tests will then run, and the terminal will report each test's progress and pass/fail status.

Notes:

  • Ensure you provide a valid OpenAI API key in the test.env for successful test execution.
  • The test suite operates inside a Docker container to guarantee a consistent test environment.
  • Due to the utilization of Docker volumes, changes made to the code are immediately reflected inside the container. Thus, there's no need to rebuild the Docker image after every change. However, if you've added new dependencies or libraries, rebuilding the image will be necessary.
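The live-reload behaviour described in the last note comes from mounting the working copy into the container. A hypothetical sketch of what the relevant compose service might look like (the service name matches the commands above, but the paths and image details are illustrative, not the project's actual test.local.docker-compose.yml):

```yaml
services:
  test_suite:
    build: .
    env_file:
      - test.env
    volumes:
      # Mount the source tree over the image's copy so code edits
      # are visible in the container without rebuilding the image
      - .:/app
    command: pytest tests/
```

New dependencies still require a rebuild because they are installed into the image at build time, not read from the mounted volume.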



Download files

Source Distribution

dadosfera-1.22.0.tar.gz (40.0 kB)

Built Distribution

dadosfera-1.22.0-py3-none-any.whl (51.8 kB)

File details

dadosfera-1.22.0.tar.gz

  • Size: 40.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.8.10

Hashes for dadosfera-1.22.0.tar.gz

Algorithm   Hash digest
SHA256      3ca9d8f68815ecf1a05c0d59f8fee8e1b9f33b412024b1073a415f1b021ede8d
MD5         23fd9a345c59e26eef42391eac0d2f92
BLAKE2b-256 2cf94fccb07fecf905078d0837367894442c3dfd6ad1afedcf5a46aaec437761

dadosfera-1.22.0-py3-none-any.whl

  • Size: 51.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.8.10

Hashes for dadosfera-1.22.0-py3-none-any.whl

Algorithm   Hash digest
SHA256      325b4ce57f324a1ed786b1fb0bd682174841012abf21bc0ff33a3e308b52fde2
MD5         eb2ff1cabaaed68b824552faf75dd0cf
BLAKE2b-256 cac9e7b216bb72b39b39d8a6ebfecd8d7b0c4b2262d331143b9f13c8c04c1d1a
