
LLM agent to facilitate the generation of unit test cases.

Project description

langunittest: Your Command-line LLM agent for unit test cases.

Problem

With clippy, you can generate unit test cases for Python modules with ease.

Take the following math.py module as an example:

def add(a: float, b: float) -> float:
    return a + b

def div(a: float, b: float) -> float:
    if b == 0.0:
        raise ValueError('Invalid divisor!')
    return a / b

You can generate unit test cases using the following command:

$ pack math.py | clippy 'Create unit test cases for "math.py".'

Example AI response:

import unittest
from math import add, div

class TestMathFunctions(unittest.TestCase):

    # Test cases for add()
    def test_add_positive_numbers(self):
        self.assertAlmostEqual(add(1, 2), 3)

    def test_add_negative_numbers(self):
        self.assertAlmostEqual(add(-1, -2), -3)

    def test_add_mixed_numbers(self):
        self.assertAlmostEqual(add(-1, 2), 1)
        self.assertAlmostEqual(add(1, -2), -1)

    def test_add_with_zero(self):
        self.assertAlmostEqual(add(5, 0), 5)
        self.assertAlmostEqual(add(0, 0), 0)

    def test_add_float_numbers(self):
        self.assertAlmostEqual(add(0.1, 0.2), 0.3)
        self.assertAlmostEqual(add(1.5, 2.5), 4.0)

    # Test cases for div()
    def test_div_positive_numbers(self):
        self.assertAlmostEqual(div(6, 2), 3)

    def test_div_negative_numbers(self):
        self.assertAlmostEqual(div(-6, -2), 3)

    def test_div_by_zero(self):
        with self.assertRaisesRegex(ValueError, 'Invalid divisor!'):
            div(10, 0)
        with self.assertRaisesRegex(ValueError, 'Invalid divisor!'):
            div(0, 0)

if __name__ == '__main__':
    unittest.main(argv=['first-arg-is-ignored'], exit=False)
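One detail worth noting in the response above: the float cases use `assertAlmostEqual` rather than `assertEqual`. Binary floating point cannot represent values such as 0.1 exactly, so an exact comparison would fail:

```python
# assertEqual would fail for the 0.1 + 0.2 case, because the sum carries
# a tiny binary rounding error; assertAlmostEqual compares to 7 decimal
# places by default and absorbs it.
print(0.1 + 0.2)                         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                  # False
print(round((0.1 + 0.2) - 0.3, 7) == 0)  # True: assertAlmostEqual's check
```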

Current Limitations: Despite the convenience, this approach comes with several drawbacks:

  • ⚠️ The generated test cases are printed to the console and must be copied manually into a file.
  • ❌ The test cases may contain errors and are not always executable out of the box.
  • 🧠 There’s no memory or iterative refinement; you cannot adjust or fine-tune the generated tests interactively.
  • 📉 Test coverage of the generated code is not reported or optimized.

Introducing langunittest

To address these limitations, we propose langunittest, a module designed to provide an interactive agent that works with you to generate, refine, and validate unit tests more efficiently.

Key benefits include:

  • 💾 Output saved directly to files—no copy-paste needed.
  • ✅ Automatically verified executable test cases.
  • 🔄 Interactive back-and-forth refinement with memory and state.
  • 📈 Built-in test coverage analysis to ensure sufficient coverage.

With langunittest, the goal is to generate reliable, high-quality, and maintainable unit test suites through an iterative and intelligent workflow.
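To illustrate the coverage idea, here is a minimal sketch using the standard-library `trace` module. How langunittest actually measures coverage is not documented here; this only shows how line coverage can reveal an untested branch.

```python
import trace

def div(a: float, b: float) -> float:
    if b == 0.0:
        raise ValueError('Invalid divisor!')
    return a / b

# Count which lines execute while running div() on the happy path only.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(div, 6, 2)
counts = tracer.results().counts  # {(filename, lineno): hit count}

hit = {lineno for (filename, lineno) in counts
       if filename == div.__code__.co_filename}
first = div.__code__.co_firstlineno
print(first + 1 in hit)  # True: the `if` line ran
print(first + 2 in hit)  # False: the `raise` branch is uncovered
```

A coverage-aware agent can use exactly this kind of signal to propose an additional test (here, a division-by-zero case) until every branch is exercised.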

Usage

Below we demonstrate how to use langunittest to create unit test cases.

First, start the agent's REPL with the following command:

$ python -m langunittest.agents.agent
...
Starting enterprise_assistant agent. Type 'exit' to quit.
You:
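Under the hood this is a plain read-eval-print loop around the agent. A minimal sketch, assuming a canned `respond` callable in place of the real LLM backend (the agent name and response logic are assumptions; only the `exit` behaviour is taken from the transcript):

```python
def repl(respond, agent_name="programming_assistant"):
    """Toy REPL: read a prompt, hand it to `respond`, print the reply."""
    print(f"Starting {agent_name} agent. Type 'exit' to quit.")
    while True:
        user = input("You: ")
        if user.strip().lower() == "exit":
            print("Exiting agent.")
            break
        print(f"{agent_name}: {respond(user)}")

# Example with an echo responder instead of a real LLM:
# repl(lambda prompt: f"(echo) {prompt}")
```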

Now we can interact with the agent to write test cases:

  1. Set the project root path by entering the prompt Set project root path as /tmp:
You: Set project root path as /tmp
...
programming_assistant: Project root path has been set to `/tmp`. How can I help you?

Note: replace the root path /tmp with your own project path as needed.

  2. Show the source code of the target module:
You: Show me the source code of utils/my_math.py
...
'''python
def add(a: int, b: int) -> int:
  """Sums up the input `a` and `b`.

  Args:
    a: First value to add
    b: Second value to add

  Returns:
    Return value of `a + b`.
  """
  return a + b
'''
  3. Ask the agent to create tests:
You: Please create test cases for it.
...
----------------------------------------------------------------------
Ran 4 tests in 0.000s

I've created the following test cases for `utils/my_math.py`:
'''python
import unittest
from utils.my_math import add

class TestMyMath(unittest.TestCase):

    def test_add_positive_numbers(self): # Test adding two positive numbers
        self.assertEqual(add(1, 2), 3)

    def test_add_negative_numbers(self): # Test adding two negative numbers
        self.assertEqual(add(-1, -2), -3)

    def test_add_mixed_numbers(self): # Test adding a positive and a negative number
        self.assertEqual(add(1, -2), -1)

    def test_add_zero(self): # Test adding zero to a number
        self.assertEqual(add(0, 5), 5)

if __name__ == '__main__':
    unittest.main()
'''
  4. Ask the agent to add a new test case:
You: Please add new test case with input 1 and 99.
...
Ran 5 tests in 0.000s

OK
...
'''python
import unittest
from utils.my_math import add

class TestMyMath(unittest.TestCase):
    ...
    def test_add_large_positive_numbers(self): # Test adding 1 and 99
        self.assertEqual(add(1, 99), 100)

if __name__ == '__main__':
    unittest.main()
'''

And here are the results of running all the tests:
'''
--- Custom Test Output ---
test_add_large_positive_numbers (tests.unit.utils.test_my_math.TestMyMath.test_add_large_positive_numbers) ... ok
test_add_mixed_numbers (tests.unit.utils.test_my_math.TestMyMath.test_add_mixed_numbers) ... ok
test_add_negative_numbers (tests.unit.utils.test_my_math.TestMyMath.test_add_negative_numbers) ... ok
test_add_positive_numbers (tests.unit.utils.test_my_math.TestMyMath.test_add_positive_numbers) ... ok
test_add_zero (tests.unit.utils.test_my_math.TestMyMath.test_add_zero) ... ok
==========
Overall Result: OK (All tests passed) ✅
'''
All tests passed successfully!
  5. Exit the REPL:
You: exit
Exiting agent.
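The walkthrough above leaves the generated tests on disk under the project root, where they can be run with standard unittest discovery. A hypothetical end-to-end check of that layout (file contents abridged; paths mirror the walkthrough, but everything is built in a temporary directory so the snippet is self-contained):

```python
import os, subprocess, sys, tempfile, textwrap

root = tempfile.mkdtemp()

# Recreate the walkthrough's layout: utils/my_math.py plus a test file.
os.makedirs(os.path.join(root, "utils"))
open(os.path.join(root, "utils", "__init__.py"), "w").close()
with open(os.path.join(root, "utils", "my_math.py"), "w") as f:
    f.write("def add(a: int, b: int) -> int:\n    return a + b\n")

os.makedirs(os.path.join(root, "tests"))
open(os.path.join(root, "tests", "__init__.py"), "w").close()
with open(os.path.join(root, "tests", "test_my_math.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import unittest
        from utils.my_math import add

        class TestMyMath(unittest.TestCase):
            def test_add_positive_numbers(self):
                self.assertEqual(add(1, 2), 3)
    """))

# Run unittest discovery from the project root, as the agent does:
proc = subprocess.run(
    [sys.executable, "-m", "unittest", "discover", "-v",
     "-s", "tests", "-t", "."],
    cwd=root, capture_output=True, text=True,
)
print(proc.stderr)  # unittest reports on stderr: "... ok" and "OK"
```

Running discovery with the top-level directory set to the project root is what makes the `from utils.my_math import add` import in the generated test resolve correctly.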

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

langunittest-0.1.2.tar.gz (18.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

langunittest-0.1.2-py3-none-any.whl (19.2 kB)

Uploaded Python 3

File details

Details for the file langunittest-0.1.2.tar.gz.

File metadata

  • Download URL: langunittest-0.1.2.tar.gz
  • Upload date:
  • Size: 18.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.9 Linux/6.12.32-1rodete1-amd64

File hashes

Hashes for langunittest-0.1.2.tar.gz
Algorithm Hash digest
SHA256 7a5b19396bfe021496f0818ae4f5a65456fa56c378ddfdc6266b7e496d54ec6f
MD5 a13f28567b9e35f6983618db7f35fe5a
BLAKE2b-256 7a8639a109de7c3f76733e4662f53ca11ec599347fb3da26e074e0cb53bfe448

See more details on using hashes here.

File details

Details for the file langunittest-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: langunittest-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 19.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.9 Linux/6.12.32-1rodete1-amd64

File hashes

Hashes for langunittest-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 af32328da12faef7b6fc5cad1d7dafcb7e4a74ace8e0000485d2e3798437d919
MD5 6081bc97f4a7bd2733b2cb3730612b1a
BLAKE2b-256 e241e9ad5c2b2bb4fd895b7acfb4a0db64a00820bbabb0df4ab85f8e55da860a

See more details on using hashes here.
