<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>PyPI recent updates for aitutor-assessmentkit</title>
    <link>https://pypi.org/project/aitutor-assessmentkit/</link>
    <description>Recent updates to the Python Package Index for aitutor-assessmentkit</description>
    <language>en</language>
    <item>
      <title>0.1.8</title>
      <link>https://pypi.org/project/aitutor-assessmentkit/0.1.8/</link>
      <description>AITutor-AssessmentKit is the first open-source toolkit designed to evaluate the pedagogical performance of AI tutors in student mistake remediation tasks. With the growing capabilities of large language models (LLMs), this library provides a systematic approach to assess their teaching potential across multiple dimensions in educational dialogues.</description>
      <author>Kaushal.Maurya@mbzuai.ac.ae</author>
      <pubDate>Mon, 16 Dec 2024 05:23:57 GMT</pubDate>
    </item>
    <item>
      <title>0.1.7</title>
      <link>https://pypi.org/project/aitutor-assessmentkit/0.1.7/</link>
      <description>AITutor-AssessmentKit is the first open-source toolkit designed to evaluate the pedagogical performance of AI tutors in student mistake remediation tasks. With the growing capabilities of large language models (LLMs), this library provides a systematic approach to assess their teaching potential across multiple dimensions in educational dialogues.</description>
      <author>Kaushal.Maurya@mbzuai.ac.ae</author>
      <pubDate>Mon, 16 Dec 2024 05:15:43 GMT</pubDate>
    </item>
    <item>
      <title>0.1.6</title>
      <link>https://pypi.org/project/aitutor-assessmentkit/0.1.6/</link>
      <description>AITutor-AssessmentKit is the first open-source toolkit designed to evaluate the pedagogical performance of AI tutors in student mistake remediation tasks. With the growing capabilities of large language models (LLMs), this library provides a systematic approach to assess their teaching potential across multiple dimensions in educational dialogues.</description>
      <author>Kaushal.Maurya@mbzuai.ac.ae</author>
      <pubDate>Mon, 16 Dec 2024 05:09:56 GMT</pubDate>
    </item>
    <item>
      <title>0.1.5</title>
      <link>https://pypi.org/project/aitutor-assessmentkit/0.1.5/</link>
      <description>AITutor-AssessmentKit is the first open-source toolkit designed to evaluate the pedagogical performance of AI tutors in student mistake remediation tasks. With the growing capabilities of large language models (LLMs), this library provides a systematic approach to assess their teaching potential across multiple dimensions in educational dialogues.</description>
      <author>Kaushal.Maurya@mbzuai.ac.ae</author>
      <pubDate>Mon, 16 Dec 2024 04:57:11 GMT</pubDate>
    </item>
    <item>
      <title>0.1.4</title>
      <link>https://pypi.org/project/aitutor-assessmentkit/0.1.4/</link>
      <description>AITutor-AssessmentKit is the first open-source toolkit designed to evaluate the pedagogical performance of AI tutors in student mistake remediation tasks. With the growing capabilities of large language models (LLMs), this library provides a systematic approach to assess their teaching potential across multiple dimensions in educational dialogues.</description>
      <author>Kaushal.Maurya@mbzuai.ac.ae</author>
      <pubDate>Mon, 16 Dec 2024 04:54:12 GMT</pubDate>
    </item>
    <item>
      <title>0.1.2</title>
      <link>https://pypi.org/project/aitutor-assessmentkit/0.1.2/</link>
      <description>Welcome to the `AITutor-AssessmentKit`! With the remarkable advancements in large language models (LLMs), there is growing interest in using these models as AI-powered tutors. However, the field lacks robust evaluation methodologies and tools to systematically assess the pedagogical capabilities of such systems. The `AITutor-AssessmentKit` is the first open-source toolkit specifically designed to evaluate the pedagogical performance of AI tutors in *Student Mistake Remediation* tasks.</description>
      <author>Kaushal.Maurya@mbzuai.ac.ae</author>
      <pubDate>Sun, 15 Dec 2024 21:45:21 GMT</pubDate>
    </item>
    <item>
      <title>0.1.1</title>
      <link>https://pypi.org/project/aitutor-assessmentkit/0.1.1/</link>
      <description>Welcome to the `AITutor-AssessmentKit`! With the remarkable advancements in large language models (LLMs), there is growing interest in using these models as AI-powered tutors. However, the field lacks robust evaluation methodologies and tools to systematically assess the pedagogical capabilities of such systems. The `AITutor-AssessmentKit` is the first open-source toolkit specifically designed to evaluate the pedagogical performance of AI tutors in *Student Mistake Remediation* tasks.</description>
      <author>Kaushal.Maurya@mbzuai.ac.ae</author>
      <pubDate>Sun, 15 Dec 2024 21:37:55 GMT</pubDate>
    </item>
  </channel>
</rss>