
Create and analyze LLM-based surveys

Project description


Expected Parrot Domain-Specific Language (EDSL)

EDSL makes it easy to conduct computational social science and market research with AI. Use it to design and run surveys and experiments with many AI agents and large language models at once, or to perform complex data labeling and other research tasks. Results are returned as formatted datasets that can be reproduced at no cost, and come with built-in methods for analysis, visualization and collaboration.

Getting started

  1. Run pip install edsl to install the package. See instructions.

  2. Create an account to run surveys at the Expected Parrot server and access a universal remote cache of stored responses for reproducing results.

  3. Choose whether to use your own keys for language models or get an Expected Parrot key to access all available models at once. Securely manage keys, expenses and usage for your team from your account.

  4. Run the starter tutorial and explore other demo notebooks for a variety of use cases.

  5. Share workflows and survey results at Coop: a free platform for creating and sharing AI research.

  6. Join our Discord for updates and discussions!

Code & Docs

Requirements

  • Python 3.9 - 3.13
  • API keys for language models. You can use your own keys or an Expected Parrot key that provides access to all available models. See instructions on managing keys and model pricing and performance information.
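As a sketch of key setup (the environment variable name below follows the convention in the EDSL key-management docs, and the key value is a placeholder, not a real key):

```python
import os

# Keys are read from environment variables rather than hard-coded in scripts.
# EXPECTED_PARROT_API_KEY is the variable used for an Expected Parrot key;
# provider-specific variables (e.g. OPENAI_API_KEY) apply when you bring
# your own keys. The value below is a placeholder.
os.environ["EXPECTED_PARROT_API_KEY"] = "your-key-here"

# Verify the key is visible to the current process before running a survey.
key_is_set = os.environ.get("EXPECTED_PARROT_API_KEY") is not None
```

Storing keys in the environment (or a .env file) keeps them out of shared notebooks and version control.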

Coop

Expected Parrot provides a free platform for creating, storing and sharing AI-based research, and validating it with human respondents.

Community

Contact

Features

Declarative design: Built-in question types enforce consistent response formats without requiring you to write a JSON schema (view at Coop):

from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
  question_name = "example",
  question_text = "How do you feel today?",
  question_options = ["Bad", "OK", "Good"]
)

results = q.run()

results.select("example")
answer.example
Good

Parameterized prompts: Easily parameterize and control prompts with "scenarios" of data automatically imported from many sources (CSV, PDF, PNG, etc.) (view at Coop):

from edsl import ScenarioList, QuestionLinearScale

q = QuestionLinearScale(
  question_name = "example",
  question_text = "How much do you enjoy {{ scenario.activity }}?",
  question_options = [1, 2, 3, 4, 5],
  option_labels = {1:"Not at all", 5:"Very much"}
)

sl = ScenarioList.from_list("activity", ["coding", "sleeping"])

results = q.by(sl).run()

results.select("activity", "example")
scenario.activity answer.example
coding 5
sleeping 5

Design AI agent personas to answer questions: Construct agents with relevant traits to provide diverse responses to your surveys. Note that agent responses are generated by language models based on their training data; they reflect statistical patterns, not the actual opinions of any demographic group. Use AI-simulated responses for prototyping and pre-testing, and validate with real human data when measuring actual attitudes or behaviors (view at Coop):

from edsl import Agent, AgentList, QuestionList

al = AgentList(Agent(traits = {"persona": p}) for p in ["botanist", "detective"])

q = QuestionList(
  question_name = "example",
  question_text = "What are your favorite colors?",
  max_list_items = 3
)

results = q.by(al).run()

results.select("persona", "example")
agent.persona answer.example
botanist ['Green', 'Earthy Brown', 'Sunset Orange']
detective ['Gray', 'Black', 'Navy Blue']

Simplified access to LLMs: Choose whether to use your own API keys for LLMs, or access all available models with an Expected Parrot key. Run surveys with many models at once and compare responses in a convenient interface (view at Coop):

from edsl import Model, ModelList, QuestionFreeText

ml = ModelList(Model(m) for m in ["gpt-4o", "gemini-1.5-flash"])

q = QuestionFreeText(
  question_name = "example",
  question_text = "What is your top tip for using LLMs to answer surveys?"
)

results = q.by(ml).run()

results.select("model", "example")
model.model answer.example
gpt-4o When using large language models (LLMs) to answer surveys, my top tip is to ensure that the ...
gemini-1.5-flash My top tip for using LLMs to answer surveys is to **treat the LLM as a sophisticated brainst...

Piping & skip-logic: Build rich data labeling flows with features for piping answers and adding survey logic such as skip and stop rules (view at Coop):

from edsl import QuestionMultipleChoice, QuestionFreeText, Survey

q1 = QuestionMultipleChoice(
  question_name = "color",
  question_text = "What is your favorite primary color?",
  question_options = ["red", "yellow", "blue"]
)

q2 = QuestionFreeText(
  question_name = "flower",
  question_text = "Name a flower that is {{ color.answer }}."
)

survey = Survey(questions = [q1, q2])

results = survey.run()

results.select("color", "flower")
answer.color answer.flower
blue A commonly known blue flower is the bluebell. Another example is the cornflower.

Caching & reproducibility: API calls to LLMs are cached automatically. When you run a survey remotely, results are stored at the Expected Parrot server with verified prompts and timestamps. Share your code and anyone can retrieve your exact outputs at no cost, with no setup or API keys required. Learn more about how the universal remote cache works.
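Conceptually, the cache keys each stored response on the exact call that produced it, so an identical rerun is served from storage instead of triggering a new API call. A minimal sketch of the idea (illustrative only, not EDSL's implementation; all names here are hypothetical):

```python
import hashlib
import json

class ResponseCache:
    """Toy illustration of prompt-level caching: identical calls hit the cache."""

    def __init__(self):
        self.store = {}
        self.hits = 0

    def key(self, model, prompt, params):
        # Hash the full call signature so any change to model, prompt, or
        # parameters produces a different key (and thus a fresh API call).
        blob = json.dumps(
            {"model": model, "prompt": prompt, "params": params}, sort_keys=True
        )
        return hashlib.sha256(blob.encode()).hexdigest()

    def call(self, model, prompt, params, llm):
        k = self.key(model, prompt, params)
        if k in self.store:
            self.hits += 1
            return self.store[k]
        response = llm(prompt)  # only reached on a cache miss
        self.store[k] = response
        return response

cache = ResponseCache()
fake_llm = lambda prompt: "Good"  # stand-in for a real model call

first = cache.call("gpt-4o", "How do you feel today?", {"temperature": 0.5}, fake_llm)
second = cache.call("gpt-4o", "How do you feel today?", {"temperature": 0.5}, fake_llm)
# The second call is served from the cache: same inputs, no new API call.
```

The universal remote cache applies the same principle server-side, which is what lets anyone replay your survey and retrieve your exact outputs for free.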

Flexibility: Choose whether to run surveys on your own computer or at the Expected Parrot server.

Tools for collaboration: Easily share workflows and projects privately or publicly at Coop: an integrated platform for AI-based research. Your account comes with free credits for running surveys, and lets you securely share keys, track expenses and usage for your team.

Built-in tools for analysis: Analyze results as formatted datasets from your account or workspace. Easily import data to use with your surveys, and export results for use in other tools.
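The "dataset" framing means each row pairs prefixed column names (scenario.*, agent.*, answer.*, as in the outputs above) with values, so results export cleanly to standard tools. A rough sketch of that shape in plain Python (illustrative, not EDSL's internal format):

```python
import csv
import io

# Illustrative rows in the prefixed-column style shown in the examples above.
rows = [
    {"scenario.activity": "coding", "answer.example": 5},
    {"scenario.activity": "sleeping", "answer.example": 5},
]

# Export to CSV, one of the formats a tabular dataset makes trivial.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```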

Project details


Release history

This version

1.0.8

Download files


Source Distribution

edsl-1.0.8.tar.gz (1.3 MB)


Built Distribution


edsl-1.0.8-py3-none-any.whl (1.6 MB)


File details

Details for the file edsl-1.0.8.tar.gz.

File metadata

  • Download URL: edsl-1.0.8.tar.gz
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.11 Linux/5.15.0-1075-gcp

File hashes

Hashes for edsl-1.0.8.tar.gz

  • SHA256: 04067dca8f74512e429663a898e54b3f4ee274d9037231a47d0b3bdf3d26357d
  • MD5: 363aff507ae003155935fa1d42881571
  • BLAKE2b-256: 833fd4baade4c2f8c62e9b742737eea7a4d5d37b9db5593840bdce640f5a095d


File details

Details for the file edsl-1.0.8-py3-none-any.whl.

File metadata

  • Download URL: edsl-1.0.8-py3-none-any.whl
  • Size: 1.6 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.11 Linux/5.15.0-1075-gcp

File hashes

Hashes for edsl-1.0.8-py3-none-any.whl

  • SHA256: be45dd4a301738000a8afbf97913514392880b2d35a8a55438a4120f243aa2ae
  • MD5: 0300857524e7634ce6879c96c5f611ee
  • BLAKE2b-256: e7513dc61bc337165c3776b30263099011472d4b100274e4fd484e740b5da04a

