
A toolkit that generates a variety of features for team conversation data.


The Team Communication Toolkit

The Team Communication Toolkit is a Python package that makes it easy for social scientists to analyze and understand text-based communication data. Our aim is to facilitate seamless analyses of conversational data --- especially among groups and teams! --- by providing a single interface for researchers to generate and explore dozens of research-backed conversational features.

We are a research project created by the Computational Social Science Lab at UPenn and funded by the Wharton AI and Analytics Initiative.


The Team Communication Toolkit is an academic project and is intended to be used for academic purposes only.

Getting Started

To use our tool, please ensure that you have Python >= 3.10 installed and a working version of pip, which is Python's package installer. Then, in your local environment, run the following:

pip install team_comm_tools

This command will automatically install our package and all required dependencies.
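
To confirm that the installation succeeded, you can try importing the package (the same import is shown again in the usage section below):

python -c "import team_comm_tools"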

Troubleshooting

In the event that some dependency installations fail (for example, you may get an error that en_core_web_sm from spaCy is not found, or that an NLTK resource is missing), please run this one-line command in your terminal, which will force the installation of the spaCy and NLTK dependencies:

download_resources

If you then encounter an issue in which NLTK's 'wordnet' package is not found, it may stem from a known NLTK bug in which the wordnet archive does not unzip automatically. If so, please follow the instructions for unzipping it manually, documented in this thread.
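
If the download_resources command itself is not available in your environment, a manual fallback is to fetch the resources directly. Here is a minimal sketch using the standard spaCy and NLTK download entry points; the specific resources listed are illustrative, so install whichever ones your error message names:

import nltk
import spacy.cli

spacy.cli.download("en_core_web_sm")  # the spaCy English model mentioned above
nltk.download("wordnet")              # the NLTK resource mentioned above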

Import Recommendations: Virtual Environment and Pip

We strongly recommend using a virtual environment in Python to run the package, as we have several strict dependency requirements. One important constraint is that we are currently compatible only with numpy < 2.0.0, because numpy 2.0.0 and above introduced significant changes that are incompatible with other dependencies of our package. As those dependencies are updated, we will support later versions of numpy.

We also strongly recommend ensuring that your version of pip is up to date (>= 24.0). There have been reports of users having trouble downloading dependencies (specifically, the spaCy package) with older versions of pip. If you get an error while downloading en_core_web_sm, we recommend updating pip.
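
As a concrete example, here is a typical setup, assuming a Unix-like shell (on Windows, activate with venv\Scripts\activate instead):

python -m venv venv          # create a virtual environment in ./venv
source venv/bin/activate     # activate it
pip install --upgrade pip    # ensure pip is >= 24.0
pip install team_comm_tools  # install the toolkit and its dependencies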

Using the FeatureBuilder

After you install the package and its dependencies, you can use our tool in your Python script as follows:

from team_comm_tools import FeatureBuilder

Note: PyPI treats hyphens and underscores equally, so pip install team_comm_tools and pip install team-comm-tools are equivalent. However, Python does NOT treat them equally, and you should use underscores when you import the package, like this: from team_comm_tools import FeatureBuilder.

Once you import the tool, you will be able to declare a FeatureBuilder object, which is the heart of our tool. Here is some sample syntax:

my_feature_builder = FeatureBuilder(
    input_df = my_pandas_dataframe,
    # this means there's a column in your data called 'conversation_id' that uniquely identifies a conversation
    conversation_id_col = "conversation_id",
    # this means there's a column in your data called 'speaker_id' that uniquely identifies a speaker
    speaker_id_col = "speaker_id",
    # this means there's a column in your data called 'message' that contains the content you want to featurize
    message_col = "message",
    # this means there's a column in your data called 'timestamp' that contains the time associated with each message;
    # we also accept a list of (timestamp_start, timestamp_end), in case your data is formatted that way
    timestamp_col = "timestamp",
    # this is where we'll cache things like sentence vectors; this directory doesn't have to exist; we'll create it for you!
    vector_directory = "./vector_data/",
    # this is the base file path from which we generate the three outputs;
    # you will get your outputs in output/chat/my_output_chat_level.csv, output/conv/my_output_conv_level.csv,
    # and output/user/my_output_user_level.csv
    output_file_base = "my_output",
    # if this is set to True, the output will also be stored in output/turns/my_output_chat_level.csv
    turns = False,
    # these features depend on sentence vectors, so they take longer to generate on larger datasets;
    # add them in manually if you are interested in including them in your output!
    custom_features = [
        "(BERT) Mimicry",
        "Moving Mimicry",
        "Forward Flow",
        "Discursive Diversity"
    ],
)

# this line of code runs the FeatureBuilder on your data
my_feature_builder.featurize()
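
Once featurize() finishes, the three outputs are ordinary CSV files. Here is a minimal sketch of loading them back with pandas, assuming the default output paths described in the comments above:

import pandas as pd

chat_level = pd.read_csv("output/chat/my_output_chat_level.csv")  # one row per utterance
user_level = pd.read_csv("output/user/my_output_user_level.csv")  # one row per speaker
conv_level = pd.read_csv("output/conv/my_output_conv_level.csv")  # one row per conversation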

Data Format

We accept input data in the format of a Pandas DataFrame. Your data needs to have three required input columns and one optional column:

  1. A conversation ID,
  2. A speaker ID,
  3. A message/text input, which contains the content that you want to get featurized;
  4. (Optional) A timestamp. This is not necessary for generating features, but behaviors related to the conversation's pace (for example, the average delay between messages, or the "burstiness" of a conversation) cannot be measured without it.
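
For concreteness, here is a toy DataFrame in the expected format. The column names and values are illustrative; your columns can be named anything, as long as you pass the names to the FeatureBuilder as shown above:

import pandas as pd

my_pandas_dataframe = pd.DataFrame({
    "conversation_id": ["conv1", "conv1", "conv1"],                    # required: unique ID per conversation
    "speaker_id": ["alice", "bob", "alice"],                           # required: unique ID per speaker
    "message": ["Hi, team!", "Hello!", "Let's get started."],          # required: the text to featurize
    "timestamp": ["09:00:00", "09:00:05", "09:00:12"]                  # optional: time of each message
})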

Featurized Outputs: Levels of Analysis

Notably, not all communication features are created equal: they can be defined at different levels of analysis. For example, a single utterance ("you are great!") may be described as a "positive statement." An individual who makes many such utterances may be described as a "positive person." Finally, the entire team may enjoy a "positive conversation," an interaction in which everyone speaks positively to each other. In this way, the same concept of positivity can be applied at three levels:

  1. The utterance,
  2. The speaker, and
  3. The conversation

We generate a separate output file for each level. When you declare a FeatureBuilder, you can use the output_file_base parameter to define a base path shared among all three levels, and an output path will be generated automatically for each level of analysis.

For more information, please refer to the Introduction on our Read the Docs Page.

Learn More

Please visit our website, https://teamcommtools.seas.upenn.edu/, for general information about our project and research. For more detailed documentation on our features and examples, please visit our Read the Docs Page.

Becoming a Contributor

If you would like to make pull requests to this open-source repository, please read our GitHub Repo Getting Started Guide. We welcome new feature contributions and improvements to our framework.

