A toolkit that generates a variety of features for team conversation data.
The Team Communication Toolkit
The Team Communication Toolkit is a Python package that makes it easy for social scientists to analyze and understand text-based communication data. Our aim is to facilitate seamless analyses of conversational data --- especially among groups and teams! --- by providing a single interface for researchers to generate and explore dozens of research-backed conversational features.
We are a research project created by the Computational Social Science Lab at UPenn and funded by the Wharton AI and Analytics Initiative.
The Team Communication Toolkit is an academic project and is intended to be used for academic purposes only.
Getting Started
To use our tool, please ensure that you have Python >= 3.10 installed and a working version of pip, which is Python's package installer. Then, in your local environment, run the following:
pip install team_comm_tools
This command will automatically install our package and all required dependencies.
Troubleshooting
In the event that some dependency installations fail (for example, you may get an error that en_core_web_sm from spaCy is not found, or that an NLTK resource is missing), please run this simple one-line command in your terminal, which will force the installation of the spaCy and NLTK dependencies:
download_resources
If you encounter a further issue in which the 'wordnet' package from NLTK is not found, it may be related to a known bug in NLTK in which the wordnet package does not unzip automatically. If this is the case, please follow the instructions to manually unzip it, documented in this thread.
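If you prefer to perform the manual unzip programmatically, a minimal sketch is below. The `unzip_wordnet` helper and the `~/nltk_data/corpora` location are assumptions based on NLTK's default data layout, not part of the toolkit itself:

```python
import os
import zipfile

def unzip_wordnet(corpora_dir):
    """Extract wordnet.zip in place if NLTK downloaded it but never unzipped it
    (a known NLTK bug). Returns True if the extracted corpus directory exists."""
    archive = os.path.join(corpora_dir, "wordnet.zip")
    target = os.path.join(corpora_dir, "wordnet")
    if os.path.isfile(archive) and not os.path.isdir(target):
        with zipfile.ZipFile(archive) as z:
            z.extractall(corpora_dir)
    return os.path.isdir(target)

# Typical default location for NLTK data on Linux/macOS:
# unzip_wordnet(os.path.expanduser("~/nltk_data/corpora"))
```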
Import Recommendations: Virtual Environment and Pip
We strongly recommend using a virtual environment in Python to run the package. We have several specific dependency requirements. One important one is that we are currently only compatible with numpy < 2.0.0 because numpy 2.0.0 and above made significant changes that are not compatible with other dependencies of our package. As those dependencies are updated, we will support later versions of numpy.
We also strongly recommend ensuring that your version of pip is up to date (>= 24.0). There have been reports of users having trouble downloading dependencies (specifically, the spaCy package) with older versions of pip. If you get an error while downloading en_core_web_sm, we recommend updating pip.
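Before installing, you can quickly check whether your environment satisfies the two constraints above (numpy < 2.0.0 and pip >= 24.0). This is a hypothetical convenience sketch, not part of the package; the `parse_major` helper is our own:

```python
import importlib.metadata as md

def parse_major(version_string):
    """Return the leading integer of a version string, e.g. '24.0' -> 24."""
    return int(version_string.split(".")[0])

# pip should be >= 24.0; if not, run: pip install --upgrade pip
pip_version = md.version("pip")
print("pip:", pip_version,
      "OK" if parse_major(pip_version) >= 24 else "consider upgrading pip")

# numpy, if already installed, should be < 2.0.0
try:
    numpy_version = md.version("numpy")
    print("numpy:", numpy_version,
          "OK" if parse_major(numpy_version) < 2 else "too new for this package")
except md.PackageNotFoundError:
    print("numpy not installed yet; pip will pull a compatible version")
```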
Using the FeatureBuilder
After you install the package and its dependencies, you can import our tool in your Python script as follows:
from team_comm_tools import FeatureBuilder
Note: PyPI treats hyphens and underscores equally, so pip install team_comm_tools and pip install team-comm-tools are equivalent. However, Python does NOT treat them equally, and you should use underscores when you import the package, like this: from team_comm_tools import FeatureBuilder.
Once you import the tool, you will be able to declare a FeatureBuilder object, which is the heart of our tool. Here is some sample syntax:
my_feature_builder = FeatureBuilder(
    input_df = my_pandas_dataframe,
    # this means there's a column in your data called 'conversation_id' that uniquely identifies a conversation
    conversation_id_col = "conversation_id",
    # this means there's a column in your data called 'speaker_id' that uniquely identifies a speaker
    speaker_id_col = "speaker_id",
    # this means there's a column in your data called 'message' that contains the content you want to featurize
    message_col = "message",
    # this means there's a column in your data called 'timestamp' that contains the time associated with each message;
    # we also accept a list of (timestamp_start, timestamp_end), in case your data is formatted that way
    timestamp_col = "timestamp",
    # this is where we'll cache things like sentence vectors; this directory doesn't have to exist; we'll create it for you!
    vector_directory = "./vector_data/",
    # this is the base file path from which we generate the three outputs;
    # you will get your outputs in output/chat/my_output_chat_level.csv, output/conv/my_output_conv_level.csv,
    # and output/user/my_output_user_level.csv
    output_file_base = "my_output",
    # when set to True, the chat-level output is also stored in output/turns/my_output_chat_level.csv
    turns = False,
    # these features depend on sentence vectors, so they take longer to generate on larger datasets;
    # add them in manually if you are interested in including them in your output!
    custom_features = [
        "(BERT) Mimicry",
        "Moving Mimicry",
        "Forward Flow",
        "Discursive Diversity"
    ],
)
# this line of code runs the FeatureBuilder on your data
my_feature_builder.featurize()
Data Format
We accept input data in the format of a Pandas DataFrame. Your data must contain three (3) required input columns and one optional column:
- A conversation ID,
- A speaker ID,
- A message/text input, which contains the content that you want to featurize;
- (Optional) a timestamp. This is not necessary for generating features, but behaviors related to the conversation's pace (for example, the average delay between messages; the "burstiness" of a conversation) cannot be measured without it.
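For concreteness, a minimal input DataFrame with all four columns might look like the following. All column names match the FeatureBuilder defaults above, and the values are purely hypothetical:

```python
import pandas as pd

# Hypothetical conversation data: the three required columns plus the optional timestamp.
my_pandas_dataframe = pd.DataFrame({
    "conversation_id": ["conv1", "conv1", "conv1", "conv2"],
    "speaker_id":      ["alice", "bob",   "alice", "carol"],
    "message":         ["Hi team!", "Hello!", "Ready to start?", "Kicking off now."],
    "timestamp":       [0, 12, 30, 0],  # e.g., seconds since the conversation began
})
```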
Featurized Outputs: Levels of Analysis
Notably, not all communication features are made equal, as they can be defined at different levels of analysis. For example, a single utterance ("you are great!") may be described as a "positive statement." An individual who makes many such utterances may be described as a "positive person." Finally, the entire team may enjoy a "positive conversation," an interaction in which everyone speaks positively to each other. In this way, the same concept of positivity can be applied to three levels:
- The utterance,
- The speaker, and
- The conversation
We generate a separate output file for each level. When you declare a FeatureBuilder, you can use output_file_base to define a base path shared among all three levels, and an output path will be automatically generated for each level of analysis.
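The three paths follow a predictable naming pattern. Here is a sketch of that scheme (an illustration based on the example paths above, not the package's actual internals):

```python
# Sketch: how the three output paths are derived from output_file_base.
output_file_base = "my_output"
output_paths = {
    level: f"output/{level}/{output_file_base}_{level}_level.csv"
    for level in ("chat", "user", "conv")
}
# e.g., output_paths["chat"] is "output/chat/my_output_chat_level.csv"
```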
For more information, please refer to the Introduction on our Read the Docs Page.
Learn More
Please visit our website, https://teamcommtools.seas.upenn.edu/, for general information about our project and research. For more detailed documentation on our features and examples, please visit our Read the Docs Page.
Becoming a Contributor
If you would like to make pull requests to this open-sourced repository, please read our GitHub Repo Getting Started Guide. We welcome new feature contributions or improvements to our framework.
Hashes for team_comm_tools-0.1.4.post1.tar.gz (Source Distribution)

Algorithm | Hash digest
---|---
SHA256 | ba8fcf65e2e789685f3ca680f1dfadd2bbd797f0b61891f97d8d235f4d2c6c45
MD5 | 301802ed544320626463353b476635c9
BLAKE2b-256 | e636171cee9d190fa78b70b3220f2c9dd8e33430e8c1a5c1aaf74b393b004292
Hashes for team_comm_tools-0.1.4.post1-py3-none-any.whl (Built Distribution)

Algorithm | Hash digest
---|---
SHA256 | 089a8acf671490d09eee61ad2900c0574da551a428808a613cd2fdfcf9cad1a1
MD5 | 588f585fcbf08471a366f70bdc024e2d
BLAKE2b-256 | f18947861496e9e33e29d8cda97525b2e16d97a1859864c7fff5d2a90e139903