
Ask coding questions directly from the terminal

Project description

Semantic search for developers


codequestion is a semantic search application for developer questions.


Developers typically have a web browser window open while they work and run web searches as questions arise. With codequestion, this can be done from a local context. This application executes similarity queries to find similar questions to the input query.

The default model for codequestion is built off the Stack Exchange Dumps on Archive.org. Once a model is installed, codequestion runs locally; no network connection is required.


codequestion is built with Python 3.7+ and txtai.


The easiest way to install is via pip and PyPI:

pip install codequestion

Python 3.7+ is supported. Using a Python virtual environment is recommended.

codequestion can also be installed directly from GitHub to access the latest, unreleased features.

pip install git+

See this link for environment-specific troubleshooting.

Download a model

Once codequestion is installed, a model needs to be downloaded.

python -m

The model will be stored in ~/.codequestion/

The model can also be installed manually if the machine doesn't have direct internet access. The default model is pulled from the GitHub release page.

unzip ~/.codequestion


Start up a codequestion shell to get started:

codequestion


A prompt will appear. Queries can be typed into the console. Type help to see all available commands.



The latest release integrates txtai 5.0, which has support for semantic graphs.

Semantic graphs add support for topic modeling and path traversal. Topics organize questions into groups with similar concepts. Path traversal uses the semantic graph to show how two potentially disparate entries are connected.


VS Code

A codequestion prompt can be started within Visual Studio Code. This enables asking coding questions right from your IDE.

Run Ctrl+` to open a new terminal, then type codequestion.


API service

codequestion builds a standard txtai embeddings index. As such, it supports hosting the index via a txtai API service.

First, create an app.yml file that points at the model index:

path: /home/user/.codequestion/models/stackexchange/

Then run the following:

# Install API extra
pip install txtai[api]

# Start API
CONFIG=app.yml uvicorn "txtai.api:app"

# Test API
curl ""

The response contains the matching question text:

    "text":"How to fetch data from sqlite using python? stackoverflow python sqlite",

Additional metadata fields can be pulled back with SQL statements:

    --data-urlencode "query=select id, date, tags, question, score from txtai where similar('python query sqlite')"
    --data-urlencode "limit=1"

The response includes the requested fields:

    "tags":"python sqlite",
    "question":"How to fetch data from sqlite using python?",

Tech overview

The following is an overview covering how this project works.

Process the raw data dumps

The raw 7z XML dumps from Stack Exchange are processed through a series of steps (see building a model). Only highly scored questions with accepted answers are retrieved for storage in the model. Questions and answers are consolidated into a single SQLite file called questions.db. The schema for questions.db is below.

questions.db schema

Source TEXT
Question TEXT
QuestionUser TEXT
Answer TEXT
AnswerUser TEXT
Reference TEXT
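As a quick illustration of the schema above, the sketch below builds an in-memory SQLite database with the listed columns and runs a simple lookup. The inserted row and its values are purely hypothetical; this is not the project's actual ETL code.

```python
import sqlite3

# Create an in-memory database using the questions.db columns shown above
connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
cursor.execute("""
    CREATE TABLE questions (
        Source TEXT,
        Question TEXT,
        QuestionUser TEXT,
        Answer TEXT,
        AnswerUser TEXT,
        Reference TEXT
    )
""")

# Insert a hypothetical consolidated question/answer row
cursor.execute(
    "INSERT INTO questions VALUES (?, ?, ?, ?, ?, ?)",
    ("stackoverflow", "How to fetch data from sqlite using python?",
     "user1", "Use the sqlite3 module in the standard library.", "user2",
     "reference-url"),
)

# Inspect stored questions by source
rows = cursor.execute(
    "SELECT Source, Question FROM questions WHERE Source = ?",
    ("stackoverflow",),
).fetchall()
print(rows)
```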


codequestion builds a txtai embeddings index for questions.db. Each question in the questions.db schema is vectorized with a sentence-transformers model. Once questions.db is converted to a collection of sentence embeddings, the embeddings are normalized and stored in Faiss, which enables fast similarity searches.


codequestion tokenizes each query using the same method as during indexing. Those tokens are used to build a sentence embedding. That embedding is queried against the Faiss index to find the most similar questions.
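The index and query steps above can be sketched with toy vectors. The hardcoded embeddings below are hypothetical stand-ins for the sentence-transformers model output; only the normalize-then-dot-product logic mirrors the mechanism described (with unit-length vectors, inner product equals cosine similarity, which is what the Faiss search exploits).

```python
import math

def normalize(vector):
    """Scale a vector to unit length so dot product equals cosine similarity."""
    norm = math.sqrt(sum(x * x for x in vector))
    return [x / norm for x in vector]

# Hypothetical "sentence embeddings" for two indexed questions
embeddings = {
    "How to fetch data from sqlite using python?": [0.9, 0.1, 0.2],
    "How do I merge two dicts in Python?": [0.1, 0.9, 0.3],
}

# "Indexing": normalize each embedding before storage
index = {text: normalize(vector) for text, vector in embeddings.items()}

def search(query_vector, index):
    """Return the most similar question by inner product over normalized vectors."""
    query = normalize(query_vector)
    scores = {
        text: sum(q * v for q, v in zip(query, vector))
        for text, vector in index.items()
    }
    return max(scores, key=scores.get)

best = search([0.8, 0.2, 0.1], index)
print(best)
```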

Build a model

The following steps show how to build a codequestion model using Stack Exchange archives.

This is not necessary if using the default model from the GitHub release page

1.) Download files from Stack Exchange:

2.) Place the selected files into a directory structure as shown below (the current process requires all of these files).

  • stackexchange/ai/
  • stackexchange/android/
  • stackexchange/apple/
  • stackexchange/arduino/
  • stackexchange/askubuntu/
  • stackexchange/avp/
  • stackexchange/codereview/
  • stackexchange/cs/
  • stackexchange/datascience/
  • stackexchange/dba/
  • stackexchange/devops/
  • stackexchange/dsp/
  • stackexchange/raspberrypi/
  • stackexchange/reverseengineering/
  • stackexchange/scicomp/
  • stackexchange/security/
  • stackexchange/serverfault/
  • stackexchange/stackoverflow/
  • stackexchange/stats/
  • stackexchange/superuser/
  • stackexchange/unix/
  • stackexchange/vi/
  • stackexchange/wordpress/

3.) Run the ETL process

python -m codequestion.etl.stackexchange.execute stackexchange

This will create the file stackexchange/questions.db

4.) OPTIONAL: Build word vectors - only necessary if using a word vectors model. If using word vector models, make sure to run pip install txtai[similarity]

python -m codequestion.vectors stackexchange/questions.db

This will create the file ~/.codequestion/vectors/stackexchange-300d.magnitude

5.) Build embeddings index

python -m codequestion.index index.yml stackexchange/questions.db

The default index.yml file is found on GitHub. Settings can be changed to customize how the index is built.
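The exact contents of index.yml vary by release, so check the default file on GitHub. As a rough, illustrative sketch only, a txtai embeddings configuration with a sentence-transformers model could look like:

```yaml
# Illustrative sketch only - see the default index.yml on GitHub for real settings
path: sentence-transformers/all-MiniLM-L6-v2
```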

After this step, the index is created and all necessary files are ready to query.

Model accuracy

The following sections show test results for codequestion v2 and codequestion v1 using the latest Stack Exchange dumps. Version 2 uses a sentence-transformers model. Version 1 uses a word vectors model with BM25 weighting. BM25 and TF-IDF are shown to establish a baseline score.

StackExchange Query

Models are scored using Mean Reciprocal Rank (MRR).

| Model            | MRR  |
| ---------------- | ---- |
| all-MiniLM-L6-v2 | 85.0 |
| SE 300d - BM25   | 77.1 |
| BM25             | 67.7 |
| TF-IDF           | 61.7 |
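MRR is the average of the reciprocal rank of the first correct result across queries. A minimal illustration with hypothetical ranks:

```python
def mean_reciprocal_rank(ranks):
    """Compute MRR given, for each query, the rank of the first correct result."""
    return sum(1.0 / rank for rank in ranks) / len(ranks)

# Hypothetical example: correct answer ranked 1st, 2nd and 4th across three queries
mrr = mean_reciprocal_rank([1, 2, 4])
print(mrr)  # (1 + 0.5 + 0.25) / 3
```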

STS Benchmark

Models are scored using Pearson Correlation. Note that the word vectors model is only trained on Stack Exchange data, so it isn't expected to generalize as well against the STS dataset.

| Model            | Supervision | Dev  | Test |
| ---------------- | ----------- | ---- | ---- |
| all-MiniLM-L6-v2 | Train       | 87.0 | 82.7 |
| SE 300d - BM25   | Train       | 74.0 | 67.4 |
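Pearson correlation measures the linear agreement between predicted similarity scores and the gold STS labels. A self-contained sketch with made-up scores (not real evaluation data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    std_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (std_x * std_y)

# Hypothetical model scores vs. gold similarity labels
predicted = [0.1, 0.4, 0.8, 0.9]
gold = [0.0, 0.5, 0.7, 1.0]
r = pearson(predicted, gold)
print(r)
```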


To reproduce the tests above, run the following. Substitute $TEST_PATH with any local path.

mkdir -p $TEST_PATH
wget -P $TEST_PATH/stackexchange
tar -C $TEST_PATH -xvzf Stsbenchmark.tar.gz
python -m codequestion.evaluate -s test -p $TEST_PATH

Further reading

Project details

Download files


Source Distribution

codequestion-2.1.0.tar.gz (26.2 kB)


Built Distribution

codequestion-2.1.0-py3-none-any.whl (28.8 kB)

