Population receptive field analysis for motion-sensitive early- and mid-level visual cortex.

Project description

This is an extension of the pyprf package. Compared to pyprf, pyprf_motion offers stimuli that were specifically optimized to elicit responses from motion-sensitive areas. On the analysis side, pyprf_motion offers some additional features made necessary by the different stimulation type (model positions defined in polar coordinates, sub-TR temporal resolution for model creation, cross-validation for model fitting) at the cost of some speed and flexibility. There is currently no support for GPU.
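The polar parametrisation mentioned above can be sketched in a few lines of numpy. The following is a minimal illustration of a 2D Gaussian pRF model with its centre defined in polar coordinates, not the pyprf_motion implementation; the function name, grid size, and visual-field extent are assumptions:

```python
import numpy as np

def prf_prediction(theta, ecc, sigma, apertures, extent=8.0):
    """Predict a pRF timecourse for one candidate model (illustrative only).

    theta, ecc : polar position of the pRF centre (radians, deg visual angle)
    sigma      : pRF size (deg visual angle)
    apertures  : binary stimulus masks, shape (n_y, n_x, n_timepoints)
    extent     : half-width of the visual field covered by the masks (deg)
    """
    # convert the polar centre to Cartesian coordinates
    x0 = ecc * np.cos(theta)
    y0 = ecc * np.sin(theta)
    n_y, n_x, _ = apertures.shape
    ys, xs = np.meshgrid(np.linspace(-extent, extent, n_y),
                         np.linspace(-extent, extent, n_x), indexing="ij")
    # isotropic 2D Gaussian receptive field
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    # overlap of pRF and stimulus aperture at every timepoint
    return np.tensordot(gauss, apertures, axes=([0, 1], [0, 1]))
```

A pRF centred at theta = 0, ecc = 4 responds strongly when the stimulus covers the right visual field and weakly when it covers the left.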

Installation

For installation, follow these steps:

  1. (Optional) Create a conda environment:
conda create -n env_pyprf_motion python=2.7
source activate env_pyprf_motion
conda install pip
  2. Clone the repository:
git clone https://github.com/MSchnei/pyprf_motion.git
  3. Install numpy, e.g. by running:
pip install numpy
  4. Install pyprf_motion with pip:
pip install /path/to/cloned/pyprf_motion

Dependencies

Python 2.7

Package       Tested version
NumPy         1.14.0
SciPy         1.0.0
NiBabel       2.2.1
Cython        0.27.1
TensorFlow    1.4.0
scikit-learn  0.19.1

How to use

1. Present stimuli and record fMRI data

The PsychoPy scripts in the stimulus_presentation folder can be used to map motion-sensitive visual areas (especially area hMT+) using the pRF framework.

  1. Specify your desired parameters in the config file.
  2. Run the createTexMasks.py file to generate the relevant masks and textures. Masks and textures will be saved as numpy arrays in .npz format in a folder called MaskTextures in the parent directory.
  3. Run the createCond.py file to generate the condition order. Condition and target presentation orders will be saved as numpy arrays in .npz format in a folder called Conditions in the parent directory.
  4. Run the stimulus presentation file motLoc.py in PsychoPy.
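The masks and textures generated above travel between scripts as .npz archives. A quick illustrative round-trip shows the format; the array names ("mask", "texture") are placeholders, not the names the presentation scripts actually use:

```python
import io
import numpy as np

# toy stand-ins for a binary aperture mask and a carrier texture
mask = (np.arange(16).reshape(4, 4) % 2).astype(np.int8)
texture = np.linspace(0.0, 1.0, 16, dtype=np.float32).reshape(4, 4)

# an in-memory buffer stands in for e.g. MaskTextures/masks.npz
buf = io.BytesIO()
np.savez(buf, mask=mask, texture=texture)
buf.seek(0)

# downstream scripts reload the arrays by keyword
arrs = np.load(buf)
restored_mask = arrs["mask"]
```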

2. Prepare spatial and temporal information for experiment as arrays

  1. Run prepro_get_spat_info.py in the prepro folder to obtain an array with the spatial information of the experiment.
  2. Run prepro_get_temp_info.py in the prepro folder to obtain an array with the temporal information of the experiment.
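The temporal information is what enables the sub-TR model resolution mentioned in the introduction. As a rough sketch (function and parameter names are assumptions, not the prepro scripts' API), event onsets and durations in seconds can be turned into a boxcar design vector sampled at a fraction of the TR:

```python
import numpy as np

def boxcar_subtr(onsets, durations, n_vols, tr, up=10):
    """Binary design vector sampled at tr/up second resolution (sketch only).

    onsets, durations : event timing in seconds
    n_vols            : number of acquired volumes
    tr                : repetition time in seconds
    up                : temporal upsampling factor
    """
    dt = tr / up
    design = np.zeros(n_vols * up)
    for on, dur in zip(onsets, durations):
        start = int(round(on / dt))
        stop = int(round((on + dur) / dt))
        design[start:stop] = 1.0  # event "on" at sub-TR resolution
    return design

# one 4 s event starting at 2 s, in a 10-volume run with TR = 2 s
design = boxcar_subtr(onsets=[2.0], durations=[4.0], n_vols=10, tr=2.0, up=10)
```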

3. Prepare the input data

The input data should be motion-corrected, high-pass filtered and (optionally) distortion-corrected. If desired, spatial as well as temporal smoothing can be applied. The prepro folder contains auxiliary scripts for some of these steps.
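One common way to implement the temporal high-pass step is to subtract a heavily smoothed version of each voxel timecourse. This is a rough sketch under that assumption, not the scripts shipped in the prepro folder:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def highpass(data, sigma_vols=16.0):
    """Remove slow scanner drifts from voxel timecourses (sketch only).

    data       : array of shape (n_voxels, n_timepoints)
    sigma_vols : width of the Gaussian low-pass, in volumes
    """
    # low-pass component captures slow drifts; subtracting it keeps
    # the faster, task-related fluctuations
    lowpass = gaussian_filter1d(data, sigma=sigma_vols, axis=-1)
    return data - lowpass
```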

4. Adjust the csv file

Adjust the information in the config_default.csv file in the Analysis folder so that it matches your experiment. It is recommended to make a subject-specific copy of the csv file.
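The config file pairs parameter names with values, one per line. A hypothetical sketch of reading such a file into a dict follows; the key names below are illustrative only, the real parameter names are those defined in config_default.csv:

```python
import csv
import io

# stand-in for the contents of a subject-specific config csv
# (keys here are made up for illustration)
example = "tr_in_s,2.0\nnumber_of_volumes,172\npath_to_nii,/path/to/func.nii\n"

# map each "parameter,value" row to a dict entry
cfg = {row[0]: row[1] for row in csv.reader(io.StringIO(example)) if row}
```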

5. Run pyprf_motion

Open a terminal and run

pyprf_motion -config path/to/custom_config.csv
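Conceptually, the fitting stage searches a grid of candidate pRF models per voxel and, as noted in the introduction, scores them by cross-validation. The following is an illustration of that idea in numpy, not the pyprf_motion implementation:

```python
import numpy as np

def fit_voxel_cv(y, candidates, n_folds=2):
    """Index of the candidate model with the best cross-validated fit (sketch).

    y          : voxel timecourse, shape (n_t,)
    candidates : predicted timecourses, shape (n_models, n_t)
    """
    n_t = y.size
    folds = np.array_split(np.arange(n_t), n_folds)
    scores = np.zeros(candidates.shape[0])
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n_t), test_idx)
        for m, x in enumerate(candidates):
            # least-squares amplitude on the training split only
            beta = (x[train_idx] @ y[train_idx]) / (x[train_idx] @ x[train_idx])
            # score by explained variance on the held-out split
            resid = y[test_idx] - beta * x[test_idx]
            scores[m] += 1.0 - resid @ resid / (y[test_idx] @ y[test_idx])
    return int(np.argmax(scores))
```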

License

The project is licensed under GNU General Public License Version 3.

Download files

Filename                   Size      File type  Python version  Upload date
pyprf_motion-1.0.3.tar.gz  155.2 kB  Source     None            Jul 10, 2018
