TIGRLab/asl-conn

Name: asl-conn

Owner: Kimel Family Translational Imaging-Genetics Research Lab

Description: Preprocessing of ASL data and connectivity analyses

Forked from: djoverton/asl-conn

Created: 2017-08-23 15:46:40.0

Updated: 2017-08-23 15:46:42.0

Pushed: 2017-08-02 16:26:17.0

Homepage: none

Size: 10

Language: Python


README

Dawson Overton, 2017

Notes about PNC data:


The most important scripts in this directory are (run in this order if starting from scratch):


preprocess_asl.sh will take a subject (SUBJ_ID) directory and fully preprocess the corresponding ASL data using a known location for the T1 and ASL NIfTI files. It performs skull stripping (deskulling), transformation of the functional data to MNI space (after registration to the T1), spatial smoothing, and detrending. The fully processed file is named ASL_MNI_detrend_smoothed.nii.gz and is placed in a subdirectory called “pipeline”, which itself is placed in the SUBJ_ID directory.
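A rough sketch of these stages, driven from Python via subprocess, is shown below. It assumes FSL (bet, flirt, convert_xfm, applyxfm4D, fslmaths) and AFNI (3dDetrend) are on the PATH; the intermediate file names, smoothing sigma, and detrend order are illustrative and may not match the shell script's exact commands.

    # Illustrative Python driver for the preprocessing stages described above.
    # Assumes FSL and AFNI command-line tools are installed and on the PATH.
    import subprocess
    from pathlib import Path

    def run(cmd):
        subprocess.run(cmd, check=True)

    def preprocess(subj_dir, t1="T1.nii.gz", asl="ASL.nii.gz",
                   mni="MNI152_T1_2mm_brain.nii.gz"):
        subj_dir = Path(subj_dir)
        pipe = subj_dir / "pipeline"
        pipe.mkdir(exist_ok=True)

        # 1. Skull-strip the T1
        run(["bet", str(subj_dir / t1), str(pipe / "T1_brain.nii.gz")])

        # 2. Register mean ASL -> T1 and T1 -> MNI, concatenate the two
        #    transforms, and apply the result to the 4D ASL series
        run(["fslmaths", str(subj_dir / asl), "-Tmean", str(pipe / "ASL_mean.nii.gz")])
        run(["flirt", "-in", str(pipe / "ASL_mean.nii.gz"),
             "-ref", str(pipe / "T1_brain.nii.gz"), "-omat", str(pipe / "asl2t1.mat")])
        run(["flirt", "-in", str(pipe / "T1_brain.nii.gz"), "-ref", mni,
             "-omat", str(pipe / "t12mni.mat")])
        run(["convert_xfm", "-omat", str(pipe / "asl2mni.mat"),
             "-concat", str(pipe / "t12mni.mat"), str(pipe / "asl2t1.mat")])
        run(["applyxfm4D", str(subj_dir / asl), mni, str(pipe / "ASL_MNI.nii.gz"),
             str(pipe / "asl2mni.mat"), "-singlematrix"])

        # 3. Detrend, then spatially smooth (Gaussian sigma in mm)
        run(["3dDetrend", "-polort", "2",
             "-prefix", str(pipe / "ASL_MNI_detrend.nii.gz"),
             str(pipe / "ASL_MNI.nii.gz")])
        run(["fslmaths", str(pipe / "ASL_MNI_detrend.nii.gz"), "-s", "2.55",
             str(pipe / "ASL_MNI_detrend_smoothed.nii.gz")])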


subtr_tag_control.py takes a list of absolute paths to subject directories, and subtracts control from tag volumes for all subjects in the list. This requires a fully processed NIfTI (ASL_MNI_detrend_smoothed.nii.gz) for each subject. It subtracts odd volumes from even volumes (for the raw PNC data, control volumes are odd and tag volumes are even). It will output a perfusion signal volume in the “pipeline” folder for each subject.
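A minimal sketch of the subtraction with nibabel is shown below; it assumes the “even”/“odd” labels refer to 0-based volume indices (tag volumes at even indices, control volumes at odd indices, per the PNC convention described above), and the output file name is illustrative.

    # Sketch of tag-control subtraction for one subject using nibabel.
    import os
    import nibabel as nib

    def subtract_tag_control(subj_dir):
        asl_path = os.path.join(subj_dir, "pipeline", "ASL_MNI_detrend_smoothed.nii.gz")
        img = nib.load(asl_path)
        data = img.get_fdata()

        tag = data[..., 0::2]      # even-indexed (tag) volumes
        control = data[..., 1::2]  # odd-indexed (control) volumes
        perfusion = tag - control  # tag minus control -> perfusion-weighted series

        out = nib.Nifti1Image(perfusion, img.affine, img.header)
        nib.save(out, os.path.join(subj_dir, "pipeline", "perfusion.nii.gz"))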


Note on atlas: before running the subsequent scripts, ensure you have an atlas that has been resampled to your functional data (e.g., using AFNI). Example:

    3dresample -prefix shen_resamp.nii.gz -master SPN01_CMH_0001_01_01_RST_07_Ax-RestingState_MNI-nonlin.nii.gz -inset shen_2mm_268_parcellation.nii.gz


find_good_rois.py requires the following variables to be defined:

This script calculates the mean time series signal in each ROI of the given atlas, for each subject. If the mean signal in an ROI is 0 for a subject, this likely means that the ROI lies entirely outside of the brain for that subject (and the 0 value causes problems for downstream analyses). Any ROI that has this problem for at least one subject is added to a set, and this set of ROIs is ignored in future analyses.
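A minimal sketch of this check, assuming nibabel/numpy; the atlas and perfusion file names and the subject_dirs list are illustrative stand-ins for the variables the script expects to be defined.

    # Sketch of the "good ROI" check: flag ROIs whose mean time series is all
    # zero for any subject, and keep the rest.
    import os
    import numpy as np
    import nibabel as nib

    subject_dirs = ["/abs/path/to/SUBJ_0001", "/abs/path/to/SUBJ_0002"]  # placeholder list

    atlas = nib.load("shen_resamp.nii.gz").get_fdata()
    roi_labels = np.unique(atlas[atlas > 0]).astype(int)

    bad_rois = set()
    for subj_dir in subject_dirs:
        data = nib.load(os.path.join(subj_dir, "pipeline", "perfusion.nii.gz")).get_fdata()
        for roi in roi_labels:
            mean_ts = data[atlas == roi].mean(axis=0)  # mean time series within the ROI
            if np.allclose(mean_ts, 0):
                bad_rois.add(roi)  # ROI likely lies outside the brain for this subject

    good_rois = sorted(set(roi_labels) - bad_rois)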


connect.py requires the following variables to be defined:

This script uses the “good ROIs” set from the previous step and calculates a correlation matrix for each subject (taking into account only the ROIs in this set). Each of these correlation matrices is added to a Python dictionary, with the key being the subject ID. This dictionary is saved to disk and used in classification.
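A sketch of the core loop, again with nibabel/numpy; subject_dirs and good_rois are assumed to carry over from the previous step, and the output file name is illustrative.

    # Sketch of per-subject ROI x ROI correlation matrices over the good ROIs,
    # collected in a dict keyed by subject ID and pickled to disk.
    import os
    import pickle
    import numpy as np
    import nibabel as nib

    atlas = nib.load("shen_resamp.nii.gz").get_fdata()

    conn = {}
    for subj_dir in subject_dirs:
        subj_id = os.path.basename(subj_dir.rstrip("/"))
        data = nib.load(os.path.join(subj_dir, "pipeline", "perfusion.nii.gz")).get_fdata()

        # Stack one mean time series per good ROI, then correlate across ROIs
        ts = np.vstack([data[atlas == roi].mean(axis=0) for roi in good_rois])
        conn[subj_id] = np.corrcoef(ts)

    with open("connectivity_matrices.pkl", "wb") as f:
        pickle.dump(conn, f)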


classifier.py requires the following:

The input data needs to be in the format of a dictionary of connectivity matrices, where the key for each matrix is the PNC subject ID. The script will process this dictionary by taking the lower triangle of each matrix, flattening it, and looking up the psychosis diagnosis value (0 for non-PS, 1 for PS) for the corresponding subject ID.
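A sketch of that preprocessing step is shown below; the diagnosis dict (subject ID -> 0/1) and the pickle file name are hypothetical stand-ins for however the script actually loads its inputs.

    # Sketch: build a feature matrix X and label vector y from the dict of
    # connectivity matrices. `diagnosis` is a hypothetical subject ID -> 0/1 map.
    import pickle
    import numpy as np

    with open("connectivity_matrices.pkl", "rb") as f:
        conn = pickle.load(f)

    X, y = [], []
    for subj_id, mat in conn.items():
        lower = mat[np.tril_indices_from(mat, k=-1)]  # flatten lower triangle, no diagonal
        X.append(lower)
        y.append(diagnosis[subj_id])  # 0 = non-PS, 1 = PS

    X = np.array(X)
    y = np.array(y)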

classifier.py contains a “pca_reduce” function, which is optional. This function can be used on the input data either to specify a desired number of dimensions (letting the % variance explained vary) or to specify a target % variance explained (letting the number of dimensions vary). This is especially useful for reducing training time for the model.
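A sketch of such a function with scikit-learn's PCA, whose n_components argument accepts either an integer (fixed number of dimensions) or a float in (0, 1) (target fraction of variance explained); this is not necessarily the repository's exact implementation, and X is the feature matrix built above.

    # Sketch of an optional PCA reduction step using scikit-learn.
    from sklearn.decomposition import PCA

    def pca_reduce(X, n_components):
        pca = PCA(n_components=n_components)
        X_reduced = pca.fit_transform(X)
        print("explained variance:", pca.explained_variance_ratio_.sum())
        return X_reduced

    X_50d = pca_reduce(X, 50)    # keep exactly 50 dimensions
    X_95v = pca_reduce(X, 0.95)  # keep enough dimensions for 95% of the variance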

Several options can be specified to change the model type and parameters, the number of cross-validation folds, and feature selection (if any). The script will output a variety of performance metrics and save a figure to disk which includes an ROC curve (with its AUC) for each fold and the mean ROC curve.
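A sketch of cross-validated ROC/AUC evaluation with scikit-learn; the model, fold count, and plotting details are illustrative rather than the script's actual settings, and X and y are the feature matrix and labels built above.

    # Sketch: per-fold ROC curves plus the mean ROC curve, saved to disk.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.model_selection import StratifiedKFold
    from sklearn.svm import SVC
    from sklearn.metrics import roc_curve, auc

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    mean_fpr = np.linspace(0, 1, 100)
    tprs, aucs = [], []

    for train, test in cv.split(X, y):
        clf = SVC(kernel="linear", probability=True).fit(X[train], y[train])
        scores = clf.predict_proba(X[test])[:, 1]
        fpr, tpr, _ = roc_curve(y[test], scores)
        aucs.append(auc(fpr, tpr))
        tprs.append(np.interp(mean_fpr, fpr, tpr))
        plt.plot(fpr, tpr, alpha=0.4)  # one ROC curve per fold

    mean_tpr = np.mean(tprs, axis=0)
    plt.plot(mean_fpr, mean_tpr, label=f"mean ROC (AUC = {np.mean(aucs):.2f})")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.savefig("roc_curves.png")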


dat directory:


This work is supported by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number U24TR002306. This work is solely the responsibility of the creators and does not necessarily represent the official views of the National Institutes of Health.