fMRI Preprocessing
We analyzed the raw dataset with a custom script composed of FSL (version 6.0.3; Jenkinson et al., 2012) and AFNI (version AFNI_20.3.03; Cox, 1996) commands. The T1-weighted anatomical images were skull-stripped, and the quality of each extraction was checked visually.
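A minimal sketch of this step, assuming FSL's bet is used for the brain extraction (the file names and threshold are placeholders, not the exact settings of our script):

    # Skull-strip the T1-weighted image with FSL's bet;
    # -f is the fractional intensity threshold (0.5 is bet's default)
    # and -m also writes out the binary brain mask.
    bet sub-01_T1w.nii.gz sub-01_T1w_brain.nii.gz -f 0.5 -m

    # Visually check the extraction, e.g. by overlaying the result
    # on the original image in FSLeyes.
    fsleyes sub-01_T1w.nii.gz sub-01_T1w_brain.nii.gz &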
The extracted brain image was then fed into the afni_proc.py program for fMRI preprocessing. The preprocessing steps for each run (a sketch of the corresponding call is given after the list) were: (1) slice-timing correction; (2) computing the rigid-body transformation between each fMRI volume and the volume with the fewest outliers, which also yielded the six head-motion parameters; (3) estimating the affine transformation from the individual brain to the MNI152 template; (4) concatenating the spatial transformations and applying them to all fMRI volumes in a single resampling step; (5) spatially smoothing the images with a kernel of 4 mm full width at half maximum (FWHM); and (6) masking out voxels outside the brain and scaling each voxel time series to a mean of 100, with values clipped to the range 0 to 200.
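These steps map onto standard afni_proc.py processing blocks. The following is a hedged sketch of such a call; the subject ID, file names, and the specific MNI template name are placeholders rather than the exact settings used in our pipeline:

    # tshift: slice-timing correction
    # align/tlrc/volreg: rigid registration to the minimum-outlier
    #   volume plus affine warp to the MNI template, with the
    #   transformations concatenated and applied in one resampling
    # blur: spatial smoothing (4 mm FWHM)
    # mask/scale: brain masking, then scaling to mean 100 (clipped at 200)
    afni_proc.py \
        -subj_id sub-01 \
        -copy_anat sub-01_T1w_brain.nii.gz \
        -anat_has_skull no \
        -dsets sub-01_task-ANT_run-*.nii.gz \
        -blocks tshift align tlrc volreg blur mask scale \
        -tlrc_base MNI152_T1_2009c+tlrc \
        -volreg_align_to MIN_OUTLIER \
        -volreg_tlrc_warp \
        -blur_size 4.0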
After these preprocessing steps, we fit a first-level general linear model (GLM) to the task events of the six concatenated runs, with head motion parameters and polynomial drifts included as nuisance regressors. The task regressors were constructed by convolving the onsets of the task events with a hemodynamic response function (a one-parameter gamma variate). For the alerting and orienting effects, we modeled the onset of the cue stage separately for each cue condition, namely no cue, center cue, and spatial cue. For the executive effect, we modeled the onsets of congruent and incongruent trials separately. Error trials, when present, were modeled as an additional regressor.
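For illustration, such a regression could be specified with AFNI's 3dDeconvolve along the following lines; afni_proc.py would normally generate an equivalent command, and the timing files, motion file, and output names here are hypothetical:

    # Five task regressors convolved with the one-parameter gamma
    # variate ('GAM'), plus polynomial drifts (-polort A) and the six
    # head-motion parameters (-ortvec) as nuisance regressors.
    # Error trials, when present, would enter as one more -stim_times
    # regressor. Giving all six runs to -input fits the merged runs
    # in a single model.
    3dDeconvolve \
        -input pb04.sub-01.r*.scale+tlrc.HEAD \
        -polort A \
        -ortvec motion_demean.1D motion \
        -num_stimts 5 \
        -stim_times 1 no_cue.1D      'GAM' -stim_label 1 no_cue \
        -stim_times 2 center_cue.1D  'GAM' -stim_label 2 center_cue \
        -stim_times 3 spatial_cue.1D 'GAM' -stim_label 3 spatial_cue \
        -stim_times 4 congruent.1D   'GAM' -stim_label 4 congruent \
        -stim_times 5 incongruent.1D 'GAM' -stim_label 5 incongruent \
        -fout -tout \
        -bucket stats.sub-01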
The first-level GLM also estimated the three attention network contrasts: alerting (center cue – no cue), orienting (spatial cue – center cue), and executive (incongruent – congruent), as defined in the original design of the ANT (Fan, McCandliss, et al., 2005). Because the fMRI time series were scaled to the local mean, the GLM beta estimates approximated the percentage of BOLD signal change (Taylor, Reynolds, et al., 2023). The five condition-wise activation maps and the three attention network contrast maps were used in the subsequent analyses.
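As an illustration, the three contrasts could be requested as general linear tests appended to the 3dDeconvolve sketch above, using the hypothetical condition labels introduced there:

    # Attention network contrasts as general linear tests; with the
    # mean-100 scaling, each contrast coefficient reads directly as a
    # difference in percent BOLD signal change.
        -num_glt 3 \
        -gltsym 'SYM: center_cue -no_cue'      -glt_label 1 alerting \
        -gltsym 'SYM: spatial_cue -center_cue' -glt_label 2 orienting \
        -gltsym 'SYM: incongruent -congruent'  -glt_label 3 executive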