diff --git a/ibc_data/ibc_tasks.tsv b/ibc_data/ibc_tasks.tsv index 38db85a..40659d9 100644 --- a/ibc_data/ibc_tasks.tsv +++ b/ibc_data/ibc_tasks.tsv @@ -83,5 +83,5 @@ AbstractionLocalizer This protocol was adapted from an ongoing study from our co Abstraction This protocol was adapted from an ongoing study from our colleagues at Neurospin, CEA Saclay, France. The goal of the study is to understand the neural representations of real-world items from different semantic categories at various levels of abstraction/rendering. To achieve that, the subjects were presented with images belonging to six semantic categories - human body, animals, faces, flora, objects and places - all rendered at three levels of detail - geometry, edges and photos (in ascending order of detail). To control for attention, there were five images of a star, and the subjects were required to press a button when they saw them. There were four different examples from each category, making a total of (6 categories x 4 examples x 3 renderings = 72 + 5 star probes =) 77 images. Each image was presented twice, for 300 ms, with variable inter-stimulus intervals of 4, 6 or 8 seconds. There were 8 such runs and a localizer. The localizer differed from the main runs in that the images came from eight different categories - faces, human body, words, non-sense words, numbers, places, objects and checkerboards. Each category in the localizer was presented in a 6-second block, with each image displayed for 100 ms followed by a 200 ms inter-stimulus interval. Each category block was presented 5 times (8 categories x 5 = 40 blocks), and the inter-block intervals were jittered between 4, 6 and 8 seconds (mean = 6 seconds). Psychtoolbox-3 (MATLAB 2021b) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080 MDTB The **Multi-Domain Task Battery** was adapted from a study conducted by (`King et al., 2019 `__), where the authors aimed to investigate the functional organization of the cerebellar cortex by running an fMRI study with a collection of more than 20 tasks. The authors made the paradigm's code and parameters openly available for 9 of those tasks at the time `here `__, which allowed us to integrate them into the IBC project. The implementation differed from our usual approach: we presented all 9 tasks in one run, instead of dedicating a separate run to each task. :raw-html:`
` The protocol consisted of a short training session outside the scanner and 4 runs inside the scanner. In every run, each task was performed twice in blocks of 35 seconds. At the beginning of each block, the instructions were displayed for 5 seconds so that subjects remembered the instructions and the expected actions. Immediately after, the task was performed continuously for 30 seconds; each run therefore lasted around 10 minutes and 30 seconds. If a task required a response from the subjects, they received feedback on their performance in the form of a green check mark or a red cross, for correct or incorrect answers respectively. At the end of each run, the success rates for each task were displayed, followed by a video of a knot being tied, shown as part of an attention check for the action observation task (described below). :raw-html:`
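For reference, the block arithmetic above (9 tasks x 2 blocks x 35 s = 630 s, i.e. about 10 minutes and 30 seconds per run) can be sketched as a simple schedule builder. This is only an illustrative sketch in plain Python, not the actual Psychopy implementation; the task labels and the shuffling rule are assumptions.

.. code-block:: python

   import random

   # The nine MDTB tasks presented within a single run (labels are illustrative).
   TASKS = [
       "visual_search", "action_observation", "flexion_extension",
       "finger_sequence", "theory_of_mind", "two_back",
       "semantic_prediction", "movie_watching", "rest",
   ]

   INSTRUCTION_S = 5   # instruction screen at the start of each block
   TASK_S = 30         # continuous task performance
   BLOCK_S = INSTRUCTION_S + TASK_S  # 35 s per block


   def build_run_schedule(seed=0):
       """Return a list of (onset, task) pairs: each task appears twice per run."""
       rng = random.Random(seed)
       blocks = TASKS * 2               # two blocks per task and per run
       rng.shuffle(blocks)              # assumed: block order is randomized
       return [(i * BLOCK_S, task) for i, task in enumerate(blocks)]


   schedule = build_run_schedule()
   run_duration = len(schedule) * BLOCK_S
   print(f"{len(schedule)} blocks, {run_duration} s (~{run_duration / 60:.1f} min)")
   # -> 18 blocks, 630 s (~10.5 min)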
` Following are detailed descriptions for each task: :raw-html:`
` **1) Visual search:** Several 'L'-shaped characters rotated at different angles were shown on each trial, and subjects were asked to search for the standard (correct) orientation and press with their index finger if it was present, or with their middle finger if it was not. On each run, this task was performed twice, each block comprising 12 trials, half of them True (the correct 'L' shape was present). The order of True and False trials was randomized for each block on each run. :raw-html:`
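As a rough illustration of how such a block could be parameterized (not the original code; the 50/50 split and the shuffling are as described above, everything else is an assumption):

.. code-block:: python

   import random

   def visual_search_block(n_trials=12, seed=None):
       """Build one visual-search block: half target-present ('True') trials,
       half target-absent ('False') trials, in randomized order."""
       rng = random.Random(seed)
       trials = [True] * (n_trials // 2) + [False] * (n_trials - n_trials // 2)
       rng.shuffle(trials)
       # Expected responses: index finger for target present, middle finger otherwise.
       return [{"target_present": t, "expected_key": "index" if t else "middle"}
               for t in trials]

   block = visual_search_block(seed=42)
   print(sum(t["target_present"] for t in block), "target-present trials out of", len(block))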
` **2) Action observation:** Videos of knots being tied were displayed along with their name tags, and subjects were asked to remember the knot and its name. Two different knots were presented per run, and at the end of each run, another video of a knot was shown, this time without the name tag. We then asked subjects whether this particular knot had been displayed during the run and, if so, to say its name. Only in run 3 had the knot displayed at the end actually been presented during the run. :raw-html:`
` **3) Flexion - extension:** Alternating cues with the words 'Extension' and 'Flexion' were presented, instructing participants to extend or flex their toes accordingly. :raw-html:`
` **4) Finger sequence:** A sequence of 6 digits from 1 to 4 was displayed, and subjects were asked to press the keys corresponding to the numbers in the shown sequence. The mapping ran from the index finger (1) to the pinky (4). Each block consisted of 8 trials, and two blocks were presented during each run. Trials could be either simple or complex: simple trials involved one or two consecutive fingers, while complex trials involved three or four fingers, not necessarily consecutive. As the subject pressed the buttons, the digits turned green if the correct key was pressed or red if not. At the end of each trial, if the whole sequence had been followed accurately, a green check appeared as feedback; if one or more presses were incorrect, a red cross appeared. Each trial lasted 3.5 seconds; if the subject did not complete the sequence before the end of the trial, it was counted as incorrect and the red cross appeared. :raw-html:`
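A minimal sketch of the trial-scoring logic described above (illustrative only; the key representation, timing handling and feedback labels are assumptions, and the real experiment ran inside Psychopy's event loop):

.. code-block:: python

   def score_finger_sequence(target, presses, trial_duration=3.5):
       """Score one finger-sequence trial.

       target : list of digits 1-4 shown on screen (index = 1 ... pinky = 4)
       presses: list of (time_s, digit) tuples actually pressed by the subject
       Returns per-digit feedback colours plus the overall check/cross outcome.
       """
       # Ignore presses arriving after the 3.5 s trial window.
       valid = [d for t, d in presses if t <= trial_duration]

       colours = []
       for i, expected in enumerate(target):
           if i < len(valid):
               colours.append("green" if valid[i] == expected else "red")
           else:
               colours.append("missing")  # sequence not completed in time

       correct = len(valid) >= len(target) and all(c == "green" for c in colours)
       return colours, ("green_check" if correct else "red_cross")


   # Example: a complex 6-digit trial completed correctly within 3.5 s.
   target = [1, 3, 2, 4, 2, 1]
   presses = [(0.4, 1), (0.8, 3), (1.3, 2), (1.9, 4), (2.4, 2), (2.9, 1)]
   print(score_finger_sequence(target, presses))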
` **5) Theory of mind:** The subject was presented with a short paragraph narrating a story, followed by a related statement. Subjects had to decide whether the statement was true based on the initial paragraph, pressing with their index finger for true or with their middle finger for false. Four trials in total were performed per run, half of them being true. If the subject answered correctly, a green check appeared; otherwise, a red cross appeared. Each trial lasted 14 seconds; if the subject did not reply within that period, the trial was counted as a mistake and the negative feedback appeared. :raw-html:`
` **6) 2-back:** Several images were presented, one after another. For each presented image, participants had to press with their index finger if it was the same as the one presented 2 images before, or with their middle finger if it was not. Trials were divided into easy and hard: easy trials were those where the current image had not been displayed two images before, and hard trials were those where it had. There were 12 trials per block, 7 of the easy type and 5 of the hard type. As with the rest of the tasks, this one was performed twice, leading to 24 trials in total per run. Each image was displayed for 2 seconds, followed by the feedback, which was once again a green check or a red cross. :raw-html:`
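Below is a sketch of how a 2-back block with exactly 7 easy (non-repeat) and 5 hard (repeat) trials could be generated. It is an assumption about how the stimulus lists might be built, shown only to make the trial structure concrete; it is not the original code.

.. code-block:: python

   import random

   def two_back_block(images, n_trials=12, n_hard=5, seed=None):
       """Generate a 2-back block: 'hard' trials repeat the image shown two
       positions earlier, 'easy' trials do not. Returns (sequence, is_hard flags)."""
       rng = random.Random(seed)
       # Hard trials cannot occur on the first two positions.
       hard_positions = set(rng.sample(range(2, n_trials), n_hard))

       sequence, is_hard = [], []
       for i in range(n_trials):
           if i in hard_positions:
               sequence.append(sequence[i - 2])          # repeat the 2-back image
               is_hard.append(True)
           else:
               candidates = [img for img in images
                             if i < 2 or img != sequence[i - 2]]
               sequence.append(rng.choice(candidates))   # anything but the 2-back image
               is_hard.append(False)
       return sequence, is_hard


   images = [f"img_{k:02d}.png" for k in range(20)]
   seq, hard = two_back_block(images, seed=1)
   print(sum(hard), "hard trials;", len(seq) - sum(hard), "easy trials")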
` **7) Semantic prediction:** Words from a sentence were shown, one at a time. Subjects had to decide whether the last word fit into the sentence or not, by pressing with their index or middle finger, respectively. There were 4 trials per block, leading to 8 trials per run. Each block consisted of 2 'True' and 2 'False' trials, and the order of appearance was randomized. Each trial could be either easy or hard, depending on the ambiguity of the sentence, with 2 easy and 2 hard trials per block. The subjects received feedback after their response, a green check or a red cross, consistent with the tasks described above. :raw-html:`
` **8) Romance movie watching:** A 30-second clip from the 2009 Disney Pixar movie 'Up' was presented without any sound. Subjects were instructed to watch passively. Two such clips were presented in each run, and no clip was repeated within or across runs. :raw-html:`
` **9) Rest:** A short resting-state period during which a fixation cross was displayed; subjects were asked to fixate on it and not move. Psychopy 2021.1.3. (Python 3.8.5) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080 Emotion This task was adapted from (`Favre et al., 2021 `__). This protocol aimed to examine emotional processing and the regions engaged in it. The subjects were presented with a series of pictures divided into two categories: neutral and negative images. The scenes depicted were mainly social contexts, for instance people chatting or eating in the neutral blocks, and people suffering or fighting in the negative blocks. The task consisted of two runs and a short training session before the acquisition. Each run consisted of 12 blocks of 10 images, alternating between neutral and negative blocks. Every picture was displayed for 2 seconds, and the subjects were instructed to press with their index finger if the scene occurred indoors, either inside a building or a car. The inter-block interval lasted 2 seconds, during which a fixation cross was shown. In the middle and at the end of the run, the subjects were presented with two questions: *How do you feel?* and *How nervous do you feel?*, along with a scale for them to answer, going from *not well* to *extremely well* for the former question and from *not nervous* to *extremely nervous* for the latter. The subjects used their index and middle fingers to slide along the scale and had 7 seconds to give their answer. :raw-html:`
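To make the block structure of the Emotion runs concrete, here is a rough timeline builder for one run. It is a sketch under the assumptions that blocks strictly alternate starting with neutral and that the two assessment questions appear after the 6th and 12th blocks; it is not the Psychtoolbox code actually used.

.. code-block:: python

   # Rough event timeline for one Emotion run (all durations in seconds).
   IMAGE_S = 2.0        # each picture shown for 2 s
   IMAGES_PER_BLOCK = 10
   INTER_BLOCK_S = 2.0  # fixation cross between blocks
   QUESTION_S = 7.0     # response window for each assessment question
   N_BLOCKS = 12

   def emotion_run_timeline():
       events, t = [], 0.0
       for b in range(N_BLOCKS):
           condition = "neutral" if b % 2 == 0 else "negative"  # assumed: starts neutral
           for i in range(IMAGES_PER_BLOCK):
               events.append((t, f"{condition}_image_{i + 1}"))
               t += IMAGE_S
           # Assessment ("How do you feel?", "How nervous do you feel?")
           # assumed to follow the middle and the last block.
           if b in (N_BLOCKS // 2 - 1, N_BLOCKS - 1):
               for q in ("feeling", "nervousness"):
                   events.append((t, f"assessment_{q}"))
                   t += QUESTION_S
           events.append((t, "fixation"))
           t += INTER_BLOCK_S
       return events, t

   events, total = emotion_run_timeline()
   print(f"{len(events)} events, ~{total / 60:.1f} min per run")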
` The images used for stimuli were extracted from different databases: the International Affective Picture System (IAPS) (`Lang et al., 2008 `__), the Geneva Affective Picture Database (GAPED) (`Dan-Glauser and Scherer, 2011 `__), the Socio-Moral Image Database (SMID) (`Crone et al., 2018 `__), the Complex Affective Scene Set (COMPASS) (`Weierich et al., 2019 `__), the Besançon Affective Picture Set-Adolescents (BAPS-Ado) (`Szymanska et al., 2015 `__) and the EmoMadrid database (`Carretié et al., 2019 `__). The training session was performed inside the scanner before running the experiment, in order to familiarize the subject with the task and the slider used to answer. The training consisted of 3 blocks: neutral, negative and neutral images, followed by the two questions. We therefore had three main conditions for the task: *neutral*, *negative* and *assessment*. Psychtoolbox-3 (MATLAB 2021b) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080 -MultiModal This protocol was derived from work by colleagues at `Laboratory for Neuro- and Psychophysiology `__ from the KU Leuven Medical School, who aimed to compare evoked responses to the same sensory stimulation across two different cohorts of human and non-human primates. Three categories of stimuli were used: visual, tactile and auditory. Visual stimuli consisted of gray-scale pictures of ten classes: monkey and human faces, monkey and human bodies (without the head), four-legged mammals, birds, man-made objects that looked either like a human or a monkey's body (e.g. guitar or kettle), fruits/vegetables and body-like sculptures. We presented 10 pictures per class, giving a total of 100 images, which were presented superimposed onto a pink noise background that filled the entire display. Tactile stimuli consisted of compressed air puffs delivered on both left and right side of the subjects' face on three different locations: above the upper lip, around the cheek area or middle lip and beneath the lower lip. The air puffs were delivered using 6 plastic pipes, one to each target location, with an intensity of 0.5 bars, at a distance of approximately 5 mm to the face, without touching it. The plastic pipes were connected to a custom-made computer controlled pneumatic system in the console room. Auditory stimuli consisted of 1-second clips of different natural sounds from six classes: human speech, human no-speech (e.g. baby crying, cough), monkey calls, animal sounds (e.g. horse), tool sounds and musical instruments (e.g. scissors, piano), and sounds from nature (e.g. rain, thunder). There were 10 different sounds per class, thus 60 different sound-clips in total. MR-compatible headphones were used. :raw-html:`
` To be congruent with the study from our colleagues, the auditory stimuli needed to be presented during silent periods, meaning no scanner noise, to ensure they were clearly audible and distinguishable (`Erb et al., 2018 `__). To achieve that, the repetition time (TR) for this protocol was modified to 2.6 seconds, during which we had a silence period (no data acquired, no scanner noise) of 1.2 seconds for stimuli presentation and 1.4 seconds of acquisition 120 time (TA). To ensure uniformity across the experiment, all three types of stimuli were presented during the silent period. Due to the change on TR and TA, some parameters were also updated to maintain a good enough spatial-resolution. :ref:`This table ` contains the final set of acquisition parameters used for this protocol. Psychopy 2021.1.3. (Python 3.8.5) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) MRConfon MKII 1920x1080 Hardware: LabJack-U3, custom-made computer-controlled pneumatic system +MultiModal This protocol was derived from work by colleagues at `Laboratory for Neuro- and Psychophysiology `__ from the KU Leuven Medical School, who aimed to compare evoked responses to the same sensory stimulation across two different cohorts of human and non-human primates. Three categories of stimuli were used: visual, tactile and auditory. Visual stimuli consisted of gray-scale pictures of ten classes: monkey and human faces, monkey and human bodies (without the head), four-legged mammals, birds, man-made objects that looked either like a human or a monkey's body (e.g. guitar or kettle), fruits/vegetables and body-like sculptures. We presented 10 pictures per class, giving a total of 100 images, which were presented superimposed onto a pink noise background that filled the entire display. Tactile stimuli consisted of compressed air puffs delivered on both left and right side of the subjects' face on three different locations: above the upper lip, around the cheek area or middle lip and beneath the lower lip. The air puffs were delivered using 6 plastic pipes, one to each target location, with an intensity of 0.5 bars, at a distance of approximately 5 mm to the face, without touching it. The plastic pipes were connected to a custom-made computer controlled pneumatic system in the console room. Auditory stimuli consisted of 1-second clips of different natural sounds from six classes: human speech, human no-speech (e.g. baby crying, cough), monkey calls, animal sounds (e.g. horse), tool sounds and musical instruments (e.g. scissors, piano), and sounds from nature (e.g. rain, thunder). There were 10 different sounds per class, thus 60 different sound-clips in total. MR-compatible headphones were used. :raw-html:`
` To be congruent with the study from our colleagues, the auditory stimuli needed to be presented during silent periods, meaning no scanner noise, to ensure they were clearly audible and distinguishable (`Erb et al., 2018 `__). To achieve that, the repetition time (TR) for this protocol was modified to 2.6 seconds, during which we had a silent period (no data acquired, no scanner noise) of 1.2 seconds for stimulus presentation and 1.4 seconds of acquisition time (TA). To ensure uniformity across the experiment, all three types of stimuli were presented during the silent period. Due to the changes in TR and TA, some parameters were also updated to maintain sufficient spatial resolution. :ref:`This table ` contains the final set of acquisition parameters used for this protocol. Psychopy 2021.1.3. (Python 3.8.5) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) MRConfon MKII 1920x1080 Hardware: LabJack-U3, custom-made computer-controlled pneumatic system Mario This task involves a video game protocol in which participants played Super Mario Bros. We adapted the implementation from our colleagues at the `Courtois-Neuromod `__ project, who used it with their own cohort, based on the premise that video game playing engages various cognitive domains such as constant reward processing, strategic planning, environmental monitoring, and action-taking (`Bellec and Boyle, 2019 `__). Monitoring brain activity during video game play therefore provides an intriguing window into the interaction of these cognitive processes. Our colleagues at the Courtois-Neuromod team also designed an MRI-compatible video game controller, which closely resembles the shape and feel of commercial controllers, ensuring a familiar gaming experience. We replicated this controller for the IBC project; for more details, refer to (`Harel et al., 2023 `__). This implementation was created using OpenAI's `GymRetro `__ package. :raw-html:`
` The game consisted of eight different worlds, each with three levels. Participants were instructed to play freely and complete as many levels as possible within the session, so the time spent on each level varied across participants. None of the participants completed the entire game, but the majority reached the last world. Participants had unlimited lives but were allowed only three attempts to complete a level: if they lost twice consecutively, they returned to the last checkpoint in the current level, and losing a third time restarted the level and reset the count. This task was conducted over two sessions, each consisting of six runs lasting 10 minutes each. Each session began anew, but within a session, subsequent runs picked up where the previous one left off. For example, if a player was halfway through a level when an acquisition run ended, they resumed from the same point in the next run. Psychopy 2021.1.3. (Python 3.8.5) MR-compatible video game controller MRConfon MKII 1920x1080
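The game was run through OpenAI's Gym Retro interface. The snippet below is a minimal, hedged sketch of how such an environment can be instantiated and stepped; it assumes the stock 'SuperMarioBros-Nes' integration name and a locally imported ROM, and it omits the MRI synchronization, controller mapping and logging used in the actual protocol.

.. code-block:: python

   import retro  # gym-retro; the NES ROM must be imported locally beforehand

   def play_random_episode(max_steps=1000):
       """Open a Super Mario Bros environment and play random actions,
       purely to illustrate the observation/action loop."""
       env = retro.make(game="SuperMarioBros-Nes")  # assumed integration name
       obs = env.reset()
       total_reward = 0.0
       for _ in range(max_steps):
           action = env.action_space.sample()        # random button presses
           obs, reward, done, info = env.step(action)
           total_reward += reward
           if done:                                   # life lost / level ended
               break
       env.close()
       return total_reward

   if __name__ == "__main__":
       print("episode reward:", play_random_episode())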