From aa5eae8f64be068db3e160ccb54d86ba6d6164e8 Mon Sep 17 00:00:00 2001
From: Fernanda Ponce
Date: Mon, 2 Sep 2024 16:40:38 +0200
Subject: [PATCH] typos and errors

---
 ibc_data/ibc_tasks.tsv | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/ibc_data/ibc_tasks.tsv b/ibc_data/ibc_tasks.tsv
index dc60133..c132136 100644
--- a/ibc_data/ibc_tasks.tsv
+++ b/ibc_data/ibc_tasks.tsv
@@ -49,13 +49,12 @@ Stroop This task is a part of a battery of several tasks coming from the `experi
 ColumbiaCards This task is a part of a battery of several tasks coming from the `experiment factory `__ published in (`Eisenberg et al., 2017 `__) and presented using the `expfactory-python `__ package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. The adjustments concerned the translation of all written stimuli and instructions into French, as well as fixing a total time limit for the experiments that still allowed the participants to respond at their own pace. All these modifications were done with extreme care not to alter the psychological state that the original tasks were designed to capture during scanning. :raw-html:`<br/>` The ColumbiaCards task is a gambling task in which the participants are presented with a set of cards facing down. In each trial, a different number of cards appears and the participant is informed of the amount gained per good card uncovered, the amount lost when uncovering a bad card, and the number of bad cards in the set. The participant can uncover as many cards as they want, by pressing the index finger's button on the response box, before pressing the middle finger's button to end the trial and start the next one. Uncovering a bad card automatically ends the trial. In each trial, the number of total cards, the number of bad cards, the amount gained per card uncovered and the amount lost if a bad card was uncovered changed. The order in which the cards appear is pre-determined for each trial, but the participant does not know it. The task is composed of 88 trials divided into 4 blocks of 22 trials each and was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions. :raw-html:`<br/>` For the original version of this task, the authors provide a `simulator `__ which contains the original design. JavaScript, Python 2.7 Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 3200x1800
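To make the payoff structure of a ColumbiaCards trial concrete, here is a minimal Python sketch of the trial logic just described. It is an illustration under stated assumptions rather than the expfactory implementation: the function name and parameter values are hypothetical, and we assume the loss is simply subtracted from the gains accumulated within the trial.

.. code-block:: python

   import random

   def run_trial(n_cards, n_bad, gain_per_card, loss_amount, n_to_uncover):
       """Simulate one ColumbiaCards trial (illustrative sketch only)."""
       # The card order is pre-determined before the trial starts and is
       # unknown to the participant, as in the task description.
       deck = ["bad"] * n_bad + ["good"] * (n_cards - n_bad)
       random.shuffle(deck)

       score = 0
       for card in deck[:n_to_uncover]:  # index-finger presses, one per card
           if card == "bad":
               return score - loss_amount  # a bad card ends the trial automatically
           score += gain_per_card
       return score  # middle-finger press: stop here and bank the gains

   # Hypothetical trial: 8 cards, 1 bad, +10 per good card, -70 for the bad one.
   random.seed(0)
   print(run_trial(n_cards=8, n_bad=1, gain_per_card=10, loss_amount=70, n_to_uncover=3))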
 DotPatterns This task is a part of a battery of several tasks coming from the `experiment factory `__ published in (`Eisenberg et al., 2017 `__) and presented using the `expfactory-python `__ package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. The adjustments concerned the translation of all written stimuli and instructions into French, as well as fixing a total time limit for the experiments that still allowed the participants to respond at their own pace. All these modifications were done with extreme care not to alter the psychological state that the original tasks were designed to capture during scanning. :raw-html:`<br/>` The DotPatterns task presents the participant with pairs of stimuli, separated by a fixation cross. The participant has to press a button (index finger) as fast as possible after the presentation of the probe, and only one specific cue-probe combination is instructed to be responded to differently. This task was designed to capture activation relative to the expectancy of the probe elicited by the correct cue. The task is composed of 160 trials divided into 4 blocks of 40 trials each. Each cue and probe lasted 500 ms, with the fixation cross separating them lasting 2000 ms. It was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions. JavaScript, Python 2.7 Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 3200x1800
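Since the cue-fixation-probe timing is the core of this design, the short Python sketch below lays out the event schedule of one trial. The durations come from the description above; the cue and probe labels (and the idea of an "A-X" target pair) are illustrative assumptions.

.. code-block:: python

   # Illustrative event schedule for one DotPatterns trial.
   # Durations (ms) are from the task description; labels are assumed.
   CUE_MS, FIX_MS, PROBE_MS = 500, 2000, 500

   def trial_events(cue, probe, start_ms=0):
       """Return (onset_ms, duration_ms, label) tuples for a single trial."""
       events, t = [], start_ms
       for label, dur in ((f"cue:{cue}", CUE_MS),
                          ("fixation", FIX_MS),
                          (f"probe:{probe}", PROBE_MS)):
           events.append((t, dur, label))
           t += dur
       return events

   for event in trial_events("A", "X"):  # hypothetical target pair
       print(event)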
 WardAndAllport "This task is a part of a battery of several tasks coming from the `experiment factory `__ published in (`Eisenberg et al., 2017 `__) and presented using the `expfactory-python `__ package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. The adjustments concerned the translation of all written stimuli and instructions into French, as well as fixing a total time limit for the experiments that still allowed the participants to respond at their own pace. All these modifications were done with extreme care not to alter the psychological state that the original tasks were designed to capture during scanning. :raw-html:`<br/>` The WardAndAllport task is a digital version of the WATT3 task (`Ward, Allport, 1997 `__, `Shallice, 1982 `__), and its main purpose is to capture activation related to planning abilities. For this, the task uses a factorial manipulation of 2 task parameters: search depth and goal hierarchy. Search depth involves mentally constructing the steps necessary to reach the goal state, and the interdependency between steps required to do so. This is expressed by the presence or absence of intermediate movements necessary for an optimal solution of each problem. Goal hierarchy refers to whether the order in which the three balls have to be put in their goal positions can be completely extracted from looking at the goal state, or whether it requires the participant to integrate information between goal and starting states (which results in unambiguous or partially ambiguous goal states, respectively). Detailed explanations and examples of each one of the four categories can be found in `Kaller et al., 2011 `__. :raw-html:`<br/>` The task comprised 4 practice trials, followed by 48 test trials divided into 3 blocks of 14 trials each, separated by 10 seconds of resting period. Data were only acquired during the test trials, although the practice trials were also performed inside the scanner with its corresponding equipment. In each trial, the participant would see two configurations of the towers: the test towers on the left, and the target towers on the right. The towers on the right showed the final configuration of balls required to complete the trial. Three buttons were assigned to the left (index finger's button), middle (middle finger's button) and right (ring finger's button) columns respectively, and each button press would either take the upper ball of the selected column or drop the ball in hand at the top of the selected column. On the upper left corner, a gray square with the text ""Ball in hand"" would show the ball currently in hand. All trials could be solved in 3 movements, considering taking a ball and putting it elsewhere as a single movement. The time between the end of one trial and the beginning of the next one was 1000 ms." JavaScript, Python 2.7 Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 3200x1800
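Because the button-to-movement mapping is easy to misread, here is a minimal state-machine sketch of the tower mechanics (one press either picks up the top ball of a column or drops the held ball onto it). This is a schematic reconstruction, not the task code: the class and ball names are invented, and the per-column height limits of the real apparatus are omitted.

.. code-block:: python

   class Watt3Board:
       """Minimal sketch of the WATT3 tower mechanics (illustrative only)."""

       def __init__(self, columns):
           self.columns = [list(c) for c in columns]  # index 0 = bottom ball
           self.hand = None  # the "Ball in hand" square starts empty

       def press(self, col):
           """One button press on a column (0=index, 1=middle, 2=ring finger)."""
           if self.hand is None:
               if self.columns[col]:                # take the upper ball
                   self.hand = self.columns[col].pop()
           else:
               self.columns[col].append(self.hand)  # drop the held ball on top
               self.hand = None

   # Taking a ball and putting it elsewhere counts as one movement (two presses).
   board = Watt3Board([["red", "green"], ["blue"], []])
   board.press(0)  # pick up "green"
   board.press(2)  # drop it on the empty right column: one complete movement
   print(board.columns, board.hand)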
+LePetitPrince This experiment is a natural language comprehension protocol, originally implemented by (`Bhattasali et al., 2019 `__, `Hale et al., 2022 `__). Each run of this task comprised three chapters of The Little Prince story by Antoine de Saint-Exupéry in French (Le Petit Prince). During each run, the participant was presented with the audio of the story. In between runs, the experimenters would ask some multiple-choice questions, as well as two or three open-ended questions about the contents of the previous run, to keep participants engaged. The length of the runs varied between nine and thirteen minutes. Data were acquired in two different sessions, comprising five and four runs, respectively. The protocol also included a six-minute localizer at the end of the second acquisition, in order to accurately map language areas for each participant. :raw-html:`<br/>` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-08*, for whom we employed MRConfon MKII. Expyriment 0.9.0 (Python 3.6) OptoACTIVE (Optoacoustics) 1920x1080
 BiologicalMotion1 "The phenomenon known as *biological motion* was first introduced in (`Johansson, 1973 `__), and consisted of point-light displays arranged and moving in a way that resembled a person moving. The task that we used was originally developed by (`Chang et al., 2018 `__). During the task, the participants were shown a point-light ""walker"", and they had to decide if the walker's orientation was to the left or right, by pressing the index finger's button or the middle finger's button on the response box, respectively. The stimuli were divided into 6 different categories: three types of walkers, as well as their reversed versions. The division of the categories focuses on three types of information that the participant can get from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, speed of the points and mirror-symmetric motion. Please see `Chang et al., 2018 `__ for more details about the stimuli. The data were acquired in 4 runs. Each run comprises 12 blocks with 8 trials per block. The stimulus duration was 500 ms and the inter-stimulus interval 1500 ms (total 16 s per block). Each of the blocks was followed by a fixation block, which also lasted 16 s. Each run contained 4 of the 6 conditions, repeated 3 times each. There were 2 different types of runs: type 1 and 2. This section refers to run type 1, which contained both global types (natural and inverted) and both local naturals. For run type 2 refer to :ref:`BiologicalMotion2`." Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1024x768 `See demo `__
 BiologicalMotion2 "The phenomenon known as *biological motion* was first introduced in (`Johansson, 1973 `__), and consisted of point-light displays arranged and moving in a way that resembled a person moving. The task that we used was originally developed by (`Chang et al., 2018 `__). During the task, the participants were shown a point-light ""walker"", and they had to decide if the walker's orientation was to the left or right, by pressing the index finger's button or the middle finger's button on the response box, respectively. The stimuli were divided into 6 different categories: three types of walkers, as well as their reversed versions. The division of the categories focuses on three types of information that the participant can get from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, speed of the points and mirror-symmetric motion. Please see `Chang et al., 2018 `__ for more details about the stimuli. The data were acquired in 4 runs. Each run comprises 12 blocks with 8 trials per block. The stimulus duration was 500 ms and the inter-stimulus interval 1500 ms (total 16 s per block). Each of the blocks was followed by a fixation block, which also lasted 16 s. Each run contained 4 of the 6 conditions, repeated 3 times each. This section refers to run type 2, which contained both local natural and both local modified conditions." Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1024x768
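The block arithmetic quoted in both BiologicalMotion entries can be checked in a couple of lines; the per-run duration at the end is our own derivation from the stated numbers, not a figure from the original description.

.. code-block:: python

   # Block timing for the BiologicalMotion runs (durations from the text).
   STIM_MS, ISI_MS, TRIALS_PER_BLOCK = 500, 1500, 8
   block_s = TRIALS_PER_BLOCK * (STIM_MS + ISI_MS) / 1000
   print(block_s)  # 16.0 s per stimulation block, as stated

   # Each block is followed by a 16 s fixation block; 12 blocks per run.
   FIXATION_S, BLOCKS_PER_RUN = 16, 12
   print(BLOCKS_PER_RUN * (block_s + FIXATION_S))  # 384.0 s of task per run (derived)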
-LePetitPrince This experiment is a natural language comprehension protocol, originally implemented by (`Bhattasali et al., 2019 `__, `Hale et al., 2022 `__). The use of complex naturalistic language stimuli has been used to study other processes, like semantic maps (`Huth et al., 2016 `__). Data were acquired in two different sessions, each one comprising five and four runs, respectively. Each run comprised three chapters of The Little Prince story by Antoine de Saint-Exupery in french (Le Petit Prince). During each run, the participant was presented with the audio of the story. In between runs, the experimenters would ask some multiple choice questions, as well as two or three open ended questions about the contents of the previous run, to keep participants engaged. The length of the runs varied between nine and thirteen minutes. The protocol also included a six-minutes localizer at the end of the second acquisition, in order to accurately map language areas for each participant. :raw-html:`<br/>` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-08*, for whom we employed MRConfon MKII. Expyriment 0.9.0 (Python 3.6) OptoACTIVE (Optoacoustics) 1920x1080
 MathLanguage The **Mathematics and Language** protocol was taken from (`Amalric et al., 2016 `__). This task aims to comprehensively capture the activation related to several types of mathematical and other facts, presented as sentences. During the task, the participants are presented with a series of sentences, each one in either of two modalities: auditory or visual. Some of the categories include theory-of-mind statements, arithmetic facts and geometry facts. After each sentence, the participant has to indicate whether they believe the presented fact to be true or false, by pressing the button in the left or right hand, respectively. A second version of each run (runs *B*) was generated by reversing the modality of each trial, so that trials that were visual in the original runs (runs *A*) would be auditory in their corresponding *B* version, and vice versa. Each participant performed four A-type runs, followed by three B-type runs due to time constraints. Each run had an equal number of trials of each category, and the order of the trials was the same for all subjects. :raw-html:`<br/>` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-05* and *subject-08*, who completed the session using MRConfon MKII. Expyriment 0.9.0 (Python 3.6) In-house custom-made sticks featuring one top button, one to be used in each hand OptoACTIVE (Optoacoustics) 1920x1080 `See demo `__ `Repository `__
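The relation between the A-type and B-type runs is a simple per-trial modality flip, made explicit in the few lines of Python below; the trial dictionaries are made-up stand-ins for the actual design files.

.. code-block:: python

   # Deriving a B-type run from an A-type run by reversing each trial's modality.
   # The trial structure shown here is a hypothetical stand-in for the design files.
   FLIP = {"visual": "auditory", "auditory": "visual"}

   run_a = [
       {"category": "arithmetic fact", "modality": "visual"},
       {"category": "theory of mind", "modality": "auditory"},
   ]
   run_b = [{**trial, "modality": FLIP[trial["modality"]]} for trial in run_a]
   print(run_b)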
 SpatialNavigation This protocol, an adaptation of the one used in (`Diersch et al., 2021 `__), was originally designed to capture the effects of spatial encoding and orientation learning in different age groups. The task requires subjects to navigate and orient themselves in a complex virtual environment resembling a typical German historic city center, consisting of town houses, shops and restaurants. There are three parts to this task: introduction (outside of the scanner), encoding (in scanner) and retrieval (in scanner). Before entering the scanner, the participants went through an introduction phase, during which they had the freedom to navigate the virtual environment with the objective of collecting eight red balls scattered throughout various streets of the virtual city. During this part, the participants could familiarize themselves with the different buildings and learn the location of the two target buildings: Town Hall and Church. After they collected all the red balls, a short training session on the main task was performed to ensure the correct understanding of the instructions. :raw-html:`<br/>` Then, participants went into the scanner. The task began with the encoding phase. During this period, the participant had to passively watch the camera move from one target building to the other, in such a way that every street of the virtual environment was passed through in every possible direction. Participants were instructed to pay close attention to the spatial layout of the virtual environment and the location of the target landmarks. Passive transportation, instead of self-controlled traveling, was chosen to ensure that every participant experienced the virtual environment for the same amount of time. After the encoding phase, the retrieval phase started, which consisted of 8 experimental trials and 4 control trials per run. In each trial, the participant was positioned near an intersection within the virtual environment, which was enveloped in a dense fog, limiting visibility. Subsequently, the camera automatically approached the intersection and centered itself. The participant's task was to indicate the direction of the target building, which was displayed as a miniature picture at the bottom of the screen. Control and experimental trials were identical, except that during control trials the participant had to point to one of the buildings of the intersection that had been colored in blue, instead of the target building. All of the runs, except the first one, began with the encoding phase, followed by the retrieval phase. In the initial run, a control trial of the retrieval phase preceded the standard design of the encoding phase followed by the retrieval phase. Vizard 6 Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080 `Repository `__
 GoodBadUgly "The GoodBadUgly task was adapted from the study by (`Mantini et al., 2012 `__), which was dedicated to investigating the correspondence between monkey and human brains using naturalistic stimuli. The task relies on watching (viewing and listening to) the whole movie ""The Good, the Bad and the Ugly"" by Sergio Leone. For IBC, the French-dubbed version ""Le Bon, la Brute et le Truand"" was presented. The original 177-minute movie was cut into approximately 10-minute segments to match the segment length of the original study, which presented only three 10-minute segments from the middle of the movie. This resulted in a total of 18 segments (the last segment being only 4.5 minutes long). This task was performed during three acquisition sessions with seven segments each, one segment per run. The first three segments were repeated during the final acquisition after the entire movie had been completed." Expyriment 0.9.0 (Python 2.7) 1920x1080
-
 EmoMem "This task is a part of the CamCAN (`Cambridge Centre for Ageing and Neuroscience `__) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments concerned the translation of all stimuli and instructions into French, replacing Matlab functions with Octave functions as needed, and eliminating the use of a custom Matlab toolbox, `mrisync `__, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were done taking care not to alter the psychological state that the original tasks were designed to capture. The **Emotional Memory** task was designed to provide an assessment of implicit and explicit memory, and how it is affected by emotional valence. At the IBC we only conducted the encoding part of the task (the Study phase, as mentioned in `Shafto et al., 2014 `__), but not the Test phase, which happened outside the scanner in the original study. In each trial, participants were presented with a background picture for 2 seconds, followed by a foreground picture of an object superimposed on it. Participants were instructed to imagine a ""story"" linking the background and foreground pictures, and after an 8-second presentation, the next trial began. The manipulation of emotional valence exclusively affected the background image, which could be negative, neutral, or positive. Participants were asked to indicate the moment they thought of a story or a connection between the object and the background image by pressing a button. In all, 120 trials were presented over 2 runs." Octave 4.4 + Psychtoolbox 3.0 Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 800x600
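An encoding trial of this task is just two timed displays, laid out in the sketch below. We read the 8-second presentation as the duration of the composite (object-on-background) display; that reading, and all labels, are assumptions for illustration.

.. code-block:: python

   def emomem_trial(valence, t0=0.0):
       """Schedule for one encoding trial; times (s) from the description."""
       BACKGROUND_S = 2.0  # background picture alone
       COMPOSITE_S = 8.0   # object superimposed (one reading of "8-second presentation")
       return [
           (t0, BACKGROUND_S, f"background[{valence}]"),
           (t0 + BACKGROUND_S, COMPOSITE_S, "background+object"),
       ]

   # Valence (negative/neutral/positive) applies to the background image only.
   for event in emomem_trial("negative"):
       print(event)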
 EmoReco This task is a part of the CamCAN (`Cambridge Centre for Ageing and Neuroscience `__) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments concerned the translation of all stimuli and instructions into French, replacing Matlab functions with Octave functions as needed, and eliminating the use of a custom Matlab toolbox, `mrisync `__, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were done taking care not to alter the psychological state that the original tasks were designed to capture. The **Emotion Recognition** task compares brain activity when observing angry versus neutral expressions, and assesses how individuals differ in how they regulate responses to negative emotional expressions (`Shafto et al., 2014 `__). The expressions were presented on female and male faces (15 each), and each face had an angry and a neutral expression version. Emotions were presented in blocks of angry and neutral trials, with equal numbers of female and male faces in each block. In each trial, participants were asked to report the gender of the face by pressing the corresponding button. There were 12 blocks of each emotion and each block consisted of 5 trials. In all, 60 trials were presented in each of the 2 runs. E-Prime 2.0 Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080 `See demo `__
 StopNogo This task is a part of the CamCAN (`Cambridge Centre for Ageing and Neuroscience `__) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments concerned the translation of all stimuli and instructions into French, replacing Matlab functions with Octave functions as needed, and eliminating the use of a custom Matlab toolbox, `mrisync `__, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were done taking care not to alter the psychological state that the original tasks were designed to capture. The StopNogo task assesses systems involved in action restraint and action cancellation by randomly interleaving *Go*, *Stop* and *No-Go* trials (`Shafto et al., 2014 `__). On *Go* trials, participants viewed a black arrow pointing left or right for 1000 ms, and indicated the direction of the arrow by pressing the left/right buttons with their right hand. On *Stop* trials, the arrow changed color from black to red after a short, variable stop-signal delay. Participants were instructed not to respond to the red arrow, so stop-signal trials required canceling the initial response to the black arrow. The stop-signal delay varied from trial to trial in steps of 50 ms, and a staircase procedure was used to maintain a performance level of 66% successful inhibition. Finally, in *No-Go* trials, the arrow was colored red from the start of the trial (stop-signal delay of 0) and participants were required to make no response. Presentation Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080 `See demo `__
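The staircase on the stop-signal delay (SSD) is the most algorithmic element of this battery. The description fixes only the step size (50 ms) and the target rate (66% successful inhibition), so the update rule below is an assumed, simplified tracker for illustration; a symmetric one-up/one-down rule like this one actually converges to about 50%, and the original procedure presumably used a rule biased toward 66%.

.. code-block:: python

   # Illustrative SSD staircase in 50 ms steps. The exact CamCAN update rule is
   # not given in the description above, so this simple tracker is an assumption.
   STEP_MS = 50

   def update_ssd(ssd_ms, stop_succeeded):
       if stop_succeeded:
           ssd_ms += STEP_MS   # harder: the arrow turns red later
       else:
           ssd_ms -= STEP_MS   # easier: the arrow turns red earlier
       return max(0, ssd_ms)   # an SSD of 0 is effectively a No-Go trial

   ssd = 250
   for outcome in [True, True, False, True, False]:  # hypothetical stop outcomes
       ssd = update_ssd(ssd, outcome)
   print(ssd)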