QA/QC of eye-tracking data¶
This notebook visualizes eye-tracking (ET) data for the purpose of quality-checking both the BIDS conversion and the data themselves.
%matplotlib inline
from pathlib import Path
import json
import ppjson  # local helper module providing a compact JSON encoder
from importlib import reload  # For debugging purposes
import numpy as np
import pandas as pd
import eyetrackingrun as et  # local helper module with eye-tracking run utilities
from matplotlib import pyplot as plt
import plot  # local helper module with gaze heatmap plotting utilities
from IPython.display import HTML
from matplotlib import animation
import matplotlib.image as mpimg
In the schedule.tsv file, we've listed the EDF files we created and their associated sessions. Now, let's check out what's in the file.
As an illustrative example, we'll handle the data from session 4. Replace BIDS_PATH with your data's path.
BIDS_PATH = Path("/data/datasets/hcph")
session = "004"
Eye-tracking during the diffusion weighted imaging (DWI) run¶
Let's begin by opening the eye-tracking information collected during a DWI run. It is stored following BIDS conventions, so the data are found in a "headless" (no column-name row) tsv.gz file.
recording_file = BIDS_PATH / "sub-001" / f"ses-{session}" / "dwi" / f"sub-001_ses-{session}_acq-highres_dir-RL_recording-eyetrack_physio.tsv.gz"
metadata = json.loads((
recording_file.parent
/ recording_file.name.replace(".tsv.gz", ".json")
).read_text())
recording = pd.read_csv(
recording_file,
sep=r"\s+",
header=None,
index_col=None,
na_values="n/a",
names=metadata["Columns"],
)
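As a quick sanity check (a small sketch that is not part of the original pipeline), we can verify that the timestamps advance by one sampling period, since the recording is sampled at 1000 Hz and the EyeLink timestamps are expressed in milliseconds:
# Count sampling gaps: at 1000 Hz, consecutive timestamps should differ by 1 ms.
gaps = np.diff(recording["timestamp"].values)
print(f"Non-unit sampling intervals: {(gaps != 1).sum()} out of {len(gaps)}")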
Metadata and calibration¶
The DWI run of our protocol is the first acquisition in which the eye tracker is employed. Therefore, we always run a calibration before it is started, so the calibration metadata will typically be attached to this particular run.
Now, let's print out the contents of the BIDS sidecar JSON corresponding to the DWI run of this session.
print(
json.dumps(metadata, sort_keys=True, indent=2, cls=ppjson.CompactJSONEncoder)
)
{ "AverageCalibrationError": 0.49, "CalibrationCount": 1, "CalibrationLog": [ [9898190, ">>>>>>> CALIBRATION (HV9,P-CR) FOR RIGHT: <<<<<<<<<"], [9898190, "Calibration points:"], [9898190, "-18.8, -42.0 0, 0"], [9898190, "-16.8, -58.4 0, -2457"], [9898190, "-20.0, -25.9 0, 2457"], [9898190, "-45.1, -42.6 -3474, 0"], [9898190, "10.2, -40.9 3474, 0"], [9898190, "-45.4, -59.0 -3474, -2457"], [9898190, "11.1, -55.5 3474, -2457"], [9898190, "-47.5, -27.1 -3474, 2457"], [9898190, "8.6, -25.6 3474, 2457"], [9898190, "eye check box: (L,R,T,B)\n\t -53 17 -62 -22"], [9898190, "href cal range: (L,R,T,B)\n\t-5211 5211 -3686 3686"], [ 9898190, "Cal coeff:(X=a+bx+cy+dxx+eyy,Y=f+gx+goaly+ixx+jyy)\n 5.882e-05 125.7 12.521 -0.22744 -0.18622 \n 4.4115e-05 -4.6332 150.8 -0.048749 0.093667" ], [9898190, "Prenormalize: offx, offy = -18.822 -42.021"], [9898191, "Quadrant center: centx, centy = \n 5.882e-05 4.4115e-05"], [ 9898191, "Corner correction:\n 3.207e-05, -1.7627e-06 \n -1.6438e-05, 3.5065e-05 \n -2.0184e-05, -1.0586e-05 \n 5.7616e-06, 1.5672e-05" ], [9898191, "Gains: cx:120.898 lx:115.734 rx:103.799"], [9898191, "Gains: cy:164.746 ty:163.045 by:164.277"], [9898191, "Resolution (upd) at screen center: X=2.2, Y=1.6"], [9898191, "Gain Change Proportion: X: 0.115 Y: 0.008"], [9898191, "Gain Ratio (Gy/Gx) = 1.363"], [9898191, "Cross-Gain Ratios: X=0.100, Y=0.031"], [9898191, "Quadrant fixup[0] = 0.059,0.003"], [9898191, "Quadrant fixup[1] = 0.029,0.062"], [9898191, "Quadrant fixup[2] = 0.037,0.019"], [9898191, "Quadrant fixup[3] = 0.010,0.027"], [9898191, "PCR gain ratio(x,y) = 2.442, 1.946"], [9898191, "CR gain match(x,y) = 1.020, 1.020"], [9898191, "Slip rotation correction OFF"], [9898193, "CALIBRATION HV9 R RIGHT GOOD"], [9909442, "VALIDATION HV9 R RIGHT GOOD ERROR 0.49 avg. 0.92 max OFFSET 0.17 deg. 
-2.4,-3.7 pix."] ], "CalibrationResultOffset": [ 0.17, [-2.4, -3.7] ], "CalibrationResultOffsetUnits": ["deg", "pixels"], "CalibrationResultQuality": "GOOD", "CalibrationType": "HV9", "Columns": [ "x_coordinate", "y_coordinate", "pupil_size", "pupil_x_coordinate", "pupil_y_coordinate", "fixation", "saccade", "blink", "href_x_coordinate", "href_y_coordinate", "fast_x_velocity", "screen_ppdeg_x_coordinate", "screen_ppdeg_y_coordinate", "timestamp" ], "CornealReflectionThreshold": 179, "DeviceSerialNumber": "ZAB49", "EnvironmentCoordinates": "top-left", "EyeTrackerDistance": 1.02, "EyeTrackingMethod": "P-CR", "LoggedMessages": [ [9917678, "NO Reply is disabled for function eyelink_cal_result"], [9941593, "RECCFG CR 1000 2 0 R"], [9941593, "ELCLCFG TOWER"] ], "Manufacturer": "SR Research", "ManufacturersModelName": "EyeLink CL-OC", "MaximalCalibrationError": 0.92, "PupilFitMethod": "ellipse", "PupilFitMethodNumberOfParameters": 5, "PupilFitParameters": [ [1.01, 4], [0.15, 0.05], [0.65, 0.65], [0, 0, 0.3] ], "PupilThreshold": 68, "RecordedEye": "right", "SampleCoordinateSystem": "gaze-on-screen", "SampleCoordinateUnits": "pixel", "SamplingFrequency": 1000, "ScreenAOIDefinition": [ "square", [ 0, 800, 0, 600 ] ], "SoftwareVersion": "Eyelink 1000 v4.1594", "StartTime": -0.007, "StartTimestamp": 9941588, "StopTime": 1831.94, "StopTimestamp": 11773536, "ValidationErrors": [ [ 0.3, [6.2, 5.3] ], [ 0.36, [-8.7, 4.6] ], [ 0.92, [-13, -21.1] ], [ 0.84, [-20.4, -9.9] ], [ 0.35, [-8.7, -3.8] ], [ 0.87, [-0.5, -23.6] ], [ 0.66, [4.1, -17.4] ], [ 0.29, [8, -0.2] ], [ 0.66, [-8.1, -15.8] ] ], "ValidationPosition": [ [400, 300], [400, 51], [400, 549], [48, 300], [752, 300], [90, 81], [710, 81], [90, 519], [710, 519] ], "blink": { "Description": "Continuous recording of eyeblinks calculated by the eyetracking device (one indicates the eye is closed)." }, "fast_x_velocity": { "Description": "Fast x-coordinate velocity (EyeLink parameter)." }, "fixation": { "Description": "Continuous recording of fixation events calculated by the eyetracking device (one indicates the fixation is active)." }, "href_x_coordinate": { "Description": "Head reference x-coordinate of the recorded eye.", "Units": "pixel" }, "href_y_coordinate": { "Description": "Head reference y-coordinate of the recorded eye.", "Units": "pixel" }, "pupil_size": { "Description": "Pupil area of the recorded eye as calculated by the eyetracking device.", "Units": "a.u." }, "pupil_x_coordinate": { "Description": "Pupil center x-coordinate of the recorded eye.", "Units": "pixel" }, "pupil_y_coordinate": { "Description": "Pupil center y-coordinate of the recorded eye.", "Units": "pixel" }, "saccade": { "Description": "Continuous recording of saccade events calculated by the eyetracking device (one indicates the saccade is active)." }, "screen_ppdeg_x_coordinate": { "Description": "Screen pixels per degree (x-coordinate).", "Units": "pixels/degree" }, "screen_ppdeg_y_coordinate": { "Description": "Screen pixels per degree (y-coordinate).", "Units": "pixels/degree" }, "timestamp": { "Description": "Eye-tracker timestamp of the sampled recorded eye.", "Units": "a.u." }, "x_coordinate": { "Description": "Gaze position x-coordinate of the recorded eye.", "Units": "pixel" }, "y_coordinate": { "Description": "Gaze position y-coordinate of the recorded eye.", "Units": "pixel" } }
recording
|   | x_coordinate | y_coordinate | pupil_size | pupil_x_coordinate | pupil_y_coordinate | fixation | saccade | blink | href_x_coordinate | href_y_coordinate | fast_x_velocity | screen_ppdeg_x_coordinate | screen_ppdeg_y_coordinate | timestamp |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 453.399994 | 126.699997 | 6090.0 | -1675.0 | -6632.0 | 0 | 0 | 0 | 527.0 | -1709.0 | 0.0 | 26.700001 | 26.799999 | 9941595 |
1 | 453.500000 | 127.800003 | 6101.0 | -1676.0 | -6622.0 | 0 | 0 | 0 | 528.0 | -1698.0 | 0.0 | 26.700001 | 26.799999 | 9941596 |
2 | 453.799988 | 129.000000 | 6092.0 | -1674.0 | -6613.0 | 0 | 0 | 0 | 531.0 | -1687.0 | 0.0 | 26.700001 | 26.799999 | 9941597 |
3 | 454.000000 | 129.500000 | 6083.0 | -1672.0 | -6609.0 | 0 | 0 | 0 | 533.0 | -1682.0 | 0.0 | 26.700001 | 26.799999 | 9941598 |
4 | 454.200012 | 128.399994 | 6081.0 | -1670.0 | -6617.0 | 0 | 0 | 0 | 534.0 | -1692.0 | 0.0 | 26.700001 | 26.799999 | 9941599 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
1831430 | NaN | NaN | NaN | -32768.0 | -32768.0 | 0 | 1 | 1 | -7936.0 | -7936.0 | NaN | 30.799999 | 32.599998 | 11773025 |
1831431 | NaN | NaN | NaN | -32768.0 | -32768.0 | 0 | 1 | 1 | -7936.0 | -7936.0 | NaN | 30.799999 | 32.599998 | 11773026 |
1831432 | NaN | NaN | NaN | -32768.0 | -32768.0 | 0 | 1 | 1 | -7936.0 | -7936.0 | NaN | 30.799999 | 32.599998 | 11773027 |
1831433 | NaN | NaN | NaN | -32768.0 | -32768.0 | 0 | 1 | 1 | -7936.0 | -7936.0 | NaN | 30.799999 | 32.599998 | 11773028 |
1831434 | NaN | NaN | NaN | -32768.0 | -32768.0 | 0 | 1 | 1 | -7936.0 | -7936.0 | NaN | 30.799999 | 32.599998 | 11773029 |
1831435 rows × 14 columns
Let's look into the gaze coordinates only:
data = recording[["x_coordinate", "y_coordinate"]]
data[data.x_coordinate.notna() & data.y_coordinate.notna()]
|   | x_coordinate | y_coordinate |
---|---|---|
0 | 453.399994 | 126.699997 |
1 | 453.500000 | 127.800003 |
2 | 453.799988 | 129.000000 |
3 | 454.000000 | 129.500000 |
4 | 454.200012 | 128.399994 |
... | ... | ... |
1785114 | 805.299988 | 1006.299988 |
1785115 | 804.799988 | 990.500000 |
1785116 | 806.299988 | 986.200012 |
1785117 | 807.099976 | 985.400024 |
1785118 | 807.099976 | 985.400024 |
193944 rows × 2 columns
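The proportion of samples with a usable gaze estimate is a handy summary number for QC (a minimal sketch, not part of the original notebook):
# Fraction of samples for which both gaze coordinates are present.
valid = recording[["x_coordinate", "y_coordinate"]].notna().all(axis=1)
print(f"{valid.mean():.1%} of samples have a valid gaze position")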
Plotting some data¶
Let's first generate a time axis t_axis in seconds.
To do so, we read the "timestamp" column of the dataframe and divide it by the sampling frequency (stored within the metadata).
To make it more "BIDS-y", we also set the onset of the run to zero by subtracting the StartTimestamp found in the metadata.
t_axis = (
recording.timestamp.values - metadata["StartTimestamp"]
) / metadata["SamplingFrequency"]
Once we have the sampling axis, we can look at a basic measurement: pupil area in arbitrary units (as it comes from the EyeLink tracker).
With our conversion into BIDS, pupil area (column pa_right of the EDF file) is mapped to the pupil_size column of the _recording-eyetrack_physio.tsv.gz file.
We first create a figure with a landscape aspect ratio to get a better sense of the data over time.
fig = plt.figure(figsize=(16, 2))
plt.plot(
t_axis,
recording["pupil_size"].values,
)
plt.xlabel("time [s]")
plt.ylabel("pupil area [a.u.]");
We can zoom into the early moments of the run:
fig = plt.figure(figsize=(16, 2))
plt.plot(
t_axis,
recording["pupil_size"].values,
)
plt.xlabel("time [s]")
plt.ylabel("pupil area [a.u.]")
plt.xlim((-1.0, 10.0));
Next, we look at eye blinks. The two discontinuities around seconds 8 and 9 of the pupil area plot could be related to blinks.
fig = plt.figure(figsize=(16, 2))
plt.plot(
t_axis,
recording["blink"].values,
)
plt.xlabel("time [s]")
plt.ylabel("eyes closed");
It looks like the eyes (or at least the right eye, which was the one tracked) were closed during most of the run. Let's now plot the first ten seconds of pupil area together with the binary blink signal. Indeed, the pupil area drops whenever a blink happens.
fig = plt.figure(figsize=(16, 2))
plt.plot(
t_axis,
recording["pupil_size"].values,
)
plt.plot(
t_axis,
recording["blink"].values * 10000,
)
plt.xlabel("time [s]")
plt.ylabel("pupil area [a.u.]")
plt.xlim((-0.1, 10.0));
We can clean up the pupil area time series by masking out the samples recorded while the eye was closed. The resulting signal displays fewer artifacts.
pupil_area = recording["pupil_size"].values.copy()  # work on a copy so the dataframe is untouched
pupil_area[recording["blink"].values > 0] = np.nan  # mask samples flagged as blinks
fig = plt.figure(figsize=(16, 2))
plt.plot(
    t_axis,
    pupil_area,
)
plt.xlabel("time [s]")
plt.ylabel("pupil area [a.u.]");
We can also map the time spent gazing at each area of the screen.
First, we extract the screen size from the JSON sidecar (_recording-eyetrack_physio.json).
Second, we select the gaze coordinate columns of the recording table (corresponding to _recording-eyetrack_physio.tsv.gz).
Finally, we use our helper plotter to show the gaze frequency map:
# Screen size in pixels (width, height), taken from the "square" AOI definition [x0, x1, y0, y1]
size = (
    metadata["ScreenAOIDefinition"][1][1],
    metadata["ScreenAOIDefinition"][1][3],
)
data = recording[["x_coordinate", "y_coordinate"]]
data = data[recording.blink < 1]  # drop samples flagged as blinks
plot.plot_heatmap_coordinate(data, density=False, screen_size=size);
Contrasting the gaze at the beginning and end of the "task"¶
At the start and end of the DWI run, we presented a 60-second central fixation point; these two periods serve as reference markers for the analysis of potential gaze drift (i.e., shifts in gaze coordinates). Let's compare the density maps during these two fixation periods to assess the drift over the DWI run.
First, we extract the data corresponding to the two fixation points:
fixation_duration = 60
start_first_fixation = metadata["StartTimestamp"]
stop_first_fixation = start_first_fixation + (
fixation_duration * metadata["SamplingFrequency"]
)
stop_second_fixation = metadata["StopTimestamp"]
start_second_fixation = stop_second_fixation - (
fixation_duration * metadata["SamplingFrequency"]
)
data_first_fixation = recording[
(recording["timestamp"] >= start_first_fixation)
& (recording["timestamp"] <= stop_first_fixation)
]
data_second_fixation = recording[
(recording["timestamp"] >= start_second_fixation)
& (recording["timestamp"] <= stop_second_fixation)
]
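Before plotting, a rough quantitative estimate of the drift can be obtained by comparing the median gaze position of the two windows (a minimal sketch, not part of the original analysis):
# Median gaze position (pixels) in each fixation window, ignoring blink samples.
gaze_cols = ["x_coordinate", "y_coordinate"]
first = data_first_fixation.loc[data_first_fixation["blink"] == 0, gaze_cols].median()
second = data_second_fixation.loc[data_second_fixation["blink"] == 0, gaze_cols].median()
print("Approximate gaze drift (pixels):")
print(second - first)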
We can now create an animation that displays the heatmap of the coordinates during those two fixation points.
HEATMAP_ANIMATION_FIGURE_SIZE = (13, 13 * size[1] / size[0])
fig, ax = plt.subplots(figsize=HEATMAP_ANIMATION_FIGURE_SIZE, layout="tight")
def update(frame):
    ax.clear()  # start each frame from a blank axis
    if frame == 0:
        return plot.plot_heatmap_coordinate(
            data_first_fixation, ax=ax, density=False, screen_size=size
        )
    return plot.plot_heatmap_coordinate(
        data_second_fixation, ax=ax, density=False, screen_size=size
    )
num_frames = 2
anim = animation.FuncAnimation(
fig, update, frames=num_frames, interval=2000, repeat=True, blit=False
);
plt.close('all');
plt.clf();
HTML(anim.to_html5_video())
It's interesting to observe that the gaze during the second fixation wasn't centered on the screen, which is unexpected. This likely happened because the participant's eye was mostly closed during the presentation of the second fixation point at the end of the run. To verify this, let's plot the blink data for the final fixation stimulus:
fig = plt.figure(figsize=(16, 2))
plt.plot(
(data_second_fixation["timestamp"].values - metadata["StartTimestamp"])
/ metadata["SamplingFrequency"],
data_second_fixation["blink"].values,
)
plt.xlabel("time [s]")
plt.ylabel("eyes closed");
Quality Control task (QCT)¶
Now, let's repeat the process for the QCT, which is a short task with a number of stimuli. We'll load its recording and showcase the same visualizations. Since the task has changed, we open the corresponding file and generate a new time axis:
recording_file = BIDS_PATH / "sub-001" / f"ses-{session}" / "func" / f"sub-001_ses-{session}_task-qct_dir-RL_recording-eyetrack_physio.tsv.gz"
metadata = json.loads((
recording_file.parent
/ recording_file.name.replace(".tsv.gz", ".json")
).read_text())
recording = pd.read_csv(
recording_file,
sep=r"\s+",
header=None,
index_col=None,
na_values="n/a",
names=metadata["Columns"],
)
t_axis = (recording.timestamp.values - metadata["StartTimestamp"]) / metadata["SamplingFrequency"]
First, let's look at the pupil size after filtering out blinks:
fig = plt.figure(figsize=(16, 2));
plt.plot(
t_axis[recording.blink == 0],
recording.loc[recording.blink == 0, "pupil_size"].values,
)
plt.xlabel("time [s]");
plt.ylabel("pupil area [a.u.]");
The duration of eyeblinks provides insight into the participant's wakefulness during the task. Let's visualize the blink trace specifically for the QCT. Here, the plot looks very different from the DWI run: the short blink durations suggest that the participant kept their tracked eye open throughout the task.
fig = plt.figure(figsize=(16, 2))
plt.plot(
t_axis,
recording["blink"].values,
)
plt.xlabel("time [s]")
plt.ylabel("eyes closed");
Events (or "triggers") in the QCT¶
The QCT comprises four randomly ordered subtasks: visual gratings, fingertapping with hand instructions, cognitive gaze movement, and blank trials with a central fixation point. The Psychopy experiment sends messages at the beginning and end of each subtask, and these are stored in the 'LoggedMessages' field of the sidecar metadata. Let's examine these messages:
logged_messages = metadata['LoggedMessages']
logged_messages
[[12654533, 'NO Reply is disabled for function eyelink_cal_result'], [12691732, 'RECCFG CR 1000 2 0 R'], [12691732, 'ELCLCFG TOWER'], [12692244, 'start blank loop'], [12695241, 'stop blank loop'], [12695286, 'start blank loop'], [12698275, 'stop blank loop'], [12698320, 'start vis loop'], [12701326, 'stop vis loop'], [12701393, 'start mot loop'], [12706383, 'stop mot loop'], [12706394, 'start mot loop'], [12711390, 'stop mot loop'], [12711442, 'start blank loop'], [12714440, 'stop blank loop'], [12714496, 'start mot loop'], [12719497, 'stop mot loop'], [12719503, 'start mot loop'], [12724505, 'stop mot loop'], [12724562, 'start mot loop'], [12729562, 'stop mot loop'], [12729569, 'start mot loop'], [12734569, 'stop mot loop'], [12734626, 'start cog loop'], [12735118, 'stop cog loop'], [12735121, 'start cog loop'], [12735630, 'stop cog loop'], [12735634, 'start cog loop'], [12736128, 'stop cog loop'], [12736132, 'start cog loop'], [12736625, 'stop cog loop'], [12736628, 'start cog loop'], [12737122, 'stop cog loop'], [12737127, 'start cog loop'], [12737637, 'stop cog loop'], [12737697, 'start vis loop'], [12740692, 'stop vis loop'], [12740759, 'start mot loop'], [12745761, 'stop mot loop'], [12745771, 'start mot loop'], [12750768, 'stop mot loop'], [12750825, 'start vis loop'], [12753821, 'stop vis loop'], [12753890, 'start blank loop'], [12756886, 'stop blank loop'], [12756952, 'start cog loop'], [12757450, 'stop cog loop'], [12757453, 'start cog loop'], [12757964, 'stop cog loop'], [12757967, 'start cog loop'], [12758461, 'stop cog loop'], [12758466, 'start cog loop'], [12758959, 'stop cog loop'], [12758962, 'start cog loop'], [12759456, 'stop cog loop'], [12759459, 'start cog loop'], [12759954, 'stop cog loop'], [12760025, 'start blank loop'], [12763022, 'stop blank loop'], [12763092, 'start cog loop'], [12763587, 'stop cog loop'], [12763590, 'start cog loop'], [12764098, 'stop cog loop'], [12764099, 'start cog loop'], [12764596, 'stop cog loop'], [12764599, 'start cog loop'], [12765093, 'stop cog loop'], [12765097, 'start cog loop'], [12765591, 'stop cog loop'], [12765596, 'start cog loop'], [12766105, 'stop cog loop'], [12766183, 'start cog loop'], [12766685, 'stop cog loop'], [12766688, 'start cog loop'], [12767182, 'stop cog loop'], [12767183, 'start cog loop'], [12767696, 'stop cog loop'], [12767697, 'start cog loop'], [12768193, 'stop cog loop'], [12768194, 'start cog loop'], [12768692, 'stop cog loop'], [12768695, 'start cog loop'], [12769189, 'stop cog loop'], [12769263, 'start vis loop'], [12772260, 'stop vis loop'], [12772340, 'start blank loop'], [12775341, 'stop blank loop'], [12775427, 'start vis loop'], [12778426, 'stop vis loop'], [12778510, 'start blank loop'], [12781509, 'stop blank loop'], [12781602, 'start cog loop'], [12782088, 'stop cog loop'], [12782089, 'start cog loop'], [12782602, 'stop cog loop'], [12782608, 'start cog loop'], [12783100, 'stop cog loop'], [12783103, 'start cog loop'], [12783614, 'stop cog loop'], [12783616, 'start cog loop'], [12784111, 'stop cog loop'], [12784114, 'start cog loop'], [12784609, 'stop cog loop'], [12784698, 'start mot loop'], [12789698, 'stop mot loop'], [12789705, 'start mot loop'], [12794706, 'stop mot loop'], [12794792, 'start cog loop'], [12795287, 'stop cog loop'], [12795288, 'start cog loop'], [12795801, 'stop cog loop'], [12795801, 'start cog loop'], [12796297, 'stop cog loop'], [12796298, 'start cog loop'], [12796795, 'stop cog loop'], [12796798, 'start cog loop'], [12797292, 'stop cog loop'], [12797295, 'start cog loop'], 
[12797806, 'stop cog loop'], [12797895, 'start blank loop'], [12800893, 'stop blank loop'], [12800980, 'start vis loop'], [12803976, 'stop vis loop'], [12804073, 'start mot loop'], [12809067, 'stop mot loop'], [12809074, 'start mot loop'], [12814071, 'stop mot loop'], [12814161, 'start blank loop'], [12817158, 'stop blank loop'], [12817257, 'start blank loop'], [12820259, 'stop blank loop'], [12820367, 'start mot loop'], [12825365, 'stop mot loop'], [12825372, 'start mot loop'], [12830369, 'stop mot loop'], [12830497, 'start vis loop'], [12833488, 'stop vis loop'], [12833565, 'start blank loop'], [12836555, 'stop blank loop'], [12836631, 'start mot loop'], [12841628, 'stop mot loop'], [12841635, 'start mot loop'], [12846635, 'stop mot loop'], [12846746, 'start cog loop'], [12847248, 'stop cog loop'], [12847249, 'start cog loop'], [12847748, 'stop cog loop'], [12847751, 'start cog loop'], [12848260, 'stop cog loop'], [12848265, 'start cog loop'], [12848758, 'stop cog loop'], [12848761, 'start cog loop'], [12849256, 'stop cog loop'], [12849258, 'start cog loop'], [12849752, 'stop cog loop']]
To visualize the data of each subtask separately, let's construct a dictionary. Each entry will encapsulate a dataframe with data from one of these subtasks.
subtask_dataframes = {}
n_task = 0
for i, entry in enumerate(logged_messages):
    if "start" not in entry[1]:
        continue
    start_time = entry[0]
    subtask_type = entry[1].split()[1]  # e.g., "vis", "mot", "cog", "blank"
    stop_time = None
    # Find the matching "stop" message for this subtask occurrence
    for stop_entry in logged_messages[i + 1:]:
        if "stop" in stop_entry[1] and subtask_type in stop_entry[1]:
            stop_time = stop_entry[0]
            n_task += 1
            break
    if stop_time is not None:
        # Slice the recording between the start and stop timestamps
        subtask_data = recording[
            (recording["timestamp"] >= start_time)
            & (recording["timestamp"] <= stop_time)
        ]
        subtask_dataframes[f"df_{subtask_type}_{n_task}"] = subtask_data
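A quick sanity check (a small sketch, not part of the original notebook) is to count how many windows of each subtask type were captured:
# Tally the captured windows per subtask type (vis, mot, cog, blank).
from collections import Counter
print(Counter(name.split("_")[1] for name in subtask_dataframes))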
Now we can plot the heatmap of the gaze coordinates for each of the subtasks.
fig, ax = plt.subplots(figsize=HEATMAP_ANIMATION_FIGURE_SIZE, layout="tight");
def update(frame):
ax.clear()
dataframe = list(subtask_dataframes.values())[frame]
if dataframe is not None:
plot.plot_heatmap_coordinate(
dataframe[["x_coordinate", "y_coordinate"]],
ax=ax,
density=False,
screen_size=size,
)
subtask_name = (
list(subtask_dataframes.keys())[frame].split("_")[1].capitalize()
)
ax.set_title(f"Subtask: {subtask_name}")
num_frames = len(subtask_dataframes)
anim = animation.FuncAnimation(
fig, update, frames=num_frames, interval=1000, repeat=True, blit=False
);
plt.close("all");
plt.clf();
HTML(anim.to_html5_video())
During cognitive (cog) events, a noticeable bimodal distribution in gaze coordinates emerges, reflecting the participant's delayed reaction to stimuli: the initial portion of the gaze distribution lingers on the coordinates of the preceding stimulus, while the subsequent part aligns with the coordinates of the currently displayed stimulus.
Resting-state fMRI run¶
Now, let's do the same for the resting-state run. We'll load its recording and show similar visualizations.
recording_file = BIDS_PATH / "sub-001" / f"ses-{session}" / "func" / f"sub-001_ses-{session}_task-rest_dir-RL_recording-eyetrack_physio.tsv.gz"
metadata = json.loads((
recording_file.parent
/ recording_file.name.replace(".tsv.gz", ".json")
).read_text())
recording = pd.read_csv(
recording_file,
sep=r"\s+",
header=None,
index_col=None,
na_values="n/a",
names=metadata["Columns"],
)
t_axis = (recording.timestamp.values - metadata["StartTimestamp"]) / metadata["SamplingFrequency"]
Let's first have a look at the blink durations.
fig = plt.figure(figsize=(16, 2))
plt.plot(
t_axis,
recording["blink"].values,
)
plt.xlabel("time [s]")
plt.ylabel("eyes closed");
The resting-state task comprises three events: initial and final fixation points for drift estimation, and a 20-minute movie. Let's start by analyzing the data recorded during the movie. At the beginning and end of the movie, the Psychopy experiment sends the messages 'start movie' and 'end movie'. The start of the movie corresponds to the first trigger sent by the scanner. First, let's extract the data corresponding to the movie.
start_movie_timestamp = metadata["StartTimestamp"]
stop_movie_timestamp = [
entry[0] for entry in metadata["LoggedMessages"] if "end movie" in entry[1]
][0]
data_movie = recording[
(recording["timestamp"] >= start_movie_timestamp)
& (recording["timestamp"] <= stop_movie_timestamp)
]
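As a quick sanity check (a minimal sketch, not part of the original notebook), the extracted segment should span roughly the 20 minutes of the movie:
# Duration of the extracted movie segment, in minutes.
movie_minutes = (stop_movie_timestamp - start_movie_timestamp) / (
    metadata["SamplingFrequency"] * 60
)
print(f"Movie segment spans approximately {movie_minutes:.1f} minutes")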
We can now plot the heatmap of the gaze coordinates during the movie.
plot.plot_heatmap_coordinate(
data_movie[["x_coordinate", "y_coordinate"]],
density=False,
screen_size=size,
background_image="mundaka-image.png",
);