An in vivo microscopy dataset for the characterization of leukocyte cell death.

Recent advancements in intravital microscopy have enabled the study of cell death in vivo under a variety of experimental conditions, such as infection and cancer. However, the limited throughput of this technology, together with a lack of openly accessible datasets, hinders the development of algorithms for the automatic detection and characterization of cell death, which would require the integration of extensive and curated datasets. To address these needs, we present a curated dataset of microscopy videos acquired within the spleen and lymph node of mice under inflammatory conditions associated with the death of neutrophils, eosinophils, and dendritic cells. The dataset provides time-lapse imaging data, the coordinates in space and time of cell death events displaying apoptotic-like morphodynamics, along with the 3D reconstruction of the cell morphology at each time point. Altogether, these data will be pivotal for the development of computer vision and bioimage analysis methods to advance cell death research.

Please find below the links to access the imaging data, metadata, experimental details, and annotations for each video in the collection:

Video   URL
Den1 https://app.immunemap.org/acquisition-public-view?id=515&videoID=644
Den2 https://app.immunemap.org/acquisition-public-view?id=354&videoID=458
Den3 https://app.immunemap.org/acquisition-public-view?id=520&videoID=653
Den4 https://app.immunemap.org/acquisition-public-view?id=516&videoID=645
Eos1 https://app.immunemap.org/acquisition-public-view?id=482&videoID=606
Eos2 https://app.immunemap.org/acquisition-public-view?id=522&videoID=655
Eos3 https://app.immunemap.org/acquisition-public-view?id=521&videoID=654
Eos4 https://app.immunemap.org/acquisition-public-view?id=351&videoID=463
Eos5 https://app.immunemap.org/acquisition-public-view?id=523&videoID=656
Neu1 https://app.immunemap.org/acquisition-public-view?id=517&videoID=649
Neu2 https://app.immunemap.org/acquisition-public-view?id=518&videoID=651
Neu3 https://app.immunemap.org/acquisition-public-view?id=55&videoID=79
Neu4 https://app.immunemap.org/acquisition-public-view?id=96&videoID=150
Neu5 https://app.immunemap.org/acquisition-public-view?id=525&videoID=664
Neu6 https://app.immunemap.org/acquisition-public-view?id=526&videoID=657
Neu7 https://app.immunemap.org/acquisition-public-view?id=530&videoID=665
Neu8 https://app.immunemap.org/acquisition-public-view?id=519&videoID=652
Neu9 https://app.immunemap.org/acquisition-public-view?id=534&videoID=660
Neu10 https://app.immunemap.org/acquisition-public-view?id=100&videoID=156
Neu11 https://app.immunemap.org/acquisition-public-view?id=531&videoID=667
Neu12 https://app.immunemap.org/acquisition-public-view?id=541&videoID=676
Neu13 https://app.immunemap.org/acquisition-public-view?id=524&videoID=661
Neu14 https://app.immunemap.org/acquisition-public-view?id=527&videoID=658
Neu15 https://app.immunemap.org/acquisition-public-view?id=537&videoID=671
Neu16 https://app.immunemap.org/acquisition-public-view?id=528&videoID=659
Neu17 https://app.immunemap.org/acquisition-public-view?id=540&videoID=675
Neu18 https://app.immunemap.org/acquisition-public-view?id=529&videoID=663
Neu19 https://app.immunemap.org/acquisition-public-view?id=102&videoID=158
Neu20 https://app.immunemap.org/acquisition-public-view?id=532&videoID=668
Neu21 https://app.immunemap.org/acquisition-public-view?id=535&videoID=666
Neu22 https://app.immunemap.org/acquisition-public-view?id=536&videoID=670
Neu23 https://app.immunemap.org/acquisition-public-view?id=533&videoID=669
Neu24 https://app.immunemap.org/acquisition-public-view?id=539&videoID=674
Neu25 https://app.immunemap.org/acquisition-public-view?id=538&videoID=662

👩‍🔬👨‍🔬 The Immunology Challenge: Submit Your Intravital Microscopy Videos via IMMUNEMAP

We invite all immunologists to participate in our Intravital Microscopy Video Challenge! This challenge aims to promote the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) within the scientific community by encouraging the sharing of high-quality microscopy data.

The Challenge timeline

You will receive a confirmation email once your data has been successfully received.

  • From November 15th to December 20th, everyone will have the chance to vote for their favourite videos on the IMMUNEMAP platform. Be sure to follow this page for more details on the voting process.
  • On January 7th, 2025, the winner will be announced.

Evaluation Criteria:

Your videos will receive a score, 50% of which is determined by the assessment of an internal committee and the remaining 50% by online community voting.

The criteria used by the internal committee are:

  1. Innovation: the novelty of the biological insights or techniques showcased.
  2. Physical parameters: the technical quality of the acquisition (e.g. video stability, photobleaching, artefacts).
  3. Metadata completeness: accuracy and completeness of the metadata provided with each submission.

🏅 Prize

The winner will receive an APPLE iPad Wi-Fi 2022 10th Gen., a versatile tool for work.
This is a great opportunity to showcase your work, receive feedback, and contribute to the research community. We look forward to your participation!


👥 Organizers

  • Diego Ulisse Pizzagalli - Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
  • Rolf Krause - Euler Institute, Università della Svizzera Italiana, Lugano, Switzerland
  • Santiago Fernandez Gonzalez - Institute for Research in Biomedicine, Bellinzona, Switzerland
  • Raffaella Fiamma Cabini - Euler Institute, Università della Svizzera Italiana, Lugano, Switzerland
  • Elisa Palladino - Institute for Research in Biomedicine, Bellinzona, Switzerland
  • Enrico Moscatello - Euler Institute, Università della Svizzera Italiana, Lugano, Switzerland

 

For further details, please contact the organizers.

To promote the dissemination of Open Research Data practices, IMMUNEMAP organizes two challenges for the year 2024:

The Cell Behavior Video Classification Challenge (CBVCC) is designed to develop or adapt computer vision methods for classifying videos that capture cell behavior through Intravital Microscopy (IVM).


 

The Intravital Microscopy Video Contest aims to promote the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) within the scientific community by encouraging the sharing of high-quality microscopy data.


 

🔍 Introduction and Goal 

The Cell Behavior Video Classification Challenge (CBVCC) is designed to develop or adapt computer vision methods for classifying videos that capture cell behavior through Intravital Microscopy (IVM).

IVM is a powerful imaging technique that allows for non-invasive visualization of biological processes in living animals. Platforms such as two-photon microscopes exploit multiple low-energy photons to deliver high-resolution, three-dimensional videos depicting tissues and cells deep within the body. IVM has been used to visualize a wide range of biological processes, including immune responses, cancer development, and neurovascular function. 

The primary goal of the CBVCC challenge is to create models that can accurately classify videos based on the movement patterns of cells. Specifically, the models should be able to:

  • Identifying videos where cells exhibit sudden changes in their direction of movement.
  • Distinguishing these from videos in which cells move consistently along linear paths or remain stationary, as well as from videos containing only background.

The CBVCC challenge aims to provide a platform for researchers to develop innovative methods for classifying IVM videos, potentially leading to new insights into biological processes.
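To make the two classes concrete, here is a deliberately naive, hypothetical baseline that is not part of the challenge materials: track the brightest pixel per frame and flag a sudden direction change whenever the maximum turning angle between consecutive displacement vectors exceeds a threshold. All names and the 60-degree threshold are illustrative assumptions.

```python
import numpy as np

def centroid_track(video):
    """Per-frame position of the brightest pixel; video has shape (T, Y, X)."""
    return np.array([np.unravel_index(f.argmax(), f.shape) for f in video], dtype=float)

def max_turning_angle(track):
    """Largest angle (degrees) between consecutive displacement vectors."""
    v = np.diff(track, axis=0)
    best = 0.0
    for a, b in zip(v[:-1], v[1:]):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na < 1e-9 or nb < 1e-9:  # skip stationary steps (also covers background-only patches)
            continue
        cos = np.clip(a @ b / (na * nb), -1.0, 1.0)
        best = max(best, np.degrees(np.arccos(cos)))
    return best

def classify(video, threshold_deg=60.0):
    """1 = sudden direction change, 0 = linear / stationary / background."""
    return int(max_turning_angle(centroid_track(video)) > threshold_deg)

def make_video(positions, shape=(32, 32)):
    """Synthetic video with one bright dot following the given (y, x) positions."""
    video = np.zeros((len(positions),) + shape)
    for t, (y, x) in enumerate(positions):
        video[t, y, x] = 1.0
    return video

straight = make_video([(16, 4 + 2 * t) for t in range(7)])    # consistent linear path
turning = make_video([(16, 4), (16, 6), (16, 8), (16, 10),
                      (18, 10), (20, 10), (22, 10)])          # abrupt 90-degree turn
print(classify(straight), classify(turning))  # 0 1
```

Such a geometric heuristic is far too brittle for real IVM data (noise, multiple cells, drift), which is precisely why the challenge calls for learned models.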

 

👩‍🔬👨‍🔬 The CBVCC Challenge 

The CBVCC challenge will be open to researchers from all over the world. Participants will be asked to develop computational models to classify IVM videos, which will be evaluated based on their classification accuracy.

The challenge will be divided into two phases:

  • Phase 1: In the first phase, participants will be provided with a training dataset and a test dataset of IVM videos and labels. They will use these data to develop and evaluate their models.
  • Phase 2: In the second phase, participants will submit their results obtained on a distinct, unlabelled test dataset of IVM videos. The models will be ranked based on their performance on this test dataset.

The challenge results will be decided based on the performance of the Phase 2 submission. 

 

🏅 The Challenge timeline

The CBVCC challenge will begin on September 15th, 2024, and end on December 20th, 2024. The timeline of the key events is organized as follows:

  • September 15th 2024 to November 15th 2024: Participants are invited to join the challenge by registering on the website.
    Registration link: https://forms.gle/cQSHJmEFZ5bRBJ328
  • November 15th 2024: The training dataset will be published, officially launching the challenge. Additionally, a validation leaderboard will be made available to participants.
  • December 6th 2024: Test dataset (not annotated) released.
  • December 20th 2024: Challenge closes. By this date, participants are required to submit their predictions on the test set, an abstract describing their methodology, and code or a Docker container to replicate the results.
  • January 7th 2025: Results and winners will be announced.

 

Dataset 

The CBVCC dataset consists of 2D video-patches extracted from IVM videos. These videos capture the behavior of regulatory T cells in the abdominal flank skin of mice undergoing a contact sensitivity response to the sensitizing hapten, oxazolone. The videos are acquired either 24 or 48 hours after the initial skin challenge with oxazolone. Each video sequence lasts for 30 minutes, with images captured at one-minute intervals, resulting in a total of 31 acquisitions per video.

The primary goal of this challenge is to classify the provided video-patches into two distinct categories:

  1. Video-patches where cells suddenly change their direction of movement (class 1): These videos contain cells that demonstrate sudden changes in their migratory paths.
  2. Video-patches without sudden changes in cell direction (class 0): These videos show cells moving in a linear manner, show stationary cells, or contain only background with no visible cells.

The dataset comprises a total of 300 2D video-patches extracted from 48 different videos, representing both classes (n=180 class 0 and n=120 class 1). Each video-patch is a 2D projection along the z-axis of a 3D video sequence, carefully adjusted to a common contrast range to enhance the visibility of the biological processes. Additionally, all videos have been preprocessed to ensure a uniform pixel size of 0.8 µm. Video-patches are saved as RGB .avi files.
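The preprocessing described above (a projection along the z-axis plus mapping to a common contrast range) can be sketched as follows. This is an illustrative assumption, not the actual pipeline: the function name, the use of a maximum-intensity projection, the percentile limits, and the synthetic input are all stand-ins.

```python
import numpy as np

def project_and_normalize(stack, p_low=1, p_high=99):
    """Illustrative preprocessing sketch (assumed, not the official pipeline):
    max-project a (T, Z, Y, X) stack along z, then rescale intensities to a
    common contrast range via percentile clipping."""
    proj = stack.max(axis=1)                       # 2D projection along z -> (T, Y, X)
    lo, hi = np.percentile(proj, [p_low, p_high])  # shared contrast limits for the video
    norm = np.clip((proj - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    return (norm * 255).astype(np.uint8)           # 8-bit, ready to save as .avi frames

# Synthetic stand-in: 31 one-minute frames, 10 z-slices, 64x64 px (0.8 µm/px assumed)
rng = np.random.default_rng(0)
stack = rng.integers(0, 4096, size=(31, 10, 64, 64))
patch = project_and_normalize(stack)
print(patch.shape, patch.dtype)  # (31, 64, 64) uint8
```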

The dataset consists of 300 pairs of video-patches and their corresponding labels, divided as follows:

  • Training set: 210 video-patches (70%)
  • Phase 1 test set: 30 video-patches (10%)
  • Phase 2 test set: 60 video-patches (20%)

Each subset includes video patches extracted from different and independent IVM videos.

 

Evaluation

The evaluation metrics for the challenge are designed to comprehensively assess the models' performance in classifying video-patches. The metrics include:

  • Area Under the ROC Curve (AUC)
  • Sensitivity
  • Specificity
  • Balanced Accuracy

The final score will be calculated as:

score = 0.4*AUC + 0.2*(Precision+Recall+Balanced Accuracy)

The evaluation code will be made public later.
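Until the official evaluation code is released, the metrics and the published score formula can be sketched as below. Note that sensitivity equals recall; the 0.5 decision threshold and all implementation details are assumptions, not the official code.

```python
import numpy as np

def evaluate(y_true, y_score, threshold=0.5):
    """Sketch of the challenge metrics; scoring follows the published formula
    score = 0.4*AUC + 0.2*(Precision + Recall + Balanced Accuracy).
    The 0.5 decision threshold is an assumption."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    # AUC as the probability that a positive outranks a negative (ties count 0.5)
    auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)                  # = recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    balanced_accuracy = (sensitivity + specificity) / 2
    score = 0.4 * auc + 0.2 * (precision + sensitivity + balanced_accuracy)
    return {"AUC": auc, "sensitivity": sensitivity, "specificity": specificity,
            "balanced_accuracy": balanced_accuracy, "score": score}

result = evaluate([0, 0, 0, 1, 1], [0.1, 0.8, 0.3, 0.7, 0.9])
print(round(result["AUC"], 4), round(result["score"], 4))  # 0.8333 0.8333
```

The same quantities can be obtained with scikit-learn (`roc_auc_score`, `balanced_accuracy_score`, etc.); the pure-NumPy version above is only meant to make the formula explicit.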

 

Rules of Participation

  • The data used to train algorithms may be restricted to the data provided by the challenge.
  • Participants must not manually annotate the test dataset to train a supervised model for phase 1 or 2 submissions.
  • All results will be made publicly available.
  • During the submission phase, participants will be asked to upload their model predictions (in terms of probabilities) and an abstract describing the methodology used (in Phase 2).
  • There is no monetary prize for the winners of the challenge.
  • Both AI-based and non-AI-based methods are allowed.

 

We will collaborate with participants to create a comprehensive journal article summarizing the key results and analyses from this challenge. Participants who submit valuable work are welcome to contribute to the publication, with up to three authors from each team being acknowledged.

In order for us to include you in our paper:

  • Please submit a detailed description of your solution with your final test phase submission.
  • You are welcome to submit additional paragraphs and figures about your submission via email (see the Contact info on the Organizers page).

In addition, we encourage all participants to publish their results independently; no publication embargo is imposed.

 

👥 Organizers

  • Diego Ulisse Pizzagalli - Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
  • Rolf Krause - Euler Institute, Università della Svizzera Italiana, Lugano, Switzerland
  • Santiago Fernandez Gonzalez - Institute for Research in Biomedicine, Bellinzona, Switzerland
  • Raffaella Fiamma Cabini - Euler Institute, Università della Svizzera Italiana, Lugano, Switzerland
  • Elisa Palladino - Institute for Research in Biomedicine, Bellinzona, Switzerland
  • Enrico Moscatello - Euler Institute, Università della Svizzera Italiana, Lugano, Switzerland

 

For further details, please contact the organizers.
