Session Description: As we lose biodiversity in a changing world, data preservation becomes a fundamental tool for understanding nature. The rise of open science, including ESA's Open Research Policy, has encouraged the proper public archiving of data, making them safer and more useful to the scientific community. However, open data requirements and guidelines do little to address access to and preservation of previously collected data. These data nevertheless remain a public good, funded by taxpayers and governments, so rescuing such datasets to ensure their longevity and accessibility is imperative. In this session we will discuss and experience the data rescue process as developed by the Living Data Project over its first two years. As the organizers explain and demonstrate the steps - data prioritization, team and metadata creation, data transfer, compilation, cleaning, validation, archiving, and sharing - participants will work in groups to solve problems at each step using only analog materials. The aim is to focus on concepts and understanding rather than on the technical execution of tasks (e.g., understanding and discussing what makes good metadata rather than learning the computational skills to produce a perfect metadata file). The organizers will provide prompts that simulate the files and information handled in this process: for example, profile cards of professionals available to join the team, undocumented paper spreadsheets with unorganized data, cardboard boxes in which to archive data, and stamps for classifying data according to the FAIR and CARE principles.