Open Research Data Projects
Projects funded in the framework of the ORD Program
The joint ORD program of ETH Zurich, EPFL and the four research institutes of the ETH Domain has financially supported more than 60 research projects in the period 2020–2023. The funding supports researchers who engage in or develop ORD practices with and for their communities and helps them become Open Research Data leaders in their fields.
This page provides an overview of these projects and highlights how researchers in the ETH Domain are currently applying ORD in exemplary ways. Some of the projects have already been completed; others are still in progress. The projects are divided into three categories.
“Establish” projects link existing ORD practices to a research agenda and establish them on a broader basis. They contribute to a shared and comprehensive understanding of ORD practices that can then become de facto standards.
“Explore” projects are the most extensive ventures in the program and are designed to explore and test early-stage ORD practices. The goal is to map out what an ORD practice might look like and to develop prototypes. Through these projects, new teams form across disciplines and institutions.
“Contribute” projects help scientists integrate their research data into existing, often international, infrastructures. Standardizing these processes and making them generally accessible validates the data and considerably expands their potential.
Abstract
Stone masonry is an eco-friendly construction material, but its use has declined due to its vulnerability to earthquakes, which stems mainly from the poor arrangement of its microstructure. The microstructure comprises the shape, size, and arrangement of the stone units, which vary with geographic, temporal, and material factors. Current building codes cannot fully account for this variability, and experimental studies are costly and impractical given the diversity of masonry typologies. Numerical studies offer a solution, but creating realistic microstructures for modeling irregular stone masonry is complex and time-consuming. As a result, simplified microstructures are often used in simulations, which fail to capture the complexities of irregular masonry walls. To address this challenge, we have developed a database of 3D masonry microstructures that is ready to use in numerical simulations. To enhance accessibility and usability, this project aims to create a web-based platform hosting this curated database of 3D microstructures and their geometric indices. The platform will also feature a tool for evaluating masonry quality from 2D images using the Masonry Quality Index (MQI), promoting the preservation of historic structures and sustainable construction practices. Additionally, the platform will enable researchers to contribute and document new 3D microstructures, fostering collaboration and advancing numerical research on stone masonry.
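As a simple illustration of the kind of geometric index the platform could expose, the following sketch computes two basic descriptors (stone area fraction and mean stone size) from a segmented 2D wall image. The input format and the indices are hypothetical, and the actual MQI evaluation is considerably more involved.

```python
import numpy as np
from scipy import ndimage


def stone_geometry_indices(mask: np.ndarray) -> dict:
    """Compute simple geometric descriptors from a binary 2D wall image.

    `mask` is a 2D boolean array where True marks stone pixels and False marks
    mortar/voids (a hypothetical input convention, not the platform's actual format).
    """
    labeled, n_stones = ndimage.label(mask)   # connected stone units
    mean_area = 0.0
    if n_stones:
        sizes = ndimage.sum(mask, labeled, range(1, n_stones + 1))
        mean_area = float(np.mean(sizes))
    return {
        "stone_area_fraction": float(mask.mean()),   # stone pixels vs. total area
        "number_of_stones": int(n_stones),
        "mean_stone_area_px": mean_area,
    }


if __name__ == "__main__":
    # Toy wall section with two rectangular "stones"
    toy = np.zeros((100, 100), dtype=bool)
    toy[10:40, 10:60] = True
    toy[50:90, 20:80] = True
    print(stone_geometry_indices(toy))
```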
Abstract
In order to advance our understanding of the carbon cycle, it is essential to evaluate the spatiotemporal variations of carbon between river and marine environments and to gain insights into the pathways of carbon transfer from land to ocean. To do this, we need to work jointly with riverine and marine data, accounting for their temporal and spatial distribution. However, each of these systems has different data and metadata reporting strategies that need to be accounted for, which complicates their joint use. Efforts have been made to compile data from each of these systems into independent databases, but no attempt has yet been made to create a joint database of both systems that accounts for their different metadata. Hence, this project aims to bring riverine and marine data together in one database, the River to Ocean Geodatabase for Education and Research (ROGER), so that data can easily be queried across both systems. The database will be exposed through an interactive web interface that queries riverine and/or marine data, depending on the user's requirements, through a REST API. Harnessing the advanced geographical functions of PostgreSQL, the REST API will include functions that allow users to geospatially integrate riverine and marine data. This new database will provide a crucial step forward in understanding the carbon cycle along the land-ocean continuum, while ensuring that the data comply with best Open Research Data practices.
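As an illustration of the kind of geospatial integration the REST API could offer, the following minimal sketch uses PostGIS's ST_DWithin to pair riverine and marine samples within a given distance. The table and column names are hypothetical and do not reflect the final ROGER schema.

```python
import psycopg2

# Hypothetical schema: river_samples(station, doc_mg_l, geom) and
# ocean_samples(cruise_id, doc_umol_kg, geom), both with PostGIS geometries.
QUERY = """
SELECT r.station, o.cruise_id,
       ST_Distance(r.geom::geography, o.geom::geography) AS distance_m
FROM   river_samples r
JOIN   ocean_samples o
  ON   ST_DWithin(r.geom::geography, o.geom::geography, %(radius_m)s)
ORDER BY distance_m;
"""


def nearby_pairs(dsn: str, radius_m: float = 50_000):
    """Return river/ocean sample pairs within `radius_m` metres of each other."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY, {"radius_m": radius_m})
        return cur.fetchall()


# Example call (placeholder connection string):
# nearby_pairs("dbname=roger user=reader host=localhost")
```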
Abstract
Chronic cough is a common condition globally. While efforts are being made to develop wearables that detect and quantify cough events automatically, such monitoring devices have not yet been incorporated into routine clinical practice due to a lack of consistency in their validation, resulting in slow progress and a lack of trust in reported results. We have identified three main reasons for this heterogeneity: 1) the clinical definition of different cough events, and especially the delimitation of their beginning and end, lacks standardization; 2) as a result, the data used are typically private and imbalanced, with inadequate labelling; and 3) the methodologies used to assess the accuracy of event detection differ between research groups and are often inappropriate. This proposal builds on ORD datasets, community guidelines, and standards to propose a unified framework for validating cough event detection algorithms. The main objective is the development of standards that unify the workflow for validating respiratory event detection algorithms and ensure that data adhere to the FAIR principles (Findable, Accessible, Interoperable, Reusable). These will be distributed through a website serving as a central hub and reference for standardizing clinical definitions and methodologies, leading to a future benchmarking platform for respiratory event detection algorithms.
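To make the validation problem concrete, the following is a minimal sketch of an event-level evaluation in which predicted cough events are matched to annotated reference events by temporal overlap. The overlap criterion and threshold are illustrative assumptions, not the standard the project will define.

```python
def iou(a, b):
    """Temporal intersection-over-union of two (start, end) intervals in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0


def event_scores(reference, predicted, iou_threshold=0.5):
    """Greedy one-to-one matching of predicted to reference cough events."""
    unmatched_ref = list(reference)
    tp = 0
    for p in predicted:
        best = max(unmatched_ref, key=lambda r: iou(p, r), default=None)
        if best is not None and iou(p, best) >= iou_threshold:
            tp += 1
            unmatched_ref.remove(best)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    return {"tp": tp, "precision": precision, "recall": recall}


# Example: two annotated coughs, one detected correctly and one false alarm
print(event_scores(reference=[(1.0, 1.4), (5.2, 5.7)],
                   predicted=[(1.05, 1.45), (9.0, 9.3)]))
```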
Abstract
Sharing research data is often perceived as a burden by researchers, as it usually involves manually uploading data from data management systems such as Electronic Lab Notebooks (ELNs) to repositories. This project aims to contribute to better integration of ELN systems and data repositories in the ETH Domain by implementing open API-based interfaces between the SciCat data repository and three important ELNs in the ETH Domain (SciLog, openBIS, Heidi). By implementing seamless interfaces between these widely used solutions, the project will simplify existing ORD practices for researchers, thereby lowering the barrier to publishing high-quality FAIR datasets.
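As a rough sketch of what such an API-based interface could look like, the snippet below maps a minimal ELN export to a SciCat dataset record and registers it via SciCat's REST API. The base URL is a placeholder, and the endpoint path and payload fields are assumptions that would need to be checked against the deployed SciCat version.

```python
import requests

SCICAT_URL = "https://scicat.example.ethz.ch/api/v3"   # placeholder base URL
TOKEN = "..."                                           # access token from the identity provider


def register_dataset(eln_record: dict) -> dict:
    """Map a minimal ELN export to a SciCat raw-dataset payload and POST it.

    The field names below follow common SciCat dataset metadata but should be
    verified against the target SciCat instance's schema.
    """
    payload = {
        "datasetName": eln_record["title"],
        "owner": eln_record["owner"],
        "contactEmail": eln_record["email"],
        "sourceFolder": eln_record["data_path"],
        "creationTime": eln_record["created_at"],      # ISO 8601 string
        "type": "raw",
        "ownerGroup": eln_record["group"],
        "scientificMetadata": eln_record.get("metadata", {}),
    }
    resp = requests.post(
        f"{SCICAT_URL}/Datasets",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```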
Abstract
The adoption of Electronic Laboratory Notebooks (ELNs) in academic research settings is steadily increasing and gradually replacing traditional paper-based notebooks. However, transitioning to ELNs requires time and expertise. Complicating matters, the market offers numerous ELN solutions, each with its own data model, impeding seamless information exchange. In this project, we address two critical aspects of ELN adoption in academia. First, we want to broaden and strengthen the knowledge of, and requirements for, adopting ELN and data management solutions in academic research groups in the ETH Domain. This will be a collaboration between ETH Scientific IT Services (SIS) and the School of Engineering of EPFL, drawing on SIS's extensive experience in deploying and providing its own software, openBIS, as an ELN and data management solution within the ETH Domain (namely ETH Zurich, Empa and PSI) and beyond. Second, we aim to enhance interoperability by implementing a standard for data export from ELNs. To this end, we will explore and suggest enhancements to the RO-Crate format, which packages data with metadata and simplifies data sharing and preservation, ensuring reproducibility and long-term accessibility. By sharing our experiences, introducing openBIS, and exploring data export standards, we aim to streamline ELN adoption and foster data interoperability in academic research environments in the ETH Domain. Our approach will facilitate efficient collaboration, enhance research reproducibility, and promote the advancement of scientific knowledge.
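To illustrate what a standardized ELN export could look like, the following sketch writes a minimal RO-Crate metadata file (RO-Crate 1.1 layout) alongside exported files. A real export would carry far richer metadata, and the enhancements the project may propose to the format are not reflected here.

```python
import json
from pathlib import Path


def write_minimal_ro_crate(folder: str, name: str, description: str, files: list[str]):
    """Write a minimal ro-crate-metadata.json next to the exported ELN files.

    Uses the RO-Crate 1.1 layout: a flattened JSON-LD @graph describing the
    metadata descriptor, the root data entity "./", and one File entity per file.
    """
    graph = [
        {
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {
            "@id": "./",
            "@type": "Dataset",
            "name": name,
            "description": description,
            "hasPart": [{"@id": f} for f in files],
        },
    ] + [{"@id": f, "@type": "File"} for f in files]

    crate = {"@context": "https://w3id.org/ro/crate/1.1/context", "@graph": graph}
    Path(folder, "ro-crate-metadata.json").write_text(json.dumps(crate, indent=2))


# Example call with hypothetical export contents:
# write_minimal_ro_crate("export/", "Experiment 42", "openBIS export", ["notes.pdf", "data.csv"])
```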
Abstract
Authentication, authorization, and identity and access management (IAM) are central to interoperability between services. ETH Domain services currently use a variety of identity providers, from federated services like SWITCH eduID to institute-specific Active Directory installations. Incompatibilities in authentication and authorization can be a major obstacle to interoperability between institutes. To mitigate this, we propose to draft a set of guidelines for IAM practices relating to ORD services. All M2 projects funded under this measure will be expected to follow the guidelines, ensuring that these services are interoperable. The guidelines will also be published on the Central Info Point website and disseminated to researchers, providing clear best practices for services outside the ETH ORD program to follow.
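As one example of what an interoperable IAM practice could look like, the sketch below validates an OpenID Connect bearer token against a federated identity provider using PyJWT. The issuer, JWKS path, and audience are placeholders, and the guidelines themselves may recommend different protocols or settings.

```python
import jwt  # PyJWT

ISSUER = "https://login.example-idp.ch/realms/ord"     # placeholder issuer
AUDIENCE = "ord-service"                               # placeholder client/audience
JWKS_URL = f"{ISSUER}/protocol/openid-connect/certs"   # Keycloak-style path (assumed)

_jwk_client = jwt.PyJWKClient(JWKS_URL)


def validate_token(token: str) -> dict:
    """Verify the signature, issuer, audience, and expiry of a bearer token.

    Any ORD service that accepts tokens from the same trusted issuer in this
    way can interoperate with the others without sharing user databases.
    """
    signing_key = _jwk_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```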
Abstract
Access to data is a fundamental part of any scientific analysis and discovery, and a basic requirement for Open Research Data. Nevertheless, access to data can be limited by various barriers, including incompatible infrastructures and APIs, as highlighted in the Infrastructure report by the Expert Group Services & Infrastructures (EG-SI). This project aims to introduce a common Storage Access API, based on industry standards, in high-impact use cases across the ETH Domain institutes. This will reduce the effort required both to access data and to develop general and domain-specific data-driven tools, thus accelerating the path to scientific discoveries and leading to reusable tools across the ETH Domain scientific communities.
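The abstract does not name the industry standard; assuming an S3-compatible object-storage interface as one plausible choice, the following sketch shows how a researcher or tool could list and fetch dataset files through such a common Storage Access API. Endpoint, bucket, and credentials are placeholders.

```python
import boto3

# Assumption: the common Storage Access API is S3-compatible; the endpoint and
# credentials below are placeholders, not an existing ETH Domain service.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.ethz.ch",
    aws_access_key_id="ORD_KEY",
    aws_secret_access_key="ORD_SECRET",
)


def list_dataset_files(bucket: str, prefix: str):
    """List all objects belonging to one dataset, regardless of which institute hosts it."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            yield obj["Key"], obj["Size"]


def download(bucket: str, key: str, target: str):
    """Fetch one object to a local file."""
    s3.download_file(bucket, key, target)


# for key, size in list_dataset_files("beamline-data", "proposal-1234/"):
#     print(key, size)
```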
Abstract
With this first-of-its-kind interoperability project, Eawag/Lib4RI, WSL and PSI aim to jointly define and implement a basic blueprint for metadata format and exchange, demonstrating the feasibility of achieving interoperability between data catalogues and repositories in the ETH Domain. We aim to define a common understanding of, and an agreed compliance level with, national and international metadata standards relevant to ETH Domain data catalogues and repositories, and subsequently take all required steps to implement and comply with them in order to improve the visibility of datasets in the ETH Domain and beyond. The major deliverables include improving the link between datasets and scientific publications, and connecting the involved repositories (EnviDat, SciCat, Materials Cloud Archive) with each other and with well-established central search portals (including DORA and the Lib4RI search tool). This project will also uncover hidden challenges and barriers to interoperability, paving the way for other repositories in the ETH Domain to join our interoperability efforts.
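As a minimal sketch of the kind of metadata mapping such a blueprint implies, the function below converts a hypothetical internal dataset record into DataCite-style attributes, including a related-identifier link from the dataset to its publication. DataCite is used here only as an example target schema; the project may settle on a different standard, and the input field names are invented for illustration.

```python
def to_datacite(record: dict) -> dict:
    """Map a minimal internal dataset record to DataCite Metadata Schema 4 attributes.

    The input keys (`title`, `authors`, `repository`, `year`, `doi`, `publications`)
    are hypothetical; each repository would supply its own mapping.
    """
    return {
        "titles": [{"title": record["title"]}],
        "creators": [{"name": a} for a in record["authors"]],
        "publisher": record["repository"],            # e.g. "EnviDat", "Materials Cloud Archive"
        "publicationYear": record["year"],
        "types": {"resourceTypeGeneral": "Dataset"},
        "identifiers": [{"identifier": record["doi"], "identifierType": "DOI"}],
        "relatedIdentifiers": [
            {
                "relatedIdentifier": pub_doi,
                "relatedIdentifierType": "DOI",
                "relationType": "IsSupplementTo",     # links the dataset to its paper
            }
            for pub_doi in record.get("publications", [])
        ],
    }
```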
Abstract
We propose to build an integration between analysis platforms and research data repositories to allow an exchange of information about data use and to facilitate data reuse. The analysis platforms support the reproducibility of data analyses by managing and tracking the relations between input data, algorithms (and their versions), and output data, while the repositories contain the relevant research data. This project will provide a blueprint and a concrete implementation for integrating these two vital sides of the ETH Domain ORD infrastructure.
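A minimal sketch of the kind of exchange record such an integration could pass between an analysis platform and a repository is shown below. The format is illustrative only; the project's actual blueprint may build on W3C PROV, RO-Crate, or a platform-specific schema.

```python
import datetime
import hashlib
import json


def provenance_record(input_pids, code_version, output_files):
    """Build a minimal, repository-agnostic record of one analysis run.

    `input_pids` are persistent identifiers (e.g. DOIs) of datasets fetched
    from a repository; outputs are hashed so they can later be linked back.
    The field names and repository URL are hypothetical.
    """
    def sha256(path):
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()

    return {
        "run_time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": [{"pid": pid} for pid in input_pids],
        "code": {"repository": "https://gitlab.example/analysis", "version": code_version},
        "outputs": [{"path": p, "sha256": sha256(p)} for p in output_files],
    }


# Example (with an invented DOI and a local result file):
# print(json.dumps(provenance_record(["10.1234/abcd"], "v1.2.0", ["results.csv"]), indent=2))
```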
Abstract
To ensure that data remain sustainably FAIR and research remains reproducible, lab data management (LDM) and electronic lab notebooks (ELNs) must not evolve as separate systems; rather, ELNs need to be integrated into LDM. To address this need, we propose Gatekeeper, an extension of the existing Renku platform. Gatekeeper is a centralized system that facilitates research data management across the complete project life cycle, including user access management, versatile integration of different sources, data sharing, archiving, and publication. This Renku extension acts as a middle layer that allows project-specific access to all connected data, independent of its source. In this proposal we focus on extending Renku with respect to the connection and metadata management of ELNs and other sources used in our consortium. New data sources can be integrated in a modular fashion, ranging from low-level, simple linking of data to in-depth integration including data integrity and metadata sanity checks. This modular set-up allows community-driven dissemination of Renku extensions and refinements across future users.
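As an illustration of the modular integration described above, the sketch below outlines a hypothetical connector interface with a simple metadata sanity check; the real Renku extension will define its own plug-in API, and the required fields shown here are invented.

```python
from abc import ABC, abstractmethod
from pathlib import Path


class DataSourceConnector(ABC):
    """Hypothetical plug-in interface for connecting a data source (e.g. an ELN)
    to the middle layer."""

    REQUIRED_METADATA = {"title", "creator", "created"}   # illustrative minimum

    @abstractmethod
    def list_records(self) -> list[dict]:
        """Return the metadata records available from the source."""

    def sanity_check(self, record: dict) -> list[str]:
        """Return a list of problems found in one metadata record."""
        missing = self.REQUIRED_METADATA - record.keys()
        return [f"missing field: {m}" for m in sorted(missing)]


class CsvFolderConnector(DataSourceConnector):
    """Toy low-level connector: treats every CSV file in a folder as a record."""

    def __init__(self, folder: str):
        self.folder = Path(folder)

    def list_records(self) -> list[dict]:
        return [{"title": p.name, "path": str(p)} for p in self.folder.glob("*.csv")]
```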
Abstract
The ORD Central Info Point (CIP) project will create an online resources portal where ETH researchers can navigate and orient themselves in the ETH ORD landscape at the various stages of the Research Data Management (RDM) life cycle. These web pages provide a single entry point to promote ORD practices and increase the visibility of services, with the aim of lowering the barriers researchers face in identifying useful tools and services available in the ETH Domain. The ORD Central Info Point will foster increased access and interoperability and push towards increasing the availability of new and existing tools across the institutions of the ETH Domain. It will outline the infrastructures and services available in the ETH Domain, providing information on their respective purposes, use cases, access conditions, and costs. The portal will not host any service itself but will instead direct users to the web page of the relevant service. It will further provide curated information on policies and best practices in the ETH Domain. It will also be designed to include information on the training content from Measure 3, direct links to these resources, and information on Measure 4. Technically, the ORD Central Info Point will be set up and run on the existing infrastructure of the ORD Program website.
Abstract
Explore a comprehensive suite of digital learning resources designed to support researchers, students, and staff across the ETH Domain in implementing best practices for Research Data Management (RDM) and Open Research Data (ORD). The learning modules support self-paced learning, cover a wide range of topics essential for effective management throughout the research data lifecycle, and are available as Open Educational Resources (OER).
Abstract
Research increasingly relies on large amounts of data. To be successful, it needs to be paired with smart and efficient data management. The Data Stewardship Network proposal of Lib4RI, Empa, the EPFL Library, and the ETH Library meets this need by:
- Facilitating knowledge exchange and best-practice workshops among staff with data-related roles. A list of active Data Stewards will be shared with ETH Domain ORD program Measure 3 (subject to their consent) as a complementary activity, to encourage their involvement in the development of course material.
- Providing coordinated and pragmatic support for managing research data across the ETH Domain (i.e., developing a simplified data management plan template and interactive guides on archiving data)
- Suggesting improvements for data policies in the ETH Domain. This complements Measure 4 of the ETH Domain ORD program.
- Making the work and skill sets of Data Stewards and Research Software Engineers visible in the research community by choosing an existing communication platform and promoting its use by active Data Stewards and Research Software Engineers. This complements the service-related information which will be provided by the Central Info Point of Measure 2.
- Empowering the 4RI to catch up with ETH Zurich and EPFL in terms of data management support for researchers
Abstract
Promoting the FAIR (Findable, Accessible, Interoperable, Reusable) data principles is not complete without considering the links between data and research software. Research software is an integral part of the entire data life cycle and is indispensable for data generation, data collection, data analysis, and data archiving. Additionally, software itself, as a digital artifact, needs to be FAIR. Recently, FAIR principles for research software (FAIR4RS) have been proposed. Most research software is developed by research software engineers (RSEs), who are dispersed widely across the research landscape. To better promote FAIR and ORD (Open Research Data) principles and other best practices for sustainable software in this community, RSEs would benefit from a common platform for regular interaction and knowledge exchange. In many other countries, RSE communities have been established with great success and help to promote the FAIR data principles and ORD. In this project, we propose to establish RSE communities at all institutions within the ETH Domain and to take the first steps towards building a Swiss-wide RSE community to promote best practices in research software engineering and the adoption of FAIR principles for data and software. We also propose to connect the emerging RSE communities with other relevant established communities in the ORD landscape in order to create synergies.
Abstract
With the focus on data stewards and other research data management specialists, the ETH Domain needs to consider whether the current role descriptions, functions, employment conditions, and training are suitable for ORD and RDM specialists and, if necessary, to develop proposals for future career paths and training programmes for ORD professionals.
The ETH Domain ORD Programme is currently concerned with the career paths of ORD professionals. Within Measure 5, “Career Paths for Open Research Data Professionals”, a project will be launched under the direction of ETH Board HR, together with the Heads of Human Resources of the ETH Domain, to:
- identify and delineate RDM/ORD roles (i.e., with example job descriptions);
- estimate the FTE distribution of roles in each category across institutions and units;
- assess how roles are defined in practice, in terms of written job descriptions and the perception of roles by staff, their managers, and internal customers; and
- identify the drivers of staff hiring, retention, and job satisfaction/engagement.
The project will make recommendations to inform strategy and to provide operational guidance and advice regarding roles, career paths, and training.
Abstract
Catalysts play a major role in the production of many chemicals that are used every day. Their proper characterization makes it possible to understand their properties and therefore enables the rational design of better-performing catalysts. Large amounts of open-access catalyst characterization data will therefore enable more efficient data- and machine-learning-driven approaches to understand catalysts at a higher level and further speed up the design of better-performing catalysts. However, although a wide variety of catalyst characterization data and open-access repositories are available, one missing link is a proper, standardized data processing step that complies with the FAIR principles prior to upload to open-access repositories.
At Swiss Cat+, an ETH Domain technology platform, large amounts of data related to catalyst characterization are generated with the help of automated high-throughput technologies. The aim of this proposal is therefore to implement additional components within the ETHZ Swiss Cat+ ORD Roadmap to provide streamlined and standardized catalysis characterization data processing and visualization tools within and beyond Swiss Cat+. This implementation will fully unlock the potential of catalysis characterization data generated globally.
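As an illustration of such a standardized processing step, the sketch below wraps a raw characterization table in a self-describing record with explicit units before upload. The column names, units, and schema are purely illustrative and are not the Swiss Cat+ specification.

```python
import json
import pandas as pd


def standardize_run(csv_path: str, instrument: str, operator: str) -> dict:
    """Wrap one raw characterization table in a standardized, self-describing record.

    Assumes a hypothetical raw CSV with `time` and `conversion` columns; the
    actual Swiss Cat+ schema and column set will be defined by the project.
    """
    raw = pd.read_csv(csv_path)
    return {
        "schema_version": "0.1-draft",            # placeholder schema identifier
        "instrument": instrument,
        "operator": operator,
        "columns": [
            {"name": "time", "unit": "min"},
            {"name": "conversion", "unit": "%"},
        ],
        "data": raw[["time", "conversion"]].to_dict(orient="list"),
    }


# Example (with an invented file and instrument name):
# print(json.dumps(standardize_run("run_001.csv", "HT-reactor-A", "jdoe"), indent=2))
```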
Abstract
Life science generates vast amounts of next-generation sequencing (NGS) data, and well-established, FAIR repositories exist for open access to this data. Still, depositing NGS data in these repositories poses challenges for life science researchers, leading to data not being deposited and shared.
We propose to implement a web service that simplifies data deposition for the life sciences. Our service will start from the data and meta-information available in omics data management systems like B-Fabric. It will let the user review and curate the information to be uploaded and will then perform the upload. Our web service will directly help research groups make their data swiftly accessible in a well-defined and well-documented format in recognized repositories with worldwide visibility and accessibility. The service will significantly reduce the effort of making NGS data openly accessible, increase the quality of the openly accessible data, and make the originators of NGS data, the various research institutes in Switzerland, more visible.
Abstract
Wearable sensors offer vast potential for advancing healthcare through data-driven insights, but their integration into clinical trials and practice is hindered by a lack of interoperability. This project proposes the development of a standardized low-level communication protocol for wearable sensors to facilitate harmonized data collection across different platforms. By establishing a common standard for raw data transmission, the project aims to enable seamless aggregation of sensor data and foster collaboration among healthcare, research, and industry stakeholders. Through community-driven requirements gathering and standards development, the project seeks to address the current challenges in integrating wearable sensor data into healthcare practices. Inspired by successful standards in other domains, this initiative aims to catalyze a more interconnected digital health ecosystem where wearables play a pivotal role in personalized healthcare practices.
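As a sketch of what a low-level raw-data packet could look like, the snippet below encodes and decodes a hypothetical binary sample packet with a fixed header. The field layout is an illustrative assumption, not the standard the community process will produce.

```python
import struct
import time

# Hypothetical raw-sample packet layout (little-endian):
#   uint16 device_id | uint8 sensor_type | uint32 unix_time_ms (low 32 bits)
#   | uint16 sampling_rate_hz | uint16 n_samples | n_samples * int16 raw values
HEADER_FMT = "<HBIHH"


def encode_packet(device_id, sensor_type, sampling_rate_hz, samples):
    """Pack a header plus int16 samples into bytes ready for transmission."""
    header = struct.pack(HEADER_FMT, device_id, sensor_type,
                         int(time.time() * 1000) & 0xFFFFFFFF,
                         sampling_rate_hz, len(samples))
    return header + struct.pack(f"<{len(samples)}h", *samples)


def decode_packet(packet: bytes):
    """Unpack a packet back into a dictionary of header fields and samples."""
    header_size = struct.calcsize(HEADER_FMT)
    device_id, sensor_type, t_ms, rate, n = struct.unpack(HEADER_FMT, packet[:header_size])
    samples = struct.unpack(f"<{n}h", packet[header_size:header_size + 2 * n])
    return {"device_id": device_id, "sensor_type": sensor_type,
            "timestamp_ms": t_ms, "rate_hz": rate, "samples": list(samples)}


# Round-trip check with toy values
pkt = encode_packet(device_id=7, sensor_type=1, sampling_rate_hz=50, samples=[100, -3, 42])
assert decode_packet(pkt)["samples"] == [100, -3, 42]
```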
Abstract
The "Enhancing Global WASH Data Accessibility through Collaborative Initiatives" proposal, a joint effort by WASHWeb and openwashdata, aims to improve the global Water, Sanitation, and Hygiene (WASH) data ecosystem. The partnership, formed under shared goals for better data accessibility, usability, and representation, proposes a project divided into three work packages: Maintain, Extend, and Disseminate. The first package updates the Joint Monitoring Programme (JMP) dataset for broader use. The second aims to collaborate with data providers to create a new R dataset package, enhancing analyses of WASH investments and outcomes. The final package seeks to promote these open data packages through webinars, conference sessions, and online discussion groups, fostering a community around open WASH data.
Abstract
The OPEN-3D project advances urban mobility analytics by developing and enhancing four high-resolution traffic datasets derived from cutting-edge drone technology. This initiative, aligned with the ETH Domain ORD Program, introduces innovative datasets and revitalizes existing ones with advanced tools, redefining traffic research practices. OPEN-3D constructs two new datasets and enriches two existing ones, all adhering to FAIR principles, supporting pivotal advancements across AI, computer vision (CV), and traffic management.
Imagine a central hub designed to facilitate global data exchange and interoperability. OPEN-3D turns this vision into reality, supporting a broad network of researchers and practitioners in traffic science, AI, and CV. This project offers unprecedented insights into urban and semi-urban traffic dynamics, enabling detailed analyses of vehicle trajectories and interactions, potentially transforming urban traffic management.
OPEN-3D not only contributes to traffic studies but also inspires global urban mobility innovation. By embracing open science principles, each dataset is accessible and poised to spur further research, setting new standards in data quality and transparency. This comprehensive approach exemplifies the potential of collaborative, data-driven innovation to enhance urban safety and efficiency, fostering sustainable urban development globally and catalyzing progress in traffic management technologies.
Abstract
The Groupe ACM (Gr-ACM) is the research group responsible for the collection of architecture heritage archives known as the Archives de la construction moderne (ACM). Digital content in architecture heritage archives is increasing and will continue to increase, both through donations of new born-digital archives and through digitization campaigns of paper archives. Through the CA ORD Project, the Gr-ACM aims to improve its capacity to preserve and make FAIRly available data and metadata from digital architecture archives (both born-digital and digitized from paper). To date, the Gr-ACM has collected 4 TB of digital data consisting of files in different formats (.dwg, .dxf, .pln, .jpg, .pdf, etc.) stored on external hard drives and CD-Rs, which are therefore unavailable for research.
The CA ORD Project aims to fill this gap. It involves installing, configuring, and running the open-source software Archivematica, an archiving system based on a multi-service architecture that allows for the automation, extraction, and normalization of data and metadata. The resulting metadata (METS files) will be made available on ACM's existing AtoM-based ORD portal Morphé, since Archivematica and AtoM are interoperable.
Thanks to readily exploitable data, the CA ORD Project will enlarge the ACM user community (currently limited to historians) to include new researchers from the fields of architecture, engineering, and land management.
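As an illustration of how the METS files produced by Archivematica can be exploited programmatically, the sketch below lists the files described in a METS fileSec. Archivematica's METS additionally embeds PREMIS preservation metadata, which is not parsed here, and the example file name is a placeholder.

```python
import xml.etree.ElementTree as ET

NS = {
    "mets": "http://www.loc.gov/METS/",
    "xlink": "http://www.w3.org/1999/xlink",
}


def list_package_files(mets_path: str):
    """Yield (file ID, href) pairs for every file described in a METS document."""
    tree = ET.parse(mets_path)
    for file_el in tree.findall(".//mets:file", NS):
        flocat = file_el.find("mets:FLocat", NS)
        href = flocat.get(f"{{{NS['xlink']}}}href") if flocat is not None else None
        yield file_el.get("ID"), href


# Example (placeholder path to an Archivematica AIP METS file):
# for file_id, href in list_package_files("METS.xml"):
#     print(file_id, href)
```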
Abstract
Mechanistic ecohydrological models are essential tools for accurately simulating the impacts of climate change on the water, carbon, and nutrient cycles. However, very few models available to the community can holistically simulate such a wide range of processes, and most of them are written in low-level programming languages (e.g., C++ or FORTRAN), hindering model accessibility for new users. In this regard, Tethys and Chloris (T&C), a state-of-the-art ecohydrological model written in MATLAB, offers a strong foundation for creating an accessible, community-driven model. TRANSCODE aims to transform T&C into a FAIR (Findable, Accessible, Interoperable, Reusable) model by redesigning its architecture for modularity and re-implementing it in Julia, an open-source language that marries the computational efficiency of low-level programming languages such as FORTRAN with the accessibility of high-level languages such as MATLAB. This translation will improve computational efficiency, foster open code contributions from the community, and facilitate interoperability with other models. Specifically, the project will create a modular, comprehensively tested, highly efficient, and easily accessible version of T&C, termed T&C-Julia. TRANSCODE has the potential to significantly benefit the Earth science community and advance the field of ecohydrological modelling by providing a versatile, state-of-the-art, open-source modelling platform.
Abstract
Imaging science and computational microscopy are rapidly advancing, driven by novel interdisciplinary approaches involving deep learning algorithms. However, the increasing complexity and cost of these cutting-edge imaging systems and algorithms often make them inaccessible for non-experts, low-resource settings, and teaching applications. To address this challenge, we would like to organize a one-day workshop on open-source microscopy and AI to bring together the smart-microscopy community.
The workshop will showcase a full pipeline of open-source solutions for optical imaging, from hardware to computational reconstruction and deep learning-based analysis. It will provide hands-on experience for participants to assemble an open microscope (OpenSIM, OpenUC2), perform reconstructions (Pyxu), and analyze images (DeepImageJ). It aims to empower researchers and teachers to take full control of their imaging pipeline and iterate rapidly on new solutions. Beyond this event, the project seeks to drive a broader and lasting impact on the community. Educational resources such as Jupyter notebooks and hardware kits will be developed and made publicly available to support teaching at EPFL and beyond.
By showcasing a comprehensive open-source ecosystem for microscopy, this initiative aims to make state-of-the-art imaging technologies more accessible and to further catalyze the growth of an open, interdisciplinary microscopy community.
Abstract
Chemical pollution has exceeded planetary boundaries, requiring urgent solutions for chemical waste removal. Microbial biodegradation processes are crucial for breaking down chemical contaminants, yet the functions of microbial communities are often challenging to predict. To address this challenge, we aim to contribute a new pipeline, EDGEbp (Enabling Detection of metaGEnomic biodegradation potential), to advance the research capabilities of the ORD biodegradation prediction software enviPathPlus. Specifically, in EDGEbp, we will build a Hidden Markov Model-based pipeline to identify biodegradation genes and pathways from total microbial community DNA (metagenomic) sequencing data. EDGEbp will output confidence scores to infer the ‘contaminant biodegradation potential’ of a given microbial community based on sequencing information. In other words, our aim is to build a tool that converts unintelligible DNA sequences into easily understood biodegradation confidence scores. This will help us infer the capability of a specific microbiome to transform chemical contaminants. This project will therefore advance the sustainable development goal of improving water quality by reducing chemical pollution through microbial biodegradation. Overall, we anticipate that EDGEbp will expand the cutting-edge functionalities of the ORD tool enviPathPlus to support its long-term preservation and promote community engagement in line with ORD principles.
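As a simplified illustration of the pipeline's core step, the sketch below runs HMMER's hmmsearch against a set of predicted proteins and maps the resulting E-values to a toy confidence score. The profile database, thresholds, and scoring used by EDGEbp will differ; file names here are placeholders.

```python
import math
import subprocess


def search_biodeg_genes(hmm_profile: str, protein_fasta: str, tbl_out: str = "hits.tbl"):
    """Run HMMER's hmmsearch and return per-target hits with full-sequence E-values.

    Requires HMMER to be installed; `hmm_profile` would hold profiles of known
    biodegradation gene families (a simplification of the planned pipeline).
    """
    subprocess.run(
        ["hmmsearch", "--tblout", tbl_out, "--noali", hmm_profile, protein_fasta],
        check=True, capture_output=True,
    )
    hits = []
    with open(tbl_out) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            fields = line.split()
            hits.append({"target": fields[0], "evalue": float(fields[4])})
    return hits


def naive_confidence(evalue: float) -> float:
    """Toy mapping of an E-value to a 0-1 'biodegradation potential' score
    (illustrative only; EDGEbp will define its own scoring)."""
    return min(1.0, -math.log10(max(evalue, 1e-300)) / 50)
```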
Abstract
Mobile ground robots have become increasingly popular in academia and in various industrial applications. However, unlike other domains such as aerial robotics, autonomous driving, and construction, this field currently has no high-quality, large-scale dataset or reliable benchmark, nor the tooling needed to create one. Such a dataset would be immensely valuable for researchers and developers, fostering research on robust and practical algorithms across diverse environments. Moreover, a standardized benchmarking platform would enable fair comparisons between different approaches, fostering innovation and accelerating progress in mobile ground robot research. Motivated by this, we propose to collect and share a high-quality, versatile, large-scale robotic dataset, “GrandTour”, focusing on legged robots, together with a set of benchmarks and the scalable, automated tooling needed to produce and maintain them.
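As an example of the kind of metric a standardized benchmark could report, the sketch below computes a simple absolute trajectory error between a ground-truth and an estimated trajectory. The actual GrandTour benchmarks and their evaluation protocol are not defined here.

```python
import numpy as np


def absolute_trajectory_error(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """Root-mean-square position error between a ground-truth and an estimated
    trajectory (N x 3 arrays, already time-synchronized and expressed in the
    same frame). A full benchmark would also handle trajectory alignment and
    rotational error; this is only a minimal illustration.
    """
    err = np.linalg.norm(gt_xyz - est_xyz, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))


# Toy example: the estimate drifts 5 cm along x
gt = np.zeros((100, 3))
est = gt + np.array([0.05, 0.0, 0.0])
print(f"ATE: {absolute_trajectory_error(gt, est):.3f} m")   # -> 0.050 m
```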