The 34th IEEE International Requirements Engineering Conference (RE’26) will have an Artifact track. The Artifact track aims to foster open science and reusability in requirements engineering and gives researchers a concrete way to contribute to both.

Open Science is a movement that seeks to make scientific research more transparent, rigorous, accessible, and reproducible. It increases the accountability of those contributing the research, but also their credit for opening up their scientific work. By making research material available and reusable, open science opens research to participation, review/refutation, improvement, and (re-)use, for the benefit of the whole community.

An artifact includes (but is not limited to) any dataset, tool, script, experimental protocol, codebook, or other executable or non-executable object produced by or used in the research.

The Artifact track of RE’26 encourages and supports authors in making their research and artifacts more accessible, reproducible, and verifiable. For this purpose, the Artifact track offers the Call for Artifacts and the new Call for the Open Science Competition. This new competition is a collaboration between RE’26 and the Open Research Knowledge Graph (ORKG) of TIB - Leibniz Information Centre for Science and Technology.

All authors of the Research, RE@Next, and Industrial Innovation tracks are invited to participate in both calls.

Fame, honor, the Best Artifact Award, and two Open Science Awards with prize money await you.

Call for Open Science Competition

Gain more visibility, and get acknowledged for your contribution to open science in requirements engineering!

Authors of accepted papers in RE’26 (Research, RE@Next!, and Industrial Innovation tracks) are encouraged to participate in the Open Science Competition.

Open Science Awards and Prize Money

A submission can be entered for one or both challenges. All submissions compete for the Open Science Award and the prize money of the respective challenge. The goal of the awards is to recognize the authors’ outstanding contributions to open science in requirements engineering. The winners will be selected by the Artifact track Co-Chairs and external reviewers.

Call for Artifacts

Gain more visibility, and get acknowledged for your contribution to the RE community!

Authors of accepted papers in the Research, RE@Next!, and Industrial Innovation tracks of RE’26 are encouraged to submit their artifacts for evaluation. Papers with accepted artifacts will receive a “Badge” on the front page of their paper in the proceedings.

Authors may (but are not required to) present their accepted artifacts as posters during the conference.

Posters must be printed by the authors and brought to the conference; sizes up to DIN A0 are accepted.

Best Artifact Award

All accepted artifacts will compete for the best artifact award. The goal of the award is to recognize the effort of authors creating and sharing outstanding research artifacts. The best artifact will be selected by the program committee during the review process.

Eligibility and Evaluation Criteria

The purpose of this section is to communicate submission expectations to authors and reviewing guidelines to reviewers. Failure to meet these guidelines does not automatically mean rejection, and adhering fully to them does not automatically mean acceptance. Ambiguity is certain to exist, so academic knowledge and skills must be used to fully consider the eligibility of submissions, and scientific integrity is key to a successful and amicable process.

As in previous editions of RE, there will be two badges: Available and Reusable.

Available is awarded to publicly accessible artifacts that have a DOI and minimal documentation ensuring the runnability of the artifact.

  • The authors must place the artifact on a publicly accessible archival repository, such as Zenodo or FigShare.
  • A DOI for the artifact is provided via these archival repositories and is referenced in the artifact and also in the paper.
    Note: On Zenodo, a DOI can be reserved before the final publication of the artifact. Therefore, a DOI can already be provided in the paper before the camera-ready version.

Reusable is awarded to well-documented artifacts that facilitate reuse and replication.

  • An artifact is well-documented, exercisable, complete, and includes appropriate evidence of verification.
  • An artifact with this badge should facilitate reuse and repurposing.
  • Norms and standards of the research community for this artifact badge should be strictly adhered to.
  • Optional: For artifacts in the area of Natural Language Processing in RE, we invite authors to fill in and submit the NLP4RE ID-Card along with their artifact repository.

The two badges build on each other. That is, an artifact that receives the Reusable badge needs to also fulfill the criteria for Available. We encourage the authors to apply for both badges. Exceptional cases due to confidentiality issues must be clearly explained by the authors.

Submission Instructions for Authors

★ Applying for the badge Available

  • The artifact must be hosted online, considering the following criteria:
    • The URL to access the artifact is immutable (cannot be altered by the author). Use Zenodo or FigShare. Artifacts shared on services such as Dropbox, Google Drive, One Drive, and institutional websites will NOT be considered for this badge.
    • The artifact has a Digital Object Identifier (DOI) redirecting to the immutable URL.
    • If your artifact is on GitHub, follow these instructions to get a DOI for your code. In addition, add a CITATION File Format (CFF) file to your repository, which is a plain text file with human- and machine-readable citation information for software (and datasets). In this way, you can let others know how to cite your software or dataset correctly. You can easily create a CFF file online (a minimal sketch is shown after this list).
  • The artifact must contain a README.md file summarizing the following content:
    • "Summary of Artifact" – Describe what the artifact does, the expected inputs and outputs, and the motivation for developing and using the artifact.
    • "Authors Information" – List all authors and how to cite the use of this artifact. Note: The Artifact track will employ a single-blind review. No need for the authors to anonymize their submissions.
    • "Artifact Location" – Describe at which URL (or DOI) the artifact can be obtained.
  • The artifact must contain a LICENSE.md file showing the license of the artifact. The license should be a proper open-source license. If there exists a license file under a different name, the LICENSE.md file must point to the actual license.
  • Anyone must be able to access the artifact, without the need for registration.
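
For illustration, a minimal CITATION.cff file could look like the following sketch; all names, identifiers, and dates are placeholders to be replaced with your own details:

  cff-version: 1.2.0
  message: "If you use this artifact, please cite it as below."
  title: "Replication package for: <your paper title>"
  authors:
    - family-names: Doe
      given-names: Jane
      orcid: "https://orcid.org/0000-0000-0000-0000"
  version: 1.0.0
  doi: 10.5281/zenodo.0000000
  date-released: 2026-05-01
  license: MIT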

★ Applying for the badge Reusable

  • We strongly recommend that authors ask a colleague to test the usability of their artifact in a fresh environment before submitting it.
  • In almost all cases, the artifact must fulfill ALL the criteria for the Available badge listed above. Note that if confidentiality issues prevent the authors from publicly sharing the artifact, the “Reusable” badge can still be awarded. However, a clear statement of the motivations for not sharing the artifact publicly shall be provided in the README.md file.
  • The artifact must contain an extended README.md file explaining the following content (a sketch of such a README.md is shown after this list):
    • Same fields explained for the "Available" badge.
    • "Description of Artifact" – Describe each of the files in the artifact.
    • "System Requirements" (For automated analyses or tools) – state the required system, programs, and libraries needed to successfully run the artifact.
    • "Installation Instructions" (For automated analyses or tools) – explain in detail how to run the artifact from scratch.
    • For automated analyses or tools, there is an expectation that the submitted artifacts can be run on any machine. In cases where this is not possible, it is the responsibility of the authors to provide virtual environments from which to run the artifacts, for example, Python virtual environments, Docker containers, or VirtualBox VMs (a minimal Dockerfile sketch is shown after this list).
    • The artifact must be runnable within a maximum time of 60 minutes. If your installation time is longer than 60 minutes, you must make this clear in your Installation section and offer an explanation. Some scripts take a long time to produce results. In these cases, the authors must provide a minimum working example and the expected output. This can be done via a smaller dataset, intermediate script data saved by the authors, a truncated script, etc.
    • "Usage Instructions" – Explain (preferably with a running example) how the artifact can be used
      • For automated analyses or tools, this should include instructions on how to interact with the tool, API documentation, and all the information that enables other subjects to reuse the artifact.
      • For non-executable artifacts, e.g., interview guides, protocols, codebooks, data collected from qualitative studies, or datasets in general, this should include explanations of how the artifact can be reused by other researchers or practitioners.
    • "Steps to Reproduce" (For automated analyses or tools) – provide instructions on how to generate the results presented in the paper. Known deviations from results presented in the paper should be explicitly outlined (e.g., when a table or figure is not produced, or the produced results are different from the results presented in the paper). The anticipated time for reproducing the results should not exceed 60 minutes. Otherwise, if reproduction time is longer, the authors must provide intermediate results that can be used to facilitate reproduction.

Participation

The submission and review process will be conducted via the RE’26 Artifact track EasyChair. Make sure you select Artifacts.

After the submission, and before the notification date, the reviewers will interact with the authors using the EasyChair Rebuttal feature (see Review Process below).

Review Process

The review process in the Artifact track involves thorough discussions to improve the accessibility and reusability of the artifact. Reviewers and authors will work together to achieve this goal. The Artifact track will employ a single-blind review.

The review process has two primary objectives: i) encouraging the improvement of artifacts through proper documentation, and ii) verifying that the artifacts meet the aforementioned badge criteria. For this reason, the Artifact track review is intended not only as a peer review but also as a discussion.

The review process will take place via the Rebuttal feature within EasyChair. Each submission will consist of a textual Abstract including information about the artifact. Each artifact will go through one round of rebuttal, i.e., no back-and-forth between authors and reviewers is possible after the first rebuttal. Therefore, we ask authors to provide all answers to the reviewers’ comments in one round.

The entire review process is conducted approximately over a 3-week period. During this time, the reviewers will check the submitted artifacts against the badge guidelines. Reviewers are encouraged to start the review process early, as it can take time for reviewers and authors to sort out unforeseen issues in the artifacts. The reviewers will communicate their feedback and clarification via EasyChair.

Subsequently, the reviewers will again check the artifacts against the badge guidelines and will submit their final decision in EasyChair.

The Artifact track of RE’26 and the Open Research Knowledge Graph (ORKG) organize the first Open Science Competition to encourage and support authors in promoting open science in requirements engineering. This competition aligns with the broader open science movement, which aims to make scientific research more accessible, reproducible, and verifiable. It is an opportunity for the research community to showcase its commitment to open science in requirements engineering by making scientific knowledge FAIR - Findable, Accessible, Interoperable, and Reusable, and a call to action for researchers to adopt practices that will shape the future of scientific publishing and communication.

The competition comprises two challenges that ask authors to make their contributions to research knowledge more explicit with the help of the Open Research Knowledge Graph (ORKG). A submission can be entered for one or both challenges. The best submission in each challenge receives an Open Science Award, including prize money. The Artifact track Co-Chairs will involve other reviewers to determine the winners.

For further information, questions, or help regarding the Open Science Competition, please contact Oliver Karras (oliver.karras@tib.eu) and Giovanna Broccia (giovanna.broccia@isti.cnr.it).

Challenge 1: Annotate your paper with SciKGTeX to describe its research contribution.

The accepted paper that is best annotated with SciKGTeX will be awarded the Open Science - Best ORKG Annotation Award, including prize money of 100€.

SciKGTeX is a LaTeX package that allows authors to semantically annotate the scientific contributions of their research in the LaTeX document at the time of writing. These annotations are embedded into the PDF’s metadata in a structured XMP format, which can be harvested by search engines and knowledge graphs. This process not only simplifies contributing to research knowledge graphs but also promotes the practice of semantic representation in scientific communication and thus open science.

You can find the LaTeX package and more information at the following links:

Task

The first challenge of the Open Science Competition asks authors to annotate their papers using the five default annotations of SciKGTeX, as well as the additional annotations for the Open Science Competition, if applicable. Comprehensive information on how to use the annotations can be found in the materials provided above.

Remark: Experiments have shown that annotating a paper with SciKGTeX takes an average of just 7.5 minutes, with a range of 5 to 10 minutes.

Default annotations:

\researchproblem{“Your annotated text describing the research problem”}
\objective{“Your annotated text describing the research objective(s)”}
\method{“Your annotated text describing the research method(s)”}
\result{“Your annotated text describing the research result(s)”}
\conclusion{“Your annotated text describing the research conclusion(s)”}

Additional Open Science Competition annotations:

Artifact annotations:
\contribution{replication package}{“Your URL to the replication package”}
\contribution{code repository}{“Your URL to the code repository”}
\contribution{dataset}{“Your URL to the dataset”}
Threats to Validity annotations:
\contribution{internal validity}{“Your annotated text describing a threat to internal validity”}
\contribution{external validity}{“Your annotated text describing a threat to external validity”}
\contribution{construct validity}{“Your annotated text describing a threat to construct validity”}
\contribution{conclusion validity}{“Your annotated text describing a threat to conclusion validity”}

Remark: You can have multiple annotations of the same type.
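
For orientation, a paper annotated with SciKGTeX might look like the following minimal sketch; the package name (scikgtex) is our assumption, the placement of the annotations is purely illustrative, and the authoritative usage is described in the materials linked above:

  % Minimal, illustrative sketch of a SciKGTeX-annotated paper
  \documentclass[conference]{IEEEtran}
  \usepackage{scikgtex} % package name assumed; see the linked documentation

  \begin{document}
  \begin{abstract}
    \researchproblem{<sentence stating the research problem>}
    \objective{<sentence stating the research objective>}
    \method{<sentence naming the research method>}
    \result{<sentence summarizing the main result>}
    \conclusion{<sentence stating the conclusion>}
  \end{abstract}
  % ...
  \contribution{dataset}{<URL or DOI of the dataset>}
  \end{document}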

Evaluation criteria

Submissions will be evaluated by the following criteria:

  • Correctness of the annotated information (with respect to the manuscript)
  • Completeness of the annotations (bonus points for relevant, additional annotations)
  • Conciseness of the information in the annotations

Participation

For participation in Challenge 1, authors must submit their annotated paper via the RE’26 Artifact track EasyChair.

Challenge 2: Enrich your paper with an ORKG comparison to provide an overview of the state of the art for your particular research domain, question, or problem.

The accepted paper that is enriched with the best ORKG comparison will be awarded the Open Science - Best ORKG Comparison Award, including prize money of 200€.

The ORKG is a ready-to-use and sustainably operated platform with infrastructure and services that aims to open scientific knowledge and improve its FAIRness - Findability, Accessibility, Interoperability, and Reusability. Researchers can use the ORKG to systematically describe and organize research papers and their contributions. ORKG Comparisons are a specific feature of the ORKG that allows researchers to organize research contributions in a tabular view, enabling easy comparison and filtering along different properties. This kind of representation facilitates a more efficient review of the state of the art for specific research domains, questions, or problems, thereby enhancing the rigor and accountability of scientific inquiries. However, ORKG Comparisons are not just auxiliary tools. They are recognized as scholarly outputs in their own right, complete with Digital Object Identifiers (DOIs) for easy citation. This feature underscores the ORKG’s potential to enhance the visibility and credibility of research contributions.

You can find more information on how to accompany your paper with an ORKG Comparison at the following links:

Task

The second challenge of the Open Science Competition asks authors to create, publish, and cite an ORKG Comparison in their paper. The ORKG Comparison must provide an overview of the state of the art for the particular research domain, question, or problem of the corresponding paper. A comparison is particularly applicable when you systematically compare studies, software/tools, or approaches. Comprehensive information on how to create, publish, and cite an ORKG Comparison can be found in the materials provided above.
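
Since a published ORKG Comparison receives its own DOI, it can be cited in the paper like any other digital resource. A hypothetical BibTeX entry could look as follows; all field values are placeholders to be replaced with the data of your published comparison:

  @misc{orkg_comparison_2026,
    title     = {Comparison of <approaches to your research problem>},
    author    = {Doe, Jane and Roe, Richard},
    year      = {2026},
    publisher = {Open Research Knowledge Graph},
    doi       = {<DOI assigned when publishing the comparison>},
    url       = {https://orkg.org/comparison/<comparison ID>}
  }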

Evaluation criteria

Submissions will be evaluated by the following criteria:

  • Expressiveness of (semantic) attributes by which elements are compared
  • Level-of-detail and complexity of the attributes
  • Exhaustiveness of the compared elements

Participation

For participation in Challenge 2, authors must submit the link to their published ORKG Comparison via the RE’26 Artifact track EasyChair.

When authors publish their ORKG Comparison, they must assign a DOI to make it persistent and citable, and they must associate the ORKG Comparison with the RE’26 conference by selecting “34th IEEE International Requirements Engineering Conference (RE’26)” from the corresponding drop-down menu (see the screenshot below).

[Screenshot: selecting “34th IEEE International Requirements Engineering Conference (RE’26)” from the conference drop-down menu when publishing an ORKG Comparison]

The RE conference has an open science policy with the steering principle that all research results should be accessible to the public and, if possible, empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles and encourage all contributing authors to disclose anonymized and curated data to increase reproducibility and replicability.

Please note that the open science policy is optional and therefore not mandatory for submission or acceptance.

However, compliance with the open science policy is expected, and non-compliance must be justified. Therefore, we expect authors to include an explicit Data Availability Statement at the end of their paper (similar to acknowledgments). Specifically, authors should provide details about any artifact disclosed alongside their submission, such as any dataset, tool, script, experimental protocol, codebook, or other executable or non-executable object produced by or used in the research. Otherwise, the authors should justify the reasons why disclosure is not possible, e.g., due to intellectual property or confidentiality agreements.

Examples for Data Availability Statement:

  1. Data can be shared:

    The <data(set)/software/supplementary material/…> supporting the findings of this study are openly available in the <repository name, e.g., GitHub, Zenodo, FigShare,…> at <Link to repository, e.g., URL or DOI> [Citation of the published material].

  2. Data cannot be shared:

    The <data(set)/software/supplementary material/…> supporting the findings of this study cannot be shared due to <justification…>.
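
In a LaTeX paper, such a statement can simply be an unnumbered section placed near the acknowledgments; a minimal sketch with placeholder values (the DOI and citation key are illustrative):

  % Requires \usepackage{hyperref} (or the url package) for \url
  \section*{Data Availability}
  The dataset and analysis scripts supporting the findings of this study are openly
  available on Zenodo at \url{https://doi.org/10.5281/zenodo.0000000}~\cite{artifact2026}.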

Although the open science policy is optional, we strongly encourage authors to read and apply the following best practices to contribute to open science:

Upon submission:

  • If you have an artifact associated with your submission, like data, software, or other supplementary material, adequately document the artifact to allow for, to the best extent possible, replicability, reproducibility, and reusability.

  • Make your artifact available on a persistent, immutable repository with a public URL, e.g., archiving your artifact on Zenodo or FigShare, which generates a permanent DOI for your artifact. The DOI and the citation of the published artifact should be referenced in the artifact and also in the paper.

  • We recognize that reproducibility and replicability are not necessarily goals in qualitative research and that, similar to industrial studies, qualitative studies often face challenges in sharing research data. For guidelines on how to report qualitative research to ensure the assessment of the reliability and credibility of research results, see the Submission Q&A page.

  • Annotate your paper with SciKGTeX to describe its research contribution.

  • Enrich your paper with an ORKG Comparison, if applicable, e.g., to provide an overview of the state of the art for your particular research domain, question, or problem.

Upon acceptance:

  • Whenever possible, assign an appropriate open license to the shared artifact and make sure you share this license with the artifact in a LICENSE.md file in the same folder.

  • Upload your annotated paper to the ORKG in less than 90 seconds to increase its visibility: YouTube - Import of SciKGTeX annotated PDF into the ORKG. If you encounter any issues during the upload, please contact oliver.karras@tib.eu.

  • When your artifact is on GitHub, add a CITATION File Format (CFF) file to your repository, which is a plain text file with human- and machine-readable citation information for software and datasets. In this way, you can let others know how to cite your software or dataset correctly. You can easily create a CFF file online.

  • Take your chance and participate in the Call for Artifacts and the Open Science Competition!

Document Your Artifact:

  • Prepare a README.md file that lists precisely what is being shared and how to use and reuse the research artifact, e.g., by summarizing the following:

    • “Summary of Artifact” – Describe what the artifact does, the expected inputs and outputs, and the motivation for developing and using the artifact.
    • “Description of Artifact” – Describe the structure of the artifact, e.g., all files included.
    • “Authors Information” – List all authors and how to cite the use of this artifact.
    • “Artifact Location” – Describe at which URL (or DOI) the artifact can be obtained.
  • Introduce a running example that helps reviewers and the community validate and reuse the shared artifact.

    • “Usage Instructions” – Explain (preferably with a running example) how the artifact can be used.
  • If you are sharing code, think thoroughly about the requirements for running the code and how to facilitate this process (e.g., use virtual environments such as Docker where possible). It should be exercisable, complete, and include appropriate evidence of verification. Provide the following in the README.md or consider a separate INSTALL.md:

    • “System Requirements” – State the required system, programs, and libraries needed to successfully run the artifact.
    • “Installation Instructions” – Explain in detail how to run the artifact from scratch.
    • “Steps to Reproduce” – Explain in detail what needs to be done to produce the same data, figures, and tables as presented in your submission.

Further reading:

  • Guiding materials for using SciKGTeX and ORKG Comparisons:

    O. Karras, L. John, A. Ferrari, D. Fucci, D. Dell’Anna: Supplementary Materials of the Tutorial: “Promotion of Open Science in Requirements Engineering: Leveraging the ORKG and ORKG Ask for FAIR Scientific Information” (2.0), Zenodo, Link

  • General introduction to Open Science in Software Engineering and generally recommended practices:

    D. Mendez, D. Graziotin, S. Wagner, H. Seibold: Open Science in Software Engineering, In: M. Felderer, G.-H. Travassos (eds.) Contemporary Empirical Methods in Software Engineering, Springer, 2020. Link

  • Open Science - Artefact Management Guideline:

    J. Frattini, L. Montgomery, D. Fucci, M. Unterkalmsteiner, D. Mendez, J. Fischbach: Requirements Quality Research Artifacts: Recovery, Analysis, and Management Guideline, Journal of Systems and Software, 2024. Link

Questions? Use the Requirements Engineering Artifacts contact form.