ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States

Title

AXA: Cross-Language Analysis through Integration of Single-Language Analyses

Paper Link

Paper Link

Purpose

The artifact reproduces the evaluation of the paper “AXA: Cross-Language Analysis through Integration of Single-Language Analyses”. The included Dockerfile downloads and builds our implementation of the framework, including a cross-language points-to analysis for Java, JavaScript, and native code, as well as the benchmark. Furthermore, it provides scripts to reproduce the evaluation.

Badge

We apply for the following badges:

  • Available
  • Reusable

Technology

The reviewer is assumed to be familiar with Bash and Docker.

Provenance

The artifact can be obtained via the following DOI:
10.5281/zenodo.13364690

Instructions

Prerequisites

The artifact requires a Linux system with a working Docker installation and at least 16 GB of RAM. We have tested the artifact on a machine running Ubuntu with 32 GB of RAM.

Obtaining the Artifact

The archive “axa-artifact.zip” contains a Dockerfile and the files required to build a Docker image, from which the evaluation can be reproduced.

Creating the Container

Download and extract the archive, then change into the extracted directory.
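
For example, assuming the archive unpacks into a directory named axa-artifact (the actual directory name may differ):

unzip axa-artifact.zip
cd axa-artifact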

./createContainer.sh

This command builds the Docker image axaimage, downloading all necessary dependencies in the process. On our machine, this took about 20 minutes.

Starting the Container

docker run -it axaimage

This command starts the Docker container and opens a terminal in which all commands listed below can be executed.
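
To keep files produced inside the container available on the host, a standard Docker bind mount can be used; the host and container paths below are a suggestion, not part of the artifact:

docker run -it -v "$PWD/results:/results" axaimage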

Reproducing the Evaluation

All scripts to reproduce the evaluation results (Table I, Table II, and Table III) are located in /runner.

Table I (Implementation Effort)

To obtain the lines of code (Table I) of the extensions and changes made to the single-language analyses for their integration into AXA, run the following command:

/runner/printLOCs.sh

Table II (Benchmark)

The evaluation of Java-JavaScript testcases and Java-Native testcases is performed separately. To run the evaluation of the annotated testcases (Table II), execute the scripts runJSBenchmark.sh (JavaScript testcases) and runNativeBenchmark.sh (native testcases). These run a cross-language points-to analysis on the annotated benchmarks. Each testcase is annotated with the expected points-to sets of references affected by cross-language interactions. A testcase is considered passed iff the points-to analysis results match the annotation.
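
For illustration, a testcase annotated in this spirit could look like the following sketch. The annotation name and its attribute are hypothetical placeholders, not the benchmark's actual annotation (the real annotations can be found at the benchmark location given in the Reusability section).

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical annotation; the name and attribute are illustrative only.
@Retention(RetentionPolicy.RUNTIME)
@interface ExpectedPointsTo {
    // Allocation sites the analysis is expected to report for the annotated reference.
    String[] allocationSites();
}

public class JSInteractionFixture {
    // 'fromJS' is affected by a cross-language interaction; the testcase
    // passes iff the computed points-to set matches the annotated sites.
    @ExpectedPointsTo(allocationSites = { "JavaScript: object literal, script.js line 3" })
    static Object fromJS;
}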

To evaluate Java-JavaScript testcases:

/runner/runJSBenchmark.sh

To evaluate Java-Native testcases:

/runner/runNativeBenchmark.sh

Table III (Precision & Recall of AXA vs. single-language analysis)

To measure the precision and recall of the points-to analysis (Table III), execute the scripts precisionRecallJS.sh (JavaScript testcases) and precisionRecallNative.sh (native testcases). This experiment first instruments the benchmark cases to log the instance ids of reference variables at all possible points in the code. The benchmarks are executed and the instance ids are logged to a file. Then, the benchmark code is analyzed, first with a Java-only points-to analysis and then with our cross-language points-to analysis. The points-to sets of all references in the code are matched against the instances logged by the instrumentation to calculate precision and recall.
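
For interpreting these numbers, and assuming the standard definitions (the paper's exact formulation and aggregation may differ): let P be the points-to set computed for a reference and O the set of instance ids observed for that reference during the instrumented runs. Then

  precision = |P ∩ O| / |P|
  recall    = |P ∩ O| / |O|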

To calculate precision/recall of points-to analysis on Java-JavaScript testcases:

/runner/precisionRecallJS.sh

To calculate precision/recall of points-to analysis on Java-Native testcases:

/runner/precisionRecallNative.sh

Reusability

To analyze arbitrary Java-Native code using AXA, launch the org.opalj.xl.Coordinator class. The current implementation performs a cross-language points-to analysis and outputs an overview of the detected cross-language interactions.

Specify the Java classes to be analyzed using the -cp argument. To analyze native code available as LLVM bitcode, specify the path to an LLVM .bc module via the LLVM_LIB_PATH environment variable.

cd /opal
export LLVM_LIB_PATH=/opal/DEVELOPING_OPAL/validateCross/src/test/resources/xl_llvm/libnative.bc
sbt "project XLanguage; runMain org.opalj.xl.Coordinator -cp=/opal/DEVELOPING_OPAL/validate/target/scala-2.13/test-classes/org/opalj/fpcf/fixtures/xl/llvm"

Reusability of the Benchmark

The benchmarks can be reused to evaluate any Java/JavaScript/Native cross-language points-to analysis by processing the annotations. The benchmark is located in the artifact at:

/opal/opal/DEVELOPING_OPAL/validate/src/test/java/org/opalj/fpcf/fixtures/xl
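
As a minimal sketch of how another analysis harness could process such annotations (reusing the hypothetical @ExpectedPointsTo from the Table II section; substitute the annotation classes actually used by the benchmark):

import java.lang.reflect.Field;

public class AnnotationReader {
    // Collects the expected points-to sets of all annotated fields of a
    // benchmark class so they can be compared against analysis results.
    public static void main(String[] args) throws Exception {
        Class<?> testcase = Class.forName(args[0]);
        for (Field field : testcase.getDeclaredFields()) {
            ExpectedPointsTo expected = field.getAnnotation(ExpectedPointsTo.class);
            if (expected != null) {
                System.out.println(field.getName() + " -> "
                        + String.join(", ", expected.allocationSites()));
            }
        }
    }
}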