In this report we describe a method for deriving high-level quantitative prediction models
that make low-fidelity predictions of the performance, power, and energy consumption obtained when a software component is mapped to a specific processing element. The predictions are low-fidelity because the transformation-specific parameters are not yet known at this stage; instead, the predictions are based on software metrics computed from the source code.
The models are built with various statistical and machine learning methods. The predictors (independent variables) are computed from the source code using static analysis techniques, and the output of the models is an estimate of the gain (in terms of time, average power, or energy) achieved when a software element (kernel) is executed on a specific processing element instead of sequentially on a single-core CPU.
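As a minimal illustration of the modelling idea, the sketch below fits an ordinary least squares linear model that maps static source-code metrics to a measured gain. The metric names (lines of code, cyclomatic complexity) and the tiny training set are hypothetical placeholders, not data from the report; the actual work uses a broader metric suite and multiple learning methods.

```python
# Hedged sketch: a linear model from static metrics to gain, where
# gain = t_sequential / t_accelerated. All numbers are illustrative.

def fit_linear(X, y):
    """Ordinary least squares via the normal equations (with intercept)."""
    # Augment each row with a constant 1 for the intercept term.
    A = [[1.0] + list(row) for row in X]
    n, m = len(A), len(A[0])
    # Build the normal equations (A^T A) w = A^T y.
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(m)]
           for i in range(m)]
    aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(m)]
    # Solve with Gaussian elimination (fine for a handful of metrics).
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    w = [0.0] * m
    for i in reversed(range(m)):
        w[i] = (aty[i] - sum(ata[i][j] * w[j]
                             for j in range(i + 1, m))) / ata[i][i]
    return w

def predict(w, row):
    """Apply the fitted weights to one metric vector."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))

# Hypothetical per-kernel metrics: [lines_of_code, cyclomatic_complexity]
X = [[120, 4], [300, 9], [80, 2], [500, 15]]
y = [2.5, 5.3, 1.7, 8.5]   # measured gain vs. single-core sequential run
w = fit_linear(X, y)
estimated_gain = predict(w, [200, 6])
```

A real pipeline would replace this hand-rolled regression with established statistical and machine learning tooling, but the input/output contract is the same: static metrics in, estimated gain out.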
To build the prediction models, we first selected a number of algorithms, referred to
as benchmarks, implemented on the platforms we focus on, such as native CPU and OpenCL. We manually identified the kernels in these benchmarks and tagged them with markers for the energy measurement framework and the static analyser. We then extracted multiple size, coupling, and complexity metrics from the kernels of the analysed systems and aggregated them to system level for every benchmark. Next, we collected measurements of the time, energy, and power required to run these algorithms on the different platforms, relying on an oscilloscope-based measurement environment. Finally, we applied multiple statistical and machine learning methods that use the calculated metrics to build the models.
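Two of the steps above lend themselves to a short sketch: aggregating per-kernel static metrics to system level, and turning raw time and energy measurements into the gain labels the models are trained on. The field names and aggregation rules below (summing size metrics, taking the maximum complexity) are illustrative assumptions, not the report's actual schema.

```python
# Hedged sketch of the data-preparation steps; all names are hypothetical.

def aggregate_metrics(kernels):
    """Aggregate per-kernel static metrics to system (benchmark) level."""
    return {
        "total_loc": sum(k["loc"] for k in kernels),
        "max_cyclomatic": max(k["cyclomatic"] for k in kernels),
        "kernel_count": len(kernels),
    }

def gain(sequential, accelerated, key):
    """Gain of the accelerated run vs. the sequential single-core baseline.

    A value above 1 means the accelerator improves on the baseline for
    the given measurement (e.g. "time_s" or "energy_j").
    """
    return sequential[key] / accelerated[key]

# Hypothetical benchmark: two tagged kernels plus measured runs.
kernels = [{"loc": 120, "cyclomatic": 4}, {"loc": 80, "cyclomatic": 7}]
cpu = {"time_s": 12.0, "energy_j": 96.0}   # sequential single-core baseline
ocl = {"time_s": 3.0, "energy_j": 48.0}    # OpenCL device, measured

features = aggregate_metrics(kernels)       # model input (predictors)
time_gain = gain(cpu, ocl, "time_s")        # model output label: 4.0
energy_gain = gain(cpu, ocl, "energy_j")    # model output label: 2.0
```

One such (features, gain) pair is produced per benchmark and platform; the collection of these pairs forms the training set for the statistical and machine learning methods.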