In this document, we describe a method for deriving detailed quantitative models that predict performance, power, and energy consumption from source code software metrics, with a special focus on reconfigurable hardware.

The models are built by employing various statistical and machine learning methods, where the predictors (independent variables) are extracted from the source code using static analysis techniques, and the output of the models is an estimate of the gain (in terms of time, average power, or energy) of executing a software element (kernel) on a specific accelerator (e.g., on a now-widespread GPU, or on an FPGA unit) instead of on the CPU(s) of the host system.

To build the desired prediction models, we first selected several algorithms (referred to as benchmarks) implemented in the platform-independent OpenCL C language, assisted by C/C++ host code. We then instrumented these benchmarks to enable energy measurements and static analysis of the regions of interest in the code. Next, we extracted multiple code size, coupling, and complexity metrics from these kernels via static code analysis and aggregated them to system level for each benchmark. In parallel with the source code analysis, we measured the time, energy, and power required by these algorithms when executed on different platforms (most notably, on FPGA as well), using both internal and external measurement methods. Finally, we applied several statistical and machine learning methods to the combined static and dynamic information to build predictive models.
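As a minimal sketch of this pipeline, the following example pairs static metric vectors with measured speedups and fits an ordinary least-squares regression to predict the gain for an unseen kernel. The metric names and all numeric values are hypothetical placeholders, not data from our experiments:

```python
import numpy as np

# Hypothetical static metrics per benchmark kernel:
# [lines_of_code, loop_nesting_depth, arithmetic_ops, memory_ops]
X = np.array([
    [120, 2,  300,  80],
    [450, 3,  900, 210],
    [ 80, 1,  150,  40],
    [600, 4, 1500, 350],
    [250, 2,  500, 120],
    [520, 4, 1200, 300],
], dtype=float)
# Hypothetical measured speedup of each kernel on the accelerator vs. the CPU.
y = np.array([1.8, 3.5, 1.1, 5.2, 2.4, 4.6])

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_gain(metrics):
    """Predict the accelerator-vs-CPU gain for a new, unmeasured kernel."""
    return float(np.concatenate(([1.0], metrics)) @ coef)

print(f"predicted speedup: {predict_gain([300, 3, 700, 150]):.2f}x")
```

In the actual workflow, the rows of `X` would come from the static analysis step and `y` from the platform measurements; any of the statistical or machine learning methods mentioned above could replace the plain linear fit.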

There are no prerequisites for using the created models; to apply them to a new system, only the source code analysis defined in this report is needed, and the extracted metrics can be fed to one of the models to predict the best platform on which to run the kernel, as well as the expected gain. The models presented in this report can be used as refinements and extensions of those presented in previous work packages (which were the starting points of static modeling research): most importantly, they have been applied to reconfigurable hardware as well. These models will be used by the partitioning algorithms developed in task T3.4 for selecting the best partition, and will also serve as the basis for further work in T7.3 to develop system-level models.

In summary, we describe the method for creating predictive models through concrete experiments based on three sets of benchmark programs. The modeling methodology is not specific to these benchmarks, however; it can be applied to and repeated on any alternative benchmark set, should the need arise (e.g., on larger benchmark sets or on more domain-specific ones). Our results validate the idea of using source code metrics to predict runtime performance and power consumption, both for classic and for reconfigurable hardware components. For example, with the selected metrics, the Naive Bayes classification model for FPGA average power consumption estimation reached an 80% precision value, and several regression models for GPU energy consumption showed correlations above 0.80. The individual models that perform best for each target platform and prediction aim (time, power, or energy) can then be combined into a global prediction model.
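To illustrate the kind of classification model mentioned above, here is a minimal from-scratch Gaussian Naive Bayes sketch that classifies kernels by whether offloading is expected to be beneficial. The features, labels, and all values are hypothetical placeholders and do not reproduce our FPGA power models:

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class priors, feature means, and variances."""
    X, y = np.asarray(X, float), np.asarray(y)
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # Small variance floor avoids division by zero for constant features.
        params[c] = (len(Xc) / len(X), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def predict_gnb(params, x):
    """Return the class with the highest posterior log-probability."""
    x = np.asarray(x, float)
    best, best_lp = None, -np.inf
    for c, (prior, mu, var) in params.items():
        lp = np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var
        )
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical static metrics per kernel: [loop_depth, memory_ops, arithmetic_ops]
X = [[1, 40, 150], [2, 80, 300], [4, 350, 1500], [3, 210, 900]]
# Hypothetical label: 1 if offloading lowered average power, else 0.
y = [0, 0, 1, 1]

params = fit_gnb(X, y)
print(predict_gnb(params, [4, 300, 1200]))  # → 1
```

A trained classifier of this shape, one per target platform and prediction aim, is the kind of building block that a global prediction model would combine.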