To fully exploit the physics reach of the High-Luminosity Large Hadron Collider (HL-LHC), the LHC experiments are planning substantial upgrades of their detector technologies and increases in their data acquisition rates. The higher proton-proton interaction rate, pileup and event processing rate present an unprecedented challenge for real-time and offline event reconstruction, requiring processing power orders of magnitude greater than is available today and far exceeding the expected gains from conventional CPUs. The Compact Muon Solenoid (CMS) experiment is developing a fully heterogeneous reconstruction software stack that will be used during the next LHC data-taking period, starting in 2022. Its first applications will be the online reconstruction running on a GPU-equipped High-Level Trigger (HLT) farm, and the offline reconstruction running on HPC centres worldwide. These activities will allow the collaboration to gain experience with parallel algorithms and a heterogeneous framework, experience that will be essential for leveraging diverse kinds of accelerators during HL-LHC data taking. To keep the resulting cost of software development, maintenance and validation under control, CMS is evaluating several performance portability frameworks that promise a “write once, run anywhere” approach, building the same code base for different back-ends and accelerator types. The speaker will present the ongoing work to port the CMS reconstruction software to the Intel oneAPI platform and compare its performance on different back-ends with that of native code running on the same hardware.