The continuous development of remote sensing platforms and sensor technologies, combined with the open and free data policy of Earth Observation programs, is generating an unprecedented volume and variety of raw data. Whilst previously the major issue for researchers was identifying accessible remote sensing data sources, today the main issue is how to make the processing of this vast abundance of open data scalable. Given the insufficient memory size and number of cores available in commodity computers on the one hand, and the growing number of applications that require data processing in near real time (e.g., to support decision-makers) on the other, data processing pipelines necessitate algorithms that can run and scale on High-Performance Computing (HPC) systems. The Jülich Supercomputing Centre (JSC) is implementing modular HPC architectures to pave the way to Exascale supercomputing. These systems are the result of a co-design approach that involves the whole pipeline, from hardware through middleware to applications. This talk will describe the potential that such a strategy can bring to big Earth Observation data mining and applications. The HPC systems combine heterogeneous hardware accelerators (e.g., GPUs, FPGAs) and software technologies within the same architecture, covering the needs of both classic HPC simulations and novel machine (deep) learning algorithms.