Data-placement strategies to improve efficiency based on statistical characteristics of workloads and new-memory technologies
In the age of big data, as the volume of data we generate grows, we must develop new models, processes, and systems to cope with the growing demands of leveraging that data to discover new insights.
This project examines non-volatile memory systems as a disruptive technology in big-data processing. The central question: if we change one of the basic models of computer memory, can we discover new paradigms for fast, efficient data processing? Can we make grid-scale computing cheaper, in terms of both time (higher data throughput) and energy (simpler algorithms)?
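One family of data-placement strategies the project's framing suggests is frequency-driven tiering: keep the statistically hottest items in a small fast tier (e.g. DRAM) and the rest in a larger, cheaper tier (e.g. non-volatile memory). The sketch below is purely illustrative; the class, tier names, and policy are hypothetical and not taken from the project itself.

```python
from collections import Counter


class TieredStore:
    """Illustrative two-tier key-value store.

    A small 'fast' tier (standing in for DRAM) holds the most
    frequently accessed items; everything else lives in a large
    'slow' tier (standing in for non-volatile memory). Placement
    is driven by observed access counts -- a simple stand-in for
    the statistical workload characteristics the project studies.
    """

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}              # hot items (small, fast tier)
        self.slow = {}              # cold items (large, slow tier)
        self.accesses = Counter()   # per-key access statistics

    def put(self, key, value):
        # New items start in the capacious slow tier.
        self.slow[key] = value
        self._rebalance()

    def get(self, key):
        # Record the access, then re-place items before the lookup.
        self.accesses[key] += 1
        self._rebalance()
        return self.fast.get(key, self.slow.get(key))

    def _rebalance(self):
        # Promote the most frequently accessed keys into the fast tier.
        all_items = {**self.fast, **self.slow}
        hot = {k for k, _ in self.accesses.most_common(self.fast_capacity)}
        self.fast = {k: v for k, v in all_items.items() if k in hot}
        self.slow = {k: v for k, v in all_items.items() if k not in hot}
```

In practice a real policy would also weigh write endurance, access latency asymmetry, and persistence guarantees of the non-volatile tier, not raw frequency alone.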