Deduplication has become highly effective at eliminating duplicate data, multiplying the effective capacity of disk-based backup systems and making them a realistic replacement for tape. Despite these improvements, single-node raw capacity is still mostly limited to tens or a few hundreds of terabytes, forcing users to resort to complex and costly multi-node systems, which typically scale only to single-digit petabytes. As the opportunities for deduplication efficiency optimizations become scarce, we face the challenge of designing deduplication systems that effectively address the capacity, throughput, management, and energy requirements of the petascale age.
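To make the capacity-multiplying effect concrete, the following is a minimal sketch of fingerprint-based deduplication: each chunk of the incoming stream is identified by a cryptographic hash, and only previously unseen chunks consume storage. The fixed-size chunking, in-memory index, and function names are illustrative assumptions for this sketch, not the design of any particular system.

```python
import hashlib

def deduplicate(stream, chunk_size=8192):
    """Split a byte stream into fixed-size chunks and store each unique
    chunk only once, keyed by its SHA-256 fingerprint.

    Fixed-size chunking and the in-memory dict are simplifications; a real
    system would use content-defined chunking and an on-disk index.
    """
    index = {}    # fingerprint -> chunk data (unique chunks only)
    recipe = []   # ordered fingerprints needed to rebuild the stream
    logical = stored = 0
    for off in range(0, len(stream), chunk_size):
        chunk = stream[off:off + chunk_size]
        fp = hashlib.sha256(chunk).digest()
        if fp not in index:      # new chunk: it consumes storage
            index[fp] = chunk
            stored += len(chunk)
        logical += len(chunk)
        recipe.append(fp)
    return recipe, index, logical, stored

if __name__ == "__main__":
    # A stream with heavy repetition, as is typical of backup data.
    data = (b"A" * 8192 + b"B" * 8192) * 100 + b"unique tail"
    recipe, index, logical, stored = deduplicate(data)
    print(f"logical bytes: {logical}, stored bytes: {stored}, "
          f"dedup ratio: {logical / stored:.1f}x")
```

On such repetitive input the stored footprint is a small fraction of the logical stream, which is the effect that lets disk-based backup systems stand in for tape.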
We present a high-performance deduplication prototype, designed at SRL from the ground up to optimize overall single-node performance by making the best possible use of a node's resources, and to achieve three important goals: scale to large capacity, provide good deduplication efficiency, and deliver near-raw-disk throughput.
We will also discuss the requirements and challenges in designing a commercial large-scale cloud deduplication system.