Cache-oblivious peeling of random hypergraphs (Conference proceedings contribution)

Label
  • Cache-oblivious peeling of random hypergraphs (Conference proceedings contribution) (literal)
Year
  • 2014-01-01T00:00:00+01:00 (literal)
DOI
  • 10.1109/DCC.2014.48 (literal)
Alternative label
  • Belazzougui D., Boldi P., Ottaviano G., Venturini R., Vigna S. (2014)
    Cache-oblivious peeling of random hypergraphs
    in DCC 2014 - Data Compression Conference, Snowbird, Utah, USA, 26-28 March 2014
    (literal)
Authors
  • Belazzougui D., Boldi P., Ottaviano G., Venturini R., Vigna S. (literal)
Start page
  • 352 (literal)
End page
  • 361 (literal)
Notes
  • PuMa (literal)
  • Scopus (literal)
Affiliations
  • Department of Computer Science, University of Helsinki, Finland; Department of Computer Science, University of Milan, Italy; CNR-ISTI, Pisa, Italy; CNR-ISTI, Pisa, Italy; Department of Computer Science, University of Milan, Italy; (literal)
Title
  • Cache-oblivious peeling of random hypergraphs (literal)
ISBN
  • 978-1-4799-3882-7 (literal)
Abstract
  • The computation of a peeling order in a randomly generated hypergraph is the most time-consuming step in a number of constructions, such as perfect hashing schemes, random r-SAT solvers, error-correcting codes, and approximate set encodings. While there exists a straightforward linear-time algorithm, its poor I/O performance makes it impractical for hypergraphs whose size exceeds the available internal memory. We show how to reduce the computation of a peeling order to a small number of sequential scans and sorts, and analyze its I/O complexity in the cache-oblivious model. The resulting algorithm requires O(sort(n)) I/Os and O(n log n) time to peel a random hypergraph with n edges. We experimentally evaluate the performance of our implementation of this algorithm in a real-world scenario by using the construction of minimal perfect hash functions (MPHF) as our test case: our algorithm builds an MPHF of 7.6 billion keys in less than 21 hours on a single machine. The resulting data structure is both more space-efficient and faster than that obtained with the current state-of-the-art MPHF construction for large-scale key sets. (literal)
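The "straightforward linear time algorithm" the abstract refers to is classic degree-1 peeling: repeatedly remove a vertex that belongs to exactly one remaining edge, together with that edge. A minimal in-memory sketch follows (function and variable names are illustrative, not taken from the paper):

```python
from collections import defaultdict

def peel(edges):
    """Peel a hypergraph given as a list of edges (tuples of vertices).

    Repeatedly removes a vertex contained in exactly one remaining
    edge, together with that edge. Returns the removal order as
    (vertex, edge_index) pairs; the hypergraph is peelable iff the
    order covers every edge.
    """
    incidence = defaultdict(set)      # vertex -> indices of remaining edges
    for i, edge in enumerate(edges):
        for v in edge:
            incidence[v].add(i)
    stack = [v for v in incidence if len(incidence[v]) == 1]
    order = []
    while stack:
        v = stack.pop()
        if len(incidence[v]) != 1:
            continue                  # degree changed after v was queued
        (i,) = incidence[v]           # the unique remaining edge at v
        order.append((v, i))
        for u in edges[i]:            # delete edge i from all its vertices
            incidence[u].discard(i)
            if len(incidence[u]) == 1:
                stack.append(u)
    return order
```

Note that each peeling step chases pointers into `incidence` at essentially random positions, which is exactly the poor I/O locality the paper addresses: its contribution is to replace these random accesses with a small number of sequential scans and sorts, achieving O(sort(n)) I/Os in the cache-oblivious model.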