The precoding problem for sparse random linear network coding

Xiaolin Li*, Wai Ho Mow

*Corresponding author for this work

Research output: Chapter in Book/Conference Proceeding/Report › Conference paper published in a book › Peer-reviewed

Abstract

In the random linear network coding scenario with subspace codes applied, if the number of packets injected into the network is larger than the dimension of the subspace, the packets cannot be linearly independent. This paper addresses the problem of how to choose these packets to represent the subspaces so as to minimize the decoding failure probability, and formulates it as a precoding problem. We propose a precoding method based on the generator matrices of a class of maximum distance separable (MDS) codes and show that it minimizes the decoding failure probability for a sparse random transfer matrix over a sufficiently large finite field. Our result is obtained by analyzing the rank distribution of random matrices over finite fields in the large-field-size limit. As a consequence, it sheds light on the tradeoff between the maximum achievable sparsity of the transfer matrix and the rate of the subspace code.
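The abstract's analysis centers on the rank distribution of sparse random matrices over a finite field: decoding succeeds exactly when the transfer matrix has full rank. As an illustrative sketch (not the paper's method), the following Python snippet estimates the full-rank probability of a sparse random matrix over a prime field GF(p) by Monte Carlo simulation; the sparsity model (each entry independently nonzero with probability `density`) and the function names are assumptions for illustration.

```python
import random

def rank_gf(mat, p):
    """Rank of a matrix over the prime field GF(p), via Gaussian elimination.
    Assumes p is prime (inverses computed with Fermat's little theorem)."""
    mat = [row[:] for row in mat]          # work on a copy
    rows, cols = len(mat), len(mat[0])
    rank = 0
    for c in range(cols):
        # Find a pivot row with a nonzero entry in column c.
        piv = next((r for r in range(rank, rows) if mat[r][c] % p != 0), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        inv = pow(mat[rank][c], p - 2, p)  # modular inverse of the pivot
        mat[rank] = [(x * inv) % p for x in mat[rank]]
        # Eliminate column c from every other row.
        for r in range(rows):
            if r != rank and mat[r][c] % p:
                f = mat[r][c]
                mat[r] = [(x - f * y) % p for x, y in zip(mat[r], mat[rank])]
        rank += 1
    return rank

def sparse_random_matrix(rows, cols, p, density, rng):
    """Each entry is nonzero (uniform in 1..p-1) with probability `density`
    (a simple Bernoulli sparsity model, assumed here for illustration)."""
    return [[rng.randrange(1, p) if rng.random() < density else 0
             for _ in range(cols)] for _ in range(rows)]

def full_rank_fraction(rows, cols, p, density, trials, seed=0):
    """Monte Carlo estimate of the probability that a sparse random
    rows-by-cols matrix over GF(p) has full rank (i.e. decoding succeeds)."""
    rng = random.Random(seed)
    hits = sum(rank_gf(sparse_random_matrix(rows, cols, p, density, rng), p)
               == min(rows, cols) for _ in range(trials))
    return hits / trials
```

Running `full_rank_fraction` for a fixed field size while lowering `density` shows the failure probability rising as the matrix grows sparser, which is the sparsity/reliability tension the abstract refers to.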

Original language: English
Title of host publication: 2012 International Symposium on Network Coding, NetCod 2012
Publisher: IEEE Computer Society
Pages: 185-190
Number of pages: 6
ISBN (Print): 9781467318921
DOIs
Publication status: Published - 2012
Event: 2012 International Symposium on Network Coding, NetCod 2012 - Cambridge, MA, United States
Duration: 29 Jun 2012 - 30 Jun 2012

Publication series

Name: 2012 International Symposium on Network Coding, NetCod 2012

Conference

Conference: 2012 International Symposium on Network Coding, NetCod 2012
Country/Territory: United States
City: Cambridge, MA
Period: 29/06/12 - 30/06/12

Keywords

  • Decoding failure probability
  • MDS codes
  • Random matrix
  • Rank distribution
  • Subspace codes
