Asynchronous Decentralized Algorithms for the Noisy 20 Questions Problem

Theodoros Tsiligkaridis (MIT Lincoln Laboratory, USA)

This paper studies the problem of adaptively searching for an unknown target using multiple agents connected through a time-varying network topology. The agents are equipped with sensors capable of fast information processing, and their search is controlled based on noisy observations. We propose asynchronous decentralized algorithms for adaptive query-based search that combine elements of the 20 questions approach and social learning. Under standard assumptions on the time-varying network dynamics, we prove convergence to a correct consensus on the value of the parameter as the number of iterations grows. Our results establish that stability and consistency can be maintained even with one-way updating and randomized pairwise averaging, thus providing a scalable, low-complexity alternative to the synchronous decentralized estimation algorithms studied in previous works. We illustrate the effectiveness and robustness of our algorithm for random network topologies.

Second-Order Coding Region for the Discrete Lossy Gray-Wyner Source Coding Problem

Lin Zhou, Vincent Y. F. Tan and Mehul Motani (National University of Singapore, Singapore)

We derive the optimal second-order coding region for the lossy Gray-Wyner source coding problem for discrete memoryless sources under mild conditions. To do so, we leverage the properties of an appropriate generalization of the conditional distortion-tilted information density, which was first introduced by Kostina and Verdú (2012). The converse part uses the perturbation argument by Gu and Effros (2009) in their strong converse proof of the discrete Gray-Wyner problem. The achievability part uses a generalization of type covering lemmas and the uniform continuity of the conditional rate-distortion function in both the source joint distribution and the distortion level.

Remaining Uncertainties and Exponents under Rényi Information Measures

Masahito Hayashi (Nagoya University, Japan); Vincent Y. F. Tan (National University of Singapore, Singapore)

We study the asymptotics of the remaining uncertainty of a source when a compressed version of it and correlated side-information are observed. Instead of measuring the remaining uncertainty using Shannon measures, we do so using two forms of the conditional Rényi entropy. We show that these asymptotic results are generalizations of the strong converse exponent and the error exponent of Slepian-Wolf source coding.

On Second-Order Asymptotics of AWGN Channels with Feedback under the Expected Power Constraint

Lan V. Truong, Silas L. Fong and Vincent Y. F. Tan (National University of Singapore, Singapore)

In this paper, we analyze the asymptotic expansion for additive white Gaussian noise (AWGN) channels with feedback under an expected power constraint and the average error probability formalism. We show that the $\epsilon$-capacity depends on $\epsilon$ in general, and so the strong converse fails to hold. Furthermore, we provide bounds on the second-order term in the asymptotic expansion: it lies between $-\ln\ln n$ and a term proportional to $+\sqrt{n\ln n}$. The lower bound on the second-order term shows that feedback does provide an improvement in the maximal achievable rate over the case where no feedback is available.

Encoding and Decoding of Balanced q-ary Sequences Using a Gray Code Prefix

Elie Mambou and Theo G. Swart (University of Johannesburg, South Africa)

Balancing sequences over a non-binary alphabet is considered, where the algebraic sum of the components (also known as the weight) is equal to some specific value. Various schemes based on Knuth’s simple binary balancing algorithm have been proposed. However, these have mostly assumed that the prefix describing the balancing point in the algorithm can easily be encoded. In this paper we show how non-binary Gray codes can be used to generate these prefixes. Together with a non-binary balancing algorithm, this forms a complete balancing system with straightforward and efficient encoding/decoding.
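As a point of reference, the binary algorithm the paper generalizes can be sketched in a few lines. Knuth's observation is that for any even-length binary word there is always a prefix length $k$ such that complementing the first $k$ bits balances the word; the sketch below (function name and interface are ours, not the authors') finds such a $k$ by linear search.

```python
def knuth_balance(word):
    """Find a prefix length k such that complementing the first k bits
    of `word` makes it balanced (equal numbers of 0s and 1s).

    As the prefix length k grows from 0 to n, the weight changes by
    exactly 1 at each step, from w to n - w, so it must pass through
    n/2 for some k. Assumes len(word) is even.
    """
    n = len(word)
    assert n % 2 == 0, "binary Knuth balancing needs an even length"
    for k in range(n + 1):
        flipped = [1 - b for b in word[:k]] + list(word[k:])
        if sum(flipped) == n // 2:
            return k, flipped
    raise ValueError("unreachable for even-length binary words")

k, balanced = knuth_balance([1, 1, 1, 0, 1, 1])
# complementing the first k bits yields a word with exactly n/2 ones
```

The paper's actual contribution, non-binary balancing with a Gray-code-encoded prefix, is not reproduced here; this sketch only illustrates the binary Knuth scheme it builds on.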

Universal decoding for source-channel coding with side information

Neri Merhav (Technion, Israel)

We consider a setting of Slepian-Wolf coding, where the random bin of the source vector undergoes channel coding and is then decoded at the receiver, based on additional side information correlated to the source. For a given distribution of the randomly selected channel codewords, we propose a universal decoder that depends on the statistics of neither the correlated sources nor the channel, assuming first that they are both memoryless. Exact analysis of the random binning/coding error exponent of this universal decoder shows that it is the same as the one achieved by the optimal maximum a-posteriori (MAP) decoder. Previously known results on universal Slepian-Wolf source decoding, universal channel decoding, and universal source–channel decoding are all obtained as special cases of this result. Subsequently, we outline further generalizations in several directions, including: (i) finite-state (FS) sources and channels, along with a universal decoding metric based on Lempel-Ziv (LZ) parsing, (ii) arbitrary sources and channels, where the universal decoding is with respect to a given class of decoding metrics, and (iii) full Slepian-Wolf coding, where both source streams are separately encoded and jointly decoded by a universal decoder.

The generalized stochastic likelihood decoder: random coding and expurgated bounds

Neri Merhav (Technion, Israel)

The likelihood decoder is a stochastic decoder that selects the decoded message at random, using the posterior distribution of the true underlying message given the channel output. In this work, we study a generalized version of this decoder where the posterior is proportional to a general function that depends only on the joint empirical distribution of the output vector and the codeword. This framework allows both mismatched versions and universal (MMI) versions of the likelihood decoder, as well as the corresponding ordinary deterministic decoders, among many others. We provide a direct analysis method that yields the exact random coding exponent (as opposed to separate upper bounds and lower bounds that turn out to be compatible, which were derived earlier by Scarlett et al.). We also extend the result from pure channel coding to combined source and channel coding (random binning followed by random channel coding) with side information (SI) available to the decoder. Finally, returning to pure channel coding, we derive also an expurgated exponent for the stochastic likelihood decoder, which turns out to be at least as tight (and in some cases, strictly so) as the classical expurgated exponent of the maximum likelihood decoder, even though the stochastic likelihood decoder is suboptimal.

On Projections of the Rényi Divergence on Generalized Convex Sets

M. Ashok Kumar (Indian Institute of Technology, India); Igal Sason (Technion – Israel Institute of Technology, Israel)

Motivated by a recent result by van Erven and Harremoës, we study a forward projection problem for the Rényi divergence on a particular $\alpha$-convex set, termed $\alpha$-linear family. The solution to this problem yields a parametric family of probability measures which turns out to be an extension of the exponential family, and it is termed $\alpha$-exponential family. An orthogonality relationship between the $\alpha$-exponential and $\alpha$-linear families is first established and is then used to transform the reverse projection on an $\alpha$-exponential family into a forward projection on an $\alpha$-linear family. The full paper version of this work is available on the arXiv at \url{http://arxiv.org/abs/1512.02515}.

Generalized Integrated Interleaving BCH Codes

Yingquan Wu (Micron Technology, USA)

Tang and Koetter proposed generalized integrated interleaving Reed-Solomon codes in which each code of the second layer belongs to a different subcode of the first-layer code, based on the rationale that the strongest code is used to correct the most corrupted component word while the weakest code corrects the least corrupted component word. In this work we propose a novel generalized integrated interleaving scheme for binary BCH codes, prove a lower bound on the minimum distance, and derive encoding and decoding algorithms similar to those for Reed-Solomon codes.

A Proof of the Strong Converse Theorem for Gaussian Broadcast Channels via the Gaussian Poincaré Inequality

Silas L. Fong and Vincent Y. F. Tan (National University of Singapore, Singapore)

We prove that 2-user Gaussian broadcast channels admit the strong converse. This implies that for every sequence of block codes with an asymptotic maximal error probability smaller than one, the limit points of the corresponding sequence of rate pairs must lie within the capacity region derived by Cover and Bergmans. The main mathematical tool required for our analysis is a logarithmic Sobolev inequality known as the Gaussian Poincaré inequality.

A Large Deviations Approach to Secure Lossy Compression

Nir Weinberger and Neri Merhav (Technion, Israel)

A Shannon cipher system for memoryless sources is considered, in which distortion is allowed at the legitimate decoder. The source is compressed using a rate distortion code secured by a shared key, which satisfies a constraint on the compression rate, as well as a constraint on the exponential rate of the excess-distortion probability at the legitimate decoder. Secrecy is measured by the exponential rate of the exiguous-distortion probability at the eavesdropper, rather than by the traditional measure of equivocation. The perfect secrecy exponent is defined as the maximal exiguous-distortion exponent achievable when the key rate is unlimited. Under limited key rate, it is proved that the maximal achievable exiguous-distortion exponent is equal to the minimum between the average key rate and the perfect secrecy exponent, for a fairly general class of variable key rate codes.
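In symbols (with hypothetical notation, since the abstract states the result only in words), the limited-key-rate theorem says that the best achievable exiguous-distortion exponent under average key rate $R_{\mathrm{key}}$ is

```latex
E_{\mathrm{sec}}(R_{\mathrm{key}}) \;=\; \min\bigl\{\, R_{\mathrm{key}},\; E_{\mathrm{perfect}} \,\bigr\},
```

where $E_{\mathrm{perfect}}$ denotes the perfect secrecy exponent achieved when the key rate is unlimited.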

Generalized turbo signal recovery for nonlinear measurements and orthogonal sensing matrices

Ting Liu (Southeast University, P.R. China); Chao-Kai Wen (National Sun Yat-sen University, Taiwan); Shi Jin (Southeast University, P.R. China); Xiaohu You (National Mobile communication Research Lab., Southeast University, P.R. China)

In this study, we propose a generalized turbo signal recovery algorithm to estimate a signal from quantized measurements, in which the sensing matrix is a row-orthogonal matrix, such as the partial discrete Fourier transform matrix. The state evolution of the proposed algorithm is derived and is shown to be consistent with that obtained with the replica method. Numerical experiments illustrate the excellent agreement of the proposed algorithm with theoretical state evolution.

On the Smooth Rényi Entropy and Variable-Length Source Coding Allowing Errors

Shigeaki Kuzuoka (Wakayama University, Japan)

In this paper, we consider the problem of variable-length source coding allowing errors. The exponential moment of the codeword length is analyzed in both the non-asymptotic and the asymptotic regime. Our results show that the smooth Rényi entropy characterizes the optimal exponential moment of the codeword length.

Simple Systematic Pearson Coding

Jos H. Weber (Delft University of Technology, The Netherlands); Theo G. Swart (University of Johannesburg, South Africa); Kees A. Schouhamer Immink (Turing Machines Inc., The Netherlands)

The recently proposed Pearson codes offer immunity against channel gain and offset mismatch. These codes have very low redundancy, but efficient coding procedures were lacking. In this paper, systematic Pearson coding schemes are presented. The redundancy of these schemes is analyzed for memoryless uniform sources. It is concluded that simple coding can be established at only a modest rate loss.

A Non-Asymptotic Achievable Rate for the AWGN Energy-Harvesting Channel using Save-and-Transmit

Silas L. Fong and Vincent Y. F. Tan (National University of Singapore, Singapore); Jing Yang (University of Arkansas, USA)

This paper investigates the information-theoretic limits of the additive white Gaussian noise (AWGN) energy-harvesting (EH) channel in the finite blocklength regime. The EH process is characterized by a sequence of i.i.d. random variables with finite variances. We use the save-and-transmit strategy proposed by Ozel and Ulukus (2012) together with Shannon’s non-asymptotic achievability bound to obtain a lower bound on the achievable rate for the AWGN EH channel. The first-order term of this lower bound equals the capacity $C$ of the EH channel, which is the same as the capacity without the EH constraints, and the second-order (backoff from capacity) term is proportional to $-\sqrt{\log n/n}$, where $n$ denotes the blocklength. The constant of proportionality of the backoff term is found and qualitative interpretations are provided.

Information Bottleneck Graphs for Receiver Design

Jan Lewandowsky, Maximilian Stark and Gerhard Bauch (Hamburg University of Technology, Germany)

A generic design method for low-complexity receivers is presented. The method combines factor graphs and the Information Bottleneck method in one framework; consequently, it is called Information Bottleneck Graphs. The main idea of Information Bottleneck Graphs is to optimize the flow of relevant information through the signal processors. In contrast to most current receivers with high-precision signal processing units, Information Bottleneck Graphs yield receivers that work purely on unsigned integers: all signal processing degenerates to lookup operations in tables of integers. As an example, Information Bottleneck Graphs are applied to develop a complete coherent receiver, including analog-to-digital conversion, channel estimation and decoding of Low Density Parity Check codes, that operates only on unsigned integers. This receiver uses recently introduced discrete decoders for Low Density Parity Check codes.

Bounds for Batch Codes with Restricted Query Size

Hui Zhang (Technion – Israel Institute of Technology, Israel); Vitaly Skachek (University of Tartu, Estonia)

We present new upper bounds on the parameters of batch codes with restricted query size. These bounds are an improvement on the Singleton bound. The techniques for derivations of these bounds are based on the ideas in the literature for codes with locality. By employing additional ideas, we obtain further improvements, which are specific for batch codes.

Quantum Resistant Random Linear Code Based Public Key Encryption Scheme RLCE

Yongge Wang (University of North Carolina at Charlotte, USA)

Lattice based encryption schemes and linear code based encryption schemes have received extensive attention in recent years, since they are considered post-quantum candidate encryption schemes. Though the LLL reduction algorithm has been one of the major cryptanalysis techniques for lattice based cryptographic systems, key recovery cryptanalysis techniques for linear code based cryptographic systems are generally scheme specific. In recent years, several important techniques such as the Sidelnikov-Shestakov attack, filtration attacks, and algebraic attacks have been developed to cryptanalyze linear code based encryption schemes. Though most of these cryptanalysis techniques are relatively new, they have proved to be very powerful and many systems have been broken using them. Thus it is important to design linear code based cryptographic systems that are immune against these attacks. This paper proposes a linear code based encryption scheme, RLCE, which shares many characteristics with random linear codes. Our analysis shows that the scheme RLCE is secure against existing attacks, and we hope that the security of the RLCE scheme is equivalent to the hardness of decoding random linear codes. Example parameters for different security levels are recommended for the scheme RLCE.

The Dispersion of Nearest-Neighbor Decoding for Additive Non-Gaussian Channels

Jonathan Scarlett (EPFL, Switzerland); Vincent Y. F. Tan (National University of Singapore, Singapore); Giuseppe Durisi (Chalmers University of Technology, Sweden)

We study the second-order asymptotics of information transmission using random Gaussian codebooks and nearest neighbor (NN) decoding over a power-limited additive stationary memoryless non-Gaussian channel. We show that the dispersion term depends on the non-Gaussian noise only through its second and fourth moments. We also characterize the second-order performance of point-to-point codes over Gaussian interference networks. Specifically, we assume that each user’s codebook is Gaussian and that NN decoding is employed, i.e., that interference from unintended users is treated as noise at each decoder.

Constructions of High-Rate Minimum Storage Regenerating Codes over Small Fields

Netanel Raviv and Natalia Silberstein (Technion, Israel); Tuvi Etzion (Technion-Israel Institute of Technology, Israel)

This paper presents a new construction of high-rate minimum storage regenerating codes. In addition to minimum storage in a node, these codes have the following two important properties: first, given storage $\ell$ in each node, the entire stored data can be recovered from any $2\log_2 \ell$ nodes (any $3\log_3\ell$ nodes) for 2 parity nodes (for 3 parity nodes, respectively); second, a helper node accesses the minimum number of its symbols for repair of a failed node (access-optimality). The goal of this paper is to provide a construction of such optimal codes over the smallest possible finite fields. The generator matrix of these codes is based on perfect matchings of complete graphs and hypergraphs, and on a rational canonical form of matrices. The field size required for our construction is significantly smaller than that of previously known codes.

Second-Order Coding Region for the Discrete Successive Refinement Source Coding Problem

Lin Zhou, Vincent Y. F. Tan and Mehul Motani (National University of Singapore, Singapore)

We derive the optimal second-order coding region for the discrete successive refinement source coding problem under the joint excess-distortion event. To do so, we define a generalization of the tilted information density and leverage its properties. In the achievability part, we make use of type covering lemmas by Kanlis and Narayan (1996) and by No, Ingber and Weissman (2015). In the converse proof, we make use of the perturbation approach by Gu and Effros (2009). We also specialize our results to successively refinable sources and provide an alternative converse proof for such sources by generalizing Kostina and Verdú’s (2012) one-shot converse bound for point-to-point lossy source coding.

Throughput Maximization in Uncooperative Spectrum Sharing Networks

Thomas Stahlbuhk (Massachusetts Institute of Technology & MIT Lincoln Laboratory, USA); Brooke Shrader (MIT Lincoln Laboratory, USA); Eytan Modiano (MIT, USA)

We consider an opportunistic communication system in which a secondary transmitter communicates over the unused time slots of a primary user. In particular, we consider a system in which the primary user is uncooperative and transmits whenever its buffer is nonempty, and the secondary user relies on feedback from its receiver in order to decide when to transmit. The objective of the secondary user is to maximize its own throughput without degrading the throughput of the primary user. We analyze the maximum achievable throughput of the secondary user by formulating the problem as a partially observable Markov decision process. We derive bounds on the optimal solution and find a channel access policy for the secondary user that is near-optimal when the primary user’s exogenous arrival rate is low. These results are then used to characterize the set of arrival rates to the primary and secondary users that may be stably supported by the system.

Metrics based on Finite Directed Graphs

Marcelo Firer (State University of Campinas – UNICAMP, Brazil); Tuvi Etzion (Technion-Israel Institute of Technology, Israel)

Given a finite directed graph with $n$ vertices, a metric on an $n$-dimensional vector space over a finite field is naturally defined: the weight of a word is the number of vertices lying on directed paths that start from a position where the word has a nonzero entry, and the distance between two words is the weight of their difference. Two canonical forms, which do not affect the metric, are given to each graph. Based on these canonical forms we characterize each such metric. We further use these forms to prove that two graphs with different canonical forms yield two different metrics. Efficient algorithms to check whether a given set of metric weights defines a metric based on a graph are given. We provide tight bounds on the number of metric weights required to reconstruct the whole metric. The isomorphism problem of two metrics based on graphs is also considered. Finally, we discuss the group of linear isometries of the graph metrics and the connection of this work to coding theory.
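The weight definition above can be read operationally as a reachability computation: the weight of a word is the number of vertices reachable via directed paths (including the trivial path) from its support. A minimal sketch of that reading, with names of our own choosing:

```python
from collections import deque

def graph_weight(word, adj):
    """Weight of `word` under the metric induced by a directed graph:
    the number of vertices lying on directed paths that start at a
    coordinate where `word` is nonzero, i.e. the reachable set of the
    support (the support itself included, as paths of length zero).
    `adj[v]` lists the out-neighbors of vertex v; coordinates are
    0-indexed. Illustrative code, not the authors' implementation."""
    support = [v for v, x in enumerate(word) if x != 0]
    seen = set(support)
    queue = deque(support)
    while queue:                      # breadth-first search from the support
        v = queue.popleft()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return len(seen)

def graph_distance(x, y, adj, q):
    """Distance between two words = weight of their difference mod q."""
    diff = [(a - b) % q for a, b in zip(x, y)]
    return graph_weight(diff, adj)
```

With no edges this reduces to the Hamming weight, a useful sanity check on the reading of the definition.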

Cell Associations that Maximize the Average Uplink-Downlink Degrees of Freedom

Aly El Gamal (Purdue University, USA)

We study the problem of associating mobile terminals to base stations in a linear interference network, with the goal of maximizing the average rate achieved over both the uplink and downlink sessions. The cell association decision is made at a centralized cloud level, with access to global network topology information. More specifically, given the constraint that each mobile terminal can be associated to a maximum of $N_c$ base stations at once, we characterize the maximum achievable pre-log factor (degrees of freedom) and the corresponding cell association pattern. Interestingly, the result indicates that for the case where $N_c \geq 2$, the optimal cell association guarantees the achievability of the maximum uplink rate even when optimizing for the uplink alone, and for the case where $N_c=1$, the optimal cell association is that of the downlink. Hence, this work draws attention to the question of characterizing network topologies for which the problem can be simplified by optimizing only for the uplink or only for the downlink.

Asymptotic Analysis of a Three State Quantum Cryptographic Protocol

Walter Krawec (Iona College, USA)

In this paper we consider a three-state variant of the BB84 quantum key distribution (QKD) protocol. We derive a new lower-bound on the key rate of this protocol in the asymptotic scenario and use mismatched measurement outcomes to improve the channel estimation. Our new key rate bound remains positive up to an error rate of $11\%$, exactly that achieved by the four-state BB84 protocol.

Energy-Distortion Tradeoff for the Gaussian Broadcast Channel with Feedback

Yonathan Murin (Stanford University, USA); Yonatan Kaspi (UCSD, USA); Ron Dabora (Ben-Gurion University, Israel); Deniz Gündüz (Imperial College London, United Kingdom)

This work focuses on the minimum transmission energy required for communicating a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel with noiseless and causal channel output feedback (GBCF). We study the fundamental limit on the required transmission energy for broadcasting a pair of source samples, such that each source can be reconstructed at its respective receiver to within a target distortion, when the source-channel bandwidth ratio is not restricted. We derive a lower bound and three distinct upper bounds on the minimum required energy. For the upper bounds we analyze three transmission schemes: Two schemes are based on separate source-channel coding, and apply coding over multiple samples of source pairs. The third scheme is based on joint source-channel coding obtained by extending the Ozarow-Leung (OL) transmission scheme, which applies uncoded linear transmission. Numerical simulations show that despite its simplicity, the energy-distortion tradeoff of the OL-based scheme is close to that of the better separation-based scheme, which indicates that the OL scheme is attractive for energy-efficient source transmission over GBCFs.

Minimum Pearson Distance Detection in the Presence of Unknown Slowly Varying Offset

Vitaly Skachek (University of Tartu, Estonia); Kees A. Schouhamer Immink (Turing Machines Inc., The Netherlands)

Minimum Pearson Distance (MPD) detection offers resilience against unknown channel gain and varying offset. MPD detection is used in conjunction with a set, $S$, of codewords having specific properties. We study the properties of the codewords of $S$, compute the size of $S$, and derive its redundancy for asymptotically large values of the codeword length $n$. The redundancy of $S$ is approximately $\frac{3}{2} \log_2 n + a$, where $a=\log_2 \sqrt{\pi/24} = -1.467\ldots$ for $n$ odd and $a=-0.467\ldots$ for $n$ even.
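The detection rule itself is simple to state: decode to the codeword of $S$ minimizing the Pearson distance $1 - r(x,y)$, where $r$ is the Pearson correlation coefficient. A minimal sketch (our own illustrative code, assuming codewords with nonconstant entries so that the correlation is defined):

```python
import math

def pearson_distance(x, y):
    """Pearson distance d(x, y) = 1 - r(x, y), with r the Pearson
    correlation coefficient. Assumes neither word is constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return 1.0 - cov / (sx * sy)

def mpd_detect(received, codebook):
    """Decode to the codeword at minimum Pearson distance."""
    return min(codebook, key=lambda c: pearson_distance(received, c))
```

Since $r$ is invariant under $y \mapsto ay + b$ for any gain $a > 0$ and offset $b$, the decision is unaffected by gain and offset mismatch, which is the immunity property referred to above.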

Universal recoverability in quantum information

Marius Junge (University of Illinois at Urbana-Champaign, USA); Renato Renner (ETH Zuerich, Switzerland); David Sutter (ETH Zurich, Switzerland); Mark M Wilde (Louisiana State University, USA); Andreas Winter (Universitat Autonoma de Barcelona & ICREA, Spain)

The quantum relative entropy is well known to obey a monotonicity property (i.e., it does not increase under the action of a quantum channel). Here we present several refinements of this entropy inequality, some of which have a physical interpretation in terms of recovery from the action of the channel. The recovery channel given here is explicit and universal, depending only on the channel and one of the arguments to the relative entropy.

Strengthening the Entropy Power Inequality

Thomas Courtade (University of California, Berkeley, USA)

We tighten the Entropy Power Inequality (EPI) when one of the random summands is Gaussian. Our strengthening is closely related to strong data processing for Gaussian channels and generalizes the (vector extension of) Costa’s EPI. This leads to a new reverse EPI and, as a corollary, sharpens Stam’s inequality relating entropy power and Fisher information. Applications to network information theory are given, including a short self-contained proof of the rate region for the two-encoder quadratic Gaussian source coding problem. The proof of our main result is based on weak convergence and a doubling argument for establishing Gaussian optimality via rotational invariance.

On the Capacity of a Class of Dual-Band Interference Channels

Subhajit Majhi and Patrick Mitran (University of Waterloo, Canada)

We consider a two-transmitter two-receiver dual-band Gaussian interference channel (GIC), motivated by future wireless networks in which the conventional microwave band is complemented by additional spectrum in the millimeter wave (mm-wave) band. A key modeling feature of the mm-wave band is that, due to severe path loss and relatively small wavelength, it must be used with highly directional antennas, and thus a transmitter can transmit to its intended receiver with negligible to no interference caused to other receivers. For this model, we derive sufficient conditions on the channel gains under which the capacity of this type of dual-band GIC is determined. Specifically, these conditions are classified according to whether the microwave-band channel gains exhibit (a) weak interference, i.e., both cross channel gains are less than 1, or (b) mixed interference, i.e., only one of the cross channel gains is less than 1, while the channel gains in the dual-band GIC satisfy certain additional conditions in each case.

Symmetry, Demand Types and Outer Bounds in Caching Systems

Chao Tian (The University of Tennessee Knoxville, USA)

The fundamental limit of coded caching is considered in this work. We start with an investigation of the symmetry structure of the caching problem, which is used to show the existence of optimal symmetric solutions, as well as to motivate the notion of demand types. By combining an analysis that exploits this symmetry with a computational approach developed earlier for the regenerating code problem, we obtain the following results on the memory-rate tradeoff: (1) a complete solution for any system with $K=2$ users; (2) a complete solution for the case with $N=2$ files and $K=3$ users; (3) the best outer bounds in the literature for the cases $(N,K)=(2,4)$ and $(N,K)=(3,3)$, which are tight in certain regimes. These results provide new insights into the problem, and also help to identify the main coding challenges for different system parameters.

Secure Degrees of Freedom of the Gaussian Diamond-Wiretap Channel

Si-Hyeon Lee, Wanyao Zhao and Ashish Khisti (University of Toronto, Canada)

In this paper, we consider the Gaussian diamond-wiretap channel that consists of an orthogonal broadcast channel from a source to two relays and a Gaussian fast-fading multiple access-wiretap channel from the two relays to a legitimate destination and an eavesdropper. For the multiple access part, we consider both the case with full channel state information (CSI) and the case with no eavesdropper’s CSI, at the relays and the legitimate destination. For both the cases, we establish the exact secure degrees of freedom and generalize the results for multiple relays. For the converse part, we introduce a new technique of capturing the trade-off between the message rate and the amount of individual randomness injected at each relay. In the achievability part, we show (i) how to strike a balance between sending message symbols and common noise symbols from the source to the relays in the broadcast component and (ii) how to combine artificial noise-beamforming and noise-alignment techniques at the relays in the multiple access component.

Streaming Data Transmission in the Moderate Deviations and Central Limit Regimes

Si-Hyeon Lee (University of Toronto, Canada); Vincent Y. F. Tan (National University of Singapore, Singapore); Ashish Khisti (University of Toronto, Canada)

We consider streaming data transmission over a discrete memoryless channel. A new message is given to the encoder at the beginning of each block and the decoder decodes each message sequentially, after a delay of $T$ blocks. In this streaming setup, we study the fundamental interplay between the rate and error probability in the central limit and moderate deviations regimes and show the following achievability results: i) in the moderate deviations regime, the moderate deviations constant improves over the block coding or non-streaming setup at least by a factor of $T$ and ii) in the central limit regime, the second-order coding rate improves at least by a factor of approximately $\sqrt{T}$ for a wide range of channel parameters. For both the regimes, we propose coding techniques that incorporate a joint encoding of fresh and previous messages. Furthermore, we show that the exponent of the error probability can be improved tremendously by allowing variable decoding delay in the moderate deviations regime.

The Boltzmann Sequence-Structure Channel

Abram Magner (UIUC, USA); Daisuke Kihara and Wojciech Szpankowski (Purdue University, USA)

We rigorously study a channel that maps binary sequences to self-avoiding walks in the two-dimensional grid, inspired by a model of protein statistics. This channel, which we also call the Boltzmann sequence-structure channel, is characterized by a Boltzmann/Gibbs distribution with a free parameter corresponding to temperature. In our previous work, we verified experimentally that the channel capacity appears to have a phase transition for small temperature and decays to zero for high temperature. In this paper, we make some progress towards explaining these phenomena. We first estimate the conditional entropy of the input sequence given the output, giving an upper bound which exhibits a phase transition with respect to temperature. Then we derive a lower bound on the conditional entropy for a specific set of parameters. This lower bound allows us to conclude that the mutual information tends to zero at high temperature. Finally, we construct an instance of the model for which there is no phase transition.

Vector Network Coding Based on Subspace Codes Outperforms Scalar Linear Network Coding

Tuvi Etzion (Technion – Israel Institute of Technology, Israel); Antonia Wachter-Zeh (Technion – Israel Institute of Technology, Israel)

This paper considers vector network coding based on rank-metric codes and subspace codes. Our main result is that vector network coding can significantly reduce the required field size compared to scalar linear network coding in the same multicast network. The achieved gap between the field sizes of scalar and vector network coding is $q^{(h-2)t^2/h + o(t)}$ for any $q \geq 2$ and any even $h \geq 4$, where $t$ denotes the dimension of the vector solution and $h$ the number of messages. If $h \geq 5$ is odd, then the achieved gap of the field size between the scalar network coding solution and the vector network coding solution is $q^{(h-3)t^2/(h-1) + o(t)}$. Previously, only a gap of constant size had been shown. This also implies the same gap between the field sizes of linear and non-linear scalar network coding for multicast networks. The results are obtained by considering several multicast networks which are variations of the well-known combination network.

On the Construction of Jointly Superregular Lower Triangular Toeplitz Matrices

Jonas Hansen and Jan Østergaard (Aalborg University, Denmark); Johnny Kudahl and John Madsen (Bang & Olufsen A/S, Denmark)

Superregular matrices have the property that all of their submatrices that can be full rank are indeed full rank. Lower triangular superregular matrices are useful for, e.g., maximum distance separable convolutional codes as well as for on-the-fly network codes. In this work, we provide an explicit design for superregular lower triangular Toeplitz matrices in GF(2^p) for the case of matrices with dimensions less than or equal to 5 x 5. For higher-dimensional matrices, we present a greedy algorithm that finds a solution provided the field size is sufficiently high. Finally, we introduce the notion of jointly superregular matrices, and extend our explicit constructions of lower triangular Toeplitz matrices to jointly superregular matrices. It is shown that jointly superregular matrices are necessary to achieve optimal decoding capabilities for codes with a rate lower than 1/2.
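The superregularity property can be checked exhaustively for tiny matrices. The sketch below works over a prime field GF(p) for simplicity (the paper's constructions are over GF(2^p)); for a lower triangular matrix, a submatrix with rows $i_1<\dots<i_k$ and columns $j_1<\dots<j_k$ can be full rank iff $i_l \geq j_l$ for all $l$, and superregularity requires every such minor to be nonzero.

```python
from itertools import combinations

def det_mod(M, p):
    # Determinant mod p via Laplace expansion; fine for tiny matrices.
    n = len(M)
    if n == 1:
        return M[0][0] % p
    d = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        d += (-1) ** j * M[0][j] * det_mod(minor, p)
    return d % p

def is_superregular_lt_toeplitz(col, p):
    """Brute-force superregularity check for the lower triangular Toeplitz
    matrix with first column `col`, over the prime field GF(p)."""
    n = len(col)
    T = [[col[i - j] % p if i >= j else 0 for j in range(n)] for i in range(n)]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                # Only minors that are not trivially zero must be nonzero.
                if all(r >= c for r, c in zip(rows, cols)):
                    sub = [[T[r][c] for c in cols] for r in rows]
                    if det_mod(sub, p) == 0:
                        return False
    return True
```

For example, the first column (1, 1, 2) yields a superregular 3 x 3 matrix over GF(5), while no choice works over GF(2) at that size, which is consistent with the need for larger fields.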

Subblock Energy-Constrained Codes for Simultaneous Energy and Information Transfer

Anshoo Tandon and Mehul Motani (National University of Singapore, Singapore); Lav R. Varshney (University of Illinois at Urbana-Champaign, USA)

Consider an energy-harvesting receiver that uses the same received signal both for decoding information and for harvesting energy, which is employed to power its circuitry. In the scenario where the receiver has limited battery size, a signal with bursty energy content may cause power outage at the receiver since the battery will drain during intervals with low signal energy. The energy content in the signal may be regularized by partitioning each codeword into smaller subblocks and requiring that sufficient energy is carried in every subblock duration. In this paper, we study subblock energy-constrained codes (SECCs) which, by definition, are codes satisfying the subblock energy constraint. For SECCs, we provide a sufficient condition on the subblock length to avoid power outage at the receiver. We consider discrete memoryless channels and characterize the SECC capacity, and also provide different bounds on the SECC capacity. Further, we characterize and bound the random coding error exponent for SECCs.

Lossless Compression of Binary Trees with Correlated Vertex Names

Abram Magner (UIUC, USA); Krzysztof Turowski (Gdansk University of Technology, Poland); Wojciech Szpankowski (Purdue University, USA)

Compression schemes for advanced data structures have become the challenge of today. Information theory has traditionally dealt with conventional data such as text, image, or video. In contrast, most data available today is multitype and context-dependent. To meet this challenge, we have recently initiated a systematic study of advanced data structures such as unlabeled graphs. In this paper, we continue this program by considering trees with statistically correlated vertex names. Trees come in many forms, but here we deal with binary plane trees (where order of subtrees matters) and their non-plane version. Furthermore, we assume that each symbol of a vertex name depends in a Markovian sense on the corresponding symbol of the parent vertex name. We first evaluate the entropy for both types of trees. Then we propose two compression schemes CompressPTree for plane trees with correlated names, and CompressNPTree for non-plane trees. We prove that the former scheme achieves the lower bound within two bits, while the latter is within 1% of the optimal compression.

Guaranteed Error Correction of Faulty Bit-Flipping Decoders under Data-Dependent Gate Failures

Srdan Brkic (University of Belgrade, Serbia); Predrag N. Ivanis (School of Electrical Engineering, University of Belgrade, Serbia); Bane Vasić (University of Arizona, USA)

In this paper we analyze the effect of hardware unreliability on the performance of bit-flipping decoders of low-density parity-check (LDPC) codes. We apply expander arguments to show that the simple parallel bit-flipping decoder, built partially from faulty gates, can correct a linear fraction of worst-case channel errors when gate failures are correlated and dependent on the switching activity of logic gates. In addition, we provide a lower bound on the guaranteed error correction of LDPC codes with left degree of at least eight.

Feedback Does Not Increase the Capacity of Compound Channels with Additive Noise

Sergey Loyka (University of Ottawa, Canada); Charalambos D Charalambous (University of Cyprus, Cyprus)

A discrete compound channel with memory is considered, where no stationarity, ergodicity or information stability is required, and where the uncertainty set can be arbitrary. When the discrete noise is additive but otherwise arbitrary and there is no cost constraint on the input, it is shown that causal feedback does not increase the capacity. This extends the earlier result obtained for general channels with full transmitter (Tx) channel state information (CSI). It is further shown that, for this compound setting and under a mild technical condition on the additive noise, the addition of full Tx CSI does not increase the capacity either, so that the worst-case and compound channel capacities are the same, thus revealing a saddle-point property.

An Outer Bound on the Storage-Bandwidth Tradeoff of Exact-Repair Cooperative Regenerating Codes

Hyuk Lee and Jungwoo Lee (Seoul National University, Korea)

$(n,k,d,r)$-cooperative regenerating codes are a class of erasure codes in which $r$ failed nodes can be repaired cooperatively with the help of arbitrary $d$ surviving nodes. In this paper, we consider exact-repair cooperative regenerating codes whose parameters satisfy $k=d=n-r$. There exists a tradeoff between the storage capacity of each node $\alpha$ and the repair bandwidth $\gamma$ in regenerating codes, but the optimal storage-bandwidth tradeoff of exact-repair cooperative regenerating codes has not been fully specified. We propose an outer bound on the storage-bandwidth tradeoff for the case $k=d=n-r$; this result can be regarded as a generalization of the outer bound proposed by Prakash et al. that specifies the optimal tradeoff of exact-repair regenerating codes for the case $k=d=n-1$. In the cases where $n$ is large and $r$ is small, the proposed outer bound is tighter than the cut-set bound of the functional-repair model: it exhibits a region in the $\alpha$-$\gamma$ plane that no exact-repair code can achieve, but functional-repair codes can.

Sum Capacity of Massive MIMO Systems with Quantized Hybrid Beamforming

An Liu and Vincent Lau (Hong Kong University of Science and Technology, Hong Kong)

Recently, hybrid beamforming, which consists of an analog RF precoder and a digital baseband precoder, has been proposed for massive MIMO systems to reduce the number of RF chains and power consumption at the base station (BS). This paper studies the impact of channel state information (CSI) on the sum capacity of massive MIMO systems with quantized hybrid beamforming where the RF precoder is selected from a finite size codebook. Two types of CSI at the BS (CSIT) are assumed: full instantaneous CSIT (full channel matrix between the BS and users) and hybrid CSIT (channel statistics plus the low dimensional effective channel matrix after RF precoding). We derive asymptotic sum capacity expressions under these two types of CSIT. We find that, in most cases, exploiting the full instantaneous CSIT can only achieve a marginal SNR gain and hybrid CSIT is sufficient to achieve the first order gain provided by massive MIMO.

Bounds and Constructions of Codes with Multiple Localities

Alexander Zeh and Eitan Yaakobi (Technion, Israel)

This paper studies bounds and constructions of locally repairable codes (LRCs) with multiple localities, so-called multiple-locality LRCs (ML-LRCs). In the simplest case of two localities, some code symbols of an ML-LRC have a certain locality while the remaining code symbols have another. We extend two bounds, the Singleton bound and the alphabet-dependent Cadambe-Mazumdar upper bound on the dimension of LRCs, to ML-LRCs with more than two localities. Furthermore, we construct Singleton-optimal ML-LRCs.

Graph-Based Lossless Markov Lumpings

Bernhard C. Geiger (Technical University of Munich, Germany); Christoph Hofer-Temmel (NLDA, The Netherlands)

We use results from zero-error information theory to determine the set of non-injective functions through which a Markov chain can be projected without losing information. These lumping functions can be found by clique partitioning of a graph related to the Markov chain. Lossless lumping is made possible by exploiting the (sufficiently sparse) temporal structure of the Markov chain. Eliminating edges in the transition graph of the Markov chain trades the required output alphabet size versus information loss, for which we present bounds.

Long Cyclic Codes over GF(4) and GF(8) Better Than BCH Codes in the High-Rate Region

Ron M. Roth and Alexander Zeh (Technion, Israel)

An explicit construction of an infinite family of cyclic codes is presented which, over GF(4) (resp., GF(8)), have approximately 8/9 (resp., 48/49) the redundancy of BCH codes of the same minimum distance and length. As such, the new codes are the best codes currently known in a regime where the minimum distance is fixed and the code length goes to infinity.

On Spectral Design Methods for Quasi-Cyclic Codes

Ron M. Roth and Alexander Zeh (Technion, Israel)

A method is provided for constructing upper-triangular square matrices over the univariate polynomial ring over a finite field, under certain constraints on the eigenvalues of the matrices. In some cases of interest, the degree of the determinant of such matrices is shown to be the smallest possible. The method is then applied to construct generator polynomial matrices of quasi-cyclic codes with a prescribed designed minimum distance.

A Unified Inner Bound for the Two-Receiver Memoryless Broadcast Channel with Channel State and Message Side Information

Behzad Asadi and Lawrence Ong (The University of Newcastle, Australia); Sarah J Johnson (University of Newcastle, Australia)

We consider the two-receiver memoryless broadcast channel with states where each receiver requests both common and private messages, and may know part of the private message requested by the other receiver as receiver message side information (RMSI). We address two categories of the channel: (i) channels with states known causally to the transmitter, and (ii) channels with states known non-causally to the transmitter. Starting with the channel without RMSI, we first propose a transmission scheme and derive an inner bound for the causal category. We then unify our inner bound for the causal category and the best-known inner bound for the non-causal category, although their transmission schemes are different. Moving on to the channel with RMSI, we first apply a pre-coding to the transmission schemes of the causal and non-causal categories without RMSI. We then derive a unified inner bound as a result of having a unified inner bound when there is no RMSI, and applying the same pre-coding to both categories. We show that our inner bound is tight for some new cases as well as the cases whose capacity region was known previously.

Data Driven Quickest Change Detection: An Algorithmic Complexity Approach

Husheng Li (University of Tennessee, USA)

Traditional quickest change detection, which detects distribution changes in random processes, requires full or partial knowledge of the pre-change or post-change distributions of samples. In practice, this prior information may be unavailable, which prohibits the direct application of existing algorithms such as the cumulative sum (CUSUM) algorithm. In this paper, data-driven quickest detection is studied under the sole assumption of stationary ergodic (not necessarily i.i.d. or Markovian) processes. To fully exploit existing algorithms within the probabilistic framework, the theory of algorithmic complexity (a.k.a. Kolmogorov complexity) is applied to bridge the observed samples and the unknown probability distribution. In particular, data compression algorithms such as the Lempel-Ziv algorithm are used to measure unconditional and conditional algorithmic complexities. Numerical simulations are carried out to demonstrate the validity of the proposed algorithms.
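The compression-based idea can be illustrated with a toy sketch. Here zlib's DEFLATE (an LZ77 variant) stands in for the Lempel-Ziv complexity measure, and the conditional complexity of a window given a reference segment is approximated by the extra compressed length; all distributions, window sizes, and thresholds below are illustrative choices, not the paper's algorithm.

```python
import random
import zlib

def c(data: bytes) -> int:
    """Compressed length as a crude proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def cond_complexity(window: bytes, ref: bytes) -> int:
    """C(window | ref) proxy: extra bytes needed once ref is already coded."""
    return c(ref + window) - c(ref)

random.seed(0)
# Pre-change: rare 1s (highly compressible); post-change: fair coin.
pre = bytes(random.random() < 0.05 for _ in range(2000))
post = bytes(random.random() < 0.5 for _ in range(2000))
stream = pre + post

ref = stream[:500]  # assumed change-free reference segment
W = 250
scores = [cond_complexity(stream[t:t + W], ref)
          for t in range(500, len(stream) - W, W)]
```

Windows drawn after the change are much harder to compress given the pre-change reference, so the score sequence jumps at the change point without any knowledge of the underlying distributions.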

The Capacity of Gaussian MISO Channels Under Total and Per-Antenna Power Constraints

Sergey Loyka (University of Ottawa, Canada)

The capacity of a fixed Gaussian MIMO channel and the optimal transmission strategy under the total power (TP) constraint and full channel state information are well-known. This problem remains open in the general case under individual per-antenna (PA) power constraints, while some special cases have been solved. These include a full-rank solution for the MIMO channel and a general solution for the MISO channel. In this paper, the Gaussian MISO channel is considered and its capacity as well as optimal transmission strategies are determined in closed form under the joint total and per-antenna power constraints in the general case. In particular, the optimal strategy is hybrid and includes two parts: the first is equal-gain transmission and the second is maximum-ratio transmission, which are responsible for the PA and TP constraints, respectively. The optimal beamforming vector is given in closed form, and an accurate yet simple approximation to the capacity is proposed.

Fundamental limits on source-localization accuracy of EEG-based neural sensing

Pulkit Grover (Carnegie Mellon University, USA)

In this paper, we obtain information-theoretic fundamental limits on attainable source-localization accuracy in Electroencephalography (EEG) recordings of the brain. To develop a systematic approach, we borrow idealized models of the human head from the neuroscience literature and analyze the brain-activity to scalp “channel,” where brain activity is viewed as the input, and the recordings on the brain-surface as the output of the channel. An evaluation of the distortion-rate function at this channel’s capacity is used to obtain outer (lower) bounds on the attainable mean-squared reconstruction error for localizing a single dipole. These bounds cannot be surpassed using \textit{any} sensing algorithm and hold in the limit of an infinite number of sensors. While these limits are obtained under simplistic assumptions, they are the first limits for the problem that hold for all estimation algorithms, and need to be extended to more sophisticated models for obtaining a better understanding of optimal neural interfaces and algorithms. Finally, we also provide an upper bound on the Shannon capacity of EEG-based brain-computer interfaces.

Joint Source-Channel Coding with One-Bit ADC Front End

Morteza Varasteh (Imperial College, United Kingdom); Osvaldo Simeone (New Jersey Institute of Technology, USA); Deniz Gündüz (Imperial College London, United Kingdom)

This paper considers the zero-delay transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel with a one-bit analog-to-digital converter (ADC) front end. The optimization of the encoder and decoder is tackled under both the mean squared error (MSE) distortion and the outage distortion criteria with an average power constraint. For MSE distortion, the optimal transceiver is identified over the space of symmetric encoders. This result demonstrates that the linear encoder, which is optimal with a full-precision front end, approaches optimality only in the low signal-to-noise ratio (SNR) regime; while, digital transmission is optimal in the high SNR regime. For the outage distortion criterion, the structure of the optimal encoder and decoder are obtained. In particular, it is shown that the encoder mapping is piecewise constant and can take only two opposite values when it is non-zero.

Two-Stage Compressed Sensing for Millimeter Wave Channel Estimation

Yonghee Han and Jungwoo Lee (Seoul National University, Korea)

In millimeter wave (mmWave) communication systems, large antenna arrays are deployed to compensate for high path loss. While a large array provides high beamforming gain, it also poses a challenge for channel estimation. Since mmWave channels are likely to be sparse in the angular domain, channel estimation can be converted into a sparse recovery problem, and compressed sensing (CS) can be leveraged. However, conventional non-adaptive CS algorithms show poor recovery performance at low signal-to-noise ratio (SNR), which is common before beamforming in mmWave channels. Although recently developed adaptive CS schemes perform better in the low SNR regime, their excessive feedback requirements hinder practical usage. In this paper, we propose a two-stage CS scheme that requires only one-time feedback and is robust to noise, which can be understood as a compromise between the two approaches. Sufficient conditions for support recovery with the proposed scheme are characterized, and the effectiveness of the proposed scheme is shown numerically.
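The sparse-recovery formulation can be sketched with a generic greedy algorithm. Below, orthogonal matching pursuit (a standard CS baseline, not the paper's two-stage scheme) recovers a sparse angular-domain channel from noiseless linear measurements; the grid size, measurement count, and Gaussian sensing matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64   # angular grid size (assumed)
M = 20   # number of measurements
K = 3    # channel sparsity

A = rng.standard_normal((M, N)) / np.sqrt(M)  # sensing matrix
support = rng.choice(N, K, replace=False)
x = np.zeros(N)
x[support] = rng.standard_normal(K) + 2.0     # sparse angular-domain channel
y = A @ x                                      # noiseless for illustration

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, S = y.copy(), []
    for _ in range(K):
        S.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        residual = y - A[:, S] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[S] = coef
    return x_hat, set(S)

x_hat, S_hat = omp(A, y, K)
```

In the noiseless, well-conditioned regime the residual is driven to essentially zero once the true support is found; the paper's point is precisely that such non-adaptive recovery degrades at the low pre-beamforming SNRs typical of mmWave.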

Connectivity in inhomogeneous random key graphs

Osman Yağan (Carnegie Mellon University & CyLab, USA)

We consider a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the $n$ sensors in the network is classified into $r$ classes according to a probability distribution $\boldsymbol{\mu}=\{\mu_1,\ldots,\mu_r\}$. Before deployment, a class-$i$ sensor is assigned $K_i$ cryptographic keys that are selected uniformly at random from a pool of $P$ keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. The communication topology of this network is modeled by an inhomogeneous random key graph. We establish scaling conditions on the parameters $P$ and $\{K_1,\ldots, K_r\}$ so that this graph is connected with high probability. The result is given in the form of a zero-one law with the number of sensors $n$ growing unboundedly large. Our result is shown to complement and improve those given by Godehardt et al. and Zhao et al. for the same model, therein referred to as the {\em general random intersection graph}.
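The model is easy to simulate. The sketch below samples an inhomogeneous random key graph (classes from $\mu$, key rings of size $K_i$ drawn without replacement from a pool of $P$ keys, edges where rings intersect) and checks connectivity; all numerical parameters in the usage are illustrative, far from the asymptotic scaling regime of the zero-one law.

```python
import random

def secure_graph(n, mu, K, P, rng):
    """Sample an inhomogeneous random key graph: sensor classes drawn from mu,
    class-i sensors get K[i] keys from a pool of size P; an edge exists iff
    two sensors share at least one key."""
    classes = rng.choices(range(len(mu)), weights=mu, k=n)
    rings = [set(rng.sample(range(P), K[c])) for c in classes]
    return [[bool(rings[i] & rings[j]) for j in range(n)] for i in range(n)]

def is_connected(adj):
    """Depth-first search from node 0; connected iff every node is reached."""
    n = len(adj)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if adj[u][v] and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

rng = random.Random(0)
adj = secure_graph(30, [0.5, 0.5], [15, 25], 50, rng)
```

With key rings this large relative to the pool, almost every pair shares a key and the graph is connected; shrinking the $K_i$ or growing $P$ pushes the graph below the connectivity threshold, which is the regime the zero-one law characterizes.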

On (Partial) Unit Memory Codes based on Reed-Solomon Codes for Streaming

Margreta Kuijper (University of Melbourne, Australia); Martin Bossert (Ulm University, Germany)

For streaming codes an erasure channel is assumed and the decoding delay is one of the main parameters to be considered. In this paper the erasure correcting capability of unit memory convolutional codes based on disjoint RS codes is analyzed. We take a sliding window decoder approach, where only the most current information is decoded before sliding the window one time-step further. We show that when we restrict the decoding delay to a small value, these codes still achieve an excellent erasure correction performance. This makes these codes useful for streaming applications where low latency is required.

Outage-Optimized Distributed Quantizers for Multicast Beamforming

Erdem Koyuncu (University of California, Irvine, USA); Christian Remling (University of Oklahoma, USA); Xiaoyi Liu and Hamid Jafarkhani (University of California, Irvine, USA)

We consider a slow-fading multicast channel with one $T$-antenna transmitter and $K$ single-antenna receivers with the goal of minimizing channel outage probability using quantized beamforming. Our focus is on a distributed limited feedback scenario where each receiver can only quantize and send feedback information regarding its own receiving channels.
A classical result in point-to-point quantized beamforming is that a necessary and sufficient condition for full diversity is to have $\lceil \log_2 T \rceil$ bits from the receiver. We first generalize this result to multicast beamforming systems and show that a necessary and sufficient condition to achieve full diversity for all receivers is to have $\lceil \log_2 T \rceil$ bits from {\it each} receiver. Also, for a two-receiver system and with $R$ feedback bits per receiver, we show that the outage performance with quantized beamforming is within $O(2^{-\frac{R}{32T^2}})$ dB of the performance with full channel state information at the transmitter (CSIT). This constitutes, in the context of multicast channels, the first example of a distributed limited feedback scheme whose performance can provably approach the performance with full CSIT.
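The $\lceil \log_2 T \rceil$ feedback requirement and codeword selection from a finite codebook can be sketched concretely. The max-min selection rule below is an illustrative centralized stand-in, not the paper's distributed quantizer, and the random Gaussian codebook is an assumption.

```python
import math
import numpy as np

def min_feedback_bits(T):
    # Bits per receiver that are necessary and sufficient for full diversity
    # with T transmit antennas.
    return math.ceil(math.log2(T))

rng = np.random.default_rng(0)
T, K = 4, 2
B = min_feedback_bits(T)                       # -> codebook of size 2^B
codebook = rng.standard_normal((2**B, T)) + 1j * rng.standard_normal((2**B, T))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit-norm beams

H = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
gains = np.abs(H @ codebook.conj().T) ** 2      # |h_k^H w|^2 per codeword
best = int(np.argmax(gains.min(axis=0)))        # max-min over the K receivers
```

For multicast, the relevant figure of merit is the worst receiver's gain, which is why the selection maximizes the minimum over $k$.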

When is Noisy State Information at the Encoder as Useless as No Information or as Good as Noise-Free State?

Rui Xu and Jun Chen (McMaster University, Canada); Tsachy Weissman (Stanford University, USA); Jian-Kang Zhang (McMaster University, Canada)

For any binary-input channel with perfect state information at the decoder, if the mutual information between the noisy state observation at the encoder and the true channel state is below a positive threshold determined solely by the state distribution, then the capacity is the same as that with no encoder side information. A complementary phenomenon is revealed for a similarly defined quantity.

Joint Optimization of Cloud and Edge Processing for Fog Radio Access Networks

Seok-Hwan Park (Chonbuk National University, Korea); Osvaldo Simeone (New Jersey Institute of Technology, USA); Shlomo (Shitz) Shamai (The Technion, Israel)

This work studies the joint design of cloud and edge processing for the downlink of a fog radio access network (F-RAN). In an F-RAN, cloud processing is carried out by a baseband processing unit (BBU) that is connected to enhanced remote radio heads (eRRHs) by means of fronthaul links. Edge processing is instead enabled by local caching of popular content at the eRRHs. Focusing on the design of the delivery phase for an arbitrary pre-fetching strategy, a novel superposition coding approach is proposed that is based on the hybrid use of the fronthaul links in both hard-transfer and soft-transfer modes. With the former, non-cached files are communicated over the fronthaul links to a subset of eRRHs, while, with the latter, the fronthaul links are used to convey quantized baseband signals as in a cloud RAN (C-RAN). The problem of maximizing the delivery rate is tackled under fronthaul capacity and per-eRRH power constraints. Numerical results are provided to validate the performance of the proposed hybrid delivery scheme for different baseline pre-fetching strategies.

Online Scheduling for Energy Harvesting Broadcast Channels with Finite Battery

Abdulrahman Baknina (University of Maryland, College Park, USA); Sennur Ulukus (University of Maryland, USA)

We consider online transmission scheduling for an energy harvesting broadcast channel with a finite-sized battery. The energy harvests are independent and identically distributed (i.i.d.) in time, and the transmitter gets to know them only causally as they happen. We first consider the case of Bernoulli energy arrivals, and determine the optimum online strategy that allocates power over time and between users optimally. We then consider the case of general i.i.d. energy arrivals, and propose a sub-optimum strategy coined fractional power constant cut-off (FPCC) policy. We develop a lower bound for the performance of the proposed FPCC policy, and a universal upper bound for the capacity region of the energy harvesting broadcast channel. We show that the proposed FPCC policy is near-optimal in that it yields rates that are within a constant gap from the optimum online policy, for all system parameters.

Inter-Class vs. Mutual Information as Side-Channel Distinguishers

Olivier Rioul (Telecom ParisTech & Ecole Polytechnique, France); Annelie Heuser (Telecom ParisTech, France); Sylvain Guilley and Jean-Luc Danger (Telecom ParisTech & Secure IC, France)

A novel “interclass information” side-channel distinguisher is compared to mutual information analysis. Interclass information possesses properties similar to mutual information but uses a different comparing strategy between the underlying conditional distributions. It is shown that interclass information can outperform mutual information in side-channel analysis, especially under low noise. The theoretical comparison is confirmed by simulations.

The Capacity of Discrete-Time Gaussian MIMO Channels with Periodic Characteristics

Nir Shlezinger (Ben Gurion University, Israel); Ron Dabora (Ben-Gurion University, Israel)

In many communications scenarios the channel exhibits periodic characteristics. Periodicity may be expressed as a periodically time-varying channel transfer function as well as an additive noise with periodically time-varying statistics. Examples of such scenarios include interference-limited communications, both wireless and wireline, and also power line communications (PLC). In this work, we characterize the capacity of discrete-time, finite-memory Gaussian multiple-input multiple-output (MIMO) channels with periodic characteristics. The derivation transforms the periodic MIMO channel into an extended time-invariant MIMO channel, for which we obtain a closed-form capacity expression. It is shown that capacity can be achieved by an appropriate waterfilling scheme. The capacity expression obtained is numerically evaluated for practical PLC scenarios and compared to the achievable rate of an ad-hoc scheme based on orthogonal frequency division multiplexing, and the gains from optimally handling the periodicity of the channel are quantified.
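Once the periodic channel is transformed into an equivalent set of parallel Gaussian subchannels, the capacity-achieving waterfilling allocation is standard. The sketch below solves for the water level by bisection; the specific gains and power budget are illustrative.

```python
import numpy as np

def waterfill(gains, P):
    """Water-filling power allocation over parallel Gaussian subchannels with
    power gains `gains` and total power P: p_i = max(mu - 1/g_i, 0), with the
    water level mu found by bisection so that sum(p_i) = P."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, P + 1.0 / gains.min()
    for _ in range(100):
        mu = (lo + hi) / 2
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > P:
            hi = mu
        else:
            lo = mu
    return p

gains = [2.0, 1.0, 0.25]
p = waterfill(gains, 3.0)
capacity = 0.5 * np.sum(np.log2(1 + np.asarray(gains) * p))
```

With these gains the weakest subchannel falls below the water level and receives no power, while the two stronger ones share the budget (p is approximately (1.75, 1.25, 0)).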

Operational Interpretation of Rényi Conditional Mutual Information via Composite Hypothesis Testing Against Markov Distributions

Marco Tomamichel (The University of Sydney, Australia); Masahito Hayashi (Nagoya University, Japan)

We revisit the problem of asymmetric binary hypothesis testing against a composite alternative hypothesis. We introduce a general framework to treat such problems when the alternative hypothesis adheres to certain axioms. In this case we find the threshold rate, the optimal error and strong converse exponents (at large deviations from the threshold) and the second order asymptotics (at small deviations from the threshold). We apply our results to find operational interpretations of Rényi information measures. In particular, in case the alternative hypothesis consists of tripartite distributions satisfying the Markov property, we find that the optimal exponents are determined by the Rényi conditional mutual information.

Exploiting Variational Formulas for Quantum Relative Entropy

Mario Berta (California Institute of Technology, USA); Omar Fawzi (ENS de Lyon, France); Marco Tomamichel (The University of Sydney, Australia)

The relative entropy is the basic concept underlying various information measures like entropy, conditional entropy and mutual information. Here, we discuss how to make use of variational formulas for measured relative entropy and quantum relative entropy for understanding the additivity properties of various entropic quantities that appear in quantum information theory. In particular, we show that certain lower bounds on quantum conditional mutual information are superadditive.

Real Interference Alignment for Vector Channels

Pritam Mukherjee and Sennur Ulukus (University of Maryland, USA)

We present a real interference alignment technique for multiple-input multiple-output (MIMO) networks. This technique is based on a theorem due to Dirichlet and Khintchine for simultaneous Diophantine approximation and uses the outputs of all the antennas at the receiver simultaneously for decoding, instead of using them on an antenna-by-antenna basis. This allows us to forgo asymptotic real interference alignment for several multi-user scenarios such as the two-user MIMO interference channel with confidential messages and the two-user MIMO multiple access wiretap channel.

High Probability Guarantees in Repeated Games: Theory and Applications in Information Theory

Payam Delgosha (University of California, Berkeley, USA); Amin Gohari (Sharif University of Technology, Iran); Mohammad Akbarpour (Research Fellow, USA)

We introduce a “high probability” framework for repeated games with incomplete information. In our non-equilibrium setting, players aim to guarantee a certain payoff with high probability, rather than in expected value. We provide a high probability counterpart of the classical result of Mertens and Zamir for zero-sum repeated games. Any payoff that can be guaranteed with high probability can be guaranteed in expectation, but the reverse is not true. Hence, unlike the average-payoff case, where the payoff guaranteed by each player is the negative of the payoff guaranteed by the other player, the two guaranteed payoffs differ in the high probability framework. One motivation for this framework comes from information transmission systems, where it is customary to formulate problems in terms of asymptotically vanishing probability of error. An application of our results to compound arbitrarily varying channels is given.

Partial Recovery Bounds for the Sparse Stochastic Block Model

Jonathan Scarlett (EPFL, Switzerland); Volkan Cevher (Ecole Polytechnique Federale de Lausanne, Switzerland)

In this paper, we study the information-theoretic limits of community detection in the symmetric two-community stochastic block model, with intra-community and inter-community edge probabilities $\frac{a}{n}$ and $\frac{b}{n}$ respectively. We consider the sparse setting, in which $a$ and $b$ do not scale with $n$, and provide upper and lower bounds on the proportion of community labels recovered on average. These bounds are seen to be near-matching for moderate values of $a$ and $b$, and matching in the limit as $a-b$ grows large.
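The sparse symmetric SBM is straightforward to sample. The sketch below generates a graph with intra- and inter-community edge probabilities $a/n$ and $b/n$; since $a$ and $b$ do not scale with $n$, average degree stays bounded and only partial label recovery is possible, which is the regime the bounds address. Parameter values are illustrative.

```python
import random

def sbm(n, a, b, rng):
    """Symmetric two-community stochastic block model: first half labeled +1,
    second half -1; edge probability a/n within a community, b/n across."""
    label = [1] * (n // 2) + [-1] * (n - n // 2)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = (a if label[i] == label[j] else b) / n
            if rng.random() < p:
                edges.add((i, j))
    return label, edges

label, edges = sbm(2000, 10.0, 2.0, random.Random(7))
```

With $n = 2000$, $a = 10$, $b = 2$, the expected number of edges is roughly $n(a + b)/4 \approx 6000$, i.e., constant average degree, in contrast to the logarithmic-degree regime where exact recovery is possible.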

Converse Bounds for Noisy Group Testing with Arbitrary Measurement Matrices

Jonathan Scarlett (EPFL, Switzerland); Volkan Cevher (Ecole Polytechnique Federale de Lausanne, Switzerland)

We consider the group testing problem, in which one seeks to identify a subset of defective items within a larger set of items based on a number of noisy tests. While matching achievability and converse bounds are known in several cases of interest for i.i.d.~measurement matrices, less is known regarding converse bounds for arbitrary matrices, except in some specific scenarios. We close this gap by presenting two new converse bounds for arbitrary matrices and general noise models. First, we provide a strong converse bound (i.e., $\mathbb{P}[\mathrm{error}] \to 1$) that matches existing achievability bounds in several cases of interest. Second, we provide a weak converse bound (i.e., $\mathbb{P}[\mathrm{error}] \not\to 0$) that matches existing achievability bounds in greater generality.
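A minimal simulation of the noisy OR-model makes the setting concrete. The Bernoulli test design, noise level, and the agreement-score decoder below are all illustrative assumptions, not constructions from the paper.

```python
import random

def run_tests(defective, tests, q, rng):
    """OR-model group testing with symmetric noise: a test is positive iff it
    contains a defective item, and each outcome is flipped with probability q."""
    return [bool(t & defective) ^ (rng.random() < q) for t in tests]

rng = random.Random(3)
n, k, T, q = 100, 2, 200, 0.05
items = list(range(n))
defective = set(rng.sample(items, k))

# Bernoulli measurement matrix: each item joins a test independently w.p. 1/(2k).
tests = [set(i for i in items if rng.random() < 0.5 / k) for _ in range(T)]
y = run_tests(defective, tests, q, rng)

# Toy decoder: score each item by how often its membership pattern agrees
# with the observed outcomes; defectives agree more often in expectation.
score = {i: sum((i in t) == yt for t, yt in zip(tests, y)) for i in items}
ranked = sorted(items, key=score.get, reverse=True)
```

Converse bounds of the kind derived in the paper lower-bound the number of tests $T$ below which any such decoder, for any measurement matrix, must fail.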

Coding for Locality in Reconstructing Permutations

Netanel Raviv and Eitan Yaakobi (Technion, Israel); Muriel Médard (MIT, USA)

The problem of storing permutations in a distributed manner arises in several common scenarios, such as efficient updates of a large, encrypted, or compressed data set. This problem may be addressed through either a combinatorial or a coding approach. The former boils down to constructing large sets of permutations with locality, that is, sets in which any symbol of a permutation can be computed from a small set of other symbols. In the latter approach, a permutation may be coded in order to achieve locality. This paper focuses on the combinatorial approach.
We provide upper and lower bounds for the maximal size of a set of permutations with locality, and provide several simple constructions which attain the upper bound. In cases where the upper bound is not attained, we provide alternative constructions using Reed-Solomon codes, permutation polynomials, and multi-permutations.

Universal Outage Behavior of Randomly Precoded Integer Forcing Over MIMO Channels

Elad Domanovitz and Uri Erez (Tel Aviv University, Israel)

Integer forcing is an equalization scheme for the multiple-input multiple-output communication channel that is applicable when all data streams are encoded using a common linear code. The scheme has been demonstrated to allow operating close to capacity for “most” channel matrices. In this work, the measure of “bad” channels is quantified by considering the outage probability of integer-forcing where random unitary precoding is applied at the transmitter side, and where the transmitter only knows the mutual information of the channel.

Achievable Rate Regions for Cooperative Relay Broadcast Channels with Rate-limited Feedback

Youlong Wu (Technische Universität München, Germany)

Achievable rate regions for cooperative relay broadcast channels with rate-limited feedback are proposed. Specifically, we consider two-receiver memoryless broadcast channels where each receiver sends feedback signals to the transmitter through a noiseless and rate-limited feedback link and, meanwhile, acts as a relay to transmit cooperative information to the other receiver. It is shown that the proposed rate regions improve on the known regions that consider either relaying cooperation or feedback communication, but not both.

Context Set Weighting Method

Zsolt Talata and Hee Sun Kim (University of Kansas, USA)

We consider contexts of stationary ergodic sources that are not necessarily consecutive sequences of past symbols. The introduced context set model of a source provides a code that can achieve lower parameter redundancy than codes based on the context tree and generalized context tree models. The problem of coding sources with an unknown context set is addressed for multialphabet sources. Information on the maximum memory length of the source is not required; it may even be infinite. The Context Set Weighting method is introduced to efficiently calculate a mixture of the Krichevsky-Trofimov distributions over possible context sets. The coding distribution is proved to provide a code whose model redundancy does not exceed the order of the parameter redundancy. An algorithm is provided to compute the Context Set Weighting in polynomial time.

Minimax Structured Normal Means Inference

Akshay Krishnamurthy (Microsoft Research, USA)

We provide a unified treatment of a broad class of noisy structure recovery problems, known as structured normal means inference. In this setting, the goal is to identify, from a finite collection of Gaussian distributions with different means, the distribution that produced some observed data. Recent work has studied several special cases including sparse vectors, biclusters, and graph-based structures. We establish nearly matching upper and lower bounds on the minimax probability of error for any structured normal means problem, and we derive an optimality certificate for the maximum likelihood estimator, which can be applied to many instantiations. We also consider an experimental design setting, where we generalize our minimax bounds and derive an algorithm for computing a design strategy with a certain optimality property. We show that our results give tight minimax bounds for many structure recovery problems and consider some consequences for interactive sampling.

Coded Compressive Sensing: A Compute-and-Recover Approach

Namyoon Lee (POSTECH, Korea); SongNam Hong (Ajou University, USA)

In this paper, we propose \textit{coded compressive sensing}, which recovers an $n$-dimensional integer sparse signal vector from a noisy and quantized measurement vector whose dimension $m$ is far smaller than $n$. The core idea of coded compressive sensing is to construct a linear sensing matrix whose columns consist of lattice codes. We present a two-stage decoding method named \textit{compute-and-recover} to detect the sparse signal from the noisy and quantized measurements. In the first stage, we transform such measurements into noiseless finite-field measurements using the linearity of lattice codewords. In the second stage, syndrome decoding is applied over the finite field to reconstruct the sparse signal vector. A sufficient condition for perfect recovery is derived. Our theoretical result demonstrates an interplay among the quantization level $p$, the sparsity level $k$, the signal dimension $n$, and the number of measurements $m$ for perfect recovery. Considering 1-bit compressive sensing as a special case, we show that the proposed algorithm empirically outperforms an existing greedy recovery algorithm.

Adaptive Protocols for Interactive Communication

Shweta Agrawal (I. I. T Delhi, India); Ran Gelles (Princeton University, USA); Amit Sahai (UCLA, USA)

How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of “robust” protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of protocols for interactive communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel in the sense that both the order of speaking, and the length of the protocol may vary depending on observed noise.
We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to 1/3. When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to 2/3. Hence, adaptivity circumvents an impossibility result of 1/4 on the fraction of tolerable noise (Braverman and Rao, 2014).

On the Energy Benefit of Compute-and-forward for Multiple Unicasts

Zhijie Ren (Delft University of Technology, The Netherlands); Jasper Goseling (University of Twente, The Netherlands); Jos H. Weber (Delft University of Technology, The Netherlands); Michael Gastpar (EPFL & University of California, Berkeley, Switzerland)

Compute-and-forward (CF) is a technique which exploits the broadcast and superposition features of wireless networks and reduces the number of transmissions and receptions. In this paper, we focus on the energy benefit of CF and consider networks with multiple unicasts. We prove that the energy benefit of CF is upper bounded by a factor of $\min(\bar{d},K,12\sqrt{K})$, where $\bar{d}$ and $K$ are the average distance and the number of sessions, respectively. It can also be concluded that the energy benefit of network coding (NC) is upper bounded by the same value, which gives a new scaling law for the energy benefit of NC with respect to $K$.

Set Min-Sum Decoding Algorithm for Non-Binary LDPC Codes

Liyuan Song (Beihang University, P.R. China); Qin Huang (Beihang University, Beijing, P.R. China); Zulin Wang (Beihang University, P.R. China)

This paper reduces the complexity of decoding non-binary LDPC codes via set partitioning. In the check node update, the input vectors are partitioned into several sets such that different elements in the virtual matrix are handled with different computational strategies. As a result, the proposed algorithm achieves high computational efficiency by choosing strategies according to the correct probability of these elements. Simulation results indicate that it significantly decreases the complexity of the CN update with negligible performance loss.

A Lattice Coding Scheme for Secret Key Generation from Gaussian Markov Tree Sources

Shashank Vatedka (Indian Institute of Science, Bangalore, India); Navin Kashyap (Indian Institute of Science, India)

In this article, we study the problem of secret key generation in the multiterminal source model, where the terminals have access to correlated Gaussian sources. We assume that the sources form a Markov chain on a tree. We give a nested lattice-based key generation scheme whose computational complexity is polynomial in the number, $N$, of independent and identically distributed samples observed by each terminal. We also compute the achievable secret key rate and give a class of examples where our scheme is optimal in the fine quantization limit. However, we also give examples showing that our scheme is not always optimal in the limit of fine quantization.

Exact Random Coding Secrecy Exponents for the Wiretap Channel

Mani Bastani Parizi and Emre Telatar (EPFL, Switzerland); Neri Merhav (Technion, Israel)

We analyze the exact exponential decay rate of the expected amount of information leaked to the wiretapper in Wyner's wiretap channel setting, using wiretap channel codes constructed from both i.i.d. and constant-composition random codes. Our analysis for codes sampled from the i.i.d. random coding ensemble shows that the previously known achievable secrecy exponent for this ensemble is indeed the exact exponent for an average code in the ensemble. Furthermore, our analysis of wiretap channel codes constructed from the ensemble of constant-composition random codes leads to an exponent which, in addition to being the exact exponent for an average code, is larger than the achievable secrecy exponent established so far in the literature for this ensemble (which, in turn, was known to be smaller than that achievable by wiretap channel codes sampled from the i.i.d. random coding ensemble). We also show examples where the exact secrecy exponent for wiretap channel codes constructed from random constant-composition codes is larger than that for those constructed from i.i.d. random codes.

The Impact of Independence Assumptions on Wireless Communication Analysis

Ezio Biglieri (Universitat Pompeu Fabra, Barcelona, Spain); I-Wei Lai (Chang Gung University, Taiwan)

We consider some problems that arise when the performance of a wireless communication system must be assessed in the presence of model uncertainties due to poorly known or unknown dependencies among the random variables used to model the channel. We argue that performance bounds based on probability boxes can measure the extent of uncertainty caused by inaccurate models.

Towards a complete DMT classification of division algebra codes

Laura Luzzi (ETIS (ENSEA, Université de Cergy-Pontoise, CNRS)); Roope Vehkalahti (University of Turku, Finland); Alexander Gorodnik (University of Bristol, United Kingdom)

This work aims at providing new lower bounds for the diversity-multiplexing gain trade-off of a general class of lattice codes based on division algebras. In the low multiplexing gain regime, some bounds were previously obtained from the high signal-to-noise ratio estimate of the union bound for the pairwise error probabilities. Here these results are extended to cover a larger range of multiplexing gains. The improvement is achieved by using ergodic theory in Lie groups to estimate the behavior of the sum arising from the union bound. In particular, the new bounds for lattice codes derived from Q-central division algebras suggest that these codes can be divided into two classes based on their Hasse invariants at the infinite places. Algebras with ramification at the infinite place seem to provide a better diversity-multiplexing gain trade-off.

Empirical Coordination, State Masking and State Amplification: Core of the Decoder’s Knowledge

Mael Le Treust (ETIS / ENSEA, Université Cergy-Pontoise, CNRS, France); Matthieu Bloch (Georgia Institute of Technology & Georgia Tech Lorraine, France)

We revisit the problem of state masking and state amplification for state-dependent channel with causal state information at the encoder from the point of view of empirical coordination. Empirical coordination, which requires all sequences of symbols to be jointly typical for a target joint probability distribution, provides a unified perspective to simultaneously study state masking, state amplification, and capacity-distortion trade-off. Our main result is a characterization of the set of achievable rates, information leakages and joint distributions. We also discuss several specializations and extensions of the result, including the cases of zero message rate, without empirical coordination, strictly causal encoding, two-sided state information and noisy channel feedback. We introduce the notion of “core of the decoder’s knowledge”, to capture what the decoder can infer about all the signals involved in the model.

An Improved Upper Bound for the Most Informative Boolean Function Conjecture

Or Ordentlich (MIT, USA); Ofer Shayevitz (Tel Aviv University, Israel); Omri Weinstein (NYU, USA)

Suppose $X$ is a uniformly distributed $n$-dimensional binary vector and $Y$ is obtained by passing $X$ through a binary symmetric channel with crossover probability $\alpha$. A recent conjecture by Courtade and Kumar postulates that $I(f(X);Y)\leq 1-h(\alpha)$ for any Boolean function $f$. So far, the best known upper bound was essentially $I(f(X);Y)\leq (1-2\alpha)^2$. In this paper, we derive a new upper bound that holds for all balanced functions, and improves upon the best known previous bound for $\alpha>\tfrac{1}{3}$.
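A quick numeric check of the quantities above (an illustrative sketch, not taken from the paper): the dictator function $f(x)=x_1$ achieves $I(f(X);Y)=1-h(\alpha)$, which is the conjectured maximum, and the previously best known bound $(1-2\alpha)^2$ lies strictly above it for $0<\alpha<\tfrac{1}{2}$.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# For each crossover probability alpha, compare the value achieved by the
# dictator function, 1 - h(alpha), with the prior bound (1 - 2*alpha)^2.
vals = [(alpha, 1 - h(alpha), (1 - 2 * alpha) ** 2)
        for alpha in (0.1, 0.25, 1 / 3, 0.4)]
```

The gap between the two curves is what leaves room for the improved bound of this paper, particularly for $\alpha>\tfrac{1}{3}$.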

Caching and Delivery via Interference Elimination

Chao Tian (The University of Tennessee Knoxville, USA); Jun Chen (McMaster University, Canada)

We propose a new caching scheme where linear combinations of the file segments are cached at the users, for the scenarios where the number of files is no greater than the number of users. When a user requests a certain file in the delivery phase, the other file segments in the cached linear combinations can be viewed as interferences. The proposed scheme combines rank metric codes and maximum distance separable codes to facilitate the decoding and elimination of these interferences, and also to simultaneously deliver useful contents to the intended users. The performance of the proposed scheme can be explicitly evaluated, and we show that the new scheme can strictly improve existing tradeoff inner bounds in the literature; for certain cases, the new tradeoff points are in fact optimal.

Staircase Codes for Secret Sharing with Optimal Communication and Read Overheads

Rawad Bitar and Salim El Rouayheb (Illinois Institute of Technology, USA)

We study the communication efficient secret sharing (CESS) problem introduced by Huang, Langberg, Kliewer and Bruck. A classical threshold secret sharing scheme randomly encodes a secret into $n$ shares given to $n$ parties, such that any set of at least $t$, $t<n$, parties can reconstruct the secret, and any set of at most $z$, $z<t$, parties cannot obtain any information about the secret. Recently, Huang et al.\ characterized the achievable minimum communication overhead (CO) necessary for a legitimate user to decode the secret when contacting $d\geq t$ parties and presented explicit code constructions achieving minimum CO for $d=n$. The intuition behind the possible savings on CO is that the user is only interested in decoding the secret and does not have to decode the random keys involved in the encoding process. We introduce a new class of linear CESS codes called {\em Staircase Codes} over any field $GF(q)$, for any prime power $q> n$. We describe two explicit constructions of Staircase codes that achieve minimum communication and read overheads respectively for a fixed $d$, and universally for all possible values of $d, t\leq d\leq n$.
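The classical threshold scheme described above can be illustrated with Shamir's construction (the textbook case $z=t-1$); this is a minimal sketch for intuition only, not the Staircase construction of the paper, and the field and parameters are chosen for illustration.

```python
import random

P = 2**61 - 1  # a Mersenne prime; shares live in GF(P)

def share(secret, t, n, seed=1):
    """Shamir's scheme: hide `secret` as the constant term of a random
    degree-(t-1) polynomial; party x gets the evaluation at x."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P): any t shares suffice."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```

In this classical scheme a user downloads $t$ full shares; the communication savings studied in the paper come from contacting $d > t$ parties and downloading only part of each share.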

Super-Activation as a Unique Feature of Arbitrarily Varying Wiretap Channels

Rafael F. Schaefer (Technische Universität Berlin, Germany); Holger Boche (Technical University Munich, Germany); H. Vincent Poor (Princeton University, USA)

The question of additivity of the capacity of a channel goes back to Shannon, who asked this for the zero error capacity function. Despite the common belief that the capacity is usually additive, surprisingly little is known for non-trivial channels. This paper addresses this question for the arbitrarily varying wiretap channel (AVWC), which models secure communication in the presence of arbitrarily varying channel (AVC) conditions. For orthogonal AVWCs it has been shown that the non-additivity phenomenon of super-activation occurs; that is, there are orthogonal AVWCs, each having zero secrecy capacity, which allow for transmission with positive secrecy rate if they are used together. It is shown that for such orthogonal AVWCs super-activation is generic in the sense that whenever super-activation is possible, it is possible for all AVWCs in a certain neighborhood as well. Moreover, it is shown that the issue of super-activation and the continuity of the secrecy capacity solely depend on the legitimate link. Accordingly, the single-user AVC is studied and it is shown that in this case, super-activation for non-secure message transmission is not possible, making it a unique feature of secure communication over AVWCs. However, the capacity for message transmission of the single-user AVC is shown to be super-additive, and a complete characterization is given.

A Lower Bound on the Optimum Feedback Rate for Downlink Multi-Antenna Cellular Networks

Jeonghun Park (The University of Texas at Austin, USA); Namyoon Lee (POSTECH, Korea); Jeffrey Andrews and Robert Heath (The University of Texas at Austin, USA)

We consider a multi-antenna downlink cellular network using either single-user maximal ratio transmission (MRT) or multi-user zero-forcing (ZF) transmission. The locations of the base stations are modeled by a Poisson point process to allow the inter-cell interference to be tractably analyzed. A tight lower bound on the optimum number of feedback bits maximizing the net spectral efficiency is derived, whereby the cost of feedback sent via uplink is subtracted from the corresponding gain in downlink spectral efficiency. When using MRT, the optimum number of feedback bits is shown to scale linearly with the number of antennas, and logarithmically with the channel coherence time. With ZF, the optimum amount of feedback scales the same as with MRT, but additionally also increases linearly with the pathloss exponent.

Capacity-Achieving Rate-Compatible Polar Codes

SongNam Hong (Ajou University, USA); Dennis Hui and Ivana Marić (Ericsson Research, USA)

We present a method of constructing rate-compatible polar codes that are capacity-achieving with low-complexity sequential decoders. The proposed code construction allows for incremental retransmissions at different rates in order to adapt to channel conditions. The main idea of the construction exploits certain common characteristics of polar codes that are optimized for a sequence of successively degraded channels. The proposed approach allows for an optimized polar code to be used at every transmission, thereby achieving capacity. Due to the length limitation of conventional polar codes, the proposed construction can only support a restricted set of rates that is characterized by the size of the kernel when conventional polar codes are used. We thus consider punctured polar codes, which provide more flexibility on block length by controlling a puncturing fraction. We show the existence of capacity-achieving punctured polar codes for any given puncturing fraction. Using punctured polar codes as constituent codes, we show that the proposed rate-compatible polar code is capacity-achieving for an arbitrary sequence of rates and for any class of degraded channels.

Cloud-Aided Wireless Networks with Edge Caching: Fundamental Latency Trade-Offs in Fog Radio Access Networks

Ravi Tandon (University of Arizona, USA); Osvaldo Simeone (New Jersey Institute of Technology, USA)

Fog Radio Access Network (F-RAN) is an emerging wireless network architecture that leverages caching capabilities at the wireless edge nodes, as well as edge connectivity to the cloud via fronthaul links. This paper aims at providing a latency-centric analysis of the degrees of freedom of an F-RAN by accounting for the total content delivery delay across the fronthaul and wireless segments of the network. The main goal of the analysis is the identification of optimal caching, fronthaul and edge transmission policies. The study is based on the introduction of a novel performance metric, referred to as the Normalized Delivery Time (NDT), which measures the total delivery latency as compared to an ideal interference-free system. An information-theoretically optimal characterization of the trade-off between NDT, on the one hand, and fronthaul and caching resources, on the other, is derived for a class of F-RANs with two edge nodes and two users. Using these results, the interplay between caching and cloud connectivity is highlighted, as well as the impact of both caching and fronthaul resources on the delivery latency.

String Concatenation Construction for Chebyshev Permutation Channel Codes

Yeow Meng Chee and Han Mao Kiah (Nanyang Technological University, Singapore); San Ling (NTU, Singapore); Tuan Thanh Nguyen, Van Khu Vu and Xiande Zhang (Nanyang Technological University, Singapore)

We construct codes for the Chebyshev permutation channels, whose study was initiated by Langberg et al. (2015). We establish several recursive code constructions and present efficient decoding algorithms for our codes. In particular, our constructions yield a family of binary codes of rate 0.643 when $r=1$. The upper bound on the rate in this case is $2/3$, and the previously highest known rate was 0.609.

Achievable Rates of Soliton Communication Systems

Qun Zhang and Terence H. Chan (University of South Australia, Australia)

An achievable rate for a soliton communication system is derived based on the noise model we studied. Compared to existing results, ours is derived for a system in which both eigenvalues and spectral amplitudes are modulated. In addition, we show an increase in the communication rate obtained by modulating the spectral amplitude of a soliton.

Rates of Constant-Composition Codes that Mitigate Intercell Interference

Yeow Meng Chee, Johan Chrisnata and Han Mao Kiah (Nanyang Technological University, Singapore); San Ling (NTU, Singapore); Tuan Thanh Nguyen and Van Khu Vu (Nanyang Technological University, Singapore)

For certain families of substrings $F$, we provide a closed formula for the maximum size of a $q$-ary $F$-avoiding code with a given composition. In addition, we provide numerical procedures to determine the asymptotic information rate of $F$-avoiding codes for certain composition ratios. Using our procedures, we recover known results and compute the information rates for certain classes of $F$-avoiding constant-composition codes for $2 \le q \le 8$. For these values of $q$, we find composition ratios such that the rates of $F$-avoiding constant-composition codes achieve the capacity of the $F$-avoiding channel.

Efficient Encoding/Decoding of Capacity-Achieving Constant-Composition ICI-Free Codes

Yeow Meng Chee, Johan Chrisnata and Han Mao Kiah (Nanyang Technological University, Singapore); San Ling (NTU, Singapore); Tuan Thanh Nguyen and Van Khu Vu (Nanyang Technological University, Singapore)

We give the first known efficient encoder/decoder for $q$-ary constant-composition ICI-free codes achieving ICI channel capacity, for all $q$. Previously, the best known result was an efficient encoder/decoder for binary constant-weight ICI-free codes with more than 2% loss relative to ICI channel capacity.

On the Role of Side Information In Strategic Communication

Emrah Akyol, Cédric Langbort and Tamer Başar (University of Illinois at Urbana-Champaign, USA)

This paper analyzes the fundamental limits of strategic communication in network settings. Strategic communication differs from the conventional communication paradigms in information theory since it involves different objectives for the encoder and the decoder, which are aware of this mismatch and act accordingly. This leads to a Stackelberg game where both agents commit to their mappings ex-ante. Building on our prior work on the point-to-point setting, this paper studies the compression and communication problems with the receiver and/or transmitter side information setting. The equilibrium strategies and associated costs are characterized for the Gaussian variables with quadratic cost functions. Several questions on the benefit of side information in source and joint source-channel coding in such strategic settings are analyzed. Our analysis has uncovered an interesting result on optimality of uncoded mappings in strategic source-channel coding in networks.

An Alternative Decoding Method for Gabidulin Codes in Characteristic Zero

Sven Müelich, Sven Puchinger, David Mödinger and Martin Bossert (Ulm University, Germany)

Gabidulin codes, originally defined over finite fields, are an important class of rank metric codes with various applications. Recently, their definition was generalized to certain fields of characteristic zero and a Welch–Berlekamp like algorithm with complexity $O(n^3)$ was given. We propose a new application of Gabidulin codes over infinite fields: low-rank matrix recovery. We also present an alternative decoding approach based on a Gao-type key equation, reducing the complexity to $O(n^2)$. This method immediately connects the decoding problem to well-studied problems that have been investigated in terms of coefficient growth and numerical stability.

The Zero-Error Capacity of the Gelfand-Pinsker Channel with a Feedback Link

Annina Bracher (ETH Zurich, Switzerland); Amos Lapidoth (ETHZ, Switzerland)

The zero-error feedback capacity of the Gelfand-Pinsker channel is established. It can be positive even if it is zero in the absence of feedback. Moreover, the error-free transmission of a single bit may require more than one channel use.

Energy Complexity of Polar Codes

Christopher Blake and Frank R. Kschischang (University of Toronto, Canada)

Sequences of VLSI circuits implemented according to the Thompson VLSI model that compute encoding and decoding functions, called coding schemes, are classified according to the rate at which their associated block error probability scales with block length $N$. It is shown that coding schemes for binary symmetric channels with probability of error that scales as $O(f(N))$ must have encoding and decoding energy that scales at least as $\Omega\left(N \sqrt {-\ln f(N)}\right)$. Polar coding schemes of rate greater than $\frac{1}{2}$ are shown to have encoding and decoding energy that scales at least as $\Omega\left(N^{3/2}\right)$. This lower bound is achievable up to polylogarithmic factors on a mesh-network.

Strengthened Monotonicity of Relative Entropy via Pinched Petz Recovery Map

David Sutter (ETH Zurich, Switzerland); Marco Tomamichel (The University of Sydney, Australia); Aram W Harrow (MIT)

The quantum relative entropy between two states satisfies a monotonicity property meaning that applying the same quantum channel to both states can never increase their relative entropy. It is known that this inequality is only tight when there is a “recovery map” that exactly reverses the effects of the quantum channel on both states. In this paper we strengthen this inequality by showing that the difference of relative entropies is bounded below by the measured relative entropy between the first state and a recovered state from its processed version. The recovery map is a convex combination of rotated Petz recovery maps and perfectly reverses the quantum channel on the second state. As a special case we reproduce recent lower bounds on the conditional mutual information such as the one proved in [Fawzi and Renner, Commun. Math. Phys., 2015]. Our proof only relies on elementary properties of pinching maps and the operator logarithm.

Conveying Data and State with Feedback

Shraga Bross (Bar-Ilan University, Israel); Amos Lapidoth (ETHZ, Switzerland)

The Rate-and-State capacity of a state-dependent channel with a state-cognizant encoder is the highest possible rate of communication over the channel when the decoder, in addition to reliably decoding the data, must also reconstruct the state sequence with some required fidelity. Feedback from the channel output to the encoder is shown to increase this capacity even for channels that are memoryless with memoryless states. The capacity is calculated here for such channels with feedback when the state reconstruction fidelity is measured using a single-letter distortion function and the state sequence is revealed to the encoder in one of two different ways: strictly causally or causally.

Tight Upper Bounds on the Redundancy of Optimal Binary AIFV Codes

Weihua Hu, Hirosuke Yamamoto and Junya Honda (The University of Tokyo, Japan)

AIFV codes are lossless codes that generalize the class of instantaneous FV codes. These codes use multiple code trees and assign source symbols to incomplete internal nodes as well as to leaves. AIFV codes are empirically shown to attain a better compression ratio than Huffman codes. Nevertheless, the best previously known upper bound on the redundancy of optimal binary AIFV codes was 1, the same as the bound for Huffman codes. In this paper, the upper bound is improved to 1/2, which is shown to be tight. Along with this, a tight upper bound on the redundancy of optimal binary AIFV codes is derived for the case where p_max, the probability of the most likely source symbol, is at least 1/2. This is the first theoretical work on the redundancy of optimal binary AIFV codes, suggesting the superiority of these codes over Huffman codes.

Strong Divergence of the Shannon Sampling Series for an Infinite Dimensional Signal Space

Holger Boche, Ullrich J Mönich and Ezra Tampubolon (Technische Universität München, Germany)

Knowing whether a reconstruction process, for example the Shannon sampling series, is strongly divergent in terms of the lim or only weakly divergent in terms of the limsup is important, because strong divergence is linked to the non-existence of adaptive reconstruction processes. For non-adaptive reconstruction processes the existence is answered by the Banach–Steinhaus theory. However, the analysis of adaptive reconstruction processes is more difficult and not covered by the former theory. In this paper we consider the Paley–Wiener space $PW_\pi^1$ of bandlimited signals with absolutely integrable Fourier transform and analyze the structure of the set of signals for which the peak value of the Shannon sampling series is strongly divergent. We show that this set is lineable, i.e., that there exists an infinite dimensional subspace, all signals of which, except the zero signal, lead to strong divergence. Consequently, for all signals from this subspace, adaptivity in the number of samples that are used in the Shannon sampling series does not create a convergent reconstruction process.

Almost universal codes for fading wiretap channels

Laura Luzzi (ETIS (ENSEA, Université de Cergy-Pontoise, CNRS)); Cong Ling (Imperial College London, United Kingdom); Roope Vehkalahti (University of Turku, Finland)

We consider a fading wiretap channel model where the transmitter has only statistical channel state information, and the legitimate receiver and eavesdropper have perfect channel state information. We propose a sequence of non-random lattice codes which achieve strong secrecy and semantic security over ergodic fading channels. The construction is almost universal in the sense that it achieves the same constant gap to secrecy capacity over Gaussian and ergodic fading models.

New Ternary Binomial Bent Functions

Tor Helleseth and Alexander Kholosha (University of Bergen, Norway)

The ternary function $f(x)$ mapping $\mathbb{F}_{3^{4k}}$ to $\mathbb{F}_{3}$ and given by $f(x)=\mathop{Tr}_{4k}\big(a_1 x^{2(3^k+1)}+a_2 x^{(3^k+1)^2}\big)$, where $a_1$ is a nonsquare in $\mathbb{F}_{3^{4k}}$ and $a_2$ is defined explicitly by $a_1$, is proven to be a regular bent function of degree four belonging to the completed Maiorana-McFarland class. The proof is based on a new criterion that allows checking bentness by analyzing first- and second-order derivatives.

Cluster-Seeking Shrinkage Estimators

Pavan Srinath and Ramji Venkataramanan (University of Cambridge, United Kingdom)

This paper considers the problem of estimating a high-dimensional vector $\theta \in \mathbb{R}^n$ from a noisy one-time observation. The noise vector is assumed to be i.i.d. Gaussian with known variance. For the squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension $n$ exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for $\theta$ that lie close to the origin. JS-estimators can be generalized to shrink the data towards any target subspace. Such estimators also dominate the ML-estimator, but the risk reduction is significant only when $\theta$ lies close to the subspace. This leads to the question: in the absence of prior information about $\theta$, how do we design estimators that give significant risk reduction over the ML-estimator for a wide range of $\theta$?

In this paper, we attempt to infer the structure of $\theta$ from the observed data in order to construct a good attracting subspace for the shrinkage estimator. We provide concentration results for the squared-error loss and convergence results for the risk of the proposed estimators, as well as simulation results to support the claims. The estimators give significant risk reduction over the ML-estimator for a wide range of $\theta$, particularly for large $n$.
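As a quick numerical sketch (ours, not from the paper) of the dominance discussed above: the classical James-Stein estimator shrinking towards the origin has markedly smaller risk than the ML estimator when $\theta$ lies near the origin. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, trials = 100, 1.0, 2000

# True parameter drawn once, close to the origin (where shrinkage helps most).
theta = rng.normal(0.0, 0.5, size=n)

ml_loss = js_loss = 0.0
for _ in range(trials):
    y = theta + rng.normal(0.0, np.sqrt(sigma2), size=n)  # one-time noisy observation
    # James-Stein estimator: shrink the observation towards the origin.
    js = (1.0 - (n - 2) * sigma2 / np.dot(y, y)) * y
    ml_loss += float(np.sum((y - theta) ** 2))
    js_loss += float(np.sum((js - theta) ** 2))

ml_risk, js_risk = ml_loss / trials, js_loss / trials
print(ml_risk, js_risk)  # JS empirical risk is well below the ML risk here
```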

Codes Correcting a Burst of Deletions or Insertions

Clayton Schoeny (University of California, Los Angeles, USA); Antonia Wachter-Zeh (Technion – Israel Institute of Technology, Israel); Ryan Gabrys (UIUC, USA); Eitan Yaakobi (Technion, Israel)

This paper studies codes that correct bursts of deletions. Namely, a code is called a $b$-burst-correcting code if it can correct a deletion of any $b$ consecutive bits. While the lower bound on the redundancy of such codes was shown by Levenshtein to be asymptotically $\log(n)+b-1$, the redundancy of the best previous construction, by Cheng et al., is $b\log(n/b+1)$. In this paper we close this gap and provide codes with redundancy at most $\log(n)+(b-1)\log(\log(n))+b-\log(b)$.

We also extend the burst deletion model to two more cases: 1. a deletion burst of at most b consecutive bits and 2. a deletion burst of size at most b (not necessarily consecutive). We extend our code construction for the first case and study the second case for b=3,4. The equivalent models for insertions are also studied and are shown to be equivalent to correcting the corresponding burst of deletions.
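The three redundancy expressions quoted above can be compared numerically; a minimal sketch (parameters chosen for illustration only):

```python
import math

def levenshtein_lb(n, b):
    """Levenshtein's asymptotic lower bound on the redundancy."""
    return math.log2(n) + b - 1

def cheng_et_al(n, b):
    """Redundancy of the previously best construction."""
    return b * math.log2(n / b + 1)

def new_construction(n, b):
    """Redundancy of the construction in this paper."""
    return math.log2(n) + (b - 1) * math.log2(math.log2(n)) + b - math.log2(b)

n, b = 2 ** 20, 4   # illustrative parameters
print(levenshtein_lb(n, b), new_construction(n, b), cheng_et_al(n, b))
```

For these parameters the new construction sits between the lower bound and the earlier construction, much closer to the former.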

Balanced Permutation Codes

Ryan Gabrys and Olgica Milenkovic (UIUC, USA)

Motivated by charge balancing constraints for rank modulation schemes, we introduce the notion of balanced permutations and derive the capacity of balanced permutation codes. We also describe simple interleaving methods for permutation code constructions and show that they approach capacity.

On the Capacity of Diffusion-Based Molecular Timing Channels

Nariman Farsad and Yonathan Murin (Stanford University, USA); Andrew Eckford (York University, Canada); Andrea Goldsmith (Stanford University, USA)

This work introduces capacity limits for molecular timing (MT) channels, where information is modulated on the release timing of small information particles, and decoded from the time of arrival at the receiver. It is shown that the random time of arrival can be represented as an additive noise channel, and for the diffusion-based MT (DBMT) channel, this noise is distributed according to the Lévy distribution. Lower and upper bounds on the capacity of the DBMT channel are derived for the case where the delay associated with the propagation of information particles in the channel is finite. These bounds are also shown to be tight.
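A brief sketch (ours, not from the paper) of the Lévy-distributed additive noise: if $Z \sim N(0,1)$, then $c/Z^2$ follows the Lévy distribution with scale $c$, and its heavy tail shows up as an empirical mean that dwarfs the median (the Lévy mean is infinite).

```python
import random
import statistics

random.seed(1)

def levy_sample(c):
    """One draw from the Lévy distribution with scale c: if Z ~ N(0,1), then c/Z^2 is Lévy."""
    z = random.gauss(0.0, 1.0)
    return c / (z * z)

samples = [levy_sample(1.0) for _ in range(10000)]
med = statistics.median(samples)
mean = statistics.fmean(samples)
print(med, mean)  # moderate median, enormous empirical mean
```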

A tiger by the tail: when multiplicative noise stymies control

Jian Ding (University of Chicago, USA); Yuval Peres and Gireeja Ranade (Microsoft Research, USA)

This paper considers the stabilization of an unstable discrete-time linear system that is observed over a channel corrupted by continuous multiplicative noise. The main result is a converse bound showing that if the system growth is large enough, the system cannot be stabilized in a mean-squared sense. This is done by showing that the probability that the state magnitude remains bounded must go to zero with time.

It was known that a system with multiplicative observation noise can be stabilized using a simple linear strategy if the system growth is suitably bounded. However, it was not clear whether non-linear controllers could overcome arbitrarily large growth factors. One difficulty with using the standard approach for a data-rate theorem style converse is that the mutual information per round between the system state and the observation is potentially unbounded with a multiplicative noise observation channel. Our proof technique recursively bounds the conditional density of the system state (instead of focusing on the second moment) to bound the progress the controller can make.

On Ordered Syndromes for Multi Insertion/Deletion Error-Correcting Codes

Manabu Hagiwara (Chiba University, Japan)

Classes of multi insertion/deletion error-correcting codes based on order theory and axiomatic algebra are proposed by an abstraction of Helberg’s construction.

Codeword Stabilized Quantum Codes for Asymmetric Channels

Tyler Jackson (University of Guelph & Institute for Quantum Computing, Canada); Markus Grassl (Max-Planck-Institut für die Physik des Lichts, Germany); Bei Zeng (University of Guelph, Canada)

We discuss a method to adapt the codeword stabilized (CWS) quantum code framework to the problem of finding asymmetric quantum codes. We focus on the corresponding Pauli error models for amplitude damping noise and phase damping noise. In particular, we look at codes for Pauli error models that correct one or two amplitude damping errors. Applying local Clifford operations on graph states, we are able to exhaustively search for all possible codes up to length 9. With a similar method, we also look at codes for the Pauli error model that detect a single amplitude error and detect multiple phase damping errors. Many new codes with good parameters are found, including nonadditive codes and degenerate codes.

Concatenated Codes for Amplitude Damping

Tyler Jackson (University of Guelph & Institute for Quantum Computing, Canada); Markus Grassl (Max-Planck-Institut für die Physik des Lichts, Germany); Bei Zeng (University of Guelph, Canada)

We discuss a method to construct quantum codes correcting amplitude damping errors via code concatenation. The inner codes are chosen as asymmetric Calderbank-Shor-Steane (CSS) codes. By concatenating with outer codes correcting symmetric errors, many new codes with good parameters are found, which are better than the amplitude damping codes obtained by any previously known construction.

An Achievable Rate Region for Superposed Timing Channels

Guido C. Ferrante (Singapore University of Technology and Design, and Massachusetts Institute of Technology, Singapore); Tony Q. S. Quek (Singapore University of Technology and Design, Singapore); Moe Win (Massachusetts Institute of Technology, USA)

A multiple-access channel where point processes are randomly transformed by timing channels and then superposed is considered. An achievable rate region for the K-user channel is established. A single-user achievable rate in the presence of “many” interfering users is proposed. Results are applied to exponential server timing channels.

Timing Capacity of Queues with Random Arrival and Modified Service Times

Guido C. Ferrante (Singapore University of Technology and Design, and Massachusetts Institute of Technology, Singapore); Tony Q. S. Quek (Singapore University of Technology and Design, Singapore); Moe Win (Massachusetts Institute of Technology, USA)

A queue timing channel with random arrival and service times is investigated. The message is encoded into a sequence of additional delays imposed on packets before they depart. We derive upper and lower bounds on the channel capacity for general arrival and service processes and general load. We establish the channel capacity of the queue with an exponential server under no load constraint other than keeping the queue stable. We discuss the consequences of this result and a possible application in which the timing channel is used to send covert information.

On the Duplication Distance of Binary Strings

Noga Alon (Tel Aviv University, Israel); Jehoshua Bruck, Farzad Farnoud (Hassanzadeh) and Siddharth Jain (California Institute of Technology, USA)

We study the tandem duplication distance between binary sequences and their roots. This distance is motivated by genomic tandem duplication mutations and counts the smallest number of tandem duplication events that are required to take one sequence to another. We consider both exact and approximate tandem duplications, the latter leading to a combined duplication/Hamming distance. The paper focuses on the maximum value of the duplication distance to the root. For exact duplication, denoting the maximum distance to the root of a sequence of length $n$ by $f(n)$, we prove that $f(n)=\Theta(n)$. For the case of approximate duplication, where a $\beta$ fraction of symbols may be duplicated incorrectly, we show using the Plotkin bound that the maximum distance has a sharp transition from linear to logarithmic in $n$ at $\beta=1/2$.
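For short strings, the duplication distance to the root can be computed exactly by breadth-first search over deduplication steps (collapsing a tandem repeat $xx$ to $x$); a minimal sketch of ours, not the paper's method:

```python
from collections import deque

def dedup_neighbors(s):
    """All strings obtainable from s by one deduplication (a tandem repeat xx collapses to x)."""
    out = set()
    n = len(s)
    for i in range(n):
        for L in range(1, (n - i) // 2 + 1):
            if s[i:i + L] == s[i + L:i + 2 * L]:
                out.add(s[:i + L] + s[i + 2 * L:])
    return out

def distance_to_root(s):
    """BFS for the minimum number of deduplications taking s to a square-free string (its root)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        t = q.popleft()
        nbrs = dedup_neighbors(t)
        if not nbrs:  # no tandem repeat left: t is square-free
            return dist[t], t
        for u in nbrs:
            if u not in dist:
                dist[u] = dist[t] + 1
                q.append(u)
    raise AssertionError("unreachable: every string reduces to a square-free root")

print(distance_to_root("0101"), distance_to_root("0011"))
```

For example, "0101" deduplicates to its root "01" in one step, while "0011" needs two.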

Limited-Magnitude Error-Correcting Gray Codes for Rank Modulation

Yonatan Yehezkeally and Moshe Schwartz (Ben-Gurion University of the Negev, Israel)

We construct Gray codes over permutations for the rank-modulation scheme, which are also capable of correcting errors under the infinity-metric. These errors model limited-magnitude or spike errors, for which only single-error-detecting Gray codes are currently known. Surprisingly, the error-correcting codes we construct achieve better asymptotic rates than that of presently-known constructions not having the Gray property. We also cast the problem of improving upon these results into the context of finding a certain type of auxiliary codes in the symmetric group of even orders.

On Deep Holes of Projective Reed-Solomon Codes

Jun Zhang (Capital Normal University, P.R. China); Daqing Wan (University of California, Irvine, USA)

In this paper, we obtain new results on the covering radius and deep holes for projective Reed-Solomon (PRS) codes.

Spectral Analysis of Quasi-Cyclic Product Codes

Alexander Zeh (Technion, Israel); San Ling (NTU, Singapore)

This paper considers a linear quasi-cyclic product code of two given quasi-cyclic codes of relatively prime lengths over finite fields. We give the spectral analysis of a quasi-cyclic product code in terms of the spectral analysis of the row- and the column-code. Moreover, we provide a new lower bound on the minimum Hamming distance of a given quasi-cyclic code.

Unconstrained distillation capacities of a pure-loss bosonic broadcast channel

Masahiro Takeoka (National Institute of Information and Communications Technology & Raytheon BBN Technologies, Japan); Kaushik Seshadreesan (Max-Planck-Institute for the Science of Light, Germany); Mark M Wilde (Louisiana State University, USA)

Bosonic channels are important in practice as they form a simple model for free-space or fiber-optic communication. Here we consider a single-sender two-receiver pure-loss bosonic broadcast channel and determine the unconstrained capacity region for the distillation of bipartite entanglement and secret key between the sender and each receiver, whenever they are allowed arbitrary public classical communication. We show how the state merging protocol leads to achievable rates in this setting, giving an inner bound on the capacity region. We also evaluate an outer bound on the region by using the relative entropy of entanglement and a ‘reduction by teleportation’ technique. The outer bounds match the inner bounds in the infinite-energy limit, thereby establishing the unconstrained capacity region for such channels. Our result could provide a useful benchmark for implementing a broadcasting of entanglement and secret key through such channels. An important open question relevant to practice is to determine the capacity region in both this setting and the single-sender single-receiver case when there is an energy constraint on the transmitter.

Wireless Networks of Bounded Capacity

Grace Villacrés Estrada and Tobias Koch (Universidad Carlos III de Madrid & Gregorio Marañón Health Research Institute, Spain)

The channel capacity of wireless networks is often studied under the assumption that the communicating nodes have perfect channel-state information (CSI) in the sense that they have access to the fading coefficients in the network. To the best of our knowledge, one of the few works that studies wireless networks without this assumption is by Lozano, Heath, and Andrews. Inter alia, Lozano et al. show that in the absence of perfect CSI, and if the channel inputs are given by the square-root of the transmit power times a power-independent random variable, then the achievable information rate is bounded in the signal-to-noise ratio (SNR). However, such inputs do not necessarily achieve capacity, so one may argue that the information rate is bounded in the SNR because of the suboptimal input distribution. In this paper, it is demonstrated that if the nodes do not cooperate and they all use the same codebook, then the achievable information rate remains bounded in the SNR even if the input distribution is allowed to change arbitrarily with the transmit power.

New Proofs of Retrievability using Locally Decodable Codes

Julien Lavauzelle (LIX and INRIA Saclay, France); Francoise Levy-dit-Vehel (ENSTA, France)

Proofs of retrievability (PoR) are probabilistic protocols which ensure that a client can recover a file he previously stored on a server. Good PoRs aim at reaching an efficient trade-off between communication complexity and storage overhead, and should be usable an unlimited number of times. We present a new unbounded-use PoR construction based on a class of locally decodable codes, namely the lifted codes of Guo et al. Our protocols feature sublinear communication complexity and very low storage overhead. Moreover, the various parameters can be tuned so as to minimize the communication complexity (resp. the storage overhead) according to the setting of concern.

Quickest Sequence Phase Detection

Lele Wang (Tel Aviv University & Stanford University, Israel); Sihuang Hu and Ofer Shayevitz (Tel Aviv University, Israel)

We consider the problem of designing a length-$n$ binary sequence, such that the location of any length-$k$ contiguous subsequence can be determined from a noisy observation of that subsequence. We derive bounds on the minimal possible $k$ in the limit of $n\to\infty$, and describe some sequence constructions. Both adversarial and probabilistic noise models are addressed. Two applications of the problem include fast positioning and card tricks.
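In the noiseless case, a de Bruijn sequence is a natural baseline for this problem, since every length-$k$ window of it is distinct; a standard construction (illustrative only, not the paper's):

```python
def de_bruijn(k, n):
    """Standard de Bruijn sequence B(k, n) via Lyndon-word concatenation."""
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 4)  # binary, length 2^4 = 16
# Every (non-wrapping) length-4 window is distinct.
windows = {tuple(s[i:i + 4]) for i in range(len(s) - 3)}
print(len(s), len(windows))
```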

Sharper Upper Bounds for Unbalanced Uniquely Decodable Code Pairs

Per Austrin (School of Computer Science and Communication, KTH Royal Institute of Technology, Sweden); Petteri Kaski (Helsinki Institute for Information Technology HIIT, Aalto University, Finland); Mikko Koivisto (Helsinki Institute for Information Technology HIIT, University of Helsinki, Finland); Jesper Nederlof (Technical University of Eindhoven, The Netherlands)

Two sets $A, B$ of binary strings of length $n$ form a Uniquely Decodable Code Pair (UDCP) if every pair $a \in A$, $b \in B$ yields a distinct sum $a+b$, where the addition is over $\mathbb{Z}^n$. We show that every UDCP $A, B$ with $|A| = 2^{(1-\epsilon)n}$ and $|B| = 2^{bn}$ satisfies $b \leq 0.4229 + \sqrt{\epsilon}$.
For sufficiently small $\epsilon$, this bound significantly improves previous bounds by Urbanke & Li [Proc. of IEEE Information Theory Workshop '98] and Ordentlich & Shayevitz [ISIT '15], which upper bound $b$ by 0.4921 and 0.4798, respectively, as $\epsilon$ approaches 0.
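The UDCP property can be verified by brute force on small instances; a sketch with an illustrative pair of our choosing (not from the paper), where the pair carries 6 > 2^2 distinguishable messages in dimension 2:

```python
from itertools import product

def is_udcp(A, B):
    """Check the UDCP property: every sum a+b (coordinate-wise over the integers) is distinct."""
    sums = [tuple(x + y for x, y in zip(a, b)) for a, b in product(A, B)]
    return len(sums) == len(set(sums))

# A small pair with |A|*|B| = 6 > 2^n = 4, i.e., sum rate above 1 bit per dimension.
A = [(0, 0), (1, 1)]
B = [(0, 0), (0, 1), (1, 0)]
print(is_udcp(A, B))  # True
```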

Constructions of Batch Codes with Near-Optimal Redundancy

Alexander Vardy (University of California San Diego, USA); Eitan Yaakobi (Technion, Israel)

Batch codes, first studied by Ishai et al., are a coding scheme that encodes $n$ information bits into $m$ buckets such that every batch request of $k$ bits can be decoded while at most one bit is read from each bucket. There are several families of batch codes, and in this work we study the class of multiset primitive batch codes, in which every bucket stores a single bit and bits can be requested multiple times. We simply refer to these codes as batch codes. The main problem under this paradigm is to optimize the number of encoded bits, i.e., the number of buckets, for given $n$ and $k$; we denote this value by $B(n,k)$. Since there are several asymptotically optimal constructions of these codes, we are motivated to evaluate their optimality by their redundancy. Thus we define the optimal redundancy of batch codes to be $r_B(n,k) = B(n,k)-n$. Our main result in this paper shows that for any fixed $k$, $r_B(n,k)=O(\sqrt{n}\log(n))$.

Sequence assembly from corrupted shotgun reads

Shirshendu Ganguly (University of Washington, USA); Elchanan Mossel (University of Pennsylvania and University of California, Berkeley); Miklos Racz (Microsoft Research, USA)

The prevalent technique for DNA sequencing consists of two main steps: shotgun sequencing, where many randomly located fragments, called reads, are extracted from the overall sequence, followed by an assembly algorithm that aims to reconstruct the original sequence. There are many different technologies that generate the reads: widely-used second-generation methods create short reads with low error rates, while emerging third-generation methods create long reads with high error rates. Both error rates and error profiles differ among methods, so reconstruction algorithms are often tailored to specific shotgun sequencing technologies. As these methods change over time, a fundamental question is whether there exist reconstruction algorithms which are robust, i.e., which perform well under a wide range of error distributions.
Here we study this question of sequence assembly from corrupted reads. We make no assumption on the types of errors in the reads, but only assume a bound on their magnitude. More precisely, for each read we assume that instead of receiving the true read with no errors, we receive a corrupted read which has edit distance at most $\epsilon$ times the length of the read from the true read. We show that if the reads are long enough and there are sufficiently many of them, then approximate reconstruction is possible: we construct a simple algorithm such that for almost all original sequences the output of the algorithm is a sequence whose edit distance from the original one is at most $O(\epsilon)$ times the length of the original sequence.

On the Sum-Rate Capacity of Non-Symmetric Poisson Multiple Access Channel

Ain Ul Aisha (Worcester Polytechnic Institute, USA); Yingbin Liang (Syracuse University, USA); Lifeng Lai (Worcester Polytechnic Institute, USA); Shlomo (Shitz) Shamai (The Technion, Israel)

In this paper, we characterize the sum-rate capacity of the non-symmetric Poisson multiple access channel (MAC). While the sum-rate capacity of the symmetric Poisson MAC has been characterized in the literature, the special property exploited in the existing method for the symmetric case no longer holds for the non-symmetric channel. We obtain the optimal input that achieves the sum-rate capacity by solving a non-convex optimization problem. We show that, for certain channel parameters, it is optimal for a single user to transmit in order to achieve the sum-rate capacity. This is in sharp contrast to the Gaussian MAC, in which all users must transmit, either simultaneously or at different times, in order to achieve the sum-rate capacity.

A General Rate-Distortion Converse Bound for Entropy-Constrained Scalar Quantization

Tobias Koch (Universidad Carlos III de Madrid & Gregorio Marañón Health Research Institute, Spain); Gonzalo Vazquez-Vilar (Universidad Carlos III de Madrid, Spain)

We derive a lower bound on the smallest output entropy that can be achieved via scalar quantization of a source with given expected quadratic distortion. As the allowed distortion tends to zero, the bound converges to the output entropy achieved by a uniform quantizer, thereby recovering the result by Gish and Pierce that uniform quantizers are asymptotically optimal. The proposed derivation applies for any memoryless source that has a probability density function (pdf), a finite differential entropy, and whose integer part has a finite entropy. In contrast to Gish and Pierce, we do not require any additional constraints on the continuity or decay of the source pdf.
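The Gish-Pierce high-resolution behavior recovered above can be checked empirically for a Gaussian source: the output entropy of a uniform quantizer with small step $\Delta$ is close to $h(X) - \log_2 \Delta$. A sketch of ours, with illustrative parameters:

```python
import math
import random
from collections import Counter

random.seed(0)
delta = 0.05                      # quantizer step size (high-resolution regime)
N = 200000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]

# Uniform scalar quantizer: cell index is floor(x / delta).
counts = Counter(math.floor(x / delta) for x in xs)
H = -sum((c / N) * math.log2(c / N) for c in counts.values())  # empirical output entropy (bits)

# High-resolution approximation: H ~ h(X) - log2(delta),
# with h(X) = 0.5*log2(2*pi*e) bits for the standard Gaussian.
hires = 0.5 * math.log2(2 * math.pi * math.e) - math.log2(delta)
print(H, hires)
```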

A Beta-Beta Achievability Bound with Applications

Wei Yang (Princeton University, USA); Austin Collins (MIT, USA); Giuseppe Durisi (Chalmers University of Technology, Sweden); Yury Polyanskiy (MIT, USA); H. Vincent Poor (Princeton University, USA)

A channel coding achievability bound expressed in terms of the ratio between two Neyman-Pearson $\beta$ functions is proposed. This bound is the dual of a converse bound established earlier by Polyanskiy and Verd\'{u} (2014). The new bound turns out to simplify considerably the analysis in situations where the channel output distribution is not a product distribution, for example due to a cost constraint or a structural constraint (such as orthogonality or constant composition) on the channel inputs. Connections to existing bounds in the literature are discussed. The bound is then used to derive 1) an achievability bound on the channel dispersion of additive non-Gaussian noise channels with random Gaussian codebooks, 2) the channel dispersion of an exponential-noise channel, 3) a second-order expansion for the minimum energy per bit of an AWGN channel, and 4) a lower bound on the maximum coding rate of a multiple-input multiple-output Rayleigh-fading channel with perfect channel state information at the receiver, which is the tightest known achievability result.

Finite-Blocklength Bounds for Wiretap Channels

Wei Yang (Princeton University, USA); Rafael F. Schaefer (Technische Universität Berlin, Germany); H. Vincent Poor (Princeton University, USA)

This paper investigates the maximal secrecy rate over a wiretap channel subject to reliability and secrecy constraints at a given blocklength. New achievability and converse bounds are derived, which are shown to be tighter than existing bounds. The bounds also lead to the tightest second-order coding rate for discrete memoryless and Gaussian wiretap channels.

Computing the Optimal Exponent of Correct Decoding for Discrete Memoryless Sources

Yutaka Jitsumatsu (Kyushu University, Japan); Yasutada Oohama (University of Electro-Communications, Japan)

The form of Dueck and Körner's exponent function for the correct decoding probability of discrete memoryless channels at rates above capacity is similar to the form of Csiszár and Körner's exponent function for the correct decoding probability in lossy source coding of discrete memoryless sources at rates below the rate-distortion function. We recently gave a new algorithm for computing Dueck and Körner's exponent. In this paper, we give an algorithm for computing Csiszár and Körner's exponent. The proposed algorithm can also be used to compute the cutoff rate and the rate-distortion function.

Minimal Characterization and Provably Efficient Exhaustive Search Algorithm for Elementary Trapping Sets of Variable-Regular LDPC Codes

Yoones Hashemi Toroghi and Amir Banihashemi (Carleton University, Canada)

In this paper, we propose a new characterization and an efficient exhaustive search algorithm for elementary trapping sets (ETS) of variable-regular low-density parity-check (LDPC) codes. Recently, Karimi and Banihashemi proposed a characterization of ETSs, which was based on viewing an ETS as a layered superset (LSS) of a short cycle in the code’s Tanner graph. Compared to the LSS-based characterization, which is based on a single LSS expansion technique, the new characterization involves two additional expansion techniques. The introduction of the new techniques mitigates two problems that LSS-based characterization/search suffers from: (1) exhaustiveness: not every ETS structure is an LSS of a cycle, (2) search efficiency: LSS-based search algorithm often requires the enumeration of cycles with length much larger than the girth of the graph, where the multiplicity of such cycles increases rapidly with their length. We prove that using the three expansion techniques, any ETS structure can be obtained starting from a simple cycle, no matter how large the size of the structure $a$ or the number of its unsatisfied check nodes $b$ are, i.e., the characterization is exhaustive. We also demonstrate that for the proposed characterization to exhaustively cover all the ETS structures within the $(a,b)$ classes with $a \leq a_{max}, b \leq b_{max}$, for any value of $a_{max}$ and $b_{max}$, the maximum length of the required cycles is minimal. The proposed characterization corresponds to a provably efficient search algorithm, significantly more efficient than the LSS-based search.

On Network Simplification for Gaussian Half-Duplex Diamond Networks

Martina Cardone (University of Califonia, Los Angeles, USA); Christina Fragouli (UCLA, USA); Daniela Tuninetti (University of Illinois at Chicago, USA)

This paper investigates the simplification problem in Gaussian Half-Duplex (HD) diamond networks. The goal is to answer the following question: what is the minimum (worst-case) fraction of the total HD capacity that one can always achieve by smartly selecting a subset of $k$ relays, out of the $N$ possible ones? We make progress on this problem for $k=1$ and $k=2$ and show that for $N=k+1, \ k \in \{1,2\}$ at least $\frac{k}{k+1}$ of the total HD capacity is always approximately (i.e., up to a constant gap) achieved. Interestingly, and differently from the Full-Duplex (FD) case, the ratio in HD depends on $N$, and decreases as $N$ increases. For all values of $N$ and $k$ for which we derive worst case fractions, we also show these to be approximately tight. This is accomplished by presenting $N$-relay Gaussian HD diamond networks for which the best $k$-relay subnetwork has an approximate HD capacity equal to the worst-case fraction of the total approximate HD capacity. Moreover, we provide additional comparisons between the performance of this simplification problem for HD and FD networks, which highlight their different natures.

Estimation of entropy rate and Rényi entropy rate for Markov chains

Sudeep Kamath and Sergio Verdú (Princeton University, USA)

Estimation of the entropy rate of a stochastic process with unknown statistics, from a single sample path is a classical problem in information theory. While universal estimators for general families of processes exist, the estimates have not been accompanied by guarantees for fixed-length sample paths. We provide finite sample bounds on the convergence of a plug-in type estimator for the entropy rate of a Markov chain in terms of its alphabet size and its mixing properties. We also discuss Rényi entropy rate estimation for reversible Markov chains.
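A minimal sketch (ours) of a plug-in estimator for a two-state chain, comparing the estimate from a single sample path against the true entropy rate $\sum_i \pi_i H(P_i)$; the transition matrix below is illustrative:

```python
import math
import random

random.seed(0)
P = {0: (0.9, 0.1), 1: (0.2, 0.8)}          # two-state transition probabilities

def H2(p):
    """Entropy of a probability vector, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# True entropy rate: sum_i pi_i H(P_i), with stationary pi = (2/3, 1/3) for this chain.
true_rate = (2 / 3) * H2(P[0]) + (1 / 3) * H2(P[1])

# Plug-in estimate from a single sample path via empirical transition counts.
n = 200000
x, path = 0, []
for _ in range(n):
    path.append(x)
    x = 0 if random.random() < P[x][0] else 1

counts = {0: [0, 0], 1: [0, 0]}
for a, b in zip(path, path[1:]):
    counts[a][b] += 1

est = 0.0
for a in (0, 1):
    tot = sum(counts[a])
    est += (tot / (n - 1)) * H2([c / tot for c in counts[a]])
print(true_rate, est)
```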

Polar Codes and Polar Lattices for Independent Fading Channels

Ling Liu (Department of Electrical and Electronic Engineering Imperial College London, United Kingdom); Cong Ling (Imperial College London, United Kingdom)

In this paper, we design polar codes and polar lattices for i.i.d. fading channels when the channel state information is only available to the receiver. For the binary-input case, we propose a new design of polar codes through single-stage polarization to achieve the ergodic capacity. For the non-binary-input case, polar codes are further extended to polar lattices to achieve the ergodic Poltyrev capacity, i.e., the capacity without power limit. When the power constraint is taken into consideration, we show that polar lattices with lattice Gaussian shaping achieve the ergodic capacity of fading channels. The coding and shaping are both explicit, and the overall complexity of encoding and decoding is $O(N \log^2 N)$.

Codes in the Damerau Distance for DNA Storage

Ryan Gabrys (UIUC, USA); Eitan Yaakobi (Technion, Israel); Olgica Milenkovic (UIUC, USA)

We introduce the new problem of code design in the Damerau metric. The Damerau metric is a generalization of the Levenshtein distance which also allows for adjacent transposition edits. We first provide constructions for codes that may correct either a single deletion or a single adjacent transposition and then proceed to extend these results to codes that can simultaneously correct a single deletion and multiple adjacent transpositions.
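The underlying string metric can be computed by the standard restricted Damerau-Levenshtein dynamic program; a sketch for illustration (this variant also allows substitutions, unlike the pure deletion/transposition edits studied in the paper):

```python
def damerau_distance(s, t):
    """Restricted Damerau-Levenshtein distance: insertions, deletions,
    substitutions, and adjacent transpositions."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent transposition
    return d[m][n]

print(damerau_distance("0110", "1010"), damerau_distance("0110", "010"))
```

Here "0110" and "1010" differ by one adjacent transposition, and "0110" and "010" by one deletion.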

Content Delivery in Erasure Broadcast Channels with Cache and Feedback

Asma Ghorbel (CentraleSupelec, France); Mari Kobayashi (Supelec, France); Sheng Yang (Supélec, France)

We study the content delivery in the context of a K-user erasure broadcast channel such that a content providing server wishes to deliver requested files to users, each equipped with a cache of a finite memory. Assuming that the transmitter has state feedback and user caches can be filled during off-peak hours reliably by decentralized cache placement, we characterize the achievable rate region as a function of the memory sizes and the erasure probabilities. The proposed delivery scheme, based on the broadcasting scheme proposed by Wang and Gatzianas et al., exploits the receiver side information established during the placement phase. Our results can be extended to centralized cache placement as well as multi-antenna broadcast channels with state feedback.

On the Stationary Distribution of Asymmetric Binary Systems

Hidetoshi Yokoo (Gunma University, Japan)

This paper proposes an approximation to the stationary distribution of the states in Duda’s ABS entropy coder. While arithmetic coders represent a codeword by an interval of numbers, the ABS encoder represents its inner state by a single number. This paper proves that the proposed approximation to the state distribution converges to the true stationary distribution in the limit of a parameter of ABS. This leads to a rigorous proof of the fact that the rate of ABS asymptotically attains the source entropy.

On the Minimum Mean p-th Error in Gaussian Noise Channels and its Applications

Alex Dytso (University of Illinois at Chicago, USA); Ronit Bustin (Tel Aviv University, Israel); Daniela Tuninetti and Natasha Devroye (University of Illinois at Chicago, USA); H. Vincent Poor (Princeton University, USA); Shlomo (Shitz) Shamai (The Technion, Israel)

The problem of estimating an arbitrary random variable from its observation corrupted by additive white Gaussian noise, where the cost function is taken to be the minimum mean p-th error (MMPE), is considered. The classical minimum mean square error (MMSE) is a special case of the MMPE. Several bounds and properties of the MMPE are derived and discussed. As applications of the new MMPE bounds, this paper presents: (a) a new upper bound for the MMSE that complements the ‘single-crossing point property’ for all SNR values below a certain value at which the MMSE is known, (b) an improved characterization of the phase-transition phenomenon which manifests, in the limit as the length of the capacity-achieving code goes to infinity, as a discontinuity of the MMSE, and (c) new bounds on the second derivative of mutual information, or the first derivative of the MMSE, that tighten previously known bounds.
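For p = 2 the MMPE reduces to the familiar MMSE, which for a standard Gaussian input X observed as Y = sqrt(snr)·X + N equals 1/(1+snr). The Monte Carlo sketch below (our illustration, not from the paper) checks this closed form against the conditional-mean estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
snr = 1.0
n = 200_000

x = rng.standard_normal(n)                      # standard Gaussian input
y = np.sqrt(snr) * x + rng.standard_normal(n)   # AWGN observation

# Conditional-mean (MMSE-optimal) estimator for this jointly Gaussian pair
x_hat = np.sqrt(snr) / (1.0 + snr) * y
mmse_mc = np.mean((x - x_hat) ** 2)
mmse_th = 1.0 / (1.0 + snr)
print(mmse_mc, mmse_th)  # the two agree to within Monte Carlo error
```
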

On the Entropy of Physically Unclonable Functions

Olivier Rioul (Telecom ParisTech & Ecole Polytechnique, France); Patrick Solé (Telecom Paristech, France); Sylvain Guilley and Jean-Luc Danger (Telecom ParisTech & Secure IC, France)

A physically unclonable function (PUF) is a hardware device that can generate intrinsic responses from challenges. The responses serve as unique identifiers, and it is required that they be as unpredictable as possible. A loop-PUF is an architecture in which $n$ single-bit delay elements are chained. Each PUF generates a one-bit response per challenge.
We model the relationship between responses and challenges in a loop-PUF using Gaussian random variables and give a closed-form expression of the total entropy of the responses. It is shown that $n$ bits of entropy can be obtained with $n$ challenges if and only if the challenges constitute a Hadamard code. Contrary to a previous belief, it is shown that adding more challenges results in an entropy strictly greater than $n$ bits. A greedy code construction is provided for this purpose.
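The Hadamard condition can be made concrete with the standard Sylvester construction: for $n$ a power of two, the rows of the matrix below are mutually orthogonal $\pm 1$ challenge vectors, which is the Hadamard-code property the entropy result relies on. This is a textbook construction shown only as an illustration.

```python
import numpy as np

def sylvester_hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of two).
    Its rows serve as mutually orthogonal +/-1 challenge vectors."""
    assert n >= 1 and n & (n - 1) == 0
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

H = sylvester_hadamard(8)
# Orthogonality check: H H^T = n I
print(np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int)))  # True
```
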

Optimization of Time-Switching Energy Harvesting Receivers over Multiple Transmission Blocks

Zhengwei Ni and Mehul Motani (National University of Singapore, Singapore)

Compared with energy-harvesting transmitters, the performance of energy-harvesting receivers has not been fully investigated. The main energy consumption at a transmitter is for transmission, while that at a receiver is for information decoding; hence, the analysis and optimization of energy-harvesting transmitters and receivers are inherently different. This paper considers end-to-end communication with an energy-harvesting receiver. The receiver has a time-switching architecture and can harvest energy from both the dedicated transmitter and other ambient radio-frequency (RF) sources. Since the receiver's antenna usually operates over a range of frequencies larger than the band allocated for communication between the transmitter and receiver, it can also harvest energy from the frequency bands used by other communications. In each block, the receiver first harvests energy and then receives information for decoding. Assuming that the energy consumed by processing other than decoding is negligible, and that the energy consumed by decoding is a non-decreasing convex function of the normalized code rate, we formulate a non-convex optimization problem to maximize the amount of information decoded over multiple blocks. We then convert it into a convex problem and solve it. Finally, we provide a numerical example to validate the accuracy of our analysis and compare our scheme with two other suboptimal schemes.

Simplifying Wireless Social Caching

Mohammed Karmoose (UCLA, USA); Martina Cardone (University of California, Los Angeles, USA); Christina Fragouli (UCLA, USA)

Social groups give the opportunity for a new form of caching. In this paper, we investigate how a social group of users can jointly optimize bandwidth usage, by each caching parts of the data demand, and then opportunistically share these parts among them upon meeting. We formulate this problem as a Linear Program (LP) with exponential complexity. Based on the optimal solution, we propose a simple heuristic inspired by the bipartite set-cover problem that operates in polynomial time. Furthermore, we prove a worst case gap between the heuristic and the LP solutions. Finally, we assess the performance of our algorithm using real-world mobility traces from the MIT Reality Mining project dataset.

On the Capacity of Multilevel NAND Flash Memory Channels

Yonglong Li (The University of Hong Kong, Hong Kong); Aleksandar Kavcic (University of Hawaii, USA); Guangyue Han (The University of Hong Kong, Hong Kong)

In this paper, we initiate an information-theoretic study of multilevel NAND flash memory channels~\cite{kavcic2014} with intercell interference. More specifically, for a multilevel NAND flash memory channel under mild assumptions, we first prove that such a channel is indecomposable and satisfies the asymptotic equipartition property; we then prove that stationary processes achieve its information capacity and, consequently, that its Markov capacity converges to its information capacity as the Markov order tends to infinity; finally, we establish that its operational capacity equals its information capacity. Our results suggest that the ideas and techniques developed for computing the capacity of finite-state channels, which are comparatively well explored, can plausibly be applied to computing the capacity of multilevel NAND flash memory channels.

Encoding Semiconstrained Systems

Ohad Elishco, Tom Meyerovitch and Moshe Schwartz (Ben-Gurion University of the Negev, Israel)

Semiconstrained systems were recently suggested as a generalization of constrained systems, commonly used in communication and data-storage applications that require certain offending subsequences be avoided. In an attempt to apply techniques from constrained systems, we study sequences of constrained systems that are contained in, or contain, a given semiconstrained system, while approaching its capacity. In the case of contained systems we describe two such sequences, resulting in constant-to-constant bit-rate block encoders and sliding-block encoders. Surprisingly, in the case of containing systems we show that a “generic” semiconstrained system is never contained in a proper fully-constrained system.

Information Decomposition on Structured Space

Mahito Sugiyama (Osaka University, Japan); Hiroyuki Nakahara (RIKEN Brain Science Institute, Japan); Koji Tsuda (The University of Tokyo, Japan)

We build information geometry for a partially ordered set of variables and define the orthogonal decomposition of information theoretic quantities. The natural connection between information geometry and order theory leads to efficient decomposition algorithms. This generalization of Amari’s seminal work on hierarchical decomposition of probability distributions on event combinations enables us to analyze high-order statistical interactions arising in neuroscience, biology, and machine learning.

Performance Analysis of Fault Erasure Belief Propagation Decoder based on Density Evolution

Hiroki Mori and Tadashi Wadayama (Nagoya Institute of Technology, Japan)

In this paper, we present an analysis of the fault erasure BP decoder based on density evolution. In the fault BP decoder, messages exchanged in a BP process are stochastically corrupted due to unreliable logic gates and flip-flops; i.e., we assume circuit components with transient faults. We derive a set of density evolution equations for the fault erasure BP processes. Our density evolution analysis reveals the asymptotic behavior of the estimation error probability of the fault erasure BP decoder. In contrast to the fault-free case, we observe that the error probability of the fault BP decoder converges to a positive value, and that there exists a discontinuity in the error curve corresponding to the fault BP threshold. It is also shown that a message encoding technique provides higher fault BP thresholds than those of the original decoders, at the cost of increased circuit size.

Signature codes for the A-channel and collusion-secure multimedia fingerprinting codes

Grigory Kabatiansky (IITP, Moscow, Russia); Marcel Fernández (Technical University of Catalonia, Spain); Moon Ho Lee (Chonbuk National University, Korea); Elena Egorova (IITP RAS, Russia)

We consider collusion-resistant fingerprinting codes for multimedia content. We show that the corresponding IPP-codes may trace {\it all} guilty users and at the same time have exponentially many code words. We also establish an equivalence between signature codes for the A-channel and multimedia fingerprinting codes and prove that the rate of the best $t$-signature codes for A-channel is at least $\Theta(t^{-2})$. Finally, we construct a family of $t$-signature codes for the A-channel with polynomial decoding complexity and rate $\Theta(t^{-3}).$

Bounds on Asymptotic Rate of Capacitive Crosstalk Avoidance Codes for On-chip Buses

Tadashi Wadayama and Taizuke Izumi (Nagoya Institute of Technology, Japan)

In order to prevent capacitive crosstalk in on-chip buses, several types of capacitive crosstalk avoidance codes have been devised. These codes are designed to eliminate transition patterns prone to capacitive crosstalk from any two consecutive words transmitted on the bus. This paper provides a rigorous analysis of the asymptotic rate of $(p,q)$-transition free word sequences under the assumption that coding is based on a pair of a stateful encoder and a stateless decoder. The symbols $p$ and $q$ represent $k$-bit transition patterns that must not appear in any two consecutive words at the same adjacent $k$-bit positions. It is proved that the maximum rate of such sequences equals the subgraph domatic number of the $(p,q)$-transition free graph. Based on theoretical results on the subgraph domatic partition problem, a pair of lower and upper bounds on the asymptotic rate is derived. We also show that the asymptotic rate $-2 + \log_2 \left(3 + \sqrt{17} \right) \simeq 0.8325$ is achievable for $p={\tt 01} \leftrightarrow q={\tt 10}$ transition free word sequences.
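The forbidden-transition constraint for the ${\tt 01} \leftrightarrow {\tt 10}$ case can be sketched as a simple checker on consecutive bus words, and the closed-form rate can be evaluated numerically. The helper name and word encoding are our own illustrative assumptions.

```python
from math import log2, sqrt

def violates_01_10(w1: str, w2: str) -> bool:
    """Return True if the bus transition w1 -> w2 contains the forbidden
    pattern: some adjacent bit pair changing from 01 to 10 or from 10 to 01
    (the p = 01 <-> q = 10 transitions of the abstract)."""
    assert len(w1) == len(w2)
    for i in range(len(w1) - 1):
        a, b = w1[i:i + 2], w2[i:i + 2]
        if (a, b) in {("01", "10"), ("10", "01")}:
            return True
    return False

print(violates_01_10("010", "100"))  # True: the pair at position (0,1) goes 01 -> 10
print(violates_01_10("000", "110"))  # False: no adjacent pair flips 01 <-> 10

# Numerical value of the achievable asymptotic rate quoted in the abstract
print(round(-2 + log2(3 + sqrt(17)), 4))  # 0.8325
```
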

A Polynomial-Time Algorithm for Pliable Index Coding

Linqi Song (University of California, Los Angeles, USA); Christina Fragouli (UCLA, USA)

Pliable index coding considers a server with m messages and n clients, where each client has as side information a subset of the messages. We seek to minimize the number of transmissions the server must make so that each client receives (any) one message she does not already have. Previous work has shown that the server can achieve this using O(\log^2(n)) transmissions and needs at least \Omega(\log(n)) transmissions in the worst case, but finding a code of optimal length is NP-hard. In this paper, we propose a polynomial-time algorithm that always requires at most O(\log^2(n)) transmissions, i.e., is almost worst-case optimal. We also establish a connection between the pliable index coding problem and the minrank problem over a family of mixed matrices.
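The key decoding observation is that an XOR of a message subset S serves exactly those clients missing one element of S. A brute-force greedy sketch built on that observation (our illustration, not the paper's polynomial-time algorithm) looks as follows.

```python
from itertools import combinations

def serves(S, side):
    """A client holding side information `side` decodes a new message from
    the XOR of message set S iff exactly one element of S is unknown to it."""
    return len(set(S) - set(side)) == 1

def greedy_pliable_code(m, side_info):
    """Greedy sketch for pliable index coding: repeatedly transmit the XOR of
    the message subset satisfying the most still-unserved clients (brute force
    over all subsets, so only practical for tiny m)."""
    unserved = set(range(len(side_info)))
    transmissions = []
    subsets = [S for r in range(1, m + 1) for S in combinations(range(m), r)]
    while unserved:
        best = max(subsets, key=lambda S: sum(serves(S, side_info[i]) for i in unserved))
        newly = {i for i in unserved if serves(best, side_info[i])}
        if not newly:  # a client already holding all m messages can never be served
            break
        transmissions.append(best)
        unserved -= newly
    return transmissions

# Three messages, four clients with different side-information sets.
tx = greedy_pliable_code(3, [{0}, {1}, {2}, {0, 1}])
print(tx)  # two transmissions suffice for this instance
```
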

Integrated Parallel Interleaved Concatenation for Lowering Error Floors of LDPC Codes

Naoaki Kokubun and Hironori Uchikawa (Toshiba Corporation, Japan)

In order to suppress error floors of low-density parity-check (LDPC) codes, we study a parallel concatenation scheme with an intermediate single parity check (SPC) code and interleavers. The interleavers of the concatenation play an important role in lowering error floors, because the interleaved copies of bits in a trapping set are likely to remain error-free even when the bits in the trapping set are originally in error. For quasi-cyclic LDPC codes, our simulation results confirm that the integrated parallel interleaved concatenation with circulant-size cyclic-shift interleavers improves error-floor performance, and that it outperforms serial concatenation schemes with BCH codes. If a decoder can afford higher hardware complexity, the decoding performance of our scheme improves so that the error floor can be lowered further.

On the Capacity of the Dirty Paper Channel with Fast Fading and Discrete Channel States

Stefano Rini (National Chiao Tung University, Taiwan); Shlomo (Shitz) Shamai (The Technion, Israel)

Interference pre-cancellation, as in the “writing onto dirty paper” channel, crucially depends on the transmitter having exact knowledge of the way in which input and channel state combine to produce the channel output. The presence of even a small amount of uncertainty in such knowledge gravely hampers the ability of the encoder to pre-code its transmissions against the channel state. This is particularly disappointing, as it implies that interference pre-coding in practical systems is effective only when the channel estimates have very high precision, a condition which is generally unattainable in wireless environments. In this paper we show that state decoding, instead of state pre-cancellation, can be approximately optimal for a channel with discrete states when only partial channel knowledge is available. More specifically, we consider a variation of the “writing onto dirty paper” channel in which a discrete-valued state sequence is multiplied by a fast fading process, and derive conditions on the fading distribution under which state decoding closely approaches capacity. This channel model is a special case of the Gel'fand-Pinsker channel, and our results exhibit an instance of this problem in which state decoding is approximately optimal.

Information Theoretic Caching: The Multi-User Case

Sung Hoon Lim (EPFL, Switzerland); Chien-Yi Wang (Télécom ParisTech, France); Michael Gastpar (EPFL & University of California, Berkeley, Switzerland)

In this paper, we present information theoretic inner and outer bounds on the fundamental tradeoff between cache memory size and update rate in a multi-user cache network. Each user is assumed to have an individual cache, while upon users’ requests, an update message is sent through a common link to all users. The database is represented as a discrete memoryless source and the user request information is represented as side information that is available at the decoders and the update encoder, but oblivious to the cache encoder. We establish two inner bounds, the first based on a centralized caching strategy and the second based on a decentralized caching strategy. For the case when the user requests are i.i.d. with the uniform distribution, we show that the performance of the decentralized inner bound is within a multiplicative gap of 4 from the optimal cache–rate tradeoff. For general request distributions, we numerically compare the bounds and the baseline uncoded strategy, caching the most popular files.

Systematic Block Markov Superposition Transmission of Repetition Codes

Kechao Huang and Xiao Ma (Sun Yat-sen University, P.R. China); Baoming Bai (Xidian University, P.R. China)

In this paper, we propose systematic block Markov superposition transmission of repetition (BMST-R) codes, which can support a wide range of code rates but maintain essentially the same encoding/decoding hardware structure. The systematic BMST-R codes resemble the classical rate-compatible punctured convolutional~(RCPC) codes, except that they are typically non-decodable by the Viterbi algorithm due to the huge constraint length induced by the block-oriented encoding process. By taking into account that the codes are systematic, the performance of systematic BMST-R codes under maximum {\em a posteriori}~(MAP) decoding can be analyzed with a simple lower bound and an upper bound with the help of partial input-redundancy weight enumerating function~(IRWEF). Numerical results verify our analysis and show that systematic BMST-R codes perform well in a wide range of code rates.

Finite-Length Scaling Based on Belief Propagation for Spatially Coupled LDPC Codes

Markus Stinner (Technische Universität München, Germany); Luca Barletta (Politecnico di Milano, Italy); Pablo M. Olmos (Universidad Carlos III de Madrid, Spain)

The equivalence of peeling decoding (PD) and belief propagation (BP) for low-density parity-check (LDPC) codes over the binary erasure channel is analyzed. Modifying the scheduling for PD, it is shown that exactly the same variable nodes (VNs) are resolved in every iteration as with BP. Instead of resolvable equations, the decrease in the number of erased VNs during the decoding process is analyzed. Finally, a scaling law using this quantity is established for spatially coupled LDPC codes.

The $\rho$-Capacity of a Graph

Sihuang Hu and Ofer Shayevitz (Tel Aviv University, Israel)

Motivated by the problem of zero-error broadcasting, we introduce a new notion of graph capacity, termed $\rho$-capacity, that generalizes the Shannon capacity of a graph. We derive upper and lower bounds on the $\rho$-capacity of arbitrary graphs, and provide a tighter upper bound for regular graphs. The $\rho$-capacity is employed to characterize the zero-error capacity region of the degraded broadcast channel.

Ginibre Sampling and Signal Reconstruction

Flavio Zabini (University of Bologna, Italy); Andrea Conti (ENDIF University of Ferrara, WiLAB University of Bologna, Italy)

The spatial distribution of sensing nodes plays a crucial role in signal sampling and reconstruction via wireless sensor networks. Although the homogeneous Poisson point process (PPP) model is widely adopted for its analytical tractability, it cannot be considered a proper model for all deployments of sensing nodes.
The Ginibre point process (GPP) is a class of determinantal point processes that has been recently proposed for wireless networks with repulsiveness between nodes. A modified GPP can be considered an intermediate class between the PPP (fully random) and the GPP (relatively regular) that can be derived as limiting cases. In this paper we analyze sampling and reconstruction of finite-energy signals in $\mathbb{R}^d$ when samples are gathered in space according to a determinantal point process whose second order product density function generalizes to $\mathbb{R}^d$ that of a modified GPP in $\mathbb{R}^2$.
We derive closed-form expressions for the sampled signal energy spectral density (ESD) and for the signal reconstruction mean square error (MSE). Results known in the literature are shown to be sub-cases of the proposed framework. The proposed analysis is also able to answer the fundamental question: does the higher regularity of the GPP imply higher signal reconstruction accuracy, as intuition suggests? Theoretical results are illustrated through a simple case study.

Constructing Valid Convex Hull Inequalities for Single Parity-Check Codes Over Prime Fields

Eirik Rosnes (University of Bergen, Norway); Michael Helmling (Fraunhofer Institute for Industrial Mathematics ITWM, Germany)

In this work, we present an explicit construction of valid inequalities (using no auxiliary variables) for the convex hull of the so-called constant-weight embedding of a single parity-check (SPC) code over any prime field. The construction is based on classes of building blocks that are assembled to form the left-hand side of an inequality according to several rules. In the case of almost doubly-symmetric valid classes we prove that the resulting inequalities are all facet-defining, while we conjecture this to be true if and only if the class is valid and symmetric. Such sets of inequalities have not appeared in the literature before, have a strong theoretical interest, and can be used to develop an efficient (relaxed) adaptive linear programming decoder for general (non-SPC) linear codes over prime fields.

Deep Convolutional Neural Networks on Cartoon Functions

Philipp Grohs (ETH Zuerich, Switzerland); Thomas Wiatowski and Helmut Bölcskei (ETH Zurich, Switzerland)

Wiatowski and Boelcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities. While the translation invariance result applies to square-integrable functions, the deformation stability bound holds for band-limited functions only. Many signals of practical relevance (such as natural images) exhibit, however, sharp and curved discontinuities and are hence not band-limited. The main contribution of this paper is a deformation stability result that takes these structural properties into account. Specifically, we establish deformation stability bounds for the class of cartoon functions introduced by Donoho, 2001.

Deterministic and Ensemble-Based Spatially-Coupled Product Codes

Christian Häger (Chalmers University of Technology, Sweden); Henry D Pfister (Duke University, USA); Alexandre Graell i Amat and Fredrik Brännström (Chalmers University of Technology, Sweden)

Several authors have proposed spatially-coupled (or convolutional-like) variants of product codes (PCs). In this paper, we focus on a parametrized family of generalized PCs that recovers some of these codes (e.g., staircase and block-wise braided codes) as special cases and study the iterative decoding performance over the binary erasure channel. Even though our code construction is deterministic (and not based on a randomized ensemble), we show that it is still possible to rigorously derive the density evolution (DE) equations that govern the asymptotic performance. The obtained DE equations are then compared to those for a related spatially-coupled PC ensemble. In particular, we show that there exists a family of (deterministic) braided codes that follows the same DE equation as the ensemble, for any spatial length and coupling width.

Affine-malleable Extractors, Spectrum Doubling, and Application to Privacy Amplification

Divesh Aggarwal (EPFL, Switzerland); Kaave Hosseini and Shachar Lovett (UCSD, USA)

The study of seeded randomness extractors is a major line of research in theoretical computer science. The goal is to construct deterministic algorithms which can take a “weak” random source $X$ with min-entropy $k$ and a uniformly random seed $Y$ of length $d$, and outputs a string of length close to $k$ that is close to uniform and independent of $Y$. Dodis and Wichs~\cite{DW09} introduced a generalization of randomness extractors called non-malleable extractors ($\mathsf{nmExt}$) where $\mathsf{nmExt}(X,Y)$ is close to uniform and independent of $Y$ and $\mathsf{nmExt}(X,f(Y))$ for any function $f$ with no fixed points.
We relax the notion of a non-malleable extractor and introduce what we call an affine-malleable extractor ($\mathsf{AmExt}: \mathbb{F}^n \times \mathbb{F}^d \mapsto \mathbb{F}$), where $\mathsf{AmExt}(X,Y)$ is close to uniform and independent of $Y$ and has only a limited dependence on $\mathsf{AmExt}(X,f(Y))$: conditioned on $Y$, $(\mathsf{AmExt}(X,Y), \mathsf{AmExt}(X,f(Y)))$ is $\epsilon$-close to $(U, A \cdot U + B)$, where $U$ is uniformly distributed in $\mathbb{F}$ and $A, B \in \mathbb{F}$ are random variables independent of $U$.
We show that the inner-product function $\langle{\cdot,\cdot}\rangle:\mathbb{F}^n \times \mathbb{F}^n \mapsto \mathbb{F}$ is an affine-malleable extractor for min-entropy $k = n/2 + \Omega(\log (1/\epsilon))$. Moreover, under a plausible conjecture in additive combinatorics (called the Spectrum Doubling Conjecture), we show that this holds for $k = \tilde{\Omega}(\log n + \log (1/\epsilon))$. As a modest justification of the conjecture, we show that a weaker version of the conjecture is implied by the widely believed Polynomial Freiman-Ruzsa conjecture.
We also study the classical problem of privacy amplification, where two parties Alice and Bob share a weak secret $X$ of min-entropy $k$, and wish to agree on secret key $R$ of length $m$ over a public communication channel completely controlled by a computationally unbounded attacker Eve. The main application of non-malleable extractors and its many variants has been in constructing secure privacy amplification protocols.
We show that affine-malleable extractors along with affine-evasive sets can also be used to construct efficient privacy amplification protocols. This gives a much simpler protocol for min-entropy $k = n/2 + \Omega (\log (1/\epsilon))$, and additionally, under the Spectrum Doubling Conjecture, achieves near optimal parameters and achieves additional security properties like source privacy that have been the focus of some recent results in privacy amplification.

Quickest Detection of Markov Networks

Javad Heydari and Ali Tajer (Rensselaer Polytechnic Institute, USA); H. Vincent Poor (Princeton University, USA)

Detecting correlation structures in large networks arises in many domains. Such detection problems are often studied independently of the underlying data acquisition process, resulting in settings in which the data acquisition policy and the associated sample size are pre-specified. Motivated by the advantages of data-adaptive sampling for data dimensionality reduction in large networks, as well as for enhancing the agility of the sampling process, this paper treats the inherently coupled problems of data acquisition and correlation detection jointly. Specifically, this paper considers a network of nodes generating random variables and designs the quickest sequential sampling strategy for collecting data and reliably deciding whether the network is a Markov network with a known correlation structure. By abstracting the Markov network as an undirected graph, in which the vertices represent the random variables and their connectivities model the correlation structure of interest, designing the quickest sampling strategy becomes equivalent to sequentially and data-adaptively identifying and sampling a sequence of vertices in the graph. Optimal sampling strategies are proposed and their associated optimality guarantees are established. Performance evaluations are provided to demonstrate the gains of the proposed sequential approaches.

Strong converse theorems using Rényi entropies

Felix Leditzky (University of Cambridge, United Kingdom); Mark M Wilde (Louisiana State University, USA); Nilanjana Datta (Cambridge, United Kingdom)

We use a Rényi entropy method to prove a strong converse theorem for the task of quantum state redistribution. More precisely, we establish the strong converse property for the boundary of the entire achievable rate region in the $(e,q)$-plane, where the entanglement cost $e$ and quantum communication cost $q$ are the operational rates describing a state redistribution protocol. The strong converse property is deduced from explicit bounds on the fidelity of the protocol in terms of a Rényi generalization of the optimal rates. Hence, we identify candidates for the strong converse exponents for entanglement cost $e$ and quantum communication cost $q$, respectively. To prove our results, we establish various new entropic inequalities, which might be of independent interest. These involve conditional entropies and mutual information derived from the sandwiched Rényi divergence. In particular, we obtain novel bounds relating these quantities to the fidelity of two quantum states.

Entanglement Assisted Classical Capacity of Compound Quantum Channels

Stephan Kaltenstadler (Technical University of Munich, Germany); Gisbert Janßen (Technische Universität München, Germany); Holger Boche (Technical University Munich, Germany)

We consider the task of entanglement assisted message transmission in the presence of a compound memoryless quantum channel. In this model, the completely positive and trace preserving map governing the channel statistics is, instead of being perfectly known, only revealed to be a member of a certain set of channels. Therefore, coding schemes must be used that are simultaneously reliable for each member of this set. Utilizing universal codes for classical-quantum channels, we introduce optimal universal coding schemes for entanglement assisted message transmission over compound quantum channels. The resulting coding theorem, together with a corresponding converse statement, leads us to a single-letter expression for the entanglement assisted message transmission capacity of compound quantum channels.

Sub-Quadratic Decoding of Gabidulin Codes

Sven Puchinger (Ulm University, Germany); Antonia Wachter-Zeh (Technion – Israel Institute of Technology, Israel)

This paper shows how to decode errors and erasures with Gabidulin codes in sub-quadratic time in the code length, improving previous algorithms which had at least quadratic complexity. The complexity reduction is achieved by accelerating operations on linearized polynomials. In particular, we present fast algorithms for division, multi-point evaluation and interpolation of linearized polynomials and show how to efficiently compute minimal subspace polynomials.

Bounds on the communication rate needed to achieve SK capacity in the hypergraphical source model

Manuj Mukherjee (Indian Institute of Science, India); Chung Chan (The Chinese University of Hong Kong, Hong Kong); Navin Kashyap (Indian Institute of Science, India); Qiaoqiao Zhou (The Chinese University of Hong Kong, Hong Kong)

In the multiterminal source model of Csiszár and Narayan, the communication complexity, $R_{\text{SK}}$, for secret key (SK) generation is the minimum rate of communication required to achieve SK capacity. An obvious upper bound to $R_{\text{SK}}$ is given by $R_{\text{CO}}$, which is the minimum rate of communication required for \emph{omniscience}. In this paper we derive a better upper bound to $R_{\text{SK}}$ for the hypergraphical source model, which is a special instance of the multiterminal source model. The upper bound is based on the idea of fractional removal of hyperedges. It is further shown that this upper bound can be computed in polynomial time. We conjecture that our upper bound is tight. For the special case of a graphical source model, we also give an explicit lower bound on $R_{\text{SK}}$. This bound, however, is not tight, as demonstrated by a counterexample.

Consistency of the Plug-In Estimator of the Entropy Rate for Ergodic Processes

Lukasz Jerzy Debowski (Polish Academy of Sciences, Poland)

A plug-in estimator of entropy is the entropy of the distribution in which probabilities of symbols or blocks have been replaced with their relative frequencies in the sample. Consistency and asymptotic unbiasedness of the plug-in estimator can be easily demonstrated in the IID case. In this paper, we ask whether the plug-in estimator can be used for consistent estimation of the entropy rate $h$ of a stationary ergodic process. The answer is positive if, to estimate block entropy of order $k$, we use a sample longer than $2^{k(h+\epsilon)}$, whereas it is negative if we use a sample shorter than $2^{k(h-\epsilon)}$. In particular, if we do not know the entropy rate $h$, it is sufficient to use a sample of length $(|\mathbb{X}|+\epsilon)^{k}$ where $|\mathbb{X}|$ is the alphabet size. The result is derived using $k$-block coding. As a by-product of our technique, we also show that the block entropy of a stationary process is bounded above by a nonlinear function of the average block entropy of its ergodic components. This inequality can be used for an alternative proof of the known fact that the entropy rate of a stationary process equals the average entropy rate of its ergodic components.
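The plug-in block-entropy estimate described above can be sketched in a few lines: compute the empirical entropy of overlapping $k$-blocks and divide by $k$. This illustrates the estimator itself, not the paper's sample-length analysis.

```python
from collections import Counter
from math import log2

def plugin_entropy_rate(sample, k):
    """Plug-in estimate of the entropy rate: empirical entropy of
    overlapping k-blocks, normalized by the block length k."""
    blocks = [tuple(sample[i:i + k]) for i in range(len(sample) - k + 1)]
    n = len(blocks)
    counts = Counter(blocks)
    h_k = -sum((c / n) * log2(c / n) for c in counts.values())
    return h_k / k

# A constant source has entropy rate 0; the plug-in estimate is exact here.
print(plugin_entropy_rate([0] * 1000, k=5))               # 0.0
# An alternating source also has entropy rate 0, but the estimate only
# decays as H_k/k = 1/k, illustrating why long samples / large k matter.
print(round(plugin_entropy_rate([0, 1] * 500, k=4), 2))   # 0.25
```
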

Explicit constructions of MDS array codes and RS codes with optimal repair bandwidth

Min Ye (UMD, USA); Alexander Barg (University of Maryland, USA)

Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure tolerance for a given number of parity nodes. Dimakis et al. showed that in order to recover the failure of a single node in an MDS code with r parity nodes, at least a 1/r fraction of the data stored in each of the surviving nodes is required. An MDS code is said to have the optimal repair property if this lower bound is achieved when repairing any single node failure. We study high-rate MDS codes with the optimal repair property. Explicit constructions of such codes in the literature are only available for the cases where there are at most 3 parity nodes. In this paper, we give explicit constructions of MDS codes with the optimal repair property for any r and any code length n. We also consider the case when only d surviving nodes are contacted for the repair of a single node failure, where n-r<=d<n. We construct explicit MDS array codes that achieve the lower bound on the repair bandwidth for any n, r and d. Finally, we give an explicit construction of a Reed-Solomon code with the asymptotically optimal repair property.

Dynamic Signaling Games under Nash and Stackelberg Equilibria

Serkan Sarıtaş (Bilkent University, Turkey); Serdar Yüksel (Queen’s University, Canada); Sinan Gezici (Bilkent University, Turkey)

In this study, dynamic and repeated quadratic cheap talk and signaling game problems are investigated. These involve an encoder and a decoder with mismatched performance objectives, where the encoder has a bias term in the quadratic cost functional. We consider both Nash equilibria and Stackelberg equilibria as our solution concepts, under a perfect Bayesian formulation. These two lead to drastically different characteristics for the equilibria. For the cheap talk problem under Nash equilibria, we show that fully revealing equilibria cannot exist and the final state equilibria have to be quantized for a large class of source models; whereas, for the Stackelberg case, the equilibria must be fully revealing regardless of the source model. In the dynamic signaling game where the transmission of a Gaussian source over a Gaussian channel is considered, the equilibrium policies are always linear for scalar sources under Stackelberg equilibria, and affine policies constitute an invariant subspace under best response maps for Nash equilibria.

Privacy-Constrained Remote Source Coding

Kittipong Kittichokechai and Giuseppe Caire (Technische Universität Berlin, Germany)

We consider the problem of revealing/sharing data in an efficient and secure way via a compact representation. The representation should ensure reliable reconstruction of the desired features/attributes while still preserving privacy of the secret parts of the data. The problem is formulated as a remote lossy source coding problem with a privacy constraint where the remote source consists of public and secret parts. Inner and outer bounds for the optimal tradeoff region of compression rate, distortion, and privacy leakage rate are given and shown to coincide for some special cases. When specializing the distortion measure to a logarithmic loss function, the resulting rate-distortion-leakage tradeoff for the case of identical side information forms an optimization problem which corresponds to the “secure” version of the so-called information bottleneck.

Construction of Polar Codes for Arbitrary Discrete Memoryless Channels

Talha Cihad Gulcu (University of Maryland, USA); Min Ye (UMD, USA); Alexander Barg (University of Maryland, USA)

We consider the construction problem of polar codes for general q-ary alphabets, analyzing different procedures that rely on the reduction of the alphabet of subchannels appearing in the code construction. As our first result, we estimate the capacity loss incurred by replacing a pair of output symbols with a single symbol (symbol merging). This enables us to propose an approximation algorithm for constructing polar codes for a variety of polarizing operations. The approximation error (capacity loss) of the merging step is at most $O((1/\mu)^{1/(q-1)})$ and the complexity of code construction is bounded above by $O(N\mu^4)$, where $\mu$ is the maximum size of the subchannel alphabet permitted by the algorithm. We also show that if the polarizing operation relies on modulo-q addition, it is possible to merge subsets of output symbols without any loss in subchannel capacity. Performing this procedure before each approximation step results in a further speed-up of the code construction, and the resulting codes have smaller gap to capacity. We show that a similar speed-up can be attained for polar codes over finite field alphabets.

Hierarchy of Three-Party Consistency Specifications

Daniel Tschudi, Julian Loss and Ueli Maurer (ETH Zurich, Switzerland)

In the theory of distributed systems and in cryptography one considers a set of n parties which must securely perform a certain computation, even if some of the parties are dishonest. Broadcast, one of the most fundamental and widely used such primitives, allows one (possibly cheating) party to distribute a value m consistently to the other parties, in a context where only bilateral (authenticated) channels between parties are available. A well-known result states that this is possible if and only if strictly less than a third of the parties are dishonest. Broadcast guarantees a very strong form of consistency. This paper investigates generalizations of the broadcast setting in two directions: weaker forms of consistency guarantees are considered, and other resources than merely bilateral channels are assumed to be available. The ultimate goal of this line of work is to arrive at a complete classification of consistency specifications. As a concrete result in this direction we present a complete classification of three-party specifications with a binary input and binary outputs.

Comparing the Bit-MAP and Block-MAP Decoding Thresholds of Reed-Muller Codes on BMS Channels

Shrinivas Kudekar (Qualcomm Research, USA); Santhosh Kumar (Texas A&M University, USA); Marco Mondelli (EPFL, Switzerland); Henry D Pfister (Duke University, USA); Ruediger L Urbanke (EPFL, Switzerland)

The question of whether RM codes are capacity-achieving is a long-standing open problem in coding theory that was recently answered in the affirmative for transmission over erasure channels [1], [2]. Remarkably, the proof does not rely on specific properties of RM codes, apart from their symmetry. Indeed, the main technical result consists in showing that any sequence of linear codes, with doubly-transitive permutation groups, achieves capacity on the memoryless erasure channel under bit-MAP decoding. Thus, a natural question is what happens under block-MAP decoding. In [1], [2], by exploiting further symmetries of the code, the bit-MAP threshold was shown to be sharp enough so that the block erasure probability also converges to 0. However, this technique relies heavily on the fact that the transmission is over an erasure channel.

We present an alternative approach to strengthen results regarding the bit-MAP threshold to block-MAP thresholds. This approach is based on a careful analysis of the weight distribution of RM codes. In particular, the flavor of the main result is the following: assume that the bit-MAP error probability decays as N^{-\delta}, for some \delta>0. Then, the block-MAP error probability also converges to 0. This technique applies to transmission over any binary memoryless symmetric channel. Thus, it can be thought of as a first step in extending the proof that RM codes are capacity-achieving to the general case.

Routing with Blinkers: Online Throughput Maximization without Queue Length Information

Georgios S. Paschos (Huawei Technologies, France); Mathieu Leconte (Huawei, France); Apostolos Destounis (Huawei Technologies France Research Center, France)

We study a service provisioning system where arriving jobs are routed in an online fashion to any of the available servers; typical applications include datacenters, Internet switches, and cloud computing infrastructures. A common goal in these scenarios is to balance the load across the servers and achieve maximum throughput. For example, the classical online policy Join-the-Shortest-Queue (JSQ) routes an arriving job to the server with the shortest instantaneous queue length. Although JSQ has desirable properties, it requires coordination between the routers and the servers in the form of queue length reports, which prohibits its practical usability in many scenarios.
In this paper we study the practical case of “routing with blinkers”, where no coordination is allowed between the routers and the service provisioning system, and the routers act in an individual manner with limited view of the system state. Every router keeps a log of delays of all jobs it has routed in the past; these are delayed estimates of the actual server queue length. Although easy to acquire, such information is a highly inaccurate depiction of the system state and hence it is unclear whether it is enough to achieve maximum performance. Motivated by the fact that a reasonable policy such as Join-the-Shortest-Delay fails to achieve maximum throughput, we propose a novel routing policy that “samples” the servers periodically and achieves maximum throughput, subject to a condition for the service discipline of the server.
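For reference, the JSQ baseline discussed above can be sketched in a toy discrete-time simulation (parameter values and names are illustrative, not from the paper):

```python
import random

def simulate_jsq(n_servers=3, arrival_p=0.5, service_p=(0.5, 0.3, 0.2),
                 steps=10000, seed=0):
    """Discrete-time Join-the-Shortest-Queue: each arriving job is routed
    to the server with the fewest queued jobs (ties broken by index).
    Note this requires instantaneous queue-length reports from all servers."""
    rng = random.Random(seed)
    queues = [0] * n_servers
    for _ in range(steps):
        if rng.random() < arrival_p:            # Bernoulli job arrival
            queues[queues.index(min(queues))] += 1
        for i in range(n_servers):              # Bernoulli service completions
            if queues[i] > 0 and rng.random() < service_p[i]:
                queues[i] -= 1
    return queues
```

The “blinkers” setting of the paper removes exactly the queue-length reports this sketch relies on, which is why a delay-sample-based policy is needed instead.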

On the (non-)existence of APN (n,n)-functions of algebraic degree n

Lilya Budaghyan (University of Bergen, Norway); Claude Carlet (University of Paris 8, France); Tor Helleseth and Nian Li (University of Bergen, Norway)

In this paper, we study the problem of existence of almost perfect nonlinear (APN) functions of algebraic degree $n$ over $\mathbb{F}_{2^n}$. We characterize such functions by means of derivatives and power moments of the Walsh transform. We deduce some non-existence results which imply, in particular, that for most of the known APN functions $F$ over $\mathbb{F}_{2^n}$ the function $x^{2^n-1}+F(x)$ is not APN.
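To make the APN property concrete, here is a small brute-force check over $\mathbb{F}_{2^3}$ (an illustrative sketch, not the paper's characterization), using the fact that $F$ is APN iff every nonzero derivative $x \mapsto F(x+a)+F(x)$ is at most 2-to-1:

```python
from collections import Counter

def gf_mul(a, b, n=3, poly=0b1011):
    """Carry-less multiplication modulo an irreducible polynomial;
    the default 0b1011 encodes x^3 + x + 1, giving GF(2^3)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= poly
    return r

def is_apn(F, n=3):
    """F is APN iff for every nonzero a the derivative x -> F(x XOR a) XOR F(x)
    takes each value at most twice (field addition in GF(2^n) is XOR)."""
    for a in range(1, 1 << n):
        counts = Counter(F(x ^ a) ^ F(x) for x in range(1 << n))
        if max(counts.values()) > 2:
            return False
    return True
```

For example, the Gold function $x^3$ is APN over $\mathbb{F}_{2^3}$, while the linear map $x^2$ (whose derivatives are constant) is not.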

Optimal Vector Linear Index Codes for Some Symmetric Side Information Problems

Mahesh Vaddi and B. Sundar Rajan (Indian Institute of Science, India)

This paper deals with vector linear index codes for multiple unicast index coding problems where there is a source with $K$ messages and there are $K$ receivers each wanting a unique message and having symmetric (with respect to the receiver index) two-sided antidotes (side information). Starting from a given multiple unicast index coding problem with $K$ messages and symmetric one-sided antidotes for which a scalar linear index code $\mathfrak{C}$ is known, we give a construction procedure which constructs a sequence (indexed by $m$) of multiple unicast index coding problems with symmetric two-sided antidotes (for the same source) for all of which a vector linear code $\mathfrak{C}^{(m)}$ is obtained from $\mathfrak{C}.$ Also, it is shown that if $\mathfrak{C}$ is optimal then $\mathfrak{C}^{(m)}$ is also optimal for all $m.$ To our knowledge, this is the first paper which gives a method to construct a sequence of optimal vector linear index codes.

Near-Optimal Finite-Length Scaling for Polar Codes over Large Alphabets

Henry D Pfister (Duke University, USA); Ruediger L Urbanke (EPFL, Switzerland)

For any prime power $q$, Mori and Tanaka introduced a family of $q$-ary polar codes based on $q$ by $q$ Reed-Solomon polarization kernels with elements from the Galois field with $q$ elements. For transmission over a $q$-ary erasure channel, they also derived a closed-form recursion for the erasure probability of each effective channel. In this paper, we use that expression to analyze the finite-length scaling of these codes on the $q$-ary erasure channel with erasure probability $\epsilon$. Our primary result is that, for any $\gamma>0$ and $\delta>0$, there is a $q_0$ such that, for all $q \geq q_0$, the fraction of effective channels with erasure rate at most $O(N^{-\gamma})$ is at least $1-\epsilon-O(N^{-1/2+\delta})$, where $N=q^n$ is the blocklength. Since the gap to the channel capacity $1-\epsilon$ cannot vanish faster than $O(N^{-1/2})$, this establishes near-optimal finite-length scaling for this family of codes. Our approach can be seen as an extension of a similar analysis for binary polar codes by Mondelli, Hassani, and Urbanke.
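For intuition only, the analogous erasure-probability recursion for Arikan's binary $2\times 2$ kernel on a BEC (a special case, not the Reed-Solomon-kernel recursion analyzed in the paper) is easy to compute:

```python
def binary_polar_erasure_probs(eps, n):
    """Erasure probabilities of the 2^n effective channels for the binary
    2x2 kernel on a BEC(eps): each z splits into 2z - z^2 ('-' branch)
    and z^2 ('+' branch)."""
    probs = [eps]
    for _ in range(n):
        probs = [p for z in probs for p in (2 * z - z * z, z * z)]
    return probs
```

The mean of the list stays at eps at every level (capacity is preserved), while the extremes polarize toward 0 and 1; the finite-length question is how fast the middle fraction shrinks.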

Privacy-Aware MMSE Estimation

Shahab Asoodeh, Fady Alajaji and Tamas Linder (Queen’s University, Canada)

We investigate the problem of the predictability of random variable $Y$ under a privacy constraint dictated by random variable $X$, correlated with $Y$, where both predictability and privacy are assessed in terms of the minimum mean-squared error (MMSE). Given that $X$ and $Y$ are connected via a binary-input symmetric-output (BISO) channel, we derive the \emph{optimal} random mapping $P_{Z|Y}$ such that the MMSE of $Y$ given $Z$ is minimized while the MMSE of $X$ given $Z$ is greater than $(1-\epsilon)\mathop{Var}(X)$ for a given $\epsilon\geq 0$. We also consider the case where $(X,Y)$ are continuous and $P_{Z|Y}$ is restricted to be an additive-noise channel.

SNR Gap Between MIMO Linear Receivers: Characterization and Applications

Giuseppa Alfano and Carla-Fabiana Chiasserini (Politecnico di Torino, Italy); Alessandro Nordio (IEIIT-CNR, Italy)

This paper presents a statistical characterization of the SNR gap between MIMO Zero-Forcing (ZF) and Minimum Mean Squared Error (MMSE) equalizers, beyond the Rayleigh assumption for the interfering streams amplitude fading. Results are valid for arbitrary transmit SNR values and number of transmit/receive antennas. Specifically, we provide the exact closed-form distribution of the random variable representing the difference between the output SNR on a generic receive filter branch, under MMSE and ZF equalization. Analytical results turn particularly useful for the study of heterogeneous cellular networks.

Optimal Systematic Distributed Storage Codes with Fast Encoding

Preetum Nakkiran (University of California, Berkeley, USA); K. v. Rashmi and Kannan Ramchandran (University of California at Berkeley, USA)

We consider the problem of constructing explicit erasure codes for distributed storage with the following desirable properties motivated by system constraints: (i) Maximum-Distance-Separable (MDS), (ii) Optimal repair-bandwidth, (iii) Flexibility in repair (as will be described), (iv) Systematic Form, and (v) Fast encoding (enabled by a sparse generator matrix). Existing constructions in the literature satisfy only strict subsets of these desired properties. This paper presents the first explicit code construction which theoretically guarantees all five desired properties simultaneously. Our construction builds on a powerful class of codes called Product-Matrix (PM) codes. We additionally present a framework for understanding the interaction between sparsity and the design of systematic PM codes. We also present general ways of transforming existing storage and repair optimal codes to enable fast encoding through sparsity. In practice, such sparse codes result in encoding speedup by a factor of about 4 for typical parameters.

The Velocity of the Decoding Wave for Spatially Coupled Codes on BMS Channels

Rafah El-Khatib and Nicolas Macris (EPFL, Switzerland)

We consider the dynamics of belief propagation decoding of spatially coupled Low-Density Parity-Check codes. It has been conjectured that after a short transient phase, the profile of “error probabilities” along the spatial direction of a spatially coupled code develops a uniquely-shaped wavelike solution that propagates with constant velocity v. Under this assumption and for transmission over general Binary Memoryless Symmetric channels, we derive a formula for v. We also propose approximations that are simpler to compute and support our findings using numerical data.

Universally Secure Network Coding with Feedback

Gabriele Spini (Universiteit Leiden & Université de Bordeaux, CWI Amsterdam, The Netherlands); Gilles Zémor (Université Bordeaux 1, France)

In the model of Secure Network Coding, a sender is connected to several receivers by a network, i.e. a directed graph with a single source node and several destination nodes, where each node can perform operations on the values received via the incoming edges and sends the results via the outbound edges. An active adversary controls some of the edges; this means that he can read every symbol transmitted over the edges under his control and replace it with a symbol of his choice. The goal of Secure Network Coding is to design protocols that allow transmission of a secret message from the sender to all receivers in a private and reliable way.

Classically, only one-way communication (from sender to receivers) has been studied; in this setting, security can be guaranteed as long as the number of edges controlled by the adversary is less than one third of the network connectivity. In this paper, we present a procedure where receivers are allowed to send feedback to the sender; with this feature, security is guaranteed against a stronger adversary: namely, the number of corrupted edges only needs to be smaller than one half of the connectivity. Furthermore, like previous state-of-the-art work on the single-round scenario, our scheme is universal, i.e. it does not require knowledge of the network code.

An Uplink-Downlink Duality for Cloud Radio Access Network

Liang Liu, Pratik Patil and Wei Yu (University of Toronto, Canada)

Uplink-downlink duality refers to the fact that the Gaussian broadcast channel has the same capacity region as the dual Gaussian multiple-access channel under the same sum-power constraint. This paper investigates a similar duality relationship between the uplink and downlink of a cloud radio access network (C-RAN), where a central processor (CP) cooperatively serves multiple mobile users through multiple remote radio heads (RRHs) connected to the CP with finite-capacity fronthaul links. The uplink of such a C-RAN model corresponds to a multiple-access relay channel; the downlink corresponds to a broadcast relay channel. This paper considers compression based relay strategies in both uplink and downlink C-RAN, where the quantization noise levels are functions of the fronthaul link capacities. If the fronthaul capacities are infinite, the conventional uplink-downlink duality applies. The main result of this paper is that even when the fronthaul capacities are finite, duality continues to hold for the case where independent compression is applied across each RRH in the sense that when the transmission and compression designs are jointly optimized, the achievable rate regions of the uplink and downlink remain identical under the same sum-power and individual fronthaul capacity constraints. As an application of the duality result, the power minimization problem in downlink C-RAN can be efficiently solved based on its uplink counterpart.

Age of Information with a Packet Deadline

Clement Kam, Sastry Kompella and Gam Nguyen (Naval Research Laboratory, USA); Jeffrey Wieselthier (Wieselthier Research, USA); Anthony Ephremides (University of Maryland at College Park, USA)

We study the age of information, which is a recent metric for measuring the freshness of a continually updated piece of information as observed at a remote monitor. The age of information metric has been studied for a variety of different queuing systems, and in this work, we introduce a packet deadline as a control mechanism to study its impact on the average age of information for an M/M/1/2 queuing system. We analyze the system for a fixed deadline and derive a mathematical expression for the average age. We numerically evaluate the expression and show the relationship of the age performance to that of the M/M/1/1 and M/M/1/2 systems. We show that using a deadline can outperform both the M/M/1/1 and M/M/1/2 systems without a deadline.
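A minimal event-driven simulation of the M/M/1/1 baseline (the building block the deadline mechanism modifies; an illustrative sketch, not the paper's analysis) looks like this:

```python
import random

def average_age_mm11(lam, mu, horizon=200000.0, seed=1):
    """Average age of information for M/M/1/1: arrivals during service are
    dropped, so each renewal cycle is an idle wait W ~ Exp(lam) followed by
    a service S ~ Exp(mu), after which the monitor's age resets to S."""
    rng = random.Random(seed)
    t = age = area = 0.0
    while t < horizon:
        w = rng.expovariate(lam)   # idle wait until the next accepted update
        s = rng.expovariate(mu)    # its service time
        dt = w + s                 # age grows linearly over the whole cycle
        area += age * dt + dt * dt / 2.0
        t += dt
        age = s                    # the delivered update has age S on arrival
    return area / t
```

A renewal-reward calculation on these cycles gives average age 2.5/mu when lam = mu, which the simulation reproduces.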

Generalized rank weights of reducible codes, optimal cases and related properties

Umberto Martínez-Peñas (Aalborg University, Denmark)

Reducible codes for the rank metric were introduced for cryptographic purposes. They have fast encoding and decoding algorithms, include maximum rank distance (MRD) codes when Gabidulin codes may not be applied and can correct many rank errors beyond half of their minimum rank distance, which make them suitable for network coding. In this paper, we give lower and upper bounds on their generalized rank weights (GRWs), which measure information leakage on the network. We give conditions for them to be rank equivalent to cartesian products and conditions to be rank degenerate. We study their duality properties and MRD ranks. Finally, we obtain codes with optimal GRWs for all possible fixed packet and code sizes, and prove that they are the unique optimal codes up to rank equivalence. Moreover, we see that all of them have explicit polynomial-time decoding algorithms using any of their bases.

Group testing schemes from low-weight codewords of BCH codes

Shashanka Ubaru (University of Minnesota, USA); Arya Mazumdar (University of Massachusetts Amherst, USA); Alexander Barg (University of Maryland, USA)

Despite large volume of research in group testing, explicit small-size group testing schemes are still difficult to construct, and the parameters of known combinatorial schemes are limited by the constraints of the problem. Relaxing the worst-case identification requirements to probabilistic localization of defectives enables one to expand the range of parameters, and yet small-size practical constructions are sparse.
Motivated by this question, we perform an experimental study of almost disjunct matrices constructed from low-weight codewords of binary BCH codes, and evaluate their performance in nonadaptive group testing. We observe that identification of defectives is much more stable in these schemes compared to schemes constructed from random binary matrices. We derive an estimate of the error probability of identification in the constructed schemes which provides a partial explanation of their performance.
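As a hedged illustration of how identification works in nonadaptive group testing (the simple COMP decoder, not the BCH-based construction itself): an item is cleared as soon as it appears in any negative pool.

```python
def comp_decode(tests, outcomes):
    """COMP decoding: declare an item non-defective iff it belongs to at
    least one pool that tested negative; all remaining items stay suspect.

    tests    -- 0/1 matrix, one row per pool
    outcomes -- booleans, True if the pool tested positive
    """
    n = len(tests[0])
    suspect = [True] * n
    for row, positive in zip(tests, outcomes):
        if not positive:
            for j, v in enumerate(row):
                if v:
                    suspect[j] = False
    return [j for j in range(n) if suspect[j]]
```

With a disjunct (or, probabilistically, almost disjunct) test matrix, the surviving suspects are exactly the defectives.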

Considerations for Rank-based Cryptosystems

Anna-Lena Horlemann-Trautmann (EPFL, Switzerland); Kyle Marshall and Joachim Rosenthal (University of Zurich, Switzerland)

Cryptosystems based on rank metric codes have been considered as an alternative to McEliece cryptosystems due to the relative difficulty of solving the rank syndrome decoding problem. Generic attacks have recently seen several improvements, notably in the work of Gaborit et al., who give an improved algorithm using linearized polynomials which yields a polynomial time algorithm for certain parameters. On the structural side, many of the proposals for cryptosystems based on Gabidulin codes have proven to be weak, following an attack by Overbeck in 2001. Of the Gabidulin based systems managing to resist Overbeck’s attack, several were recently broken by Horlemann-Trautmann et al. using an attack based on finding the elements of rank one in some extended code. In this paper, we extend the polynomial time algorithm of Gaborit using the same underlying idea as Horlemann-Trautmann et al., and then demonstrate how codes with implicit structural weakness may be exploited, even if the explicit structure is not determined. We use this attack to break a Gabidulin code based cryptosystem which has so far resisted structural attacks.

New Perspectives on Weak Oblivious Transfer

Ueli Maurer and João Ribeiro (ETH Zurich, Switzerland)

In this paper we provide a generalization of weak oblivious transfer through the constructive cryptography framework. This generalization requires the global order of the inputs and outputs from and to two parties called Alice and Bob to be completely defined, a subtlety which has been overlooked by previous work on the subject. We provide evidence that the order of inputs and outputs in weak oblivious transfer matters. In particular, it may influence the kind and strength of symmetry results which can be obtained about such resources.

Network MIMO: Transmitters with no CSI Can Still be Very Useful

Paul de Kerret (EURECOM, France); David Gesbert (Eurecom Institute, France)

In this paper, we consider the Network MIMO channel under the so-called Distributed Channel State Information at the Transmitters (D-CSIT) configuration. In this setting, the precoder is designed in a distributed manner at each Transmitter (TX) on the basis of local versions of Channel State Information (CSI) of various quality. Although the use of simple Zero-Forcing (ZF) was recently shown to reach the optimal DoF for a Broadcast Channel (BC) under noisy, yet centralized, CSI at the TX (CSIT), it can turn very inefficient when faced with D-CSIT: The number of Degrees-of-Freedom (DoF) achieved is then limited by the worst CSI accuracy across TXs. To circumvent this effect, we develop a new robust transmission scheme improving the DoF. A surprising result is uncovered by which, in the regime of so-called weak CSIT, the proposed scheme is shown to be DoF-optimal and to achieve a centralized outer bound consisting of the DoF of a genie-aided centralized setting in which the CSIT versions of all TXs are available everywhere. Building upon the insight obtained in the weak CSIT regime, we develop a general D-CSIT robust scheme for the $3$-user case which improves over the DoF obtained by conventional robust approaches for any arbitrary CSIT configuration.

Multiuser Two-Way Ranging

Ryan Keating and Dongning Guo (Northwestern University, USA)

Location awareness will be crucial for many future wireless network applications, such as the Internet of Things and vehicular networks. Existing localization works typically propose sequential signaling schemes where one pair of nodes communicate to range their distance at a time. This poses a significant problem for large networks where many nodes are within range of each node. In this work, a novel scheme is proposed which takes merely two frames of transmissions: In the first frame all nodes transmit their respective signatures; in the second frame all nodes basically repeat what they have received in the first frame (assuming full duplexing). By the end of the second frame, every node can estimate not only its distance to all nodes within range, but also the distances between neighboring nodes which are within range of each other. The proposed scheme is highly scalable, and is validated using simulation.
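The basic two-way-ranging computation underlying such schemes (the classic pairwise form, not the paper's multiuser two-frame protocol) is simply:

```python
C = 299_792_458.0  # speed of light in m/s

def two_way_range(t_send, t_recv, t_turnaround):
    """Classic two-way ranging: the round-trip time, minus the responder's
    known turnaround delay, covers twice the distance at the speed of light."""
    return C * ((t_recv - t_send) - t_turnaround) / 2.0
```

The point of the paper is that a network of n nodes need not run this exchange sequentially for every pair; two frames of simultaneous transmit-and-repeat suffice.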

Novel Lower Bounds on the Entropy Rate of Binary Hidden Markov Processes

Or Ordentlich (MIT, USA)

Recently, Samorodnitsky proved a strengthened version of Mrs. Gerber’s Lemma, where the output entropy of a binary symmetric channel is bounded in terms of the average entropy of the input projected on a random subset of coordinates. Here, this result is applied for deriving novel lower bounds on the entropy rate of binary hidden Markov processes. For symmetric underlying Markov processes, our bound improves upon the best known bound in the very noisy regime. The nonsymmetric case is also considered, and explicit bounds are derived for Markov processes that satisfy the $(1,\infty)$-RLL constraint.
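For context (a standard closed form, not the paper's bound): the entropy rate of the underlying binary Markov process itself is easy to compute; it is the lack of such a closed form for the *hidden* Markov process observed through noise that makes lower bounds like the paper's valuable.

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def markov_entropy_rate(p01, p10):
    """Entropy rate of a binary Markov chain with P(1|0)=p01, P(0|1)=p10:
    the stationary-weighted conditional entropy of the next symbol."""
    pi0 = p10 / (p01 + p10)  # stationary probability of state 0
    return pi0 * h2(p01) + (1 - pi0) * h2(p10)
```

With p10 = 1 the chain satisfies the $(1,\infty)$-RLL constraint (no two consecutive ones); maximizing over p01 recovers the constraint's capacity, log2 of the golden ratio, about 0.694.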

Bit-Additive Superposition Coding for the Bandwidth Limited Broadcast Channel

Ahmed Abotabl (University of Texas at Dallas, USA); Aria Nosratinia (University of Texas, Dallas, USA)

This paper studies coded modulation for the broadcast channel subject to a fixed transmit constellation. A straightforward superposition of two point-to-point coded modulations expands the transmit constellation and in general will not satisfy a pre-determined modulation constraint. Hierarchical modulation, where each input to the mapper is driven by one of the users' messages, can satisfy the channel-input modulation constraint, but the boundary of the capacity region is approached only at isolated points (often just one point). We propose a superposition coding based on multilevel coding where in each input level to the mapper the users are allowed to superimpose their messages. Furthermore, a simple implementation of the proposed transmission using linear codes is presented and it is shown to achieve rate pairs that are very close to the boundary of the constellation constrained capacity. The linear coding constraint gives rise to a unique rate allocation problem between the users in each level of the multi-level code; this rate allocation problem is studied in this paper. We propose a pragmatic rate allocation algorithm that is shown to approach all points on the capacity region. Interesting features of the rate allocation are studied; for example, it is possible for the mixing of the two users' data to occur only on one level of the multilevel code. Simulations show that good point-to-point LDPC codes yield excellent performance for the proposed transmission scheme.
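The bit-additive idea can be sketched as follows (an illustrative toy with a Gray-labeled 4-PAM mapper; the labeling and per-level combining here are assumptions for illustration, not the paper's construction): at each level of the multilevel code, the two users' coded bits are XOR-added before mapping, so the transmit symbol never leaves the fixed constellation.

```python
def map_4pam(b1, b2):
    """Gray-labeled 4-PAM mapper: two bit levels -> one constellation point."""
    return {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}[(b1, b2)]

def bit_additive_symbol(user1_bits, user2_bits):
    """Bit-additive superposition: XOR the users' coded bits per level,
    then map; the channel-input constellation constraint is preserved."""
    b1 = user1_bits[0] ^ user2_bits[0]  # level 1: both users may contribute
    b2 = user1_bits[1] ^ user2_bits[1]  # level 2: likewise
    return map_4pam(b1, b2)
```

Contrast with hierarchical modulation, where each level would carry exactly one user's bit rather than an XOR of both.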

A Class of Non-Linearly Solvable Networks

Joseph Connelly and Ken Zeger (University of California, San Diego, USA)

For each integer m ≥ 2, a network is constructed which is solvable over an alphabet of size m but is not solvable over any smaller alphabets. If m is composite, then the network has no vector linear solution over any R-module alphabet and is not asymptotically linearly solvable over any finite-field alphabet. The network’s capacity is shown to equal one, and when m is composite, its linear capacity is bounded away from one for all finite-field alphabets.

Low Complexity Algorithm Approaching the ML Decoding of Binary LDPC Codes

Irina Bocharova and Boris D. Kudryashov (St. Petersburg University of Information Technologies, Mechanics and Optics, Russia); Vitaly Skachek and Yauhen Yakimenka (University of Tartu, Estonia)

A novel method for decoding of low-density parity-check (LDPC) codes on the AWGN channel is presented. In the proposed method, first, a standard belief-propagation (BP) decoder is applied, then a certain number of positions is erased using a combination of a reliability criterion and a set of masks. A list erasure decoder is then applied to the resulting word. The performance of the proposed method is analyzed mathematically and demonstrated by simulations.
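The reliability-based erasing step described above can be sketched as follows (a hedged simplification: the paper combines this criterion with a set of masks, which are omitted here):

```python
def erase_least_reliable(llrs, k):
    """Return the indices of the k least-reliable code positions,
    i.e. those with the smallest log-likelihood-ratio magnitude."""
    return sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:k]
```

The positions returned here would then be handed to the list erasure decoder, which resolves them jointly rather than bit by bit.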

A Linearithmic Time Algorithm for a Shortest Vector Problem in Compute-and-Forward Design

Jinming Wen (École Normale Supérieure de Lyon (LIP), France); Xiao-Wen Chang (McGill University, Canada)

We propose an algorithm with expected complexity of $O(n\log n)$ arithmetic operations to solve a special shortest vector problem arising in compute-and-forward design, where $n$ is the dimension of the channel vector. This algorithm is more efficient than the best known algorithms with proved complexity.

A Sharp Condition for Exact Support Recovery of Sparse Signals With Orthogonal Matching Pursuit

Jinming Wen (École Normale Supérieure de Lyon (LIP), France); Zhengchun Zhou (Southwest Jiaotong University, P.R. China); Jian Wang (Seoul National University, Korea); Xiaohu Tang (SWJTU, P.R. China); Qun Mo (Zhejiang University, P.R. China)

Support recovery of sparse signals from noisy measurements with orthogonal matching pursuit (OMP) has been extensively studied in the literature. In this paper, we show that for any $K$-sparse signal $\mathbf{x}$, if the sensing matrix $\mathbf{A}$ satisfies the restricted isometry property (RIP) of order $K + 1$ with restricted isometry constant (RIC) $\delta_{K+1} < 1/\sqrt {K+1}$, then under some constraint on the minimum magnitude of the nonzero elements of $\mathbf{x}$, the OMP algorithm exactly recovers the support of $\mathbf{x}$ from the measurements $\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{v}$ in $K$ iterations, where $\mathbf{v}$ is the noise vector. This condition is sharp in terms of $\delta_{K+1}$ since for any given positive integer $K\geq 2$ and any $1/\sqrt{K+1}\leq t<1$, there always exists a $K$-sparse $\mathbf{x}$ and a matrix $\mathbf{A}$ satisfying $\delta_{K+1}=t$ for which OMP may fail to recover the signal $\mathbf{x}$ in $K$ iterations. Moreover, the constraint on the minimum magnitude of the nonzero elements of $\mathbf{x}$ is weaker than existing results.
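A minimal sketch of the OMP iteration itself (the standard algorithm; variable names are illustrative):

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily pick K columns of A, each time
    re-fitting by least squares on the chosen support and updating the
    residual. Returns the support indices and the fitted coefficients."""
    support = []
    residual = y.astype(float).copy()
    coef = np.zeros(0)
    for _ in range(K):
        # select the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support, coef
```

The paper's result says that under the stated RIC and minimum-magnitude conditions, this loop recovers exactly the true support in $K$ iterations.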

Soft McEliece: MDPC code-based McEliece cryptosystems with very compact keys through real-valued intentional errors

Marco Baldi, Paolo Santini and Franco Chiaraluce (Università Politecnica delle Marche, Italy)

We propose to use real-valued errors instead of classical bit flipping intentional errors in the McEliece cryptosystem based on moderate-density parity-check (MDPC) codes. This allows us to exploit the error correcting capability of these codes to the utmost, by using soft-decision iterative decoding algorithms instead of hard-decision bit flipping decoders. However, soft reliability values resulting from the use of real-valued noise can also be exploited by attackers. We devise new attack procedures aimed at this, and compute the relevant work factors and security levels. We show that, for a fixed security level, these new systems achieve the shortest public key sizes ever reached, with a reduction up to 25% with respect to previous proposals.

Sequentially Detecting Transitory Changes

George V. Moustakides (Rutgers University, USA and University of Patras, Greece); Venugopal Veeravalli (University of Illinois at Urbana-Champaign, USA)

We are interested in the sequential detection of a change in the statistics of a random process. Specifically, we consider changes that are not abrupt but exhibit a transitory phase before reaching their steady-state statistics. Adopting the classical worst-case conditional detection delay proposed by Lorden as our performance measure and constraining the average false-alarm period, we derive the sequential test that optimizes, in the exact sense, the proposed criterion. The resulting optimum rule resembles the well-known CUSUM rule, with the corresponding test-statistic update being a function of all pre- and post-change pdfs as well as of the false-alarm constraint.
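For reference, the classical CUSUM recursion that the optimal rule resembles can be sketched as follows; for illustration we assume an abrupt shift from $\mathcal{N}(0,1)$ to $\mathcal{N}(\mu,1)$, so the log-likelihood ratio is $\mu x - \mu^2/2$ (the paper's transitory-change test replaces this update with a function of all pre- and post-change pdfs and the false-alarm constraint):

```python
def cusum(samples, llr, threshold):
    """Classical CUSUM stopping rule: update W_n = max(0, W_{n-1} + llr(x_n))
    and stop the first time W_n crosses the threshold.
    Returns the 1-based stopping time, or None if never triggered."""
    W = 0.0
    for n, x in enumerate(samples, start=1):
        W = max(0.0, W + llr(x))
        if W >= threshold:
            return n
    return None

# Example: pre-change N(0,1), post-change N(1,1), so llr(x) = x - 0.5.
stop = cusum([-1.0, 0.0, 1.2, 1.0, 1.3, 0.9], lambda x: x - 0.5, 2.0)
```

The max-with-zero clipping is what makes the statistic insensitive to the (unknown) change time while the threshold controls the average false-alarm period.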

Improved Active Sensing Performance in Wireless Sensor Networks via Channel State Information

Alessandro Biason (University of Padova, Italy); Urbashi Mitra (University of Southern California, USA); Michele Zorzi (Università degli Studi di Padova, Italy)

Active sensing refers to the process of choosing or tuning a set of sensors in order to track an underlying system in an efficient and accurate way. In a wireless environment, among the several kinds of features extracted by traditional sensors, the information carried by the communication channel about the state of the system can be used to further boost the tracking performance and save energy. A joint tracking problem that considers traditional measurements and the channel together for tracking purposes is set up and solved. The system is modeled as a partially observable Markov decision process and the properties of the cost-to-go function are used to reduce the problem complexity. Numerical results show the advantages of our proposal.

Almost Lossless Variable-Length Source Coding on Countably Infinite Alphabets

Jorge Silva (University of Chile, Chile); Pablo Piantanida (CentraleSupélec-CNRS-Université Paris-Sud, France)

Motivated by the fact that universal source coding on countably infinite alphabets is not feasible, the notion of almost lossless source coding is introduced. This idea (analogous to the weak variable-length source coding problem proposed by Han in 2000) aims at relaxing the lossless block-wise assumption to allow a distortion that vanishes asymptotically as the block-length goes to infinity. In this setup, both feasibility and optimality results are derived for the case of memoryless sources defined on countably infinite alphabets. Our results show, on the one hand, that Shannon entropy characterizes the minimum achievable rate (known statistics) and, on the other, that almost lossless universal source coding becomes feasible for the family of finite-entropy stationary and memoryless sources with countably infinite alphabets.

The Unbounded Benefit of Encoder Cooperation for the k-User MAC

Parham Noorzad and Michelle Effros (California Institute of Technology, USA); Michael Langberg (State University of New York at Buffalo, USA)

Cooperation strategies that allow communication devices to work together can improve network capacity. This paper generalizes the “cooperation facilitator” (CF) model from the 2-user to the $k$-user multiple access channel (MAC), extending capacity bounds, characterizing all $k$-user MACs for which the sum-capacity gain of encoder cooperation exceeds the capacity cost that enables it, and demonstrating an infinite benefit-cost ratio in the limit of small cost.

Can Negligible Cooperation Increase Network Reliability?

Parham Noorzad and Michelle Effros (California Institute of Technology, USA); Michael Langberg (State University of New York at Buffalo, USA)

In network cooperation strategies, nodes work together with the aim of increasing transmission rates or reliability. This paper demonstrates that enabling cooperation between the transmitters of a two-user multiple access channel via a cooperation facilitator that has access to both messages, always results in a network whose maximal- and average-error sum-capacities are the same—even when the information shared with the encoders is negligible. Thus, for a multiple access channel whose maximal- and average-error sum-capacities differ, the maximal-error sum-capacity is not continuous with respect to the output edge capacities of the facilitator. This shows that for some networks, sharing even a negligible number of bits per channel use with the encoders can yield a non-negligible benefit.

Guiding Blind Transmitters for K-user MISO Interference Relay Channels with Imperfect Channel Knowledge

Wonjae Shin (Seoul National University, Korea); Namyoon Lee (POSTECH, Korea); Jungwoo Lee (Seoul National University, Korea); H. Vincent Poor (Princeton University, USA)

This paper proposes a novel multi-antenna relay-aided interference management technique that requires only imperfect channel knowledge for interference relay channels. Using the proposed method, it is shown that $\frac{KM}{K+M-1}$ degrees of freedom (DoF) are achievable in a $K$-user multiple-input single-output interference relay channel when the relay has $M$ antennas, under a certain set of limited channel knowledge assumptions. By leveraging this result, we demonstrate that the interference-free DoF of $K$ is asymptotically achieved as $M$ approaches infinity. One major implication of our results is that, even under limited channel knowledge, the use of massive antennas at the relay is sufficient to recover the optimal DoF of relay-aided interference networks with perfect channel knowledge.

Exact Sequence Reconstruction for Insertion-Correcting Codes

Frederic Sala (University of California, Los Angeles, USA); Ryan Gabrys (Spawar Systems Center San Diego); Clayton Schoeny (University of California, Los Angeles, USA); Kayvon Mazooji and Lara Dolecek (UCLA, USA)

We study the problem of perfectly reconstructing sequences from traces. The sequences are codewords from a deletion/insertion-correcting code and the traces are the result of corruption by a fixed number of symbol insertions (larger than the minimum edit distance of the code). This is the general version of a problem tackled by Levenshtein for uncoded sequences.
We introduce an exact formula for the maximum number of common supersequences shared by sequences at a certain edit distance, yielding a tight upper bound on the number of distinct traces necessary to guarantee exact reconstruction. We apply our results to the famous single deletion/insertion-correcting Varshamov-Tenengolts (VT) codes and show that a significant number of VT codeword pairs achieve the worst-case number of outputs needed for exact reconstruction.
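The single-deletion-correcting property of the VT codes mentioned above can be checked exhaustively for small lengths; a sketch using the standard definition $\mathrm{VT}_a(n) = \{x \in \{0,1\}^n : \sum_i i\,x_i \equiv a \pmod{n+1}\}$ (this illustrates the codes themselves, not the paper's supersequence counting formula):

```python
from itertools import product

def vt_codewords(n, a=0):
    """VT_a(n): binary words x_1..x_n with sum_i i*x_i = a (mod n+1)."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == a]

def deletion_ball(x):
    """All words obtainable from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}
```

Because the single-deletion balls of distinct VT codewords are disjoint, one trace of length $n-1$ already identifies the codeword; reconstruction from insertion-corrupted traces, as studied in the paper, asks how many longer traces are needed.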

Design of Membership Matrices for (r, t)-Availability in Distributed Storage

Yi-Sheng Su (Chang Jung Christian University, Taiwan)

This paper is concerned with the construction of local parities for optimal locally repairable codes (LRCs) with (r, t)-availability in distributed storage, where a symbol is said to have (r, t)-availability if it can be reconstructed from t disjoint repair alternatives of other symbols, each of size at most r. The key to constructing a family of optimal LRCs with (r, t)-availability that can support a scaling number of parallel reads while keeping the rate an arbitrarily high constant is the design of a (0,1)-matrix R, called a membership matrix, for dividing global parities into local ones. Although explicit designs of R are available, it remains unclear whether R with significantly better parameters can be constructed, a question that was left open. To tackle the open problem, this paper first provides a connection between designs of R and a combinatorial object in the well-known combinatorial design theory, called resolvable configurations. This paper then proposes several designs of R based respectively on Euclidean geometry, circulant permutation matrices, and affine permutation matrices, which, to the best of the author's knowledge, are also new to resolvable configurations. The proposed designs of R are shown to have significantly better parameters than those in the literature.

On Optimal Transmission Strategies for Channels with Noiseless Feedback

Marat V Burnashev (Institute for Information Transmission Problems, Russian Academy of Sciences, Russia); Hirosuke Yamamoto (The University of Tokyo, Japan)

Two discrete-time channels with noiseless feedback are considered: the additive white Gaussian noise channel with a strict power constraint, AWGN(A), and the binary symmetric channel BSC(p). The best decoding error exponent is investigated, restricted to the case of a non-exponential number of messages (i.e., transmission rate R = 0). A new transmission strategy is proposed, showing that for both the AWGN(A) and the BSC(p) channel with noiseless feedback at zero rate R = 0, it is possible to achieve the same error exponent as for the transmission of two messages. This gives another proof of a known result for the AWGN(A) channel and establishes a new result for the BSC(p) channel. The strategy described is applicable to a much wider class of channels.

Asymptotically tight bounds on the depth of estimated context trees

Álvaro Martín (Universidad de la República, Uruguay); Gadiel Seroussi (DTS Inc., Los Gatos, CA, USA, and Universidad de la República, Montevideo, Uruguay)

We study the maximum depth of context tree estimates, i.e., the maximum Markov order attainable by an estimated tree model given an input sequence of length $n$. We consider two classes of estimators: 1) Penalized maximum likelihood (PML) estimators where a context tree $\hat{T}$ is obtained by minimizing a cost of the form $-\log\hat{P}_{T}(x^n) + f(n)|S_T|$, where $\hat{P}_{T}(x^n)$ is the ML probability of the input sequence $x^n$ under a tree model $T$, $S_T$ is the set of states defined by $T$, and $f(n)$ is an increasing (penalization) function of $n$ (the popular BIC estimator corresponds to $f(n)=\frac{\alpha-1}{2}\log n$, where $\alpha$ is the size of the input alphabet). 2) MDL estimators based on the KT probability assignment. In each case we derive an asymptotic upper bound, $n^{1/2 + o(1)}$, on the estimated depth, and we exhibit explicit input sequences that asymptotically attain the bound up to the term $o(1)$ in the exponent.
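As a minimal illustration of the PML cost being minimized, the following sketch evaluates $-\log\hat{P}_{T}(x^n) + f(n)|S_T|$ with the BIC penalty $f(n)=\frac{\alpha-1}{2}\log n$, but only for the trivial single-state tree (an i.i.d. model, $|S_T|=1$); the estimators in the paper minimize this cost over all context trees:

```python
import math
from collections import Counter

def bic_cost_order0(x, alpha):
    """BIC/PML cost of the trivial single-state context tree for sequence x
    over an alphabet of size alpha:
      -log P_ML(x) + f(n)*|S_T|,  f(n) = (alpha-1)/2 * log(n),  |S_T| = 1.
    P_ML is the ML probability under the i.i.d. model, i.e. the product of
    empirical symbol probabilities."""
    n = len(x)
    counts = Counter(x)
    neg_log_ml = -sum(c * math.log(c / n) for c in counts.values())
    penalty = (alpha - 1) / 2 * math.log(n)
    return neg_log_ml + penalty
```

Deeper trees can only decrease the $-\log\hat{P}_{T}(x^n)$ term while paying $f(n)$ per additional state, which is the trade-off that bounds the estimated depth.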

A Characterization of the Capacity Region for Network Coding with Dependent Sources

Woong Kim (University at Buffalo, USA); Michael Langberg (State University of New York at Buffalo, USA); Michelle Effros (California Institute of Technology, USA)

In this work we characterize the capacity region for multi-source multi-terminal acyclic network coding with dependent information sources. We show that a full characterization of the capacity region can be derived in terms of entropy functions.

Pattern Maximum Likelihood Estimation of Finite-State Discrete-Time Markov Chains

Shashank Vatedka (Indian Institute of Science, Bangalore, India); Pascal Vontobel (The Chinese University of Hong Kong, Hong Kong)

We study the problem of estimating the pattern maximum likelihood (PML) distribution for time-homogeneous discrete-time Markov chains (DTMCs). The PML problem for memoryless sources has been well studied in the literature and we propose an extension of the same for DTMCs. For memoryless sources, Acharya et al. have shown that plug-in estimators obtained from the PML estimate yield good estimates for symmetric functionals of the distribution. We show that this holds for the PML estimate of DTMCs as well. Finally, we express the PML estimate for DTMCs as the double minimization of a certain free energy function and discuss some mean-field approximations to approximate the PML estimate efficiently.

Information Limits for Recovering a Hidden Community

Bruce Hajek (University of Illinois, USA); Yihong Wu (University of Illinois Urbana-Champaign, USA); Jiaming Xu (University of California, Berkeley, USA)

We study the problem of recovering a hidden community of cardinality $K$ from an $n \times n$ symmetric data matrix $A$, where for distinct indices $i,j$, $A_{ij} \sim P$ if $i, j$ both belong to the community and $A_{ij} \sim Q$ otherwise, for two known probability distributions $P$ and $Q$ depending on $n$. We focus on two types of asymptotic recovery guarantees as $n \to \infty$: (1) weak recovery: expected number of classification errors is $o(K)$; (2) exact recovery: probability of classifying all indices correctly converges to one. Under mild assumptions on $P$ and $Q$, and allowing the community size to scale sublinearly with $n$, we derive a set of sufficient conditions and a set of necessary conditions for recovery, which are asymptotically tight with sharp constants. The results hold in particular for the Gaussian case ($P=\mathcal{N}(\mu,1)$ and $Q=\mathcal{N}(0,1)$), and for the case of bounded log likelihood ratio, including the Bernoulli case ($P={\rm Bern}(p)$ and $Q={\rm Bern}(q)$) whenever $\frac{p}{q}$ and $\frac{1-p}{1-q}$ are bounded away from zero and infinity. An important algorithmic implication is that, whenever exact recovery is information theoretically possible, any algorithm that provides weak recovery when the community size is concentrated near $K$ can be upgraded to achieve exact recovery in linear additional time by a simple voting procedure.

Distance verification for LDPC codes

Ilya Dumer (University of California at Riverside, USA); Alexey Kovalev (University of Nebraska at Lincoln, USA); Leonid P Pryadko (University of California, Riverside, USA)

The problem of finding the code distance has long been studied for generic ensembles of linear codes and has led to several algorithms that substantially reduce the exponential complexity of this task. However, no asymptotic complexity bounds are known for distance verification in other ensembles of linear codes. Our goal is to re-design the existing generic algorithms of distance verification and derive their complexity for LDPC codes. We obtain new complexity bounds with provable performance, expressed in terms of the erasure-correcting thresholds of long LDPC codes. These bounds exponentially reduce the complexity estimates known for linear codes.

Quantum Capacities for Entanglement Networks

Shawn Cui (University of California Santa Barbara, USA); Zhengfeng Ji (University of Technology Sydney, Australia); Nengkun Yu (University of Waterloo, Canada); Bei Zeng (University of Guelph, Canada)

We discuss quantum capacities for two types of entanglement networks: $\mathcal{Q}$ for the quantum repeater network with free classical communication, and $\mathcal{R}$ for the tensor network as the rank of the linear operation represented by the tensor network. We find that $\mathcal{Q}$ always equals $\mathcal{R}$ in the regularized case for the same network graph. However, the relationships between the corresponding one-shot capacities $\mathcal{Q}_1$ and $\mathcal{R}_1$ are more complicated, and the min-cut upper bound is in general not achievable. We show that the tensor network can be viewed as a stochastic protocol with the quantum repeater network, such that $\mathcal{R}_1$ is a natural upper bound of $\mathcal{Q}_1$. We analyze the possible gap between $\mathcal{Q}_1$ and $\mathcal{R}_1$ for certain networks, and compare them with the one-shot classical capacity of the corresponding classical network.

On the Number of DNA Sequence Profiles for Practical Values of Read Lengths

Zuling Chang (Zhengzhou University, P.R. China); Johan Chrisnata, Martianus Frederic Ezerman and Han Mao Kiah (Nanyang Technological University, Singapore)

A recent study by one of the authors has demonstrated the relevance of profile vectors in DNA-based data storage. We provide exact values and lower bounds on the number of profile vectors for finite values of $q$, $l$, and $n$. Consequently, we demonstrate that for $q\ge 3$ and $n=q^a l$, $a=o(l)$, the number of profile vectors is at least $q^{kn}$ for some constant $0<k\le 1$. In addition to enumeration results, we provide a set of efficient encoding and decoding algorithms for a family of profile vectors.
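A profile vector records how often each length-$l$ substring ($l$-gram) occurs in a sequence over a $q$-ary alphabet; a sketch under that standard definition (contiguous, non-cyclic $l$-grams assumed here, which may differ in detail from the paper's convention):

```python
from itertools import product

def profile_vector(seq, l, alphabet):
    """Profile vector of seq: occurrence counts of every possible length-l
    substring, indexed in lexicographic order over alphabet^l."""
    grams = [''.join(g) for g in product(sorted(alphabet), repeat=l)]
    counts = {g: 0 for g in grams}
    for i in range(len(seq) - l + 1):   # slide a window of length l
        counts[seq[i:i + l]] += 1
    return [counts[g] for g in grams]
```

Enumerating which such vectors are realizable by some sequence of length $n$ is exactly the counting problem addressed in the paper.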

Secure RAID Schemes for Distributed Storage

Wentao Huang and Jehoshua Bruck (California Institute of Technology, USA)

We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal encoding and decoding complexities, from the EVENODD codes and B codes, which are array codes widely used in the RAID architecture. These schemes optimally tolerate two node failures and two eavesdropping nodes. For more general parameters, we construct efficient systematic secure RAID schemes from Reed-Solomon codes. Our results suggest that building “keyless”, information-theoretic security into the RAID architecture is practical.

The Rate Region of Secure Exact-Repair Regenerating Codes for 5 Nodes

Fangwei Ye (The Chinese University of Hong Kong, Hong Kong); Kenneth W. Shum (Institute of Network Coding, Hong Kong); Raymond W. Yeung (The Chinese University of Hong Kong, Hong Kong)

The problem of exact-repair regenerating codes against eavesdropping attack is studied. The eavesdropping model we consider is that the eavesdropper has the capability to observe the data involved in the repair of a subset of nodes. In other words, the repair process is required to be secure. The focus of this paper is on such systems with $5$ nodes. Specifically, we characterize the rate regions under secure repair for the $(5,3,4)$ and $(5,4,4)$ instances with $1$ or $2$ wiretap nodes. While characterizing the rate region of exact-repair regenerating codes remains open, our results indicate that the problem may be more tractable under the security constraint as described.

A p-ary MDPC scheme

Qian Guo (Lund University, Sweden); Thomas Johansson (Lund University, Sweden)

The McEliece public key cryptosystem is an attractive general construction that has received extensive attention over the years. Recently, a very promising version, called QC-MDPC, was proposed. By using binary quasi-cyclic codes, the size of the public key can be decreased significantly. The decryption step involves iterative decoding of moderate-density parity-check (MDPC) codes. In this paper we propose a non-binary version of QC-MDPC. The errors in the new scheme are discrete Gaussian and the decryption involves a new type of iterative decoding with a non-binary alphabet. The resulting scheme improves upon the binary QC-MDPC in that the size of the public key can be even smaller.

Improved Erasure List Decoding Locally Repairable Codes Using Alphabet-Dependent List Recovery

Alexander Zeh (Technion, Israel); Antonia Wachter-Zeh (Technion – Israel Institute of Technology, Israel)

New optimal constructions of locally repairable codes over small fields and their polynomial-time erasure list decoding are considered. Our code constructions are based on generalized code concatenation and give optimal binary codes with locality r = 2, 3. The impact of alphabet-dependent list recovery for alternant codes when applied to erasure list decoding of our constructed binary locally repairable codes is analyzed.

Lower Bounds and Optimal Protocols for Three-Party Secure Computation

Sundara Rajan S, Shijin Rajakrishnan and Andrew Thangaraj (IIT Madras, India); Vinod M Prabhakaran (Tata Institute of Fundamental Research, India)

The problem of three-party secure computation, in which a function of the private data of two parties is to be computed by a third party without revealing information beyond the respective inputs or outputs, is considered. New and better lower bounds on the amount of communication required between the parties to guarantee zero probability of error in the computation and achieve information-theoretic security are derived. Protocols are presented and proved to be optimal in some cases by showing that they achieve the improved lower bounds.

Continuity and Robustness to Incorrect Priors in Estimation and Control

Graeme Baker and Serdar Yüksel (Queen’s University, Canada)

This paper studies continuity properties of single- and multi-stage estimation and stochastic control problems with respect to initial probability distributions, and applications of these results to the study of robustness of control policies applied to systems with incomplete probabilistic models. We establish that continuity and robustness cannot be guaranteed under weak and setwise convergences, but the optimal cost is continuous under the more stringent topology of total variation for stage-wise cost functions that are nonnegative, measurable, and bounded. Under further conditions on either the measurement channels or the source processes, however, weak convergence is sufficient. We also discuss similar properties under the Wasserstein distance. These results are shown to have direct implications, positive or negative, for robust control: if an optimal control policy is designed for a prior model $\tilde{P}$, and if $\tilde{P}$ is close to the true model $P$, then the application of the incorrect optimal policy to the true model leads to a loss that is continuous in the distance between $\tilde{P}$ and $P$ under total variation and, in some setups, under weak convergence distance measures.

Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels

Shunsuke Horii, Toshiyasu Matsushima and Shigeichi Hirasawa (Waseda University, Japan)

In this paper, we develop a new decoding algorithm of binary linear codes for the symbol-pair read channel. The symbol-pair read channel was recently introduced by Cassuto and Blaum to model channels whose write resolution is higher than their read resolution. The proposed decoding algorithm is based on linear programming (LP). It is proved that the proposed LP decoder has the maximum-likelihood (ML) certificate property, i.e., the output of the decoder is guaranteed to be the ML codeword when it is integral. We also introduce the fractional pair distance $d_{\mathrm{fp}}$ of the code, which is a lower bound on the minimum pair distance. It is proved that the proposed LP decoder corrects up to $\lceil d_{\mathrm{fp}}/2\rceil - 1$ errors.
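The pair metric underlying this setting can be sketched as follows, following Cassuto and Blaum's definitions: the channel reads overlapping symbol pairs, and the pair distance of two words is the Hamming distance between their pair-read vectors (cyclic reading assumed in this sketch):

```python
def pair_vector(x):
    """Pair-read vector of x: the sequence of overlapping adjacent symbol
    pairs, read cyclically, as seen by a symbol-pair read channel."""
    n = len(x)
    return [(x[i], x[(i + 1) % n]) for i in range(n)]

def pair_distance(x, y):
    """Pair distance: Hamming distance between the pair-read vectors."""
    return sum(a != b for a, b in zip(pair_vector(x), pair_vector(y)))
```

A single symbol error disturbs two pair reads, which is why pair distances, and hence pair-error-correcting radii, exceed their Hamming counterparts.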

Classical-quantum channels with causal and non-causal channel state information at the sender

Holger Boche (Technical University Munich, Germany); Ning Cai (Xidian University, P.R. China); Janis Noetzel (Universitat Autònoma de Barcelona, Spain)

We study an analog of the well-known Gel'fand-Pinsker channel which uses quantum states for the transmission of data. We consider the case where both the sender's inputs to the channel and the channel states are elements of a finite set (a cq-channel with state information at the sender). While the receiver has no information about the channel states, we distinguish between two cases at the sender: it gets either causal or non-causal channel state information. We give a single-letter description of the capacity in the first case and present two different regularized expressions of the capacity for the second. It turns out that the change from causal to non-causal channel state information at the encoder causes the complexity of numerical computation of the capacity formula to change from simple to seemingly difficult. Still, even in the difficult non-causal case we draw nontrivial conclusions, for example regarding continuity of the capacity with respect to changes in the system parameters.

Security in The Gaussian Interference Channel: Weak and Moderately Weak Interference Regimes

Parisa Babaheidarian (Boston University, USA); Somayeh Salimi (KTH Royal Institute of Technology, Sweden); Panagiotis Papadimitratos (KTH, Sweden)

We consider a secure communication scenario through the two-user Gaussian interference channel: each transmitter (user) has a confidential message to send reliably to its intended receiver while keeping it secret from the other receiver. Prior work investigated the performance of two different approaches for this scenario; i.i.d. Gaussian random codes and real alignment of structured codes. While the latter achieves the optimal sum secure degrees of freedom (s.d.o.f.), its extension to finite SNR regimes is challenging. In this paper, we propose a new achievability scheme for the weak and the moderately weak interference regimes, in which the reliability as well as the confidentiality of the transmitted messages are maintained at any finite SNR value. Our scheme uses lattice structure, structured jamming codewords, and lattice alignment in the encoding and the asymmetric compute-and-forward strategy in the decoding. We show that our lower bound on the sum secure rates scales linearly with log(SNR) and hence, it outperforms i.i.d. Gaussian random codes. Furthermore, we show that our achievable result is asymptotically optimal. Finally, we provide a discussion on an extension of our scheme to K>2 users.

Distortion Bounds for Source Broadcast over Degraded Channel

Lei Yu, Houqiang Li and Weiping Li (University of Science and Technology of China, P.R. China)

This paper investigates the joint source-channel coding problem of sending a memoryless source over a memoryless degraded broadcast channel. An inner bound and an outer bound on the achievable distortion region are derived, which respectively generalize and unify several existing bounds. Moreover, when specialized to Gaussian source broadcast or binary source broadcast, the inner and outer bounds recover the best known inner and outer bounds in the literature. The inner and outer bounds are also extended to the Wyner-Ziv source broadcast problem, i.e., source broadcast with degraded side information available at the decoders. New bounds are obtained when specialized to the Wyner-Ziv Gaussian and Wyner-Ziv binary cases.

Optimal Byzantine Attack for Distributed Inference with M-ary Quantized Data

Po-Ning Chen (National Chiao Tung University, Taiwan); Yunghsiang Sam Han (National Taiwan University of Science and Technology, Taiwan); Hsuan-Yin Lin (National Chiao Tung University, Taiwan); Pramod Varshney (Syracuse University, USA)

In many applications that employ wireless sensor networks (WSNs), robustness of distributed inference against Byzantine attacks is very important. In this work, distributed inference when local sensors send M-ary data to the fusion center is considered and the optimal Byzantine attack policy is derived under the assumption that the Byzantine adversary has the knowledge of the statistics of local quantization outputs. Our analysis indicates that the fusion center can be blinded, in which case the detection error is as poor as a random guess, when an adequate fraction of the sensors are compromised.

Stationarity and Ergodicity of Stochastic Non-Linear Systems Controlled over Communication Channels

Serdar Yüksel (Queen’s University, Canada)

This paper is concerned with the following problem: given a stochastic non-linear system controlled over a noisy channel, what is the largest class of channels for which there exist coding and control policies so that the closed-loop system is stochastically stable? The stochastic stability notions considered are stationarity, ergodicity, and asymptotic mean stationarity. We do not restrict the state space to be compact; for example, the systems considered can be driven by unbounded noise. Necessary and sufficient conditions are obtained for a large class of systems and channels. A generalization of Bode's integral formula for a large class of non-linear systems and information channels is obtained.

The Capacity Gap Calculation for Multi-Pair Bidirectional Gaussian Relay Networks Based on Successive Compute-and-Forward Strategy

Leila Ghabeli (Sharif University of Technology, Iran); Milan S. Derpich (Universidad Tecnica Federico Santa Maria, Chile)

In this work we obtain capacity gaps for a class of N-pair bidirectional Gaussian relay networks, where one relay can help communications between the corresponding user pairs. For the uplink, we apply a generalization of the successive compute-and-forward strategy (SCAF) for decoding linear combinations of the messages of each user pair at the relay. The downlink channel is treated as a broadcast network with N receiver groups. It is shown that for all channel gains, the achievable rate region is within gaps of $(N-1+\log_2 N)/2N$ and $(N+\log_2 N)/2N$ bpcu of the cut-set upper bound for the restricted and non-restricted models, respectively. These gaps tend to 1/2 bpcu per user as N goes to infinity. As a first step, we derive a comprehensive formulation of the N-step asymmetric SCAF and show that it includes the previously proposed SCAF approaches.

Recursive Bounds for Locally Repairable Codes with Multiple Repair Groups

Jie Hao and Shutao Xia (Tsinghua University, P.R. China); Bin Chen (South China Normal University, P.R. China)

Recently, codes with \emph{locality} have been widely studied to deal with the node repair problem in distributed storage systems. \emph{Locally repairable codes} (LRCs) are linear codes with locality properties for code symbols. If a code symbol can be repaired by each of $t$ disjoint groups of other symbols, where each group has size at most $r$, this code symbol is said to have \emph{$(r,t)$-locality}. In this paper, we present recursive bounds for LRCs with $(r,t)$-locality for all code symbols. The recursive bounds have simple forms and can be used to derive various bounds for LRCs. Moreover, it is shown that many previously known bounds for LRCs can be derived from our recursive bounds. Besides the recursive bounds, we also propose a linear programming bound for LRCs with $(r,t)$-locality for all code symbols.

Uncertain Wiretap Channels and Secure Estimation

Moritz Wiese (KTH Royal Institute of Technology, Sweden); Karl H. Johansson (KTH, Sweden); Tobias J. Oechtering (KTH Royal Institute of Technology & School of Electrical Engineering, EE, Sweden); Panagiotis Papadimitratos (KTH, Sweden); Henrik Sandberg and Mikael Skoglund (KTH Royal Institute of Technology, Sweden)

The zero-error secrecy capacity of uncertain wiretap channels is defined and, for the case in which the sensor-estimator channel is perfect, calculated; further properties are discussed. The problem of estimating a dynamical system with nonstochastic disturbances is studied, where the sensor is connected to the estimator and an eavesdropper via an uncertain wiretap channel. The estimator should obtain a uniformly bounded estimation error, whereas the eavesdropper's error should tend to infinity. It is proved that the system can be estimated securely if the zero-error capacity of the sensor-estimator channel is strictly larger than the logarithm of the system's unstable pole and the zero-error secrecy capacity of the uncertain wiretap channel is positive.

Multiuser Authentication with Anonymity Constraints over Noisy Channels

Remi A Chou and Aylin Yener (Pennsylvania State University, USA)

We consider authentication of messages sent by L legitimate transmitters to a legitimate receiver over a noisy multiple access channel. We assume the presence of a computationally unbounded opponent who has access to noisy observations of the messages transmitted, and can perform impersonation or substitution attacks. In addition, we consider anonymity constraints where the legitimate receiver must be able to authenticate the messages he receives with respect to predetermined groups of transmitters, but must be kept ignorant of the identity of the transmitter of a given message within a given group. Our main result is asymptotically matching upper and lower bounds on the probability of successful attack for the proposed authentication scheme. Our result quantifies the impact of a multiuser setting compared to a single-user setting, as well as the negative impact of anonymity constraints, on the probability of successful attack.

Gaussian Approximation for the Downlink Interference in Heterogeneous Cellular Networks

Serkan Ak and Hazer Inaltekin (Antalya International University, Turkey); H. Vincent Poor (Princeton University, USA)

This paper derives Gaussian approximation bounds for the standardized aggregate wireless interference (AWI) in the downlink of dense K-tier heterogeneous cellular networks when base stations in each tier are distributed over the plane according to a (possibly non-homogeneous) Poisson process. The proposed methodology is general enough to account for general bounded path-loss models and fading statistics. The deviations of the distribution of the standardized AWI from the standard normal distribution are measured in terms of the Kolmogorov-Smirnov distance. An explicit expression bounding the Kolmogorov-Smirnov distance between these two distributions is obtained as a function of a broad range of network parameters such as per-tier transmission power levels, base station locations, fading statistics and the path-loss model. A simulation study is performed to corroborate the analytical results. In particular, a good statistical match between the standardized AWI distribution and its normal approximation occurs even for moderately dense heterogeneous cellular networks. These results are expected to have important ramifications for the characterization of performance upper and lower bounds for emerging 5G network architectures.

Fixed-Length Compression for Letter-Based Fidelity Measures in the Finite Blocklength Regime

Lars Palzer and Roy Timo (Technische Universität München, Germany)

This paper studies fixed-length compression with multiple constraints in the finite blocklength regime. We introduce two different average distortion measures and consider constraints for individual source outcomes. The concept of d-tilted information as well as recent finite-length bounds for the optimal coding rates are extended to this setting. We further particularise our results to the binary memoryless source and a sparse Gaussian source.

Interventional Dependency Graphs: an Approach for Discovering Influence Structure

Jalal Etesami and Negar Kiyavash (University of Illinois at Urbana-Champaign, USA)

In this paper, we introduce a new type of graphical model, interventional dependency graphs, to encode interactions among processes. This type of graphical model is defined using a measure that captures influence relationships based on the principle of intervention. The principle of intervention discovers an influence relationship by assigning values to certain variables while fixing others, to see how these changes affect the statistics of the variables of interest. Furthermore, we derive some properties of the dynamics that can be inferred from these graphs and establish the relationship between this new graphical model and the directed information graphs used for causal inference.

A Partial Order For the Synthesized Channels of a Polar Code

Christian Schuerch (ETH Zurich, Switzerland)

A partial order for the synthesized channels $W_N^{(i)}$ of a polar code is presented that is independent of the underlying binary-input channel $W$. The partial order is based on the observation that $W_N^{(j)}$ is stochastically degraded with respect to $W_N^{(i)}$ if $j$ is obtained by swapping a more significant 1 with a less significant 0 in the binary expansion of $i$. We derive an efficient representation of the partial order, the so-called covering relation. The partial order is then combined with another partial order from the literature that is also independent of $W$. Finally, we give some remarks on how this combined partial order can be used to simplify the code construction of polar codes.
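
As a small illustration of the swap rule (a sketch, not code from the paper; the bit-indexing convention is an assumption), the following Python function enumerates, for an index $i$ with an $n$-bit binary expansion, the indices $j$ obtained by swapping a more significant 1 with a less significant 0, i.e., indices whose synthesized channels are degraded with respect to $W_N^{(i)}$:

```python
def swaps_down(i, n):
    """Indices j obtained from the n-bit expansion of i by swapping a more
    significant 1 with a less significant 0; by the partial order, each
    W_N^{(j)} is then stochastically degraded w.r.t. W_N^{(i)} (N = 2**n)."""
    bits = [(i >> k) & 1 for k in range(n)]  # bits[k] = bit of significance k
    out = set()
    for hi in range(n):
        for lo in range(hi):
            if bits[hi] == 1 and bits[lo] == 0:
                out.add(i - (1 << hi) + (1 << lo))
    return sorted(out)
```

For example, with $n = 3$, the index $i = 4$ (binary 100) dominates $j \in \{1, 2\}$ under single swaps; taking the transitive closure of this relation yields the full $W$-independent partial order.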

Downlink Outage Performance of Heterogeneous Cellular Networks

Serkan Ak and Hazer Inaltekin (Antalya International University, Turkey); H. Vincent Poor (Princeton University, USA)

This paper derives tight performance upper and lower bounds on the downlink outage efficiency of K-tier heterogeneous cellular networks (HCNs) for general signal propagation models with Poisson distributed base stations in each tier. In particular, the proposed approach to analyze the outage metrics in a K-tier HCN allows for the use of general bounded path-loss functions and random fading processes of general distributions. Considering two specific base station (BS) association policies, it is shown that the derived performance bounds track the actual outage metrics reasonably well for a wide range of BS densities, with the gap among them becoming negligibly small for denser HCN deployments. A simulation study is also performed for 2-tier and 3-tier HCN scenarios to illustrate the closeness of the derived bounds to the actual outage performance with various selections of the HCN parameters.

Short Block Length Code Design for Interference Channels

Shahrouz Sharifi (Arizona State University); Mehdi Dabirnia (Bilkent University, Turkey); A. Korhan Tanc (Kirklareli University, Turkey); Tolga M. Duman (Bilkent University, Turkey)

We focus on short block length code design for Gaussian interference channels (GICs) using trellis-based codes. We employ two different decoding techniques at the receiver side, namely, joint maximum likelihood (JML) decoding and single user (SU) minimum distance decoding. For different interference levels (strong and weak) and decoding strategies, we derive error-rate bounds to evaluate the code performance. We utilize the derived bounds in code design and provide several numerical examples for both strong and weak interference cases. We show that under the JML decoding, the newly designed codes offer significant improvements over the alternatives of optimal point-to-point (P2P) trellis-based codes and off-the-shelf low density parity check (LDPC) codes with the same block lengths.

On the Continuous-Time Poisson Channel with Varying Dark Current Known to the Transmitter

Ligong Wang (ETIS & CNRS, France)

This paper considers a continuous-time Poisson channel whose dark current varies with time. The actual values of the dark current are revealed to the transmitter as channel-state information (CSI), either causally or noncausally. It is shown that, in the limit where the coherence time of the dark current tends to zero, the improvement in capacity provided by both causal and noncausal CSI vanishes linearly with the coherence time.

Secure Computation of Randomized Functions

Deepesh Data (Tata Institute of Fundamental Research, Mumbai, India)

Two-user secure computation of randomized functions is considered, where only one user computes the output. Both users are semi-honest, and the computation is such that no user learns any additional information about the other user’s input and output beyond what can be inferred from its own input and output. First we consider a scenario where the privacy conditions are against both users. In the perfect security setting, Kilian \cite{Kilian00} gave a characterization of securely computable randomized functions, and we provide rate-optimal protocols for such functions. We prove that the same characterization holds in the asymptotic security setting as well and give a rate-optimal protocol. In another scenario, where the privacy condition is only against the user who is not computing the function, we also provide rate-optimal protocols. For perfect security in both scenarios, our results are in terms of chromatic entropies of different graphs. In the asymptotic security setting, we obtain single-letter expressions for the rates in both scenarios.

Constructing Sub-exponentially Large Optical Priority Queues with Switches and Fiber Delay Lines

Bin Tang and Xiaoliang Wang (Nanjing University, P.R. China); Cam-Tu Nguyen (Nanjing University, Vietnam); Sanglu Lu (Nanjing University, P.R. China)

Optical switching has been considered a natural choice to keep pace with growing fiber link capacity. One key research issue in all-optical switching is the design of optical queues using optical crossbar switches and fiber delay lines (SDL). In this paper, we focus on the construction of an optical priority queue with a single $(M+2)\times (M+2)$ crossbar switch and $M$ fiber delay lines, and evaluate it in terms of the buffer size of the priority queue. Currently, the best known upper bound on the buffer size is $O(2^M)$, while existing methods can only construct a priority queue with buffer $O(M^3)$.
We make a significant step toward closing this huge gap. We propose a very efficient construction of priority queues with buffer $2^{\Theta(\sqrt{M})}$. We use 4-to-1 multiplexers with different buffer sizes, which can be constructed efficiently with SDL, as intermediate building blocks to simplify the design. The key idea in our construction is to route each packet entering the switch to one of several groups of four 4-to-1 multiplexers according to its current priority, which is shown to be collision-free.

Multiple Quantum Hypothesis Testing Expressions and Classical-Quantum Channel Converse Bounds

Gonzalo Vazquez-Vilar (Universidad Carlos III de Madrid, Spain)

Alternative exact expressions are derived for the minimum error probability of a hypothesis test discriminating among M quantum states. The first expression corresponds to the error probability of a binary hypothesis test with certain parameters; the second involves the optimization of a given information-spectrum measure. Particularized to the classical-quantum channel coding setting, this characterization implies the tightness of two existing converse bounds: one derived by Matthews and Wehner using hypothesis testing, and one obtained by Hayashi and Nagaoka via an information-spectrum approach.

Adaptation is Useless for Two Discrete Additive-Noise Two-Way Channels

Lin Song, Fady Alajaji and Tamas Linder (Queen’s University, Canada)

In two-way channels, each user transmits and receives at the same time. This allows each encoder to interactively adapt its current input to its own message and all previously received signals. Such a coding approach can introduce correlation between the inputs of different users, since all the users’ outputs are correlated by the nature of the channel. However, for some channels, such adaptation in the coding scheme and its induced correlation among users do not help enlarge the capacity region with respect to the standard coding method (where each user encodes based only on its own message). In this paper, it is shown that adaptation is useless for two classes of two-way discrete channels: the modulo additive-noise channel with memory and the multiple access/degraded broadcast channel.

Time and frequency selective Ricean MIMO capacity: an ergodic operator approach

Walid Hachem (Telecom-paristech, France); Aris L. Moustakas (University of Athens, Greece); Leonid Pastur (Institute of Low Temperature Physics, Kharkiv, Ukraine)

From the standpoint of information theory, a time and frequency selective Ricean ergodic MIMO channel can be represented in the Hilbert space $\ell^2(Z)$ by a random ergodic self-adjoint operator whose Integrated Density of States (IDS) governs the behavior of Shannon’s mutual information. In this paper, it is shown that when the numbers of antennas at the transmitter and at the receiver tend to infinity at the same rate, the mutual information per receive antenna tends to a quantity that can be identified. This result is obtained by analyzing the behavior of the Stieltjes transform of the IDS in the regime of large numbers of antennas.

Uniformity Properties of Construction C

Maiara Bollauf (University of Campinas, Brazil); Ram Zamir (Tel Aviv University, Israel)

Construction C (also known as Forney’s multi-level code formula) forms a Euclidean code for the additive white Gaussian noise (AWGN) channel from L binary code components. If the component codes are linear, then the minimum distance and the kissing number are the same for all points. However, while in the single-level (L = 1) case it reduces to lattice Construction A, a multi-level Construction C is in general not a lattice. We show that a two-level (L = 2) Construction C satisfies Forney’s definition of a geometrically uniform constellation. Specifically, every point sees the same configuration of neighbors, up to a reflection of the coordinates in which the lower-level code is equal to 1. In contrast, for three levels and up (L ≥ 3), we construct examples where the distance spectrum varies between the points; hence the constellation is not geometrically uniform.
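
The two-level uniformity statement can be checked numerically on a toy example. The sketch below (an illustration with assumed small component codes, not the paper's proof) builds the L = 2 Construction C set c1 + 2c2 from a binary repetition code and the full binary code, and verifies that every point sees the same squared-distance spectrum modulo translations by 4Z^n:

```python
from itertools import product
from collections import Counter

def construction_c(C1, C2):
    """One period (mod 4) of the two-level Construction C set c1 + 2*c2."""
    return [tuple((a + 2 * b) % 4 for a, b in zip(c1, c2))
            for c1 in C1 for c2 in C2]

def sq_dist(p, q):
    """Squared Euclidean distance on (Z/4Z)^n, i.e. up to shifts by 4Z^n."""
    return sum(min(abs(a - b), 4 - abs(a - b)) ** 2 for a, b in zip(p, q))

C1 = [(0, 0), (1, 1)]                  # level 1: binary repetition code
C2 = list(product((0, 1), repeat=2))   # level 2: full binary code
pts = construction_c(C1, C2)
spectra = [Counter(sq_dist(p, q) for q in pts if q != p) for p in pts]
assert all(s == spectra[0] for s in spectra)  # every point: same spectrum
```

An identical distance spectrum from every point is a necessary condition for geometric uniformity; the L ≥ 3 counterexamples mentioned in the abstract are constructions in which such spectra differ between points.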

Optical Fiber MIMO Channel Model and its Analysis

Apostolos Karadimitrakis and Aris L. Moustakas (University of Athens, Greece); Hartmut Hafermann and Axel Müller (Huawei Technologies, France)

Technology is moving towards space division multiplexing in optical fiber to keep up with increasing data requirements and to avoid an imminent capacity crunch. Hence, it is of great interest to estimate the potential gains of this approach. As more spatial channels are being packed into a single fiber, their increased crosstalk necessitates the use of MIMO techniques to guarantee reliable operation. In this paper, we consider the capacity of the optical channel. We exploit the analogy between an optical fiber and a model from mesoscopic physics – a chaotic cavity – to obtain a novel channel model for the optical fiber. The model captures both random distributed crosstalk and mode-dependent loss, which are described within the framework of scattering theory. Using this model and tools from replica theory and random matrix theory, we derive the capacity of the fiber optical MIMO channel.

Convergence of generalized entropy minimizers in sequences of convex problems

Imre Csiszár (Renyi Institute, Hungarian Academy of Science, Hungary); František Matúš (Academy of Sciences of the Czech Republic & Institute of Information Theory and Automation, Czech Republic)

Integral functionals based on convex normal integrands are minimized over convex constraint sets. Generalized minimizers exist under a boundedness condition. Sequences of the minimization problems are studied when the constraint sets are nested. The corresponding sequences of generalized minimizers are related to the minimization over limit convex sets. Martingale theorems and moment problems are discussed.

Polar Coding for the Multiple Access Wiretap Channel via Rate-Splitting and Cooperative Jamming

Remi A Chou and Aylin Yener (Pennsylvania State University, USA)

We consider strongly secure communication over a discrete memoryless multiple access wiretap channel with two transmitters; no degradation or symmetry assumptions are made on the channel. Our main result is that any rate pair known to be achievable with a random-coding-like proof is also achievable with a low-complexity polar coding scheme. Moreover, if the rate pair is known to be achievable without time-sharing, then time-sharing is not needed in our polar coding scheme either. Our proof technique relies on rate-splitting and different cooperative jamming strategies. Specifically, our coding scheme combines several point-to-point codes that either aim at secretly conveying a message to the legitimate receiver or at performing cooperative jamming. Each point-to-point code relies on a chaining construction to be able to deal with an arbitrary channel and strong secrecy. We assess reliability and strong secrecy through a detailed analysis of the dependencies between the random variables involved in the scheme.

Analog Coding of a Source with Erasures

Marina Haikin and Ram Zamir (Tel Aviv University, Israel)

Analog coding decouples the tasks of protecting against erasures and noise. For erasure correction, it creates an “analog redundancy” by means of band-limited discrete Fourier transform (DFT) interpolation, or more generally, by an over-complete expansion based on a frame. We examine the analog coding paradigm for the dual setup of a source with “erasure” side-information (SI) at the encoder. The excess rate of analog coding above the rate-distortion function (RDF) is associated with the energy of the inverse of submatrices of the frame, where each submatrix corresponds to a possible erasure pattern. We give partial theoretical and numerical evidence that a variety of structured frames, in particular DFT frames with difference-set spectrum and more general equiangular tight frames (ETFs) with a common MANOVA limiting spectrum, minimize the excess rate over all possible frames. However, they do not achieve the RDF, even in the limit as the dimension goes to infinity.

Capacity and Degree-of-Freedom of OFDM Channels with Amplitude Constraint

Saeid Haghighatshoar (Technische Universität Berlin, Germany); Peter Jung (TU-Berlin, Communications and Information Theory Group & Fraunhofer HHI – Heinrich Hertz Institute, Germany); Giuseppe Caire (Technische Universität Berlin, Germany)

In this paper, we study the capacity and degree-of-freedom (DoF) scaling for continuous-time amplitude-limited AWGN channels in radio frequency (RF) and intensity-modulated optical communication (OC) settings. More precisely, we study how the capacity varies in terms of the OFDM block transmission time $T$, bandwidth $W$, amplitude $A$ and the noise spectral density $\frac{N_0}{2}$. We first find suitable discrete encoding spaces for both cases, and prove that they are convex sets admitting a semi-definite programming (SDP) representation. Using tools from convex geometry, we find lower and upper bounds on the volume of these encoding sets, which we exploit to derive sharp lower and upper bounds on the capacity. We also study a practical Tone-Reservation (TR) encoding algorithm and prove that its performance can be characterized by the statistical width of an appropriate convex set. It has recently been observed that statistical width plays a crucial role in high-dimensional estimation problems under constraints, such as those arising in compressed sensing (CS). We discuss some of the implications of the resulting statistical width for the performance of TR, and provide numerical simulations to validate these observations.

PD-sets for Z4-linear codes: Hadamard and Kerdock codes

Roland Barrolleta and Merce Villanueva (Universitat Autònoma de Barcelona, Spain)

Permutation decoding is a technique that strongly depends on the existence of a special subset, called a PD-set, of the permutation automorphism group of a code. In this paper, a general criterion to obtain s-PD-sets of size s+1, which enable correction of up to s errors, for Z4-linear codes is provided. Furthermore, some explicit constructions of s-PD-sets of size s+1 for important families of (nonlinear) Z4-linear codes, such as Hadamard and Kerdock codes, are given.

On deterministic conditions for subspace clustering under missing data

Wenqi Wang (Purdue University, USA); Shuchin Aeron (Tufts University, USA); Vaneet Aggarwal (Purdue University, USA)

In this paper we present a deterministic analysis of sufficient conditions for sparse subspace clustering under missing data, when the data are assumed to come from a Union of Subspaces (UoS) model. In this context we consider two cases, namely Case I, when all the points are sampled at the same coordinates, and Case II, when points are sampled at different locations. We show that results for Case I follow directly from several existing results in the literature, while results for Case II are not as straightforward, and we provide a set of dual conditions under which perfect clustering holds. We provide an extensive set of simulation results for clustering, as well as for completion of data under missing entries, under the UoS model. Our experimental results indicate that, in contrast to the full-data case, accurate clustering does not imply accurate subspace identification and completion, indicating the natural order of relative hardness of these problems.

On Tightness of an Entropic Region Outer Bound for Network Coding and the Edge Removal Property

Ming Fai Wong and Michelle Effros (California Institute of Technology, USA); Michael Langberg (State University of New York at Buffalo, USA)

In this work, we study the Yeung network coding outer bound and prove an equivalence relationship between its tightness and the edge removal problem. In addition, we derive an implicit characterization of the 0-error capacity region using restricted sets of entropic vectors.

Finite-Sample Analysis of Approximate Message Passing

Cynthia Rush (Yale University, USA); Ramji Venkataramanan (University of Cambridge, United Kingdom)

This paper studies the performance of Approximate Message Passing (AMP) in the regime where the problem dimension is large but finite. We consider the setting of high-dimensional regression, where the goal is to estimate a high-dimensional vector $\beta_0$ from an observation $y = A\beta_0 + w$. AMP is a low-complexity, scalable algorithm for this problem. It has the attractive feature that its performance can be accurately characterized in the asymptotic large-system limit by a simple scalar iteration called state evolution. Previous proofs of the validity of state evolution have all been asymptotic convergence results. In this paper, we derive a concentration result for AMP with i.i.d. Gaussian measurement matrices of finite dimension $n \times N$. The result shows that the probability of deviation from the state evolution prediction falls exponentially in $n$. Our result provides theoretical support for empirical findings that have demonstrated excellent agreement of AMP performance with state evolution predictions for moderately large dimensions.
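
A minimal sketch of an AMP recursion of the kind analyzed here, specialized to the soft-threshold denoiser for sparse regression; the tuning constant, problem sizes, and signal prior below are illustrative assumptions, not the paper's setup. The scalar tau is the empirical version of the state-evolution parameter whose concentration the paper studies:

```python
import numpy as np

def soft(x, t):
    """Soft-threshold denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(y, A, iters=30, alpha=1.5):
    """AMP for y = A beta_0 + w. tau estimates the state-evolution parameter;
    the last term in the residual update is the Onsager correction."""
    n, N = A.shape
    beta, z = np.zeros(N), y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(n)          # empirical SE parameter
        beta = soft(beta + A.T @ z, alpha * tau)      # denoising step
        z = y - A @ beta + z * (np.count_nonzero(beta) / n)  # Onsager term
    return beta

rng = np.random.default_rng(0)
n, N, k = 250, 500, 25                        # finite dimensions (assumed)
A = rng.standard_normal((n, N)) / np.sqrt(n)  # i.i.d. Gaussian measurements
beta0 = np.zeros(N)
beta0[:k] = rng.choice([-1.0, 1.0], size=k)   # sparse +/-1 signal (assumed)
y = A @ beta0 + 0.01 * rng.standard_normal(n)
mse = float(np.mean((amp(y, A) - beta0) ** 2))
```

With these parameters the reconstruction error falls well below the signal energy; the deviation of quantities such as tau and the per-iteration MSE from their state-evolution predictions is what the paper shows to concentrate.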

Secrecy in Broadcast Channel with Combating Helpers and Interference Channel with Selfish Users

Karim A. Banawan (University of Maryland, College Park, USA); Sennur Ulukus (University of Maryland, USA)

We investigate the secure degrees of freedom (s.d.o.f.) of two new channel models: broadcast channel with combating helpers and interference channel with selfish users. In the first model, over a classical broadcast channel with confidential messages (BCCM), there are two helpers, each associated with one of the receivers. In the second model, over a classical interference channel with confidential messages (ICCM), there is a helper and users are selfish. The goal of introducing these channel models is to investigate various malicious interactions that arise in networks, including active adversaries. By casting each problem as an extensive-form game and applying recursive real interference alignment, we show that, for the first model, the combating intentions of the helpers are neutralized and the full s.d.o.f. is retained; for the second model, selfishness precludes secure communication and no s.d.o.f. is achieved.

Distributed Information-Theoretic Biclustering

Georg Pichler (Vienna University of Technology, Austria); Pablo Piantanida (CentraleSupélec-CNRS-Université Paris-Sud, France); Gerald Matz (Vienna University of Technology, Austria)

This paper investigates the problem of distributed biclustering of memoryless sources and extends previous work to the general case with more than two sources. Given a set of distributed stationary memoryless sources, the encoders’ goal is to find rate-limited representations of these sources such that the mutual information between two selected subsets of descriptions (each of them generated by distinct encoders) is maximized. This formulation is fundamentally different from conventional distributed source coding problems since here redundancy among descriptions should actually be maximally preserved. We derive non-trivial outer and inner bounds to the achievable region for this problem and further connect them to the CEO problem under logarithmic loss distortion. Since information-theoretic biclustering is closely related to distributed hypothesis testing against independence, our results are also expected to apply to that problem.

On the Performance of Mismatched Data Detection in Large MIMO Systems

Charles Jeon (Cornell University, USA); Arian Maleki (Columbia University, USA); Christoph Studer (Cornell University, USA)

We investigate the performance of mismatched data detection in large multiple-input multiple-output (MIMO) systems, where the prior distribution of the transmit signal used in the data detector differs from the true prior. To minimize the performance loss caused by this prior mismatch, we include a tuning stage into our recently-proposed large MIMO approximate message passing (LAMA) algorithm, which allows us to develop mismatched LAMA algorithms with optimal as well as sub-optimal tuning. We show that carefully-selected priors often enable simpler and computationally more efficient algorithms compared to LAMA with true prior while achieving near-optimal performance. A performance analysis of our algorithms for a Gaussian prior and a uniform prior within a hypercube covering the QAM constellation recovers classical and recent results on linear and non-linear MIMO data detection, respectively.

A General Optimality Condition of Link Scheduling for Emptying a Wireless Network

Qing He and Di Yuan (Linköping University, Sweden); Anthony Ephremides (University of Maryland at College Park, USA)

We consider link scheduling in wireless networks for emptying the queues of the source nodes, and provide a unified mathematical formulation that accommodates all meaningful settings of link transmission rates and network configurations. We prove that any scheduling problem is equivalent to solving a convex problem defined over the convex hull of the rate region. Based on this fundamental insight, a general optimality condition is derived that yields a unified treatment of optimal scheduling. Furthermore, we demonstrate the implications and usefulness of this result. Specifically, by applying the theoretical insight to optimality characterization and complexity analysis of scheduling problems, we can both unify and extend previously obtained results.

The Capacity of Online (Causal) $q$-ary Error-Erasure Channels

Zitan Chen (University of Maryland, USA); Sidharth Jaggi (Chinese University of Hong Kong, Hong Kong); Michael Langberg (State University of New York at Buffalo, USA)

In the $q$-ary online (causal) channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword $\mathbf{x} =(x_1,\ldots,x_n) \in \{0,1,\ldots,q-1\}^n$ symbol-by-symbol via a channel limited to at most $pn$ errors (symbol changes) and $p^{\star} n$ erasures. The channel is “online” (i.e., “causal”) in the sense that at the $i$th step of communication the channel decides whether or not to corrupt the $i$th symbol based only on its view of the symbols $(x_1,\ldots,x_i)$. This is in contrast to the classical adversarial channel, in which the corruption is chosen with full knowledge of the sent codeword $\mathbf{x}$. In this work we extend the results of \cite{dey2012improved,bassily2014causal,zitan2015causal,dey2013upper} (in which the capacities of {\it binary online bit-flip-only channels} and, separately, {\it binary online erasure-only channels} were characterized) in two important ways. First, we obtain the capacity of $q$-ary online channels for general $q$ (rather than just $q=2$). Second, we analyze combined error-erasure corruption models (rather than studying them separately). Characterization of this much broader class of symmetric online channels gives a fuller understanding of the effects of causality on jamming adversaries. The extensions in this paper require novel approaches both for optimal code designs and for matching information-theoretic converse arguments.

Topological Interference Management with Decoded Message Passing

Xinping Yi and Giuseppe Caire (Technische Universität Berlin, Germany)

The topological interference management (TIM) problem studies partially connected interference networks with no channel state information at the transmitters except for the connectivity graph. In this paper, we consider a similar problem in uplink cellular networks where message passing is enabled at the receivers (e.g., base stations), in which decoded messages can be routed to other receivers via backhaul links to help improve overall network performance. For this new problem setting, we address the following two questions: (1) when is orthogonal access optimal? and (2) when does message passing help? From both graph-theoretic and index coding perspectives, we offer preliminary answers to these questions by identifying sufficient and/or necessary conditions.

Algorithmic Aspects of Optimal Channel Coding

Siddharth Barman (Indian Institute of Science, India); Omar Fawzi (ENS de Lyon, France)

A central question in information theory is to determine the maximum success probability that can be achieved in sending a fixed number of messages over a noisy channel. This was first studied in the pioneering work of Shannon who established a simple expression characterizing this quantity in the limit of multiple independent uses of the channel. Here we consider the general setting with only one use of the channel. We observe that the maximum success probability can be expressed as the maximum value of a submodular function. Using this connection, we establish the following results:
1. There is a simple greedy polynomial-time algorithm that computes a code achieving a (1-1/e)-approximation of the maximum success probability. Moreover, for this problem it is NP-hard to obtain an approximation ratio strictly better than (1-1/e).
2. Shared quantum entanglement between the sender and the receiver can increase the success probability by a factor of at most 1/(1-1/e). In addition, this factor is tight if one allows an arbitrary non-signaling box between the sender and the receiver.
3. We give tight bounds on the one-shot performance of the meta-converse of Polyanskiy-Poor-Verdú.
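
As a toy instance of the greedy algorithm in item 1 (an illustrative sketch; the discrete channel and the uniform-message ML-decoding formula are assumptions consistent with the one-shot setting), one can greedily maximize the submodular function f(S) = Σ_y max_{x∈S} W[x][y], whose value divided by M is the success probability of the best code using input set S:

```python
def f(W, S):
    """Monotone submodular objective: sum_y max_{x in S} W[x][y].
    f(S)/|S| is the ML success probability of codebook S, uniform messages."""
    return sum(max(W[x][y] for x in S) for y in range(len(W[0])))

def greedy_code(W, M):
    """Greedy codebook selection: a (1 - 1/e)-approximation of the maximum
    success probability over size-M codebooks."""
    S = []
    for _ in range(M):
        S.append(max((x for x in range(len(W)) if x not in S),
                     key=lambda x: f(W, S + [x])))
    return S, f(W, S) / M

W = [[0.9, 0.1], [0.1, 0.9]]   # assumed toy channel: BSC(0.1), rows W[x][y]
S, p_succ = greedy_code(W, 2)  # picks both inputs; p_succ = 1 - 0.1 = 0.9
```

On this tiny example greedy is exact; in general it is only guaranteed to come within a factor (1 - 1/e) of optimal, and by the hardness result in item 1 no polynomial-time algorithm can do better in the worst case.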

Simultaneous Connectivity in Heterogeneous Cognitive Radio Networks

Michal Yemini, Anelia Somekh-Baruch, Reuven Cohen and Amir Leshem (Bar-Ilan University, Israel)

In this paper we analyze the connectivity of cognitive radio ad-hoc networks. Contrary to previous works, we pursue the connectivity of both the primary and secondary networks, a state we call “simultaneous connectivity”. We determine that if the networks are simultaneously connected then their infinite connected components are unique. In addition, we characterize the region of densities in which both the primary and secondary networks have a unique infinite connected component.

Feedback Enhances Simultaneous Energy and Information Transmission in Multiple Access Channels

Selma Belhadj Amor (Inria, France); Samir M. Perlaza (INRIA, France); Ioannis Krikidis (University of Cyprus, Cyprus); H. Vincent Poor (Princeton University, USA)

In this paper, the fundamental limits of simultaneous information and energy transmission in the two-user Gaussian multiple access channel with feedback are fully characterized. All the achievable information and energy transmission rates (in bits per channel use and energy-units per channel use, respectively) are identified. More specifically, the information-energy capacity region is fully characterized. A simple achievability scheme based on power-splitting and Ozarow’s scheme is shown to be optimal. Finally, the maximum individual information rates and the information sum-capacity that are achievable given a minimum energy rate constraint of b energy-units per channel use at the input of the energy harvester are identified. An interesting conclusion is that for a fixed information transmission rate, feedback can at most double the energy transmission rate with respect to the case without feedback.

On the Relationship Between Edge Removal and Strong Converses

Oliver Kosut (Arizona State University, USA); Joerg Kliewer (New Jersey Institute of Technology, USA)

This paper explores the relationship between two ideas in network information theory: edge removal and strong converses. Edge removal properties state that if an edge of small capacity is removed from a network, the capacity region does not change too much. Strong converses state that, for rates outside the capacity region, the probability of error converges to 1. Various notions of edge removal and strong converse are defined, depending on how edge capacity and residual error probability scale with blocklength, and relations between them are proved. In particular, each class of strong converse implies a specific class of edge removal. The opposite direction is proved for deterministic networks, and some discussion is given for the noisy case.

Defect Tolerance: Fundamental Limits and Examples

Jennifer Tang (MIT, USA); Da Wang (Two Sigma Investments, USA); Yury Polyanskiy (MIT, USA); Gregory Wornell (Massachusetts Institute of Technology, USA)

This paper addresses the question of how to add redundancy to a collection of physical objects so that the overall system is more robust to failures. Physical redundancy can (generally) only be achieved by employing copy/substitute procedures. This is fundamentally different from information redundancy, where a single parity check simultaneously protects a large number of data bits against a single erasure. We propose a bipartite graph model of designing defect-tolerant systems where defective objects are repaired by reconnecting them to strategically placed redundant objects. The fundamental limits of this model are characterized under various asymptotic settings and both asymptotic and finite-size optimal systems are constructed.
Mathematically, we say that a k-by-m bipartite graph corrects t defects over an alphabet of size q if for every q-coloring of the k left vertices there exists a coloring of the m right vertices such that every left vertex is connected to at least t same-colored right vertices. We study the tradeoff between the redundancy m/k and the total number of edges in the graph divided by k. The question is trivial when q ≥ k: the optimal solution is a simple t-fold replication. However, when q < k some non-trivial savings are possible by leveraging the inherent repetition of colors.
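The combinatorial definition above is concrete enough to check by brute force on small instances. The following sketch is not from the paper and its function names are illustrative; it verifies the defect-correction property exhaustively and confirms that t-fold replication succeeds for every left coloring:

```python
from itertools import product

def corrects(k, m, edges, q, t):
    """Brute-force check of the definition: a k-by-m bipartite graph
    (edges: set of (left, right) pairs) corrects t defects over an
    alphabet of size q if for every q-coloring of the left vertices
    there exists a q-coloring of the right vertices giving each left
    vertex at least t same-colored right neighbors."""
    nbrs = [[r for (l, r) in edges if l == i] for i in range(k)]
    for left in product(range(q), repeat=k):
        if not any(all(sum(right[r] == left[i] for r in nbrs[i]) >= t
                       for i in range(k))
                   for right in product(range(q), repeat=m)):
            return False
    return True

# t-fold replication: each left vertex gets its own t dedicated right vertices.
k, t, q = 2, 2, 2
rep_edges = {(i, i * t + j) for i in range(k) for j in range(t)}
assert corrects(k, t * k, rep_edges, q, t)
```

Setting each dedicated right vertex to its left neighbor's color always works, which is why replication is optimal in the trivial regime q ≥ k.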

A single-shot approach to lossy source coding under logarithmic loss

Yanina Shkel and Sergio Verdú (Princeton University, USA)

This paper studies the problem of lossy source coding with a specific distortion measure: logarithmic loss. The focus of this paper is on the single-shot approach which exposes the connection between lossy source coding with log-loss and lossless source coding. Point-to-point bounds, including the single-shot fundamental limit for average as well as excess distortion, are presented. Two multi-terminal problems are addressed: coding with side information (Wyner-Ziv), and multiple descriptions coding. In both cases, the application of the Shannon-McMillan Theorem to the single-shot bounds immediately yields the rate-distortion function and the rate distortion-region for stationary and ergodic sources.

Coding Across Unicast Sessions can Increase the Secure Message Capacity

Gaurav Kumar Agarwal (University of California, Los Angeles, USA); Martina Cardone (University of California, Los Angeles, USA); Christina Fragouli (UCLA, USA)

This paper characterizes the secret message capacity of three networks in which two unicast sessions share some of the communication resources. Each network consists of erasure channels with state feedback. A passive eavesdropper is assumed to wiretap any one of the links. The capacity-achieving schemes as well as the outer bounds are formulated as linear programs. The proposed strategies are then numerically evaluated and shown to achieve higher rates (up to double the single- or sum-rate) than alternative strategies in which the network resources are time-shared between the two sessions. These results represent a step towards the secure capacity characterization for general networks. They also show that, even in configurations for which network coding offers no benefits in the absence of security, it can become beneficial under security constraints.

QoS-Driven Energy-Efficient Power Control with Markov Arrivals and Finite-Alphabet Inputs

Gozde Ozcan, Mustafa Ozmen and M. Cenk Gursoy (Syracuse University, USA)

This paper proposes optimal power adaptation schemes that maximize the energy efficiency (EE) in the presence of Markovian sources and finite-alphabet inputs subject to quality of service (QoS) constraints. First, maximum average arrival rates supported by transmitting signals with arbitrary input distributions are characterized in closed-form by employing the effective bandwidth of time-varying sources (e.g., discrete-time Markov and Markov fluid sources) and the effective capacity of the time-varying wireless channel. Subsequently, EE is defined as the ratio of the maximum average arrival rate to the total power consumption, in which circuit power is also taken into account. Following these characterizations, an optimization problem is formulated to maximize the EE of the system, and optimal power control schemes are determined. Through numerical results, the performance of the optimal power control policies is evaluated for different signal constellations and is also compared with that of constant power transmission. The impact of QoS constraints, source characteristics, and input distributions on the maximum achievable EE and the throughput is analyzed.

An Encryption Scheme based on Random Split of St-Gen Codes

Simona Samardjiska (“Ss. Cyril and Methodius” University, Skopje, Macedonia & Faculty of Computer Science and Engineering, Macedonia, the former Yugoslav Republic of); Danilo Gligoroski (Norwegian University of Science and Technology, Norway)

Staircase-Generator codes (St-Gen codes) have recently been introduced in the design of code-based public key schemes and in the design of steganographic matrix embedding schemes. In this paper we propose a method for random splitting of St-Gen codes and use it to design a new code-based public key encryption scheme. The scheme uses the known list decoding method for St-Gen codes, but introduces a novelty in the creation of the public and private key. We modify the classical approach for hiding the structure of the generator matrix by introducing a technique for splitting it into random parts. This approach counters the weaknesses found in the previous constructions of public key schemes using St-Gen codes. Our initial software implementation shows that encryption using Random Split of St-Gen Codes, compared to original St-Gen codes, is slower by a linear factor in the number of random splits of the St-Gen code, while the decryption complexity remains the same.

SAFFRON: A Fast, Efficient, and Robust Framework for Group Testing based on Sparse-Graph Codes

Kangwook Lee (University of California, Berkeley, USA); Ramtin Pedarsani (UC Berkeley, USA); Kannan Ramchandran (University of California at Berkeley, USA)

The group testing problem is to identify a population of K defective items from a set of n items by pooling groups of items. The result of a test for a group of items is positive if any of the items in the group is defective and negative otherwise. The goal is to judiciously group subsets of items such that defective items can be reliably recovered using the minimum number of tests, while also having a low-complexity decoder.
We describe SAFFRON (Sparse-grAph codes Framework For gROup testiNg), a non-adaptive group testing scheme that recovers at least a (1-ε)-fraction (for any arbitrarily small ε > 0) of K defective items with high probability with m=6C(ε)K*log_2(n) tests, where C(ε) is a precisely characterized constant that depends only on ε. For instance, it can provably recover at least (1-10^(-6))K defective items with m = 68K*log_2(n) tests. The computational complexity of the decoding algorithm is O(K*log n), which is order-optimal. Further, we describe a systematic methodology to robustify SAFFRON such that it can reliably recover the set of K defective items even in the presence of erroneous or noisy test results. We also propose Singleton-Only-SAFFRON, a variant of SAFFRON, that recovers all the K defective items with m=2e(1+ α)K*log(K)*log_2(n) tests with probability 1-O(1/K^α), where α>0 is a constant.
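The "singleton" decoding idea that SAFFRON builds on can be illustrated with a toy special case: 2·log2(n) non-adaptive OR-tests whose outcomes spell out the binary index of a lone defective item. The sketch below is a simplification for intuition only, not the paper's scheme (which uses sparse-graph codes to bin items and peel off many defectives); all function names are illustrative:

```python
import math

def singleton_tests(n):
    """Toy 'singleton' signature: for each bit position j, test 2j pools
    the items whose j-th bit is 1 and test 2j+1 pools the complement.
    With exactly one defective item, exactly one test in each pair is
    positive, and the positive pattern is the item's index in binary."""
    b = math.ceil(math.log2(n))
    return [[i for i in range(n) if ((i >> j) & 1) == bit]
            for j in range(b) for bit in (1, 0)]

def run_tests(groups, defectives):
    # OR-channel model: a test is positive iff it contains a defective item.
    return [any(i in defectives for i in g) for g in groups]

def decode_singleton(n, results):
    b = math.ceil(math.log2(n))
    idx = 0
    for j in range(b):
        pos, neg = results[2 * j], results[2 * j + 1]
        if pos == neg:          # both (or neither) positive: not a singleton
            return None
        idx |= (1 if pos else 0) << j
    return idx

groups = singleton_tests(16)
assert decode_singleton(16, run_tests(groups, {11})) == 11
```

With two or more defectives both tests in some pair typically turn positive and the decoder reports failure; SAFFRON's sparse-graph structure is what isolates singletons so this primitive can be applied iteratively.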

Rate of Prefix-free Codes in LQG Control Systems

Takashi Tanaka (KTH Royal Institute of Technology, Sweden); Karl Henrik Johansson (Royal Institute of Technology, Sweden); Tobias J. Oechtering (KTH Royal Institute of Technology & School of Electrical Engineering, EE, Sweden); Henrik Sandberg and Mikael Skoglund (KTH Royal Institute of Technology, Sweden)

In this paper, we consider a discrete time linear quadratic Gaussian (LQG) control problem in which state information of the plant is encoded in a variable-length binary codeword at every time step, and a control input is determined based on the codewords generated in the past. We derive a lower bound of the rate achievable by the class of prefix-free codes attaining the required LQG control performance. This lower bound coincides with the infimum of a certain directed information expression, and is computable by semidefinite programming (SDP). Based on a technique by Silva et al., we also provide an upper bound of the best achievable rate by constructing a controller equipped with a uniform quantizer with subtractive dither and Shannon-Fano coding. The gap between the obtained lower and upper bounds is less than $0.754r+1$ bits per time step regardless of the required LQG control performance, where $r$ is the rank of a signal-to-noise ratio matrix obtained by SDP, which is no greater than the dimension of the state.

Universal Compressed Sensing

Shirin Jalali (Bell Labs, USA); H. Vincent Poor (Princeton University, USA)

In this paper, the problem of developing universal algorithms for noiseless compressed sensing of stochastic processes is studied. First, Rényi’s notion of information dimension (ID) is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then the so-called Lagrangian minimum entropy pursuit (Lagrangian-MEP) algorithm, originally proposed by Baron et al. as a heuristic universal recovery algorithm, is studied. It is shown that, if the normalized number of randomized measurements is larger than the ID of the source process, for the right set of parameters, asymptotically, the Lagrangian-MEP algorithm recovers any stationary process satisfying some mixing constraints almost losslessly, without having any prior information about the source distribution.

A Necessary Condition for the Transmissibility of Correlated Sources over a MAC

Amos Lapidoth (ETHZ, Switzerland); Michele A Wigger (Telecom ParisTech, France)

A necessary condition for the transmissibility of correlated sources over a multi-access channel (MAC) is presented. The condition is related to Wyner’s common information and to the Slepian-Wolf capacity region of the MAC with private and common messages. An analogous condition for the transmissibility of remote sources over a MAC is also derived. Here the transmitters only observe noisy versions of the sources.

Rate-distortion dimension of stochastic processes

Farideh Ebrahim Rezagah (NYU (Alumni), USA); Shirin Jalali (Bell Labs, USA); Elza Erkip (New York University, USA); H. Vincent Poor (Princeton University, USA)

The rate-distortion dimension (RDD) of an analog stationary process is studied as a measure of complexity that captures the amount of information contained in the process. It is shown that the RDD of a process, defined as the asymptotic ratio of its rate-distortion function $R(D)$ to $\log {1\over D}$ as distortion $D$ approaches zero, is equal to its information dimension (ID). This generalizes an earlier result by Kawabata and Dembo and provides an operational approach to evaluate the ID of a process, which previously was shown to be closely related to the effective dimension of the underlying process and also to the fundamental limits of compressed sensing. The relation between RDD and ID is illustrated for a piecewise constant process.

Weakly Mutually Uncorrelated Codes

Seyed Mohammadhossein Tabatabaei Yazdi (University of Illinois at Urbana-Champaign, USA); Han Mao Kiah (Nanyang Technological University, Singapore); Olgica Milenkovic (UIUC, USA)

We introduce the notion of weakly mutually uncorrelated (WMU) sequences, motivated by applications in DNA-based storage systems and synchronization protocols. WMU sequences are characterized by the property that no sufficiently long suffix of one sequence is the prefix of the same or another sequence. In addition, WMU sequences used in DNA-based storage systems are required to have balanced compositions of symbols and to be at large mutual Hamming distance from each other. We present a number of constructions for balanced, error correcting WMU codes using Dyck paths, Knuth’s balancing principle, prefix synchronized and cyclic codes.
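The defining WMU property is easy to state operationally. A minimal checker, assuming a concrete threshold k for "sufficiently long" and considering only proper suffixes (an assumption of this sketch, not a definition from the paper), might look like:

```python
def is_wmu(seqs, k):
    """Toy check of the weakly-mutually-uncorrelated property: no proper
    suffix of length >= k of any sequence equals a prefix of the same
    or another sequence. The threshold k stands in for the paper's
    'sufficiently long'."""
    for s in seqs:
        for ell in range(k, len(s)):        # proper suffixes only
            suffix = s[-ell:]
            for t in seqs:
                if len(t) >= ell and t[:ell] == suffix:
                    return False
    return True

assert is_wmu(["ACGT", "TTAC"], 3)       # no length-3 suffix/prefix overlap
assert not is_wmu(["ACGT", "CGTA"], 3)   # suffix "CGT" is a prefix of "CGTA"
```

The balance and minimum-distance requirements mentioned in the abstract are separate constraints layered on top of this property.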

Capacity of Two-Relay Diamond Networks with Rate-Limited Links to the Relays and a Binary Adder Multiple Access Channel

Shirin Saeedi Bidokhti (Stanford University, USA); Gerhard Kramer (Technische Universität München, Germany)

A class of two-relay diamond networks is studied where the broadcast component is modelled by two independent bit-pipes and the multiple-access component is memoryless. A new upper bound is derived on the capacity which generalizes bounding techniques of Ozarow for the Gaussian multiple description problem (1981) and of Kang and Liu for the Gaussian diamond network (2011). For binary adder MACs, the upper bound establishes the capacity for all ranges of bit-pipe capacities.

Limiting eigenvalue distributions of block random matrices with one-dimensional coupling structure

Toshiyuki Tanaka (Kyoto University, Japan)

We study limiting eigenvalue distributions of block random matrix ensembles with one-dimensional coupling structure under the limit where the matrix size tends to infinity. Matrices in the ensembles have independent real symmetric random matrices of Wigner type on the diagonal blocks and a scalar multiple of the identity matrix on the blocks adjacent to the diagonal blocks. Explicit analytical formulas for the limiting eigenvalue distributions are derived for the $2\times 2$-block ensemble as well as the $3\times 3$-block circular ensemble. Further numerical results for $B\times B$-block ensembles with $B\ge3$ are also shown.

Maximal Leakage Minimization for The Shannon Cipher System

Ibrahim Issa (Cornell University, USA); Sudeep Kamath (Princeton University, USA); Aaron Wagner (Cornell University, USA)

A variation of the Shannon cipher system, in which lossy communication is allowed and performance of an encryption scheme is measured in terms of maximal leakage (recently proposed by the authors [1]), is investigated. The asymptotic behavior of normalized maximal leakage is studied, and a single-letter characterization of the optimal limit is derived. Also, asymptotically-optimal encryption schemes are given.

Blind Interference Alignment for Private Information Retrieval

Hua Sun (University of California, Irvine, USA); Syed Ali Jafar (University of California Irvine, USA)

Blind interference alignment (BIA) refers to interference alignment schemes that are designed only based on channel coherence pattern knowledge at the transmitters (the “blind” transmitters do not know the exact channel values). Private information retrieval (PIR) refers to the problem where a user retrieves one out of K messages from N non-communicating databases (each holds all K messages) without revealing anything about the identity of the desired message index to any individual database. In this paper, we identify an intriguing connection between PIR and BIA. Inspired by this connection, we characterize the information theoretic optimal download cost of PIR, when we have K = 2 messages and the number of databases, N, is arbitrary.

When Does Spatial Correlation Add Value to Delayed Channel State Information?

Alireza Vahid and Robert Calderbank (Duke University, USA)

Fast fading wireless networks with delayed knowledge of the channel state information have received significant attention in recent years. One exception is networks whose channels are spatially correlated. This paper characterizes the capacity region of two-user erasure interference channels with delayed knowledge of the channel state information and spatially correlated channels. There are instances where spatial correlation eliminates any potential gain from delayed channel state information and instances where it enables the same performance that is possible with instantaneous knowledge of channel state. The key is an extremal entropy power inequality for spatially correlated channels that separates the two types of instances. It is also shown that to achieve the capacity region, each transmitter only needs to rely on the delayed knowledge of the channels to which it is connected.

The Capacity of Some Pólya String Models

Ohad Elishco (Ben-Gurion University of the Negev, Israel); Farzad Farnoud (Hassanzadeh) (California Institute of Technology, USA); Moshe Schwartz (Ben-Gurion University of the Negev, Israel); Jehoshua Bruck (California Institute of Technology, USA)

We study random string-duplication systems, called Pólya string models, motivated by certain random mutation processes in the genome of living organisms. Unlike previous works that study the combinatorial capacity of string-duplication systems, or peripheral properties such as symbol frequency, this work provides the exact capacity, or bounds on it, for several probabilistic models. In particular, we give the exact capacity of the random tandem-duplication system and the end-duplication system, and bound the capacity of the complement tandem-duplication system. Interesting connections are drawn between the former systems and the beta distribution common in population genetics, as well as between the latter system and signatures of random permutations.

Strong Secrecy Capacity of the Wiretap Channel II with DMC Main Channel

Dan He (Xidian University, P.R. China); Yuan Luo (Shanghai Jiao Tong University, P.R. China); Ning Cai (Xidian University, P.R. China)

This paper considers an extension of wiretap channel II, where the source message W is transmitted to the legitimate receiver via a discrete memoryless main channel (DMC). Upon receiving Y^N, the receiver must recover W with small error probability. Meanwhile, an eavesdropper is able to observe an arbitrary subsequence of Y^N of size $\mu = N\alpha$, where $0 < \alpha < 1$ is a constant real number. The encoding-decoding scheme is designed to satisfy a strong secrecy criterion, i.e., the information of each block (instead of each bit) exposed to the eavesdropper is negligible or arbitrarily close to 0 when N is sufficiently large. We focus on the secrecy capacity of this model.

Optimal Sample Complexity for Stable Matrix Recovery

Yanjun Li (University of Illinois at Urbana-Champaign, USA); Kiryung Lee (Georgia Institute of Technology, USA); Yoram Bresler (University of Illinois at Urbana-Champaign, USA)

Tremendous efforts have been made to study the theoretical and algorithmic aspects of sparse recovery and low-rank matrix recovery. This paper establishes (near) optimal sample complexities for stable matrix recovery without constants or log factors. We treat sparsity, low-rankness, and other parsimonious structures within the same framework: constraint sets that have small covering numbers or Minkowski dimensions, which include notoriously challenging cases such as simultaneously sparse and low-rank matrices. We consider three types of random measurement matrices (unstructured, rank-1, and symmetric rank-1 matrices), following probability distributions that satisfy some mild conditions. In all these cases, we prove a fundamental achievability result — the recovery of matrices with parsimonious structures, using an optimal (or near optimal) number of measurements, is stable with high probability.

On Utility Optimization in Distributed Multiple Access over a Multi-packet Reception Channel

Yanru Tang, Faeze Heydaryan and Jie Luo (Colorado State University, USA)

This paper considers distributed medium access control in a wireless multiple access network with an unknown number of users. A multi-packet reception channel is assumed, in which all packets are received successfully if and only if the number of users transmitting in parallel does not exceed a known threshold. We propose a transmission adaptation approach in which, in each time slot, a user estimates the probability of channel availability from the viewpoint of a virtual user and adjusts its transmission probability according to its utility objective and a derived estimate of the number of users. A sufficient condition under which the system has a unique equilibrium is obtained. Simulation results show that the proposed medium access control algorithm helps users converge asymptotically to a near-optimal transmission probability.

On channel dispersion per unit cost

Yücel Altuğ, H. Vincent Poor and Sergio Verdú (Princeton University, USA)

The fundamental tradeoff of channel coding per unit cost in the fixed-error probability regime is investigated for discrete memoryless channels in the presence of a free input symbol. In the absence of feedback, we characterize the speed of convergence to the capacity per unit cost in terms of a characteristic of the channel and cost function, which we refer to as \emph{$\varepsilon$-dispersion per unit cost}. Further, a sufficient condition for feedback to improve this convergence speed is provided.

Private Information Retrieval from MDS Coded Data in Distributed Storage Systems

Razane Tajeddine and Salim El Rouayheb (Illinois Institute of Technology, USA)

We consider the problem of providing privacy, in the private information retrieval (PIR) sense, to users requesting data from a distributed storage system (DSS). The DSS uses a Maximum Distance Separable (MDS) code to store the data reliably on unreliable storage nodes. Among these nodes, there are a number of spy nodes who will report to a third party, such as an oppressive regime, the data being requested by a certain user. A PIR scheme ensures that a user can satisfy its request while revealing, to the spy nodes, no information on which data is being requested. A user can achieve PIR by downloading all the data in the DSS. However, this is not a feasible solution due to its high communication cost. We study information-theoretic PIR schemes with low download communication cost. When there is one spy node, we construct PIR schemes with download cost 1/(1−R) per unit of requested data (R is the code rate), achieving the information-theoretic limit for linear schemes. When there is more than one spy node, and for certain code rates, we devise PIR schemes that have download cost independent of the total size of the data in the DSS. An important property of the constructed PIR schemes is their universality: they depend on the code rate, but not on the MDS code itself.
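For intuition about how PIR is possible at all without downloading everything, the classical two-server scheme for replicated (not MDS-coded) one-bit records is a useful reference point. The sketch below is this textbook scheme, not the paper's construction:

```python
import secrets
from functools import reduce

def pir_two_server(data, i):
    """Textbook 2-server PIR over replicated one-bit records. Each server
    receives a uniformly random-looking subset of indices, so neither
    learns i on its own; XORing the two one-bit answers recovers data[i],
    since the two query sets differ exactly in index i."""
    n = len(data)
    S = {j for j in range(n) if secrets.randbits(1)}   # uniform random subset
    S_i = S ^ {i}                                      # flip membership of i
    answer = lambda Q: reduce(lambda a, j: a ^ data[j], Q, 0)
    return answer(S) ^ answer(S_i)

data = [1, 0, 1, 1, 0, 1, 0, 0]
assert all(pir_two_server(data, i) == data[i] for i in range(len(data)))
```

Each answer is a single bit, but each server still reads roughly half the database; the schemes in the abstract instead target low download cost from coded storage with colluding spy nodes.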

Super-resolution MIMO radar

Reinhard Heckel (University of California, Berkeley, USA)

A MIMO radar emits probing signals with multiple transmit antennas and records the reflections from targets with multiple receive antennas. Estimating the relative angles, delays, and Doppler shifts from the received signals makes it possible to determine the locations and velocities of the targets. Standard approaches to MIMO radar based on digital matched filtering or compressed sensing only resolve the angle-delay-Doppler triplets on a (1/(N_T N_R), 1/B, 1/T) grid, where N_T and N_R are the numbers of transmit and receive antennas, B is the bandwidth of the probing signals, and T is the length of the time interval over which the reflections are observed. In this work, we show that the continuous angle-delay-Doppler triplets and the corresponding attenuation factors can be recovered perfectly by solving a convex optimization problem. This result holds provided that the angle-delay-Doppler triplets are separated either by 10/(N_T N_R - 1) in angle, 10.01/B in delay, or 10.01/T in Doppler direction. Furthermore, this result is optimal (up to log factors) in the number of angle-delay-Doppler triplets that can be recovered.

Structure Learning and Universal Coding when Missing Values Exist

Joe Suzuki (Osaka University, Japan)

This paper considers structure learning from incomplete data with $n$ samples of $N$ variables assuming that the structure is a forest using the Chow-Liu algorithm. We construct two model selection algorithms that complete in $O(N^2)$ steps: one obtains a forest with the maximum posterior probability given the data, and the other obtains a forest that converges to the true one as $n$ increases. We show that the two forests are generally different when some values are missing. Moreover, we derive the conditional entropy given that no value is missing, and we evaluate the per-sample expected redundancy for universal coding of incomplete data in terms of the number of non-missing samples.

Variable-Length Lossy Source Coding Allowing Some Probability of Union of Overflow and Excess Distortion

Ryo Nomura (Senshu University, Japan); Hideki Yagi (University of Electro-Communications, Japan)

We consider a new concept of achievability in variable-length lossy source coding on the basis of the probability of the union of the overflow of codeword length and the excess distortion. In this setting, our main concern is to determine the achievable rate-distortion region for a given source and distortion measure. To this end, we first derive non-asymptotic upper and lower bounds, and then we derive general formulas of this achievable rate-distortion region in the first- and second-order sense. Finally, we apply our general formulas to stationary memoryless sources with an additive distortion measure.

On Layered Erasure Interference Channels without CSI at Transmitters

Yan Zhu (Northwestern University, USA); Cong Shen (University of Science and Technology of China, P.R. China)

This paper studies a layered erasure model for two-user interference channels, which can be viewed as a simplified version of the Gaussian fading interference channel. It is assumed that channel state information (CSI) is available only at the receivers, not at the transmitters. Under this assumption, an outer bound is derived for the capacity region of such interference channels. The new outer bound is tight in many circumstances. For the remaining open cases, the outer bound extends previous results.

Correlation properties of sequences from the 2-D array structure of Sidelnikov sequences of different lengths and their union

Min Kyu Song and Hong-Yeop Song (Yonsei University, Korea); Dae San Kim (Sogang University, Korea); Jang Yong Lee (The Agency for Defense Development, Korea)

In this paper, we show that the cross-correlation of two properly chosen column sequences from the array structures of two different Sidelnikov sequences of periods $q^e-1$ and $q^f-1$, where $e \neq f$, is bounded by $(e+f-1)\sqrt{q}+1$. From this result, we construct new sequence families by combining sequence families from the array structure of Sidelnikov sequences of periods $q^2-1, q^3-1, \dots, q^d-1$ for some $d$ with $2\leq d \leq \frac{1}{2}(\sqrt{q}-\frac{2}{\sqrt{q}}+1)$. The maximum non-trivial complex correlation of any pair of sequences in the constructed family is upper-bounded by $(2d-1)\sqrt{q}+1$; thus, the combining process does not affect the maximum non-trivial complex correlation.

Capacity-Achieving Rateless Polar Codes

Bin Li (Huawei Technologies, P.R. China); David Tse (Stanford University, USA); Kai Chen (Huawei Technologies Co., Ltd., P.R. China); Hui Shen (Huawei, USA)

A rateless coding scheme transmits incrementally more and more coded bits over an unknown channel until all the information bits are decoded reliably by the receiver. We propose a new rateless coding scheme based on polar codes, and we show that this scheme is capacity-achieving, i.e. its information rate is as good as the best code specifically designed for the unknown channel. Previous rateless coding schemes are designed for specific classes of channels such as AWGN channels, binary erasure channels, etc. but the proposed rateless coding scheme is capacity-achieving for broad classes of channels as long as they are ordered via degradation. Moreover, it inherits the conceptual and computational simplicity of polar codes.

Duplication-Correcting Codes for Data Storage in the DNA of Living Organisms

Siddharth Jain and Farzad Farnoud (Hassanzadeh) (California Institute of Technology, USA); Moshe Schwartz (Ben-Gurion University of the Negev, Israel); Jehoshua Bruck (California Institute of Technology, USA)

The ability to store data in the DNA of a living organism has applications in a variety of areas including synthetic biology and watermarking of patented genetically-modified organisms. Data stored in this medium is subject to errors arising from various mutations, such as point mutations, indels, and tandem duplication, which need to be corrected to maintain data integrity. In this paper, we provide error-correcting codes for errors caused by tandem duplications, which create a copy of a block of the sequence and insert it in a tandem manner, i.e., next to the original. In particular, we present a family of codes for correcting errors due to tandem duplications of a fixed length and any number of errors. We also study codes for correcting tandem duplications of length up to a given constant k, where we are primarily focused on the cases of k = 2, 3.
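The tandem-duplication error model described above is simple to state in code. A minimal sketch (illustrative, not code from the paper):

```python
def tandem_duplicate(s, k, pos):
    """Apply one tandem duplication of fixed length k at position pos:
    copy the block s[pos:pos+k] and insert the copy immediately after
    the original block, i.e., in tandem."""
    assert 0 <= pos <= len(s) - k
    return s[:pos + k] + s[pos:pos + k] + s[pos + k:]

# "AC|GT" with the length-2 block "CG" at position 1 duplicated in tandem:
assert tandem_duplicate("ACGT", 2, 1) == "ACGCGT"
```

The codes in the paper are designed so that a stored codeword remains uniquely decodable after any number of such fixed-length duplications (and after duplications of length up to a constant k, for small k).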

Coding for classical-quantum channels with rate limited side information at the encoder: An information-spectrum approach

Naqueeb Warsi (University of Oxford, United Kingdom); Justin P Coon (University of Oxford, United Kingdom)

We study the hybrid classical-quantum version of the channel coding problem for the famous Gel’fand-Pinsker channel. In the classical setting for this channel the conditional distribution of the channel output given the channel input is a function of a random parameter called the channel state. We study this problem when a rate limited version of the channel state is available at the encoder for the classical-quantum Gel’fand-Pinsker channel. We establish the capacity region for this problem in the information-spectrum setting. The capacity region is quantified in terms of spectral-sup classical mutual information rate and spectral-inf quantum mutual information rate.

Evaluating hypercontractivity parameters using information measures

Chandra Nair (Chinese University of Hong Kong, Hong Kong); Yan Nan Wang (The Chinese University of Hong Kong, Hong Kong)

We use an equivalent characterization of hypercontractive parameters using relative entropy to compute the hypercontractive region for the binary erasure channel. A similar analysis also recovers the celebrated result for the binary symmetric channel, also called the Bonami-Beckner inequality.

Broadcast Channel under Unequal Coherence Intervals

Mohamed Fadel (University of Texas at Dallas, USA); Aria Nosratinia (University of Texas, Dallas, USA)

In practical multiuser wireless networks, different links often experience unequal coherence lengths due to differences in mobility as well as in the scattering environment, a common scenario that has largely been neglected in fundamental studies of the wireless channel. A key feature of unequal coherence lengths is that the per-transmission cost of acquiring CSI, and its effect on achievable rates, may vary significantly among the nodes; thus pre-existing receive CSI at a typical node should not be assumed, as it would hide this key feature of the problem. In this paper, the method of product superposition is employed to find an achievable degrees-of-freedom region of the multiuser broadcast channel where the users' coherence lengths have arbitrary integer ratios. The achievable degrees-of-freedom region meets the outer bound when the transmitter has fewer antennas than the receivers, or when all receivers have the same number of antennas; hence, for this class of antenna configurations, the optimal degrees of freedom are now known.

On the Optimality of Zero-Forcing and Treating Interference as Noise for K-user MIMO Interference Channels

Chunhua Geng (University of California, Irvine, USA); Syed Ali Jafar (University of California Irvine, USA)

In this work, we first establish that for the class of interference channels identified by Geng et al. where treating interference as noise (TIN) is optimal from the generalized degrees-of-freedom (GDoF) perspective, if the number of antennas at each node is scaled by a common constant factor, then the GDoF region scales by the same factor almost surely, and the TIN scheme remains optimal for the entire GDoF region. Next, we demonstrate that for K-user MIMO interference channels with different antenna numbers at transmitters and receivers, there exist non-trivial parameter regimes where a simple scheme of zero-forcing the strong interference and treating the rest as noise achieves the optimal sum GDoF.

Topological Interference Management with Reconfigurable Antennas

Heecheol Yang (Seoul National University, Korea); Navid NaderiAlizadeh and Salman Avestimehr (University of Southern California, USA); Jungwoo Lee (Seoul National University, Korea)

We study the symmetric degrees-of-freedom (DoF) of partially connected interference networks under linear coding strategies at the transmitters, without channel state information beyond topology. We assume that the receivers are equipped with reconfigurable antennas that can switch among their preset modes. In such a network setting, we characterize the class of network topologies in which a linear symmetric DoF of 1/2 is achievable. Moreover, we derive a general upper bound on the linear symmetric DoF for arbitrary network topologies. We also show that this upper bound is tight if the transmitters have at most two co-interferers.

Simple algorithms and guarantees for low rank matrix completion over F_2

James Saunderson and Maryam Fazel (University of Washington, USA); Babak Hassibi (California Institute of Technology, USA)

Let X* be an n1 x n2 matrix with entries in F_2 and rank r < min(n1,n2) (often r << min(n1,n2)). We consider the problem of reconstructing X* given only a subset of its entries. This problem has recently found numerous applications, most notably in network and index coding, where finding optimal linear codes (over some field F_q) can be reduced to finding the minimum-rank completion of a matrix with a subset of revealed entries. The problem of matrix completion over the reals also has many applications, and in recent years several polynomial-time algorithms with provable recovery guarantees have been developed. However, to date, such algorithms do not exist in the finite-field case. We propose a linear-algebraic algorithm, based on inferring low-weight relations among the rows and columns of X*, to attempt to complete X* given a random subset of its entries. We establish conditions on the row and column spaces of X* under which the algorithm runs in polynomial time (in the size of X*) and can successfully complete X* with high probability from a vanishing fraction of its entries. We then propose a linear programming-based extension of our basic algorithm, and evaluate it empirically.
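The core idea of inferring low-weight relations can be sketched in a few lines (this is our toy illustration of the linear-algebraic principle, not the authors' algorithm; the matrix and the relation are ours): if a low-weight F_2 relation among the rows is consistent with all revealed entries, any missing entry covered by the relation follows for free.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over F_2 via Gaussian elimination (XOR row ops)."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

# Toy rank-2 matrix over F_2: row 2 = row 0 XOR row 1 (a weight-3 row relation).
X = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])

# Suppose entry (2, 3) were unobserved.  Once the relation r0 + r1 + r2 = 0
# is inferred from the revealed entries, the missing entry is determined:
predicted = X[0, 3] ^ X[1, 3]
```
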

Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes

Venkatesan Guruswami and Ray Li (Carnegie Mellon University, USA)

This work constructs codes that are efficiently decodable from a constant fraction of \emph{worst-case} insertion and deletion errors in three parameter settings: (i) Binary codes with rate approaching 1; (ii) Codes with constant rate for error fraction approaching 1 over fixed alphabet size; and (iii) Constant rate codes over an alphabet of size $k$ for error fraction approaching $(k-1)/(k+1)$. When errors are constrained to deletions alone, efficiently decodable codes in each of these regimes were constructed recently. We complete the picture by constructing similar codes that are efficiently decodable in the insertion/deletion regime.
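The constructions themselves are beyond an abstract, but the baseline they make efficient, decoding to the codeword at minimum edit (insertion/deletion/substitution) distance, can be sketched with a brute-force decoder over a toy codebook (the codebook and inputs are our illustrative choices):

```python
def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

codebook = ["0000000", "1111111"]  # toy 1-bit codebook, not the paper's construction

def decode(received):
    """Brute-force minimum-edit-distance decoding (exponential in general)."""
    return min(codebook, key=lambda c: levenshtein(received, c))
```

The point of the paper is that carefully constructed codes admit decoders that avoid this exponential codebook search while tolerating a constant fraction of worst-case insertions and deletions.
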

Fundamental Tradeoff between Computation and Communication in Distributed Computing

Songze Li (University of Southern California, USA); Mohammad Ali Maddah-Ali (Bell Labs, Alcatel Lucent, USA); Salman Avestimehr (University of Southern California, USA)

We introduce a general distributed computing framework, motivated by commonly used structures like MapReduce, and formulate an information-theoretic tradeoff between computation and communication in such a framework. We characterize the optimal tradeoff to within a constant factor, for all system parameters. In particular, we propose a coded scheme, namely “Coded MapReduce” (CMR), which creates and exploits coding opportunities in data shuffling for distributed computing, reducing the communication load by a factor that is linearly proportional to the computation load. We then prove a lower bound on the minimum communication load, and demonstrate that CMR achieves this lower bound to within a constant factor. This result reveals a fundamental connection between computation and communication in distributed computing – the two are inverse-linearly proportional to each other.
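The inverse-linear tradeoff can be made concrete with the load expressions usually quoted for this framework; treat the exact formulas and the value K = 10 as our assumptions for illustration, not the paper's statement. With computation load r (each file mapped at r nodes), the uncoded shuffle load 1 - r/K shrinks by a factor equal to r:

```python
K = 10  # hypothetical number of servers
loads = {}
for r in range(1, K + 1):
    uncoded = 1 - r / K   # fraction of intermediate values a node still needs
    coded = uncoded / r   # coded shuffling: multicast gain equal to the computation load r
    loads[r] = (uncoded, coded)
```

Doubling the computation load r roughly halves the communication load, which is the inverse-linear relationship described in the abstract.
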

Equivalent characterization of reverse Brascamp-Lieb-type inequalities using information measures

Salman Beigi (Institute for Research in Fundamental Sciences, Iran); Chandra Nair (Chinese University of Hong Kong, Hong Kong)

We derive an equivalent characterization, using information measures, for a class of reverse Brascamp-Lieb type inequalities. These inequalities contain, in particular, the family of reverse hypercontractive inequalities.

Distribution of First Arrival Position in Molecular Communication

Yen-Chi Lee, Chiun-Chuan Chen and Ping-Cheng Yeh (National Taiwan University, Taiwan); Chia-Han Lee (Academia Sinica, Taiwan)

In molecular communication systems, information is conveyed via nanoscale particles or molecules. In diffusion-based molecular communication systems, where nanoscale particles or molecules diffuse from the transmitter to the receiver, system design and evaluation have traditionally relied on the distribution of the first arrival time at the receiver. In this paper, we consider an additional source of information in the diffusion-based molecular communication system, namely the first arrival position at the receiver. A mathematical framework is developed to obtain the closed-form density function of the first arrival position for particles/molecules diffusing under constant net drift. The derived density function not only provides a novel analytical framework for existing molecular communication systems but may also inspire novel molecular communication system designs.

On the capacity of the chemical channel with feedback

Jui Wu and Achilleas Anastasopoulos (University of Michigan, USA)

The trapdoor channel is a binary input/output/state channel with state changing deterministically as the modulo-2 sum of the current input, output and state. At each state, it behaves as one of two Z channels, each with crossover probability 1/2.
Permuter et al. formulated the problem of finding the capacity of the trapdoor channel with feedback as a stochastic control problem. By solving the corresponding Bellman fixed-point equation, they showed that the capacity equals $C=\log\frac{1+\sqrt{5}}{2}$.

In this paper, we consider the chemical channel, which is a generalization of the trapdoor channel, whereby at each state the corresponding Z channel has crossover probability $p$. We characterize the capacity of this problem as the solution of a Bellman fixed-point equation corresponding to a Markov decision process (MDP).
Numerical solution of this fixed-point equation reveals an unexpected behavior, that is, for a range of crossover probabilities, the capacity seems to be constant.
Our main contribution is to formalize and prove this observation. In particular, by explicitly solving the Bellman equation, we show the existence of an interval $[0.5,p^*]$ over which the capacity remains constant.
To the authors’ knowledge, this is the only known channel for which such behavior is observed.
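The channel mechanics and the trapdoor-channel capacity value are easy to reproduce; the sketch below is our illustration of the model as described above (one ball stored, crossover p when input and state disagree), not the paper's MDP solution:

```python
import math
import random

def chemical_step(x, s, p, rng):
    """One use of the chemical (generalized trapdoor) channel.
    If the input matches the state, the output is deterministic; otherwise the
    stored ball is emitted with probability p (the Z-channel crossover)."""
    if x == s:
        y = x
    else:
        y = s if rng.random() < p else x
    s_next = x ^ y ^ s   # deterministic state update: modulo-2 sum of input, output, state
    return y, s_next

rng = random.Random(0)
# Empirical check of the Z-channel behaviour at p = 1/2, state s = 0:
flips = sum(chemical_step(1, 0, 0.5, rng)[0] == 0 for _ in range(20000)) / 20000

# Feedback capacity of the trapdoor channel (p = 1/2), per Permuter et al.:
C = math.log2((1 + math.sqrt(5)) / 2)   # log of the golden ratio, about 0.694 bits
```
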

Orbit-Entropy Cones and Extremal Pairwise Orbit-Entropy Inequalities

Jun Chen (McMaster University, Canada); Amir Salimi and Tie Liu (Texas A&M University, USA); Chao Tian (The University of Tennessee Knoxville, USA)

The notion of orbit-entropy cone is introduced. Specifically, the orbit-entropy cone $P_G\overline{\Gamma}^*_n$ is the projection of $\overline{\Gamma}^*_n$ induced by $G$, where $\overline{\Gamma}^*_n$ is the closure of the entropy region for $n$ random variables and $G$ is a permutation group over $\{0, 1,\cdots,n-1\}$. For the symmetric group $S_n$ (with arbitrary $n$) and the cyclic group $C_n$ (with $n\leq 5$), the associated orbit-entropy cones are shown to be characterized by Shannon-type inequalities. Moreover, the extremal pairwise relationship between orbit-entropies is determined completely for partitioned symmetric groups and partially for cyclic groups.

Minimax Lower Bounds for Linear Independence Testing

David Isenberg (Carnegie Mellon University, USA); Aaditya Ramdas (University of California, Berkeley, USA); Aarti Singh and Larry Wasserman (Carnegie Mellon University, USA)

Linear independence testing is a fundamental information-theoretic and statistical problem that can be posed as follows: given $n$ points $\{(X_i,Y_i)\}^n_{i=1}$ from a $p+q$ dimensional multivariate distribution where $X_i \in \mathbb{R}^p$ and $Y_i \in\mathbb{R}^q$, determine whether $a^T X$ and $b^T Y$ are uncorrelated for every $a \in \mathbb{R}^p, b\in \mathbb{R}^q$ or not. We give a minimax lower bound for this problem (as $p+q, n \to \infty$ with $(p+q)/n \leq \kappa < \infty$, without sparsity assumptions). In summary, our results imply that $n$ must be at least as large as $\sqrt {pq}/\|\Sigma_{XY}\|_F^2$ for any procedure (test) to have non-trivial power, where $\Sigma_{XY}$ is the cross-covariance matrix of $X,Y$.
We also provide evidence that the lower bound is tight, by connections to two-sample testing and regression when $q=1$.
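The natural test statistic behind such bounds, the Frobenius norm of the empirical cross-covariance matrix, is easy to compute; the sketch below (our illustration, with arbitrary dimensions and a contrived dependence) shows it separating dependent from independent data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 2000, 3, 3

def cross_cov_frobenius(X, Y):
    """Frobenius norm of the empirical cross-covariance matrix of X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sigma_xy = Xc.T @ Yc / len(X)
    return np.linalg.norm(Sigma_xy, 'fro')

X = rng.standard_normal((n, p))
Y_dep = X + 0.1 * rng.standard_normal((n, q))   # strongly correlated with X
Y_ind = rng.standard_normal((n, q))             # independent of X

stat_dep = cross_cov_frobenius(X, Y_dep)   # near sqrt(3), since Sigma_XY is near I
stat_ind = cross_cov_frobenius(X, Y_ind)   # near zero, order sqrt(pq/n)
```
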

Channel polarization and Blackwell measures

Maxim Raginsky (University of Illinois at Urbana-Champaign, USA)

The Blackwell measure of a binary-input channel (BIC) is the distribution of the posterior probability of 0 under the uniform input distribution. This paper gives an explicit characterization of the evolution of the Blackwell measure of an arbitrary symmetric BIC under Arikan’s polarization transform, and uses this characterization to provide a unifying set of techniques for studying the polarization phenomenon.
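For the BSC the relevant objects are concrete: the Blackwell measure of BSC($\delta$) places mass 1/2 at each of the posteriors $\delta$ and $1-\delta$, and the minus polar transform of BSC($\delta$) is BSC($2\delta(1-\delta)$). The numerical check below uses only these standard polarization facts (our sketch, not the paper's measure-theoretic machinery):

```python
import math

def h2(x):
    """Binary entropy function in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

delta = 0.1
# Blackwell measure of BSC(delta): point masses of 1/2 at delta and at 1 - delta
# (the posterior probability of input 0 under uniform inputs).
I_W = 1 - h2(delta)                        # capacity of BSC(delta)
I_minus = 1 - h2(2 * delta * (1 - delta))  # minus transform is BSC(2*delta*(1-delta))
I_plus = 2 * I_W - I_minus                 # conservation of mutual information
```

The inequalities I_minus < I_W < I_plus are exactly the polarization phenomenon the abstract refers to: the two transformed channels are strictly worse and strictly better than the original.
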

Zero-rate achievability of posterior matching schemes for channels with memory

Jui Wu and Achilleas Anastasopoulos (University of Michigan, USA)

Shayevitz and Feder proposed a capacity-achieving sequential transmission scheme for memoryless channels called posterior matching (PM). The proof of capacity achievability of PM is involved and requires invertibility of the PM kernel (also referred to as one-step invertibility). Recent work by the same authors provided a simpler proof but still requires PM kernel invertibility.
An alternative technique for proving capacity achievability of PM for memoryless channels is due to Ma and Coleman and is based on connections between PM and non-linear filtering. Central to this technique is a different notion of “invertibility”, which refers to the property of PM that allows recovering the original message $W$ from knowledge of the observations and the value of the posterior message cdf evaluated at the true message. Unfortunately, this property is also proved by appealing to PM kernel invertibility. As a result, none of these techniques readily generalizes to channels with non-invertible PM kernels.
In this paper we analyze PM schemes for channels with memory and in particular unifilar channels with output feedback and intersymbol-interference (ISI) finite state channels with state and output feedback. We follow closely the alternative technique of Ma and Coleman and focus on zero-rate achievability which is the first step of the proof. Our main technical contribution is to show that “invertibility” can be obtained without requiring an invertible PM kernel. This is an indispensable property since in channels with memory the PM kernel is not invertible.

Cyclically Symmetric Entropy Inequalities

Jun Chen (McMaster University, Canada); Hao Ye and Chao Tian (The University of Tennessee Knoxville, USA); Tie Liu (Texas A&M University, USA); Zhiqing Xiao (Tsinghua University, P.R. China)

A cyclically symmetric entropy inequality is of the form $\hbar_{\mathcal{O}}\geq \bar{c}\hbar_{\mathcal{O}'}$, where $\hbar_{\mathcal{O}}$ and $\hbar_{\mathcal{O}'}$ are two cyclic orbit entropy terms. A computational approach is formulated for bounding the extremal value of $\bar{c}$, which is denoted by $\bar{c}_{\mathcal{O},\mathcal{O}'}$. For two non-empty orbits $\mathcal{O}$ and $\mathcal{O}'$ of a cyclic group, it is said that $\mathcal{O}$ dominates $\mathcal{O}'$ if $\bar{c}_{\mathcal{O},\mathcal{O}'}=1$. Special attention is paid to characterizing such dominance relationships, and a graphical method is developed for that purpose.

A New Type Size Code for Universal One-to-One Compression of Parametric Sources

Nematollah Iri and Oliver Kosut (Arizona State University, USA)

We consider universal source coding of an exponential family of i.i.d. distributions for short blocklengths. We present a variation of the previously introduced Type Size code, in which type classes are characterized based on the neighborhoods of the minimal sufficient statistics. We show that there is no loss in dispersion compared to the non-universal setup, and we identify the third-order coding rate of this variation of the Type Size code for compression of such parametric sources.

Construction of Full-Diversity 1-Level LDPC Lattices for Block-Fading Channels

Hassan Khodaiemehr and Mohammad-Reza Sadeghi (Amirkabir University of Technology, Iran); Daniel Panario (Carleton University, Canada)

LDPC lattices were the first family of lattices to admit an efficient decoding algorithm in high dimensions over the AWGN channel. When Construction D' is applied with a single binary LDPC code as the underlying code, 1-level LDPC lattices are obtained. The block-fading (BF) channel is a useful model for various wireless communication channels in both indoor and outdoor environments. In this type of channel, a lattice point is divided into multiple blocks such that fading is constant within a block but changes, independently, across blocks. The design of lattices for BF channels is a challenging problem that differs greatly from its AWGN counterpart. In this paper we construct full-diversity 1-level LDPC lattices for block-fading channels, and we propose a new iterative decoding method for this family of lattices whose complexity grows linearly in the dimension of the lattice.

Unequal Error Protection Coding Approaches to the Noisy 20 Questions Problem

Hye Won Chung (University of Michigan, USA); Lizhong Zheng (Massachusetts Institute of Technology, USA); Brian Sadler (Army Research Laboratory, USA); Alfred Hero III (University of Michigan, USA)

In this paper, we propose an unequal error protection coding strategy based on superposition coding for the noisy 20 questions problem.
In this problem, a learner wishes to successively refine an estimate of the value of a continuous random variable by posing binary queries and receiving noisy responses.
When the queries are designed non-adaptively as a single block and the noisy responses are modeled as the output of a binary symmetric channel, the 20 questions problem can be mapped to an equivalent problem of channel coding with unequal error protection (UEP). A superposition coding strategy with UEP is introduced that has an error exponent significantly better than that of the UEP repetition code introduced by Variani et al.

On the quantum no-signalling assisted zero-error classical simulation cost of non-commutative bipartite graphs

Xin Wang (University of Technology Sydney, Australia); Runyao Duan (University of Technology, Australia)

Using one channel to simulate another exactly with the aid of quantum no-signalling correlations has been studied recently. The one-shot no-signalling assisted classical zero-error simulation cost of non-commutative bipartite graphs has been formulated as a semidefinite program [Duan and Winter, IEEE Trans. Inf. Theory 62, 891 (2016)]. Before our work, it was unknown whether the one-shot (or asymptotic) no-signalling assisted zero-error classical simulation cost for general non-commutative graphs is multiplicative (resp. additive). In this paper we address these issues and give a general sufficient condition for the multiplicativity of the one-shot simulation cost and the additivity of the asymptotic simulation cost of non-commutative bipartite graphs, which includes all known cases such as extremal graphs and classical-quantum graphs. Applying this condition, we exhibit a large class of so-called \emph{cheapest-full-rank graphs} whose asymptotic zero-error simulation cost is given by the one-shot simulation cost. Finally, we disprove the multiplicativity of the one-shot simulation cost by explicitly constructing a special class of qubit-qutrit non-commutative bipartite graphs.

The Replica-Symmetric Prediction for Compressed Sensing with Gaussian Matrices is Exact

Galen Reeves and Henry D Pfister (Duke University, USA)

This paper considers the fundamental limit of compressed sensing for i.i.d. signal distributions and i.i.d. Gaussian measurement matrices. Its main contribution is a rigorous characterization of the asymptotic mutual information (MI) and minimum mean-square error (MMSE) in this setting. Under mild technical conditions, our results show that the limiting MI and MMSE are equal to the values predicted by the replica method from statistical physics. This resolves a well-known problem that had remained open for over a decade.

An Explicit Rate Bound for the Over-Relaxed ADMM

Guilherme Franca and Jose Bento (Boston College, USA)

The framework of Integral Quadratic Constraints of Lessard et al. (2014) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to semi-definite programming (SDP). Follow-up work by Nishihara et al. (2015) applies this technique to the entire family of over-relaxed Alternating Direction Method of Multipliers (ADMM). Unfortunately, they only provide an explicit error bound for sufficiently large values of some of the parameters of the problem, leaving the computation for the general case as a numerical optimization problem. In this paper we provide an exact analytical solution to this SDP and obtain a general and explicit upper bound on the convergence rate of the entire family of over-relaxed ADMM. Furthermore, we demonstrate that it is not possible to extract from this SDP a general bound better than ours. We end with a few numerical illustrations of our result and a comparison between the convergence rate we obtain for ADMM and known convergence rates for Gradient Descent (GD).

Optimizing Data Freshness, Throughput, and Delay in Multi-Server Information-Update Systems

Ahmed M Bedewy (The Ohio State University, USA); Yin Sun (the Ohio State University, USA); Ness B. Shroff (The Ohio State University, USA)

In this work, we investigate the design of information-update systems, where incoming update packets are forwarded to a remote destination through multiple servers (each server can be viewed as a wireless channel). One important performance metric of these systems is the data freshness at the destination, also called the age-of-information or simply age, which is defined as the time elapsed since the freshest packet at the destination was generated. Recent studies on information-update systems have shown that the age-of-information can be reduced by intelligently dropping stale packets. However, packet dropping may not be appropriate in many applications, such as news and social updates, where users are interested in not just the latest updates, but also past news. Therefore, all packets may need to be successfully delivered. In this paper, we study how to optimize the age-of-information without throughput loss. We consider a general scenario where incoming update packets do not necessarily arrive in the order of their generation times. We prove that a preemptive Last Generated First Served (LGFS) policy simultaneously optimizes the age, throughput, and delay performance in infinite-buffer queueing systems. We also show that the LGFS policy is age-optimal for any finite queue size. These results hold for arbitrary, including non-stationary, arrival processes.
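The advantage of serving the last-generated packet first can be seen on a fixed hypothetical trace (our illustration; the numbers and single-server setup are not from the paper). Age at time t is t minus the freshest generation time delivered by t, so delivering the fresher packet first lowers the age sooner:

```python
def age_at(t, deliveries):
    """Age at time t: t minus the freshest generation time delivered by time t."""
    freshest = max((g for d, g in deliveries if d <= t), default=0.0)
    return t - freshest

# Hypothetical trace: packet A (generated 1.0, arrives 1.6) and packet B
# (generated 1.5, arrives 2.0) are both queued while the server is busy
# until t = 2.5; each service takes 1.0 time units.
fcfs = [(3.5, 1.0), (4.5, 1.5)]   # serve in arrival order: A first, then B
lgfs = [(3.5, 1.5), (4.5, 1.0)]   # serve last-generated first: B first, then A

age_fcfs = age_at(3.6, fcfs)      # 3.6 - 1.0 = 2.6
age_lgfs = age_at(3.6, lgfs)      # 3.6 - 1.5 = 2.1
```
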

Highly Sensitive Universal Statistical Test

Hirosuke Yamamoto and Qiqiang Liu (The University of Tokyo, Japan)

In Maurer’s universal statistical test and its variations, including Coron’s test, for checking the randomness of a binary sequence, the entropy of the sequence is calculated from the repetition intervals of L-grams in the sequence, and randomness is evaluated based on whether or not the entropy attains its maximum. However, since the derivative of the entropy is zero at the maximum, deviations from the maximum cannot be detected sensitively. In this paper, we propose a new universal statistical test, in which a given sequence is converted into the most sensitive one by randomly changing bits '1' to '0' in the sequence. By simulation, we show that the proposed universal statistical test can detect non-randomness much more sensitively than Maurer's test, Coron's test, and the T-test.
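The baseline statistic these tests build on, the average log of the gap since each L-gram's previous occurrence, can be sketched as follows (our simplified illustration of a Maurer-style statistic, not the authors' proposed test; the parameters are arbitrary):

```python
import math
import random

def maurer_statistic(bits, L=4, Q=None):
    """Average log2 of the gap since each L-gram's previous occurrence,
    the quantity Maurer-type tests compare against its random-sequence mean."""
    blocks = [tuple(bits[i * L:(i + 1) * L]) for i in range(len(bits) // L)]
    Q = Q if Q is not None else 10 * 2 ** L          # initialization segment
    last = {b: i for i, b in enumerate(blocks[:Q])}  # last occurrence so far
    total, K = 0.0, 0
    for i in range(Q, len(blocks)):
        if blocks[i] in last:
            total += math.log2(i - last[blocks[i]])
            K += 1
        last[blocks[i]] = i
    return total / K

rng = random.Random(1)
uniform = [int(rng.random() < 0.5) for _ in range(60000)]  # unbiased bits
biased = [int(rng.random() < 0.8) for _ in range(60000)]   # biased bits
f_uniform = maurer_statistic(uniform)  # near Maurer's tabulated mean for L = 4
f_biased = maurer_statistic(biased)    # noticeably smaller for biased input
```
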

The Stochastic-Calculus Approach to Multi-Receiver Poisson Channels

Nirmal V Shende and Aaron Wagner (Cornell University, USA)

We study two-receiver Poisson channels using tools derived from stochastic calculus. We compute necessary and sufficient conditions under which the continuous-time, continuous-space Poisson channel is less noisy and more capable, which turn out to be distinct from the conditions under which the “sampled” channel is less noisy and more capable. We also determine the capacity region of the more capable Poisson broadcast channel with independent message sets, the more capable Poisson wiretap channel, and the general two-decoder Poisson broadcast channel with degraded message sets.

Information stabilization of images over discrete memoryless channels

Eric Graves (Army Research Lab, USA); Tan Wong (University of Florida, USA)

This paper investigates the problem of information stabilization of the images of source sets over discrete memoryless channels (DMCs). It is shown that if the minimum image cardinality of a source set over a DMC has a specific entropy characterization, then the image of this source set will be information stable. In many applications, this requirement on the source set can be satisfied using the method of equal-image-size source partitioning. A construction of a strong secrecy subcode from a weak secrecy code for the wiretap channel is provided as an example to illustrate the use of the information stabilization technique.

Spider Codes: Practical Erasure Codes for Distributed Storage Systems

Lluis Pamies-Juarez, Cyril Guyot and Robert Mateescu (WD Research, USA)

Distributed storage systems use erasure codes to reliably store data with a small storage overhead. To further improve system performance, some novel erasure codes introduce new features such as the regenerating property or symbol locality, enabling these codes to achieve optimal repair times and optimal degraded read performance. Unfortunately, the introduction of these new features often degrades other system metrics such as encoding throughput, data reliability, and storage overhead, among others. In this paper we describe the intricate relationships between erasure code properties and system-level performance metrics, showing the different trade-offs distributed storage designers need to face. We also present Spider Codes, a new erasure code achieving a practical trade-off between the different system-level performance metrics.

Information Rates of Sampled Wiener Processes

Alon Kipnis (Stanford University, USA); Yonina C. Eldar (Technion-Israel Institute of Technology, Israel); Andrea Goldsmith (Stanford University, USA)

The minimal distortion attainable in recovering the waveform of a continuous-time Wiener process from an encoded version of its uniform samples is considered. We first introduce a combined sampling and source coding problem and prove an associated source coding theorem. We then derive an upper bound on the minimal distortion attainable under any sampling rate and a prescribed number of bits to encode the samples. We show that this bound is accurate to within a second-order term in the sampling rate, and converges to the true distortion-rate function of the Wiener process as the sampling rate goes to infinity. For example, this bound implies that by providing a single bit per sample it is possible to achieve the optimal distortion-rate performance of the Wiener process, given by its distortion-rate function, to within a factor of $1.5$. We conclude that the distortion-rate function of the Wiener process is strictly smaller than the indirect distortion-rate function obtained from its uniform samples at any finite sampling rate. This is in contrast to stationary infinite-bandwidth processes.

An Extended Tanner Graph Approach to Decoding LDPC Codes over Decode-and-Forward Relay Channels

Bin Qian (Hong Kong University of Science and Technology, Hong Kong); Wai Ho Mow (Hong Kong University of Science and Technology & HKUST, Hong Kong)

This paper reexamines the decoding problem for LDPC-coded relay channels employing the decode-and-forward protocol. The conventional MRC-based decoder at the destination typically assumes an error-free decoding process at the relay. In practice, however, unsuccessful decoding at the relay is unavoidable, and the resultant error propagation may become a performance bottleneck. In this paper, we propose a new Tanner graph error representation to accurately characterize the relay decoding errors. Based on the proposed error representation, we derive a new message passing decoder which is implemented on an extended Tanner graph of the system. It is empirically verified that the new decoder can outperform existing decoders and achieves a promising performance improvement in the scenario of symmetric links. Moreover, the proposed error representation allows us to conduct density evolution to analyze the threshold performance of the new decoder for LDPC-coded relay channels.

A Blind Matching Algorithm for Cognitive Radio Networks

Doha Hamza Mohamed (KAUST, Saudi Arabia); Jeff Shamma (King Abdullah University of Science and Technology (KAUST) & Georgia Institute of Technology, Saudi Arabia)

We consider a cognitive radio network where secondary users (SUs) are allowed access time to the spectrum belonging to the primary users (PUs) provided that they relay primary messages. PUs and SUs negotiate over allocations of the secondary power that will be used to relay PU data. We formulate the problem as a generalized assignment market to find a pairwise-stable matching. We propose a distributed blind matching algorithm (BLMA) to produce the pairwise-stable matching plus the associated power allocations. We stipulate a limited information exchange in the network so that agents only calculate their own utilities but no information is available about the utilities of any other users in the network. We establish convergence to pairwise-stable matchings in finite time. Finally we show that our algorithm exhibits a limited degradation in PU utility when compared with the Pareto optimal results attained using perfect information assumptions.

An Achievable Rate Region for the Two-Way Multiple Relay Channel

Jonathan Ponniah (Texas A&M, USA); Liang-Liang Xie (University of Waterloo, Canada)

We propose an achievable rate-region for the two-way multiple-relay channel using decode-and-forward block Markovian coding. We identify a conflict between the information flow in both directions. This conflict leads to an intractable number of decode-forward schemes and achievable rate regions, none of which are universally better than the others. We introduce a new concept in decode-forward coding called ranking, and discover that there is an underlying structure to all of these rate regions expressed in the rank assignment. Through this discovery, we characterize the complete achievable rate region that includes all of the rate regions corresponding to the particular decode-forward schemes. This rate region is an extension of existing results for the two-way one-relay channel and the two-way two-relay channel.

Channel Coding for Wireless Communication via Electromagnetic Polarization

Xiaobin Wu, Thomas E Fuja and Thomas Pratt (University of Notre Dame, USA)

This paper investigates fundamental properties of polarization-based modulation for wireless communication — and in particular the application of channel coding techniques to such systems. After developing appropriate channel models, bounds on achievable rates are computed, and the performance of exemplary LDPC codes is simulated; this is done for both additive white Gaussian noise channels and channels subject to i.i.d. Rayleigh fading. A novel “on/off” modulation scheme is developed that adaptively changes the information-bearing polarization states based on the singular value decomposition (SVD) of the realized channel; this scheme is shown to significantly outperform fixed-constellation schemes as well as adaptive-constellation schemes employing equal-energy signals.

On Storage Allocation for Maximum Service Rate in Distributed Storage Systems

Moslem Noori (University of Alberta, Canada); Emina Soljanin (Rutgers University, USA); Masoud Ardakani (University of Alberta, Canada)

Storage allocation affects important performance measures of distributed storage systems. Most previous studies on storage allocation consider its effect either on the success of data recovery or on the service rate (time), where it is assumed that no access failure happens in the system. In this paper, we go one step further and incorporate the access model and the success of data recovery into the service rate analysis. In particular, we focus on quasi-uniform storage allocation and provide a service rate analysis for both fixed-size and probabilistic access models at the nodes. Using this analysis, we then show that for the case of exponential waiting-time distributions at individual storage nodes, minimal spreading allocation results in the highest system service rate for both access models. This means that for a given storage budget, replication provides a better service rate than a coded storage solution.

Sequential Necessary and Sufficient Conditions for Optimal Channel Input Distributions of Channels with Memory and Feedback

Photios A. Stavrou, Charalambos D Charalambous and Christos K Kourtellaris (University of Cyprus, Cyprus)

We derive sequential necessary and sufficient conditions for any channel input distribution ${\cal P}_{0,n}\triangleq\{P_{X_t|X^{t-1},Y^{t-1}}:~t=0,1,\ldots,n\}$ to maximize the directed information $I(X^n\rightarrow{Y^n})\triangleq\sum_{t=0}^n{I}(X^t;Y_t|Y^{t-1})$ for channel distributions of the form $\{P_{Y_t|Y_{t-M}^{t-1},X^t}:~t=0,1,\ldots,n\}$, where $X^n\triangleq\{X_0,\ldots,X_n\}$ and $Y^n\triangleq\{Y_0,\ldots,Y_n\}$ are the channel input and output random processes. The methodology utilizes the information structures of optimal channel input distributions and the corresponding Finite Transmission Feedback Information (FTFI) capacity derived in \cite{kourtellaris-charalambous2015aieeeit}, certain functional properties of directed information, and standard dynamic programming arguments. The result is applied to a specific example with unit memory to derive recursive closed-form expressions for the optimal (nonstationary) distributions which achieve the FTFI capacity. A numerical example is presented to demonstrate our concepts.
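As a sanity-check illustration of the quantity being maximized, directed information for a toy binary unit-memory channel can be computed by brute-force enumeration using the identity $I(X^t;Y_t|Y^{t-1}) = H(Y_t|Y^{t-1}) - H(Y_t|X^t,Y^{t-1})$. The kernel numbers and the i.i.d. uniform input below are our arbitrary choices, not the paper's capacity-achieving distribution:

```python
import itertools
import math

def p_y1(y_prev, x):
    """Illustrative unit-memory kernel P(y_t = 1 | y_{t-1}, x_t); made-up numbers."""
    return 0.9 if x == 1 else (0.3 if y_prev == 1 else 0.1)

def kernel(y, y_prev, x):
    p = p_y1(y_prev, x)
    return p if y == 1 else 1 - p

# Joint distribution over (x0, x1, y0, y1) for two channel uses, uniform inputs.
joint = {}
for x0, x1, y0, y1 in itertools.product([0, 1], repeat=4):
    joint[(x0, x1, y0, y1)] = 0.25 * kernel(y0, 0, x0) * kernel(y1, y0, x1)

def H_cond(dist, target, cond):
    """Conditional entropy H(V_target | V_cond); index lists into (x0, x1, y0, y1)."""
    pj, pc = {}, {}
    for k, p in dist.items():
        tj = tuple(k[i] for i in target + cond)
        pj[tj] = pj.get(tj, 0.0) + p
        pc[tj[len(target):]] = pc.get(tj[len(target):], 0.0) + p
    return -sum(p * math.log2(p / pc[tj[len(target):]])
                for tj, p in pj.items() if p > 0)

# I(X^n -> Y^n) = sum_t [ H(Y_t | Y^{t-1}) - H(Y_t | X^t, Y^{t-1}) ]
di = ((H_cond(joint, [2], []) - H_cond(joint, [2], [0])) +
      (H_cond(joint, [3], [2]) - H_cond(joint, [3], [0, 1, 2])))
```
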

Design of Geometric Molecular Bonds

David Doty (University of California, Davis, USA); Andrew Winslow (Université Libre de Bruxelles, Belgium)

An example of a *nonspecific* molecular bond is the affinity of any positive charge for any negative charge (like-unlike), or of nonpolar material for itself when in aqueous solution (like-like). This contrasts *specific* bonds such as the affinity of the DNA base A for T, but not for C, G, or another A. Recent experimental breakthroughs in DNA nanotechnology demonstrate that a particular nonspecific like-like bond (“blunt-end DNA stacking” that occurs between the ends of any pair of DNA double-helices) can be used to create specific “macrobonds” by careful geometric arrangement of many nonspecific blunt ends, motivating the need for sets of macrobonds that are *orthogonal*: two macrobonds not intended to bind should have relatively low binding strength, even when misaligned.

To address this need, we introduce *geometric orthogonal codes* that abstractly model the engineered DNA macrobonds as two-dimensional binary codewords. While motivated by completely different applications, geometric orthogonal codes share features with the *optical orthogonal codes* studied by Chung, Salehi, and Wei. The main technical difference is the importance of 2D geometry in defining codeword orthogonality.
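As a toy illustration of the orthogonality notion (assumed here to mean a low count of coincident 1s under translation, which only loosely mirrors the paper's definition), one can measure the worst-case overlap of two 2D binary codewords over all integer shifts:

```python
def max_offset_overlap(A, B):
    """Maximum number of coincident 1s between two 2-D binary codewords
    (lists of 0/1 rows) over all integer translations of B."""
    pa = {(r, c) for r, row in enumerate(A) for c, v in enumerate(row) if v}
    pb = {(r, c) for r, row in enumerate(B) for c, v in enumerate(row) if v}
    best = 0
    # Only translations aligning some 1 of B onto some 1 of A can overlap.
    for (ra, ca) in pa:
        for (rb, cb) in pb:
            dr, dc = ra - rb, ca - cb
            overlap = sum(1 for (r, c) in pb if (r + dr, c + dc) in pa)
            best = max(best, overlap)
    return best
```

Two macrobonds would count as orthogonal in this toy sense when their cross-overlap stays well below each codeword's self-overlap at zero shift.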

Two-stage Orthogonal Subspace Matching Pursuit for Joint Sparse Recovery

Kyung Su Kim (Korea Advanced Institute of Science and Technology, Korea); Sae-Young Chung (KAIST, Korea)

The joint sparse recovery problem addresses the simultaneous recovery of jointly sparse signals (a signal matrix) and their union support, whose cardinality is k, from multiple measurement vectors (MMV) obtained through a common sensing matrix. k+1 is the ideal lower bound on the minimum number of measurements required for perfect recovery for almost all signals, i.e., excluding a set of Lebesgue measure zero. To get close to this lower bound by taking advantage of the signal structure, Lee et al. proposed the Subspace-Augmented MUltiple SIgnal Classification (SA-MUSIC) method, which is guaranteed to achieve the lower bound when the rank of the signal matrix is k, and provided less restrictive conditions than existing methods for approaching k+1 in the practically important case when the rank of the signal matrix is smaller than k. These conditions, however, are still restrictive despite the method's empirically superior performance. We propose an efficient algorithm called Two-stage orthogonal Subspace Matching Pursuit (TSMP), which has less restrictive theoretical conditions for approaching the lower bound than existing algorithms. Empirical results show that the low-complexity TSMP method outperforms most existing methods, even in the single measurement vector (SMV) case. Variants of the restricted isometry property and mutual coherence are used to improve the theoretical guarantees of TSMP and to cover the noisy case as well.

Similarity Clustering in the Presence of Outliers: Exact Recovery via Convex Program

Ramya Korlakai Vinayak and Babak Hassibi (California Institute of Technology, USA)

We study the problem of clustering a set of data points based on their similarity matrix, each entry of which represents the similarity between the corresponding pair of points. We propose a convex-optimization-based algorithm for clustering using the similarity matrix, which has provable recovery guarantees. It needs no prior knowledge of the number of clusters and it behaves in a robust way in the presence of outliers and noise. Using a generative stochastic model for the similarity matrix (which can be thought of as a generalization of the classical Stochastic Block Model) we obtain precise bounds (not orderwise) on the sizes of the clusters, the number of outliers, the noise variance, separation between the mean similarities inside and outside the clusters and the values of the regularization parameter that guarantee the exact recovery of the clusters with high probability. The theoretical findings are corroborated with extensive evidence from simulations.

Lossless linear analog compression

Helmut Bölcskei and Erwin Riegler (ETH Zurich, Switzerland); Günther Koliander (Vienna University of Technology, Austria); Giovanni Alberti (University of Pisa, Italy); Camillo De Lellis (University of Zurich, Switzerland)

We establish the fundamental limits of lossless linear analog compression by considering the recovery of random vectors x in R^m from the noiseless linear measurements y = Ax with measurement matrix A in R^(nxm). Specifically, for a random vector x in R^m of arbitrary distribution we show that x can be recovered with zero error probability from n > inf dim_MB(U) linear measurements, where dim_MB(.) denotes the lower modified Minkowski dimension and the infimum is over all sets U in R^m with P[x in U] = 1. This achievability statement holds for Lebesgue almost all measurement matrices A. We then show that s-rectifiable random vectors (a stochastic generalization of s-sparse vectors) can be recovered with zero error probability from n > s linear measurements. From classical compressed sensing theory we would expect n >= s to be necessary for successful recovery of x. Surprisingly, certain classes of s-rectifiable random vectors can be recovered from fewer than s measurements. Imposing an additional regularity condition on the distribution of s-rectifiable random vectors x, we do get the expected converse result of s measurements being necessary. The resulting class of random vectors appears to be new and will be referred to as s-analytic random vectors.

Lattice Strategies for the Ergodic Fading Dirty Paper Channel

Ahmed Hindy (University of Texas at Dallas, USA); Aria Nosratinia (University of Texas, Dallas, USA)

A modified version of Costa’s dirty paper channel is studied, in which both the input signal and the state experience stationary and ergodic time-varying fading. The fading coefficients are assumed to be known exclusively at the receiver. An inner bound of the achievable rates using lattice codes is derived and compared to an outer bound of the capacity. For a wide range of fading distributions, the gap to capacity is within a constant value that does not depend on either the power of the input signal or the state. The results presented in this paper are applied to a class of ergodic fading broadcast channels with receive channel state information, where the achievable rate region is shown to be close to capacity under certain configurations.

Partial DNA Assembly: A Rate-Distortion Perspective

Ilan Shomorony (UC Berkeley, USA); Govinda M Kamath (Stanford University, India); Fei Xia (Tsinghua University, P.R. China); Thomas Courtade (University of California, Berkeley, USA); David Tse (Stanford University, USA)

Earlier formulations of the DNA assembly problem were all in the context of perfect assembly; i.e., given a set of reads from a long genome sequence, is it possible to perfectly reconstruct the original sequence? In practice, however, it is very often the case that the read data is not sufficiently rich to permit unambiguous reconstruction of the original sequence. While a natural generalization of the perfect assembly formulation to these cases would be to consider a rate-distortion framework, partial assemblies are usually represented in terms of an assembly graph, making the definition of a distortion measure challenging. In this work, we introduce a distortion function for assembly graphs that can be understood as the logarithm of the number of Eulerian cycles in the assembly graph, each of which corresponds to a candidate assembly that could have generated the observed reads. We also introduce an algorithm for the construction of an assembly graph and analyze its performance on real genomes.

Minimum node degree in inhomogeneous random key graphs with unreliable links

Rashad Eletreby (Carnegie Mellon University, USA); Osman Yağan (Carnegie Mellon University & CyLab, USA)

We consider wireless sensor networks under a heterogeneous random key predistribution scheme and on-off channel model. The heterogeneous key predistribution scheme has recently been introduced by Yağan – as an extension to the Eschenauer and Gligor scheme – for the cases when the network consists of sensor nodes with varying levels of resources and/or connectivity requirements (e.g., regular nodes vs. cluster heads). We model the network by an intersection of the inhomogeneous random key graph (induced by the heterogeneous scheme) with an Erdős-Rényi (ER) graph (induced by the on-off channel model). We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model so that with high probability all of its nodes are connected to at least k other nodes; i.e., the minimum node degree of the graph is no less than k. We also present numerical results to support these results in the finite-node regime. The numerical results suggest that the conditions that ensure k-connectivity coincide with those ensuring minimum node degree to be no less than k.

Two-Way Spinal Codes

Weiqiang Yang (Xidian University, P.R. China); Ying Li (University of Xidian, P.R. China); Xiaopu Yu and Yue Sun (Xidian University, P.R. China)

In this paper, we propose a rateless two-way spinal code. The proposed code has two encoding processes, i.e., a forward encoding process and a backward encoding process. Unlike the original spinal code, where each message segment is related only to the coded symbols corresponding to itself and to later message segments, in the proposed code the information of each message segment is conveyed by the coded symbols corresponding to all message segments. Based on this two-way coding strategy, we propose an iterative decoding algorithm. Different transmission schemes, including symmetric and asymmetric transmission, are also discussed. Our analysis illustrates that asymmetric transmission can be treated as a tradeoff between performance and decoding complexity. Simulation results show that the proposed code outperforms not only the original spinal code but also some strong channel codes, such as polar codes and raptor codes.

Double Regenerating Codes for Hierarchical Data Centers

Yuchong Hu (Huazhong University of Science and Technology, P.R. China); Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong); Xiaoyang Zhang (Huazhong University of Science and Technology, P.R. China)

Data centers increasingly adopt erasure coding to ensure fault-tolerant storage with low redundancy, yet the hierarchical nature of data centers incurs substantial oversubscribed cross-rack bandwidth in failure repair. We present Double Regenerating Codes (DRC), whose idea is to perform two-stage regeneration, so as to minimize the cross-rack repair bandwidth for a single-node repair with the minimum storage redundancy. We prove the existence of a DRC construction, and show via quantitative comparisons that DRC significantly reduces the cross-rack repair bandwidth of state-of-the-art minimum storage regenerating codes.

Semantic-Security Capacity for Wiretap Channels of Type II

Ziv Goldfeld (Ben Gurion University, Israel); Paul Cuff (Princeton University, USA); Haim H Permuter (Ben-Gurion University, Israel)

The secrecy capacity of the type II wiretap channel (WTC II) with a noisy main channel is currently an open problem. Herein its secrecy-capacity is derived and shown to be equal to its semantic-security (SS) capacity. In this setting, the legitimate users communicate via a discrete-memoryless (DM) channel in the presence of an eavesdropper that has perfect access to a subset of its choosing of the transmitted symbols, constrained to a fixed fraction of the blocklength. The secrecy criterion is achieved simultaneously for all possible eavesdropper subset choices. On top of that, SS requires negligible mutual information between the message and the eavesdropper’s observations even when maximized over all message distributions.
A key tool for the achievability proof is a novel and stronger version of Wyner’s soft covering lemma. Specifically, the lemma shows that a random codebook achieves the soft-covering phenomenon with high probability. The probability of failure is doubly-exponentially small in the blocklength. Since the combined number of messages and subsets grows only exponentially with the blocklength, SS for the WTC II is established by using the union bound and invoking the stronger soft-covering lemma. The direct proof shows that rates up to the weak-secrecy capacity of the classic WTC with a DM erasure channel (EC) to the eavesdropper are achievable. The converse follows by establishing the capacity of this DM wiretap EC as an upper bound for the WTC II.

The Weight Consistency Matrix Framework for General Non-Binary LDPC Code Optimization: Applications in Flash Memories

Ahmed Hareedy and Chinmayi Lanka (University of California, Los Angeles (UCLA), USA); Clayton Schoeny (University of California, Los Angeles, USA); Lara Dolecek (UCLA, USA)

Transmission channels underlying modern memory systems, e.g., Flash memories, possess a significant amount of asymmetry. While existing LDPC codes optimized for symmetric, AWGN-like channels are being actively considered for Flash applications, we demonstrate that, due to channel asymmetry, such approaches are fairly inadequate. We propose a new, general, combinatorial framework for the analysis and design of non-binary LDPC (NB-LDPC) codes for asymmetric channels. We introduce a refined definition of absorbing sets, which we call general absorbing sets (GASs), and an important subclass of GASs, which we refer to as general absorbing sets of type two (GASTs). Additionally, we study the combinatorial properties of GASTs. We then present the weight consistency matrix (WCM), which succinctly captures key properties in a GAST. Based on these new concepts, we then develop a general code optimization framework, and demonstrate its effectiveness on the realistic highly-asymmetric normal-Laplace mixture (NLM) Flash channel. Our optimized codes enjoy over one order (resp., half of an order) of magnitude performance gain in the uncorrectable BER (UBER) relative to the unoptimized codes (resp. the codes optimized for symmetric channels).

Universal Multiparty Data Exchange

Himanshu Tyagi (Indian Institute of Science, India); Shun Watanabe (Tokyo University of Agriculture and Technology, Japan)

Multiple parties observing correlated data seek to recover each other’s data and attain omniscience. To that end, they communicate interactively over a noiseless broadcast channel: Each bit transmitted over this channel is received by all the parties. We give a universal interactive protocol for omniscience which requires communication of rate only O(n^{-1/2} log n) more than the optimal rate for every independent and identically distributed (in time) sequence of data.

Active Learning for Community Detection in Stochastic Block Models

Akshay Gadde, Eyal En Gad, Salman Avestimehr and Antonio Ortega (University of Southern California, USA)

The stochastic block model~(SBM) is an important generative model for random graphs in network science and machine learning, useful for benchmarking community detection (or clustering) algorithms. The symmetric SBM generates a graph with $2n$ nodes which cluster into two equally sized communities. Nodes connect with probability $p$ within a community and $q$ across different communities. We consider the case of $p=a\ln (n)/n$ and $q=b\ln (n)/n$. In this case, it was recently shown that recovering the community membership (or label) of every node with high probability (w.h.p.) using only the graph is possible if and only if the Chernoff-Hellinger (CH) divergence $D(a,b)=(\sqrt{a}-\sqrt{b})^2 \geq 1$. In this work, we study if, and by how much, community detection below the clustering threshold (i.e. $D(a,b)<1$) is possible by querying the labels of a limited number of chosen nodes (i.e., active learning). Our main result is to show that, under certain conditions, sampling the labels of a vanishingly small fraction of nodes (a number sub-linear in $n$) is sufficient for exact community detection even when $D(a,b)<1$. Furthermore, we provide an efficient learning algorithm which recovers the community memberships of all nodes w.h.p. as long as the number of sampled points meets the sufficient condition. We also show that recovery is not possible if the number of observed labels is less than $n^{1-D(a,b)}$. The validity of our results is demonstrated through numerical experiments.
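The thresholds quoted above are simple to evaluate. A minimal sketch using only the formulas stated in the abstract (the CH divergence and the $n^{1-D(a,b)}$ lower bound on the number of observed labels):

```python
import math

def ch_divergence(a, b):
    """Chernoff-Hellinger divergence D(a,b) = (sqrt(a) - sqrt(b))^2 for the
    symmetric two-community SBM with p = a ln(n)/n and q = b ln(n)/n."""
    return (math.sqrt(a) - math.sqrt(b)) ** 2

def exact_recovery_from_graph_alone(a, b):
    # Recovery from the graph alone is possible iff D(a,b) >= 1.
    return ch_divergence(a, b) >= 1

def label_lower_bound(a, b, n):
    """n^{1 - D(a,b)}: per the abstract, recovery is impossible when fewer
    labels than this are observed (relevant in the regime D(a,b) < 1)."""
    return n ** (1 - ch_divergence(a, b))
```

For instance, (a, b) = (4, 1) sits exactly at the threshold D = 1, while (a, b) = (1, 0.04) gives D = 0.64 and a sub-linear but nontrivial label requirement.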

A semidefinite programming upper bound of quantum capacity

Xin Wang (University of Technology Sydney, Australia); Runyao Duan (University of Technology, Australia)

Recently the power of positive partial transpose preserving (PPTp) and no-signalling (NS) codes in quantum communication has been studied. We continue with this line of research and show that the NS/PPTp/NS$\cap$PPTp codes assisted zero-error quantum capacity depends only on the non-commutative bipartite graph of the channel and the one-shot case can be computed efficiently by semidefinite programming (SDP). As an example, the activated PPTp codes assisted zero-error quantum capacity is carefully studied. We then present a general SDP upper bound $Q_\Gamma$ of quantum capacity and show it is always smaller than or equal to the “Partial transposition bound” introduced by Holevo and Werner, and the inequality could be strict. This upper bound is found to be additive, and thus is an upper bound of the potential PPTp assisted quantum capacity as well. We further demonstrate that $Q_\Gamma$ is strictly better than several previously known upper bounds for an explicit class of quantum channels. Finally, we show that $Q_\Gamma$ can be used to bound the super-activation of quantum capacity.

Spatially-Coupled Codes Approach Symmetric Information Rate of Finite-State Markov Fading Channels

Hiroshi Abe and Kenta Kasai (Tokyo Institute of Technology, Japan)

Fukushima et al. proved that spatially-coupled codes, without pilot symbols or any optimization for the channel, universally achieve the symmetric information rate (SIR) of generalized erasure channels with memory. We expect that this universality also holds for fading channels. The receiver performs joint iterative channel estimation and decoding, combining a factor-graph-based BCJR channel estimator for finite-state Markov channels with the LDPC decoder. We demonstrate that reliable transmission is possible at a rate close to the SIR.

Caching-Aided Multicast for Partial Information

Tetsunao Matsuta and Tomohiko Uyematsu (Tokyo Institute of Technology, Japan)

This paper deals with a multicast network with a server and many users. The server has content files of the same size, and each user requests one of the files. Each user also has a local memory, and partial information of the files is cached (i.e., stored) in these memories in advance of the users’ requests. Using this cached information as side information, the server encodes the files based on the users’ requests. It then sends a codeword through an error-free shared link over which all users receive a common codeword from the server without error. We assume that the server transmits either the whole or partial information of the requested files, each at a different transmission rate (i.e., codeword length per file size). In this paper, we focus on the region of pairs of these two rates such that the (whole or partial) information of the requested files is recovered at each user with arbitrarily small error probability. We give inner and outer bounds on this region.

Multiple Access Channel with Unreliable Cribbing

Wasim Huleihel (Technion & Technion – Israel Institute of Technology, Haifa, Israel); Yossef Steinberg (Technion, Israel)

It is by now well-known that cooperation between users can lead to significant performance gains. A common assumption in past works is that all users are aware of the resources available for cooperation, and know exactly to what extent these resources can be used. In this work, we consider the multiple access channel (MAC) with (strictly causal, causal, and non-causal) cribbing that may be absent. The derived achievable regions are based on a universal coding scheme which exploits the cribbing link if it is present, and can still operate (although at reduced rates) if cribbing is absent. We also derive an outer bound, which is tight in a special case.

Information Structures of Capacity Achieving Distribution for Channels with Memory and Feedback

Christos K Kourtellaris and Charalambos D Charalambous (University of Cyprus, Cyprus)

The information structures of the optimal channel input distributions ${\cal P}_{[0,n]}\triangleq \big\{{\bf P}_{A_i|A^{i-1}, B^{i-1}}:i=0, 1, \ldots, n\big\}$, which correspond to the extremum problem of feedback capacity $C_{A^n \to B^n}^{FB}\triangleq \sup_{{\cal P}_{[0,n]}}\sum_{i=0}^n I(A^i; B_i|B^{i-1})$, are identified for any class of channel distributions $\big\{{\bf P}_{B_i|B^{i-1}, A_i}:i=0, 1, \ldots, n\big\}$ and $\big\{ {\bf P}_{B_i|B_{i-M}^{i-1}, A_i}:i=0, 1, \ldots, n\big\}$, where $B^n \triangleq \{B_j: j=0,1, \ldots, n\}$ are the channel output RVs, $A^n\triangleq \{A_j: j=0,1, \ldots, n\}$ are the channel input RVs, and $M$ is a finite nonnegative integer. The methodology utilizes stochastic optimal control theory to identify the control process and the controlled process, and a variational equality of directed information to derive upper bounds on $I(A^n \to B^n)\triangleq \sum_{i=0}^n I(A^i; B_i|B^{i-1})$ that are achievable over specific subsets of ${\cal P}_{[0,n]}$ which satisfy conditional independence.
The main theorem states that, for any channel with memory $M$, the optimal channel input conditional distributions occur in the subset $\overset{\circ}{\cal P}_{[0,n]}\triangleq \big\{ {\bf P}_{A_i|B_{i-M}^{i-1}}: i=1, \ldots, n\big\} \subset {\cal P}_{[0,n]}$, and the corresponding extremum problem simplifies to the following characterization:
$$C_{A^n \to B^n}^{FB, M} \triangleq \sup_{\overset{\circ}{\cal P}_{[0,n]}} \sum_{i=0}^n I(A_i; B_i|B_{i-M}^{i-1}).$$

On MBR codes with replication

Nikhil Krishnan Muralee Krishnan (Indian Institute of Science, India); P. Vijay Kumar (Indian Institute of Science, Bangalore, India)

An early paper by Rashmi et al. presented the construction of an $(n,k,d=n-1)$ MBR regenerating code featuring the inherent double replication of all code symbols and repair-by-transfer (RBT), both of which are important in practice. We first show that no MBR code can contain even a single code symbol that is replicated more than twice. We then go on to present two new families of MBR codes which feature double replication of all systematic message symbols. The codes also possess a set of $d$ nodes whose contents include the message symbols and which can be repaired through help-by-transfer (HBT). As a corollary, we obtain systematic RBT codes for the case $d=(n-1)$ that possess inherent double replication of all code symbols and have a field size of $O(n)$, in comparison with the general $O(n^2)$ field size requirement of the earlier construction by Rashmi et al. For the cases $(k=d=n-2)$ or $(k+1=d=n-2)$, the field size can be reduced to $q=2$, and hence the codes can be binary. We also give a necessary and sufficient condition for the existence of MBR codes having double replication of all code symbols and also suggest techniques which will enable an arbitrary MBR code to be converted to one with double replication of all code symbols.

Deterministic Performance Analysis of Subspace Methods for Cisoid Parameter Estimation

Céline Aubel and Helmut Bölcskei (ETH Zurich, Switzerland)

Performance analyses of subspace algorithms for cisoid parameter estimation available in the literature are predominantly of statistical nature with a focus on asymptotic—either in the sample size or the SNR—statements. This paper presents a deterministic, finite sample size, and finite-SNR performance analysis of the ESPRIT algorithm and the matrix pencil method. Our results are based, inter alia, on a new upper bound on the condition number of Vandermonde matrices with nodes inside the unit disk. This bound is obtained through a generalization of Hilbert’s inequality frequently used in large sieve theory.

Age-of-Information in the Presence of Error

Kun Chen and Longbo Huang (Tsinghua University, P.R. China)

We consider the peak age-of-information (PAoI) in an M/M/1 queueing system with packet delivery error, i.e., update packets can get lost during transmission to their destination. We focus on two types of policies: one adopts Last-Come-First-Served (LCFS) scheduling, and the other utilizes retransmissions, i.e., keeps transmitting the most recent packet. Both policies can effectively avoid the queueing delay of a busy channel and ensure a small PAoI. Exact PAoI expressions under different error probabilities are derived for First-Come-First-Served (FCFS), LCFS with preemptive priority, LCFS with non-preemptive priority, retransmission with preemptive priority, and retransmission with non-preemptive priority. Numerical results obtained from analysis and simulation are presented to validate our results.
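As a baseline for the quantities discussed above, the Monte-Carlo sketch below estimates the mean PAoI of a plain error-free M/M/1 FCFS queue, for which the standard result is E[PAoI] = 1/lambda + 1/(mu - lambda); the paper's policies add delivery errors, LCFS scheduling, and retransmissions on top of this baseline.

```python
import random

def mean_paoi_mm1_fcfs(lam, mu, num_packets=200_000, seed=1):
    """Monte-Carlo estimate of mean peak age-of-information in an
    error-free M/M/1 FCFS queue. PAoI_i = Y_i + T_i: the interarrival
    gap to the previous packet plus packet i's system time."""
    rng = random.Random(seed)
    t_arr = 0.0      # arrival time of the current packet
    t_dep = 0.0      # departure time of the previous packet
    prev_arr = 0.0   # arrival time of the previously delivered packet
    total = 0.0
    for i in range(num_packets):
        t_arr += rng.expovariate(lam)
        t_dep = max(t_arr, t_dep) + rng.expovariate(mu)  # FCFS departure
        if i > 0:
            # age peaks just before this delivery: now minus the
            # generation time of the previously delivered update
            total += t_dep - prev_arr
        prev_arr = t_arr
    return total / (num_packets - 1)
```

With lam = 0.5 and mu = 1 the estimate should land near 1/0.5 + 1/(1 - 0.5) = 4.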

Coding of Insertion-Deletion-Substitution Channels without Markers

Ryohei Goto, Kenta Kasai and Haruhiko Kaneko (Tokyo Institute of Technology, Japan)

In this paper, we deal with coding for synchronization errors. In conventional studies, to combat such errors, periodic synchronization markers are inserted, or specifier and watermark codes are concatenated. These codes enable estimation of synchronization errors, but do not have the ability to correct random errors; low-density parity-check codes are usually concatenated to correct them. Because the inserted markers carry no information, periodic synchronization marker insertion prevents the codes from approaching capacity. Recently, it was observed that spatially-coupled codes universally approach the symmetric information rate (SIR) of arbitrary finite-state Markov channels. We introduce a finite-state Markov channel model with synchronization errors whose SIR is computable. Numerical experiments demonstrate that spatially-coupled codes approach the SIR of this channel.

Some Results on the Scalar Gaussian Interference Channel

Salman Beigi (Institute for Research in Fundamental Sciences, Iran); Sida Liu (The Chinese University of Hong Kong, Hong Kong); Chandra Nair (Chinese University of Hong Kong, Hong Kong); Mehdi Yazdanpanah (The Chinese University of Hong Kong, Hong Kong)

We study the optimality of Gaussian signaling (with power control) for the two-user scalar Gaussian interference channel. The capacity region is shown to exhibit a discontinuity of slope around the sum-rate point for a subset of the very weak interference channel. We also show that using colored Gaussians (multi-letter) does not improve on the single-letter region of Gaussian signaling with power control. Finally, we also present an approach to test the optimality of Gaussian signaling motivated by some calculations of the slope of the Han-Kobayashi region near the corner point of the Z-interference channel.

Approximating probability distributions with short vectors, via information theoretic distance measures

Ferdinando Cicalese (Università di Verona, Italy); Luisa Gargano and Ugo Vaccaro (University of Salerno, Italy)

Given a probability distribution $\mathbf{p}=(p_1, \ldots , p_n)$ and an integer $m<n$, what is the probability distribution $\mathbf{q}=(q_1, \ldots , q_m)$ that is “the closest” to $\mathbf{p}$, that is, that best approximates $\mathbf{p}$? It is clear that the answer depends on the function one chooses to evaluate the goodness of the approximation. In this paper we provide a general criterion to approximate $\mathbf{p}$ with a shorter vector $\mathbf{q}$ by using ideas from majorization theory. We evaluate the goodness of our approximation by means of a variety of information theoretic distance measures.
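One crude way to shorten a distribution, shown below purely for illustration (it is not the paper's construction), is to pool the smallest masses into a single atom; the shortened vector then majorizes the original and its entropy can only decrease, which hints at why majorization is a natural lens here.

```python
import math

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def merge_smallest(p, m):
    """Shorten an n-point distribution p to m points by pooling its
    n-m+1 smallest masses into one atom (illustrative heuristic only)."""
    p = sorted(p, reverse=True)
    return sorted(p[:m - 1] + [sum(p[m - 1:])], reverse=True)

def majorizes(q, p):
    """Check q >= p in the majorization order, padding with zeros so the
    vectors have equal length."""
    n = max(len(p), len(q))
    ps = sorted(p, reverse=True) + [0.0] * (n - len(p))
    qs = sorted(q, reverse=True) + [0.0] * (n - len(q))
    sp = sq = 0.0
    for a, b in zip(ps, qs):
        sp += a
        sq += b
        if sq < sp - 1e-12:
            return False
    return True
```

For p = (0.4, 0.3, 0.2, 0.1) and m = 2 this yields q = (0.6, 0.4), which majorizes p and has strictly smaller entropy.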

Incremental and Decremental Secret Key Agreement

Chung Chan (The Chinese University of Hong Kong, Hong Kong); Ali Al-Bashabsheh (Institute of Network Coding & The Chinese University of Hong Kong, Hong Kong); Qiaoqiao Zhou (The Chinese University of Hong Kong, Hong Kong)

In the usual multiterminal secret key agreement problem, the goal is to compute the maximum secret key rate, called the secrecy capacity, for a group of users observing a given private correlated random source. In this work, we study the rate of change of the capacity when some common randomness is added to or removed from a subset of users. We identify how one can increase the secrecy capacity efficiently by adding common randomness to a small subset of users. We can also simplify the source model by removing redundant common randomness that does not contribute to the secrecy capacity, hence possibly allowing simpler schemes for achieving the secrecy capacity. More importantly, as the secrecy capacity has been shown to measure the mutual information among multiple random variables, the results in this work characterize how changes in the mutual information of a subset of random variables affect the mutual information of the entire set. We also clarify some combinatorial structure of the problem and pose some meaningful open problems.

Coding Advantage in Communications among Peers

Kai Cai (University of Hong Kong, Hong Kong); Guangyue Han (The University of Hong Kong, Hong Kong)

We consider the problem of network coding advantage in a communication scenario where information exchange is bi-directional and peers communicate via multiple unicast sessions. In such a setting, we study the overall performance of all multiple unicast sessions and propose a version of the multiple unicast conjecture. One of our main results is a weaker version of the proposed conjecture: Consider all the multiple unicast sessions associated with a number of terminals in an undirected network. Then, the common transmission rate of all these multiple unicast sessions achieved by network coding (in the sense of Langberg and Medard) can also be achieved by fractional routing, or simply put, in a weak sense, coding advantage does not exist in our setting.

Capacity and Power Scaling Laws for Finite Antenna Amplify-and-Forward Relay Networks

David Simmons and Justin P Coon (University of Oxford, United Kingdom); Naqueeb Warsi (University of Oxford)

A novel framework is presented that can be used to study the capacity and power scaling properties of linear multiple-input multiple-output (MIMO) $d\times d$ antenna amplify-and-forward (AF) relay networks. In particular, we model these networks as random dynamical systems (RDS) and calculate their $d$ Lyapunov exponents. Our analysis can be applied to systems with any per-hop channel fading distribution; although, in this contribution we focus on Rayleigh fading. Our main results are twofold: 1) the total transmit power at the $n$th node will follow a deterministic trajectory through the network governed by the network’s maximum Lyapunov exponent, 2) the capacity of the $i$th eigenchannel at the $n$th node will follow a deterministic trajectory through the network governed by the network’s $i$th Lyapunov exponent. Before concluding, we present some numerical examples to highlight the theory. A more complete, extended version of this work has been submitted to the IEEE Transactions on Information Theory.

Comparison of quantum channels and statistical experiments

Anna Jencova (Mathematical Institute, Slovak Academy of Sciences, Slovakia)

For a pair of quantum channels with the same input space, we show that the possibility of approximation of one channel by post-processings of the other channel can be characterized by comparing the success probabilities for the two ensembles obtained as outputs for any ensemble on the input space coupled with an ancilla. This provides an operational interpretation to a natural extension of Le Cam’s deficiency to quantum channels. In particular, we obtain a version of the randomization criterion for quantum statistical experiments. The proofs are based on some properties of the diamond norm and its dual, which are of independent interest.

Worst case QC-MDPC decoder for McEliece cryptosystem

Julia Chaulet (Inria & Thales Communication and Security, France); Nicolas Sendrier (INRIA, France)

QC-MDPC-McEliece is a recent variant of the McEliece encryption scheme which enjoys relatively small key sizes as well as a security reduction to hard problems of coding theory. Furthermore, it remains secure against a quantum adversary and is very well suited to low-cost implementations on embedded devices. Decoding MDPC codes is achieved with the (iterative) bit-flipping algorithm, as for LDPC codes. Variable-time decoders might leak some information on the code structure (that is, on the sparse parity-check equations) and must be avoided. A constant-time decoder is easy to emulate, but its running time depends on the worst case rather than on the average case. So far, implementations have focused on minimizing the average cost. We show that tuning the algorithm to reduce the maximal number of iterations is not the same as tuning it to reduce the average cost. This provides some indications on how to engineer the QC-MDPC-McEliece scheme to resist a timing side-channel attack.
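To make the variable-time behaviour concrete, here is a minimal Gallager-style bit-flipping decoder (a generic textbook sketch, not the paper's tuned worst-case variant); the returned iteration count is exactly the quantity a timing side channel would expose.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Iterative bit flipping over GF(2): repeatedly flip the bits involved
    in the largest number of unsatisfied parity checks.
    Returns (codeword estimate, number of flipping iterations used)."""
    x = y.copy()
    for it in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, it              # valid codeword reached
        counts = H.T @ syndrome       # unsatisfied checks touching each bit
        x = (x + (counts == counts.max())) % 2
    return x, max_iter

# Toy example: (7,4) Hamming parity-check matrix, all-zero codeword
# corrupted by a single bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[2] = 1
x_hat, iters = bit_flip_decode(H, y)
```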

Soft Covering with High Probability

Paul Cuff (Princeton University, USA)

Wyner’s soft-covering lemma is the central analysis step for achievability proofs of information theoretic security, resolvability, and channel synthesis. It can also be used for simple achievability proofs in lossy source coding. This work sharpens the claim of soft-covering by moving away from an expected value analysis. Instead, a random codebook is shown to achieve the soft-covering phenomenon with high probability. The probability of failure is super-exponentially small in the block-length, enabling many applications through the union bound. This work gives bounds for both the exponential decay rate of total variation and the second-order codebook rate that suffices for soft covering.

Caching in Mobile HetNets: A Throughput-Delay Trade-off Perspective

Trung-Anh Do (Dankook University, Korea); Sang-Woon Jeon (Andong National University, Korea); Won-Yong Shin (Dankook University, Korea)

This paper analyzes the optimal throughput-delay trade-off in content-centric mobile heterogeneous networks (HetNets), where each node moves according to the random walk mobility model and requests a content object from the library independently at random, according to a Zipf popularity distribution. Instead of allowing access to all content objects at base stations (BSs) via costly backhaul, we consider a more practical scenario where mobile nodes and BSs, each having a finite-size cache space, are able to cache a subset of content objects so that each request is served by other mobile nodes or BSs via multihop transmissions. Under the protocol model, we characterize a fundamental throughput-delay trade-off in terms of scaling laws by introducing the optimal caching allocation strategy and the corresponding content delivery routing protocol.
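The Zipf request model used above is easy to make concrete. The sketch below is an illustrative baseline that simply caches the most popular files (not the paper's optimal caching allocation) and computes the resulting local hit probability:

```python
import numpy as np

def zipf_pmf(n_files, alpha):
    """Zipf popularity over a library of n_files contents: p_i ∝ i^(-alpha)."""
    w = np.arange(1, n_files + 1) ** -float(alpha)
    return w / w.sum()

def hit_prob_most_popular(n_files, alpha, cache_size):
    """Probability that a Zipf-distributed request is served locally when the
    cache stores the cache_size most popular files."""
    return float(zipf_pmf(n_files, alpha)[:cache_size].sum())

# Caching 10% of a 1000-file library under alpha = 1 already serves the
# majority of requests, which is what makes content caching attractive.
p_hit = hit_prob_most_popular(n_files=1000, alpha=1.0, cache_size=100)
```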

Collaborative Distributed Hypothesis Testing with General Hypotheses

Gil Katz (Supélec, France); Pablo Piantanida (CentraleSupélec-CNRS-Université Paris-Sud, France); Mérouane Debbah (Huawei, France)

The problem of collaborative distributed hypothesis testing is investigated. In this setting, a binary decision is required about the joint distribution of two arbitrary dependent memoryless processes that are sampled at different physical locations (nodes) in the system. Interactive rate-limited communication is allowed between these nodes. Defining two types of error events, the error exponent for an error of the second type is investigated, under a prescribed probability of error of the first type. A general achievable error exponent, as a function of the total available communication resources, is proposed for the case of two general hypotheses. The special case of testing against independence is revisited, for which it is shown that optimality can be attained, as a special case of the general achievable exponent, provided the constraint on the error probability of the first type goes to zero.

Centralized Coded Caching for Heterogeneous Lossy Requests

Qianqian Yang and Deniz Gündüz (Imperial College London, United Kingdom)

Centralized coded caching of popular contents is studied for users with heterogeneous distortion requirements, corresponding to diverse processing and display capabilities of mobile devices. Users’ distortion requirements are assumed to be fixed and known, while their particular demands are revealed only after the placement phase. Modeling each file in the database as an independent and identically distributed Gaussian vector, the minimum delivery rate that can satisfy any demand combination within the corresponding distortion target is studied. The optimal delivery rate is characterized for the special case of two users and two files for any pair of distortion requirements. For the general setting with multiple users and files, a layered caching and delivery scheme, which exploits the successive refinability of Gaussian sources, is proposed. This scheme caches each content in multiple layers, and it is optimized by solving two subproblems: lossless caching of each layer with heterogeneous cache capacities, and allocation of available caches among layers. The delivery rate minimization problem for each layer is solved numerically, while two schemes, called the proportional cache allocation (PCA) and naive cache allocation (NCA), are proposed for cache allocation. These schemes are compared with each other and the cut-set bound through numerical simulations.

Statistical Group Sparse Beamforming for Green Cloud-RAN via Large System Analysis

Yuanming Shi (ShanghaiTech University, P.R. China); Jun Zhang and Khaled B. Letaief (The Hong Kong University of Science and Technology, Hong Kong)

In this paper, we develop a statistical group sparse beamforming framework to minimize the network power consumption of green cloud radio access networks (Cloud-RANs). The scheme promotes group sparsity structures in the beamforming vectors, which provide a good indicator for remote radio head (RRH) ordering and enable adaptive RRH selection for power saving. In contrast to previous works, which depend heavily on instantaneous channel state information (CSI), the proposed algorithm relies only on the long-term channel attenuation for RRH ordering, which does not require frequent updates and thereby significantly reduces the computation overhead. This is achieved by developing a smoothed lp-minimization approach to induce group sparsity in the beamforming vectors, followed by an iterative reweighted-l2 algorithm based on the principles of the majorization-minimization (MM) algorithm and Lagrangian duality theory. With well-structured closed-form solutions at each iteration, we further leverage large-dimensional random matrix theory to derive deterministic approximations for the squared l2-norm of the induced group sparse beamforming vectors in the large system regime. The deterministic approximations depend only on the statistical CSI and guide the RRH ordering. Simulation results demonstrate the near-optimal performance of the proposed algorithm even in finite systems.

Lossy Compression with Near-uniform Encoder Outputs

Badri N Vellambi and Joerg Kliewer (New Jersey Institute of Technology, USA); Matthieu Bloch (Georgia Institute of Technology & Georgia Tech Lorraine, France)

It is well known that lossless compression of a discrete memoryless source at a rate just above entropy with near-uniform encoder output is possible if and only if the encoder and decoder share a common random seed. This work focuses on deriving conditions for near-uniform lossy compression in the Wyner-Ziv and the distributed lossy compression problems. We show that in the Wyner-Ziv case, near-uniform encoder output and operation close to the WZ rate limit are simultaneously possible, while in the distributed lossy compression problem, jointly near-uniform outputs are achievable at any rate point in the interior of the rate region, provided the sources share non-trivial common randomness.

Non-Bayesian Multiple Change-Point Detection Controlling False Discovery Rate

Jie Chen and Wenyi Zhang (University of Science and Technology of China, P.R. China); H. Vincent Poor (Princeton University, USA)

A sequential procedure for non-Bayesian multiple change-point problems subject to false discovery rate (FDR) control is considered. The procedure may be viewed as a variant of Benjamini and Hochberg's procedure tailored to change-point detection problems. A theoretical guarantee on the procedure's FDR is established. Further, sequential procedures that control the FDR and the familywise error rate are compared in terms of the average detection delay.
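For reference, the fixed-sample Benjamini-Hochberg step-up procedure that the sequential variant builds on can be stated in a few lines (the classical procedure only; the paper's change-point-tailored version is not reproduced here):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Classical Benjamini-Hochberg step-up procedure at FDR level q.
    Find the largest i with p_(i) <= q*i/m and reject the i smallest
    p-values. Returns a boolean mask of rejected hypotheses."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rej = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.9], q=0.05)
```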

On Caching with More Users than Files

Kai Wan (L2S – CNRS – Supelec – Univ Paris-Sud, France); Daniela Tuninetti (University of Illinois at Chicago, USA); Pablo Piantanida (CentraleSupélec-CNRS-Université Paris-Sud, France)

Caching appears to be an efficient way to reduce peak-hour network traffic congestion by storing some content in users' caches without knowledge of later demands. Recently, Maddah-Ali and Niesen proposed a two-phase (placement and delivery) coded caching strategy for centralized systems (where coordination among users is possible in the placement phase) and for decentralized systems. This paper investigates the same setup under the further assumption that the number of users is larger than the number of files. Using the same uncoded placement strategy of Maddah-Ali and Niesen, a novel coded delivery strategy is proposed to profit from the multicasting opportunities that arise because a file may be demanded by multiple users. The proposed delivery method is proved to be optimal under the constraint of uncoded placement for centralized systems with two files; moreover, it is shown to outperform known caching strategies for both centralized and decentralized systems.
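The Maddah-Ali-Niesen centralized baseline that this delivery scheme improves on has a simple closed form, shown below for orientation (the improved rate for more users than files is not reproduced here):

```python
def centralized_delivery_rate(K, N, M):
    """Maddah-Ali-Niesen centralized coded-caching delivery rate
    R(M) = K (1 - M/N) / (1 + K M/N), valid at cache sizes where
    t = K M / N is an integer."""
    return K * (1 - M / N) / (1 + K * M / N)

# K = 4 users, N = 2 files, each user caching M = 1 file (t = 2):
# coded delivery needs only 2/3 of a file, versus K(1 - M/N) = 2
# files for uncoded delivery.
rate = centralized_delivery_rate(K=4, N=2, M=1)
```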

An Improved Upper Bound on Block Error Probability of Least Squares Superposition Codes with Unbiased Bernoulli Dictionary

Yoshinari Takeishi (Mitsubishi Electric Information Network Corporation, Japan); Junichi Takeuchi (Kyushu University, Japan)

For the additive white Gaussian noise channel with an average power constraint, sparse superposition codes, proposed by Barron and Joseph in 2010, are known to achieve the capacity. We study upper bounds on the block error probability under least squares decoding when the dictionary from which codewords are constructed is drawn from an unbiased Bernoulli distribution. We improve the upper bounds shown by Takeishi et al. in 2014, with a fairly simplified form.

Correction of Data and Syndrome Errors by Stabilizer Codes

Alexei Ashikhmin (Bell Labs, Alcatel-Lucent, USA); Ching-Yi Lai (Academia Sinica, Taiwan); Todd A. Brun (University of Southern California, USA)

Performing active quantum error correction to protect fragile quantum states depends critically on the correctness of the error information, i.e., the error syndromes. To obtain reliable error syndromes using imperfect physical circuits, we propose the idea of quantum data-syndrome (DS) codes that are capable of correcting errors on both data qubits and syndrome bits. We study fundamental properties of quantum DS codes and provide several CSS-type constructions of quantum DS codes.

On the Optimal Boolean Function for Prediction Under Quadratic Loss

Nir Weinberger (Technion, Israel); Ofer Shayevitz (Tel Aviv University, Israel)

Suppose $Y^{n}$ is obtained by observing a uniform Bernoulli random vector $X^{n}$ through a binary symmetric channel. Courtade and Kumar asked how large the mutual information between $Y^{n}$ and a Boolean function $\mathsf{b}(X^{n})$ could be, and conjectured that the maximum is attained by the dictator function. An equivalent formulation of this conjecture is that dictator minimizes the prediction cost in sequentially predicting $Y^{n}$ under logarithmic loss, given $\mathsf{b}(X^{n})$. In this paper, we study the question of minimizing the sequential prediction cost under a different (proper) loss function – the quadratic loss. In the noiseless case, we show that majority asymptotically minimizes this prediction cost among all Boolean functions. We further show that for weak noise, majority is better than dictator, and that for strong noise dictator outperforms majority. We conjecture that for quadratic loss, there is no single Boolean function that is simultaneously optimal at all noise levels.

On SDoF of Multi-Receiver Wiretap Channel With Alternating CSIT

Zohaib Awan (RUB, Germany); Abdellatif Zaidi (Université Paris-Est Marne La Vallée, France); Aydin Sezgin (RUB & Digital Communication Systems, Germany)

We study the problem of secure transmission over a Gaussian multi-input single-output (MISO) two-receiver channel with an external eavesdropper, under the assumption that the state of the channel which is available to each receiver is conveyed either perfectly ($P$) or with delay ($D$) to the transmitter. Denoting by $S_1$, $S_2$, and $S_3$ the channel state information at the transmitter (CSIT) of user 1, user 2, and the eavesdropper, respectively, the overall CSIT can alternate between eight possible states, i.e., $(S_1,S_2,S_3) \in \{P,D\}^3$. We denote by $\lambda_{S_1 S_2 S_3}$ the fraction of time during which the state $S_1S_2S_3$ occurs. Under these assumptions, we consider the multi-receiver setup and characterize the SDoF region for the fixed hybrid states $PPD$, $PDP$, and $DDP$. We then focus our attention on the symmetric case in which $\lambda_{PDD}=\lambda_{DPD}$. For this case, we establish bounds on the SDoF region. The analysis reveals that alternating CSIT allows synergistic gains in terms of SDoF, and shows that, in contrast to encoding separately over different states, joint encoding across the states enables strictly better secure rates.

Optimizing The Spatial Content Caching Distribution for Device-to-Device Communications

Derya Malak (The University of Texas at Austin, USA); Mazin Al-Shalash (Huawei, USA); Jeffrey Andrews (The University of Texas at Austin, USA)

We study the optimal geographic content placement problem for device-to-device (D2D) networks in which the content popularity follows the Zipf law. We consider a D2D caching model where the locations of the D2D users (caches) are modeled by a Poisson point process (PPP) and the users have limited communication range and finite storage. Unlike most related work, which assumes independent placement of content and does not capture the locations of the users, we model the spatial properties of the network, including spatial correlation in the cached content. We propose two novel spatial correlation models, an exchangeable content model and a Matérn hard-core (MHC) content placement model, and analyze and optimize the hit probability, which is the probability of a given D2D node finding a desired file at another node within its communication range. We contrast these results to the independent placement model and show that exchangeable placement performs worse. On the other hand, MHC placement yields a higher cache hit probability than independent placement for small cache sizes.
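The independent-placement baseline against which the spatial models are compared has a simple closed form for the hit probability, sketched below (an illustrative baseline only; the Matérn and exchangeable models are not reproduced here):

```python
import numpy as np

def hit_prob_independent(lam, radius, popularity, placement):
    """Cache hit probability under independent content placement in a PPP of
    intensity lam: a request for file j is served iff at least one node
    caching j (a thinned PPP of intensity lam * b_j) lies within range.
    popularity p_j: request distribution; placement b_j: caching distribution."""
    p = np.asarray(popularity, dtype=float)
    b = np.asarray(placement, dtype=float)
    area = np.pi * radius ** 2
    return float(np.sum(p * (1.0 - np.exp(-lam * area * b))))

popularity = np.array([0.5, 0.3, 0.2])   # Zipf-like popularity, 3 files
h_match = hit_prob_independent(1.0, 1.0, popularity, placement=popularity)
h_unif = hit_prob_independent(1.0, 1.0, popularity, placement=np.full(3, 1/3))
```

In this toy instance, matching the caching distribution to the popularity beats caching uniformly at random, which is the kind of gap the optimized placement in the paper exploits.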

A Single-Letter Upper Bound on the Feedback Capacity of Unifilar Finite-State Channels

Oron Sabag and Haim H Permuter (Ben-Gurion University, Israel); Henry D Pfister (Duke University, USA)

A single-letter upper bound on the feedback capacity of a unifilar finite-state channel is derived. The upper bound is tight for all cases where the feedback capacity is known. Its efficiency is also demonstrated by direct application of the bound on the dicode erasure channel, which results in a new capacity result. The bound is based on a new technique, called the $Q$-contexts mapping, where the channel outputs are recursively quantized to a finite set, called the contexts set.

Secret Key Generation over Noisy Channels with Common Randomness

Germán Bassi (KTH Royal Institute of Technology, Sweden); Pablo Piantanida (CentraleSupélec-CNRS-Université Paris-Sud, France); Shlomo (Shitz) Shamai (The Technion, Israel)

This paper investigates the problem of secret key generation over a wiretap channel when the terminals have access to correlated sources. These sources are independent of the main channel and the users observe them before the transmission takes place. A novel achievable scheme for this model is proposed and is shown to be optimal under certain less noisy conditions. This result improves upon the existing literature where the more stringent condition of degradedness was needed.

Bandlimited Field Estimation from Samples Recorded by a Location-Unaware Mobile Sensor

Animesh Kumar (Indian Institute of Technology Bombay, India)

Spatial field sampling with a mobile sensor has recently been addressed in the literature. This work introduces, and proposes a solution to, a fundamental question: can a spatial field be estimated from samples taken by a mobile sensor at unknown sampling locations along a path? Spatially one-dimensional and bandlimited, and temporally fixed fields are considered. It is assumed that field samples are collected at spatial locations realized by an unknown renewal process; that is, the sampling locations and the inter-sample distribution of the renewal process are both unknown. It is shown that the average mean-squared error in field estimation decreases as O(1/n), where n is the sampling rate employed by the mobile sensor. The sampling rate can be increased by a location-unaware mobile sensor by controlling the mean value of the inter-sample spacing.

Cooperative Tx/Rx Caching in Interference Channels: A Storage-Latency Tradeoff study

Fan Xu, Kangqi Liu and Meixia Tao (Shanghai Jiao Tong University, P.R. China)

This paper studies the storage-latency tradeoff in the 3×3 wireless interference network with caches equipped at all transmitters and receivers. The tradeoff is characterized by the so-called fractional delivery time (FDT) at given normalized transmitter and receiver cache sizes. We first propose a generic cooperative transmitter/receiver caching strategy with adjustable file splitting ratios. Based on this caching strategy, we then design the delivery phase carefully to turn the considered interference channel opportunistically into a broadcast channel, a multicast channel, an X channel, or a hybrid form of these channels. After that, we obtain an achievable upper bound on the minimum FDT by solving a linear programming problem over the file splitting ratios. The achievable FDT is a convex and piecewise-linear decreasing function of the cache sizes. Receiver local caching gain, coded multicasting gain, and transmitter cooperation gain (interference alignment and interference neutralization) are leveraged in different cache size regions.

Codes with Unequal Locality

Swanand Kadhe and Alex Sprintson (Texas A&M University, USA)

In many practical settings, there is a need to design distributed storage codes with certain locality constraints. For a code $C$, its $i$-th symbol is said to have locality $r$ if it can be recovered by accessing some other $r$ symbols of $C$. Locally repairable codes (LRCs) are the family of codes such that every symbol has small locality.
In this paper, we focus on LRCs with “unequal symbol locality”, wherein different symbols of the code possess different values of locality. First, we consider a class of codes with “unequal information locality”, i.e., systematic codes with unequal locality constraints imposed only on the information symbols. For this class of codes, we compute a tight upper bound on the minimum distance as a function of locality constraints. We demonstrate that the construction of Pyramid codes by Huang et al. can be adapted to design “optimal” codes with unequal information locality that achieve the minimum distance bound.
Next, we consider codes with “unequal all-symbol locality”, i.e., codes in which the locality constraints are imposed on all symbols. We establish an upper bound on the minimum distance as a function of number of symbols of each locality value. We show that the construction based on rank-metric codes by Silberstein et al. can be adapted to obtain optimal codes with unequal all-symbol locality.
Finally, we introduce the concept of “locality requirement” of a code, which can be viewed as a recoverability requirement on symbols. Information locality requirement of a code essentially specifies the minimum number of information symbols of each locality value that must be present in the code. For a given locality requirement, we present a greedy algorithm to assign locality values to information symbols, which allows us to construct codes that have maximum minimum distance among all codes that satisfy the locality requirement.

Plausible Deniability over Broadcast Channels

Mayank Bakshi (The Chinese University of Hong Kong, Hong Kong); Vinod M Prabhakaran (Tata Institute of Fundamental Research, India)

In this paper, we introduce the notion of Plausible Deniability in an information theoretic framework. We consider a scenario where an entity that eavesdrops through a broadcast channel summons one of the parties in a communication protocol to reveal their message (or signal vector). It is desirable that the summoned party have enough freedom to produce a fake output that is likely plausible given the eavesdropper’s observation. We examine three variants of this problem — Message Deniability, Transmitter Deniability, and Receiver Deniability. In the first setting, the message sender is summoned to produce the sent message. Similarly, in the second and third settings, the transmitter and the receiver are required to produce the transmitted codeword, and the received vector respectively. For each of these settings, we examine the maximum communication rate that allows a given minimum rate of plausible fake outputs. For the Message and Transmitter Deniability problems, we fully characterise the capacity region for general broadcast channels, while for the Receiver Deniability problem, we give an achievable rate region for stochastically degraded broadcast channels.

Performance Evaluation of Faulty Iterative Decoders using Absorbing Markov Chains

Predrag N. Ivanis (School of Electrical Engineering, University of Belgrade, Serbia); Bane Vasić (University of Arizona, USA); David Declercq (ETIS ENSEA/univ. of Cergy-Pontoise/CNRS, France)

We propose an iterative decoder made of a combination of faulty and perfect logic gates that is capable of correcting more channel errors than its counterpart made completely of perfect logic gates. We present an error probability analysis based on absorbing Markov chains, and explain how the randomness in the check node update function helps a decoder escape from local minima associated with trapping sets. For the (155, 64) Tanner low-density parity-check code, we provide a range of gate failure probabilities for which the imperfect decoders perform better.
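The absorbing-Markov-chain analysis rests on standard fundamental-matrix computations, sketched here on a toy chain (generic textbook machinery, not the decoder's actual state space):

```python
import numpy as np

def absorption_stats(P, transient):
    """Given a Markov transition matrix P and the indices of its transient
    states, compute the fundamental matrix N = (I - Q)^(-1): N[i, j] is the
    expected number of visits to transient state j starting from transient
    state i, and N @ 1 gives the expected number of steps to absorption."""
    Q = P[np.ix_(transient, transient)]
    N = np.linalg.inv(np.eye(len(transient)) - Q)
    expected_steps = N.sum(axis=1)
    return N, expected_steps

# Toy 3-state chain: states 0 and 1 transient, state 2 absorbing
# (e.g. "decoding succeeded or failed definitively").
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
N, steps = absorption_stats(P, transient=[0, 1])
```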

Adaptive Recoding for BATS Codes

Hoover H.F. Yin (The Chinese University of Hong Kong, Hong Kong); Shenghao Yang (The Chinese University of Hong Kong, Shenzhen, P.R. China); Qiaoqiao Zhou and Lily M.L. Yung (The Chinese University of Hong Kong, Hong Kong)

BATS codes were proposed for communication through networks with packet loss. A BATS code consists of an outer code and an inner code. The outer code is a matrix generalization of fountain codes, which works with the inner code that comprises random linear network coding at the intermediate network nodes. In this paper, we propose a new inner code scheme for BATS codes, called adaptive recoding, which can be applied distributively at the intermediate network nodes, requiring only local knowledge of the received packets and the outgoing network link erasure probability. We show that adaptive recoding has significant throughput gain for relatively small batch sizes, compared with the baseline recoding scheme used in existing works.

On LCD Codes and Lattices

Xiaolu Hou and Frederique Oggier (Nanyang Technological University, Singapore)

LCD (linear complementary dual) codes are linear codes that trivially intersect with their duals. We address the question of an equivalent concept for lattices. We observe basic properties of the intersection of a lattice with its dual, and consider the construction of lattices from LCD codes using Construction A. Lattices obtained from the intersection of a code and its dual via Construction A are further discussed.

Some Results on Optimal Locally Repairable Codes

Jie Hao and Shutao Xia (Tsinghua University, P.R. China); Bin Chen (South China Normal University, P.R. China)

In a linear code, a code symbol is said to have locality $r$ if it can be repaired by accessing at most $r$ other code symbols. For an $(n,k,r)$ \emph{locally repairable code} (LRC), arguably the most important bounds on the minimum distance are the well-known Singleton-like bound and the Cadambe-Mazumdar bound, which takes the field size into account. In this paper, we study constructions of optimal LRCs from the viewpoint of parity-check matrices. Firstly, all the optimal binary LRCs meeting the Singleton-like bound are found in the sense of equivalence of linear codes, i.e., except for the proposed $4$ classes of LRCs, there is no other binary $(n,k,r)$ LRC with minimum distance $d=n-k-\lceil k/r\rceil +2$. Then a class of binary LRCs with distance $4$ and arbitrary locality is proposed and shown to be optimal with respect to the Cadambe-Mazumdar bound. Moreover, we give a class of high-rate optimal $q$-ary LRCs meeting the Singleton-like bound with minimum distance $4$, while the required field size is only $q \ge r-1$. Finally, several methods to obtain short optimal LRCs from long optimal LRCs are proposed.
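The Singleton-like bound referenced above is simple enough to check mechanically:

```python
from math import ceil

def lrc_singleton_bound(n, k, r):
    """Singleton-like upper bound on the minimum distance of an (n, k, r)
    locally repairable code: d <= n - k - ceil(k/r) + 2."""
    return n - k - ceil(k / r) + 2

# With r = k (no real locality constraint) this reduces to the classical
# Singleton bound d <= n - k + 1; demanding smaller locality costs distance.
d_classical = lrc_singleton_bound(10, 5, 5)
d_local = lrc_singleton_bound(10, 5, 2)
```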

Generalized Belief Propagation Based TDMR Detector and Decoder

Chaitanya K Matcha (Indian Institute of Science, Bangalore, India); Mohsen Bahrami (University of Arizona, USA); Shounak Roy (Indian Institute of Science, India); Shayan Garani (Indian Institute of Science, Bangalore, India); Bane Vasić (University of Arizona, USA)

Two-dimensional magnetic recording (TDMR) achieves high areal densities by reducing the size of a bit to be comparable to the size of the magnetic grains, resulting in two-dimensional (2D) inter-symbol interference (ISI) and very high media noise. It is therefore critical to handle the media noise along with the 2D ISI detection. In this paper, we tune the generalized belief propagation (GBP) algorithm to handle the media noise seen in TDMR. We also provide an intuition into the nature of the hard decisions provided by the GBP algorithm. The performance of the GBP algorithm is evaluated over a Voronoi-based TDMR channel model, where the soft outputs from the GBP algorithm are used by a belief propagation (BP) algorithm to decode low-density parity-check (LDPC) codes.

Two Classes of (r,t)-Locally Repairable Codes

Anyu Wang (Institute of Information Engineering, Chinese Academy of Sciences, P.R. China); Zhifang Zhang (Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.R. China)

Recently, (r,t)-locally repairable codes have attracted a lot of attention due to their potential application in distributed storage systems for hot data. A locally repairable code with locality r and availability t, termed an (r,t)-LRC, is a code with the property that the value at each coordinate can be recovered from t disjoint repair sets, each consisting of at most r other coordinates. In this paper, we investigate two constructions of (r,t)-LRCs. The first is a cyclic code whose parity-check polynomial is closely related to the trace function over finite fields. This code can achieve high availability and large minimum distance. The second is based on the incidence matrix of linear subspaces in F^m_q. For some specific parameters, we prove that its information rate is always higher than r/(r+t), which is conjectured to be close to the optimal information rate for (r,t)-LRCs (A. Wang and Z. Zhang, ISIT 2015). By shortening this code in a specially designed way, we obtain (r,t)-LRCs with slightly lower information rate but a much more desirable locality r.

Polar Codes for Broadcast Channels with Receiver Message Side Information and Noncausal State Available at the Encoder

Jin Sima and Wei Chen (Tsinghua University, P.R. China)

In this paper, polar codes are proposed for two-receiver broadcast channels with receiver message side information (BCSI) and noncausal state available at the encoder, referred to as BCSI with noncausal state for short, where the two receivers know a priori the private messages intended for each other. An achievable rate region for BCSI with noncausal state is established and shown to strictly contain the straightforward extension of the Gelfand-Pinsker result. To achieve the established rate region, we present polar codes for the general Gelfand-Pinsker problem, which adopt a chaining construction and utilize causal information to pre-transmit the frozen bits. It is also shown that causal information is necessary to pre-transmit the frozen bits. Based on the result for the Gelfand-Pinsker problem, we then propose polar codes for BCSI with noncausal state. The difficulty is that there are multiple chains sharing common information bit indices. To avoid value assignment conflicts, a nontrivial polarization alignment scheme is presented. It is shown that the proposed region is tight for degraded BCSI with noncausal state.

Near-Capacity Protograph Doubly-Generalized LDPC Codes with Block Thresholds

Asit Kumar Pradhan (Indian Institute of Technology Madras, India); Andrew Thangaraj (IIT Madras, India)

Protograph doubly-generalized low-density parity-check (DGLDPC) codes, which allow for arbitrary component codes at the variable and check nodes of a protograph, are considered. Exact density evolution is derived over the binary erasure channel. Conditions on the protograph and component codes to ensure equality of block-error threshold and density evolution threshold for large-girth ensembles are established. Conditions for stability of density evolution are derived, and block-error threshold property is extended to binary-input symmetric channels. Optimized low-rate protographs for DGLDPC codes over the erasure channel are presented.

Towards a Constant-Gap Sum-Capacity Result for the Gaussian Wiretap Channel with a Helper

Rick Fritschek and Gerhard Wunder (Freie Universität Berlin)

Recent investigations have shown that the sum secure degrees of freedom of the Gaussian wiretap channel with a helper is $\tfrac{1}{2}$. The achievable scheme for this result is based on the real interference alignment approach. While providing a good way to show degrees of freedom results, this technique has the disadvantage of relying on the Khintchine-Groshev theorem and is therefore limited to {\it almost all channel gains}. This means that there are infinitely many channel gains, where the scheme fails. Furthermore, the real interference alignment approach cannot be used to yield stronger constant-gap results. We approach this topic from a signal-scale alignment perspective and use the linear deterministic model as a first approximation. Here we can show a constant-gap sum capacity for certain channel gain parameters. We transfer these results to the Gaussian model and discuss the results.

On the Impact of Sparsity on the Broadcast Capacity of Wireless Networks

Serj Haddad and Olivier Lévêque (EPFL, Switzerland)

We characterize the maximum achievable broadcast rate in a wireless network at low SNR and under line-of-sight fading assumption. Our result shows that this rate depends negatively on the sparsity of the network. This is to be put in contrast with the number of degrees of freedom available in the network, which has been shown previously to increase with the sparsity of the network.

Inference of latent network features via co-intersection representations of graphs

Hoang Dau (University of Illinois at Urbana-Champaign, USA); Olgica Milenkovic (UIUC, USA)

We propose a new latent Boolean feature model for complex networks that captures different types of node interactions and network communities. The model is based on a new concept in graph theory, termed the co-intersection representation of a graph, which generalizes the notion of an intersection representation. We describe how to use co-intersection representations to deduce node feature sets and their communities, and proceed to derive several general bounds on the minimum number of features used in co-intersection representations. We also discuss graph families for which exact co-intersection characterizations are possible.

On Secrecy Rates and Outage in Multi-User Multi-Eavesdroppers MISO Systems

Joseph Kampeas and Asaf Cohen (Ben-Gurion University of the Negev, Israel); Omer Gurewitz (Ben-Gurion University Of The Negev, Israel)

In this paper, we study the secrecy rate and outage probability in Multiple-Input-Single-Output (MISO) Gaussian wiretap channels at the limit of a large number of legitimate users and eavesdroppers. In particular, we analyze the asymptotic achievable secrecy rates and outage, when only statistical knowledge on the wiretap channels is available to the transmitter.

The analysis provides exact expressions for the reduction in the secrecy rate as the number of eavesdroppers grows, compared to the boost in the secrecy rate as the number of legitimate users grows.

A statistical perspective of sampling scores for linear regression

Siheng Chen, Rohan Varma, Aarti Singh and Jelena Kovacevic (Carnegie Mellon University, USA)

In this paper, we consider a statistical problem of learning a linear model from noisy samples. Existing work has focused on approximating the least squares solution by using leverage-based scores as an importance sampling distribution. However, no finite sample statistical guarantees and no computationally efficient optimal sampling strategies have been proposed. To evaluate the statistical properties of different sampling strategies, we propose a simple yet effective estimator, which is amenable to theoretical analysis and is useful in multitask linear regression. We derive the exact mean square error of the proposed estimator for any given sampling scores. Based on minimizing the mean square error, we propose the optimal sampling scores for both estimator and predictor, and show that they are influenced by the noise-to-signal ratio. Numerical simulations match the theoretical analysis well.
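
As background to the sampling-score discussion, a minimal sketch of the standard leverage-score importance sampling for least squares that the paper builds on (toy dimensions are assumed; this is not the paper's proposed estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 5, 200            # samples, features, subsample size
X = rng.standard_normal((n, d))
beta = np.arange(1.0, d + 1)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Leverage scores: squared row norms of U from the thin SVD of X.
U, _, _ = np.linalg.svd(X, full_matrices=False)
lev = np.sum(U**2, axis=1)
p = lev / lev.sum()               # importance sampling distribution

idx = rng.choice(n, size=m, replace=True, p=p)
w = 1.0 / np.sqrt(m * p[idx])     # standard 1/sqrt(m p_i) reweighting
beta_hat = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)[0]
```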

Advanced Factorization Strategies for Lattice-Reduction-Aided Preequalization

Sebastian Stern and Robert F.H. Fischer (Ulm University, Germany)

Lattice-reduction-aided preequalization (LRA PE) is a powerful technique for interference handling on the multi-user multiple-input/multiple-output (MIMO) broadcast channel. However, recent advances in the strongly related field of compute-and-forward and integer-forcing equalization have raised the question of whether the factorization task present in LRA PE is really solved in an optimal way. In this paper, advanced factorization strategies are presented, significantly increasing the transmission performance. Specifically, the signal constellation and its related lattice as well as the factorization task/strategy are discussed. The impact of dropping the common unimodularity constraint in LRA PE is studied. Numerical simulations are given to show the effectiveness of all presented strategies.

Revisiting the Sanders-Bogolyubov-Ruzsa Theorem in F_p^n and its Application to Non-malleable Codes

Divesh Aggarwal (EPFL, Switzerland); Jop Briet (CWI, Amsterdam, The Netherlands)

Non-malleable codes (NMCs) protect sensitive data against degrees of corruption that prohibit error detection, ensuring instead that a corrupted codeword decodes correctly or to something that bears little relation to the original message. The split-state model, in which codewords consist of two blocks, considers adversaries who tamper with either block arbitrarily but independently of the other. The simplest construction in this model, due to Aggarwal, Dodis, and Lovett (STOC’14), was shown to give NMCs sending $k$-bit messages to $O(k^7)$-bit codewords. It is conjectured, however, that the construction allows linear-length codewords.
Towards resolving this conjecture, we show that the construction allows for code-length $O(k^5)$. This is achieved by analysing a special case of Sanders’s Bogolyubov-Ruzsa theorem for general Abelian groups. Closely following the excellent exposition of this result for the group $F_2^n$ by Lovett, we expose its dependence on $p$ for the group $F_p^n$, where $p$ is a prime.

Centralized Repair of Multiple Node Failures

Ankit Singh Rawat (Carnegie Mellon University, USA); Onur Ozan Koyluoglu (The University of Arizona, USA); Sriram Vishwanath (University of Texas Austin, USA)

This paper considers a distributed storage system, where multiple storage nodes can be reconstructed simultaneously at a centralized location. This centralized multi-node repair (CMR) model is a generalization of regenerating codes, which allow for bandwidth-efficient repair of a single failed node. This work focuses on the trade-off between the amount of data stored and repair bandwidth in this CMR model. In particular, repair bandwidth bounds are derived for the minimum storage multi-node repair (MSMR) and the minimum bandwidth multi-node repair (MBMR) operating points. The tightness of these bounds is analyzed via code constructions. The MSMR point is characterized through codes achieving this point under functional repair for a general set of CMR parameters, as well as with codes enabling exact repair for certain CMR parameters. The MBMR point, on the other hand, is characterized with exact repair codes for all CMR parameters for systems that satisfy a certain entropy accumulation property.

Proof of Threshold Saturation for Spatially Coupled Sparse Superposition Codes

Jean Barbier (EPFL, Switzerland); Mohamad Dia (EPFL & American University of Beirut, Switzerland); Nicolas Macris (EPFL, Switzerland)

Recently, a new class of codes, called sparse superposition or sparse regression codes, has been proposed for communication over the AWGN channel. It has been proven that they achieve capacity using power allocation and various forms of iterative decoding. Empirical evidence has also strongly suggested that the codes achieve capacity when spatial coupling and Approximate Message Passing decoding are used, without the need for power allocation. In this note we prove that State Evolution (which tracks message passing) indeed saturates the optimal threshold of the underlying code ensemble. Our proof uses ideas developed in the theory of low-density parity-check codes and compressive sensing.

Algebraic Lattice Codes Achieve the Capacity of the Compound Block-Fading Channel

Antonio Campello (Télécom Paristech, France); Cong Ling (Imperial College London, United Kingdom); Jean-Claude Belfiore (Telecom Paristech & Huawei Technologies, France)

We propose a coding scheme that achieves the capacity of the compound block-fading channel with lattice decoding at the receiver. Our lattice construction exploits the multiplicative structure of number fields and their group of units to absorb ill-conditioned channel realizations. To shape the constellation, a discrete Gaussian distribution over the lattice points is applied. A by-product of our results is the proof that the lattice Gaussian distribution is capacity-achieving in the AWGN channel for any signal-to-noise ratio.

On constructions of bent functions from involutions

Sihem Mesnager (University of Paris VIII & LAGA and Telcom Paristech, France)

Bent functions are maximally nonlinear Boolean functions. They are important functions introduced by Rothaus and studied first by Dillon and subsequently by many researchers over four decades. Since the complete classification of bent functions seems elusive, many researchers have turned to designing constructions of bent functions. In this paper, we show that linear involutions (which are an important class of permutations) over finite fields give rise to bent functions in bivariate representations. In particular, we exhibit new constructions of bent functions involving binomial linear involutions whose dual functions are directly obtained without computation. The existence of bent functions from involutions heavily relies on solving systems of equations over finite fields.
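
As a concrete illustration of bentness (not of the paper's involution-based constructions), one can verify that the Maiorana-McFarland function in bivariate form $f(x,y) = x \cdot y$ on $\mathbb{F}_2^2 \times \mathbb{F}_2^2$ is bent by checking that all of its Walsh coefficients have absolute value $2^{n/2}$:

```python
from itertools import product

def walsh_spectrum(f, n):
    """Walsh-Hadamard coefficients W_f(a) = sum_x (-1)^(f(x) + a.x)."""
    pts = list(product((0, 1), repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) % 2))
                for x in pts)
            for a in pts]

# Bivariate Maiorana-McFarland function: f(x, y) = x . y with x = (x1, x2), y = (x3, x4)
f = lambda x: (x[0] & x[2]) ^ (x[1] & x[3])
spec = walsh_spectrum(f, 4)
bent = all(abs(w) == 4 for w in spec)   # |W_f(a)| = 2^(n/2) for all a  <=>  f is bent
```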

Distributed Recursive Composite Hypothesis Testing: Imperfect Communication

Anit Kumar Sahu and Soummya Kar (Carnegie Mellon University, USA)

This paper focuses on the problem of distributed composite hypothesis testing in a noisy network of sparsely interconnected agents in which a pair of agents exchange information over an additive noise channel. The network objective is to test a simple null hypothesis against a composite alternative concerning the state of the field, modeled as a vector of (continuous) unknown parameters determining the parametric family of probability measures induced on the agents’ observation spaces under the hypotheses. A recursive generalized likelihood ratio test (GLRT) type algorithm in a distributed setup of the \emph{consensus+innovations} form is proposed, in which the agents update their parameter estimates and decision statistics by simultaneously processing the latest sensed information (\emph{innovations}) and information obtained from neighboring agents (\emph{consensus}). This paper characterizes the conditions and the testing algorithm design parameters which ensure that the probabilities of decision errors decay to zero asymptotically in the large sample limit.

Bounds on the Maximal Minimum Distance of Linear Locally Repairable Codes

Antti Pöllänen, Thomas Westerbäck, Ragnar Freij-Hollanti and Camilla Hollanti (Aalto University, Finland)

Locally repairable codes (LRCs) are error correcting codes used in distributed data storage. Besides a global level, they enable errors to be corrected locally, reducing the need for communication between storage nodes. There is a close connection between almost affine LRCs and matroid theory which can be utilized to construct good LRCs and derive bounds on their performance.

A generalized Singleton bound for linear LRCs with parameters (n,k,d,r,δ) was given in [N. Prakash et al., “Optimal Linear Codes with a Local-Error-Correction Property”, IEEE Int. Symp. Inf. Theory]. In this paper, an LRC achieving this bound is called perfect. Results on the existence and nonexistence of linear perfect (n,k,d,r,δ)-LRCs were given in [W. Song et al., “Optimal locally repairable codes”, IEEE J. Sel. Areas Comm.]. Using matroid theory, these existence and nonexistence results were later strengthened in [T. Westerbäck et al., “On the Combinatorics of Locally Repairable Codes”, Arxiv: 1501.00153], which also provided a general lower bound on the maximal achievable minimum distance d_max(n,k,r,δ) that a linear LRC with parameters (n,k,d,r,δ) can have. This article expands the class of parameters (n,k,d,r,δ) for which there exist perfect linear LRCs and improves the lower bound for d_max(n,k,r,δ). Further, this bound is proved to be optimal for the class of matroids that is used to derive the existence bounds of linear LRCs.
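
The generalized Singleton bound of Prakash et al. referenced above reads $d \le n - k + 1 - (\lceil k/r \rceil - 1)(\delta - 1)$; a one-line helper makes the "perfect" benchmark concrete:

```python
import math

def singleton_bound_lrc(n, k, r, delta):
    """Generalized Singleton bound of Prakash et al. for (n,k,d,r,delta) linear LRCs:
    d <= n - k + 1 - (ceil(k/r) - 1) * (delta - 1)."""
    return n - k + 1 - (math.ceil(k / r) - 1) * (delta - 1)

# For delta = 2 this recovers the classical LRC bound d <= n - k - ceil(k/r) + 2.
```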

Rate and Delay for Coded Caching with Carrier Aggregation

Nikhil Karamchandani (Indian Institute of Technology Bombay, India); Suhas Diggavi (University of California Los Angeles, USA); Giuseppe Caire (Technische Universität Berlin, Germany); Shlomo (Shitz) Shamai (The Technion, Israel)

Motivated by the ability of modern terminals to receive simultaneously from multiple networks (e.g., WLAN and Cellular), we extend the single shared link network with caching at the user nodes to the case of $r$ parallel partially shared links, where users in different classes receive from the server simultaneously and in parallel through different sets of links. For this setting, we give an order-optimal rate and (maximal) delay region characterization for the case of $r=2$ links with two classes of users, one receiving only from link $1$ and the other from both links $1$ and $2$. We also extend these results to $r=3$ with three classes of users, receiving from link 1, from links 1 and 2, and from links 1 and 3, respectively.

Nearly Optimal Robust Secret Sharing

Mahdi Cheraghchi (Imperial College London, United Kingdom)

We prove that a known approach to improving Shamir’s celebrated secret sharing scheme, namely adding an information-theoretic authentication tag to the secret, can make it robust for $n$ parties against any collusion of size $\delta n$, for any constant $\delta \in (0, 1/2)$. This result holds in the so-called “non-rushing” model in which the $n$ shares are submitted simultaneously for reconstruction. We thus finally obtain a fully explicit and robust secret sharing scheme in this model that is essentially optimal in all parameters including the share size which is $k(1+o(1)) + O(\kappa)$, where $k$ is the secret length and $\kappa$ is the security parameter. Like Shamir’s scheme, in this modified scheme any set of more than $\delta n$ honest parties can efficiently recover the secret.

Using algebraic geometry codes instead of Reed-Solomon codes, the share length can be decreased to a constant (only depending on $\delta$) while the number of shares $n$ can grow independently. In this case, when $n$ is large enough, the scheme satisfies the “threshold” requirement in an approximate sense; i.e., any set of $\delta n(1+\rho)$ honest parties, for arbitrarily small $\rho > 0$, can efficiently reconstruct the secret.
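
A minimal sketch of plain Shamir sharing (without the authentication tag that the paper adds) shows the polynomial-evaluation and Lagrange-interpolation mechanics; the field size and parameters below are purely illustrative:

```python
# Toy Shamir (t-out-of-n) sharing over a small prime field.
P = 2**13 - 1          # prime modulus (field size), 8191

def share(secret, n, coeffs):
    """Shares are evaluations of a polynomial with constant term = secret;
    threshold t = len(coeffs) + 1."""
    poly = [secret] + list(coeffs)
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(poly)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P   # Fermat inverse of den
    return s

shares = share(1234, n=5, coeffs=[17, 42])   # threshold t = 3
recovered = reconstruct(shares[1:4])
```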

Combinatorial and LP bounds for LRC codes

Sihuang Hu and Itzhak Tamo (Tel Aviv University, Israel); Alexander Barg (University of Maryland, USA)

A locally recoverable (LRC) code is a code that enables a simple recovery of an erased symbol by accessing only a small number of other symbols. We present several new combinatorial bounds on LRC codes including the locality-aware sphere packing and Plotkin bounds. We also develop an approach to linear programming (LP) bounds on LRC codes. The resulting LP bound gives better estimates in examples than the other upper bounds known in the literature.

Lower Bounds on Joint Modulation-Estimation Performance for the Gaussian MAC

Ayşe Ünsal (INSA Lyon, France); Raymond Knopp (Institut Eurecom, France); Neri Merhav (Technion, Israel)

This paper considers the problem of jointly estimating two independent continuous-valued parameters sent over a Gaussian multiple-access channel (MAC) subject to the mean square error (MSE) as a fidelity criterion. We generalize the parameter modulation-estimation analysis techniques proposed by Merhav in 2012 to a two-user multiple-access channel model to obtain outer bounds to the achievable region in the plane of the MSEs of the two user parameters, as well as the achievable region of the exponential decay rates of these MSEs in the asymptotic regime of long blocks.

Helper-Assisted State Cancelation for Multiple Access Channels

Yunhao Sun (Syracuse University, USA); Ruchen Duan (Samsung Semiconductor Inc., USA); Yingbin Liang (Syracuse University, USA); Ashish Khisti (University of Toronto, Canada); Shlomo (Shitz) Shamai (The Technion, Israel)

This paper investigates the two-user state-dependent Gaussian multiple access channel (MAC) with a helper. The channel is corrupted by an additive Gaussian state sequence known to neither the transmitters nor the receiver, but to a helper noncausally, which assists state cancelation at the receiver. Inner and outer bounds on the capacity region are first derived, which improve the previous bounds given by Duan et al. Further comparison of these bounds yields either segments on the capacity region boundary or the full capacity region by considering various cases of channel parameters.

On the Energy-Distortion Tradeoff for the Gaussian Broadcast Problem

Erman Köken and Ertem Tuncel (UC Riverside, USA)

The energy-distortion tradeoff for the transmission of a white Gaussian source over the additive white Gaussian broadcast channel is investigated by translating the known upper and lower bounds into the infinite bandwidth regime. While a gap continues to exist between the bounds in this regime, it is shown that in a certain region on the distortion plane, the energy difference between the best known upper and lower bounds is quantifiably small.

A Survey of Bratteli Information Source Theory

John C Kieffer (University of Minnesota, USA)

We survey recent results on Bratteli-Vershik information sources, which are sources that live on a Bratteli diagram, a type of graph with a countable infinity of vertices and edges that are split into levels. The results are valid when the underlying Bratteli diagram satisfies a regularity condition. These results include an ergodic decomposition theorem, Shannon-McMillan-Breiman theorem, and theorems in source coding theory. The results are obtained using the Vershik transformation that is associated with a Bratteli diagram. It is explained how some previously known results in source coding for finite-alphabet stationary sequential information sources are obtainable from source coding results for Bratteli-Vershik sources.

Single-User CSIT Can be Quite Useful for State-Dependent Broadcast Channels

Shih-Chun Lin (National Taiwan University of Science and Technology, Taiwan); I-Hsiang Wang (National Taiwan University, Taiwan)

State-dependent broadcast channels (BC) with heterogeneous channel state information available at the transmitter (CSIT) are studied. The heterogeneity of CSIT lies in the timeliness of channel state that governs the link from the transmitter to different receivers – CSI of each link can be available perfectly (causally or non-causally), with delay, or not at all at the transmitter. We focus on the erasure BC and the bursty Gaussian BC, where the channel states are governed by memoryless Bernoulli processes, independent across users. For the erasure BC with perfect single user CSIT, we characterize its capacity region regardless of the CSIT of the other user and show that this capacity region strictly contains that with no CSIT. For the case with delayed single-user CSIT, we propose an opportunistic network coding scheme that achieves a strictly larger rate region than the no-CSIT capacity region. These results are extended to the bursty Gaussian BC, where for the perfect single-user CSIT scenario, the capacity region is characterized to within a bounded gap; and for the delayed single-user CSIT scenario, a rate region based on the opportunistic network coding scheme is derived. As a corollary, single-user CSIT is able to increase the sum degrees of freedom (DoF) for the bursty Gaussian BC. Our result is in sharp contrast to the recent negative result by Davoodi and Jafar [1], where it is shown that for the fast-fading MISO broadcast channel, single-user CSIT does not help at all in terms of sum DoF.
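
The classical building block behind such opportunistic network coding with delayed CSIT is the XOR retransmission: if user 1 missed its packet $a$ but overheard $b$, and user 2 missed $b$ but overheard $a$, a single coded broadcast serves both. A minimal illustration (not the paper's full scheme):

```python
# Two-user erasure BC with delayed CSIT: each user missed its own packet
# but overheard the other's, so one retransmission of a XOR b serves both.
a, b = 0b10110101, 0b01101110    # packets intended for user 1 and user 2
coded = a ^ b                    # single coded broadcast retransmission
user1_decodes = coded ^ b        # user 1 cancels the overheard packet b
user2_decodes = coded ^ a        # user 2 cancels the overheard packet a
```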

Secure Lossy Helper and Gray-Wyner Problems

Meryem Benammar (HUAWEI Technologies France, France); Abdellatif Zaidi (Université Paris-Est Marne La Vallée, France)

In this work, we investigate two secure source coding models, a Helper problem and a Gray-Wyner problem. In both settings, the encoder is connected to each of the legitimate receivers through a public link as well as a private link; and an external eavesdropper intercepts all information sent on the public link. Specifically, in the Helper problem, a memoryless source pair (S0,S1) is to be compressed and sent on both links such that the component S0 can be recovered losslessly at the legitimate receiver while being kept completely secret from an eavesdropper that overhears on the public link, and the component S1 is recovered lossily, to within some prescribed distortion level, at the legitimate receiver. In the Gray-Wyner model, a memoryless source triple (S0,S1,S2) is to be compressed and sent to two legitimate receivers, such that the component S0 is recovered at both receivers losslessly and kept secret from an external eavesdropper that listens on the public link; and the component Sj is to be recovered lossily at Receiver j, j=1,2. We establish single-letter characterizations of the optimal secure rate-distortion regions of both models. The analysis sheds important light on the role of the private link(s), i.e., for the transmission of the source S0 or for sharing a secret key that is then used to encrypt the source S0 over the public link.

Collaboration Alignment in Distributed Interference Management in Uplink Cellular Systems

Borna Kananian (Sharif University of Technology, Iran); Mohammad Ali Maddah-Ali (Bell Labs, Alcatel Lucent, USA); Seyed Pooya Shariatpanahi (Institute for Research in Fundamental Sciences (IPM), Iran); Babak Hossein Khalaj (Sharif University of Technology, Iran)

We consider a cellular wireless system including several interfering multi-user multi-antenna uplink channels, where the base station of each cell has to locally recover the messages of its corresponding users. We use a linear Wyner model, where each base station experiences interference only from the users in the two neighboring cells. Each base station is connected to the two nearby base stations through a backhaul link. The objective is to achieve the maximum degrees of freedom per cell, with minimum aggregated load in the backhaul. We propose a cooperative alignment scheme, in which each base station forms backhaul messages by combining the previous received backhaul messages with the received signals at its wireless terminal. The backhaul messages allow the neighboring base stations to peel off the aggregated interference with minimum help. In this conference paper, we focus on linear processing schemes and prove the optimality of the proposed scheme, for systems with two antennas per base station and two users per cell, where each user is equipped with two antennas.

Canonical Conditions for K/2 Degrees of Freedom

David Stotz (ETH Zurich, Switzerland); Syed Ali Jafar (University of California Irvine, USA); Helmut Bölcskei (ETH Zurich, Switzerland); Shlomo (Shitz) Shamai (The Technion, Israel)

Stotz and Bölcskei, 2015, identified an explicit condition for K/2 degrees of freedom (DoF) in constant single-antenna interference channels (ICs). This condition is expressed in terms of linear independence—over the rationals—of monomials in the off-diagonal entries of the IC matrix and is satisfied for almost all IC matrices. There is, however, a prominent class of IC matrices that admits K/2 DoF but fails to satisfy this condition. The main contribution of the present paper is a more general condition for K/2 DoF (in fact for 1/2 DoF for each user) that, inter alia, encompasses this example class. While the existing condition by Stotz and Bölcskei is of algebraic nature, the new condition is canonical in the sense of capturing the essence of interference alignment by virtue of being expressed in terms of a generic injectivity condition that guarantees separability of signal and interference.

The rates of convergence of neural network estimates of hierarchical interaction regression models

Michael Kohler (Technische Universität Darmstadt, Germany); Adam Krzyżak (Concordia University, Canada)

In this paper we introduce so-called hierarchical interaction models, where we assume that the value of a function m from R^d to R is computed in several layers: in each layer, a function of at most d* of the outputs of the previous layer is evaluated. We investigate two different regression estimates based on polynomial splines and on neural networks, and show that if the regression function satisfies a hierarchical interaction model and all functions occurring in the model are smooth, the rate of convergence of these estimates depends on d* (and not on d). Hence in this case the estimates can achieve a good rate of convergence even for large d and are in this sense able to circumvent the so-called curse of dimensionality.

On Lossy Transmission of Correlated Sources over a Multiple Access Channel

Basak Guler (The Pennsylvania State University, USA); Deniz Gündüz (Imperial College London, United Kingdom); Aylin Yener (Pennsylvania State University, USA)

We study lossy communication of correlated sources over a multiple access channel. In particular, we provide a joint source-channel coding scheme for transmitting correlated sources with decoder side information, and study the conditions under which separate source and channel coding is optimal. For the latter, the encoders and/or the decoder have access to a common observation conditioned on which the two sources are independent. By establishing necessary and sufficient conditions, we show the optimality of separation when the encoders and the decoder both have access to the common observation. We also demonstrate that separation is optimal when only the encoders have access to the common observation whose lossless recovery is required at the decoder. As a special case, we study separation for sources with a common part. Our results indicate that side information can have significant impact on the optimality of source-channel separation in lossy transmission.

New Constructions of SD and MR Codes over Small Finite Fields

Guangda Hu (Princeton University, USA); Sergey Yekhanin (Microsoft Research)

Data storage applications require erasure-correcting codes with prescribed sets of dependencies between data symbols and redundant symbols. The most common arrangement is to have $k$ data symbols and $h$ redundant symbols (that each depends on all data symbols) be partitioned into a number of disjoint groups, where for each group one allocates an additional (local) redundant symbol storing the parity of all symbols in the group. A code as above is maximally recoverable (MR) if it corrects all erasure patterns that are information-theoretically correctable given the dependency constraints. A slightly weaker guarantee is provided by SD codes.

One key consideration in the design of MR and SD codes is the size of the finite field underlying the code as using small finite fields facilitates encoding and decoding operations. In this paper we present new explicit constructions of SD and MR codes over small finite fields.

Optimal Aging-Aware Channel Access Control for Wireless Networks with Energy Harvesting

Roberto Valentini (University of L’Aquila, Italy); Marco Levorato (University of California, Irvine, USA)

Energy harvesting is emerging as a key technology in wireless systems, allowing continuous and prolonged operation. However, the bursty nature of the energy arrival process associated with renewable sources and the energy usage pattern caused by wireless protocols may cause considerable stress to the battery and eventually reduce its lifetime. In fact, deep charging and discharging cycles degrade the battery State of Health, that is, the maximum amount of energy that can be stored. In this paper, a framework for the optimization of wireless nodes’ transmission strategy is presented, where battery aging rate is included as a constraint. The proposed framework is based on Markov Decision Process theory, where the embedded stochastic process models energy arrival and storage, and channel fading, as well as the control variables. Numerical results unveil the tension between packet delivery rate and battery degradation.
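
A toy sketch of the Markov Decision Process ingredients (with assumed illustrative parameters, not the paper's aging-constrained model): battery states, a transmit/idle action, Bernoulli energy arrivals, and value iteration to extract a policy.

```python
import numpy as np

# Battery levels 0..B; "transmit" spends one energy unit and earns reward 1;
# a unit of energy arrives with probability p_h each slot (capped at B).
B, p_h, gamma = 5, 0.6, 0.95
V = np.zeros(B + 1)
for _ in range(500):                          # value iteration
    V_new = np.empty_like(V)
    for s in range(B + 1):
        # idle: battery charges w.p. p_h
        idle = gamma * (p_h * V[min(s + 1, B)] + (1 - p_h) * V[s])
        vals = [idle]
        if s >= 1:                            # transmit: spend 1, may also harvest
            tx = 1 + gamma * (p_h * V[min(s, B)] + (1 - p_h) * V[s - 1])
            vals.append(tx)
        V_new[s] = max(vals)
    V = V_new

# Greedy policy w.r.t. the converged values: 1 = transmit, 0 = idle
policy = [int(s >= 1 and
              1 + gamma * (p_h * V[min(s, B)] + (1 - p_h) * V[s - 1]) >=
              gamma * (p_h * V[min(s + 1, B)] + (1 - p_h) * V[s]))
          for s in range(B + 1)]
```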

Latent Tree Ensemble of Pairwise Copulas for Spatial Extremes Analysis

Hang Yu, Junwei Huang and Justin Dauwels (Nanyang Technological University, Singapore)

We consider the problem of jointly describing extreme events at a multitude of locations, which is of paramount importance in catastrophe forecasting and risk management. Specifically, a novel Ensemble-of-Latent-Trees of Pairwise Copula (ELTPC) model is proposed. In this model, the spatial dependence is captured by latent trees expressed by pairwise copulas. To compensate for the limited expressiveness of every single latent tree, a mixture of latent trees is employed. By harnessing variational inference and stochastic gradient techniques, we further develop a triply stochastic variational inference (TSVI) algorithm for learning and inference. The corresponding computational complexity is only linear in the number of variables. Numerical results on both synthetic and real data show that the ELTPC model provides a reliable description of the spatial extremes in a flexible but parsimonious manner.

Performance Bounds for Remote Estimation with an Energy Harvesting Sensor

Ayca Ozcelikkale, Tomas McKelvey and Mats Viberg (Chalmers University of Technology, Sweden)

Remote estimation with an energy harvesting sensor with a limited data buffer is considered. The sensor node observes an unknown correlated circularly wide-sense stationary (c.w.s.s.) Gaussian field and communicates its observations to a remote fusion center using the energy it has harvested. The fusion center employs minimum mean-square error (MMSE) estimation to reconstruct the unknown field. We consider the distortion minimization problem under the online scheme, where the sensor only has access to statistical information about future energy packets. We provide performance bounds on the achievable distortion under a slotted block transmission scheme, where at each transmission time slot the data and energy buffers are completely emptied. Our bounds provide insight into the trade-off between the buffer sizes and the achievable distortion. These trade-offs illustrate the insensitivity of the performance to the buffer size for signals with a low degree of freedom, and suggest performance improvements with increasing buffer size for signals with a relatively higher degree of freedom.
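
For jointly Gaussian field and observations, the fusion center's MMSE reconstruction takes the standard linear form $\hat{x} = C_{xy} C_y^{-1} y$ with error covariance $C_x - C_{xy} C_y^{-1} C_{xy}^T$. A small sketch with an assumed linear observation model $y = Hx + w$ (illustrative, not the paper's transmission scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stationary Gaussian field x with exponential covariance, observed as y = H x + w.
n, m = 8, 4
C_x = np.array([[0.9 ** abs(i - j) for j in range(n)] for i in range(n)])
H = rng.standard_normal((m, n))
sigma2 = 0.1
C_y = H @ C_x @ H.T + sigma2 * np.eye(m)
C_xy = C_x @ H.T

# Draw a realization, form the linear MMSE estimate and its error covariance.
x = np.linalg.cholesky(C_x) @ rng.standard_normal(n)
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(m)
x_hat = C_xy @ np.linalg.solve(C_y, y)
C_err = C_x - C_xy @ np.linalg.solve(C_y, C_xy.T)
mmse = np.trace(C_err)           # expected squared error of the reconstruction
```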

Topological Coded Caching

Xinping Yi and Giuseppe Caire (Technische Universität Berlin, Germany)

Cache-aided network architectures are emerging as an innovative solution able to harness device memory, a cheap and widely available resource, into bandwidth, so as to meet the predicted dramatic increase of user data traffic generated by on-demand multi-media. In this paper, starting from the previously proposed and widely studied femtocaching network, we consider a partially connected interference network where the femto base stations are equipped with caches and have no access to channel state information beyond the network connectivity (network topology). We aim at characterizing the tradeoff between the cache memory size and the normalized transmission delay for file delivery. We formulate a joint file placement and delivery optimization problem, and propose approaches to compute extreme points of the achievable memory-delay region. Our algorithmic solution consists of decomposing the intractable joint optimization problem into separate subproblems, which are solvable using existing efficient methods.

Clustering from Sparse Pairwise Measurements

Alaa Saade (Ecole Normale Supérieure, France); Florent Krzakala (Ecole Normale Superieure, France); Marc Lelarge (INRIA and ENS, France); Lenka Zdeborova (Institut de Physique Theorique IPhT, CEA Saclay and CNRS, France)

We consider the problem of grouping items into clusters based on few random pairwise comparisons between the items. We introduce three closely related algorithms for this task: a belief propagation algorithm approximating the Bayes optimal solution, and two spectral algorithms based on the non-backtracking and Bethe Hessian operators. For the case of two symmetric clusters, we conjecture that these algorithms are asymptotically optimal in that they detect the clusters as soon as it is information theoretically possible to do so. We substantiate this claim for one of the spectral approaches we introduce.
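
A minimal sketch of the Bethe Hessian approach of Saade et al. (2014) on a toy two-cluster graph: the operator is $H(r) = (r^2-1)I - rA + D$, and for two groups the signs of the eigenvector of its second smallest eigenvalue give the assignment. The example graph below is an assumption for illustration, not from the paper:

```python
import numpy as np

def bethe_hessian_labels(A, r=None):
    """Two-group clustering via the Bethe Hessian H(r) = (r^2 - 1) I - r A + D:
    split on the signs of the eigenvector of the second smallest eigenvalue."""
    d = A.sum(axis=1)
    if r is None:
        r = np.sqrt(d.mean())                # typical choice: sqrt of mean degree
    H = (r**2 - 1) * np.eye(len(A)) - r * A + np.diag(d)
    vals, vecs = np.linalg.eigh(H)           # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)

# Two 5-cliques joined by a single edge: an easy planted partition.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1
A[5:, 5:] = 1
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1
labels = bethe_hessian_labels(A)
```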

Variational Bayesian Dynamic Compressive Sensing

Hongwei Wang (Northwestern Polytechnical University, P.R. China); Hang Yu, Micheal Hoy and Justin Dauwels (Nanyang Technological University, Singapore); Heping Wang (Northwestern Polytechnical University, P.R. China)

Dynamic compressed sensing (DCS) has recently gained popularity as a successful approach to recovering dynamic sparse signals. In this paper, we attack the problem from a Bayesian perspective. The proposed model imposes sparsity constraints on both the unknown sparse signal and its temporal innovation via Student-t priors. We then develop a computationally efficient mean-field variational Bayes algorithm to learn the model without parameter tuning. We consider both the online and offline scenarios, and demonstrate via numerical experiments that the proposed methods are superior to alternatives in terms of both reconstruction accuracy and computational time.

“Pretty strong” converse for the private capacity of degraded quantum wiretap channels

Andreas Winter (Universitat Autonoma de Barcelona & ICREA, Spain)

In the vein of the recent “pretty strong” converse for the quantum and private capacity of degradable quantum channels [Morgan/Winter, IEEE Trans. Inf. Theory 60(1):317-333, 2014], we use the same techniques, in particular the calculus of min-entropies, to show a pretty strong converse for the private capacity of degraded classical-quantum-quantum (cqq-) wiretap channels, which generalize Wyner’s model of the degraded classical wiretap channel.
While the result is not completely tight, leaving some gap between the region of error and privacy parameters for which the converse bound holds, and a larger no-go region, it represents a further step towards an understanding of strong converses of wiretap channels [cf. Hayashi/Tyagi/Watanabe, arXiv:1410.0443 for the classical case].

On Multistage Learning a Hidden Hypergraph

Arkadii Dyachkov and Ilya Vorobyev (Moscow State University, Russia); Nikita Polyanskii (Lomonosov Moscow State University, Russia); Vladislav Shchukin (Moscow State University, Russia)

Learning a hidden hypergraph is a natural generalization of the classical group testing problem that consists in detecting an unknown hypergraph $H_{un}=H(V,E)$ by carrying out edge-detecting tests. In this paper we focus on a specific family $F(t,s,\mathbf{l})$ of localized hypergraphs for which the total number of vertices $|V| = t$, the number of edges $|E|\le s$, $s \ll t$, and the cardinality of any edge $|e|\le l$, $l \ll t$. Our goal is to identify all edges of $H_{un}\in F(t,s,\mathbf{l})$ using the minimal number of tests. We develop an adaptive algorithm that matches the information-theoretic bound, i.e., the total number of tests of the algorithm in the worst case is at most $sl\log_2 t(1+o(1))$. We also discuss a probabilistic generalization of the problem.

Chernoff Information of Bottleneck Gaussian Trees

Binglin Li (Tsinghua University, Beijing, P.R. China); Shuangqing Wei (Louisiana State University, USA); Yue Wang and Jian Yuan (Tsinghua University, P.R. China)

In this paper, our objective is to identify the determining factors of the Chernoff information in distinguishing a set of Gaussian trees. In this set, each tree can be obtained from another tree via an edge removal and grafting operation. This is equivalent to asking for the Chernoff information between the most-likely-confused, i.e. “bottleneck”, Gaussian trees, as was recently shown to be the case for ML-estimated Gaussian tree graphs. We prove that the Chernoff information (CI) between two Gaussian trees related through such an operation is the same as that between two three-node Gaussian trees, whose topologies and edge weights are subject to the graphical operation. In addition, such CI is shown to be determined only by the maximum generalized eigenvalue of the two Gaussian covariance matrices. The Chernoff information of scalar Gaussian variables obtained by linear transformation (LT) of the original Gaussian vectors is also uniquely determined by the same maximum generalized eigenvalue. More interestingly, after incorporating the cost of measurements into a normalized Chernoff information, Gaussian variables from LT have larger normalized CI than that based on the original Gaussian vectors, as shown by the bounds we prove.
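
The quantity singled out above, the maximum generalized eigenvalue of two covariance matrices, is straightforward to compute. A minimal sketch with illustrative covariances (not the paper's) of two three-node Gaussian chains: the generalized eigenvalues of $(\Sigma_1, \Sigma_0)$ are the roots of $\det(\Sigma_1 - \lambda \Sigma_0) = 0$, i.e. the eigenvalues of $\Sigma_0^{-1}\Sigma_1$.

```python
import numpy as np

# Correlation matrices of two three-node Gaussian Markov chains X1 - X2 - X3
# (unit variances); the edge weights are illustrative, not from the paper.
def chain_cov(r12, r23):
    return np.array([[1.0, r12, r12 * r23],
                     [r12, 1.0, r23],
                     [r12 * r23, r23, 1.0]])

S0 = chain_cov(0.6, 0.3)
S1 = chain_cov(0.3, 0.6)

# Generalized eigenvalues: det(S1 - lam * S0) = 0 <=> eigenvalues of S0^{-1} S1.
lams = np.linalg.eigvals(np.linalg.solve(S0, S1)).real
lam_max = lams.max()
print(f"maximum generalized eigenvalue: {lam_max:.4f}")
```

Since both matrices are positive definite, all generalized eigenvalues are real and positive; here the two chains have equal determinants, so the product of the eigenvalues is 1 and the maximum strictly exceeds 1.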

Independent and Memoryless Sampling Rate Distortion

Vinay Praneeth Boda (University of Maryland, College Park, USA); Prakash Narayan (University of Maryland, USA)

Consider a discrete memoryless multiple source with $m$ component sources. A subset of $k \leq m$ sources are sampled at each time instant and jointly compressed in order to reconstruct all the $m$ sources under a given distortion criterion. A sampling rate distortion function is studied for two main sampling schemes. First, for independent random sampling performed without knowledge of the source outputs, it is shown that the sampling rate distortion function is the same regardless of whether the decoder is informed or not of the sequence of sampling sets. Next, memoryless random sampling is considered with the sampler depending on the source outputs and with an informed decoder. It is shown that deterministic sampling, characterized by a conditional point-mass, is optimal and suffices to achieve the sampling rate distortion function. For memoryless random sampling with an uninformed decoder, an upper bound for the sampling rate distortion function is seen to possess a similar property of conditional point-mass optimality. It is shown by example that memoryless sampling with an informed decoder can strictly outperform independent random sampling, and that memoryless sampling can do strictly better with an informed decoder than without.

Generalized Fault-Tolerant Quantum Computation over Nice Rings

Sangjun Lee and Andreas Klappenecker (Texas A&M University, USA)

Transversal operations are an elegant way to realize fault-tolerant quantum gates. Fault-tolerant quantum computation has been studied in detail over a finite field. In this paper, we derive transversal Clifford operations for CSS codes over nice rings, including Fourier transforms, SUM gates, and phase gates. Transversal operations alone cannot provide a computationally universal set of gates. As an example of a non-transversal gate, we derive fault-tolerant implementations of doubly-controlled $Z$ gates for triorthogonal stabilizer codes over nice rings.

A bit of delay is sufficient and stochastic encoding is necessary to overcome online adversarial erasures

Bikash K Dey (Indian Institute of Technology Bombay, India); Sidharth Jaggi (Chinese University of Hong Kong, Hong Kong); Michael Langberg (State University of New York at Buffalo, USA); Anand D. Sarwate (Rutgers University, USA)

We consider the problem of communicating a message $m$ in the presence of a malicious jamming adversary (Calvin), who can erase an arbitrary set of up to $pn$ bits, out of $n$ transmitted bits $\mathbf{X} = (x_1,\ldots,x_n)$. The capacity of such a channel when Calvin is {\it exactly causal}, {\it i.e.} Calvin’s decision of whether or not to erase bit $x_i$ depends on his observations $(x_1,\ldots,x_i)$ was recently characterized to be $1-2p$. In this work we show two (perhaps) surprising phenomena. Firstly, we demonstrate via a novel code construction that if Calvin is {\it delayed} by even a single bit, {\it i.e.} Calvin’s decision of whether or not to erase bit $x_i$ depends only on $(x_1,\ldots,x_{i-1})$ (and is independent of the “current bit” $x_i$) then the capacity increases to $1-p$ when the encoder is allowed to be stochastic. Secondly, we show via a novel jamming strategy for Calvin that, in the single-bit-delay setting, if the encoding is deterministic ({\it i.e.} the transmitted codeword $\mathbf{X}$ is a deterministic function of the message $m$) then no rate asymptotically larger than $1-2p$ is possible with vanishing probability of error; hence {\it stochastic encoding} (using private randomness at the encoder) is essential to achieve the capacity of $1-p$ against a one-bit-delayed Calvin.

On the Design of Linear Projections for Compressive Sensing with Side Information

Meng-Yang Chen (University College London, United Kingdom); Francesco Renna (University of Cambridge, United Kingdom); Miguel Rodrigues (University College London, United Kingdom)

In this paper, we study the problem of projection kernel design for the reconstruction of high-dimensional signals from low-dimensional measurements in the presence of side information, assuming that the signal of interest and the side information signal are described by a joint Gaussian mixture model (GMM). In particular, we consider the case where the projection kernel for the signal of interest is random, whereas the projection kernel associated to the side information is designed. We then derive sufficient conditions on the number of measurements needed to guarantee that the minimum mean-squared error (MMSE) tends to zero in the low-noise regime. Our results demonstrate that the use of a designed kernel to capture side information can lead to substantial gains in relation to a random one, in terms of the number of linear projections required for reliable reconstruction.

d-imbalance WOM Codes for Reduced Inter-Cell Interference in Multi-Level NVMs

Evyatar Hemo (Technion – Israel Institute of Technology, Israel); Yuval Cassuto (Technion, Israel)

In recent years, due to the spread of multi-level non-volatile memories, q-ary write-once memories (WOM) codes have been extensively studied. By using WOM codes, it is possible to rewrite NVMs t times before erasing the cells. The use of WOM codes can improve the performance of the storage device; however, it may also increase errors caused by inter-cell interference (ICI). This work presents WOM codes that restrict the imbalance between code symbols throughout the write sequence, hence decreasing ICI. We first specify the imbalance model as a bound d on the difference between codeword levels. Then a 2-cell code construction for general q and input size is proposed. An upper bound on the write count is also derived, showing the optimality of the proposed construction. The new codes are also shown to be competitive with known codes not adhering to the bounded imbalance constraint.
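
For background on how write-once coding trades capacity for rewrites, the classic binary two-write code of Rivest and Shamir stores 2 bits twice in 3 cells whose levels can only increase. The sketch below is this textbook code, not the d-imbalance construction of the paper.

```python
# Classic binary two-write WOM code (Rivest-Shamir): two bits are written
# twice into three write-once cells (cells may flip 0 -> 1 but never back).
# Second-generation codewords are complements of the first-generation ones.
FIRST  = {0b00: (0, 0, 0), 0b01: (1, 0, 0), 0b10: (0, 1, 0), 0b11: (0, 0, 1)}
SECOND = {0b00: (1, 1, 1), 0b01: (0, 1, 1), 0b10: (1, 0, 1), 0b11: (1, 1, 0)}

def write(msg, state=(0, 0, 0)):
    """Write the 2-bit msg; valid for at most two write generations."""
    target = FIRST[msg] if sum(state) == 0 else SECOND[msg]
    if all(t >= s for t, s in zip(target, state)):
        return target
    return state          # same message already stored: cells unchanged

def read(state):
    # First-generation codewords have weight <= 1, second-generation >= 2.
    table = FIRST if sum(state) <= 1 else SECOND
    return next(m for m, c in table.items() if c == state)

s1 = write(0b10)          # first write
s2 = write(0b01, s1)      # rewrite without erasing
assert read(s1) == 0b10 and read(s2) == 0b01
```

Two 2-bit messages are stored in 3 cells, for a rewriting rate of 4/3 bits per cell, above the 1 bit per cell of a plain single-write scheme.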

Data Extraction via Histogram and Arithmetic Mean Queries: Fundamental Limits and Algorithms

I-Hsiang Wang (National Taiwan University, Taiwan); Shao-Lun Huang (Massachusetts Institute of Technology, USA); Kuan-Yun Lee and Kwang-Cheng Chen (National Taiwan University, Taiwan)

The problems of extracting information from a data set via histogram queries or arithmetic mean queries are considered. We first show that the fundamental limit on the number of histogram queries, $m$, so that the entire data set of size $n$ can be extracted losslessly, is $m = \Theta (n/\log n)$, sub-linear in the size of the data set. For proving the lower bound (converse), we use standard arguments based on simple counting. For proving the upper bound (achievability), we propose two query mechanisms. The first mechanism is random sampling, where in each query, the items to be included in the queried subset are selected uniformly at random. With random sampling, it is shown that the entire data set can be extracted with vanishing error probability using $O(n/\log n)$ queries. The second one is a non-adaptive deterministic algorithm. With this algorithm, it is shown that the entire data set can be extracted exactly (no error) using $O(n/\log n)$ queries. We then extend the results to arithmetic mean queries, and show that for data sets taking values in a real-valued finite arithmetic progression, the fundamental limit on the number of arithmetic mean queries to extract the entire data set is also $\Theta(n/\log n)$. The connections with group testing and applications to data privacy are also discussed.

Cognitive Hierarchy Theory for Heterogeneous Uplink Multiple Access in the Internet of Things

Nof Abuzainab and Walid Saad (Virginia Tech, USA); H. Vincent Poor (Princeton University, USA)

In this paper, the problem of distributed uplink random access is studied for an Internet of Things (IoT) system composed of a heterogeneous group of nodes comprising both machine-type devices (MTDs) and human-type devices (HTDs). The problem is formulated as a noncooperative game between the heterogeneous IoT devices whose goal is to find the transmission probabilities and service rates that meet their individual quality-of-service (QoS) requirements. To solve this game while capturing the heterogeneity of the devices, in terms of resource constraints and QoS needs, a novel approach based on the behavioral game framework of cognitive hierarchy (CH) theory is proposed. This approach enables the IoT devices to reach a CH equilibrium concept that adequately factors in the various levels of rationality corresponding to the heterogeneous capabilities of MTDs and HTDs. Simulation results show that the proposed CH solution can significantly improve the performance, in terms of energy efficiency, for both MTDs and HTDs, achieving, on average, a 67% improvement compared to traditional Nash equilibrium-based game-theoretic solutions.

On a Hypergraph Approach to Multistage Group Testing Problems

Arkadii Dyachkov and Ilya Vorobyev (Moscow State University, Russia); Nikita Polyanskii (Lomonosov Moscow State University, Russia); Vladislav Shchukin (Moscow State University, Russia)

Group testing is a well-known search problem that consists in detecting up to s, s << t, defective elements of the set [t]={1,…,t} by carrying out tests on properly chosen subsets of [t]. In classical group testing the goal is to find all defective elements using the minimal possible number of tests. In this paper we consider multistage group testing. We propose a general hypergraph approach to searching for defective elements. For the case s=2 and t\to\infty, we design an explicit construction, which makes use of 2 log_2 t(1+o(1)) tests in the worst case and consists of 4 stages. For the general case of fixed s>2 and t\to\infty, we provide an explicit construction, which uses (2s-1) log_2 t(1+o(1)) tests and consists of 2s-1 rounds.
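
The log_2 t scaling is easiest to see in the classical single-defective case (s = 1), where a fully adaptive binary search attains it exactly. A minimal sketch (illustrative only; the multistage constructions above handle s >= 2 with a bounded number of rounds):

```python
from math import ceil, log2

def find_defective(t, is_positive):
    """Adaptive binary search for a single defective in {0, ..., t-1}.

    `is_positive(S)` models a group test: True iff the defective lies in
    the queried subset S. Uses at most ceil(log2 t) tests.
    """
    lo, hi = 0, t             # invariant: the defective lies in [lo, hi)
    tests = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        tests += 1
        if is_positive(range(lo, mid)):
            hi = mid          # positive test: defective in the left half
        else:
            lo = mid          # negative test: defective in the right half
    return lo, tests

t, defective = 1000, 421
found, used = find_defective(t, lambda S: defective in S)
assert found == defective and used <= ceil(log2(t))
```

Each test halves the candidate set, so ceil(log2 t) tests suffice, matching the counting lower bound for one defective.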

Coherent state constellations for Bosonic Gaussian channels

Felipe Lacerda (Aarhus University, Denmark); Joseph M. Renes (ETH Zurich, Switzerland); Volkher Scholz (Ghent University, Belgium)

We propose constellations of finitely-many coherent states for high-rate quantum and classical communication over the thermal noise Bosonic Gaussian channel. Our constructions are based on constellations for the classical additive white Gaussian noise (AWGN) channel, and we adapt the results of Wu and Verdú [Allerton 2010, pp. 620] for the AWGN to determine achievable rates of classical and quantum information transmission for the thermal noise channel. Several constellations allow classical rates approaching the classical capacity, recently determined by Giovannetti et al. [Nature Photonics 8, 796 (2014)], while in the quantum case the rates approach the Gaussian coherent information. The constellations can also be used for private transmission of classical information at the coherent information rate.

Variable-Length Coding with Stop-Feedback for the Common-Message Broadcast Channel

Kasper F Trillingsgaard (Aalborg University, Denmark); Wei Yang (Princeton University, USA); Giuseppe Durisi (Chalmers University of Technology, Sweden); Petar Popovski (Aalborg University, Denmark)

This paper investigates the maximum coding rate over a K-user discrete memoryless broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives all K stop signals. We present nonasymptotic achievability and converse bounds for the maximum coding rate, which strengthen and generalize the bounds previously reported in Trillingsgaard et al. (2015) for the two-user case. An asymptotic analysis of these bounds reveals that — contrary to the point-to-point case — the second-order term in the asymptotic expansion of the maximum coding rate decays as the inverse square root of the average blocklength. This holds for certain nontrivial common-message broadcast channels, such as the binary symmetric broadcast channel. Furthermore, we identify conditions under which our converse and achievability bounds are tight up to the second order. Through numerical evaluations, we illustrate that our second-order asymptotic expansion approximates accurately the maximum coding rate and that the speed of convergence to capacity is indeed slower than for the point-to-point case.

A Reduction Approach to the Multiple-Unicast Conjecture in Network Coding

Xunrui Yin and Zongpeng Li (University of Calgary, Canada); Xin Wang (Fudan University, P.R. China)

The multiple-unicast conjecture in network coding states that for multiple unicast sessions in an undirected network, network coding has no advantage over routing in improving the throughput or saving bandwidth. In this work, we propose a reduction method to study the multiple-unicast conjecture, and prove the conjecture for a new class of networks that are characterized by relations between cut-sets and source-receiver paths. This class subsumes the two known types of networks with non-zero max-flow min-cut gaps. Combining this result with a computer-aided search, we derive as a corollary that network coding is unnecessary in networks with up to 6 nodes. We also prove the multiple-unicast conjecture for almost all unit-link-length networks with up to 3 sessions and 7 nodes.

Achievable Rates for Additive Isotropic α-Stable Noise Channels

Malcolm Egan (Université Blaise Pascal, France); Mauro de Freitas (Université de Lille 1 & IEMN/IRCICA, France); Laurent Clavier (Institut Mines-Telecom, Telecom Lille & IEMN / IRCICA, France); Alban Goupil (Université de Reims Champagne-Ardenne, France); Gareth Peters (University College London, United Kingdom); Nourddine Azzaoui (Université Blaise Pascal – Clermont-Ferrand II, France)

Impulsive noise arises in many communication systems—ranging from wireless to molecular—and is often modeled by the α-stable distribution. In this paper, we investigate properties of the capacity of complex isotropic α-stable noise channels, which can arise in the context of wireless cellular communications and are not well understood at present. In particular, we derive a tractable lower bound, as well as prove existence and uniqueness of the optimal input distribution. We then apply our lower bound to study the case of parallel α-stable noise channels and derive a bound that provides insight into the effect of the tail index α on the achievable rate.

Optimal Differential Privacy Mechanisms under Hamming Distortion for Structured Source Classes

Kousha Kalantari and Lalitha Sankar (Arizona State University, USA); Anand D. Sarwate (Rutgers University, USA)

We develop the tradeoff between privacy, quantified using local differential privacy (L-DP), and utility, quantified using Hamming distortion, for specific classes of universal memoryless finite-alphabet sources. In particular, for the class of permutation invariant sources (i.e., sources whose distributions are invariant under permutations), the optimal L-DP mechanism is obtained. On the other hand, for the class of sources with ordered statistics (i.e., for every distribution $P=(P_1,P_2,\ldots,P_M) \in \mathcal{P}$, $P_1 \ge P_2 \ge P_3 \ge \ldots \ge P_M$), upper and lower bounds on the achievable local differential privacy are derived, with optimality results for a specific range of distortions.

Sparse Approximations of Directed Information Graphs

Christopher J Quinn (Purdue University, USA); Ali Pinar (Sandia National Laboratories, USA); Jing Gao (University at Buffalo, USA); Lu Su (State University of New York at Buffalo, USA)

Given a network of agents interacting over time, which few interactions best characterize the dynamics of the whole network? We propose an algorithm that finds the optimal sparse approximation of a network. The user controls the level of sparsity by specifying the total number of edges. The networks are modeled using directed information graphs, a graphical model that depicts causal influences between agents in a network. Goodness of approximation is measured with Kullback-Leibler divergence. The algorithm finds the best approximation with no assumptions on the topology or the class of the joint distribution.

Crossing the KS threshold in the stochastic block model with information theory

Emmanuel Abbe and Colin Sandon (Princeton University, USA)

Decelle et al. made a fascinating conjecture on the problem of detecting communities in the stochastic block model: up to 4 communities, the Kesten-Stigum (KS) threshold is conjectured to be the unique threshold for both efficient and non-efficient detection algorithms, whereas from 5 communities, it is conjectured that it is possible to detect communities below the KS threshold information-theoretically but not efficiently. This paper proves that indeed, from 5 communities onward, it is possible to detect below the KS threshold with a non-efficient algorithm that samples a typical clustering.
Further, the gap between the KS and information-theoretic bound is shown to be large in some cases. In the case where edges are drawn only across clusters with an average degree of $b$, and denoting by $k$ the number of communities, the KS bound reads $b \gtrsim k^2$ whereas our information-theoretic bound reads $b \gtrsim k \ln(k)$.

The Maximum Han-Kobayashi Sum-Rate for Gaussian Interference Channels

Ali Haghi and Amir K. Khandani (University of Waterloo, Canada)

The best known achievable rate region for the two-user Gaussian interference channel is due to the Han-Kobayashi (HK) scheme. The HK achievable region includes the regions achieved by all other known schemes. However, mathematical expressions that characterize the HK region are complicated and involve a time sharing variable and two arbitrary power splitting variables. Accordingly, the boundary points of the HK region, and in particular the maximum HK sum-rate, are not known in general. This paper studies the sum-rate of the HK scheme with Gaussian inputs. For the weak interference class, this study fully characterizes the maximum achievable sum-rate and shows that the weak interference class is partitioned into five regions. For each region, the optimal power splitting and the corresponding maximum achievable sum-rate are expressed in closed forms. Moreover, we show that the same approach can be adopted to characterize all boundary points.

Mutual Information, Relative Entropy and Estimation Error in Semi-Martingale Channels

Jiantao Jiao, Kartik Venkat and Tsachy Weissman (Stanford University, USA)

Fundamental relations between information and estimation have been established in the literature for the Gaussian and Poisson channels. In this work, we demonstrate that such relations hold for a much larger family of continuous-time channels. We introduce the family of semi-martingale channels where the channel output is a semi-martingale stochastic process, and the channel input modulates the characteristics of the semi-martingale. For these channels, which include the Gaussian and Poisson models as special cases, we establish new representations relating the mutual information between the channel input and output to an optimal causal filtering loss, thereby unifying and considerably extending results from the Gaussian and Poisson settings. Extensions to the setting of mismatched estimation are also presented where the relative entropy between the laws governing the output of the channel under two different input distributions is equal to the cumulative difference between the estimation loss incurred by using the mismatched and optimal causal filters respectively. The results in this work can be viewed as the continuous-time analogues of recent generalizations for relations between information and estimation for scalar transformations via Lévy channels.

A quadratic Welch-Berlekamp algorithm to decode generalized Gabidulin codes, and some variants

Gwezheneg Robert (Université de Rennes1, France)

Gabidulin codes are Maximum Rank Distance (MRD) codes. They have recently been generalized to cyclic Galois extension fields. The unique decoding problem is equivalent to the linear reconstruction problem. The aim of this article is to study an algorithm for solving this reconstruction problem. We prove that the output of our algorithm is a solution of the reconstruction problem. We then give some variants, and establish that one of these variants has quadratic complexity.

Stronger Attacks on Causality-Based Key Agreement

Benno Salwey (Università della Svizzera Italiana (USI), Switzerland); Stefan Wolf (USI Lugano, Switzerland)

Remarkably, it has been shown that in principle, security proofs for quantum key-distribution (QKD) protocols can be independent of assumptions on the devices used and even of the fact that the adversary is limited by quantum theory. All that is required instead is the absence of any hidden information flow between the laboratories, a condition that can be enforced either by shielding or by space-time causality. All known schemes for such Causal Key Distribution (CKD) that offer noise-tolerance (and, hence, must use privacy amplification as a crucial step) require multiple devices carrying out measurements in parallel on each end of the protocol, where the number of devices grows with the desired level of security. We investigate the power of the adversary for more practical schemes, where both parties each use a single device carrying out measurements consecutively. We provide a novel construction of attacks that is strictly more powerful than the best known attacks and has the potential to settle, in the negative, the question of whether such practical CKD schemes are possible.

Second-Order Asymptotics of Covert Communications over Noisy Channels

Mehrdad Tahmasbi (Georgia Institute of Technology, USA); Matthieu Bloch (Georgia Institute of Technology & Georgia Tech Lorraine, France)

We consider the problem of covert communication over noisy Discrete Memoryless Channels (DMCs). Covertness is measured with respect to an adversary in terms of the divergence between the channel output distribution induced with and without communication. We characterize the exact second order asymptotics of the number of bits that can be reliably transmitted with a probability of error less than $\epsilon$ and a divergence less than $\delta$. The main technical contribution of this paper is a detailed analysis of how to expurgate a random code while maintaining its channel resolvability properties.

Locally Differentially-Private Distribution Estimation

Adriano Pastore (Ecole Polytechnique Federale de Lausanne, Switzerland); Michael Gastpar (EPFL & University of California, Berkeley, Switzerland)

We consider a setup in which confidential i.i.d. samples $X_1,\dotsc,X_n$ from an unknown discrete distribution $P_X$ are passed through a discrete memoryless privatization channel (a.k.a. mechanism) which guarantees an $\epsilon$-level of local differential privacy. This constraint entails in particular that the mutual information between channel input and output is at most $\epsilon$ for any source distribution. For a given $\epsilon$, the channel should be designed such that an estimate of the source distribution based on the channel outputs converges as fast as possible to the exact value $P_X$. For this purpose we consider two metrics of estimation accuracy: the expected mean-square error and the expected Kullback-Leibler divergence. We derive their respective normalized first-order terms (as $n \to \infty$), which for a given target privacy $\epsilon$ represent the factor by which the sample size must be augmented so as to achieve the same estimation accuracy as that of an identity (non-privatizing) channel. We formulate the privacy-utility tradeoff problem as being that of minimizing said first-order term under a privacy constraint $\epsilon$. A converse bound is stated which bounds the optimal tradeoff away from the origin. Inspired by recent work on the optimality of staircase mechanisms (albeit for objectives different from ours), we derive an achievable tradeoff based on circulant step mechanisms. Within this finite class, we determine the optimal step pattern.
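
As a concrete baseline for such privatization channels, k-ary randomized response keeps the true symbol with probability e^ε/(e^ε + k − 1) and otherwise emits a uniformly random other symbol, which satisfies ε-local differential privacy; the empirical output frequencies can then be inverted into an unbiased estimate of P_X. This sketch is illustrative only and is not the circulant step mechanism of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def k_rr(xs, k, eps):
    """k-ary randomized response, an eps-LDP channel applied elementwise:
    keep x w.p. e^eps/(e^eps+k-1), else output a uniform other symbol."""
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    keep = rng.random(len(xs)) < p_keep
    shift = rng.integers(1, k, size=len(xs))   # uniform over the other k-1 values
    return np.where(keep, xs, (xs + shift) % k)

def estimate(ys, k, eps):
    """Unbiased estimate of P_X from the privatized samples."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)    # P(Y = x | X = x)
    q = 1.0 / (np.exp(eps) + k - 1)            # P(Y = y | X = x), y != x
    freq = np.bincount(ys, minlength=k) / len(ys)
    return (freq - q) / (p - q)

k, eps = 4, 1.0
P_X = np.array([0.4, 0.3, 0.2, 0.1])
xs = rng.choice(k, size=200_000, p=P_X)
est = estimate(k_rr(xs, k, eps), k, eps)
print(np.round(est, 3))
```

The channel's likelihood ratio is bounded by p/q = e^ε, which is exactly the ε-LDP constraint; the inversion step illustrates the sample-size inflation that the paper's first-order terms quantify.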

Dispersion of the Coherent MIMO Block-Fading Channel

Austin Collins and Yury Polyanskiy (MIT, USA)

In this paper we consider a channel model that is often used to describe the mobile wireless scenario: multiple-antenna additive white Gaussian noise channel subject to random (fading) gain with full channel state information available at the receiver. Dynamics of the fading process are approximated by a piecewise-constant process (frequency non-selective isotropic block fading). This work addresses the finite blocklength fundamental limits of this channel model. Specifically, we give a formula for the channel dispersion — a quantity governing the delay required to achieve capacity — and present achievability and (partial) converse bounds. Multiplicative nature of the fading disturbance leads to a number of interesting technical difficulties that required us to enhance traditional methods for finding channel dispersion. Knowledge of channel dispersion opens the possibility for studying the impact of channel dynamics, antenna selection rules, etc., on the communication rate.

Simplified Successive-Cancellation List Decoding of Polar Codes

Seyyed Ali Hashemi, Carlo Condo and Warren Gross (McGill University, Canada)

The Successive-Cancellation List (SCL) decoding algorithm is one of the most promising approaches towards practical polar code decoding. It is able to provide a good trade-off between error-correction performance and complexity, tunable through the size of the list. In this paper, we show that in the conventional formulation of SCL, there are redundant calculations which do not need to be performed in the course of the algorithm. We simplify SCL by removing these redundant calculations and prove that the proposed simplified SCL and the conventional SCL algorithms are equivalent. The simplified SCL algorithm is valid for any code and can reduce the time-complexity of SCL without affecting the space complexity.

Placement and Read Algorithms for High Throughput in Coded Network Switches

Rami Cohen (Technion – Israel Institute of Technology, Israel); Yuval Cassuto (Technion, Israel)

Coded switches write incoming packets with redundancy to increase the flexibility to read them later without contention. An important question pertaining to coded switches is what policy to follow when placing the coded packets in the switch memory. We study this question by proposing two such placement policies: Cyclic placement and (block-) Design placement. We show that these policies offer many advantages in switching throughput, algorithmic efficiency, and analysis amenability.

Classical-Quantum Arbitrarily Varying Wiretap Channel: Common Randomness Assisted Code and Continuity

Minglai Cai (Technische Universität München, Germany); Holger Boche (Technical University Munich, Germany); Christian Deppe (Technical University of Munich, Germany); Janis Nötzel (Universitat Autònoma de Barcelona, Spain)

We determine the secrecy capacities under common randomness assisted coding of arbitrarily varying classical-quantum wiretap channels. Furthermore, we determine the secrecy capacity of a mixed channel model which is compound from the sender to the legal receiver and varies arbitrarily from the sender to the eavesdropper. As an application we examine when the secrecy capacity is a continuous function of the system parameters and show that resources are helpful for channel stability.

Polar Coding for Processes with Memory

Eren Şaşoğlu (Intel Corporation, USA); Ido Tal (Technion, Israel)

We study polar coding over channels and sources with memory. We show that $\psi$-mixing processes polarize under the standard transform, and that the rate of polarization to deterministic distributions is roughly $O(2^{-\sqrt{N}})$ as in the memoryless case, where $N$ is the blocklength. This implies that the error probability guarantees of polar channel and source codes extend to a large class of models with memory, including finite-order Markov sources and finite-state channels.
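The polarization phenomenon underlying this result can be illustrated in the memoryless baseline case. A minimal sketch, assuming a binary erasure channel rather than the paper's $\psi$-mixing setting: under the standard transform, the erasure probabilities of the synthesized channels follow the recursion $e \mapsto \{2e-e^2,\ e^2\}$ and concentrate near 0 or 1.

```python
# Hedged sketch: polarization of erasure probabilities for a BEC under the
# standard polar transform (memoryless baseline, not the paper's setting
# with memory). The mean erasure probability is preserved at every level.
def polarize(eps, levels):
    z = [eps]
    for _ in range(levels):
        z = [w for e in z for w in (2 * e - e * e, e * e)]
    return z

z = polarize(0.5, 10)  # 2^10 = 1024 synthesized channels
extreme = sum(1 for e in z if e < 1e-3 or e > 1 - 1e-3)
print(f"{extreme}/{len(z)} synthesized channels are nearly perfect or useless")
```

The conserved mean reflects that polarization trades channel qualities without creating capacity; the growing `extreme` fraction is what makes the code construction work.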

On the Capacity of Fading Channels with Amplitude-Limited Inputs

Ahmad A ElMoslimany (Arizona State University, USA); Tolga M. Duman (Bilkent University, Turkey)

We address the problem of finding the capacity of fading channels with arbitrary distributions under the assumption of amplitude-limited inputs. Specifically, we show that if the fading coefficients have a finite support and the channel state information is available at the receiver side only, there is a unique input distribution that achieves the channel capacity and this input distribution is discrete with a finite number of mass points.

A Connection Between Locally Repairable Codes and Exact Regenerating Codes

Toni Ernvall (University of Turku, Finland); Thomas Westerbäck, Ragnar Freij-Hollanti and Camilla Hollanti (Aalto University, Finland)

Typically, locally repairable codes (LRCs) and regenerating codes have been studied independently of each other, and it has not been clear how the parameters of one relate to those of the other. In this paper, a novel connection between locally repairable codes and exact regenerating codes is established. Via this connection, locally repairable codes are interpreted as exact regenerating codes. Further, some of these codes are shown to perform better than time-sharing codes between minimum bandwidth regenerating and minimum storage regenerating codes.

Speeding Up Distributed Machine Learning Using Codes

Kangwook Lee (University of California, Berkeley, USA); Maximilian Lam, Ramtin Pedarsani and Dimitris Papailiopoulos (UC Berkeley, USA); Kannan Ramchandran (University of California at Berkeley, USA)

Distributed machine learning algorithms that dominate the use of modern large-scale computing platforms face several types of randomness, uncertainty and system “noise.” These include straggler nodes, system failures, maintenance outages, and communication bottlenecks. In this work, we view distributed machine learning algorithms through a coding-theoretic lens, and show how codes can equip them with robustness against this system noise.
Motivated by their importance and universality, we focus on two of the most basic building blocks of distributed learning algorithms: data shuffling and matrix multiplication. In data shuffling, we use codes to reduce communication bottlenecks: when a constant fraction of the data can be cached at each worker node, and n is the number of workers, coded shuffling reduces the communication cost by up to a factor Θ(n) over uncoded shuffling. For matrix multiplication, we use codes to alleviate the effects of stragglers, also known as the straggler problem. We show that if the number of workers is n, and the runtime of each subtask has an exponential tail, the optimal coded matrix multiplication is Θ(log n) times faster than the uncoded matrix multiplication or the optimal task replication scheme.
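The straggler-mitigation idea can be sketched with the smallest possible MDS code. The toy (3, 2) scheme below, with one parity block, is an illustration of the general approach rather than the paper's exact construction: any 2 of the 3 worker results suffice to recover the product, so one straggler can simply be ignored.

```python
import numpy as np

# Hedged sketch of coded matrix multiplication: split A into two blocks,
# add a parity block A1 + A2, and send one product to each of 3 workers.
# Any 2 returned results recover A @ x (a (3, 2) MDS code over the blocks).
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 3)).astype(float)
x = rng.integers(-5, 5, size=3).astype(float)

A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}     # encoding
results = {w: M @ x for w, M in tasks.items()}  # workers compute in parallel

# Suppose worker w2 straggles: decode A2 @ x from the parity result.
y1, y3 = results["w1"], results["w3"]
y2 = y3 - y1                                    # (A1 + A2) x - A1 x
assert np.allclose(np.concatenate([y1, y2]), A @ x)
```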

A Bayesian view of Single-Qubit Clocks, and an Energy versus Accuracy tradeoff

Manoj Gopalkrishnan (Tata Institute of Fundamental Research, India); Varshith Kandula and Praveen Sriram (Indian Institute of Technology Bombay, India); Abhishek Deshpande (Imperial College London and Tata Institute of Fundamental Research Mumbai, India); Bhaskaran Muralidharan (Indian Institute of Technology Bombay, India)

We bring a Bayesian viewpoint to the analysis of clocks. Using exponential distributions as priors for clocks, we analyze the case of a single precessing spin. We find that, at least with a single qubit, quantum mechanics does not allow exact timekeeping, in contrast to classical mechanics which does. We find the optimal ratio of angular velocity of precession to rate of the exponential distribution that leads to maximum accuracy. Further, we find an energy versus accuracy tradeoff — the energy cost is at least $k_BT$ times the improvement in accuracy as measured by the entropy reduction in going from the prior distribution to the posterior distribution.

Analysis on LT codes for Unequal Recovery Time with Complete and Partial Feedback

Rana Abbas (The University of Sydney, Australia); Mahyar Shirvanimoghaddam (University of Newcastle, Australia); Yonghui Li and Branka Vucetic (University of Sydney, Australia)

In this paper, we investigate the impact of feedback in LT codes to guarantee unequal recovery time (URT) for different message segments. We analyze the URT-LT codes using the AND-OR tree for two scenarios: complete and partial feedback. We derive the necessary conditions for these two feedback schemes to achieve the required recovery time. We validate the analysis by simulation and highlight the cases where feedback is advantageous.

Distance Preserving Maps and Combinatorial Joint Source-channel Coding for Large Alphabets

Arya Mazumdar (University of Massachusetts Amherst, USA); Yury Polyanskiy (MIT, USA); Ankit Singh Rawat (Carnegie Mellon University, USA); Hajir Roozbehani (MIT, USA)

In this paper we present several results regarding distance preserving maps between nonbinary Hamming spaces and combinatorial (adversarial) joint source-channel coding. In an $(\alpha,\beta)$-map from one Hamming space to another, any two sequences that are at least $\alpha$ relative distance apart are mapped to sequences that are relative distance at least $\beta$ apart. The motivation to study such maps comes from $(D,\delta)$-joint source-channel coding (JSCC) schemes, where any encoded sequence must be recovered within a relative distortion $D$, even in the presence of $\delta$ proportion of adversarial errors. We provide bounds on the parameters of both $(\alpha,\beta)$-maps and $(D,\delta)$-JSCC for nonbinary alphabets. We also provide constructive schemes for both, that are optimal for many cases.

Cascade Channels with Infinite Memory

Martin Mittelbach (Dresden University of Technology, Germany); Eduard Jorswieck (TU Dresden, Germany)

Two theorems are proved for a cascade channel with two components, one theorem regarding input memory and the other regarding output memory. First, we show that if both components are asymptotically input-memoryless, then the cascade channel is asymptotically input-memoryless as well. Further, we prove that if both components are $\alpha$-mixing and additionally the second component is causal and asymptotically input-memoryless, then the cascade channel is $\alpha$-mixing. The results make it possible to study memory properties of complex models by analyzing basic building blocks. Further, they can be applied to analyze memory properties of information sources at the output of a channel. The results are relevant, e.g., in connection with coding theorems, concentration inequalities, or central limit theorems. The considered model includes discrete- as well as continuous-time channels and sources with completely arbitrary alphabets.

MMSE Estimation in a Sensor Network in the Presence of an Adversary

Craig Wilson and Venugopal Veeravalli (University of Illinois at Urbana-Champaign, USA)

Estimation in a two node sensor network is considered, with one sensor of high quality but potentially affected by an adversary and one sensor of low quality but immune to the actions of the adversary. The observations of the sensors are combined at a fusion center to produce a minimum mean square error (MMSE) estimate taking into account the actions of the adversary. An approach based on hypothesis testing is introduced to decide whether the high quality sensor should be used. The false alarm probability of the hypothesis test introduces a natural trade-off between the mean square error (MSE) performance when the adversary takes no action and when the adversary acts. Finally, a method is developed to select the false alarm probability robustly to ensure good performance regardless of the adversary’s action.

A New Wiretap Channel Model and its Strong Secrecy Capacity

Mohamed Nafea (The Pennsylvania State University, USA); Aylin Yener (Pennsylvania State University, USA)

In this paper, a new wiretap channel (WTC) model is considered, consisting of a discrete memoryless (DM) main channel and a wiretapper who noiselessly observes a fixed portion, of her choice, of the transmitted symbols, while observing the remaining transmitted symbols through another DM channel (DMC). The strong secrecy capacity of the model is identified. The achievability is established using the output statistics of random binning framework, which exploits the duality between source and channel coding problems. The converse is derived by upper bounding the secrecy capacity of an equivalent model with the secrecy capacity of a DM-WTC. This result generalizes both the classical DM-WTC and the WTC-II with a DM main channel.

Fairness in Communication for Omniscience

Ni Ding (The Australian National University, Australia); Chung Chan and Qiaoqiao Zhou (The Chinese University of Hong Kong, Hong Kong); Rodney Andrew Kennedy and Parastoo Sadeghi (The Australian National University, Australia)

We consider the problem of how to fairly distribute the minimum sum-rate among the users in communication for omniscience (CO). We formulate a problem of minimizing a weighted quadratic function over a submodular base polyhedron which contains all achievable rate vectors, or transmission strategies, for CO that have the same sum-rate. By solving it, we can determine the rate vector that optimizes Jain’s fairness measure, a more commonly used fairness index than the Shapley value in communications engineering. We show that the optimizer is a lexicographically optimal (lex-optimal) base and can be determined by a decomposition algorithm (DA) that is based on a submodular function minimization (SFM) algorithm and completes in strongly polynomial time. We prove that the lex-optimal minimum sum-rate strategy for CO can be determined by finding the lex-optimal base in each user subset in the fundamental partition and the complexity can be reduced accordingly.
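Jain's fairness index, the measure being optimized here, has a simple closed form: for a rate vector $r$ of $n$ users, $J(r) = (\sum_i r_i)^2 / (n \sum_i r_i^2)$. A minimal sketch:

```python
# Jain's fairness index: equals 1 for a perfectly equal rate vector and
# drops to 1/n when a single user receives the entire sum-rate.
def jain(rates):
    n = len(rates)
    s = sum(rates)
    return s * s / (n * sum(r * r for r in rates))

assert jain([1.0, 1.0, 1.0, 1.0]) == 1.0                  # perfect fairness
assert abs(jain([4.0, 0.0, 0.0, 0.0]) - 0.25) < 1e-12     # worst case, 1/n
```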

Stopping Sets for MDS-based Product Codes

Fanny Jardel (Télécom ParisTech, France); Joseph Jean Boutros (Texas A&M University at Qatar, Qatar); Mireille Sarkiss (CEA LIST, France)

Stopping sets for MDS-based product codes under iterative row-column algebraic decoding are analyzed in this paper. A union bound to the performance of iterative decoding is established for the independent symbol erasure channel. This bound is tight at low and very low error rates. We also prove that the performance of iterative decoding reaches the performance of Maximum-Likelihood decoding at vanishing channel erasure probability. Numerical results are shown for product codes at different coding rates.

Are Imperfect Reviews Helpful in Social Learning?

Tho Ngoc Le (Northwestern University, USA); Vijay Subramanian (University of Michigan, USA); Randall A Berry (Northwestern University, USA)

Social learning encompasses situations in which agents attempt to learn from observing the actions of other agents. It is well known that in some cases this can lead to information cascades in which agents blindly follow the actions of others, even though this may not be optimal. Having agents provide reviews in addition to their actions provides one possible way to avoid “bad cascades.” In this paper, we study one such model where agents sequentially decide whether or not to purchase a product, whose true value is either good or bad. If they purchase the item, agents also leave a review, which may be imperfect. Conditioning on the underlying state of the item, we study the impact of such reviews on the asymptotic properties of cascades. For a good underlying state, using Markov analysis we show that depending on the review quality, reviews may in fact increase the probability of a wrong cascade. On the other hand, for a bad underlying state, we use martingale analysis to bound the tail-probability of the time until a correct cascade happens.

Further Results on Independent Metropolis-Hastings-Klein Sampling

Zheng Wang and Cong Ling (Imperial College London, United Kingdom)

Sampling from a lattice Gaussian distribution is emerging as an important problem in coding and cryptography. This paper gives a further analysis of the independent Metropolis-Hastings-Klein (MHK) algorithm we presented at ISIT 2015. We derive the exact spectral gap of the induced Markov chain, which dictates the convergence rate of the independent MHK algorithm. Then, we apply the independent MHK algorithm to the closest vector problem (CVP), and establish the trade-off between decoding performance and complexity.
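A one-dimensional toy of the independence-sampler step may help fix ideas. The sketch below targets a discrete Gaussian on the integers with a rounded-Gaussian proposal; the actual MHK algorithm runs Klein's algorithm over a lattice basis, and all parameters here are illustrative.

```python
import math, random

# Hedged 1-D sketch of independent Metropolis-Hastings sampling from a
# discrete Gaussian on Z (center c = 0.3, width s = 1.0).
def target(x, c=0.3, s=1.0):                  # unnormalized pi(x)
    return math.exp(-((x - c) ** 2) / (2 * s * s))

def proposal_pmf(x, c=0.3, s=1.0):            # pmf of round(Gaussian sample)
    phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
    return phi((x + 0.5 - c) / s) - phi((x - 0.5 - c) / s)

def sample(n, seed=1):
    rng = random.Random(seed)
    x, out = 0, []
    for _ in range(n):
        y = round(rng.gauss(0.3, 1.0))        # independent proposal
        # MH acceptance ratio for an independence sampler
        a = (target(y) * proposal_pmf(x)) / (target(x) * proposal_pmf(y))
        if rng.random() < a:
            x = y
        out.append(x)
    return out

chain = sample(20000)
mean = sum(chain) / len(chain)
```

Because the proposal closely matches the target, the acceptance rate is high; the spectral-gap analysis in the paper quantifies exactly this kind of convergence behavior in the lattice setting.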

Operator Algebra Approach to Quantum Capacities

Marius Junge, Li Gao and Nicolas Laracuente (University of Illinois at Urbana-Champaign, USA)

Using a suitable algebraic setup we find new estimates of the quantum capacity and the potential quantum capacity for non-degradable channels obtained by random unitaries associated with a finite group. This approach can be generalized to quantum groups and uses new tools from operator algebras and interpolation of Rényi-type entropies. As an application we obtain new estimates for the depolarizing channel in high dimension.

Further results on lower bounds for coded caching

Hooshang Ghasemi and Aditya Ramamoorthy (Iowa State University, USA)

Coded caching is a technique that promises huge rate savings in certain canonical content distribution scenarios over the Internet. In the coded caching setting, previous contributions have demonstrated a constant multiplicative gap between the achievable rate and corresponding lower bound on the rate, independent of the problem parameters. Our prior work demonstrated that good lower bounds on the coded caching rate can be obtained by equivalently considering a combinatorial problem on a directed tree. In this work, we study certain structural properties of our algorithm that allow us to analytically quantify improvements on the rate lower bound. This analysis allows us to obtain a multiplicative gap of at most four between the achievable rate and our lower bound. To our best knowledge, this is the best multiplicative gap known for this problem.

An LP Lower Bound for Rate Distortion with Variable Side Information

Sinem Unal and Aaron Wagner (Cornell University, USA)

We consider a rate distortion problem with side information at multiple decoders. Several lower bounds have been proposed for this general problem or special cases of it. We provide a lower bound for general instances of this problem, which was inspired by a linear-programming lower bound for index coding, and show that it subsumes most of the lower bounds in the literature. Using this bound, we explicitly characterize the rate distortion function of a problem which can be seen as a Gaussian analogue of the “odd-cycle” index coding problem.

Generalisation of Kraft inequality for source coding into permutations

Kristo Visk and Ago-Erik Riet (University of Tartu, Estonia)

We develop a general framework to prove Kraft-type inequalities for prefix-free permutation codes for source coding with various notions of permutation code and prefix. We also show that the McMillan-type converse theorem fails in most of these cases, and give a general form of a counterexample. Our approach is more general and works for other structures besides permutation codes. The classical Kraft inequality for prefix-free codes and results about permutation codes follow as corollaries of our main theorem and main counterexample.
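The classical Kraft inequality recovered as a corollary here is straightforward to verify numerically. A minimal sketch for ordinary prefix-free codes over a $D$-ary alphabet (not the permutation-code setting): the codeword lengths $l_i$ must satisfy $\sum_i D^{-l_i} \le 1$.

```python
# Classical Kraft inequality check for a prefix-free code: the Kraft sum
# over codeword lengths is at most 1.
def kraft_sum(lengths, D=2):
    return sum(D ** -l for l in lengths)

code = ["0", "10", "110", "111"]  # prefix-free binary code
# no codeword is a proper prefix of another
assert not any(a != b and b.startswith(a) for a in code for b in code)
assert kraft_sum([len(w) for w in code]) <= 1.0
```

The converse (McMillan-type) direction, that any length multiset with Kraft sum at most 1 is realized by some prefix-free code, is exactly what the paper shows can fail for most permutation-code notions.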

Approaching the Capacity of AWGN Channels using Multi-Layer Raptor Codes and Superposition Modulation

Mahyar Shirvanimoghaddam and Sarah J Johnson (University of Newcastle, Australia)

We propose a capacity approaching coding strategy for additive white Gaussian noise (AWGN) channels by using multi-layer Raptor codes and superposition modulation. Each AWGN channel is divided into several binary-input AWGN (BI-AWGN) channels at a very low signal to noise ratio (SNR), where a capacity approaching Raptor code can be used to encode the message over each layer. A single capacity approaching degree distribution is then used for the Raptor codes over all the layers. A well-known multi-stage decoder is used for successive interference cancellation and decoding the multi-layer Raptor code, where each decoding stage is modeled by a BI-AWGN channel of a fixed SNR. This allows the development of a capacity approaching code for an AWGN channel at every SNR by using a single Raptor code with a fixed degree distribution as the component code.

Low Complexity Precoding for MIMOME Wiretap Channels Based on Cut-off Rate

Sina Rezaei Aghdam and Tolga M. Duman (Bilkent University, Turkey)

We propose a low complexity transmit signal design scheme for achieving information-theoretic secrecy over a MIMO wiretap channel driven by finite-alphabet inputs. We assume that the transmitter has perfect channel state information (CSI) of the main channel and also knows the statistics of the eavesdropper’s channel. The proposed transmission scheme relies on jointly optimizing the precoder matrix and the artificial noise so as to maximize the achievable secrecy rates. In order to lower the computational complexity associated with the transmit signal design, we employ a design metric using the cut-off rate instead of the mutual information. We formulate a gradient-descent based optimization algorithm and demonstrate via extensive numerical examples that the proposed signal design scheme can yield an enhanced secrecy performance compared with the existing solutions in spite of its relatively lower computational complexity. The impacts of the modulation order as well as the number of antennas at the transmitter and receiver ends on the achievable secrecy rates are also investigated.

A Layered Caching Architecture for the Interference Channel

Jad Hachem (University of California, Los Angeles, USA); Urs Niesen (Qualcomm Research, USA); Suhas Diggavi (University of California Los Angeles, USA)

Recent work has studied the benefits of caching in the interference channel, particularly by placing caches at the transmitters. In this paper, we study the two-user Gaussian interference channel in which caches are placed at both the transmitters and the receivers. We propose a separation strategy that divides the physical and network layers. While a natural separation approach might be to abstract the physical layer into several \emph{independent} bit pipes at the network layer, we argue that this is inefficient. Instead, the separation approach we propose exposes \emph{interacting} bit pipes at the network layer, so that the receivers observe related (yet not identical) quantities. We find the optimal strategy within this layered architecture, and we compute the degrees-of-freedom it achieves. Finally, we state that separation is optimal in regimes where the receiver caches are large.

Keyless Covert Communication over Multiple-Access Channels

Keerthi Suria Kumar Arumugam (Georgia Institute of Technology, USA); Matthieu Bloch (Georgia Institute of Technology & Georgia Tech Lorraine, France)

We consider a scenario in which two legitimate transmitters attempt to communicate with a legitimate receiver over a discrete memoryless Multiple-Access Channel (MAC), while escaping detection from an adversary who observes their communication through another discrete memoryless MAC. If the MAC to the legitimate receiver is “better” than the one to the adversary, in a sense that we make precise, then the legitimate users can reliably communicate on the order of square root of n bits per n channel uses with arbitrarily Low Probability of Detection (LPD) without using a secret key. We also identify the pre-constants of the scaling, which leads to a characterization of the covert capacity region.

Cutset Width and Spacing for Reduced Cutset Coding of Markov Random Fields

Matthew G. Reyes (self-employed); David L Neuhoff (University of Michigan, USA)

In this paper we explore tradeoffs, regarding coding performance, between the thickness and spacing of the cutset used in Reduced Cutset Coding (RCC) of a Markov random field \cite{reyes2010}. Considering MRF models on a square lattice of sites, we show that under a stationarity condition, increasing the thickness of the cutset reduces the coding rate for the cutset, while increasing the spacing of the cutset increases the coding rate of the non-cutset pixels, though the latter rate is always strictly less than the former. We show that the redundancy of RCC can be decomposed into two terms, a correlation redundancy due to coding the components of the cutset independently, and a distribution redundancy due to coding the cutset as a reduced MRF. We provide analysis of these two sources of redundancy. We present results from numerical simulations with a homogeneous Ising model that bear out the analytical results.

Reliability of Sequential Hypothesis Testing Can Be Achieved by an Almost-Fixed-Length Test

Anusha Lalitha (University of California San Diego, USA); Tara Javidi (UCSD, USA)

The maximum type-I and type-II error exponents associated with the newly introduced almost-fixed-length hypothesis testing are characterized. In this class of tests, the decision-maker declares the true hypothesis almost always after collecting a fixed number of samples $n$; however, in very rare cases with exponentially small probability, the decision-maker is allowed to collect another set of samples (no more than polynomial in $n$). This class of hypothesis tests is shown to bridge the gap between classical hypothesis testing with a fixed sample size and sequential hypothesis testing, and to improve the trade-off between type-I and type-II error exponents.

Approximately achieving the feedback interference channel capacity with point-to-point codes

Joyson Sebastian and Can Karakus (University of California, Los Angeles, USA); Suhas Diggavi (University of California Los Angeles, USA)

Superposition codes with rate-splitting have been used for all approximately optimal strategies for the interference channel, with and without feedback. As rate-splitting requires foreknowledge of channel parameters or statistics, in this paper we explore schemes for the interference channel (with feedback) that do not use superposition or rate-splitting. We demonstrate that point-to-point codes designed for inter-symbol-interference channels, along with time-sharing, can approximately achieve the entire rate region of the interference channel with feedback. We show that such a scheme also approximately achieves the rate region for the interference channel with fading, for a large class of fading distributions.

Perfect Gaussian Integer Sequences from Cyclic Difference Sets

Xinjiao Chen (Wuhan University, P.R. China); Chunlei Li and Chunming Rong (University of Stavanger, Norway)

A Gaussian integer is a complex number whose real and imaginary parts are both integers. Perfect Gaussian integer sequences find application in many areas including CDMA systems, OFDM systems and space-time code design. This paper proposes a unified construction of perfect Gaussian integer sequences based on cyclic difference sets. It turns out that this construction produces an abundance of perfect Gaussian integer sequences. The proposed construction includes all the sequences recently given by Lee et al. as special cases, and many new families of Gaussian integer sequences. To illustrate, two classes defined from Kasami-Welch functions and Helleseth-Gong functions are also given.
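The defining "perfect" property, vanishing out-of-phase periodic autocorrelation, is easy to check directly. A minimal sketch; the example sequence is the classical real-valued $(1, 1, 1, -1)$, not one of the paper's new constructions.

```python
# A sequence is "perfect" when every out-of-phase periodic autocorrelation
# is zero; only the in-phase correlation (the sequence energy) is nonzero.
def autocorr(seq, shift):
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n].conjugate() for i in range(n))

def is_perfect(seq):
    return all(abs(autocorr(seq, t)) < 1e-9 for t in range(1, len(seq)))

assert is_perfect([1 + 0j, 1 + 0j, 1 + 0j, -1 + 0j])   # classical example
assert not is_perfect([1 + 0j, 1j, -1 + 0j, -1j])      # fails at shift 1
```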

Improved group testing rates with constant column weight designs

Matthew Aldridge (University of Bath & Heilbronn Institute for Mathematical Research, United Kingdom); Oliver Johnson (University of Bristol, United Kingdom); Jonathan Scarlett (EPFL, Switzerland)

We consider nonadaptive group testing where each item is placed in a constant number of tests. The tests are chosen uniformly at random with replacement, so the testing matrix has (almost) constant column weights. We show that performance is improved compared to Bernoulli designs, where each item is placed in each test independently with a fixed probability. In particular, we show that the rate of the practical COMP detection algorithm is increased by 31% in all sparsity regimes. In dense cases, this beats the best possible algorithm with Bernoulli tests, and in sparse cases is the best proven performance of any practical algorithm. We also give an algorithm-independent upper bound for the constant column weight case; for dense cases this is again a 31% increase over the analogous Bernoulli result.
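The COMP algorithm analyzed above is simple to sketch under a constant-column-weight design: an item is declared non-defective if and only if it appears in at least one negative test. The design parameters below are illustrative, not the paper's optimized choices.

```python
import random

# Hedged sketch of nonadaptive group testing with a constant-column-weight
# design and COMP decoding. Each item joins L tests chosen uniformly with
# replacement; COMP keeps exactly the items all of whose tests are positive.
def comp(n_items=500, n_tests=180, L=5, n_defective=10, seed=7):
    rng = random.Random(seed)
    cols = [set(rng.randrange(n_tests) for _ in range(L)) for _ in range(n_items)]
    defective = set(rng.sample(range(n_items), n_defective))
    positive = set().union(*(cols[i] for i in defective))  # test outcomes
    decoded = {i for i in range(n_items) if cols[i] <= positive}
    return defective, decoded

defective, decoded = comp()
assert defective <= decoded   # COMP never misses a defective item
```

COMP can produce false positives (non-defective items whose tests all happen to be positive) but never false negatives; the rate improvements in the paper come from how the constant-column-weight design shrinks the false-positive probability relative to Bernoulli designs.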

On the Capacity of Non-Binary Write-Once Memory

Michal Horovitz (Technion – Israel Institute of Technology, Israel); Eitan Yaakobi (Technion, Israel)

Write-once memory (WOM) is a storage device consisting of q-ary cells that can only increase their value. A WOM code is a scheme to write messages to the memory without decreasing the cells’ levels. There are four models of WOM, which depend on whether the encoder and decoder are informed or uninformed of the previous state of the memory. The WOM capacity of the four models was extensively studied by Wolf et al. for the binary case; however, in the non-binary setup, Fu and Han Vinck studied only the model in which the encoder is informed and the decoder is not.
In this paper we study the capacity regions and maximum sum-rates of non-binary WOM codes for these four models. We extend the results by Wolf et al. and show that for the models in which the encoder is informed and the decoder is informed or uninformed, the capacity region is the same both for epsilon-error and zero-error. We also find the epsilon-error capacity region in the case where the encoder is uninformed and the decoder is informed, and show that, in contrast to the binary case, it is a proper subset of the capacity region in the first two models. Several more results on the maximum sum-rate are presented as well.
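For background, the binary setting generalized here admits the classical Rivest-Shamir WOM code, which writes 2 bits twice into 3 write-once cells (sum-rate 4/3). A minimal sketch of that well-known scheme:

```python
# Classical binary Rivest-Shamir WOM code: 2 bits written twice into 3
# write-once cells. First-generation codewords have weight <= 1; each
# second-generation codeword is the complement of the first-generation one.
FIRST = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 0, 0)}
SECOND = {m: tuple(1 - b for b in FIRST[m]) for m in FIRST}

def decode(cells):
    for m in FIRST:
        if cells in (FIRST[m], SECOND[m]):
            return m
    raise ValueError(cells)

def second_write(cells, m):
    new = cells if m == decode(cells) else SECOND[m]
    assert all(a <= b for a, b in zip(cells, new))  # cells only increase
    return new

# Every (first message, second message) pair decodes correctly.
for m1 in range(4):
    for m2 in range(4):
        c1 = FIRST[m1]
        c2 = second_write(c1, m2)
        assert decode(c1) == m1 and decode(c2) == m2
```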

Edge Caching for Coverage and Capacity-aided Heterogeneous Networks

Ejder Baştuğ (CentraleSupélec, France); Mehdi Bennis (Centre of Wireless Communications, University of Oulu, Finland); Marios Kountouris (Huawei Technologies, France); Mérouane Debbah (Huawei, France)

A two-tier heterogeneous cellular network (HCN) with intra-tier and inter-tier dependence is studied here. The macro cell deployment follows a Poisson point process (PPP) and two different clustered point processes are used to model the cache-enabled small cells. Under this model, we derive approximate expressions in terms of finite integrals for the average delivery rate considering inter-tier and intra-tier dependence. Given that cache size drastically improves the performance of small cells, we show that rate splitting of the limited backhaul induces non-linear performance variations, and therefore has to be adjusted for rate fairness among users of different tiers.

New Error Correcting Codes for Informed Receivers

Lakshmi Prasad Natarajan (Indian Institute of Technology Hyderabad, India); Yi Hong and Emanuele Viterbo (Monash University, Australia)

We construct error correcting codes for jointly transmitting a finite set of independent messages to an ‘informed receiver’ which has prior knowledge of the values of some subset of the messages as side information. The transmitter is oblivious to the message subset already known to the receiver and performs encoding in such a way that any possible side information can be used efficiently at the decoder. We construct and identify several families of algebraic error correcting codes for this problem using cyclic and maximum distance separable (MDS) codes. The proposed codes are of short block length, many of them provide optimum or near-optimum error correction capabilities and guarantee larger minimum distances than known codes of similar parameters for informed receivers. The constructed codes are also useful as error correcting codes for index coding when the transmitter does not know the side information available at the receivers.

Secondary Spectrum Market: To acquire or not to acquire side information?

Arnob Ghosh (University Of Pennsylvania, USA); Saswati Sarkar (University of Pennsylvania, USA); Randall A Berry (Northwestern University, USA)

In a secondary spectrum market primaries set prices for their unused channels to the secondaries. The payoff of a primary depends on the channel state information (CSI) of its competitors. We consider a model where a primary can acquire its competitor’s CSI at a cost. We formulate a game between two primaries where each primary decides whether to acquire its competitor’s CSI or not and then selects its price based on that. Our result shows that no primary will decide to acquire its competitor’s CSI with probability $1$. When the cost of acquiring the CSI is above a threshold, there is a unique Nash Equilibrium where both the primaries remain uninformed of their respective competitor’s CSI. When the cost is below the threshold, each primary randomizes between its decision to acquire the CSI or not. Our result reveals that irrespective of the cost of acquiring the CSI, the expected payoff of a primary remains the same.

Coding for Lossy Function Computation: Analyzing Sequential Function Computation with Distortion Accumulation

Yaoqing Yang, Pulkit Grover and Soummya Kar (Carnegie Mellon University, USA)

We consider the problem of lossy linear function computation for Gaussian sources in a tree network. The goal is to find the optimal tradeoff between the sum rate (the overall number of bits communicated in the network) and the achieved distortion (the overall mean-square error of estimating the function result) at a specified sink node. Using random Gaussian codebooks, an inner bound is obtained that is shown to match the information-theoretic outer bound (obtained in our recent work [1]) in the limit of zero distortion. To compute the overall distortion for the random coding scheme, we apply the analysis of Distortion Accumulation, which was quantified in [1] for MMSE estimates of intermediate computation variables instead of for the codewords of random Gaussian codebooks. The key in applying the analysis of Distortion Accumulation is showing that the random-coding based codeword on the receiver side is close in mean-square sense to the MMSE estimate of the source, even if the knowledge of the source distribution is not fully accurate.

Efficient Optimal Joint Channel Estimation and Data Detection for Massive MIMO Systems

Haider Alshamary and Weiyu Xu (University of Iowa, USA)

In this paper, we propose an efficient optimal joint channel estimation and data detection algorithm for massive MIMO wireless systems. Our algorithm is optimal in terms of the generalized likelihood ratio test (GLRT). For massive MIMO systems, we show that the expected complexity of our algorithm grows polynomially in the channel coherence time. Simulation results demonstrate significant performance gains of our algorithm compared with suboptimal non-coherent detection algorithms. To the best of our knowledge, this is the first algorithm which efficiently achieves GLRT-optimal non-coherent detection for massive MIMO systems with general constellations.

Computing Linear Transforms with Unreliable Components

Yaoqing Yang, Pulkit Grover and Soummya Kar (Carnegie Mellon University, USA)

We consider the problem of computing a binary linear transform when all circuit components are unreliable. We propose a novel “ENCODED” technique that uses LDPC (low-density parity-check) codes and embedded noisy decoders to keep the error probability of the computation below a small constant independent of the size of the linear transform, even when all logic gates in the computation are prone to probabilistic errors. Unlike existing works on applying coding to computing with unreliable components, the “ENCODED” technique explicitly considers the errors that happen during both the encoding and the decoding phases. Further, we show that ENCODED requires fewer operations (in order sense) than repetition techniques. Our results extend to computing with circuits that have permanent errors using expander-LDPC codes to correct worst-case errors.

GDoF of the MISO BC: Bridging the Gap between Finite Precision CSIT and Perfect CSIT

Arash Gholami Davoodi (University of California, Irvine, USA); Syed Ali Jafar (University of California Irvine, USA)

This work bridges the gap between sharply contrasting results on the degrees of freedom of the $K$ user broadcast channel where the transmitter is equipped with $K$ transmit antennas and each of the $K$ receivers is equipped with a single antenna. This channel has $K$ DoF when channel state information at the transmitter (CSIT) is perfect, but as shown recently, it has only $1$ DoF when the CSIT is limited to finite precision. By considering the full range of partial CSIT assumptions parameterized by $\beta\in[0,1]$, such that the strength of the channel estimation error terms scales as $\sim SNR^{-\beta}$ relative to the channel strengths which scale as $\sim SNR$, it is shown that this channel has $1-\beta+K\beta$ DoF. For $K=2$ users with arbitrary $\beta_{ij}$ parameters, the DoF are shown to be $1+\min_{i,j} \beta_{ij}$. To explore diversity of channel strengths, the results are further extended to the symmetric Generalized Degrees of Freedom setting where the direct channel strengths scale as $\sim SNR$ and the cross channel strengths scale as $\sim SNR^{\alpha}$, $\alpha\in[0,1],\beta\in[0,\alpha]$. Here, the roles of $\alpha$ and $\beta$ are shown to counter each other on equal terms, so that the sum GDoF value in the $K$ user setting is $(\alpha-\beta)+K(1-(\alpha-\beta))$ and for the 2 user setting with arbitrary $\beta_{ij}$, is $2-\alpha+\min_{i,j}\beta_{ij}$.
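As a quick numeric sanity check on the stated expressions (a sketch we add here, not from the paper; the function names are ours), the DoF and sum-GDoF formulas can be evaluated directly at their extreme parameter values:

```python
# Hypothetical helpers evaluating the closed-form expressions quoted above.

def dof_partial_csit(K, beta):
    """Sum DoF of the K-user MISO BC with partial CSIT parameter beta: 1 - beta + K*beta."""
    return 1 - beta + K * beta

def sum_gdof(K, alpha, beta):
    """Symmetric sum GDoF: (alpha - beta) + K*(1 - (alpha - beta))."""
    return (alpha - beta) + K * (1 - (alpha - beta))

# The extremes recover the known corner cases:
assert dof_partial_csit(K=4, beta=0) == 1   # finite precision CSIT: 1 DoF
assert dof_partial_csit(K=4, beta=1) == 4   # perfect CSIT: K DoF
assert sum_gdof(K=4, alpha=1, beta=1) == 4  # beta = alpha recovers K
assert sum_gdof(K=3, alpha=1, beta=0) == 1  # alpha - beta = 1 collapses to 1
```

The assertions illustrate how $\beta$ interpolates between the two previously known endpoints, and how $\alpha$ and $\beta$ counter each other in the GDoF expression.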

Two Classes of Zero Difference Balanced Functions and Their Optimal Constant Composition Codes

Yang Yang and Zhengchun Zhou (Southwest Jiaotong University, P.R. China); Xiaohu Tang (SWJTU, P.R. China)

Constant composition codes (CCCs) are a special class of constant-weight codes; they include permutation codes as a subclass. The construction of CCCs with parameters achieving certain bounds has been an interesting research topic in coding theory. Recently, Ding established a bridge from zero difference balanced (ZDB) functions to CCCs with parameters meeting the Luo-Fu-Vinck-Chen bound. This provides a new approach for obtaining optimal CCCs. One objective of this paper is to present two new classes of ZDB functions whose parameters have not been covered in the literature. Another objective is to introduce two classes of CCCs meeting the Luo-Fu-Vinck-Chen bound obtained from these new ZDB functions.

Asymptotic Mutual Information for the Binary Stochastic Block Model

Yash Deshpande (Stanford University, USA); Emmanuel Abbe (Princeton University, USA); Andrea Montanari (Stanford University, USA)

We develop an information-theoretic view of the stochastic block model, a popular statistical model for the large-scale structure of complex networks. A graph $G$ from such a model is generated by first assigning vertex labels at random from a finite alphabet, and then connecting vertices with edge probabilities depending on the labels of the endpoints. In the case of the symmetric two-group model, we establish an explicit 'single-letter' characterization of the per-vertex mutual information between the vertex labels and the graph.
The explicit expression for the mutual information is intimately related to estimation-theoretic quantities and, in particular, reveals a phase transition at the critical point for community detection. Below the critical point, the per-vertex mutual information is asymptotically the same as if edges were independent of the vertex labels; correspondingly, no algorithm can estimate the partition better than random guessing. Conversely, above the threshold, the per-vertex mutual information is strictly smaller than the independent-edges upper bound, and in this regime there exists a procedure that estimates the vertex labels better than random guessing.
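For concreteness, a graph from the symmetric two-group model described above can be sampled as follows (an illustrative sketch we add; the function name and parameter choices are ours, not the paper's):

```python
import random

def sbm_symmetric(n, p_in, p_out, seed=0):
    """Sample from the symmetric two-group stochastic block model:
    vertex labels are uniform over {+1, -1}, and an edge appears with
    probability p_in if the endpoints share a label, p_out otherwise."""
    rng = random.Random(seed)
    labels = [rng.choice([+1, -1]) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                edges.add((i, j))
    return labels, edges

labels, edges = sbm_symmetric(n=200, p_in=0.10, p_out=0.02)
```

The gap between p_in and p_out controls how far the graph is from the independent-edges regime discussed above.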

A Converse to Low-Rank Matrix Completion

Daniel L Pimentel-Alarcon (University of Wisconsin-Madison, USA); Rob Nowak (University of Wisconsin, Madison, USA)

In many practical applications, one is given a subset Omega of the entries in a dxN data matrix X, and aims to infer all the missing entries. Existing theory in low-rank matrix completion (LRMC) provides conditions on X (e.g., bounded coherence or genericity) and Omega (e.g., uniform random sampling or deterministic combinatorial conditions) to guarantee that if X is rank-r, then X is the only rank-r matrix that agrees with the observed entries, and hence X can be uniquely recovered by some method (e.g., nuclear norm or alternating minimization).
In many situations, though, one does not know beforehand the rank of X, and depending on X and Omega, there may be rank-r matrices that agree with the observed entries, even if X is not rank-r. Hence one can be deceived into thinking that X is rank-r when it really is not. In this paper we give conditions on X (genericity) and a deterministic condition on Omega to guarantee that if there is a rank-r matrix that agrees with the observed entries, then X is indeed rank-r. While our condition on Omega is combinatorial, we provide a deterministic efficient algorithm to verify whether the condition is satisfied. Furthermore, this condition is satisfied with high probability under uniform random sampling schemes with only O(max(r, log(d))) samples per column. This strengthens existing results in LRMC, allowing one to drop the assumption that X is known a priori to be low-rank.
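The sampling regime in the last claim can be sketched concretely: below is a hypothetical generator (ours, not the paper's) of an observation pattern Omega with on the order of max(r, log(d)) uniformly random samples per column; the constant c is purely illustrative.

```python
import math
import random

def sample_omega(d, N, r, c=2, seed=0):
    """Return, for each of N columns of a d x N matrix, a sorted list of
    m = c * max(r, ceil(log d)) uniformly sampled row indices (no repeats)."""
    rng = random.Random(seed)
    m = c * max(r, math.ceil(math.log(d)))
    return [sorted(rng.sample(range(d), min(m, d))) for _ in range(N)]

omega = sample_omega(d=100, N=5, r=3)
```

Verifying the paper's deterministic combinatorial condition on such a pattern is what the proposed efficient algorithm would do; this sketch only produces the pattern.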

LDPC Decoders with Missing Connections

Linjia Chang (University of Illinois at Urbana-champaign, USA); Avhishek Chatterjee and Lav R. Varshney (University of Illinois at Urbana-Champaign, USA)

Due to process variation in nanoscale manufacturing, there may be permanently missing connections in information processing hardware. Due to timing errors in circuits, there may be missed messages in intra-chip communications, equivalent to transiently missing connections. In this work, we investigate the performance of message-passing LDPC decoders in the presence of missing connections. We prove concentration and convergence theorems that validate the use of density evolution performance analysis. Arbitrarily small error probability is not possible under miswiring, but we find suitably defined decoding thresholds for communication systems with binary erasure channels and peeling decoders, as well as binary symmetric channels and Gallager A decoders. We see that decoding is robust to missing wires, as decoding thresholds degrade smoothly.

Phase transition and noise sensitivity of $\ell_p$-minimization for $0 \leq p \leq 1$

Haolei Weng, Le Zheng, Arian Maleki and Xiaodong Wang (Columbia University, USA)

Recovering a sparse vector $x_o \in \mathbb{R}^N$ from its noisy linear observations, $y \in \mathbb{R}^n$ with $y= Ax_o + w$, has been the central problem of compressed sensing. One of the classes of recovery algorithms that has attracted attention is the class of $\ell_p$-regularized least squares (LPLS) that seeks the minimum of $\frac{1}{2}\left\| {y - Ax} \right\|_2^2 + \lambda {\| x \|_p^p}$ for $p \in [0,1]$. In this paper we employ the replica method from statistical physics to analyze the global minima of LPLS. Our paper reveals several surprising asymptotic properties of LPLS: (i) The phase transition curve of LPLS is the same for every $0 \leq p<1$. These phase transition curves are much higher than the phase transition curve for $p=1$. (ii) The phase transition curve of LPLS for every value of $0 \leq p \leq 1$ depends only on the sparsity level and does not change with the distribution of the non-zero coefficients of $x_o$. (iii) Despite the equality of the phase transition curves, different values of $p$ show different performances once a small amount of measurement noise, $w$, is added.
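The LPLS objective being minimized can be written out directly; below is a minimal pure-Python evaluation of it (our own illustrative helper, using the usual convention that $\|x\|_0^0$ counts the nonzero entries of $x$):

```python
def lpls_objective(y, A, x, lam, p):
    """Evaluate 0.5 * ||y - A x||_2^2 + lam * ||x||_p^p for p in [0, 1],
    with ||x||_0^0 counting the nonzero entries of x."""
    residual = [yi - sum(a * xj for a, xj in zip(row, x))
                for yi, row in zip(y, A)]
    fit = 0.5 * sum(r * r for r in residual)
    if p == 0:
        penalty = sum(1 for xj in x if xj != 0)
    else:
        penalty = sum(abs(xj) ** p for xj in x)
    return fit + lam * penalty

# Toy example with A = I: zero residual, penalty from the single nonzero entry.
A = [[1.0, 0.0], [0.0, 1.0]]
val = lpls_objective(y=[1.0, 0.0], A=A, x=[1.0, 0.0], lam=0.5, p=1)
```

This only evaluates the objective; the paper's analysis concerns the global minima of this (non-convex for p < 1) function.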

Degrees of Freedom of MIMO Y Channel with Multiple Relays

Tian Ding (Chinese University of Hong Kong, Hong Kong); Xiaojun Yuan (ShanghaiTech University, P.R. China); Soung Chang Liew (The Chinese University of Hong Kong, Hong Kong)

We study the degrees of freedom (DoF) of a symmetric multi-relay multiple-input multiple-output (MIMO) Y channel, where three users, each equipped with M antennas, exchange messages via multiple geographically separated relay nodes, each with N antennas. We formulate a general DoF achievability problem assuming the use of linear precoding and post-processing. To solve this problem, we present a new uplink-downlink asymmetric strategy where the user precoders are designed for signal alignment and the user post-processors are used for interference neutralization. Based on that, we derive an achievable DoF of the considered model for an arbitrary antenna setup. The optimality of the derived DoF is established under certain antenna configurations. Also, we show that our design considerably outperforms the conventional uplink-downlink symmetric design.

Generalized DoF of the Symmetric K-User Interference Channel under Finite Precision CSIT

Arash Gholami Davoodi (University of California, Irvine, USA); Syed Ali Jafar (University of California Irvine, USA)

The generalized degrees of freedom (GDoF) characterization of the symmetric K-user interference channel is obtained under finite precision channel state information at the transmitters (CSIT). The symmetric setting is where each cross channel is capable of carrying $\alpha$ degrees of freedom (DoF) while each direct channel is capable of carrying 1 DoF. Remarkably, under finite precision CSIT the symmetric K-user interference channel loses all the GDoF benefits of interference alignment. The GDoF per user diminish with the number of users everywhere except in the very strong (optimal for every receiver to decode all messages) and very weak (optimal to treat all interference as noise) interference regimes. The result stands in sharp contrast to prior work on the symmetric setting under perfect CSIT, where the GDoF per user remain undiminished due to interference alignment. The result also stands in contrast to prior work on a subclass of asymmetric settings under finite precision CSIT, i.e., the topological interference management problem, where interference alignment plays a crucial role and provides substantial GDoF benefits.

Asymptotically Achievable Error Probabilities for Multiple Hypothesis Testing

Pierre Moulin (University of Illinois at Urbana-Champaign, USA)

The region of achievable error probabilities for $k$-ary hypothesis tests is studied in the asymptotic setting with $n$ independent and identically distributed observations. We identify a $k^2 - k - 1$ dimensional parametric family of optimal (non-dominated) tests and relate it to a family of Bayes tests whose loss function depends exponentially on $n$. We present an asymptotic characterization of the conditional error probabilities for these tests within $O(1)$, using strong large deviations analysis.

Sequence Reconstruction over the Deletion Channel

Ryan Gabrys (UIUC, USA); Eitan Yaakobi (Technion, Israel)

The sequence-reconstruction problem, first proposed by Levenshtein, models the setup in which a sequence from some set is transmitted over several independent channels, and the decoder receives the outputs from every channel. The main problem of interest is to determine the minimum number of channels required to reconstruct the transmitted sequence. In the combinatorial context, the problem is equivalent to finding the largest intersection between two balls of radius t where the distance between their centers is at least d. The setup of this problem was studied before for several error metrics such as the Hamming metric, the Kendall-tau metric, and the Johnson metric.
In this paper, we extend the study initiated by Levenshtein for reconstructing sequences over the deletion channel. While he solved the case where the transmitted word can be arbitrary, we study the setup where the transmitted word belongs to a single-deletion-correcting code and there are $t$ deletions in every channel. Under this paradigm, we study the minimum number of different channel outputs in order to construct a successful decoder.
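The set of possible channel outputs under exactly $t$ deletions, whose pairwise intersections govern the number of channels a decoder needs, can be enumerated for short words (an illustrative sketch we add, not the paper's construction):

```python
from itertools import combinations

def deletion_ball(word, t):
    """All distinct outputs of a channel that deletes exactly t symbols of word."""
    n = len(word)
    return {''.join(word[i] for i in range(n) if i not in drop)
            for drop in combinations(range(n), t)}

# A constant word has a single output under any number of deletions, while for
# a single deletion the number of distinct outputs equals the number of runs:
assert len(deletion_ball("1111", 2)) == 1
assert len(deletion_ball("0101", 1)) == 4
```

Counting the largest intersection of two such balls, over codeword pairs of a single-deletion-correcting code, is the combinatorial quantity studied in the paper.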

New Exact-Repair Codes for Distributed Storage Systems Using Matrix Determinant

Mehran Elyasi and Soheil Mohajer (University of Minnesota, USA)

Exact-regeneration codes for distributed storage systems are studied in this work. A novel coding scheme is proposed for code construction for any (n, k, d = k) system, and it is shown to be optimal. In particular, the optimal tradeoff of the exact-regeneration system is fully characterized for any system with d = k. The new construction is based on fundamental properties of the matrix determinant, and the code is thus called the determinant code. It is devised for the entire range of (α, β) on the optimal tradeoff.

Energy Efficient Distributed Coding for Data Collection in a Noisy Sparse Network

Yaoqing Yang, Soummya Kar and Pulkit Grover (Carnegie Mellon University, USA)

We consider the problem of data collection in a two-layer network consisting of (1) noisy links between $N$ distributed agents and a remote sink node; (2) a noisy sparse network formed by these distributed agents. We study the effect of inter-agent communications on the overall energy consumption. We jointly consider the design of the optimal graph topology for the inter-agent network and the in-network computing scheme under the sparsity constraint and the energy constraint. Despite the sparse connections between agents, we provide an in-network coding scheme that reduces the overall energy consumption by a factor of $\Theta(\log N)$ compared to a naive scheme based on direct agent-to-sink communications only. By providing lower bounds on both the energy consumption and the sparseness (number of links) of the network, we show that the proposed scheme is energy-optimal except for a factor of $\Theta(\log\log N)$. The proposed scheme extends a previous work of Gallager [1] on noisy broadcasting from a complete graph to a sparse graph, while bringing in new techniques from error control coding and noisy circuits.

Secure Index Coding: Existence and Construction

Lawrence Ong (The University of Newcastle, Australia); Badri N Vellambi (New Jersey Institute of Technology, USA); Phee Lep Yeoh (University of Melbourne, Australia); Joerg Kliewer (New Jersey Institute of Technology, USA); Jinhong Yuan (University of New South Wales, Australia)

We investigate the construction of weakly-secure index codes for a sender to send messages to multiple receivers with side information in the presence of an eavesdropper. We derive a necessary and sufficient condition for the existence of index codes that are secure against an eavesdropper with access to any subset of messages of cardinality t, for any fixed t. In contrast to the benefits of using random keys in secure network coding, we prove that random keys do not promote security in three classes of index-coding instances.

Probabilistic bounds on the trapping redundancy of linear codes

Yuichiro Fujiwara and Yu Tsunoda (Chiba University, Japan)

The trapping redundancy of a linear code is the number of rows of a smallest parity-check matrix such that no submatrix forms an $(a,b)$-trapping set. This concept was first introduced in the context of low-density parity-check (LDPC) codes in an attempt to estimate the number of redundant rows in a parity-check matrix suitable for iterative decoding. Essentially the same concept appears in other contexts as well, such as robust syndrome extraction for quantum error correction. Among the known upper bounds on the trapping redundancy, the strongest was obtained by employing a powerful tool in probabilistic combinatorics, the Lovász Local Lemma. Unfortunately, the proof invoked this tool in a situation where an assumption of the lemma does not necessarily hold. Hence, although we do not doubt that the proposed bound actually holds, a more rigorous proof is desired for it to stand as a mathematical theorem. Another disadvantage of the proposed bound is that it only deals with $(a,b)$-trapping sets with rather small $a$. Here, we give a more general and sharper upper bound on trapping redundancy by making mathematically rigorous use of probabilistic combinatorics, without relying on the lemma. Our bound is applicable to all potentially avoidable $(a,b)$-trapping sets with $a$ smaller than the minimum distance of a given linear code, while being generally much sharper than the bound through the Lovász Local Lemma. In fact, our upper bound is sharp enough to exactly determine the trapping redundancy in many cases, making it the most general bound of its kind while retaining mathematical rigor.

State Estimation, Wireless Tropes, Demons and Uncertainty

Christopher Rose (Brown University, USA)

Consider observation of a system with initial state x(0) through some signal r(t) corrupted by white noise of spectral height N_0. When the system is cast in state-space form and the observations projected onto the relevant orthonormal bases, completely unbidden, two well-known wireless communications tropes emerge: a colored noise channel and a multi-access channel wherein elements of system state are associated with different “signatures” defined by the system. That is, from a mathematical perspective, the system could communicate to the observer in a well-understood way. Taking these tropes at face value, we compare the efficiency of conveying a state vector x(0) through classical estimation with that of a scheme wherein a “demon” manipulates an identical initially-at-rest system so as to communicate x(0) to the observer on successive epochs (channel uses). An energy constraint on the initial state, E[|x(0)|^2] = E, is assumed, and the demon’s signaling efforts over the ensemble of epochs are constrained similarly. In all cases, the demon conveys x(0) with less error, by orders of magnitude for moderate signal-to-noise ratio E/N_0. Furthermore, the demon scenario results in some number of reliably-conveyed bits of information and imposes crisp limits on the relative uncertainty of different state element estimates. In fact, the form of these limits is identical to that of the quantum mechanical Uncertainty Principle (although there is no requirement of a momentum-position analog). Nonetheless, the appearance of these tropes raises the question of whether communication and information theory have something deeper to say about physical interactions and the cacophony of system voices in conversation.

Degrees of Freedom of the Bursty MIMO X Channel without Feedback

Shih-Yi Yeh and I-Hsiang Wang (National Taiwan University, Taiwan)

We investigate the degrees of freedom (DoF) of the symmetric bursty MIMO X channel without feedback, where the presence of the two cross links is governed by a Bernoulli($p_c$) random state. The sum DoF is characterized for most antenna-burstiness configurations, except the case where $p_c > 0.5$ and the ratio of the number of transmit and receive antennas is between 2/3 and 3/2. When $p_c \leq 0.5$, the sum DoF of the bursty MIMO X channel is equal to that of the interference channel, and hence cross-link messaging does not help. When $p_c > 0.5$, cross-link messaging is necessary, and we show that simple interference alignment schemes suffice to achieve the sum DoF. For the case where the characterization of DoF remains open, we propose a combination of Han-Kobayashi coding and interference alignment that achieves higher DoF than interference alignment alone. This is in sharp contrast to the non-bursty case, where interference alignment alone is DoF-optimal.

Minimax Lower Bounds for Kronecker-Structured Dictionary Learning

Zahra Shakeri, Waheed U. Bajwa and Anand D. Sarwate (Rutgers University, USA)

Dictionary learning is the problem of estimating the collection of atomic elements that provide a sparse representation of measured/collected signals or data. This paper finds fundamental limits on the sample complexity of estimating dictionaries for tensor data by proving a lower bound on the minimax risk. This lower bound depends on the dimensions of the tensor and parameters of the generative model. The focus of this paper is on second-order tensor data, with the underlying dictionaries constructed by taking the Kronecker product of two smaller dictionaries and the observed data generated by sparse linear combinations of dictionary atoms observed through white Gaussian noise. In this regard, the paper provides a general lower bound on the minimax risk and also adapts the proof techniques for equivalent results using sparse and Gaussian coefficient models. The reported results suggest that the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data.

Improving on The Cut-Set Bound for General Primitive Relay Channels

Xiugang Wu and Ayfer Özgür (Stanford University, USA)

Consider a primitive relay channel, where a source $X$ wants to send information to a destination $Y$ with the help of a relay $Z$, and the relay can communicate to the destination via an error-free digital link of rate $R_0$. For the symmetric case, i.e., when $Y$ and $Z$ are conditionally i.i.d. given $X$, we have recently developed new upper bounds on the capacity of this channel that are tighter than existing bounds, including the celebrated cut-set bound. In this paper, we extend these bounds to the asymmetric case, where $Y$ and $Z$ are conditionally independent given $X$ with arbitrary conditional marginal distributions, for both discrete memoryless and Gaussian channels.

Distributed Simulation of Continuous Random Variables

Cheuk Ting Li and Abbas El Gamal (Stanford University, USA)

We establish the first known upper bound on the exact and Wyner’s common information of $n$ continuous random variables in terms of the dual total correlation between them (which is a generalization of mutual information). In particular, we show that when the pdf of the random variables is log-concave, there is a constant gap of $n^{2}\log e+9n\log n$ between this upper bound and the dual total correlation lower bound that does not depend on the distribution. The upper bound is obtained using a computationally efficient dyadic decomposition scheme for constructing a discrete common randomness variable $W$ from which the $n$ random variables can be simulated in a distributed manner. We then bound the entropy of $W$ using a new measure, which we refer to as the erosion entropy.

Rate-distance tradeoff for codes above graph capacity

Daniel F Cullina (University of Illinois at Urbana-Champaign, USA); Marco Dalai (University of Brescia, Italy); Yury Polyanskiy (MIT, USA)

The capacity of a graph is defined as the rate of exponential growth of independent sets in the strong powers of the graph. In the strong power an edge connects two sequences if at each position their letters are equal or adjacent. We consider a variation of the problem where edges in the power graphs are removed between sequences which differ in more than a fraction $\delta$ of coordinates. The proposed generalization can be interpreted as the problem of determining the highest rate of zero undetected-error communication over a link with adversarial noise, where only a fraction $\delta$ of symbols can be perturbed and only some substitutions are allowed.
We derive lower bounds on achievable rates by combining graph homomorphisms with a graph-theoretic generalization of the Gilbert-Varshamov bound. We then give an upper bound, based on Delsarte’s linear programming approach, which combines Lovász’ theta function with the construction used by McEliece et al. for bounding the minimum distance of codes in Hamming spaces.

Polar Coded Non-Orthogonal Multiple Access

Jincheng Dai, Kai Niu, Zhongwei Si and Jiaru Lin (Beijing University of Posts and Telecommunications, P.R. China)

In this paper, polar codes are first applied to non-orthogonal multiple access (NOMA), a major multiple access technique in 5G systems, and the channel polarization idea is extended to NOMA. The polar coded NOMA (PC-NOMA) scheme is proposed, whereby the NOMA channel is decomposed into a series of binary-input channels under a two-stage channel polarization transform. In the first stage, the NOMA channel is divided into a group of user synthesized channels by using the multi-level coding structure. In the second stage, based on the structure of bit-interleaved coded modulation, the user synthesized channels are further decomposed into binary polarized channels. Then, a joint successive cancellation decoding scheme is given to construct the multiuser receiver of PC-NOMA. Finally, a low-complexity search algorithm is proposed to schedule the NOMA decoding order, which improves the error performance through enhanced polarization among the user synthesized channels. The block error ratio performance over additive white Gaussian noise channels indicates that the proposed PC-NOMA clearly outperforms the turbo coded NOMA scheme, owing to the advantages of the two-stage polarization.

Coded Caching for Networks with the Resolvability Property

Li Tang and Aditya Ramamoorthy (Iowa State University, USA)

Coded caching is a recently proposed technique for dealing with large scale content distribution over the Internet. As in conventional caching, it leverages the presence of local caches at the end users. However, it considers coding in the caches and/or coded transmission from the central server and demonstrates that huge savings in transmission rate are possible when the server and the end users are connected via a single shared link. In this work, we consider a more general topology where there is a layer of relay nodes between the server and the users, e.g., combination networks studied in network coding are an instance of these networks. We propose novel schemes for a class of such networks that satisfy a so-called resolvability property and demonstrate that the performance of our scheme is strictly better than previously proposed schemes.

A characterization of statistical manifolds on which the relative entropy is a Bregman divergence

Hiroshi Nagaoka (University of Electro-Communications, Japan)

It is well known that the relative entropy (Kullback-Leibler divergence) is represented in the form of Bregman divergence on exponential families and mixture families for some coordinate systems. We give a characterization of the class of statistical manifolds (smooth manifolds of probability mass functions) on finite sample spaces having coordinate systems for which the relative entropy is a Bregman divergence.

Voronoi Constellations for High-Dimensional Lattice Codes

Nuwan S. Ferdinand (University of Oulu, Finland); Matthew Nokleby (Wayne State University, USA); Behnaam Aazhang (Rice University, USA)

This paper proposes a low-complexity scheme to construct Voronoi constellations for the shaping of high-dimensional lattice codes. The Voronoi region of a low-dimensional lattice is used as a prototype shaping region for a high-dimensional lattice codebook. The proposed scheme retains the shaping and coding gains of the respective lattices. Further, it provides a general approach for shaping popular high-dimensional lattices, including LDA lattices, for which no practical shaping algorithm exists to our knowledge. Finally, the proposed scheme preserves the algebraic properties of nested lattice codes, making it suitable for compute-and-forward applications. Using E8 and BW16 as prototype shaping lattices, we numerically show that the proposed scheme achieves shaping gains of 0.65 dB and 0.86 dB, respectively.

Uncoordinated Multiple Access Schemes for Visible Light Communications and Positioning

Siu-Wai Ho (University of South Australia, Australia); Chi Wan Sung (City University of Hong Kong, Hong Kong)

In visible light communication (VLC) systems, information is conveyed by visible light instead of radio-frequency electromagnetic waves. Based on received signal strength, accurate indoor positioning systems can also be built. Since a receiver obtains the superposition of signals from all light sources within line of sight, together with ambient light, a multiple access scheme is necessary for the receiver to distinguish the received symbol and signal strength of each light source. This paper proposes two multiple access schemes for VLC. The first scheme supports information broadcast and positioning: using $2^N$ timeslots, $N$ transmitters transmit $2^N-1$ symbols in total. The second scheme supports positioning only but places emphasis on minimizing the required timeslots: in each $2N$ timeslots, for an odd integer $N$, the channel gains of $\frac{3N-1}{2}$ transmitters can be estimated.

Overlap-Based Genome Assembly from Variable-Length Reads

Joseph Hui and Ilan Shomorony (UC Berkeley, USA); Kannan Ramchandran (University of California at Berkeley, USA); Thomas Courtade (University of California, Berkeley, USA)

Recently developed high-throughput sequencing platforms can generate very long reads, making perfect assembly of whole genomes information-theoretically possible. One of the challenges in achieving this goal in practice, however, is that traditional assembly algorithms based on the de Bruijn graph framework cannot handle the high error rates of long-read technologies. On the other hand, overlap-based approaches such as string graphs are very robust to errors, but cannot achieve the theoretical lower bounds. In particular, these methods handle the variable-length reads provided by long-read technologies in a suboptimal manner. In this work, we introduce a new assembly algorithm with two desirable features in the context of long-read sequencing:
(1) it is an overlap-based method, thus being more resilient to read errors than de Bruijn graph-based algorithms; and
(2) it achieves the information-theoretic bounds even in the variable-length read setting.

Binary Codes with Locality for Multiple Erasures Having Short Block Length

Balaji Srinivasan Babu (IISc, India); K P Prasanth (Indian Institute of Science, India); P. Vijay Kumar (Indian Institute of Science, Bangalore, India)

This paper considers linear, binary codes having locality parameter $r$ that are capable of recovering from $t\geq 2$ erasures and which, additionally, possess short block length. Both sequential and parallel (through orthogonal parity checks) recovery are considered. In the case of sequential repair, the results include (a) extending and characterizing minimum-block-length constructions for $t=2$, (b) providing improved bounds on block length for $t=3$ as well as a general construction for $t=3$ having short block length, (c) providing high-rate constructions for $\left( r=2, \ t \in \{4,5,6,7\} \right)$ and (d) providing short-block-length constructions for general $(r,t)$. In the case of parallel repair, minimum-block-length constructions are characterized whenever $t \mid (r^2+r)$ and examples are examined.

Converse bounds for interference channels via coupling and proof of Costa’s conjecture

Yury Polyanskiy (MIT, USA); Yihong Wu (University of Illinois Urbana-Champaign, USA)

It is shown that under suitable regularity conditions, differential entropy is $O(\sqrt{n})$-Lipschitz as a function of probability distributions on $R^n$ with respect to the quadratic Wasserstein distance. Under similar conditions, (discrete) Shannon entropy is shown to be $O(n)$-Lipschitz in distributions over the product space with respect to Ornstein’s distance (Wasserstein distance corresponding to the Hamming distance).
These results together with Talagrand’s and Marton’s transportation-information inequalities allow one to replace the unknown multi-user interference with its i.i.d.~approximations. As an application, a new outer bound for the two-user Gaussian interference channel is proved, which, in particular, settles the “missing corner point” problem of Costa (1985).

Support Recovery from Noisy Random Measurements via Weighted L1 Minimization

Jun Zhang, Urbashi Mitra and Kuan-Wen Huang (University of Southern California, USA); Nicolò Michelusi (Purdue University, USA)

Herein, we analyze the sample complexity of general weighted ℓ1 minimization in terms of support recovery from noisy underdetermined measurements. This analysis generalizes prior work on standard ℓ1 minimization by accounting for the effect of the weights. We state an explicit relationship between the weights and the sample complexity such that i.i.d. random Gaussian measurement matrices used with weighted ℓ1 minimization recover the support of the underlying signal with high probability as the problem dimension increases. This result provides a measure that is predictive of the relative performance of different algorithms. Motivated by the analysis, a new iterative weighting strategy is proposed. In the Reweighted Partial Support (RePS) algorithm, a sequence of weighted ℓ1 minimization problems is solved, where partial support recovery is used to prune the optimization; furthermore, the weights used for the next iteration are updated from the current estimate. RePS is compared to other weighted algorithms through the proposed measure and numerical results, which demonstrate its superior performance on a spectrum occupancy estimation problem motivated by cognitive radio.
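As a rough illustration of the weighted ℓ1 objective discussed in this abstract (a sketch, not the RePS algorithm; the problem instance, weight values, and step-size choice below are our own assumptions), the problem min_x 0.5‖Ax−y‖² + λ Σᵢ wᵢ|xᵢ| can be solved by iterative soft-thresholding, where smaller weights shrink the corresponding coordinates less:

```python
def soft(z, t):
    """Soft-thresholding: the proximal operator of t*|.|."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def ista_weighted_l1(A, y, w, lam, iters=500):
    """Minimize 0.5*||Ax - y||^2 + lam * sum_i w[i]*|x[i]| by ISTA.
    Step size 1/||A||_F^2 is a loose but safe choice, since the squared
    Frobenius norm upper-bounds the largest eigenvalue of A^T A,
    guaranteeing monotone descent of the objective."""
    m, n = len(A), len(A[0])
    L = sum(a * a for row in A for a in row) or 1.0
    step = 1.0 / L
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]  # residual Ax - y
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # gradient A^T r
        x = [soft(x[j] - step * g[j], step * lam * w[j]) for j in range(n)]   # weighted prox step
    return x
```

Placing lower weights on coordinates believed to lie in the support shrinks them less, which mimics the weighting effect that the sample-complexity analysis quantifies.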

Wireless Network Simplification: Beyond Diamond Networks

Yahya H. Ezzeldin and Ayan Sengupta (University of California, Los Angeles, USA); Christina Fragouli (UCLA, USA)

We consider an arbitrary layered Gaussian relay network with L layers of N relays each, from which we select subnetworks with K relays per layer. We prove that: (i) For arbitrary L, N and K = 1, there always exists a subnetwork that approximately achieves 2 / ((L−1)N + 4) (resp. 2/(LN + 2)) of the network capacity for odd L (resp. even L), (ii) For L = 2, N = 3, K = 2, there always exists a subnetwork that approximately achieves 1/2 of the network capacity. We also provide example networks where even the best subnetworks achieve exactly these fractions (up to additive gaps). Along the way, we derive some results on MIMO antenna selection and capacity decomposition that may also be of independent interest.

The Multiple Access Wiretap Channel II with a Noisy Main Channel

Mohamed Nafea (The Pennsylvania State University, USA); Aylin Yener (Pennsylvania State University, USA)

A two-transmitter multiple access wiretap channel II (MAC-WT-II) with a discrete memoryless (DM) main channel is investigated. Two models for the wiretapper, who chooses a fixed-length subset of the channel uses and observes erasures outside this subset, are proposed. In the first model, in each position of the subset, the wiretapper noiselessly observes either the first or the second user’s symbol, while in the second model, the wiretapper observes a noiseless superposition of the two symbols. Achievable strong secrecy rate regions for the two models are derived. The achievability is established by solving a dual secret key agreement problem in the source model. The secrecy of the keys in the dual source model is established by deriving a lemma which provides a doubly exponential convergence rate for the probability of the keys being uniform and independent of the wiretapper’s observation. The results extend the recently examined WTC-II with a DM main channel to a multiple access setting.

Multiterminal Compress-and-Estimate Source Coding

Alon Kipnis (Stanford University, USA); Stefano Rini (National Chiao Tung University, USA); Andrea Goldsmith (Stanford University, USA)

We consider a multiterminal source coding problem in which a random source signal is estimated from encoded versions of multiple noisy observations. Each encoded version, however, is compressed so as to minimize a local distortion measure, defined only with respect to the distribution of the corresponding noisy observation. The original source is then estimated from these compressed noisy observations. We denote the minimal distortion under this coding scheme as the compress-and-estimate distortion-rate function (CE-DRF). We derive a single-letter expression for the CE-DRF in the case of an i.i.d. source. We evaluate this expression for a Gaussian source observed through multiple parallel AWGN channels under quadratic distortion, and for a non-uniform binary i.i.d. source observed through multiple binary symmetric channels under Hamming distortion. For the Gaussian source, we compare the performance of centralized encoding with that of distributed encoding. In the centralized encoding scenario, when the code rates are sufficiently small, there is no loss of performance compared to the indirect source coding distortion-rate function, whereas distributed encoding achieves distortion strictly larger than that of the optimal multiterminal source coding scheme. For the binary source, we show that even with a single observation, the CE-DRF is strictly larger than the distortion-rate function of indirect source coding.

Arbitrarily varying networks: capacity-achieving computationally efficient codes

Peida Tian (The Chinese University of Hong Kong, Hong Kong); Sidharth Jaggi (Chinese University of Hong Kong, Hong Kong); Mayank Bakshi (The Chinese University of Hong Kong, Hong Kong); Oliver Kosut (Arizona State University, USA)

We consider the problem of communication over a network containing a hidden and malicious adversary that can control a subset of network resources and aims to disrupt communications. We focus on an omniscient node-based adversary, i.e., the adversary can control a subset of nodes and knows the message, the network code, and the packets on all links. Characterizing information-theoretically optimal communication rates as a function of network parameters and of bounds on the adversarially controlled resources is in general open, even for unicast (single-source, single-destination) problems. In this work we characterize the information-theoretically optimal randomized capacity of such problems, i.e., under the assumption that the source node a priori shares an (asymptotically negligible amount of) independent common randomness with each network node. We propose a novel computationally efficient communication scheme whose rate matches a natural information-theoretic “erasure outer bound” on the optimal rate. Our schemes require no prior knowledge of the network topology and can be implemented in a distributed manner as an overlay on top of classical distributed linear network coding.

Feeling the Bern: Adaptive Estimators for Bernoulli Probabilities of Pairwise Comparisons

Nihar B Shah (University of California, Berkeley, USA); Sivaraman Balakrishnan (CMU, USA); Martin Wainwright (University of California, Berkeley, USA)

We study methods for aggregating pairwise comparison data in order to estimate outcome probabilities for future comparisons. We investigate this problem under a flexible class of models satisfying the strong stochastic transitivity (SST) condition. Prior works have studied the minimax risk for estimation of the pairwise comparison probabilities under the SST model. The minimax risk, however, is a measure of the worst-case risk of an estimator over a large parameter space, and in general provides only a rudimentary understanding of an estimator in problems where the intrinsic difficulty of estimation varies considerably over the parameter space. In this paper, we introduce an adaptivity index, in order to benchmark the performance of an estimator against an oracle estimator. The adaptivity index, in addition to measuring the worst-case risk of an estimator, also captures the extent to which the estimator adapts to the instance-specific difficulty of the underlying problem, relative to an oracle estimator. In the context of this adaptivity index we provide two main results. We propose a three-step, Count-Randomize-Least squares (CRL) estimator, and derive upper bounds on the adaptivity index of this estimator. We complement this result with a complexity-theoretic result, that shows that conditional on the planted clique hardness conjecture, no computationally efficient estimator can achieve a substantially smaller adaptivity index.

Capacity of Remotely Powered Communication

Dor Shaviv and Ayfer Özgür (Stanford University, USA); Haim H Permuter (Ben-Gurion University, Israel)

Motivated by recent developments in wireless power transfer, we study communication with a remotely powered transmitter. We propose an information-theoretic model where a charger can dynamically decide how much power to transfer to the transmitter based on its side information regarding the communication, while the transmitter can dynamically adapt its coding strategy to its instantaneous energy state, which in turn depends on the actions previously taken by the charger. We characterize the capacity as an n-letter mutual information rate under various levels of side information available at the charger. In some special cases, motivated by different settings of practical interest, we simplify these expressions to single-letter form, or provide an algorithm to efficiently compute capacity using dynamic programming. Our results provide some surprising insights on how side information at the charger can be used to increase the overall capacity of the system.

Relations Between Conditional Shannon Entropy and Expectation of $\ell_{\alpha}$-Norm

Yuta Sakai and Ken-ichi Iwata (University of Fukui, Japan)

Information measures of random variables are used in several fields, including information theory, probability theory, statistics, pattern recognition, cryptology, and machine learning. In studies of information measures, inequalities between them are commonly used in many applications. In axiomatic definitions of entropy, e.g., the R\'{e}nyi entropy, the Tsallis entropy, the entropy of type-$\beta$, the $\gamma$-entropy, and the $R$-norm information, the $\ell_{\alpha}$-norm is usually used. In a companion paper submitted to ISIT 2016, we derived tight bounds between the Shannon entropy and the $\ell_{\alpha}$-norm for $n$-ary probability vectors, $n \ge 2$. In this study, we examine extremal relations between the conditional Shannon entropy and the expectation of the $\ell_{\alpha}$-norm for joint probability distributions. More precisely, we establish tight bounds on the expectation of the $\ell_{\alpha}$-norm with a fixed conditional Shannon entropy and, similarly, tight bounds on the conditional Shannon entropy with a fixed expectation of the $\ell_{\alpha}$-norm. Using these tight bounds, we describe tight bounds between the conditional Shannon entropy and several conditional entropies, e.g., Arimoto’s conditional R\'{e}nyi entropy and the conditional $R$-norm information. Moreover, we apply these results to discrete memoryless channels under a uniform input distribution, thereby providing tight bounds on Gallager’s $E_{0}$ function with a fixed (ordinary) mutual information.

Capacity of Block Rayleigh Fading Channels Without CSI

Mainak Chowdhury and Andrea Goldsmith (Stanford University, USA)

A system with a single antenna at the transmitter and receiver and no channel state information at either is considered. The channel experiences block Rayleigh fading with a coherence time of T0 symbol times, and the fading statistics are assumed to be known perfectly. The system operates with a finite average transmit power. It is shown that the capacity-optimal input distribution in the T0-dimensional space is the product of the distribution of an isotropically-distributed unit vector and a distribution on the 2-norm in the T0-dimensional space which is discrete and has a finite number of points in the support. Numerical evaluations of this distribution and the associated capacity for a channel with fading and Gaussian noise for a coherence time T0 = 2 are presented for representative SNRs. It is also shown numerically that implicit channel estimation is performed by the capacity-achieving scheme.

Erasure Broadcast Networks with Receiver Caching

Shirin Saeedi Bidokhti (Stanford University, USA); Roy Timo (Technische Universität München, Germany); Michele A Wigger (Telecom ParisTech, France)

A cache-aided broadcast erasure network is studied with a set of receivers that have access to individual cache memories and a set of receivers that have no cache memory. The erasure statistics of the channel are assumed to be symmetric with respect to all receivers in each set and users with no cache memory are assumed to have better channels, statistically. Lower and upper bounds are derived on the capacity of the network. The lower bounds are achieved by joint cache-channel coding schemes and are shown to be significantly larger than the bounds achievable by naive separate cache-channel coding schemes. For the case of two receivers, the capacity is characterized for interesting ranges of cache memory sizes.

Capacity of the Energy Harvesting Gaussian MAC

Huseyin A Inan, Dor Shaviv and Ayfer Özgür (Stanford University, USA)

We consider an energy harvesting multiple access channel where the transmitters are powered by an exogenous stochastic energy harvesting process and equipped with finite batteries. We characterize the capacity region of this channel as n-letter mutual information rate and develop inner and outer bounds that differ by a constant gap. An interesting conclusion that emerges from our results is that in a symmetric system, where transmitters are statistically equivalent to each other, the largest achievable common rate point approaches that of a standard AWGN MAC with an average power constraint, as the number of users in the MAC becomes large.

On the Symmetries and the Capacity Achieving Input Covariance Matrices of Multiantenna Channels

Mario Diaz (Queen’s University, Canada)

In this paper we study the capacity achieving input covariance matrices of a single user multiantenna channel based solely on the group of symmetries of its matrix of propagation coefficients. Our main result, which unifies and improves the techniques used in a variety of classical capacity theorems, uses the Haar (uniform) measure on the group of symmetries to establish the existence of a capacity achieving input covariance matrix in a very particular subset of the covariance matrices. This result allows us to provide simple proofs for old and new capacity theorems. Among other results, we show that for channels with two or more standard symmetries, the isotropic input is optimal. Overall, this paper provides a precise explanation of why the capacity achieving input covariance matrices of a channel depend more on the symmetries of the matrix of propagation coefficients than on any other distributional assumption.

Fundamental Limits of Secretive Coded Caching

Vaishakh Ravindrakumar, Parthasarathi Panda and Nikhil Karamchandani (Indian Institute of Technology Bombay, India); Vinod M Prabhakaran (Tata Institute of Fundamental Research, India)

Recent work by Maddah-Ali and Niesen introduced coded caching which demonstrated the benefits of joint design of storage and transmission policies in content delivery networks. They studied a setup where a server communicates with a set of users, each equipped with a local cache, over a shared error-free link and proposed an order-optimal caching and delivery scheme. In this paper, we introduce the problem of secretive coded caching where we impose the additional constraint that a user should not be able to learn anything, from either the content stored in its cache or the server transmissions, about a file it did not request. We propose a feasible scheme for this setting and demonstrate its order-optimality with respect to information-theoretic lower bounds.

Performance Bounds for Quantized LDPC Decoders Based on Absorbing Sets

Homayoon Hatami (University of Notre Dame, USA); David G. M. Mitchell (New Mexico State University, USA); Daniel J. Costello, Jr. and Thomas E Fuja (University of Notre Dame, USA)

A code-independent performance bound for a given absorbing set is derived for quantized low-density parity-check (LDPC) decoders. The analysis demonstrates that each absorbing set in the Tanner graph imposes a specific lower bound on the frame error rate (FER) of any code containing that absorbing set under a given quantization scheme. This approach is applicable to any message-passing (MP) decoding algorithm and any uniform or non-uniform quantization scheme for LDPC codes. Simulation results using the sum-product algorithm (SPA) provide FERs that are consistent with the obtained bounds. In addition, the bounds demonstrate that the conventional quantized SPA is not capable of achieving very low FERs if the LDPC code contains certain absorbing sets.

Strong Secrecy and Stealth for Broadcast Channels with Confidential Messages

Igor Bjelakovic (Technische Universität Berlin, Germany); Jafar Mohammadi (Fraunhofer Heinrich-Hertz-Institute & Technical University of Berlin, Germany); Slawomir Stanczak (Fraunhofer Heinrich Hertz Institute & Technische Universität Berlin, Germany)

This paper extends the weak secrecy results of Liu et al.\ for broadcast channels with two confidential messages to strong secrecy. Our results are based on an extension of the techniques developed by Hou and Kramer on bounding Kullback-Leibler divergence in the context of \textit{resolvability} and \textit{effective secrecy}.

Enhanced Recursive Reed-Muller Erasure Decoding

Alexandre Soro (ISAE, France); Jerome Lacan (University of Toulouse, France); Vincent Roca (INRIA Rhône-Alpes, France); Valentin Savin (CEA LETI, France); Mathieu Cunche (INSA-Lyon / INRIA, France)

Recent work has shown that Reed-Muller (RM) codes achieve the erasure channel capacity. However, this performance is obtained with maximum-likelihood decoding, which can be costly in practical applications. In this paper, we propose an encoding/decoding scheme for Reed-Muller codes on the packet erasure channel based on the Plotkin construction. We present several improvements over generic decoding which, at a small additional cost, allow it to compete with maximum-likelihood decoding performance, especially for high-rate codes, while significantly outperforming it in terms of speed.
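The Plotkin (u, u+v) recursion underlying RM codes can be sketched as a generator-matrix construction (an illustrative fragment only, not the authors' encoder/decoder):

```python
def rm_generator(r, m):
    """Generator matrix (list of 0/1 rows) of RM(r, m) via the Plotkin
    recursion RM(r, m) = {(u, u + v) : u in RM(r, m-1), v in RM(r-1, m-1)}.
    Base cases: RM(0, m) is the repetition code; RM(m, m) is the full space,
    obtained by adjoining the weight-1 row (0, ..., 0, 1) to RM(m-1, m)."""
    if r == 0:
        return [[1] * (1 << m)]
    if r == m:
        return rm_generator(m - 1, m) + [[0] * ((1 << m) - 1) + [1]]
    upper = rm_generator(r, m - 1)      # contributes rows (u | u)
    lower = rm_generator(r - 1, m - 1)  # contributes rows (0 | v)
    return [u + u for u in upper] + [[0] * len(v) + v for v in lower]
```

For example, rm_generator(1, 3) yields four rows of length 8 spanning RM(1, 3), the [8, 4, 4] code, whose dimension matches the binomial sum C(3,0) + C(3,1) = 4.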

Achievable Secrecy Rates in the Multiple Access Wiretap Channel with Deviating Users

Karim A. Banawan (University of Maryland, College Park, USA); Sennur Ulukus (University of Maryland, USA)

We consider the multiple access wiretap channel (MAC-WTC) where multiple legitimate users wish to have secure communication with a legitimate receiver in the presence of an eavesdropper. The exact secure degrees of freedom (s.d.o.f.) region of this channel is known. Achieving this region requires users to follow a certain protocol altruistically and transmit both message-carrying and cooperative jamming signals in an optimum manner. In this paper, we consider the case when a subset of users deviate from this optimum protocol. We consider two kinds of deviation: when some of the users stop transmitting cooperative jamming signals, and when a user starts sending intentional jamming signals. For the first scenario, we investigate possible responses of the remaining users to counteract such deviation. For the second scenario, we use an extensive-form game formulation for the interactions of the deviating and well-behaving users. We prove that a deviating user can drive the s.d.o.f. to zero; however, the remaining users can exploit its intentional jamming signals as cooperative jamming signals against the eavesdropper and achieve an optimum s.d.o.f.

Weight Distribution of the Syndrome of Linear Codes and Connections to Combinatorial Designs

Christoph Pacher (AIT Austrian Institute of Technology GmbH, Austria); Philipp Grabenweger (AIT Austrian Institute of Technology, Austria); Dimitris Simos (SBA Research, Austria)

The expectation and the variance of the syndrome weight distribution of linear codes after transmission of codewords through a binary symmetric channel are derived exactly in closed form as functions of the code’s parity-check matrix and of the degree distributions of the associated Tanner graph. The influence of (check) regularity of the Tanner graph is studied. Special attention is paid to Tanner graphs that have no cycles of length four. We further study the equivalence of some classes of combinatorial designs and important classes of LDPC codes, and apply our general results to those more specific structures. Simulations are performed to show the validity of the theoretical approach.
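The expectation part of such a result follows from an elementary observation: each syndrome bit is the XOR of the i.i.d. error bits touching that check. The sketch below computes only the expectation (the variance additionally involves pairs of checks sharing variable nodes, i.e., the cycle structure the abstract refers to) and is our own illustration, not the paper's derivation:

```python
def expected_syndrome_weight(H, p):
    """E[syndrome weight] when a codeword crosses a BSC(p).
    Syndrome bit s_i is the XOR of d_i i.i.d. Bernoulli(p) error bits,
    where d_i is the weight of row i of the parity-check matrix H, so
    P(s_i = 1) = (1 - (1 - 2p)**d_i) / 2 (the standard parity identity);
    the expectation is the sum of these probabilities over all checks."""
    return sum((1 - (1 - 2 * p) ** sum(row)) / 2 for row in H)
```

For a degree-2 check at p = 0.1 this gives 2p(1−p) = 0.18, the probability that exactly one of the two attached bits is flipped, as expected.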

Estimation of KL Divergence Between Large-Alphabet Distributions

Yuheng Bu (University of Illinois at Urbana Champaign, USA); Shaofeng Zou and Yingbin Liang (Syracuse University, USA); Venugopal Veeravalli (University of Illinois at Urbana-Champaign, USA)

The problem of estimating the KL divergence between two unknown distributions is studied. The alphabet size k of the distributions can scale to infinity. The estimation is based on m and n independent samples respectively drawn from the two distributions. It is first shown that there does not exist any consistent estimator that guarantees asymptotically small worst-case quadratic risk over the set of all pairs of distributions. A restricted set that contains pairs of distributions with bounded ratio f(k) is further considered. An augmented plug-in estimator is proposed, and is shown to be consistent if and only if m = ω(k ∨ log^2(f(k))) and n = ω(kf(k)). Furthermore, if f(k) ≥ log^2(k) and log^2(f(k)) = o(k), it is shown that any consistent estimator must satisfy the necessary conditions: m = ω(k/log k ∨ log^2(f(k))) and n = ω(kf(k)/log k).
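To make the plug-in idea concrete, the sketch below forms empirical distributions and smooths the second one with a pseudo-count so that the log-ratio stays finite. This is a generic add-constant stand-in for the augmentation; the exact form of the paper's augmented plug-in estimator may differ:

```python
from collections import Counter
from math import log

def augmented_plugin_kl(xs, ys, k, c=0.5):
    """Plug-in estimate of D(p || q) from samples xs ~ p and ys ~ q over an
    alphabet of size k. The empirical p is used as-is; the empirical q is
    augmented with a pseudo-count c on every symbol so that q_hat > 0
    everywhere and log(p_hat / q_hat) is always finite."""
    m, n = len(xs), len(ys)
    p_counts, q_counts = Counter(xs), Counter(ys)
    return sum(
        (p_counts[a] / m) * log((p_counts[a] / m) / ((q_counts[a] + c) / (n + c * k)))
        for a in p_counts
    )
```

When the two empirical distributions coincide, the estimate is zero; when q never saw a symbol that dominates p, the pseudo-count keeps the estimate finite instead of infinite.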

Secure Group Testing

Alejandro Cohen (Ben-Gurion University, Israel); Asaf Cohen (Ben-Gurion University of the Negev, Israel); Omer Gurewitz (Ben-Gurion University Of The Negev, Israel)

The principal mission of Group Testing (GT) is to identify a small subset of “defective” items from a large population by grouping items into a minimum number of test pools. The test outcome of a pool is positive if it contains at least one defective item, and is negative otherwise. GT algorithms are utilized in many applications, and privacy regarding the status of the items, namely, defective or not, is a critical issue.
In this paper, we consider a scenario in which an eavesdropper (Eve) is able to observe a subset of the GT outcomes (pools). We propose a new non-adaptive Secure Group Testing (SGT) algorithm based on information-theoretic principles, which keeps the eavesdropper ignorant of the items’ status.
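The non-adaptive GT primitive that such schemes build on can be sketched with the standard COMP decoding rule (our own illustration of plain group testing, with no secrecy component): every item appearing in at least one negative pool is ruled out, and all remaining items are declared defective.

```python
def comp_decode(pools, outcomes, n):
    """COMP decoding for non-adaptive group testing over items 0..n-1.
    pools: list of sets of item indices tested together.
    outcomes[i]: True iff pool i contains at least one defective item.
    Every item in a negative pool is certainly non-defective; the rest are
    declared defective (COMP never misses a defective, but may over-report)."""
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= set(pool)
    return sorted(candidates)
```

With defectives {1, 4} and pools {0,1,2}, {3,4}, {0,3,5}, {2,5}, the two negative pools eliminate items 0, 2, 3, 5, and COMP returns exactly [1, 4].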

Decoding Analysis Accounting for Mis-Corrections for Spatially-Coupled Split-Component Codes

Dmitri Truhachev and Alireza Karami (Dalhousie University, Canada); Lei Zhang and Frank R. Kschischang (University of Toronto, Canada)

We consider an asymptotic iterative decoding analysis of spatially-coupled split-component codes. The analysis takes into account the impact of mis-corrections that occur in component code decoding, modeling the flows of corrections and mis-corrections throughout the decoding process in the entire coupled code chain. The results for spatially-coupled split-component codes with BCH component codes demonstrate that the analysis provides significantly more accurate estimates of the iterative decoding thresholds than an analysis that neglects mis-corrections.

Computationally Efficient Deniable Communication

Qiaosheng Zhang and Mayank Bakshi (The Chinese University of Hong Kong, Hong Kong); Sidharth Jaggi (Chinese University of Hong Kong, Hong Kong)

In this paper, we design the first computationally efficient codes for simultaneously reliable and deniable communication over a Binary Symmetric Channel (BSC). Our setting is as follows. A transmitter Alice wishes to potentially reliably transmit a message to a receiver Bob, while ensuring that the transmission taking place is deniable from an eavesdropper Willie (who hears Alice’s transmission over a noisier BSC). Prior works show that Alice can reliably and deniably transmit $O(\sqrt{n})$ bits over n channel uses without any shared secrets between Alice and Bob. One drawback of prior works is that the computational complexity of the codes designed scales as $2^{\Theta(\sqrt{n})}$. In this work we provide the first computationally tractable codes with provable guarantees on both reliability and deniability, while simultaneously achieving the best known throughput for the problem.

Bounds on the reliability of a typewriter channel

Marco Dalai (University of Brescia, Italy); Yury Polyanskiy (MIT, USA)

We give new bounds on the reliability function of a typewriter channel with 5 inputs and crossover probability $1/2$. The lower bound is more of theoretical than practical importance; it improves the expurgated bound very marginally, providing a counterexample to a conjecture on its tightness by Shannon, Gallager and Berlekamp that does not require the construction of algebraic-geometric codes previously used by Katsman, Tsfasman and Vladut. The upper bound is derived by using an adaptation of the linear programming bounds and is essentially useful as a low-rate anchor for the straight line bound.

A class of index coding problems with rate 1/3

Prasad Krishnan (IIIT Hyderabad, India); Lalitha Vadlamani (International Institute of Information Technology, India)

An index coding problem with $n$ messages has symmetric rate $R$ if all $n$ messages can be conveyed at rate $R$. In a recent work, a class of index coding problems for which symmetric rate $\frac{1}{3}$ is achievable was characterised using special properties of the side-information available at the receivers. In this paper, we show a larger class of index coding problems (which includes the previous class of problems) for which symmetric rate $\frac{1}{3}$ is achievable. In the process, we also obtain a stricter necessary condition for rate $\frac{1}{3}$ feasibility than what is known in literature.

Structural results for two-user interactive communication

Jhelum Chakravorty and Aditya Mahajan (McGill University, Canada)

In this paper we consider an interactive communication system with two users, who sequentially observe two correlated sources and send the quantized observation symbol to each other. The sources are functions of a random variable, which the users wish to estimate, and of two i.i.d. processes. The transmission is costly and the fidelity of reconstruction is measured by a distortion function. We consider the finite-horizon optimization problem, which, in this case, belongs to the category of dynamic team problems, since it has two decision makers that have access to different information but need to cooperate and coordinate their actions to minimize a common objective. We identify time-homogeneous information states (sufficient statistics) for the encoding and decoding strategies and a dynamic programming decomposition to compute the optimal strategies. Our approach consists of using the person-by-person approach and the common information approach in tandem.

Finite Blocklength Achievable Rates for Energy Harvesting AWGN Channels with Infinite Buffer

Konchady Gautam Shenoy (Indian Institute of Science (IISC), India); Vinod Sharma (Indian Institute of Science, India)

We consider an additive white Gaussian channel where the transmitter is powered by an energy harvesting source. For such a system, we provide a lower bound on the maximal codebook size at finite blocklengths that improves upon previously known bounds.

Exact Closed-Form Expression for the Inverse Moments of One-sided Correlated Gram Matrices

Khalil Elkhalil (King Abdullah University of Science and Technology (KAUST), Saudi Arabia); Abla Kammoun (Kaust, Saudi Arabia); Tareq Y. Al-Naffouri (King Abdullah University of Science and Technology, USA); Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST), Saudi Arabia)

In this paper, we derive a closed-form expression for the inverse moments of one-sided correlated random Gram matrices. Such a question is mainly motivated by applications in signal processing and wireless communications, for which evaluating this quantity is of major interest. This is for instance the case of the best linear unbiased estimator, in which the average estimation error corresponds to the first inverse moment of a random Gram matrix.

On The Construction of Capacity-Achieving Lattice Gaussian Codes

Wael Alghamdi (King Abdullah University of Science and Technology, Saudi Arabia); Walid Abediseid and Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST), Saudi Arabia)

In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore.

On the Throughput Rate of Wireless Multipoint Multicasting

Michal Kaliszan and Giuseppe Caire (Technische Universität Berlin, Germany); Slawomir Stanczak (Fraunhofer Heinrich Hertz Institute & Technische Universität Berlin, Germany)

3GPP/LTE provisions for a Multimedia Broadcast Multicast Service (MBMS), where a common data stream is sent to many users (a multicast group) simultaneously from multiple base stations transmitting on the same frequency channel, i.e., forming a Single-Frequency Network (SFN). This setting has been extensively treated as a max-min fair beamforming problem, where the beamforming vector is optimized as a function of the instantaneous channel state information in order to maximize the instantaneous (per-slot) common rate over all users. Unfortunately, such a common rate vanishes as the number of users grows for a fixed number of base station antennas. In this paper we consider the ergodic regime, where coding across multiple slots affected by independent fading is allowed. We formulate the problem as an ergodic compound channel, subject to a per-slot and per-group-of-antennas power constraint, and we provide an efficient algorithm that approximates the compound capacity to any desired degree of accuracy. Then, in line with the current implementation of MBMS-SFN in 3GPP, we also consider the multicast throughput achievable by a concatenated coding scheme, where inner physical layer coding is applied on a per-slot basis, and outer packet erasure coding is used at the application layer. The optimal strategy in this case is NP-hard, and we propose a convex relaxation approach with good performance and low complexity.

Sharp minimax bounds for testing discrete monotone distributions

Yuting Wei (UC Berkeley, USA); Martin Wainwright (University of California, Berkeley, USA)

We consider a binary hypothesis testing problem of determining whether discrete data is drawn from some known distribution $p$ versus from an unknown alternative that is $\epsilon$-separated in the total variation norm. Under monotonicity constraints, we show that the global minimax testing radius for this problem scales as $\epsilon^2 \asymp \left(\frac{\sqrt{\log d}}{n}\right)^{4/5}$. This scaling is significantly different from the classical scaling $\epsilon^2 \asymp \frac{\sqrt{d}}{n}$ that holds without monotonicity constraints. We also prove some locally adaptive results on the testing radius over $k$-piece distributions and over other distributions $p$ that have “simpler” structure.

Using Reed-Solomon codes in the $(U|U+V)$ construction and an application to cryptography

Irene Márquez-Corbella (Inria Paris); Jean-Pierre Tillich (INRIA, France)

In this paper we present a modification of Reed-Solomon codes that beats the Guruswami-Sudan $1-\sqrt{R}$ decoding radius of Reed-Solomon codes at low rates $R$. The idea is to choose Reed-Solomon codes $U$ and $V$ with appropriate rates in a $(U|U+V)$ construction and to decode them with the Koetter-Vardy soft-information decoder. We suggest using a slightly more general version of these codes (with the same decoding performance as the $(U|U+V)$ construction) in code-based cryptography, namely to build a McEliece scheme. The point here is that not only do these codes perform nearly as well as (or, in the low-rate regime, even better than) Reed-Solomon codes, but their structure also seems to avoid the Sidelnikov-Shestakov attack, which broke a previous McEliece proposal based on generalized Reed-Solomon codes.

Delay-optimal Computation Task Scheduling for Mobile-Edge Computing Systems

Juan Liu (HKUST, P.R. China); Yuyi Mao (Hong Kong University of Science and Technology, Hong Kong); Jun Zhang and Khaled B. Letaief (The Hong Kong University of Science and Technology, Hong Kong)

Mobile-edge computing (MEC) emerges as a promising paradigm to relieve the tension between computation-intensive mobile applications and the limited computation resources at mobile devices. By offloading computation tasks to the MEC server, computation performance can be improved substantially. The design of a computation task scheduling policy inevitably incurs a challenging two-timescale stochastic optimization problem: on the larger timescale, whether to execute a task locally at the mobile device or to offload it to the MEC server for cloud computing must be decided, while on the smaller timescale, the transmission policy for the task input should adapt to the channel side information. In this paper, we adopt a Markov decision process approach to handle this problem, where computation tasks are scheduled based on the queueing state of the task buffer, the execution state of the local central processing unit, and the state of the transmission unit. By analyzing the average delay of each task and the average power consumption at the mobile device, we formulate a power-constrained delay minimization problem and propose an efficient one-dimensional search algorithm to find the optimal task scheduling policy. Simulation results demonstrate the effectiveness of the proposed optimal stochastic task scheduling policy in achieving a shorter average execution delay compared to the baseline policies.

Shannon Capacity of Signal Transduction for Multiple Independent Receptors

Peter J Thomas (Case Western Reserve University, USA); Andrew Eckford (York University, Canada)

Cyclic adenosine monophosphate (cAMP) is considered a model system for signal transduction, the mechanism by which cells exchange chemical messages. Our previous work calculated the Shannon capacity of a single cAMP receptor; however, a typical cell may have thousands of receptors operating in parallel. In this paper, we calculate the capacity of a cAMP signal transduction system with an arbitrary number of independent, indistinguishable receptors. We show, somewhat unexpectedly, that the capacity is achieved by an IID input distribution, and that the capacity for n receptors is n times the capacity for a single receptor.
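The receptor model in the paper is a continuous-time Markov channel, but the additivity of capacity over independent receptors with IID inputs can be illustrated with a simple memoryless stand-in. Below is a minimal Blahut-Arimoto sketch for a two-state "receptor" channel whose transition probabilities are assumed for illustration only:

```python
import numpy as np

def blahut_arimoto(W, tol=1e-10, max_iter=5000):
    """Capacity (bits/use) of a DMC with row-stochastic transition matrix W."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)                         # start from the uniform input
    for _ in range(max_iter):
        q = p @ W                                   # induced output distribution
        ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
        d = np.sum(W * np.log2(ratio), axis=1)      # D(W(.|x) || q) per input x
        p_new = p * 2.0 ** d
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    q = p @ W
    ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
    return float(p @ np.sum(W * np.log2(ratio), axis=1))

# Assumed toy two-state channel (numbers illustrative, not from the paper).
W = np.array([[0.9, 0.1],
              [0.2, 0.8]])
C1 = blahut_arimoto(W)
n = 1000                       # receptors operating in parallel on IID inputs
print(C1, n * C1)              # per the paper's result, total capacity is n * C1
```

The linear scaling in the last line is the paper's result for independent, indistinguishable receptors; the toy channel itself is only a placeholder.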

Robustness Of Cooperative Communication Schemes To Channel Models

Vasuki Narasimha Swamy (University of California, Berkeley, USA); Gireeja Ranade (Microsoft Research, USA); Anant Sahai (UC Berkeley, USA)

Cooperative communication to extract multi-user diversity and network coding are two ideas for improving wireless protocols. These ideas can be exploited to design protocols for low-latency high-reliability communication for control. Given the high performance constraints for this communication, it is critical to understand how sensitive such protocols are to modeling assumptions. We examine the impact of channel reciprocity, quasi-static fading, and the spatial independence of channel fades in this paper.
This paper uses simple models to explore the performance sensitivity to assumptions. It turns out that wireless network coding is moderately sensitive to channel reciprocity, and non-reciprocity costs about 2 dB of SNR. The loss of the quasi-static fading assumption has a similar cost for the network-coding-based protocol, but has a negligible effect on the protocol that doesn’t use network coding. The real sensitivity of cooperative communication protocols is to the spatial independence assumptions. Capping the amount of independence to a small number degrades performance, but, perhaps more surprisingly, a simple Gilbert-Elliott-inspired model shows that having a random amount of independence can also severely impact performance.

On the decoding delay of rate-1/2 Complex Orthogonal Designs

Smarajit Das (IIT Guwahati, India)

Rate-1/2 complex orthogonal designs (CODs) form an important class of space-time block codes, since their decoding delay is substantially less than that of the maximal-rate CODs. Determining the minimum decoding delay of rate-1/2 CODs remains an important open problem. Nonetheless, a lower bound of $\nu(2m)$ on the decoding delay of rate-1/2 CODs with $2m$ columns has been obtained under certain assumptions, and codes have been constructed with decoding delay meeting the lower bound for $m \equiv 1, 2$ and $3$ modulo $4$. In this paper, it is shown that the lower bound on the decoding delay continues to hold for a much more general class of rate-1/2 CODs. Furthermore, a construction of rate-1/2 CODs with decoding delay meeting the lower bound is provided for all $m$.

Low Delay Network Streaming Under Burst Losses

Rafid Mahmood, Ahmed Badr and Ashish Khisti (University of Toronto, Canada)

In the classic burst erasure channel, packets are consecutively erased in bursts between gaps of perfect communication. For the Burst Rank Loss Network, instead of a single-link erasure channel, there is a channel matrix that is rank-deficient for a burst of time before returning to full-rank. We establish the streaming capacity of the Burst Rank Loss Network and construct a new family of layered codes referred to as Recovery Of Bursts In Networks (ROBIN) codes that achieve capacity. Our results generalize previous work on both the single-link and multiple-parallel-link streaming setups. Simulations over statistical network models show that ROBIN codes attain low packet loss rates in comparison to existing codes.

Delay-Constrained Capacity For Broadcast Erasure Channels: A Linear-Coding-Based Study

Chih-Chun Wang (Purdue University, USA)

This work studies the 1-to-2 broadcast packet erasure channels with causal ACKnowledgement (ACK), which is motivated by practical downlink access point networks. While the corresponding delay-constrained Shannon capacity remains an open problem (none of the existing analysis tools can be directly applied), this work focuses on linear codes and proposes three new definitions of delay-constrained throughput based on different outage metrics: the file-based, the rank-based, and the packet-based ones. It then fully characterizes those delay-constrained linear coding capacity regions for relatively-short-delay flows — flows for which the delay requirement is no larger than the interval of file arrivals.

Low-Complexity Stochastic Generalized Belief Propagation

Farzin Haddadpour and Mahdi Jafari Siavoshani (Sharif University of Technology, Iran); Morteza Noshad (University of Michigan, USA)

The generalized belief propagation (GBP) algorithm, introduced by Yedidia et al., is an extension of the belief propagation (BP) algorithm that is widely used for calculating exact or approximate marginals of probability distributions. In many problems, the accuracy of GBP has been observed to considerably exceed that of BP. However, because the computational complexity of GBP is in general higher than that of BP, its application is limited in practice.

In this paper, we introduce a stochastic version of GBP, called stochastic generalized belief propagation (SGBP), which can be considered an extension of the stochastic BP (SBP) algorithm introduced by Noorshams et al., who showed that SBP reduces the per-iteration complexity of BP by an order of magnitude in the alphabet size. In contrast to SBP, SGBP reduces the computational complexity only if certain topological conditions are met by the region graph associated with the graphical model; however, the reduction can then exceed one order of magnitude in the alphabet size. In this paper, we characterize these conditions and the computational gain obtainable by using SGBP. Finally, using proof techniques similar to those of Noorshams et al., for general graphical models satisfying contraction conditions, we prove the asymptotic convergence of SGBP to the unique GBP fixed point, and provide non-asymptotic upper bounds on the mean square error and on the high-probability error.

Algebraic Properties of Polar Codes From a New Polynomial Formalism

Magali Bardet (University of Rouen, France); Vlad Dragoi (University of Rouen, Romania); Ayoub Otmani (University of Rouen, France); Jean-Pierre Tillich (INRIA, France)

Polar codes form a very powerful family of codes with a low-complexity decoding algorithm that attains many information-theoretic limits in error correction and source coding. These codes are closely related to Reed-Muller codes because both can be described with the same algebraic formalism: they are generated by evaluations of monomials. However, finding the right set of generating monomials for a polar code that optimizes the decoding performance is a nontrivial task and is channel dependent. The purpose of this paper is to reveal some universal properties of these monomials. Namely, we prove that there is a way to define a nontrivial (partial) order on monomials so that the monomials generating a polar code devised for a binary-input symmetric channel always form a decreasing set. We call such codes decreasing monomial codes. The fact that polar codes are decreasing monomial codes turns out to have rather deep consequences for their structure. Indeed, we show that decreasing monomial codes have a very large permutation group by proving that it contains the lower triangular affine group. Furthermore, the codewords of minimum weight correspond exactly to the orbits of the minimum-weight codewords obtained from evaluations of monomials of the generating set. In particular, this gives an efficient way of counting the number of minimum-weight codewords of a decreasing monomial code, and hence of a polar code.
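The partial order and the decreasing-set property can be sketched concretely. The snippet below is our illustrative reading of the standard order on monomials (equal-degree domination of sorted index tuples, extended to lower degrees through sub-monomials), not code from the paper:

```python
from itertools import combinations

def leq(f, g):
    """Partial order on monomials over x_0..x_{m-1}, each monomial given as a
    strictly increasing tuple of variable indices: for equal degrees, f <= g
    iff the sorted indices of f are dominated componentwise by those of g;
    for deg f < deg g, f <= g iff f is dominated by some sub-monomial of g."""
    if len(f) > len(g):
        return False
    return any(all(a <= b for a, b in zip(f, sub))
               for sub in combinations(g, len(f)))

def is_decreasing(S, m):
    """A set of monomials is decreasing iff it is closed downward under <=."""
    S = {tuple(sorted(mon)) for mon in S}
    max_deg = max((len(mon) for mon in S), default=0)
    universe = [mon for d in range(max_deg + 1)
                for mon in combinations(range(m), d)]
    return all(f in S for g in S for f in universe if leq(f, g))

# {1, x0, x1, x2} is decreasing; {1, x1} is not, since x0 <= x1 but x0 is absent.
print(is_decreasing([(), (0,), (1,), (2,)], m=3))
print(is_decreasing([(), (1,)], m=3))
```

A monomial code is decreasing exactly when `is_decreasing` holds for its generating set; the paper's result says every polar code for a binary-input symmetric channel has this form.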

Consecutive Switch Codes

Sarit Buzaglo (UCSD, USA); Eitan Yaakobi and Yuval Cassuto (Technion, Israel); Paul H. Siegel (University of California, San Diego, USA)

Switch codes, first proposed by Wang et al., are codes designed to increase the parallelism of data writing and reading processes in network switches. A network switch consists of $n$ input ports, $k$ output ports, and $m$ \emph{banks}, which store new packets arriving from the input ports on each time slot, called a \emph{generation}. The objective is to store the packets in the banks such that every request of $k$ packets by the output ports, which can be from previous generations, can be handled by reading at most one packet from every bank.
In this paper we study a new type of switch codes that can simultaneously deliver large symbol requests and good coding rate. These attractive features are achieved by relaxing the request model to a natural sub-class we call {\em consecutive requests}. For this new request model we define a new type of codes called {\em consecutive switch codes}. These codes are studied in both the computational and combinatorial models, corresponding to whether or not the data can be encoded. We present several code constructions and prove the optimality of one family of these codes by providing the corresponding lower bound. Lastly, we introduce a construction of switch codes for the case $n=k$, which improves upon the best known results for this case.

Distributed Detection over Connected Networks via One-Bit Quantizer

Shengyu Zhu and Biao Chen (Syracuse University, USA)

This paper considers distributed detection over large-scale connected networks with arbitrary topology. In contrast to the canonical parallel fusion network, where a single node has access to the outputs of all other sensors, in the present setting each node can only exchange one-bit information with its direct neighbors. Our approach adopts a novel consensus-reaching algorithm using asymmetric bounded quantizers that allow controllable consensus error. Under the Neyman-Pearson criterion, we show that, with each sensor employing an identical one-bit quantizer for local information exchange, this approach achieves the optimal error exponent of centralized detection provided that the algorithm converges. Simulations show that the algorithm converges when the network is large enough.

Throughput of Two-Hop Wireless Channels with Queueing Constraints and Finite Blocklength Codes

Yi Li, M. Cenk Gursoy and Senem Velipasalar (Syracuse University, USA)

In this paper, throughput of two-hop wireless relay channels is studied in the finite blocklength regime. Half-duplex relay operation, in which the source node initially sends information to the intermediate relay node and the relay node subsequently forwards the messages to the destination, is considered. It is assumed that all messages are stored in buffers before being sent through the channel, and both the source node and the relay operate under statistical queueing constraints. After characterizing the transmission rates in the finite blocklength regime, the system throughput is formulated via queueing analysis. Subsequently, several properties of the throughput function in terms of system parameters are identified, and an efficient algorithm is proposed to maximize the throughput. Interplay between throughput, queueing constraints, relay location, time allocation, and code blocklength is investigated through numerical results.

On additive-combinatorial affine inequalities for Shannon entropy and differential entropy

Ashok Makkuva (University of Illinois at Urbana-Champaign, USA); Yihong Wu (University of Illinois Urbana-Champaign, USA)

This paper addresses the question of the extent to which discrete entropy inequalities for weighted sums of independent group-valued random variables continue to hold for differential entropies. We show that all balanced affine inequalities (with the sum of coefficients being zero) for Shannon entropy extend to differential entropy; conversely, any affine inequality for differential entropy must be balanced. In particular, this result recovers recently proved differential entropy inequalities of Kontoyiannis and Madiman \cite{KM14} from their discrete counterparts due to Tao \cite{Tao10} in a unified manner. Our proof relies on a result of Rényi which relates the Shannon entropy of a finely discretized random variable to its differential entropy, and which also helps establish that the entropy of the sum of quantized random variables is asymptotically equal to that of the quantized sum.

Information concentration for convex measures

Jiange Li (University of Delaware, USA); Matthieu Fradelizi (Université Paris-Est, France); Mokshay Madiman (University of Delaware, USA)

Sharp exponential deviation estimates for the information content as well as a sharp bound on the varentropy are obtained for convex probability measures on Euclidean spaces. These provide, in a sense, a nonasymptotic equipartition property for convex measures even in the absence of stationarity-type assumptions.

Write Sneak-Path Constraints Avoiding Disturbs in Memristor Crossbar Arrays

Yuval Cassuto, Shahar Kvatinsky and Eitan Yaakobi (Technion, Israel)

We study the problem of write disturbs due to write sneak paths in memristor crossbar arrays. A write sneak path is a bit configuration in the array that causes a write of one cell to undesirably flip the value of another cell. We study the configurations that cause such write sneak paths and characterize them in terms of tight constraints that prevent them. We show that, thanks to the flexibility to choose the write order, the resulting constraints are milder than the known similar constraints for read sneak paths. In addition, we derive the array constraints when parallel writing is allowed in rows or columns only, or in both rows and columns.

Optimal Sequential Test with Finite Horizon and Constrained Sensor Selection

Shang Li, Xiaoou Li, Xiaodong Wang and Jingchen Liu (Columbia University, USA)

This work considers online sensor selection for finite-horizon sequential hypothesis testing. In particular, at each step of the sequential test, the “most informative” sensor is selected based on all previous samples so that the expected sample size is minimized. In addition, certain sensors cannot be used more than their prescribed budgets on average. Under this setup, we show that the optimal sensor selection strategy is a time-variant function of the running hypothesis posterior, and that the optimal test takes the form of a truncated sequential probability ratio test. Both of these operations can be obtained through a simplified version of dynamic programming. Numerical results demonstrate that the proposed online approach outperforms the existing offline approach by an order of magnitude.
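The truncated sequential probability ratio test at the core of the optimal policy can be sketched for a single Bernoulli observation stream (parameters and thresholds are illustrative; the paper's sensor-selection layer is omitted):

```python
import math
import random

def truncated_sprt(samples, p0, p1, a, b, horizon):
    """Truncated SPRT for a Bernoulli mean, H0: p=p0 vs H1: p=p1.
    Accumulates the log-likelihood ratio, stops when it crosses a < 0 < b,
    and forces a decision at the horizon. Returns (decision, samples_used)."""
    llr = 0.0
    n = 0
    for x in samples[:horizon]:
        n += 1
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= b:
            return 1, n          # accept H1
        if llr <= a:
            return 0, n          # accept H0
    return (1 if llr > 0 else 0), n   # forced decision at the horizon

random.seed(0)
data = [1 if random.random() < 0.8 else 0 for _ in range(200)]
decision, used = truncated_sprt(data, p0=0.2, p1=0.8,
                                a=-math.log(19), b=math.log(19), horizon=100)
print(decision, used)
```

With well-separated hypotheses the test typically stops far before the horizon, which is the source of the expected-sample-size savings the abstract refers to.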

Reed-Muller Codes Achieve Capacity on the Quantum Erasure Channel

Santhosh Kumar (Texas A&M University, USA); Robert Calderbank and Henry D Pfister (Duke University, USA)

The quantum erasure channel is the simplest example of a quantum communication channel and its information capacity is known precisely. The subclass of quantum error-correcting codes called stabilizer codes is known to contain capacity-achieving sequences for the quantum erasure channel but no efficient method is known to explicitly construct these sequences. In this article, we describe explicitly a capacity-achieving code sequence for the quantum erasure channel. In particular, we construct a sequence of Calderbank-Shor-Steane (CSS) stabilizer codes from a sequence of self-orthogonal binary linear codes and show that the sequence of CSS codes is capacity-achieving on the quantum erasure channel if the sequence of binary linear codes is capacity-achieving on the binary erasure channel.

Recently, Reed-Muller codes were shown to achieve capacity on classical erasure channels. Using this and the above result, we show that CSS codes constructed from self-orthogonal binary Reed-Muller codes achieve the capacity of the quantum erasure channel. The capacity-achieving nature of these CSS codes is also explained from a GF(4) perspective.
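The classical ingredient of the CSS construction is a self-orthogonal binary code, and for Reed-Muller codes this can be verified directly: since $RM(r,m)^\perp = RM(m-r-1,m)$, $RM(r,m)$ is self-orthogonal exactly when $r \le m-r-1$. A minimal check (not the paper's construction, just the self-orthogonality condition):

```python
import itertools
import numpy as np

def rm_generator(r, m):
    """Generator matrix of the Reed-Muller code RM(r, m): one row per monomial
    of degree <= r, evaluated at all 2^m points of GF(2)^m."""
    points = list(itertools.product((0, 1), repeat=m))
    rows = [[int(all(p[i] for i in mono)) for p in points]
            for d in range(r + 1)
            for mono in itertools.combinations(range(m), d)]
    return np.array(rows, dtype=int)

def is_self_orthogonal(G):
    """C is self-orthogonal (C subseteq C^perp) iff G G^T = 0 over GF(2)."""
    return not np.any((G @ G.T) % 2)

# RM(1,3) is the self-dual [8,4] extended Hamming code (r = m-r-1 = 1), hence
# self-orthogonal and usable in a CSS code; RM(2,3) violates r <= m-r-1.
print(is_self_orthogonal(rm_generator(1, 3)),
      is_self_orthogonal(rm_generator(2, 3)))
```

Any generator matrix passing this check yields a valid CSS stabilizer code; the paper's capacity result concerns sequences of such self-orthogonal Reed-Muller codes.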

Reverse entropy power inequalities for $s$-concave densities

Peng Xu, James Melbourne and Mokshay Madiman (University of Delaware, USA)

We explore conditions under which a reverse Rényi entropy power inequality holds for random vectors with $s$-concave densities, and also discuss connections with Convex Geometry.

Minimax Estimation of the $L_1$ Distance

Jiantao Jiao, Yanjun Han and Tsachy Weissman (Stanford University, USA)

We consider the problem of estimating the $L_1$ distance between two discrete probability measures $P$ and $Q$ from empirical data in a nonasymptotic and large alphabet setting. We construct minimax rate-optimal estimators for $L_1(P,Q)$ when $Q$ is either known or unknown, and show that the performance of the optimal estimators with $n$ samples is essentially that of the Maximum Likelihood Estimators (MLE) with $n\ln n$ samples. Hence, we demonstrate that the \emph{effective sample size enlargement} phenomenon, discovered and discussed in Jiao \emph{et al.} (2015), holds for this problem as well. However, the construction of optimal estimators for $L_1(P,Q)$ requires new techniques and insights outside the scope of the \emph{Approximation} methodology of functional estimation in Jiao \emph{et al.} (2015).
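The MLE baseline that the optimal estimators are compared against is simply the plug-in estimate, i.e., the $L_1$ distance between the two empirical distributions. A minimal sketch (illustrative only; the paper's rate-optimal construction is more involved):

```python
from collections import Counter

def l1_plugin(x, y):
    """Plug-in (MLE) estimator of L1(P, Q) from samples x ~ P and y ~ Q:
    the L1 distance between the two empirical distributions."""
    px, qy = Counter(x), Counter(y)
    nx, ny = len(x), len(y)
    return sum(abs(px[s] / nx - qy[s] / ny) for s in set(px) | set(qy))

print(l1_plugin([0, 0, 1, 1], [0, 1, 1, 1]))   # |1/2-1/4| + |1/2-3/4| = 0.5
print(l1_plugin([0, 0], [1, 1]))               # disjoint supports -> 2.0
```

In the large-alphabet regime the paper studies, this estimator with $n \ln n$ samples matches the performance of the optimal estimator with $n$ samples, which is the effective sample size enlargement phenomenon.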

The Dirty MIMO Multiple-Access Channel

Anatoly Khina (California Institute of Technology, USA & Tel Aviv University, Israel); Yuval Kochman (The Hebrew University of Jerusalem, Israel); Uri Erez (Tel Aviv University, Israel)

In the scalar dirty multiple-access channel, in addition to Gaussian noise, two additive interference signals are present, each known non-causally to a single transmitter. It was shown by Philosof et al. that for strong interferences, an i.i.d. ensemble of codes does not achieve the capacity region. Rather, a structured-codes approach was presented, which was shown to be optimal in the limit of high signal-to-noise ratios (SNRs), where the sum-capacity is dictated by the minimal (“bottleneck”) channel gain. In the present work, we consider the multiple-input multiple-output (MIMO) variant of this setting. In order to incorporate structured codes in this case, one can utilize matrix decompositions, which transform the channel into effective parallel scalar dirty multiple-access channels. This approach however suffers from a “bottleneck” effect for each effective scalar channel and therefore the achievable rates strongly depend on the chosen decomposition. It is shown that a recently proposed decomposition, where the diagonals of the effective channel matrices are equal up to a scaling factor, is optimal at high SNRs, under an equal rank assumption.

Covert Communication over Classical-Quantum Channels

Azadeh Sheikholeslami (University of Massachusetts at Amherst, USA); Boulat Bash (Raytheon BBN Technologies, USA); Don Towsley (University of Massachusetts at Amherst, USA); Dennis Goeckel (University of Massachusetts, USA); Saikat Guha (Raytheon BBN Technologies, USA)

Recently, the fundamental limits of covert, i.e., reliable-yet-undetectable, communication have been established for general memoryless channels and for lossy-noisy bosonic (quantum) channels with a quantum-limited adversary. The key import of these results was the square-root law (SRL) for covert communication, which states that $O(\sqrt{n})$ covert bits, but no more, can be reliably transmitted over $n$ channel uses with $O(\sqrt{n})$ bits of secret pre-shared between communicating parties. Here we prove the achievability of the SRL for a general memoryless classical-quantum channel, showing that SRL covert communication is achievable over any quantum communication channel with a product-state transmission strategy. We leave open the converse, which, if proven, would show that even using entangled transmissions and entangling measurements, the SRL for covert communication cannot be surpassed over an arbitrary quantum channel.

Role of a Relay in Bursty Networks with Correlated Transmissions

Sunghyun Kim (ETRI, Korea); Soheil Mohajer (University of Minnesota, USA); Changho Suh (KAIST, Korea)

We explore the role of a relay in multiuser networks where a physical perturbation shared among the users may generate data traffic for them simultaneously, hence causing their transmission patterns to be correlated. We investigate how the gain from the help of a relay varies with correlations across the users’ transmission patterns in a bursty multiple access channel where the users send signals intermittently. As our main results, we show that in most cases a relay can provide a greater degrees-of-freedom (DoF) gain when the users’ transmission patterns are more correlated. Furthermore, we demonstrate that the DoF gain can scale with the number of users.

Practical Interactive Scheme for Extremum Computation in Distributed Networks

Solmaz Torabi, Jie Ren and John M. Walsh (Drexel University, USA)

Several users observing independent random variables exchange error-free messages with one another and with a central receiver, the central estimation officer (CEO), with the aim of enabling the CEO to compute either the maximum across users (the max) or a user attaining this maximum (the arg max) for each element of their local observation sequences. The fundamental lower bound on the information exchange rate required over all quantization schemes, both scalar and vector, is computed for this interactive problem with a known iterative convex-geometric method. Next, an optimal dynamic program achieving the minimum expected rate and the expected rate-delay tradeoff over all scalar quantization schemes is presented, and the benefit of enabling users to overhear each other’s messages is assessed. Finally, a series of substantially reduced-complexity dynamic programs are shown, both theoretically and empirically, to obtain performance close to the fundamental limits and to scale favorably as the number of users grows.
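To make the interactive setting concrete, here is a hypothetical one-bit-per-round bisection scheme (a simple scalar-quantization baseline of our own construction, not the paper's optimal dynamic program): each round the CEO broadcasts a threshold and every user still in contention replies with one bit.

```python
def interactive_max(values, rounds=20, lo=0.0, hi=1.0):
    """Hypothetical bisection scheme for the CEO max / arg-max problem.
    Each round the CEO broadcasts the midpoint threshold; each candidate
    user replies with one bit ("is my value >= threshold?"). Returns an
    interval bracketing the max and the surviving arg-max candidates."""
    alive = set(range(len(values)))
    for _ in range(rounds):
        mid = (lo + hi) / 2
        above = {i for i in alive if values[i] >= mid}
        if above:
            alive, lo = above, mid   # max lies in the upper half
        else:
            hi = mid                 # nobody reached mid: max in the lower half
    return lo, hi, alive

lo, hi, candidates = interactive_max([0.31, 0.74, 0.59])
print(candidates, hi - lo)   # interval width shrinks as 2**-rounds
```

After `rounds` rounds the CEO knows the max to within $(hi - lo)/2^{\text{rounds}}$ at a cost of at most one bit per user per round; the paper's dynamic programs optimize this rate-delay tradeoff rather than fixing a bisection schedule.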

Sparse Random Linear Network Coding for Data Compression in WSNs

Wenjie Li (Laboratoire des Signaux et Systèmes & Université Paris-Sud, France); Francesca Bassi (LSS-CNRS-Supelec, France); Michel Kieffer (L2S – CNRS – Supélec – Université Paris-Sud, France)

This paper addresses the information-theoretic analysis of data compression achieved by random linear network coding in wireless sensor networks. A sparse network coding matrix is considered, with columns having possibly different sparsity factors. For stationary and ergodic sources, necessary and sufficient conditions are provided on the number of measurements required to achieve asymptotically vanishing reconstruction error. To ensure the asymptotically optimal compression ratio, the sparsity factor can be arbitrarily close to zero in the absence of additive noise. In the presence of noise, a sufficient condition on the sparsity of the coding matrix is also proposed.

Parallel distinguishability of quantum operations

Runyao Duan (University of Technology, Australia); Cheng Guo (University of Technology Sydney, P.R. China); Chi-Kwong Li (College of William and Mary, USA); Yinan Li (University of Technology Sydney, Australia)

We find that the perfect distinguishability of two quantum operations by a parallel scheme depends only on an operator subspace generated from their Choi-Kraus operators. We further show that any operator subspace can be obtained from two quantum operations in such a way. This connection enables us to study the parallel distinguishability of operator subspaces directly, without explicitly referring to the underlying quantum operations. We obtain a necessary and sufficient condition for the parallel distinguishability of an operator subspace that is either one-dimensional or Hermitian. In both cases the condition is equivalent to the non-existence of a positive definite operator in the subspace, and an optimal discrimination protocol is obtained. Finally, we provide more examples to show that the non-existence of a positive definite operator is sufficient in many other cases, but in general it is only a necessary condition.

Improving Convergence of Divergence Functional Ensemble Estimators

Kevin Moon (University of Michigan, USA); Kumar Sricharan (University of Michigan, Ann Arbor, USA); Kristjan Greenewald and Alfred Hero III (University of Michigan, USA)

Recent work has focused on the problem of nonparametric estimation of divergence functionals. Many existing approaches are restrictive in their assumptions on the density support or require difficult calculations at the support boundary which must be known a priori. We derive the MSE convergence rate of a leave-one-out kernel density plug-in divergence functional estimator for general bounded density support sets where knowledge of the support boundary is not required. We then generalize the theory of optimally weighted ensemble estimation to derive two estimators that achieve the parametric rate when the densities are sufficiently smooth. The asymptotic distribution of these estimators and some guidelines for tuning parameter selection are provided. Based on the theory, we propose an empirical estimator of Rényi-$\alpha$ divergence that outperforms the standard kernel density plug-in estimator, especially in high dimension.

An Unconventional Clustering Problem: User Service Profile Optimization

Fabio D’Andreagiovanni (ECMath MATHEON and ZIB Berlin, Germany); Giuseppe Caire (Technische Universität Berlin, Germany)

We consider the problem of clustering N users into K groups such that users in the same group are assigned a common service profile over M commodities. The profile of each group k sets, for each commodity m, the maximum service quality (measured by its price) that users in the k-th group can afford to pay. The objective is to find a clustering of users into groups that maximizes the total service quality gained by the users, expressed by the total price. This Service Profile Optimization Problem (SPOP) emerges in various applications, for example bit-loading in Hybrid Fiber Coax data distribution systems. We propose a Mixed Integer Linear Programming (MILP) model for the problem, which allows us to use state-of-the-art MILP solvers as the core tool in an original powerful heuristic that offers complexity and performance advantages with respect to previously proposed methods.

Expectation Consistent Approximate Inference: Generalizations and Convergence

Alyson Fletcher (University of California, Los Angeles, USA); Mojtaba Sahraee-Ardakan (UCSC, USA); Sundeep Rangan (New York University, USA); Philip Schniter (The Ohio State University, USA)

Approximations of loopy belief propagation, including expectation propagation and approximate message passing, have attracted considerable attention for probabilistic inference problems. This paper proposes and analyzes a generalization of Opper and Winther’s expectation consistent (EC) approximate inference method. The proposed method, called Generalized Expectation Consistency (GEC), can be applied to both maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimation. Here we characterize its fixed points, convergence, and performance relative to the replica prediction of optimality.

Smoothing Brascamp-Lieb Inequalities and Strong Converses for Common Randomness Generation

Jingbo Liu (Princeton University, USA); Thomas Courtade (University of California, Berkeley, USA); Paul Cuff and Sergio Verdú (Princeton University, USA)

We study the infimum of the best constant in a functional inequality, the Brascamp-Lieb-like inequality, over auxiliary measures within a neighborhood of a product distribution. In the finite alphabet and the Gaussian cases, such an infimum converges to the best constant in a mutual information inequality. Implications for strong converse properties of two common randomness (CR) generation problems are discussed. In particular, we prove the strong converse property of the rate region for the omniscient helper CR generation problem in the discrete and the Gaussian cases. The latter case is a rare instance of a strong converse for a continuous source when the rate region involves auxiliary random variables.

Key Generation with Limited Interaction

Jingbo Liu, Paul Cuff and Sergio Verdú (Princeton University, USA)

The basic two-terminal key generation model is considered, where the communication between the terminals is limited. We introduce a preorder relation on the set of joint distributions called $XY$-absolute continuity, and we reduce the multi-letter characterization of the key-communication tradeoff to the evaluation of the $XY$-concave envelope of a functional. For small communication rates, the number of key bits per interaction bit is expressed via a “symmetrical strong data processing constant”. Using hypercontractivity and R\'{e}nyi divergence, we also prove a computationally friendly strong converse bound for the number of common randomness bits per interaction bit in terms of the supremum of the maximal correlation coefficient over a set of distributions, which is tight for binary symmetric sources. Regarding the other extreme case, a new characterization of the minimum interaction for achieving the maximum key rate (MIMK) is given, and is used to resolve a conjecture by Tyagi about the MIMK for binary sources.

Optimizing Energy Efficiency over Energy-Harvesting LTE Cellular Networks

Hajar Mahdavi-Doost (Rutgers University, USA); Narayan Prasad (NEC Labs America, Princeton, USA); Sampath Rangarajan (NEC Labs America, USA)

We consider the problem of downlink scheduling in an LTE network powered by energy harvesting devices. We formulate optimization problems that seek to optimize two popular energy efficiency metrics subject to mandatory LTE network constraints along with energy harvesting causality constraints.
We identify a key sub-problem pertaining to maximizing the weighted sum rate that is common for both optimization problems, and is also of independent interest. We show that the latter sub-problem can be reformulated as a constrained submodular set function maximization problem. This enables us to design constant-factor approximation algorithms for maximizing the weighted sum rate as well as the two energy efficiency metrics over an energy harvesting LTE downlink. Our proposed algorithms are simple to implement and offer superior performance.

Brascamp-Lieb Inequality and Its Reverse: An Information Theoretic View

Jingbo Liu (Princeton University, USA); Thomas Courtade (University of California, Berkeley, USA); Paul Cuff and Sergio Verdú (Princeton University, USA)

We generalize a result by Carlen and Cordero-Erausquin on the equivalence between the Brascamp-Lieb inequality and the subadditivity of relative entropy by allowing for random transformations (a broadcast channel). This leads to a unified perspective on several functional inequalities that have been gaining popularity in the context of proving impossibility results. We demonstrate that the information theoretic dual of the Brascamp-Lieb inequality is a convenient setting for proving properties such as data processing, tensorization, convexity and Gaussian optimality. Consequences of the latter include an extension of the Brascamp-Lieb inequality allowing for Gaussian random transformations, the determination of the multivariate Wyner common information for Gaussian sources, and a multivariate version of Nelson’s hypercontractivity theorem. Finally we present an information theoretic characterization of a reverse Brascamp-Lieb inequality involving a random transformation (a multiple access channel).

Minimum Storage Regenerating Codes For All Parameters

Arman Fazeli and Sreechakra Goparaju (University of California, San Diego, USA); Alexander Vardy (University of California San Diego, USA)

Regenerating codes for distributed storage have attracted much research interest in the past decade. Such codes trade the bandwidth needed to repair a failed node against the overall amount of data stored in the network. Minimum storage regenerating (MSR) codes are an important class of optimal regenerating codes that minimize (first) the amount of data stored per node and (then) the repair bandwidth. Specifically, an $[n, k, d]$-$(\alpha)$ MSR code $C$ over $\mathbb{F}_q$ is defined as follows. Using such a code $C$, a file $F$ consisting of $\alpha k$ symbols over $\mathbb{F}_q$ can be distributed among $n$ nodes, each storing $\alpha$ symbols, in such a way that:
– the file $F$ can be recovered by downloading the content of any $k$ of the $n$ nodes; and
– the content of any failed node can be reconstructed by accessing any $d$ of the remaining $n-1$ nodes and downloading $\alpha/(d-k+1)$ symbols from each of these nodes.
A common practical requirement for regenerating codes is to have the original file $F$ available in uncoded form on some $k$ of the $n$ nodes, known as systematic nodes. In this case, several authors relax the defining node-repair condition above, requiring the optimal repair bandwidth of $d\alpha/(d-k+1)$ symbols for systematic nodes only. We shall call such codes systematic-repair MSR codes.
Unfortunately, explicit constructions of $[n, k, d]$ MSR codes are known only for certain special cases: either low rate, namely $k/n \leq 0.5$, or high repair connectivity, namely $d = n-1$. Although setting $d = n-1$ minimizes the repair bandwidth, it may be impractical to connect to all the remaining nodes in order to repair a single failed node. Our main result in this paper is an explicit construction of systematic-repair $[n, k, d]$ MSR codes for all possible values of the parameters $n$, $k$, $d$. In particular, we construct systematic-repair MSR codes of high rate $k/n > 0.5$ and low repair connectivity $k \leq d \leq n-1$. Such codes were not previously known to exist. In order to construct these codes, we solve simultaneously several repair scenarios, each of which is expressible as an interference alignment problem. Extension of our results beyond systematic repair remains an open problem.

Thinning, photonic beamsplitting, and a general discrete entropy power inequality

Saikat Guha (Raytheon BBN Technologies, USA); Jeffrey H Shapiro (Massachusetts Institute of Technology, USA); Raul Garcia-Patron (Universite Libre de Bruxelles, Belgium)

Many partially-successful attempts have been made to find the most natural discrete-variable version of Shannon’s entropy power inequality (EPI). We develop an axiomatic framework from which we deduce the natural form of a discrete-variable EPI and an associated entropic monotonicity in a discrete-variable central limit theorem. In this discrete EPI, the geometric distribution, which has the maximum entropy among all discrete distributions with a given mean, assumes a role analogous to the Gaussian distribution in Shannon’s EPI. The entropy power of $X$ is defined as the mean of a geometric random variable with entropy $H(X)$. The crux of our construction is a discrete-variable version of Lieb’s scaled addition $X \boxplus_\eta Y$ of two random variables $X$ and $Y$ with $\eta \in (0, 1)$. We discuss the relationship of our discrete EPI with recent work of Yu and Johnson who developed an EPI for a restricted class of random variables that have ultra-log-concave (ULC) distributions. Even though we leave open the proof of the aforesaid natural form of the discrete EPI, we show that this discrete EPI holds true for variables with arbitrary discrete distributions when the entropy power is redefined as $e^{H(X)}$ in analogy with the continuous version. Finally, we show that our conjectured discrete EPI is a special case of the yet-unproven Entropy Photon-number Inequality (EPnI), which assumes a role analogous to Shannon’s EPI in capacity proofs for sending classical information over single and multi-user Gaussian-noise bosonic (quantum) channels.
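To make the definition above concrete, the discrete entropy power of $X$ can be computed numerically by inverting the entropy of a geometric distribution, since that entropy is strictly increasing in the mean. The sketch below (our own illustration; function names are ours, entropies in nats) does this by bisection:

```python
import math

def geom_entropy(mu):
    # Entropy (in nats) of a geometric distribution on {0,1,2,...}
    # with mean mu, i.e. P(k) = (1-q) q^k with q = mu/(1+mu).
    if mu == 0:
        return 0.0
    q = mu / (1.0 + mu)
    return -math.log(1 - q) - (q / (1 - q)) * math.log(q)

def discrete_entropy_power(H, hi=1e12):
    # The discrete entropy power: the mean of the geometric random
    # variable whose entropy equals H.  Found by bisection, using the
    # fact that geom_entropy is strictly increasing in the mean.
    lo = 0.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if geom_entropy(mid) < H:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

As a sanity check, a geometric random variable is its own maximizer, so the entropy power of a geometric variable with mean 3 is 3 itself.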

Performance of Flash Memories with Different Binary Labelings: A Multi-User Perspective

Pengfei Huang and Paul H. Siegel (University of California, San Diego, USA); Eitan Yaakobi (Technion, Israel)

In this work, we study the performance of different decoding schemes for multilevel flash memories where each page in every block is encoded independently. We focus on the multi-level cell (MLC) flash memory, which is modeled as a two-user multiple access channel suffering from asymmetric noise. The uniform rate regions and sum rates of Treating Interference as Noise (TIN) decoding and Successive Cancellation (SC) decoding are investigated for a Program/Erase (P/E) cycling model and a data retention model. We examine the effect of different binary labelings of the cell levels, as well as the impact of further quantization of the memory output (i.e., additional read thresholds). Finally, we extend our analysis to the three-level cell (TLC) flash memory.

On Computation Rates for Arithmetic Sum

Ardhendu Tripathy and Aditya Ramamoorthy (Iowa State University, USA)

For zero-error function computation over directed acyclic networks, existing upper and lower bounds on the computation capacity are known to be loose. In this work we consider the problem of computing the arithmetic sum over a specific directed acyclic network that is not a tree. We assume the sources to be i.i.d. Bernoulli with parameter $1/2$. Even in this simple setting, we demonstrate that upper bounding the computation rate is quite nontrivial. In particular, it requires us to consider variable length network codes and relate the upper bound to equivalently lower bounding the entropy of descriptions observed by the terminal conditioned on the function value. This lower bound is obtained by further lower bounding the entropy of a so-called \textit{clumpy distribution}. We also demonstrate an achievable scheme that uses variable length network codes and in-network compression.

Balanced Reed-Solomon Codes

Wael Halbawi (California Institute of Technology, USA); Zihan Liu (The Chinese University of Hong Kong, Hong Kong); Babak Hassibi (California Institute of Technology, USA)

We consider the problem of constructing linear MDS error-correcting codes with generator matrices that are sparsest and balanced. In this context, sparsest means that every row has the least possible number of non-zero entries, and balanced means that every column contains the same number of non-zero entries. Codes with this structure minimize the maximal computation time of computing any code symbol, a property that is appealing to systems where computational load-balancing is critical. The problem was studied before by Dau et al., where it was shown that there always exists an MDS code over a sufficiently large field such that its generator matrix is both sparsest and balanced. However, the construction is not explicit and, more importantly, the resulting MDS codes do not lend themselves to efficient error correction. With an eye towards explicit constructions with efficient decoding, we show in this paper that the generator matrix of a cyclic Reed-Solomon code of length $n$ and dimension $k$ can always be transformed to one that is both sparsest and balanced, for all parameters $n$ and $k$ where $\frac{k}{n}(n-k+1)$ is an integer.
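The integrality condition has a simple counting interpretation: a sparsest generator matrix has $k$ rows of the minimum MDS weight $w = n-k+1$, so balance forces each of the $n$ columns to hold exactly $kw/n$ non-zero entries. The sketch below (our own illustration of the support pattern only; the paper's contribution is realizing such a support with an actual Reed-Solomon generator matrix) builds a balanced sparsest 0/1 mask from cyclically shifted windows:

```python
def balanced_sparsest_mask(n, k):
    # Support pattern for a k x n sparsest-and-balanced generator matrix.
    # Row i is a window of w = n-k+1 consecutive positions starting at
    # i*w (mod n); the k windows tile the run 0..k*w-1 mod n, so every
    # column is hit exactly k*w/n times whenever n divides k*w,
    # i.e. whenever (k/n)*(n-k+1) is an integer.
    w = n - k + 1
    if (k * w) % n != 0:
        raise ValueError("requires (k/n)*(n-k+1) to be an integer")
    mask = [[0] * n for _ in range(k)]
    for i in range(k):
        for j in range(w):
            mask[i][(i * w + j) % n] = 1
    return mask
```

For example, with $n=6$, $k=4$ the condition holds ($\frac{4}{6}\cdot 3 = 2$), and the mask has row weight 3 and column weight 2 throughout.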

On the Design of Universal Schemes for Massive Uncoordinated Multiple Access

Austin Taghavi, Avinash Vem, Jean-Francois Chamberland and Krishna Narayanan (Texas A&M University, USA)

Future wireless access points may have to support sporadic transmissions from a massive number of unattended machines. Recently, there has been a lot of interest in the design of massive uncoordinated multiple access schemes for such systems based on clever enhancements to slotted ALOHA. A close connection has been established between the design of the multiple access scheme and the design of low density generator matrix codes. Based on this connection, optimal multiple access schemes have been designed based on slotted ALOHA and successive interference cancellation, assuming that the number of users in the network is known at the transmitters. In this paper, we extend this work and consider the design of universal uncoordinated multiple access schemes that are agnostic to the number of users in the network. We design Markov chain based transmission policies and numerical results show that substantial improvement to slotted ALOHA is possible.
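The interference-cancellation mechanism underlying such schemes can be sketched in a few lines (a generic CRDSA/IRSA-style frame simulation for illustration only; it does not reproduce the paper's Markov-chain transmission policies):

```python
import random

def sic_aloha(num_users, num_slots, copies=2, seed=0):
    # One frame of slotted ALOHA with successive interference
    # cancellation: each user sends `copies` replicas in distinct random
    # slots; the receiver repeatedly decodes any slot containing a single
    # replica and cancels that user's other replicas, possibly freeing
    # further slots.  Returns the number of resolved users.
    rng = random.Random(seed)
    slots = [set() for _ in range(num_slots)]
    for u in range(num_users):
        for s in rng.sample(range(num_slots), copies):
            slots[s].add(u)
    resolved = set()
    progress = True
    while progress:
        progress = False
        for s in range(num_slots):
            if len(slots[s]) == 1:
                u = next(iter(slots[s]))
                resolved.add(u)
                for t in range(num_slots):
                    slots[t].discard(u)
                progress = True
    return len(resolved)
```

Running this at varying loads reproduces the familiar behavior: well below one user per slot nearly everyone is resolved, while past the IRSA-type threshold the cancellation process stalls.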

Rate-Distortion Bounds on Bayes Risk in Supervised Learning

Matthew Nokleby (Wayne State University, USA); Ahmad Beirami (Duke University, MIT, USA); Robert Calderbank (Duke University, USA)

An information-theoretic framework is presented for estimating the number of labeled samples needed to train a classifier in a parametric Bayesian setting. Ideas from rate-distortion theory are used to derive bounds for the average $\ell_1$ or $\ell_\infty$ distance between the learned classifier and the true maximum a posteriori classifier in terms of the differential entropy of the posterior distribution, the Fisher information of the parametric family, and the number of training samples available. The maximum a posteriori classifier is viewed as a random source, labeled training data are viewed as a finite-rate encoding of the source, and the $\ell_1$ or $\ell_\infty$ Bayes risk is viewed as the average distortion. The result is a framework dual to the well-known probably approximately correct (PAC) framework. PAC bounds characterize worst-case learning performance of a family of classifiers whose complexity is captured by the Vapnik-Chervonenkis (VC) dimension. The rate-distortion framework, on the other hand, characterizes the average-case performance of a family of data distributions in terms of a quantity called the interpolation dimension, which represents the complexity of the family of data distributions. The resulting bounds do not suffer from the pessimism typical of the PAC framework, particularly when the training set is small.
The framework also naturally accommodates multi-class settings. Furthermore, Monte Carlo methods provide accurate estimates of the bounds even for complicated distributions.

Approximate Capacity of Index Coding for Some Classes of Graphs

Fatemeh Arbabjolfaei (University of California, San Diego, USA); Young-Han Kim (UCSD, USA)

For the class of index coding problems whose side information graphs have Ramsey numbers $R(i,j)$ upper bounded by $ci^aj^b$, it is shown that the clique covering scheme approximates the broadcast rate within a multiplicative factor of $O(n^{\frac{a+b}{a+b+1}})$, where $n$ is the number of messages.
Based on this result and known bounds on Ramsey numbers, it is demonstrated that the broadcast rate of planar graphs, line graphs, and fuzzy circular interval graphs can be approximated within a factor of $(2n)^{2/3}$.
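For intuition, the clique covering scheme itself is elementary: partition the side information graph into cliques and serve each clique with a single coded (XOR) transmission, so the number of cliques upper-bounds the broadcast rate. A greedy sketch (our own illustration; greedy covers are valid but not necessarily minimum):

```python
def greedy_clique_cover(n, edges):
    # Greedily partition vertices {0,...,n-1} of an undirected side
    # information graph into cliques: each vertex joins the first existing
    # clique it is fully adjacent to, else starts a new clique.
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cliques = []
    for v in range(n):
        for c in cliques:
            if all(v in adj[u] for u in c):
                c.add(v)
                break
        else:
            cliques.append({v})
    return cliques
```

On the 5-cycle, for instance, this greedy pass uses 3 cliques, matching the clique cover number of $C_5$.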

Two-way Lossy Compression via a Relay with Self Source

Ebrahim MolavianJazi (Penn State University, USA); Aylin Yener (Pennsylvania State University, USA)

We consider interactive source coding of two sources through a relay which also has a source. Alice and Bob have no direct links and wish to exchange their sources with fidelity via an intermediary, Ryan. Ryan also has an individual source and seeks to communicate it to Alice and Bob. We develop inner and outer bounds for the optimal rate-distortion region of this problem, which coincide in certain lossless cases, e.g., when the sources of Alice and Bob are conditionally independent given the source of Ryan or when two of the sources are functions of the third one. The bounds heavily make use of Wyner-Ziv and Berger-Tung coding and often rely on linear network coding. Our results highlight the dual role of the relaying source, which, on one hand, facilitates compression rate savings for the other two sources by helping as side information, and on the other hand, requires additional rate for its own description.

On Full Duplex Gaussian Relay Channels with Self-Interference

Arash Behboodi (RWTH Aachen University, Germany); Anas Chaaban (King Abdullah University of Science and Technology, Saudi Arabia); Rudolf Mathar (RWTH Aachen University, Germany); Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST), Saudi Arabia)

Self-interference (SI) in full duplex (FD) systems is the interference caused by the transmission stream on the reception stream. Although it is one of the main factors limiting the performance of practical full duplex systems, little is known about its effect on the fundamental limits of relaying systems. In this work, we consider the full duplex relay channel with SI, where SI is modeled as an additive Gaussian noise whose variance depends on the instantaneous input power. The classical achievable rates and upper bounds for single relay channels no longer apply due to the structure of the SI. Achievable rates for decode-and-forward (DF) and compress-and-forward (CF), as well as upper bounds on the capacity, are derived for this case assuming Gaussian inputs. The deterministic model is also introduced and its capacity is characterized. The optimal input distributions for the general case are discussed; in particular, it is shown that the conditional distribution of the source given the relay should be Gaussian. Numerical results comparing the presented schemes are provided.

On the Capacity of Strong Asynchronous Multiple Access Channels with a Large Number of Users

Sara Shahi, Daniela Tuninetti and Natasha Devroye (University of Illinois at Chicago, USA)

This paper studies the impact of block asynchronism on the capacity of a slotted Multiple Access Channel (MAC) whose number of users $K_n$ increases with the blocklength $n$. In a {\it slotted strong-asynchronous MAC}, the $K_n$ users have independent transmission start times that are integer multiples of $n$ (slotted) which are uniformly distributed on a window of length $A_n=e^{n\alpha}$ (strong-asynchronism). All users’ messages as well as transmission times need to be reliably decoded at the single receiver. We show that for $K_n=e^{n\beta}$ with $\beta>\alpha$, not even synchronization is possible when transmitting a single message per user. We also show that for $K_n=e^{\beta}$ with $\beta=o(n)$, each user can achieve its point-to-point asynchronous capacity, which is a trivial upper bound for the capacity of the MAC. Finally, achievable rates for $K_n=e^{n\beta}$ with $0<\beta<\frac{\alpha}{2}$ are derived.

New results about Tu-Deng’s conjecture

Soukayna Qarboua (IMT Telecom Bretagne and Lab-STICC, France & Mohammed V University in Rabat, LabMiA, FSR, Morocco); Julien Schrek (IMT Telecom Bretagne and Lab-STICC, France); Caroline Fontaine (CNRS Lab-STICC & Telecom Bretagne ITI, France)

To design robust symmetric encryption schemes, we need to use Boolean functions with suitable properties. Among the security criteria these functions need to fulfill, we can mention algebraic immunity. Many papers study how to construct suitable functions, but some of them assume the validity of Tu-Deng's combinatorial conjecture [2] to estimate the algebraic immunity of the Boolean functions they design.

Adaptivity provably helps: information-theoretic limits on l-0 cost of non-adaptive sensing

Sanghamitra Dutta and Pulkit Grover (Carnegie Mellon University, USA)

The advantages of adaptivity and feedback are of immense interest in signal processing and communication, with many positive and negative results. Although it is established that adaptivity does not offer substantial reductions in minimax mean square error for a fixed number of measurements, existing results have shown several advantages of adaptivity in complexity of reconstruction, accuracy of support detection, and gain in signal-to-noise ratio, under constraints on sensing energy. Sensing energy has often been measured in terms of the Frobenius norm of the sensing matrix. This paper uses a different metric, which we call the $\ell_0$ cost of a sensing matrix, to quantify the complexity of sensing; thus sparse sensing matrices have a lower cost. We derive information-theoretic lower bounds on the $\ell_0$ cost that hold for any non-adaptive sensing strategy. We establish that any non-adaptive sensing strategy must incur an $\ell_0$ cost of at least $\Omega\left(N \sqrt{\log_2 N}\right)$ to reconstruct an $N$-dimensional, one-sparse signal when the number of measurements is limited to $\Theta\left(\log_2 N\right)$. In comparison, bisection-type adaptive strategies only require an $\ell_0$ cost of at most $\mathcal{O}(N)$ for the same order of measurements. The problem has an interesting interpretation as a sphere packing problem in a multidimensional space, in which all the sphere centers have a minimum number of non-zero coordinates.
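The adaptive side of the gap is easy to make concrete. For a noiseless one-sparse signal, a bisection strategy of the kind mentioned above measures the sum over half of the remaining candidates at each step; its total $\ell_0$ cost is $N/2 + N/4 + \dots < N$, comfortably $\mathcal{O}(N)$ (a minimal sketch under our own naming):

```python
def bisection_support(x):
    # Adaptive bisection locating the support of a noiseless 1-sparse
    # vector x.  Each measurement is an indicator row over the left half
    # of the current candidate interval; its l0 cost is the number of
    # ones in that row, so the total cost is N/2 + N/4 + ... < N.
    lo, hi = 0, len(x)           # candidate interval [lo, hi)
    l0_cost = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        l0_cost += mid - lo      # ones in this measurement row
        if sum(x[lo:mid]) != 0:  # noiseless measurement of the left half
            hi = mid
        else:
            lo = mid
    return lo, l0_cost
```

For $N=8$ this uses $\log_2 N = 3$ measurements with total cost $4+2+1=7 < N$, whereas the paper shows any non-adaptive strategy with $\Theta(\log_2 N)$ measurements must pay $\Omega(N\sqrt{\log_2 N})$.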

On the Entropy and Mutual Information of Point Processes

Francois Baccelli (UT Austin & The University of Texas at Austin, USA); Jae Oh Woo (The University of Texas at Austin, USA)

This paper is focused on information theoretic properties of point processes. Firstly, we discuss the entropy of a point process and the entropy rate of stationary point processes. Then we give explicit formulas for these quantities in the Poisson case, as well as maximal entropy properties for homogeneous Poisson point processes. Secondly, we define the mutual information rate of two stationary point processes. We then give explicit formulas for the mutual information rate between a homogeneous Poisson point process and its displacement.

Asymptotic MAP upper bounds for LDPC codes

David Matas (Technical University of Catalonia (UPC), Spain); Meritxell Lamarca (Universitat Politècnica de Catalunya, Spain)

This paper aims at computing tight upper bounds on the maximum a posteriori threshold of low-density parity-check codes in the asymptotic blocklength regime for transmission over binary-input memoryless symmetric-output channels. While these bounds are already known, we propose a novel derivation based on a completely different approach: it relies solely on the chain rule and conditional entropy, and resorts to the concentration theorem for the code ensemble to compute the syndrome entropy with low complexity by employing density evolution.

On the Impossibility of Information-Theoretic Composable Coin Toss Extension

Gregor Seiler and Ueli Maurer (ETH Zurich, Switzerland)

Shared randomness is an important resource in cryptography. It is well-known that in the information-theoretic setting there is no protocol that allows two parties who do not trust each other to obtain a uniformly distributed shared bit string solely by exchanging messages such that a dishonest party cannot influence the result. On the other hand, in the situation where the two parties already share a random bit string and want to use it in order to construct a longer random bit string, it is only known to be impossible when the protocols are restricted in the number of messages to be exchanged. In this paper we prove that it is also impossible when arbitrarily many messages are allowed.

Binarizations in Random Number Generation

Sung-il Pae (Hongik University, Korea)

Extracting procedures produce unbiased random bits from biased coin flips. Binarizations take inputs from an $m$-faced die and produce bit sequences to be fed into a (binary) extracting procedure to obtain random bits; this can be done in an entropy-preserving manner, without loss of information. Such a procedure has been proposed by Zhou and Bruck. We discuss a family of such entropy-preserving processes that we call complete binarizations.
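To make the pipeline concrete, here is a standard tree binarization paired with the classical von Neumann extractor (a sketch of the general idea only; we do not claim it is Zhou and Bruck's construction, nor a complete binarization in the paper's sense):

```python
def von_neumann(bits):
    # Von Neumann extractor: pair up the bits; 01 -> 0, 10 -> 1,
    # and 00/11 pairs are discarded.  Output bits are unbiased for
    # i.i.d. biased input bits.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

def binarize(faces, m):
    # Tree binarization of an m-faced die: each outcome descends a fixed
    # binary tree over the faces, and the branch bits are routed into
    # per-node streams (keyed by the branch prefix).  Bits within one
    # stream are i.i.d., so each stream can be extracted separately.
    streams = {}
    for f in faces:
        lo, hi, prefix = 0, m, ""
        while hi - lo > 1:
            mid = (lo + hi) // 2
            bit = 0 if f < mid else 1
            streams.setdefault(prefix, []).append(bit)
            prefix += str(bit)
            if bit == 0:
                hi = mid
            else:
                lo = mid
    return streams
```

Feeding each stream to `von_neumann` separately then yields unbiased bits; routing all branch bits into a single stream would break the i.i.d. assumption the extractor needs.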

Cutsize Distributions of Balanced Hypergraph Bipartitions for Random Hypergraphs

Takayuki Nozaki (Yamaguchi University, Japan)

In a previous work, we presented a parallel encoding algorithm for low-density parity-check (LDPC) codes based on partitioning the hypergraph representation of the codes. The aim of this research is to analyze the processing time of this encoding algorithm. This paper clarifies that the processing time of the encoding algorithm depends on the minimum cutsize of balanced hypergraph partitions. Moreover, this paper gives the typical minimum cutsize and the cutsize distribution for balanced hypergraph bipartitions of random hypergraphs defined from a regular LDPC ensemble.

Fundamental Limits of Cache-Aided Interference Management

Navid NaderiAlizadeh (University of Southern California, USA); Mohammad Ali Maddah-Ali (Bell Labs, Alcatel Lucent, USA); Salman Avestimehr (University of Southern California, USA)

We consider a system, comprising a library of files (e.g., movies) and a wireless network with an arbitrary number of transmitters and receivers, where each node is equipped with a local cache memory. The system operates in two phases: the prefetching phase, where each cache is pre-populated from the contents of the library, up to its limited size, and the delivery phase, where each receiver reveals its request for a file from the library, and the system needs to deliver the requested files. The objective is to design the cache placement and the communication scheme to maximize the rate of delivery for an arbitrary set of requested files.

We characterize the sum degrees-of-freedom (sum-DoF) of this network to within a factor of 2 for all system parameters, under one-shot linear schemes. In particular, we show that the linear sum-DoF scales linearly with the aggregate cache size in the network (i.e., the cumulative memory available at all nodes). The proposed achievable scheme exploits the redundancy of the content at the transmitters’ caches to cooperatively zero-force some outgoing interference, and the availability of unintended content at the receivers’ caches to cancel (subtract) some of the incoming interference. The outer bound is derived by an optimization argument which bounds the number of communication blocks needed to deliver any requested contents to the receivers. This result demonstrates that in this setting, caches at the transmitters’ side are as valuable as caches at the receivers’ side. In addition, it shows that caching can offer a throughput gain that scales linearly with the size of the network.

Estimating the Number of Defectives with Group Testing

Moein Falahatgar (University of California San Diego, USA); Ashkan Jafarpour and Alon Orlitsky (University of California, San Diego, USA); Venkatadheeraj Pichapati (UCSD, India); Ananda Theertha Suresh (University of California, San Diego, USA)

Estimating the number of defective elements of a set has various biological applications, including estimating the prevalence of a disease or disorder. Group testing has been shown to be more efficient than scrutinizing each element separately for defectiveness. In group testing, we query a subset of elements, and the result of the query is defective if the subset contains at least one defective element. We present an adaptive, randomized group-testing algorithm to estimate the number of defective elements with a near-optimal number of queries. Our algorithm uses at most $2\log\log d+\mathcal{O}(\frac{1}{\delta^2}\log \frac{1}{\epsilon})$ queries and estimates the number of defective elements $d$ up to a multiplicative factor of $1\pm\delta$, with error probability $\le \epsilon$. We also show an information-theoretic lower bound of $(1-\epsilon)\log\log d-1$ on the number of queries any adaptive algorithm must make to estimate the number of defective elements, for constant $\delta$.
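The basic estimation idea can be illustrated with a much cruder doubling scheme (our own sketch, not the paper's $2\log\log d$ algorithm): query random subsets in which each item is included with probability $p$; a query is positive with probability $1-(1-p)^d$, so the $p$ at which roughly half the queries turn positive pins down $d \approx \ln(2)/p$.

```python
import math
import random

def estimate_defectives(defective, n, trials=64, seed=0):
    # Crude randomized estimate of the number of defectives d among n
    # items.  Halve the inclusion probability p until fewer than half of
    # `trials` random-subset queries are positive, then read off
    # d ~ ln(2)/p from 1-(1-p)^d ~ 1/2.  Illustration only.
    rng = random.Random(seed)
    p = 1.0
    while p > 1.0 / n:
        hits = sum(
            any(i in defective and rng.random() < p for i in range(n))
            for _ in range(trials)
        )
        if hits <= trials / 2:
            break
        p /= 2.0
    return math.log(2) / p if p < 1 else float(len(defective))
```

This scheme spends $\mathcal{O}(\log d)$ rounds of queries; the point of the paper is that a doubly-logarithmic number of queries already suffices, and is essentially necessary.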

Online Policies for Multiple Access Channel with Common Energy Harvesting Source

Abdulrahman Baknina (University of Maryland, College Park, USA); Sennur Ulukus (University of Maryland, USA)

We consider online transmission policies for the two-user multiple access channel, where both users harvest energy from a common source. The transmitters are equipped with arbitrary but finite-sized batteries. The energy harvests are independent and identically distributed (i.i.d.) over time, and synchronized at the two users due to their common source. The transmitters know the energy arrivals only causally. We first consider the special case of Bernoulli energy arrivals. For this case, we determine the optimal policies that achieve the boundary of the capacity region. We show that the optimal power allocation decreases in time, and that the capacity region is a pentagon. We then consider general i.i.d. energy arrivals, and propose a distributed fractional power (DFP) policy. We develop lower and upper bounds on the performance of the proposed DFP policy for general i.i.d. energy arrivals, and show that the proposed DFP policy is near-optimal in that it yields rates which are within a constant gap of the derived lower and upper bounds.

Reliability-Bandwidth Tradeoffs for Distributed Storage Allocations

Siddhartha Brahma (University of Neuchatel, Switzerland); Hugues Mercier (Université de Neuchâtel, Switzerland)

We consider the allocation of coded data over nodes in a distributed storage system under a budget constraint. A system with failed nodes can recover the original data of unit size if the amount of data in the active nodes is at least a unit. Building on the work of Leong et al., we introduce the concepts of tight allocations and repair bandwidth in this distributed setting. For tight allocations, the amount of data in the failed nodes gives a lower bound on the repair bandwidth required to put the system back to its original state. Using this bound, we define the Minimum Expected Repair Bandwidth (MERB) to study the tradeoffs between reliability and repair bandwidth, both empirically and by proving bounds on MERB in terms of the reliability. We show that even computing MERB for a general allocation is #P-hard, and suggest a simpler objective function to optimize it approximately. Finally, we study the asymptotic behavior of MERB for large systems and show two distinct optimal allocation regimes depending on the failure probability of the storage nodes.

Optimal Energy Management for Energy Harvesting Transmitters under Battery Usage Constraint

Xianwen Wu, Jing Yang and Jingxian Wu (University of Arkansas, USA)

This paper takes the impact of charging/discharging operations on battery degradation into consideration, and studies the optimal energy management policy for an energy harvesting communication system under a battery usage constraint. Specifically, in each time slot, we assume the harvested energy can be used to power the transmitter immediately without entering the battery, or stored in the battery for now and retrieved later for transmission. Whenever the battery is charged or discharged, a cost is incurred to account for its impact on battery degradation. We impose a long-term average cost constraint on the battery, which translates into a constraint on the average number of charge/discharge operations per unit time. The objective is to develop an online policy to maximize the long-term average throughput of the transmitter under the energy causality constraint and the battery usage constraint.

We first relax the energy causality constraint on the system, and impose an energy flow conservation constraint instead. We show that the optimal energy management policy has a double-threshold structure: if the amount of energy arriving in a time slot lies between the two thresholds, it is used immediately without involving the battery; otherwise, the battery is charged or discharged accordingly to maintain a constant transmit power. We then modify the double-threshold policy slightly to accommodate the energy causality constraint, and analyze its long-term performance. We show that it achieves the same long-term average performance, and is thus optimal.
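The double-threshold structure described above amounts to a one-line decision rule per slot. The following sketch (our own illustration, ignoring battery capacity limits and the per-operation cost accounting) shows one step of such a policy:

```python
def double_threshold_power(harvest, t_low, t_high):
    # One slot of a double-threshold policy: energy arrivals inside
    # [t_low, t_high] power the transmitter directly and the battery is
    # untouched; arrivals outside are clipped to the nearer threshold,
    # with the surplus charged to (positive flow) or the deficit
    # discharged from (negative flow) the battery.
    if t_low <= harvest <= t_high:
        return harvest, 0.0              # (transmit power, battery flow)
    if harvest > t_high:
        return t_high, harvest - t_high  # charge the surplus
    return t_low, harvest - t_low        # discharge the deficit
```

The rule touches the battery only on large or small arrivals, which is exactly how the policy keeps the average number of charge/discharge operations under the usage constraint.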

Message Partitioning and Limited Auxiliary Randomness: Alternatives to Honey Encryption

AmirEmad Ghassami, Daniel F Cullina and Negar Kiyavash (University of Illinois at Urbana-Champaign, USA)

In a symmetric-key cryptography system, it is often required to transmit a nonuniform message from a very large set. In this case, a computationally unbounded adversary can take advantage of the non-uniformity of the posterior to recover the message. Recently, an encryption scheme called Honey Encryption has been proposed to increase the information-theoretic security of the system, i.e., the guaranteed level of security regardless of the computational power of the adversary. In this paper, we present a technique called message partitioning which can be used to accomplish the same goal. We analyze the overall security of the combination of this technique with Honey Encryption, which uses a Distribution Transforming Encoder (DTE) block. We propose a new DTE which has acceptable performance under a limited amount of available auxiliary randomness. Achievable bounds are presented for both cases, which, under certain conditions, are close to the lower bounds on the adversary's level of success.

On Vectorial Bent Functions with Dillon-type Exponents

Lucien Lapierre and Petr Lisonek (Simon Fraser University, Canada)

We study vectorial bent functions with Dillon-type exponents. These functions have attracted attention because they are hyperbent whenever they are bent, and they achieve the highest possible algebraic degree among all bent functions on the same domain. In low dimensions we determine the simplest possible forms of such functions when they map to GF(4). We prove non-existence results for certain monomial and multinomial bent functions mapping to large codomains.

On the Two-User MISO Interference Channel with Single User Decoding and Partial CSIT

Yair Noam and Naama Kimelfeld (Bar Ilan University, Israel); Benjamin Zaidel (Bar Ilan University)

This paper studies the Rayleigh fading two-user MISO interference channel with single user decoding and limited channel state information (CSI) feedback. The paper makes two contributions. First, the achievable rate region with partial CSI at the transmitters due to channel quantization error is analyzed. We derive an analytically tractable inner bound on that region, which provides insights into the problem. It is shown that, similarly to the case of perfect transmitter CSI, beamforming is optimal for achieving every boundary point of the inner bound. The second contribution is a new CSI feedback scheme that reduces the overhead of feeding back the cross-link interfering channel and enhances throughput for any rate-limited feedback scheme. Such CSI feedback reduction is crucial when control channels for sharing information between transmitters and unintended receivers are rate limited. The proposed scheme takes feedback into account already at the channel estimation stage, by a-priori creating a low-dimensional effective cross-link channel that can be quantized more accurately than the full-dimensional channel, while maintaining a high array gain in the direct channel towards the intended user.

Performance Trade-Offs in Multi-Processor Approximate Message Passing

Junan Zhu (NCSU, USA); Ahmad Beirami (Duke University, MIT, USA); Dror Baron (North Carolina State University, USA)

We consider large-scale linear inverse problems in Bayesian settings. Our general approach follows a recent line of work that applies the approximate message passing (AMP) framework in multi-processor (MP) computational systems by storing and processing a subset of rows of the measurement matrix, along with the corresponding measurements, at each MP node. In each MP-AMP iteration, the nodes of the MP system and its fusion center exchange lossily compressed messages pertaining to their estimates of the input. There is a trade-off between the physical costs of the reconstruction process, such as computation time and communication load, and the reconstruction quality, and it is impossible to minimize all of these costs simultaneously. We pose this minimization as a multi-objective optimization problem (MOP) and study the properties of its best trade-offs (Pareto optimality). We prove that the achievable region of this MOP is convex, and we conjecture how the combined cost of computation and communication scales with the desired mean squared error. These properties are verified numerically.
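
The notion of Pareto optimality in such a multi-objective problem can be illustrated by filtering dominated cost tuples; a toy sketch only (the paper's MOP is over MP-AMP cost functionals, not finite lists):

```python
def pareto_front(points):
    """Return the Pareto-optimal tuples when minimizing every coordinate.

    A point is dominated if another point is <= in all coordinates and
    differs in at least one.  Each tuple below could stand for
    (computation, communication, distortion) costs of one scheme.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

Convexity of the achievable region, as proved in the paper, means the Pareto frontier of the true cost region can be traced by scalarization (minimizing weighted sums of the costs).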

Quasi Linear Codes: Application to Point-to-Point and Multi-Terminal Source Coding

Farhad Shirani Chaharsooghi and Mohsen Heidari Khoozani (University of Michigan, USA); Sandeep Pradhan (University Michigan, USA)

A new ensemble of structured codes, called Quasi Linear Codes (QLCs), is introduced. QLCs are constructed by taking subsets of linear codes; they have a looser structure than linear codes and are not closed under addition. We argue that these codes provide gains in achievable rate-distortion (RD) regions in various multi-terminal source coding problems. We derive the covering bounds necessary for analyzing the performance of QLCs. We then consider the Multiple-Descriptions (MD) problem and prove through an example that applying QLCs yields an improved achievable RD region for this problem. Finally, we derive an inner bound to the achievable RD region for the general MD problem which strictly contains all previously known achievable regions.

Community Detection with Colored Edges

Narae Ryu and Sae-Young Chung (KAIST, Korea)

In this paper, we prove a sharp threshold for the community detection problem with colored edges. We assume two equal-sized communities and $m$ different types of edges. If two vertices are in the same community, an edge of type $i$ appears with probability $p_i=\alpha_i\log{n}/n$ for $1\leq i \leq m$; otherwise, it appears with probability $q_i=\beta_i\log{n}/n$ for $1\leq i \leq m$, where $\alpha_i$ and $\beta_i$ are positive constants and $n$ is the total number of vertices. Under these assumptions, a fundamental limit on community detection is characterized using the Hellinger distance between the two distributions. If $\sum_{i=1}^{m} {(\sqrt{\alpha_{i}}-\sqrt{\beta_{i}})}^{2}>2$, then community detection via the maximum likelihood (ML) estimator is possible with high probability. If $\sum_{i=1}^{m} {(\sqrt{\alpha_{i}}-\sqrt{\beta_{i}})}^{2}<2$, the probability that the ML estimator fails to detect the communities does not go to zero.
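
The recovery condition above is simple to evaluate numerically; a minimal sketch, with illustrative parameter names:

```python
import math

def detectable(alpha, beta):
    """Evaluate the sharp threshold from the abstract: exact recovery
    via ML succeeds w.h.p. iff sum_i (sqrt(alpha_i) - sqrt(beta_i))^2 > 2.
    alpha, beta are the sequences of positive constants for the m edge types."""
    return sum((math.sqrt(a) - math.sqrt(b)) ** 2
               for a, b in zip(alpha, beta)) > 2
```

For instance, `detectable([9, 1], [1, 9])` is true (the Hellinger-type sum equals 8), while nearly identical within/across parameters fall below the threshold.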

Multiplicative Repetition Based Superposition Transmission of Nonbinary Codes

Xijin Mu, Baoming Bai and Rui Zhang (Xidian University, P.R. China)

This paper presents a new superposition transmission method, called multiplicative repetition based superposition transmission (MRST). This method is based on the multiplicative repetition and superposition of nonbinary codes. For the encoding process, we use a short nonbinary code as the basic code and superimpose the multiplicatively repeated codewords of the basic code. Two decoding algorithms, a joint decoding algorithm and a sliding-window decoding algorithm, are proposed. Numerical results show that MRST with the joint decoding algorithm achieves good performance. Compared with block Markov superposition transmission (BMST), MRST with the sliding-window decoding algorithm achieves better performance with a smaller memory order.

Integer-Forcing Source Coding: Successive Cancellation and Source-Channel Duality

Wenbo He and Bobak Nazer (Boston University, USA)

Integer-forcing is a technique that exploits the algebraic structure of a linear or lattice code to realize “single-user” encoding and decoding algorithms with significant rate gains over conventional strategies. It was originally proposed for the Gaussian MIMO multiple-access channel. Subsequent efforts have generalized this strategy to the Gaussian MIMO broadcast channel and the Gaussian distributed source coding problem. Our prior work has established uplink-downlink duality for integer-forcing. Here, we propose a successive cancellation generalization of integer-forcing source coding. We then develop source-channel duality results that connect the achievable rates of this scheme to those of successive integer-forcing channel coding.

Trade-off between Communication and Cooperation in the Interference Channel

Farhad Shirani Chaharsooghi (University of Michigan, USA); Sandeep Pradhan (University Michigan, USA)

We consider the problem of coding over the multi-user Interference Channel (IC). It is well-known that aligning the interfering signals results in improved achievable rates in certain setups involving more than two users. We argue that in the general interference problem, senders face a tradeoff between communicating their messages to their corresponding decoders and cooperating with other users by aligning their signals. Traditionally, interference alignment is carried out using structured codes such as linear codes and group codes. We show through an example that the usual structured coding schemes used for interference neutralization lack the necessary flexibility to optimize this tradeoff. Based on this intuition, we propose a new class of codes for this problem. We use the example to show that the application of these codes gives strict improvements in terms of achievable rates. Finally, we derive a new achievable region for the three-user IC which strictly improves upon the previously known inner bounds for this problem.

How to Compute Modulo Prime-Power Sums

Mohsen Heidari Khoozani (University of Michigan, USA); Sandeep Pradhan (University Michigan, USA)

The problem of computing modulo prime-power sums is investigated for distributed source coding as well as computation over a Multiple-Access Channel (MAC). We build upon group codes and present a new class of codes called Quasi Group Codes (QGCs). A QGC is a subset of a group code; these codes are not closed under the group addition. We investigate some properties of QGCs and provide a packing and a covering bound. Next, we use these bounds to derive achievable rates for distributed source coding as well as computation over a MAC. We show that strict improvements over the previously known schemes can be obtained using QGCs.

Chained Kullback-Leibler Divergences

Dmitri Pavlichin and Tsachy Weissman (Stanford University, USA)

We define and characterize the “chained” Kullback-Leibler divergence $\min_w D(p||w) + D(w||q)$ minimized over all intermediate distributions $w$ and the analogous $k$-fold chained K-L divergence $\min D(p||w_1) + D(w_1||w_2) + \ldots + D(w_k||q)$ minimized over the entire path $(w_1,\ldots,w_k)$. This quantity arises in a large deviations analysis of a Markov chain on the set of types — the Wright-Fisher model of neutral genetic drift: a population with allele distribution $q$ produces offspring with allele distribution $w$, which then produce offspring with allele distribution $p$, and so on.
The chained divergences enjoy some of the same properties as the K-L divergence (like joint convexity in the arguments) and appear in $k$-step versions of some of the same settings as the K-L divergence (like information projections and a conditional limit theorem). We further characterize the optimal $k$-step “path” of distributions appearing in the definition and apply our findings in a large deviations analysis of the Wright-Fisher process. We make a connection to information geometry via the previously studied continuum limit, where the number of steps tends to infinity, and the limiting path is a geodesic in the Fisher information metric.
Finally, we offer a thermodynamic interpretation of the chained divergence (as the rate of operation of an appropriately defined Maxwell’s demon) and we state some natural extensions and applications (a $k$-step mutual information and $k$-step maximum likelihood inference). We release code for computing the objects we study.
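
For binary alphabets, the chained divergence defined above can be approximated by brute force; a minimal sketch (the grid search is purely illustrative, while the paper characterizes the optimal intermediate distributions analytically):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in nats for finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chained_kl(p, q, grid=10000):
    """Approximate min_w D(p||w) + D(w||q) over binary intermediate
    distributions w by grid search (binary alphabets only)."""
    best = float("inf")
    for i in range(1, grid):
        w = (i / grid, 1 - i / grid)
        best = min(best, kl(p, w) + kl(w, q))
    return best
```

By construction the chained divergence never exceeds $D(p\|q)$ (take $w=q$), matching its interpretation as a relaxed two-step path cost.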

Windowed Encoding of Spatially Coupled LDGM Codes for Lossy Source Compression

Ahmad Golmohammadi and David G. M. Mitchell (New Mexico State University, USA); Joerg Kliewer (New Jersey Institute of Technology, USA); Daniel J. Costello, Jr. (University of Notre Dame, USA)

Recently, it has been shown that a class of spatially coupled low-density generator-matrix (SC-LDGM) code ensembles displays distortion saturation for the lossy binary symmetric source coding problem with the belief propagation guided decimation (BPGD) algorithm, i.e., the BPGD distortion approaches the optimal expected distortion of the underlying ensemble asymptotically in code length. Here, we investigate the distortion performance of a practical class of protograph-based SC-LDGM code ensembles and demonstrate distortion saturation numerically. Moreover, we propose an efficient windowed encoding (WE) algorithm that takes advantage of the convolutional structure of the SC-LDGM codes. By using the WE algorithm, a distortion very close to the rate-distortion limit can be achieved for a fixed compression rate with low-to-moderate encoding latency.

Cooperative Data Exchange with Priority Classes

Anoosheh Heidarzadeh (Texas A&M University); Muxi Yan and Alex Sprintson (Texas A&M University, USA)

This paper considers the problem of cooperative data exchange with different client priority classes. In this problem, each client initially knows a subset of packets in the ground set X of size K, and all clients wish to learn all packets in X. The clients exchange packets by broadcasting coded combinations of their packets. The primary objective is to satisfy all high-priority clients in the first round of transmissions with minimum sum-rate, and the secondary objective is to satisfy low-priority clients in the second round of transmissions with minimum sum-rate, subject to minimizing the sum-rate in the first round. For any arbitrary problem instance, we provide a linear programming-based approach to find the minimum sum-rate in each round. Moreover, for the case in which the packets are randomly distributed among clients, we derive a closed-form expression for the minimum sum-rate in each round, which holds with probability approaching 1 as K tends to infinity.

Communicating Correlated Sources Over a MAC in the absence of a Gács-Körner Common Part

Arun Padakandla (Purdue University, USA)

The joint source-channel coding problem of transmitting a pair of correlated sources over a $2$-user MAC is considered. A new concatenated coding scheme, comprising an inner code of fixed blocklength and an outer code of arbitrarily large blocklength, is proposed. Its information-theoretic performance is analyzed to derive a new set of sufficient conditions. An example is identified to demonstrate that the proposed coding technique can strictly outperform the current best known scheme, due to Cover, El Gamal and Salehi. Our findings are based on Dueck's ingenious coding technique proposed for the particular example therein.

Joint Source-Channel Coding for Broadcasting Correlated Sources

Erman Köken and Ertem Tuncel (UC Riverside, USA)

We consider the lossy transmission of a memoryless bivariate Gaussian source over an average-power-constrained bandwidth-mismatched Gaussian broadcast channel with two receivers where each receiver is interested in only one component. We propose new hybrid digital/analog coding schemes which are demonstrated to outperform the previously known schemes.

Linear Network Coding Capacity Region of The Smart Repeater with Broadcast Erasure Channels

Jaemin Han and Chih-Chun Wang (Purdue University, USA)

This work considers the smart repeater network where a single source s wants to send two independent packet streams to destinations {d1, d2} with the help of relay r. The transmission from s or r is modeled by packet erasure channels: For each time slot, a packet transmitted by s may be received, with some probabilities, by a random subset of {d1, d2, r}; and those transmitted by r will be received by a random subset of {d1, d2}. Interference is avoided by allowing at most one of {s, r} to transmit in each time slot. One example of this model is any cellular network that supports two cell-edge users when a relay in the middle uses the same downlink resources for throughput/safety enhancement.

In this setting, we study the capacity region of (R1,R2) when allowing linear network coding (LNC). The proposed LNC inner bound introduces more advanced packet-mixing operations beyond the previously well-known butterfly-style XOR operation on overheard packets of two co-existing flows. A new LNC outer bound is derived by exploring the inherent algebraic structure of the LNC problem. Numerical results show that, in more than 85% of the experiments, the relative sum-rate gap between the proposed outer and inner bounds is smaller than 0.08%, thus effectively bracketing the LNC capacity of the smart repeater problem.

Effects of the approximations from BP to AMP for small-sized problems

Arise Kuriya and Toshiyuki Tanaka (Kyoto University, Japan)

The Approximate Message Passing (AMP) algorithm is derived from the Belief Propagation (BP) algorithm by introducing approximations. While the properties and behavior of AMP in large systems are well studied and understood, there are few studies of AMP applied to relatively small-sized problems, where the effects of the approximations are neither negligible nor trivial. We investigate AMP in small-sized problems, focusing especially on the effects of the approximations and the mechanism of the performance degradation. To observe the effects of the approximations, we conduct numerical experiments that compare the AMP and BP algorithms, applied to the problems of CDMA multiuser detection and Ising perceptron learning. For comparison, we also provide and discuss results from Bayes-optimal estimation obtained by exactly computing marginals, and from an approximated BP algorithm that arises as an intermediate step in the derivation of AMP from BP.

Age of Information: The Gamma Awakening

Elie Najm (Ecole Polytechnique Fédérale de Lausanne, Switzerland); Rajai Nasser (École Polytechnique Fédérale de Lausanne, Switzerland)

We consider a scenario where a monitor is interested in being up to date with respect to the status of some system which is not directly accessible to this monitor. However, we assume a source node has access to the status and can send status updates as packets to the monitor through a communication system. We also assume that the status updates are generated randomly as a Poisson process. The source node can manage the packet transmission to minimize the age of information at the destination node, which is defined as the time elapsed since the last successfully transmitted update was generated at the source. We use queuing theory to model the source-destination link and we assume that the time to successfully transmit a packet is gamma distributed. We consider two packet management schemes: last-come first-served (LCFS) with preemption and LCFS without preemption. We compute and analyze the average age and the average peak age of information under these assumptions. Moreover, we extend these results to the case where the service time is deterministic.
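
The average age under preemptive LCFS can also be estimated by Monte Carlo simulation; a minimal sketch under the abstract's assumptions (Poisson updates, gamma service), with illustrative parameter names and no claim to reproduce the paper's closed-form analysis:

```python
import random

def avg_age_lcfs_preemptive(lam, shape, scale, horizon, seed=0):
    """Estimate the time-average age of information for a preemptive LCFS
    link: Poisson(lam) update generation, gamma(shape, scale) service.
    Under preemption the server always holds the newest packet, so packet i
    is delivered iff its service finishes before arrival i+1."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < horizon:
        t += rng.expovariate(lam)
        arrivals.append(t)
    deliveries = []                       # (delivery_time, generation_time)
    for i in range(len(arrivals) - 1):
        done = arrivals[i] + rng.gammavariate(shape, scale)
        if done <= arrivals[i + 1]:
            deliveries.append((done, arrivals[i]))
    # integrate the sawtooth age process between successive deliveries
    area, span = 0.0, 0.0
    for (d0, g0), (d1, _) in zip(deliveries, deliveries[1:]):
        area += ((d0 - g0) + (d1 - g0)) / 2 * (d1 - d0)
        span += d1 - d0
    return area / span
```

The sawtooth integration uses the standard trapezoid between resets: between deliveries, age grows linearly from the previous packet's system time.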

Decentralized Sequential Change Detection with Ordered CUSUM

Sourabh Banerjee and Georgios Fellouris (University of Illinois at Urbana-Champaign, USA)

We consider the problem of decentralized sequential change detection, in which K sensors monitor a system in real time, and at some unknown time an anomaly in the environment changes the distribution of the observations at all sensors. The sensors communicate with a fusion center that is responsible for quickly detecting the change while controlling the false alarm rate. We focus on two families of decentralized detection rules with minimal communication requirements. First, we assume that each sensor runs a local CUSUM algorithm and communicates with the fusion center only once, when it detects the change. The fusion center then declares that a change has occurred when M of the K sensors have raised an alarm. Assuming that all sensors have the same signal strength, we show that the asymptotic performance of these one-shot schemes is free of M to first order, but degrades with M to second order, suggesting that the best strategy for the fusion center is to declare the change upon the first alarm. Second, we consider schemes that detect the change when M of the K sensors agree simultaneously that the change has occurred. While a first-order asymptotic analysis suggests that it is optimal for the fusion center to wait for all sensors to agree simultaneously, a second-order analysis reveals that it can be better to wait for fewer (but more than half) of the sensors to agree. The insights from these asymptotic results are supported by a simulation study.
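
The local statistic each sensor runs is the standard one-sided CUSUM recursion; a minimal sketch, where the Gaussian mean-shift log-likelihood ratio is an illustrative choice of pre-/post-change model:

```python
def cusum(observations, llr, threshold):
    """One-sided CUSUM: W_t = max(0, W_{t-1} + llr(x_t)); raise an alarm
    when W_t >= threshold.  Returns the 1-indexed alarm time, or None."""
    w = 0.0
    for t, x in enumerate(observations, start=1):
        w = max(0.0, w + llr(x))
        if w >= threshold:
            return t
    return None

def gaussian_shift_llr(x, mu=1.0):
    """Log-likelihood ratio for a unit-variance Gaussian mean shift 0 -> mu."""
    return mu * x - mu * mu / 2
```

In the one-shot scheme described above, each sensor would report only this alarm time to the fusion center, which then combines the K alarm times.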

EXIT Analysis for Belief Propagation in Degree-Correlated Stochastic Block Models

Hussein Saad and Ahmed Abotabl (University of Texas at Dallas, USA); Aria Nosratinia (University of Texas, Dallas, USA)

This paper proposes the extrinsic information transfer (EXIT) method for the analysis of belief propagation in community detection on random graphs, specifically under the degree-correlated stochastic block model. Belief propagation in community detection has been studied under density evolution; this work for the first time brings EXIT analysis to community detection on random graphs, which has certain advantages that are well documented in the parallel context of error control coding. We show using simulations that, in the case of equally-sized communities, when the within-community and cross-community connection probabilities differ, there is only one intersection point, and hence belief propagation is optimal. When these connection probabilities are the same, we show that belief propagation is equivalent to random guessing and the curves intersect at the trivial zero-zero point. For roughly equal-sized communities, we show that there is always only one intersection point, suggesting that belief propagation is optimal. Finally, for communities of disparate size, we show that there are multiple intersection points, and hence belief propagation is likely to be sub-optimal.

A Progressive Edge Growth Algorithm for Bit Mapping Design of LDPC Coded BICM Schemes

Junyi Du (University of Electronic Science and Technology of China & University of New South Wales, P.R. China); Jinhong Yuan (University of New South Wales, Australia); Liang Zhou (UESTC, P.R. China); Xuan He (University of Electronic Science and Technology of China, P.R. China)

In this paper, we consider the design of the bit mapping in low-density parity-check (LDPC) coded bit-interleaved coded modulation (BICM) schemes. We introduce a two-layer bipartite graph to represent the LDPC coded BICM scheme, in which a new bit mapping graph linking sub-channels to variable nodes (VNs) is added to the conventional Tanner graph. We propose a progressive edge growth (PEG) algorithm to design the bit mapping for the BICM scheme. The design paradigm is to provide more protection to the VNs that are allocated to the sub-channels with the lowest mutual information. We define a novel concept, the unreliable depth profile, to classify the reliability of the VNs. By connecting more reliable edges to the least reliable VNs, we significantly improve the reliable-edge distribution of the unreliable VNs, thus improving the extrinsic information in the iterative decoding. The proposed bit mapping algorithm is employed for the design of LDPC coded BICM with 64-QAM and 256-QAM. Simulation results show that the proposed design considerably improves the error performance compared to the conventional consecutive bit mapping strategy.

Bandwidth Adaptive & Error Resilient Regenerating Codes with Minimum Repair Bandwidth

Kaveh Mahdaviani and Ashish Khisti (University of Toronto, Canada); Soheil Mohajer (University of Minnesota, USA)

Regenerating codes are efficient methods for distributed storage in practical networks where node failures are common. They guarantee low-cost data reconstruction and repair by accessing only a predefined number of arbitrarily chosen storage nodes in the network. In this work we study the fundamental limits on the required total repair bandwidth and the storage capacity of these codes under the assumptions that (i) both data reconstruction and repair are resilient to the presence of a certain number of erroneous nodes in the network, and (ii) the number of helper nodes in every repair is not fixed, but is a flexible parameter that can be selected at run-time. We focus on the minimum repair bandwidth point, propose an associated coding scheme that possesses both of these extra properties, and prove its optimality.

(Almost) Practical Tree Codes

Anatoly Khina (California Institute of Technology); Wael Halbawi and Babak Hassibi (California Institute of Technology, USA)

We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of cyber-physical and networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control systems setting.

Generalized Fisher Information and Upper Bounds on the Differential Entropy of Independent Sums

Jihad Fahs and Ibrahim Abou-Faycal (American University of Beirut, Lebanon)

We consider infinitesimal perturbations along symmetric stable variables and define a new information measure. We derive a generalized de Bruijn identity, prove that the new measure satisfies a data processing inequality and a generalized Fisher information inequality, and use these to establish an upper bound on the entropy of independent sums when one of the variables is stable.
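
For reference, the classical de Bruijn identity that this generalizes relates Gaussian perturbations to Fisher information (the paper's version replaces the Gaussian perturbation by a symmetric stable one):

```latex
% Classical de Bruijn identity: Gaussian perturbation of X
\frac{\mathrm{d}}{\mathrm{d}t}\, h\!\left(X + \sqrt{t}\,Z\right)
  = \frac{1}{2}\, J\!\left(X + \sqrt{t}\,Z\right),
\qquad Z \sim \mathcal{N}(0,1)\ \text{independent of } X,
```

where $h$ denotes differential entropy and $J$ the Fisher information.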

Information-Theoretic Lower Bounds for Recovery of Diffusion Network Structures

Keehwan Park and Jean Honorio (Purdue University, USA)

We study the information-theoretic lower bound of the sample complexity of the correct recovery of diffusion network structures. We introduce a discrete-time diffusion model based on the Independent Cascade model for which we obtain a lower bound of order $\Omega(k \log p)$, for directed graphs of $p$ nodes, and at most $k$ parents per node. Next, we introduce a continuous-time diffusion model, for which a similar lower bound of order $\Omega(k \log p)$ is obtained. Our results show that the algorithm of Pouget-Abadie et al. is statistically optimal for the discrete-time regime. Our work also opens the question of whether it is possible to devise an optimal algorithm for the continuous-time regime.

A Geometric Analysis of Phase Retrieval

Ju Sun, Qing Qu and John Wright (Columbia University, USA)

Given measurements of the form $y_k = | \langle \mathbf{a}_k, \mathbf{x} \rangle |$ for $k = 1, \dots, m$, is it possible to recover $\mathbf{x} \in \mathbb{C}^n$? This is the phase retrieval (PR) problem, a fundamental task in various disciplines. Natural nonconvex heuristics often work remarkably well for PR in practice, but lack clear theoretical explanations. In this paper, we take a step towards bridging this gap. We show that when the sensing vectors $\mathbf{a}_k$ are generic (i.i.d. complex Gaussian) and the number of measurements is large enough ($m \ge O(n \log^3 n)$), with high probability, a natural least-squares formulation for PR has the following benign geometric structure: (1) all global minimizers are the target signal $\mathbf{x}$ and its equivalent copies; and (2) the objective function has negative curvature around each saddle point. Such structure allows algorithmic possibilities for efficient global optimization. We describe a second-order trust-region algorithm that provably finds a global minimizer in polynomial time, from an arbitrary initialization.
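
A first-order cousin of the nonconvex approach analyzed above is plain Wirtinger gradient descent on the least-squares objective; a minimal NumPy sketch (the paper's algorithm is a second-order trust-region method, and the step size and iteration count here are illustrative):

```python
import numpy as np

def objective(A, y, z):
    """Least-squares phase retrieval objective: mean((y^2 - |Az|^2)^2) / 2."""
    return np.mean((y ** 2 - np.abs(A @ z) ** 2) ** 2) / 2

def phase_retrieval_gd(A, y, z0, iters=200, step=1e-3):
    """Plain Wirtinger gradient descent on the objective above.
    No careful initialization or step-size tuning is performed."""
    z = np.asarray(z0, dtype=complex)
    m = A.shape[0]
    for _ in range(iters):
        Az = A @ z
        z = z - step * (A.conj().T @ ((np.abs(Az) ** 2 - y ** 2) * Az)) / m
    return z
```

With generic complex Gaussian sensing vectors and small step sizes, such descent steadily decreases the objective; the paper's geometric result explains why no spurious local minima obstruct it in the high-measurement regime.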

Polar Coding for Group Testing

Sreechakra Goparaju (University of California, San Diego, USA); Yonatan Kaspi (UCSD, USA); Alexander Vardy (University of California San Diego, USA); Lele Wang (Tel Aviv University & Stanford University, Israel)

Group testing has been studied in various forms and applied to various disciplines in the past decades. The objective of a group testing problem is to identify a set of $D$ targets from a set of size $M$ using pooled tests, where a test output on a subset of items is positive if the subset includes at least one target and negative otherwise. We consider a noisy version of this problem where the output is corrupted by noise that depends on the number of items a test pools. This is inspired by certain practical scenarios, such as more randomness introduced by pooling more blood samples, or larger measurement noise incurred when probing a larger area for a target.

Using the multiple access channel (MAC) interpretation provided recently by Kaspi et al. for the case of two targets, we construct a low-complexity ($O(\log(M)\log\log(M))$) group testing code using a polar coding construction. This achieves the order-optimal number of tests (including the exact constant) in the asymptotic limit as $M \to \infty$. Concurrently, the polar coding construction also provides a coding scheme for the two-user MAC when the users are restricted to use the same codebook and decoding is allowed up to a permutation of the transmitted messages.

To Feedback or Not to Feedback

Changho Suh (KAIST, Korea); David Tse (Stanford University, USA); Jaewoong Cho (KAIST, Korea)

We explore two-way interference channels (ICs), consisting of forward and backward ICs with four independent messages: two associated with the forward IC and two with the backward IC. For a linear deterministic model of this channel, we develop inner and outer bounds on the capacity region. As a consequence, we demonstrate that interaction across the forward and backward channels enables a more beneficial use of the channels, thereby yielding strict capacity improvements over non-interactive independent transmission. Moreover, our novel outer bound completely characterizes the channel regime in which interaction has no bearing on capacity.

GDoF region characterization of the weak MIMO IC with No CSIT

Sanjay Karmakar (North Dakota State University, USA)

We characterize the generalized degrees of freedom (GDoF) region of an ergodic fading 2-user interference channel (IC) with multiple antennas at each node (MIMO) under the assumption that the channel realizations are known only at the receivers and not at the transmitters (no CSIT). We consider the {\it weak} interference scenario, where the ratio — denoted by $\alpha$ — of the signal strength of the interfering links to that of the corresponding direct links in dB scale is assumed to be smaller than half. Assuming that the transmitters and receivers have $M_1,M_2$ and $N_1, N_2$ antennas, respectively, we characterize the GDoF region of the weak IC for all antenna configurations except a small class where $(M_2+N_2)>\min\{M_1,N_1\}>N_2>M_2$. We derive new outer bounds on the GDoF which enable us to bypass the use of full-CSIT GDoF bounds as outer bounds for the no-CSIT scenario, and which therefore have the potential to be useful for analyzing other networks for which a full-CSIT GDoF characterization is not available. The achievability of these upper bounds is then proved by specifying explicit coding-decoding schemes.

New Sufficient Conditions for Multiple-Access Channel with Correlated Sources

Mohsen Heidari Khoozani and Farhad Shirani Chaharsooghi (University of Michigan, USA); Sandeep Pradhan (University Michigan, USA)

The problem of three-user Multiple-Access Channel (MAC) with correlated sources is investigated. An extension to the Cover-El Gamal-Salehi (CES) scheme is introduced. We argue that if the sources impose certain algebraic structures, then the application of structured codes improves upon the CES scheme. Based on this notion, we use a combination of the CES scheme with linear codes, and propose a new coding strategy. We derive new sufficient conditions to transmit correlated sources reliably. We consider an example of a three-user MAC with binary inputs. Using this example, we show strict improvements over the CES scheme.

Learning Markov Distributions: Does Estimation Trump Compression?

Moein Falahatgar (University of California San Diego, USA); Alon Orlitsky (University of California, San Diego, USA); Venkatadheeraj Pichapati (UCSD, India); Ananda Theertha Suresh (University of California, San Diego, USA)

A significant amount of multidisciplinary research has recently focused on the rate at which i.i.d. distributions can be estimated. In particular, for i.i.d. distributions, optimal estimation was shown to imply optimal compression.
Progressing from idealized to practical distributions, we define and study the rate at which Markov distributions can be estimated. We determine this rate up to a constant factor. Perhaps surprisingly, the results show that unlike the i.i.d. case, for Markov distributions, optimal estimation does not imply optimal compression. Yet we present an estimator that simultaneously achieves optimal compression to the right constant, and optimal estimation to the same constant factor as in our bounds.
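For intuition only, the following sketch estimates a two-state chain's transition matrix from a single trajectory using add-$\beta$ smoothing, a standard baseline rather than the rate-optimal estimator of the paper; the chain parameters, smoothing constant, and trajectory length are illustrative choices.

```python
import random
from collections import defaultdict

def estimate_transitions(seq, states, beta=0.5):
    """Add-beta smoothed estimate of a Markov chain's transition
    probabilities from one observed trajectory (illustrative baseline)."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(seq, seq[1:]):
        counts[prev][cur] += 1
    est = {}
    for s in states:
        total = sum(counts[s].values())
        est[s] = {t: (counts[s][t] + beta) / (total + beta * len(states))
                  for t in states}
    return est

random.seed(0)
# True two-state chain with P(0->1) = 0.2 and P(1->0) = 0.4.
P = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}
seq = [0]
for _ in range(20000):
    seq.append(1 if random.random() < P[seq[-1]][1] else 0)
est = estimate_transitions(seq, [0, 1])
```

With transition probabilities bounded away from zero, as in the subclass the paper highlights, every state is visited often and such per-state estimates concentrate quickly.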
We also consider the important subclass of Markov distributions where all transition probabilities are bounded away from zero. For this subclass we determine the best estimation rate to the right constant factor and show that for this subclass, optimal estimation again implies optimal compression.

Superposition coding in the combination network

Henry Romero (University of Colorado at Boulder & MIT Lincoln Laboratory, USA); Mahesh Kumar Varanasi (University of Colorado, USA)

An inner bound is presented for the combination network based on superposition coding and partial interference decoding. This inner bound is tight in every case where capacity is known, and moreover, unlike previous achievability schemes, the scheme presented herein does not require network coding. By avoiding network coding, the inner bound has fewer extraneous parameters. Moreover, it contains the intersection of polymatroids, one for each receiver, a structure that may be more amenable to further analysis than the previous inner bounds for the combination network.

On the Soliton Spectral Efficiency in Non-linear Optical Fibers

Pavlos Kazakopoulos and Aris L. Moustakas (University of Athens, Greece)

Optical fiber communications can be analyzed using the non-linear Schrödinger equation, which is fully integrable. In this paper we show how this integrability can be exploited to communicate using solitonic pulses. Based on a white Gaussian input signal distribution, we use the known distributions of eigenvalues and scattering data to derive an analytical expression for a lower bound on the spectral efficiency, taking into account explicitly the effects of noise due to amplification. We show that in the low noise regime, the soliton channel exhibits two different behaviors, interpolated by a single scalar parameter that controls the nonlinearity of the system. Close to linearity the soliton channel approaches AWGN, while for strongly nonlinear systems the spectral efficiency declines. The bound reaches a maximum between the two regions.

On Rényi Entropy Power Inequalities

Eshed Ram and Igal Sason (Technion – Israel Institute of Technology, Israel)

This work introduces Rényi entropy power inequalities (R-EPIs) for sums of independent random vectors, improving recent R-EPIs by Bobkov and Chistyakov. The latter work inspired the derivation of the improved bounds.

A Systematic Design Approach for Non-coherent Grassmannian Constellations

Kareem M. Attiah (Alexandria University & Faculty of Engineering, Egypt); Karim G Seddik (American University in Cairo, Egypt); Ramy Gohary and Halim Yanikomeroglu (Carleton University, Canada)

In this paper, we develop a geometry-inspired methodology for generating systematic and structured Grassmannian constellations with large cardinalities. In the proposed methodology, we begin with a small close-to-optimal “parent” Grassmann constellation. Each point in this constellation is augmented with a number of “children” points, which are generated along a set of geodesics emanating from that point. These geodesics are chosen to ensure close-to-maximal spacing. In particular, the directions of the geodesics and the distance that each “child” point is moved are chosen to maximize the pairwise Frobenius distance between the resulting constellation points. Although finding these directions directly seems difficult, by embedding the Grassmann manifold in a sphere of larger dimension, we are able to develop structures that are not only simple to generate but that also yield constellations that, under certain conditions, satisfy the maximum distance criterion and lie within a decaying gap from a tight upper bound. Numerical results suggest that the performance of the new constellations is comparable to that of the ones generated directly and significantly better than the performance of the ones generated using the exponential map.
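A minimal sketch of the distance criterion being optimized (not of the geodesic construction itself): the Frobenius distance between subspace points on the Grassmannian can be evaluated via their projection matrices. The random one-dimensional “constellation” below and the helper names are illustrative assumptions.

```python
import numpy as np

def chordal_frobenius_distance(U, V):
    """Distance between span(U) and span(V) (orthonormal columns),
    measured via projection matrices: ||U U^H - V V^H||_F."""
    Pu = U @ U.conj().T
    Pv = V @ V.conj().T
    return np.linalg.norm(Pu - Pv, "fro")

def min_pairwise_distance(points):
    """Smallest pairwise distance of a Grassmannian constellation,
    the quantity a max-min spacing design tries to push up."""
    d = np.inf
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = min(d, chordal_frobenius_distance(points[i], points[j]))
    return d

rng = np.random.default_rng(1)
# Toy "constellation": 8 random 1-dim subspaces of C^4.
pts = []
for _ in range(8):
    v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    pts.append((v / np.linalg.norm(v)).reshape(4, 1))
```

For one-dimensional subspaces this distance equals $\sqrt{2(1-|\langle u,v\rangle|^2)}$, so it is capped at $\sqrt{2}$.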

Erasure Schemes Using Generalized Polar Codes: Zero-Undetected-Error Capacity and Performance Trade-offs

Rajai Nasser (École Polytechnique Fédérale de Lausanne, Switzerland)

We study the performance of generalized polar (GP) codes when they are used for coding schemes involving erasure. GP codes are a family of codes which contains, among others, the standard polar codes of Arikan and Reed-Muller codes. We derive a closed-form expression for the zero-undetected-error capacity $I_0^{GP}(W)$ of GP codes for a given binary memoryless symmetric (BMS) channel $W$ under the low-complexity successive cancellation decoder with erasure. We show that for every $R<I_0^{GP}(W)$, there exists a generalized polar code of blocklength $N$ and of rate at least $R$ where the undetected-error probability is zero and the erasure probability is less than $2^{-N^{1/2-\epsilon}}$. On the other hand, for any GP code of rate $I_0^{GP}(W)<R<I(W)$ and blocklength $N$, the undetected-error probability cannot be made less than $2^{-N^{1/2+\epsilon}}$ unless the erasure probability is close to $1$.
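For standard polar codes over a binary erasure channel (a special case of the GP family; the erasure decoder itself is not sketched here), polarization can be traced exactly with Arikan's recursion $z \mapsto (2z-z^2, z^2)$. The sketch below tabulates the $2^{10}$ synthetic-channel erasure probabilities for BEC(1/2) and counts the nearly noiseless ones; the $10^{-3}$ threshold is an arbitrary illustrative choice.

```python
def polarized_erasure_probs(z0, n):
    """Erasure probabilities of the 2^n synthetic channels obtained by
    n levels of Arikan's polar transform of a BEC(z0): each level maps
    z to (2z - z^2, z^2) for the minus/plus channels respectively."""
    probs = [z0]
    for _ in range(n):
        probs = [f(z) for z in probs
                 for f in (lambda z: 2 * z - z * z,   # "minus" channel
                           lambda z: z * z)]          # "plus" channel
    return probs

z = polarized_erasure_probs(0.5, 10)   # N = 1024 synthetic channels
good = sum(1 for p in z if p < 1e-3)   # nearly noiseless channels
```

The recursion conserves the average erasure probability at every level, which is the finite-blocklength shadow of capacity conservation under polarization.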

On Coding Capacity of Delay-constrained Network Information Flow: An Algebraic Approach

Minghua Chen (The Chinese University of Hong Kong, P.R. China); Ye Tian (Nanjing University & The Chinese University of Hong Kong, P.R. China); Chih-Chun Wang (Purdue University, USA)

Recently in [1], Wang and Chen showed that network coding (NC) can double the throughput as compared to routing in delay-constrained single-unicast communication. This is in sharp contrast to its delay-unconstrained counterpart, where coding has no throughput gain. The result reveals that the landscape of delay-constrained communication is fundamentally different from the well-understood delay-unconstrained one and calls for further investigation. In this paper, we generalize the Koetter-Médard algebraic approach [2] for delay-unconstrained network coding to the delay-constrained setting. The generalized approach allows us to systematically model deadline-induced interference, which is the unique challenge in studying network coding for delay-constrained communication. Using this algebraic approach, we characterize the coding capacity for single-source unicast and multicast as the rank difference between an information space and a deadline-induced interference space. The results allow us to numerically compute the NC capacity for any given graph, serving as a benchmark for existing and future solutions on improving delay-constrained throughput.
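As a toy numerical illustration of such a rank-difference characterization (the vector spaces below are made up, not derived from an actual delay-constrained network), the rank of a space over GF(2) can be computed with bitmask Gaussian elimination:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are integer bitmasks."""
    pivots = []
    for row in rows:
        cur = row
        for p in pivots:
            # Reduce by each stored pivot; min() keeps whichever of
            # cur, cur^p has the lower leading bit, i.e. the reduction.
            cur = min(cur, cur ^ p)
        if cur:
            pivots.append(cur)
    return len(pivots)

# Hypothetical "information" space spanned by 4 vectors in GF(2)^5, and a
# "deadline-induced interference" subspace spanned by 2 of their combinations.
info = [0b10010, 0b01001, 0b00110, 0b11000]
interference = [0b10010 ^ 0b01001, 0b00110]

# Capacity as rank(information space) - rank(interference space).
capacity = gf2_rank(info) - gf2_rank(interference)
```

In the paper's framework the two spaces are produced by the generalized transfer-matrix machinery rather than written down by hand, but the final computation is exactly this kind of rank difference.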