A Recursive Construction of the Set of Binary
Entropy Vectors and Related Algorithmic Inner
Bounds for the Entropy Region
John MacLaren Walsh, Member, IEEE, and Steven Weber, Member, IEEE
Abstract—A method for checking membership in the region of entropic vectors generated from $N$ bits is presented. A general technique for utilizing this method to create inner bounds for regions of entropic vectors as a function of outer bounds is then presented. These two algorithms are then used to provide new insights regarding relationships among well-known bounds for the region of entropic vectors.
Index Terms—Binary entropic vectors, information inequalities,
network coding capacity region.
I. INTRODUCTION
CHARACTERIZING the set of entropy vectors under various distribution constraints is a fundamentally important problem in information theory [3], [4]. Not only would such an accurate characterization allow for the determination of all information inequalities, but it would enable the direct computation of all rate regions for network coding [3], [5] and multiterminal source coding.
The interest in the closure $\bar{\Gamma}_N^*$ of the set of entropy vectors for $N$ discrete random variables of unbounded cardinality [3], and in its normalized counterpart [4], [6], originated in the study of linear information inequalities [7]–[9], as these correspond to supporting halfspaces for these sets. More recently, it has been noted that the network coding capacity region [3], [5] is a linear projection of $\bar{\Gamma}_N^*$ intersected with a vector subspace. These two problems are inherently related, as it has been shown [10] that for every linear non-Shannon-type inequality (i.e., every supporting halfspace of $\bar{\Gamma}_N^*$) there is a multi-source network coding problem whose capacity region requires this inequality. More generally, as all achievable rate regions in information theory are expressed in terms of information measures between random variables obeying certain distribution constraints, they can be expressed as linear projections of entropy vectors associated with these constrained random variables. Hence, there is a
fundamental interest in multi-terminal information theory in the region of entropy vectors associated with random variables obeying given distribution constraints.
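As an illustration of this projection structure (the symbols in this sketch are chosen here for exposition rather than taken from [3], [5]), the network coding case can be written schematically as
\[
\mathcal{R} \;=\; \operatorname{proj}_{\mathbf{r}}\!\left( \bar{\Gamma}_N^* \cap \mathcal{L} \right),
\]
where $\mathcal{L}$ is a vector subspace encoding the topology and coding constraints of the network, and $\operatorname{proj}_{\mathbf{r}}$ projects onto the coordinates associated with the source rates.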
$\bar{\Gamma}_N^*$ is difficult to characterize for arbitrary $N$, as Matúš recently definitively proved [11], because there are an infinite number of associated linear information inequalities, i.e., the region is non-polyhedral (“curved”) for $N \geq 4$. There are few known computationally tractable inner bounds for $\bar{\Gamma}_N^*$ for $N \geq 4$ [6], [12], and it is generally unknown how to determine whether a given candidate vector is entropic or not.
An algorithm capable of determining whether or not a given vector lies in $\bar{\Gamma}_N^*$ is not presently available for $N \geq 4$. This paper points out that by restricting the discrete random variables to be binary, one can obtain efficient descriptions of the corresponding entropy region, called the set of binary entropy vectors and denoted here $\Phi_N$. In particular, we introduce in Section III an algorithm which can definitively determine whether a candidate entropy vector can be generated by a distribution on $N$ bits. This enables us to deliver in Section IV an algorithm which, given any polytope outer bound for $\Phi_N$, returns a tuned inner bound agreeing with the outer bound on all of its (tight) exposed faces shared with $\Phi_N$. Because the inner bounds have this property, their performance increases with increasingly better-performing outer bounds. The inner bound technique is easily extended to obtaining bounds for $\bar{\Gamma}_N^*$ and its normalized counterpart as a function of outer bounds for these sets, as they contain $\Phi_N$.
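As a concrete, if naive, illustration of the membership question for binary entropy vectors, one can sample joint distributions on $N$ bits, compute their entropy vectors directly from the definition, and record the best approximation to a candidate vector. The following Python sketch (a minimal illustration under our own naming conventions, not the recursive algorithm of Section III) does exactly that:

import itertools
import numpy as np

def entropy_vector(p, N):
    # Entropy vector of a joint pmf p over {0,1}^N, given as a numpy
    # array of shape (2,)*N summing to 1. Returns the (2^N - 1)-vector
    # of subset entropies H(X_A), ordered by |A|, then lexicographically.
    h = []
    for r in range(1, N + 1):
        for A in itertools.combinations(range(N), r):
            axes = tuple(i for i in range(N) if i not in A)  # sum out A's complement
            marg = p.sum(axis=axes).ravel()
            marg = marg[marg > 0]  # convention: 0 log 0 = 0
            h.append(float(-np.sum(marg * np.log2(marg))))
    return np.array(h)

def nearest_by_random_search(target, N, trials=20000, seed=1):
    # Naive random search (no guarantees): how close do entropy vectors
    # of random distributions on N bits come to `target`?
    rng = np.random.default_rng(seed)
    best_d, best_h = np.inf, None
    for _ in range(trials):
        p = rng.dirichlet(np.ones(2 ** N)).reshape((2,) * N)
        h = entropy_vector(p, N)
        d = np.linalg.norm(h - target)
        if d < best_d:
            best_d, best_h = d, h
    return best_d, best_h

# Example: the vector (1, 1, 2) is achieved exactly by two independent fair bits.
d, h = nearest_by_random_search(np.array([1.0, 1.0, 2.0]), N=2)

Such a search can exhibit an (approximate) witness distribution when it succeeds but, unlike the definitive algorithm of Section III, it can never certify that a candidate vector lies outside $\Phi_N$.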
Having developed the algorithmic tools, we pass to studying particular inner and outer bounds in Section V. The first examples use the new algorithmic tools to determine novel properties of known outer and inner bounds for $\bar{\Gamma}_4^*$: the Shannon outer bound and the Ingleton inner bound, as well as the improved outer bound formed by including known non-Shannon information inequalities of Zhang and Yeung [8] and Dougherty et al. [9].
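For the reader's convenience, we quote the standard four-variable forms of these bounds. The Ingleton inner bound augments the Shannon inequalities with the instances, over permutations of the variables, of Ingleton's inequality
\[
I(X_1;X_2) \;\le\; I(X_1;X_2|X_3) + I(X_1;X_2|X_4) + I(X_3;X_4),
\]
while the Zhang–Yeung inequality [8], the first non-Shannon information inequality, reads
\[
2 I(X_3;X_4) \;\le\; I(X_1;X_2) + I(X_1;X_3,X_4) + 3 I(X_3;X_4|X_1) + I(X_3;X_4|X_2).
\]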
II. ENTROPY VECTOR REGIONS OF INTEREST
We first review the definition of the set of entropy vectors $\bar{\Gamma}_N^*$. Consider all subsets $X_A = (X_i \mid i \in A)$, $A \subseteq \{1, \dots, N\}$, of $N$ discrete random variables $X_1, \dots, X_N$, and stack the entropies of each nonempty subset into a vector
\[
\mathbf{h} = \left( h_A \mid \emptyset \neq A \subseteq \{1, \dots, N\} \right) \in \mathbb{R}^{2^N - 1}
\]
called an entropy vector, where $h_A = H(X_A)$. The entropy vector $\mathbf{h}$ is clearly a function of the joint distribution $p_{X_1, \dots, X_N}$ on the discrete random variables $X_1, \dots, X_N$. Define the set of possible entropy vectors $\bar{\Gamma}_N^*$ as the closure of the image under this function of the set of viable joint distributions.
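As a quick concrete check of the definition, take $N = 2$ with the ordering $\mathbf{h} = (h_1, h_2, h_{12})$: two independent fair bits yield $\mathbf{h} = (1, 1, 2)$, while two identical fair bits ($X_2 = X_1$) yield $\mathbf{h} = (1, 1, 1)$.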