3258 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 12, DECEMBER 2003
On Witsenhausen’s Zero-Error Rate for Multiple Sources
Gábor Simonyi
Abstract—We investigate the problem of minimum rate zero-error
source coding when there are several decoding terminals having different
side information about the central source variable and each of them should
decode in an error-free manner. For one decoder this problem was consid-
ered by Witsenhausen. The Witsenhausen rate of the investigated multiple
source is the asymptotically achievable minimum rate. We prove that the
Witsenhausen rate of a multiple source equals the Witsenhausen rate of its
weakest element. The proof relies on a powerful result of Gargano, Körner,
and Vaccaro about the zero-error capacity of the compound channel.
Index Terms—Chromatic number, side information, source coding, Wit-
senhausen’s rate, zero error.
I. INTRODUCTION
Let $X, Y_1, \ldots, Y_k$ be discrete random variables.
Consider $X$ as a "central" variable available for a transmitter $T$ and
the $Y_i$'s as side information available for $k$ different stations
$S_1, \ldots, S_k$ that are located at different places. The joint distribution
$P(X, Y_i)$ is known for $T$ and for every $S_i$. The task is that $T$ broadcasts a
message received by all $S_i$'s in such a way that, learning this message,
all $S_i$'s should be able to determine $X$ in an error-free manner. The
question is the minimum number of bits that should be used for this per
transmission if block coding is allowed. This problem was considered
for $k = 1$ by Witsenhausen in [20]. He translated the problem into a
graph-theoretic one and showed that block coding can indeed help in
decreasing the (per transmission) number of possible messages that
should be used. The optimal number of bits to be sent per transmission
defines a graph parameter that is called Witsenhausen’s zero-error rate
in [1]. (We will write simply Witsenhausen rate in the sequel.) In this
correspondence, we define the Witsenhausen rate of a family of graphs.
Our main result is that the Witsenhausen rate of a family of graphs
equals its obvious lower bound: the largest Witsenhausen rate of the
graphs in the family. This will easily follow from a powerful result of
Gargano, Körner, and Vaccaro [8].
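In symbols, the definition and the main result stated above can be sketched as follows (notation assumed from the standard literature, not spelled out in this correspondence: $G^{(t)}$ denotes the $t$-fold AND power of $G$, in which two distinct sequences are adjacent iff they are equal or adjacent in every coordinate, and $\chi$ denotes the chromatic number):

```latex
% Witsenhausen rate of a single graph G (definition assumed from [20], [1]):
\begin{align*}
  R(G) &= \lim_{t\to\infty} \frac{1}{t}\,\log \chi\bigl(G^{(t)}\bigr),\\
  % Main result: the rate of a family equals that of its weakest
  % (largest-rate) member.
  R(\mathcal{G}) &= \max_{1 \le i \le k} R(G_i)
  \qquad\text{for } \mathcal{G} = \{G_1, \dots, G_k\}.
\end{align*}
```

The inequality $R(\mathcal{G}) \ge \max_i R(G_i)$ is the obvious direction; the content of the theorem is that no extra rate is needed to serve all decoders simultaneously.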
II. THE GRAPH THEORY MODEL
For each $i$ we define the following graph $G_i$. The
vertex set $V(G_i)$ is the support set of the variable $X$ for every
$i$. Two elements, $x$ and $x'$, of $V(G_i)$ form an edge in $G_i$ if and only if there
exists some possible value $y$ of the variable $Y_i$ that is jointly pos-
sible with both $x$ and $x'$, i.e., $P(x, y)P(x', y) > 0$. It is already
explained in [20] that the minimum number of bits to be sent by $T$
to (one) $S_i$ for making it learn $X$ (for one instance) in an error-free
manner is $\lceil \log \chi(G_i) \rceil$, where $\chi(G)$ denotes the chromatic number of
the graph $G$. Indeed, if $T$ used fewer bits, then there would be some
two elements of $V(G_i)$ that form an edge in $G_i$ and still $T$ would
send the same message when one or the other appeared as the ac-
tual value of $X$. Since they form an edge, there is some possible value
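To make the construction above concrete, here is a small Python sketch (the function names and the toy distribution are illustrative, not from the correspondence): it builds the characteristic graph from a joint pmf and computes $\lceil \log_2 \chi(G) \rceil$ by brute force. The toy pmf pairs each $x \in \{0, \ldots, 4\}$ with $y \in \{x, x+1 \bmod 5\}$, which yields the pentagon $C_5$, so one error-free transmission needs $\lceil \log_2 3 \rceil = 2$ bits rather than $\lceil \log_2 5 \rceil = 3$.

```python
from itertools import product
from math import ceil, log2

# Toy joint pmf P(x, y): X uniform on {0,...,4}; given X = x,
# Y is x or x+1 (mod 5). Illustrative only.
P = {(x, y): 0.1 for x in range(5) for y in (x, (x + 1) % 5)}

def characteristic_graph(P):
    """x and x' form an edge iff some y is jointly possible with both."""
    xs = sorted({x for (x, _) in P})
    ys = sorted({y for (_, y) in P})
    edges = {(x, xp)
             for x in xs for xp in xs if x < xp
             if any(P.get((x, y), 0) > 0 and P.get((xp, y), 0) > 0
                    for y in ys)}
    return xs, edges

def chromatic_number(xs, edges):
    """Brute-force chromatic number; fine for tiny graphs only."""
    for k in range(1, len(xs) + 1):
        for colouring in product(range(k), repeat=len(xs)):
            c = dict(zip(xs, colouring))
            if all(c[u] != c[v] for u, v in edges):
                return k

xs, edges = characteristic_graph(P)   # edges form the 5-cycle C5
chi = chromatic_number(xs, edges)     # chi(C5) = 3
bits = ceil(log2(chi))                # 2 bits per transmission
```

The brute-force colouring is exponential, which is unavoidable in general since computing the chromatic number is NP-hard; the point of the sketch is only the reduction from the pmf to a colouring problem.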
Manuscript received August 10, 2001; revised July 1, 2003. This work was
supported in part by the Hungarian Foundation for Scientific Research under
Grants (OTKA) F023442, T029255, T032323, and T037486. The material in
this correspondence was presented at the IEEE International Symposium on In-
formation Theory, Lausanne, Switzerland, June/July 2002.
The author is with the Alfréd Rényi Institute of Mathematics, Hungarian
Academy of Sciences, 1364 Budapest, Hungary (e-mail: simonyi@renyi.hu).
Communicated by İ. E. Telatar, Associate Editor for Shannon Theory.
Digital Object Identifier 10.1109/TIT.2003.820048
0018-9448/03$17.00 © 2003 IEEE