FUZZINESS FROM THE VIEWPOINT OF INFORMATION THEORY UDC 621.391.519.72
N. N. Diduk
In this article we apply the apparatus of [1, 2] to an information-theoretical analysis of fuzzy sets. The notion of fuzziness is becoming increasingly popular among both "pure" and "applied" mathematicians, and the apparatus of fuzziness is rapidly developing. At this stage, in addition to the basic notion of a fuzzy set, there are various derived concepts, such as fuzzy relation, fuzzy graph, fuzzy algorithm, fuzzy logic, etc. [3]. The acceptance of fuzziness in various branches of mathematics naturally leads to new questions and problems. However, they are usually quite far from such "naive" questions as "what essentially is fuzziness?" Yet the notion of fuzziness has introduced us to a new type of uncertainty, which is basically different both from probabilistic uncertainty and from the uncertainty of total ignorance. The time appears to be ripe for an independent study of this new type of uncertainty.

The apparatus proposed in [1, 2] provides information-theoretical tools for the analysis of fuzziness. Fuzzy sets are actually a very useful object for demonstrating the possibilities of the proposed apparatus. Uncertainty spaces associated with fuzzy sets are apparently the simplest nontrivial example of non-Shannon information spaces, i.e., spaces in which the Shannon entropy of the probability distribution splits into two independent characteristics: entropy and uncertainty.

The main results of this paper are the following. We derive the general form of the uncertainty spaces which constitute an information-theoretical analog of "fuzzy sets." We derive expressions for the two independent characteristics of the dispersion of fuzzy sets: entropy and uncertainty.
A basic interpretation of the general coding theorem [1] is developed in application to fuzzy sets, linking the entropy of fuzzy sets with the values of one class of antagonistic games which involve guessing a particular element of a fuzzy set. We also examine some implications of the strong coding theorem [2] for fuzzy sets, relating to the joint properties of their entropy and uncertainty. The paper does not assume knowledge of the apparatus of fuzzy sets: brief familiarity with the basic concepts will suffice (say, on the level of [4] or [5]). The interested reader may also refer to the monograph by the founder of fuzzy set theory, Zadeh [6]. A brief description of the relevant apparatus is also presented in [7].
1. DISCRETE FUZZINESS SPACES
In order to apply the new information-theoretical apparatus of [1, 2] to the study of fuzzy sets, we have to be able to associate with each fuzzy set some uncertainty space. Since our apparatus is only applicable to discrete uncertainty spaces (DUS), we will only consider here discrete fuzzy sets of at most denumerable cardinality.

We start with ordinary "sharp" discrete sets. By specifying a certain opportunity set Ω, we characterize a type of uncertainty usually regarded as the counterpoint of purely probabilistic uncertainty, which assumes that a certain probability distribution is defined on Ω. If there is no information (not even indirect) on the possible probability distribution on Ω, we usually have to be guided by the worst possible case. This means that the best formalization of uncertainty of this type is provided by a DUS of the form (Ω, sup). Thus, with any "sharp" discrete set we associate the DUS (Ω, sup). In [2] we have discussed the similarities and differences between the uncertainty described by the DUS (Ω, sup) and the uncertainty characterized by a uniform probability distribution on the set Ω. Here, however, we will not go into a comparison of these spaces and focus instead on fuzzy sets.

According to Zadeh's definition [4, 6], any fuzzy subset A of the set Ω is characterized by its membership function α, which associates with each element ω ∈ Ω a real number α(ω) from [0, 1] representing the degree of membership of the element ω in the set A.

Translated from Kibernetika, No. 2, pp. 80-86, March-April, 1987. Original article submitted December 2, 1985.

0011-4235/87/2302-0245$12.50 © 1987 Plenum Publishing Corporation

Each "sharp" subset B of Ω may be described by its characteristic function, which takes only the values 0 and 1 and is a particular case of a membership function. With each "sharp" subset B of Ω we associate the DUS (B, sup), and therefore our formalization of fuzziness should lead to a DUS which becomes equivalent to the space (B, sup) when the membership function of the given fuzzy set reduces to the characteristic function of the set B. Thus, before starting to look for an information-theoretical analog of "fuzzy set," we have to formalize the notion of equivalence of two DUS.

Definition 1. Let a discrete uncertainty space (Ω, T) and a point ω ∈ Ω be given. We say that the point ω is irrelevant in the DUS (Ω, T) if for any two functions f, f' ∈ R̄+^Ω which coincide on the set Ω \ {ω} we have T(f) = T(f'). The irrelevance domain of a finite DUS (Ω, T) is the set of all the irrelevant points in (Ω, T); the relevance domain is the complement of the irrelevance domain in Ω.

Proposition 1. Let Ω̃ be the relevance domain of a finite DUS (Ω, T) and let f and f' be two functions from R̄+^Ω which coincide on Ω̃. Then T(f) = T(f').

Proof. The irrelevance domain Ω \ Ω̃ is finite, and therefore its points can be indexed. After indexing, we obtain a finite sequence ω_1, ω_2, ..., ω_n. Consider the sequence f_0, f_1, f_2, ..., f_n of functions from R̄+^Ω such that f_0 = f and for each i > 0 the function f_i coincides with f_{i-1} at all points with the exception of ω_i, where it takes the value f_i(ω_i) = f'(ω_i). By irrelevance of each point ω_i, we have T(f) = T(f_0) = T(f_1) = T(f_2) = ... = T(f_n) = T(f'). Q.E.D.

Definition 2. Two DUS (Ω, T) and (Ω', T') are called equivalent if 1) their relevance domains coincide; 2) for any two functions f ∈ R̄+^Ω and f' ∈ R̄+^Ω' which coincide on their common relevance domain, T(f) = T'(f'). It is easily seen that the concept of equivalence is well defined, i.e., equivalence of uncertainty spaces is a reflexive, symmetric, and transitive relation.
As before, let Ω be a finite or denumerable set and α a mapping of the set Ω into R+. In [2] we considered the transformation T → T_α of a convolution criterion T (over the set Ω) into a new convolution criterion T_α. Applied to the convolution criterion sup, this transformation takes the DUS (Ω, sup) into the DUS (Ω, sup_α). Let us consider the uncertainty space (Ω, sup_α) in more detail. Without repeating the description of the transformation T → T_α, we write the convolution criterion sup_α directly in explicit form. This requires the binary operation ⊗, which extends the ordinary multiplication of real numbers to R̄+ = R+ ∪ {∞}. We reiterate the conventions introduced in [2]: 1) for any (finite or infinite) number r ∈ R̄+, let 0 ⊗ r = r ⊗ 0 =Df 0; 2) for any (nonzero) number r ∈ R̄+, let ∞ ⊗ r = r ⊗ ∞ =Df ∞; 3) for any two (finite) numbers r, s ∈ R+, let r ⊗ s =Df r·s. The operation ⊗ defined by these conventions is obviously commutative and associative. Now take a finite or denumerable set Ω and a function α which maps Ω into R+. Then the convolution criterion sup_α (over Ω) is a functional of the form

sup_α =Df [f ↦ sup_{ω∈Ω} α(ω) ⊗ f(ω)]: R̄+^Ω → R̄+.   (1)
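For a finite Ω, the criterion (1) is easy to compute directly. The following Python sketch (ours, not part of the original paper) implements the operation ⊗ with the conventions above and the functional sup_α:

```python
import math

def otimes(r, s):
    """Extended product on [0, +inf]: 0 ⊗ r = 0 even for r = ∞; ∞ ⊗ r = ∞ for r > 0."""
    if r == 0 or s == 0:
        return 0.0
    if math.isinf(r) or math.isinf(s):
        return math.inf
    return r * s

def sup_alpha(alpha, f):
    """Convolution criterion sup_α of (1): sup over ω of α(ω) ⊗ f(ω).

    alpha, f: dicts mapping the points of a finite Ω to values in [0, +inf].
    """
    return max(otimes(alpha[w], f[w]) for w in alpha)

# Example: a fuzzy set on Ω = {a, b, c} and a test function f.
alpha = {"a": 1.0, "b": 0.5, "c": 0.0}
f = {"a": 2.0, "b": 3.0, "c": math.inf}   # c is irrelevant: α(c) = 0 kills even f(c) = ∞
print(sup_alpha(alpha, f))                # → 2.0
```

Note how the convention 0 ⊗ ∞ = 0 makes the points with α(ω) = 0 irrelevant, which is exactly the mechanism behind Proposition 2 below.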
It is clear from (1) that with each fuzzy subset A of Ω, characterized by some membership function α, we can associate the DUS (Ω, sup_α), so that different DUS correspond to different membership functions and the correspondence is one-to-one. The mapping α: Ω → R+ is the membership function of some fuzzy subset A of Ω if

sup(α) = sup_{ω∈Ω} α(ω) ≤ 1.   (2)
The number sup(α), according to Zadeh, is called the height of the fuzzy subset A. By equality (2), fuzzy sets of unit height occupy a special place among all fuzzy sets. This distinctive feature is reflected in the associated uncertainty spaces: with each fuzzy subset of unit height is associated a standard [2] (and even uniform) uncertainty space. The converse is also true: to each standard DUS of the form (Ω, sup_α) is associated a fuzzy set of unit height.
Among the fuzzy subsets of Ω, some occupy a special place even in the class of subsets of unit height. These are the fuzzy analogs of "sharp" subsets.

Proposition 2. Let α be the characteristic function of some (ordinary) subset B of Ω. Then the uncertainty spaces (Ω, sup_α) and (B, sup) are equivalent.

Proof. From the properties of the operation ⊗ it directly follows that the irrelevance domain of the DUS (Ω, sup_α) consists of those and only those points ω ∈ Ω for which α(ω) = 0. Therefore, the relevance domain coincides with B, and for all ω ∈ B we have α(ω) = 1. Q.E.D.

We thus conclude that an uncertainty space of the form (Ω, sup_α) provides a suitable information-theoretical analog of a "fuzzy set."

Definition 3. A (discrete) fuzziness space is any DUS of the form (Ω, sup_α), where α is the membership function of some fuzzy subset of the set Ω (i.e., a mapping of Ω into [0, 1]).
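Proposition 2 can be checked on a toy example. In the sketch below (ours; names invented), α is the characteristic function of B, and sup_α agrees with the plain sup over B for any two functions coinciding on B, i.e., the points outside B are irrelevant:

```python
def sup_alpha(alpha, f):
    """sup_α(f) = sup over ω of α(ω)·f(ω), for a finite Ω and finite values."""
    return max(alpha[w] * f[w] for w in alpha)

omega = ["a", "b", "c", "d"]
B = {"a", "c"}                                        # an ordinary "sharp" subset of Ω
alpha = {w: 1.0 if w in B else 0.0 for w in omega}    # its characteristic function

f = {"a": 3.0, "b": 7.0, "c": 5.0, "d": 9.0}
g = {"a": 3.0, "b": 0.0, "c": 5.0, "d": 0.0}          # coincides with f on B only

# (Ω, sup_α) and (B, sup) are equivalent: points outside B do not matter.
assert sup_alpha(alpha, f) == sup_alpha(alpha, g) == max(f[w] for w in B)
```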
2. ENTROPY AND UNCERTAINTY OF FUZZY SETS
Among all fuzziness spaces, the information spaces are of major interest from the information-theoretical viewpoint, since only they have finite entropy (see Proposition 3 below). Moreover, despite their structural simplicity, these spaces are richer than Shannon spaces in one respect: in addition to entropy, they have another independent characteristic of dispersion, namely uncertainty [2]. Both these characteristics, as well as the information property, can be naturally extended to the corresponding fuzzy sets, whereby with each element of an information fuzzy set we associate a certain own information. All these concepts formally rely on the general and strong coding theorems. Their substantive interpretation for fuzzy sets is the main objective of this paper. Before proceeding with the substantive interpretation, we have to study the formal properties of these concepts for fuzzy sets.
We start with entropy. Using (1), we obtain an expression for the entropy of the DUS (Ω, sup_α):

H(Ω, sup_α) = inf_{p∈P(Ω)} sup_{ω∈Ω} α(ω) ⊗ log (1/p(ω)),   (3)
where P(Ω) is the class of all probability distributions on Ω. Clearly, (3) cannot be used directly to compute the entropy; we will have to develop a suitable computational procedure. In order to eliminate trivial cases, we assume from the start that the set Ω contains at least two elements and the function α has a nonzero value at least at one point of Ω. We use the given function α to construct a new function v_α which maps the semiinfinite interval [1, ∞[ into R̄+:

v_α =Df [b ↦ Σ_{ω∈Ω, α(ω)≠0} b^(−1/α(ω))].   (4)
By Q(α) we denote the following assertion about the function α: there exists a (finite) number b ≥ 1 such that v_α(b) = 1. Now assume that the function α has the property Q(α). Then clearly the number b satisfying the condition v_α(b) = 1 is unique. We denote it by c_α. Then the function

q_α =Df [ω ↦ c_α^(−1/α(ω)) if α(ω) ≠ 0, and 0 if α(ω) = 0]   (5)

is a probability distribution on the set Ω. If the function α takes a nonzero value at only one point of the set Ω, then the distribution q_α is degenerate (the probability of this unique point is 1). If there are two distinct elements of Ω at which the function α takes nonzero values, the distribution q_α has the following properties: 1) for each ω ∈ Ω such that α(ω) = 0, we have q_α(ω) = 0; 2) for any ω ∈ Ω such that α(ω) ≠ 0, we have q_α(ω) ≠ 0 and

α(ω)·log_{c_α} (1/q_α(ω)) = 1.   (6)
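Since v_α is continuous and strictly decreasing on ]1, ∞[, the number c_α can be found by simple bisection. The following Python sketch (our illustration; the membership values are invented) computes c_α and q_α and checks the normalization in (5) and the property (6):

```python
import math

def v(alpha, b):
    """v_α(b) = Σ over ω with α(ω) ≠ 0 of b**(-1/α(ω)), cf. (4)."""
    return sum(b ** (-1.0 / a) for a in alpha.values() if a > 0)

def find_c(alpha):
    """Solve v_α(c) = 1 by bisection; v_α is strictly decreasing on ]1, ∞[."""
    lo, hi = 1.0 + 1e-12, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if v(alpha, mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha = {"a": 1.0, "b": 0.5, "c": 0.25, "d": 0.0}   # invented; satisfies Q(α)
c = find_c(alpha)
q = {w: (c ** (-1.0 / a) if a > 0 else 0.0) for w, a in alpha.items()}

assert abs(sum(q.values()) - 1.0) < 1e-9            # q_α is a distribution
for w, a in alpha.items():                          # property (6)
    if a > 0:
        assert abs(a * math.log(1.0 / q[w], c) - 1.0) < 1e-6
```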
We will now use the following convention from [2]: the base a of the logarithms in the expression for entropy is specified by attaching a lower index a to the entropy symbol H. For instance, H_2(Ω, T) represents the entropy of the DUS (Ω, T) to the base 2. From the expression (1) we see that the convolution criterion sup_α is homogeneous [2], so that the transformation from one base to another in (3) is performed precisely as for Shannon spaces. Using the number c_α as the base, we obtain
H_{c_α}(Ω, sup_α) = inf_{p∈P(Ω)} sup_{ω∈Ω} α(ω) ⊗ log_{c_α} (1/p(ω)).   (7)

The distribution q_α is the unique distribution from P(Ω) on which the infimum in (7) is attained. Indeed, if the distribution q_α is degenerate, then H_{c_α}(Ω, sup_α) = 0 and the zero value is attained only on q_α. Now assume that the distribution q_α is nondegenerate and let p be a distribution from P(Ω) other than q_α. Then there exists ω* ∈ Ω such that p(ω*) < q_α(ω*). But then [by α(ω*) > 0] we have α(ω*) ⊗ log_{c_α} (1/p(ω*)) > 1. Thus, if the function α has the property Q(α), then the DUS (Ω, sup_α) is an information space, and the convolution criterion sup_α induces the distribution q_α. If this distribution is nondegenerate, then the entropy of the space (Ω, sup_α) to the base c_α is equal to 1:

H_{c_α}(Ω, sup_α) = 1.   (8)

The general expression for the entropy in this case is

H(Ω, sup_α) = log c_α.   (9)
Therefore, given Q(α), the entropy H(Ω, sup_α) is always finite and its computation (with nondegenerate distribution q_α) reduces to finding the solution c_α of the transcendental equation v_α(x) = 1 (for the unknown x). The following proposition shows, in particular, that if Q(α) does not hold, then there is no point in computing the entropy, since it is anyhow infinite.

Proposition 3. Let Ω be a finite or denumerable set with at least two elements and α a mapping of Ω into R+ which takes a nonzero value at least at one point of Ω. Then the following statements are equivalent: 1) the function α has the property Q(α); 2) the DUS (Ω, sup_α) is an information space; 3) the entropy H(Ω, sup_α) is finite.

Proof. Sufficiency of condition 1) for conditions 2) and 3) has been demonstrated above. Let us prove its necessity. The information property of the DUS (Ω, sup_α) implies its nondegeneracy (since Ω contains more than a single element). A nondegenerate DUS, by Proposition 5 of [2], has finite entropy. Thus, the information property of the DUS (Ω, sup_α) implies finite entropy. Assume now that the entropy H(Ω, sup_α) is finite. Then there exists a probability distribution p* on Ω such that

H(Ω, sup_α) = sup_{ω∈Ω} α(ω) ⊗ log (1/p*(ω)).   (10)
In order for (10) to hold, the distribution p* should have the following properties: 1) p*(ω) = 0 implies α(ω) = 0; 2) p*(ω) ≠ 0 implies

H(Ω, sup_α) ≥ α(ω)·log (1/p*(ω)).

We will show that the function α has the property Q(α). To this end, it suffices to show that v_α(e^{H_e(Ω, sup_α)}) ≤ 1 (since v_α is continuous and nonincreasing, and v_α(1) ≥ 1, this yields a point b ≥ 1 at which v_α(b) = 1). Indeed, since the entropy H_e(Ω, sup_α) is finite (and nonnegative), we have 1 ≤ e^{H_e(Ω, sup_α)} < ∞. Moreover, property 2) gives e^{−H_e(Ω, sup_α)/α(ω)} ≤ e^{ln p*(ω)} = p*(ω) for every ω with α(ω) ≠ 0, whence v_α(e^{H_e(Ω, sup_α)}) ≤ Σ_{ω∈Ω} p*(ω) = 1. Q.E.D.
Thus, with each fuzzy set of elements from Ω with membership function α we can associate the entropy H(Ω, sup_α) [when the membership function α does not satisfy the condition Q(α), the entropy is infinite]. Moreover, when the condition Q(α) is satisfied, with fuzzy sets of elements from Ω we can associate the information function I[sup_α] and the uncertainty G(Ω, sup_α) [2]. The own information of the element ω of a fuzzy set with membership function α has the form

I[sup_α](ω) = −log q_α(ω),   (11)

and the uncertainty G(Ω, sup_α) of this set coincides with the entropy H(q_α) of the distribution q_α.
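To make these quantities concrete, the following self-contained Python sketch (ours, with an invented three-point membership function) computes c_α by bisection and then the entropy (9), the own information (11), and the uncertainty G(Ω, sup_α) = H(q_α):

```python
import math

alpha = {"a": 1.0, "b": 0.5, "c": 0.5}   # invented membership function with Q(α)

def v(b):                                # v_α of (4)
    return sum(b ** (-1.0 / a) for a in alpha.values() if a > 0)

lo, hi = 1.0 + 1e-12, 1e9                # bisection: v_α is strictly decreasing
for _ in range(200):
    mid = (lo + hi) / 2
    if v(mid) > 1.0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2                        # c_α; for this α, c_α = 2 exactly

q = {w: c ** (-1.0 / a) for w, a in alpha.items()}      # q_α of (5)
H = math.log(c)                                         # entropy (9), in nats
own_info = {w: -math.log(p) for w, p in q.items()}      # own information (11)
G = -sum(p * math.log(p) for p in q.values())           # uncertainty = H(q_α)

print(f"H = {H:.4f} nats, G = {G:.4f} nats")            # → H = 0.6931 nats, G = 1.0397 nats
```

For this α we get c_α = 2 and q_α = (0.5, 0.25, 0.25), so H = ln 2 while G = 1.5·ln 2: entropy and uncertainty are indeed independent characteristics.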
3. GAME INTERPRETATION OF FUZZY SET CODING
The values of the entropy and uncertainty of an information space indicate what saving is attainable in principle in the resources required to code the points of the space. The relevant resources are code complexity and load [2]. The aim of this section is to construct an interpretation of these two types of resources for fuzzy sets.

Recall that a similar problem has been solved for Shannon spaces. The equality of entropy and uncertainty for these spaces indicates the equality of code complexity and load. This common characteristic of codes for Shannon spaces is usually interpreted as the mean number of standard questions which must be asked in order to identify the particular element of a Shannon space selected in a random draw. The proposed interpretation of fuzzy set coding is also associated with guessing of set elements, but in a conflict-of-interest setting rather than by random drawing.

We will show that the coding of any information space of the form (Ω, sup_α) may be associated with a certain class of two-player games which involve guessing an element of the set Ω. The first player selects a certain element ω* from the set Ω. The second player asks questions and the first answers them. A round ends after one of the first player's answers, when the cumulative information provided by all the answers is sufficient to uniquely identify the element ω*. When a round ends, the second player pays a certain penalty to the first player. The interests of the two players are strictly antagonistic (the sum of the game is zero). We will first describe these games in formal terms, and then show that the general coding theorem makes it possible to associate the value of any such game with the entropy of a space of the form (Ω, sup_α). We will also highlight some general implications of the strong coding theorem.

For purposes of a more formal description of the guessing game, it is desirable to deviate somewhat from the traditional interpretation. If we assume that the first player makes the first move by selecting the element ω*, then we have a game with incomplete information (since the result of this move remains unknown to the second player until the end of the round).
Formal analysis of games with incomplete information is more complex than that of complete-information games. There is, however, another variant of formalization of (essentially) the same guessing games which leads to complete-information games. In these games, the first player does not select anything at the beginning of the round; he answers each question of the second player by making an independent draw. Formalization of the second variant of these games requires that 1) the sum total of all the draws of the first player in each particular round be equivalent to a single draw of some element from Ω; 2) any element from Ω may actually be drawn by the first player while answering the questions of the second player.

Let us now proceed to describe this second variant of the guessing game. The particular game is chosen by specifying some natural number n > 1. The description of the game for an infinite Ω is somewhat more complex than for a finite set; we will therefore first assume that Ω is finite. The second player makes the first move by selecting a certain partition of the set Ω into m nonempty parts, where m is such that 1 < m ≤ n. The first player answers by drawing one part of this partition. If the drawn part consists of a single element ω*, the round ends; otherwise the second player partitions the drawn part in turn, and so on. At the end of a round in which k questions were asked and the element ω* was identified, the second player pays the first the penalty

ℓ_n(k, ω*) =Df log n · k · α(ω*).   (12)
This concludes the description of the guessing game for finite Ω. Let us now consider the case of infinite (denumerable) Ω. In this case, the actions of the two players are restricted by additional constraints; otherwise, there is no guarantee that any round will end in finitely many steps. In order to describe these constraints, we have to consider the player strategies in explicit form. The strategy sets of the first and second player depend on the number n chosen; for a given n, we denote them by X_n and Y_n, respectively.

The first constraint relates to the first-player strategies. The second player in each round constructs a series of partitions, starting from an infinite set Ω. The first player selects elements from these partitions. But any partition of an infinite set into finitely many parts contains at least one infinite subset of this set. If the first player, in each presented partition, always selects the element representing an infinite subset of Ω, the round will never end. Therefore, the first player's strategies must be constrained so as to preclude infinite chains of infinite subsets of Ω: sooner or later the first player must select a finite subset of Ω. The first-player strategies from X_n satisfying this requirement are called admissible. The set of all admissible strategies of the first player will be denoted by X̊_n.

The constraint imposed on the first-player admissible strategies does not guarantee finiteness of all the rounds of the guessing game, since the second player may deprive the first player of the possibility of applying an admissible strategy. Indeed, the second player may construct the partitions in such a way that they consist only of infinite subsets of Ω. Therefore, the second-player strategies also should be constrained in a certain way. Each second-player strategy from Y_n may be represented as a directed labeled tree whose leaves are labeled by elements of the set Ω, whose nonleaf nodes are labeled by the own moves of the second player (i.e., partitions), and whose edges are labeled by all the possible answers of the first player (i.e., the elements of these partitions, subsets of Ω). A node is a leaf if and only if the incoming edge is labeled by a one-element set; the leaf itself is labeled by the unique element of this set. If the set Ω is infinite, then each strategy from Y_n is represented by an infinite tree of this kind. In this tree, a leaf does not necessarily correspond to each element of Ω: some elements of Ω may be always included by the second player in the infinite subsets of Ω and thus never have a corresponding leaf. Moreover, there may be a tree without any leaves (this is the case when the first player is prevented from applying any of his admissible strategies).

If in some second-player strategy y ∈ Y_n the element ω_0 ∈ Ω has no corresponding leaf, then the first player is deprived of the possibility of selecting the element ω_0: this selection cannot be made by any of the admissible strategies. In order to ensure finiteness of all the rounds of the game and to restore the freedom of choice of the first player, the second-player strategies should be appropriately constrained. A strategy y ∈ Y_n of the second player is called admissible if the corresponding tree contains a leaf for each element ω of the set Ω (labeled by this element). The set of all second-player admissible strategies will be denoted by Y̊_n.

Any pair (x, y) ∈ X̊_n × Y̊_n of admissible strategies of both players uniquely characterizes 1) a certain round of the game, which ends in finitely many steps; 2) a certain penalty which the second player pays to the first [given by (12)]. Thus, for each natural n > 1, we have described the guessing game associated with the given fuzziness space (Ω, sup_α). We denote this game by W_n. The game W_n is obviously not fair. To make it fair, the first player must pay a certain retainer to the second player at the beginning of each round. Since W_n is a complete-information game, it has a pure value (see [8], Theorem 1.3). This value, denoted in what follows by v_n, is the required fair retainer. Our immediate objective is to establish the relationship between the value v_n and the entropy of the space (Ω, sup_α).
4. GAME INTERPRETATION OF ENTROPY AND UNCERTAINTY OF FUZZY SETS
According to the general definition of the pure value of a game [8],

v_n = inf_{y∈Y̊_n} sup_{x∈X̊_n} M_n(x, y),   (13)
where M_n is the payoff function of the first player, implicitly described in Sec. 3. Let y be some admissible strategy of the second player and ω an element of Ω. The tree representing the strategy y contains a leaf labeled by the element ω (since y is an admissible strategy). The height of the element ω under the strategy y, denoted by h(y, ω), is the number of edges along the path from the root of the tree to the leaf labeled by the element ω. Now assume that for each natural number n we have a certain alphabet L_n of n letters, and let K̊(Ω, L_n) be the set of all prefix codes from K(Ω, L_n) [1, 7].

Proposition 4. Let Ω be a finite or denumerable set with at least two elements. Then there exists a (surjective) mapping C of the set K̊(Ω, L_n) onto the set Y̊_n of all second-player admissible strategies such that for each prefix code k ∈ K̊(Ω, L_n) the height h(C(k), ω) of each element ω ∈ Ω under the strategy C(k) is equal to the length λ_k(ω) of the code word k(ω).
Proof. We will show how to construct a (one-to-many) correspondence C̄, which is the inverse of C. Let y be an admissible strategy of the second player from Y̊_n. Consider the tree representing the strategy y. We say that the nodes α, β, ..., γ of this tree belong to the same bundle if there exists a node δ (located closer to the root) from which edges lead directly to the nodes α, β, ..., γ. Each node, except the root, is labeled by some letter of the alphabet L_n, observing the following restriction: all the nodes in the same bundle should be labeled by different letters (there are enough letters, since the number of nodes in one bundle does not exceed n). After this labeling, we associate with each leaf the word formed by consecutively writing the letters associated with all the nodes along the path from the root to the given leaf. Since to each element ω of the set Ω corresponds a certain leaf, and to each leaf corresponds a certain word in the alphabet L_n, we obtain a certain code from K(Ω, L_n). It is easily seen that this technique generates only prefix codes. The set of all prefix codes which may be generated in this way from the strategy y will be denoted by C̄(y). For two different admissible strategies y, y' ∈ Y̊_n, the sets C̄(y) and C̄(y') are nonintersecting, and

⋃_{y∈Y̊_n} C̄(y) = K̊(Ω, L_n).   (14)

We have thus constructed the mapping C̄. The equality h(C(k), ω) = λ_k(ω) obviously holds. Q.E.D.
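The correspondence of Proposition 4 can be illustrated for a finite Ω and the binary alphabet L_2: a prefix code determines a question tree whose leaves are the coded elements, and the depth of each leaf equals the length of its code word. A Python sketch (ours; the code and element names are invented):

```python
def build_tree(code):
    """Build the second-player strategy tree from a prefix code.

    code: dict element -> code word (string over the alphabet).
    Inner nodes are dicts letter -> subtree; leaves are the elements themselves.
    """
    root = {}
    for elem, word in code.items():
        node = root
        for letter in word[:-1]:
            node = node.setdefault(letter, {})
        node[word[-1]] = elem           # last letter leads to the leaf
    return root

def height(tree, elem, depth=0):
    """h(C(k), ω): number of edges from the root to the leaf labeled elem."""
    for child in tree.values():
        if child == elem:
            return depth + 1
        if isinstance(child, dict):
            h = height(child, elem, depth + 1)
            if h is not None:
                return h
    return None

code = {"a": "0", "b": "10", "c": "11"}   # a prefix code over L_2 = {0, 1}
tree = build_tree(code)
for elem, word in code.items():
    assert height(tree, elem) == len(word)   # h(C(k), ω) = λ_k(ω)
```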
As we have noted above, any pair (x, y) ∈ X̊_n × Y̊_n of admissible strategies of the two players uniquely characterizes a certain round of the guessing game, and therefore a particular element of the set Ω selected by the first player in that round. Denote this element by η(x, y). We thus have a function η which maps the set X̊_n × Y̊_n into Ω. Using this function and the penalty expression (12), we can write an expression for the payoff function: for any x ∈ X̊_n, y ∈ Y̊_n,

M_n(x, y) = log n · h(y, η(x, y)) · α(η(x, y)).   (15)

Now, relying on Proposition 4, we can replace (13) with

v_n = log n · inf_{k∈K̊(Ω,L_n)} sup_{x∈X̊_n} λ_k(η(x, C(k))) · α(η(x, C(k))).   (16)

Since for any ω ∈ Ω and y ∈ Y̊_n there is x ∈ X̊_n such that η(x, y) = ω, we obtain from (16)

v_n = log n · inf_{k∈K̊(Ω,L_n)} sup_{ω∈Ω} λ_k(ω) · α(ω).   (17)
Let us now apply the general coding theorem [1] to the space (Ω, sup_α). The complexity and subcomplexity of the code k ∈ K(Ω, L_n) for the space (Ω, sup_α) are characterized, respectively, by

ζ_k(Ω, sup_α) = log n · sup_{ω∈Ω} λ_k(ω) · α(ω),   (18)

ζ̄_k(Ω, sup_α) = log n · sup_{ω∈Ω} (λ_k(ω) − 1) · α(ω).   (19)

Since

ζ̄_k(Ω, sup_α) ≥ log n [sup_{ω∈Ω} λ_k(ω) α(ω) − sup(α)] = ζ_k(Ω, sup_α) − log n · sup(α),   (20)

the general coding theorem implies that for any natural n: 1) there exists a prefix code k ∈ K̊(Ω, L_n) such that

H(Ω, sup_α) ≤ ζ_k(Ω, sup_α) ≤ H(Ω, sup_α) + log n · sup(α);   (21)

2) there is no prefix code k ∈ K̊(Ω, L_n) such that

ζ_k(Ω, sup_α) < H(Ω, sup_α).   (22)

THEOREM 1. Let (Ω, sup_α) be an information space and n > 1 an integer. Then the pure value v_n of the guessing game W_n satisfies the inequality

H(Ω, sup_α) ≤ v_n ≤ H(Ω, sup_α) + log n · sup(α).   (23)

Proof. It suffices to note that, by (17) and (18),

v_n = inf_{k∈K̊(Ω,L_n)} ζ_k(Ω, sup_α).   (24)

Q.E.D.
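The bound (21), and hence (23), can be checked numerically for a small example. The sketch below (ours) finds the optimal complexity (18) by brute force over all length vectors realizable as prefix codes, using the fact that a prefix code over n letters with word lengths λ_1, ..., λ_m exists iff Σ n^(−λ_i) ≤ 1 (Kraft's inequality):

```python
import itertools, math

alpha = {"a": 1.0, "b": 0.5, "c": 0.5}     # invented fuzzy set; for it c_α = 2
H = math.log(2)                            # its entropy H(Ω, sup_α) = log c_α
n = 2                                      # size of the code alphabet L_n

# Minimize the complexity (18) over all prefix codes: by Kraft's inequality,
# it suffices to enumerate length vectors with Σ n**(-λ_i) <= 1.
best = math.inf
for lengths in itertools.product(range(1, 7), repeat=len(alpha)):
    if sum(n ** (-l) for l in lengths) <= 1.0:       # realizable as a prefix code
        zeta = math.log(n) * max(l * a for l, a in zip(lengths, alpha.values()))
        best = min(best, zeta)

# the bound (21) on the optimal complexity, and hence (23) on v_n:
assert H <= best <= H + math.log(n) * max(alpha.values())
print(f"H = {H:.4f}, optimal complexity = {best:.4f}")
```

For this α the optimum is attained at the lengths (1, 2, 2) and equals H(Ω, sup_α) exactly, so the lower bound of (21) is tight here.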
Theorem 1 is the first important result for our game interpretation. The second result is a direct corollary of the strong coding theorem [2].
THEOREM 2. Let (Ω, sup_α) be an information space and n > 1 an integer. Then in the guessing game W_n the second player 1) has an admissible strategy y ∈ Y̊_n such that simultaneously

H(Ω, sup_α)/log n ≤ sup_{ω∈Ω} h(y, ω)·α(ω) ≤ H(Ω, sup_α)/log n + sup(α)   (25)

and

G(Ω, sup_α)/log n ≤ Σ_{ω∈Ω} q_α(ω)·h(y, ω) < G(Ω, sup_α)/log n + 1;   (26)

2) does not have an admissible strategy y ∈ Y̊_n such that

sup_{ω∈Ω} h(y, ω)·α(ω) < H(Ω, sup_α)/log n   (27)

or

Σ_{ω∈Ω} q_α(ω)·h(y, ω) < G(Ω, sup_α)/log n,   (28)
where q_α is the probability distribution induced according to (5) by the information convolution criterion sup_α. Recall that h(y, ω) in (25)-(28) is the number of questions that the second player must ask the first player before the end of the round, on condition that the second player uses the strategy y and the first player in the final analysis selects the element ω.

We have thus constructed a game interpretation of coding for fuzzy sets. Theorems 1 and 2 show its role in understanding uncertain situations associated with the notion of fuzzy set. In conclusion, we call the reader's attention to one important difference between the notions of entropy and uncertainty of information spaces: entropy is a dimensional quantity, while uncertainty is dimensionless (this has already been noted in [2]). This is clear from the example of the information fuzziness space (Ω, sup_α) considered above. The uncertainty G(Ω, sup_α) of a space, like the own information I[sup_α](ω) of each point in this space, is dimensionless, since the probability distribution q_α is dimensionless. The entropy H(Ω, sup_α), as we see from (13) and (23), has the same dimension as the penalty (12) and the value v_n of the game W_n.

LITERATURE CITED

1. N. N. Diduk, "Uncertainty spaces: entropy and coding theorem," Kibernetika, No. 2, 69-73 (1984).
2. N. N. Diduk, "Information spaces. The concept of own information and uncertainty," Kibernetika, No. 4, 74-80 (1986).
3. A. Kaufmann, An Introduction to Fuzzy Set Theory [Russian translation], Radio i Svyaz', Moscow (1982).
4. L. A. Zadeh, "Fundamentals of a new approach to the analysis of complex systems and decision-making processes," in: Mathematics Today [in Russian], Znanie, Moscow (1974).
5. A. I. Orlov, Optimization Problems and Fuzzy Variables [in Russian], Znanie, Moscow (1980).
6. L. A. Zadeh, The Notion of Linguistic Variable and Its Application to Approximate Decisions [Russian translation], Mir, Moscow (1976).
7. N. N. Diduk, "Entropy of discrete uncertainty spaces," Dokl. Akad. Nauk UkrSSR, Ser. A, No. 1, 63-65 (1983).
8. D. Blackwell and M. A. Girshick, Theory of Games and Statistical Decisions [Russian translation], IL, Moscow (1958).