ISSN 0081-5438, Proceedings of the Steklov Institute of Mathematics, 2015, Vol. 291, Suppl. 1, pp. S222–S238. © Pleiades Publishing, Ltd., 2015. Original Russian Text © V.N. Ushakov, A.S. Lakhtin, P.D. Lebedev, 2014, published in Trudy Instituta Matematiki i Mekhaniki UrO RAN, 2014, Vol. 20, No. 3.
Optimization of the Hausdorff Distance between Sets in Euclidean Space

V. N. Ushakov1,2, A. S. Lakhtin3, and P. D. Lebedev1

Received May 20, 2014
Abstract—Set approximation problems play an important role in many areas of mathematics and mechanics. For example, approximation problems for solvability sets and reachable sets of control systems are intensively studied in differential game theory and optimal control theory. In N.N. Krasovskii and A.I. Subbotin's investigations devoted to positional differential games, one of the key problems was the problem of identification of solvability sets, which are maximal stable bridges. Since this problem can be solved exactly in rare cases only, the question of the approximate calculation of solvability sets arises. In papers by A.B. Kurzhanskii and F.L. Chernous'ko and their colleagues, reachable sets were approximated by ellipsoids and parallelepipeds. In the present paper, we consider problems in which it is required to approximate a given set by arbitrary polytopes. Two sets, polytopes A and B, are given in Euclidean space. It is required to find a position of the polytopes that provides a minimum Hausdorff distance between them. Though the statement of the problem is geometric, methods of convex and nonsmooth analysis are used for its investigation. One of the approaches to dealing with planar sets in control theory is their approximation by families of disks of equal radii. A basic component of constructing such families is best n-nets and their generalizations, which were described, in particular, by A.L. Garkavi. The authors designed an algorithm for constructing best nets based on decomposing a given set into subsets and calculating their Chebyshev centers. Qualitative estimates for the deviation of sets from their best n-nets as n grows to infinity were given in the general case by A.N. Kolmogorov. We derive a numerical estimate for the Hausdorff deviation of one class of sets. Examples of constructing best n-nets are given.

Keywords: Hausdorff distance, polygon, best n-net, disk cover.
DOI: 10.1134/S0081543815090151

1 Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, ul. S. Kovalevskoi 16, Yekaterinburg, 620990 Russia; e-mail: [email protected], [email protected]
2 Institute of Radioelectronics and Information Technologies, Ural Federal University, ul. Mira 32, Yekaterinburg, 620002 Russia
3 Institute of Mathematics and Computer Science, Ural Federal University, pr. Lenina 51, Yekaterinburg, 620000 Russia; e-mail: [email protected]

1. STATEMENT OF THE PROBLEM ON AN ARRANGEMENT OF POLYGONS

In the work with different sets arising in the solution of control problems and differential games [1], it is often required to compare the degree of their closeness to each other and to construct their approximation by simpler figures. For example, in [2–7], reachable sets of dynamic systems
were replaced by ellipses on a plane and by ellipsoids in three-dimensional space. For the solution of differential games, the authors replaced planar sections of stable bridges by ellipses with subsequent aiming at these ellipses in the construction of a solving control in [8, 9]. The main criterion in the choice of sets that replace the original sets is their closeness in the sense of the Hausdorff metric [10]. We consider several problems connected with constructing approximations of sets by given geometric objects whose coordinates are varied in a Euclidean space.

Let two arbitrary polyhedra A, B ⊂ R^n be given. It is required to find their mutual arrangement that provides the minimum of the Hausdorff distance between them. This problem statement extends and develops studies [11, 12], where a similar problem was considered for convex polyhedra. Recall that the Hausdorff distance between nonempty bounded closed sets A and B is

d(A, B) = max { max_{a∈A} min_{b∈B} ‖a − b‖, max_{b∈B} min_{a∈A} ‖a − b‖ }.   (1.1)

In the case when A and B are convex, we have

d(A, B) = max { max_{a∈A} max_{‖l‖=1} (⟨l, a⟩ − S_B(l)), max_{b∈B} max_{‖l‖=1} (⟨l, b⟩ − S_A(l)) },   (1.2)

where S_X(l) = max{⟨x, l⟩ : x ∈ X} is the support function of a compact set X ⊂ R^n.

We assume that the polyhedron A is fixed and the polyhedron B can be moved by parallel translation by a vector x ∈ R^n; i.e., we consider the set B + {x}. Each vector x ∈ R^n specifies a position of the polyhedron B + {x} and uniquely defines the distance between A and B + {x}, which yields the function F(x) = d(A, B + {x}), where either of formulas (1.1) and (1.2) can be used. Thus, the problem under consideration reduces to the search for a minimum point of this function. Let us examine the properties of the function F(x) = d(A, B + {x}).

2. THE CASE OF CONVEX POLYHEDRA

Consider the case when the polyhedra A and B are convex. For any convex set Y ⊂ R^n, the function ρ(x, Y) = min_{y∈Y} ‖x − y‖ of the distance to this set from a point x ∈ R^n is convex. It is known from convex analysis (see, for example, [13]) that the maximum value of a convex function on a convex polyhedron is attained at vertices of this polyhedron. Thus, in formula (1.1), the maxima max_{a∈A} and max_{b∈B} can be taken only over the vertices a_i and b_j of the polyhedra; i.e., for a_i ∈ A, i = 1, …, V_a, and b_j ∈ B, j = 1, …, V_b,

d(A, B) = max { max_{i=1,…,V_a} ρ(a_i, B), max_{j=1,…,V_b} ρ(b_j, A) }.   (2.1)

Here, V_a and V_b are the numbers of vertices of the polyhedra A and B, respectively. In what follows, we use the notation a_i and b_j for vertices of the polyhedra A and B. Using the known (see, for example, [13]) property of support functions S_{B+{x}}(l) = S_B(l) + ⟨x, l⟩, we obtain from formulas (1.2) and (2.1) an expression for the minimized function F(x) in the case of convex sets:

F(x) = max { max_{i=1,…,V_a} max_{‖l‖=1} (⟨l, a_i − x⟩ − S_B(l)), max_{j=1,…,V_b} max_{‖l‖=1} (⟨l, b_j + x⟩ − S_A(l)) }.   (2.2)
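As an illustration of formula (2.1), here is a minimal planar (n = 2) sketch; the function names are illustrative, and each convex polygon is assumed to be given as a counterclockwise list of its vertices:

```python
import numpy as np

def point_segment_dist(p, a, b):
    # distance from point p to the segment [a, b]
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def rho(p, poly):
    # rho(p, poly): distance from p to a convex polygon given as a CCW vertex list;
    # zero if p lies inside the polygon (all edge cross products nonnegative)
    m = len(poly)
    inside = all((poly[(i + 1) % m][0] - poly[i][0]) * (p[1] - poly[i][1])
                 - (poly[(i + 1) % m][1] - poly[i][1]) * (p[0] - poly[i][0]) >= 0
                 for i in range(m))
    if inside:
        return 0.0
    return min(point_segment_dist(p, poly[i], poly[(i + 1) % m]) for i in range(m))

def hausdorff(A, B):
    # formula (2.1): for convex polygons the maxima run over vertices only
    return max(max(rho(a, B) for a in A), max(rho(b, A) for b in B))
```

For two unit squares shifted apart by 2 along the x axis, the function returns 2, as expected from (1.1).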
In the case of convex polyhedra A and B, this function is obviously continuous and convex as the maximum of functions affine in x. Since the function F(x) is not smooth, its subgradients should be used to study it instead of gradients. Recall a known statement (see, for example, [14, pp. 108, 109]), which makes it possible to calculate subdifferentials in our problem.

Assertion. Let f_1, …, f_m be convex functions on an open convex set X. Then, the subdifferential of the function f(x) = max_{i=1,…,m} f_i(x) has the form

∂f(x) = co { ∪_{i∈I(x)} ∂f_i(x) },   x ∈ X,

where I(x) = {i : f_i(x) = f(x)}.

In our case, formula (2.2), which defines the function F(x) in the case of convex sets A and B, can be written in the form F(x) = max_{i=1,…,V_a, j=1,…,V_b} {f_i(x), f_j(x)}, where

f_i(x) = max_{‖l‖=1} {⟨l, a_i − x⟩ − S_B(l)} for i = 1, …, V_a,
f_j(x) = max_{‖l‖=1} {⟨l, b_j + x⟩ − S_A(l)} for j = 1, …, V_b.

Applying the assertion and introducing additional notation, we obtain a formula for the subdifferential of the function F(x). The whole space R^n is taken as the set X, and the set I(x) has the form

I(x) = I_A(x) ∪ J_B(x),   I_A(x) = {i : ρ(a_i, B + {x}) = F(x)},   J_B(x) = {j : ρ(b_j + x, A) = F(x)}.

The set I_A(x) is the set of indices of vertices of A such that the distance from them to B + {x} equals the Hausdorff distance between A and B + {x}. The set J_B(x) of indices of vertices of B + {x} is defined similarly. Thus, the subdifferential of the function F(x) has the form

∂F(x) = co {L_A(x) ∪ L_B(x)},
L_A(x) = {−l : ∃ i ∈ I_A(x) : ⟨l, a_i − x⟩ − S_B(l) = F(x), ‖l‖ = 1},
L_B(x) = {l : ∃ j ∈ J_B(x) : ⟨l, b_j + x⟩ − S_A(l) = F(x), ‖l‖ = 1}.

The set L_A(x), in the case of its nonemptiness, is a family of vectors of unit length directed to the polyhedron B + {x} from the vertices of A with indices from I_A(x). Similarly, the set L_B(x), in the case of its nonemptiness, is a family of vectors of unit length directed from A to vertices of B + {x} with indices from J_B(x). Note that the subdifferential ∂F(x) for each x ∈ R^n is a nonempty set, which can be specified by at most V_a + V_b vectors. Geometrically, the subdifferential ∂F(x) is the convex hull of unit vectors directed from A to B + {x} and such that the Hausdorff distance between A and B + {x} is attained in their direction. The subdifferential is a convex polyhedron in R^n inscribed in the unit ball centered at the origin.
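For intuition about the sets L_A(x) and L_B(x), the generators of ∂F(x) are especially easy to compute when A and B are finite point sets (a degenerate polytope case). The sketch below, with illustrative names, returns F(x) together with one subgradient; it assumes F(x) > 0:

```python
import numpy as np

def F_and_subgrad(A, B, x):
    # F(x) = d(A, B + {x}) for finite point sets A, B, and one subgradient:
    # a unit vector realizing the Hausdorff distance, signed as in L_A / L_B
    best, h = 0.0, None
    for a in A:                                   # terms rho(a_i, B + {x})
        b = min(B, key=lambda b: np.linalg.norm(a - b - x))
        d = np.linalg.norm(a - b - x)
        if d > best:
            best, h = d, -(a - b - x) / d         # -l, l directed from B+{x} to a_i
    for b in B:                                   # terms rho(b_j + x, A)
        a = min(A, key=lambda a: np.linalg.norm(b + x - a))
        d = np.linalg.norm(b + x - a)
        if d > best:
            best, h = d, (b + x - a) / d          # l directed from A to b_j + x
    return best, h
```

For example, with A = {(0,0), (4,0)}, B = {(0,0)}, and x = 0, the distance 4 is realized by the vertex (4,0) of A, and the returned subgradient is the unit vector (−1, 0) from L_A(x).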
Since F(x) is a continuous convex function with bounded Lebesgue sets, it has a minimum point x∗. Necessary and sufficient conditions for the minimum of F(x) are formulated in terms of the subdifferential of this function as follows (see, for example, [13, 14]): a point x∗ is a minimum point of a continuous convex function F(x) if and only if 0 ∈ ∂F(x∗), 0 = (0, …, 0) ∈ R^n. If this condition holds for a convex function, then the found point is a point of global minimum.

3. SUBGRADIENT METHODS

To find a minimum point of the considered convex function F(x), which is obtained in the case of convex polyhedra A and B, different approaches to the realization of numerical subgradient methods were applied. The case of the two-dimensional space R^2 was chosen for the program implementation; i.e., polygons on a plane were considered.

3.1. Shor's subgradient method. One of the ideas that can be used for the numerical minimization of a continuous convex nonsmooth function belongs to Shor [15] and is expressed by the formula

x_{k+1} = x_k − γ_k h_k/‖h_k‖,   h_k ∈ ∂F(x_k).   (3.1)
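A minimal sketch of iteration (3.1) with the divergent-series step γ_k = γ_0/k; the names are illustrative, and it is applied here to a toy convex nonsmooth function (f(x) = ‖x‖₁, one of whose subgradients is the coordinatewise sign), not to the authors' F(x):

```python
import numpy as np

def shor(F, subgrad, x0, gamma0, iters=5000):
    # Shor's method (3.1): step gamma_k = gamma0 / k along -h_k / ||h_k||;
    # not a descent method, so the best value seen so far is tracked
    x = np.asarray(x0, float)
    best_x, best_f = x.copy(), F(x)
    for k in range(1, iters + 1):
        h = subgrad(x)
        nh = np.linalg.norm(h)
        if nh == 0:                 # 0 is a subgradient: minimum reached
            return x, F(x)
        x = x - (gamma0 / k) * h / nh
        if F(x) < best_f:
            best_x, best_f = x.copy(), F(x)
    return best_x, best_f

F = lambda x: np.abs(x).sum()       # toy convex nonsmooth function
g = lambda x: np.sign(x)            # one of its subgradients
x0 = np.array([3.0, -2.0])
x_best, f_best = shor(F, g, x0, gamma0=F(x0))
```

The iterates chatter around the minimizer with an amplitude of order γ_k, so the best recorded value slowly approaches the minimum, in line with the slow convergence noted below.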
Formula (3.1) implements a shift of length γ_k in the direction opposite to the subgradient h_k. Drawbacks of this method include the impossibility of finding the necessary step length γ_k without additional information and the fact that an arbitrary subgradient is taken from the set of subgradients. The issue of the choice of the step length is solved differently in different subgradient methods. In Shor's method, the sequence of step lengths γ_k must satisfy two conditions: it must tend to zero, but the corresponding series must diverge. For the implementation of the numerical algorithm, the sequence γ_k = γ_0/k was used. Note that method (3.1) is not, in general, a descent method, in which the sequence F(x_k) is decreasing. Nevertheless, it is proved in [15, 16] that there is convergence in the sense of the best value of the function attained up to the current step. Since the series ∑_{k=1}^∞ γ_k diverges, method (3.1) obviously cannot converge fast. In addition, it is difficult to estimate the attained accuracy. Programs implementing the search for a minimum point of the function F(x) by formula (3.1) were designed. Comparing the results obtained by the programs for different initial data and different values of the parameter γ_0, we found that it is reasonable to take γ_0 = F(x_0), i.e., the value equal to the Hausdorff distance between the polyhedra at the initial moment. Different methods of choosing a subgradient were applied: for example, taking the arithmetic mean of the vertices of the subdifferential polyhedron or the subgradient of minimum or maximum norm. It turned out that the method of choosing a subgradient does not essentially influence the accuracy and the convergence rate.

3.2. A method with result prediction. If the minimum value F∗ = F(x∗) = min_{x∈R^n} F(x) is known, there exists a solution of the problem of choosing the necessary length of an iteration step in formula (3.1) (see, e.g., [16, Sect. 5.3]). In this case, the subgradient method has the form

x_{k+1} = x_k − ((F(x_k) − F∗)/‖h_k‖²) h_k,   h_k ∈ ∂F(x_k).   (3.2)
In the general case, when the value F∗ is unknown, we can use an analog of formula (3.2) with some approximate value f∗, which can be either less or greater than F∗ (see, for example, [16, Sect. 5.3]). More exactly, let us write the subgradient method in the form

x_{k+1} = x_k − ((F(x_k) − f∗)/‖h_k‖²) h_k,   h_k ∈ ∂F(x_k),   k = 0, 1, …,   (3.3)

where the initial iteration x_0 and the value f∗ satisfy the inequality F(x_0) ≥ f∗ and the vector h_0 satisfies the inclusion h_0 ∈ ∂F(x_0). Note that, in both methods (3.2) and (3.3), the vectors h_k can be chosen nonuniquely since, in general, ∂F(x_k) can be a multivalued set. The choice of the vector h_k will be specified below.

Assume that, in the calculation process, iterations x_0, x_1, …, x_j, …, x_k of method (3.3) were realized. Suppose that each iteration x_j, j = 1, …, k, satisfies the inequality S_{∂F(x_j)}(h_{j−1}) ≥ 0, where h_{j−1} ∈ ∂F(x_{j−1}). To calculate the iteration x_{k+1}, assume that h_k ∈ ∂F(x_k) satisfies the inequality ⟨h_k, h_{k−1}⟩ ≥ 0. Such a vector h_k exists since S_{∂F(x_k)}(h_{k−1}) ≥ 0. By the definition of the subgradient h_k ∈ ∂F(x_k), we have F(x) ≥ F(x_k) + ⟨x − x_k, h_k⟩, x ∈ R^n, k = 1, 2, …; hence,

F(x_{k+1}) ≥ F(x_k) + ⟨x_{k+1} − x_k, h_k⟩,   k = 1, 2, ….   (3.4)

It follows from (3.3) and (3.4) that F(x_{k+1}) ≥ F(x_k) − ((F(x_k) − f∗)/‖h_k‖²)⟨h_k, h_k⟩, k = 1, 2, …; hence, F(x_{k+1}) ≥ f∗, k = 1, 2, …. In view of these inequalities and F(x_0) ≥ f∗, we obtain F(x_{k−1}) ≥ f∗, k = 1, 2, …. In addition, according to (3.3), we have

x_k = x_{k−1} − ((F(x_{k−1}) − f∗)/‖h_{k−1}‖²) h_{k−1},   h_{k−1} ∈ ∂F(x_{k−1}),   k = 1, 2, ….   (3.5)

It follows from (3.4) and (3.5) that

F(x_{k−1}) ≥ F(x_k) + ((F(x_{k−1}) − f∗)/‖h_{k−1}‖²)⟨h_{k−1}, h_k⟩,   k = 1, 2, …;

i.e.,

F(x_{k−1}) ≥ F(x_k) + (⟨h_{k−1}, h_k⟩/‖h_{k−1}‖²)(F(x_{k−1}) − f∗),   k = 1, 2, ….   (3.6)

Since ⟨h_{k−1}, h_k⟩ ≥ 0, k = 1, 2, …, and F(x_{k−1}) − f∗ ≥ 0, we have F(x_{k−1}) ≥ F(x_k). Arguing similarly, we find that, in the case S_{∂F(x_j)}(h_{j−1}) ≥ 0, j = 1, …, k, the vectors h_j ∈ ∂F(x_j), j = 2, …, k, can be chosen from the condition ⟨h_{j−1}, h_j⟩ ≥ 0. Under this choice of the vectors h_j, j = 2, …, k, the sequence F(x_j), j = 0, …, k, decreases nonstrictly: F(x_0) ≥ F(x_1) ≥ … ≥ F(x_j) ≥ … ≥ F(x_k). Geometrically, this means that the values of the function do not increase if the subgradients at the current and at the preceding step are codirectional.

It may turn out that F(x_k) = f∗ and 0 ∉ ∂F(x_k). Then, x_{k+1} = x_k, and iterative process (3.3) stops. The relation 0 ∉ ∂F(x_k) means that x_k is not a minimum point of the function F(x) and, hence, f∗ > F∗. This case is shown in Fig. 1. In this case, we continue the iterative process of the approximate calculation of the minimum point x∗ but use another number f∗, which is less than the number used in the calculation of the points x_0, x_1, …, x_k. For the new value of f∗ in
Fig. 1. Realization of a step of the subgradient method for f∗ > F∗.

Fig. 2. Realization of a step of the subgradient method for f∗ < F∗.
formula (3.3), we can take half of the preceding number f∗. Thus, we continue the iterative process. Suppose that, for a sequence of points x_{k+1}, x_{k+2}, …, x_m, we obtain a situation similar to the situation for the preceding sequence x_0, x_1, …, x_k; i.e., let ⟨h_{j−1}, h_j⟩ ≥ 0, j = k + 1, …, m. In this case, the sequence F(x_j), j = k, …, m, decreases nonstrictly: F(x_k) ≥ F(x_{k+1}) ≥ … ≥ F(x_m). If the inequality ⟨h_{m−1}, h_m⟩ ≥ 0 fails for j = m, i.e., the inequality ⟨h_{m−1}, h_m⟩ < 0 holds, it may turn out that F(x_m) ≥ F(x_{m+1}); then, the iterative process is continued with the same f∗. If F(x_m) < F(x_{m+1}), then we should replace f∗ by a new, greater value equal to (f∗ + F(x_m))/2. This situation corresponds to Fig. 2. In this case, we remember in further iterations the value of f∗ used before in the variable f_1 as a lower bound for the real minimum value F∗.

If F(x_m) = f∗ and 0 ∉ ∂F(x_m), then x_{m+1} = x_m, and iterative process (3.3) with the new f∗ stops. Since 0 ∉ ∂F(x_m), we find that x_m is not a minimum point of the function F(x); consequently, once again, we have f∗ > F∗. Continuing the iterative process, we introduce a new number f∗ equal to (f∗ + f_1)/2. The iterative process is continued. If F(x_m) = f∗ and 0 ∈ ∂F(x_m), then the solution process for the problem of finding a minimum point x∗ of the function F(x) stops.

Based on the above argument, we can consider the subgradient method with result prediction as a bisection method with respect to values of the function. If we do not hit the minimum point at some step of the iterative process, i.e., the inclusion 0 ∈ ∂F(x_k) does not hold, then the process stops after attaining the required accuracy with respect to values of the function. In this case, the value F(x_k) − f_1 is compared with the accuracy threshold, since f_1, by the above argument, is a lower bound for the minimum value F∗ of the function F(x).
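When the value F∗ is known exactly, step (3.2) needs no tuning (it is often called the Polyak step). A minimal sketch with illustrative names on a toy convex function, not on the authors' F(x); here it reaches the minimizer of f(x) = ‖x‖₁ in two steps:

```python
import numpy as np

def polyak(F, subgrad, x0, f_star, iters=200):
    # method (3.2): exact step (F(x_k) - F*) / ||h_k||^2 along -h_k
    x = np.asarray(x0, float)
    for _ in range(iters):
        h = subgrad(x)
        d = np.dot(h, h)
        if d == 0 or F(x) <= f_star:     # minimum reached
            break
        x = x - (F(x) - f_star) / d * h
    return x

F = lambda x: np.abs(x).sum()
g = lambda x: np.sign(x)
x = polyak(F, g, np.array([3.0, -2.0]), f_star=0.0)   # x converges to (0, 0)
```

With an approximate value f∗ in place of F∗, the same step turns into formula (3.3), and the adjustment of f∗ proceeds by the bisection logic described above.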
All the assumptions made within the preliminary argument were verified by numerous computational experiments. In all the tests conducted, the program implementing the described algorithm either reached a point x∗ such that 0 ∈ ∂F(x∗) or attained the required accuracy. Like any bisection method, this process has a high convergence rate, which was confirmed experimentally: the program implementing this method showed an essentially higher convergence rate than the simple subgradient method. Advantages of the program also include the comparative simplicity of passing to problems in spaces of greater dimension and a small volume of used data.
Comparing the convergence of the method with result prediction with that of Shor's method, we note that the convergence rate increases tenfold. For example, tests on identical pairs of polygons revealed that, to attain the same accuracy to the third decimal place, method (3.1) needed on average 3000 iterations, whereas method (3.2) required at most 250 iterations.

4. THE CASE OF NONCONVEX POLYHEDRA

In the case of arbitrary polyhedra A and B, the basic problem of calculating the Hausdorff distance between them is much more complicated, since it is insufficient to consider only the vertices of these polyhedra, as in the convex case. The distance can be attained at boundary points of one polyhedron that are equidistant from the other polyhedron and even at interior points of a polyhedron. Let us illustrate the variety of arising possibilities in the planar case, i.e., for n = 2. We can observe the following cases.

1. A boundary point of one polygon is equidistant from sides of the other (Fig. 3a).
2. A boundary point of one polygon is at equal maximal distances from a vertex and a side of the other polygon (Fig. 3b).
3. A boundary point of one polygon is equidistant from vertices of the other (Fig. 3c).
4. A point at which the Hausdorff distance is attained is an interior point of a polygon. In this case, this point is the center of the corresponding circle defined by
(a) three sides of the second polygon; the point is the center of the inscribed circle (Fig. 4a);
(b) two sides and a vertex of the second polygon (Fig. 4b);
(c) a side and two vertices of the second polygon (Fig. 4c);
(d) three vertices of the second polygon; the point is the center of the circle containing these vertices (Fig. 4d).

Fig. 3. Polygons for which the Hausdorff distance is attained at a boundary point.

Fig. 4. Polygons for which the Hausdorff distance is attained at an interior point.
Fig. 5. Polygons giving a nonconvex function F(x).

Fig. 6. Level curves of the function F(x) in the nonconvex case.
In the general case of arbitrary polyhedra A and B, the convexity of the function F(x) = d(A, B + {x}) is not guaranteed. To study the convexity properties of this function, a series of computational experiments on a plane was conducted. As a result, we established that, for the majority of pairs of randomly chosen polygons, the function F(x) was convex even if the polygons were nonconvex. Moreover, it was shown experimentally that, to obtain a nonconvex function, one should use polygons of comparable sizes. These facts can be explained by the following theoretical observations.

Lemma 1. For a convex polyhedron A ⊂ R^n and an arbitrary polyhedron B ⊂ R^n, the function f(x) = h(B + {x}, A) is convex. Here, B + {x} is the polyhedron B shifted by the vector x ∈ R^n, and h(B, A) is the Hausdorff deviation of B from A, which can be calculated by the formula

h(B, A) = max_{b∈B} min_{a∈A} ‖a − b‖.
Indeed, the distance from a point x ∈ R^n to a convex set A is a convex function of x, and the maximum of convex functions is also a convex function.

Lemma 2. If, in a degenerate case, the polyhedron B ⊂ R^n is a point, then the function F(x) = d(A, B + {x}) is convex for any polyhedron A ⊂ R^n.

Proof. This follows from the fact that the Hausdorff distance between the sets A and B is defined by formula (1.1) as the maximum of the Hausdorff deviations of A from B and, symmetrically, of B from A. In the case of a point set B, this maximum is attained at the Hausdorff deviation of A from B. By Lemma 1, since a point set is convex, we obtain the convexity of F(x).

Thus, if the size of one polyhedron is essentially greater than the size of the other, then the smaller polyhedron can be replaced by its convex hull; in this case, the Hausdorff distance does not change. To prove the convexity of F(x), we can argue similarly to the presented lemmas.

A program module was written that computes values of the function F(x) = d(A, B + {x}) for an arbitrary pair of polygons on a plane by calculating the values at the nodes of a grid with step 1. We confirmed experimentally that, in the majority of cases, the function F(x) is convex even though the polygons A and B are nonconvex. In Fig. 5, an example of polygons is presented, and, in Fig. 6, level curves of the function F(x) are shown in the case of its nonconvexity.
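Lemma 1 is easy to probe numerically. A hedged sketch in which the convex set A is taken to be the closed unit disk (so that the distance to A is explicit) and B is an arbitrary random finite set; midpoint convexity of x ↦ h(B + {x}, A) is checked on random pairs of points:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(B, x):
    # h(B + {x}, A) for A the closed unit disk centered at the origin:
    # dist(p, A) = max(||p|| - 1, 0), and the deviation is the max over b in B
    return max(max(np.linalg.norm(b + x) - 1.0, 0.0) for b in B)

B = rng.normal(size=(6, 2)) * 3.0        # arbitrary finite set standing in for B
for _ in range(1000):                    # midpoint-convexity check of x -> f(B, x)
    x, y = rng.normal(size=2) * 5, rng.normal(size=2) * 5
    assert f(B, (x + y) / 2) <= (f(B, x) + f(B, y)) / 2 + 1e-9
```

The inequality holds for every sampled pair, as Lemma 1 predicts; swapping the roles of the sets (a nonconvex target) breaks the argument, which is exactly the source of nonconvexity of F(x).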
5. APPLICATION OF SUBGRADIENT METHODS IN THE NONCONVEX CASE

If the function F(x) is globally nonconvex, then we can consider the existence of subdifferentials of this function only at points of its local convexity. Here, all the preceding arguments remain valid; consequently, as in the convex case, a subdifferential of F(x) coincides with the convex hull of the vectors on which the Hausdorff distance between the polygons under consideration is attained. As before, we have a condition for attaining a minimum point, which consists in the fact that the origin belongs to the subdifferential. Note that, since the function F(x) can be nonconvex, this extremum condition can give only a point of local minimum.

A heuristic algorithm was designed to solve the formulated problem in the case of arbitrary polygons on a plane. Essential changes were made in the algorithm for calculating a current subdifferential of the function F(x). Since the function F(x) is not convex, we cannot use the subgradient method with result prediction. To implement the algorithm, the simple Shor subgradient method was taken as a base. Based on a large volume of computational experiments, we established that, for a pair of arbitrarily generated polygons, the implemented heuristic algorithm always finds a minimum point, and this minimum is global in approximately 92% of cases. We increased the convergence rate by preliminarily solving the optimal arrangement problem for the convex hulls of the specified polygons and by using the found arrangement at the initial step of solving the problem for the polygons themselves. The algorithm's outputs were tested by comparing them with the approximate global minimum point found by a search over all positions with a fixed step on a rectangular grid. The implemented algorithms can be adapted to spaces of greater dimension.

6. STATEMENT OF THE PROBLEM OF FINDING A BEST n-NET

One of the most effective methods of approximating sets on a plane is the replacement of the original set by a family of points [17–19]. The resulting families of points must satisfy an optimality condition understood as the minimum of the Hausdorff deviation of the given set from the union of a fixed number of points. Questions of existence and uniqueness for optimal families of points in different Euclidean spaces were studied by Garkavi [20–22] and Sosov [23]. The first step in the construction of an approximation of a set is to find its central point.

Definition 1. The Chebyshev center of a compact set M ⊂ R^m is a point c(M) such that

h(M, {c(M)}) = inf{h(M, {x}) : x ∈ R^m}.   (6.1)
In other words, the Chebyshev center is the center of a ball of the minimum possible radius containing M. For any compact set, the Chebyshev center exists, is unique, and belongs to the convex hull of M [22].

Definition 2. The Chebyshev radius r(M) of a compact set M ⊂ R^m is the value (6.1).

The notion of Chebyshev center can be generalized to the case of several points.

Definition 3. An n-net in the space R^m is a nonempty set consisting of at most n points. Denote by Σ_n the set of all n-nets of the space R^m.

Definition 4. A net S∗ is called a best n-net of a compact set M ⊂ R^m if

h(M, S∗) = min{h(M, S) : S ∈ Σ_n}.
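For a finite set of points, the Chebyshev center of Definition 1 (the center of the minimum enclosing ball) can be approximated by the simple Bădoiu–Clarkson iteration, which repeatedly steps toward the current farthest point. A hedged sketch (one standard option, not the authors' algorithm):

```python
import numpy as np

def chebyshev_center(points, iters=2000):
    # Badoiu-Clarkson iteration: c_{k+1} = c_k + (farthest point - c_k) / (k + 1);
    # converges to the center of the minimum enclosing ball of the finite set
    c = np.array(points[0], float)
    for k in range(1, iters + 1):
        far = max(points, key=lambda p: np.linalg.norm(p - c))
        c += (np.asarray(far, float) - c) / (k + 1)
    return c
```

For the four corners of the square [0, 2]², the iteration settles near (1, 1), the Chebyshev center, with Chebyshev radius √2.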
The existence of a best n-net of a compact set is proved in [20]. However, in contrast to the Chebyshev center, it is nonunique in the general case, and some of its points can lie outside the convex hull of M.

Let us formulate the problem of finding a best n-net of a planar compact set. Suppose that a bounded closed set M ⊂ R^2 and a number n ∈ N are given. It is required to find at least one best n-net S for it. This problem can be interpreted as the problem of an optimal cover of M by a family of n disks of equal radius r; the optimality criterion is the minimum of the value r. The points s_i, i = 1, …, n, of an n-net S are the centers of the disks O(s_i, r) of an optimal cover, and their radius equals the Hausdorff deviation r = h(M, S) of the compact set M from the n-net S. The authors studied this problem in [17–19]. For some sets, best n-nets were found analytically for small n. An expression for best 2- and 3-nets was given in [18] for wide classes of triangles. In [19], qualitative features of best 2- and 3-nets were studied for some triangles, and their sets were constructed completely.

7. A BOUND FOR BEST n-NETS

In many problems, it is important to estimate the behavior of the Hausdorff deviation of a compact set M from its best n-net S_n as n grows. Denote this deviation, as a function of the positive integer n, by R_M(n) = h(M, S_n). The qualitative properties of this function were studied by Kolmogorov in [24, 25]. For planar sets of nonzero area, he obtained the estimate

R_M(n) = O(1/√n).   (7.1)
For specific sets, it is often required to find a numerical bound for the behavior of the function R_M(n) that makes formula (7.1) specific for a given figure. A lower bound can be easily derived from the fact that the area of the family of disks Ξ(S_n, R_M(n)), which covers M, cannot be less than the area of M. An upper bound, as a rule, can be obtained only by studying the structure of a best n-net.

Theorem. Let a set M be a square with side length 2l, l ∈ (0, ∞). Then, the function R_M(n) satisfies the bound

lim sup_{n→∞} R_M(n) √n ≤ √2 l.   (7.2)

Here, lim sup_{n→∞} a_n is the upper limit of the sequence {a_n}_{n=1}^∞.

Proof. Without loss of generality, consider a square M centered at the origin with sides parallel to the axes Ox and Oy; any square can be obtained from it by rotation and parallel translation. Assume that bound (7.2) does not hold. Then, there exist a monotonically increasing sequence {n_i}_{i=1}^∞ ⊆ N, lim_{i→∞} n_i = ∞, and a number γ ∈ (0, ∞) such that

R_M(n_i) √n_i > √2 l + γ   for all i ∈ N.   (7.3)

Note that, if the number of points of a best n-net is the square of a positive integer, n = m², m ∈ N, then

h(M, S_n) ≤ √2 l / m.   (7.4)
Indeed, the square M is the union of m² squares M_m with side length equal to 2l/m. The union S_n∗ of their centers forms an n-net for which, by construction,

h(M, S_n∗) = √2 l / m.

Hence, (7.4) holds for a best n-net.

For arbitrary n_i, find the greatest perfect square m_i² not exceeding this number: n_i = m_i² + k_i. Obviously, k_i ≤ 2m_i. Then,

k_i ≤ 2√(m_i²) ≤ 2√(m_i² + k_i) = 2√n_i.

This bound and inequality (7.4) yield

R_M(n_i) ≤ √2 l / m_i = √2 l / √(n_i − k_i) ≤ √2 l / √(n_i − 2√n_i) = (√2 l / √n_i) · 1/√(1 − 2/√n_i).

We can write the bound in the form

R_M(n_i) √n_i ≤ √2 l φ(n_i),   where φ(n_i) = (1 − 2 n_i^{−1/2})^{−1/2}.

In the limit, we obtain

lim_{i→∞} φ(n_i) = lim_{i→∞} (1 − 2 n_i^{−1/2})^{−1/2} = 1.

Therefore, for any γ > 0, there exists a number i∗ such that φ(n_i) ≤ 1 + γ/(2l) for all i > i∗. Consequently, for sufficiently large n_i, we have

R_M(n_i) √n_i ≤ √2 l (1 + γ/(2l)) = √2 l + √2 γ/2 < √2 l + γ.

This bound contradicts (7.3).
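The grid net from the proof of (7.4) is easy to reproduce numerically. An illustrative check for l = 1 and m = 8, with the deviation estimated on a fine sample of the square:

```python
import numpy as np

l, m = 1.0, 8
# centers of the m*m sub-squares of the square [-l, l]^2 form an n-net, n = m^2
c = (np.arange(m) + 0.5) * (2 * l / m) - l
net = np.array([(cx, cy) for cx in c for cy in c])
# Hausdorff deviation h(M, S) estimated on a grid sample of M
s = np.linspace(-l, l, 201)
pts = np.array([(px, py) for px in s for py in s])
dev = float(np.linalg.norm(pts[:, None, :] - net[None, :, :], axis=-1).min(axis=1).max())
# dev equals sqrt(2) * l / m (the sub-square half-diagonal), in accordance with (7.4)
```

The sample step divides the sub-square side, so the sub-square corners, where the deviation is attained, are sampled exactly.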
It is clear that bound (7.4) holds not only for the square with side 2l but also for any set embedded in it, for example, for the disk of radius l.

8. EXAMPLES OF THE CONSTRUCTION OF BEST n-NETS

The analytic construction of best n-nets is possible only in rare cases. For the majority of practical problems, even in the case of a convex [26] planar set with simple geometry, the application of numerical methods is required [27]. To construct best n-nets of planar sets, the authors implemented a program complex using the algorithm described in [19, p. 94]. It is based on decomposing a set M into Dirichlet domains [28, p. 18] by means of Voronoi diagrams [29] and finding their Chebyshev centers. At the first stage, the algorithm randomly generates an initial iteration of an n-net S⁰. The result of the calculations may vary essentially depending on this initial net; that is why the program complex was applied many times for each specific example with subsequent comparison of the results. Similar methods of constructing optimal covers of planar sets by disks were applied in [30].
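One possible realization of the described scheme, sketched under assumptions: M is replaced by a finite sample, the exact Chebyshev center by an approximate Bădoiu–Clarkson one, and all names are illustrative. Each round decomposes the sample into the Dirichlet (Voronoi) cells of the current net and moves every net point to the Chebyshev center of its cell:

```python
import numpy as np

def enc_center(P, iters=500):
    # approximate Chebyshev center of a finite set P (Badoiu-Clarkson steps)
    c = P.mean(axis=0)
    for k in range(1, iters + 1):
        c = c + (P[np.argmax(np.linalg.norm(P - c, axis=1))] - c) / (k + 1)
    return c

def improve_net(M_pts, S, rounds=30):
    # iterate: Dirichlet decomposition of the sampled set M with respect to
    # the current net S, then recentering of every cell
    S = S.copy()
    for _ in range(rounds):
        owner = np.argmin(((M_pts[:, None, :] - S[None, :, :]) ** 2).sum(-1), axis=1)
        for i in range(len(S)):
            cell = M_pts[owner == i]
            if len(cell):
                S[i] = enc_center(cell)
    return S

def deviation(M_pts, S):
    # h(M, S) on the sample
    return np.max(np.min(np.linalg.norm(M_pts[:, None, :] - S[None, :, :], axis=-1), axis=1))
```

Neither step can increase the deviation (recentering minimizes the maximum distance within each cell, and reassignment only shortens each point's distance to the net), but, as noted above, the result depends on the initial net, so the iteration is restarted from several random nets.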
Fig. 7. The set M, the approximation S of its best 15-net, and the cover Ξ by 15 disks in Example 1.

Fig. 8. The set M, the approximation S of its best 16-net, and the cover Ξ by 16 disks in Example 1.
For elementary geometric shapes, in particular, for the disk and the square, best n-nets were found analytically for small n. Their review and the corresponding optimal covers by disks are given in [31]. The authors of the present paper focused on the numerical construction of approximations of best n-nets in the case of large n.

Example 1. It is required to construct best n-nets for n = 15 and 16 if the set M is a square with side 2 centered at the origin. The problem is solved numerically by iteratively improving several different n-nets embedded in M. The numerically obtained approximation S_15 of a best 15-net has the form

S_15 = {s_i}_{i=1}^{15} ≈ {(−0.7392, −0.7450), (−0.7487, −0.2254), (−0.8191, 0.3564), (−0.6729, 0.8368), (−0.2283, −0.7349), (−0.2218, −0.2035), (−0.2877, 0.3248), (−0.0132, 0.8486), (0.3124, −0.7808), (0.2694, −0.2160), (0.2340, 0.3626), (0.6596, 0.8671), (0.8015, −0.6953), (0.7754, −0.1044), (0.7754, 0.4657)}.

The Hausdorff deviation of the square M from the approximation of a best 15-net is r = h(M, S_15) ≈ 0.3655. The set M, the approximation S of a best 15-net, and the family of disks of the optimal cover Ξ(S_15, r) = O(s_1, r) ∪ O(s_2, r) ∪ … ∪ O(s_15, r) are presented in Fig. 7.

Note that the constructed 15-net improves the result obtained in [28], where the square M_1 with side 1 was covered by a family of 15 disks of radius r̃ = 0.1852 (see [28, p. 26]). If we shrink homothetically the square M and the system of disks Ξ twofold, we obtain the square M_1 embedded in the union of 15 disks of radius r′ = r/2 ≈ 0.1828 < r̃. The percent decrease of the radius is

((r̃ − r′)/r̃) × 100% ≈ 1.31%.

The approximation S_16 of a best 16-net has the form

S_16 = {s_i}_{i=1}^{16} ≈ {(−0.7580, −0.7456), (−0.7502, −0.2441), (−0.7526, 0.2523), (−0.7509, 0.7512),
Fig. 9. The set M , the approximation S of its best 14-net, and the cover Ξ by 14 disks in Example 2.
(−0.2650, −0.7564), (−0.2595, −0.2518), (−0.2540, 0.2536), (−0.2507, 0.7541), (0.2291, −0.7494), (0.2741, −0.2655), (0.2779, 0.1921), (0.2346, 0.7388), (0.7358, −0.7736), (0.8059, −0.2580), (0.7630, 0.2871), (0.7343, 0.7719)}.

The Hausdorff deviation of M from the approximation of a best 16-net is r = h(M, S16) ≈ 0.3521. The set M, the approximation S, and the family of disks of the optimal cover Ξ(S16, r) = O(s1, r) ∪ O(s2, r) ∪ . . . ∪ O(s16, r) are presented in Fig. 8. Note that, though the number of points in the approximation of a best 16-net is the square of 4, these points are not located at the Chebyshev centers of the 16 squares with side 0.5 that compose M but form a more complicated structure.

If we calculate the value h(M, Sn)·√n in Example 1, it equals 1.4156 for n = 15 and 1.4084 for n = 16. These values are in accordance with the theorem, which gives the upper bound √2 ≈ 1.4142 for R_M(n)·√n in the limit as n → ∞.

Example 2. It is required to construct a best n-net for n = 14 if the set M is a disk of radius 1 centered at the origin. The problem is solved numerically by iteratively improving several different 14-nets embedded in the disk M. The numerically obtained approximation S14 of a best 14-net has the form

S14 = {s1, s2, . . . , s14} ≈ {(0.4472, 0.6311), (0.8723, 0.3070), (0.4249, 0), (0, 0.9431), (−0.4468, 0.6309), (0, 0.3192), (0, −0.3193), (−0.8732, 0.3079), (−0.4251, 0), (−0.8717, −0.3062), (−0.4476, −0.6312), (0, −0.9431), (0.4473, −0.6311), (0.8723, −0.3069)}.

The Hausdorff deviation of the disk M from the approximation of a best 14-net is r = h(M, S14) ≈ 0.3325. The set M, the approximation S, and the family of disks of the optimal cover Ξ(S14, r) = O(s1, r) ∪ O(s2, r) ∪ . . . ∪ O(s14, r) are presented in Fig. 9. It is seen in the figure that the points of the approximation form a regular structure: it consists of families of similar diamonds and has two symmetry axes coinciding with the coordinate axes.
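The computations in Examples 1 and 2 can be reproduced approximately. The sketch below evaluates h(M, S14) for the disk of Example 2 by Monte Carlo sampling of M and then illustrates the improvement idea described in the text (decomposing M into influence domains of the net points and recentering each point at the Chebyshev center of its domain, cf. the algorithm based on [19]). This is an illustrative reconstruction under the assumption of a sampled discretization of M, not the authors' exact algorithm; the Chebyshev centers are approximated by the Badoiu-Clarkson farthest-point iteration.

```python
import numpy as np

# S14 as reported in Example 2 (coordinates truncated to four decimals).
S14 = np.array([
    (0.4472, 0.6311), (0.8723, 0.3070), (0.4249, 0.0), (0.0, 0.9431),
    (-0.4468, 0.6309), (0.0, 0.3192), (0.0, -0.3193), (-0.8732, 0.3079),
    (-0.4251, 0.0), (-0.8717, -0.3062), (-0.4476, -0.6312), (0.0, -0.9431),
    (0.4473, -0.6311), (0.8723, -0.3069),
])

def deviation(M, S):
    """Hausdorff deviation h(M, S) = max over m in M of min over s in S of |m - s|."""
    return np.linalg.norm(M[:, None, :] - S[None, :, :], axis=2).min(axis=1).max()

# Dense random sample of the unit disk M (with some exact boundary points).
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 20000)
rad = np.sqrt(rng.uniform(0.0, 1.0, 20000))
rad[:200] = 1.0
M = np.column_stack([rad * np.cos(phi), rad * np.sin(phi)])

r = deviation(M, S14)
print(round(r, 3))  # sampling slightly underestimates; should be close to 0.3325

def chebyshev_center(P, iters=200):
    """Approximate center of the minimal enclosing ball of P by stepping
    toward the current farthest point (Badoiu-Clarkson iteration)."""
    c = P.mean(axis=0)
    for k in range(1, iters + 1):
        far = P[np.argmax(np.linalg.norm(P - c, axis=1))]
        c = c + (far - c) / (k + 1)
    return c

def improve(M, S, sweeps=15):
    """Split the sample of M into influence domains (nearest net point)
    and move each net point to the Chebyshev center of its domain."""
    S = S.copy()
    for _ in range(sweeps):
        owner = np.linalg.norm(M[:, None, :] - S[None, :, :], axis=2).argmin(axis=1)
        for i in range(len(S)):
            if np.any(owner == i):
                S[i] = chebyshev_center(M[owner == i])
    return S

S0 = M[rng.choice(len(M), 14, replace=False)]  # random initial 14-net
S1 = improve(M, S0)
print(round(deviation(M, S0), 3), round(deviation(M, S1), 3))  # the second value should be smaller
```

Each recentering step cannot increase the covering radius of a domain, so the deviation is non-increasing along the sweeps; as the text notes, the iteration is restarted from several different initial nets to avoid poor local minima.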
9. SPHERICAL n-NETS

In some cases, applied problems are considered not on a plane but on more complicated surfaces, in particular, on a sphere (or a spherical segment) [32]. Let us consider sets on a sphere Θ of unit radius centered at the origin.

Definition 5. The spherical distance σ(a, b) between points a ∈ Θ and b ∈ Θ is the minimum length of a curve Γ ⊂ Θ connecting a and b.

Note that the curve of minimum length connecting two points a and b is an arc of the great circle passing through a and b. Recall that a great circle is the intersection of the sphere Θ with a plane passing through the origin [33]. In the case when the points are the ends of a diameter of the sphere, there are infinitely many such arcs. As follows from spherical geometry, the spherical distance between points a ∈ Θ and b ∈ Θ and the distance in three-dimensional Euclidean space are connected by the one-to-one relation

σ(a, b) = 2 arcsin(‖a − b‖/2).    (9.1)
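Relation (9.1) can be checked numerically: for unit vectors the chord length is ‖a − b‖ = 2 sin(θ/2), where θ = arccos⟨a, b⟩ is the angle between the radii and hence the great-circle arc length on the unit sphere. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def spherical_distance(a, b):
    """Spherical distance on the unit sphere via the chord, relation (9.1)."""
    return 2.0 * np.arcsin(np.clip(np.linalg.norm(a - b) / 2.0, 0.0, 1.0))

for _ in range(1000):
    a, b = rng.normal(size=(2, 3))
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # Angle between the radii = great-circle arc length on the unit sphere.
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    assert abs(spherical_distance(a, b) - theta) < 1e-7
print("relation (9.1) agrees with the great-circle angle")
```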
Let us generalize the notions of Hausdorff deviation and best n-net to the sphere.

Definition 6. The spherical deviation of a compact set A ⊂ Θ from a compact set B ⊂ Θ is the number hσ(A, B) = max{ρσ(a, B) : a ∈ A}, where ρσ(a, B) = min{σ(a, b) : b ∈ B}.

Definition 7. A spherical n-net is a nonempty subset of Θ consisting of at most n points. Denote by Σσn the set of all spherical n-nets.

Definition 8. A spherical n-net S* is called a best spherical n-net of a compact set M ⊂ Θ if

hσ(M, S*) = min{hσ(M, S) : S ∈ Σσn}.    (9.2)
Let us formulate the problem on a best cover of a compact set on a sphere. For this, we introduce the notion of a spherical disk Oσ(s, r) as the set of points of Θ lying at a spherical distance of at most r from s: Oσ(s, r) = {o ∈ Θ : ρσ(o, s) ≤ r}. We assume that s belongs to Θ and the radius r satisfies the constraint r ∈ (0, π]. A spherical disk is a spherical cap formed by the intersection of Θ with a ball whose center belongs to Θ.

Consider a bounded closed set M ⊂ Θ. It is required to find a best spherical n-net S of M. We give the following interpretation of this problem. Consider families of spherical disks Oσ(s1, r), Oσ(s2, r), . . ., Oσ(sn, r) of equal radius r ∈ (0, π] centered at points s1, s2, . . ., sn. Then, the points of a best spherical n-net S are the centers of the disks of a family that, for the smallest r, satisfies the inclusion M ⊆ Oσ(s1, r) ∪ Oσ(s2, r) ∪ . . . ∪ Oσ(sn, r). In this case, the radius r of the spherical disks is hσ(M, S). We call such a family best for the specified set M ⊂ Θ. The solution of the problem on a best cover thus reduces to constructing a best spherical n-net S of the set M, whose points coincide with the centers of the spherical disks of a best family.

Note that the Chebyshev center c of a spherical disk Oσ(s, r) considered as a set in three-dimensional space does not coincide with s. However, for r < π/2, the points c and s lie on the same radius of the sphere Θ. If r ≥ π/2, then c coincides with the origin [33]. To solve the problem of constructing a best spherical n-net, an algorithm was developed similar to the algorithm described in [19] but based on decomposing the sphere Θ by great circles into influence domains of points (in contrast to decomposing a plane by straight lines).
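The spherical deviation hσ(M, S) of Definition 6, and hence the radius of the cover by spherical disks, can be evaluated by sampling M. The sketch below does this for the upper hemisphere and a hand-made 16-point net (a pole plus three latitude rings with arbitrarily chosen latitudes); this naive net is purely illustrative and is not an approximation of a best spherical 16-net, so its deviation is noticeably larger than an optimized one.

```python
import numpy as np

def spherical_deviation(M, S):
    """h_sigma(M, S): max over m in M of min spherical distance to S.
    M and S are arrays of unit vectors (points of the sphere Theta)."""
    chord = np.linalg.norm(M[:, None, :] - S[None, :, :], axis=2)
    dist = 2.0 * np.arcsin(np.clip(chord / 2.0, 0.0, 1.0))   # relation (9.1)
    return dist.min(axis=1).max()

# Random sample of the upper hemisphere M, folded onto z >= 0.
rng = np.random.default_rng(2)
M = rng.normal(size=(20000, 3))
M /= np.linalg.norm(M, axis=1, keepdims=True)
M[:, 2] = np.abs(M[:, 2])

# Naive 16-point net: the pole plus three rings of five points each
# (latitudes 0.35, 0.9, 1.45 are arbitrary illustrative choices).
S = [np.array([0.0, 0.0, 1.0])]
for lat, k in [(0.35, 5), (0.9, 5), (1.45, 5)]:
    for j in range(k):
        phi = 2.0 * np.pi * j / k
        S.append([np.sin(lat) * np.cos(phi), np.sin(lat) * np.sin(phi), np.cos(lat)])
S = np.array(S)

r = spherical_deviation(M, S)
print(round(r, 4))  # for comparison, the optimized 16-net of Example 3 attains about 0.4388
```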
Fig. 10. Projections of the set M , of the approximation S of its best spherical 16-net, and of the cover Ξ by 16 spherical disks in Example 3.
Fig. 11. The set M , the approximation S16 of its best spherical 16-net, and the cover Ξ by 16 spherical disks in Example 3.
Example 3. It is required to construct a best family of spherical disks for n = 16 for the set M = Oσ(p, π/2) coinciding with the upper hemisphere of Θ. The problem was solved numerically with the use of an iterative algorithm. The obtained approximation S16 of a best spherical 16-net has the form

S16 = {s1, s2, . . . , s16} ≈ {(0.2602, −0.5246, 0.8106), (−0.0303, −0.0318, 0.9990), (0.8912, −0.4536, 0), (−0.6116, 0.1499, 0.7768), (0.4599, 0.2887, 0.8397), (−0.7548, −0.5351, 0.3794), (0.3189, −0.8998, 0.2979), (0.9166, 0.2981, 0.2664), (−0.0500, 0.6724, 0.7385), (−0.3503, −0.6018, 0.7177), (−0.9880, 0.0038, 0.1543),
(0.7275, −0.3376, 0.5973), (−0.3470, −0.9119, 0.2190), (−0.2139, 0.9686, 0.1266), (−0.7313, 0.5848, 0.3508), (0.5093, 0.8199, 0.2614)}.

The spherical deviation of the set M from S16 is r = hσ(M, S16) ≈ 0.4388. The projections onto the plane xOy of the set M, the approximation S16, and the family of spherical disks Ξ(S16, r) = Oσ(s1, r) ∪ Oσ(s2, r) ∪ . . . ∪ Oσ(s16, r) are shown in Fig. 10. The form of the same sets in three-dimensional space is presented in Fig. 11.

10. CONCLUSIONS

In this paper, various problems of approximating sets by more convenient constructions are considered. Numerical modeling is carried out for a series of examples on the choice of an optimal position of a polygon and on the construction of best Chebyshev nets on a plane and their generalizations to a spherical surface. The effectiveness of the numerical methods is evaluated. The approximations are illustrated in the form of level curves of the Hausdorff distance function and families of disks optimally covering the sets. The developed methods and algorithms can also be used to solve transportation and logistics problems [34–36].

ACKNOWLEDGMENTS

This work was supported by the Ural Branch of the Russian Academy of Sciences (project no. 12-P-1-1012) within the Program of the Presidium of the Russian Academy of Sciences "Mathematical Models and Algorithms in Control Systems with Nonlinear Dynamics," by the Ural Branch of the Russian Academy of Sciences (project no. 12-P-1-1002) within the Program of the Presidium of the Russian Academy of Sciences "Dynamical Systems and Control Theory," by the Russian Foundation for Basic Research (project nos. 14-01-00486 a and 14-01-31319-mol a), and by the Program for State Support of Leading Universities of the Russian Federation (agreement no. 02.A03.21.0006 of August 27, 2013).

REFERENCES

1. N. N. Krasovskii and A. I. Subbotin, Positional Differential Games (Nauka, Moscow, 1974) [in Russian].
2. A. B. Kurzhanskii and T. F. Filippova, "Description of a set of viable trajectories of a differential inclusion," Dokl. Akad. Nauk SSSR 289 (1), 38–41 (1986).
3. A. B. Kurzhanski and T. F. Filippova, "On the theory of trajectory tubes — a mathematical formalism for uncertain dynamics, viability and control," in Advances in Nonlinear Dynamics and Control: A Report from Russia, Ed. by A. B. Kurzhanski (Birkhäuser, Boston, 1993), Ser. Progress in Systems and Control Theory, Vol. 17, pp. 122–188.
4. A. B. Kurzhanski and I. Valyi, Ellipsoidal Calculus for Estimation and Control (Birkhäuser, Boston, 1997).
5. F. L. Chernous'ko, Estimation of the Phase State of Dynamical Systems (Nauka, Moscow, 1988) [in Russian].
6. F. L. Chernous'ko, "Ellipsoidal estimates of a controlled system's attainability domain," J. Appl. Math. Mech. 45 (1), 7–12 (1981).
7. F. L. Chernousko and A. I. Ovseevich, "Properties of the optimal ellipsoids approximating the reachable sets of uncertain systems," J. Optimiz. Theory Appl. 120 (2), 223–246 (2004).
8. V. N. Ushakov, A. R. Matviichuk, and P. D. Lebedev, "Stability defect in a game problem of approach at a fixed time," Vestn. Udmurt. Univ., Ser. Mat. Mekh. Komp. Nauki, No. 3, 87–103 (2010).
9. V. N. Ushakov, P. D. Lebedev, A. R. Matviychuk, and A. G. Malev, "Differential games with fixed terminal time and estimation of the instability degree of sets in these games," Proc. Steklov Inst. Math. 277, 266–277 (2012).
10. F. Hausdorff, Set Theory (Amer. Math. Soc., Providence, RI, 1957; Komkniga, Moscow, 2006).
11. A. S. Lakhtin and V. N. Ushakov, "The problem of minimization of the Hausdorff distance between two convex polyhedra," Sovrem. Mat. Prilozh. 9, 60–67 (2003).
12. A. S. Lakhtin and V. N. Ushakov, "Minimization of Hausdorff distance between convex polyhedrons," J. Math. Sci. (N.Y.) 126 (6), 1553–1560 (2005).
13. R. Rockafellar, Convex Analysis (Princeton Univ., Princeton, 1970; Mir, Moscow, 1973).
14. A. G. Sukharev, A. V. Timokhov, and V. V. Fedorov, A Course in Optimization Methods (Nauka, Moscow, 1986) [in Russian].
15. N. Z. Shor, Minimization Methods for Non-Differentiable Functions and Applications (Naukova Dumka, Kiev, 1979; Springer, Berlin, 1985).
16. B. T. Polyak, Introduction to Optimization (Nauka, Moscow, 1983; Optimization Software, New York, 1987).
17. P. D. Lebedev and A. V. Ushakov, "Approximating sets on a plane with optimal sets of circles," Autom. Remote Control 73 (3), 485–493 (2012).
18. P. D. Lebedev and D. S. Bukharov, "Approximation of polyhedra by optimal sets of disks," Izv. Irkutsk. Gos. Univ., Ser. Mat., No. 3, 72–87 (2013).
19. P. D. Lebedev, A. A. Uspenskii, and V. N. Ushakov, "Algorithms for best approximation of sets on a plane by sets of disks," Vestn. Udmurt. Univ., Ser. Mat. Mekh. Komp. Nauki, No. 4, 88–99 (2013).
20. A. L. Garkavi, "Existence of the best net and the best width for a set in a Banach space," Uspekhi Mat. Nauk 15 (2), 210–211 (1960).
21. A. L. Garkavi, "On the optimal net and best cross-section of a set in a normed space," Izv. Akad. Nauk SSSR, Ser. Mat. 26 (1), 87–106 (1962).
22. A. L. Garkavi, "On the Chebyshev center and convex hull of a set," Uspekhi Mat. Nauk 19 (6), 139–145 (1964).
23. E. N. Sosov, "The metric space of all N-nets in a geodesic space," Uchen. Zap. Kazansk. Gos. Univ., Ser. Fiz.-Mat. Nauk 15 (4), 136–149 (2009).
24. A. N. Kolmogorov, "On certain asymptotic characteristics of completely bounded metric spaces," Dokl. Akad. Nauk SSSR 108 (3), 385–388 (1956).
25. A. N. Kolmogorov and V. M. Tikhomirov, "The ε-entropy and ε-capacity of sets in function spaces," Uspekhi Mat. Nauk, No. 2 (86), 3–86 (1959).
26. K. Leichtweiss, Konvexe Mengen (Springer, Berlin, 1980; Nauka, Moscow, 1985).
27. S. A. Piyavskii, "On the optimization of nets," Izv. Akad. Nauk SSSR, Ser. Tekhn. Kibernet., No. 1, 68–80 (1968).
28. V. S. Brusov and S. L. Piyavskii, "A computational algorithm for optimally covering a plane region," USSR Comp. Math. Math. Phys. 11 (2), 17–27 (1971).
29. F. Preparata and M. Shamos, Computational Geometry (Springer, New York, 1985; Mir, Moscow, 1989).
30. Sh. I. Galiev and M. A. Karpova, "Optimization of multiple covering of a bounded set with circles," Comp. Math. Math. Phys. 50 (4), 721–732 (2010).
31. Erich's Packing Center. http://www2.stetson.edu/~efriedma/packing.html
32. I. V. Bychkov, N. N. Maksimkin, I. S. Khozyainov, and L. V. Kiselev, "On the problem of patrolling the boundary of a water zone guarded by a group of underwater vehicles," in Technical Problems of Exploring the Global Ocean: Proceedings of the Fifth All-Russia Research and Technology Conference, Vladivostok, Russia, 2013 (IPMT, Vladivostok, 2013), pp. 424–429.
33. N. N. Stepanov, Spherical Trigonometry (OGIZ, Moscow, 1948) [in Russian].
34. A. L. Kazakov and A. A. Lempert, "An approach to optimization in transport logistics," Autom. Remote Control 72 (7), 1398–1404 (2011).
35. A. L. Kazakov, A. A. Lempert, and D. S. Bukharov, "On segmenting logistical zones for servicing continuously developed consumers," Autom. Remote Control 74 (6), 968–977 (2013).
36. A. A. Lempert, A. L. Kazakov, and D. S. Bukharov, "A mathematical model and a software system for a problem of logistic object arrangement," Upravl. Bol'shimi Sist., No. 41, 270–284 (2013).
Translated by E. Vasil’eva