aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) | title (string) | text_except_rw (string) | total_words (int64)
---|---|---|---|---|---|---|---
cs0501033 | 1589732927 | We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation — that is, computation of successive bits of information upon request. The core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like call-cc and proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and by Hyland and Ong on the other hand. | Sequential algorithms turned out to be quite central in the study of sequentiality. First, let us mention that Kleene has developed (for lower types) similar notions @cite_25 , under the nice name of oracles, in his late works on the semantics of higher order recursion theory (see @cite_17 for a detailed comparison). | {
"abstract": [
"Go back to An-fang, the Peace Square at An-Fang, the Beginning Place at An-Fang, where all things start (…) An-Fang was near a city, the only living city with a pre-atomic name (…) The headquarters of the People Programmer was at An-Fang, and there the mistake happened: A ruby trembled. Two tourmaline nets failed to rectify the laser beam. A diamond noted the error. Both the error and the correction went into the general computer. Cordwainer Smith The Dead Lady of Clown Town, 1964.",
"We show that Kleene's theory of unimonotone functions strictly relates to the theory of sequentiality originated by the full abstraction problem for PCF. Unimonotone functions are defined via a class of oracles, which turn out to be alternative descriptions of a subclass of Berry-Curien's sequential algorithms."
],
"cite_N": [
"@cite_25",
"@cite_17"
],
"mid": [
"2145617779",
"1547036180"
]
} | Playful, streamlike computation | A variable x is a term; if M and N are terms, then the application MN is a term; and if x is a variable and M is a term, then the abstraction λx.M is a term. Usual abbreviations are λx₁x₂.M for λx₁.(λx₂.M), and MN₁N₂ for (MN₁)N₂, and similarly for n-ary abstraction and application. A more macroscopic view is quite useful: it is easy to check that any λ-term has exactly one of the following two forms:
λx₁ ⋯ xₙ.xM₁ ⋯ Mₚ   (n ≥ 0, p ≥ 0)
λx₁ ⋯ xₙ.(λx.M)M₁ ⋯ Mₚ   (n ≥ 0, p ≥ 1)
The first form is called a head normal form (hnf), while the second exhibits the head redex (λx.M)M₁. The following easy property justifies the name of head normal form: any reduction sequence starting from a hnf λx₁ ⋯ xₙ.xM₁ ⋯ Mₚ consists of an interleaving of independent reductions of M₁, …, Mₚ. More precisely, we have:
(λx₁ ⋯ xₙ.xM₁ ⋯ Mₚ →* P) ⇒ ∃N₁, …, Nₚ. P = λx₁ ⋯ xₙ.xN₁ ⋯ Nₚ and ∀i ≤ p, Mᵢ →* Nᵢ.
Here, reduction means the replacement in any term of a sub-expression of the form (λx.M)N, called a β-redex, by M[x ← N]. A normal form is a term that contains no β-redex, or equivalently that contains no head redex. Hence the syntax of normal forms is given by the following two constructions: a variable x is a normal form, and if M₁, …, Mₚ are normal forms, then λx₁ ⋯ xₙ.xM₁ ⋯ Mₚ is a normal form. Now, we are ready to play. Consider the following two normal forms:
M = zM₁M₂(λz₁z₂.z₁M₃M₄)   N = λx₁x₂x₃.x₃(λy₁y₂.y₁N₁)N₂
The term M[z ← N] = NM₁M₂(λz₁z₂.z₁M₃M₄) is not a normal form anymore, and can be β-reduced as follows:
NM₁M₂(λz₁z₂.z₁M₃M₄) →* (λz₁z₂.z₁M₃M₄)(λy₁y₂.y₁N′₁)N′₂ (where N′₁ and N′₂ denote the results of substituting the Mᵢ's in N₁ and N₂)
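This head-reduction discipline can be made concrete. Here is a minimal Haskell sketch (not from the paper; Term, headStep and hnf are illustrative names), which assumes the Barendregt convention so that naive substitution is capture-avoiding:

```haskell
-- A minimal sketch of head reduction. We assume the Barendregt convention:
-- all bound variables are pairwise distinct and distinct from the free
-- ones, so that naive substitution is capture-avoiding.
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- subst m x n computes M[x <- N], correct only under the convention above.
subst :: Term -> String -> Term -> Term
subst (Var y)   x n = if y == x then n else Var y
subst (Lam y b) x n = Lam y (subst b x n)          -- y /= x by convention
subst (App f a) x n = App (subst f x n) (subst a x n)

-- One step of head reduction: contract the head redex, if there is one.
headStep :: Term -> Maybe Term
headStep (Lam x m)         = Lam x <$> headStep m
headStep (App (Lam x m) n) = Just (subst m x n)
headStep (App f a)         = (`App` a) <$> headStep f
headStep (Var _)           = Nothing

-- Iterated head reduction to head normal form (may diverge, of course).
hnf :: Term -> Term
hnf t = maybe t hnf (headStep t)
```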
Then we represent computation as the progression of two tokens in the two trees. Initially, the tokens are at the root (we use underlining to indicate the location of the tokens):
[Figure: the terms M and N drawn as trees, with M₃ of the form λu.tM₅M₆; the tokens (underlined) are initially at the two roots z and λx₁x₂x₃.]
Note here that N cannot help to choose the next move in M. The machinery stops here. After all, most functional programming languages stop evaluation on (weak) head normal forms. But what about getting the full normal form, i.e., computing M″₅ and M″₆? The interactive answer to this question is: by exploration of branches, on demand, or in a streamlike manner. The machine displays t as the head variable of the normal form of M[z ← N]. Now, you, the Opponent, can choose which of the branches below t to explore, and then the machine will restart until it reaches a head normal form. For example, if you choose the first branch, then you will eventually reach the head variable of M″₅. This is called streamlike, because that sort of mechanism was first analysed for infinite lists built progressively. A λ-term too has a "potentially infinite normal form": its Böhm tree.
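As an illustration of this on-demand regime, here is a small Haskell sketch (all names are mine): a lazily produced, potentially infinite tree of head variables, explored only as far as the consumer requests.

```haskell
-- Streamlike exploration of a potentially infinite normal form: a
-- Boehm-like tree of head variables, produced lazily.
data Tree = Node String [Tree]

-- A term whose normal form is infinite, e.g. the Boehm tree of a fixpoint
-- combinator applied to a free variable f: f (f (f ...)).
fixTree :: Tree
fixTree = Node "f" [fixTree]

-- The consumer chooses how deep to look; nothing is computed beyond that.
explore :: Int -> Tree -> String
explore 0 _           = "..."
explore n (Node x ts) = x ++ "(" ++ concatMap (explore (n - 1)) ts ++ ")"

main :: IO ()
main = putStrLn (explore 3 fixTree)   -- prints f(f(f(...)))
```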
This prologue served the purpose of introducing some keywords, such as interactivity, playful interpretation, streamlike computation. We now start from the beginning.
Introduction
Scott's and Plotkin's denotational semantics takes its roots in recursion theory. It is worth recalling here the statement of Rice's theorem. This theorem asserts a property of recursively enumerable (r.e.) sets of partial recursive (p.r.) functions, defined through a fixed enumeration (φₙ) of the p.r. functions (i.e., n ↦ φₙ is a map from ω, the set of natural numbers, onto the p.r. functions, a subset of ω ⇀ ω, where we use ⇀ for partial functions). Let PR ⊆ ω ⇀ ω denote the set of p.r. functions. A subset A ⊆ PR is called r.e. if {n | φₙ ∈ A} is r.e. in the usual sense. The theorem asserts that if A is r.e. and if f ∈ A, then there exists a finite approximation g of f such that g ∈ A. That g is an approximation of f means that f is an extension of g, i.e., the domain on which the partial function f is defined, or domain of definition of f, contains that of g, and f and g coincide where they are both defined. A simpler way of saying this is that the graph of g is contained in the graph of f. Moreover, the domain of definition of g is finite. Rice's theorem is about an intrinsic continuity property in the realm of p.r. functions. It highlights the (complete) partial order structure of ω ⇀ ω, and in particular the presence of a bottom element ⊥ in this partial order: the everywhere undefined function.
Certainly, one of the key departure points taken by Scott was to take ⊥ seriously. Once this element is part of the picture, one takes a new look at some basic functions. Take the booleans, for example. In Scott's semantics, this is not the set {tt, ff }, but the set {⊥, tt, ff } ordered as follows: x ≤ y if and only if x = y or x = ⊥ (this is called flat ordering). Take now the good old disjunction function or : Bool×Bool → Bool. It gives rise to four different functions over the flat domain version of Bool (the specifications below can be completed to full definitions by monotonicity):
por(⊥, tt) = tt   por(tt, ⊥) = tt   por(⊥, ff) = ⊥   por(ff, ⊥) = ⊥   por(ff, ff) = ff
lor(⊥, y) = ⊥   lor(tt, ⊥) = tt   lor(ff, y) = y
ror(⊥, tt) = tt   ror(x, ⊥) = ⊥   ror(x, ff) = x
sor(⊥, tt) = ⊥   sor(tt, ⊥) = ⊥   sor(⊥, ff) = ⊥   sor(ff, ⊥) = ⊥   sor(ff, tt) = tt   sor(tt, ff) = tt   sor(tt, tt) = tt   sor(ff, ff) = ff
It should be clear that lor and ror are computed by programs of the following shape, respectively:
λxy. if x = tt then tt else if y = ⋯
λxy. if y = tt then tt else if x = ⋯
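These two shapes transcribe directly into a lazy language such as Haskell, where divergence plays the role of ⊥ (a sketch; bottom is my name for the undefined value, and por's non-definability is discussed just below):

```haskell
-- lor and ror transcribed into a lazy language.
lor, ror :: Bool -> Bool -> Bool
lor x y = if x then True else y   -- examines x first
ror x y = if y then True else x   -- examines y first

bottom :: Bool
bottom = bottom                   -- the everywhere-undefined value ⊥

-- lor True bottom  = True        (lor(tt, ⊥) = tt)
-- lor bottom True  diverges      (lor(⊥, y) = ⊥)
-- ror bottom True  = True        (ror(⊥, tt) = tt)
-- No such program computes por, as discussed below: one of the two
-- arguments has to be forced first.
```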
On the other hand, it should be intuitively clear that no sequential program of the same sort can compute por, because a sequential program will
• either start by examining one of the arguments, say x, in which case it can't output anything before a value for x is given, thus missing the specification por(⊥, tt) = tt,
• or output some value right away, say tt (λxy.tt), thus missing the specification por(⊥, ⊥) = ⊥.
For a formal proof that por is not sequentially definable, we refer to [33] (syntactic proof), to [22][section 6.1] (model-theoretic proof), and to [5][section 4.5] (via logical relations). As for sor, the story is different again: there are two natural sequential programs for it:
λxy. if x = tt then if y = ⋯ else if y = ⋯
λxy. if y = tt then if x = ⋯ else if x = ⋯
The starting point of the model of sequential algorithms (next section) was to interpret these two programs as different objects lsor and rsor. Notice finally that there are many more sequential programs computing lor, ror, or sor. Another program for lor might e.g. look like
λxy. if x = tt then tt else if x = tt then tt else if y = · · ·
Such a "stuttering" program is perfectly correct syntactically. Whether this program is interpreted in the model by an object different from the above program for lor is the departure point between the model of sequential algorithm on one hand and the more recent games semantics on the other hand. We shall come back to this point in the next section.
Before we close the section, let us give some rationale for the names used in this section. As the reader might have guessed, the prefixes p, l, r, s, ls, and rs stand for "parallel", "left", "right", "strict", "left strict", and "right strict", respectively. The material of the next section is borrowed from [14], except for what regards the coincidence between the two definitions of composition, for which the proof from [14][section 3.6] can easily be adapted.
Definition 3.1 A sequential data structure S = (C, V, P) is given by two sets C and V of cells and values, which are assumed disjoint, and by a collection P of non-empty words p of the form:
c₁v₁ ⋯ cₙvₙ   or   c₁v₁ ⋯ cₙ₋₁vₙ₋₁cₙ,
where cᵢ ∈ C and vᵢ ∈ V for all i. Thus any p ∈ P is alternating and starts with a cell. Moreover, it is assumed that P is closed under non-empty prefixes. We call the elements of P positions of S. We call move any element of M = C ∪ V. We use m to denote a move. A position ending with a value is called a response, and a position ending with a cell is called a query. We use p (or s, or t), q, and r to range over positions, queries, and responses, respectively. We denote by Q and R the sets of queries and responses, respectively.
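For concreteness, here is one possible Haskell rendering of these notions (a sketch; the type and function names are mine):

```haskell
-- One possible Haskell rendering of Definition 3.1.
data Move c v = Cell c | Val v deriving (Eq, Show)
type Position c v = [Move c v]    -- intended: alternating, cell first

-- The alternation discipline: cells and values alternate, cell first.
alternating :: Position c v -> Bool
alternating = ok True
  where
    ok _     []            = True
    ok True  (Cell _ : ms) = ok False ms   -- a cell was expected
    ok False (Val _  : ms) = ok True ms    -- a value was expected
    ok _     _             = False

-- Responses end with a value (even length), queries with a cell (odd).
isResponse, isQuery :: Position c v -> Bool
isResponse p = not (null p) && alternating p && even (length p)
isQuery    p = not (null p) && alternating p && odd  (length p)
```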
Let us pause here for some comments and perspective. An important step in the semantic account of sequential computing was taken by Berry, who developed the stable model, in which the function por is excluded. Winskel described this model more concretely in terms of event structures, and Girard proposed a simpler form called coherence spaces, which led him to the discovery of linear logic [19] (see also [5][chapters 12 and 13]). In event structures or coherence spaces, data are constructed out of elementary pieces, called events, or tokens. For example, the pair of booleans (tt, ff) is obtained as the set of two elementary pieces: (tt, ⊥) and (⊥, ff). More precisely and technically, the structure Bool × Bool as a coherence space has four events: tt.1, ff.1, tt.2, and ff.2. Then (tt, ff) is the set {tt.1, ff.2}.
In a sequential data structure (or in a concrete data structure, not defined here), events are further cut in two "halves": a cell and a value, or an opponent's move and a player's move. The structure Bool × Bool as an sds has two cells ?.1 and ?.2, and four values tt.1, ff.1, tt.2, and ff.2. An event, say tt.1, is now decomposed as a position (?.1)(tt.1). The best way to understand this is to think of a streamlike computation. Your pair of booleans is the output of some program, which will only work on demand. The cell ?.1 reads as "I (another program, or an observer) want to know the left coordinate of the result of the program", and tt.1 is the answer to this query.
An important remark, which will be further exploited in section 5, is that this decomposition of events gives additional space: there is no counterpart in the world of coherence spaces, or in any other usual category of domains, of a structure with one cell and no value.

Definition 3.2 A strategy of S is a subset x of R that is closed under response prefixes and under binary non-empty greatest lower bounds (glb's):
r₁, r₂ ∈ x, r₁ ∧ r₂ ≠ ǫ ⇒ r₁ ∧ r₂ ∈ x
where ǫ denotes the empty word. A counter-strategy is a non-empty subset of Q that is closed under query prefixes and under binary glb's. We use x, y, . . . and α, β, . . . to range over strategies and counter-strategies, respectively. If x is a strategy and if r ∈ x, q = rc for some c and if there is no v such that qv ∈ x, we write q ∈ A(x) (and say that q is accessible from x). Likewise we define r ∈ A(α) for a response r and a counter-strategy α.
Both sets of strategies and of counter-strategies are ordered by inclusion. They are denoted by D(S) and D⊥(S), respectively. We write K(D(S)) and K(D⊥(S)) for the sets of finite strategies and counter-strategies, respectively. Notice that D(S) always has a minimum element (the empty strategy, written ∅ or ⊥), while D⊥(S) has no minimum element in general.
A more geometric reading of the definitions of sds, strategy and counter-strategy is the following. An sds is a labelled forest, where the ancestor relation alternates cells and values, and where the roots are labelled by cells. A strategy is a sub-forest which is allowed to branch only at values. A counter-strategy α is a non-empty subtree which is allowed to branch only at cells.
Let us see what collections of positions form and do not form a strategy in Bool × Bool. The set {(?.1)(tt.1), (?.2)(ff.2)} (representing (tt, ff)) is a strategy, while {(?.1)(tt.1), (?.1)(ff.1)} is not a strategy. A way to understand this is to say that the cell ?.1 can hold only one value, which is the answer to the question. A strategy consists in having ready determinate answers for the moves of the opponent. If strategies are data, what are counter-strategies? They can be considered as exploration trees, see below.
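Reusing the Position type from the sketch above, this determinacy can be checked on a finite, prefix-closed set of responses (a sketch):

```haskell
-- For a finite, prefix-closed set of responses, the glb condition of
-- Definition 3.2 amounts to branching only at values, i.e. no query is
-- answered by two different values.
branchesOnlyAtValues :: (Eq c, Eq v) => [Position c v] -> Bool
branchesOnlyAtValues rs =
  and [ last r1 == last r2            -- same query prefix => same value
      | r1 <- rs, r2 <- rs
      , length r1 == length r2
      , init r1 == init r2 ]
```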
The pairs cell-value, query-response, and strategy-counter-strategy give to sds's a flavour of symmetry. These pairs are related to other important dualities in programming: input-output, constructor-destructor (see [17]). It is thus tempting to consider the counter-strategies of an sds S as the strategies of a dual structure S⊥ whose cells are the values of S and whose values are the cells of S. However, the structure obtained in this way is not an sds anymore, since positions now start with a value. This situation, first analysed by Lamarche [28], is now well-understood since the thesis work of Laurent [29]. We come back to this below.
The following definition resembles quite closely the dynamics described in section 1.
Definition 3.3 (play)
Let S be an sds, let x be a strategy and α be a counter-strategy of S, one of which is finite. We define x ∗ α, called a play, as the set of positions p which are such that all the response prefixes of p are in x and all the query prefixes of p are in α.
Proposition 3.4 Given x and α as in definition 3.3, the play x ∗ α is non-empty and totally ordered, and can be confused with its maximum element, which is uniquely characterized as follows:
x ∗ α is the unique element of x ∩ A(α) if x ∗ α is a response,
x ∗ α is the unique element of α ∩ A(x) if x ∗ α is a query.
Definition 3.5 (winning) Let x and α be as in definition 3.3. If x ∗ α is a response, we say that x wins against α, and we denote this predicate by x⊳α. If x ∗ α is a query, we say that α wins against x, and we write x⊲α; thus ⊲ is the negation of ⊳. To stress who is the winner, we write:
x ∗ α = x ⊳ α when x wins, and x ∗ α = x ⊲ α when α wins.
The position x ∗ α formalizes the interplay between the player with strategy x and the opponent with strategy α. If x ∗ α is a response, then the player wins since he made the last move, and if x ∗ α is a query, then the opponent wins. Here is a game theoretical reading of x ∗ α. At the beginning the opponent makes a move c: his strategy determines that move uniquely. Then either the player is unable to move (x contains no position of the form cv), or his strategy determines a unique move. The play goes on until one of x or α does not have the provision to answer its opponent's move (cf. section 1).
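The play of x against α can be sketched in Haskell as follows, with the strategy and counter-strategy represented as next-move functions on positions (an assumption of this sketch, justified by the determinacy just described):

```haskell
-- A sketch of the play x * alpha. Terminates when one side is stuck,
-- provided one of the two strategies is finite.
play :: (Position c v -> Maybe c)   -- alpha: which cell to open next
     -> (Position c v -> Maybe v)   -- x: which value answers the last cell
     -> Position c v                -- the maximal position of x * alpha
play alpha x = opp []
  where
    opp p = case alpha p of
      Nothing -> p                       -- alpha is stuck: x wins (x ⊳ alpha)
      Just c  -> pla (p ++ [Cell c])
    pla p = case x p of
      Nothing -> p                       -- x is stuck: alpha wins (x ⊲ alpha)
      Just v  -> opp (p ++ [Val v])
```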
We next define the morphisms between sds's. There are two definitions, a concrete one and a more abstract one. The concrete one is needed since we want the morphisms to form in turn an sds in order to get a cartesian closed category (actually a monoidal closed one, to start with). Accordingly, there will be two definitions of the composition of morphisms. Their equivalence is just what full abstraction -that is, the coincidence of operational and denotational semantics -boils down to, once we have tailored the model to the syntax (programs as morphisms) and tailored the syntax to the semantics (like in the language CDS [7]). We start with the concrete way.
Definition 3.6 Given an alphabet A and a subset B ⊆ A, for any word w ∈ A*, we define w⌈B, the restriction of w to B, as follows:
ǫ⌈B = ǫ
wm⌈B = (w⌈B)m if m ∈ B
wm⌈B = w⌈B if m ∈ A\B.
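In code, this restriction operator is just a filter; a one-line Haskell sketch, with the sub-alphabet B given as a predicate:

```haskell
-- The restriction operator of Definition 3.6.
restrict :: (a -> Bool) -> [a] -> [a]
restrict inB = filter inB   -- covers ǫ and both inductive clauses at once
```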
Definition 3.7 Given two sds's S = (C, V, P) and S′ = (C′, V′, P′), we define S ⊸ S′ = (C″, V″, P″) as follows. The sets C″ and V″ are disjoint unions:
C″ = {request c′ | c′ ∈ C′} ∪ {is v | v ∈ V}
V″ = {output v′ | v′ ∈ V′} ∪ {valof c | c ∈ C}.
P″ consists of the alternating positions s starting with a request c′, and which are such that:
s⌈S′ ∈ P′, (s⌈S = ǫ or s⌈S ∈ P),
and s has no prefix of the form s₀(valof c)(request c′).
We often omit the tags request, valof, is, output, as we have just done in the notation s⌈S = s⌈C∪V (and similarly for s⌈S′). We call affine sequential algorithms (or affine algorithms) from S to S′ the strategies of S ⊸ S′.
The constraint 'no s₀cc′' can be formulated more informally as follows. Thinking of valof c as a call to a subroutine, the principal routine cannot proceed further until it receives a result v from the subroutine.
The identity affine algorithm id ∈ D(S ⊸ S) is defined as follows:
id = {copycat(r) | r is a response of S},
where copycat is defined as follows:
copycat(ǫ) = ǫ
copycat(rc) = copycat(r)(request c)(valof c)
copycat(qv) = copycat(q)(is v)(output v).
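On the move lists of the earlier sketches, copycat is the following duplication of moves (a sketch; AMove is my name for the moves of S ⊸ S′ from Definition 3.7):

```haskell
-- The moves of S -o S' (Definition 3.7).
data AMove c v c' v' = Request c' | Is v | Valof c | Output v'
  deriving Show

-- Each move of S is duplicated into the matching pair of moves of S -o S.
copycat :: Position c v -> [AMove c v c v]
copycat = concatMap dup
  where
    dup (Cell c) = [Request c, Valof c]   -- the copycat(rc) clause
    dup (Val v)  = [Is v, Output v]       -- the copycat(qv) clause
```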
The word copycat used in the description of the identity algorithm was proposed in [1], and corresponds to a game theoretical understanding: the player always repeats the last move of the opponent. In some influential talks, Lafont had taken images from chess (Karpov versus Kasparov) to explain the same thing.
Example 3.8 (1) The boolean negation function is computed by the following affine algorithm from Bool to Bool:
{(request ?)(valof ?), (request ?)(valof ?)(is tt)(output ff), (request ?)(valof ?)(is ff)(output tt)}.
(2) On the other hand, the left disjunction function cannot be computed by an affine algorithm. Indeed, transcribing the program for lor as a strategy leads to:
{(request ?)(valof ?.1), (request ?)(valof ?.1)(is tt)(output tt), (request ?)(valof ?.1)(is ff)(valof ?.2), (request ?)(valof ?.1)(is ff)(valof ?.2)(is tt)(output tt), (request ?)(valof ?.1)(is ff)(valof ?.2)(is ff)(output ff)},
which is not a subset of the set of positions of Bool² ⊸ Bool, because the projections on Bool² of the last two sequences of moves are not positions of Bool². But the program does transcribe into a (non-affine) sequential algorithm, as we shall see.
(3) Every constant function gives rise to an affine algorithm, whose responses have the form (request c′₁)(output v′₁) ⋯ (request c′ₙ)(output v′ₙ).
The second and third examples above thus justify the terminology affine (in the affine framework, in contrast to the linear one, weakening is allowed). The second example suggests the difference between affine and general sequential algorithms. Both kinds of algorithms ask successive queries to their input, and continue to proceed only after they get responses to these queries. An affine algorithm is moreover required to ask these queries monotonically: each new query must be an extension of the previous one. The 'unit' of resource consumption is thus a sequence of queries/responses that can be arbitrarily large, as long as it builds a position of the input sds. The disjunction algorithms are not affine, because they may have to ask successively the queries ?.1 and ?.2, which are not related by the prefix ordering.
A generic affine algorithm, as represented in figure 1, can be viewed as a 'combination' of the following (generic) output strategy and input counter-strategy (or exploration tree):

[Figure 1: a generic affine algorithm as a tree of moves request c′, valof c, is v₁, …, is vₙ, valof d, …, is w, output v′, request c′₁, …, request c′ₘ, together with the input counter-strategy (the tree of cells c, d, … and values v₁, …, vₙ, w) and the output strategy (the tree of cell c′, value v′, and cells c′₁, …, c′ₘ) that it combines.]
We now give a definition of composition of affine algorithms by means of a simple abstract machine. Sequential algorithms are syntactic objects, and were indeed turned into a programming language called CDS [7]. What we present here is a simplified version of the operational semantics presented in [14][section 3.5], in the special case of affine algorithms. Given φ ∈ D(S ⊸ S′) and φ′ ∈ D(S′ ⊸ S″), the goal is to compute on demand the positions that belong to their composition φ″ in the sds S ⊸ S″. The abstract machine proceeds by rewriting triplets (s, s′, s″) where s, s′, s″ are positions of S ⊸ S′, S′ ⊸ S″, and S ⊸ S″, respectively. The rules are given in Figure 2 (where P″ designates the set of positions of S ⊸ S″, etc.):

(r, r′, r″) → (r, r′c″, r″c″)   (r″c″ ∈ P″)
(r, r′, r″) → (rv, r′, r″v)   (r″v ∈ P″)
(r, q′, q″) → (r, q′v″, q″v″)   (q′v″ ∈ φ′)
(r, q′, q″) → (rc′, q′c′, q″)   (q′c′ ∈ φ′)
(q, r′, q″) → (qv′, r′v′, q″)   (qv′ ∈ φ)
(q, r′, q″) → (qc, r′, q″c)   (qc ∈ φ)

The first two rules are left to the (streamlike) initiative of the observer. Each time one of these rules is activated, it launches the machine proper, which consists of the four remaining rules. The initial triplet is (ǫ, ǫ, ǫ). The observer wants to know the content of c″, or more precisely wants to know what the function does in order to compute the contents of c″ in the output. Thus, he chooses to perform the following rewriting:

(ǫ, ǫ, ǫ) → (ǫ, ǫ, c″)
The request is transmitted to φ ′ :
(ǫ, ǫ, c″) → (ǫ, c″, c″)
There are two cases here. Either φ′ does not consult its input and immediately produces a value for c″, in which case this value is transmitted as the final result of the observer's query:
(ǫ, c″, c″) → (ǫ, c″v″, c″v″)   (c″v″ ∈ φ′)
Or φ ′ needs to consult its input (like the various sequential or functions), and then an interaction loop (in the terminology of Abramsky and Jagadeesan [2]) is initiated:
(ǫ, c″, c″) → (c′₁, c″c′₁, c″)   (c″c′₁ ∈ φ′)
→ (c′₁v′₁, c″c′₁v′₁, c″)   (c′₁v′₁ ∈ φ)
→ (c′₁v′₁c′₂, c″c′₁v′₁c′₂, c″)   (c″c′₁v′₁c′₂ ∈ φ′)
⋮
This dialogue between φ and φ ′ may terminate in two ways. Either at some stage φ ′ has received enough information from φ to produce a value v ′′ for c ′′ , i.e.
c″c′₁v′₁ ⋯ c′ₙv′ₙv″ ∈ φ′:
(c′₁v′₁ ⋯ c′ₙv′ₙ, c″c′₁v′₁ ⋯ c′ₙv′ₙ, c″) → (c′₁v′₁ ⋯ c′ₙv′ₙ, c″c′₁v′₁ ⋯ c′ₙv′ₙv″, c″v″)
or φ itself says that it needs to consult its input, i.e., c′₁v′₁ ⋯ c′ₙc ∈ φ: this information is passed as a final (with respect to the query c″) result to the observer, who then knows that φ″ needs to know the content of c.
(c′₁v′₁ ⋯ c′ₙ, c″c′₁v′₁ ⋯ c′ₙ, c″) → (c′₁v′₁ ⋯ c′ₙc, c″c′₁v′₁ ⋯ c′ₙ, c″c)
It is then the observer's freedom to explore further the semantics of φ″ by issuing a new query (provided it is in P″):
(c′₁v′₁ ⋯ c′ₙv′ₙ, c″c′₁v′₁ ⋯ c′ₙv′ₙv″, c″v″) → (c′₁v′₁ ⋯ c′ₙv′ₙ, c″c′₁v′₁ ⋯ c′ₙv′ₙv″, c″v″c″₁)
or
(c′₁v′₁ ⋯ c′ₙc, c″c′₁v′₁ ⋯ c′ₙ, c″c) → (c′₁v′₁ ⋯ c′ₙcv, c″c′₁v′₁ ⋯ c′ₙ, c″cv)
The query c″cv reads as: "knowing that φ″ needs c, how does it behave next when I feed v to c?". After this, the computation starts again using the four deterministic rules along the same general pattern. Notice how φ and φ′ take in turn the leadership in the interaction loop (cf. section 1).
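The interaction loop can be sketched in Haskell, with the two affine algorithms given as next-move functions on their own histories (an assumption of this sketch; the text manipulates sets of positions), using AMove from the copycat sketch. The function answers one observer request and stops either on an output value or on a valof, as in the text; re-launching the machine after an 'is v' is omitted:

```haskell
-- A sketch of one observer query against the composite of phi and phi'.
type Algo c v c' v' = [AMove c v c' v'] -> Maybe (AMove c v c' v')

query :: Algo c v c' v' -> Algo c' v' c'' v''
      -> c'' -> Maybe (Either c v'')   -- Left c: "phi'' needs cell c"
query phi phi' c'' = stepR [] [Request c'']
  where
    stepR s s' = case phi' s' of                 -- phi' is to move
      Just (Output v'') -> Just (Right v'')      -- final value for c''
      Just (Valof c')   -> stepL (s ++ [Request c']) (s' ++ [Valof c'])
      _                 -> Nothing
    stepL s s' = case phi s of                   -- phi is to move
      Just (Output v')  -> stepR (s ++ [Output v']) (s' ++ [Is v'])
      Just (Valof c)    -> Just (Left c)         -- passed to the observer
      _                 -> Nothing
```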
We now turn to the abstract definition of our morphisms. Definition 3.9 (stable function) A continuous function f : D(S) → D(S′) is called stable if, for all x ∈ D(S) and α′ ∈ K(D⊥(S′)) such that f(x)⊳α′, there exists a minimum y ≤ x such that f(y)⊳α′; this minimum is denoted by m(f, x, α′). One defines similarly a notion of stable function g : D⊥(S′) ⇀ D⊥(S), with notation m(g, α′, x).
Definition 3.10 (symmetric algorithm) Let S and S′ be two sds's. A symmetric algorithm from S to S′ is a pair
(f : D(S) → D(S′), g : D⊥(S′) ⇀ D⊥(S))
of a function and a partial function that are both continuous and satisfy the following axioms:
(L) (x ∈ D(S), α′ ∈ K(D⊥(S′)), f(x)⊳α′) ⇒ (x⊳g(α′) and m(f, x, α′) = x ⊳ g(α′))
(R) (α′ ∈ D⊥(S′), x ∈ K(D(S)), x⊲g(α′)) ⇒ (f(x)⊲α′ and m(g, α′, x) = f(x) ⊲ α′)
We set as a convention, for any x and any α′ such that g(α′) is undefined:
x⊳g(α′) and x ⊳ g(α′) = ∅.
Hence the conclusion of (L) is simply m(f, x, α′) = ∅ when g(α′) is undefined. In contrast, when we write x⊲g(α′) in (R), we assume that g(α′) is defined.
Thus, g provides the witnesses of stability of f , and conversely. Moreover, the above definition is powerful enough to imply other key properties of f and g.
Definition 3.11 A (continuous) function f : D(S) → D(S′) is called sequential if, for any pair (x, α′) ∈ K(D(S)) × K(D⊥(S′)) such that f(x)⊲α′ and f(z)⊳α′ for some z ≥ x, there exists α ∈ K(D⊥(S)), called a sequentiality index of f at (x, α′), such that x⊲α and, for any y ≥ x, f(y)⊳α′ implies y⊳α.
A symmetric algorithm (f, g) then enjoys the following two properties:
(LS) If x ∈ D(S), α′ ∈ K(D⊥(S′)), f(x)⊲α′, and f(y)⊳α′ for some y > x, then x⊲g(α′), and x ⊲ g(α′) is a sequentiality index of f at (x, α′).
(RS) If α′ ∈ D⊥(S′), x ∈ K(D(S)), x⊳g(α′), and x⊲g(β′) for some β′ > α′, then f(x)⊳α′, and f(x) ⊳ α′ is a sequentiality index of g at (α′, x).
Hence f and g are sequential, and g provides the witnesses of sequentiality for f, and conversely.
We turn to the composition of symmetric algorithms.
Definition 3.13 Let S, S′ and S″ be sds's, and let (f, g) and (f′, g′) be symmetric algorithms from S to S′ and from S′ to S″. We define their composition (f″, g″) from S to S″ as follows:
f″ = f′ ∘ f and g″ = g ∘ g′.
The announced full abstraction theorem is the following.
Theorem 3.14 The sets of affine algorithms and of symmetric algorithms are in a bijective correspondence (actually, an isomorphism), and the two definitions of composition coincide up to the correspondence.
We just briefly indicate how to pass from one point of view to the other. Given φ ∈ D(S ⊸ S′), we define a pair (f, g) of a function and a partial function as follows:
f(x) = {r′ | r′ = s⌈S′ and s⌈S ∈ x, for some s ∈ φ}
g(α′) = {q | q = s⌈S and s⌈S′ ∈ α′, for some s ∈ φ}.
(By convention, if the right-hand side of the definition of g is empty for some α′, we interpret this definitional equality as saying that g(α′) is undefined.)
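Reusing Position, isResponse, isQuery and AMove from the sketches above, this pair (f, g) can be read off a finite φ given as a list of positions (a sketch; fOf and gOf are hypothetical names, and the response/query guards are made explicit here):

```haskell
-- The projections s|S and s|S' of a position of S -o S'.
projS :: [AMove c v c' v'] -> Position c v
projS s = [ m | Just m <- map keep s ]
  where keep (Valof c) = Just (Cell c)
        keep (Is v)    = Just (Val v)
        keep _         = Nothing

projS' :: [AMove c v c' v'] -> Position c' v'
projS' s = [ m | Just m <- map keep s ]
  where keep (Request c') = Just (Cell c')
        keep (Output v')  = Just (Val v')
        keep _            = Nothing

-- f(x) and g(alpha'), with finite strategies given as position lists.
fOf :: (Eq c, Eq v) => [[AMove c v c' v']] -> [Position c v] -> [Position c' v']
fOf phi x = [ projS' s | s <- phi, isResponse (projS' s)
            , null (projS s) || projS s `elem` x ]

gOf :: (Eq c', Eq v') => [[AMove c v c' v']] -> [Position c' v'] -> [Position c v]
gOf phi alpha' = [ projS s | s <- phi, isQuery (projS s)
                , projS' s `elem` alpha' ]
```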
Conversely, given a symmetric algorithm (f, g) from S to S′, we construct an affine algorithm φ ∈ D(S ⊸ S′) by building the positions s of φ by induction on the length of s (a streamlike process!). For example, if s ∈ φ, if s⌈S and s⌈S′ are responses, and if q′ = (s⌈S′)c′ for some c′, then:
sc′c ∈ φ if (s⌈S)c ∈ g(q′)
sc′v′ ∈ φ if q′v′ ∈ f(s⌈S).
But, as remarked above, we do not get all sequential functions in this way. Recall that in linear logic the usual implication A ⇒ B is decomposed as (!A) ⊸ B (the connective !, and its de Morgan dual ?, are called exponentials in linear logic). The sds !S has the queries of S as cells and the responses of S as values, and its set P! of positions is defined inductively as follows:

ρq ∈ P! if q ∈ A(strategy(ρ))
ρq(qv) ∈ P! if ρq ∈ P! and strategy(ρq(qv)) ∈ D(S)

where strategy is the following function, mapping the responses (and the empty word ǫ) of P! to strategies of S:

strategy(ǫ) = ∅
strategy(ρq(qv)) = strategy(ρ) ∪ {qv}.
Sequential algorithms between two sds's S and S′ are by definition affine algorithms between !S and S′.
It is easily checked that the programs for lor (cf. example 3.8), ror, lsor, and rsor transcribe as sequential algorithms from Bool × Bool to Bool.
Sequential algorithms also enjoy two direct definitions, a concrete one and an abstract one, and both an operational and a denotational definition of composition, for which full abstraction holds, see [14].
Let us end the section with a criticism of the terminology of symmetric algorithm. As already pointed out, the pairs (f, g) are not quite symmetric, since g, unlike f, is a partial function. Logically, S ⊸ S′ should read as S⊥ ⅋ S′. But something odd is going on: the connective ⅋ would have two arguments of a different polarity: in S′ it is Opponent who starts, while Player starts in S⊥. For this reason, Laurent proposed to decompose the affine arrow [29] (see also [8]):
S ⊸ S′ = (↓S)⊥ ⅋ S′
where ↓ is a change-of-polarity operator. For sds's, this operation is easy to define: add a new initial opponent move, call it ⋆, and prefix it to all the positions of S⊥. For example, ↓(Bool⊥) has ⋆ ? tt and ⋆ ? ff as (maximal) positions. According to Laurent's definition, the initial moves of S₁ ⅋ S₂ are pairs (c₁, c₂) of initial (Opponent's) moves of S₁ and S₂. Then the positions continue as interleavings of a position of S₁ and of a position of S₂. Notice that this is now completely symmetric in S₁ and S₂. Now, let us revisit the definition of S ⊸ S′. We said that the positions of this sds had to start with a c′, which is quite dissymmetric. But the ↓ construction allows us to restore equal status to the two components of the ⅋. A position in (↓S)⊥ ⅋ S′ must start with two moves played together in (↓S)⊥ and S′. It happens that these moves necessarily have the form (⋆, c′), which conveys the same information as c′ alone.
Control
We already pointed out that theorem 3.14 is a full abstraction result (for the affine case), and that the same theorem has been proved for all sequential algorithms with respect to the language CDS. Sequential algorithms inherently allow programs to consult the internal behaviour of their arguments and to make decisions according to that behaviour. For example, there exists a sequential algorithm of type (Bool² → Bool) → Bool that maps lsor to tt and rsor to ff (cf. end of section 2). Cartwright and Felleisen made the connection with more standard control operators explicit, and this led to the full abstraction result of sequential algorithms with respect to an extension of PCF with a control operator [13].
In this respect, we would like to highlight a key observation made by Laird: in the model, the type bool of booleans is isomorphic to o → o → o, where o is the sds with exactly one cell ? and no value (cf. the remark before Definition 3.2), and where the moves are labelled o₁ → o₂ → oǫ.
It is an instructive exercise to write down explicitly the inverse isomorphisms as sequential algorithms: in one direction, one has the if then else function; in the other direction, we have the control operation catch considered in [13], which tells apart the two strategies {?ǫ ?₁} and {?ǫ ?₂}. Here, we shall show (at type bool) how the control operator call-cc of Scheme or Standard ML is interpreted as a sequential algorithm of type ((bool → B) → bool) → bool. The formula ((A → B) → A) → A is called Peirce's law and is a typical tautology of classical logic. The connection between control operators and classical logic, and in particular the fact that call-cc corresponds to Peirce's law, was first discovered in [21]. Here is the sequential algorithm interpreting call-cc for A = bool:
[Tree of the algorithm: the initial query ?ǫ is answered by ?₁; the value tt₁ leads to the output ttǫ and ff₁ to ffǫ, while the query ?₁₁ is followed by ?₁₁₁, whose value tt₁₁₁ leads to ttǫ and ff₁₁₁ to ffǫ (with labelling of moves ((bool₁₁₁ → B₁₁) → bool₁) → boolǫ).]
[The same algorithm, with bool replaced by o → o → o: the query ?ǫ is answered by ?₁; ?₁₁ leads to ?₁₁₁, then ?₁₁₁₁ to ?₂ and ?₁₁₁₂ to ?₃, while ?₁₂ leads to ?₂ and ?₁₃ to ?₃ (with labelling (((o₁₁₁₁ → o₁₁₁₂ → o₁₁₁) → B₁₁) → o₁₂ → o₁₃ → o₁) → o₂ → o₃ → oǫ).]
The reader familiar with continuations may want to compare this tree with the continuation-passing style (CPS) interpretation λyk.y(λxk′.xk)k of call-cc, or in tree form (cf. section 1):
[The term λyk.y(λxk′.xk)k as a tree: root λyk.y, with branches λxk′.xk and k.]
where the first k indicates a copycat from o₁₁₁ to oǫ, while the second one indicates a copycat from o₁ to oǫ. The bound variable k′ amounts to the fact that B itself is of the form B′ → o (see below). This is an instance of the injection from terms to strategies mentioned in section 4 (in this simple example, Laird's HO style model coincides with that of sequential algorithms). CPS translations are the usual indirect way to interpret control operators: first translate, then interpret in your favorite cartesian closed category. In contrast, sequential algorithms look like a direct semantics. The example above suggests that this is an "illusion": once we explicitly replace bool by o → o → o, we find the indirect way underneath.
A more mathematical way to stress this is through Hofmann-Streicher's notion of continuation model [23]: given a category having all the function spaces A → R for some fixed object R, called the object of final results, one only retains the full subcategory of negative objects, that is, objects of the form A → R. In this category, control can be interpreted. (For the logically inclined reader, notice that, thinking of R as the formula "false", the double negation of A reads as (A → R) → R, and the classical tautology ((A → R) → R) → A is intuitionistically provable for all negative A = B → R.) Now, taking R = o, the above isomorphism exhibits bool as a negative object. But then all types are negative: given A and B = B′ → R, then A → B ≅ (A × B′) → R is also negative. Hence the model of sequential algorithms (and Laird's model of control) are indeed continuation models, but it is not written on their face.
A few more remarks
We would like to mention that this whole line of research on sequential interaction induced such side effects as the design of the Categorical Abstract Machine [11], that gave its name to the language CAML, and of a theory of Abstract Böhm Trees, alluded to in section 1.
As for future lines of research, imports from and into the program of ludics newly proposed by Girard [20] are expected. We just quote one connection with ludics. We insisted in section 2 that lsor and rsor were different programs for the same function. But there is a way to make them into two different functions, by means of additional error values, and accordingly of additional constants in the syntax. Actually, one error is enough, call it err. Indeed, we have: lsor(err, ⊥) = err and rsor(err, ⊥) = ⊥.
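This separation can be checked concretely; here is a minimal Haskell sketch (names are mine; divergence plays the role of ⊥ and Err that of the constant err), in accordance with the explanation that follows:

```haskell
-- A sketch of the one-error separation of lsor and rsor.
data B = Err | T | F deriving Show

lsor, rsor :: B -> B -> B
lsor x y = case x of                  -- looks at its left argument first
  Err -> Err
  T   -> case y of { Err -> Err; _ -> T }
  F   -> case y of { Err -> Err; T -> T; F -> F }
rsor x y = case y of                  -- looks at its right argument first
  Err -> Err
  T   -> case x of { Err -> Err; _ -> T }
  F   -> case x of { Err -> Err; T -> T; F -> F }

bot :: B
bot = bot                             -- the undefined value ⊥

-- lsor Err bot = Err, while rsor Err bot diverges: the two strict
-- disjunction programs now compute different functions.
```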
Because lsor looks at its left argument first, if an error is fed in that argument, it is propagated, whence the result err. Because rsor looks at its right argument first, if no value is fed for that argument, then the whole computation is waiting, whence the result ⊥. One could achieve the same more symmetrically with two different errors: lsor(err₁, err₂) = err₁ and rsor(err₁, err₂) = err₂. But the economy of having just one error is conceptually important, all the more because, in view of the isomorphism of section 5, we see that we can dispense (at least for bool, but also for any finite base type) with the basic values tt, ff, 0, 1, …. We arrive then at a picture with only two (base type) constants: ⊥ and err! This is the point of view adopted in Girard's ludics. In ludics, the counterpart of err is called the Daimon. The motivation for introducing the Daimon is quite parallel to that of having errors. Girard's program has the ambition of giving an interactive account of proofs. So, in order to explore a proof of a proposition A, one should play it against a "proof" of A⊥ (the negation of linear logic). But it can't be a proof, since not both A and A⊥ can be proved.
So, the space of "proofs" must be enlarged to allow for more opponents to interact with. Similarly, above, we motivated errors by the remark that, once introduced, they allow more observations to be made: here, they allowed us to separate lsor and rsor. More information, also of a survey kind, can be found in [17]. | 7,470 |
cs0501033 | 1589732927 | We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation — that is, computation of successive bits of information upon request. The core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like call-cc and proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and by Hyland and Ong on the other hand. | Two important models of functions that have been constructed since turned out to be the extensional collapse (i.e. the hereditary quotient equating sequential algorithms computing the same function, i.e. (in the affine case) two algorithms @math and @math such that @math ): Bucciarelli and Ehrhard's model of strongly stable functions @cite_3 @cite_16 , and Longley's model of sequentially realizable functionals @cite_10 . The first model arose from an algebraic characterization of sequential (first-order) functions, that carries over to all types. The second one is a realizability model over a combinatory algebra in which the interaction at work in sequential algorithms is encoded. | {
"abstract": [
"We prove that, in the hierarchy of simple types based on the type of natural numbers, any finite strongly stable function is equal to the application of the semantics of a PCF-definable functional to some strongly stable (generally not PCF-definable) functionals of type two. Applying a logical relation technique, we derive from this result that the strongly stable model of PCF is the extensional collapse of its sequential algorithms model.",
"In the previous chapter, we saw how the model PC offers a ‘maximal’ class of partial computable functionals strictly extending SF (in the sense of the poset ( J ( N _ ) ) of Subsection 3.6.4). In the present chapter, we show that SF can also be extended in a very different direction to yield another class SR of ‘computable’ functionals which is in some sense incompatible with PC. This class was first identified by Bucciarelli and Ehrhard [45] as the class of strongly stable functionals; later work by Ehrhard [69], van Oosten [294] and Longley [176] established the computational significance of these functionals, investigated their theory in some detail, and provided a range of alternative characterizations.",
"We present a cartesian closed category of dI-domains with coherence and strongly stable functions which provides a new model of PCF, where terms are interpreted by functions and where, at first order, all functions are sequential. We show how this model can be refined in such a way that the theory it induces on the terms of PCF be strictly finer than the theory induced by the Scott model of continuous functions."
],
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_3"
],
"mid": [
"2038883640",
"1985955425",
"1963626918"
]
} | Playful, streamlike computation | λx.M is a term. Usual abbreviations are λx 1 x 2 .M for λx 1 .(λx 2 .M), and MN 1 N 2 for (MN 1 )N 2 , and similarly for n-ary abstraction and application. A more macroscopic view is quite useful: it is easy to check that any λ-term has exactly one of the following two forms:
(n ≥ 1, p ≥ 1) λx 1 · · · x n .xM 1 · · · M p (n ≥ 0, p ≥ 1)
λx 1 · · · x n .(λx.M)M 1 · · · M p
The first form is called a head normal form (hnf), while the second exhibits the head redex (λx.M)M 1 . The following easy property justifies the name of head normal form: any reduction sequence starting from a hnf λx 1 · · · x n .xM 1 · · · M p consists of an interleaving of independent reductions of M 1 , . . . , M p . More precisely, we have:
(λx 1 · · · x n .xM 1 · · · M p → * P ) ⇒ ∃ N 1 , . . . N p P = λx 1 · · · x n .xN 1 · · · N p and ∀ i ≤ p M i → * N i .
Here, reduction means the replacement in any term of a sub-expression of the form (λx.M)N, called a β-redex, by M[x ← N]. A normal form is a term that contains no β-redex, or equivalently that contains no head redex. Hence the syntax of normal forms is given by the following two constructions: a variable x is a normal form, and if M 1 , . . . , M p are normal forms, then λx 1 · · · x n .xM 1 · · · M p is a normal form. Now, we are ready to play. Consider the following two normal forms:
M = zM 1 M 2 (λz 1 z 2 .z 1 M 3 M 4 ) N = λx 1 x 2 x 3 .x 3 (λy 1 y 2 .y 1 N 1 )N 2
The term M[z ← N] = NM 1 M 2 (λz 1 z 2 .z 1 M 3 M 4 ) is not a normal form anymore, and can be β-reduced as follows:
NM 1 M 2 (λz 1 z 2 .z 1 M 3 M 4 ) → (λz 1 z 2 .z 1 M 3 M 4 )(λy 1 y 2 .y 1 N ′ 1 )N ′
Then we represent computation as the progression of two tokens in the two trees. Initially, the tokens are at the root (we use underlining to indicate the location of the tokens):
z M 1 M 2 λz 1 z 2 . z 1 λu. t M 5 M 6 M 4 λx 1 x 2 x 3 . x 3 λy 1 y 2 . y 1 N 1 N 2
Note here that N cannot help to choose the next move in M. The machinery stops here. After all, most functional programming languages stop evaluation on (weak) head normal forms. But what about getting the full normal form, i.e., computing M ′′ 5 and M ′′ 6 ? The interactive answer to this question is: by exploration of branches, on demand, or in a streamlike manner. The machine displays t as the head variable of the normal form of M[z ← N]. Now, you, the Opponent, can choose which of the branches below t to explore, and then the machine will restart until it reaches a head normal form. For example, if you choose the first branch, then you will eventually reach the head variable of M ′′ 5 . This is called streamlike, because that sort of mechanism has been first analysed for infinite lists built progressively. A λ-term too has a "potentially infinite normal form": it's Böhm tree.
This prologue served the purpose of introducing some keywords, such as interactivity, playful interpretation, streamlike computation. We now start from the beginning.
Introduction
Scott's and Plotkin's denotational semantics takes its roots in recursion theory. It is worth recalling here the statement of Rice's theorem. This theorem asserts a property of recursively enumerable (r.e.) sets of partial recursive (p.r.) functions, defined through a fixed enumeration (φ n ) of the p.r. functions (i.e. φ is a surjection from ω -the set of natural numbers -to ω ⇀ ω, using ⇀ for sets of partial functions). Let P R ⊆ ω ⇀ ω denote the set of p.r. functions. A subset A ⊆ P R is called r.e. if {n | φ n ∈ A} is r.e. in the usual sense. The theorem asserts that if A is r.e. and if f ∈ A, then there exists a finite approximation g of f such that g ∈ A. That g is an approximation of f means that f is an extension of g, i.e., the domain on which the partial function f is defined, or domain of definition of f , contains that of g and f and g coincide where they are both defined. A simpler way of saying this is that the graph of g is contained in the graph of f . Moreover, the domain of definition of g is finite. Rice's theorem is about an intrinsic continuity property in the realm of p.r. functions. It highlights the (complete) partial order structure of ω ⇀ ω, and in particular the presence of a bottom element ⊥ in this partial order: the everywhere undefined function.
Certainly, one of the key departure points taken by Scott was to take ⊥ seriously. Once this element is part of the picture, one takes a new look at some basic functions. Take the booleans, for example. In Scott's semantics, this is not the set {tt, ff }, but the set {⊥, tt, ff } ordered as follows: x ≤ y if and only if x = y or x = ⊥ (this is called flat ordering). Take now the good old disjunction function or : Bool×Bool → Bool. It gives rise to four different functions over the flat domain version of Bool (the specifications below can be completed to full definitions by monotonicity):
por(⊥, tt) = tt por(tt, ⊥) = tt por(⊥, ff ) = ⊥ por(ff , ⊥) = ⊥ por(ff , ff ) = ff lor(⊥, y) = ⊥ lor(tt, ⊥) = tt lor(ff , y) = y ror(⊥, tt) = tt ror(x, ⊥) = ⊥ ror(x, ff ) = x sor(⊥, tt) = ⊥ sor(tt, ⊥) = ⊥ sor(⊥, ff ) = ⊥ sor(ff , ⊥) = ⊥ sor(ff , tt) = tt sor(tt, ff ) = tt sor(tt, tt) = tt sor(ff , ff ) = ff
It should be clear that lor and ror are computed by programs of the following shape, respectively:
λxy. if x = tt then tt else if y = · · · λxy. if y = tt then tt else if x = · · ·
On the other hand, it should be intuitively clear that no sequential program of the same sort can compute por, because a sequential program will • either start by examining one of the arguments, say x, in which case it can't output anything before a value for x is given, thus missing the specification por(⊥, tt) = tt,
• or output some value rightaway, say tt (λxy.tt), thus mising the specification por(⊥, ⊥) = ⊥.
For a formal proof that por is not sequentially definable, we refer to [33] (syntactic proof), to [22][section 6.1] (model-theoretic proof), and to [5][section 4.5] (via logical relations). As for sor, the story is yet different, there are two natural sequential programs for it:
λxy. if x = tt then if y = · · · else if y = · · · λxy. if y = tt then if x = · · · else if x = · · ·
The starting point of the model of sequential algorithms (next section) was to interpret these two programs as different objects lsor and rsor. Notice finally that there are many more sequential programs computing lor, ror, or sor. Another program for lor might e.g. look like
λxy. if x = tt then tt else if x = tt then tt else if y = · · ·
Such a "stuttering" program is perfectly correct syntactically. Whether this program is interpreted in the model by an object different from the above program for lor is the departure point between the model of sequential algorithm on one hand and the more recent games semantics on the other hand. We shall come back to this point in the next section.
Before we close the section, let us give some rationale for the names used in this section. As the reader might have guessed, the prefixes p, l, r, s, ls, rs stand for "parallel", "left", "right", "left strict", and "right strict", respectively. except for what regards the coincidence between the two definitions of composition, for which the proof from [14][section 3.6] can easily be adapted.
Definition 3.1 A sequential data structure S = (C, V, P ) is given by two sets C and V of cells and values, which are assumed disjoint, and by a collection P of non-empty words p of the form:
c 1 v 1 · · · c n v n or c 1 v 1 · · · c n−1 v n−1 c n ,
where c i ∈ C and v i ∈ V for all i. Thus any p ∈ P is alternating and starts with a cell. Moreover, it is assumed that P is closed under non-empty prefixes. We call the elements of P positions of S. We call move any element of M = C ∪ V . We use m to denote a move. A position ending with a value is called a response, and a position ending with a cell is called a query. We use p (or s, or t), q, and r, to range over positions, queries, and responses, respectively. We denote by Q and R the sets of queries and responses, respectively.
Let us pause here for some comments and perspective. An important step in the semantic account of sequential computing was taken by Berry, who developed the stable model in which the function por is excluded. Winskel described this model more concretely in terms of event structures, and Girard proposed a simpler form called coherence spaces, that led him to the discovery of linear logic [19] (see also [5][chapters 12 and 13]). In event structures or coherence spaces, data are constructed out of elementary pieces, called events, or tokens. For example, the pair of booleans (tt, ff ) is obtained as the set of two elementary pieces: (tt, ⊥) and (⊥, ff ). More precisely and technically, the structure Bool × Bool as a coherence space has four events: tt .1, ff .1, tt.2, and ff .2. Then (tt, ff ) is the set {tt.1, ff .2}.
In a sequential data structure (or in a concrete data structure, not defined here) events are further cut in two "halves": a cell and a value, or an opponent's move and a player's move. The structure Bool × Bool as an sds has two cells ?.1 and ?.2 and has four values tt.1, ff .1, tt.2, and ff .2. An event, say tt.1, is now decomposed as a position (?.1) (tt.1). The best way to understand this is to think of a streamlike computation. Your pair of booleans is the output of some program, which will only work on demand. The cell ?.1 reads as "I -another program, or an observer -want to know the left coordinate of the result of the program", and tt.1 is the answer to this query.
An important remark, which will be further exploited in section 5, is that this decomposition of events gives additional space: there is no counterpart in the world of coherence spaces or in any other usual category of domains of a structure with one cell and no value. Definition 3.2 A strategy of S is a subset x of R that is closed under response prefixes and binary non-empty greatest lower bounds (glb's):
r 1 , r 2 ∈ x, r 1 ∧ r 2 = ǫ ⇒ r 1 ∧ r 2 ∈ x
where ǫ denotes the empty word. A counter-strategy is a non-empty subset of Q that is closed under query prefixes and under binary glb's. We use x, y, . . . and α, β, . . . to range over strategies and counter-strategies, respectively. If x is a strategy and if r ∈ x, q = rc for some c and if there is no v such that qv ∈ x, we write q ∈ A(x) (and say that q is accessible from x). Likewise we define r ∈ A(α) for a response r and a counter-strategy α.
Both sets of strategies and of counter-strategies are ordered by inclusion. They are denoted by D(S) and D ⊥ (S), respectively. We write K(D(S)) and K(D ⊥ (S)) for the sets of finite strategies and counter-strategies, respectively. Notice that D(S) has always a minimum element (the empty strategy, written ∅ or ⊥), while D ⊥ (S) has no minimum element in general.
A more geometric reading of the definitions of sds, strategy and counter-strategy is the following. An sds is a labelled forest, where the ancestor relation alternates cells and values, and where the roots are labelled by cells. A strategy is a sub-forest which is allowed to branch only at values. A counter-strategy α is a non-empty subtree which is allowed to branch only at cells.
Let us see what collections of positions form and do not form a strategy in Bool × Bool. The set {(?.1) (tt.1) , (?.2) (ff .2}) (representing (tt, ff )) is a strategy, while {(?.1) (tt.1) , (?.1) (ff .1)} is not a strategy. A way to understand this is to say that the cell ?.1 can hold only one value, which is the answer to the question. A strategy consists in having ready determinate answers for the movements of the opponent. If strategies are data, what are counter-strategies? They can be considered as exploration trees, see below.
The pairs cell-value, query-response, and strategy-counter-strategy give to sds's a flavour of symmetry. These pairs are related to other important dualities in programming: input-output, constructor-destructor (see [17]). It is thus tempting to consider the counter-strategies of an sds S as the strategies of a dual structure S ⊥ whose cells are the values of S and whose values are the cells of S. However, the structure obtained in this way is not an sds anymore, since positions now start with a value. This situation, first analysed by Lamarche [28], is now well-understood since the thesis work of Laurent [29]. We come back to this below.
The following definition resembles quite closely to the dynamics described in section 1.
Definition 3.3 (play)
Let S be an sds, x be a strategy and α be a counter-strategy of S, one of which is finite. We define x α, called a play, as the set of positions p which are such that all the response prefixes of p are in x and all the query prefixes of p are in α.
Proposition 3.4 Given x and α as in definition 3.3, the play x α is non-empty and totally ordered, and can be confused with its maximum element, which is uniquely characterized as follows:
x α is the unique element of x ∩ A(α) if x α is a response x α is the unique element of α ∩ A(x) if x α is a query .
Definition 3.5 (winning) Let x and α be as in definition 3.3. If x α is a response, we say that x wins against α, and we denote this predicate by x⊳α. If x α is a query, we say that α wins against x, and we write x⊲α, thus ⊲ is the negation of ⊳. To stress who is the winner, we write:
x α = x ⊳ α when x wins x ⊲ α when α wins .
The position x α formalizes the interplay between the player with strategy x and the opponent with strategy α. If x α is a response, then the player wins since he made the last move, and if x α is a query, then the opponent wins. Here is a game theoretical reading of x α. At the beginning the opponent makes a move c: his strategy determines that move uniquely. Then either the player is unable to move (x contains no position of the form cv), or his strategy determines a unique move. The play goes on until one of x or α does not have the provision to answer its opponent's move (cf. section 1).
We next define the morphisms between sds's. There are two definitions, a concrete one and a more abstract one. The concrete one is needed since we want the morphisms to form in turn an sds in order to get a cartesian closed category (actually a monoidal closed one, to start with). Accordingly, there will be two definitions of the composition of morphisms. Their equivalence is just what full abstraction -that is, the coincidence of operational and denotational semantics -boils down to, once we have tailored the model to the syntax (programs as morphisms) and tailored the syntax to the semantics (like in the language CDS [7]). We start with the concrete way.
Definition 3.6 Given sets A, B ⊆ A, for any word w ∈ A * , we define w⌈ B as follows:
ǫ⌈ B = ǫ wm⌈ B = w⌈ B if m ∈ A\B (w⌈ B )m if m ∈ B .
Definition 3.7 Given two sds's S = (C, V, P ) and S ′ = (C ′ , V ′ , P ′ ), we define S ⊸ S ′ = (C ′′ , V ′′ , P ′′ ) as follows. The sets C ′′ and V ′′ are disjoint unions:
C ′′ = {request c ′ | c ′ ∈ C ′ } ∪ {is v | v ∈ V } V ′′ = {output v ′ | v ′ ∈ V ′ } ∪ {valof c | c ∈ C} .
P ′′ consists of the alternating positions s starting with a request c ′ , and which are such that:
s⌈ S ′ ∈ P ′ , (s⌈ S = ǫ or s⌈ S ∈ P )
, and s has no prefix of the form s(valof c)(request c ′ ).
We often omit the tags request, valof , is, output, as we have just done in the notation s⌈ S = s⌈ C∪V (and similarly for s⌈ S ′ ). We call affine sequential algorithms (or affine algorithms) from S to S ′ the strategies of S ⊸ S ′ .
The constraint 'no scc ′ ' can be formulated more informally as follows. Thinking of valof c as a call to a subroutine, the principal routine cannot proceed further until it receives a result v from the subroutine.
The identity affine algorithm id ∈ D(S ⊸ S ′ ) is defined as follows:
id = {copycat(r) | r is a response of S},
where copycat is defined as follows:
copycat(ǫ) = ǫ copycat(rc) = copycat(r)(request c)(valof c) copycat(qv) = copycat(q)(is v)(output v) .
The word copycat used in the description of the identity algorithm has been proposed in [1], and corresponds to a game theoretical understanding: the player always repeats the last move of the opponent. In some influential talks, Lafont had taken images from chess (Karpov -Kasparov) to explain the same thing.
{(request ?)(valof ?), (request ?)(valof ?)(is tt )(output ff ), (request ?)(valof ?)(is ff )(output tt)} .
(2) On the other hand, the left disjunction function cannot be computed by an affine algorithm. Indeed, transcribing the program for lor as a strategy leads to:
{(request ?)(valof ?.1),
 (request ?)(valof ?.1)(is tt)(output tt),
 (request ?)(valof ?.1)(is ff)(valof ?.2),
 (request ?)(valof ?.1)(is ff)(valof ?.2)(is tt)(output tt),
 (request ?)(valof ?.1)(is ff)(valof ?.2)(is ff)(output ff)} ,
which is not a subset of the set of positions of Bool² ⊸ Bool, because the projections on Bool² of the last two sequences of moves are not positions of Bool². But the program does transcribe into a (non-affine) sequential algorithm, as we shall see.
(3) Every constant function gives rise to an affine algorithm, whose responses have the form (request c′1)(output v′1) . . . (request c′n)(output v′n).
The second and third example above thus justify the terminology affine (in the affine framework, in contrast to the linear one, weakening is allowed). The second example suggests the difference between affine and general sequential algorithms. Both kinds of algorithms ask successive queries to their input, and continue to proceed only after they get responses to these queries. An affine algorithm is moreover required to ask these queries monotonically: each new query must be an extension of the previous one. The 'unit' of resource consumption is thus a sequence of queries/responses that can be arbitrarily large, as long as it builds a position of the input sds. The disjunction algorithms are not affine, because they may have to ask successively the queries ?.1 and ?.2, which are not related by the prefix ordering.
A generic affine algorithm, as represented in figure 1, can be viewed as a 'combination' of a (generic) output strategy and an input counter-strategy. (Figure 1 displays the generic algorithm as a tree of alternating moves request c′, valof c, is v, output v′, side by side with the input counter-strategy and the output strategy that it combines.)
We now give a definition of composition of affine algorithms by means of a simple abstract machine. Sequential algorithms are syntactic objects, and were indeed turned into a programming language called CDS [7]. What we present here is a simplified version of the operational semantics presented in [14][section 3.5] in the special case of affine algorithms. Given φ ∈ D(S ⊸ S′) and φ′ ∈ D(S′ ⊸ S′′), the goal is to compute on demand the positions that belong to their composition φ′′ in the sds S ⊸ S′′. The abstract machine proceeds by rewriting triplets (s, s′, s′′) where s, s′, s′′ are positions of S ⊸ S′, S′ ⊸ S′′, and S ⊸ S′′, respectively. The rules are given in Figure 2 (where P′′ designates the set of positions of S ⊸ S′′, etc.):
(r, r′, r′′) −→ (r, r′c′′, r′′c′′)      (r′′c′′ ∈ P′′)
(r, r′, r′′) −→ (rv, r′, r′′v)         (r′′v ∈ P′′)
(r, q′, q′′) −→ (r, q′v′′, q′′v′′)     (q′v′′ ∈ φ′)
(r, q′, q′′) −→ (rc′, q′c′, q′′)       (q′c′ ∈ φ′)
(q, r′, q′′) −→ (qv′, r′v′, q′′)       (qv′ ∈ φ)
(q, r′, q′′) −→ (qc, r′, q′′c)         (qc ∈ φ)

The first two rules are left to the (streamlike) initiative of the observer. Each time one of these rules is activated, it launches the machine proper, which consists of the four remaining, deterministic rules. Let us run the machine. The initial triplet is (ǫ, ǫ, ǫ). The observer wants to know the content of c′′, or more precisely wants to know what the function does in order to compute the contents of c′′ in the output. Thus, he chooses to perform the following rewriting:

(ǫ, ǫ, ǫ) −→ (ǫ, ǫ, c′′)
The request is transmitted to φ ′ :
(ǫ, ǫ, c′′) −→ (ǫ, c′′, c′′)
There are two cases here. Either φ′ does not consult its input and produces immediately a value for c′′, in which case this value is transmitted as the final result of the observer's query:
(ǫ, c′′, c′′) −→ (ǫ, c′′v′′, c′′v′′)    (c′′v′′ ∈ φ′)
Or φ ′ needs to consult its input (like the various sequential or functions), and then an interaction loop (in the terminology of Abramsky and Jagadeesan [2]) is initiated:
(ǫ, c′′, c′′) −→ (c′1, c′′c′1, c′′)                (c′′c′1 ∈ φ′)
             −→ (c′1v′1, c′′c′1v′1, c′′)           (c′1v′1 ∈ φ)
             −→ (c′1v′1c′2, c′′c′1v′1c′2, c′′)     (c′′c′1v′1c′2 ∈ φ′)
             . . .
This dialogue between φ and φ ′ may terminate in two ways. Either at some stage φ ′ has received enough information from φ to produce a value v ′′ for c ′′ , i.e.
c′1v′1 . . . c′nv′nv′′ ∈ φ′ :

(c′1v′1 . . . c′nv′n, c′′c′1v′1c′2 . . . c′nv′n, c′′) −→ (c′1v′1 . . . c′nv′n, c′′c′1v′1c′2 . . . c′nv′nv′′, c′′v′′)
or φ itself says it needs to consult its input, i.e., c ′ 1 v ′ 1 . . . c ′ n c ∈ φ: this information is passed as a final (with respect to the query c ′′ ) result to the observer, who then knows that φ ′′ needs to know the content of c.
(c′1v′1 . . . c′n, c′′c′1v′1c′2 . . . c′n, c′′) −→ (c′1v′1 . . . c′nc, c′′c′1v′1c′2 . . . c′n, c′′c)
It is then the observer's freedom to explore further the semantics of φ ′′ by issuing a new query (provided it is in P ′′ ) :
(c′1v′1 . . . c′nv′n, c′′c′1v′1c′2 . . . c′nv′nv′′, c′′v′′) −→ (c′1v′1 . . . c′nv′n, c′′c′1v′1c′2 . . . c′nv′nv′′, c′′v′′c′′1)

or

(c′1v′1 . . . c′nc, c′′c′1v′1c′2 . . . c′n, c′′c) −→ (c′1v′1 . . . c′ncv, c′′c′1v′1c′2 . . . c′n, c′′cv)
The query c′′cv reads as: "knowing that φ′′ needs c, how does it behave next when I feed v to c". After this, the computation starts again using the four deterministic rules along the same general pattern. Notice how φ and φ′ take in turn the leadership in the interaction loop (cf. section 1).
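The whole machine fits in a few lines of Python. In the sketch below (a rendering of our own, with the tags of Definition 3.7 kept explicit and strategies given as finite sets of positions), ask and feed implement the two observer rules, and step runs the four deterministic rules until the observer must act again:

    def extend(strategy, pos):
        # the unique one-move extension of pos allowed by a deterministic strategy
        nexts = {s[len(pos)] for s in strategy
                 if len(s) > len(pos) and s[:len(pos)] == pos}
        return nexts.pop() if nexts else None

    def step(phi, phi2, s, s1, s2):
        # run the four deterministic rules until the observer has to move again
        while True:
            if s1 and s1[-1][0] in ('request', 'is'):      # phi2 must answer
                m = extend(phi2, s1)
                if m is None:
                    return s, s1, s2
                s1 = s1 + (m,)
                if m[0] == 'output':                       # a value v'' for the output
                    return s, s1, s2 + (m,)
                s = s + (('request', m[1]),)               # m = (valof, c'): query phi
            else:                                          # phi must answer
                m = extend(phi, s)
                if m is None:
                    return s, s1, s2
                s = s + (m,)
                if m[0] == 'output':                       # feed v' back to phi2
                    s1 = s1 + (('is', m[1]),)
                else:                                      # m = (valof, c): ask the observer
                    return s, s1, s2 + (m,)

    def ask(phi, phi2, s, s1, s2, c):                      # observer rule 1: request c''
        return step(phi, phi2, s, s1 + (('request', c),), s2 + (('request', c),))

    def feed(phi, phi2, s, s1, s2, v):                     # observer rule 2: is v
        return step(phi, phi2, s + (('is', v),), s1, s2 + (('is', v),))

    # demo: composing the boolean negation algorithm with itself
    neg = {(('request', '?'), ('valof', '?')),
           (('request', '?'), ('valof', '?'), ('is', 'tt'), ('output', 'ff')),
           (('request', '?'), ('valof', '?'), ('is', 'ff'), ('output', 'tt'))}
    s, s1, s2 = ask(neg, neg, (), (), (), '?')
    print(s2)                     # the composite asks for its input: ends with ('valof', '?')
    s, s1, s2 = feed(neg, neg, s, s1, s2, 'tt')
    print(s2)                     # ends with ('output', 'tt'): neg;neg behaves as the identity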
We now turn to the abstract definition of our morphisms. Definition 3.9 (stable) A continuous function f : D(S) → D(S′) is stable if, for all x ∈ D(S) and α′ ∈ K(D⊥(S′)) such that f(x)⊳α′, there exists a minimum y ≤ x such that f(y)⊳α′, denoted by m(f, x, α′). One defines similarly a notion of stable partial function g : D⊥(S′) ⇀ D⊥(S), with notation m(g, α′, x).
Definition 3.10 (symmetric algorithm) Let S and S ′ be two sds's. A symmetric algorithm from S to S ′ is a pair
(f : D(S) → D(S ′ ), g : D ⊥ (S ′ ) ⇀ D ⊥ (S))
of a function and a partial function that are both continuous and satisfy the following axioms:
(L) (x ∈ D(S), α′ ∈ K(D⊥(S′)), f(x)⊳α′) ⇒ x⊳g(α′) and m(f, x, α′) = x ⊳ g(α′)
(R) (α′ ∈ D⊥(S′), x ∈ K(D(S)), x⊲g(α′)) ⇒ f(x)⊲α′ and m(g, α′, x) = f(x) ⊲ α′
We set as a convention, for any x and any α ′ such that g(α ′ ) is undefined:
x⊳g(α ′ ) and x ⊳ g(α ′ ) = ∅.
Hence the conclusion of (L) is simply m(f, x, α ′ ) = ∅ when g(α ′ ) is undefined. In contrast, when we write x⊲g(α ′ ) in (R), we assume that g(α ′ ) is defined.
Thus, g provides the witnesses of stability of f , and conversely. Moreover, the above definition is powerful enough to imply other key properties of f and g.
Definition 3.11 A (continuous) function f : D(S) → D(S ′ ) is called sequential if, for any pair (x, α ′ ) ∈ K(D(S)) × K(D ⊥ (S ′ )) such that f (x)⊲α ′ and f (z)⊳α ′ for some z ≥ x, there exists α ∈ K(D ⊥ (S)), called a sequentiality index of f at (x, α ′ ),
such that x⊲α and for any y ≥ x, f (y)⊳α ′ implies y⊳α.
Proposition 3.12 Let (f, g) be a symmetric algorithm from S to S′. Then:

(LS) If x ∈ D(S), α′ ∈ K(D⊥(S′)), f(x)⊲α′, and f(y)⊳α′ for some y > x, then x⊲g(α′), and x ⊲ g(α′) is a sequentiality index of f at (x, α′).

(RS) If α′ ∈ D⊥(S′), x ∈ K(D(S)), x⊳g(α′), and x⊲g(β′) for some β′ > α′, then f(x)⊳α′, and f(x) ⊳ α′ is a sequentiality index of g at (α′, x).

Hence f and g are sequential, and g provides the witnesses of sequentiality for f and conversely.
We turn to the composition of affine algorithms.
Definition 3.13 Let S, S ′ and S ′′ be sds's, and let (f, g) and (f ′ , g ′ ) be symmetric algorithms from S to S ′ and from S ′ to S ′′ . We define their composition (f ′′ , g ′′ ) from S to S ′′ as follows:
f ′′ = f ′ • f and g ′′ = g • g ′ .
The announced full abstraction theorem is the following.
Theorem 3.14 The sets of affine algorithms and of symmetric algorithms are in a bijective correspondence (actually, an isomorphism), and the two definitions of composition coincide up to the correspondence.
We just briefly indicate how to pass from one point of view to the other. Given φ ∈ D(S ⊸ S ′ ), we define a pair (f, g) of a function and a partial function as follows:
f(x) = {r′ | r′ = s⌈S′ and s⌈S ∈ x for some s ∈ φ}
g(α′) = {q | q = s⌈S and s⌈S′ ∈ α′ for some s ∈ φ} .
(By convention, if the right hand side of the definition of g is empty for some α ′ , we interpret this definitional equality as saying that g(α ′ ) is undefined.)
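Under the same tagged representation as before, the pair (f, g) can be read off an affine algorithm by mere projections. A sketch, where, following the convention above, an empty result stands for undefinedness, and where we let an empty input projection count as belonging to every x (a simplifying assumption of ours):

    def proj_in(s):    # s restricted to S: cells from valof moves, values from is moves
        return tuple(m[1] for m in s if m[0] in ('valof', 'is'))

    def proj_out(s):   # s restricted to S': cells from request moves, values from output moves
        return tuple(m[1] for m in s if m[0] in ('request', 'output'))

    def fun_of(phi):
        # the function f : D(S) -> D(S') induced by the affine algorithm phi
        def f(x):
            return {proj_out(s) for s in phi
                    if proj_in(s) == () or proj_in(s) in x}
        return f

    def cofun_of(phi):
        # the partial function g, sending counter-strategies of S' to counter-strategies of S
        def g(alpha_prime):
            qs = {proj_in(s) for s in phi
                  if proj_in(s) != () and proj_out(s) in alpha_prime}
            return qs or None                 # None: g(alpha') is undefined
        return g

    neg = {(('request', '?'), ('valof', '?')),
           (('request', '?'), ('valof', '?'), ('is', 'tt'), ('output', 'ff')),
           (('request', '?'), ('valof', '?'), ('is', 'ff'), ('output', 'tt'))}
    print(fun_of(neg)({('?', 'tt')}))         # {('?', 'ff')}
    print(cofun_of(neg)({('?',)}))            # {('?',)}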
Conversely, given a symmetric algorithm (f, g) from S to S ′ , we construct an affine algorithm φ ∈ D(S ⊸ S ′ ) by building the positions s of φ by induction on the length of s (a streamlike process!). For example, if s ∈ φ, if s⌈ S and s⌈ S ′ are responses, and if q ′ = (s⌈ S ′ )c ′ for some c ′ , then:
sc ′ c ∈ φ if (s⌈ S )c ∈ g(q ′ ) sc ′ v ′ ∈ φ if q ′ v ′ ∈ f (s⌈ S ) .
But, as remarked above, we do not get all sequential functions in this way. Recall that in linear logic the usual implication A ⇒ B is decomposed as (!A) ⊸ B (! and its de Morgan dual ? are called exponentials in linear logic). The positions of the exponential sds !S are generated by the following clauses:

ρq ∈ P!       if ρ is ǫ or a response of P!, and q ∈ A(strategy(ρ))
ρq(qv) ∈ P!   if ρq ∈ P!, strategy(ρq(qv)) ∈ D(S), and qv ∉ strategy(ρ)
where strategy is the following function, mapping responses (or ǫ) of P! to strategies of S:

strategy(ǫ) = ∅
strategy(ρq(qv)) = strategy(ρ) ∪ {qv} .
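Under a representation (ours) where a position of !S alternates queries q and response pairs (q, v), the function strategy is a one-liner:

    def strategy_of(rho):
        # strategy(rho): collect the response pairs qv occurring along rho
        return {m for i, m in enumerate(rho) if i % 2 == 1}

    rho = ('?.1', ('?.1', 'ff'), '?.2', ('?.2', 'tt'))    # an exploration of Bool x Bool
    print(strategy_of(rho))                               # {('?.1', 'ff'), ('?.2', 'tt')}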
Sequential algorithms between two sds's S and S ′ are by definition affine algorithms between !S and S ′ .
It is easily checked that the programs for lor (cf. example 3.8), ror, lsor, and rsor transcribe as sequential algorithms from Bool × Bool to Bool.
Sequential algorithms also enjoy two direct definitions, a concrete one and an abstract one, and both an operational and a denotational definition of composition, for which full abstraction holds, see [14].
Let us end the section with a criticism of the terminology of symmetric algorithm. As already pointed out, the pairs (f, g) are not quite symmetric, since g, unlike f, is a partial function. Logically, S ⊸ S′ should read as S⊥ ⅋ S′. But something odd is going on: the connective ⅋ would have two arguments of a different polarity: in S′ it is Opponent who starts, while Player starts in S⊥. For this reason, Laurent proposed to decompose the affine arrow [29] (see also [8]):
S ⊸ S′ = (↓S)⊥ ⅋ S′
where ↓ is a change of polarity operator. For sds's, this operation is easy to define: add a new initial opponent move, call it ⋆, and prefix it to all the positions of S⊥. For example, ↓(Bool⊥) has ⋆ ? tt and ⋆ ? ff as (maximal) positions. According to Laurent's definition, the initial moves of S1 ⅋ S2 are pairs (c1, c2) of initial (Opponent's) moves of S1 and S2. Then the positions continue as interleavings of a position of S1 and of S2. Notice that this is now completely symmetric in S1 and S2. Now, let us revisit the definition of S ⊸ S′. We said that the positions of this sds had to start with a c′, which is quite dissymmetric. But the ↓ construction allows us to restore equal status to the two components of the ⅋. A position in (↓S)⊥ ⅋ S′ must start with two moves played together in (↓S)⊥ and S′. It happens that these moves have necessarily the form (⋆, c′), which is conveying the same information as c′.
Control
We already pointed out that theorem 3.14 is a full abstraction result (for the affine case), and that the same theorem has been proved for all sequential algorithms with respect to the language CDS. Sequential algorithms inherently allow one to consult the internal behaviour of their arguments and to make decisions according to that behaviour. For example, there exists a sequential algorithm of type (Bool² → Bool) → Bool that maps lsor to tt and rsor to ff (cf. end of section 2). Cartwright and Felleisen made the connection with more standard control operators explicit, and this led to the full abstraction result of sequential algorithms with respect to an extension of PCF with a control operator [13].
In this respect, we would like to highlight a key observation made by Laird: in this model, the type bool of booleans is isomorphic to the type o → o → o, where o is the base type with a single query and no value (with labelling of moves o1 → o2 → oǫ).
It is an instructive exercise to write down explicitly the inverse isomorphisms as sequential algorithms: in one direction, one has the if then else function; in the other direction, we have the control operation catch considered in [13], which tells apart the two strategies {?ǫ?1} and {?ǫ?2}. Here, we shall show (at type bool) how the control operator call-cc of Scheme or Standard ML is interpreted as a sequential algorithm of type ((bool → B) → bool) → bool. The formula ((A → B) → A) → A is called Peirce's law and is a typical tautology of classical logic. The connection between control operators and classical logic (and in particular the fact that call-cc corresponds to Peirce's law) was first discovered in [21]. Here is the sequential algorithm interpreting call-cc for A = bool:
?ǫ  ?1  ?11  ?111  tt111  ttǫ
                   ff111  ffǫ
        tt1  ttǫ
        ff1  ffǫ

(with labelling of moves ((bool111 → B11) → bool1) → boolǫ). The same algorithm, with bool replaced by o → o → o, is:
?ǫ  ?1  ?11  ?111  ?1111  ?2
                   ?1112  ?3
        ?12  ?2
        ?13  ?3

(with labelling (((o1111 → o1112 → o111) → B11) → o12 → o13 → o1) → o2 → o3 → oǫ).
The reader familiar with continuations may want to compare this tree with the continuation-passing (CPS) style interpretation λyk.y(λxk ′ .xk)k of call-cc, or in tree form (cf. section 1):
λyk.y
    λxk′.x
        k
    k
where the first k indicates a copycat from o111 to oǫ while the second one indicates a copycat from o1 to oǫ. The bound variable k′ amounts to the fact that B itself is of the form B′ → o (see below). This is an instance of the injection from terms to strategies mentioned in section 4 (in this simple example, Laird's HO style model coincides with that of sequential algorithms). CPS translations are the usual indirect way to interpret control operators: first translate, then interpret in your favorite cartesian closed category. In contrast, sequential algorithms look like a direct semantics. The example above suggests that this is an "illusion": once we explicitly replace bool by o → o → o, we find the indirect way underneath.
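This CPS reading can be test-driven directly. A small Python sketch, with booleans taken as negative objects, that is, functions consuming a continuation (the encoding is ours):

    tt = lambda k: k(True)                    # a boolean hands a value to its continuation
    ff = lambda k: k(False)

    def callcc(y):
        # the term λy k. y (λx k'. x k) k
        return lambda k: y(lambda x: lambda k2: x(k))(k)

    y1 = lambda throw: lambda k: tt(k)                        # returns tt normally
    y2 = lambda throw: lambda k: throw(ff)(lambda r: tt(k))   # escapes with ff; the rest is dead code

    callcc(y1)(print)   # True
    callcc(y2)(print)   # False: the captured continuation k was invoked directly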
A more mathematical way to stress this is through Hofmann-Streicher's notion of continuation model [23]: given a category having all the function spaces A → R for some fixed object R called object of final results, one only retains the full subcategory of negative objects, that is, objects of the form A → R. In this category, control can be interpreted. (For the logically inclined reader, notice that thinking of R as the formula "false", then the double negation of A reads as (A → R) → R, and the classical tautology ((A → R) → R) → A is intuitionistically provable for all negative A = B → R.) Now, taking R = o, the above isomorphism exhibits bool as a negative object. But then all types are negative: given A and B = B ′ → R, then A → B ∼ (A × B ′ ) → R is also negative. Hence the model of sequential algorithms (and Laird's model of control) are indeed continuation models, but it is not written on their face.
A few more remarks
We would like to mention that this whole line of research on sequential interaction induced such side effects as the design of the Categorical Abstract Machine [11], that gave its name to the language CAML, and of a theory of Abstract Böhm Trees, alluded to in section 1.
As for future lines of research, imports from and into the program of ludics newly proposed by Girard [20] are expected. We just quote one connection with ludics. We insisted in section 2 that lsor and rsor were different programs for the same function. But there is a way to make them into two different functions, by means of additional error values, and accordingly of additional constants in the syntax. Actually, one error is enough, call it err. Indeed, we have:

lsor(err, ⊥) = err
rsor(err, ⊥) = ⊥ .
Because lsor looks at its left argument first, if an error is fed in that argument, it is propagated, whence the result err. Because rsor looks at its right argument first, if no value is fed for that argument, then the whole computation is waiting, whence the result ⊥. One could achieve the same more symmetrically with two different errors: lsor(err1, err2) = err1, rsor(err1, err2) = err2. But the economy of having just one error is conceptually important, all the more because, in view of the isomorphism of section 5, we see that we can dispense (at least for bool, but also for any finite base type) with the basic values tt, ff, 0, 1, . . . We arrive then at a picture with only two (base type) constants: ⊥ and err! This is the point of view adopted in Girard's ludics. In ludics, the counterpart of err is called Daimon. The motivation for introducing Daimon is quite parallel to that of having errors. Girard's program has the ambition of giving an interactive account of proofs. So, in order to explore a proof of a proposition A, one should play it against a "proof" of A⊥ (the negation of linear logic). But it can't be a proof, since not both A and A⊥ can be proved.
So, the space of "proofs" must be enlarged to allow for more opponents to interact with. Similarly, above, we motivated errors by the remark that, once introduced, they allow more observations to be made: here, they allowed us to separate lsor and rsor. More information, also of a survey kind, can be found in [17]. | 7,470 |
cs0501036 | 2151384127 | In this paper we describe a method which allows agents to dynamically select protocols and roles when they need to execute collaborative tasks. | Protocol selection in agent interaction design is generally done at design time. Indeed, most of the agent-oriented design methodologies (@cite_3 and @cite_4, to quote a few) make designers decide which role agents should play for each single interaction. However, dynamic behaviours and openness in MAS demand greater flexibility. | {
"abstract": [
"To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides an even greater amount of flexibility to the ability and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents.",
"This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societ al) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system)."
],
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"1729536562",
"2111877087"
]
} | Enabling Agents to Dynamically Select Protocols for Interactions | Generally, the interaction protocols which support agents' collaborative task execution are imposed upon multiagent systems (MAS) at design time. This static protocol selection severely limits the system's openness, the dynamic behaviours agents can exhibit, the integration of new protocols, etc. For example, consider a collaborative task which can be executed in several ways: either by means of a Request protocol [5] (an identified agent exhibiting specific skills is requested to perform the task) or by means of a Contract Net protocol (CNP) [6] (a competition holds between some identified agents in order to find out the best one to perform the task). Since the MAS is open and CNP looks for the best contractor, CNP will undoubtedly be preferred to Request for such a task in the absence of constraints such as execution delay. Thus, selecting Request at design time prevents the initiator agent from benefiting from a better processing of this task. Moreover, the set of protocols used in multi-agent interactions keeps growing, and a static protocol selection will only consider the protocols the designer knows, even when the agents have the capacity to interpret and execute other protocols unknown at design time. To overcome this limitation, we should enable agents to dynamically select protocols in order to interact.
As yet, there have been some efforts [1,4] to enable agents to dynamically select the roles they play during interactions, using Markov Decision Processes, planning, or even probabilistic approaches. However, they don't suit protocol based coordination mechanisms. Indeed, as protocols are partially ordered sequences of pre-formatted message exchanges, selecting them to execute a task requires that their descriptions match that of the collaborative task. The solutions proposed so far do not explicitly focus on protocols and do not check such compliance either. To address this void, we developed a method which enables agents to dynamically select protocols and roles in order to interact. Our method pushes the usual assumptions about multi-agent interactions a step further. First, we consider that some interaction protocols can be known only at runtime. Thus, starting from a minimal version, an agent's interaction model can grow by integrating these protocols from safe and authenticated libraries of interaction protocols when needed. Second, we consider that agents may have different designers, and therefore may encompass different protocol specification formalisms. Furthermore, an interaction protocol is a triple {R, M, Ω} where R is a set of interacting roles, which can be of two types: initiator and participant. An initiator role is the unique role in charge of starting the protocol, whereas a participant role is any role taking part in the protocol. Consequently, an initiator agent will be any agent playing the initiator role in an interaction, while a participant agent will be any agent taking up a participant role. In addition, protocols used in MAS can be classified in three categories: (1) 1-1 protocols, which are protocols made of two roles (initiator and participant), both of them having only one instance (e.g. Request); (2) 1-1N protocols, again protocols made of two roles, with several instances of the participant (e.g. CNP); (3) 1-N protocols, which are protocols with several distinct participant roles, each of them having only one instance (e.g. an auction protocol with one buyer, one seller and one manager). Subsequently, we define the dynamic protocol selection process for each of these categories.
Rather than explicitly indicating the protocols and the roles to use for all the agents which will execute the desired interaction, we suggest that agent programmers simply mention the collaborative task's description in the initiator agent's source code. As soon as an agent locates such a description, it identifies some potential participant agents and thereafter fires the dynamic protocol selection process, taking up the initiator role. We assume that the MAS is provided with potential participant agent identification procedures. Agents can dynamically select protocols in two possible ways. First, the initiator agent and all the potential participant agents collectively select a protocol and assign roles to each agent inside this protocol. This is the joint protocol selection method, which assumes that agents trust one another and that they don't dread publishing their knowledge and preferences. On the other hand, agents can individually select protocols and roles and start the desired interaction. This is the individual protocol selection method, which assumes that agents do not trust one another and/or that the system is heterogeneous (several sub-systems with different protocol formalisms are plugged together). In this method, as the selected roles may mismatch, agents should anticipate errors in order to guarantee consistent message exchanges. We focus on the wrong message structure error, which indicates that something is wrong in the message structure (performative, content, language, ontology, etc.), and the wrong message content error, which indicates that the message's content doesn't match the expected content pattern. We argue that our method introduces more flexibility in protocol execution, fosters agents' autonomy, favours their dynamic behaviours and suits MAS openness. In this paper we describe both methods and detail their principles, concepts and algorithms. We illustrate them with a web document filtering MAS composed of (1) query agents, representing the queries users formulate, (2) document agents, representing the documents retrieved from the web, and (3) rule agents, corresponding to any linguistic rule invoked to compute document attributes (author(s), content, language, etc.).
The paper is organised as follows. Section 2 formally defines the protocols selection problem. Sections 3 and 4 detail our methods. Section 5 discusses some related work and section 6 draws some conclusions.
Problem Description
Our purpose in this research is to ease protocol definition, implementation and use. Thus, we aim to free agent programmers from hard-coding the protocols and the exact roles to use every time their agents have to execute a collaborative task. Rather, they should only mention in the initiator agent's source code the description of the collaborative task to execute. Then, once an agent comes across such a description, it will launch the dynamic selection process implicating potential participant agents. Concretely, given a collaborative task tj which is to be executed by a set A = {a1, a2, . . . , ak} of agents, the selection problem is stated as: how can an agent select a protocol and a role inside this protocol to execute tj? It consists in finding out a protocol p and the roles r1, r2, . . . , rp each ai should be enacting in this protocol in order to get tj executed. We assume that each agent is provided with an interaction model I containing some configured protocols p1, . . . , pm, some of which can be introduced at runtime, and, for each protocol pi, a set of roles {r1, r2, . . . , rk} this agent can play during interactions based on pi.
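To fix ideas, the data involved in the selection problem can be sketched in Python as follows (all names are illustrative and do not come from an actual implementation):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Role:
        name: str
        kind: str          # 'initiator' or 'participant'

    @dataclass(frozen=True)
    class Protocol:
        name: str
        category: str      # '1-1', '1-1N' or '1-N'
        roles: tuple       # the set R of the triple {R, M, Omega}

    # an interaction model: protocol -> roles this agent can play in it
    cnp = Protocol('ContractNet', '1-1N',
                   (Role('initiator', 'initiator'), Role('contractor', 'participant')))
    interaction_model = {cnp: {Role('contractor', 'participant')}}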
In the following two sections, we elaborate on our solution to the selection problem.
Joint Protocol Selection
Once an agent locates the description of a collaborative task, it finds out a set of protocols needed to execute this task, which it refines to a subset of protocols whose initiator roles are configured inside its interaction model. Moving from collaborative task models to protocol models requires the agents to analyse both models and detect their adequacy. In this paper we assume that agents are able to examine task and protocol models and relate the first ones to the second ones. After moving from task to protocols, the initiator agent should identify all the potential participant agents for the determined protocols. Both steps provide the initiator agent with a sparse matrix: potential participants linked to protocols. These potential participant agents are thus contacted, whether at the same time or one after the other, and are required to validate a protocol. To contrast the messages exchanged during the joint selection with those exchanged during normal interactions, we propose some performatives which we informally describe below:
call-for-collaboration the sender of this performative invites the receiver to take part in a protocol described in the content field.
unable-to-select the sender of this performative informs the receiver that it cannot play a participant role in the related protocol. The in-reply-to and reply-with fields help relate this message to a prior call-for-collaboration. The reasons why an agent may reply with this performative, though identified as a potential participant for the protocol, are (1) its autonomy, since it may not want to execute this protocol at this moment, and (2) some errors in some fields.
stop-selection the sender of this performative asks the receiver to stop the selection process this message is linked to.
ready-to-select the sender of this performative notifies the receiver of the participant roles it can be enacting regarding the protocol description it received. All the participant roles in any protocol compatible with the current one can be listed. This grouping not only reveals, by order of preference, the roles the sender commits to playing, but it also avoids going back and forth about protocols sharing the same background. Roles of protocols are compatible when they can execute safe interactions despite the differences in their respective specifications. As an example, the initiator role of CNP can interact with either the participant role of CNP or that of Iterated CNP (ICNP [5]), whereas the initiator of ICNP can't interact with the participant of CNP because of the probable iterations.
notify-assignment the sender of this performative informs the receiver about the role the latter has been assigned to in the jointly selected protocol. The assigned role is one among those the receiver previously committed to playing.
Whatever protocol category the selection is concerned with, we can describe the joint selection message exchange sequence as follows (a code sketch of the initiator's side is given after the list):
1. the initiator agent sends a call-for-collaboration encapsulating a protocol's description.
2. Each participant agent can reply with an unable-to-select, leading the initiator agent to stop the selection process between both agents by sending a stop-selection.
3. Each participant agent can also reply with a ready-to-select. In this case, the initiator agent parses the participant's proposals and either adopts one of them by sending a notify-assignment, or rejects all the proposals by sending a stop-selection.
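The initiator's side of this exchange can be sketched as follows; send, receive and the role set acceptable_roles are assumed primitives supplied by the agent platform, not an actual API:

    def joint_select_with(participant, protocol_desc, send, receive, acceptable_roles):
        # one joint-selection dialogue with a single potential participant
        send(participant, 'call-for-collaboration', protocol_desc)
        performative, content = receive(participant)
        if performative == 'unable-to-select':
            send(participant, 'stop-selection', None)
            return None
        # 'ready-to-select': content lists candidate roles by order of preference
        for role in content:
            if role in acceptable_roles:
                send(participant, 'notify-assignment', role)
                return role
        send(participant, 'stop-selection', None)
        return None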
In the remainder of this section we detail the joint selection method for each class of protocol.
Inside the Joint Protocol Selection
1-1 Protocols
In the dynamic 1-1 protocol selection, the aim of the initiator agent is to find out a solution early: a couple (ai, pj) where ai is one of the potential participant agents formerly identified and pj one of the 1-1 protocols determined for the current task. At the heart of the solution search is the matrix's exploration. Hence, it behoves the initiator agent to explore the matrix traversing protocols or potential participant agents. In the protocol-oriented exploration, the initiator selects a protocol, iterates through the set of agents which it identified for this protocol, and retains one that fits the protocol. As soon as an agent is determined, the selection process successfully completes. Otherwise, the iteration proceeds until there is no more protocol to select. Analogously, in the agent-oriented exploration the initiator selects an agent and delves into its protocol set, looking for one they can execute together. Whatever exploration the initiator adopts, it should overcome the matrix sparsity by selecting as next element (protocol or agent) the one holding the least sparse vector.
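A protocol-oriented exploration of the matrix, always starting from the least sparse vector, can be sketched as follows (try_validate stands for the dialogue described above; names are ours):

    def protocol_oriented_search(matrix, try_validate):
        # matrix: protocol -> set of potential participant agents
        for protocol in sorted(matrix, key=lambda p: len(matrix[p]), reverse=True):
            for agent in sorted(matrix[protocol]):
                if try_validate(agent, protocol):
                    return agent, protocol        # the couple (a_i, p_j)
        return None                               # the task remains unexecuted

    matrix = {'p1': {'a1', 'a2', 'a3'}, 'p2': {'a2'}}
    print(protocol_oriented_search(matrix, lambda agent, proto: agent == 'a3'))   # ('a3', 'p1')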
Consider, by way of illustration, a query agent q1 which is requested to execute a task t1: "find out a document exhibiting the following characteristics: (language='English', content='plain/text')". Protocol and potential participant agent identification for t1 leads to the matrix given in table 1, where a cross in a cell [l,c] indicates that the agent at column c can play a participant role in the protocol at line l. This matrix reveals that q1 has identified IPS (an Incremental Problem Solving protocol where a problem submitter, the initiator, and its solver, the participant, progressively find out a solution to a given problem) and Request. A diagrammatic representation of IPS is given in figure 1. In addition, q1 identified seven potential participant document agents d1 . . . d7. q1 adopts a protocol-oriented exploration and selects IPS (the least sparse vector). Therefore, it will try to validate IPS with d1, d2, d4, d5 or d7. If q1 receives a ready-to-select in reply to a prior call-for-collaboration, it explores the list of preferred roles and, as soon as it finds the participant role of IPS or Request, it notifies its agreement with a notify-assignment. If q1 identified more than these protocols and a role of any of them is pointed out in the participant's ready-to-select, this protocol will be adopted. In the case it didn't find any role it expects, or it received an unable-to-select, q1 replies with a stop-selection and, as long as there are still unexplored potential participant agents for IPS, q1 will continue contacting them. If no solution is found once the potential participants' set has been thoroughly explored for a protocol, the same process is repeated with another protocol, if there is any. In case no solution has been found and no more protocols and participants can be explored, the dynamic protocol selection fails and the subsequent task remains unexecuted.
Table 1 (potential participant agents d1 . . . d7 per protocol):

IPS      d1, d2, d4, d5, d7
Request  four of d1 . . . d7
1-1N Protocols
A solution to the dynamic 1-1N protocol selection problem is a couple (A, pj) where A is the set of participant agents and pj the protocol to use. For this category of protocols the matrix is explored only in a protocol-oriented way, since all the identified agents for a protocol are contacted at the same time. Once all the contacted agents have replied, the initiator agent should select a common protocol for the agents (generally for a part of them) which replied a ready-to-select. We devised several strategies to perform this selection, but here we only describe one, the largest set's strategy, which looks for the role most agents selected. If there exists a role ri that all the agents pointed out, then this one is selected for all the agents which replied a ready-to-select. Otherwise, we look for a role that will involve the largest set of agents. Therefore, we construct an array where each index ı contains the collection of all the roles for which exactly ı agents are candidates. As we didn't succeed in finding a role for all the n agents which sent a ready-to-select, the highest index in this array points to a collection of roles that at most n − 1 agents are candidates for. We traverse the array from the highest index down to the lowest. While exploring index k, if the collection of roles is not empty, we represent for each role rı the set eı of agents which are candidates for it.
1. If all the eı are equal, then we randomly select a role rp and adopt its corresponding ep as the participant agents' set. The solution is then ep and the relevant protocol rp belongs to. After a solution has been found for an index k, we check whether some sets have not been saved for a higher index. If no such sets were found, the final solution is the one at hand. Otherwise, we select from the latter the set whose intersection with the currently selected set is the largest. If no solution has been found, we iterate through the selection process, changing protocols (a simplified sketch of this strategy is given below).
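Stripped of the saved-sets refinement, the core of the largest set's strategy can be sketched as follows (a simplification of ours):

    def largest_set_choice(candidates):
        # candidates: role -> set of agents that sent a ready-to-select for it
        everyone = set().union(*candidates.values())
        for role, agents in candidates.items():   # a role every respondent proposed wins
            if agents == everyone:
                return role, agents
        role = max(candidates, key=lambda r: len(candidates[r]))
        return role, candidates[role]             # otherwise: the largest set of agents

    print(largest_set_choice({'bidder': {'a1', 'a2', 'a3'}, 'observer': {'a2'}}))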
1-N Protocols
A solution to the dynamic 1-N protocol selection problem is a triple (A, pj, m) where A is the set of participant agents, pj the protocol to use, and m an associative array mapping each agent to the role(s) it will play in the protocol. Here again, the matrix is explored only in a protocol-oriented way. The initiator agent aı waits for all the participants' replies and gathers the ready-to-select messages. The roles are clustered following the protocols they belong to, and the protocols which have not been identified by the initiator agent are eliminated. For each protocol p, aı maps each role r to the set of agents which are candidates for it: candidates(r) = ∪k {ak}. If candidates(r) = ∅, the protocol r belongs to is no longer considered in the selection process. Moreover, as there exist several participant roles in 1-N protocols, some of them may receive their first message from other participant roles. Thus, we introduce a new relation, father: given two roles r1 and r2 of a 1-N protocol, r1 = father(r2) if and only if r1 is the sender of r2's first message. For each protocol retained after the candidates sets construction, the initiator agent constructs a tree t wherein nodes are the roles of the protocol. A node rm is a child of another node rn if rn = father(rm). t is traversed in a breadth-first way and, for each node r of t, an agent a is assigned to r from candidates(r). Assigning a role to an agent can be performed by any well known resource allocation algorithm (e.g. election). This assignment is achieved for all the trees, and the initiator agent uses a strategy to select one of the totally assigned trees. An improvement during the role assignment is to avoid situations where the same agent plays several roles in a protocol. Thus, when candidates(r) is a singleton, its only agent is removed from all the other candidates sets it appears in, when these are not singletons. As well, while exploring t, once a role has been assigned to an agent, we should remove this agent from all the candidates sets it appears in, provided these are not singletons. The singleton criterion may guide a tree selection strategy.
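The breadth-first assignment over the tree t, together with the singleton rule, can be sketched as follows (min stands for any allocation rule, e.g. an election; all names are ours):

    from collections import deque

    def assign_roles(father, candidates):
        # father: role -> parent role (None for the root); candidates: role -> set of agents
        children = {}
        for role, parent in father.items():
            children.setdefault(parent, []).append(role)
        assignment, queue = {}, deque(children.get(None, []))
        while queue:
            role = queue.popleft()
            pool = candidates.get(role, set())
            if not pool:
                return None                       # candidates(r) empty: drop the protocol
            agent = min(pool)
            assignment[role] = agent
            for other_role, other in candidates.items():
                if other_role != role and len(other) > 1:
                    other.discard(agent)          # avoid one agent playing several roles
            queue.extend(children.get(role, []))
        return assignment

    father = {'manager': None, 'buyer': 'manager', 'seller': 'manager'}
    cands = {'manager': {'a1'}, 'buyer': {'a1', 'a2'}, 'seller': {'a2', 'a3'}}
    print(assign_roles(father, cands))   # {'manager': 'a1', 'buyer': 'a2', 'seller': 'a3'}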
Beyond the joint protocol selection
The joint protocol selection mechanism does not apply to all interaction contexts. One evident issue is that generic protocols are thought to be invariably specified in the MAS. However, the only aspect that remains invariant in generic protocols' specification is the description of exchanged messages, which is imposed by communication languages (KQML, FIPA ACL) and embedded in the protocols' specification. Hence, there is no guarantee for generic protocols to be specified in a unique formalism, and agents might fail to interpret some of the formalisms used to specify generic protocols in the MAS. In particular, plugging heterogeneous sub-systems together in a MAS increases the risk of multiple generic protocol specification formalisms. The joint protocol selection then falls short in a MAS where generic protocols are specified in several formalisms and agents are unable to interpret all those formalisms. In addition, there are also situations where agents do not trust one another. Then, basing protocol selection on specification exchange becomes unsafe.
To address these drawbacks, we developed an individual protocol selection method.
Individual Protocol Selection
This selection form is carried out concomitantly with the targeted interaction. As in the joint protocol selection, the initiator agent is in charge of starting the selection process when it locates a collaborative task's description. It finds out some protocols which comply with the task's description and wherein it can play an initiator role. The initiator agent may adopt a static behaviour during this selection by choosing a protocol among the candidates. In this case, the strategy it adopts is required to be fair. It may also be given the possibility to exhibit dynamic behaviours by changing protocols in order to address occurring inconsistencies. In this paper we consider the first case.
The initiator agent sends the initial message m0 of the selected protocol pı to one or several potential participant agents. m0 actually denotes a need for a new interaction, and any agent which receives it selects a participant role r which starts with m0's reception. Hence, each participant agent a constructs a collection of candidate roles, which we refer to in the remainder of this section as collection(a, tk). The roles are then selected from collection(a, tk) and instantiated so that the interaction can take place. The individual protocol selection, although more sophisticated and powerful, can lead to interaction inconsistencies. Indeed, as individually selected roles may mismatch, the exchanged messages' content or structure (performative, ontology, language, etc.) may be wrong. Thus, we provide agents with techniques to anticipate such errors by checking incoming messages for structure and content compliance. When collection(a, tk) is a singleton, the only role is instantiated in order to interact. If any error occurs during the interaction, no recovery is possible. Dynamically selecting protocols is more appealing when collection(a, tk) contains several roles. In this case we explore collection(a, tk) either sequentially or in parallel. In this paper, we only describe the individual 1-1 protocol selection, since the selection mechanism is quite similar for the other two types of protocols and only some extensions are required to fit the specificity of these protocols.
Sequential Roles Instantiation
For the purpose of starting an interaction or replacing a failing role during an interaction, a participant agent randomly (or using another strategy we'll define later) selects roles from the collection, one after the other, until there is no available role to select or the interaction eventually ends safely. Once selected, roles are removed from the collection in order to avoid selecting them anew during the same interaction. When a message is wrong, the participant agent must recover from this error by replacing the failing role. The recovery process relies on the interaction's journal, where agents log the executed methods and the related events (input: events which fired the method, and output: events generated by the method's execution). Each method and its events form a record. The following four steps define the error recovery process:
1. If an agent detects an error, it notifies its interlocutor;

2. a then purges collection(a, tk) and selects another role;
3. a computes the point where the interaction should continue in the newly selected role, and notifies the initiator;
4. Both agents update their journals by erasing the wrong records and the interaction proceeds.
The participant role replacement during error recovery can require the initiator agent to roll some actions back in order to synchronise with the newly instantiated role. To purge collection(a, tk), a:
1. Removes from collection(a, tk) all the roles whose description, from the beginning of the role to the point the error occurs at, does not match the journal;
2. If the message structure is wrong and the error has been detected by the initiator agent, removes from collection(a, tk) the roles that generated the wrong message. If the error has been detected by a itself, it removes from collection(a, tk) the roles that can't receive the claimed erroneous message;
3. If the message content is wrong and the error was detected by the initiator agent, removes from collection(a, tk) the roles that cannot generate the same message structure at the point the error occurred at, and removes from collection(a, tk) the roles that use the same method as the one causing the error. If the error was detected by the participant agent, removes from collection(a, tk) the roles that do not receive the same message structure at the point the error occurred at, and also removes from collection(a, tk) the roles that do not receive the same message content at that point.
Since it's no use checking the content when the message structure is wrong, structure compliance is checked prior to content compliance. When a role does not comply with the current execution, it is removed from collection(a, tk). Role removal actually consists in marking the roles so that they can no longer be instantiated in the current interaction. Then, from the updated collection(a, tk), a selects a new role following the strategy described below:
1. If the message content is wrong: (a) for each role of collection(a, tk), construct the set of messages (generated or received) at the point the error occurred at; (b) withdraw from these sets the message that caused the error; (c) compare these subsets and select the weakest one. Then the role the selected subset originates from is instantiated. The weakest message subset is the one containing the highest number of weak messages. Weak messages are those which lead to interaction termination; these messages are potentially weaker than those which continue the interaction. The reason why we prefer the weakest message subset is that we wish to avoid producing a message other than the "structurally" correct one we generated previously. When there are several subsets candidate for selection, or when there are none, a role is randomly selected.
2. If the message structure is wrong: randomly select a role in collection(a, tk).
Once a new role has been selected, the participant agent might expect to continue its execution from the point the error occurred at. However, doing so can bring inconsistencies into the interaction execution, because the roles, though enacted by the same agent, do not necessarily use the same methods. These inconsistencies can be avoided by looking for methods of the new role which follow the same sequence order as in the journal, from the starting point. This set of methods won't be re-executed. We represent the methods of the new role as nodes of a directed graph wherein an edge mi → mj means that method mj can be executed immediately after mi completes and the conditions for its execution hold. Algorithm 1 achieves this computation and returns the recovery points for both the initiator and the participant. In this algorithm, ı is the number of the latest message the initiator should consider it has sent to the participant, and ȷ the point where the participant is to start executing its new role from. The third event type considered in the journal (data value change, in addition to message emission and reception) accounts for the difference between ı and ȷ. Once the initiator receives ı, it looks for the record in its journal representing the ı-th message it sent to the participant. All the records following this one will be erased from the journal. The participant also updates its journal in quite the same way, based on ȷ.
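Algorithm 1 itself is not reproduced here, but its effect can be approximated by the following rough sketch, under assumptions of ours: the journal is a list of (method, event-kind) records, can_follow encodes the directed graph of the new role's methods, and entry_methods are its possible first methods:

    def recovery_points(journal, can_follow, entry_methods):
        # longest journal prefix the new role can replay without re-executing anything;
        # i counts the initiator's messages that remain valid, j is the restart record
        i = j = 0
        prev = None
        for k, (method, kind) in enumerate(journal):
            ok = method in entry_methods if prev is None else can_follow(prev, method)
            if not ok:
                break
            prev, j = method, k + 1
            if kind == 'received':        # a message emission of the initiator still stands
                i += 1
        return i, j

    journal = [('on_ask_one', 'received'), ('lookup', 'data'), ('reply_tell', 'sent')]
    edges = {('on_ask_one', 'lookup'), ('lookup', 'reply_tell')}
    print(recovery_points(journal, lambda p, m: (p, m) in edges, {'on_ask_one'}))   # (1, 3)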
Suppose d1 wants to identify the language its document is written in; this task (t2) requires an interaction with a rule agent. We assume d1 finds out a protocol which starts with an ask-one emission and expects a tell. Suppose d1 contacted a rule agent c1 whose interaction model is partially depicted in figure 2. In this figure, for example, the given portion of r1 can be interpreted as: r1 receives an ask-one and can reply an insert or a sorry. collection(c1, t2) = {r1, r2, r3, r4}.
Figure 2. Agent c1's interaction model
• Whatever role c1 selects, if it replies a sorry the interaction will end up, maybe prematurely (the first message has not been validated yet). To make sure it is not so, d1 issues a warning: "May be premature interaction termination!". c1 tries to select another role, and the interaction proceeds or definitely stops.
• If c1 selected r4 and sent a tell whose content is wrong, d1 notifies c1 of a wrong message content error. c1 then stops r4, purges its collection by removing r1 (since it cannot generate a tell at the error location), selects r3 because it corresponds to the weakest subset, and computes the recovery points: ı = ȷ = Rec♯1. c1 updates its journal and replies anew to the ask-one.
In order to avoid inconsistencies at the end of interactions, we require both agents to explicitly notify each other of the protocol's termination. Instead of selecting roles on the basis of the messages they can generate, it makes sense to select them on the basis of the messages they really generated. This is possible only if all candidate roles are instantiated at the same time. This parallel role instantiation is known only to the participant agent; the initiator agent still has the perception of a sequential role instantiation.
Mixed Roles Instantiation
All the roles a identified in collection(a, tk) are instantiated at the same time. They handle the received message and generate their reply messages, which are stored in a control zone (Cz); Cz also contains the messages which are destined to currently activated roles. Only one message mk is selected from Cz and sent to the initiator. This selection can be performed following several strategies. For example, the participant agent can randomly select a message among those which don't shorten the interaction. Therefore, if an insert, a sorry, an error and a tell are generated in reply to an ask-one, insert and tell will be preferred to sorry and error, and the random selection will be performed between the first two messages. After a message mk has been selected, all the roles which generated a message of the same structure and content are activated. In this instantiation mode, when an error occurs, the participant agent recovers from it by stopping the wrong roles and by reactivating one or several other roles. Thus, if an error occurs on mk:

1. All the activated roles are stopped;

2. If the message structure is wrong: all the mk messages, as well as the messages having the same structure, are removed from Cz and their roles are stopped. The participant selects another message mk′ following the same principle as mk's selection.
3. If the message content is wrong: all the mk messages, as well as those having the same content pattern, are removed from Cz and their roles are stopped. The participant selects another message mk′ having the same structure but a different content pattern.
When some roles stayed activated, they all generate their messages. If the messages have the same structure and content, all these roles remain activated. Otherwise, only one message is selected and all the roles whose message has not been selected are deactivated. If all the previously activated roles have been stopped, the participant agent reactivates the most recently deactivated role, but cares about early interaction termination. When there is more than one such role, they all are reactivated. The participant role reactivation might require the initiator to roll some actions back to a recovery point. An algorithm similar to Algorithm 1 performs the recovery point computation for both the initiator and the participant. The algorithm is applied for each role, considering the role's current execution, and the final recovery point is the earliest.
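The message selection from the control zone fits in a few lines (shortens is an assumed predicate telling whether a message would end the interaction):

    import random

    def select_from_control_zone(Cz, shortens):
        # prefer the messages that keep the interaction going
        alive = [m for m in Cz if not shortens(m)]
        return random.choice(alive or list(Cz))

    Cz = ['insert', 'sorry', 'error', 'tell']
    print(select_from_control_zone(Cz, lambda m: m in ('sorry', 'error')))   # 'insert' or 'tell'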
Conclusion
Designing agents for open and dynamic environments is still a challenging task, especially in regard to protocol based interactions. Two main concerns arise from interaction modelling and design in such systems. First, how are interactions based on generic protocols configured so that consistent message exchange can take place? Second, is it sound that designers always decide which protocols and roles to use every time an interaction is asked for? We address both issues by developing several methods. In this paper we focus on the second concern. We argued that, due to openness and dynamic behaviours, more flexibility is needed in protocol selection. Furthermore, in the context of complex applications demanding multi-protocol agents, moving from static to dynamic protocol selection greatly increases such systems' efficiency and properly handles the situation, tightly related to openness, where not all the protocols are known at design time. Thus, we enabled agents to dynamically select protocols upon the prevailing circumstances.
One outcome of the dynamic protocol selection is that the protocols to use are no longer hard-coded in all the agents' source code. Rather, programmers mention collaborative task descriptions in the initiator agent's source code only, making the latter in charge of firing the interaction. Agents are given two ways to select protocols. First, the initiator agent and all (or a part of) the potential participant agents it identified can join together and share information and preferences about the protocols at hand, in order to select a protocol and assign a role to each agent. Second, agents are given the possibility to individually select their protocols and roles, anticipating errors. We focus on two types of errors: wrong message structure and wrong message content. As role replacements are performed as soon as an anomaly is detected, we constrain actions executed during interactions to be reversible and not to produce critical side effects. Furthermore, when there are several candidate protocols in the individual protocol selection, we developed two exploration mechanisms for these candidates: (1) a sequential exploration mode and (2) a mixed exploration mode.
Both methods have been proposed and tested in the context of a European project dedicated to information filtering. They proved their usefulness to efficiently manage the multiple interactions that take place between agents. In this paper, we don't provide the results we obtained from the application of these methods, since they need to be interpreted and compared to static selection cases. In the bargain, our aim was to describe the theoretical basis of a dynamic protocol selection method. Our method intensively benefits from the agents' capacity to interpret, relate and update the models embedded inside them.
cs0501036 | 2151384127 | In this paper we describe a method which allows agents to dynamically select protocols and roles when they need to execute collaborative tasks. | To date, there have been some efforts to overcome this limitation. @cite_0 introduces more flexibility in agents' coordination, but it only applies to the planning mechanisms of the individual agents. @cite_1 also proposes a framework based on multi-agent Markov decision processes. Rather than identifying a coordination mechanism which suits a situation best, this work deals with optimal reasoning within the context of a given coordination mechanism. @cite_6 proposed a framework that enables autonomous agents to dynamically select the mechanism they employ in order to coordinate their inter-related activities. Using this framework, agents select their coordination mechanisms by reasoning about the rewards they can obtain from collaborative task execution, as well as the probability for these tasks to succeed. | {
"abstract": [
"",
"Coordination of agent activities is a key problem in multiagent systems. Set in a larger decision theoretic context, the existence of coordination problems leads to difficulty in evaluating the utility of a situation. This in turn makes defining optimal policies for sequential decision processes problematic. We propose a method for solving sequential multi-agent decision problems by allowing agents to reason explicitly about specific coordination mechanisms. We define an extension of value iteration in which the system's state space is augmented with the state of the coordination mechanism adopted, allowing agents to reason about the short and long term prospects for coordination, the long term consequences of (mis)coordination, and make decisions to engage or avoid coordination problems based on expected value. We also illustrate the benefits of mechanism generalization.",
"This paper presents a framework that enables autonomous agents to dynamically select the mechanism they employ in order to coordinate their inter-related activities. Adopting this framework means coordination mechanisms move from the realm of being imposed upon the system at design time, to something that the agents select at run-time in order to fit their prevailing circumstances and their current coordination needs. Empirical analysis is used to evaluate the effect of various design alternatives for the agent's decision making mechanisms and for the coordination mechanisms themselves."
],
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_6"
],
"mid": [
"",
"1484740474",
"1607001533"
]
} | Enabling Agents to Dynamically Select Protocols for Interactions | Generally, the interaction protocols which support agents collaborative tasks' execution are imposed upon multiagent systems (MAS) at design time. This static protocols selection severely limits the system's openness, the dynamic behaviours agents can exhibit, the integration of new protocols, etc. For example, consider a collaborative task which can be executed following varied methods either by means of a Request protocol [5] (an identified agent exhibiting specific skills is requested to perform the task) or a Contract Net protocol (CNP) [6] (a competition holds between some identified agents in order to find out the best one to perform the task). Since the MAS is open and CNP looks for the best contractor, it will undoubtedly be preferred to Request for such a task in absence of constraints such as execution delay. Thus, selecting Request at design-time prevents the initiator agent from benefiting a better processing of this task. Moreover, the set of protocols used in multi-agent interactions is increasingly enlarging, and a static protocol selection will only consider the protocols the designer knows even if these agents have the capacity of interpreting and executing other protocols unknown at design time. To overcome this limitation, we should enable agents to dynamically select protocols in order to interact.
So far, there have been some efforts [1,4] to enable agents to dynamically select the roles they play during interactions, using Markov decision processes, planning or even probabilistic approaches. However, they do not suit protocol-based coordination mechanisms. Indeed, as protocols are partially ordered sequences of pre-formatted message exchanges, selecting them to execute a task requires that their descriptions match that of the collaborative task. The solutions proposed so far do not explicitly focus on protocols and do not check such compliance either. To address this void, we developed a method which enables agents to dynamically select protocols and roles in order to interact. Our method pushes the usual assumptions about multi-agent interactions a step further. First, we consider that some interaction protocols can be known only at runtime. Thus, starting from a minimal version, agents' interaction models can grow by integrating these protocols from safe and authenticated libraries of interaction protocols when needed. Second, we consider that agents may have different designers, and therefore may embed different protocol specification formalisms. Furthermore, an interaction protocol is a triple {R, M, Ω} where R is a set of interacting roles which can be of two types: initiator and participant. An initiator role is the unique role in charge of starting the protocol whereas a participant role is any role taking part in the protocol. Consequently, an initiator agent will be any agent playing the initiator role in an interaction while a participant agent will be any agent taking up a participant role. In addition, protocols used in MAS can be classified in three categories: (1) 1-1 protocols, which are made of two roles (initiator and participant), each having only one instance (e.g., Request); (2) 1-1 N protocols, again made of two roles but with several instances of the participant (e.g., CNP); (3) 1-N protocols, which have several distinct participant roles, each having only one instance (e.g., an auction protocol with one buyer, one seller and one manager). Subsequently, we define the dynamic protocol selection process for each of these categories.
Rather than explicitly indicating the protocols and roles to use for all the agents which will execute the desired interaction, we suggest that agent programmers simply mention the collaborative task's description in the initiator agent's source code. As soon as an agent locates such a description, it identifies some potential participant agents and thereafter fires the dynamic protocol selection process, taking up the initiator role. We assume that the MAS is provided with procedures to identify potential participant agents. Agents can dynamically select protocols in two possible ways. First, the initiator agent and all the potential participant agents collectively select a protocol and assign roles to each agent inside this protocol. This is the joint protocol selection method, which assumes that agents trust one another and do not dread publishing their knowledge and preferences. On the other hand, agents can individually select protocols and roles and start the desired interaction. This is the individual protocol selection method, which assumes that agents do not trust one another and/or that the system is heterogeneous (several sub-systems with different protocol formalisms are plugged together). In this method, as the selected roles may mismatch, agents should anticipate errors in order to guarantee consistent message exchange. We focus on the wrong message structure error, which indicates that something is wrong in the message structure (performative, content, language, ontology, etc.), and the wrong message content error, which indicates that the message's content does not match the expected content pattern. We argue that our method introduces more flexibility in protocol execution, fosters agents' autonomy, favours their dynamic behaviours and suits MAS openness. In this paper we describe both methods and detail their principles, concepts and algorithms. We exemplify them on a web document filtering MAS composed of (1) query agents representing the queries users formulate, (2) document agents representing the documents retrieved from the web, and (3) rule agents corresponding to any linguistic rule invoked to compute document attributes (author(s), content, language, etc.).
The paper is organised as follows. Section 2 formally defines the protocol selection problem. Sections 3 and 4 detail our methods. Section 5 discusses some related work and Section 6 draws some conclusions.
Problem Description
Our purpose in this research is to ease protocol definition, implementation and use. Thus, we aim to free agent programmers from hard-coding the protocols and the exact roles to use every time their agents have to execute a collaborative task. Rather, they should only mention in the initiator agent's source code the description of the collaborative task to execute. Then, once an agent comes across such a description, it will launch the dynamic selection process involving potential participant agents. Concretely, given a collaborative task t_j which is to be executed by a set A = {a_1, a_2, ..., a_k} of agents, the selection problem is stated as: how can an agent select a protocol, and a role inside this protocol, to execute t_j? It consists in finding a protocol p and the roles r_1, r_2, ..., r_p each a_i should be enacting in this protocol in order to get t_j executed. We assume that each agent is provided with an interaction model I containing some configured protocols p_1, ..., p_m, some of which can be introduced at runtime, and, for each protocol p_i, a set of roles {r_1, r_2, ..., r_k} this agent can play during interactions based on p_i.
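To fix ideas, here is a minimal sketch, in Python, of the data involved in the selection problem. All the names (Role, Protocol, InteractionModel) are illustrative assumptions introduced for this sketch, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    name: str
    initiator: bool        # True for the unique initiator role

@dataclass(frozen=True)
class Protocol:
    name: str
    category: str          # "1-1", "1-1 N" or "1-N"
    roles: tuple           # the set R of interacting roles

@dataclass
class InteractionModel:
    # For each configured protocol, the roles this agent can play in it.
    playable: dict = field(default_factory=dict)   # Protocol -> set of Roles

    def add_protocol(self, protocol, roles):
        """Integrate a protocol at run-time, e.g. from a protocol library."""
        self.playable.setdefault(protocol, set()).update(roles)

    def initiator_protocols(self):
        """Protocols in which this agent can take up the initiator role."""
        return [p for p, rs in self.playable.items()
                if any(r.initiator for r in rs)]
```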
In the following two sections, we elaborate on our solution to the selection problem.
Joint Protocol Selection
Once an agent locates the description of a collaborative task, it determines a set of protocols suited to execute this task, which it refines to a subset of protocols whose initiator roles are configured inside its interaction model. Moving from collaborative task models to protocol models requires the agents to analyse both models and detect their adequacy. In this paper we assume that agents are able to examine task and protocol models and relate the former to the latter. After moving from task to protocols, the initiator agent should identify all the potential participant agents for the determined protocols. Both steps provide the initiator agent with a sparse matrix: potential participants linked to protocols. These potential participant agents are then contacted, either at the same time or one after the other, and are required to validate a protocol. To distinguish the messages exchanged during the joint selection from those exchanged during normal interactions, we propose some performatives which we informally describe below:
call-for-collaboration the sender of this performative invites the receiver to take part in a protocol described in the content field.
unable-to-select the sender of this performative informs the receiver that it cannot play a participant role in the related protocol. The in-reply-to and reply-with fields help relate this message to a prior call-for-collaboration. The reasons why an agent may reply with this performative, though identified as a potential participant for the protocol, are (1) its autonomy, since it may not want to execute this protocol at this moment, and (2) errors in some fields.
stop-selection the sender of this performative asks the receiver to stop the selection process this message is linked to.
ready-to-select the sender of this performative notifies the receiver of the participant roles it can enact regarding the protocol description it received. All the participant roles of any protocol compatible with the current one can be listed. This grouping not only reveals, by order of preference, the roles the sender commits to playing, but also avoids going back and forth about protocols sharing the same background. Roles of protocols are compatible when they can execute safe interactions despite the differences in their respective specifications. As an example, the initiator role of CNP can interact with either the participant role of CNP or that of Iterated CNP (ICNP [5]), whereas the initiator of ICNP cannot interact with the participant of CNP because of the possible iterations.
notify-assignment the sender of this performative informs the receiver about the role the latter has been assigned in the jointly selected protocol. The assigned role is one among those the receiver previously committed to playing.
Whatever protocol category the selection is concerned with, the joint selection message exchange sequence can be described as follows (a sketch of the initiator's side follows the list):
1. The initiator agent sends a call-for-collaboration encapsulating a protocol's description.
2. Each participant agent can reply with an unable-to-select, driving the initiator agent to stop the selection process between both agents by sending a stop-selection.
3. Each participant agent can also reply with a ready-to-select. In this case, the initiator agent parses the participant's proposals and adopts one of them by sending a notify-assignment, or rejects all the proposals by sending a stop-selection.
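As an illustration, one initiator/participant leg of this dialogue could be rendered as below; send and receive are assumed messaging primitives, timeouts and concurrency are omitted, and acceptable_roles stands for the roles the initiator can accept for the protocol at hand. This is a sketch under those assumptions, not the paper's actual code.

```python
def joint_select_with(initiator, participant, protocol_description, acceptable_roles):
    """Run the three-step joint-selection exchange with one participant.
    Returns the role assigned to the participant, or None if stopped."""
    initiator.send(participant, "call-for-collaboration", protocol_description)
    performative, content = initiator.receive(participant)
    if performative == "unable-to-select":
        initiator.send(participant, "stop-selection", None)
        return None
    # content lists, by order of preference, the roles the participant
    # commits to playing (possibly from compatible protocols).
    for role in content:
        if role in acceptable_roles:
            initiator.send(participant, "notify-assignment", role)
            return role
    initiator.send(participant, "stop-selection", None)   # no proposal retained
    return None
```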
In the remainder of this section we detail the joint selection method for each class of protocol.
Inside the Joint Protocol Selection
1-1 Protocols
In the dynamic 1-1 protocol selection, the aim of the initiator agent is to quickly find a solution, a couple (a_i, p_j)
where a_i is one of the potential participant agents formerly identified and p_j one of the 1-1 protocols determined for the current task. At the heart of the solution search lies the exploration of the matrix. Hence, the initiator agent explores the matrix by traversing either protocols or potential participant agents. In the protocol-oriented exploration, the initiator selects a protocol, iterates through the set of agents it identified for this protocol, and retains one that fits the protocol. As soon as such an agent is found, the selection process successfully completes. Otherwise, the iteration proceeds until there is no more protocol to select. Analogously, in the agent-oriented exploration the initiator selects an agent and delves into its protocol set looking for one they can execute together. Whatever exploration the initiator adopts, it should overcome the matrix sparsity by selecting as the next element (protocol or agent) the one holding the least sparse vector.
Consider, by way of illustration, a query agent q_1 which is requested to execute a task t_1: "find a document exhibiting the following characteristics: (language='English', content='plain/text')". Protocol and potential participant agent identification for t_1 leads to the matrix given in table 1, where a cross in a cell [l,c] indicates that the agent at column c can play a participant role in the protocol at line l. This matrix reveals that q_1 has identified IPS (an Incremental Problem Solving protocol where a problem submitter -the initiator- and its solver -the participant- progressively find a solution to a given problem) and Request. A diagrammatic representation of IPS is given in figure 1. In addition, q_1 identified seven potential participant document agents d_1...d_7. q_1 adopts a protocol-oriented exploration and selects IPS (the least sparse vector). Therefore, it will try to validate IPS with d_1, d_2, d_4, d_5 or d_7. If q_1 receives a ready-to-select in reply to a prior call-for-collaboration, it explores the list of preferred roles and, as soon as it finds the participant role of IPS or Request, it notifies its agreement with a notify-assignment. If q_1 identified more protocols than these and a role of any of them is pointed out in the participant's ready-to-select, this protocol will be adopted. In case it did not find any role it expects, or it received an unable-to-select, q_1 replies with a stop-selection and, as long as there are still unexplored potential participant agents for IPS, q_1 will continue contacting them. If no solution has been found once the potential participants' set has been thoroughly explored for a protocol, the same process is repeated with another protocol if there is any. In case no solution has been found and no more protocols and participants can be explored, the dynamic protocol selection fails and the subsequent task remains unexecuted. (A sketch of this exploration follows table 1.)
Table 1. Protocol/participant matrix for t_1: columns d_1...d_7, rows IPS and Request; a cross marks that the document agent can play a participant role in the protocol (IPS has crosses at d_1, d_2, d_4, d_5 and d_7; Request has four crosses).
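The protocol-oriented exploration could be sketched as follows; the sparse matrix is represented as a mapping from each candidate protocol to its set of potential participants, and try_validate stands for the joint dialogue sketched above. These names are assumptions made for illustration only.

```python
def explore_protocol_oriented(matrix, try_validate):
    """matrix: dict mapping each candidate protocol to the set of potential
    participant agents identified for it.  try_validate(protocol, agent)
    returns a role on success, None otherwise.
    Returns a couple (agent, protocol), or None if the selection fails."""
    # Overcome sparsity: start with the least sparse protocol vector,
    # i.e. the protocol with the most potential participants.
    for protocol in sorted(matrix, key=lambda p: len(matrix[p]), reverse=True):
        for agent in matrix[protocol]:
            if try_validate(protocol, agent) is not None:
                return agent, protocol      # selection successfully completes
    return None                             # the task remains unexecuted
```

On the example above, the loop would pick IPS first and contact d_1, d_2, d_4, d_5 and d_7 in turn before falling back to Request.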
1-1 N Protocols
A solution to the dynamic 1-1 N protocol selection problem is a couple (A, p_j) where A is the set of participant agents and p_j the protocol to use. For this category of protocols the matrix is explored only in a protocol-oriented way, since all the identified agents for a protocol are contacted at the same time. Once all the contacted agents have replied, the initiator agent should select a common protocol for the agents (generally for a part of them) which replied with a ready-to-select. We devised several strategies to perform this selection but here we only describe one, the largest set strategy, which looks for the role most agents selected. If there exists a role r_i that all the agents pointed out, then this one is selected for all the agents which replied with a ready-to-select. Otherwise, we look for a role that will involve the largest set of agents. Therefore, we construct an array where each index i contains the collection of all the roles which exactly i agents candidated for. As we did not succeed in finding a role for all the n agents which sent a ready-to-select, the highest index in this array points to a collection of roles that exactly n − 1 agents candidated for. We traverse the array from the highest index down to the lowest. While exploring index k, if the collection of roles is not empty, we build for each role r_i the set e_i of agents which candidated for it.
1. If all the e_i are equal, then we randomly select a role r_p and adopt its corresponding e_p as the participant agents' set. The solution is then e_p together with the protocol r_p belongs to. After a solution has been found for an index k, we check whether some sets have been saved for a higher index. If no such sets were found, the final solution is the one at hand. Otherwise, we select from the latter the set whose intersection with the currently selected set is the largest. If no solution has been found, we iterate through the selection process, changing protocols. (A simplified sketch of this strategy follows.)
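Here is a simplified sketch of the largest set strategy, keeping only its core idea (the array indexed by candidate counts and the intersection refinement are folded into a single maximum). replies is assumed to map each agent that sent a ready-to-select to the set of roles it candidated for.

```python
from collections import defaultdict

def largest_set(replies):
    """Return (role, agents): the role which involves the largest set of
    agents, together with those agents. Ties are broken arbitrarily."""
    candidates = defaultdict(set)            # role -> agents proposing it
    for agent, roles in replies.items():
        for role in roles:
            candidates[role].add(agent)
    if not candidates:
        return None                          # nobody is ready to select
    role = max(candidates, key=lambda r: len(candidates[r]))
    return role, candidates[role]
```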
1-N Protocols
A solution to the dynamic 1-N protocol selection problem is a triple (A, p_j, m) where A is the set of participant agents, p_j the protocol to use and m an associative array mapping each agent to the role(s) it will play in the protocol. Here again, the matrix is explored only in a protocol-oriented way. The initiator agent a_i waits for all the participants' replies and gathers the ready-to-select messages. The roles are clustered according to the protocols they belong to, and the protocols which have not been identified by the initiator agent are eliminated. For each protocol p, a_i maps each role r to the set of agents which candidated for it: candidates(r) = ∪_k {a_k}. If candidates(r) = ∅, the protocol r belongs to is no longer considered in the selection process. Moreover, as there exist several participant roles in 1-N protocols, some of them may receive their first message from other participant roles. Thus, we introduce a new relation, father: given two roles r_1 and r_2 of a 1-N protocol, r_1 = father(r_2) means that r_1 is the sender of r_2's first message. For each protocol retained after the candidates sets construction, the initiator agent constructs a tree t whose nodes are the roles of the protocol. A node r_m is a child of another node r_n if r_n = father(r_m). t is traversed in a breadth-first way and, for each node r of t, an agent a is assigned to r from candidates(r). Assigning a role to an agent can be performed by any well-known resource allocation algorithm (e.g., election). This assignment is achieved for all the trees and the initiator agent uses a strategy to select one of the totally assigned trees. An improvement during the role assignment is to avoid situations where the same agent plays several roles in a protocol. Thus, when a candidates set is a singleton, its only agent is removed from all the other candidates sets it appears in, when these are not singletons. Likewise, while exploring t, once a role has been assigned to an agent, we should remove this agent from all the candidates sets it appears in, provided these are not singletons. The singleton criterion may guide a tree selection strategy. (A sketch of this assignment follows.)
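A sketch of the tree-based assignment for one retained protocol is given below; father maps each role to its parent role (None when the first message comes from the initiator), and a deterministic minimum stands in for a proper election algorithm. All of these hooks are illustrative assumptions.

```python
from collections import deque

def assign_tree(roles, father, candidates):
    """Breadth-first assignment of agents to the roles of one 1-N protocol.
    Returns a dict role -> agent, or None if some role cannot be filled."""
    children = {r: [c for c in roles if father[c] == r] for r in roles}
    queue = deque(r for r in roles if father[r] is None)   # tree roots
    assignment = {}
    while queue:
        role = queue.popleft()
        # Prefer agents not already cast, to avoid one agent playing
        # several roles; fall back to any candidate (singleton sets).
        free = candidates[role] - set(assignment.values())
        pool = free or candidates[role]
        if not pool:
            return None          # candidates(role) is empty: drop the protocol
        assignment[role] = min(pool)   # stand-in for an election (agents
                                       # assumed comparable, e.g. by name)
        queue.extend(children[role])
    return assignment
```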
Beyond the joint protocol selection
The joint protocol selection mechanism does not apply to all interaction contexts. One evident issue is that generic protocols are assumed to be uniformly specified in the MAS. However, the only aspect that remains invariable in generic protocol specifications is the description of exchanged messages, which is imposed by communication languages (KQML, FIPA ACL) and embedded in the protocol specification. Hence, there is no guarantee that generic protocols are specified in a unique formalism, and agents might fail to interpret some of the formalisms used to specify generic protocols in the MAS. In particular, plugging heterogeneous sub-systems together in a MAS increases the risk of multiple generic protocol specification formalisms. The joint protocol selection then falls short in a MAS where generic protocols are specified in several formalisms and agents are unable to interpret all those formalisms. In addition, there are also situations where agents do not trust one another. Then, basing protocol selection on specification exchange becomes unsafe.
To address these drawbacks, we developed an individual protocol selection method.
Individual Protocol Selection
This selection form is carried out concomitantly with the targeted interaction. As in the joint protocol selection, the initiator agent is in charge of starting the selection process when it locates a collaborative task's description. It finds some protocols which comply with the task's description and in which it can play an initiator role. The initiator agent may adopt a static behaviour during this selection by choosing one protocol among the candidates. In this case, the strategy it adopts is required to be fair. It may also be given the possibility to exhibit dynamic behaviours by changing protocols in order to address occurring inconsistencies. In this paper we consider the first case.
The initiator agent sends the initial message m_0 of the selected protocol p_i to one or several potential participant agents. m_0 actually denotes a need for a new interaction, and any agent which receives it selects a participant role r which starts with m_0's reception. Hence, each participant agent a constructs the collection of candidate roles r, which we refer to in the remainder of this section as collection(a, t_k). Roles are then selected from collection(a, t_k) and instantiated so that the interaction can take place. The individual protocol selection, although more sophisticated and powerful, can lead to interaction inconsistencies. Indeed, as individually selected roles may mismatch, the exchanged messages' content or structure (performative, ontology, language, etc.) may be wrong. Thus, we provide agents with techniques to anticipate such errors by checking incoming messages for structure and content compliance. When collection(a, t_k) is a singleton, the only role is instantiated in order to interact. If any error occurs during the interaction, no recovery is possible. Dynamically selecting protocols is more appealing when collection(a, t_k) contains several roles. In this case we explore collection(a, t_k) either sequentially or in parallel. In this paper, we only describe the individual 1-1 protocol selection, since the selection mechanism is quite similar for the other two types of protocols and only some extensions are required to fit their specificities.
Sequential Roles Instantiation
For the purpose of starting an interaction or replacing a failing role during an interaction, a participant agent randomly (or using another strategy we will define later) selects roles from the collection one after the other, until there is no available role left to select or the interaction eventually ends safely. Once selected, roles are removed from the collection in order to avoid selecting them anew during the same interaction. When a message is wrong, the participant agent must recover from this error by replacing the failing role. The recovery process relies on the interaction's journal, where agents log the executed methods and the related events (input: events which fired the method, and output: events generated by the method's execution). Each method and its events form a record (a sketch of this structure is given after the following steps). The following four steps define the error recovery process:
1. If an agent detects an error, it notifies its interlocutor; 2. a then purges collection(a, t_k) and selects another role;
3. a computes the point where the interaction should continue in the newly selected role and notifies the initiator;
4. Both agents update their journals by erasing the wrong records and the interaction proceeds.
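The journal could be represented as follows; this is a minimal sketch of the record structure assumed by the recovery process, not the paper's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One journal entry: an executed method and its surrounding events."""
    method: str      # name of the executed method
    inputs: tuple    # events which fired the method (e.g. message reception)
    outputs: tuple   # events generated by the method's execution

class Journal:
    def __init__(self):
        self.records = []

    def log(self, method, inputs, outputs):
        self.records.append(Record(method, tuple(inputs), tuple(outputs)))

    def erase_from(self, index):
        """Erase the wrong records during error recovery (step 4)."""
        del self.records[index:]
```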
The participant role replacement during error recovery can require the initiator agent to roll some actions back in order to synchronise with the newly instantiated role. To purge collection(a, t_k), a proceeds as follows (a sketch is given after the list):
1. Removes from collection(a, t_k) all the roles whose description, from the beginning of the role up to the point where the error occurred, does not match the journal;
2. If the message structure is wrong and the error has been detected by the initiator agent, removes from collection(a, t_k) the roles that generated the wrong message. If the error has been detected by a itself, it removes from collection(a, t_k) the roles that cannot receive the claimed erroneous message;
3. If the message content is wrong and the error was detected by the initiator agent, removes from collection(a, t_k) the roles that cannot generate the same message structure at the point where the error occurred, and removes from collection(a, t_k) the roles that use the same method as the one causing the error. If the error was detected by the participant agent, removes from collection(a, t_k) the roles that do not receive the same message structure at the point where the error occurred, and also removes from collection(a, t_k) the roles that do not receive the same message content at the point where the error occurred.
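The purge could be sketched as below. Every hook on a role (matches_prefix, can_send, ...) is an assumption introduced for the illustration; error is assumed to carry the offending message, its kind and the method that produced it.

```python
def purge(collection, journal, error, detected_by_initiator):
    """Apply rules 1-3 above and return the surviving candidate roles."""
    kept = [r for r in collection if r.matches_prefix(journal)]        # rule 1
    if error.kind == "structure":                                      # rule 2
        if detected_by_initiator:
            kept = [r for r in kept if not r.can_send(error.message)]
        else:
            kept = [r for r in kept if r.can_receive(error.message)]
    else:                                                              # rule 3
        if detected_by_initiator:
            kept = [r for r in kept
                    if r.can_send_structure_of(error.message)
                    and not r.uses_method(error.method)]
        else:
            kept = [r for r in kept
                    if r.can_receive_structure_of(error.message)
                    and r.can_receive_content_of(error.message)]
    return kept
```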
Since it is no use checking the content when the message structure is wrong, structure compliance is checked before content compliance. When a role does not comply with the current execution, it is removed from collection(a, t_k). Role removal actually consists in marking roles so that they can no longer be instantiated in the current interaction. Then, from the updated collection(a, t_k), a selects a new role following the strategy described below (a sketch follows the list):
1. If the message content is wrong: (a) for each role of collection(a, t_k), construct the set of messages (generated or received) at the point where the error occurred; (b) withdraw from these sets the message that caused the error; (c) compare these subsets and select the weakest one. The role the selected subset originates from is then instantiated. The weakest message subset is the one containing the highest number of weak messages. Weak messages are those which lead to interaction termination; these messages are potentially weaker than those which continue the interaction. The reason why we prefer the weakest message subset is that we wish to avoid producing a message other than the "structurally" correct one we generated previously. When there are several subsets candidate for selection, or when there are none, a role is randomly selected.
2. If the message structure is wrong: randomly select a role in collection(a, t_k).
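A sketch of this role-selection strategy follows; is_weak is an assumed predicate telling whether a message leads to interaction termination, and messages_at_error is an assumed hook returning the messages a role handles at the error point.

```python
import random

def select_next_role(collection, error_message, is_weak):
    """Prefer the role whose message subset (minus the offending message)
    contains the most interaction-terminating messages; break ties, or
    handle a wrong structure, by random choice."""
    best_roles, best_score = [], -1
    for role in collection:
        subset = set(role.messages_at_error()) - {error_message}
        score = sum(1 for m in subset if is_weak(m))   # count weak messages
        if score > best_score:
            best_roles, best_score = [role], score
        elif score == best_score:
            best_roles.append(role)
    return random.choice(best_roles) if best_roles else None
```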
Once a new role has been selected, the participant agent might expect to continue its execution from the point where the error occurred. However, doing so can bring inconsistencies in the interaction execution because the roles, though enacted by the same agent, do not necessarily use the same methods. These inconsistencies can be avoided by looking for the methods of the new role which follow the same sequence order as in the journal, starting from the beginning. This set of methods will not be re-executed. We represent the methods of the new role as nodes of a directed graph in which an edge m_i → m_j means that method m_j can be executed immediately after m_i completes, provided the conditions for its execution hold. Algorithm 1 achieves this computation and returns the recovery points for both the initiator and the participant. In this algorithm, i is the number of the latest message the initiator should consider it has sent to the participant, and j is the point from which the participant is to start executing its new role. The third event type considered in the journal (data value change, in addition to message emission and reception) accounts for the difference between i and j. Once the initiator receives i, it looks for the record in its journal representing the i-th message it sent to the participant. All the records following this one will be erased from the journal. The participant also updates its journal in quite the same way, based on j. (A sketch of this computation follows.)
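Algorithm 1 itself is not reproduced in this text; the following sketch conveys the idea under stated assumptions: graph maps each method of the new role to the methods executable right after it, journal records are assumed here to carry the executed method and an event kind ("sent", "received" or "data", a simplification of the record structure sketched earlier), and start is a virtual entry node.

```python
def recovery_points(journal, graph, start):
    """Walk the new role's method graph along the journal from the start;
    the matched prefix will not be re-executed. Returns (i, j): i is the
    number of messages the initiator is still considered to have sent,
    j the journal index the participant restarts from."""
    i = j = 0
    current = start
    for record in journal.records:
        if record.method not in graph.get(current, ()):
            break                       # the new role diverges here
        current = record.method
        j += 1                          # this record needs no re-execution
        if record.kind == "received":   # a message the initiator had sent
            i += 1
    return i, j
```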
Suppose d_1 wants to identify the language its document is written in; this task (t_2) requires an interaction with a rule agent. We assume d_1 finds a protocol which starts with an ask-one emission and expects a tell. Suppose d_1 contacted a rule agent c_1 whose interaction model is partially depicted in figure 2. In this figure, for example, the given portion of r_1 can be interpreted as: r_1 receives an ask-one and can reply with an insert or a sorry. collection(c_1, t_2) = {r_1, r_2, r_3, r_4}.
Figure 2. Agent c_1's interaction model
• Whatever role c_1 selects, if it replies with a sorry the interaction will end, maybe prematurely - the first message has not been validated yet. To make sure it is not so, d_1 issues a warning: "May be premature interaction termination!". c_1 then tries to select another role, and the interaction proceeds or definitely stops.
• If c_1 selected r_4 and sent a tell whose content is wrong, d_1 notifies c_1 of a wrong message content error. c_1 then stops r_4, purges its collection(c_1, t_2) by removing r_1 (since it cannot generate a tell at the error location), selects r_3 because it corresponds to the weakest subset, and computes the recovery points: i = j = Rec#1. c_1 updates its journal and replies anew to the ask-one.
In order to avoid inconsistencies at the end of interactions, we require both agents to explicitly notify each other of the protocol's termination. Instead of selecting roles on the basis of the messages they can generate, it makes sense to select them on the basis of the messages they really generated. This is possible only if all candidate roles are instantiated at the same time. This parallel role instantiation is known only to the participant agent; the initiator agent still has the perception of a sequential role instantiation.
Mixed Roles Instantiation
All the roles a identified in collection(a, t_k) are instantiated at the same time. They handle the received message and generate their reply messages, which are stored in a control zone (C_z); C_z also contains the messages destined to the currently activated roles. The roles are then deactivated. Only one message m_k is selected from C_z and sent to the initiator. This selection can be performed following several strategies. For example, the participant agent can randomly select a message among those which do not shorten the interaction. Therefore, if an insert, a sorry, an error and a tell are generated in reply to an ask-one, insert and tell will be preferred to sorry and error, and the random selection will be performed between the first two messages. After a message m_k has been selected, all the roles which generated a message of the same structure and content are activated. In this instantiation mode, when an error occurs, the participant agent recovers from it by stopping the wrong roles and by reactivating one or several other roles. Thus, if an error occurs on m_k: 1. All the activated roles are stopped; 2. If the message structure is wrong: all the m_k messages as well as the messages having the same structure are removed from C_z and their roles are stopped. The participant selects another message m_k' following the same principle as for m_k's selection.
3. If the message content is wrong: all the m_k messages as well as those having the same content pattern are removed from C_z and their roles are stopped. The participant selects another message m_k' having the same structure but a different content pattern.
When some roles remain activated, they all generate their messages. If the messages have the same structure and content, all these roles stay activated. Otherwise, only one message is selected and all the roles whose message has not been selected are deactivated. If all the previously activated roles have been stopped, the participant agent reactivates the most recently deactivated role, while guarding against early interaction termination. When there is more than one such role, they are all reactivated. The participant role reactivation might require the initiator to roll some actions back to a recovery point. An algorithm similar to algorithm 1 performs the recovery point computation for both the initiator and the participant. The algorithm is applied for each role, considering the role's current execution, and the final recovery point is the earliest. (A sketch of one round of this mechanism follows.)
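One round of the control-zone mechanism might look as follows; handle, kind and equivalent are assumed hooks on roles and messages, introduced only for this sketch.

```python
import random

def control_zone_round(active_roles, incoming):
    """Every activated role handles the incoming message; one reply is
    selected from the control zone, preferring messages which do not
    shorten the interaction, and only the roles whose reply has the same
    structure and content stay activated."""
    zone = [(role, role.handle(incoming)) for role in active_roles]
    prolonging = [(r, m) for r, m in zone if m.kind not in ("sorry", "error")]
    pool = prolonging or zone               # fall back if all replies are weak
    _, chosen = random.choice(pool)
    still_active = [r for r, m in zone if m.equivalent(chosen)]
    return chosen, still_active
```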
Conclusion
Designing agents for open and dynamic environments is still a challenging task, especially with regard to protocol-based interactions. Two main concerns arise from interaction modelling and design in such systems. First, how are interactions based on generic protocols configured so that consistent message exchange can take place? Second, is it sensible that designers always decide which protocols and roles to use every time an interaction is asked for? We address both issues by developing several methods. In this paper we focus on the second concern. We argued that, due to openness and dynamic behaviours, more flexibility is needed in protocol selection. Furthermore, in the context of complex applications demanding multi-protocol agents, moving from static to dynamic protocol selection greatly increases such systems' efficiency and properly handles the situation, tightly related to openness, where not all the protocols are known at design time. Thus, we enabled agents to dynamically select protocols according to the prevailing circumstances.
One outcome of the dynamic protocol selection is that the protocols to use are no longer hard-coded in all the agents' source code. Rather, programmers mention collaborative task descriptions in the initiator agent's source code only, making the latter in charge of firing the interaction. Agents are given two ways to select protocols. First, the initiator agent and all (or a part of) the potential participant agents it identified can join together and share information and preferences about the protocols at hand, in order to select a protocol and assign a role to each agent. Second, agents are given the possibility to individually select their protocols and roles while anticipating errors. We focus on two types of errors: wrong message structure and wrong message content. As role replacements are performed as soon as an anomaly is detected, we constrain actions executed during interactions to be reversible and not to produce critical side effects. Furthermore, when there are several candidate protocols in the individual protocol selection, we developed two exploration mechanisms for these candidates: (1) a sequential exploration mode and (2) a mixed exploration mode.
Both methods have been proposed and tested in the context of a European project dedicated to information filtering. They proved their usefulness in efficiently managing the multiple interactions that take place between agents. In this paper, we do not provide the results we obtained from the application of these methods, since they need to be interpreted and compared to static selection cases. Moreover, our aim was to describe the theoretical basis of a dynamic protocol selection method. Our method intensively benefits from the agents' capacity to interpret, relate and update the models embedded inside them. | 5,767
cs0412012 | 2951615240 | This report presents Jartege, a tool which allows random generation of unit tests for Java classes specified in JML. JML (Java Modeling Language) is a specification language for Java which allows one to write invariants for classes, and pre- and postconditions for operations. As in the JML-JUnit tool, we use JML specifications on the one hand to eliminate irrelevant test cases, and on the other hand as a test oracle. Jartege randomly generates test cases, which consist of a sequence of constructor and method calls for the classes under test. The random aspect of the tool can be parameterized by associating weights to classes and operations, and by controlling the number of instances which are created for each class under test. The practical use of Jartege is illustrated by a small case study. | Our work has been widely inspired by the JML-JUnit approach @cite_27 . The JML-JUnit tool generates test cases for a method, which consist of combinations of calls of this method with various parameter values. The tester must supply the object invoking the method and the parameter values. With this approach, interesting values could easily be forgotten by the tester. Moreover, as a test case only consists of one method call, it is not possible to detect errors which result from several calls of different methods. Finally, the JML-JUnit approach compels the user to construct the test data, which may require the call of several constructors. Our approach thus has the advantage of being more automatic, and of being able to detect more potential errors. | {
"abstract": [
"Writing unit test code is labor-intensive, hence it is often not done as an integral part of programming. However, unit testing is a practical approach to increasing the correctness and quality of software; for example, the Extreme Programming approach relies on frequent unit testing. In this paper we present a new approach that makes writing unit tests easier. It uses a formal specification language's runtime assertion checker to decide whether methods are working correctly, thus automating the writing of unit test oracles. These oracles can be easily combined with hand-written test data. Instead of writing testing code, the programmer writes formal specifications (e.g., pre- and postconditions). This makes the programmer's task easier, because specifications are more concise and abstract than the equivalent test code, and hence more readable and maintainable. Furthermore, by using specifications in testing, specification errors are quickly discovered, so the specifications are more likely to provide useful documentation and inputs to other tools. We have implemented this idea using the Java Modeling Language (JML) and the JUnit testing framework, but the approach could be easily implemented with other combinations of formal specification languages and unit test tools."
],
"cite_N": [
"@cite_27"
],
"mid": [
"2483314354"
]
} | 0 |
||
cs0410074 | 2952920456 | We propose a simple distributed hash table called ReCord, which is a generalized version of Randomized-Chord and offers improved tradeoffs in performance and topology maintenance over existing P2P systems. ReCord is scalable and can be easily implemented as an overlay network, and offers a good tradeoff between the node degree and query latency. For instance, an @math -node ReCord with @math node degree has an expected latency of @math hops. Alternatively, it can also offer @math hops latency at a higher cost of @math node degree. Meanwhile, simulations of the dynamic behaviors of ReCord are studied. | Plaxton @cite_2 proposed a distributed routing protocol based on hypercubes for a static network with a given collection of nodes. Plaxton's algorithm uses a digit-by-digit identifier resolution technique to locate shared resources on an overlay network in which each node maintains only a small-sized routing table. Pastry @cite_3 and Tapestry @cite_1 use Plaxton's scheme in a dynamic distributed environment. The difference between them is that Pastry uses a prefix-based routing scheme, whereas Tapestry uses a suffix-based one. The number of bits per digit for both Tapestry and Pastry can be reconfigured but it remains fixed during run-time. Both Pastry and Tapestry can build the overlay topology using proximity neighbor selection. However, it is still unclear whether there is any better approach to achieve globally effective routing. | {
"abstract": [
"In today’s chaotic network, data and services are mobile and replicated widely for availability, durability, and locality. Components within this infrastructure interact in rich and complex ways, greatly stressing traditional approaches to name service and routing. This paper explores an alternative to traditional approaches called Tapestry. Tapestry is an overlay location and routing infrastructure that provides location-independent routing of messages directly to the closest copy of an object or service using only point-to-point links and without centralized resources. The routing and directory information within this infrastructure is purely soft state and easily repaired. Tapestry is self-administering, faulttolerant, and resilient under load. This paper presents the architecture and algorithms of Tapestry and explores their advantages through a number of experiments.",
"This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer ap- plications. Pastry performs application-level routing and object location in a po- tentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a to scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties.",
"Consider a set of shared objects in a distributed network, where several copies of each object may exist at any given time. To ensure both fast access to the objects as well as efficient utilization of network resources, it is desirable that each access request be satisfied by a copy \"close\" to the requesting node. Unfortunately, it is not clear how to efficiently achieve this goal in a dynamic, distributed environment in which large numbers of objects are continuously being created, replicated, and destroyed. In this paper, we design a simple randomized algorithm for accessing shared objects that tends to satisfy each access request with a nearby copy. The algorithm is based on a novel mechanism to maintain and distribute information about object locations, and requires only a small amount of additional memory at each node. We analyze our access scheme for a class of cost functions that captures the hierarchical nature of wide-area networks. We show that under the particular cost model considered: (i) the expected cost of an individual access is asymptotically optimal, and (ii) if objects are sufficiently large, the memory used for objects dominates the additional memory used by our algorithm with high probability. We also address dynamic changes in both the network as well as the set of object copies."
],
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"1650675509",
"2167898414",
"2000876023"
]
} | ReCord: A Distributed Hash Table with Recursive Structure | 0 |
|
cs0410074 | 2952920456 | We propose a simple distributed hash table called ReCord, which is a generalized version of Randomized-Chord and offers improved tradeoffs in performance and topology maintenance over existing P2P systems. ReCord is scalable and can be easily implemented as an overlay network, and offers a good tradeoff between the node degree and query latency. For instance, an @math -node ReCord with @math node degree has an expected latency of @math hops. Alternatively, it can also offer @math hops latency at a higher cost of @math node degree. Meanwhile, simulations of the dynamic behaviors of ReCord are studied. | It is difficult to say which one of the above proposed DHTs is "best". Each routing algorithm offers some insight into routing in overlay networks. One appropriate strategy is to combine these insights and formulate an even better scheme @cite_0 . | {
"abstract": [
"Even though they were introduced only a few years ago, peer-to-peer (P2P) filesharing systems are now one of the most popular Internet applications and have become a major source of Internet traffic. Thus, it is extremely important that these systems be scalable. Unfortunately, the initial designs for P2P systems have significant scaling problems; for example, Napster has a centralized directory service, and Gnutella employs a flooding based search mechanism that is not suitable for large systems."
],
"cite_N": [
"@cite_0"
],
"mid": [
"1521538824"
]
} | ReCord: A Distributed Hash Table with Recursive Structure | 0 |
|
quant-ph0407221 | 2026103473 | Quantum information processing is at the crossroads of physics, mathematics and computer science. It is concerned with what we can and cannot do with quantum information that goes beyond the abilities of classical information processing devices. Communication complexity is an area of classical computer science that aims at quantifying the amount of communication necessary to solve distributed computational problems. Quantum communication complexity uses quantum mechanics to reduce the amount of communication that would be classically required. | There are other pseudo-telepathy games that are related to the magic square game. Adán Cabello's game @cite_31 @cite_26 does not resemble the magic square game at first sight. However, closer analysis reveals that the two games are totally equivalent! | {
"abstract": [
"A proof of Bell's theorem using two maximally entangled states of two qubits is presented. It exhibits a similar logical structure to Hardy's argument of nonlocality without inequalities''. However, it works for 100 of the runs of a certain experiment. Therefore, it can also be viewed as a Greenberger-Horne-Zeilinger-like proof involving only two spacelike separated regions.",
"Bell’s theorem [1] refutes local theories based on Einstein, Podolsky, and Rosen’s (EPR’s) “elements of reality” [2]. A recently introduced proof without inequalities [3] presents the same logical structure as that of Hardy’s proof [4], but exhibits a greater contradiction between EPR local elements of reality and quantum mechanics. Here a simpler version of the proof in [3] will be introduced. This new version parallels Mermin’s reformulation [5] of Greenberger, Horne, and Zeilinger’s (GHZ’s) proof [6] and, besides being simpler, it emphasizes the fact that [3] is also an “all versus nothing” [7] or GHZ-type proof of Bell’s theorem, albeit with only two observers. In addition, this new approach will allow us to derive an inequality between correlation functions which is violated by quantum mechanics. Moreover, this new version will also constitute the basis for a new stateindependent proof of the Kochen-Specker (KS) theorem [8]. The whole set of new results provides a wider perspective on the relations between the most relevant proofs of no hidden variables. Consider four qubits, labeled 1, 2, 3, 4, prepared in the state jc1234 1 j0011 2 j0110 2 j1001 1 j1100 , (1) which, as can be easily checked, is the product of two singlet states, jc 2 13 ≠j c 2 24. Let us suppose that qubits 1 and 2 fly apart from qubits 3 and 4, and that an observer, Alice, performs measurements on qubits 1 and 2, while in a spacelike separated region a second observer, Bob, performs measurements on qubits 3"
],
"cite_N": [
"@cite_31",
"@cite_26"
],
"mid": [
"2143828652",
"2105652832"
]
} | 0 |
||
quant-ph0407221 | 2026103473 | Quantum information processing is at the crossroads of physics, mathematics and computer science. It is concerned with what we can and cannot do with quantum information that goes beyond the abilities of classical information processing devices. Communication complexity is an area of classical computer science that aims at quantifying the amount of communication necessary to solve distributed computational problems. Quantum communication complexity uses quantum mechanics to reduce the amount of communication that would be classically required. | Also, Aravind has generalized his own magic square idea @cite_14 to a two-player pseudo-telepathy game in which the players share @math Bell states, @math being an arbitrary odd number larger than 1. | {
"abstract": [
"A proof of Bell’s theorem without inequalities and involving only two observers is given by suitably extending a proof of the Bell-Kochen-Specker theorem due to Mermin. This proof is generalized to obtain an inequality-free proof of Bell’s theorem for a set of n Bell states (with n odd) shared between two distant observers. A generalized CHSH inequality is formulated for n Bell states shared symmetrically between two observers and it is shown that quantum mechanics violates this inequality by an amount that grows exponentially with increasing n."
],
"cite_N": [
"@cite_14"
],
"mid": [
"1859341720"
]
} | 0 |
||
math0406379 | 2012604491 | We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in ℤ^d. We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|^{−s+o(1)} when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)^{Δ+o(1)} where Δ^{−1} := log_2(2d/s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp{r^{1/Δ+o(1)}} in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.) | Long-range percolation, of which our model is an example, originated in the mathematical-physics literature as a model that exhibits a phase transition even in spatial dimension one (e.g., Newman and Schulman @cite_1 , Schulman @cite_2 , Aizenman and Newman @cite_3 , Imbrie and Newman @cite_10 ). It soon became clear that @math and @math are two distinguished values; for @math the model is essentially mean-field (or complete-graph) like, for @math the behavior is more or less that of nearest-neighbor percolation. The regime @math turned out to be quite interesting; indeed, it is the only general class of percolation models with Euclidean (or amenable) geometry where one can prove absence of percolation at the percolation threshold (Berger @cite_7 ). In all dimensions, the model with @math has a natural continuum scaling limit. | {
"abstract": [
"We study the behavior of the random walk on the infinite cluster of independent long-range percolation in dimensions d= 1,2, where x and y are connected with probability ( ). We show that if d s>2d, then there is no infinite cluster at criticality. This result is extended to the free random cluster model. A second corollary is that when d≥& 2 and d>s>2d we can erase all long enough bonds and still have an infinite cluster. The proof of recurrence in two dimensions is based on general stability results for recurrence in random electrical networks. In particular, we show that i.i.d. conductances on a recurrent graph of bounded degree yield a recurrent electrical network.",
"Consider a one-dimensional independent bond percolation model withpj denoting the probability of an occupied bond between integer sitesi andi±j,j≧1. Ifpj is fixed forj≧2 and ( j )j2pj>1, then (unoriented) percolation occurs forp1 sufficiently close to 1. This result, analogous to the existence of spontaneous magnetization in long range one-dimensional Ising models, is proved by an inductive series of bounds based on a renormalization group approach using blocks of variable size. Oriented percolation is shown to occur forp1 close to 1 if ( j )jspj>0 for somes<2. Analogous results are valid for one-dimensional site-bond percolation models.",
"We consider one dimensional percolation models for which the occupation probability of a bond −Kx,y, has a slow power decay as a function of the bond's length. For independent models — and with suitable reformulations also for more general classes of models, it is shown that: i) no percolation is possible if for short bondsKx,y≦p =1. This dichotomy resembles one for the magnetization in 1 |x−y|2 Ising models which was first proposed by Thouless and further supported by the renormalization group flow equations of Anderson, Yuval, and Hamann. The proofs of the above percolation phenomena involve (rigorous) renormalization type arguments of a different sort.",
"The problem of long-range percolation in one dimension is proposed. The authors consider a one-dimensional bond percolation system with bonds connecting an infinite number of neighbours where the occupation probability for the nth nearest-neighbour bond pn varies as p1 ns. Using the transfer-matrix method, they find that when s>2 only the short-range percolation exists; namely the system percolates only when p1=1. A transition to long-range percolation is found at s=2 where the percolation threshold drops suddenly from the short-range value p1c=1 to the long-range value p1c=0.",
"We rigorously establish the existence of an intermediate ordered phase in one-dimensional 1 |x−y|2 percolation, Ising and Potts models. The Ising model truncated two-point function has a power law decay exponent θ which ranges from its low (and high) temperature value of two down to zero as the inverse temperature and nearest neighbor coupling vary. Similar results are obtained for percolation and Potts models."
],
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_10"
],
"mid": [
"2066678737",
"2325532214",
"1601823992",
"1997454227",
"2026423879"
]
} | 0 |
||
math0406379 | 2012604491 | We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in ℤ^d. We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|^{−s+o(1)} when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)^{Δ+o(1)} where Δ^{−1} := log_2(2d/s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp{r^{1/Δ+o(1)}} in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.) | Recently, long-range percolation has been invoked as a fruitful source of graphs with non-trivial growth properties. Our interest was stirred by the work of Benjamini and Berger @cite_13 who proposed (and studied) long-range percolation as a model of social networks. It is in this context that the graph distance scaling, and the volume growth, are of particular interest. Thanks to numerous contributions that followed @cite_13 , this scaling is now known for most values of @math and @math . Explicitly, for @math , a corollary to the main result of Benjamini, Kesten, Peres and Schramm @cite_0 asserts that almost surely. As @math , the right-hand side tends to infinity and so, at @math , we expect @math . And, indeed, the precise growth rate in this case has been established by Coppersmith, Gamarnik and Sviridenko @cite_12 , where @math means that the ratio of the left- and right-hand sides is a random variable that is bounded away from zero and infinity with probability tending to one. | {
"abstract": [
"The uniform spanning forest (USF) in ℤ d is the weak limit of random, uniformly chosen, spanning trees in [−n, n] d . Pemantle [11] proved that the USF consists a.s. of a single tree if and only if d ≤ 4. We prove that any two components of the USF in ℤ d are adjacent a.s. if 5 ≤ d ≤ 8, but not if d ≥ 9. More generally, let N(x, y) be the minimum number of edges outside the USF in a path joining x and y in ℤ d . Then @math",
"Bounds for the diameter and expansion of the graphs created by long-range percolation on the cycle ℤ Nℤ are given. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 102–111, 2001",
"We consider the following long-range percolation model: an undirected graph with the node set 0, 1, ..., N d, has edges (x, y) selected with probability ≈ β ||x -y||s if ||x - y|| ' η1 > η2 > 1, it is at most Nη2 when s = 2d, and is at least Nη1 when d = 1, s = 2, β > 1 or when s > 2d. We also provide a simple proof that the diameter is at most log O(1) N with high probability, when d > s > 2d, established previously in [2]."
],
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_12"
],
"mid": [
"2162375967",
"1968939650",
"2088668624"
]
} | 0 |
||
math0406379 | 2012604491 | We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in ℤ^d. We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|^{−s+o(1)} when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)^{Δ+o(1)} where Δ^{−1} := log_2(2d/s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp{r^{1/Δ+o(1)}} in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.) | For @math , the present paper states @math . Here we note that @math as @math which, formally, is in agreement with . For @math we in turn have @math and so, at @math , a polylogarithmic growth is no longer sustainable. Instead, for the case of the decay @math one expects that where @math varies through @math as @math sweeps through @math . This claim is supported by upper and lower bounds in somewhat restricted one-dimensional cases (Benjamini and Berger @cite_13 , Coppersmith, Gamarnik and Sviridenko @cite_12 ). However, even the existence of a sharp exponent @math has been elusive so far. | {
"abstract": [
"Bounds for the diameter and expansion of the graphs created by long-range percolation on the cycle ℤ Nℤ are given. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 102–111, 2001",
"We consider the following long-range percolation model: an undirected graph with the node set 0, 1, ..., N d, has edges (x, y) selected with probability ≈ β ||x -y||s if ||x - y|| ' η1 > η2 > 1, it is at most Nη2 when s = 2d, and is at least Nη1 when d = 1, s = 2, β > 1 or when s > 2d. We also provide a simple proof that the diameter is at most log O(1) N with high probability, when d > s > 2d, established previously in [2]."
],
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"1968939650",
"2088668624"
]
} | 0 |
||
math0406379 | 2012604491 | We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in ℤ^d. We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|^{−s+o(1)} when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)^{Δ+o(1)} where Δ^{−1} := log_2(2d/s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp{r^{1/Δ+o(1)}} in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.) | For @math one expects @cite_13 the same behavior as for the original graph. And indeed, the linear asymptotics has been established by Berger @cite_8 . For the nearest-neighbor percolation case, this statement goes back to the work of Antal and Pisztora @cite_9 . | {
"abstract": [
"We prove large deviation estimates at the correct order for the graph distance of two sites lying in the same cluster of an independent percolation process. We improve earlier results of Gartner and Molchanov and Grimmett and Marstrand and answer affirmatively a conjecture of Kozlov.",
"Bounds for the diameter and expansion of the graphs created by long-range percolation on the cycle ℤ Nℤ are given. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 102–111, 2001",
"We consider long-range percolation in dimension @math , where distinct sites @math and @math are connected with probability @math . Assuming that @math is translation invariant and that @math with @math , we show that the graph distance is at least linear with the Euclidean distance."
],
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_8"
],
"mid": [
"1986564122",
"1968939650",
"1580648572"
]
} | 0 |
||
math0406379 | 2012604491 | We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in ℤ^d. We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|^{−s+o(1)} when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)^{Δ+o(1)} where Δ^{−1} := log_2(2d/s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp{r^{1/Δ+o(1)}} in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.) | Further motivation comes from the recent interest in diffusive properties of graphs arising via long-range percolation. An early work in this respect was that of Berger @cite_7 who characterized regimes of recurrence and transience for the simple random walk on such graphs. Benjamini, Berger and Yadin @cite_11 later showed that the mixing time @math of the random walk on @math in @math scales like with an apparent jump in the exponent when @math passes through 2. Misumi @cite_5 found estimates on the effective resistance in @math that exhibit a similar transition. | {
"abstract": [
"",
"We study the behavior of the random walk on the infinite cluster of independent long-range percolation in dimensions d= 1,2, where x and y are connected with probability ( ). We show that if d s>2d, then there is no infinite cluster at criticality. This result is extended to the free random cluster model. A second corollary is that when d≥& 2 and d>s>2d we can erase all long enough bonds and still have an infinite cluster. The proof of recurrence in two dimensions is based on general stability results for recurrence in random electrical networks. In particular, we show that i.i.d. conductances on a recurrent graph of bounded degree yield a recurrent electrical network.",
"We provide an estimate, sharp up to poly-logarithmic factors, of the asymptotic almost sure mixing time of the graph created by long-range percolation on the cycle of length N ( @math ). While it is known that the asymptotic almost sure diameter drops from linear to poly-logarithmic as the exponent s decreases below 2 [4, 9], the asymptotic almost sure mixing time drops from N2 only to Ns-1 (up to poly-logarithmic factors)."
],
"cite_N": [
"@cite_5",
"@cite_7",
"@cite_11"
],
"mid": [
"1507652446",
"2066678737",
"2141584653"
]
} | 0 |
||
math0406379 | 2012604491 | We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in ℤ^d. We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|^{−s+o(1)} when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)^{Δ+o(1)} where Δ^{−1} := log_2(2d/s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp{r^{1/Δ+o(1)}} in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.) | Very recently, precise bounds for the heat kernel and spectral gap of such random walks have been derived by Crawford and Sly @cite_14 . These are claimed to lead to the proof that the law of such random walks scales to @math -stable processes for @math in @math and @math in @math . For @math on the increasing side of these regimes, the random walk is expected to scale to Brownian motion. | {
"abstract": [
"In this paper, we derive upper bounds for the heat kernel of the simple random walk on the infinite cluster of a supercritical long range percolation process. For any @math and for any exponent @math giving the rate of decay of the percolation process, we show that the return probability decays like @math up to logarithmic corrections, where @math denotes the time the walk is run. Moreover, our methods also yield generalized bounds on the spectral gap of the dynamics and on the diameter of the largest component in a box. Besides its intrinsic interest, the main result is needed for a companion paper studying the scaling limit of simple random walk on the infinite cluster."
],
"cite_N": [
"@cite_14"
],
"mid": [
"1859314131"
]
} | 0 |
||
math0401432 | 2951956456 | We consider the general question of estimating decay of correlations for non-uniformly expanding maps, for classes of observables which are much larger than the usual class of Hölder continuous functions. Our results give new estimates for many non-uniformly expanding systems, including Manneville-Pomeau maps, many one-dimensional systems with critical points, and Viana maps. In many situations, we also obtain a Central Limit Theorem for a much larger class of observables than usual. Our main tool is an extension of the coupling method introduced by L.-S. Young for estimating rates of mixing on certain non-uniformly expanding tower maps. | Let us now mention some other results concerning estimates on decay of correlations for non-Hölder observables. Most of these are stated in the context of one-sided finite alphabet shift maps, or subshifts of finite type. (For a comprehensive discussion of shift maps and equilibrium measures, we suggest the book of Baladi, @cite_6 .) Shift maps are relatively simple dynamical systems, but are often used to study more complicated systems via a semi-conjugacy, in much the same way that each of the examples we consider can be represented by a suitable tower (see ). Where a system @math being coded has an invariant measure @math which is absolutely continuous with respect to Lebesgue measure, @math is an equilibrium measure for the potential @math , where @math is the Jacobian with respect to Lebesgue measure. Most results for shift maps work with an equilibrium measure given by a potential @math which is Hölder continuous (in terms of the usual metric on shift spaces - two sequences are said to be distance @math apart if they agree for exactly the first @math symbols). This assumption corresponds to assuming good distortion for @math . | {
"abstract": [
"Subshifts of finite type a key symbolic model smooth uniformly expanding dynamics piecewise expanding systems hyperbolic systems."
],
"cite_N": [
"@cite_6"
],
"mid": [
"1582592707"
]
} | Decay of correlations for non-Hölder observables | In this paper, we are interested in mixing properties (in particular, decay of correlations) of non-uniformly expanding maps. Much progress has been made in recent years, with upper estimates being obtained for many examples of such systems. Almost invariably, these estimates are for observables which are Hölder continuous. Our aim here is to extend the study to much larger classes of observables.
Let $f : (X, \nu) \to (X, \nu)$ be some mixing system. We define a correlation function
$$C_n(\varphi, \psi; \nu) = \left| \int (\varphi \circ f^n)\,\psi \, d\nu - \int \varphi \, d\nu \int \psi \, d\nu \right|$$
for $\varphi, \psi \in L^2$. The rate at which this sequence decays to zero is a measure of how quickly $\varphi \circ f^n$ becomes independent from $\psi$. It is well known that for any non-trivial mixing system, there exist $\varphi, \psi \in L^2$ for which correlations decay arbitrarily slowly. For this reason, we must restrict at least one of the observables to some smaller class of functions, in order to get an upper bound for $C_n$.
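As a concrete illustration (ours, not part of the paper), the following Python snippet evaluates $C_n$ for the doubling map $f(x) = 2x \bmod 1$, whose acip is Lebesgue measure, with the observables $\varphi = \psi = x$; for this pair one can compute $C_n = 2^{-n}/12$ in closed form, so the exponential decay is visible directly. The grid $k/M$ with $M$ odd is a convenient trick: $k \mapsto 2k \bmod M$ permutes it, so orbits on the grid are exact.

import numpy as np

# Estimate C_n for the doubling map with phi = psi = x by averaging over
# the grid k/M.  With M odd, 2k mod M permutes the grid, so f^n is exact;
# the grid resolution limits how large n can usefully be.
M = 4_000_037                            # odd modulus
k = np.arange(M, dtype=np.int64)
x = k / M

def corr(n):
    xn = ((np.int64(2) ** n * k) % M) / M    # f^n(x) on the grid, exactly
    return abs(np.mean(x * xn) - x.mean() * xn.mean())

for n in range(7):
    print(n, corr(n), 2.0 ** -n / 12)        # estimate vs. exact value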
Here, we present a result which is general in the context of towers, as introduced by L.-S. Young ([Yo]). There are many examples of systems which admit such towers, and we shall see that under a fairly weak assumption on the relationship between the tower and the system (which is satisfied in all the examples we mention) we get estimates for certain classes of observables with respect to the system itself. One of the main strengths of this method is that these classes of observables may be defined purely in terms of their regularity with respect to the manifold; this contrasts with some results, where regularity is considered with respect to some Markov partition.
All of our results shall take the following form. Given a system $f : X \to X$, a mixing acip $\nu$, and $\varphi \in L^\infty(X, \nu)$, $\psi \in I$, for some class $I = (Ri, \gamma)$ as above, we obtain in each example an estimate of the form
$$C_n(\varphi, \psi; \nu) \le \|\varphi\|_\infty \, C(\psi) \, u_n,$$
where $\|\cdot\|_\infty$ is the usual norm on $L^\infty(X, \nu)$, $C(\psi)$ is a constant depending on $f$ and $\psi$, and $(u_n)$ is some sequence decaying to zero with rate determined by $f$ and $R_\varepsilon(\psi)$. Notice that we make no assumption on the regularity of the observable $\varphi$; when discussing the regularity class of observables, we shall always be referring to the choice of the function $\psi$. (This is not atypical, although some existing results do require that both functions have some minimum regularity.)
For brevity, we shall simply give an estimate for $u_n$ in the statement of each result. For each example we also have a Central Limit Theorem for those observables which give summable decay of correlations, and are not coboundaries. We recall that a real-valued observable $\varphi$ satisfies the Central Limit Theorem for $f$ if there exists $\sigma > 0$ such that for every interval $J \subset \mathbb{R}$,
$$\nu\left\{ x \in X : \frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} \left( \varphi(f^j(x)) - \int \varphi \, d\nu \right) \in J \right\} \to \frac{1}{\sigma\sqrt{2\pi}} \int_J e^{-\frac{t^2}{2\sigma^2}} \, dt.$$
Note that the range of examples given in the following subsections is meant to be illustrative rather than exhaustive, and so we shall miss out some simple generalisations for which essentially the same results hold. We shall instead try to make clear the conditions needed to apply these results, and direct the reader to the papers mentioned below for further examples which satisfy these conditions.
Uniformly expanding maps
Let $f : M \to M$ be a $C^2$ local diffeomorphism of a compact Riemannian manifold. We say $f$ is uniformly expanding if there exists $\lambda > 1$ such that $\|Df_x v\| \ge \lambda \|v\|$ for all $x \in M$, and all tangent vectors $v$. Such a map admits an absolutely continuous invariant probability measure $\mu$, which is unique and mixing.
Theorem 1 Let $\varphi \in L^\infty(M, \mu)$, and let $\psi : M \to \mathbb{R}$ be continuous. Upper bounds are given for $(u_n)$ as follows:
• if $\psi \in (R1)$, then $u_n = O(\theta^n)$ for some $\theta \in (0,1)$;
• if $\psi \in (R2, \gamma)$, for some $\gamma \in (0,1)$, then $u_n = O(e^{-n^{\gamma'}})$ for every $\gamma' < \gamma$;
• if $\psi \in (R3, \gamma)$, for some $\gamma > 1$, then $u_n = O(e^{-(\log n)^{\gamma'}})$ for every $\gamma' < \gamma$;
• for any constant $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (R4, \gamma)$ for some $\gamma > \zeta^{-1}$, and $R_\infty(\psi) < C_\infty$, then $u_n = O(n^{1-\zeta\gamma})$.
Furthermore, the Central Limit Theorem holds when $\psi \in (R4, \gamma)$ for sufficiently large $\gamma$, depending on $R_\infty(\psi)$.
Such maps are generally regarded as being well understood, and in particular, results of exponential decay of correlations for observables in (R1) go back to the seventies, and the work of Sinai, Ruelle and Bowen ([Si], [R], [Bo]). For a more modern perspective, see for instance the books of Baladi ([Ba]) and Viana ([V2]).
I have not seen explicit claims of similar results for observables in classes (R2 − 4). However, it is well known that any such map can be coded by a one-sided full shift on finitely many symbols, so an analogous result on shift spaces would be sufficient, and may well already exist. The estimates here are probably not sharp, particularly in the (R4) case.
The other examples we consider are not in general reducible to finite alphabet shift maps, so we can be more confident that the next set of results are new.
Maps with indifferent fixed points
These are perhaps the simplest examples of strictly non-uniformly expanding systems. Purely for simplicity, we restrict to the well known case of the Manneville-Pomeau map.
Theorem 2 Let $f : [0,1] \to [0,1]$ be the map $f(x) = x + x^{1+\alpha} \pmod 1$, for some $\alpha \in (0,1)$, and let $\nu$ be the unique acip for this system. For $\varphi, \psi : [0,1] \to \mathbb{R}$ with $\varphi$ bounded and $\psi$ continuous, for every constant $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (R4, \gamma)$ for some $\gamma > 2\zeta^{-1}$, with $R_\infty(\psi) < C_\infty$, then
• if $\gamma = \zeta^{-1}(\tau + 1)$, then $u_n = O(n^{1-\tau} \log n)$;
• otherwise, $u_n = O(\max(n^{1-\tau}, n^{2-\zeta\gamma}))$;
where $\tau = \alpha^{-1}$. In particular, when $\gamma > 3\zeta^{-1}$ the Central Limit Theorem holds.
In the case where $\psi \in (R4, \gamma)$ for every large $\gamma$, this gives $u_n = O(n^{1-\frac{1}{\alpha}})$, which is the bound obtained in [Yo] for $\psi \in (R1)$. We do not give separate estimates for observables in classes (R2) and (R3), as we obtain the same upper bound in each case. Note that the polynomial upper bound for (R1) observables is known to be sharp ([Hu]), and hence the above gives a sharp bound in the (R2) and (R3) cases, and for $(R4, \gamma)$ when $\gamma$ is large.
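The polynomial rate is easy to observe numerically. The sketch below (our illustration; the parameter value and observable are our choices, not the paper's) iterates the Manneville-Pomeau map and estimates the autocorrelation of $\psi(x) = x$ from a single long orbit; with $\alpha = 0.4$ the predicted order is $n^{1-1/\alpha} = n^{-1.5}$. Single-orbit estimates at large $n$ are noisy, so this is only a qualitative comparison of orders of magnitude.

import numpy as np

# Autocorrelation of psi(x) = x under f(x) = x + x^{1+alpha} mod 1.
alpha = 0.4
N = 2_000_000
x = 0.37
orbit = np.empty(N)
for k in range(N):
    x = x + x ** (1.0 + alpha)
    if x >= 1.0:
        x -= 1.0
    orbit[k] = x
psi = orbit[10_000:]                 # discard transient
m = psi.mean()
for n in (1, 10, 100, 1_000, 10_000):
    c = np.mean((psi[n:] - m) * (psi[:-n] - m))
    print(n, c, n ** (1 - 1 / alpha))    # estimate vs. predicted order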
The above results apply in the more general 1-dimensional case considered in [Yo], where in particular a finite number of expanding branches are allowed, and it is assumed that $xf''(x) \approx x^\alpha$ near the indifferent fixed point.
In our remaining examples, estimates will invariably correspond to either the above form, or that of Theorem 1, and we shall simply say which is the case, specifying the parameter τ as appropriate.
One-dimensional maps with critical points
Let us consider the systems of [BLS]. These are one-dimensional multimodal maps, where there is some long-term growth of derivative along the critical orbits. Let $f : I \to I$ be a $C^3$ interval or circle map with a finite critical set $\mathcal{C}$ and no stable or neutral periodic orbit. We assume all critical points have the same critical order $l \in (1, \infty)$; this means that for each $c \in \mathcal{C}$, there is some neighbourhood in which $f$ can be written in the form
$$f(x) = \pm|\varphi(x - c)|^l + f(c)$$
for some diffeomorphism $\varphi : \mathbb{R} \to \mathbb{R}$ fixing 0, with the $\pm$ allowed to depend on the sign of $x - c$.
For $c \in \mathcal{C}$, let $D_n(c) = |(f^n)'(f(c))|$. From [BLS] we know there exists an acip $\mu$ provided
$$\sum_n D_n(c)^{-\frac{1}{2l-1}} < \infty \quad \forall c \in \mathcal{C}.$$
If f is not renormalisable on the support of µ then µ is mixing.
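For a concrete instance (our sketch, not the paper's), one can track the derivative growth $D_n(c)$ along the critical orbit of the quadratic family $f(x) = ax(1-x)$, which has critical point $c = 1/2$ and critical order $l = 2$; at the Chebyshev parameter $a = 4$ the growth is exponential, $D_n(c) = 4^n$, so the summability condition above holds comfortably.

# Derivative growth along the critical orbit of f(x) = a x (1 - x),
# and the partial sums of D_n(c)^(-1/(2l-1)) from the acip criterion.
a, c, l = 4.0, 0.5, 2
x = a * c * (1 - c)          # f(c)
D, partial = 1.0, 0.0
for n in range(1, 31):
    D *= abs(a * (1 - 2 * x))            # multiply by |f'| along the orbit
    x = a * x * (1 - x)
    partial += D ** (-1.0 / (2 * l - 1))
    if n % 5 == 0:
        print(n, D, partial)             # D_n grows like 4^n; sums converge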
Theorem 3 Let $\varphi \in L^\infty(I, \mu)$, and let $\psi$ be continuous.
Case 1: Suppose there exist $C > 0$, $\lambda > 1$ such that $D_n(c) \ge C\lambda^n$ for all $n \ge 1$, $c \in \mathcal{C}$. Then we have estimates for $(u_n)$ exactly as in the uniformly expanding case (Theorem 1).
Case 2: Suppose there exist $C > 0$, $\alpha > 2l - 1$ such that $D_n(c) \ge Cn^\alpha$ for all $n \ge 1$, $c \in \mathcal{C}$. Then we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2) for every $\tau < \frac{\alpha - 1}{l - 1}$. In particular, the Central Limit Theorem holds in either case when $\psi \in (R4, \gamma)$ for sufficiently large $\gamma$, depending on $R_\infty(\psi)$.
Again, we have restricted our attention to some particular cases; analogous results should be possible for the intermediate cases considered in [BLS]. In particular, for the class of Fibonacci maps with quadratic critical points (see [LM]) we obtain estimates as in Theorem 2 for every τ > 1.
Viana maps
Next we consider the class of Viana maps, introduced in [V1]. These are examples of non-uniformly expanding maps in more than one dimension, with sub-exponential decay of correlations for Hölder observables. They are notable for being possibly the first examples of non-uniformly expanding systems in more than one dimension which admit an acip, and also because the attractor, and many of its statistical properties, persist in a $C^3$ neighbourhood of systems.
Let $a_0$ be some real number in $(1, 2)$ for which $x = 0$ is pre-periodic for the system $x \mapsto a_0 - x^2$. We define a skew product $\hat f : S^1 \times \mathbb{R} \to S^1 \times \mathbb{R}$ by
$$\hat f(s, x) = (ds \bmod 1, \; a_0 + \alpha \sin(2\pi s) - x^2),$$
where $d$ is an integer $\ge 16$, and $\alpha > 0$ is a constant. When $\alpha$ is sufficiently small, there is a compact interval $I \subset (-2, 2)$ for which $S^1 \times I$ is mapped strictly inside its own interior, and $\hat f$ admits a unique acip, which is mixing for some iterate, and has two positive Lyapunov exponents ([V1], [AV]). The same is also true for any $f$ in a sufficiently small $C^3$ neighbourhood $\mathcal{N}$ of $\hat f$.
Let us fix some small $\alpha$, and let $\mathcal{N}$ be a sufficiently small $C^3$ neighbourhood of $\hat f$ such that for every $f \in \mathcal{N}$ the above properties hold. Choose some $f \in \mathcal{N}$; if $f$ is not mixing, we consider instead the first mixing power.
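A quick simulation (ours; the parameter $a_0 = 1.9$ is for illustration only, and we do not verify the pre-periodicity of $x = 0$ required above) shows the characteristic behaviour: the fibre coordinate is trapped in a compact interval $I$, and the fibre derivative $\log|\partial_x f| = \log|2x|$ averages to a positive number, consistent with the positive Lyapunov exponents.

import numpy as np

# Iterate the Viana skew product f(s, x) = (d s mod 1, a0 + alpha sin(2 pi s) - x^2).
d, alpha, a0 = 16, 0.01, 1.9         # a0 = 1.9 is an illustrative choice
rng = np.random.default_rng(0)
s, x = rng.random(), 0.1
N, burn = 200_000, 1_000
lo, hi, lyap_sum = np.inf, -np.inf, 0.0
for k in range(N):
    s, x = (d * s) % 1.0, a0 + alpha * np.sin(2 * np.pi * s) - x * x
    if k >= burn:
        lo, hi = min(lo, x), max(hi, x)
        lyap_sum += np.log(max(abs(2 * x), 1e-12))   # log of fibre derivative
print("orbit trapped in I ~ [%.3f, %.3f]" % (lo, hi))
print("fibre Lyapunov exponent ~ %.4f" % (lyap_sum / (N - burn)))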
Theorem 4 For $\varphi \in L^\infty(S^1 \times \mathbb{R}, \nu)$, $\psi \in (R4, \gamma)$, we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2) for every $\tau > 1$.
The Central Limit Theorem holds for $\psi \in (R4, \gamma)$ when $\gamma$ is sufficiently large, depending on $R_\infty(\psi)$.
Another way of saying the above is that if $\psi \in (R4, \gamma)$, then $u_n = O(n^{2-\zeta\gamma})$, with the usual dependency of $\zeta$ on $R_\infty(\psi)$. Note that for observables in $\bigcap_{\gamma > 1} (R4, \gamma)$, we get super-polynomial decay of correlations, the same estimate as we obtain for Hölder observables (though Baladi and Gouëzel have recently announced a stretched exponential bound for Hölder observables - see [BG]).
There are a number of generalisations we could consider, such as allowing $D \ge 2$ ([BST]; note they require $f$ to be $C^\infty$ close to $\hat f$), or replacing $\sin(2\pi s)$ by an arbitrary Morse function.
Non-uniformly expanding maps
Finally, we discuss probably the most general context in which our methods can currently be applied, the setting of [ALP]. In particular, this setting generalises that of Viana maps.
Let $f : M \to M$ be a transitive $C^2$ local diffeomorphism away from a singular/critical set $\mathcal{S}$, with $M$ a compact finite-dimensional Riemannian manifold. Let Leb be a normalised Riemannian volume form on $M$, which we shall refer to as Lebesgue measure, and $d$ a Riemannian metric. We assume $f$ is non-uniformly expanding, or more precisely, there exists $\lambda > 0$ such that
$$\liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \log \|Df_{f^i(x)}^{-1}\|^{-1} \ge \lambda > 0. \quad (1)$$
For almost every $x$ in $M$, we may define
$$E(x) = \min\left\{ N : \frac{1}{n} \sum_{i=0}^{n-1} \log \|Df_{f^i(x)}^{-1}\|^{-1} \ge \lambda/2, \ \forall n \ge N \right\}.$$
The decay rate of the sequence Leb$\{E(x) > n\}$ may be considered to give a degree of hyperbolicity. Where $\mathcal{S}$ is non-empty, we need the following further assumptions, firstly on the critical set. We assume $\mathcal{S}$ is non-degenerate, that is, Leb$(\mathcal{S}) = 0$, and $\exists \beta > 0$ such that $\forall x \in M \setminus \mathcal{S}$ we have $d(x, \mathcal{S})^\beta \lesssim \|Df_x v\|/\|v\| \lesssim d(x, \mathcal{S})^{-\beta}$ $\forall v \in T_x M$, and the functions $\log |\det Df|$ and $\log \|Df^{-1}\|$ are locally Lipschitz with Lipschitz constant $\lesssim d(x, \mathcal{S})^{-\beta}$. Now let $d_\delta(x, \mathcal{S}) = d(x, \mathcal{S})$ when this is $\le \delta$, and 1 otherwise. We assume that for any $\varepsilon > 0$ there exists $\delta > 0$ such that for Lebesgue a.e. $x \in M$,
$$\limsup_{n \to \infty} \frac{1}{n} \sum_{j=0}^{n-1} -\log d_\delta(f^j(x), \mathcal{S}) \le \varepsilon. \quad (2)$$
We define a recurrence time
$$T(x) = \min\left\{ N \ge 1 : \frac{1}{n} \sum_{j=0}^{n-1} -\log d_\delta(f^j(x), \mathcal{S}) \le 2\varepsilon, \ \forall n \ge N \right\}.$$
Let $f$ be a map satisfying the above conditions, and for which there exists $\alpha > 1$ such that
$$\text{Leb}(\{E(x) > n \text{ or } T(x) > n\}) = O(n^{-\alpha}).$$
Then f admits an acip ν with respect to Lebesgue measure, and we may assume ν to be mixing by taking a suitable power of f .
Theorem 5 For $\varphi \in L^\infty(M, \nu)$, $\psi \in (R4, \gamma)$, we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2), for $\tau = \alpha$. Furthermore, when $\alpha > 2$, the Central Limit Theorem holds for $\psi \in (R4, \gamma)$ when $\gamma$ is sufficiently large for given $R_\infty(\psi)$.
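To make the tail condition on $E$ concrete, the following sketch (ours, not the paper's) estimates Leb$\{E > n\}$ empirically for the Manneville-Pomeau map, which satisfies (1) with $\mathcal{S} = \emptyset$. Since the defining condition quantifies over all $n \ge N$, we can only test it up to a finite horizon, so the numbers below are lower approximations of $E$; the rate $\lambda$ is itself estimated from a long Birkhoff average.

import numpy as np

# Empirical tails of the expansion time E(x) for f(x) = x + x^{1+alpha} mod 1,
# where log f'(x) = log(1 + (1+alpha) x^alpha) >= 0.
alpha = 0.4
logdf = lambda x: np.log1p((1 + alpha) * x ** alpha)

def orbit(x, n):
    out = np.empty(n)
    for i in range(n):
        out[i] = x
        x = x + x ** (1 + alpha)
        if x >= 1.0:
            x -= 1.0
    return out

lam = logdf(orbit(0.37, 200_000)).mean()     # Birkhoff estimate of lambda

HORIZON, SAMPLES = 2_000, 1_000
rng = np.random.default_rng(1)
E = []
for _ in range(SAMPLES):
    avg = np.cumsum(logdf(orbit(rng.random(), HORIZON))) / np.arange(1, HORIZON + 1)
    bad = np.nonzero(avg < lam / 2)[0]       # times where the average dips below lambda/2
    E.append(1 if bad.size == 0 else bad[-1] + 2)
E = np.array(E)
for n in (10, 50, 100, 500):
    print(n, (E > n).mean())                 # empirical Leb{E > n}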
Young's tower
In the previous section, we indicated the variety of systems we may consider. We shall now state the main technical result, and with it the conditions a system must satisfy in order for our result to be applicable. As verifying that a system satisfies such conditions is often considerable work, we refer the reader to those papers mentioned in each of the previous subsections for full details. The relevant setting for our arguments will be the tower object introduced by Young in [Yo], and we recap its definition. We start with a map $F^R : (\Delta_0, m_0) \to (\Delta_0, m_0)$, where $(\Delta_0, m_0)$ is a finite measure space. This shall represent the base of the tower. We assume there exists a partition (mod 0) $\mathcal{P} = \{\Delta_{0,i} : i \in \mathbb{N}\}$ of $\Delta_0$, such that $F^R|\Delta_{0,i}$ is an injection onto $\Delta_0$ for each $\Delta_{0,i}$. We require that the partition generates, i.e. that $\bigvee_{j=0}^{\infty} (F^R)^{-j}\mathcal{P}$ is the trivial partition into points. We also choose a return time function $R : \Delta_0 \to \mathbb{N}$, which must be constant on each $\Delta_{0,i}$.
We define a tower to be any map $F : (\Delta, m) \to (\Delta, m)$ determined by some $F^R$, $\mathcal{P}$, and $R$ as follows. Let $\Delta = \{(z, l) : z \in \Delta_0, \ l < R(z)\}$. For convenience let $\Delta_l$ refer to the set of points $(\cdot, l)$ in $\Delta$. This shall be thought of as the $l$th level of $\Delta$. (We shall freely confuse the zeroth level $\{(z, 0) : z \in \Delta_0\} \subset \Delta$ with $\Delta_0$ itself. We shall also happily refer to points in $\Delta$ by a single letter $x$, say.) We write $\Delta_{l,i} = \{(z, l) : z \in \Delta_{0,i}\}$ for $l < R(\Delta_{0,i})$. The partition of $\Delta$ into the sets $\Delta_{l,i}$ shall be denoted by $\eta$.
The map $F$ is then defined as follows:
$$F(z, l) = \begin{cases} (z, l+1) & \text{if } l + 1 < R(z), \\ (F^R(z), 0) & \text{otherwise.} \end{cases}$$
We notice that the map $F^{R(x)}(x)$ on $\Delta_0$ is identical to $F^R(x)$, justifying our choice of notation. Finally, we define a notion of separation time; for $x, y \in \Delta_0$, $s(x, y)$ is defined to be the least integer $n \ge 0$ s.t. $(F^R)^n x$, $(F^R)^n y$ are in different elements of $\mathcal{P}$. For $x, y \in$ some $\Delta_{l,i}$, where $x = (x_0, l)$, $y = (y_0, l)$, we set $s(x, y) := s(x_0, y_0)$; for $x, y$ in different elements of $\eta$, $s(x, y) = 0$.
We say that the Jacobian $JF^R$ of $F^R$ with respect to $m_0$ is the real-valued function such that for any measurable set $E$ on which $F^R$ is injective,
$$m_0(F^R(E)) = \int_E JF^R \, dm_0.$$
We assume $JF^R$ is uniquely defined, positive, and finite $m_0$-a.e. We require some further assumptions.
• Measure structure: Let $\mathcal{B}$ be the $\sigma$-algebra of $m_0$-measurable sets. We assume that all elements of $\mathcal{P}$ and each $\bigvee_{i=0}^{n-1} (F^R)^{-i}\mathcal{P}$ belong to $\mathcal{B}$, and that $F^R$ and $(F^R|\Delta_{0,i})^{-1}$ are measurable functions. We then extend $m_0$ to a measure $m$ on $\Delta$ as follows: for $E \subset \Delta_l$, any $l \ge 0$, we let $m(E) = m_0(F^{-l}E)$, provided that $F^{-l}E \in \mathcal{B}$. Throughout, we shall assume that any sets we choose are measurable. Also, whenever we say we are choosing an arbitrary point $x$, we shall assume it is a good point, i.e. that each element of its orbit is contained within a single element of the partition $\eta$, and that $JF^R$ is well-defined and positive at each of these points.
• Bounded distortion: There exist $C > 0$ and $\beta < 1$ s.t. for $x, y \in$ any $\Delta_{0,i} \in \mathcal{P}$,
$$\left| \frac{JF^R(x)}{JF^R(y)} - 1 \right| \le C\beta^{s(F^R x, F^R y)}.$$
• Aperiodicity: We assume that $\gcd\{R(x) : x \in \Delta_0\} = 1$. This is a necessary and sufficient condition for mixing (in fact, for exactness).
• Finiteness: We assume $\int R \, dm_0 < \infty$. This tells us that $m(\Delta) < \infty$.
Let $F : (\Delta, m) \to (\Delta, m)$ be a tower, as defined above. We define classes of observable similar to those we consider on the manifold, but characterised instead in terms of the separation time $s$ on $\Delta$. Given a bounded function $\psi : \Delta \to \mathbb{R}$, we define the variation for $n \ge 0$:
$$v_n(\psi) = \sup\{|\psi(x) - \psi(y)| : s(x, y) \ge n\}.$$
Let us use this to define some regularity classes:
• Exponential case: $\psi \in (V1, \gamma)$, $\gamma \in (0,1)$, if $v_n(\psi) = O(\gamma^n)$;
• Stretched exponential case: $\psi \in (V2, \gamma)$, $\gamma \in (0,1)$, if $v_n(\psi) = O(\exp\{-n^\gamma\})$;
• Intermediate case: $\psi \in (V3, \gamma)$, $\gamma > 1$, if $v_n(\psi) = O(\exp\{-(\log n)^\gamma\})$;
• Polynomial case: $\psi \in (V4, \gamma)$, $\gamma > 1$, if $v_n(\psi) = O(n^{-\gamma})$.
We shall see that the classes (V1-4) of regularity correspond naturally with the classes (R1-4) of regularity on the manifold respectively, under fairly weak assumptions on the relation between the system and the tower we construct for it. (We shall discuss this further in §13.) These classes are essentially those defined in [P], although there the functions are considered to be potentials rather than observables.
We now state the main technical result.
Theorem 6 Let $F : (\Delta, m) \to (\Delta, m)$ be a tower satisfying the assumptions stated above. Then $F$ admits a unique acip $\nu$, which is mixing. Furthermore, for all $\varphi, \psi \in L^\infty(\Delta, m)$,
$$\left| \int (\varphi \circ F^n)\psi \, d\nu - \int \varphi \, d\nu \int \psi \, d\nu \right| \le \|\varphi\|_\infty \, C(\psi) \, u_n,$$
where $C(\psi) > 0$ is some constant, and $(u_n)$ is a sequence converging to zero at some rate determined by $F$ and $v_n(\psi)$. In particular:
Case 1: Suppose $m_0\{R > n\} = O(\theta^n)$, some $\theta \in (0,1)$. Then
• if $\psi \in (V1, \gamma)$ for some $\gamma \in (0,1)$, then $u_n = O(\theta_1^n)$ for some $\theta_1 \in (0,1)$;
• if $\psi \in (V2, \gamma)$ for some $\gamma \in (0,1)$, then $u_n = O(e^{-n^{\gamma'}})$ for every $\gamma' < \gamma$;
• if $\psi \in (V3, \gamma)$ for some $\gamma > 1$, then $u_n = O(e^{-(\log n)^{\gamma'}})$ for every $\gamma' < \gamma$;
• for any constant $C_\infty > 0$, there exists $\zeta < 1$ such that if $\psi \in (V4, \gamma)$ for some $\gamma > \frac{1}{\zeta}$, and $v_0(\psi) < C_\infty$, then $u_n = O(n^{1-\zeta\gamma})$.
Case 2: Suppose $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$. Then for every $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (V4, \gamma)$ for some $\gamma > \frac{2}{\zeta}$, with $v_0(\psi) < C_\infty$, then
• if $\gamma = \frac{\alpha+1}{\zeta}$, $u_n = O(n^{1-\alpha} \log n)$;
• otherwise, $u_n = O(\max(n^{1-\alpha}, n^{2-\zeta\gamma}))$.
The existence of a mixing acip is proved in [Yo], as is the result in the case $\psi \in (V1)$. As a corollary of the above, we get a Central Limit Theorem in the cases where the rate of mixing is summable.
Corollary 1 Suppose $F$ satisfies the above assumptions, and $m_0\{R > n\} = O(n^{-\alpha})$, for some $\alpha > 2$. Then the Central Limit Theorem is satisfied for $\psi \in (V4, \gamma)$ when $\gamma$ is sufficiently large, depending on $F$ and $v_0(\psi)$.
In §13 we shall give the exact conditions needed on a system in order to apply the above results.
Overview of method
Our strategy in proving the above theorem is to generalise a coupling method introduced by Young in [Yo]. Our argument follows closely the line of approach of that paper, and we give an outline of the key ideas here.
First, we need to reduce the problem to one in a slightly different context. Given a system $F : (\Delta, m)$, we define a transfer operator $F_*$ which, for any measure $\lambda$ on $\Delta$ for which $F$ is measurable, gives a measure $F_*\lambda$ on $\Delta$ defined by
$$(F_*\lambda)(A) = \lambda(F^{-1}A)$$
whenever $A$ is a $\lambda$-measurable set. Clearly any $F$-invariant measure is a fixed point for this operator. Also, a key property of $F_*$ is that for any function $\varphi : \Delta \to \mathbb{R}$,
$$\int \varphi \circ F \, d\lambda = \int \varphi \, d(F_*\lambda).$$
Next, we define a variation norm on $m$-absolutely continuous signed measures, that is, on the difference between any two (positive) measures which are absolutely continuous. Given two such measures $\lambda, \lambda'$, we write
$$|\lambda - \lambda'| := \int \left| \frac{d\lambda}{dm} - \frac{d\lambda'}{dm} \right| dm.$$
Now let us fix an acip $\nu$ and choose observables $\varphi \in L^\infty(\Delta, \nu)$, $\psi \in L^1(\Delta, \nu)$, with $\inf \psi > 0$, $\int \psi \, d\nu = 1$. We have
$$\int (\varphi \circ F^n)\psi \, d\nu = \int (\varphi \circ F^n) \, d(\psi\nu) = \int \varphi \, d(F^n_*(\psi\nu)),$$
where $\psi\nu$ denotes the unique measure which has density $\psi$ with respect to $\nu$. So
$$\left| \int (\varphi \circ F^n)\psi \, d\nu - \int \varphi \, d\nu \int \psi \, d\nu \right| \le \|\varphi\|_\infty \int \left| \frac{dF^n_*(\psi\nu)}{dm} - \frac{d\nu}{dm} \right| dm = \|\varphi\|_\infty \, |F^n_*(\psi\nu) - \nu|.$$
Hence we may reduce the problem to one of estimating the rate at which certain measures converge to the invariant measure, in terms of the variation norm. In fact, it will be useful to consider the more general question of estimating |F n * λ − F n * λ ′ | for a pair of measures λ, λ ′ whose densities with respect to m are of some given regularity. (We shall require an estimate in the case λ ′ = ν when we consider the Central Limit Theorem.)
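For a toy example of this convergence (ours, not the paper's): for the doubling map the transfer operator acts on densities by $(Ph)(x) = (h(x/2) + h((x+1)/2))/2$, and Lebesgue measure is the acip, with density 1. Pushing an arbitrary starting density forward and recording the $L^1$ distance to 1 gives exactly the quantity $|F^n_*\lambda - \nu|$ discussed above.

import numpy as np

# Push a density forward under the doubling-map transfer operator and
# watch the L^1 distance to the invariant density 1 decay.
GRID = 4096
x = (np.arange(GRID) + 0.5) / GRID
h = 1.0 + (x - 0.5)                  # a non-invariant starting density

def push(h):
    return 0.5 * (np.interp(x / 2, x, h) + np.interp(x / 2 + 0.5, x, h))

for n in range(9):
    print(n, np.abs(h - 1.0).mean())     # |F^n_* lambda - nu|, halves each step
    h = push(h)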
Let us now outline the main argument. We work with two copies of the system, and the direct product $F \times F : (\Delta \times \Delta, m \times m)$. Let $P_0 = \lambda \times \lambda'$, and consider it to be a measure on $\Delta \times \Delta$. If we let $\pi, \pi' : \Delta \times \Delta \to \Delta$ be the projections onto the first and second coordinates respectively, we have that
$$|F^n_*\lambda - F^n_*\lambda'| = |\pi_*(F \times F)^n_* P_0 - \pi'_*(F \times F)^n_* P_0| \le 2|(F \times F)^n_* P_0|.$$
Our strategy will involve summing the differences between the two projections over small regions of the space, only comparing them at convenient times that vary with the region of space we are considering. At each of these times, we shall subtract some measure from both coordinates so that the difference is unaffected, yet the total measure of P 0 is reduced, giving an improved upper bound on the difference.
The key difference between our method and that of [Yo] is that we introduce a sequence (ε n ), which shall represent the rate at which we attempt to subtract measure from P 0 . When the densities of λ, λ ′ are of class (V 1), (ε n ) can be taken to be a small constant, and the method here reduces to that of [Yo]; however, by allowing sequences ε n → 0, we may also consider measure densities of weaker regularity.
We shall see that it is possible to define an induced map $\hat F : \Delta \times \Delta \to \Delta_0 \times \Delta_0$ for which there is a partition $\hat\xi_1$ of $\Delta \times \Delta$, with every element mapping injectively onto $\Delta_0 \times \Delta_0$ under $\hat F$. In fact, there is a stopping time $T_1 : \hat\xi_1 \to \mathbb{N}$ such that for each $\Gamma \in \hat\xi_1$, $\hat F|\Gamma = (F \times F)^{T_1(\Gamma)}$. If we choose some $\Gamma \in \hat\xi_1$, then $\hat F_*(P_0|\Gamma)$ is a measure on $\Delta_0 \times \Delta_0$.
The density of $P_0$ with respect to $m \times m$ has essentially the same regularity as the measures $\lambda, \lambda'$, and the density of $\hat F_*(P_0|\Gamma)$ will be similar, except possibly weakened slightly by any irregularity in the map $\hat F$. (We shall see that the map $\hat F$ is not too irregular.) Let
$$c(\Gamma) = \inf_{w \in \Delta_0 \times \Delta_0} \frac{d\hat F_*(P_0|\Gamma)}{d(m \times m)}(w).$$
For any $\varepsilon_1 \in [0, 1]$, we may write
$$\hat F_*(P_0|\Gamma) = \varepsilon_1 c(\Gamma)(m \times m|\Delta_0 \times \Delta_0) + \hat F_*(P_1|\Gamma)$$
for some (positive) measure $P_1|\Gamma$; this is uniquely defined since $\hat F|\Gamma$ is injective. Essentially, we are subtracting some amount of mass from the measure $\hat F_*(P_0|\Gamma)$. Moreover, we are subtracting it equally from both coordinates; this means that writing $\Gamma = A \times B$ and $k = T_1(\Gamma)$, the distance between the measures $F^k_*(\lambda|A)$ and $F^k_*(\lambda'|B)$, both defined on $\Delta_0$, is unaffected. However, we also see that the remaining measure $\hat F_*(P_1|\Gamma)$ has smaller total mass, and this is an upper bound for $|F^k_*(\lambda|A) - F^k_*(\lambda'|B)|$. We fix an $\varepsilon_1$, and perform this subtraction of measure for each $\Gamma \in \hat\xi_1$, obtaining a measure $P_1$ defined on $\Delta \times \Delta$. The total mass of $P_1$ represents the difference between $F^n_*\lambda$ and $F^n_*\lambda'$ at time $n = T_1$, taking into account that $T_1$ is not constant over $\Delta \times \Delta$. Clearly, we obtain the best upper bound by taking $\varepsilon_1 = 1$; however, we shall see that it is to our advantage to choose some smaller value for $\varepsilon_1$.
We choose a sequence $(\varepsilon_n)$, and proceed inductively as follows. First, we define a sequence of partitions $\{\hat\xi_i\}$ such that each $\Gamma \in \hat\xi_i$ is mapped injectively onto $\Delta_0 \times \Delta_0$ under $\hat F^i$. Now given the measure $P_{i-1}$, we take an element $\Gamma \in \hat\xi_i$ and consider the measure $\hat F^i_*(P_{i-1}|\Gamma)$ on $\Delta_0 \times \Delta_0$. Here, we let
$$c(\Gamma) = \inf_{w \in \Delta_0 \times \Delta_0} \frac{d\hat F^i_*(P_{i-1}|\Gamma)}{d(m \times m|\Delta_0 \times \Delta_0)}(w),$$
and specify $P_i|\Gamma$ by
$$\hat F^i_*(P_{i-1}|\Gamma) = \varepsilon_i c(\Gamma)(m \times m|\Delta_0 \times \Delta_0) + \hat F^i_*(P_i|\Gamma).$$
As before, we construct a measure $P_i$, the total mass of which gives an upper bound at time $T_i$. To fully determine the sequence $\{P_i\}$, it remains to choose a sequence $(\varepsilon_i)$. Our choice relates to the regularity of the densities $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm}$. This is relevant because the method requires that the family of measure densities
$$\left\{ \frac{d\hat F^i_*(P_{i-1}|\Gamma)}{d(m \times m)} : i \ge 1, \ \Gamma \in \hat\xi_i \right\}$$
has some uniform regularity. (In fact, we require that the log of each of the above densities is suitably regular.) We require this in order that at the $i$th stage of the procedure, when we subtract an $\varepsilon_i$ proportion of the minimum local density, this corresponds to a similarly large proportion of the average density. Hence, provided this regularity is maintained, the total mass of $P_{i-1}$ is decreased by a similar proportion. When we subtract a constant from a density as above, this weakens the regularity. However, at the next step of the procedure, we work with elements of the partition $\hat\xi_{i+1}$. Since these sets are smaller, we regain some regularity by working with measures on $\Delta_0 \times \Delta_0$ pushed forward from such sets. That is, we expect the densities
$$\left\{ \frac{d\hat F^{i+1}_*(P_i|\Gamma)}{d(m \times m)} : \Gamma \in \hat\xi_{i+1} \right\}$$
to be more regular than the densities
$$\left\{ \frac{d\hat F^i_*(P_i|\Gamma)}{d(m \times m)} : \Gamma \in \hat\xi_i \right\}.$$
(This relies on the map $\hat F$ being smooth enough that another application of the operator $\hat F_*$ doesn't much affect the regularity.)
The degree of regularity we gain in this way depends on the initial regularity of $\Phi$, and hence of $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm}$, with respect to the sequence of partitions. In the usual case, where $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm} \in (V1)$, the regularity we gain each time we refine the partition is similar to the regularity we lose when we subtract a small constant proportion of the density; hence we may take every $\varepsilon_i$ to be a small constant $\varepsilon$. Where the initial regularities are not so good, we gain less regularity from refining the partition, and so we may only subtract correspondingly less measure.
For this reason, outside the $(V1)$ case we shall require that the sequence $(\varepsilon_i)$ converges to zero at some minimum rate. However, if $(\varepsilon_i)$ decays faster than necessary, we will simply obtain a suboptimal bound. So part of the problem is to try to choose a sequence $(\varepsilon_i)$ decaying as slowly as is permissible. We shall also need to take into account the stopping time $T_1$ (which is unbounded), in order to estimate the speed of convergence in terms of the original map $F$.
Coupling
Over the next few sections, we give the proof of the main technical theorem.
Let $F : (\Delta, m) \to (\Delta, m)$ be a tower, as defined in §3. Let $I = \{\varphi : \Delta \to \mathbb{R} \mid v_n(\varphi) \to 0\}$, and let $I^+ = \{\varphi \in I : \inf \varphi > 0\}$. We shall work with probability measures whose densities with respect to $m$ belong to $I^+$. We see
$$\left| \frac{\varphi(x)}{\varphi(y)} - 1 \right| = \frac{1}{\varphi(y)} |\varphi(x) - \varphi(y)| \le C_\varphi v_{s(x,y)}(\varphi), \quad (3)$$
where $C_\varphi$ depends on $\inf \varphi$. Let $\lambda, \lambda'$ be measures with $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm} \in I^+$, and let $P = \lambda \times \lambda'$. For convenience, we shall write $v_n(\lambda) = v_n(\frac{d\lambda}{dm})$ for such measures $\lambda$, and use the two notations interchangeably. We let $C_\lambda = C_\varphi$ above, where $\varphi = \frac{d\lambda}{dm}$. We shall write $\nu$ for the unique acip for $F$, which is equivalent to $m$.
We consider the direct product $F \times F : (\Delta \times \Delta, m \times m)$, and specify a return function to $\Delta_0 \times \Delta_0$. We first fix $n_0 > 0$ to be some integer large enough that $m(F^{-n}\Delta_0 \cap \Delta_0) \ge$ some $c > 0$ for all $n \ge n_0$. Such an integer exists since $\nu$ is mixing and equivalent to $m$. Now we let $\hat R(x)$ be the first arrival time to $\Delta_0$ (setting $\hat R|\Delta_0 \equiv 0$). We define a sequence $\{\tau_i\}$ of stopping time functions on $\Delta \times \Delta$ as follows:
$$\tau_1(x, y) = n_0 + \hat R(F^{n_0} x), \quad \tau_2(x, y) = \tau_1 + n_0 + \hat R(F^{\tau_1 + n_0} y), \quad \tau_3(x, y) = \tau_2 + n_0 + \hat R(F^{\tau_2 + n_0} x),$$
and so on, alternating between the two coordinates $x, y$ each time. Correspondingly, we shall define an increasing sequence $\xi_1 < \xi_2 < \dots$ of partitions of $\Delta \times \Delta$, according to each $\tau_i$. First, let $\pi, \pi'$ be the coordinate projections of $\Delta \times \Delta$ onto $\Delta$, that is, $\pi(x, y) := x$, $\pi'(x, y) := y$. At each stage we refine the partition according to one of the two coordinates, alternating between the two copies of $\Delta$. First, $\xi_1$ is given by taking the partition into rectangles $E \times \Delta$, $E \in \eta$, and refining so that $\tau_1$ is constant on each element $\Gamma \in \xi_1$, and $F^{\tau_1}|\pi(\Gamma)$ is for each $\Gamma$ an injection onto $\Delta_0$. To be precise, we write
$$\xi_1(x, y) = \left( \bigvee_{j=0}^{\tau_1(x,y)-1} F^{-j}\eta \right)(x) \times \Delta,$$
using throughout the convention that for a partition $\xi$, $\xi(x)$ denotes the element of $\xi$ containing $x$. Subsequently, we say $\xi_i$ is the refinement of $\xi_{i-1}$ such that each element of $\xi_{i-1}$ is partitioned in the first (resp. second) coordinate for $i$ odd (resp. even) so that $\tau_i$ is constant on each element $\Gamma \in \xi_i$, and $F^{\tau_i}$ maps $\pi(\Gamma)$ (resp. $\pi'(\Gamma)$) injectively onto $\Delta_0$.
We define $T$ to be the smallest $\tau_i$, $i \ge 2$, with $(F \times F)^{\tau_i}(x, y) \in \Delta_0 \times \Delta_0$. This is well-defined $m$-a.e. since $\nu \times \nu$ is ergodic (in fact, mixing). Note that this is not necessarily the first return time to $\Delta_0 \times \Delta_0$ for $F \times F$. We now consider the simultaneous return function $\hat F := (F \times F)^T$, and partition $\Delta \times \Delta$ into regions which $\hat F^n$ maps injectively onto $\Delta_0 \times \Delta_0$.
For $i \ge 1$ we let $T_i$ be the time corresponding to the $i$th iterate of $\hat F$, i.e. $T_1 \equiv T$, and for $i \ge 2$,
$$T_i(z) = T_{i-1}(z) + T(\hat F^{i-1} z).$$
Corresponding to $\{T_i\}$ we define a sequence of partitions $\eta \times \eta \le \hat\xi_1 \le \hat\xi_2 \le \dots$ of $\Delta \times \Delta$ similarly to before, such that for each $\Gamma \in \hat\xi_n$, $T_n|\Gamma$ is constant and $\hat F^n$ maps $\Gamma$ injectively onto $\Delta_0 \times \Delta_0$. It will be convenient to define a separation time $\hat s$ with respect to $\hat\xi_1$; $\hat s(w, z)$ is the smallest $n \ge 0$ s.t. $\hat F^n w, \hat F^n z$ are in different elements of $\hat\xi_1$. We notice that if $w = (x, x')$, $z = (y, y')$, then $\hat s(w, z) \le \min(s(x, y), s(x', y'))$.
Let $\varphi = \frac{d\lambda}{dm}$, $\varphi' = \frac{d\lambda'}{dm}$, and let $\Phi = \frac{dP}{d(m \times m)} = \varphi \cdot \varphi'$. We first consider the regularity of $\hat F$ and $\Phi$ with respect to the separation time $\hat s$.
Sublemma 1
1. For all $w, z \in \Delta \times \Delta$ with $\hat s(w, z) \ge n$,
$$\left| \log \frac{J\hat F^n(w)}{J\hat F^n(z)} \right| \le C_{\hat F} \, \beta^{\hat s(\hat F^n w, \hat F^n z)}$$
for $C_{\hat F}$ depending only on $F$.
2. For all $w, z \in \Delta \times \Delta$,
$$\left| \log \frac{\Phi(w)}{\Phi(z)} \right| \le C_\Phi \, v_{\hat s(w,z)}(\Phi),$$
where $v_n(\Phi) := \max(v_n(\varphi), v_n(\varphi'))$, and $C_\Phi = C_\varphi + C_{\varphi'}$, where $C_\varphi, C_{\varphi'}$ are the constants given in (3) above, corresponding to $\varphi, \varphi'$ respectively.
Proof: Let $w = (x, x')$, $z = (y, y')$. When $\hat s(w, z) \ge n$, there exists $k \in \mathbb{N}$ with $\hat F^n \equiv (F \times F)^k$ when restricted to the element of $\hat\xi_n$ containing $w, z$. So
$$\left| \log \frac{J\hat F^n(w)}{J\hat F^n(z)} \right| = \left| \log \frac{JF^k(x) \, JF^k(x')}{JF^k(y) \, JF^k(y')} \right| \le \left| \log \frac{JF^k(x)}{JF^k(y)} \right| + \left| \log \frac{JF^k(x')}{JF^k(y')} \right|.$$
Let $j$ be the number of times $F^i(x)$ enters $\Delta_0$, for $i = 1, \dots, k$. We have
$$\left| \log \frac{JF^k(x)}{JF^k(y)} \right| \le \sum_{i=1}^{j} C\beta^{s(F^k x, F^k y) + (j - i)} \le C'\beta^{s(F^k x, F^k y)}$$
for some $C' > 0$, and similarly for $x', y'$. So
$$\left| \log \frac{J\hat F^n(w)}{J\hat F^n(z)} \right| \le C_{\hat F} \, \beta^{\hat s(\hat F^n w, \hat F^n z)}$$
for some $C_{\hat F} > 0$. For the second part, we have
$$\left| \log \frac{\Phi(w)}{\Phi(z)} \right| \le \left| \log \frac{\varphi(x)}{\varphi(y)} \right| + \left| \log \frac{\varphi'(x')}{\varphi'(y')} \right| \le C_\varphi v_{s(x,y)}(\varphi) + C_{\varphi'} v_{s(x',y')}(\varphi') \le C_\Phi \, v_{\hat s(w,z)}(\Phi).$$
We now come to the core of the argument. We choose a sequence $(\varepsilon_i) < 1$, which represents the proportion of $P$ we try to subtract at each step of the construction. Let $\psi_0 \equiv \hat\psi_0 \equiv \Phi$. We proceed as follows; we push forward $\Phi$ by $\hat F$ to obtain the function $\psi_1(z) := \Phi(z)/J\hat F(z)$. On each element $\Gamma \in \hat\xi_1$ we subtract the constant $\varepsilon_1 \inf\{\psi_1(z) : z \in \Gamma\}$ from the density $\psi_1|\Gamma$. We continue inductively, pushing forward by dividing the density by $J\hat F(\hat F z)$ to get $\psi_2(z)$, subtracting $\varepsilon_2 \inf\{\psi_2(z) : z \in \Gamma\}$ from $\psi_2|\Gamma$ for each $\Gamma \in \hat\xi_2$, and so on. That is, we define:
$$\psi_i(z) = \frac{\hat\psi_{i-1}(z)}{J\hat F(\hat F^{i-1} z)}; \qquad \varepsilon_{i,z} = \varepsilon_i \inf_{w \in \hat\xi_i(z)} \psi_i(w); \qquad \hat\psi_i(z) = \psi_i(z) - \varepsilon_{i,z}.$$
We show that under certain conditions on (ε i ), the sequence {ψ i } satisfies a uniform bound on the ratios of its values for nearby points. A similar proposition to the following was obtained simultaneously but independently by Holland ([Hol]); there, the emphasis was on the regularity of the Jacobian.
Proposition 1 Suppose $(\varepsilon'_i) \le \frac12$ is a sequence with the property that
$$v_i(\Phi) \prod_{j=1}^{i} (1 + \varepsilon'_j) \le K_0, \quad (4)$$
$$\sum_{j=1}^{i} \left( \prod_{k=j}^{i} (1 + \varepsilon'_k) \right) \beta^{i-j+1} \le K_0 \quad (5)$$
are both satisfied for some sufficiently large constant $K_0$ allowed to depend only on $F$ and $v_0(\Phi)$. Then there exist $\bar\delta < 1$ and $\bar C > 0$ each depending only on $F$, $C_\Phi$ and $v_0(\Phi)$ such that if we choose $\varepsilon_i = \bar\delta \varepsilon'_i$ for each $i$, then for all $w, z$ with $\hat s(w, z) \ge i \ge 1$,
$$\left| \log \frac{\hat\psi_i(w)}{\hat\psi_i(z)} \right| \le \bar C.$$
Proof: Suppose we are given such a sequence $(\varepsilon'_i)$ and assume that for each $i$ we have
$$\left| \log \frac{\hat\psi_i(w)}{\hat\psi_i(z)} \right| \le (1 + \varepsilon'_i) \left| \log \frac{\psi_i(w)}{\psi_i(z)} \right| \quad (6)$$
for every $w, z$ with $\hat s(w, z) \ge i$. We shall see that we may achieve this by a suitable choice of $(\varepsilon_i)$. We note that when $\hat s(w, z) \ge i$,
$$\left| \log \frac{\psi_i(w)}{\psi_i(z)} \right| \le \left| \log \frac{\hat\psi_{i-1}(w)}{\hat\psi_{i-1}(z)} \right| + C_{\hat F} \beta^{\hat s(w,z) - i}, \quad \text{so} \quad \left| \log \frac{\hat\psi_i(w)}{\hat\psi_i(z)} \right| \le (1 + \varepsilon'_i) \left( \left| \log \frac{\hat\psi_{i-1}(w)}{\hat\psi_{i-1}(z)} \right| + C_{\hat F} \beta^{\hat s(w,z) - i} \right).$$
Applying this inductively, we obtain the estimate
$$\left| \log \frac{\hat\psi_i(w)}{\hat\psi_i(z)} \right| \le C_\Phi \prod_{j=1}^{i} (1 + \varepsilon'_j) \, v_{\hat s(w,z)}(\Phi) + (1 + \varepsilon'_i) C_{\hat F} \beta^{\hat s(w,z) - i} + (1 + \varepsilon'_i)(1 + \varepsilon'_{i-1}) C_{\hat F} \beta^{\hat s(w,z) - (i-1)} + \dots + \prod_{j=1}^{i} (1 + \varepsilon'_j) \, C_{\hat F} \beta^{\hat s(w,z) - 1}.$$
We see this is bounded above by the constant $(C_\Phi + \beta^{-1} C_{\hat F}) K_0 =: \bar C$. So we have
$$\left| \log \frac{\psi_i(w)}{\psi_i(z)} \right| \le \bar C + C_{\hat F} \quad \text{for } \hat s(w, z) \ge i.$$
It remains to examine the choice of sequence $(\varepsilon_i)$ necessary for (6) to hold. For now, let $(\varepsilon_i)$ be some sequence with $\varepsilon_i \le \varepsilon'_i$ for each $i$. Let $\Gamma = \hat\xi_i(w) = \hat\xi_i(z)$ and write $\varepsilon_{i,\Gamma} := \varepsilon_{i,w} = \varepsilon_{i,z}$. Then
$$\left| \log \frac{\hat\psi_i(w)}{\hat\psi_i(z)} \right| - \left| \log \frac{\psi_i(w)}{\psi_i(z)} \right| \le \left| \log \left( \frac{\psi_i(w) - \varepsilon_{i,\Gamma}}{\psi_i(z) - \varepsilon_{i,\Gamma}} \cdot \frac{\psi_i(z)}{\psi_i(w)} \right) \right| = \left| \log \left( 1 + \frac{\varepsilon_{i,\Gamma}\psi_i(w) - \varepsilon_{i,\Gamma}\psi_i(z)}{(\psi_i(z) - \varepsilon_{i,\Gamma})\psi_i(w)} \right) \right| = \left| \log \left( 1 + \frac{\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)} - \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)}}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}} \right) \right|.$$
We see that $0 \le \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)} \le \varepsilon_i$ for all $w \in \Gamma$, and
$$\frac{\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)} - \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)}}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}} \ge -\varepsilon_i > -\frac12,$$
so, since $|\log(1 + t)| \le C_1|t|$ for $t \ge -\frac12$, $C_1$ may be chosen so as not to depend on anything. Continuing from the estimate above,
$$\left| \log \frac{\hat\psi_i(w)}{\hat\psi_i(z)} \right| - \left| \log \frac{\psi_i(w)}{\psi_i(z)} \right| \le C_1 \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)} \left| 1 - \frac{\psi_i(z)}{\psi_i(w)} \right| \frac{1}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}} \le C_1 C_2 \frac{\varepsilon_i}{1 - \varepsilon_i} \left| \log \frac{\psi_i(w)}{\psi_i(z)} \right|,$$
where $C_2$ may be chosen independently of $i, w, z$ since $\frac{\psi_i(w)}{\psi_i(z)} \ge e^{-(\bar C + C_{\hat F})}$, provided that at each stage we choose $\varepsilon_i$ small enough that $C_1 C_2 \frac{\varepsilon_i}{1 - \varepsilon_i} \le \varepsilon'_i$. We confirm that it is sufficient to take $\varepsilon_i = \bar\delta \varepsilon'_i$ for small enough $\bar\delta > 0$. This means
$$C_1 C_2 \frac{\varepsilon_i}{1 - \varepsilon_i} = C_1 C_2 \frac{\bar\delta \varepsilon'_i}{1 - \bar\delta \varepsilon'_i} < \frac{C_1 C_2 \bar\delta}{1 - \bar\delta} \varepsilon'_i,$$
so taking $\bar\delta = \frac{1}{1 + C_1 C_2}$ is sufficient.
7 Choosing a sequence (ε i )
Having shown that it is sufficient for our purposes for the sequence (ε ′ i ) to satisfy conditions (4) and (5), we now consider how we might choose a sequence (ε i ) which, subject to these conditions, decreases as slowly as possible. Having chosen a sequence, we shall then estimate the rate of convergence this gives us.
Lemma 1 Given a sequence $v_i(\Phi)$, there exists a sequence $(\varepsilon'_i) \le \frac12$ satisfying (4) and (5) such that for $\varepsilon_i = \bar\delta \varepsilon'_i$, and any $K > 1$,
$$\prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) \le C \max\left( v_i(\Phi)^{\bar\delta/K}, \theta^i \right)$$
for some $\theta < 1$ depending only on $F$, and some $C > 0$.
Proof: We start by defining a sequence $(v^*_i) > 0$ as follows: we let $v^*_0 = v_0(\Phi)$ and $v^*_i = \max(v_i(\Phi), c v^*_{i-1})$, where $c$ is some constant such that $\exp\{-\min(\frac12, \beta^{-1} - 1)\} < c < 1$. We claim that $v^*_i = O(v_i(\Phi))$ unless $v_i(\Phi)$ decays exponentially fast, in which case $v^*_i$ decays at some (possibly slower) exponential rate. To see this, suppose otherwise, in the case where $v_i(\Phi)$ decays slower than any exponential speed. Then for large $i$ certainly $v^*_i > v_i(\Phi)$, and so $v^*_i = c v^*_{i-1}$ for large $i$, and $(v^*_i)$ decays exponentially fast. But this means $v_i(\Phi)$ decays exponentially fast, which is a contradiction.
Let us now choose
$$\varepsilon'_i = \log \frac{v^*_{i-1}}{v^*_i}.$$
(We ignore the trivial case $v_0(\Phi) = 0$.) We see that all terms are small enough that (5) is satisfied, and in particular, $\varepsilon'_i \le \frac12$. Furthermore,
$$v_i(\Phi) \prod_{j=1}^{i} (1 + \varepsilon'_j) \le v^*_i \exp\left\{ \sum_{j=1}^{i} \varepsilon'_j \right\} = v^*_i \exp\{\log v^*_0 - \log v^*_i\} = v_0(\Phi).$$
For any $K > 1$,
$$\prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) \le \exp\left\{ -\frac{\bar\delta}{K} \sum_{j=1}^{i} \varepsilon'_j \right\} = \exp\left\{ -\frac{\bar\delta}{K} (\log v^*_0 - \log v^*_i) \right\} = \left( v_0(\Phi)^{-1} v^*_i \right)^{\bar\delta/K}.$$
If $(v^*_i)$ decays exponentially fast, we get an exponential bound. Otherwise, $(v^*_i)^{\bar\delta/K} = O(v_i(\Phi)^{\bar\delta/K})$.
Convergence of measures
We introduce a sequence of measure densities $\hat\Phi_0 \equiv \Phi \ge \hat\Phi_1 \ge \hat\Phi_2 \ge \dots$ corresponding to the sequence $\{\hat\psi_i\}$ in the following way:
$$\hat\Phi_i(z) := \hat\psi_i(z) \, J\hat F^i(z).$$
Lemma 2 Given a sequence $(\varepsilon_i) = (\bar\delta\varepsilon'_i)$ satisfying the assumptions of Proposition 1, there exists $K > 1$ dependent only on $F$, $C_\Phi$ and $v_0(\Phi)$ such that for all $z \in \Delta \times \Delta$, $i \ge 1$,
$$\hat\Phi_i(z) \le \left( 1 - \frac{\varepsilon_i}{K} \right) \hat\Phi_{i-1}(z).$$
Proof: If we fix $i \ge 1$, $\Gamma \in \hat\xi_i$, and $w, z \in \Gamma$, then by Proposition 1 we have
$$\frac{\hat\Phi_i(w)}{J\hat F^i(w)} \le \bar C_0 \frac{\hat\Phi_i(z)}{J\hat F^i(z)},$$
where $\bar C_0 = e^{\bar C} > 1$. From Sublemma 1, we have $\frac{1}{J\hat F(\hat F^{i-1} w)} \le e^{C_{\hat F}} \frac{1}{J\hat F(\hat F^{i-1} z)}$, so
$$\frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)} = \frac{\hat\Phi_{i-1}(w)}{J\hat F^{i-1}(w)} \cdot \frac{1}{J\hat F(\hat F^{i-1} w)} \le \bar C_0 e^{C_{\hat F}} \frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)}.$$
Now we obtain a relationship between $\hat\Phi_i$ and $\hat\Phi_{i-1}$ by writing
$$\hat\Phi_i(z) = (\psi_i(z) - \varepsilon_{i,z}) J\hat F^i(z) = \left( \frac{\hat\psi_{i-1}(z)}{J\hat F(\hat F^{i-1} z)} - \varepsilon_i \inf_{w \in \hat\xi_i(z)} \frac{\hat\psi_{i-1}(w)}{J\hat F(\hat F^{i-1} w)} \right) J\hat F^i(z) = \left( \frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)} - \varepsilon_i \inf_{w \in \hat\xi_i(z)} \frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)} \right) J\hat F^i(z).$$
So for any $z \in \Delta \times \Delta$ we have that
$$\hat\Phi_i(z) \le \left( \frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)} - \frac{\varepsilon_i}{K} \frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)} \right) J\hat F^i(z) = \left( 1 - \frac{\varepsilon_i}{K} \right) \hat\Phi_{i-1}(z), \quad \text{where } K = \bar C_0 e^{C_{\hat F}}.$$
The above lemma gives an estimate on the total mass of $\hat\Phi_i$ for each $i$. To obtain an estimate for the difference between $F^n_*\lambda$ and $F^n_*\lambda'$, we must use this, and also take into account the length of the simultaneous return time $T$.
Lemma 3 For all $n > 0$,
$$|F^n_*\lambda - F^n_*\lambda'| \le 2P\{T > n\} + 2\sum_{i=1}^{n} \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) P\{T_i \le n < T_{i+1}\},$$
where $K$ is as in the previous lemma.
Proof: We define a sequence $\{\Phi_i\}$ of measure densities, corresponding to the measure unmatched at time $i$ with respect to $F \times F$. We shall often write $\Phi_i(m \times m)$, say, to refer to the measure which has density $\Phi_i$ with respect to $m \times m$. For $z \in \Delta \times \Delta$ we let $\Phi_n(z) = \hat\Phi_i(z)$, where $i$ is the largest integer such that $T_i(z) \le n$. Writing $\Phi = \Phi_n + \sum_{k=1}^{n} (\Phi_{k-1} - \Phi_k)$, we have
$$|F^n_*\lambda - F^n_*\lambda'| = |\pi_*(F \times F)^n_*(\Phi(m \times m)) - \pi'_*(F \times F)^n_*(\Phi(m \times m))| \le |\pi_*(F \times F)^n_*(\Phi_n(m \times m)) - \pi'_*(F \times F)^n_*(\Phi_n(m \times m))| + \sum_{k=1}^{n} |(\pi_* - \pi'_*)[(F \times F)^n_*((\Phi_{k-1} - \Phi_k)(m \times m))]|. \quad (7)$$
The first term is clearly $\le 2\int \Phi_n \, d(m \times m)$. Our construction should ensure the remaining terms are zero, since we have arranged that the measure we subtract is symmetric in the two coordinates. To confirm this, we partition $\Delta \times \Delta$ into regions on which each $T_m$ is constant, at least while $T_m < n$.
Consider the family of sets $A_{k,i}$, $i, k \in \mathbb{N}$, where $A_{k,i} := \{z \in \Delta \times \Delta : T_i(z) = k\}$. Clearly, each $A_{k,i}$ is a union of elements of $\hat\xi_i$, and for any fixed $k$ the sets $A_{k,i}$ are pairwise disjoint. It is also clear that on any $A_{k,i}$, $\Phi_{k-1} - \Phi_k \equiv \hat\Phi_{i-1} - \hat\Phi_i$, and for any $k$, $\Phi_{k-1} \equiv \Phi_k$ on $\Delta \times \Delta - \bigcup_i A_{k,i}$. So for each $k$,
$$\pi_*(F \times F)^n_*((\Phi_{k-1} - \Phi_k)(m \times m)) = \sum_i \sum_{\Gamma \in (\hat\xi_i | A_{k,i})} F^{n-k}_* \pi_* (F \times F)^{T_i}_* ((\hat\Phi_{i-1} - \hat\Phi_i)((m \times m)|\Gamma)).$$
We show that this measure is unchanged if we replace $\pi$ with $\pi'$ in the last expression. Let $E \subset \Delta$ be an arbitrary measurable set, and fix some $\Gamma \in \hat\xi_i | A_{k,i}$. Then
$$\pi_* \hat F^i_*((\hat\Phi_{i-1} - \hat\Phi_i)((m \times m)|\Gamma))(E) = \hat F^i_*\left( \varepsilon_i J\hat F^i \inf_{w \in \Gamma} \frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)} \, (m \times m)|\Gamma \right)(E \times \Delta) = \int_{\hat F^{-i}(E \times \Delta) \cap \Gamma} \varepsilon_i C \, J\hat F^i \, d(m \times m) = \varepsilon_i C (m \times m)(E \times \Delta),$$
where $C = \inf_{w \in \Gamma} \hat\Phi_{i-1}(w)/J\hat F^i(w)$ is constant on $\Gamma$. Since $(m \times m)(E \times \Delta) = (m \times m)(\Delta \times E)$, the terms of the sum in (7) all have zero value, as claimed.
Now
$$\int \Phi_n \, d(m \times m) = \sum_{i=0}^{\infty} \int_{\{T_i \le n < T_{i+1}\}} \Phi_n \, d(m \times m);$$
in fact, since $T_i \ge i$, all terms of the series are zero for $i > n$. For $1 \le i \le n$,
$$\int_{\{T_i \le n < T_{i+1}\}} \Phi_n = \int_{\{T_i \le n < T_{i+1}\}} \hat\Phi_i \le \int_{\{T_i \le n < T_{i+1}\}} \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) \Phi.$$
The estimate claimed for $|F^n_*\lambda - F^n_*\lambda'|$ follows easily. Finally we state a simple relationship between $P\{T > n\}$ and $(m \times m)\{T > n\}$. From now on we shall use the convention that $P\{\text{condition} \mid \Gamma\} := \frac{1}{P(\Gamma)} P\{x \in \Gamma : x \text{ satisfies condition}\}$.
Sublemma 2 There exists $\hat K > 0$ depending only on $C_\Phi$ and $v_0(\Phi)$ s.t. $\forall i \ge 1$, $\forall \Gamma \in \hat\xi_i$,
$$P\{T_{i+1} - T_i > n \mid \Gamma\} \le \hat K (m_0 \times m_0)\{T > n\}.$$
The dependence of $\hat K$ on $P$ may be removed entirely if we take only $i \ge$ some $i_0(P)$.
Proof: Let $\mu = \frac{1}{P(\Gamma)} \hat F^i_*(P|\Gamma)$. We see that $P\{T_{i+1} - T_i > n \mid \Gamma\} = \mu\{T > n\}$. We prove a distortion estimate for $\frac{d\mu}{d(m_0 \times m_0)}$, using the estimates of Sublemma 1. Let $w, z \in \Delta_0 \times \Delta_0$ and let $w_0, z_0 \in \Gamma$ be such that $\hat F^i w_0 = w$, $\hat F^i z_0 = z$. Then
$$\left| \log \frac{\frac{d\mu}{d(m \times m)}(w)}{\frac{d\mu}{d(m \times m)}(z)} \right| \le \left| \log \frac{\Phi(w_0)}{\Phi(z_0)} \right| + \left| \log \frac{J\hat F^i w_0}{J\hat F^i z_0} \right| \le C_\Phi v_i(\Phi) + C_{\hat F}.$$
This gives
$$\frac{d\mu}{d(m \times m)} \le \frac{e^{C_\Phi v_i(\Phi) + C_{\hat F}}}{(m_0 \times m_0)(\Delta_0 \times \Delta_0)},$$
and hence the result follows.
Combinatorial estimates
In Lemma 3 we have given the main estimate involving $P$, $T$, and the sequence $(\varepsilon_i)$. It remains to relate $P$ and $T$ to the sequence $m_0\{R > n\}$. Primarily, this involves estimates relating the sequences $P\{T > n\}$ and $m_0\{R > n\}$. We shall state only some key estimates of the proof, referring the reader to [Yo] for full details. Our statements differ slightly, as the estimates of [Yo] are stated in terms of $m\{\hat R > n\}$; they are easily reconciled by noting that $m\{\hat R > n\} = \sum_{i > n} m_0\{R > i\}$. (As earlier, $\hat R \ge 0$ is the first arrival time to $\Delta_0$.)
Proposition 2
1. If $m_0\{R > n\} = O(\theta^n)$ for some $0 < \theta < 1$, then $P\{T > n\} = O(\theta_1^n)$, for some $0 < \theta_1 < 1$. Also, for sufficiently small $\delta_1 > 0$, $P\{T_i \le n < T_{i+1}\} \le C\theta'^n$ for $i \le \delta_1 n$, for some $0 < \theta' < 1$, $C > 0$ independent of $i$. The constants $\theta_1, \theta', \delta_1$ may all be chosen independently of $P$.
2. If $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$, then $P\{T > n\} = O(n^{1-\alpha})$.
This proposition follows from estimates involving the combinatorics of the intermediate stopping times $\{\tau_i\}$. Let us make explicit a key sublemma used in the proofs, concerning the regularity of the pushed-forward measure densities $\frac{dF^n_*\lambda}{dm}$; for the rest of the argument we refer to [Yo], as the changes are minor.
Sublemma 3 For any $k > 0$, let $\Omega \in \bigvee_{i=0}^{k-1} F^{-i}\eta$ be s.t. $F^k\Omega = \Delta_0$. Let $\mu = F^k_*(\lambda|\Omega)$. Then $\forall x, y \in \Delta_0$, we have
$$\left| \frac{\frac{d\mu}{dm}(x)}{\frac{d\mu}{dm}(y)} - 1 \right| \le C_0$$
for some $C_0(\lambda)$, where the dependence on $\lambda$ is only on $v_0(\lambda)$ and $C_\lambda$, and may be removed entirely if we only consider $k \ge$ some $k_0(\lambda)$.
Proof: Let $\varphi = \frac{d\lambda}{dm}$, fix $x, y \in \Delta_0$, and let $x_0, y_0$ be the unique points in $\Omega$ such that $F^k x_0 = x$, $F^k y_0 = y$. We note that $\frac{d\mu}{dm}(x) = \varphi(x_0) \cdot \frac{dF^k_*(m|\Omega)}{dm}(x_0)$. So
$$\left| \frac{\varphi(x_0)}{JF^k x_0} \cdot \frac{JF^k y_0}{\varphi(y_0)} - 1 \right| = \frac{JF^k y_0}{\varphi(y_0)} \left| \frac{\varphi(x_0)}{JF^k x_0} - \frac{\varphi(y_0)}{JF^k y_0} \right| \le \frac{JF^k y_0}{\varphi(y_0)} \left( \varphi(x_0) \left| \frac{1}{JF^k x_0} - \frac{1}{JF^k y_0} \right| + \frac{1}{JF^k y_0} |\varphi(x_0) - \varphi(y_0)| \right) \le \frac{\varphi(x_0)}{\varphi(y_0)} \left| \frac{JF^k y_0}{JF^k x_0} - 1 \right| + \left| \frac{\varphi(x_0)}{\varphi(y_0)} - 1 \right| \le (1 + C_\varphi v_j(\varphi)) C' + C_\varphi v_j(\varphi) \le (1 + C_\varphi v_0(\varphi)) C' + C_\varphi v_0(\varphi),$$
where $j$ is the number of visits to $\Delta_0$ up to time $k$. Clearly the penultimate bound can be made independent of $\lambda$ for $j \ge$ some $j_0(\lambda)$.
The following result combines the estimates above with those of the previous section.
Proposition 3
1. When $m_0\{R > n\} \le C_1\theta^n$,
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + 2\sum_{i=[\delta_1 n]+1}^{n} \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right)$$
for some $0 < \theta' < 1$ and sufficiently small $\delta_1$.
2. When $m_0\{R > n\} \le C_1 n^{-\alpha}$, $\alpha > 1$,
$$|F^n_*\lambda - F^n_*\lambda'| \le 2C_1 n^{1-\alpha} + Cn^{1-\alpha} \sum_{i=1}^{[\delta_1 n]} i^\alpha \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) + \sum_{i=[\delta_1 n]+1}^{n} \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right)$$
for sufficiently small $\delta_1$.
Proof: In the first case, Proposition 2 and Lemma 3 tell us that
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta_0^n + 2\sum_{i=1}^{[\delta_1 n]} P\{T_i \le n < T_{i+1}\} + 2\sum_{i=[\delta_1 n]+1}^{n} \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right)$$
for any $0 < \delta_1 < 1$, for some $0 < \theta_0 < 1$. For sufficiently small $\delta_1$, the middle term is $\le C[\delta_1 n]\theta'^n$, which decays at some exponential speed in $n$.
In the second case, for any $0 < \delta_1 < 1$ we have
$$|F^n_*\lambda - F^n_*\lambda'| \le Cn^{1-\alpha} + 2\sum_{i=1}^{[\delta_1 n]} \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) P\{T_i \le n < T_{i+1}\} + \sum_{i=[\delta_1 n]+1}^{n} \prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right).$$
We estimate the middle term by noting that
$$P\{T_i \le n < T_{i+1}\} \le \sum_{j=0}^{i} P\left\{ T_{j+1} - T_j > \frac{n}{i+1} \right\} \le \hat K(i+1)(m \times m)\left\{ T > \frac{n}{i+1} \right\} \le Cn^{1-\alpha}(i+1)^\alpha$$
for some $C > 0$. For the last step, note that Proposition 2 applies to the normalisation of $(m \times m)$ to a probability measure.
Specific regularity classes
We now combine all of our intermediate estimates to obtain a rate of decay of correlations in the specific cases mentioned in Theorem 6. First, we set $\zeta = \bar\delta/K$, which can be seen to depend only on $F$, $C_\Phi$ and $v_0(\Phi)$. Throughout this section, we shall let $C$ denote a generic constant, allowed to depend only on $F$ and $\Phi$, which may vary between expressions.
Exponential return times
In this subsection, we suppose that m 0 {R > n} = O(θ n ), and hence m{R > n} = O(θ n ).
Class (V1): Suppose $v_i(\Phi) = O(\theta_1^i)$ for some $\theta_1 < 1$. By Lemma 1 we may take $(\varepsilon_i)$ such that conditions (4) and (5) are satisfied, and
$$\prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) = O(\theta_2^i)$$
for some $0 < \theta_2 < 1$. Applying Proposition 3 we have
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i > [\delta_1 n]} \theta_2^i$$
for some $\theta' < 1$ and sufficiently small $\delta_1 > 0$. This gives the required exponential bound in $n$.
Class (V2): Suppose $v_i(\Phi) = O(e^{-i^\gamma})$, for some $\gamma \in (0,1)$. Then there exists $(\varepsilon_j)$ such that $\prod_{j=1}^{i} (1 - \frac{\varepsilon_j}{K}) = O(e^{-\zeta i^\gamma})$. So
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i > [\delta_1 n]} e^{-\zeta i^\gamma}.$$
We see that $e^{-\zeta i^\gamma} = O(e^{-i^{\gamma'}})$ for every $0 < \gamma' < \gamma$, and it is well known that the sum is of order $e^{-n^{\gamma''}}$ for every $0 < \gamma'' < \gamma'$.
Class (V3): Suppose $v_i(\Phi) = O(e^{-(\log i)^\gamma})$ for some $\gamma > 1$. We may take $(\varepsilon_j)$ such that $\prod_{j=1}^{i} (1 - \frac{\varepsilon_j}{K}) = O(e^{-\zeta(\log i)^\gamma})$. So
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i > [\delta_1 n]} e^{-\zeta(\log i)^\gamma}.$$
It is easy to show that $e^{-\zeta(\log i)^\gamma} = O(e^{-(\log i)^{\gamma'}})$ for every $0 < \gamma' < \gamma$. So the sum is of order $O(e^{-(\log n)^{\gamma''}})$ for every $0 < \gamma'' < \gamma$.
Class (V4): Suppose $v_i(\Phi) = O(i^{-\gamma})$ for some $\gamma > \frac{1}{\zeta}$. Then we can take $(\varepsilon_j)$ such that $\prod_{j=1}^{i} (1 - \frac{\varepsilon_j}{K}) \le Ci^{-\zeta\gamma}$. So
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i=[\delta_1 n]+1}^{n} i^{-\zeta\gamma} = O(n^{1-\zeta\gamma}).$$
Polynomial return times
Here we suppose $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$. Suppose $v_n(\Phi) = O(n^{-\gamma})$, for some $\gamma > \frac{2}{\zeta}$. We can take $(\varepsilon_i)$ such that
$$\prod_{j=1}^{i} \left( 1 - \frac{\varepsilon_j}{K} \right) \le Ci^{-\zeta\gamma}.$$
By Proposition 3, for some $\delta_1$,
$$|F^n_*\lambda - F^n_*\lambda'| \le 2Cn^{1-\alpha} + Cn^{1-\alpha} \sum_{i=1}^{[\delta_1 n]} i^{\alpha - \zeta\gamma} + C\sum_{i=[\delta_1 n]+1}^{\infty} i^{-\zeta\gamma}.$$
The third term here is of order $n^{1-\zeta\gamma}$. To estimate the second term, we consider three cases.
Case 1: $\gamma > \frac{\alpha+1}{\zeta}$. Here, $\alpha - \zeta\gamma < -1$, so the sum is bounded above independently of $n$, and the whole term is $O(n^{1-\alpha})$.
Case 2: $\gamma = \frac{\alpha+1}{\zeta}$. The sum is
$$\sum_{i \le [\delta_1 n]} i^{-1} \le 1 + \int_1^{[\delta_1 n]} x^{-1} \, dx = 1 + \log[\delta_1 n] = O(\log n).$$
So the whole term is $O(n^{1-\alpha} \log n)$.
Case 3: $\frac{2}{\zeta} < \gamma < \frac{\alpha+1}{\zeta}$. The sum is of order $n^{\alpha + 1 - \zeta\gamma}$, and so the whole term is $O(n^{2-\zeta\gamma})$.
Decay of correlations
Finally, we show how estimates for decay of correlations may be derived directly from those for the rates of convergence of measures.
Let $\varphi, \psi \in L^\infty(\Delta, m)$, as in the statement of Theorem 6. We write $\tilde\psi := b(\psi + a)$, where $a = 1 - \inf \psi$ and $b$ is such that $\int \tilde\psi \, d\nu = 1$. We notice that $b \in \left[ \frac{1}{1 + v_0(\psi)}, 1 \right]$, and that $\inf \tilde\psi = b$, $\sup \tilde\psi \le 1 + v_0(\psi)$. Now let $\rho = \frac{d\nu}{dm}$, and let $\lambda$ be the measure on $\Delta$ with $\frac{d\lambda}{dm} = \tilde\psi\rho$. We have
$$\left| \int (\varphi \circ F^n)\psi \, d\nu - \int \varphi \, d\nu \int \psi \, d\nu \right| = \frac{1}{b} \left| \int (\varphi \circ F^n)\tilde\psi \, d\nu - \int \varphi \, d\nu \int \tilde\psi \, d\nu \right| = \frac{1}{b} \left| \int \varphi \, d(F^n_*(\tilde\psi\rho \, m)) - \int \varphi\rho \, dm \right| \le \frac{1}{b} \int |\varphi| \left| \frac{dF^n_*\lambda}{dm} - \rho \right| dm \le \frac{1}{b} \|\varphi\|_\infty \, |F^n_*\lambda - \nu|.$$
It remains to check the regularity of $\tilde\psi\rho$. First,
$$|\tilde\psi(x)\rho(x) - \tilde\psi(y)\rho(y)| \le |\tilde\psi(x)| \, |\rho(x) - \rho(y)| + \rho(y) \, |\tilde\psi(x) - \tilde\psi(y)| \le \|\tilde\psi\|_\infty |\rho(x) - \rho(y)| + \|\rho\|_\infty |b| \, |\psi(x) - \psi(y)|.$$
It can be shown that $\rho$ is bounded below by some positive constant, and $v_n(\rho) \le C\beta^n$. (This is part of the statement of Theorem 1 in [Yo].) So $\tilde\psi\rho$ is bounded away from zero, and $v_n(\tilde\psi\rho) \le C\beta^n + Cv_n(\psi)$, where $C$ depends on $v_0(\psi)$.
Taking $\lambda' = \nu$, we see that $v_n(\Phi) \le C\beta^n + Cv_n(\psi)$. This shows that estimates for $|F^n_*\lambda - F^n_*\lambda'|$ carry straight over to estimates for decay of correlations. To check that the dependency of the constants is as we require, we note that we can take
$$C_\lambda = \frac{1}{\inf \frac{d\lambda}{dm}} = \frac{1}{\inf \tilde\psi\rho} \le \frac{1}{b \inf \rho} \le \frac{1 + v_0(\psi)}{\inf \rho}.$$
So an upper bound for this constant is determined by $v_0(\psi)$. Clearly $C_\Phi$ depends only on $F$ and an upper bound for $v_0(\psi)$, and in particular these constants determine $\zeta = \bar\delta/K$.
Central Limit Theorem
We verify the Central Limit Theorem in each case for classes of observables which give summable decay of autocorrelations (that is, summable decay of correlations under the restriction ϕ = ψ).
A general theorem of Liverani ( [L]) reduces in this context to the following.
Theorem 7 Let $(X, \mathcal{F}, \mu)$ be a probability space, and $T : X \to X$ a (non-invertible) ergodic measure-preserving transformation. Let $\varphi \in L^\infty(X, \mu)$ be such that $\int \varphi \, d\mu = 0$. Assume
$$\sum_{n=1}^{\infty} \left| \int (\varphi \circ T^n)\varphi \, d\mu \right| < \infty, \quad (8)$$
$$\sum_{n=1}^{\infty} (\hat T^{*n}\varphi)(x) \text{ is absolutely convergent for } \mu\text{-a.e. } x, \quad (9)$$
where $\hat T^*$ is the dual of the operator $\hat T : \varphi \mapsto \varphi \circ T$. Then the Central Limit Theorem holds for $\varphi$ if and only if $\varphi$ is not a coboundary.
In the above, the dual operator $\hat T^*$ is the Perron-Frobenius operator corresponding to $T$ and $\mu$, that is
$$(\hat T^*\varphi)(x) = \sum_{y : Ty = x} \frac{\varphi(y)}{JT(y)}.$$
Of course the Jacobian $JT$ here is defined in terms of the measure $\mu$. Let $\varphi : \Delta \to \mathbb{R}$ be an observable which is not a coboundary, and for which $C_n(\varphi, \varphi; \nu) = \left| \int (\varphi \circ F^n)\varphi \, d\nu - (\int \varphi \, d\nu)^2 \right|$ is summable. Let $\phi = \varphi - \int \varphi \, d\nu$, so that $\int \phi \, d\nu = \int \varphi \, d\nu - \int (\int \varphi \, d\nu) \, d\nu = 0$.
We shall show that $\phi$ satisfies the assumptions of the theorem above. It is straightforward to check that $C_n(\varphi, \varphi; \nu) = C_n(\phi, \phi; \nu) = \left| \int (\phi \circ F^n)\phi \, d\nu \right|$. Hence condition (8) above is satisfied for $\phi$.
Since $m$ and $\nu$ are equivalent measures, it suffices to verify the condition in (9) $m$-a.e. The operator $\hat F^*$ is defined in terms of the invariant measure, so for a measure $\lambda \ll m$ it sends $\frac{d\lambda}{d\nu}$ to $\frac{dF_*\lambda}{d\nu}$. By a change of coordinates (or rather, of reference measure), we find that
$$(\hat F^{*n}\phi)(x) = \frac{1}{\rho(x)}(P^n(\phi\rho))(x),$$
where $P$ is the Perron-Frobenius operator with respect to $m$, that is, the operator sending densities $\frac{d\lambda}{dm}$ to $\frac{dF_*\lambda}{dm}$.
We shall now write $\phi$ as the difference of the densities of two (positive) measures of similar regularity to $\phi$. We let $\tilde\phi = b(\phi + a)$, for some large $a$, with $b > 0$ chosen such that $\int\tilde\phi\rho\,dm = 1$. We define measures $\lambda, \lambda'$ by
$$\frac{d\lambda}{dm} = (b\phi + \tilde\phi)\rho, \qquad \frac{d\lambda'}{dm} = \tilde\phi\rho.$$
It is straightforward to check this gives two probability measures, and that
$$b^{-1}\Big(\frac{d\lambda}{dm} - \frac{d\lambda'}{dm}\Big) = \phi\rho.$$
As we showed in the previous section, $v_n(\phi\rho) \le C\beta^n + Cv_n(\phi)$ for some $C > 0$. Also, $b\phi + \tilde\phi = b(2\phi + a)$, which is bounded below by some positive constant, provided we choose sufficiently large $a$. We easily see $v_n(\lambda), v_n(\lambda') \le Cv_n(\varphi)$. We now follow the construction of the previous sections for these given measures $\lambda, \lambda'$, and consider the sequence of densities $\Phi_n$ defined in §8. We have
$$F^n_*\lambda - F^n_*\lambda' = \pi_*(F\times F)^n_*(\Phi_n(m\times m)) - \pi'_*(F\times F)^n_*(\Phi_n(m\times m)).$$
Let $\psi_n$ be the density of the first term with respect to $m$, and $\psi'_n$ the density of the second. Since $P$ is a linear operator, we see that
$$|P^n(\phi\rho)| = b^{-1}\left|\frac{dF^n_*\lambda}{dm} - \frac{dF^n_*\lambda'}{dm}\right| \le b^{-1}(\psi_n + \psi'_n).$$
These densities have integral and distortion which are estimable by the construction. We know $\int\psi_n\,dm = \int\psi'_n\,dm = \int\Phi_n\,d(m\times m)$. In the cases we consider (sufficiently fast polynomial variations) this is summable in $n$; notice that we have already used this expression as a key upper bound for $\frac{1}{2}|F^n_*\lambda - F^n_*\lambda'|$ (see Lemma 3). It remains to show that a similar condition holds pointwise, by showing that $\psi_n, \psi'_n$ both have bounded distortion on each $\Delta_l$, and hence $|F^n_*\lambda - F^n_*\lambda'|$ is an upper bound for $\psi_n + \psi'_n$, up to some constant. This follows non-trivially from Proposition 1, which gives a distortion bound on $\{\hat\Phi_k\}$, and hence on $\{\Phi_n\}$ when we restrict to elements of a suitable partition. The remainder of the argument is essentially no different from that given in [Yo], and we omit it here.
Applications
Having obtained estimates in the abstract framework of Young's tower, we now discuss how these results may be applied to other settings. First, we define formally what it means for a system to admit a tower.
Let $X$ be a finite dimensional compact Riemannian manifold, with Leb denoting some Riemannian volume (Lebesgue measure) on $X$. We say that a locally $C^1$ non-uniformly expanding system $f : X \to X$ admits a tower if there exists a subset $X_0 \subset X$, $\mathrm{Leb}(X_0) > 0$, a partition (mod Leb) $\mathcal{P}$ of $X_0$, and a return time function $R : X_0 \to \mathbb{N}$ constant on each element of $\mathcal{P}$, such that
• for every $\omega \in \mathcal{P}$, $f^R|\omega$ is an injection onto $X_0$;
• $f^R$ and $(f^R|\omega)^{-1}$ are Leb-measurable functions, $\forall\omega \in \mathcal{P}$;
• $\bigvee_{j=0}^{\infty}(f^R)^{-j}\mathcal{P}$ is the trivial partition into points;
• the volume derivative $\det Df^R$ is well-defined and non-singular (i.e. $0 < |\det Df^R| < \infty$) Leb-a.e., and $\exists C > 0$, $\beta < 1$, such that $\forall\omega \in \mathcal{P}$, $\forall x, y \in \omega$,
$$\left|\frac{\det Df^R(x)}{\det Df^R(y)} - 1\right| \le C\beta^{s(F^Rx, F^Ry)},$$
where $s$ is defined in terms of $f^R$ and $\mathcal{P}$ as before.
We say the system admits the tower $F : (\Delta, m) \to (\Delta, m)$ if the base $\Delta_0 = X_0$, $m|\Delta_0 = \mathrm{Leb}|X_0$, and the tower is determined by $\Delta_0$, $R$, $F^R := f^R$ and $\mathcal{P}$ as in §3. It is easy to check that the usual assumptions of the tower hold, except possibly for aperiodicity and finiteness. In particular, $|\det Df^R|$ equals the Jacobian $JF^R$.
If $F : (\Delta, m) \to (\Delta, m)$ is a tower for $f$ as above, there exists a projection $\pi : \Delta \to X$, which we shall simply call the tower projection, which is a semi-conjugacy between $f$ and $F$; that is, for $x \in \Delta_l$, with $x = F^lx_0$ for $x_0 \in \Delta_0$, $\pi(x) := f^l(x_0)$. In all the examples we have mentioned in §2, the standard tower constructions (as given in the papers we cited there) provide us with a tower projection $\pi$ which is Hölder-continuous with respect to the separation time $s$ on $\Delta$. That is, given a Riemannian metric $d$, in each case we have that $\exists\beta < 1$ such that for $x, y \in \Delta$,
$$d(\pi(x), \pi(y)) = O(\beta^{s(x,y)}). \qquad (10)$$
Note that the issue of the regularity of π is often not mentioned explicitly in the literature, but essentially follows from having good distortion control for every iterate of the map. (Formally, a tower is only required to have good distortion for the return map F R , which is not sufficient.)
Given a system $f$ which admits a tower $F : (\Delta, m) \to (\Delta, m)$ with projection $\pi$ satisfying (10), we show how the observable classes (R1-4) on $X$ correspond to the classes (V1-4) of observables on $\Delta$. Recall that for given $\psi$, $R_\varepsilon(\psi) := \sup\{|\psi(x) - \psi(y)| : d(x,y) \le \varepsilon\}$. Given a regularity for $\psi$ in terms of $R_\varepsilon(\psi)$, we estimate the regularity of $\psi\circ\pi$, which is an observable on $\Delta$.
Lemma 4
• If $\psi \in (R1,\gamma)$ for some $\gamma \in (0,1]$, then $\psi\circ\pi \in (V1)$;
• if $\psi \in (R2,\gamma)$ for some $\gamma \in (0,1)$, then $\psi\circ\pi \in (V2,\gamma')$ for every $\gamma' < \gamma$;
• if $\psi \in (R3,\gamma)$ for some $\gamma > 1$, then $\psi\circ\pi \in (V3,\gamma')$ for every $\gamma' < \gamma$;
• if $\psi \in (R4,\gamma)$ for some $\gamma > 1$, then $\psi\circ\pi \in (V4,\gamma)$.
Proof: The computations are entirely straightforward, so we shall just make explicit the (R4) case for the purposes of illustration. Suppose $R_\varepsilon(\psi) = O(|\log\varepsilon|^{-\gamma})$ for some $\gamma > 0$. Then, taking $n$ large as necessary,
$$v_n(\psi\circ\pi) \le C|\log C\beta^n|^{-\gamma} = C(n\log\beta^{-1} - \log C)^{-\gamma} \le C\Big(\frac{n}{2}\log\beta^{-1}\Big)^{-\gamma} = O(n^{-\gamma}).$$
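Since this translation is purely mechanical, a tiny sketch of ours (the constants C and beta below are placeholder assumptions, not values from the paper) shows how a modulus of continuity for $\psi$ becomes a variation bound for $\psi\circ\pi$ on the tower, exactly as in the computation above.

```python
import math

def tower_variation_bound(modulus, n, C=1.0, beta=0.5):
    """Bound v_n(psi o pi) given R_eps(psi) <= modulus(eps) and the Holder
    bound d(pi(x), pi(y)) <= C * beta**s(x, y):  v_n <= modulus(C * beta**n)."""
    return modulus(C * beta ** n)

# Class (R4, gamma): R_eps(psi) = |log eps|**(-gamma) turns into v_n = O(n**(-gamma)).
gamma = 2.0
R4 = lambda eps: abs(math.log(eps)) ** (-gamma)
for n in (10, 100, 1000):
    print(n, tower_variation_bound(R4, n), n ** (-gamma))  # same order in n
```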
Let us point out that the condition (10) is not necessary for us to apply these methods. If we are given some weaker regularity on $\pi$, the classes (V1-4) shall simply correspond to some larger observable classes on the manifold. It remains to check that the semi-conjugacy $\pi$ preserves the statistical properties we are interested in.
Lemma 5 Let $\nu$ be the mixing acip on $\Delta$ given by Theorem 6. Given $\varphi, \psi : X \to \mathbb{R}$, let $\tilde\varphi = \varphi\circ\pi$, $\tilde\psi = \psi\circ\pi$. Then $C_n(\varphi,\psi;\pi_*\nu) = C_n(\tilde\varphi,\tilde\psi;\nu)$.

Lemma 6 Suppose the Central Limit Theorem holds for $(F,\nu)$ for some observable $\varphi : \Delta \to \mathbb{R}$. Then the Central Limit Theorem also holds for $(f,\pi_*\nu)$ for the observable $\tilde\varphi = \varphi\circ\pi$.

Figure 1: A system admitting a tower via a non-Hölder semi-conjugacy.
Given $0 < a < b < 1$, and $\alpha > 1$, we define $f : [0,1] \to [0,1]$ by
$$f(x) = \begin{cases} 1 - (1-b)(-\log a)^{\alpha}(-\log(a-x))^{-\alpha} & x \in [0,a] \\ \frac{b}{b-a}(x-a) & x \in (a,b) \\ b - \frac{b}{a}\exp\{(1-b)^{\alpha^{-1}}(\log a)(1-x)^{-\alpha^{-1}}\} & x \in [b,1]. \end{cases}$$
(See figure 1.) Note that the map has unbounded derivative near $a$, and that $a$ maps onto the critical point at 1. It is easy to check that $f$ is monotone increasing on each interval, and that $f$ has a Markov structure on these intervals. Taking $\Delta_0 = [0,b]$, $\mathcal{P} = \{[0,a], (a,b)\}$ with $R([0,a]) = 2$, $R((a,b)) = 1$, it is clear that the conditions for $f$ to admit a tower $F : \Delta \to \Delta$ are satisfied. For $x, y \in [0,a]$, we have that $|x - y| \approx (\frac{b}{a})^{-s(x,y)}$. If we fix $k$ and consider $|f(x) - f(y)|$ for $x, y \in [0,a)$ with $s(x,y) = k$, then for $y$ close to $a$, we have $|f(x) - f(y)| \approx (k\log(b/a) + C)^{-\alpha} \approx k^{-\alpha}$ for some $C$. This determines the regularity of the tower projection $\pi$, which is in particular not Hölder continuous. However, if we take $\psi \in (R1,\gamma)$ for some
math0401432 | 2951956456 | We consider the general question of estimating decay of correlations for non-uniformly expanding maps, for classes of observables which are much larger than the usual class of Holder continuous functions. Our results give new estimates for many non-uniformly expanding systems, including Manneville-Pomeau maps, many one-dimensional systems with critical points, and Viana maps. In many situations, we also obtain a Central Limit Theorem for a much larger class of observables than usual. Our main tool is an extension of the coupling method introduced by L.-S. Young for estimating rates of mixing on certain non-uniformly expanding tower maps. | There have been various results concerned primarily with weakening the assumption on the regularity of @math , and obtaining (slower) upper bounds for the rate of mixing with respect to the corresponding equilibrium measures. Kondah, Maume and Schmitt ( @cite_15 ) used a method of Birkhoff cones and projective metrics, Bressaud, Fernandez and Galves ( @cite_4 ) used a coupling method (different from the one described here), with estimates given in terms of , and Pollicott ( @cite_14 ) introduced a method involving composing transfer operators with conditional expectations. Each of these results has slightly different assumptions and gives slightly different estimates, but in each case a number of different classes of potentials are considered, and estimates are given for observables of some similar regularity to the potential. In particular, in all three examples polynomial mixing is given for a potential and observables with variations decaying at suitable polynomial rates. | {
"abstract": [
"Resume On etudie la vitesse de convergence vers l'etat d'equilibre pour des dynamiques markoviennes non holderiennes. On obtient une estimation de la vitesse de melange sur un sous-espace B dense dans l'espace des fonctions continues. En outre, on montre que le spectre de l'operateur de Perron-Frobenius, restreint aB, est un disque ferme dont chaque point est une valeur propre. Ceci implique que la vitesse de convergence vers l'etat d'equilibre ne peut pas etre exponentielle.",
"",
"We present an upper bound on the mixing rate of the equilibrium state of a dynamical system defined by the one-sided shift and a non Holder potential of summable variations. The bound follows from an estimation of the relaxation speed of chains with complete connections with summable decay, which is obtained via a explicit coupling between pairs of chains with different histories."
],
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_4"
],
"mid": [
"2169660011",
"1555319984",
"2140237849"
]
} | Decay of correlations for non-Hölder observables | In this paper, we are interested in mixing properties (in particular, decay of correlations) of non-uniformly expanding maps. Much progress has been made in recent years, with upper estimates being obtained for many examples of such systems. Almost invariably, these estimates are for observables which are Hölder continuous. Our aim here is to extend the study to much larger classes of observables.
Let $f : (X,\nu) \to (X,\nu)$ be some mixing system. We define a correlation function
$$C_n(\varphi,\psi;\nu) = \left|\int(\varphi\circ f^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right|$$
for $\varphi, \psi \in L^2$. The rate at which this sequence decays to zero is a measure of how quickly $\varphi\circ f^n$ becomes independent from $\psi$. It is well known that for any non-trivial mixing system, there exist $\varphi, \psi \in L^2$ for which correlations decay arbitrarily slowly. For this reason, we must restrict at least one of the observables to some smaller class of functions, in order to get an upper bound for $C_n$.
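As a concrete illustration (our own sketch, not part of the paper), the correlation function can be estimated numerically along a single orbit via Birkhoff averages; here we use the map $x \mapsto 3x \bmod 1$, whose acip is Lebesgue measure, with arbitrary smooth observables. The map, observables and sample sizes are all illustrative assumptions.

```python
import numpy as np

def correlation(f, phi, psi, n_max, n_points=10**5, burn_in=10**3, seed=0):
    """Estimate C_n(phi, psi; nu) along a long orbit, replacing the spatial
    integrals by time averages (Birkhoff ergodic theorem)."""
    rng = np.random.default_rng(seed)
    x = rng.random()
    for _ in range(burn_in):              # let the orbit settle onto nu
        x = f(x)
    orbit = np.empty(n_points + n_max)
    for i in range(n_points + n_max):
        orbit[i] = x
        x = f(x)
    phis, psis = phi(orbit), psi(orbit)
    mean_phi, mean_psi = phis[:n_points].mean(), psis[:n_points].mean()
    return [abs(np.mean(phis[n:n + n_points] * psis[:n_points])
                - mean_phi * mean_psi) for n in range(n_max)]

tripling = lambda x: (3.0 * x) % 1.0      # acip = Lebesgue; avoids the float
C = correlation(tripling, np.sin, np.cos, n_max=20)   # collapse of 2x mod 1
print(C)                                  # decays exponentially for these observables
```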
Here, we present a result which is general in the context of towers, as introduced by L.-S. Young ([Yo]). There are many examples of systems which admit such towers, and we shall see that under a fairly weak assumption on the relationship between the tower and the system (which is satisfied in all the examples we mention) we get estimates for certain classes of observables with respect to the system itself. One of the main strengths of this method is that these classes of observables may be defined purely in terms of their regularity with respect to the manifold; this contrasts with some results, where regularity is considered with respect to some Markov partition.
All of our results shall take the following form. Given a system $f : X \to X$, a mixing acip $\nu$, and $\varphi \in L^\infty(X,\nu)$, $\psi \in I$, for some class $I = (Ri,\gamma)$ as above, we obtain in each example an estimate of the form
$$C_n(\varphi,\psi;\nu) \le \|\varphi\|_\infty\,C(\psi)\,u_n,$$
where $\|\cdot\|_\infty$ is the usual norm on $L^\infty(X,\nu)$, $C(\psi)$ is a constant depending on $f$ and $\psi$, and $(u_n)$ is some sequence decaying to zero with rate determined by $f$ and $R_\varepsilon(\psi)$. Notice that we make no assumption on the regularity of the observable $\varphi$; when discussing the regularity class of observables, we shall always be referring to the choice of the function $\psi$. (This is not atypical, although some existing results do require that both functions have some minimum regularity.)
For brevity, we shall simply give an estimate for $u_n$ in the statement of each result. For each example we also have a Central Limit Theorem for those observables which give summable decay of correlations, and are not coboundaries. We recall that a real-valued observable $\varphi$ satisfies the Central Limit Theorem for $f$ if there exists $\sigma > 0$ such that for every interval $J \subset \mathbb{R}$,
$$\nu\left\{x \in X : \frac{1}{\sqrt n}\sum_{j=0}^{n-1}\Big(\varphi(f^j(x)) - \int\varphi\,d\nu\Big) \in J\right\} \to \frac{1}{\sigma\sqrt{2\pi}}\int_Je^{-\frac{t^2}{2\sigma^2}}\,dt.$$
Note that the range of examples given in the following subsections is meant to be illustrative rather than exhaustive, and so we shall miss out some simple generalisations for which essentially the same results hold. We shall instead try to make clear the conditions needed to apply these results, and direct the reader to the papers mentioned below for further examples which satisfy these conditions.
Uniformly expanding maps
Let $f : M \to M$ be a $C^2$ local diffeomorphism of a compact Riemannian manifold. We say $f$ is uniformly expanding if there exists $\lambda > 1$ such that $\|Df_xv\| \ge \lambda\|v\|$ for all $x \in M$, and all tangent vectors $v$. Such a map admits an absolutely continuous invariant probability measure $\mu$, which is unique and mixing.
Theorem 1 Let $\varphi \in L^\infty(M,\mu)$, and let $\psi : M \to \mathbb{R}$ be continuous. Upper bounds are given for $(u_n)$ as follows:
• if $\psi \in (R1)$, then $u_n = O(\theta^n)$ for some $\theta \in (0,1)$;
• if $\psi \in (R2,\gamma)$, for some $\gamma \in (0,1)$, then $u_n = O(e^{-n^{\gamma'}})$ for every $\gamma' < \gamma$;
• if $\psi \in (R3,\gamma)$, for some $\gamma > 1$, then $u_n = O(e^{-(\log n)^{\gamma'}})$ for every $\gamma' < \gamma$;
• for any constant $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (R4,\gamma)$ for some $\gamma > \zeta^{-1}$, and $R_\infty(\psi) < C_\infty$, then $u_n = O(n^{1-\zeta\gamma})$.
Furthermore, the Central Limit Theorem holds when $\psi \in (R4,\gamma)$ for sufficiently large $\gamma$, depending on $R_\infty(\psi)$.
Such maps are generally regarded as being well understood, and in particular, results of exponential decay of correlations for observables in (R1) go back to the seventies, and the work of Sinai, Ruelle and Bowen ([Si], [R], [Bo]). For a more modern perspective, see for instance the books of Baladi ([Ba]) and Viana ([V2]).
I have not seen explicit claims of similar results for observables in classes (R2 − 4). However, it is well known that any such map can be coded by a one-sided full shift on finitely many symbols, so an analogous result on shift spaces would be sufficient, and may well already exist. The estimates here are probably not sharp, particularly in the (R4) case.
The other examples we consider are not in general reducible to finite alphabet shift maps, so we can be more confident that the next set of results are new.
Maps with indifferent fixed points
These are perhaps the simplest examples of strictly non-uniformly expanding systems. Purely for simplicity, we restrict to the well known case of the Manneville-Pomeau map.
Theorem 2 Let $f : [0,1] \to [0,1]$ be the map $f(x) = x + x^{1+\alpha} \pmod 1$, for some $\alpha \in (0,1)$, and let $\nu$ be the unique acip for this system. For $\varphi, \psi : [0,1] \to \mathbb{R}$ with $\varphi$ bounded and $\psi$ continuous, for every constant $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (R4,\gamma)$ for some $\gamma > 2\zeta^{-1}$, with $R_\infty(\psi) < C_\infty$, then
• if $\gamma = \zeta^{-1}(\tau+1)$, then $u_n = O(n^{1-\tau}\log n)$;
• otherwise, $u_n = O(\max(n^{1-\tau}, n^{2-\zeta\gamma}))$;
where $\tau = \alpha^{-1}$. In particular, when $\gamma > \frac{3}{\zeta}$ the Central Limit Theorem holds.
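For intuition, the Manneville-Pomeau map of Theorem 2 is straightforward to simulate; the sketch below is ours (the parameter alpha = 0.5 and the orbit length are arbitrary choices), and simply exhibits the long laminar phases near the indifferent fixed point that drive the polynomial rate $\tau = 1/\alpha$.

```python
import numpy as np

def manneville_pomeau(x, alpha=0.5):
    """One step of f(x) = x + x**(1 + alpha) (mod 1); x = 0 is an indifferent
    fixed point since f'(0) = 1, the source of the polynomial mixing rate."""
    return (x + x ** (1.0 + alpha)) % 1.0

def orbit(x0, n, alpha=0.5):
    xs, x = np.empty(n), x0
    for i in range(n):
        xs[i] = x
        x = manneville_pomeau(x, alpha)
    return xs

xs = orbit(0.3, 10**5, alpha=0.5)
# Fraction of time spent in the "laminar" region near 0: for alpha in (0, 1)
# the acip is finite but gives large mass to neighbourhoods of the fixed point.
print((xs < 0.05).mean())
```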
In the case where $\psi \in (R4,\gamma)$ for every large $\gamma$, this gives $u_n = O(n^{1-\frac{1}{\alpha}})$, which is the bound obtained in [Yo] for $\psi \in (R1)$. We do not give separate estimates for observables in classes (R2) and (R3), as we obtain the same upper bound in each case. Note that the polynomial upper bound for (R1) observables is known to be sharp ([Hu]), and hence the above gives a sharp bound in the (R2) and (R3) cases, and for $(R4,\gamma)$ when $\gamma$ is large.

The above results apply in the more general 1-dimensional case considered in [Yo], where in particular a finite number of expanding branches are allowed, and it is assumed that $xf''(x) \approx x^\alpha$ near the indifferent fixed point.
In our remaining examples, estimates will invariably correspond to either the above form, or that of Theorem 1, and we shall simply say which is the case, specifying the parameter τ as appropriate.
One-dimensional maps with critical points
Let us consider the systems of [BLS]. These are one-dimensional multimodal maps, where there is some long-term growth of derivative along the critical orbits. Let $f : I \to I$ be a $C^3$ interval or circle map with a finite critical set $\mathcal{C}$ and no stable or neutral periodic orbit. We assume all critical points have the same critical order $l \in (1,\infty)$; this means that for each $c \in \mathcal{C}$, there is some neighbourhood in which $f$ can be written in the form
$$f(x) = \pm|\varphi(x - c)|^l + f(c)$$
for some diffeomorphism $\varphi : \mathbb{R} \to \mathbb{R}$ fixing 0, with the $\pm$ allowed to depend on the sign of $x - c$.
For $c \in \mathcal{C}$, let $D_n(c) = |(f^n)'(f(c))|$. From [BLS] we know there exists an acip $\mu$ provided
$$\sum_n D_n(c)^{-\frac{1}{2l-1}} < \infty \quad \forall c \in \mathcal{C}.$$
If $f$ is not renormalisable on the support of $\mu$, then $\mu$ is mixing.
Theorem 3 Let $\varphi \in L^\infty(I,\mu)$, and let $\psi$ be continuous.
Case 1: Suppose there exist $C > 0$, $\lambda > 1$ such that $D_n(c) \ge C\lambda^n$ for all $n \ge 1$, $c \in \mathcal{C}$. Then we have estimates for $(u_n)$ exactly as in the uniformly expanding case (Theorem 1).
Case 2: Suppose there exist $C > 0$, $\alpha > 2l-1$ such that $D_n(c) \ge Cn^\alpha$ for all $n \ge 1$, $c \in \mathcal{C}$. Then we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2) for every $\tau < \frac{\alpha-1}{l-1}$.
In particular, the Central Limit Theorem holds in either case when $\psi \in (R4,\gamma)$ for sufficiently large $\gamma$, depending on $R_\infty(\psi)$.
Again, we have restricted our attention to some particular cases; analogous results should be possible for the intermediate cases considered in [BLS]. In particular, for the class of Fibonacci maps with quadratic critical points (see [LM]) we obtain estimates as in Theorem 2 for every τ > 1.
Viana maps
Next we consider the class of Viana maps, introduced in [V1]. These are examples of non-uniformly expanding maps in more than one dimension, with sub-exponential decay of correlations for Hölder observables. They are notable for being possibly the first examples of non-uniformly expanding systems in more than one dimension which admit an acip, and also because the attractor, and many of its statistical properties, persist in a $C^3$ neighbourhood of systems.
Let $a_0$ be some real number in $(1,2)$ for which $x = 0$ is pre-periodic for the system $x \mapsto a_0 - x^2$. We define a skew product $\hat f : S^1\times\mathbb{R} \to S^1\times\mathbb{R}$ by
$$\hat f(s,x) = (ds \bmod 1,\; a_0 + \alpha\sin(2\pi s) - x^2),$$
where $d$ is an integer $\ge 16$, and $\alpha > 0$ is a constant. When $\alpha$ is sufficiently small, there is a compact interval $I \subset (-2,2)$ for which $S^1\times I$ is mapped strictly inside its own interior, and $\hat f$ admits a unique acip, which is mixing for some iterate, and has two positive Lyapunov exponents ([V1], [AV]). The same is also true for any $f$ in a sufficiently small $C^3$ neighbourhood $\mathcal{N}$ of $\hat f$.
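A direct implementation of the skew product is immediate; the sketch below is ours, and the values a0 = 1.8, alpha = 0.01, d = 16 are stand-ins only (in particular the theory requires a Misiurewicz-type $a_0$ for which 0 is pre-periodic under $x \mapsto a_0 - x^2$).

```python
import math

def viana_step(s, x, a0=1.8, alpha=0.01, d=16):
    """One iterate of the skew product (s, x) -> (d*s mod 1, a0 + alpha*sin(2*pi*s) - x**2):
    an expanding circle map driving a family of quadratic maps.
    NOTE: a0 = 1.8 is an illustrative placeholder, not a verified Misiurewicz parameter."""
    return (d * s) % 1.0, a0 + alpha * math.sin(2.0 * math.pi * s) - x * x

s, x = 0.123, 0.0
for _ in range(10**4):
    s, x = viana_step(s, x)
print(s, x)   # for small alpha the x-coordinate stays in a compact interval of (-2, 2)
```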
Let us fix some small $\alpha$, and let $\mathcal{N}$ be a sufficiently small $C^3$ neighbourhood of $\hat f$ such that for every $f \in \mathcal{N}$ the above properties hold. Choose some $f \in \mathcal{N}$; if $f$ is not mixing, we consider instead the first mixing power.
Theorem 4 For $\varphi \in L^\infty(S^1\times\mathbb{R},\nu)$, $\psi \in (R4,\gamma)$, we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2) for every $\tau > 1$.
The Central Limit Theorem holds for $\psi \in (R4,\gamma)$ when $\gamma$ is sufficiently large, depending on $R_\infty(\psi)$.
Another way of saying the above is that if $\psi \in (R4,\gamma)$, then $u_n = O(n^{2-\zeta\gamma})$, with the usual dependency of $\zeta$ on $R_\infty(\psi)$. Note that for observables in $\bigcap_{\gamma>1}(R4,\gamma)$, we get super-polynomial decay of correlations, the same estimate as we obtain for Hölder observables (though Baladi and Gouëzel have recently announced a stretched exponential bound for Hölder observables; see [BG]).

There are a number of generalisations we could consider, such as allowing $d \ge 2$ ([BST]; note they require $f$ to be $C^\infty$ close to $\hat f$), or replacing $\sin(2\pi s)$ by an arbitrary Morse function.
Non-uniformly expanding maps
Finally, we discuss probably the most general context in which our methods can currently be applied, the setting of [ALP]. In particular, this setting generalises that of Viana maps.
Let $f : M \to M$ be a transitive $C^2$ local diffeomorphism away from a singular/critical set $S$, with $M$ a compact finite-dimensional Riemannian manifold. Let Leb be a normalised Riemannian volume form on $M$, which we shall refer to as Lebesgue measure, and $d$ a Riemannian metric. We assume $f$ is non-uniformly expanding, or more precisely, there exists $\lambda > 0$ such that
$$\liminf_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\log\|Df_{f^i(x)}^{-1}\|^{-1} \ge \lambda > 0. \qquad (1)$$
For almost every $x$ in $M$, we may define
$$E(x) = \min\left\{N : \frac{1}{n}\sum_{i=0}^{n-1}\log\|Df_{f^i(x)}^{-1}\|^{-1} \ge \lambda/2,\ \forall n \ge N\right\}.$$
The decay rate of the sequence $\mathrm{Leb}\{E(x) > n\}$ may be considered to give a degree of hyperbolicity. Where $S$ is non-empty, we need the following further assumptions, firstly on the critical set. We assume $S$ is non-degenerate, that is, $m(S) = 0$, and $\exists\beta > 0$ such that $\forall x \in M\setminus S$ we have $d(x,S)^\beta \lesssim \|Df_xv\|/\|v\| \lesssim d(x,S)^{-\beta}$ $\forall v \in T_xM$, and the functions $\log|\det Df|$ and $\log\|Df^{-1}\|$ are locally Lipschitz with Lipschitz constant $d(x,S)^{-\beta}$. Now let $d_\delta(x,S) = d(x,S)$ when this is $\le\delta$, and 1 otherwise. We assume that for any $\varepsilon > 0$ there exists $\delta > 0$ such that for Lebesgue a.e. $x \in M$,
$$\limsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}-\log d_\delta(f^j(x),S) \le \varepsilon. \qquad (2)$$
We define a recurrence time
$$T(x) = \min\left\{N \ge 1 : \frac{1}{n}\sum_{j=0}^{n-1}-\log d_\delta(f^j(x),S) \le 2\varepsilon,\ \forall n \ge N\right\}.$$
Let $f$ be a map satisfying the above conditions, and for which there exists $\alpha > 1$ such that
$$\mathrm{Leb}(\{E(x) > n \text{ or } T(x) > n\}) = O(n^{-\alpha}).$$
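In one dimension, where $\|Df^{-1}\|^{-1}$ is just $|f'|$, the expansion time $E(x)$ can be read off an orbit directly. The sketch below (ours; the test map and the value of lam are illustrative assumptions) implements the definition verbatim.

```python
import math

def expansion_time(f, df, x, lam, n_max=10**4):
    """Estimate E(x): the smallest N such that (1/n) * sum_{i<n} log|f'(f^i(x))|
    is >= lam/2 for all n >= N; returns None if this cannot be certified
    within n_max steps."""
    s, sums, xi = 0.0, [], x
    for _ in range(n_max):
        s += math.log(abs(df(xi)))
        sums.append(s)
        xi = f(xi)
    for n in range(n_max, 0, -1):        # find the largest n that fails
        if sums[n - 1] / n < lam / 2.0:
            return n + 1 if n < n_max else None
    return 1

f = lambda x: (x + x ** 1.5) % 1.0       # a test map with f' >= 1 (an assumption)
df = lambda x: 1.0 + 1.5 * math.sqrt(x)
print(expansion_time(f, df, 0.3, lam=0.1))
```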
Then f admits an acip ν with respect to Lebesgue measure, and we may assume ν to be mixing by taking a suitable power of f .
Theorem 5 For $\varphi \in L^\infty(M,\nu)$, $\psi \in (R4,\gamma)$, we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2), for $\tau = \alpha$. Furthermore, when $\alpha > 2$, the Central Limit Theorem holds for $\psi \in (R4,\gamma)$ when $\gamma$ is sufficiently large for given $R_\infty(\psi)$.
Young's tower
In the previous section, we indicated the variety of systems we may consider. We shall now state the main technical result, and with it the conditions a system must satisfy in order for our result to be applicable. As verifying that a system satisfies such conditions is often considerable work, we refer the reader to those papers mentioned in each of the previous subsections for full details. The relevant setting for our arguments will be the tower object introduced by Young in [Yo], and we recap its definition. We start with a map $F^R : (\Delta_0, m_0) \to (\Delta_0, m_0)$, where $(\Delta_0, m_0)$ is a finite measure space. This shall represent the base of the tower. We assume there exists a partition (mod 0) $\mathcal{P} = \{\Delta_{0,i} : i \in \mathbb{N}\}$ of $\Delta_0$, such that $F^R|\Delta_{0,i}$ is an injection onto $\Delta_0$ for each $\Delta_{0,i}$. We require that the partition generates, i.e. that $\bigvee_{j=0}^{\infty}(F^R)^{-j}\mathcal{P}$ is the trivial partition into points. We also choose a return time function $R : \Delta_0 \to \mathbb{N}$, which must be constant on each $\Delta_{0,i}$.
We define a tower to be any map $F : (\Delta, m) \to (\Delta, m)$ determined by some $F^R$, $\mathcal{P}$, and $R$ as follows. Let $\Delta = \{(z,l) : z \in \Delta_0,\ l < R(z)\}$. For convenience let $\Delta_l$ refer to the set of points $(\cdot, l)$ in $\Delta$. This shall be thought of as the $l$th level of $\Delta$. (We shall freely confuse the zeroth level $\{(z,0) : z \in \Delta_0\} \subset \Delta$ with $\Delta_0$ itself. We shall also happily refer to points in $\Delta$ by a single letter $x$, say.) We write $\Delta_{l,i} = \{(z,l) : z \in \Delta_{0,i}\}$ for $l < R(\Delta_{0,i})$. The partition of $\Delta$ into the sets $\Delta_{l,i}$ shall be denoted by $\eta$.
The map $F$ is then defined as follows:
$$F(z,l) = \begin{cases}(z, l+1) & \text{if } l+1 < R(z)\\ (F^R(z), 0) & \text{otherwise.}\end{cases}$$
We notice that the map $F^{R(x)}(x)$ on $\Delta_0$ is identical to $F^R(x)$, justifying our choice of notation. Finally, we define a notion of separation time; for $x, y \in \Delta_0$, $s(x,y)$ is defined to be the least integer $n \ge 0$ such that $(F^R)^nx$, $(F^R)^ny$ are in different elements of $\mathcal{P}$. For $x, y \in$ some $\Delta_{l,i}$, where $x = (x_0,l)$, $y = (y_0,l)$, we set $s(x,y) := s(x_0,y_0)$; for $x, y$ in different elements of $\eta$, $s(x,y) = 0$.
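The tower dynamics is simple enough to mirror in code. The following sketch (ours; the base map, partition and return times form a toy example, not one from the paper) builds $F$ from the data $(F^R, \mathcal{P}, R)$ and tracks points as pairs $(z, l)$.

```python
def make_tower(FR, R):
    """Build the tower map F(z, l) from base data: climb the column while
    l + 1 < R(z); on reaching the top, return to the base at (FR(z), 0)."""
    def F(point):
        z, l = point
        return (z, l + 1) if l + 1 < R(z) else (FR(z), 0)
    return F

# Toy base (our assumption): Delta_0 = [0,1) with partition {[0,1/2), [1/2,1)},
# FR the doubling map (each half maps onto Delta_0), return times 1 and 3.
FR = lambda z: (2.0 * z) % 1.0
R = lambda z: 1 if z < 0.5 else 3
F = make_tower(FR, R)

p = (0.7, 0)
for _ in range(6):
    print(p)
    p = F(p)   # climbs the column of height 3 over [1/2,1), then drops to level 0
```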
We say that the Jacobian $JF^R$ of $F^R$ with respect to $m_0$ is the real-valued function such that for any measurable set $E$ on which $F^R$ is injective,
$$m_0(F^R(E)) = \int_EJF^R\,dm_0.$$
We assume $JF^R$ is uniquely defined, positive, and finite $m_0$-a.e. We require some further assumptions.
• Measure structure: Let $\mathcal{B}$ be the $\sigma$-algebra of $m_0$-measurable sets. We assume that all elements of $\mathcal{P}$ and each $\bigvee_{i=0}^{n-1}(F^R)^{-i}\mathcal{P}$ belong to $\mathcal{B}$, and that $F^R$ and $(F^R|\Delta_{0,i})^{-1}$ are measurable functions. We then extend $m_0$ to a measure $m$ on $\Delta$ as follows: for $E \subset \Delta_l$, any $l \ge 0$, we let $m(E) = m_0(F^{-l}E)$, provided that $F^{-l}E \in \mathcal{B}$. Throughout, we shall assume that any sets we choose are measurable. Also, whenever we say we are choosing an arbitrary point $x$, we shall assume it is a good point, i.e. that each element of its orbit is contained within a single element of the partition $\eta$, and that $JF^R$ is well-defined and positive at each of these points.
• Bounded distortion: There exist $C > 0$ and $\beta < 1$ such that for $x, y \in$ any $\Delta_{0,i} \in \mathcal{P}$,
$$\left|\frac{JF^R(x)}{JF^R(y)} - 1\right| \le C\beta^{s(F^Rx, F^Ry)}.$$
• Aperiodicity: We assume that $\gcd\{R(x) : x \in \Delta_0\} = 1$. This is a necessary and sufficient condition for mixing (in fact, for exactness).
• Finiteness: We assume $\int R\,dm_0 < \infty$. This tells us that $m(\Delta) < \infty$.
Let $F : (\Delta, m) \to (\Delta, m)$ be a tower, as defined above. We define classes of observable similar to those we consider on the manifold, but characterised instead in terms of the separation time $s$ on $\Delta$. Given a bounded function $\psi : \Delta \to \mathbb{R}$, we define the variation for $n \ge 0$:
$$v_n(\psi) = \sup\{|\psi(x) - \psi(y)| : s(x,y) \ge n\}.$$
Let us use this to define some regularity classes:
• Exponential case: $\psi \in (V1,\gamma)$, $\gamma \in (0,1)$, if $v_n(\psi) = O(\gamma^n)$;
• Stretched exponential case: $\psi \in (V2,\gamma)$, $\gamma \in (0,1)$, if $v_n(\psi) = O(\exp\{-n^\gamma\})$;
• Intermediate case: $\psi \in (V3,\gamma)$, $\gamma > 1$, if $v_n(\psi) = O(\exp\{-(\log n)^\gamma\})$;
• Polynomial case: $\psi \in (V4,\gamma)$, $\gamma > 1$, if $v_n(\psi) = O(n^{-\gamma})$.
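For a tower over a full-shift-like base the separation time, and hence the variations $v_n(\psi)$, are computable. The sketch below (ours; it hard-codes a doubling-map base with a two-element partition, as in the earlier toy example) gives a Monte Carlo estimate of $v_n$, which one can use to eyeball membership of the classes (V1)-(V4).

```python
import numpy as np

def separation_time(x, y, n_max=60):
    """Least n >= 0 with (F^R)^n x, (F^R)^n y in different partition elements,
    for the doubling-map base with partition {[0,1/2), [1/2,1)}."""
    for n in range(n_max):
        if (x < 0.5) != (y < 0.5):
            return n
        x, y = (2 * x) % 1.0, (2 * y) % 1.0
    return n_max

def variation(psi, n, samples=10**4, seed=0):
    """Monte Carlo lower estimate of v_n(psi) = sup{|psi(x)-psi(y)| : s(x,y) >= n}.
    For this base, s(x,y) >= n means x, y share a dyadic n-cylinder."""
    rng = np.random.default_rng(seed)
    x = rng.random(samples)
    base = np.floor(x * 2 ** n) / 2 ** n       # left endpoint of x's n-cylinder
    y = base + rng.random(samples) / 2 ** n    # second point in the same cylinder
    return np.abs(psi(x) - psi(y)).max()

psi = lambda x: np.sqrt(x)                     # Holder but not Lipschitz at 0
print([variation(psi, n) for n in (1, 5, 10)]) # decays like 2**(-n/2): class (V1)
```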
We shall see that the classes (V1-4) of regularity correspond naturally with the classes (R1-4) of regularity on the manifold respectively, under fairly weak assumptions on the relation between the system and the tower we construct for it. (We shall discuss this further in §13.) These classes are essentially those defined in [P], although there the functions are considered to be potentials rather than observables.
We now state the main technical result.
Theorem 6 Let $F : (\Delta, m) \to (\Delta, m)$ be a tower satisfying the assumptions stated above. Then $F$ admits a unique acip $\nu$, which is mixing. Furthermore, for all $\varphi, \psi \in L^\infty(\Delta, m)$,
$$\left|\int(\varphi\circ F^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right| \le \|\varphi\|_\infty\,C(\psi)\,u_n,$$
where $C(\psi) > 0$ is some constant, and $(u_n)$ is a sequence converging to zero at some rate determined by $F$ and $v_n(\psi)$. In particular:

Case 1: Suppose $m_0\{R > n\} = O(\theta^n)$, some $\theta \in (0,1)$. Then
• if $\psi \in (V1,\gamma)$ for some $\gamma \in (0,1)$, then $u_n = O(\theta^n)$ for some $\theta \in (0,1)$;
• if $\psi \in (V2,\gamma)$ for some $\gamma \in (0,1)$, then $u_n = O(e^{-n^{\gamma'}})$ for every $\gamma' < \gamma$;
• if $\psi \in (V3,\gamma)$ for some $\gamma > 1$, then $u_n = O(e^{-(\log n)^{\gamma'}})$ for every $\gamma' < \gamma$;
• for any constant $C_\infty > 0$, there exists $\zeta < 1$ such that if $\psi \in (V4,\gamma)$ for some $\gamma > \frac{1}{\zeta}$, and $v_0(\psi) < C_\infty$, then $u_n = O(n^{1-\zeta\gamma})$.

Case 2: Suppose $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$. Then for every $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (V4,\gamma)$ for some $\gamma > \frac{2}{\zeta}$, with $v_0(\psi) < C_\infty$, then
• if $\gamma = \frac{\alpha+1}{\zeta}$, $u_n = O(n^{1-\alpha}\log n)$;
• otherwise, $u_n = O\big(\max(n^{1-\alpha}, n^{2-\zeta\gamma})\big)$.
The existence of a mixing acip is proved in [Yo], as is the result in the case $\psi \in (V1)$. As a corollary of the above, we get a Central Limit Theorem in the cases where the rate of mixing is summable.

Corollary 1 Suppose $F$ satisfies the above assumptions, and $m_0\{R > n\} = O(n^{-\alpha})$, for some $\alpha > 2$. Then the Central Limit Theorem is satisfied for $\psi \in (V4,\gamma)$ when $\gamma$ is sufficiently large, depending on $F$ and $v_0(\psi)$.
In §13 we shall give the exact conditions needed on a system in order to apply the above results.
Overview of method
Our strategy in proving the above theorem is to generalise a coupling method introduced by Young in [Yo]. Our argument follows closely the line of approach of that paper, and we give an outline of the key ideas here.
First, we need to reduce the problem to one in a slightly different context. Given a system $F : (\Delta, m) \to (\Delta, m)$, we define a transfer operator $F_*$ which, for any measure $\lambda$ on $\Delta$ for which $F$ is measurable, gives a measure $F_*\lambda$ on $\Delta$ defined by
$$(F_*\lambda)(A) = \lambda(F^{-1}A)$$
whenever $A$ is a $\lambda$-measurable set. Clearly any $F$-invariant measure is a fixed point for this operator. Also, a key property of $F_*$ is that for any function $\phi : \Delta \to \mathbb{R}$,
$$\int\phi\circ F\,d\lambda = \int\phi\,d(F_*\lambda).$$
Next, we define a variation norm on $m$-absolutely continuous signed measures, that is, on the difference between any two (positive) measures which are absolutely continuous. Given two such measures $\lambda, \lambda'$, we write
$$|\lambda - \lambda'| := \int\left|\frac{d\lambda}{dm} - \frac{d\lambda'}{dm}\right|dm.$$
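In code, this norm is just the $L^1$ distance between densities; a short sketch of ours, with $m$ taken to be Lebesgue measure on $[0,1]$ purely for illustration:

```python
import numpy as np

def variation_norm(d_lambda, d_lambda2, grid):
    """|lambda - lambda'| = int |dlambda/dm - dlambda'/dm| dm, approximated
    by the trapezoidal rule on a grid (here m is Lebesgue on [0,1])."""
    return np.trapz(np.abs(d_lambda(grid) - d_lambda2(grid)), grid)

grid = np.linspace(0.0, 1.0, 10**4)
print(variation_norm(lambda x: np.ones_like(x),
                     lambda x: 2.0 * x, grid))   # = 1/2 for these two densities
```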
Now let us fix an acip $\nu$ and choose observables $\varphi \in L^\infty(\Delta,\nu)$, $\psi \in L^1(\Delta,\nu)$, with $\inf\psi > 0$, $\int\psi\,d\nu = 1$. We have
$$\int(\varphi\circ F^n)\psi\,d\nu = \int(\varphi\circ F^n)\,d(\psi\nu) = \int\varphi\,d(F^n_*(\psi\nu)),$$
where $\psi\nu$ denotes the unique measure which has density $\psi$ with respect to $\nu$. So
$$\left|\int(\varphi\circ F^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right| \le \|\varphi\|_\infty\int\left|\frac{dF^n_*(\psi\nu)}{dm} - \frac{d\nu}{dm}\right|dm = \|\varphi\|_\infty\,|F^n_*(\psi\nu) - \nu|.$$
Hence we may reduce the problem to one of estimating the rate at which certain measures converge to the invariant measure, in terms of the variation norm. In fact, it will be useful to consider the more general question of estimating $|F^n_*\lambda - F^n_*\lambda'|$ for a pair of measures $\lambda, \lambda'$ whose densities with respect to $m$ are of some given regularity. (We shall require an estimate in the case $\lambda' = \nu$ when we consider the Central Limit Theorem.)
Let us now outline the main argument. We work with two copies of the system, and the direct product $F\times F : (\Delta\times\Delta, m\times m) \to (\Delta\times\Delta, m\times m)$. Let $P_0 = \lambda\times\lambda'$, and consider it to be a measure on $\Delta\times\Delta$. If we let $\pi, \pi' : \Delta\times\Delta \to \Delta$ be the projections onto the first and second coordinates respectively, we have that
$$|F^n_*\lambda - F^n_*\lambda'| = |\pi_*(F\times F)^n_*P_0 - \pi'_*(F\times F)^n_*P_0| \le 2|(F\times F)^n_*P_0|.$$
The key difference between our method and that of [Yo] is that we introduce a sequence $(\varepsilon_n)$, which shall represent the rate at which we attempt to subtract measure from $P_0$. When the densities of $\lambda, \lambda'$ are of class (V1), $(\varepsilon_n)$ can be taken to be a small constant, and the method here reduces to that of [Yo]; however, by allowing sequences $\varepsilon_n \to 0$, we may also consider measure densities of weaker regularity.
We shall see that it is possible to define an induced map $\hat F : \Delta\times\Delta \to \Delta_0\times\Delta_0$ for which there is a partition $\hat\xi_1$ of $\Delta\times\Delta$, with every element mapping injectively onto $\Delta_0\times\Delta_0$ under $\hat F$. In fact, there is a stopping time $T_1 : \hat\xi_1 \to \mathbb{N}$ such that for each $\Gamma \in \hat\xi_1$, $\hat F|\Gamma = (F\times F)^{T_1(\Gamma)}$. If we choose some $\Gamma \in \hat\xi_1$, then $\hat F_*(P_0|\Gamma)$ is a measure on $\Delta_0\times\Delta_0$.

The density of $P_0$ with respect to $m\times m$ has essentially the same regularity as the measures $\lambda, \lambda'$, and the density of $\hat F_*(P_0|\Gamma)$ will be similar, except possibly weakened slightly by any irregularity in the map $\hat F$. (We shall see that the map $\hat F$ is not too irregular.) Let
$$c(\Gamma) = \inf_{w\in\Delta_0\times\Delta_0}\frac{d\hat F_*(P_0|\Gamma)}{d(m\times m)}(w).$$
For any $\varepsilon_1 \in [0,1]$, we may write
$$\hat F_*(P_0|\Gamma) = \varepsilon_1c(\Gamma)\,(m\times m|\Delta_0\times\Delta_0) + \hat F_*(P_1|\Gamma)$$
for some (positive) measure $P_1|\Gamma$; this is uniquely defined since $\hat F|\Gamma$ is injective. Essentially, we are subtracting some amount of mass from the measure $\hat F_*(P_0|\Gamma)$. Moreover, we are subtracting it equally from both coordinates; this means that writing $\Gamma = A\times B$ and $k = T_1(\Gamma)$, the distance between the measures $F^k_*(\lambda|A)$ and $F^k_*(\lambda'|B)$, both defined on $\Delta_0$, is unaffected. However, we also see that the remaining measure $\hat F_*(P_1|\Gamma)$ has smaller total mass, and this is an upper bound for $|F^k_*(\lambda|A) - F^k_*(\lambda'|B)|$. We fix an $\varepsilon_1$, and perform this subtraction of measure for each $\Gamma \in \hat\xi_1$, obtaining a measure $P_1$ defined on $\Delta\times\Delta$. The total mass of $P_1$ represents the difference between $F^n_*\lambda$ and $F^n_*\lambda'$ at time $n = T_1$, taking into account that $T_1$ is not constant over $\Delta\times\Delta$. Clearly, we obtain the best upper bound by taking $\varepsilon_1 = 1$; however, we shall see that it is to our advantage to choose some smaller value for $\varepsilon_1$.
We choose a sequence $(\varepsilon_n)$, and proceed inductively as follows. First, we define a sequence of partitions $\{\hat\xi_i\}$ such that $\Gamma \in \hat\xi_i$ is mapped injectively onto $\Delta_0\times\Delta_0$ under $\hat F^i$. Now given the measure $P_{i-1}$, we take an element $\Gamma \in \hat\xi_i$ and consider the measure $\hat F^i_*(P_{i-1}|\Gamma)$ on $\Delta_0\times\Delta_0$. Here, we let
$$c(\Gamma) = \inf_{w\in\Delta_0\times\Delta_0}\frac{d\hat F^i_*(P_{i-1}|\Gamma)}{d(m\times m|\Delta_0\times\Delta_0)}(w),$$
and specify $P_i|\Gamma$ by
$$\hat F^i_*(P_{i-1}|\Gamma) = \varepsilon_ic(\Gamma)\,(m\times m|\Delta_0\times\Delta_0) + \hat F^i_*(P_i|\Gamma).$$
As before, we construct a measure $P_i$, the total mass of which gives an upper bound at time $T_i$. To fully determine the sequence $\{P_i\}$, it remains to choose a sequence $(\varepsilon_i)$. Our choice relates to the regularity of the densities $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm}$. This is relevant because the method requires that the family of measure densities
$$\left\{\frac{d\hat F^i_*(P_{i-1}|\Gamma)}{d(m\times m)} : i \ge 1,\ \Gamma \in \hat\xi_i\right\}$$
has some uniform regularity. (In fact, we require that the log of each of the above densities is suitably regular.) We require this in order that at the $i$th stage of the procedure, when we subtract an $\varepsilon_i$ proportion of the minimum local density, this corresponds to a similarly large proportion of the average density. Hence, provided this regularity is maintained, the total mass of $P_{i-1}$ is decreased by a similar proportion. When we subtract a constant from a density as above, this weakens the regularity. However, at the next step of the procedure, we work with elements of the partition $\hat\xi_{i+1}$. Since these sets are smaller, we regain some regularity by working with measures on $\Delta_0\times\Delta_0$ pushed forward from such sets. That is, we expect the densities
$$\left\{\frac{d\hat F^{i+1}_*(P_i|\Gamma)}{d(m\times m)} : \Gamma \in \hat\xi_{i+1}\right\}$$
to be more regular than the densities
$$\left\{\frac{d\hat F^i_*(P_i|\Gamma)}{d(m\times m)} : \Gamma \in \hat\xi_i\right\}.$$
(This relies on the map $\hat F$ being smooth enough that another application of the operator $\hat F_*$ doesn't much affect the regularity.)
The degree of regularity we gain in this way depends on the initial regularity of $\Phi$, and hence of $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm}$, with respect to the sequence of partitions. In the usual case, where $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm} \in (V1)$, the regularity we gain each time we refine the partition is similar to the regularity we lose when we subtract a small constant proportion of the density; hence we may take every $\varepsilon_i$ to be a small constant $\varepsilon$. Where the initial regularities are not so good, we gain less regularity from refining the partition, and so we may only subtract correspondingly less measure.
For this reason, outside the (V1) case we shall require that the sequence $(\varepsilon_i)$ converges to zero at some minimum rate. However, if $(\varepsilon_i)$ decays faster than necessary, we will simply obtain a suboptimal bound. So part of the problem is to try to choose a sequence $(\varepsilon_i)$ decaying as slowly as is permissible. We shall also need to take into account the stopping time $T_1$ (which is unbounded), in order to estimate the speed of convergence in terms of the original map $F$.
Coupling
Over the next few sections, we give the proof of the main technical theorem.
Let $F : (\Delta, m) \to (\Delta, m)$ be a tower, as defined in §3. Let $I = \{\varphi : \Delta \to \mathbb{R} \mid v_n(\varphi) \to 0\}$, and let $I^+ = \{\varphi \in I : \inf\varphi > 0\}$. We shall work with probability measures whose densities with respect to $m$ belong to $I^+$. We see
$$\left|\frac{\varphi(x)}{\varphi(y)} - 1\right| = \frac{1}{\varphi(y)}|\varphi(x) - \varphi(y)| \le C_\varphi v_{s(x,y)}(\varphi), \qquad (3)$$
where $C_\varphi$ depends on $\inf\varphi$. Let $\lambda, \lambda'$ be measures with $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm} \in I^+$, and let $P = \lambda\times\lambda'$. For convenience, we shall write $v_n(\lambda) = v_n(\frac{d\lambda}{dm})$ for such measures $\lambda$, and use the two notations interchangeably. We let $C_\lambda = C_\varphi$ above, where $\varphi = \frac{d\lambda}{dm}$. We shall write $\nu$ for the unique acip for $F$, which is equivalent to $m$.
We consider the direct product $F\times F : (\Delta\times\Delta, m\times m) \to (\Delta\times\Delta, m\times m)$, and specify a return function to $\Delta_0\times\Delta_0$. We first fix $n_0 > 0$ to be some integer large enough that $m(F^{-n}\Delta_0\cap\Delta_0) \ge$ some $c > 0$ for all $n \ge n_0$. Such an integer exists since $\nu$ is mixing and equivalent to $m$. Now we let $\hat R(x)$ be the first arrival time to $\Delta_0$ (setting $\hat R|\Delta_0 \equiv 0$). We define a sequence $\{\tau_i\}$ of stopping time functions on $\Delta\times\Delta$ as follows:
$$\tau_1(x,y) = n_0 + \hat R(F^{n_0}x), \quad \tau_2(x,y) = \tau_1 + n_0 + \hat R(F^{\tau_1+n_0}y), \quad \tau_3(x,y) = \tau_2 + n_0 + \hat R(F^{\tau_2+n_0}x),$$
and so on, alternating between the two coordinates $x, y$ each time. Correspondingly, we shall define an increasing sequence $\xi_1 < \xi_2 < \dots$ of partitions of $\Delta\times\Delta$, according to each $\tau_i$. First, let $\pi, \pi'$ be the coordinate projections of $\Delta\times\Delta$ onto $\Delta$, that is, $\pi(x,y) := x$, $\pi'(x,y) := y$. At each stage we refine the partition according to one of the two coordinates, alternating between the two copies of $\Delta$. First, $\xi_1$ is given by taking the partition into rectangles $E\times\Delta$, $E \in \eta$, and refining so that $\tau_1$ is constant on each element $\Gamma \in \xi_1$, and $F^{\tau_1}|\pi(\Gamma)$ is for each $\Gamma$ an injection onto $\Delta_0$. To be precise, we write
$$\xi_1(x,y) = \left(\bigvee_{j=0}^{\tau_1(x,y)-1}F^{-j}\eta\right)(x)\times\Delta,$$
using throughout the convention that for a partition $\xi$, $\xi(x)$ denotes the element of $\xi$ containing $x$. Subsequently, we say $\xi_i$ is the refinement of $\xi_{i-1}$ such that each element of $\xi_{i-1}$ is partitioned in the first (resp. second) coordinate for $i$ odd (resp. even) so that $\tau_i$ is constant on each element $\Gamma \in \xi_i$, and $F^{\tau_i}$ maps $\pi(\Gamma)$ (resp. $\pi'(\Gamma)$) injectively onto $\Delta_0$.
We define $T$ to be the smallest $\tau_i$, $i \ge 2$, with $(F\times F)^{\tau_i}(x,y) \in \Delta_0\times\Delta_0$. This is well-defined $m$-a.e. since $\nu\times\nu$ is ergodic (in fact, mixing). Note that this is not necessarily the first return time to $\Delta_0\times\Delta_0$ for $F\times F$. We now consider the simultaneous return function $\hat F := (F\times F)^T$, and partition $\Delta\times\Delta$ into regions which $\hat F^n$ maps injectively onto $\Delta_0\times\Delta_0$.
For $i \ge 1$ we let $T_i$ be the time corresponding to the $i$th iterate of $\hat F$, i.e. $T_1 \equiv T$, and for $i \ge 2$,
$$T_i(z) = T_{i-1}(z) + T(\hat F^{i-1}z).$$
Corresponding to $\{T_i\}$ we define a sequence of partitions $\eta\times\eta \le \hat\xi_1 \le \hat\xi_2 \le \dots$ of $\Delta\times\Delta$ similarly to before, such that for each $\Gamma \in \hat\xi_n$, $T_n|\Gamma$ is constant and $\hat F^n$ maps $\Gamma$ injectively onto $\Delta_0\times\Delta_0$. It will be convenient to define a separation time $\hat s$ with respect to $\hat\xi_1$; $\hat s(w,z)$ is the smallest $n \ge 0$ such that $\hat F^nw$, $\hat F^nz$ are in different elements of $\hat\xi_1$. We notice that if $w = (x,x')$, $z = (y,y')$, then $\hat s(w,z) \le \min(s(x,y), s(x',y'))$.
Let $\varphi = \frac{d\lambda}{dm}$, $\varphi' = \frac{d\lambda'}{dm}$, and let $\Phi = \frac{dP}{d(m\times m)} = \varphi\cdot\varphi'$. We first consider the regularity of $\hat F$ and $\Phi$ with respect to the separation time $\hat s$.
Sublemma 1
1. For all $w, z \in \Delta\times\Delta$ with $\hat s(w,z) \ge n$,
$$\left|\log\frac{J\hat F^n(w)}{J\hat F^n(z)}\right| \le C_{\hat F}\,\beta^{\hat s(\hat F^nw,\,\hat F^nz)}$$
for $C_{\hat F}$ depending only on $F$.
2. For all $w, z \in \Delta\times\Delta$,
$$\left|\log\frac{\Phi(w)}{\Phi(z)}\right| \le C_\Phi\,v_{\hat s(w,z)}(\Phi),$$
where $v_n(\Phi) := \max(v_n(\varphi), v_n(\varphi'))$, and $C_\Phi = C_\varphi + C_{\varphi'}$, where $C_\varphi, C_{\varphi'}$ are the constants given in (3) above, corresponding to $\varphi, \varphi'$ respectively.
Proof: Let $w = (x,x')$, $z = (y,y')$. When $\hat s(w,z) \ge n$, there exists $k \in \mathbb{N}$ with $\hat F^n \equiv (F\times F)^k$ when restricted to the element of $\hat\xi_n$ containing $w, z$. So
$$\left|\log\frac{J\hat F^n(w)}{J\hat F^n(z)}\right| = \left|\log\frac{JF^k(x)JF^k(x')}{JF^k(y)JF^k(y')}\right| \le \left|\log\frac{JF^k(x)}{JF^k(y)}\right| + \left|\log\frac{JF^k(x')}{JF^k(y')}\right|.$$
Let $j$ be the number of times $F^i(x)$ enters $\Delta_0$, for $i = 1,\dots,k$. We have
$$\left|\log\frac{JF^k(x)}{JF^k(y)}\right| \le \sum_{i=1}^{j}C\beta^{s(F^kx,F^ky)+(j-i)} \le C'\beta^{s(F^kx,F^ky)}$$
for some $C' > 0$, and similarly for $x', y'$. So
$$\left|\log\frac{J\hat F^n(w)}{J\hat F^n(z)}\right| \le C_{\hat F}\,\beta^{\hat s(\hat F^nw,\,\hat F^nz)}$$
for some $C_{\hat F} > 0$. For the second part, we have
$$\left|\log\frac{\Phi(w)}{\Phi(z)}\right| \le \left|\log\frac{\varphi(x)}{\varphi(y)}\right| + \left|\log\frac{\varphi'(x')}{\varphi'(y')}\right| \le C_\varphi v_{s(x,y)}(\varphi) + C_{\varphi'}v_{s(x',y')}(\varphi') \le C_\Phi\,v_{\hat s(w,z)}(\Phi).$$
We now come to the core of the argument. We choose a sequence $(\varepsilon_i) < 1$, which represents the proportion of $P$ we try to subtract at each step of the construction. Let $\psi_0 \equiv \hat\psi_0 \equiv \Phi$. We proceed as follows; we push forward $\Phi$ by $\hat F$ to obtain the function $\psi_1(z) := \frac{\Phi(z)}{J\hat F(z)}$. On each element $\Gamma \in \hat\xi_1$ we subtract the constant $\varepsilon_1\inf\{\psi_1(z) : z \in \Gamma\}$ from the density $\psi_1|\Gamma$. We continue inductively, pushing forward by dividing the density by $J\hat F(\hat Fz)$ to get $\psi_2(z)$, subtracting $\varepsilon_2\inf\{\psi_2(z) : z \in \Gamma\}$ from $\psi_2|\Gamma$ for each $\Gamma \in \hat\xi_2$, and so on. That is, we define:
$$\psi_i(z) = \frac{\hat\psi_{i-1}(z)}{J\hat F(\hat F^{i-1}z)}; \qquad \varepsilon_{i,z} = \varepsilon_i\inf_{w\in\hat\xi_i(z)}\psi_i(w); \qquad \hat\psi_i(z) = \psi_i(z) - \varepsilon_{i,z}.$$
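The scheme above can be simulated on a finite caricature. In the sketch below (entirely ours: the space is a finite set with uniform reference measure, the induced map is modelled by a permutation so that the Jacobian is 1, and the blocks stand in for the elements of the partitions), each round relabels the density, subtracts $\varepsilon_i$ times the infimum on each block, and records the surviving mass, the analogue of the total mass of $P_i$.

```python
import numpy as np

def coupling_masses(perm, blocks, phi, eps, rounds):
    """Iterate the subtraction scheme on a finite caricature of the tower:
    relabel the density by a mass-preserving permutation (Jacobian = 1),
    then on each block remove eps[i] times the infimum over the block.
    The returned masses play the role of the total masses of the P_i."""
    masses = []
    for i in range(rounds):
        phi = phi[perm]                           # transfer step (mass-preserving)
        for blk in blocks:
            phi[blk] -= eps[i] * phi[blk].min()   # symmetric subtraction on the block
        masses.append(phi.sum())
    return masses

rng = np.random.default_rng(1)
N = 1024
perm = rng.permutation(N)                 # a stand-in for the induced map
blocks = np.arange(N).reshape(32, 32)     # 32 blocks standing in for partition elements
phi = 1.0 + 0.5 * rng.random(N)           # a density in I+ (infimum > 0)
eps = [0.3] * 8                           # constant eps_i: the (V1)-like regime
print(coupling_masses(perm, blocks, phi, eps, rounds=8))  # mass decays geometrically
```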
We show that under certain conditions on $(\varepsilon_i)$, the sequence $\{\hat\psi_i\}$ satisfies a uniform bound on the ratios of its values for nearby points. A similar proposition to the following was obtained simultaneously but independently by Holland ([Hol]); there, the emphasis was on the regularity of the Jacobian.
Proposition 1 Suppose $(\varepsilon'_i) \le \frac{1}{2}$ is a sequence with the property that
$$v_i(\Phi)\prod_{j=1}^{i}(1+\varepsilon'_j) \le K_0, \qquad (4)$$
$$\sum_{j=1}^{i}\left(\prod_{k=j}^{i}(1+\varepsilon'_k)\right)\beta^{i-j+1} \le K_0 \qquad (5)$$
are both satisfied for some sufficiently large constant $K_0$ allowed to depend only on $F$ and $v_0(\Phi)$. Then there exist $\hat\delta < 1$ and $\hat C > 0$ each depending only on $F$, $C_\Phi$ and $v_0(\Phi)$ such that if we choose $\varepsilon_i = \hat\delta\varepsilon'_i$ for each $i$, then for all $w, z$ with $\hat s(w,z) \ge i \ge 1$,
$$\left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le \hat C.$$
Proof: Suppose we are given such a sequence $(\varepsilon'_i)$ and assume that for each $i$ we have
$$\left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le (1+\varepsilon'_i)\left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \qquad (6)$$
for every $w, z$ with $\hat s(w,z) \ge i$. We shall see that we may achieve this by a suitable choice of $(\varepsilon_i)$. We note that when $\hat s(w,z) \ge i$,
$$\left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le \left|\log\frac{\hat\psi_{i-1}(w)}{\hat\psi_{i-1}(z)}\right| + C_{\hat F}\beta^{\hat s(w,z)-i},$$
so
$$\left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le (1+\varepsilon'_i)\left(\left|\log\frac{\hat\psi_{i-1}(w)}{\hat\psi_{i-1}(z)}\right| + C_{\hat F}\beta^{\hat s(w,z)-i}\right).$$
Applying this inductively, we obtain the estimate
$$\left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le C_\Phi\prod_{j=1}^{i}(1+\varepsilon'_j)\,v_{\hat s(w,z)}(\Phi) + (1+\varepsilon'_i)C_{\hat F}\beta^{\hat s(w,z)-i} + (1+\varepsilon'_i)(1+\varepsilon'_{i-1})C_{\hat F}\beta^{\hat s(w,z)-(i-1)} + \dots + \prod_{j=1}^{i}(1+\varepsilon'_j)\,C_{\hat F}\beta^{\hat s(w,z)-1}.$$
We see this is bounded above by the constant $(C_\Phi + \beta^{-1}C_{\hat F})K_0 =: \hat C$. So we have
$$\left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le \hat C + C_{\hat F} \quad\text{for } \hat s(w,z) \ge i.$$
It remains to examine the choice of sequence $(\varepsilon_i)$ necessary for (6) to hold. For now, let $(\varepsilon_i)$ be some sequence with $\varepsilon_i \le \varepsilon'_i$ for each $i$. Let $\Gamma = \hat\xi_i(w) = \hat\xi_i(z)$ and write $\varepsilon_{i,\Gamma} := \varepsilon_{i,w} = \varepsilon_{i,z}$. Then
$$\left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| - \left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le \left|\log\left(\frac{\psi_i(w)-\varepsilon_{i,\Gamma}}{\psi_i(z)-\varepsilon_{i,\Gamma}}\cdot\frac{\psi_i(z)}{\psi_i(w)}\right)\right| = \left|\log\left(1 + \varepsilon_{i,\Gamma}\frac{\psi_i(w)-\psi_i(z)}{(\psi_i(z)-\varepsilon_{i,\Gamma})\psi_i(w)}\right)\right| = \left|\log\left(1 + \frac{\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)} - \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)}}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}}\right)\right|.$$
We see that $0 \le \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)} \le \varepsilon_i$ for all $w \in \Gamma$, and
$$\frac{\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)} - \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)}}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}} \ge -\varepsilon_i > -\frac{1}{2},$$
so, using that $|\log(1+t)| \le C_1|t|$ for $t \ge -\frac{1}{2}$, the constant $C_1$ may be chosen so as not to depend on anything. Continuing from the estimate above,
$$\left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| - \left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le C_1\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}\left|1 - \frac{\psi_i(z)}{\psi_i(w)}\right|\frac{1}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}} \le C_1C_2\frac{\varepsilon_i}{1-\varepsilon_i}\left|\log\frac{\psi_i(w)}{\psi_i(z)}\right|,$$
where $C_2$ may be chosen independently of $i, w, z$ since $\frac{\psi_i(w)}{\psi_i(z)} \ge e^{-(\hat C+C_{\hat F})}$, provided that at each stage we choose $\varepsilon_i$ small enough that $C_1C_2\frac{\varepsilon_i}{1-\varepsilon_i} \le \varepsilon'_i$. We confirm that it is sufficient to take $\varepsilon_i = \hat\delta\varepsilon'_i$ for small enough $\hat\delta > 0$. This means
$$C_1C_2\frac{\varepsilon_i}{1-\varepsilon_i} = C_1C_2\frac{\hat\delta\varepsilon'_i}{1-\hat\delta\varepsilon'_i} < \frac{C_1C_2\hat\delta}{1-\hat\delta}\varepsilon'_i,$$
so taking $\hat\delta = \frac{1}{1+C_1C_2}$ is sufficient.
7 Choosing a sequence $(\varepsilon_i)$
Having shown that it is sufficient for our purposes for the sequence $(\varepsilon'_i)$ to satisfy conditions (4) and (5), we now consider how we might choose a sequence $(\varepsilon_i)$ which, subject to these conditions, decreases as slowly as possible. Having chosen a sequence, we shall then estimate the rate of convergence this gives us.
Lemma 1 Given a sequence $v_i(\Phi)$, there exists a sequence $(\varepsilon'_i) \le \frac{1}{2}$ satisfying (4) and (5) such that for $\varepsilon_i = \hat\delta\varepsilon'_i$, and any $K > 1$,
$$\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big) \le C\max\big(v_i(\Phi)^{\frac{\hat\delta}{K}}, \theta^i\big)$$
for some $\theta < 1$ depending only on $F$, and some $C > 0$.
Proof: We start by defining a sequence $(v^*_i) > 0$ as follows: we let $v^*_0 = v_0(\Phi)$ and $v^*_i = \max(v_i(\Phi), cv^*_{i-1})$, where $c$ is some constant such that $\exp\{-\min(\frac{1}{2}, \beta^{-1}-1)\} < c < 1$. We claim that $v^*_i = O(v_i(\Phi))$ unless $v_i(\Phi)$ decays exponentially fast, in which case $v^*_i$ decays at some (possibly slower) exponential rate. To see this, suppose otherwise, in the case where $v_i(\Phi)$ decays slower than any exponential speed. Then for large $i$ certainly $v^*_i > v_i(\Phi)$, and so $v^*_i = cv^*_{i-1}$ for large $i$, and $(v^*_i)$ decays exponentially fast. But this means $v_i(\Phi)$ decays exponentially fast, which is a contradiction.

Let us now choose $\varepsilon'_i = \log\frac{v^*_{i-1}}{v^*_i}$. (We ignore the trivial case $v_0(\Phi) = 0$.) We see that all terms are small enough that (5) is satisfied, and in particular, $\varepsilon'_i \le \frac{1}{2}$. Furthermore,
$$v_i(\Phi)\prod_{j=1}^{i}(1+\varepsilon'_j) \le v^*_i\exp\Big\{\sum_{j=1}^{i}\varepsilon'_j\Big\} = v^*_i\exp\{\log v^*_0 - \log v^*_i\} = v_0(\Phi).$$
For any $K > 1$,
$$\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big) \le \exp\Big\{-\frac{\hat\delta}{K}\sum_{j=1}^{i}\varepsilon'_j\Big\} = \exp\Big\{-\frac{\hat\delta}{K}(\log v^*_0 - \log v^*_i)\Big\} = \big(v_0(\Phi)^{-1}v^*_i\big)^{\frac{\hat\delta}{K}}.$$
If $(v^*_i)$ decays exponentially fast, we get an exponential bound. Otherwise, $(v^*_i)^{\frac{\hat\delta}{K}} = O(v_i(\Phi)^{\frac{\hat\delta}{K}})$.
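The proof is constructive, and translates directly into code. The sketch below (ours; beta and the polynomial test sequence are illustrative assumptions) builds $(v^*_i)$ and $(\varepsilon'_i)$ exactly as above and compares the resulting product with the predicted power of $v_i(\Phi)$.

```python
import math

def epsilon_sequence(v, beta=0.5):
    """Construct (eps'_i) as in Lemma 1: v*_0 = v_0, v*_i = max(v_i, c*v*_{i-1})
    with exp(-min(1/2, 1/beta - 1)) < c < 1, and eps'_i = log(v*_{i-1} / v*_i)."""
    c = math.exp(-min(0.5, 1.0 / beta - 1.0)) * 1.01   # just above the lower bound
    vstar, eps = [v[0]], []
    for vi in v[1:]:
        vstar.append(max(vi, c * vstar[-1]))
        eps.append(math.log(vstar[-2] / vstar[-1]))
    return eps

gamma, K, delta_hat = 3.0, 2.0, 0.5                     # illustrative constants
v = [(i + 1) ** (-gamma) for i in range(2000)]          # class (V4, gamma)
eps = [delta_hat * e for e in epsilon_sequence(v)]      # eps_i = delta_hat * eps'_i
prod, n = 1.0, 1000
for e in eps[:n]:
    prod *= 1.0 - e / K
print(prod, (n + 1) ** (-delta_hat / K * gamma))        # same order: zeta = delta_hat/K
```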
Convergence of measures
We introduce a sequence of measure densities $\hat\Phi_0 \equiv \Phi \ge \hat\Phi_1 \ge \hat\Phi_2 \ge \dots$ corresponding to the sequence $\{\hat\psi_i\}$ in the following way:
$$\hat\Phi_i(z) := \hat\psi_i(z)\,J\hat F^i(z).$$
Lemma 2 Given a sequence $(\varepsilon_i) = (\hat\delta\varepsilon'_i)$ satisfying the assumptions of Proposition 1, there exists $K > 1$ dependent only on $F$, $C_\Phi$ and $v_0(\Phi)$ such that for all $z \in \Delta\times\Delta$, $i \ge 1$,
$$\hat\Phi_i(z) \le \Big(1 - \frac{\varepsilon_i}{K}\Big)\hat\Phi_{i-1}(z).$$
Proof: If we fix $i \ge 1$, $\Gamma \in \hat\xi_i$, and $w, z \in \Gamma$, then by Proposition 1 we have
$$\frac{\hat\Phi_i(w)}{J\hat F^i(w)} \le \hat C_0\frac{\hat\Phi_i(z)}{J\hat F^i(z)},$$
where $\hat C_0 = e^{\hat C} > 1$. From Sublemma 1, we have
$$\frac{1}{J\hat F(\hat F^{i-1}w)} \le e^{C_{\hat F}}\frac{1}{J\hat F(\hat F^{i-1}z)},$$
so
$$\frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)} = \frac{\hat\Phi_{i-1}(w)}{J\hat F^{i-1}(w)}\cdot\frac{1}{J\hat F(\hat F^{i-1}w)} \le \hat C_0e^{C_{\hat F}}\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)}.$$
Now we obtain a relationship between $\hat\Phi_i$ and $\hat\Phi_{i-1}$ by writing
$$\hat\Phi_i(z) = (\psi_i(z) - \varepsilon_{i,z})J\hat F^i(z) = \left(\frac{\hat\psi_{i-1}(z)}{J\hat F(\hat F^{i-1}z)} - \varepsilon_i\inf_{w\in\hat\xi_i(z)}\frac{\hat\psi_{i-1}(w)}{J\hat F(\hat F^{i-1}w)}\right)J\hat F^i(z) = \left(\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)} - \varepsilon_i\inf_{w\in\hat\xi_i(z)}\frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)}\right)J\hat F^i(z).$$
So for any $z \in \Delta\times\Delta$ we have that
$$\hat\Phi_i(z) \le \left(\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)} - \frac{\varepsilon_i}{K}\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)}\right)J\hat F^i(z) = \Big(1 - \frac{\varepsilon_i}{K}\Big)\hat\Phi_{i-1}(z),$$
where $K = \hat C_0e^{C_{\hat F}}$.
The above lemma gives an estimate on the total mass of $\hat\Phi_i$ for each $i$. To obtain an estimate for the difference between $F^n_*\lambda$ and $F^n_*\lambda'$, we must use this, and also take into account the length of the simultaneous return time $T$.
Lemma 3 For all $n > 0$,
$$|F^n_*\lambda - F^n_*\lambda'| \le 2P\{T > n\} + 2\sum_{i=1}^{n}\left(\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big)\right)P\{T_i \le n < T_{i+1}\},$$
where $K$ is as in the previous lemma.
Proof: We define a sequence $\{\Phi_i\}$ of measure densities, corresponding to the measure unmatched at time $i$ with respect to $F\times F$. We shall often write $\Phi_i(m\times m)$, say, to refer to the measure which has density $\Phi_i$ with respect to $m\times m$. For $z \in \Delta\times\Delta$ we let $\Phi_n(z) = \hat\Phi_i(z)$, where $i$ is the largest integer such that $T_i(z) \le n$. Writing $\Phi = \Phi_n + \sum_{k=1}^{n}(\Phi_{k-1} - \Phi_k)$, we have
$$|F^n_*\lambda - F^n_*\lambda'| = |\pi_*(F\times F)^n_*(\Phi(m\times m)) - \pi'_*(F\times F)^n_*(\Phi(m\times m))| \le |\pi_*(F\times F)^n_*(\Phi_n(m\times m)) - \pi'_*(F\times F)^n_*(\Phi_n(m\times m))| + \sum_{k=1}^{n}\big|(\pi_* - \pi'_*)\big[(F\times F)^n_*((\Phi_{k-1} - \Phi_k)(m\times m))\big]\big|. \qquad (7)$$
The first term is clearly $\le 2\int\Phi_n\,d(m\times m)$. Our construction should ensure the remaining terms are zero, since we have arranged that the measure we subtract is symmetric in the two coordinates. To confirm this, we partition $\Delta\times\Delta$ into regions on which each $T_m$ is constant, at least while $T_m < n$.

Consider the family of sets $A_{k,i}$, $i, k \in \mathbb{N}$, where $A_{k,i} := \{z \in \Delta\times\Delta : T_i(z) = k\}$. Clearly, each $A_{k,i}$ is a union of elements of $\hat\xi_i$, and for any fixed $k$ the sets $A_{k,i}$ are pairwise disjoint. It is also clear that on any $A_{k,i}$, $\Phi_{k-1} - \Phi_k \equiv \hat\Phi_{i-1} - \hat\Phi_i$, and for any $k$, $\Phi_{k-1} \equiv \Phi_k$ on $\Delta\times\Delta - \bigcup_iA_{k,i}$. So for each $k$,
$$\pi_*(F\times F)^n_*((\Phi_{k-1} - \Phi_k)(m\times m)) = \sum_i\sum_{\Gamma\in(\hat\xi_i|A_{k,i})}F^{n-k}_*\pi_*(F\times F)^{T_i}_*((\hat\Phi_{i-1} - \hat\Phi_i)((m\times m)|\Gamma)).$$
We show that this measure is unchanged if we replace $\pi$ with $\pi'$ in the last expression. Let $E \subset \Delta$ be an arbitrary measurable set, and fix some $\Gamma \in \hat\xi_i|A_{k,i}$. Then
$$\pi_*\hat F^i_*((\hat\Phi_{i-1} - \hat\Phi_i)((m\times m)|\Gamma))(E) = \hat F^i_*\left(\varepsilon_iJ\hat F^i\inf_{w\in\Gamma}\frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)}\,((m\times m)|\Gamma)\right)(E\times\Delta) = \varepsilon_iC\big(J\hat F^i(m\times m)\big)(\hat F^{-i}(E\times\Delta)\cap\Gamma),$$
where $C$ is constant on $\Gamma$. This equals
$$\int_{\hat F^{-i}(E\times\Delta)\cap\Gamma}\varepsilon_iC\,J\hat F^i\,d(m\times m) = \varepsilon_iC\,(m\times m)(E\times\Delta).$$
Since $(m\times m)(E\times\Delta) = (m\times m)(\Delta\times E)$, the terms of the sum in (7) all have zero value, as claimed. Now
$$\int\Phi_n\,d(m\times m) = \sum_{i=0}^{\infty}\int_{\{T_i\le n<T_{i+1}\}}\Phi_n\,d(m\times m);$$
in fact, since $T_i \ge i$, all terms of the series are zero for $i > n$. For $1 \le i \le n$,
$$\int_{\{T_i\le n<T_{i+1}\}}\Phi_n = \int_{\{T_i\le n<T_{i+1}\}}\hat\Phi_i \le \int_{\{T_i\le n<T_{i+1}\}}\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big)\Phi.$$
The estimate claimed for $|F^n_*\lambda - F^n_*\lambda'|$ follows easily. Finally, we state a simple relationship between $P\{T > n\}$ and $(m\times m)\{T > n\}$. From now on we shall use the convention that $P\{\text{condition}\mid\Gamma\} := \frac{1}{P(\Gamma)}P\{x \in \Gamma : x \text{ satisfies condition}\}$.
Sublemma 2 There exists $\bar K > 0$ depending only on $C_\Phi$ and $v_0(\Phi)$ such that $\forall i \ge 1$, $\forall\Gamma \in \hat\xi_i$,
$$P\{T_{i+1} - T_i > n \mid \Gamma\} \le \bar K(m_0\times m_0)\{T > n\}.$$
The dependence of $\bar K$ on $P$ may be removed entirely if we take only $i \ge$ some $i_0(P)$.
Proof: Let $\mu = \frac{1}{P(\Gamma)}\hat F^i_*(P|\Gamma)$. We see that $P\{T_{i+1} - T_i > n \mid \Gamma\} = \mu\{T > n\}$. We prove a distortion estimate for $\frac{d\mu}{d(m_0\times m_0)}$, using the estimates of Sublemma 1. Let $w, z \in \Delta_0\times\Delta_0$ and let $w_0, z_0 \in \Gamma$ be such that $\hat F^iw_0 = w$, $\hat F^iz_0 = z$. Then
$$\left|\log\frac{\frac{d\mu}{d(m\times m)}(w)}{\frac{d\mu}{d(m\times m)}(z)}\right| \le \left|\log\frac{\Phi(w_0)}{\Phi(z_0)}\right| + \left|\log\frac{J\hat F^iz_0}{J\hat F^iw_0}\right| \le C_\Phi v_i(\Phi) + C_{\hat F}.$$
This gives
$$\frac{d\mu}{d(m\times m)} \le \frac{e^{(C_\Phi v_i(\Phi) + C_{\hat F})}}{(m_0\times m_0)(\Delta_0\times\Delta_0)},$$
and hence the result follows.
Combinatorial estimates
In Lemma 3 we have given the main estimate involving $P$, $T$, and the sequence $(\varepsilon_i)$. It remains to relate $P$ and $T$ to the sequence $m_0\{R > n\}$. Primarily, this involves estimates relating the sequences $P\{T > n\}$ and $m_0\{R > n\}$. We shall state only some key estimates of the proof, referring the reader to [Yo] for full details. Our statements differ slightly, as the estimates of [Yo] are stated in terms of $m\{\hat R > n\}$; they are easily reconciled by noting that $m\{\hat R > n\} = \sum_{i>n}m_0\{R > i\}$. (As earlier, $\hat R \ge 0$ is the first arrival time to $\Delta_0$.)
Proposition 2
1. If $m_0\{R > n\} = O(\theta^n)$ for some $0 < \theta < 1$, then $P\{T > n\} = O(\theta_1^n)$, for some $0 < \theta_1 < 1$. Also, for sufficiently small $\delta_1 > 0$, $P\{T_i \le n < T_{i+1}\} \le C\theta'^n$ for $i \le \delta_1n$, for some $0 < \theta' < 1$, $C > 0$ independent of $i$. The constants $\theta_1, \theta', \delta_1$ may all be chosen independently of $P$.
2. If $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$, then $P\{T > n\} = O(n^{1-\alpha})$.
This proposition follows from estimates involving the combinatorics of the intermediate stopping times $\{\tau_i\}$. Let us make explicit a key sublemma used in the proofs, concerning the regularity of the pushed-forward measure densities $\frac{dF^n_*\lambda}{dm}$; for the rest of the argument we refer to [Yo], as the changes are minor.

Sublemma 3 For any $k > 0$, let $\Omega \in \bigvee_{i=0}^{k-1}F^{-i}\eta$ be such that $F^k\Omega = \Delta_0$. Let $\mu = F^k_*(\lambda|\Omega)$. Then $\forall x, y \in \Delta_0$, we have
$$\left|\frac{\frac{d\mu}{dm}(x)}{\frac{d\mu}{dm}(y)} - 1\right| \le C_0$$
for some $C_0(\lambda)$, where the dependence on $\lambda$ is only on $v_0(\lambda)$ and $C_\lambda$, and may be removed entirely if we only consider $k \ge$ some $k_0(\lambda)$.
Proof: Let $\varphi = \frac{d\lambda}{dm}$, fix $x, y \in \Delta_0$, and let $x_0, y_0$ be the unique points in $\Omega$ such that $F^kx_0 = x$, $F^ky_0 = y$. We note that $\frac{d\mu}{dm}(x) = \varphi(x_0)\cdot\frac{dF^k_*(m|\Omega)}{dm}(x) = \frac{\varphi(x_0)}{JF^kx_0}$. So
$$\left|\frac{\varphi(x_0)}{JF^kx_0}\cdot\frac{JF^ky_0}{\varphi(y_0)} - 1\right| = \frac{JF^ky_0}{\varphi(y_0)}\left|\frac{\varphi(x_0)}{JF^kx_0} - \frac{\varphi(y_0)}{JF^ky_0}\right| \le \frac{JF^ky_0}{\varphi(y_0)}\left(\varphi(x_0)\left|\frac{1}{JF^kx_0} - \frac{1}{JF^ky_0}\right| + \frac{1}{JF^ky_0}|\varphi(x_0) - \varphi(y_0)|\right) \le \frac{\varphi(x_0)}{\varphi(y_0)}\left|\frac{JF^ky_0}{JF^kx_0} - 1\right| + \left|\frac{\varphi(x_0)}{\varphi(y_0)} - 1\right| \le (1 + C_\varphi v_j(\varphi))C' + C_\varphi v_j(\varphi) \le (1 + C_\varphi v_0(\varphi))C' + C_\varphi v_0(\varphi),$$
where $j$ is the number of visits to $\Delta_0$ up to time $k$. Clearly the penultimate bound can be made independent of $\lambda$ for $j \ge$ some $j_0(\lambda)$.
The following result combines the estimates above with those of the previous section.
Proposition 3
1. When $m_0\{R > n\} \le C_1\theta^n$,
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + 2\sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big)$$
for some $0 < \theta' < 1$ and sufficiently small $\delta_1$.
2. When $m_0\{R > n\} \le C_1n^{-\alpha}$, $\alpha > 1$,
$$|F^n_*\lambda - F^n_*\lambda'| \le 2C_1n^{1-\alpha} + Cn^{1-\alpha}\sum_{i=1}^{[\delta_1n]}i^\alpha\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big) + \sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big)$$
for sufficiently small $\delta_1$.
Proof: In the first case, Proposition 2 and Lemma 3 tell us that
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta_0^n + 2\sum_{i=1}^{[\delta_1n]}P\{T_i \le n < T_{i+1}\} + 2\sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big)$$
for any $0 < \delta_1 < 1$, for some $0 < \theta_0 < 1$. For sufficiently small $\delta_1$, the middle term is $\le C[\delta_1n]\theta'^n$, which decays at some exponential speed in $n$.

In the second case, for any $0 < \delta_1 < 1$ we have
$$|F^n_*\lambda - F^n_*\lambda'| \le Cn^{1-\alpha} + 2\sum_{i=1}^{[\delta_1n]}\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big)P\{T_i \le n < T_{i+1}\} + \sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big).$$
We estimate the middle term by noting that
$$P\{T_i \le n < T_{i+1}\} \le \sum_{j=0}^{i}P\left\{T_{j+1} - T_j > \frac{n}{i+1}\right\} \le \bar K(i+1)(m\times m)\left\{T > \frac{n}{i+1}\right\} \le Cn^{1-\alpha}(i+1)^\alpha$$
for some $C > 0$. For the last step, note that Proposition 2 applies to the normalisation of $(m\times m)$ to a probability measure.
Specific regularity classes
We now combine all of our intermediate estimates to obtain a rate of decay of correlations in the specific cases mentioned in Theorem 6. First, we set $\zeta = \frac{\hat\delta}{K}$, which can be seen to depend only on $F$, $C_\Phi$ and $v_0(\Phi)$. Throughout this section, we shall let $C$ denote a generic constant, allowed to depend only on $F$ and $\Phi$, which may vary between expressions.
Exponential return times
In this subsection, we suppose that $m_0\{R > n\} = O(\theta^n)$, and hence $m\{\hat R > n\} = O(\theta^n)$.
Class (V1): Suppose $v_i(\Phi) = O(\theta_1^i)$ for some $\theta_1 < 1$. By Lemma 1 we may take $(\varepsilon_i)$ such that conditions (4) and (5) are satisfied, and
$$\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big) = O(\theta_2^i)$$
for some $0 < \theta_2 < 1$. Applying Proposition 3 we have
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i>[\delta_1n]}\theta_2^i$$
for some $\theta' < 1$ and sufficiently small $\delta_1 > 0$. This gives the required exponential bound in $n$.
Class (V2): Suppose $v_i(\Phi) = O(e^{-i^\gamma})$, for some $\gamma \in (0,1)$. Then there exists $(\varepsilon_j)$ such that $\prod_{j=1}^{i}(1 - \frac{\varepsilon_j}{K}) = O(e^{-\zeta i^\gamma})$. So
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i>[\delta_1n]}e^{-\zeta i^\gamma}.$$
We see that $e^{-\zeta i^\gamma} = O(e^{-i^{\gamma'}})$ for every $0 < \gamma' < \gamma$, and it is well known that the sum is of order $e^{-n^{\gamma''}}$ for every $0 < \gamma'' < \gamma'$.
Class (V3): Suppose $v_i(\Phi) = O(e^{-(\log i)^\gamma})$ for some $\gamma > 1$. We may take $(\varepsilon_j)$ such that $\prod_{j=1}^{i}(1 - \frac{\varepsilon_j}{K}) = O(e^{-\zeta(\log i)^\gamma})$. So
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i>[\delta_1n]}e^{-\zeta(\log i)^\gamma}.$$
It is easy to show that $e^{-\zeta(\log i)^\gamma} = O\big(e^{-(\log i)^{\gamma'}}\big)$ for every $0 < \gamma' < \gamma$. So the sum is of order $O(e^{-(\log n)^{\gamma''}})$ for every $0 < \gamma'' < \gamma$.

Class (V4): Suppose $v_i(\Phi) = O(i^{-\gamma})$ for some $\gamma > \frac{1}{\zeta}$. Then we can take $(\varepsilon_j)$ such that $\prod_{j=1}^{i}(1 - \frac{\varepsilon_j}{K}) \le Ci^{-\zeta\gamma}$. So
$$|F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i=[\delta_1n]+1}^{n}i^{-\zeta\gamma} = O(n^{1-\zeta\gamma}).$$
Polynomial return times
Here we suppose $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$. Suppose $v_n(\Phi) = O(n^{-\gamma})$ for some $\gamma > \frac{2}{\zeta}$. We can take $(\varepsilon_i)$ such that
$$\prod_{j=1}^{i}\Big(1 - \frac{\varepsilon_j}{K}\Big) \le Ci^{-\zeta\gamma}.$$
By Proposition 3, for some $\delta_1$,
$$|F^n_*\lambda - F^n_*\lambda'| \le 2Cn^{1-\alpha} + Cn^{1-\alpha}\sum_{i=1}^{[\delta_1 n]} i^{\alpha-\zeta\gamma} + C\sum_{i=[\delta_1 n]+1}^{\infty} i^{-\zeta\gamma}.$$
The third term here is of order $n^{1-\zeta\gamma}$. To estimate the second term, we consider three cases.

Case 1: $\gamma > \frac{\alpha+1}{\zeta}$. Here $\alpha - \zeta\gamma < -1$, so the sum is bounded above independently of $n$, and the whole term is $O(n^{1-\alpha})$.

Case 2: $\gamma = \frac{\alpha+1}{\zeta}$. The sum is
$$\sum_{i\le[\delta_1 n]} i^{-1} \le 1 + \int_1^{[\delta_1 n]} x^{-1}\,dx = 1 + \log[\delta_1 n] = O(\log n),$$
so the whole term is $O(n^{1-\alpha}\log n)$.

Case 3: $\frac{2}{\zeta} < \gamma < \frac{\alpha+1}{\zeta}$. The sum is of order $n^{\alpha+1-\zeta\gamma}$, and so the whole term is $O(n^{2-\zeta\gamma})$.
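The three cases above are easily packaged into a helper (ours; it merely restates the case analysis) returning the predicted order of $|F^n_*\lambda - F^n_*\lambda'|$ for polynomial return times.

```python
def polynomial_rate(alpha, gamma, zeta):
    """Predicted decay for m0{R > n} = O(n**-alpha) and v_n(Phi) = O(n**-gamma),
    gamma > 2/zeta, following the three cases above."""
    assert alpha > 1 and gamma > 2.0 / zeta
    if gamma > (alpha + 1) / zeta:
        return f"O(n^{1 - alpha})"
    if gamma == (alpha + 1) / zeta:
        return f"O(n^{1 - alpha} log n)"
    return f"O(n^{2 - zeta * gamma})"

print(polynomial_rate(alpha=3.0, gamma=10.0, zeta=0.5))  # O(n^-2): the tail dominates
print(polynomial_rate(alpha=3.0, gamma=5.0, zeta=0.5))   # O(n^-0.5): regularity dominates
```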
Decay of correlations
Finally, we show how estimates for decay of correlations may be derived directly from those for the rates of convergence of measures.
Let $\varphi, \psi \in L^\infty(\Delta, m)$, as in the statement of Theorem 6. We write $\tilde\psi := b(\psi + a)$, where $a = 1 - \inf\psi$ and $b$ is such that $\int\tilde\psi\,d\nu = 1$. We notice that $b \in \big[\frac{1}{1+v_0(\psi)}, 1\big]$, and that $\inf\tilde\psi = b$, $\sup\tilde\psi \le 1 + v_0(\psi)$. Now let $\rho = \frac{d\nu}{dm}$, and let $\lambda$ be the measure on $\Delta$ with $\frac{d\lambda}{dm} = \tilde\psi\rho$. We have
$$\left|\int(\varphi\circ F^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right| = \frac{1}{b}\left|\int(\varphi\circ F^n)\tilde\psi\,d\nu - \int\varphi\,d\nu\int\tilde\psi\,d\nu\right| = \frac{1}{b}\left|\int\varphi\,dF^n_*(\tilde\psi\rho\,m) - \int\varphi\rho\,dm\right| \le \frac{1}{b}\int|\varphi|\left|\frac{dF^n_*\lambda}{dm} - \rho\right|dm \le \frac{1}{b}\|\varphi\|_\infty\,|F^n_*\lambda - \nu|.$$
It remains to check the regularity of $\tilde\psi\rho$. First,
$$|\tilde\psi(x)\rho(x) - \tilde\psi(y)\rho(y)| \le |\tilde\psi(x)|\,|\rho(x) - \rho(y)| + \rho(y)\,|\tilde\psi(x) - \tilde\psi(y)| \le \|\tilde\psi\|_\infty|\rho(x) - \rho(y)| + \|\rho\|_\infty|b|\,|\psi(x) - \psi(y)|.$$
It can be shown that $\rho$ is bounded below by some positive constant, and $v_n(\rho) \le C\beta^n$. (This is part of the statement of Theorem 1 in [Yo].) So $\tilde\psi\rho$ is bounded away from zero, and $v_n(\tilde\psi\rho) \le C\beta^n + Cv_n(\psi)$, where $C$ depends on $v_0(\psi)$.
Taking $\lambda' = \nu$, we see that $v_n(\Phi) \le C\beta^n + Cv_n(\psi)$. This shows that estimates for $|F^n_*\lambda - F^n_*\lambda'|$ carry straight over to estimates for decay of correlations. To check that the dependency of the constants is as we require, we note that we can take
$$C_\lambda = \frac{1}{\inf\frac{d\lambda}{dm}} = \frac{1}{\inf\tilde\psi\rho} \le \frac{1}{b\inf\rho} \le \frac{1+v_0(\psi)}{\inf\rho}.$$
So an upper bound for this constant is determined by $v_0(\psi)$. Clearly $C_\Phi$ depends only on $F$ and an upper bound for $v_0(\psi)$, and in particular these constants determine $\zeta = \hat\delta/K$.
Central Limit Theorem
We verify the Central Limit Theorem in each case for classes of observables which give summable decay of autocorrelations (that is, summable decay of correlations under the restriction ϕ = ψ).
A general theorem of Liverani ( [L]) reduces in this context to the following.
Theorem 7 Let $(X, \mathcal{F}, \mu)$ be a probability space, and $T : X \to X$ a (non-invertible) ergodic measure-preserving transformation. Let $\varphi \in L^\infty(X, \mu)$ be such that $\int\varphi\,d\mu = 0$. Assume
$$\sum_{n=1}^{\infty}\left|\int(\varphi\circ T^n)\varphi\,d\mu\right| < \infty, \qquad (8)$$
$$\sum_{n=1}^{\infty}(\hat T^{*n}\varphi)(x) \text{ is absolutely convergent for } \mu\text{-a.e. } x, \qquad (9)$$
where $\hat T^*$ is the dual of the operator $\hat T : \varphi \mapsto \varphi\circ T$. Then the Central Limit Theorem holds for $\varphi$ if and only if $\varphi$ is not a coboundary.

In the above, the dual operator $\hat T^*$ is the Perron-Frobenius operator corresponding to $T$ and $\mu$, that is,
$$(\hat T^*\varphi)(x) = \sum_{y\,:\,Ty = x}\frac{\varphi(y)}{JT(y)}.$$
Of course the Jacobian $JT$ here is defined in terms of the measure $\mu$. Let $\varphi : \Delta \to \mathbb{R}$ be an observable which is not a coboundary, and for which $C_n(\varphi, \varphi; \nu) = \left|\int(\varphi\circ F^n)\varphi\,d\nu - \left(\int\varphi\,d\nu\right)^2\right|$ is summable. Let $\phi = \varphi - \int\varphi\,d\nu$, so that $\int\phi\,d\nu = \int\varphi\,d\nu - \int\left(\int\varphi\,d\nu\right)d\nu = 0$.
We shall show that $\phi$ satisfies the assumptions of the theorem above. It is straightforward to check that $C_n(\varphi, \varphi; \nu) = C_n(\phi, \phi; \nu) = \left|\int(\phi\circ F^n)\phi\,d\nu\right|$. Hence condition (8) above is satisfied for $\phi$.
Since $m$ and $\nu$ are equivalent measures, it suffices to verify the condition in (9) $m$-a.e. The operator $\hat F^*$ is defined in terms of the invariant measure, so for a measure $\lambda \ll m$ it sends $\frac{d\lambda}{d\nu}$ to $\frac{dF_*\lambda}{d\nu}$. By a change of coordinates (or rather, of reference measure), we find that
\[ (\hat F^{*n}\phi)(x) = \frac{1}{\rho(x)}(P^n(\phi\rho))(x), \]
where $P$ is the Perron-Frobenius operator with respect to $m$, that is, the operator sending densities $\frac{d\lambda}{dm}$ to $\frac{dF_*\lambda}{dm}$.
We shall now write $\phi$ as the difference of the densities of two (positive) measures of similar regularity to $\phi$. We let $\hat\phi = b(\phi + a)$, for some large $a$, with $b > 0$ chosen such that $\int\hat\phi\rho\,dm = 1$. We define measures $\lambda, \lambda'$ by
\[ \frac{d\lambda}{dm} = (b\phi + \hat\phi)\rho, \qquad \frac{d\lambda'}{dm} = \hat\phi\rho. \]
It is straightforward to check this gives two probability measures, and that
\[ b^{-1}\left(\frac{d\lambda}{dm} - \frac{d\lambda'}{dm}\right) = \phi\rho. \]
As we showed in the previous section, $v_n(\phi\rho) \le C\beta^n + Cv_n(\phi)$ for some $C > 0$. Also, $b\phi + \hat\phi = b(2\phi + a)$, which is bounded below by some positive constant, provided we choose sufficiently large $a$. We easily see $v_n(\lambda), v_n(\lambda') \le Cv_n(\varphi)$. We now follow the construction of the previous sections for these given measures $\lambda, \lambda'$, and consider the sequence of densities $\Phi_n$ defined in §8. We have
\[ F^n_*\lambda - F^n_*\lambda' = \pi_*(F\times F)^n_*(\Phi_n(m\times m)) - \pi'_*(F\times F)^n_*(\Phi_n(m\times m)). \]
Let $\psi_n$ be the density of the first term with respect to $m$, and $\psi'_n$ the density of the second. Since $P$ is a linear operator, we see that
\[ |P^n(\phi\rho)| = b^{-1}\left|\frac{dF^n_*\lambda}{dm} - \frac{dF^n_*\lambda'}{dm}\right| \le b^{-1}(\psi_n + \psi'_n). \]
These densities have integral and distortion which are estimable by the construction. We know $\int\psi_n\,dm = \int\psi'_n\,dm = \int\Phi_n\,d(m\times m)$. In the cases we consider (sufficiently fast polynomial variations) this is summable in $n$; notice that we have already used this expression as a key upper bound for $\frac12|F^n_*\lambda - F^n_*\lambda'|$ (see Lemma 3). It remains to show that a similar condition holds pointwise, by showing that $\psi_n, \psi'_n$ both have bounded distortion on each $\Delta_l$, and hence $|F^n_*\lambda - F^n_*\lambda'|$ is an upper bound for $\psi_n + \psi'_n$, up to some constant. This follows non-trivially from Proposition 1, which gives a distortion bound on $\{\hat\Phi_k\}$, and hence on $\{\Phi_n\}$ when we restrict to elements of a suitable partition. The remainder of the argument is essentially no different from that given in [Yo], and we omit it here.
Applications
Having obtained estimates in the abstract framework of Young's tower, we now discuss how these results may be applied to other settings. First, we define formally what it means for a system to admit a tower.
Let $X$ be a finite dimensional compact Riemannian manifold, with Leb denoting some Riemannian volume (Lebesgue measure) on $X$. We say that a locally $C^1$ non-uniformly expanding system $f : X \to X$ admits a tower if there exist a subset $X_0 \subset X$ with $\mathrm{Leb}(X_0) > 0$, a partition (mod Leb) $\mathcal P$ of $X_0$, and a return time function $R : X_0 \to \mathbb N$, constant on each element of $\mathcal P$, such that
• for every $\omega \in \mathcal P$, $f^R|\omega$ is an injection onto $X_0$;
• $f^R$ and $(f^R|\omega)^{-1}$ are Leb-measurable functions, $\forall\omega \in \mathcal P$;
• $\bigvee_{j=0}^{\infty}(f^R)^{-j}\mathcal P$ is the trivial partition into points;
• the volume derivative $\det Df^R$ is well-defined and non-singular (i.e. $0 < |\det Df^R| < \infty$) Leb-a.e., and $\exists C > 0$, $\beta < 1$, such that $\forall\omega \in \mathcal P$, $\forall x, y \in \omega$,
\[ \left|\frac{\det Df^R(x)}{\det Df^R(y)} - 1\right| \le C\beta^{s(F^Rx, F^Ry)}, \]
where $s$ is defined in terms of $f^R$ and $\mathcal P$ as before.
We say the system admits the tower $F : (\Delta, m)$ if the base $\Delta_0 = X_0$, $m|\Delta_0 = \mathrm{Leb}|X_0$, and the tower is determined by $\Delta_0$, $R$, $F^R := f^R$ and $\mathcal P$ as in §3. It is easy to check that the usual assumptions of the tower hold, except possibly for aperiodicity and finiteness. In particular, $|\det Df^R|$ equals the Jacobian $JF^R$.
If $F : (\Delta, m)$ is a tower for $f$ as above, there exists a projection $\pi : \Delta \to X$, which we shall simply call the tower projection, which is a semi-conjugacy between $f$ and $F$; that is, for $x \in \Delta_l$ with $x = F^lx_0$ for $x_0 \in \Delta_0$, $\pi(x) := f^l(x_0)$. In all the examples we have mentioned in §2, the standard tower constructions (as given in the papers we cited there) provide us with a tower projection $\pi$ which is Hölder continuous with respect to the separation time $s$ on $\Delta$. That is, given a Riemannian metric $d$, in each case we have that $\exists\beta < 1$ such that for $x, y \in \Delta$,
\[ d(\pi(x), \pi(y)) = O(\beta^{s(x,y)}). \tag{10} \]
Note that the issue of the regularity of π is often not mentioned explicitly in the literature, but essentially follows from having good distortion control for every iterate of the map. (Formally, a tower is only required to have good distortion for the return map F R , which is not sufficient.)
Given a system $f$ which admits a tower $F : (\Delta, m)$ with projection $\pi$ satisfying (10), we show how the observable classes (R1–4) on $X$ correspond to the classes (V1–4) of observables on $\Delta$. Recall that for given $\psi$, $R_\varepsilon(\psi) := \sup\{|\psi(x) - \psi(y)| : d(x, y) \le \varepsilon\}$.
Given a regularity for ψ in terms of R ε (ψ), we estimate the regularity of ψ • π, which is an observable on ∆.
Lemma 4
• If $\psi \in (R1, \gamma)$ for some $\gamma \in (0, 1]$, then $\psi\circ\pi \in (V1)$;
• if $\psi \in (R2, \gamma)$ for some $\gamma \in (0, 1)$, then $\psi\circ\pi \in (V2, \gamma')$ for every $\gamma' < \gamma$;
• if $\psi \in (R3, \gamma)$ for some $\gamma > 1$, then $\psi\circ\pi \in (V3, \gamma')$ for every $\gamma' < \gamma$;
• if $\psi \in (R4, \gamma)$ for some $\gamma > 1$, then $\psi\circ\pi \in (V4, \gamma)$.
Proof: The computations are entirely straightforward, so we shall just make explicit the (R4) case for the purposes of illustration.
Suppose $R_\varepsilon(\psi) = O(|\log\varepsilon|^{-\gamma})$ for some $\gamma > 0$. Then, taking $n$ large as necessary,
\[ v_n(\psi\circ\pi) \le C|\log C\beta^n|^{-\gamma} = C(n\log\beta^{-1} - \log C)^{-\gamma} \le C\left(\tfrac{n}{2}\log\beta^{-1}\right)^{-\gamma} = O(n^{-\gamma}). \]
Let us point out that the condition (10) is not necessary for us to apply these methods. If we are given some weaker regularity on $\pi$, the classes (V1–4) shall simply correspond to some larger observable classes on the manifold. It remains to check that the semi-conjugacy $\pi$ preserves the statistical properties we are interested in.
Lemma 5 Let $\nu$ be the mixing acip on $\Delta$ given by Theorem 6. Given $\varphi, \psi : X \to \mathbb R$, let $\hat\varphi = \varphi\circ\pi$, $\hat\psi = \psi\circ\pi$. Then $C_n(\varphi, \psi; \pi_*\nu) = C_n(\hat\varphi, \hat\psi; \nu)$.
Lemma 6 Suppose the Central Limit Theorem holds for $(F, \nu)$ for some observable $\varphi : \Delta \to \mathbb R$. Then the Central Limit Theorem also holds for $(f, \pi_*\nu)$ for the observable $\hat\varphi = \varphi\circ\pi$.
Figure 1: A system admitting a tower via a non-Hölder semi-conjugacy.
Given $0 < a < b < 1$ and $\alpha > 1$, we define $f : [0, 1] \to [0, 1]$ by
\[ f(x) = \begin{cases} 1 - (1-b)(-\log a)^\alpha(-\log(a-x))^{-\alpha} & x \in [0, a] \\ \frac{b}{b-a}(x - a) & x \in (a, b) \\ b - \frac{b}{a}\exp\{(1-b)^{\alpha^{-1}}(\log a)(1-x)^{-\alpha^{-1}}\} & x \in [b, 1]. \end{cases} \]
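As a quick sanity check of these formulas, here is a minimal Python sketch of $f$; the parameter values $a = 0.3$, $b = 0.6$, $\alpha = 2$ are an arbitrary admissible choice, not taken from the text.

```python
import math

# A minimal sketch, assuming the hypothetical parameters a = 0.3, b = 0.6,
# alpha = 2 (any 0 < a < b < 1 and alpha > 1 would do).
a, b, alpha = 0.3, 0.6, 2.0

def f(x):
    """Piecewise map from the definition above; endpoint values are the limits."""
    if x < a:
        return 1 - (1 - b) * (-math.log(a)) ** alpha * (-math.log(a - x)) ** (-alpha)
    if x == a:
        return 1.0                      # a maps onto the critical point at 1
    if x < b:
        return b / (b - a) * (x - a)    # affine middle branch: (a, b) -> (0, b)
    if x < 1:
        return b - (b / a) * math.exp(
            (1 - b) ** (1 / alpha) * math.log(a) * (1 - x) ** (-1 / alpha))
    return b                            # f(1) = b, the limit of the third branch

# Markov structure described in the text: f(0) = b and f(b) = 0.
assert abs(f(0.0) - b) < 1e-12 and abs(f(b)) < 1e-12
```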
(See figure 1.) Note that the map has unbounded derivative near $a$, and that $a$ maps onto the critical point at $1$. It is easy to check that $f$ is monotone increasing on each interval, and that $f$ has a Markov structure on the intervals $[0, a]$, $(a, b)$ and $[b, 1]$. Taking $\Delta_0 = [0, b]$, $\mathcal P = \{[0, a], (a, b)\}$ with $R([0, a]) = 2$, $R((a, b)) = 1$, it is clear that the conditions for $f$ to admit a tower $F : \Delta$ are satisfied. For $x, y \in [0, a]$, we have that $|x - y| \approx (\frac{b}{a})^{-s(x,y)}$. If we fix $k$ and consider $|f(x) - f(y)|$ for $x, y \in [0, a)$ with $s(x, y) = k$, then for $y$ close to $a$, we have $|f(x) - f(y)| \approx (k\log(b/a) + C)^{-\alpha} \approx k^{-\alpha}$ for some $C$. This determines the regularity of the tower projection $\pi$, which is in particular not Hölder continuous. However, if we take $\psi \in (R1, \gamma)$ for some | 13,261
math0401432 | 2951956456 | We consider the general question of estimating decay of correlations for non-uniformly expanding maps, for classes of observables which are much larger than the usual class of Hölder continuous functions. Our results give new estimates for many non-uniformly expanding systems, including Manneville-Pomeau maps, many one-dimensional systems with critical points, and Viana maps. In many situations, we also obtain a Central Limit Theorem for a much larger class of observables than usual. Our main tool is an extension of the coupling method introduced by L.-S. Young for estimating rates of mixing on certain non-uniformly expanding tower maps. | Finally, we mention a result which applies directly to certain non-uniformly expanding systems, rather than to a symbolic space or tower. Pollicott and Yuri ( @cite_12 ) consider a class of maps of arbitrary dimension with a single indifferent periodic orbit and a given Markov structure, including in particular the Manneville-Pomeau interval maps. The class of observables considered is dynamically defined; each observable is required to be Lipschitz with respect to a Markov partition corresponding to some induced map, chosen to have good distortion properties. This class includes all functions which are Lipschitz with respect to the manifold, and while some estimates are weaker than comparable results for Hölder observables, bounds are obtained for some observables which cannot be dealt with at all by our methods, such as certain unbounded functions. | {
"abstract": [
"In this note we present an axiomatic approach to the decay of correlations for maps of arbitrary dimension with indifferent periodic points. As applications, we apply our results to the well-known Manneville–Pomeau equation and the inhomogeneous diophantine approximation algorithm."
],
"cite_N": [
"@cite_12"
],
"mid": [
"2389387441"
]
} | Decay of correlations for non-Hölder observables | In this paper, we are interested in mixing properties (in particular, decay of correlations) of non-uniformly expanding maps. Much progress has been made in recent years, with upper estimates being obtained for many examples of such systems. Almost invariably, these estimates are for observables which are Hölder continuous. Our aim here is to extend the study to much larger classes of observables.
Let $f : (X, \nu)$ be some mixing system. We define a correlation function
\[ C_n(\varphi, \psi; \nu) = \left|\int(\varphi\circ f^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right| \]
for $\varphi, \psi \in L^2$. The rate at which this sequence decays to zero is a measure of how quickly $\varphi\circ f^n$ becomes independent from $\psi$. It is well known that for any non-trivial mixing system, there exist $\varphi, \psi \in L^2$ for which correlations decay arbitrarily slowly. For this reason, we must restrict at least one of the observables to some smaller class of functions, in order to get an upper bound for $C_n$.
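To make the definition concrete, here is a minimal numerical sketch; everything in it is an illustrative assumption (the logistic map $x \mapsto 4x(1-x)$ stands in for $f$, its acip for $\nu$, and Birkhoff averages along a single orbit estimate the integrals).

```python
import numpy as np

# A minimal sketch, assuming the logistic map as a stand-in mixing system.
N = 10 ** 6
orbit = np.empty(N)
x = 0.1234
for i in range(N):
    orbit[i] = x
    x = 4.0 * x * (1.0 - x)            # logistic map; orbit samples the acip

phi = psi = np.cos(2 * np.pi * orbit)  # a smooth observable, phi = psi

def C(n):
    """|int (phi o f^n) psi dnu - int phi dnu int psi dnu|, estimated along the orbit."""
    return abs(np.mean(phi[n:] * psi[:-n]) - np.mean(phi) * np.mean(psi))

print([round(C(n), 5) for n in range(1, 6)])   # should decay rapidly here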
Here, we present a result which is general in the context of towers, as introduced by L.-S. Young ([Yo]). There are many examples of systems which admit such towers, and we shall see that under a fairly weak assumption on the relationship between the tower and the system (which is satisfied in all the examples we mention) we get estimates for certain classes of observables with respect to the system itself. One of the main strengths of this method is that these classes of observables may be defined purely in terms of their regularity with respect to the manifold; this contrasts with some results, where regularity is considered with respect to some Markov partition.
All of our results shall take the following form. Given a system $f : X \to X$, a mixing acip $\nu$, and $\varphi \in L^\infty(X, \nu)$, $\psi \in I$, for some class $I = (Ri, \gamma)$ as above, we obtain in each example an estimate of the form
\[ C_n(\varphi, \psi; \nu) \le \|\varphi\|_\infty C(\psi)u_n, \]
where $\|\cdot\|_\infty$ is the usual norm on $L^\infty(X, \nu)$, $C(\psi)$ is a constant depending on $f$ and $\psi$, and $(u_n)$ is some sequence decaying to zero with rate determined by $f$ and $R_\varepsilon(\psi)$. Notice that we make no assumption on the regularity of the observable $\varphi$; when discussing the regularity class of observables, we shall always be referring to the choice of the function $\psi$. (This is not atypical, although some existing results do require that both functions have some minimum regularity.)
For brevity, we shall simply give an estimate for $u_n$ in the statement of each result. For each example we also have a Central Limit Theorem for those observables which give summable decay of correlations, and are not coboundaries. We recall that a real-valued observable $\psi$ satisfies the Central Limit Theorem for $f$ if there exists $\sigma > 0$ such that for every interval $J \subset \mathbb R$,
\[ \nu\left\{x \in X : \frac{1}{\sqrt n}\sum_{j=0}^{n-1}\left(\psi(f^j(x)) - \int\psi\,d\nu\right) \in J\right\} \to \frac{1}{\sigma\sqrt{2\pi}}\int_J e^{-\frac{t^2}{2\sigma^2}}\,dt. \]
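This statement can be checked empirically in the same toy setting as above (the logistic map and observable are again stand-in assumptions).

```python
import numpy as np

# A minimal sketch of the CLT statement, under the same stand-in assumptions.
rng = np.random.default_rng(1)

def normalized_sum(n):
    x, s = rng.uniform(0.01, 0.99), 0.0
    for _ in range(n):
        s += np.cos(2 * np.pi * x)
        x = 4.0 * x * (1.0 - x)
    return s / np.sqrt(n)              # (1/sqrt(n)) * sum_{j<n} psi(f^j x)

vals = np.array([normalized_sum(1000) for _ in range(2000)])
vals -= vals.mean()                    # center by the estimated sqrt(n) * int psi dnu
print(vals.std())                      # estimate of sigma; vals should look Gaussian
```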
Note that the range of examples given in the following subsections is meant to be illustrative rather than exhaustive, and so we shall miss out some simple generalisations for which essentially the same results hold. We shall instead try to make clear the conditions needed to apply these results, and direct the reader to the papers mentioned below for further examples which satisfy these conditions.
Uniformly expanding maps
Let $f : M \to M$ be a $C^2$ local diffeomorphism of a compact Riemannian manifold. We say $f$ is uniformly expanding if there exists $\lambda > 1$ such that $\|Df_xv\| \ge \lambda\|v\|$ for all $x \in M$ and all tangent vectors $v$. Such a map admits an absolutely continuous invariant probability measure $\mu$, which is unique and mixing.
Theorem 1 Let $\varphi \in L^\infty(M, \mu)$, and let $\psi : M \to \mathbb R$ be continuous. Upper bounds are given for $(u_n)$ as follows:
• if $\psi \in (R1)$, then $u_n = O(\theta^n)$ for some $\theta \in (0, 1)$;
• if $\psi \in (R2, \gamma)$ for some $\gamma \in (0, 1)$, then $u_n = O(e^{-n^{\gamma'}})$ for every $\gamma' < \gamma$;
• if $\psi \in (R3, \gamma)$ for some $\gamma > 1$, then $u_n = O(e^{-(\log n)^{\gamma'}})$ for every $\gamma' < \gamma$;
• for any constant $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (R4, \gamma)$ for some $\gamma > \zeta^{-1}$, and $R_\infty(\psi) < C_\infty$, then $u_n = O(n^{1-\zeta\gamma})$.
Furthermore, the Central Limit Theorem holds when $\psi \in (R4, \gamma)$ for sufficiently large $\gamma$, depending on $R_\infty(\psi)$.
Such maps are generally regarded as being well understood, and in particular, results of exponential decay of correlations for observables in (R1) go back to the seventies, and the work of Sinai, Ruelle and Bowen ([Si], [R], [Bo]). For a more modern perspective, see for instance the books of Baladi ([Ba]) and Viana ([V2]).
I have not seen explicit claims of similar results for observables in classes (R2 − 4). However, it is well known that any such map can be coded by a one-sided full shift on finitely many symbols, so an analogous result on shift spaces would be sufficient, and may well already exist. The estimates here are probably not sharp, particularly in the (R4) case.
The other examples we consider are not in general reducible to finite alphabet shift maps, so we can be more confident that the next set of results are new.
Maps with indifferent fixed points
These are perhaps the simplest examples of strictly non-uniformly expanding systems. Purely for simplicity, we restrict to the well known case of the Manneville-Pomeau map.
Theorem 2 Let $f : [0, 1] \to [0, 1]$ be the map $f(x) = x + x^{1+\alpha} \pmod 1$, for some $\alpha \in (0, 1)$, and let $\nu$ be the unique acip for this system. For $\varphi, \psi : [0, 1] \to \mathbb R$ with $\varphi$ bounded and $\psi$ continuous, for every constant $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (R4, \gamma)$ for some $\gamma > 2\zeta^{-1}$, with $R_\infty(\psi) < C_\infty$, then
• if $\gamma = \zeta^{-1}(\tau + 1)$, then $u_n = O(n^{1-\tau}\log n)$;
• otherwise, $u_n = O(\max(n^{1-\tau}, n^{2-\zeta\gamma}))$;
where $\tau = \alpha^{-1}$. In particular, when $\gamma > \frac{3}{\zeta}$ the Central Limit Theorem holds.
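The theorem can be illustrated numerically; in the following minimal sketch, $\alpha = 0.5$ and the observable $\psi(x) = x - \frac12$ are arbitrary illustrative choices.

```python
import numpy as np

# A minimal sketch, assuming alpha = 0.5 and the Lipschitz (hence (R1))
# observable psi(x) = x - 1/2; the predicted envelope for (R1) is n^{1-1/alpha}.
alpha = 0.5
rng = np.random.default_rng(2)
x, N = rng.random(), 10 ** 6
orbit = np.empty(N)
for i in range(N):
    orbit[i] = x
    x = (x + x ** (1 + alpha)) % 1.0   # Manneville-Pomeau map

psi = orbit - 0.5
for n in (10, 100, 1000):
    c = np.mean(psi[n:] * psi[:-n]) - np.mean(psi) ** 2
    print(n, c, n ** (1 - 1 / alpha))  # compare the decay with the rate n^{1-1/alpha}
```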
In the case where $\psi \in (R4, \gamma)$ for every large $\gamma$, this gives $u_n = O(n^{1-\frac{1}{\alpha}})$, which is the bound obtained in [Yo] for $\psi \in (R1)$. We do not give separate estimates for observables in classes (R2) and (R3), as we obtain the same upper bound in each case. Note that the polynomial upper bound for (R1) observables is known to be sharp ([Hu]), and hence the above gives a sharp bound in the (R2) and (R3) cases, and for $(R4, \gamma)$ when $\gamma$ is large.
The above results apply in the more general 1-dimensional case considered in [Yo], where in particular a finite number of expanding branches are allowed, and it is assumed that $xf''(x) \approx x^\alpha$ near the indifferent fixed point.
In our remaining examples, estimates will invariably correspond to either the above form, or that of Theorem 1, and we shall simply say which is the case, specifying the parameter τ as appropriate.
One-dimensional maps with critical points
Let us consider the systems of [BLS]. These are one-dimensional multimodal maps, where there is some long-term growth of derivative along the critical orbits. Let $f : I \to I$ be a $C^3$ interval or circle map with a finite critical set $\mathcal C$ and no stable or neutral periodic orbit. We assume all critical points have the same critical order $l \in (1, \infty)$; this means that for each $c \in \mathcal C$, there is some neighbourhood in which $f$ can be written in the form
\[ f(x) = \pm|\varphi(x - c)|^l + f(c) \]
for some diffeomorphism $\varphi : \mathbb R \to \mathbb R$ fixing $0$, with the $\pm$ allowed to depend on the sign of $x - c$.
For $c \in \mathcal C$, let $D_n(c) = |(f^n)'(f(c))|$. From [BLS] we know there exists an acip $\mu$ provided
\[ \sum_n D_n(c)^{-\frac{1}{2l-1}} < \infty \quad \forall c \in \mathcal C. \]
If f is not renormalisable on the support of µ then µ is mixing.
Theorem 3 Let $\varphi \in L^\infty(I, \mu)$, and let $\psi$ be continuous.
Case 1: Suppose there exist $C > 0$, $\lambda > 1$ such that $D_n(c) \ge C\lambda^n$ for all $n \ge 1$, $c \in \mathcal C$. Then we have estimates for $(u_n)$ exactly as in the uniformly expanding case (Theorem 1).
Case 2: Suppose there exist $C > 0$, $\alpha > 2l - 1$ such that $D_n(c) \ge Cn^\alpha$ for all $n \ge 1$, $c \in \mathcal C$. Then we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2) for every $\tau < \frac{\alpha-1}{l-1}$. In particular, the Central Limit Theorem holds in either case when $\psi \in (R4, \gamma)$ for sufficiently large $\gamma$, depending on $R_\infty(\psi)$.
Again, we have restricted our attention to some particular cases; analogous results should be possible for the intermediate cases considered in [BLS]. In particular, for the class of Fibonacci maps with quadratic critical points (see [LM]) we obtain estimates as in Theorem 2 for every τ > 1.
Viana maps
Next we consider the class of Viana maps, introduced in [V1]. These are examples of non-uniformly expanding maps in more than one dimension, with sub-exponential decay of correlations for Hölder observables. They are notable for being possibly the first examples of non-uniformly expanding systems in more than one dimension which admit an acip, and also because the attractor, and many of its statistical properties, persist in a $C^3$ neighbourhood of systems.
Let $a_0$ be some real number in $(1, 2)$ for which $x = 0$ is pre-periodic for the system $x \mapsto a_0 - x^2$. We define a skew product $\hat f : S^1\times\mathbb R \to S^1\times\mathbb R$ by
\[ \hat f(s, x) = (ds \bmod 1,\ a_0 + \alpha\sin(2\pi s) - x^2), \]
where $d$ is an integer $\ge 16$, and $\alpha > 0$ is a constant. When $\alpha$ is sufficiently small, there is a compact interval $I \subset (-2, 2)$ for which $S^1\times I$ is mapped strictly inside its own interior, and $\hat f$ admits a unique acip, which is mixing for some iterate, and has two positive Lyapunov exponents ([V1], [AV]). The same is also true for any $f$ in a sufficiently small $C^3$ neighbourhood $\mathcal N$ of $\hat f$.
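A minimal sketch of iterating $\hat f$ follows; the values $d = 16$, $\alpha = 0.01$ and $a_0 = 1.8$ are hypothetical ($a_0$ should really be chosen so that $0$ is pre-periodic for $x \mapsto a_0 - x^2$), and the base angle is kept as an exact rational because iterating $s \mapsto 16s \bmod 1$ in floating point collapses onto $0$ after a few dozen steps.

```python
import numpy as np

# A minimal sketch with hypothetical parameters d = 16, alpha = 0.01, a_0 = 1.8.
d, alpha, a0 = 16, 0.01, 1.8
M = 10 ** 9 + 7                        # odd modulus: s = k/M stays exact under s -> d*s mod 1

k, x = 123456789, 0.456
for _ in range(10 ** 4):
    s = k / M
    x = a0 + alpha * np.sin(2 * np.pi * s) - x ** 2
    k = (d * k) % M
print(x)                               # the fibre coordinate stays in a compact
                                       # interval strictly inside (-2, 2)
```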
Let us fix some small $\alpha$, and let $\mathcal N$ be a sufficiently small $C^3$ neighbourhood of $\hat f$ such that for every $f \in \mathcal N$ the above properties hold. Choose some $f \in \mathcal N$; if $f$ is not mixing, we consider instead the first mixing power.
Theorem 4 For $\varphi \in L^\infty(S^1\times\mathbb R, \nu)$, $\psi \in (R4, \gamma)$, we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2) for every $\tau > 1$.
The Central Limit Theorem holds for $\psi \in (R4, \gamma)$ when $\gamma$ is sufficiently large, depending on $R_\infty(\psi)$.
Another way of saying the above is that if $\psi \in (R4, \gamma)$, then $u_n = O(n^{2-\zeta\gamma})$, with the usual dependency of $\zeta$ on $R_\infty(\psi)$. Note that for observables in $\bigcap_{\gamma>1}(R4, \gamma)$, we get super-polynomial decay of correlations, the same estimate as we obtain for Hölder observables (though Baladi and Gouëzel have recently announced a stretched exponential bound for Hölder observables; see [BG]).
There are a number of generalisations we could consider, such as allowing $d \ge 2$ ([BST]; note they require $f$ to be $C^\infty$ close to $\hat f$), or replacing $\sin(2\pi s)$ by an arbitrary Morse function.
Non-uniformly expanding maps
Finally, we discuss probably the most general context in which our methods can currently be applied, the setting of [ALP]. In particular, this setting generalises that of Viana maps.
Let $f : M \to M$ be a transitive $C^2$ local diffeomorphism away from a singular/critical set $\mathcal S$, with $M$ a compact finite-dimensional Riemannian manifold. Let Leb be a normalised Riemannian volume form on $M$, which we shall refer to as Lebesgue measure, and $d$ a Riemannian metric. We assume $f$ is non-uniformly expanding, or more precisely, that there exists $\lambda > 0$ such that
\[ \liminf_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\log\left\|Df_{f^i(x)}^{-1}\right\|^{-1} \ge \lambda > 0. \tag{1} \]
For almost every $x$ in $M$, we may define
\[ E(x) = \min\left\{N : \frac{1}{n}\sum_{i=0}^{n-1}\log\left\|Df_{f^i(x)}^{-1}\right\|^{-1} \ge \lambda/2,\ \forall n \ge N\right\}. \]
The decay rate of the sequence $\mathrm{Leb}\{E(x) > n\}$ may be considered to give a degree of hyperbolicity. Where $\mathcal S$ is non-empty, we need the following further assumptions, firstly on the critical set. We assume $\mathcal S$ is non-degenerate, that is, $m(\mathcal S) = 0$, and $\exists\beta > 0$ such that $\forall x \in M\setminus\mathcal S$ we have $d(x, \mathcal S)^\beta \lesssim \|Df_xv\|/\|v\| \lesssim d(x, \mathcal S)^{-\beta}$ $\forall v \in T_xM$, and the functions $\log|\det Df|$ and $\log\|Df^{-1}\|$ are locally Lipschitz with Lipschitz constant $\lesssim d(x, \mathcal S)^{-\beta}$. Now let $d_\delta(x, \mathcal S) = d(x, \mathcal S)$ when this is $\le\delta$, and $1$ otherwise. We assume that for any $\varepsilon > 0$ there exists $\delta > 0$ such that for Lebesgue-a.e. $x \in M$,
\[ \limsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}-\log d_\delta(f^j(x), \mathcal S) \le \varepsilon. \tag{2} \]
We define a recurrence time
\[ T(x) = \min\left\{N \ge 1 : \frac{1}{n}\sum_{j=0}^{n-1}-\log d_\delta(f^j(x), \mathcal S) \le 2\varepsilon,\ \forall n \ge N\right\}. \]
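For a concrete instance of these definitions, here is a minimal sketch computing $E(x)$ along an orbit; the Manneville-Pomeau map of Theorem 2 (whose singular set is empty, so $T(x)$ plays no role) stands in for $f$, and $\lambda = 0.1$ is a hypothetical value of the constant in (1).

```python
import numpy as np

# A minimal sketch, assuming the Manneville-Pomeau map (alpha = 0.5) and a
# hypothetical lambda = 0.1; in one dimension log ||Df^{-1}||^{-1} = log |f'|.
alpha, lam = 0.5, 0.1

x, n = 0.37, 10 ** 5
logs = np.empty(n)
for i in range(n):
    logs[i] = np.log(1.0 + (1.0 + alpha) * x ** alpha)   # log |f'(x)|
    x = (x + x ** (1.0 + alpha)) % 1.0

avgs = np.cumsum(logs) / np.arange(1, n + 1)   # Birkhoff averages (1/n) sum log|f'|
fails = np.flatnonzero(avgs < lam / 2)         # 0-based times where the bound fails
E = int(fails[-1]) + 2 if fails.size else 1    # E(x): one past the last failing n
print(E)
```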
Let $f$ be a map satisfying the above conditions, and for which there exists $\alpha > 1$ such that
\[ \mathrm{Leb}(\{E(x) > n \text{ or } T(x) > n\}) = O(n^{-\alpha}). \]
Then f admits an acip ν with respect to Lebesgue measure, and we may assume ν to be mixing by taking a suitable power of f .
Theorem 5 For $\varphi \in L^\infty(M, \nu)$, $\psi \in (R4, \gamma)$, we have estimates for $(u_n)$ as in the indifferent fixed point case (Theorem 2), for $\tau = \alpha$. Furthermore, when $\alpha > 2$, the Central Limit Theorem holds for $\psi \in (R4, \gamma)$ when $\gamma$ is sufficiently large for given $R_\infty(\psi)$.
Young's tower
In the previous section, we indicated the variety of systems we may consider. We shall now state the main technical result, and with it the conditions a system must satisfy in order for our result to be applicable. As verifying that a system satisfies such conditions is often considerable work, we refer the reader to those papers mentioned in each of the previous subsections for full details. The relevant setting for our arguments will be the tower object introduced by Young in [Yo], and we recap its definition. We start with a map $F^R : (\Delta_0, m_0)$, where $(\Delta_0, m_0)$ is a finite measure space. This shall represent the base of the tower. We assume there exists a partition (mod 0) $\mathcal P = \{\Delta_{0,i} : i \in \mathbb N\}$ of $\Delta_0$, such that $F^R|\Delta_{0,i}$ is an injection onto $\Delta_0$ for each $\Delta_{0,i}$. We require that the partition generates, i.e. that $\bigvee_{j=0}^{\infty}(F^R)^{-j}\mathcal P$ is the trivial partition into points. We also choose a return time function $R : \Delta_0 \to \mathbb N$, which must be constant on each $\Delta_{0,i}$.
We define a tower to be any map $F : (\Delta, m)$ determined by some $F^R$, $\mathcal P$, and $R$ as follows. Let $\Delta = \{(z, l) : z \in \Delta_0,\ l < R(z)\}$. For convenience let $\Delta_l$ refer to the set of points $(\cdot, l)$ in $\Delta$. This shall be thought of as the $l$th level of $\Delta$. (We shall freely confuse the zeroth level $\{(z, 0) : z \in \Delta_0\} \subset \Delta$ with $\Delta_0$ itself. We shall also happily refer to points in $\Delta$ by a single letter $x$, say.) We write $\Delta_{l,i} = \{(z, l) : z \in \Delta_{0,i}\}$ for $l < R(\Delta_{0,i})$. The partition of $\Delta$ into the sets $\Delta_{l,i}$ shall be denoted by $\eta$.
The map $F$ is then defined as follows:
\[ F(z, l) = \begin{cases} (z, l+1) & \text{if } l + 1 < R(z) \\ (F^R(z), 0) & \text{otherwise.} \end{cases} \]
We notice that the map $F^{R(x)}(x)$ on $\Delta_0$ is identical to $F^R(x)$, justifying our choice of notation. Finally, we define a notion of separation time; for $x, y \in \Delta_0$, $s(x, y)$ is defined to be the least integer $n \ge 0$ such that $(F^R)^nx$, $(F^R)^ny$ are in different elements of $\mathcal P$. For $x, y \in$ some $\Delta_{l,i}$, where $x = (x_0, l)$, $y = (y_0, l)$, we set $s(x, y) := s(x_0, y_0)$; for $x, y$ in different elements of $\eta$, $s(x, y) = 0$.
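The following minimal sketch (with a made-up three-element base partition and return times) may help fix the tower dynamics.

```python
# A minimal sketch, assuming a hypothetical base of three partition elements.
# A point (z, l) climbs one level per step until l + 1 = R(z), then returns
# to the base via F^R, modelled here by a fixed map on base indices.
R  = {0: 1, 1: 2, 2: 3}                # hypothetical return times on Delta_{0,i}
FR = {0: 1, 1: 2, 2: 0}                # hypothetical action of F^R on base indices

def F(z, l):
    if l + 1 < R[z]:
        return z, l + 1                # move up one level of the tower
    return FR[z], 0                    # time R(z) is up: return to Delta_0

z, l = 0, 0
for _ in range(8):
    print((z, l))
    z, l = F(z, l)                     # cycles through the levels of each column
```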
We say that the Jacobian $JF^R$ of $F^R$ with respect to $m_0$ is the real-valued function such that for any measurable set $E$ on which $F^R$ is injective,
\[ m_0(F^R(E)) = \int_E JF^R\,dm_0. \]
We assume $JF^R$ is uniquely defined, positive, and finite $m_0$-a.e. We require some further assumptions.
• Measure structure: Let $\mathcal B$ be the $\sigma$-algebra of $m_0$-measurable sets. We assume that all elements of $\mathcal P$ and each $\bigvee_{i=0}^{n-1}(F^R)^{-i}\mathcal P$ belong to $\mathcal B$, and that $F^R$ and $(F^R|\Delta_{0,i})^{-1}$ are measurable functions. We then extend $m_0$ to a measure $m$ on $\Delta$ as follows: for $E \subset \Delta_l$, any $l \ge 0$, we let $m(E) = m_0(F^{-l}E)$, provided that $F^{-l}E \in \mathcal B$. Throughout, we shall assume that any sets we choose are measurable. Also, whenever we say we are choosing an arbitrary point $x$, we shall assume it is a good point, i.e. that each element of its orbit is contained within a single element of the partition $\eta$, and that $JF^R$ is well-defined and positive at each of these points.
• Bounded distortion: There exist $C > 0$ and $\beta < 1$ such that for $x, y \in$ any $\Delta_{0,i} \in \mathcal P$,
\[ \left|\frac{JF^R(x)}{JF^R(y)} - 1\right| \le C\beta^{s(F^Rx, F^Ry)}. \]
• Aperiodicity: We assume that $\gcd\{R(x) : x \in \Delta_0\} = 1$. This is a necessary and sufficient condition for mixing (in fact, for exactness).
• Finiteness: We assume $\int R\,dm_0 < \infty$. This tells us that $m(\Delta) < \infty$.
Let $F : (\Delta, m)$ be a tower, as defined above. We define classes of observable similar to those we consider on the manifold, but characterised instead in terms of the separation time $s$ on $\Delta$. Given a bounded function $\psi : \Delta \to \mathbb R$, we define the variation for $n \ge 0$:
\[ v_n(\psi) = \sup\{|\psi(x) - \psi(y)| : s(x, y) \ge n\}. \]
Let us use this to define some regularity classes:
• Exponential case: $\psi \in (V1, \gamma)$, $\gamma \in (0, 1)$, if $v_n(\psi) = O(\gamma^n)$;
• Stretched exponential case: $\psi \in (V2, \gamma)$, $\gamma \in (0, 1)$, if $v_n(\psi) = O(\exp\{-n^\gamma\})$;
• Intermediate case: $\psi \in (V3, \gamma)$, $\gamma > 1$, if $v_n(\psi) = O(\exp\{-(\log n)^\gamma\})$;
• Polynomial case: $\psi \in (V4, \gamma)$, $\gamma > 1$, if $v_n(\psi) = O(n^{-\gamma})$.
We shall see that the classes (V1-4) of regularity correspond naturally with the classes (R1-4) of regularity on the manifold respectively, under fairly weak assumptions on the relation between the system and the tower we construct for it. (We shall discuss this further in §13.) These classes are essentially those defined in [P], although there the functions are considered to be potentials rather than observables.
We now state the main technical result.
Theorem 6 Let $F : (\Delta, m)$ be a tower satisfying the assumptions stated above. Then $F : (\Delta, m)$ admits a unique acip $\nu$, which is mixing. Furthermore, for all $\varphi, \psi \in L^\infty(\Delta, m)$,
\[ \left|\int(\varphi\circ F^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right| \le \|\varphi\|_\infty C(\psi)u_n, \]
where $C(\psi) > 0$ is some constant, and $(u_n)$ is a sequence converging to zero at some rate determined by $F$ and $v_n(\psi)$. In particular:
Case 1: Suppose $m_0\{R > n\} = O(\theta^n)$, some $\theta \in (0, 1)$. Then
• if $\psi \in (V1, \gamma)$ for some $\gamma \in (0, 1)$, then $u_n = O(\theta'^n)$ for some $\theta' \in (0, 1)$;
• if $\psi \in (V2, \gamma)$ for some $\gamma \in (0, 1)$, then $u_n = O(e^{-n^{\gamma'}})$ for every $\gamma' < \gamma$;
• if $\psi \in (V3, \gamma)$ for some $\gamma > 1$, then $u_n = O(e^{-(\log n)^{\gamma'}})$ for every $\gamma' < \gamma$;
• for any constant $C_\infty > 0$, there exists $\zeta < 1$ such that if $\psi \in (V4, \gamma)$ for some $\gamma > \frac{1}{\zeta}$, and $v_0(\psi) < C_\infty$, then $u_n = O(n^{1-\zeta\gamma})$.
Case 2: Suppose $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$. Then for every $C_\infty > 0$ there exists $\zeta < 1$ such that if $\psi \in (V4, \gamma)$ for some $\gamma > \frac{2}{\zeta}$, with $v_0(\psi) < C_\infty$, then
• if $\gamma = \frac{\alpha+1}{\zeta}$, $u_n = O(n^{1-\alpha}\log n)$;
• otherwise, $u_n = O\big(\max(n^{1-\alpha}, n^{2-\zeta\gamma})\big)$.
The existence of a mixing acip is proved in [Yo], as is the result in the case ψ ∈ (V 1). As a corollary of the above, we get a Central Limit Theorem in the cases where the rate of mixing is summable.
Corollary 1 Suppose $F$ satisfies the above assumptions, and $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 2$. Then the Central Limit Theorem is satisfied for $\psi \in (V4, \gamma)$ when $\gamma$ is sufficiently large, depending on $F$ and $v_0(\psi)$.
In §13 we shall give the exact conditions needed on a system in order to apply the above results.
Overview of method
Our strategy in proving the above theorem is to generalise a coupling method introduced by Young in [Yo]. Our argument follows closely the line of approach of that paper, and we give an outline of the key ideas here.
First, we need to reduce the problem to one in a slightly different context. Given a system $F : (\Delta, m)$, we define a transfer operator $F_*$ which, for any measure $\lambda$ on $\Delta$ for which $F$ is measurable, gives a measure $F_*\lambda$ on $\Delta$ defined by
\[ (F_*\lambda)(A) = \lambda(F^{-1}A) \]
whenever $A$ is a $\lambda$-measurable set. Clearly any $F$-invariant measure is a fixed point for this operator. Also, a key property of $F_*$ is that for any function $\phi : \Delta \to \mathbb R$,
\[ \int\phi\circ F\,d\lambda = \int\phi\,d(F_*\lambda). \]
Next, we define a variation norm on $m$-absolutely continuous signed measures, that is, on the difference between any two (positive) measures which are absolutely continuous. Given two such measures $\lambda, \lambda'$, we write
\[ |\lambda - \lambda'| := \int\left|\frac{d\lambda}{dm} - \frac{d\lambda'}{dm}\right|dm. \]
Now let us fix an acip $\nu$ and choose observables $\varphi \in L^\infty(\Delta, \nu)$, $\psi \in L^1(\Delta, \nu)$, with $\inf\psi > 0$, $\int\psi\,d\nu = 1$. We have
\[ \int(\varphi\circ F^n)\psi\,d\nu = \int(\varphi\circ F^n)\,d(\psi\nu) = \int\varphi\,d(F^n_*(\psi\nu)), \]
where $\psi\nu$ denotes the unique measure which has density $\psi$ with respect to $\nu$. So
\[ \left|\int(\varphi\circ F^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right| \le \|\varphi\|_\infty\int\left|\frac{dF^n_*(\psi\nu)}{dm} - \frac{d\nu}{dm}\right|dm = \|\varphi\|_\infty|F^n_*(\psi\nu) - \nu|. \]
Hence we may reduce the problem to one of estimating the rate at which certain measures converge to the invariant measure, in terms of the variation norm. In fact, it will be useful to consider the more general question of estimating |F n * λ − F n * λ ′ | for a pair of measures λ, λ ′ whose densities with respect to m are of some given regularity. (We shall require an estimate in the case λ ′ = ν when we consider the Central Limit Theorem.)
Let us now outline the main argument. We work with two copies of the system, and the direct product $F\times F : (\Delta\times\Delta, m\times m)$. Let $P_0 = \lambda\times\lambda'$, and consider it to be a measure on $\Delta\times\Delta$. If we let $\pi, \pi' : \Delta\times\Delta \to \Delta$ be the projections onto the first and second coordinates respectively, we have that
\[ |F^n_*\lambda - F^n_*\lambda'| = |\pi_*(F\times F)^n_*P_0 - \pi'_*(F\times F)^n_*P_0| \le 2|(F\times F)^n_*P_0|. \]
Our strategy will involve summing the differences between the two projections over small regions of the space, only comparing them at convenient times that vary with the region of space we are considering. At each of these times, we shall subtract some measure from both coordinates so that the difference is unaffected, yet the total measure of P 0 is reduced, giving an improved upper bound on the difference.
The key difference between our method and that of [Yo] is that we introduce a sequence (ε n ), which shall represent the rate at which we attempt to subtract measure from P 0 . When the densities of λ, λ ′ are of class (V 1), (ε n ) can be taken to be a small constant, and the method here reduces to that of [Yo]; however, by allowing sequences ε n → 0, we may also consider measure densities of weaker regularity.
We shall see that it is possible to define an induced map $\hat F : \Delta\times\Delta \to \Delta_0\times\Delta_0$ for which there is a partition $\hat\xi_1$ of $\Delta\times\Delta$, with every element mapping injectively onto $\Delta_0\times\Delta_0$ under $\hat F$. In fact, there is a stopping time $T_1 : \hat\xi_1 \to \mathbb N$ such that for each $\Gamma \in \hat\xi_1$, $\hat F|\Gamma = (F\times F)^{T_1(\Gamma)}$. If we choose some $\Gamma \in \hat\xi_1$, then $\hat F_*(P_0|\Gamma)$ is a measure on $\Delta_0\times\Delta_0$.
The density of $P_0$ with respect to $m\times m$ has essentially the same regularity as the measures $\lambda, \lambda'$, and the density of $\hat F_*(P_0|\Gamma)$ will be similar, except possibly weakened slightly by any irregularity in the map $\hat F$. (We shall see that the map $\hat F$ is not too irregular.) Let
\[ c(\Gamma) = \inf_{w\in\Delta_0\times\Delta_0}\frac{d\hat F_*(P_0|\Gamma)}{d(m\times m)}(w). \]
For any $\varepsilon_1 \in [0, 1]$, we may write
\[ \hat F_*(P_0|\Gamma) = \varepsilon_1c(\Gamma)(m\times m|\Delta_0\times\Delta_0) + \hat F_*(P_1|\Gamma) \]
for some (positive) measure $P_1|\Gamma$; this is uniquely defined since $\hat F|\Gamma$ is injective. Essentially, we are subtracting some amount of mass from the measure $\hat F_*(P_0|\Gamma)$. Moreover, we are subtracting it equally from both coordinates; this means that writing $\Gamma = A\times B$ and $k = T_1(\Gamma)$, the distance between the measures $F^k_*(\lambda|A)$ and $F^k_*(\lambda'|B)$, both defined on $\Delta_0$, is unaffected. However, we also see that the remaining measure $\hat F_*(P_1|\Gamma)$ has smaller total mass, and this is an upper bound for $|F^k_*(\lambda|A) - F^k_*(\lambda'|B)|$. We fix an $\varepsilon_1$, and perform this subtraction of measure for each $\Gamma \in \hat\xi_1$, obtaining a measure $P_1$ defined on $\Delta\times\Delta$. The total mass of $P_1$ represents the difference between $F^n_*\lambda$ and $F^n_*\lambda'$ at time $n = T_1$, taking into account that $T_1$ is not constant over $\Delta\times\Delta$. Clearly, we obtain the best upper bound by taking $\varepsilon_1 = 1$; however, we shall see that it is to our advantage to choose some smaller value for $\varepsilon_1$.
We choose a sequence $(\varepsilon_n)$, and proceed inductively as follows. First, we define a sequence of partitions $\{\hat\xi_i\}$ such that each $\Gamma \in \hat\xi_i$ is mapped injectively onto $\Delta_0\times\Delta_0$ under $\hat F^i$. Now given the measure $P_{i-1}$, we take an element $\Gamma \in \hat\xi_i$ and consider the measure $\hat F^i_*(P_{i-1}|\Gamma)$ on $\Delta_0\times\Delta_0$. Here, we let
\[ c(\Gamma) = \inf_{w\in\Delta_0\times\Delta_0}\frac{d\hat F^i_*(P_{i-1}|\Gamma)}{d(m\times m|\Delta_0\times\Delta_0)}(w), \]
and specify $P_i|\Gamma$ by
\[ \hat F^i_*(P_{i-1}|\Gamma) = \varepsilon_ic(\Gamma)(m\times m|\Delta_0\times\Delta_0) + \hat F^i_*(P_i|\Gamma). \]
As before, we construct a measure $P_i$, the total mass of which gives an upper bound at time $T_i$. To fully determine the sequence $\{P_i\}$, it remains to choose a sequence $(\varepsilon_i)$. Our choice relates to the regularity of the densities $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm}$. This is relevant because the method requires that the family of measure densities
\[ \left\{\frac{d\hat F^i_*(P_{i-1}|\Gamma)}{d(m\times m)} : i \ge 1,\ \Gamma \in \hat\xi_i\right\} \]
has some uniform regularity. (In fact, we require that the log of each of the above densities is suitably regular.) We require this in order that at the $i$th stage of the procedure, when we subtract an $\varepsilon_i$ proportion of the minimum local density, this corresponds to a similarly large proportion of the average density. Hence, provided this regularity is maintained, the total mass of $P_{i-1}$ is decreased by a similar proportion. When we subtract a constant from a density as above, this weakens the regularity. However, at the next step of the procedure, we work with elements of the partition $\hat\xi_{i+1}$. Since these sets are smaller, we regain some regularity by working with measures on $\Delta_0\times\Delta_0$ pushed forward from such sets. That is, we expect the densities
\[ \left\{\frac{d\hat F^{i+1}_*(P_i|\Gamma)}{d(m\times m)} : \Gamma \in \hat\xi_{i+1}\right\} \]
to be more regular than the densities
\[ \left\{\frac{d\hat F^i_*(P_i|\Gamma)}{d(m\times m)} : \Gamma \in \hat\xi_i\right\}. \]
(This relies on the map $\hat F$ being smooth enough that another application of the operator $\hat F_*$ doesn't much affect the regularity.)
The degree of regularity we gain in this way depends on the initial regularity of $\Phi$, and hence of $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm}$, with respect to the sequence of partitions. In the usual case, where $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm} \in (V1)$, the regularity we gain each time we refine the partition is similar to the regularity we lose when we subtract a small constant proportion of the density; hence we may take every $\varepsilon_i$ to be a small constant $\varepsilon$. Where the initial regularities are not so good, we gain less regularity from refining the partition, and so we may only subtract correspondingly less measure.
For this reason, outside the (V1) case we shall require that the sequence $(\varepsilon_i)$ converges to zero at some minimum rate. However, if $(\varepsilon_i)$ decays faster than necessary, we will simply obtain a suboptimal bound. So part of the problem is to try to choose a sequence $(\varepsilon_i)$ decaying as slowly as is permissible. We shall also need to take into account the stopping time $T_1$ (which is unbounded), in order to estimate the speed of convergence in terms of the original map $F$.
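The scheme can be seen in miniature in the following sketch; a two-state full shift stands in for $\hat F$ (an assumption made purely for illustration), so every "partition element" maps onto the whole base, the stopping-time bookkeeping disappears, and the remaining unmatched mass decays exactly like $\prod_j(1 - \varepsilon_j)$.

```python
import numpy as np

# A minimal sketch of the mass-subtraction scheme on a toy product system.
P = np.array([[0.5, 0.5], [0.5, 0.5]])           # transfer matrix of the toy base map

p = np.outer([0.9, 0.1], [0.2, 0.8])             # P_0 = lambda x lambda' on 2x2 states
eps = [0.5 / (i + 1) ** 0.5 for i in range(30)]  # a slowly decaying trial sequence

mass = []
for e in eps:
    p = (np.kron(P, P).T @ p.reshape(4)).reshape(2, 2)   # push P_{i-1} forward
    p = p - e * p.min()                  # subtract e * (min density) from every state
    mass.append(p.sum())                 # remaining unmatched mass: prod(1 - eps_j) here
print(np.round(mass, 4))
```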
Coupling
Over the next few sections, we give the proof of the main technical theorem.
Let $F : (\Delta, m)$ be a tower, as defined in §3. Let $I = \{\varphi : \Delta \to \mathbb R \mid v_n(\varphi) \to 0\}$, and let $I^+ = \{\varphi \in I : \inf\varphi > 0\}$. We shall work with probability measures whose densities with respect to $m$ belong to $I^+$. We see
\[ \left|\frac{\varphi(x)}{\varphi(y)} - 1\right| = \frac{1}{\varphi(y)}|\varphi(x) - \varphi(y)| \le C_\varphi v_{s(x,y)}(\varphi), \tag{3} \]
where $C_\varphi$ depends on $\inf\varphi$. Let $\lambda, \lambda'$ be measures with $\frac{d\lambda}{dm}, \frac{d\lambda'}{dm} \in I^+$, and let $P = \lambda\times\lambda'$. For convenience, we shall write $v_n(\lambda) = v_n(\frac{d\lambda}{dm})$ for such measures $\lambda$, and use the two notations interchangeably. We let $C_\lambda = C_\varphi$ above, where $\varphi = \frac{d\lambda}{dm}$. We shall write $\nu$ for the unique acip for $F$, which is equivalent to $m$.
We consider the direct product $F\times F : (\Delta\times\Delta, m\times m)$, and specify a return function to $\Delta_0\times\Delta_0$. We first fix $n_0 > 0$ to be some integer large enough that $m(F^{-n}\Delta_0\cap\Delta_0) \ge$ some $c > 0$ for all $n \ge n_0$. Such an integer exists since $\nu$ is mixing and equivalent to $m$. Now we let $\hat R(x)$ be the first arrival time to $\Delta_0$ (setting $\hat R|\Delta_0 \equiv 0$). We define a sequence $\{\tau_i\}$ of stopping time functions on $\Delta\times\Delta$ as follows:
\[ \tau_1(x, y) = n_0 + \hat R(F^{n_0}x), \quad \tau_2(x, y) = \tau_1 + n_0 + \hat R(F^{\tau_1+n_0}y), \quad \tau_3(x, y) = \tau_2 + n_0 + \hat R(F^{\tau_2+n_0}x), \]
and so on, alternating between the two coordinates $x, y$ each time. Correspondingly, we shall define an increasing sequence $\xi_1 < \xi_2 < \ldots$ of partitions of $\Delta\times\Delta$, according to each $\tau_i$. First, let $\pi, \pi'$ be the coordinate projections of $\Delta\times\Delta$ onto $\Delta$, that is, $\pi(x, y) := x$, $\pi'(x, y) := y$. At each stage we refine the partition according to one of the two coordinates, alternating between the two copies of $\Delta$. First, $\xi_1$ is given by taking the partition into rectangles $E\times\Delta$, $E \in \eta$, and refining so that $\tau_1$ is constant on each element $\Gamma \in \xi_1$, and $F^{\tau_1}|\pi(\Gamma)$ is for each $\Gamma$ an injection onto $\Delta_0$. To be precise, we write
\[ \xi_1(x, y) = \left(\bigvee_{j=0}^{\tau_1(x,y)-1}F^{-j}\eta\right)(x)\times\Delta, \]
using throughout the convention that for a partition $\xi$, $\xi(x)$ denotes the element of $\xi$ containing $x$. Subsequently, we say $\xi_i$ is the refinement of $\xi_{i-1}$ such that each element of $\xi_{i-1}$ is partitioned in the first (resp. second) coordinate for $i$ odd (resp. even) so that $\tau_i$ is constant on each element $\Gamma \in \xi_i$, and $F^{\tau_i}$ maps $\pi(\Gamma)$ (resp. $\pi'(\Gamma)$) injectively onto $\Delta_0$.
We define $T$ to be the smallest $\tau_i$, $i \ge 2$, with $(F\times F)^{\tau_i}(x, y) \in \Delta_0\times\Delta_0$. This is well-defined $m$-a.e. since $\nu\times\nu$ is ergodic (in fact, mixing). Note that this is not necessarily the first return time to $\Delta_0\times\Delta_0$ for $F\times F$. We now consider the simultaneous return function $\hat F := (F\times F)^T$, and partition $\Delta\times\Delta$ into regions which $\hat F^n$ maps injectively onto $\Delta_0\times\Delta_0$.
For $i \ge 1$ we let $T_i$ be the time corresponding to the $i$th iterate of $\hat F$, i.e. $T_1 \equiv T$, and for $i \ge 2$,
\[ T_i(z) = T_{i-1}(z) + T(\hat F^{i-1}z). \]
Corresponding to $\{T_i\}$ we define a sequence of partitions $\eta\times\eta \le \hat\xi_1 \le \hat\xi_2 \le \ldots$ of $\Delta\times\Delta$ similarly to before, such that for each $\Gamma \in \hat\xi_n$, $T_n|\Gamma$ is constant and $\hat F^n$ maps $\Gamma$ injectively onto $\Delta_0\times\Delta_0$. It will be convenient to define a separation time $\hat s$ with respect to $\hat\xi_1$; $\hat s(w, z)$ is the smallest $n \ge 0$ such that $\hat F^nw$, $\hat F^nz$ are in different elements of $\hat\xi_1$. We notice that if $w = (x, x')$, $z = (y, y')$, then $\hat s(w, z) \le \min(s(x, y), s(x', y'))$.
Let $\varphi = \frac{d\lambda}{dm}$, $\varphi' = \frac{d\lambda'}{dm}$, and let $\Phi = \frac{dP}{d(m\times m)} = \varphi\cdot\varphi'$. We first consider the regularity of $\hat F$ and $\Phi$ with respect to the separation time $\hat s$.
Sublemma 1
1. For all $w, z \in \Delta\times\Delta$ with $\hat s(w, z) \ge n$,
\[ \left|\log\frac{J\hat F^n(w)}{J\hat F^n(z)}\right| \le C_{\hat F}\beta^{\hat s(\hat F^nw, \hat F^nz)} \]
for $C_{\hat F}$ depending only on $F$.
2. For all $w, z \in \Delta\times\Delta$,
\[ \left|\log\frac{\Phi(w)}{\Phi(z)}\right| \le C_\Phi v_{\hat s(w,z)}(\Phi), \]
where $v_n(\Phi) := \max(v_n(\varphi), v_n(\varphi'))$, and $C_\Phi = C_\varphi + C_{\varphi'}$, where $C_\varphi, C_{\varphi'}$ are the constants given in (3) above, corresponding to $\varphi, \varphi'$ respectively.
Proof: Let $w = (x, x')$, $z = (y, y')$. When $\hat s(w, z) \ge n$, there exists $k \in \mathbb N$ with $\hat F^n \equiv (F\times F)^k$ when restricted to the element of $\hat\xi_n$ containing $w, z$. So
\[ \left|\log\frac{J\hat F^n(w)}{J\hat F^n(z)}\right| = \left|\log\frac{JF^k(x)JF^k(x')}{JF^k(y)JF^k(y')}\right| \le \left|\log\frac{JF^k(x)}{JF^k(y)}\right| + \left|\log\frac{JF^k(x')}{JF^k(y')}\right|. \]
Let $j$ be the number of times $F^i(x)$ enters $\Delta_0$, for $i = 1, \ldots, k$. We have
\[ \left|\log\frac{JF^k(x)}{JF^k(y)}\right| \le \sum_{i=1}^{j}C\beta^{s(F^kx, F^ky)+(j-i)} \le C'\beta^{s(F^kx, F^ky)} \]
for some $C' > 0$, and similarly for $x', y'$. So
\[ \left|\log\frac{J\hat F^n(w)}{J\hat F^n(z)}\right| \le C_{\hat F}\beta^{\hat s(\hat F^nw, \hat F^nz)} \]
for some $C_{\hat F} > 0$. For the second part, we have
\[ \left|\log\frac{\Phi(w)}{\Phi(z)}\right| \le \left|\log\frac{\varphi(x)}{\varphi(y)}\right| + \left|\log\frac{\varphi'(x')}{\varphi'(y')}\right| \le C_\varphi v_{s(x,y)}(\varphi) + C_{\varphi'}v_{s(x',y')}(\varphi') \le C_\Phi v_{\hat s(w,z)}(\Phi). \]
We now come to the core of the argument. We choose a sequence $(\varepsilon_i) < 1$, which represents the proportion of $P$ we try to subtract at each step of the construction. Let $\psi_0 \equiv \hat\psi_0 \equiv \Phi$. We proceed as follows; we push forward $\Phi$ by $\hat F$ to obtain the function $\psi_1(z) := \frac{\Phi(z)}{J\hat F(z)}$. On each element $\Gamma \in \hat\xi_1$ we subtract the constant $\varepsilon_1\inf\{\psi_1(z) : z \in \Gamma\}$ from the density $\psi_1|\Gamma$. We continue inductively, pushing forward by dividing the density by $J\hat F(\hat Fz)$ to get $\psi_2(z)$, subtracting $\varepsilon_2\inf\{\psi_2(z) : z \in \Gamma\}$ from $\psi_2|\Gamma$ for each $\Gamma \in \hat\xi_2$, and so on. That is, we define:
\[ \psi_i(z) = \frac{\hat\psi_{i-1}(z)}{J\hat F(\hat F^{i-1}z)}; \qquad \varepsilon_{i,z} = \varepsilon_i\inf_{w\in\hat\xi_i(z)}\psi_i(w); \qquad \hat\psi_i(z) = \psi_i(z) - \varepsilon_{i,z}. \]
We show that under certain conditions on (ε i ), the sequence {ψ i } satisfies a uniform bound on the ratios of its values for nearby points. A similar proposition to the following was obtained simultaneously but independently by Holland ([Hol]); there, the emphasis was on the regularity of the Jacobian.
Proposition 1 Suppose $(\varepsilon_i') \le \frac12$ is a sequence with the property that
\[ v_i(\Phi)\prod_{j=1}^{i}(1 + \varepsilon_j') \le K_0, \tag{4} \]
\[ \sum_{j=1}^{i}\left(\prod_{k=j}^{i}(1 + \varepsilon_k')\right)\beta^{i-j+1} \le K_0 \tag{5} \]
are both satisfied for some sufficiently large constant $K_0$ allowed to depend only on $F$ and $v_0(\Phi)$. Then there exist $\hat\delta < 1$ and $\hat C > 0$, each depending only on $F$, $C_\Phi$ and $v_0(\Phi)$, such that if we choose $\varepsilon_i = \hat\delta\varepsilon_i'$ for each $i$, then for all $w, z$ with $\hat s(w, z) \ge i \ge 1$,
\[ \left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le \hat C. \]
Proof: Suppose we are given such a sequence $(\varepsilon_i')$ and assume that for each $i$ we have
\[ \left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le (1 + \varepsilon_i')\left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \tag{6} \]
for every $w, z$ with $\hat s(w, z) \ge i$. We shall see that we may achieve this by a suitable choice of $(\varepsilon_i)$. We note that when $\hat s(w, z) \ge i$,
\[ \left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le \left|\log\frac{\hat\psi_{i-1}(w)}{\hat\psi_{i-1}(z)}\right| + C_{\hat F}\beta^{\hat s(w,z)-i}, \]
so
\[ \left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le (1 + \varepsilon_i')\left(\left|\log\frac{\hat\psi_{i-1}(w)}{\hat\psi_{i-1}(z)}\right| + C_{\hat F}\beta^{\hat s(w,z)-i}\right). \]
Applying this inductively, we obtain the estimate
\[ \left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| \le C_\Phi\prod_{j=1}^{i}(1 + \varepsilon_j')\,v_{\hat s(w,z)}(\Phi) + (1 + \varepsilon_i')C_{\hat F}\beta^{\hat s(w,z)-i} + (1 + \varepsilon_i')(1 + \varepsilon_{i-1}')C_{\hat F}\beta^{\hat s(w,z)-(i-1)} + \ldots + \prod_{j=1}^{i}(1 + \varepsilon_j')\,C_{\hat F}\beta^{\hat s(w,z)-1}. \]
We see this is bounded above by the constant $(C_\Phi + \beta^{-1}C_{\hat F})K_0 =: \hat C$. So we have
\[ \left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le \hat C + C_{\hat F} \quad \text{for } \hat s(w, z) \ge i. \]
It remains to examine the choice of sequence $(\varepsilon_i)$ necessary for (6) to hold. For now, let $(\varepsilon_i)$ be some sequence with $\varepsilon_i \le \varepsilon_i'$ for each $i$. Let $\Gamma = \hat\xi_i(w) = \hat\xi_i(z)$ and write $\varepsilon_{i,\Gamma} := \varepsilon_{i,w} = \varepsilon_{i,z}$. Then
\[ \left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| - \left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le \left|\log\left(\frac{\psi_i(w) - \varepsilon_{i,\Gamma}}{\psi_i(z) - \varepsilon_{i,\Gamma}}\cdot\frac{\psi_i(z)}{\psi_i(w)}\right)\right| = \left|\log\left(1 + \varepsilon_{i,\Gamma}\frac{\psi_i(w) - \psi_i(z)}{(\psi_i(z) - \varepsilon_{i,\Gamma})\psi_i(w)}\right)\right| = \left|\log\left(1 + \frac{\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)} - \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)}}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}}\right)\right|. \]
We see that $0 \le \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)} \le \varepsilon_i$ for all $w \in \Gamma$, and
\[ \frac{\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)} - \frac{\varepsilon_{i,\Gamma}}{\psi_i(w)}}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}} \ge -\varepsilon_i > -\frac12, \]
so $C_1$ may be chosen so as not to depend on anything. Continuing from the estimate above,
\[ \left|\log\frac{\hat\psi_i(w)}{\hat\psi_i(z)}\right| - \left|\log\frac{\psi_i(w)}{\psi_i(z)}\right| \le C_1\frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}\left|1 - \frac{\psi_i(z)}{\psi_i(w)}\right|\frac{1}{1 - \frac{\varepsilon_{i,\Gamma}}{\psi_i(z)}} \le C_1C_2\frac{\varepsilon_i}{1 - \varepsilon_i}\left|\log\frac{\psi_i(w)}{\psi_i(z)}\right|, \]
where $C_2$ may be chosen independently of $i, w, z$ since $\frac{\psi_i(w)}{\psi_i(z)} \ge e^{-(\hat C+C_{\hat F})}$, provided that at each stage we choose $\varepsilon_i$ small enough that $C_1C_2\frac{\varepsilon_i}{1-\varepsilon_i} \le \varepsilon_i'$. We confirm that it is sufficient to take $\varepsilon_i = \hat\delta\varepsilon_i'$ for small enough $\hat\delta > 0$. This means
\[ C_1C_2\frac{\varepsilon_i}{1 - \varepsilon_i} = C_1C_2\frac{\hat\delta\varepsilon_i'}{1 - \hat\delta\varepsilon_i'} < \frac{C_1C_2\hat\delta}{1 - \hat\delta}\,\varepsilon_i', \]
so taking $\hat\delta = \frac{1}{1+C_1C_2}$ is sufficient.
7 Choosing a sequence $(\varepsilon_i)$
Having shown that it is sufficient for our purposes for the sequence $(\varepsilon_i')$ to satisfy conditions (4) and (5), we now consider how we might choose a sequence $(\varepsilon_i)$ which, subject to these conditions, decreases as slowly as possible. Having chosen a sequence, we shall then estimate the rate of convergence this gives us.
Lemma 1 Given a sequence $v_i(\Phi)$, there exists a sequence $(\varepsilon_i') \le \frac12$ satisfying (4) and (5) such that for $\varepsilon_i = \hat\delta\varepsilon_i'$, and any $K > 1$,
\[ \prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) \le C\max\left(v_i(\Phi)^{\hat\delta/K}, \theta^i\right) \]
for some $\theta < 1$ depending only on $F$, and some $C > 0$.
Proof: We start by defining a sequence $(v_i^*) > 0$ as follows: we let $v_0^* = v_0(\Phi)$ and $v_i^* = \max(v_i(\Phi), cv_{i-1}^*)$, where $c$ is some constant such that $\exp\{-\min(\frac12, \beta^{-1}-1)\} < c < 1$. We claim that $v_i^* = O(v_i(\Phi))$ unless $v_i(\Phi)$ decays exponentially fast, in which case $v_i^*$ decays at some (possibly slower) exponential rate. To see this, suppose otherwise, in the case where $v_i(\Phi)$ decays slower than any exponential speed. Then for large $i$ certainly $v_i^* > v_i(\Phi)$, and so $v_i^* = cv_{i-1}^*$ for large $i$, and $(v_i^*)$ decays exponentially fast. But this means $v_i(\Phi)$ decays exponentially fast, which is a contradiction.
Let us now choose
\[ \varepsilon_i' = \log\frac{v_{i-1}^*}{v_i^*}. \]
(We ignore the trivial case $v_0(\Phi) = 0$.) We see that all terms are small enough that (5) is satisfied, and in particular $\varepsilon_i' \le \frac12$. Furthermore,
\[ v_i(\Phi)\prod_{j=1}^{i}(1 + \varepsilon_j') \le v_i^*\exp\left\{\sum_{j=1}^{i}\varepsilon_j'\right\} = v_i^*\exp\{\log v_0^* - \log v_i^*\} = v_0(\Phi). \]
For any $K > 1$,
\[ \prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) \le \exp\left\{-\frac{\hat\delta}{K}\sum_{j=1}^{i}\varepsilon_j'\right\} = \exp\left\{-\frac{\hat\delta}{K}(\log v_0^* - \log v_i^*)\right\} = \left(v_0(\Phi)^{-1}v_i^*\right)^{\hat\delta/K}. \]
If $(v_i^*)$ decays exponentially fast, we get an exponential bound. Otherwise, $(v_i^*)^{\hat\delta/K} = O(v_i(\Phi)^{\hat\delta/K})$.
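A minimal numerical sketch of this choice follows; the decay $v_i(\Phi) = i^{-\gamma}$ with $\gamma = 3$, and the values of $c$, $\hat\delta$ and $K$, are all arbitrary illustrative assumptions.

```python
import numpy as np

# A minimal sketch of Lemma 1's construction, assuming v_i(Phi) = i^{-3}
# and hypothetical constants c = 0.9, delta = 0.5, K = 2.
gamma, c, delta, K = 3.0, 0.9, 0.5, 2.0

N = 10 ** 4
v = np.arange(1, N + 1, dtype=float) ** (-gamma)
vstar = np.empty(N)
vstar[0] = 1.0                          # v*_0 = v_0(Phi), normalised to 1
for i in range(1, N):
    vstar[i] = max(v[i], c * vstar[i - 1])

epsp = np.log(vstar[:-1] / vstar[1:])   # eps'_i = log(v*_{i-1} / v*_i) >= 0
prod = np.cumprod(1 - delta * epsp / K)
i = N - 2
print(prod[i], vstar[i + 1] ** (delta / K))   # same polynomial order i^{-gamma*delta/K}
```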
Convergence of measures
We introduce a sequence of measure densities $\hat\Phi_0 \equiv \Phi \ge \hat\Phi_1 \ge \hat\Phi_2 \ge \ldots$ corresponding to the sequence $\{\hat\psi_i\}$ in the following way:
\[ \hat\Phi_i(z) := \hat\psi_i(z)J\hat F^i(z). \]
Lemma 2 Given a sequence $(\varepsilon_i) = (\hat\delta\varepsilon_i')$ satisfying the assumptions of Proposition 1, there exists $K > 1$, dependent only on $F$, $C_\Phi$ and $v_0(\Phi)$, such that for all $z \in \Delta\times\Delta$, $i \ge 1$,
\[ \hat\Phi_i(z) \le \left(1 - \frac{\varepsilon_i}{K}\right)\hat\Phi_{i-1}(z). \]
Proof: If we fix $i \ge 1$, $\Gamma \in \hat\xi_i$, and $w, z \in \Gamma$, then by Proposition 1 we have
\[ \frac{\hat\Phi_i(w)}{J\hat F^i(w)} \le \hat C_0\frac{\hat\Phi_i(z)}{J\hat F^i(z)}, \]
where $\hat C_0 = e^{\hat C} > 1$. From Sublemma 3, we have $\frac{1}{J\hat F(\hat F^{i-1}w)} \le e^{C_{\hat F}}\frac{1}{J\hat F(\hat F^{i-1}z)}$, so
\[ \frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)} = \frac{\hat\Phi_{i-1}(w)}{J\hat F^{i-1}(w)}\cdot\frac{1}{J\hat F(\hat F^{i-1}w)} \le \hat C_0e^{C_{\hat F}}\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)}. \]
Now we obtain a relationship between $\hat\Phi_i$ and $\hat\Phi_{i-1}$ by writing
\[ \hat\Phi_i(z) = (\psi_i(z) - \varepsilon_{i,z})J\hat F^i(z) = \left(\frac{\hat\psi_{i-1}(z)}{J\hat F(\hat F^{i-1}z)} - \varepsilon_i\inf_{w\in\hat\xi_i(z)}\frac{\hat\psi_{i-1}(w)}{J\hat F(\hat F^{i-1}w)}\right)J\hat F^i(z) = \left(\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)} - \varepsilon_i\inf_{w\in\hat\xi_i(z)}\frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)}\right)J\hat F^i(z). \]
So for any $z \in \Delta\times\Delta$ we have that
\[ \hat\Phi_i(z) \le \left(\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)} - \frac{\varepsilon_i}{K}\frac{\hat\Phi_{i-1}(z)}{J\hat F^i(z)}\right)J\hat F^i(z) = \left(1 - \frac{\varepsilon_i}{K}\right)\hat\Phi_{i-1}(z), \]
where $K = \hat C_0e^{C_{\hat F}}$.
The above lemma gives an estimate on the total mass of $\hat\Phi_i$ for each $i$. To obtain an estimate for the difference between $F^n_*\lambda$ and $F^n_*\lambda'$, we must use this, and also take into account the length of the simultaneous return time $T$.
Lemma 3 For all $n > 0$,
\[ |F^n_*\lambda - F^n_*\lambda'| \le 2P\{T > n\} + 2\sum_{i=1}^{n}\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right)P\{T_i \le n < T_{i+1}\}, \]
where $K$ is as in the previous lemma.
Proof: We define a sequence $\{\Phi_i\}$ of measure densities, corresponding to the measure unmatched at time $i$ with respect to $F\times F$. We shall often write $\Phi_i(m\times m)$, say, to refer to the measure which has density $\Phi_i$ with respect to $m\times m$. For $z \in \Delta\times\Delta$ we let $\Phi_n(z) = \hat\Phi_i(z)$, where $i$ is the largest integer such that $T_i(z) \le n$. Writing $\Phi = \Phi_n + \sum_{k=1}^{n}(\Phi_{k-1} - \Phi_k)$, we have
\[ |F^n_*\lambda - F^n_*\lambda'| = |\pi_*(F\times F)^n_*(\Phi(m\times m)) - \pi'_*(F\times F)^n_*(\Phi(m\times m))| \le |\pi_*(F\times F)^n_*(\Phi_n(m\times m)) - \pi'_*(F\times F)^n_*(\Phi_n(m\times m))| + \sum_{k=1}^{n}\left|(\pi_* - \pi'_*)\left[(F\times F)^n_*((\Phi_{k-1} - \Phi_k)(m\times m))\right]\right|. \tag{7} \]
The first term is clearly $\le 2\int\Phi_n\,d(m\times m)$. Our construction should ensure the remaining terms are zero, since we have arranged that the measure we subtract is symmetric in the two coordinates. To confirm this, we partition $\Delta\times\Delta$ into regions on which each $T_m$ is constant, at least while $T_m < n$.
Consider the family of sets $A_{k,i}$, $i, k \in \mathbb N$, where $A_{k,i} := \{z \in \Delta\times\Delta : T_i(z) = k\}$. Clearly, each $A_{k,i}$ is a union of elements of $\hat\xi_i$, and for any fixed $k$ the sets $A_{k,i}$ are pairwise disjoint. It is also clear that on any $A_{k,i}$, $\Phi_{k-1} - \Phi_k \equiv \hat\Phi_{i-1} - \hat\Phi_i$, and for any $k$, $\Phi_{k-1} \equiv \Phi_k$ on $\Delta\times\Delta - \bigcup_iA_{k,i}$. So for each $k$,
\[ \pi_*(F\times F)^n_*((\Phi_{k-1} - \Phi_k)(m\times m)) = \sum_i\sum_{\Gamma\in(\hat\xi_i|A_{k,i})}F^{n-k}_*\pi_*(F\times F)^{T_i}_*((\hat\Phi_{i-1} - \hat\Phi_i)((m\times m)|\Gamma)). \]
We show that this measure is unchanged if we replace $\pi$ with $\pi'$ in the last expression. Let $E \subset \Delta$ be an arbitrary measurable set, and fix some $\Gamma \in \hat\xi_i|A_{k,i}$. Then
\[ \pi_*\hat F^i_*((\hat\Phi_{i-1} - \hat\Phi_i)((m\times m)|\Gamma))(E) = \hat F^i_*\left(\varepsilon_i\,J\hat F^i\inf_{w\in\Gamma}\frac{\hat\Phi_{i-1}(w)}{J\hat F^i(w)}\,((m\times m)|\Gamma)\right)(E\times\Delta) = \varepsilon_iC\,J\hat F^i(m\times m)(\hat F^{-i}(E\times\Delta)\cap\Gamma), \]
where $C$ is constant on $\Gamma$. This equals
\[ \int_{\hat F^{-i}(E\times\Delta)\cap\Gamma}\varepsilon_iC\,J\hat F^i\,d(m\times m) = \varepsilon_iC(m\times m)(E\times\Delta). \]
Since $(m\times m)(E\times\Delta) = (m\times m)(\Delta\times E)$, the terms of the sum in (7) all have zero value, as claimed. Now
\[ \int\Phi_n\,d(m\times m) = \sum_{i=0}^{\infty}\int_{\{T_i\le n<T_{i+1}\}}\Phi_n\,d(m\times m); \]
in fact, since $T_i \ge i$, all terms of the series are zero for $i > n$. For $1 \le i \le n$,
\[ \int_{\{T_i\le n<T_{i+1}\}}\Phi_n = \int_{\{T_i\le n<T_{i+1}\}}\hat\Phi_i \le \int_{\{T_i\le n<T_{i+1}\}}\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right)\Phi. \]
The estimate claimed for $|F^n_*\lambda - F^n_*\lambda'|$ follows easily. Finally we state a simple relationship between $P\{T > n\}$ and $(m\times m)\{T > n\}$. From now on we shall use the convention that $P\{\text{condition}\,|\,\Gamma\} := \frac{1}{P(\Gamma)}P\{x \in \Gamma : x \text{ satisfies condition}\}$.
Sublemma 2 There exists $\hat K > 0$ depending only on $C_\Phi$ and $v_0(\Phi)$ such that $\forall i \ge 1$, $\forall\Gamma \in \hat\xi_i$,
\[ P\{T_{i+1} - T_i > n\,|\,\Gamma\} \le \hat K(m_0\times m_0)\{T > n\}. \]
The dependence of $\hat K$ on $P$ may be removed entirely if we take only $i \ge$ some $i_0(P)$.
Proof: Let $\mu = \frac{1}{P(\Gamma)}\hat F^i_*(P|\Gamma)$. We see that $P\{T_{i+1} - T_i > n\,|\,\Gamma\} = \mu\{T > n\}$. We prove a distortion estimate for $\frac{d\mu}{d(m_0\times m_0)}$, using the estimates of Sublemma 1. Let $w, z \in \Delta_0\times\Delta_0$ and let $w_0, z_0 \in \Gamma$ be such that $\hat F^iw_0 = w$, $\hat F^iz_0 = z$. Then
\[ \left|\log\frac{\frac{d\mu}{d(m\times m)}(w)}{\frac{d\mu}{d(m\times m)}(z)}\right| \le \left|\log\frac{\Phi(w_0)}{\Phi(z_0)}\right| + \left|\log\frac{J\hat F^iw_0}{J\hat F^iz_0}\right| \le C_\Phi v_i(\Phi) + C_{\hat F}. \]
This gives
\[ \frac{d\mu}{d(m\times m)} \le \frac{e^{(C_\Phi v_i(\Phi)+C_{\hat F})}}{(m_0\times m_0)(\Delta_0\times\Delta_0)}, \]
and hence the result follows.
Combinatorial estimates
In Lemma 3 we have given the main estimate involving $P$, $T$, and the sequence $(\varepsilon_i)$. It remains to relate $P$ and $T$ to the sequence $m_0\{R > n\}$. Primarily, this involves estimates relating the sequences $P\{T > n\}$ and $m_0\{R > n\}$. We shall state only some key estimates of the proof, referring the reader to [Yo] for full details. Our statements differ slightly, as the estimates of [Yo] are stated in terms of $m\{\hat R > n\}$; they are easily reconciled by noting that $m\{\hat R > n\} = \sum_{i>n}m_0\{R > i\}$. (As earlier, $\hat R \ge 0$ is the first arrival time to $\Delta_0$.)
Proposition 2
1. If $m_0\{R > n\} = O(\theta^n)$ for some $0 < \theta < 1$, then $P\{T > n\} = O(\theta_1^n)$ for some $0 < \theta_1 < 1$. Also, for sufficiently small $\delta_1 > 0$, $P\{T_i \le n < T_{i+1}\} \le C\theta'^n$ for $i \le \delta_1n$, for some $0 < \theta' < 1$, $C > 0$ independent of $i$. The constants $\theta_1, \theta', \delta_1$ may all be chosen independently of $P$.
2. If $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$, then $P\{T > n\} = O(n^{1-\alpha})$.
This proposition follows from estimates involving the combinatorics of the intermediate stopping times $\{\tau_i\}$. Let us make explicit a key sublemma used in the proofs, concerning the regularity of the pushed-forward measure densities $\frac{dF^n_*\lambda}{dm}$; for the rest of the argument we refer to [Yo], as the changes are minor.
Sublemma 3 For any $k > 0$, let $\Omega \in \bigvee_{i=0}^{k-1}F^{-i}\eta$ be such that $F^k\Omega = \Delta_0$. Let $\mu = F^k_*(\lambda|\Omega)$. Then $\forall x, y \in \Delta_0$, we have
\[ \left|\frac{\frac{d\mu}{dm}(x)}{\frac{d\mu}{dm}(y)} - 1\right| \le C_0 \]
for some $C_0(\lambda)$, where the dependence on $\lambda$ is only on $v_0(\lambda)$ and $C_\lambda$, and may be removed entirely if we only consider $k \ge$ some $k_0(\lambda)$.
Proof: Let $\varphi = \frac{d\lambda}{dm}$, fix $x, y \in \Delta_0$, and let $x_0, y_0$ be the unique points in $\Omega$ such that $F^kx_0 = x$, $F^ky_0 = y$. We note that $\frac{d\mu}{dm}(x) = \varphi(x_0)\cdot\frac{dF^k_*(m|\Omega)}{dm}(x) = \frac{\varphi(x_0)}{JF^k(x_0)}$. So
\[ \left|\frac{\varphi(x_0)}{JF^kx_0}\cdot\frac{JF^ky_0}{\varphi(y_0)} - 1\right| = \frac{JF^ky_0}{\varphi(y_0)}\left|\frac{\varphi(x_0)}{JF^kx_0} - \frac{\varphi(y_0)}{JF^ky_0}\right| \le \frac{JF^ky_0}{\varphi(y_0)}\left(\varphi(x_0)\left|\frac{1}{JF^kx_0} - \frac{1}{JF^ky_0}\right| + \frac{1}{JF^ky_0}|\varphi(x_0) - \varphi(y_0)|\right) \le \frac{\varphi(x_0)}{\varphi(y_0)}\left|\frac{JF^ky_0}{JF^kx_0} - 1\right| + \left|\frac{\varphi(x_0)}{\varphi(y_0)} - 1\right| \le (1 + C_\varphi v_j(\varphi))C' + C_\varphi v_j(\varphi) \le (1 + C_\varphi v_0(\varphi))C' + C_\varphi v_0(\varphi), \]
where $j$ is the number of visits to $\Delta_0$ up to time $k$. Clearly the penultimate bound can be made independent of $\lambda$ for $j \ge$ some $j_0(\lambda)$.
The following result combines the estimates above with those of the previous section.
Proposition 3
1. When $m_0\{R > n\} \le C_1\theta^n$,
\[ |F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + 2\sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) \]
for some $0 < \theta' < 1$ and sufficiently small $\delta_1$.
2. When $m_0\{R > n\} \le C_1n^{-\alpha}$, $\alpha > 1$,
\[ |F^n_*\lambda - F^n_*\lambda'| \le 2C_1n^{1-\alpha} + Cn^{1-\alpha}\sum_{i=1}^{[\delta_1n]}i^\alpha\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) + \sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) \]
for sufficiently small $\delta_1$.
Proof: In the first case, Proposition 2 and Lemma 3 tell us that
\[ |F^n_*\lambda - F^n_*\lambda'| \le C\theta_0^n + 2\sum_{i=1}^{[\delta_1n]}P\{T_i \le n < T_{i+1}\} + 2\sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) \]
for any $0 < \delta_1 < 1$, for some $0 < \theta_0 < 1$. For sufficiently small $\delta_1$, the middle term is $\le C[\delta_1n]\theta'^n$, which decays at some exponential speed in $n$.
In the second case, for any $0 < \delta_1 < 1$ we have
\[ |F^n_*\lambda - F^n_*\lambda'| \le Cn^{1-\alpha} + 2\sum_{i=1}^{[\delta_1n]}\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right)P\{T_i \le n < T_{i+1}\} + \sum_{i=[\delta_1n]+1}^{n}\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right). \]
We estimate the middle term by noting that
\[ P\{T_i \le n < T_{i+1}\} \le \sum_{j=0}^{i}P\left\{T_{j+1} - T_j > \frac{n}{i+1}\right\} \le \hat K(i+1)(m\times m)\left\{T > \frac{n}{i+1}\right\} \le Cn^{1-\alpha}(i+1)^\alpha \]
for some $C > 0$. For the last step, note that Proposition 2 applies to the normalisation of $(m\times m)$ to a probability measure.
Specific regularity classes
We now combine all of our intermediate estimates to obtain a rate of decay of correlations in the specific cases mentioned in Theorem 6. First, we set $\zeta = \hat\delta/K$, which can be seen to depend only on $F$, $C_\Phi$ and $v_0(\Phi)$. Throughout this section, we shall let $C$ denote a generic constant, allowed to depend only on $F$ and $\Phi$, which may vary between expressions.
Exponential return times
In this subsection, we suppose that $m_0\{R > n\} = O(\theta^n)$, and hence $m\{\hat R > n\} = O(\theta^n)$.
Class (V1): Suppose $v_i(\Phi) = O(\theta_1^i)$ for some $\theta_1 < 1$. By Lemma 1 we may take $(\varepsilon_i)$ such that conditions (4) and (5) are satisfied, and
\[ \prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) = O(\theta_2^i) \]
for some $0 < \theta_2 < 1$. Applying Proposition 3 we have
\[ |F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i>[\delta_1n]}\theta_2^i \]
for some $\theta' < 1$ and sufficiently small $\delta_1 > 0$. This gives the required exponential bound in $n$.
Class (V2): Suppose $v_i(\Phi) = O(e^{-i^\gamma})$ for some $\gamma \in (0, 1)$. Then there exists $(\varepsilon_j)$ such that $\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) = O(e^{-\zeta i^\gamma})$. So
\[ |F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i > [\delta_1 n]} e^{-\zeta i^\gamma}. \]
We see that $e^{-\zeta i^\gamma} = O(e^{-i^{\gamma'}})$ for every $0 < \gamma' < \gamma$, and it is well known that the sum is of order $e^{-n^{\gamma''}}$ for every $0 < \gamma'' < \gamma'$.
Class (V3): Suppose $v_i(\Phi) = O(e^{-(\log i)^\gamma})$ for some $\gamma > 1$. We may take $(\varepsilon_j)$ such that $\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) = O(e^{-\zeta(\log i)^\gamma})$. So
\[ |F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i > [\delta_1 n]} e^{-\zeta(\log i)^\gamma}. \]
It is easy to show that $e^{-\zeta(\log i)^\gamma} = O(e^{-(\log i)^{\gamma'}})$ for every $0 < \gamma' < \gamma$. So the sum is of order $O(e^{-(\log n)^{\gamma''}})$ for every $0 < \gamma'' < \gamma$.
Class (V4): Suppose $v_i(\Phi) = O(i^{-\gamma})$ for some $\gamma > \frac{1}{\zeta}$. Then we can take $(\varepsilon_j)$ such that $\prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) \le Ci^{-\zeta\gamma}$. So
\[ |F^n_*\lambda - F^n_*\lambda'| \le C\theta'^n + C\sum_{i=[\delta_1 n]+1}^{n} i^{-\zeta\gamma} = O(n^{1-\zeta\gamma}). \]
Polynomial return times
Here we suppose $m_0\{R > n\} = O(n^{-\alpha})$ for some $\alpha > 1$. Suppose $v_n(\Phi) = O(n^{-\gamma})$ for some $\gamma > \frac{2}{\zeta}$. We can take $(\varepsilon_i)$ such that
\[ \prod_{j=1}^{i}\left(1 - \frac{\varepsilon_j}{K}\right) \le Ci^{-\zeta\gamma}. \]
By Proposition 3, for some $\delta_1$,
\[ |F^n_*\lambda - F^n_*\lambda'| \le 2Cn^{1-\alpha} + Cn^{1-\alpha}\sum_{i=1}^{[\delta_1 n]} i^{\alpha-\zeta\gamma} + C\sum_{i=[\delta_1 n]+1}^{\infty} i^{-\zeta\gamma}. \]
The third term here is of order $n^{1-\zeta\gamma}$. To estimate the second term, we consider three cases.
Case 1: $\gamma > \frac{\alpha+1}{\zeta}$. Here $\alpha - \zeta\gamma < -1$, so the sum is bounded above independently of $n$, and the whole term is $O(n^{1-\alpha})$.
Case 2: $\gamma = \frac{\alpha+1}{\zeta}$. The sum is
\[ \sum_{i \le [\delta_1 n]} i^{-1} \le 1 + \int_1^{[\delta_1 n]} x^{-1}\,dx = 1 + \log[\delta_1 n] = O(\log n). \]
So the whole term is $O(n^{1-\alpha}\log n)$.
Case 3: $\frac{2}{\zeta} < \gamma < \frac{\alpha+1}{\zeta}$. The sum is of order $n^{\alpha+1-\zeta\gamma}$, and so the whole term is $O(n^{2-\zeta\gamma})$.
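To illustrate the trichotomy with concrete (arbitrary) numbers: if $\alpha = 3$ and $\zeta\gamma = 3.5$, then $\frac{2}{\zeta} < \gamma < \frac{\alpha+1}{\zeta} = \frac{4}{\zeta}$, so Case 3 applies: the second term is $O(n^{-2}\cdot n^{1/2}) = O(n^{-3/2})$, dominating both the first term $O(n^{-2})$ and the third term $O(n^{-5/2})$, and the overall bound is $O(n^{2-\zeta\gamma}) = O(n^{-3/2})$. Raising $\zeta\gamma$ past $\alpha + 1 = 4$ would recover the optimal rate $O(n^{1-\alpha}) = O(n^{-2})$.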
Decay of correlations
Finally, we show how estimates for decay of correlations may be derived directly from those for the rates of convergence of measures.
Let $\varphi, \psi \in L^\infty(\Delta, m)$, as in the statement of Theorem 6. We write $\hat\psi := b(\psi + a)$, where $a = 1 - \inf\psi$ and $b$ is such that $\int\hat\psi\,d\nu = 1$. We notice that $b \in \left[\frac{1}{1+v_0(\psi)}, 1\right]$, and that $\inf\hat\psi = b$, $\sup\hat\psi \le 1 + v_0(\psi)$. Now let $\rho = \frac{d\nu}{dm}$, and let $\lambda$ be the measure on $\Delta$ with $\frac{d\lambda}{dm} = \hat\psi\rho$. We have
\[ \left|\int(\varphi\circ F^n)\psi\,d\nu - \int\varphi\,d\nu\int\psi\,d\nu\right| = \frac{1}{b}\left|\int(\varphi\circ F^n)\hat\psi\,d\nu - \int\varphi\,d\nu\int\hat\psi\,d\nu\right| = \frac{1}{b}\left|\int\varphi\,d\big(F^n_*(\hat\psi\rho\,m)\big) - \int\varphi\rho\,dm\right| \le \frac{1}{b}\int|\varphi|\left|\frac{dF^n_*\lambda}{dm} - \rho\right|dm \le \frac{1}{b}\|\varphi\|_\infty\,|F^n_*\lambda - \nu|. \]
It remains to check the regularity of $\hat\psi\rho$. First,
\[ |\hat\psi(x)\rho(x) - \hat\psi(y)\rho(y)| \le |\hat\psi(x)||\rho(x) - \rho(y)| + |\rho(y)||\hat\psi(x) - \hat\psi(y)| \le \|\hat\psi\|_\infty|\rho(x) - \rho(y)| + \|\rho\|_\infty|b||\psi(x) - \psi(y)|. \]
It can be shown that $\rho$ is bounded below by some positive constant, and $v_n(\rho) \le C\beta^n$. (This is part of the statement of Theorem 1 in [Yo].) So $\hat\psi\rho$ is bounded away from zero, and $v_n(\hat\psi\rho) \le C\beta^n + Cv_n(\psi)$, where $C$ depends on $v_0(\psi)$.
Taking $\lambda' = \nu$, we see that $v_n(\Phi) \le C\beta^n + Cv_n(\psi)$. This shows that estimates for $|F^n_*\lambda - F^n_*\lambda'|$ carry straight over to estimates for decay of correlations. To check that the dependency of the constants is as we require, we note that we can take
\[ C_\lambda = \frac{1}{\inf\frac{d\lambda}{dm}} = \frac{1}{\inf\hat\psi\rho} \le \frac{1}{b\inf\rho} \le \frac{1 + v_0(\psi)}{\inf\rho}. \]
So an upper bound for this constant is determined by $v_0(\psi)$. Clearly $C_\Phi$ depends only on $F$ and an upper bound for $v_0(\psi)$, and in particular these constants determine $\zeta = \hat\delta/K$.
Central Limit Theorem
We verify the Central Limit Theorem in each case for classes of observables which give summable decay of autocorrelations (that is, summable decay of correlations under the restriction ϕ = ψ).
A general theorem of Liverani ( [L]) reduces in this context to the following.
Theorem 7 Let $(X, \mathcal{F}, \mu)$ be a probability space, and $T : X \circlearrowleft$ a (non-invertible) ergodic measure-preserving transformation. Let $\varphi \in L^\infty(X, \mu)$ be such that $\int\varphi\,d\mu = 0$. Assume
$$\sum_{n=1}^{\infty}\left|\int(\varphi\circ T^n)\,\varphi\,d\mu\right| < \infty, \qquad (8)$$
$$\sum_{n=1}^{\infty}(\hat T^{*n}\varphi)(x) \text{ is absolutely convergent for } \mu\text{-a.e. } x, \qquad (9)$$
where $\hat T^*$ is the dual of the operator $\hat T : \varphi \mapsto \varphi\circ T$. Then the Central Limit Theorem holds for $\varphi$ if and only if $\varphi$ is not a coboundary.
In the above, the dual operator $\hat T^*$ is the Perron-Frobenius operator corresponding to $T$ and $\mu$, that is,
$$(\hat T^*\varphi)(x) = \sum_{y\,:\,Ty=x} \frac{\varphi(y)}{JT(y)}.$$
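As a one-line check that this operator is indeed dual to $\hat T$ (using the change-of-variables formula for the Jacobian $JT$ taken with respect to $\mu$):
$$\int (\hat T^*\varphi)\,\psi\,d\mu = \int \sum_{y\,:\,Ty=x} \frac{\varphi(y)}{JT(y)}\,\psi(x)\,d\mu(x) = \int \varphi\,(\psi\circ T)\,d\mu = \int \varphi\,(\hat T\psi)\,d\mu.$$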
Of course the Jacobian $JT$ here is defined in terms of the measure $\mu$. Let $\varphi : \Delta \to \mathbb{R}$ be an observable which is not a coboundary, and for which
$$C_n(\varphi, \varphi; \nu) = \int(\varphi\circ F^n)\,\varphi\,d\nu - \Big(\int\varphi\,d\nu\Big)^2$$
is summable. Let $\phi = \varphi - \int\varphi\,d\nu$, so that $\int\phi\,d\nu = \int\varphi\,d\nu - \int\big(\int\varphi\,d\nu\big)\,d\nu = 0$.
We shall show that $\phi$ satisfies the assumptions of the theorem above. It is straightforward to check that $C_n(\varphi,\varphi;\nu) = C_n(\phi,\phi;\nu) = \int(\phi\circ F^n)\,\phi\,d\nu$. Hence condition (8) above is satisfied for $\phi$.
Since $m$ and $\nu$ are equivalent measures, it suffices to verify the condition in (9) $m$-a.e. The operator $\hat F^*$ is defined in terms of the invariant measure, so for a measure $\lambda \ll m$ it sends $\frac{d\lambda}{d\nu}$ to $\frac{dF_*\lambda}{d\nu}$. By a change of coordinates (or rather, of reference measure), we find that
$$(\hat F^{*n}\phi)(x) = \frac{1}{\rho(x)}\,\big(P^n(\phi\rho)\big)(x),$$
where $P$ is the Perron-Frobenius operator with respect to $m$, that is, the operator sending densities $\frac{d\lambda}{dm}$ to $\frac{dF_*\lambda}{dm}$.
We shall now write $\phi$ as the difference of the densities of two (positive) measures of similar regularity to $\phi$. We let $\tilde\phi = b(\phi + a)$, for some large $a$, with $b > 0$ chosen such that $\int\tilde\phi\rho\,dm = 1$. We define measures $\lambda, \lambda'$ by
$$\frac{d\lambda}{dm} = (b\phi + \tilde\phi)\rho, \qquad \frac{d\lambda'}{dm} = \tilde\phi\rho.$$
It is straightforward to check that this gives two probability measures, and that
$$b^{-1}\Big(\frac{d\lambda}{dm} - \frac{d\lambda'}{dm}\Big) = \phi\rho.$$
As we showed in the previous section, $v_n(\phi\rho) \le C\beta^n + Cv_n(\phi)$ for some $C > 0$. Also, $b\phi + \tilde\phi = b(2\phi + a)$, which is bounded below by some positive constant, provided we choose sufficiently large $a$. We easily see $v_n(\lambda), v_n(\lambda') \le Cv_n(\varphi)$. We now follow the construction of the previous sections for these given measures $\lambda, \lambda'$, and consider the sequence of densities $\Phi_n$ defined in §8. We have
$$F^n_*\lambda - F^n_*\lambda' = \pi_*(F\times F)^n_*\big(\Phi_n(m\times m)\big) - \pi'_*(F\times F)^n_*\big(\Phi_n(m\times m)\big).$$
Let $\psi_n$ be the density of the first term with respect to $m$, and $\psi'_n$ the density of the second. Since $P$ is a linear operator, we see that
$$|P^n(\phi\rho)| = b^{-1}\left|\frac{dF^n_*\lambda}{dm} - \frac{dF^n_*\lambda'}{dm}\right| \le b^{-1}(\psi_n + \psi'_n).$$
These densities have integral and distortion which are estimable by the construction. We know $\int\psi_n\,dm = \int\psi'_n\,dm = \int\Phi_n\,d(m\times m)$. In the cases we consider (sufficiently fast polynomial variations) this is summable in $n$; notice that we have already used this expression as a key upper bound for $\frac{1}{2}|F^n_*\lambda - F^n_*\lambda'|$ (see Lemma 3). It remains to show that a similar condition holds pointwise, by showing that $\psi_n, \psi'_n$ both have bounded distortion on each $\Delta_l$, and hence that $|F^n_*\lambda - F^n_*\lambda'|$ is an upper bound for $\psi_n + \psi'_n$, up to some constant. This follows non-trivially from Proposition 1, which gives a distortion bound on $\{\Phi_k\}$, and hence on $\{\Phi_n\}$ when we restrict to elements of a suitable partition. The remainder of the argument is essentially no different from that given in [Yo], and we omit it here.
Applications
Having obtained estimates in the abstract framework of Young's tower, we now discuss how these results may be applied to other settings. First, we define formally what it means for a system to admit a tower.
Let $X$ be a finite-dimensional compact Riemannian manifold, with Leb denoting some Riemannian volume (Lebesgue measure) on $X$. We say that a locally $C^1$ non-uniformly expanding system $f : X \circlearrowleft$ admits a tower if there exists a subset $X_0 \subset X$, $\mathrm{Leb}(X_0) > 0$, a partition (mod Leb) $\mathcal{P}$ of $X_0$, and a return time function $R : X_0 \to \mathbb{N}$ constant on each element of $\mathcal{P}$, such that

• for every $\omega \in \mathcal{P}$, $f^R|\omega$ is an injection onto $X_0$;
• $f^R$ and $(f^R|\omega)^{-1}$ are Leb-measurable functions, $\forall\omega\in\mathcal{P}$;
• $\bigvee_{j=0}^{\infty}(f^R)^{-j}\mathcal{P}$ is the trivial partition into points;
• the volume derivative $\det Df^R$ is well-defined and non-singular (i.e. $0 < |\det Df^R| < \infty$) Leb-a.e., and $\exists C > 0$, $\beta < 1$, such that $\forall\omega\in\mathcal{P}$, $\forall x, y \in \omega$,
$$\left|\frac{\det Df^R(x)}{\det Df^R(y)} - 1\right| \le C\beta^{s(F^R x,\,F^R y)},$$
where $s$ is defined in terms of $f^R$ and $\mathcal{P}$ as before.
We say the system admits the tower $F : (\Delta, m) \circlearrowleft$ if the base $\Delta_0 = X_0$, $m|\Delta_0 = \mathrm{Leb}|X_0$, and the tower is determined by $\Delta_0$, $R$, $F^R := f^R$ and $\mathcal{P}$ as in §3. It is easy to check that the usual assumptions of the tower hold, except possibly for aperiodicity and finiteness. In particular, $|\det Df^R|$ equals the Jacobian $JF^R$.
If $F : (\Delta, m) \circlearrowleft$ is a tower for $f$ as above, there exists a projection $\pi : \Delta \to X$, which we shall simply call the tower projection, and which is a semi-conjugacy between $f$ and $F$; that is, for $x \in \Delta_l$, with $x = F^l x_0$ for $x_0 \in \Delta_0$, $\pi(x) := f^l(x_0)$. In all the examples we have mentioned in §2, the standard tower constructions (as given in the papers we cited there) provide us with a tower projection $\pi$ which is Hölder-continuous with respect to the separation time $s$ on $\Delta$. That is, given a Riemannian metric $d$, in each case we have that $\exists\beta < 1$ such that for $x, y \in \Delta$,
$$d(\pi(x), \pi(y)) = O(\beta^{s(x,y)}). \qquad (10)$$
Note that the issue of the regularity of $\pi$ is often not mentioned explicitly in the literature, but essentially follows from having good distortion control for every iterate of the map. (Formally, a tower is only required to have good distortion for the return map $F^R$, which is not sufficient.)
Given a system $f$ which admits a tower $F : (\Delta, m) \circlearrowleft$ with projection $\pi$ satisfying (10), we show how the observable classes (R1)-(R4) on $X$ correspond to the classes (V1)-(V4) of observables on $\Delta$. Recall that for given $\psi$, $R_\varepsilon(\psi) := \sup\{|\psi(x) - \psi(y)| : d(x,y) \le \varepsilon\}$.
Given a regularity for $\psi$ in terms of $R_\varepsilon(\psi)$, we estimate the regularity of $\psi\circ\pi$, which is an observable on $\Delta$.
Lemma 4
• If $\psi \in (R1, \gamma)$ for some $\gamma \in (0, 1]$, then $\psi\circ\pi \in (V1)$;
• if $\psi \in (R2, \gamma)$ for some $\gamma \in (0, 1)$, then $\psi\circ\pi \in (V2, \gamma')$ for every $\gamma' < \gamma$;
• if $\psi \in (R3, \gamma)$ for some $\gamma > 1$, then $\psi\circ\pi \in (V3, \gamma')$ for every $\gamma' < \gamma$;
• if $\psi \in (R4, \gamma)$ for some $\gamma > 1$, then $\psi\circ\pi \in (V4, \gamma)$.
Proof: The computations are entirely straightforward, so we shall just make explicit the (R4) case for the purposes of illustration.
Suppose $R_\varepsilon(\psi) = O(|\log\varepsilon|^{-\gamma})$ for some $\gamma > 0$. Then, taking $n$ as large as necessary,
$$v_n(\psi\circ\pi) \le C\,|\log C\beta^n|^{-\gamma} = C\,(n\log\beta^{-1} - \log C)^{-\gamma} \le C\,\Big(\frac{n}{2}\log\beta^{-1}\Big)^{-\gamma} = O(n^{-\gamma}).$$
Let us point out that the condition (10) is not necessary for us to apply these methods. If we are given some weaker regularity on $\pi$, the classes (V1)-(V4) shall simply correspond to some larger observable classes on the manifold. It remains to check that the semi-conjugacy $\pi$ preserves the statistical properties we are interested in.
Lemma 5 Let $\nu$ be the mixing acip on $\Delta$ given by Theorem 6. Given $\varphi, \psi : X \to \mathbb{R}$, let $\tilde\varphi = \varphi\circ\pi$, $\tilde\psi = \psi\circ\pi$. Then $C_n(\varphi, \psi; \pi_*\nu) = C_n(\tilde\varphi, \tilde\psi; \nu)$.
Lemma 6 Suppose the Central Limit Theorem holds for $(F, \nu)$ for some observable $\varphi : \Delta \to \mathbb{R}$. Then the Central Limit Theorem also holds for $(f, \pi_*\nu)$ for the observable $\tilde\varphi = \varphi\circ\pi$.

Figure 1: A system admitting a tower via a non-Hölder semi-conjugacy.
Given $0 < a < b < 1$ and $\alpha > 1$, we define $f : [0,1] \circlearrowleft$ by
$$f(x) = \begin{cases} 1 - (1-b)(-\log a)^{\alpha}\,(-\log(a-x))^{-\alpha} & x \in [0, a] \\ \frac{b}{b-a}\,(x-a) & x \in (a, b) \\ b - \frac{b}{a}\exp\big\{(1-b)^{\alpha^{-1}}(\log a)(1-x)^{-\alpha^{-1}}\big\} & x \in [b, 1]. \end{cases}$$
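A quick endpoint computation (for the third branch, note that the exponent equals $\log a$ at $x = b$ and tends to $-\infty$ as $x \to 1^-$) sketches the Markov structure claimed below:
$$f(0) = b, \qquad \lim_{x\to a^-} f(x) = 1, \qquad f(b) = b - \tfrac{b}{a}\,e^{\log a} = 0, \qquad \lim_{x\to 1^-} f(x) = b,$$
so the three branches map $[0,a]$ onto $[b,1]$, $(a,b)$ onto $(0,b)$, and $[b,1]$ onto $[0,b)$.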
(See Figure 1.) Note that the map has unbounded derivative near $a$, and that $a$ maps onto the critical point at 1. It is easy to check that $f$ is monotone increasing on each interval, and that $f$ has a Markov structure on the intervals $[0,a]$, $(a,b)$ and $[b,1]$. Taking $\Delta_0 = [0,b]$, $\mathcal{P} = \{[0,a], (a,b)\}$ with $R([0,a]) = 2$, $R((a,b)) = 1$, it is clear that the conditions for $f$ to admit a tower $F : \Delta \circlearrowleft$ are satisfied. For $x, y \in [0,a]$, we have that $|x-y| \approx (b/a)^{-s(x,y)}$. If we fix $k$ and consider $|f(x) - f(y)|$ for $x, y \in [0,a)$ with $s(x,y) = k$, then for $y$ close to $a$ we have $|f(x) - f(y)| \approx (k\log(b/a) + C)^{-\alpha} \approx k^{-\alpha}$ for some $C$. This determines the regularity of the tower projection $\pi$, which is in particular not Hölder continuous. However, if we take $\psi \in (R1, \gamma)$ for some | 13,261
cs0311047 | 2950787737 | In recent years, the amount of information on the Internet has increased exponentially, creating great interest in selective information dissemination systems. The publish/subscribe paradigm is particularly suited for designing systems that route information and requests according to their content throughout a wide-area network of brokers. Current publish/subscribe systems use limited syntax-based content routing, but since publishers and subscribers are anonymous and decoupled in time, space and location, often across wide-area network boundaries, they do not necessarily speak the same language. Consequently, adding semantics to current publish/subscribe systems is important. In this paper we identify and examine the issues in developing semantic-based content routing for publish/subscribe broker networks. | Some systems @cite_9 @cite_13 use inference engines to discover semantic relationships between data from ontology representations. Inference engines usually have specialized languages for expressing queries, different from the language used to retrieve data; therefore, user queries have to be either expressed in or translated into the language of the inference engine. The ontology is either global (i.e., domain-independent) or domain-specific (i.e., covering only a single domain). Domain-specific ontologies are smaller and more commonly found than global ontologies because they are easier to specify. Additionally, there are systems that use mapping functions exclusively and do not have inference engines @cite_3 @cite_12. In these systems, mapping functions serve the role of an inference engine. | {
"abstract": [
"A method for integrating separately developed information resources that overcomes incompatibilities in syntax and semantics and permits the resources to be accessed and modified coherently is described. The method provides logical connectivity among the information resources via a semantic service layer that automates the maintenance of data integrity and provides an approximation of global data integration across systems. This layer is a fundamental part of the Carnot architecture, which provides tools for interoperability across global enterprises. >",
"",
"Large organizations need to exchange information among many separately developed systems. In order for this exchange to be useful, the individual systems must agree on the meaning of their exchanged data. That is, the organization must ensure semantic interoperability . This paper provides a theory of semantic values as a unit of exchange that facilitates semantic interoperability betweeen heterogeneous information systems. We show how semantic values can either be stored explicitly or be defined by environments . A system architecture is presented that allows autonomous components to share semantic values. The key component in this architecture is called the context mediator , whose job is to identify and construct the semantic values being sent, to determine when the exchange is meaningful, and to convert the semantic values to the form required by the receiver. Our theory is then applied to the relational model. We provide an interpretation of standard SQL queries in which context conversions and manipulations are transparent to the user. We also introduce an extension of SQL, called Context-SQL (C-SQL), in which the context of a semantic value can be explicitly accessed and updated. Finally, we describe the implementation of a prototype context mediator for a relational C-SQL system.",
"There has been an explosion in the types, availability and volume of data accessible in an information system, thanks to the World Wide Web (the Web) and related inter-networking technologies. In this environment, there is a critical need to replace or complement earlier database integration approaches and current browsing and keyword-based techniques with concept-based approaches. Ontologies are increasingly becoming accepted as an important part of any concept or semantics based solution, and there is increasing realization that any viable solution will need to support multiple ontologies that may be independently developed and managed. In particular, we consider the use of concepts from pre-existing real world domain ontologies for describing the content of the underlying data repositories. The most challenging issue in this approach is that of vocabulary sharing, which involves dealing with the use of different terms or concepts to describe similar information. In this paper, we describe the architecture, design and implementation of the OBSERVER system. Brokering across the domain ontologies is enabled by representing and utilizing interontology relationships such as (but not limited to) synonyms, hyponyms and hypernyms across terms in different ontologies. User queries are rewritten by using these relationships to obtain translations across ontologies. Well established metrics like precision and recall based on the extensions underlying the concepts are used to estimate the loss of information, if any."
],
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_12",
"@cite_3"
],
"mid": [
"2109977296",
"",
"2005379079",
"1911282740"
]
} | I know what you mean: semantic issues in Internet-scale publish/subscribe systems | The increase in the amount of data on the Internet has led to the development of a new generation of applications based on selective information dissemination, where data is distributed only to interested clients. Such applications require a new middleware architecture that can efficiently match user interests with available information. Middleware that can satisfy this requirement includes event-based architectures such as publish/subscribe systems.
In publish/subscribe systems (hereafter referred to as pub/sub systems), clients are autonomous components that exchange information by publishing events and by subscribing to the classes of events they are interested in. In these systems, publishers produce information, while subscribers consume it. A component usually generates a message when it wants the external world to know that a certain event has occurred. All components that have previously expressed their interest in receiving such events will be notified about it. The central component of this architecture is the event dispatcher (also known as the event broker). This component records all subscriptions in the system. When a certain event is published, the event dispatcher matches it against all subscriptions in the system. When the incoming event satisfies a subscription, the event dispatcher sends a notification to the corresponding subscriber.
The earliest pub/sub systems were subject-based. In these systems, each message (event) belongs to a certain topic. Thus, subscribers express their interest in a particular subject and they receive all the events published within that particular subject. The most significant restriction of these systems is the limited selectivity of subscriptions. The latest systems are called content-based systems. In these systems, the subscriptions can contain complex queries on event content.
Pub/sub systems try to solve the problem of selective information dissemination. Recently, there has been a lot of research on solving the problem of efficiently matching events against subscriptions. The proposed solutions are either centralized, where a single broker stores all subscriptions and event matching is done locally [1][2][3], or distributed, where many brokers need to collaborate to match events with subscriptions because not all subscriptions are available to every broker [4,5]. The latter approach is also referred to as content-based routing because brokers form a network where events are routed to interested subscribers based on their content.
The existing solutions are limited because the matching (routing) is based on the syntax and not on the semantics of the information exchanged. For example, someone interested in buying a car with a "value" of up to 10,000 will not receive notifications about "vehicles", "automobiles" or even "cars" with a "price" of 8,999, because the system has no understanding of the "price"-"value" relationship, nor of the "car"-"automobile"-"vehicle" relationship.
In this paper we examine the issues in extending distributed pub/sub systems to offer semantic capabilities. This is an important aspect to study, as components in pub/sub systems are decoupled and do not necessarily speak the same language.
Local Matching and Content-based Routing
Due to space limitations, we will not provide extensive background on pub/sub systems and content-based routing. Instead, we briefly present the most important concepts that help the reader understand the ideas presented in this paper.
The key point in pub/sub systems is that the information sent into the system by the publisher does not contain the addresses of the receivers. The information is forwarded to interested clients based on the content of the message and the clients' subscriptions. In a centralized approach, there is only one broker that stores all subscriptions. Upon receiving an event, the broker uses a matching algorithm to match the event against the subscriptions in order to decide which subscribers want to receive notifications about the event.
Usually, publications are expressed as lists of attribute-value pairs. The formal representation of a publication is given by the following expression: (a1, val1), (a2, val2), ..., (an, valn). Subscriptions are expressed as conjunctions of simple predicates. In a formal description, a simple predicate is represented as (attribute name relational operator value). A predicate (a rel_op val) is matched by an attribute-value pair (a', val') if and only if the attribute names are identical (a = a') and the boolean relation (val' rel_op val) is true. A subscription s is matched by a publication p if and only if all its predicates are matched by some pair in p. In this case we say that the subscription is matched at the syntactic level.

Table 1. Examples of subscriptions and covering relations
s1 = (product = "computer", brand = "IBM", price ≤ 1600), s2 = (product = "computer", brand = "IBM", price ≤ 1500): s1 covers s2
s1 = (product = "computer", brand = "IBM", price ≤ 1600), s2 = (product = "computer", price ≤ 1600): s2 covers s1
s1 = (product = "computer", brand = "IBM", price ≤ 1600), s2 = (product = "computer", brand = "Dell", price ≤ 1500): s1 does not cover s2, s2 does not cover s1
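To make this definition concrete, here is a minimal Python sketch of syntactic matching, assuming a publication is modelled as a dict (one value per attribute) and a subscription as a list of (attribute, operator, value) triples; the names and the operator set are illustrative, not code from any existing broker:

import operator

# Relational operators a predicate may use (illustrative subset).
OPS = {"=": operator.eq, "<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

def predicate_matches(pred, publication):
    """(a rel_op val) is matched by a pair (a', val') iff a = a' and
    (val' rel_op val) holds."""
    attr, rel_op, val = pred
    return attr in publication and OPS[rel_op](publication[attr], val)

def matches(subscription, publication):
    """A subscription (a conjunction of predicates) is matched at the
    syntactic level iff every predicate is matched by some pair."""
    return all(predicate_matches(p, publication) for p in subscription)

e = {"product": "computer", "brand": "IBM", "price": 1500}
s = [("product", "=", "computer"), ("price", "<=", 1600)]
print(matches(s, e))  # True: 1500 <= 1600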
The distributed approach involves a network of brokers that collaborate in order to route the information in the system based on its content. In this case, practically, each broker is aware of its neighbours' interests. Upon receiving an event, the broker matches it against its neighbours' subscriptions and sends the event only to the interested neighbours. Usually, the routing scheme presents two distinct aspects: subscription forwarding and event forwarding. Subscription forwarding is used to propagate clients' interests in the system, while event forwarding algorithms decide how to disseminate the events to the interested clients. Two main optimizations were introduced in the literature in order to increase the performance of these forwarding algorithms: subscription covering and advertisements.
Subscription covering
Given two subscriptions s1 and s2, s1 covers s2 if and only if all the events that match s2 also match s1. In other words, if we denote by E1 and E2 the sets of events that match subscriptions s1 and s2, respectively, then E2 ⊆ E1.
If we look at the predicate level, the covering relation can be expressed as follows: given two subscriptions s1 = p1^1, p1^2, ..., p1^n and s2 = p2^1, p2^2, ..., p2^m, s1 covers s2 if and only if ∀p1^k ∈ s1, ∃p2^j ∈ s2 such that if p2^j is matched by some attribute-value pair (a, val), then p1^k is also matched by the same (a, val) attribute-value pair. In other words, s2 has potentially more predicates, and they are more restrictive than those in s1. Table 1 presents some examples of subscriptions and the corresponding covering relations.
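The covering test can be sketched as follows (a deliberately conservative sketch restricted to the =, <= and >= operators; a full implementation would handle the complete operator set):

def pred_covers(p1, p2):
    """True if every attribute-value pair matching p2 also matches p1.
    A conservative test on a common attribute; anything it cannot
    decide is reported as False (i.e. "not known to cover")."""
    (a1, op1, v1), (a2, op2, v2) = p1, p2
    if a1 != a2:
        return False
    if op1 == "=":
        return op2 == "=" and v1 == v2
    if op1 == "<=":
        return op2 in ("<=", "=") and v2 <= v1
    if op1 == ">=":
        return op2 in (">=", "=") and v2 >= v1
    return False

def covers(s1, s2):
    """s1 covers s2 iff each predicate of s1 covers some predicate of s2
    (s2 has potentially more, and more restrictive, predicates)."""
    return all(any(pred_covers(p1, p2) for p2 in s2) for p1 in s1)

s1 = [("product", "=", "computer"), ("price", "<=", 1600)]
s2 = [("product", "=", "computer"), ("brand", "=", "IBM"), ("price", "<=", 1500)]
print(covers(s1, s2), covers(s2, s1))  # True False (cf. Table 1)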
When a broker B receives a subscription s, it will send it to its neighbours if and only if it has not previously sent them another subscription s' that covers s. Broker B is still ensured to receive all events that match s, since it receives all events that match s', and the events that match s are included in the set of events that match s'.

Table 2. Examples of subscriptions, advertisements and intersection relations
s = (product = "computer", brand = "IBM", price ≤ 1600), a = (product = "computer", brand = "IBM", price ≤ 1500): a intersects s
s = (product = "computer", price ≤ 1600), a = (product = "computer", brand = "IBM", price ≤ 1600): a intersects s
s = (product = "computer", brand = "IBM", price ≤ 1600), a = (product = "computer", brand = "Dell", price ≤ 1500): a does not intersect s

Advertisements
Advertisements are used by publishers to announce the set of publications they are going to publish. Advertisements look exactly like subscriptions, but have a different role in the system: they are used to build the routing path from the publishers to the interested subscribers.
An advertisement a determines an event e if and only if all of the event's attribute-value pairs match some predicates in the advertisement. Formally, an advertisement a = p^1, p^2, ..., p^n determines an event e if and only if ∀(a, v) ∈ e, ∃p^k ∈ a such that (a, v) matches p^k.
An advertisement a intersects a subscription s if and only if the intersection of the set of events determined by the advertisement a and the set of events that match s is non-empty. Formally, at the predicate level, an advertisement a = a^1, a^2, ..., a^n intersects a subscription s = s^1, s^2, ..., s^m if and only if ∀s^k ∈ s, ∃a^j ∈ a and some attribute-value pair (attr, val) such that (attr, val) matches both s^k and a^j. Table 2 presents some examples of subscriptions and advertisements and the corresponding intersection relations.
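Likewise, a sketch of the intersection test, representing the set of values each predicate admits as a closed range (same illustrative operator subset as the sketches above):

def admitted_range(op, v):
    """Values a predicate admits, as a (low, high) pair; None = unbounded."""
    return {"=": (v, v), "<=": (None, v), ">=": (v, None)}[op]

def preds_overlap(sp, ap):
    """True if some attribute-value pair matches both the subscription
    predicate sp and the advertisement predicate ap."""
    (sa, sop, sv), (aa, aop, av) = sp, ap
    if sa != aa:
        return False
    lo1, hi1 = admitted_range(sop, sv)
    lo2, hi2 = admitted_range(aop, av)
    lo = lo1 if lo2 is None else lo2 if lo1 is None else max(lo1, lo2)
    hi = hi1 if hi2 is None else hi2 if hi1 is None else min(hi1, hi2)
    return lo is None or hi is None or lo <= hi

def intersects(a, s):
    """a intersects s iff every predicate of s shares a satisfying
    attribute-value pair with some predicate of a."""
    return all(any(preds_overlap(sp, ap) for ap in a) for sp in s)

# Rows 1 and 3 of Table 2:
s = [("product", "=", "computer"), ("brand", "=", "IBM"), ("price", "<=", 1600)]
a1 = [("product", "=", "computer"), ("brand", "=", "IBM"), ("price", "<=", 1500)]
a2 = [("product", "=", "computer"), ("brand", "=", "Dell"), ("price", "<=", 1500)]
print(intersects(a1, s), intersects(a2, s))  # True False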
When using advertisements, upon receiving a subscription, each broker forwards it only to the neighbours that previously sent advertisements that intersect with the subscription. Thus, the subscriptions are forwarded only to the brokers that have potentially interesting publishers.
In this section we first introduce some extensions to the existing matching algorithms in order to make them semantic-aware and then we discuss the implications of using such a solution for semantic-based routing.
Semantic Matching
In this section we summarize our approach to making the existing centralized matching algorithms semantic-aware [6]. Our goal is to minimize the changes to the existing matching algorithms, so that we can take advantage of their already efficient techniques and keep the processing of semantic information fast. We describe three approaches, each adding more extensive semantic capability to the matching algorithms. Each approach can be used independently, which may be desirable for some applications. It is also possible to use all three approaches together.
The first approach allows a matching algorithm to match events and subscriptions that use semantically equivalent attributes or values (synonyms). The second approach uses additional knowledge about the relationships (beyond synonyms) between attributes and values to allow additional matches. More precisely, it uses a concept hierarchy that provides two kinds of relations: specialization and generalization. The third approach uses mapping functions, which allow definitions of arbitrary relationships between the schema and the attribute values of the event.
The synonym step involves translating all strings with different names but the same meaning to a "root term". For example, "car" and "automobile" are synonyms for "vehicle", which then becomes the root term for the three words. This translation is performed for both subscriptions and events, and at both the attribute and the value level. This allows syntactically different events and subscriptions to match. The translation is simple and straightforward. The semantic capability it adds to the system, although important, may not be sufficient in some situations, because this approach operates only at the attribute and value level independently and does not consider the semantic relation between attributes and values. Moreover, this approach is limited to synonym relations only.
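A minimal sketch of this stage (a toy synonym table stands in for a real thesaurus; the entries are illustrative):

# Toy synonym table: every term maps to its root term.
ROOT = {"car": "vehicle", "automobile": "vehicle",
        "value": "price"}

def to_root(term):
    """Unknown terms are their own root term."""
    return ROOT.get(term, term)

def normalize_event(event):
    """Rewrite attribute names and string values over root terms."""
    return {to_root(a): to_root(v) if isinstance(v, str) else v
            for a, v in event.items()}

def normalize_subscription(sub):
    """Rewrite predicates (attribute, operator, value) the same way."""
    return [(to_root(a), op, to_root(v) if isinstance(v, str) else v)
            for (a, op, v) in sub]

print(normalize_event({"product": "automobile", "value": 8999}))
# {'product': 'vehicle', 'price': 8999}
print(normalize_subscription([("product", "=", "car"), ("value", "<=", 10000)]))
# [('product', '=', 'vehicle'), ('price', '<=', 10000)]
# Both sides now speak of "vehicle" and "price", so syntactic matching applies.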
Taxonomies represent a way of organizing ontological knowledge using specialization and generalization relationships between different concepts. Intuitively, all the terms contained in such a taxonomy can be represented in a hierarchical structure, where more general terms are higher up in the hierarchy and are linked to more specialized terms situated lower in the hierarchy. This structure is called a "concept hierarchy". Usually, a concept hierarchy contains all terms within a specific domain, which includes both attributes and values.
Starting from the observation that the subscriber should receive only information that it has precisely requested, we come up with the following two rules for matching using a concept hierarchy: (1) events that contain more specialized concepts have to match subscriptions that contain more generalized terms of the same kind, and (2) events that contain more generalized terms than those used in the subscriptions do not match the subscriptions.
In order to better understand these rules, we look at the following examples. Suppose that we have in the system a subscription:
S: (book = "Stone Age") AND (subject = "reptiles"). When the event
E: (encyclopedia, "Stone Age"), (subject, "crocodiles") enters the system, it should match the subscription S, as the subscriber asked for more general information than the event provides (in other words, an encyclopedia is a special kind of book, and crocodiles represent a special kind of reptiles). On the other hand, considering the subscription
S: (encyclopedia = "Stone Age") AND (subject = "reptiles") and the incoming event E: (book, "Stone Age"), (subject, "crocodiles"), the event E should not match the subscription S, as the book contained in the event may be a dictionary or a fiction book (as well as an encyclopedia). Note that, although the event contains in its second pair a value more specialized than that in the subscription (so the second predicate is matched), the first predicate of the subscription is not matched by the event, and therefore the event does not match the subscription. The second rule prevents subscribers from being spammed with useless information.
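These two rules can be sketched as follows (a simple child-to-parent table stands in for a full concept hierarchy; the entries and the restriction to equality predicates are illustrative):

# Toy concept hierarchy: child -> parent.
PARENT = {"encyclopedia": "book", "dictionary": "book",
          "crocodiles": "reptiles", "reptiles": "animals"}

def generalizes(general, specific):
    """True iff `general` equals `specific` or is one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def pred_matches(pred, event):
    """Rule (1): a more specialized event term matches a more generalized
    subscription term. Rule (2) holds by construction: a more generalized
    event term never matches a more specialized subscription term."""
    attr, val = pred  # equality predicates only, for brevity
    return any(generalizes(attr, ea) and generalizes(val, ev)
               for ea, ev in event.items())

e = {"encyclopedia": "Stone Age", "subject": "crocodiles"}
print(pred_matches(("book", "Stone Age"), e))    # True (rule 1)
print(pred_matches(("subject", "reptiles"), e))  # True (rule 1)
print(pred_matches(("encyclopedia", "Stone Age"), {"book": "Stone Age"}))  # False (rule 2)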
Mapping functions can specify relationships which otherwise cannot be specified using a concept hierarchy or a synonym relationship. For example, they can be used to create a mapping between different ontologies. A mapping function is a many-to-many function that correlates one or more attribute-value pairs to one or more semantically related attribute-value pairs. It is possible to have many mapping functions for each attribute. We assume that mapping functions are specified by domain experts. In the future, we are going to investigate using a fully-fledged inference engine as a more compact representation of mapping functions and the performance trade off this entails.
We illustrate the concept of mapping functions with an example. Let us say that there is a university professor X who is interested in advising new PhD graduate students. In particular, he is only interested in students who have had 5 or more years of previous professional experience. Subsequently, he subscribes to the following: S: (university = Y) AND (degree = PhD) AND (professional experience > 4). Specifically, professor X is looking for students applying to university Y in the PhD stream with 5 or more years of experience. For each new student applying to the university, a new event, which contains (among other things) the information about previous work experience, is published into our system. Thus, an event for a student who had some work experience would look like E: (school, Y), (degree, PhD), (work experience, true), (graduation date, 1990). In addition, the system has access to the following mapping function: f1: (work experience, graduation date) → professional experience. You can think of function f1 as computing the difference between today's date and the date of the student's graduation and returning that difference as the value of professional experience. For the purposes of the example, f1 assumes that the student has been working since graduation. Finally, the result of f1 is appended to event E, and the matching algorithm matches E to professor X's subscription S.
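A sketch of f1 (the attribute names and the working-since-graduation arithmetic come from the example above; everything else is illustrative scaffolding):

from datetime import date

def f1(event):
    """Derive 'professional experience' from (work experience, graduation
    date), assuming the student has been working since graduation."""
    if event.get("work experience") and "graduation date" in event:
        return {"professional experience":
                date.today().year - event["graduation date"]}
    return {}  # inputs absent: the mapping function contributes nothing

e = {"school": "Y", "degree": "PhD",
     "work experience": True, "graduation date": 1990}
e.update(f1(e))  # the result of f1 is appended to the event E
print(e["professional experience"] > 4)  # True, so E now matches S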
In addition, we can think about events and subscriptions as points in a multidimensional space [7] where the distance between points determines a match between an event and a subscription. This way it is possible that an event matches a subscription even if some attribute/value pair of the event is more general than the corresponding predicate in the subscription as long as the distance between the event and the subscription, as determined by all their constituent attribute-value pairs and predicates respectively, is within the defined matching range.
To summarize, the synonym stage translates the events and the subscriptions to a normalized form using the root terms, while the hierarchy and the mapping stages add new attribute-value pairs to the events. The new events are matched using existing matching algorithms against the subscriptions in the system. In conclusion, we say that e semantically matches s if and only if the hierarchy and the mapping stages can produce an event e' = e ∪ E' that matches s at the syntactic level.
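Putting the stages together, the overall pipeline can be sketched as follows (the normalize, enrich and syntactic_match arguments stand for the three stages and for any existing matching algorithm; the stub stages in the demo are illustrative only):

def semantic_match(event, subscription, normalize, enrich, syntactic_match):
    """e semantically matches s iff, after the synonym stage, the hierarchy
    and mapping stages can produce e' = e ∪ E' matching s syntactically."""
    e = normalize(event)          # synonym stage: rewrite over root terms
    e_prime = dict(e)
    e_prime.update(enrich(e))     # hierarchy + mapping stages add pairs E'
    return syntactic_match(subscription, e_prime)

# Tiny demo: identity synonym stage, one mapping function, '>' predicates
# only; 2003 stands in for "today's year" in the example above.
result = semantic_match(
    {"work experience": True, "graduation date": 1990},
    [("professional experience", ">", 4)],
    normalize=lambda e: e,
    enrich=lambda e: {"professional experience": 2003 - e["graduation date"]},
    syntactic_match=lambda s, e: all(a in e and e[a] > v for (a, op, v) in s),
)
print(result)  # True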
Semantic-based Routing
At first glance, it appears that existing algorithms for subscription and event forwarding can be used with a semantic-aware matching algorithm in order to achieve semantic-based routing. However, this approach is not straightforward. In this section we discuss some open issues that arise from using a semantic-aware matching algorithm in content-based routing.
Subscription covering
Although it is defined at the syntax level, the covering relation as presented in Section 2 can be used directly with the semantic matching approach discussed above without any loss of notifications. In other words, if s1 covers s2 and a certain broker B forwards only subscription s1 to its neighbours, it will still receive both the events that semantically match s1 and those that semantically match s2. This happens because the relation between the sets of events E1 and E2 that semantically match s1 and s2, respectively, is preserved, i.e. E2 ⊆ E1. Indeed, if e semantically matches s2, then the hierarchy and the mapping stages can produce an event e' that matches s2 at the syntactic level. If e' matches s2 at the syntactic level, then, according to the definition of the covering relation, e' matches s1 at the syntactic level. Since e' is produced by adding semantic knowledge to e, this means that e semantically matches s1, i.e. E2 ⊆ E1. Thus, broker B is ensured to receive all events that semantically match s2, since it receives all events that semantically match s1, and the events that semantically match s2 are included in the set of events that semantically match s1.
Although the syntactic covering relation can be used without loss of notifications, some redundant subscriptions may be forwarded into the network. This happens because the sets of events E1 and E2 that semantically match s1 and s2 can be in the relation E2 ⊆ E1 without s1 necessarily covering s2 at the syntax level. In other words, although s1 does not cover s2 at the syntactic level, it may cover it semantically speaking. For example, consider the following subscriptions: s1 = ((product = "printed material") AND (topic = "semantic web")) and s2 = ((product = "book") AND (topic = "semantic web")). In this case, all events that semantically match s2 will also match s1, as a book is a form of printed material; thus E2 ⊆ E1, but s1 does not cover s2 (at the syntax level). Therefore, the covering relation needs to be extended to encapsulate semantic knowledge. One simple way of making the covering relation semantic-aware is to use the hierarchy approach. In this case, subscription s1 will cover s2, as the term "printed material" is more general than "book".
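That hierarchy-based extension can be sketched as follows (equality predicates only; the child-to-parent table is illustrative, as in the earlier sketch):

PARENT = {"book": "printed material", "printed material": "product"}

def term_covers(general, specific):
    """True iff `general` is `specific` itself or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def semantically_covers(s1, s2):
    """Every predicate of s1 must cover, on the same attribute, some more
    specific (or identical) predicate of s2."""
    return all(any(a1 == a2 and term_covers(v1, v2) for (a2, v2) in s2)
               for (a1, v1) in s1)

s1 = [("product", "printed material"), ("topic", "semantic web")]
s2 = [("product", "book"), ("topic", "semantic web")]
print(semantically_covers(s1, s2))  # True, although s1 does not cover s2
                                    # at the syntax level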
Advertisements
While the covering relation can be directly used with the semantic matching algorithms, this is not the case for advertisements. As explained earlier in this paper, advertisements are used to establish the routing path from the publishers to the interested subscribers. How the events are routed in the system depends on the intersection relation between advertisements and subscriptions. Consider the following example: advertisement a = ((product = "printed material"), (price ≥ 10)) and subscription s = ((product = "book"), (price ≤ 20)). Advertisement a does not intersect s at the syntactic level because there is no predicate p in a and no attribute-value pair (attr, val) such that (attr, val) matches both p and the predicate (product = "book") of subscription s (cf. Section 2). Thus, the subscription will not be forwarded towards the publisher that emitted the advertisement. None of the publications produced by this publisher will be forwarded to the subscriber, although some of them may match its subscription.
Distributed semantic knowledge
The discussion above about subscription covering and advertisements assumed that each broker contains the same semantic knowledge (i.e. the same synonyms, hierarchies and mapping functions). However, replicating the same semantic knowledge to all brokers in the system may not be feasible, and it may be detrimental to scalability. We envision a system where semantic knowledge is distributed between brokers in the same way that the Internet distributes link-status information using routing protocols. A semantic knowledge database is equivalent to routing tables in terms of functionality.
The Internet is a hierarchical computer network. At the top of the hierarchy are relatively few routers containing very general information in their routing tables. The tables do not contain information about every host on the Internet, but only about a few network destinations. Thus, high-level pre-defined ontological information could be distributed in the same way among the top routers (Figure 1).
[Figure 1: a router hierarchy; top-level routers connect to lower-level routers, which in turn connect to hosts.]

It is difficult to envision what this higher-level information will be at this time, but we only need to take a look at Internet directories such as Google and Yahoo to get an idea of top-level semantic knowledge. Both of these directories provide a user with only a few key entries as starting points for exploring the vast Internet information store. We see top-level brokers exchanging only covering and advertisement information.
Lower in the Internet hierarchy, routers maintain routing tables with destinations to specific hosts. Even though top-level brokers use a common ontology, lower-level brokers do not have to. For example, consider two different pairs of communicating applications: financial and medical. The financial applications are exchanging stock quotes, while the medical ones are exchanging news about new drugs. These two applications use different ontologies. The ontology information for each application can be distributed between multiple routers. These low-level brokers will advertise more general descriptions of the ontologies they hold to higher-level brokers. Using this information, any new application will be able to locate the broker with the specific ontologies. Any application wishing to integrate medical and financial information can create a mapping ontology between the financial and medical ontologies and provide a general description of the mapping ontology to a higher-level broker, as in the previous case. We see that high-level concepts can be used to route information between brokers that do not have access to specific ontologies. We can look at these general terms as very terse summaries of ontologies.
Our vision of large-scale semantic-based routing raises many questions:

- top-level routing: How do we bridge multiple distributed ontologies to enable content routing? How can we avoid or reduce duplication of ontological information among brokers? What is an appropriate high-level generalization that can bring together different ontologies? What do semantic routing protocols look like?
- lower-level routing: How do we efficiently store ontological information at routers? Large knowledge databases will probably require secondary storage beyond what is available at routers; how does this affect routing? If routers have to use covering at this level, how can they dynamically control the generality of covering to affect network performance?
Conclusions
In this paper we underline the limits of matching and content-based routing at the syntactic level in pub/sub systems. We propose a solution for achieving semantic capabilities for local matching and look into the implications of using such a solution for content-based routing. We also present our vision of next-generation semantic-based routing. Our intent is to raise questions and ideas that can improve existing content-based routing approaches and make them semantic-aware. | 3,957
cs0311047 | 2950787737 | In recent years, the amount of information on the Internet has increased exponentially, creating great interest in selective information dissemination systems. The publish/subscribe paradigm is particularly suited for designing systems that route information and requests according to their content throughout a wide-area network of brokers. Current publish/subscribe systems use limited syntax-based content routing, but since publishers and subscribers are anonymous and decoupled in time, space and location, often across wide-area network boundaries, they do not necessarily speak the same language. Consequently, adding semantics to current publish/subscribe systems is important. In this paper we identify and examine the issues in developing semantic-based content routing for publish/subscribe broker networks. | To improve scalability, peer-to-peer database systems are looking in the direction of semantic routing. HyperCuP @cite_7 uses a common ontology to dynamically cluster peers based on the data they contain. A cluster is identified using a more general concept than those associated with its members in the ontology. Concepts in the ontology map to cluster addresses, so a node can determine the appropriate route for a query by looking up more general concepts of the query terms in the concept hierarchy. Edutella @cite_8 uses query hubs (functionally similar to brokers) to collect user metadata and present the peer-to-peer network as a virtual database which users query. All queries are routed through a query hub, which forwards queries only to those nodes that can answer them. | {
"abstract": [
"Semantic Web Services are a promising combination of Semantic Web and Web service technology, aiming at providing means of automatically executing, discovering and composing semantically marked-up Web services. We envision peer-to-peer networks which allow for carrying out searches in real-time on permanently reconfiguring networks to be an ideal infrastructure for deploying a network of Semantic Web Service providers. However, P2P networks evolving in an unorganized manner suffer from serious scalability problems, limiting the number of nodes in the network, creating network overload and pushing search times to unacceptable limits. We address these problems by imposing a deterministic shape on P2P networks: We propose a graph topology which allows for very efficient broadcast and search, and we provide an efficient topology construction and maintenance algorithm which, crucial to symmetric peer-to-peer networks, does neither require a central server nor super nodes in the network. We show how our scheme can be made even more efficient by using a globally known ontology to determine the organization of peers in the graph topology, allowing for efficient concept-based search.",
"Metadata for the World Wide Web is important, but metadata for Peer-to-Peer (P2P) networks is absolutely crucial. In this paper we discuss the open source project Edutella which builds upon metadata standards defined for the WWW and aims to provide an RDF-based metadata infrastructure for P2P applications, building on the recently announced JXTA Framework. We describe the goals and main services this infrastructure will provide and the architecture to connect Edutella Peers based on exchange of RDF metadata. As the query service is one of the core services of Edutella, upon which other services are built, we specify in detail the Edutella Common Data Model (ECDM) as basis for the Edutella query exchange language (RDF-QEL-i) and format implementing distributed queries over the Edutella network. Finally, we shortly discuss registration and mediation services, and introduce the prototype and application scenario for our current Edutella aware peers."
],
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2159414772",
"1979734411"
]
} | I know what you mean: semantic issues in Internet-scale publish/subscribe systems |
In publish/subscribe systems (hereafter referred to as pub/sub systems), clients are autonomous components that exchange information by publishing events and by subscribing to the classes of events 1 they are interested in. In these systems, publishers produce information, while subscribers consume it. A component usually generates a message when it wants the external world to know that a certain event has occurred. All components that have previously expressed their interest in receiving such events will be notified about it. The central component of this architecture is the event dispatcher (also known as event broker). This component records all subscriptions in the system. When a certain event is published, the event dispatcher matches it against all subscriptions in the system. When the incoming event verifies a subscription, the event dispatcher sends a notification to the corresponding subscriber.
The earliest pub/sub systems were subject-based. In these systems, each message (event) belongs to a certain topic. Thus, subscribers express their interest in a particular subject and they receive all the events published within that particular subject. The most significant restriction of these systems is the limited selectivity of subscriptions. The latest systems are called content-based systems. In these systems, the subscriptions can contain complex queries on event content.
Pub/sub systems try to solve the problem of selective information dissemination. Recently, there has been a lot of research on solving the problem of efficiently matching events against subscriptions. The proposed solutions are either centralized, where a single broker stores all subscriptions and event matching is done locally [1][2][3] or distributed, where many brokers need to collaborate to match events with subscriptions because not all subscriptions are available to every broker [4,5]. The latter approach is also referred to as content-based routing because brokers form a network where events are routed to interested subscribers based on their content.
The existing solutions are limited because the matching (routing) is based on the syntax and not on the semantics of the information exchanged. For example, someone interested in buying a car with a "value of up to 10,000 will not receive notifications about "vehicles, "automobiles or even "cars with "price of 8,999 because the system has no understanding of the "price-"value relationship, nor of the "car-"automobile-"vehicle relationship.
In this paper we examine the issues in extending distributed pub/sub systems to offer semantic capabilities. This is an important aspect to be studied as components in a pub/sub systems are decoupled and do not necessary speak the same language.
Local Matching and Content-based Routing
Due to space limitation, we will not provide an extensive background about pub/sub systems and content-based routing. Instead, we briefly present the most important concepts that help the reader understand the ideas conceived in this paper.
The key point in pub/sub systems is that the information sent into the system by the publisher does not contain the addresses of the receivers. The information is forwarded to interested clients based on the content of the message and clients subscriptions. In a centralized approach, there is only one broker that stores all subscriptions. Upon receiving an event, the broker uses a matching algorithm to match the event against the subscriptions in order to decide which subscribers want to receive notifications about the event.
Usually, publications are expressed as lists of attribute-value pairs. The formal representation of a publication is given by the following expression: (a 1 , val 1 ), (a 2 , val 2 ), ..., (a n , val n ). Subscriptions are expressed as conjunctions of simple predicates. In a formal description, a simple predicate is represented as (attribute name relational operator value). A predicate (a rel op val) is matched by an attribute-value pair (a, val) if and only if the attribute names are identi-Subscription s1
Subscription s2
Covering Relation (product = "computer, brand = "IBM, price ≤ 1600) (product = "computer, brand = "IBM, price ≤ 1500) s1 covers s2 (product = "computer, brand = "IBM, price ≤ 1600) (product = "computer, price ≤ 1600) s2 covers s1 (product = "computer, brand = "IBM, price ≤ 1600) (product = "computer, brand = "Dell, price ≤ 1500) s1 does not cover s2, s2 does not cover s1 Table 1. Examples of subscriptions and covering relations cal (a = a) and the (a rel op val) boolean relation is true. A subscription s is matched by a publication p if and only if all its predicates are matched by some pair in p. In this case we say that the subscription is matched at syntactic level.
The distributed approach involves a network of brokers that collaborate in order to route the information in the system based on its content. In this case, practically, each broker is aware of its neighbours interests. Upon receiving an event, the broker matches it against its neighbours subscriptions and sends the event only to the interested neighbours. Usually, the routing scheme presents two distinct aspects: subscription forwarding and event forwarding. Subscription forwarding is used to propagate clients interests in the system, while event forwarding algorithms decide how to disseminate the events to the interested clients. Two main optimizations were introduced in the literature in order to increase the performance of these forwarding algorithms: subscription covering and advertisements.
Subscription covering
Given two subscriptions s 1 and s 2 , s 1 covers s 2 if and only if all the events that match s 2 also match s 1 . In other words, if we denote with E 1 and E 2 the set of events that match subscription s 1 and s 2 , respectively, then E 2 ⊆ E 1 .
If we look at the predicate level, the covering relation can be expressed as follows: Given two subscriptions s 1 = p 1 1 , p 2 1 , . . . , p n 1 and s 2 = p 1 2 , p 2 2 , . . . , p m 2 , s 1 covers s 2 if and only if ∀p k 1 ∈ s 1 , ∃p j 2 ∈ s 2 2 such that if p j 2 is matched by some attribute-value pair (a, val), then p k 1 is also matched by the same (a, val) attribute-value pair. In other words, s 2 has potentially more predicates and they are more restrictive than those in s 1 . Table 1 presents some examples of subscriptions and the corresponding covering relations.
When a broker B receives a subscription s, it will send it to its neighbours if and only if it has not previously sent them another subscription s, that covers s. Broker B is ensured to receive all events that match s, since it receives all events that match s and the events that match s are included in the set of the events that match s. Advertisement a Intersection Relation (product = "computer, brand = "IBM, price ≤ 1600) (product = "computer, brand = "IBM, price ≤ 1500) a intersects s (product = "computer, price ≤ 1600) (product = "computer, brand = "IBM, price ≤ 1600) a intersects s (product = "computer, brand = "IBM, price ≤ 1600) (product = "computer, brand = "Dell, price ≤ 1500) a does not intersect s Table 2. Examples of subscriptions, advertisements and intersection relations Advertisements Advertisements are used by publishers to announce the set of publications they are going to publish. Advertisements look exactly like subscriptions 3 , but have a different role in the system: they are used to build the routing path from the publishers to the interested subscribers.
An advertisement a determines an event e if and only if all attribute-value pairs match some predicates in the advertisement. Formally, an advertisement a = p 1 1 , p 2 1 , . . . , p n 1 determines an event e, if and only if ∀(a, v) ∈ e, ∃p k ∈ a such that (a, v) matches p k .
An advertisement a intersects a subscription s if and only if the intersection of the set of the events determined by the advertisement a and the set of the events that match s is a non-empty set. Formally, at predicate level, an advertisement a = a 1 , a 2 , . . . , a n intersects a subscription s = s 1 , s 2 , . . . , s n if and only if ∀s k ∈ s, ∃a j ∈ a and some attribute-value pair (attr, val) 4 such that (attr, val) matches both s k and a j . Table 2 presents some examples of subscriptions and advertisements and the corresponding intersection relations.
When using advertisements, upon receiving a subscription, each broker forwards it only to the neighbours that previously sent advertisements that intersect with the subscription. Thus, the subscriptions are forwarded only to the brokers that have potentially interesting publishers.
In this section we first introduce some extensions to the existing matching algorithms in order to make them semantic-aware and then we discuss the implications of using such a solution for semantic-based routing.
Semantic Matching
In this section we summarize our approach to make the existing centralized matching algorithms semantic-aware [6]. Our goal is to minimize the changes to the existing matching algorithms so that we can take advantage of their already efficient techniques and to make the processing of semantic information fast. We describe three approaches, each adding more extensive semantic capability to the matching algorithms. Each of the approaches can be used independently and for some applications that may be desirable. It is also possible to use all three approaches together.
The first approach allows a matching algorithm to match events and subscriptions that use semantically equivalent attributes or values-synonyms. The second approach uses additional knowledge about the relationships (beyond synonyms) between attributes and values to allow additional matches. More precisely, it uses a concept hierarchy that provides two kinds of relations: specialization and generalization. The third approach uses mapping functions which allow definitions of arbitrary relationships between the schema and the attribute values of the event.
The synonym step involves translating all strings with different names but with the same meaning to a "root term. For example, "car and "automobile are synonyms for "vehicle which then becomes the root term for the three words. This translation is performed for both subscriptions and events and at both attribute and value level. This allows syntactically different events and subscriptions to match. This translation is simple and straightforward. The semantic capability it adds to the system, although important, may not be sufficient in some situations, because this approach operates only at attribute and value level independently and does not consider the semantic relation between attributes and values. Moreover, this approach is limited to synonym relations only.
Taxonomies represent a way of organizing ontological knowledge using specialization and generalization relationships between different concepts. Intuitively, all the terms contained in such a taxonomy can be represented in a hierarchical structure, where more general terms are higher up in the hierarchy and are linked to more specialized terms situated lower in the hierarchy. This structure is called a "concept hierarchy. Usually, a concept hierarchy contains all terms within a specific domain, which includes both attributes and values.
Considering the observation that the subscriber should receive only information that it has precisely requested, we come up with the following two rules for matching that uses concept hierarchy: (1) the events that contain more specialized concepts have to match the subscriptions that contain more generalized terms of the same kind and (2) the events that contain more generalized terms than those used in the subscriptions do not match the subscriptions.
In order to better understand these rules, we look at the following examples. Suppose that we have in the system a subscription:
S : (book = StoneAge)AN D(subject = reptiles). When the event:
E : (encyclopedia, StoneAge), (subject, crocodiles) is entering the system, it should match the subscription S, as the subscriber asked for more general information that the event provides (in other words, an encyclopedia is a special kind of book and crocodiles represent a special kind of reptiles). On the other hand, considering the subscription:
S : (encyclopedia = StoneAge)AN D(subject = reptiles) and the incoming event E : (book, StoneAge), (subject, crocodiles), the event E should not match the subscription S, as the book contained in the event may be a dictionary or a fiction book (as well as an encyclopedia). Note that, although the subscription S contains in its second predicate a value more specialized than that in the event, the first predicate of the subscription is not matched by the event, and therefore, the event does not match the subscription. The last rule prevents an eventual spamming of the subscribers with useless information.
Mapping functions can specify relationships which otherwise cannot be specified using a concept hierarchy or a synonym relationship. For example, they can be used to create a mapping between different ontologies. A mapping function is a many-to-many function that correlates one or more attribute-value pairs to one or more semantically related attribute-value pairs. It is possible to have many mapping functions for each attribute. We assume that mapping functions are specified by domain experts. In the future, we are going to investigate using a fully-fledged inference engine as a more compact representation of mapping functions and the performance trade off this entails.
We illustrate the concept of mapping functions with an example. Let us say that there is a university professor X, who is interested in advising new PhD graduate students. In particular, he is only interested in students who have had 5 or more years of previous professional experience. Subsequently, he subscribes to the following: S : (university = Y )AN D(degree = P hD)AN D(prof essional experience > 4) Specifically, the professor X is looking for students applying to university Y in the PhD stream with 5 or more years of experience. For each new student applying to the university, a new event, which contains (among others) the information about previous work experience, is published into our system. Thus, an event for a student who had some work experience would look like E : (school, Y )(degree, P hD)(work experience, true)(graduation date, 1990). In addition, the system has access to the following mapping function: f 1 : (work experience, graduation date) → prof essional experience. You can think of function f 1 implemented as a simple difference between todays date and the date of students graduation and returning that difference as the value of prof essional experience. For the purposes of the example, f 1 assumes that the student has been working since graduation. Finally, the result of f 1 is appended to event E and the matching algorithm matches E to professor Xs subscription S.
In addition, we can think of events and subscriptions as points in a multidimensional space [7], where the distance between points determines a match between an event and a subscription. This way it is possible for an event to match a subscription even if some attribute-value pair of the event is more general than the corresponding predicate in the subscription, as long as the distance between the event and the subscription, as determined by all their constituent attribute-value pairs and predicates respectively, is within the defined matching range.
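A small sketch of this distance-based view, under the assumption of a hypothetical numeric embedding of terms and an arbitrary matching range; neither the embedding nor the threshold is prescribed by [7] or by the text above.

```python
import math

# Hypothetical per-term coordinates along one conceptual axis.
COORD = {"printed material": 0.0, "book": 1.0, "encyclopedia": 2.0}

def distance(event, subscription):
    shared = set(event) & set(subscription)
    return math.sqrt(sum(
        (COORD[event[a]] - COORD[subscription[a]]) ** 2 for a in shared))

def range_match(event, subscription, max_dist=1.0):
    return distance(event, subscription) <= max_dist

# An event offering a book can match a subscription for printed material:
assert range_match({"product": "book"}, {"product": "printed material"})
```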
To summarize, the synonym stage translates the events and the subscriptions to a normalized form using the root terms, while the hierarchy and the mapping stages add new attribute-value pairs to the events. The new events are matched using existing matching algorithms against the subscriptions in the system. In conclusion, we say that e semantically matches s if and only if the hierarchy and the mapping stages can produce an event e′ = e ∪ E (where E is the set of added attribute-value pairs) that matches s at the syntactic level.
Semantic-based Routing
At first glance, it is apparent that existing algorithms for subscription and event forwarding can be used with a semantic-aware matching algorithm in order to achieve semantic-based routing. However, this approach is not straightforward. In this section we discuss some open issues that arise from using a semantic-aware matching algorithm in content-based routing.
Subscription covering Although it is defined at the syntactic level, the covering relation as presented in Section 2 can be used directly with the semantic matching approach discussed above without any loss of notifications. In other words, if s 1 covers s 2 and a certain broker B forwards only subscription s 1 to its neighbours, it will still receive both the events that semantically match s 1 and those that semantically match s 2. This happens because the relation between the sets of events E 1 and E 2 that semantically match s 1 and s 2 respectively is preserved, i.e. E 2 ⊆ E 1. Indeed, if e semantically matches s 2, then the hierarchy and the mapping stages can produce an event e′ that matches s 2 at the syntactic level. If e′ matches s 2 at the syntactic level, then, according to the definition of the covering relation, e′ matches s 1 at the syntactic level. Since e′ is produced by adding semantic knowledge to e, this means that e semantically matches s 1, i.e. E 2 ⊆ E 1. Thus, broker B is ensured to receive all events that semantically match s 2, since it receives all events that semantically match s 1 and the events that semantically match s 2 are included in the set of events that semantically match s 1.
Although the syntactic covering relation can be used without loss of notifications, some redundant subscriptions may be forwarded into the network. This happens because the sets of events E 1 and E 2 that semantically match s 1 and s 2 can be in the relation E 2 ⊆ E 1 without s 1 necessarily covering s 2 at the syntactic level. In other words, although s 1 does not cover s 2 at the syntactic level, it may cover it semantically. For example, consider the following subscriptions: s 1 = ((product = "printed material") AND (topic = "semantic web")) and s 2 = ((product = "book") AND (topic = "semantic web")). In this case, all events that semantically match s 2 will also match s 1, as a book is a form of printed material; thus E 2 ⊆ E 1, but s 1 does not cover s 2 (at the syntactic level). Therefore, the covering relation needs to be extended to encapsulate semantic knowledge. One simple way of making the covering relation semantic-aware is to use the hierarchy approach, as sketched below. In this case, subscription s 1 will cover s 2 as the printed material term is a more general term than book.
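Under the same hypothetical hierarchy as in the earlier sketch (which already records that a book is printed material), a semantic-aware covering test might look as follows; the predicate encoding is again our own assumption.

```python
# s1 semantically covers s2 when every predicate of s1 is implied by a
# predicate of s2 over the same attribute with an equal or more
# specialized value. Reuses PARENT and is_ancestor_or_self from above.
def semantically_covers(s1, s2):
    return all(
        attr in s2 and is_ancestor_or_self(val, s2[attr])
        for attr, val in s1.items())

s1 = {"product": "printed material", "topic": "semantic web"}
s2 = {"product": "book", "topic": "semantic web"}
assert semantically_covers(s1, s2)   # covers semantically, not syntactically
```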
Advertisements While the covering relation can be directly used with the semantic matching algorithms, this is not the case for advertisements. As explained earlier in this paper, advertisements are used to establish the routing path from the publishers to the interested subscribers. How events are routed in the system depends on the intersection relation between advertisements and subscriptions. Consider the following example: advertisement a = ((product = "printed material"), (price ≥ 10)) and subscription s = ((product = "book"), (price ≤ 20)). Advertisement a does not intersect s at the syntactic level because there is no predicate p in a and no attribute-value pair (attr, val) such that (attr, val) matches both p and the predicate (product = "book") of subscription s (cf. Section 2). Thus, the subscription will not be forwarded towards the publisher that emitted the advertisement, and none of the publications produced by this publisher will be forwarded to the subscriber, although some of them may match its subscription.
Distributed semantic knowledge
The discussion above about subscription covering and advertisements assumed that each broker contains the same semantic knowledge (i.e. the same synonyms, hierarchies and mapping functions). However, the replication of the same semantic knowledge to all brokers in the system may not be feasible and may be detrimental to scalability. We envision a system where semantic knowledge is distributed between brokers in the same way that the Internet distributes link status information using routing protocols. A semantic knowledge database is equivalent to routing tables in terms of functionality.
The Internet is a hierarchical computer network. At the top of the hierarchy are relatively few routers containing very general information in their routing tables. The tables do not contain information about every host on the Internet, but only about a few network destinations. Thus, high-level pre-defined ontological information could be distributed in the same way among the top routers (Figure 1).
[Figure 1: an Internet-like broker hierarchy, with top-level routers above lower-level routers that connect to hosts.]
It is difficult to envision what this higher-level information will be at this time, but we only need to take a look at Internet directories such as Google and Yahoo to get an idea of top-level semantic knowledge. Both of these directories provide a user with only a few key entries as starting points for exploring the vast Internet information store. We see top-level brokers exchanging only covering and advertisement information.
Lower in the Internet hierarchy, routers maintain routing tables with destinations to specific hosts. Even though top-level brokers use a common ontology, lower-level brokers do not have to. For example, consider two different pairs of communicating applications: financial and medical. The financial applications are exchanging stock quotes, while the medical ones are exchanging news about new drugs. These two applications use different ontologies. The ontology information for each application can be distributed between multiple routers. These low-level brokers will advertise more general descriptions of the ontologies they hold to higher-level brokers. Using this information, any new application will be able to locate the broker with the specific ontologies. Any application wishing to integrate medical and financial information can create a mapping ontology between the financial and medical ontologies and provide a general description of the mapping ontology to a higher-level broker, as in the previous case. We see that high-level concepts can be used to route information between brokers that do not have access to specific ontologies. We can look at these general terms as very terse summaries of ontologies.
Our vision of a large scale semantic-based routing raises many questions:
- top-level routing: How can we bridge multiple distributed ontologies to enable content routing? How can we avoid or reduce duplication of ontological information among brokers? What is an appropriate high-level generalization that can bring together different ontologies? What do semantic routing protocols look like?
- lower-level routing: How can we efficiently store ontological information at routers? Large knowledge databases will probably require secondary storage beyond what is available at routers. How does this affect routing? If routers have to use covering at this level, how can they dynamically control the generality of covering to affect network performance?
Conclusions
In this paper we underline the limits of matching and content-based routing at the syntactic level in pub/sub systems. We propose a solution for achieving semantic capabilities for local matching and look into the implications of using such a solution for content-based routing. We also present our vision of next-generation semantic-based routing. Our intent was to give rise to questions and ideas in order to improve existing content-based routing approaches and make them semantic-aware.
cs0310024 | 1513200972 | Cyclic debugging requires repeatable executions. As non-deterministic or real-time systems typically do not have the potential to provide this, special methods are required. One such method is replay, a process that requires monitoring of a running system and logging of the data produced by that monitoring. We shall discuss the process of preparing the replay, a part of the process that has not been very well described before. | Zambonelli and Netzer @cite_2 proposed a method that, by taking on-line decisions on whether or not to log a monitored event, deviates from the strict FIFO solution. However, logging will sometimes burden the system with jitter in the execution time, and so will the algorithm itself. As larger jitter forces more extensive validation efforts @cite_6 , an increase in jitter is counterproductive to the validation effort. | {
"abstract": [
"Abstract For testing of sequential software it is usually sufficient to provide the same input (and program state) in order to reproduce the output. For real-time systems (RTS), on the other hand, we need also to control, or observe, the timing and order of the inputs. If the system additionally is multitasking, we also need to take timing and the concurrency of the executing tasks into account. In this paper we present a method for deterministic testing of multitasking RTS, which allows explorative investigations of real-time system behavior. The method includes an analysis technique that given a set of tasks and a schedule derives all execution orderings that can occur during run-time. These orderings correspond to the possible inter-leavings of the executing tasks. The method also includes a testing strategy that using the derived execution orderings can achieve deterministic, and even reproducible, testing of RTS. Since, each execution ordering can be regarded as a sequential program, it becomes possible to use techniques for testing of sequential software in testing multitasking real-time system software. We also show how this analysis and testing strategy can be extended to encompass distributed computations, communication latencies and the effects of global clock synchronization. The number of execution orderings is an objective measure of the testability of a system since it indicates how many behaviors the system can exhibit during runtime. In the pursuit of finding errors we must thus cover all these execution orderings. The fewer the orderings the better the testability.",
"To support incremental replay of message-passing applications, processes must periodically checkpoint and the content of some messages must be logged, to break dependencies of the current state of the execution on past events. The paper presents a new adaptive logging algorithm that dynamically decides whether to log a message based on dependencies the incoming message introduces on past events of the execution. The paper discusses the implementation issues of the algorithm and evaluates its performances on several applications, showing how it improves previously known schemes."
],
"cite_N": [
"@cite_6",
"@cite_2"
],
"mid": [
"2143295017",
"2110140003"
]
} | Availability Guarantee for Deterministic Replay Starting Points in Real-Time Systems 2 | Cyclic debugging is the commonly used term for the process of debugging a system using an ordinary debugger (e.g., gdb). That process normally restarts the system repeatedly (with the same input) to pinpoint the bug, hence "cyclic". This method of debugging relies on the same execution being deterministically recreated on command, over and over again. Replay has been proposed to realize cyclic debugging of systems that do not fulfill this requirement, systems that incorporate elements of non-deterministic behavior and/or time-dependence [HST03, SG97, ZN99] (e.g. real-time systems).
Replay can be described as creating a facsimile of an execution based on a previous recording. The general idea behind replay is to, by inserting probes [Hus02b] into the system, record (to monitor and log) sufficient information about a reference execution of the non-deterministic system to facilitate the reproduction of a replay execution. The information logged consists of events describing the execution [TSHP03]: control-flow events (describing context-switches, exceptions, and interrupts), and dataflow events (describing checkpoints of task-states and input from the environment or from other tasks). As an event is monitored, an entry with information describing the event is logged into a record for post-mortem usage.
Here, we assume deterministic replay [TSHP03], which does not assume that the log from the reference execution describes the reference execution in its entirety; some sequences of the log may be discarded before completion of the reference execution. This interrupted coverage of the reference execution is a corollary of the fact that the space for storing records is not infinitely large; thus, some records may have to be discarded in favor of newer ones. In this paper, we are concerned with the eviction scheduler that controls the contents of the log.
This paper is a continuation of previous efforts [HST03,Hus02a], where we presented a method for starting a replay execution from starting points identified in the task code, and concluded that successful management of the memory pool by the eviction scheduler is vital to the performance of the replay execution. We noted that the structure of the task code may be such that several starting points exist, some of which may be unreachable if we cannot guide the execution to them during the setup of the replay execution. However, guiding the execution requires logged data that describes the transition; the eviction scheduler must therefore be responsible for keeping the required entries in the log in order to guarantee the replay execution.
The previous work on eviction schedulers is not substantial. With this paper, we elevate the issue in the particular case of real-time systems, and provide a method that is general, provided that the known and thoroughly defined conditions listed in [HST03] are met. The paper is organized as follows: Section 2 recapitulates some previous work in the area, Section 3 presents our method, and Section 4 concludes the paper.
The extended constant execution time eviction scheduler (ECETES)
We have, in a previous publication [Hus02a], presented a method called the Constant Execution Time Eviction Scheduler (CETES). Similarly to the method proposed by Zambonelli and Netzer, CETES took on-line decisions on how to organize the logging efforts. Unlike the proposition of Zambonelli and Netzer, our method had a constant execution time and logged all monitored events. The main drawback of CETES was the requirement that all entries be the same size. In this work, we propose the Extended Constant Execution Time Eviction Scheduler (ECETES).
Implementation
ECETES allows a user to specify a set of queues where data can be stored; the queues share a pool of memory, divided into atomic records, and entries can be logged concurrently, each occupying one or more records. The setup of the ECETES system requires the following input parameters: the size and location of the memory pool, the size of one record, the maximum number of records per entry, and the number of queues.
As noted above, execution-time jitter forces more extensive validation efforts. In this setting, as such an increase would be counterproductive, we require that ECETES has no execution-time jitter. The functionality of ECETES is strongly influenced by this restriction on the implementation: given the same input parameters, the execution time of the implementation should be deterministic.
When a call is made to insert a new entry in the ECETES structure, the required input parameters are: the location of the queue in which to store the entry, the location of the entry, the number of records required to store the entry, the type of the entry, and the current time. The type specifies whether it is a control- or data-flow entry, and the specific type of flow entry. We have decided upon the following solution: a call to insert a new entry that requires l records will cause ECETES to inspect one record of each queue, the (l+1):th record counted from the end of the queue. The inspected records are compared with respect to their age and the properties of the queue to which they belong. At the end of the comparison, a queue has been chosen that will suffer least if l records are removed from it. From this queue, l records are then removed and inserted into the queue described by the input parameters, where they accommodate the requested entry. Before exiting, some internal structures of ECETES are updated.
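A simplified sketch of this insertion step, assuming a plausible victim-selection rule (queue priority first, then record age) and abstracting the shared memory pool away; the exact indexing of the inspected record and the tie-breaking are our reading of the text, not the authors' specification. Because exactly one record per queue is inspected, the per-call work is bounded by the fixed setup parameters, which is what makes a jitter-free implementation possible.

```python
from collections import deque

class Queue:
    def __init__(self, mtl, msl, priority):
        self.mtl, self.msl, self.priority = mtl, msl, priority
        self.records = deque()                 # index 0 = oldest record

    def probe(self, l, now):
        """Return the inspected record, or None if this queue may not
        legally release l records (MSL or MTL would be violated)."""
        if len(self.records) - l < max(self.msl, 1):
            return None                        # MSL (or emptiness) violated
        if now - self.records[l - 1]["time"] < self.mtl:
            return None                        # would evict a too-young record
        return self.records[l]                 # the (l+1):th record from the end

def insert(queues, dst, entry, l, now):
    victim, victim_key = None, None
    for q in queues:                           # one inspection per queue
        rec = q.probe(l, now)
        if rec is None:
            continue
        key = (q.priority, -(now - rec["time"]))  # low priority first, then oldest
        if victim_key is None or key < victim_key:
            victim, victim_key = q, key
    if victim is None:
        return False                           # no queue can give up records
    for _ in range(l):
        victim.records.popleft()               # freed records go to dst
    dst.records.append({"time": now, "entry": entry, "size": l})
    return True
```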
The above mentioned queue properties can be used to control the operation of ECETES without modifying its execution-time characteristics. Currently, the implementation supports a similar set of queue properties as CETES did: Minimum Temporal Length (MTL), which can prevent young records from being evicted; Minimum Spatial Length (MSL), which ensures the availability of a specified number of records in the queue; and Queue Priority (QP), which relates queues with respect to their importance (entries are evicted from low-priority queues provided that no other constraint is violated).
Using ECETES
As noted in our previous work [HST03], potential starting points for replay can be identified in the task code. A potential starting point is a starting point if there is a checkpoint of the task-state and a control-flow entry from that point available in the log. In order to start a replay from a particular starting point, it is required that the execution up until the first encounter of that checkpoint can be deterministic [HST03]. A replay may be required to guide the execution from one starting point to a consecutive one in order to fulfill the stipulated requirement.
When using ECETES for recording, the following setup is intended: one queue should collect all the control-flow of the system, and separate queues should store data-flow entries for each task. Task constructs that have more than one starting point are allocated one queue per starting point in order to guarantee replay (in compliance with the discussion in [HST03]). The MSL of each queue is set to guarantee the availability of at least one complete checkpoint. The effectiveness of the solution is increased if a record size can be established so that there are few records per entry, and so that not much space is lost to incompatible entry sizes.
When is ECETES better than the alternatives?
The only previously known method that fulfills all requirements posed on an eviction scheduler is the LFIFO algorithm. As queue sizes are static, LFIFO has the potential drawback that queues must be dimensioned before runtime.
We have still to perform the validation of ECETES, but we expect that it should outperform LFIFO in situations where the task set has a high degree of sporadicity, or where data-flow entries describing checkpoints may come from different program-counter values in the same task (as described above, a task may have several starting points).
Conclusions
We have presented a new method called ECETES for managing the memory available for logging, i.e. a new eviction scheduler. We have described the situations where we expect ECETES to perform better than traditional methods, but the validation is yet to be performed.
cs0309054 | 2949702945 | Denial of Service (DoS) attacks are one of the most challenging threats to Internet security. An attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim's site with malicious traffic, clogging its tail circuit and interfering with normal traffic. At present, the network operator of a site under attack has no other resolution but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic. However, as DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. In this paper, we present Active Internet Traffic Filtering, a new automatic filter propagation protocol. We argue that this system provides a guaranteed, significant level of protection against DoS attacks in exchange for a reasonable, bounded amount of router resources. We also argue that the proposed system cannot be abused by a malicious node to interfere with normal Internet operation. Finally, we argue that it retains its efficiency in the face of continued Internet growth. | In @cite_5 Mahajan et al. propose mechanisms for detecting and controlling high-bandwidth traffic aggregates. One part of their work discusses how a node determines whether it is congested and how it identifies the aggregate(s) responsible for the congestion. In contrast, we start from the point where the node has identified the undesired flow(s). In that sense, their work and our work are complementary. Another part of their work discusses how much to rate-limit an annoying aggregate due to a DoS attack or a flash crowd. In contrast, our mechanism focuses on DoS attack traffic and attempts to limit it to rate @math . We believe that DoS attacks should be addressed separately from flash crowds: Flash crowd aggregates are created by legitimate traffic. Therefore, it makes sense to rate-limit them instead of completely blocking them. On the contrary, DoS attack traffic aims at disrupting the victim's operation. Therefore, it makes sense to block it. Blocking a traffic flow is simpler and cheaper than rate-limiting it. Moreover, DoS attack traffic is generated by malicious compromised nodes. Therefore, it demands a more intelligent defense mechanism. | {
"abstract": [
"The current Internet infrastructure has very few built-in protection mechanisms, and is therefore vulnerable to attacks and failures. In particular, recent events have illustrated the Internet's vulnerability to both denial of service (DoS) attacks and flash crowds in which one or more links in the network (or servers at the edge of the network) become severely congested. In both DoS attacks and flash crowds the congestion is due neither to a single flow, nor to a general increase in traffic, but to a well-defined subset of the traffic --- an aggregate. This paper proposes mechanisms for detecting and controlling such high bandwidth aggregates. Our design involves both a local mechanism for detecting and controlling an aggregate at a single router, and a cooperative pushback mechanism in which a router can ask upstream routers to control an aggregate. While certainly not a panacea, these mechanisms could provide some needed relief from flash crowds and flooding-style DoS attacks. The presentation in this paper is a first step towards a more rigorous evaluation of these mechanisms."
],
"cite_N": [
"@cite_5"
],
"mid": [
"2154178154"
]
} | Active Internet Traffic Filtering: Real-time Response to Denial-of-Service Attacks | Denial of Service (DoS) attacks are recognized as one of the most challenging threats to Internet security. Any organization or enterprise that is dependent on the Internet can be subject to a DoS attack, causing its service to be severely disrupted, if not fail completely. The attacker typically uses a worm to create an "army" of zombies, which she orchestrates to flood the victim's site with malicious traffic. This malicious traffic exhausts the victim's resources, thereby seriously affecting the victim's ability to respond to normal traffic.
A network layer solution is required because the end-user or end-organization has no way to protect its tail circuit from being congested by an attack, causing the disruption sought by the attacker. For example, if an enterprise has a 10 Mbps connection to the Internet, an attacker can command its zombies to send traffic far exceeding this 10 Mbps rate to this enterprise, completely congesting the downstream link to the enterprise and causing normal traffic to be dropped.
Network operators use conventional router filtering capabilities to respond to DoS attacks. Typically, an operator of a site under attack identifies the nature of the packets being used in the attack by some packet collection facility, installs a filter in its firewall/edge router to block these packets and then requests its ISP to install comparable filters in its routers to remove this traffic from the tail circuit to the site. Each ISP can further communicate with its peering ISPs to block this unwanted traffic as well, if it so desires.
Currently, this propagation of filters is manual: the operator on each site determines the necessary filters and adds them to each router configuration. In several attacks, the operators of different networks have been forced to communicate by telephone given that the network connection, and thus email, was inoperable because of the attack.
As DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. For example, an attack can switch from one protocol to another, move between source networks as well as oscillate between on and off far faster than any human can respond. In general, network operators are confronting an "arms race" in which any defense, such as manually installed filters, is viewed as a challenge by the community of attacker-types to defeat. Exploiting a weakness such as human speeds of filter configuration is an obvious direction for an attacker to pursue.
The concept of automatic filter propagation has already been introduced in [MBF+01]: a router is configured with a filter to drop (or rate-limit) certain traffic; if it continues to drop a significant amount of this traffic, it requests that the upstream router take over and block the traffic. However, the crucial issues associated with automatic filter propagation are still unaddressed.
The real problem is how to efficiently manage the bounded number of filters available to a network operator to provide this filtering support. An attacker can change protocols, source addresses, port numbers, etc. requiring a very large number of filters. However, a sophisticated hardware router has a fixed maximum number of wire-speed filters that can block traffic with no degradation in router performance. The maximum is determined by hardware table sizes and is typically limited to several thousand. A software router is typically less constrained by table space, but incurs a processing overhead for each additional filter. This usually limits the practical number of filters to even less than a hardware router. Moreover, there is a processing cost at each router for installing each new filter, removing the old filters and sending and receiving filter propagation protocol messages.
Given the restricted amount of filtering resources available to each router, hop-by-hop filter propagation towards the attacker's site clearly does not scale: Internet backbone routers would quickly become the "filtering bottleneck" having to satisfy filtering requests coming from all the corners of the Internet. Fortunately, traceback [SWKA00] [SPS+01] makes it possible to identify a router close to the attacker and send it a filtering request directly. However, any filter propagation mechanism other than hop-by-hop raises a serious security issue: Once a router starts accepting filtering requests from unknown sources, how can it trust that these requests are not forged by malicious nodes seeking to disrupt normal communication between other nodes?
In this paper we propose a new filter propagation protocol called AITF (Active Internet Traffic Filtering): The victim sends a filtering request to its network gateway. The victim's gateway temporarily blocks the undesired traffic, while it propagates the request to the attacker's gateway. As we will see, the protocol both motivates and assists the attacker's gateway to block the attack. Moreover, a router receiving a filtering request satisfies it only if it determines that the requestor is on the same path with the specified undesired traffic. Thus, the filter cannot affect any nodes in the Internet other than those already operating at the mercy of the requestor.
The novel aspect of AITF is that it enables each participating service provider to guarantee to its clients a specific, significant amount of protection against DoS attacks, while it requires only a bounded credible amount of resources. At the same time it is secure i.e., it cannot be abused by a malicious node to harm (e.g. block legitimate traffic to) other nodes. Finally, it scales with Internet size i.e., it keeps its efficiency in the face of continued Internet growth.
II. Active Internet Traffic Filtering (AITF)
A. Terminology
A flow label is a set of values that captures the common characteristics of a traffic flow -e.g., "all packets with IP source address S and IP destination address D".
A filtering request is a request to block a flow of packets -all packets matching a specific wildcarded flow label -for the next T time units.
A filtering contract between networks A and B specifies: i. The filtering request rate R 1 at which A accepts filtering requests to block certain traffic to B.
ii. The filtering request rate R 2 at which A can send filtering requests to get B to block certain traffic from coming into A.
An AITF network is an Autonomous Domain which has a filtering contract with each of its end-hosts and each neighbor Autonomous Domain directly connected to it. An AITF node is either an end-host or a border router (a router that has interfaces in more than one AITF network) in an AITF network.
Finally, we define the following terms with respect to an undesired flow: The attack path is the set of AITF nodes the undesired flow goes through. The attacker is the origin of the undesired flow. The victim is the target of the undesired flow. The attacker's gateway is the AITF node closest to the attacker along the attack path. Similarly, the victim's gateway is the AITF node closest to the victim along the attack path.
B. Overview
The AITF protocol enables a service provider to protect a client against N undesired flows, by using only n ≪ N filters and a DRAM cache of size O(N). The motivation is that each router can afford gigabytes of DRAM but only a limited number of filters.
In an AITF world, each Autonomous Domain (AD) is an AITF network i.e., it has filtering contracts with all its endhosts and peering ADs. These contracts limit the rates by which the AD can send/receive filtering requests to/from its end-hosts and peering ADs. The limited rates allow the receiving router to police the requests to the specified rates and indiscriminately drop requests when the rate is in excess of the agreed rate. Thus, the router can limit the CPU cycles used to process filtering requests as well as the number of filters it requires.
An AITF filtering request is initially sent from the victim to the victim's gateway; the victim's gateway propagates it to the attacker's gateway; finally, the attacker's gateway propagates it to the attacker. Both the victim's gateway and the attacker's gateway install filters to block the undesired flow. The victim's gateway installs a filter only temporarily, to immediately protect the victim, while it waits for the attacker's gateway to take responsibility. The attacker's gateway is expected to install a filter and block the undesired flow for T time units.
If the undesired flow stops within some grace period, the victim's gateway interprets this as a hint that the attacker's gateway has taken over and removes its temporary filter. This leaves the door open to "on-off" undesired flows. In order to detect and block such "on-off" flows, the victim's gateway needs to remember each filtering request for at least T time units. Thus, the victim's gateway installs a filter for T tmp ≪ T time units, but keeps a "shadow" of the filter in DRAM for T time units.
The attacker's gateway expects the attacker to stop the undesired flow within a grace period. Otherwise, it holds the right to disconnect from her. This fact encourages the attacker to stop the undesired flow. Similarly, the victim's gateway expects the attacker's gateway to block the undesired flow within a grace period. Otherwise, the mechanism escalates: The victim's gateway now plays the role of the victim (i.e., it sends a filtering request to its own gateway) and the attacker's gateway plays the role of the attacker (i.e., it is asked to stop the undesired flow or risk disconnection). The escalation process should become clear with the example in II-D.
Thus, the mechanism proceeds in rounds. At each round, only four nodes are involved. In the first round, the mechanism tries to push filtering of undesired traffic back to the AITF node closest to the attacker. If that fails, it tries the second closest AITF node to the attacker and so on.
C. Basic protocol
The AITF protocol involves only one type of message: a filtering request. A filtering request contains a flow label and a type field. The latter specifies whether this request is addressed to the victim's gateway, the attacker's gateway or the attacker.
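A sketch of this single message type; the paper fixes only the flow label and the type field, so the concrete field layout below is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum

class RequestType(Enum):
    TO_VICTIMS_GATEWAY = 1
    TO_ATTACKERS_GATEWAY = 2
    TO_ATTACKER = 3

@dataclass
class FlowLabel:
    src: str          # e.g. IP source address S (may be wildcarded)
    dst: str          # e.g. IP destination address D

@dataclass
class FilteringRequest:
    flow: FlowLabel
    req_type: RequestType   # the paper's "type field": who the request addresses
```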
The only nodes in an AITF network that speak the AITF protocol are end-hosts and border routers. Internal routers do not participate.
AITF node X sends a filtering request to AITF node Y , when X wants a certain traffic flow coming through Y to be blocked for T time units.
When AITF node Y receives a filtering request, it checks which end-host or peering network the request is received from/through. If that end-host or peering network has exceeded its allowed rate, the request is dropped. If not, Y looks at the specified undesired flow label and takes certain actions, sketched in code after the list below: − If Y is the victim's gateway:
i. It installs a temporary filter to block the undesired flow for T tmp ≪ T time units.
ii. It logs the filtering request in DRAM for T time units.
iii. It propagates the filtering request to the attacker's gateway. If the attacker's gateway does not block the flow within T tmp time units, Y propagates the filtering request to its own gateway. − If Y is the attacker's gateway:
i. It installs a filter to block the undesired flow for T time units.
ii. It propagates the filtering request to the attacker. If the attacker does not stop the flow within a grace period, Y disconnects from her. − If Y itself is the attacker, it stops the flow (to avoid disconnection).
We should note that the behavior described above is that of a non-compromised, non-malicious node. Neither the attacker not even the attacker's gateway are expected to always conform to this behavior. AITF operation does not rely on their cooperation.
D. Example
In Figure 1 G host -which stands for "good host" -is an end-host residing in enterprise network G net, which is connected to local ISP G isp through router G gw1. G isp runs a regional network that connects through its backbone router G gw2 to a wide-area ISP G wan. Similarly, B host -which stands for "bad host" -is an end-host residing in enterprise network B net etc.
B host starts sending an undesired flow to G host. G host sends a filtering request to G gw1 against B host. Upon reception of G host's request, G gw1 temporarily blocks the undesired flow and propagates the filtering request to B gw1. On the other side, upon reception of G gw1's request, B gw1 immediately blocks the undesired flow, but also propagates the filtering request to B host. B host either stops the undesired flow or risks being disconnected. Thus, if B gw1 cooperates, by the end of the first round, filtering of the undesired flow has been successfully pushed to the AITF node closest to the attacker (B gw1).
Of course, B gw1 may decide not to cooperate and ignore the filtering request. Then, the mechanism escalates: G gw1 propagates the filtering request to G gw2. G gw2 temporarily blocks all undesired traffic, but also propagates the filtering request to B gw2 and so on. Thus, if B gw2 cooperates, by the end of the second round, filtering of the undesired flow has been successfully pushed to the second closest to the attacker AITF node (B gw2).
In the worst-case scenario, even B gw3 refuses to cooperate. As a result, G gw3 disconnects from B gw3.
E. Verifying a filtering request
In a network architecture where source address spoofing is allowed, a compromised node M can maliciously request the blocking of traffic from A to V, thereby disrupting their communication. To avoid this, we add a simple extension to the basic protocol.
The extension introduces two more messages: A verification query and a verification reply. Both types include a flow label and a nonce (i.e., a random number).
When router Y receives a filtering request, which asks for the blocking of a traffic flow from attacker A to victim V , Y verifies that the request is real before taking any action to satisfy it. If Y is the victim's gateway, this verification is trivial with appropriate ingress filtering. If Y is the attacker's gateway, verification is accomplished through the following "3-way handshake":
i. Router Y receives a filtering request, asking for the blocking of a traffic flow from attacker A to victim V .
ii. Y sends a verification query to V , asking "Do you really not want this traffic flow?"
iii. V responds to Y with a verification reply. The reply must include the same flow label and nonce included in the query. If the nonce in V's reply is the same as the nonce in Y's query, Y accepts the request as real and proceeds to satisfy it. The "3-way handshake" is further discussed in III-B.
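A sketch of this handshake from the point of view of the attacker's gateway Y, with the message plumbing collapsed into direct calls; the PENDING table and function names are illustrative. The security of the scheme rests on the assumption, stated in II-F, that only a node on the A-to-V path can observe the nonce in transit.

```python
import secrets

PENDING = {}                                   # nonce -> flow label

def on_filtering_request(flow):
    """Step ii: query the alleged victim, remembering the nonce."""
    nonce = secrets.token_hex(16)
    PENDING[nonce] = flow
    return nonce                               # carried by the query to V

def on_verification_reply(flow, nonce):
    """Step iii: accept the request only if flow label and nonce echo back."""
    if PENDING.get(nonce) == flow:
        del PENDING[nonce]
        return True                            # proceed to satisfy the request
    return False

flow = ("A", "V")
nonce = on_filtering_request(flow)             # verification query sent to V
assert on_verification_reply(flow, nonce)      # V echoes flow label and nonce
assert not on_verification_reply(flow, "forged-nonce")
```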
F. Assumptions
AITF operation assumes that the victim's gateway can determine
− who the attacker's gateway is (in order to propagate the request), and
− who the next AITF node on the attack path is (in order to escalate, if necessary).
These assumptions are met if an efficient traceback technique, such as those described in [SWKA00] [SPS+01], is available.
Also AITF assumes that off-path traffic monitoring is not possible i.e., if node M is not located on the path from node A to node V , then M cannot monitor traffic sent on that path (this assumption is necessary for the "3-way handshake").
III. Discussion
A. Why it works
The basic idea of AITF is to push filtering of undesired traffic to the network closest to the attacker. That is, hold a service provider responsible for providing connectivity to a misbehaving client and have it do the dirty job. The question is, why would the attacker's service provider accept (or at least be encouraged) to do that?
If the attacker's service provider does not cooperate, it risks being disconnected by its own service provider. This makes sense for both of them: If B net in Figure 1 refuses to block its misbehaving client, the filtering burden falls on B isp. Thus, it makes sense for B isp to consider B net a bad client and disconnect from it. On the other hand, this offers an incentive to B net to cooperate and block the undesired flow. Otherwise, it will be disconnected by B isp, which will result in all of its clients being dissatisfied.
Moreover, AITF offers an economic incentive to providers to protect their network from the inside by employing appropriate ingress filtering. If a provider proactively prevents spoofed flows from exiting its network, it lowers the probability of an attack being launched from its own network, thus reducing the number of expected filtering requests it will later have to satisfy to avoid disconnection.
In short, AITF creates a cost vs quality trade-off for service providers: Either they pay the cost to block the undesired flows generated by their few bad clients, or they run the risk of dissatisfying their legitimate clients, which are the vast majority. Thus, the quality of a provider's service is now related to its capability to filter its own misbehaving clients.
B. Why it is secure
The greatest challenge with automatic filtering mechanisms is that compromised node M may maliciously request the blocking of traffic from A to V , thereby disrupting their communication. AITF prevents this through the "3-way handshake" described in II-E.
The "3-way handshake" does not exactly verify the authenticity of a filtering request. It only enables A's gateway to verify that a request to block traffic from A to V has been sent by a node located on the path from A to V . A compromised router located on this path can naturally forge and snoop handshake messages to disrupt A-V communication. However, such a compromised router can disrupt A-V communication anyway, by simply dropping the corresponding packets. 4 In short, AITF cannot be abused by a compromised node to cause interruption of a legitimate traffic flow, unless that compromised node is responsible for routing the flow, in which case it can interrupt the flow anyway.
C. Why it scales
AITF scales with Internet size, because it pushes filtering of undesired traffic to the leaves of the Internet, where filtering capacity follows Internet growth.
In most cases, AITF pushes filtering of undesired traffic to the provider(s) of the attacker(s). Thus, the amount of filtering requests a provider is asked to satisfy grows proportionally to the number of the provider's (misbehaving) clients. However, intuitively, a provider's filtering capacity also grows proportionally to the number of its clients. In short, a provider's filtering capacity follows the provider's filtering workload.
If the attacker's provider is itself compromised, AITF naturally fails to push filtering to it. Instead, filtering is performed by another network, closer to the Internet core. If this situation occurred often, then the scalability argument stated above would be false. Fortunately, compromised routers are a very small percentage of the Internet infrastructure. Thus, AITF fails to push filtering to the attacker's provider with a very small probability.
IV. Performance Analysis
In this section we provide simple formulas that describe AITF performance. For lack of space and given that our formulas are very simple and intuitive, we defer any details to [AC03].
A. The victim's perspective
A.1 Effective bandwidth of an undesired flow
AITF significantly reduces the effective bandwidth of an undesired flow, i.e., the bandwidth of the undesired flow actually experienced by the victim. Specifically, it can be shown that AITF reduces the effective bandwidth of an undesired flow by a factor of
r ≈ n(T d + T r ) / T
where n is the number of non-cooperating AITF nodes on the attack path, T d is the attack detection time and T r is the one-way delay from the victim to its gateway. T is the timeout associated with all filtering requests, i.e. each filtering request asks for the blocking of a flow for T time units. For example, if the only non-cooperating node on the attack path is the attacker, and if the one-way delay from the victim to its gateway is T r = 50 msec, for T = 1 min, an AITF node can reduce the effective bandwidth of an undesired flow by a factor r ≈ 0.00083.
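The example's figure can be checked directly; note that r ≈ 0.00083 follows only if the attack detection time T d is negligible, an assumption we make explicit here:

```python
# One non-cooperating node, negligible detection time, 50 ms delay, T = 1 min.
n, T_d, T_r, T = 1, 0.0, 0.050, 60.0
r = n * (T_d + T_r) / T
print(round(r, 5))   # -> 0.00083
```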
Here we only demonstrate this result for n = 1, i.e., when the only non-cooperating node on the attack path is the attacker: At time 0 the attacker starts the undesired flow; at time T d the victim detects it and sends a filtering request; at time T d + T r the victim's gateway temporarily blocks the flow and the victim stops receiving it; the flow is eventually blocked by the attacker's gateway and released after time T. Thus, if the original bandwidth of the undesired flow is B, its effective bandwidth is B e ≈ B · (T d + T r ) / T. When n > 1, i.e., when one or more AITF routers close to the attacker are non-cooperating, the attacker can play "on-off" games: pretend to stop the undesired flow to trick the victim's gateway into removing its filter, then resume the flow, etc. The victim's gateway detects and blocks such attackers by using its DRAM cache.
A.2 Number of undesired flows
An AITF node is guaranteed protection against a specific number of undesired flows, which depends on its contract with its service provider. Specifically, it can be shown that if a client is allowed to send R 1 filtering requests per time unit to the provider, then the client is protected against N v = R 1 · T simultaneous undesired flows. For example, for R 1 = 100 filtering requests per second and T = 1 min, the client is protected against N v = 6,000 simultaneous undesired flows.
B. Filtering close to the victim
AITF enables a service provider to protect a client against N v undesired flows by using only n v ≪ N v filters. Specifically, it can be shown that if a client is allowed to send R 1 filtering requests per time unit to the provider, the provider needs n v filters and a DRAM cache that can fit m v filtering requests in order to satisfy all the requests, where
n v = R 1 · T tmp , m v = R 1 · T
T tmp is the amount of time that elapses from the moment the victim's gateway installs a temporary filter until it removes it. The purpose of the temporary filter is to block the undesired flow until the attacker's gateway takes over. Therefore, T tmp should be large enough to allow the traceback from the victim's gateway to the attacker's gateway plus the 3-way handshake. For example, suppose we use an architecture like [CG00], where traceback is automatically provided inside each packet. Then traceback time is 0. If the 3-way handshake between the two gateways takes 600 msec, for R 1 = 100 filtering requests per second and T = 1 min, the service provider needs n v = 60 filters to protect a client against N v = 6,000 undesired flows.
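The provisioning figures of this section can be reproduced as follows:

```python
# Provisioning at the victim's gateway, using the numbers of the example above.
R_1   = 100     # filtering requests per second allowed from the client
T     = 60.0    # seconds a filtering request stays in force
T_tmp = 0.6     # seconds: traceback time (0) + 3-way handshake (600 msec)

n_v = round(R_1 * T_tmp)   # wire-speed filters needed            -> 60
m_v = round(R_1 * T)       # DRAM slots for remembered requests   -> 6000
N_v = round(R_1 * T)       # simultaneous undesired flows handled -> 6000
print(n_v, m_v, N_v)
```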
C. Filtering close to the attacker
AITF requires a bounded amount of resources from the attacker's service provider. Specifically, if a service provider is allowed to send R 2 filtering requests per time unit to a client, then the provider needs n a = R 2 · T filters in order to ensure that the client satisfies all the requests. Given these resources, the provider can filter N a = n a = R 2 · T simultaneous undesired flows generated by the client. For example, for R 2 = 1 filtering request per second and T = 1 min, the provider needs n a = 60 filters for the client. This filtering request rate allows the provider to filter up to N a = 60 simultaneous undesired flows generated by the client.
D. The attacker's perspective
We have defined an attacker as the source of an undesired flow. By this definition, an attacker is not necessarily a malicious/compromised node; it is simply a node being asked to stop sending a certain flow. A legitimate AITF node must be provisioned to stop sending undesired flows when requested, in order to avoid disconnection.
AITF requires a bounded amount of resources from the attacker as well. Specifically, if a service provider is allowed to send R 2 filtering requests per time unit to a client, the client needs n a = R 2 · T filters (as many as the provider) in order to satisfy all the requests. For example, for R 2 = 1 filtering request per second and T = 1 min, the client needs n a = 60 filters.
VI. Conclusions
We presented AITF, an automatic filter propagation mechanism, according to which each Autonomous Domain (AD) has a filtering contract with each of its end-hosts and neighbor ADs. A filtering contract with a neighbor provides a guaranteed, significant level of protection against DoS attacks coming through that neighbor in exchange for 10 We believe that DoS attacks should be addressed separately from flash crowds: Flash crowd aggregates are created by legitimate traffic. Therefore, it makes sense to rate-limit them instead of completely blocking them. On the contrary, DoS attack traffic aims at disrupting the victim's operation. Therefore, it makes sense to block it. Blocking a traffic flow is simpler and cheaper than rate-limiting it. Moreover, DoS attack traffic is generated by malicious/compromised nodes. Therefore, it demands a more intelligent defense mechanism. a reasonable, bounded amount of router resources.
Specifically:
− Given a filtering contract between a client and a service provider, which allows the client to send R 1 filtering requests per time unit to the provider, the provider can protect the client against a large number of undesired flows N v = R 1 · T, by significantly limiting the effective bandwidth of each undesired flow. The provider achieves this by using only a modest number of filters n v = R 1 · T tmp ≪ N v.
− Given a filtering contract between a client and a service provider, which allows the provider to send R 2 filtering requests per time unit to the client, both the client and the provider need a bounded number of filters n a = R 2 · T to honor their contract.
We argued that AITF successfully deals with the biggest challenge to automatic filtering mechanisms: source address spoofing. Namely, we argued that it is not possible for any malicious/compromised node to abuse AITF in order to interrupt a legitimate traffic flow, unless the compromised node is responsible for routing that flow, in which case it can interrupt the flow anyway.
Finally, we argued that AITF scales with Internet size, because it pushes filtering of undesired traffic to the service providers of the attackers, unless the service providers are themselves compromised. Fortunately, compromised routers are a very small percentage of Internet infrastructure. Thus, in the vast majority of cases, AITF pushes filtering of undesired traffic to the leaves of the Internet, where filtering capacity follows Internet growth. | 4,513 |
cs0309054 | 2949702945 | Denial of Service (DoS) attacks are one of the most challenging threats to Internet security. An attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim's site with malicious traffic, clogging its tail circuit and interfering with normal traffic. At present, the network operator of a site under attack has no other resolution but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic. However, as DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. In this paper, we present Active Internet Traffic Filtering, a new automatic filter propagation protocol. We argue that this system provides a guaranteed, significant level of protection against DoS attacks in exchange for a reasonable, bounded amount of router resources. We also argue that the proposed system cannot be abused by a malicious node to interfere with normal Internet operation. Finally, we argue that it retains its efficiency in the face of continued Internet growth. | In @cite_0 Park and Lee propose DPF (Distributed Packet Filtering), a distributed ingress-filtering mechanism for pro-actively blocking spoofed flows. In contrast, AITF aims at blocking undesired -- including spoofed -- flows as close as possible to their sources. Thus, it cannot be replaced by DPF. On the other hand, DPF blocks most spoofed flows before they reach their destination, i.e., DPF is proactive, whereas AITF is reactive. In that sense, DPF and AITF are complementary. | {
"abstract": [
"Denial of service (DoS) attack on the Internet has become a pressing problem. In this paper, we describe and evaluate route-based distributed packet filtering (DPF), a novel approach to distributed DoS (DDoS) attack prevention. We show that DPF achieves proactiveness and scalability, and we show that there is an intimate relationship between the effectiveness of DPF at mitigating DDoS attack and power-law network topology.The salient features of this work are two-fold. First, we show that DPF is able to proactively filter out a significant fraction of spoofed packet flows and prevent attack packets from reaching their targets in the first place. The IP flows that cannot be proactively curtailed are extremely sparse so that their origin can be localized---i.e., IP traceback---to within a small, constant number of candidate sites. We show that the two proactive and reactive performance effects can be achieved by implementing route-based filtering on less than 20 of Internet autonomous system (AS) sites. Second, we show that the two complementary performance measures are dependent on the properties of the underlying AS graph. In particular, we show that the power-law structure of Internet AS topology leads to connectivity properties which are crucial in facilitating the observed performance effects."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2162133150"
]
} | Active Internet Traffic Filtering: Real-time Response to Denial-of-Service Attacks | Denial of Service (DoS) attacks are recognized as one of the most challenging threats to Internet security. Any organization or enterprise that is dependent on the Internet can be subject to a DoS attack, causing its service to be severely disrupted, if not fail completely. The attacker typically uses a worm to create an "army" of zombies, which she orchestrates to flood the victim's site with malicious traffic. This malicious traffic exhausts the victim's resources, thereby seriously affecting the victim's ability to respond to normal traffic.
A network layer solution is required because the end-user or end-organization has no way to protect its tail circuit from being congested by an attack, causing the disruption sought by the attacker. For example, if an enterprise has a 10 Mbps connection to the Internet, an attacker can command its zombies to send traffic far exceeding this 10 Mbps rate to this enterprise, completely congesting the downstream link to the enterprise and causing normal traffic to be dropped.
Network operators use conventional router filtering capabilities to respond to DoS attacks. Typically, an operator of a site under attack identifies the nature of the packets being used in the attack by some packet collection facility, installs a filter in its firewall/edge router to block these packets and then requests its ISP to install comparable filters in its routers to remove this traffic from the tail circuit to the site. Each ISP can further communicate with its peering ISPs to block this unwanted traffic as well, if it so desires.
Currently, this propagation of filters is manual: the operator on each site determines the necessary filters and adds them to each router configuration. In several attacks, the operators of different networks have been forced to communicate by telephone given that the network connection, and thus email, was inoperable because of the attack.
As DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. For example, an attack can switch from one protocol to another, move between source networks as well as oscillate between on and off far faster than any human can respond. In general, network operators are confronting an "arms race" in which any defense, such as manually installed filters, is viewed as a challenge by the community of attacker-types to defeat. Exploiting a weakness such as human speeds of filter configuration is an obvious direction for an attacker to pursue.
The concept of automatic filter propagation has already been introduced in [MBF+01]: a router is configured with a filter to drop (or rate-limit) certain traffic; if it continues to drop a significant amount of this traffic, it requests that the upstream router take over and block the traffic. However, the crucial issues associated with automatic filter propagation are still unaddressed.
The real problem is how to efficiently manage the bounded number of filters available to a network operator to provide this filtering support. An attacker can change protocols, source addresses, port numbers, etc. requiring a very large number of filters. However, a sophisticated hardware router has a fixed maximum number of wire-speed filters that can block traffic with no degradation in router performance. The maximum is determined by hardware table sizes and is typically limited to several thousand. A software router is typically less constrained by table space, but incurs a processing overhead for each additional filter. This usually limits the practical number of filters to even less than a hardware router. Moreover, there is a processing cost at each router for installing each new filter, removing the old filters and sending and receiving filter propagation protocol messages.
Given the restricted amount of filtering resources available to each router, hop-by-hop filter propagation towards the attacker's site clearly does not scale: Internet backbone routers would quickly become the "filtering bottleneck", having to satisfy filtering requests coming from all the corners of the Internet. Fortunately, traceback [SWKA00], [SPS+01] makes it possible to identify a router close to the attacker and send it a filtering request directly. However, any filter propagation mechanism other than hop-by-hop raises a serious security issue: Once a router starts accepting filtering requests from unknown sources, how can it trust that these requests are not forged by malicious nodes seeking to disrupt normal communication between other nodes?
In this paper we propose a new filter propagation protocol called AITF (Active Internet Traffic Filtering): The victim sends a filtering request to its network gateway. The victim's gateway temporarily blocks the undesired traffic, while it propagates the request to the attacker's gateway. As we will see, the protocol both motivates and assists the attacker's gateway to block the attack. Moreover, a router receiving a filtering request satisfies it only if it determines that the requestor is on the same path with the specified undesired traffic. Thus, the filter cannot affect any nodes in the Internet other than those already operating at the mercy of the requestor.
The novel aspect of AITF is that it enables each participating service provider to guarantee to its clients a specific, significant amount of protection against DoS attacks, while it requires only a bounded, credible amount of resources. At the same time it is secure, i.e., it cannot be abused by a malicious node to harm (e.g., block legitimate traffic to) other nodes. Finally, it scales with Internet size, i.e., it keeps its efficiency in the face of continued Internet growth.
II. Active Internet Traffic Filtering (AITF)
A. Terminology
A flow label is a set of values that captures the common characteristics of a traffic flow -e.g., "all packets with IP source address S and IP destination address D".
A filtering request is a request to block a flow of packets (all packets matching a specific wildcarded flow label) for the next T time units.
A filtering contract between networks A and B specifies: i. The filtering request rate R_1 at which A accepts filtering requests to block certain traffic to B.
ii. The filtering request rate R_2 at which A can send filtering requests to get B to block certain traffic from coming into A.
An AITF network is an Autonomous Domain which has a filtering contract with each of its end-hosts and each neighbor Autonomous Domain directly connected to it. An AITF node is either an end-host or a border router (i.e., a router that has interfaces in more than one AITF network) in an AITF network.
Finally, we define the following terms with respect to an undesired flow: The attack path is the set of AITF nodes the undesired flow goes through. The attacker is the origin of the undesired flow. The victim is the target of the undesired flow. The attacker's gateway is the AITF node closest to the attacker along the attack path. Similarly, the victim's gateway is the AITF node closest to the victim along the attack path.
B. Overview
The AITF protocol enables a service provider to protect a client against N undesired flows by using only n ≪ N filters and a DRAM cache of size O(N). The motivation is that each router can afford gigabytes of DRAM but only a limited number of filters.
In an AITF world, each Autonomous Domain (AD) is an AITF network, i.e., it has filtering contracts with all its end-hosts and peering ADs. These contracts limit the rates by which the AD can send/receive filtering requests to/from its end-hosts and peering ADs. The limited rates allow the receiving router to police the requests to the specified rates and indiscriminately drop requests when the rate is in excess of the agreed rate. Thus, the router can limit the CPU cycles used to process filtering requests as well as the number of filters it requires.
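As a concrete illustration of this policing, a token-bucket check per contract would suffice. The sketch below is ours, not part of the AITF specification; all names (TokenBucket, allow, contracts) are illustrative.

import time

class TokenBucket:
    """Polices filtering requests to a contracted rate R (requests/second);
    requests arriving in excess of the rate are indiscriminately dropped."""
    def __init__(self, rate, burst):
        self.rate = rate                  # contracted rate R
        self.burst = burst                # tolerated burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                   # within the contract: process it
        return False                      # rate exceeded: drop the request

# One bucket per filtering contract, e.g. per end-host or peering AD:
contracts = {"peer_AD_1": TokenBucket(rate=100.0, burst=10.0)}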
An AITF filtering request is initially sent from the victim to the victim's gateway; the victim's gateway propagates it to the attacker's gateway; finally, the attacker's gateway propagates it to the attacker. Both the victim's gateway and the attacker's gateway install filters to block the undesired flow. The victim's gateway installs a filter only temporarily, to immediately protect the victim, while it waits for the attacker's gateway to take responsibility. The attacker's gateway is expected to install a filter and block the undesired flow for T time units.
If the undesired flow stops within some grace period, the victim's gateway interprets this as a hint that the attacker's gateway has taken over and removes its temporary filter. This leaves the door open to "on-off" undesired flows. In order to detect and block such "on-off" flows, the victim's gateway needs to remember each filtering request for at least T time units. Thus, the victim's gateway installs a filter for T_tmp ≪ T time units, but keeps a "shadow" of the filter in DRAM for T time units.
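A minimal sketch of this state at the victim's gateway, under assumed constants T_TMP and T_TOTAL (our names, with illustrative values): the scarce resource is the filter table, while the shadow lives in plentiful DRAM.

T_TMP, T_TOTAL = 1.0, 60.0     # T_tmp << T (both in seconds, values illustrative)

filters = {}    # flow_label -> expiry time of a scarce wire-speed filter
shadow  = {}    # flow_label -> expiry time of a cheap DRAM "shadow" record

def on_filtering_request(flow_label, now):
    filters[flow_label] = now + T_TMP      # block immediately, but briefly
    shadow[flow_label]  = now + T_TOTAL    # remember the request for T units

def on_packet(flow_label, now):
    if filters.get(flow_label, 0.0) > now:
        return "drop"                      # temporary filter still active
    if shadow.get(flow_label, 0.0) > now:
        # the flow reappeared within T: an "on-off" attacker, so re-block
        filters[flow_label] = now + T_TMP
        return "drop"
    return "forward"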
The attacker's gateway expects the attacker to stop the undesired flow within a grace period. Otherwise, it holds the right to disconnect from her. This fact encourages the attacker to stop the undesired flow. Similarly, the victim's gateway expects the attacker's gateway to block the undesired flow within a grace period. Otherwise, the mechanism escalates: The victim's gateway now plays the role of the victim (i.e., it sends a filtering request to its own gateway) and the attacker's gateway plays the role of the attacker (i.e., it is asked to stop the undesired flow or risk disconnection). The escalation process should become clear with the example in II-D.
Thus, the mechanism proceeds in rounds. At each round, only four nodes are involved. In the first round, the mechanism tries to push filtering of undesired traffic back to the AITF node closest to the attacker. If that fails, it tries the second closest AITF node to the attacker and so on.
C. Basic protocol
The AITF protocol involves only one type of message: a filtering request. A filtering request contains a flow label and a type field. The latter specifies whether this request is addressed to the victim's gateway, the attacker's gateway or the attacker.
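In code, the message could look like the sketch below; the paper fixes only the flow label and the type field, so the concrete field names here are our assumptions.

from dataclasses import dataclass

@dataclass
class FlowLabel:
    src: str          # e.g. IP source address S (wildcards allowed)
    dst: str          # e.g. IP destination address D

@dataclass
class FilteringRequest:
    flow: FlowLabel   # the undesired flow to block for T time units
    type: str         # "TO_VICTIM_GW" | "TO_ATTACKER_GW" | "TO_ATTACKER"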
The only nodes in an AITF network that speak the AITF protocol are end-hosts and border routers. Internal routers do not participate.
AITF node X sends a filtering request to AITF node Y when X wants a certain traffic flow coming through Y to be blocked for T time units.
When AITF node Y receives a filtering request, it checks which end-host or peering network the request is received from/through. If that end-host or peering network has exceeded its allowed rate, the request is dropped. If not, Y looks at the specified undesired flow label and takes certain actions, sketched in code after this list: − If Y is the victim's gateway:
i. It installs a temporary filter to block the undesired flow for T_tmp ≪ T time units.
ii. It logs the filtering request in DRAM for T time units.
iii. It propagates the filtering request to the attacker's gateway. If the attacker's gateway does not block the flow within T_tmp time units, Y propagates the filtering request to its own gateway. − If Y is the attacker's gateway:
i. It installs a filter to block the undesired flow for T time units.
ii. It propagates the filtering request to the attacker. If the attacker does not stop the flow within a grace period, Y disconnects from her. − If Y itself is the attacker, it stops the flow (to avoid disconnection).
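The per-role actions above can be summarized as one dispatch routine. This is a sketch under assumed helper names (install_filter, log_in_dram, propagate, and so on), not an AITF-mandated API.

T_TOTAL, T_TMP = 60.0, 1.0      # illustrative values with T_tmp << T

def handle_filtering_request(node, request):
    if not node.within_contracted_rate(request):
        return                                    # police, then drop excess
    if node.role == "victim_gateway":
        node.install_filter(request.flow, lifetime=T_TMP)   # temporary block
        node.log_in_dram(request.flow, lifetime=T_TOTAL)    # "shadow" record
        node.propagate(request, to=node.attacker_gateway_of(request.flow))
        node.escalate_unless_blocked_within(request, T_TMP) # to own gateway
    elif node.role == "attacker_gateway":
        node.install_filter(request.flow, lifetime=T_TOTAL)
        node.propagate(request, to=node.attacker_of(request.flow))
        node.disconnect_unless_stopped_within_grace(request)
    elif node.role == "attacker":
        node.stop_flow(request.flow)              # or risk disconnection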
We should note that the behavior described above is that of a non-compromised, non-malicious node. Neither the attacker nor even the attacker's gateway is expected to always conform to this behavior. AITF operation does not rely on their cooperation.
D. Example
In Figure 1, G_host (which stands for "good host") is an end-host residing in enterprise network G_net, which is connected to local ISP G_isp through router G_gw1. G_isp runs a regional network that connects through its backbone router G_gw2 to a wide-area ISP G_wan. Similarly, B_host (which stands for "bad host") is an end-host residing in enterprise network B_net, etc.
B_host starts sending an undesired flow to G_host. G_host sends a filtering request to G_gw1 against B_host. Upon reception of G_host's request, G_gw1 temporarily blocks the undesired flow and propagates the filtering request to B_gw1. On the other side, upon reception of G_gw1's request, B_gw1 immediately blocks the undesired flow, but also propagates the filtering request to B_host. B_host either stops the undesired flow or risks being disconnected. Thus, if B_gw1 cooperates, by the end of the first round, filtering of the undesired flow has been successfully pushed to the AITF node closest to the attacker (B_gw1).
Of course, B_gw1 may decide not to cooperate and ignore the filtering request. Then, the mechanism escalates: G_gw1 propagates the filtering request to G_gw2. G_gw2 temporarily blocks all undesired traffic, but also propagates the filtering request to B_gw2 and so on. Thus, if B_gw2 cooperates, by the end of the second round, filtering of the undesired flow has been successfully pushed to the second-closest AITF node to the attacker (B_gw2).
In the worst-case scenario, even B_gw3 refuses to cooperate. As a result, G_gw3 disconnects from B_gw3.
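The rounds can be traced with a toy function; gateway names follow the figure, and the cooperation flags are inputs to the sketch, not protocol state.

def escalation_outcome(attack_side_gateways, cooperates):
    """attack_side_gateways: ordered from closest to the attacker outward,
    e.g. ["B_gw1", "B_gw2", "B_gw3"]; cooperates: gateway name -> bool."""
    for round_no, gw in enumerate(attack_side_gateways, start=1):
        if cooperates[gw]:
            return "round %d: %s blocks the undesired flow" % (round_no, gw)
    return "no gateway cooperated: disconnect from " + attack_side_gateways[-1]

print(escalation_outcome(["B_gw1", "B_gw2", "B_gw3"],
                         {"B_gw1": False, "B_gw2": True, "B_gw3": True}))
# -> round 2: B_gw2 blocks the undesired flow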
E. Verifying a filtering request
In a network architecture where source address spoofing is allowed, compromised node M can maliciously request the blocking of traffic from A to V, thereby disrupting their communication. To avoid this, we add a simple extension to the basic protocol.
The extension introduces two more messages: A verification query and a verification reply. Both types include a flow label and a nonce (i.e., a random number).
When router Y receives a filtering request, which asks for the blocking of a traffic flow from attacker A to victim V, Y verifies that the request is real before taking any action to satisfy it. If Y is the victim's gateway, this verification is trivial with appropriate ingress filtering. If Y is the attacker's gateway, verification is accomplished through the following "3-way handshake":
i. Router Y receives a filtering request, asking for the blocking of a traffic flow from attacker A to victim V.
ii. Y sends a verification query to V, asking "Do you really not want this traffic flow?"
iii. V responds to Y with a verification reply. The reply must include the same flow label and nonce included in the query. If the nonce on V's reply is the same as the nonce on Y's query, Y accepts the request as real and proceeds to satisfy it, as sketched below. The "3-way handshake" is further discussed in III-B.
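A sketch of the nonce bookkeeping at the attacker's gateway; the security rests only on the nonce being unobservable off-path, and the function names are ours.

import secrets

pending = {}     # flow_label -> nonce of an outstanding verification query

def send_verification_query(flow_label, send_to_victim):
    nonce = secrets.randbits(64)       # fresh random number for this query
    pending[flow_label] = nonce
    send_to_victim(flow_label, nonce)  # "Do you really not want this flow?"

def on_verification_reply(flow_label, nonce):
    # Accept only if the reply echoes the exact nonce of our query; a node
    # off the attacker-victim path never saw it and so cannot forge a reply.
    if pending.get(flow_label) == nonce:
        del pending[flow_label]
        return True                    # request verified: proceed to satisfy it
    return False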
F. Assumptions
AITF operation assumes that the victim's gateway can determine − Who is the attacker's gateway (in order to propagate the request). − Who is the next AITF node on the attack path (in order to escalate, if necessary). These assumptions are met if an efficient traceback technique, such as those described in [SWKA00], [SPS+01], is available.
Also, AITF assumes that off-path traffic monitoring is not possible, i.e., if node M is not located on the path from node A to node V, then M cannot monitor traffic sent on that path (this assumption is necessary for the "3-way handshake").
III. Discussion
A. Why it works
The basic idea of AITF is to push filtering of undesired traffic to the network closest to the attacker. That is, hold a service provider responsible for providing connectivity to a misbehaving client and have it do the dirty job. The question is, why would the attacker's service provider agree (or at least be encouraged) to do that?
If the attacker's service provider does not cooperate, it risks being disconnected by its own service provider. This makes sense for both of them: If B_net in Figure 1 refuses to block its misbehaving client, the filtering burden falls on B_isp. Thus, it makes sense for B_isp to consider B_net a bad client and disconnect from it. On the other hand, this offers an incentive to B_net to cooperate and block the undesired flow. Otherwise, it will be disconnected by B_isp, which will result in all of its clients being dissatisfied.
Moreover, AITF offers an economic incentive to providers to protect their network from the inside by employing appropriate ingress filtering. If a provider proactively prevents spoofed flows from exiting its network, it lowers the probability of an attack being launched from its own network, thus reducing the number of expected filtering requests it will later have to satisfy to avoid disconnection.
In short, AITF creates a cost vs quality trade-off for service providers: Either they pay the cost to block the undesired flows generated by their few bad clients, or they run the risk of dissatisfying their legitimate clients, which are the vast majority. Thus, the quality of a provider's service is now related to its capability to filter its own misbehaving clients.
B. Why it is secure
The greatest challenge with automatic filtering mechanisms is that compromised node M may maliciously request the blocking of traffic from A to V, thereby disrupting their communication. AITF prevents this through the "3-way handshake" described in II-E.
The "3-way handshake" does not exactly verify the authenticity of a filtering request. It only enables A's gateway to verify that a request to block traffic from A to V has been sent by a node located on the path from A to V . A compromised router located on this path can naturally forge and snoop handshake messages to disrupt A-V communication. However, such a compromised router can disrupt A-V communication anyway, by simply dropping the corresponding packets. 4 In short, AITF cannot be abused by a compromised node to cause interruption of a legitimate traffic flow, unless that compromised node is responsible for routing the flow, in which case it can interrupt the flow anyway.
C. Why it scales
AITF scales with Internet size, because it pushes filtering of undesired traffic to the leaves of the Internet, where filtering capacity follows Internet growth.
In most cases, AITF pushes filtering of undesired traffic to the provider(s) of the attacker(s). Thus, the amount of filtering requests a provider is asked to satisfy grows proportionally to the number of the provider's (misbehaving) clients. However, intuitively, a provider's filtering capacity also grows proportionally to the number of its clients. In short, a provider's filtering capacity follows the provider's filtering workload.
If the attacker's provider is itself compromised, AITF naturally fails to push filtering to it. Instead, filtering is performed by another network, closer to the Internet core. If this situation occurred often, then the scalability argument stated above would be false. Fortunately, compromised routers are a very small percentage of the Internet infrastructure. Thus, AITF fails to push filtering to the attacker's provider with a very small probability.
IV. Performance Analysis
In this section we provide simple formulas that describe AITF performance. For lack of space and given that our formulas are very simple and intuitive, we defer any details to [AC03].
A. The victim's perspective
A.1 Effective bandwidth of an undesired flow
AITF significantly reduces the effective bandwidth of an undesired flow, i.e., the bandwidth of the undesired flow actually experienced by the victim. Specifically, it can be shown that AITF reduces the effective bandwidth of an undesired flow by a factor of
r ≈ n(T_d + T_r) / T
where n is the number of non-cooperating AITF nodes on the attack path, T_d is the attack detection time, and T_r is the one-way delay from the victim to its gateway. T is the timeout associated with all filtering requests, i.e., each filtering request asks for the blocking of a flow for T time units. For example, if the only non-cooperating node on the attack path is the attacker, and if the one-way delay from the victim to its gateway is T_r = 50 msec, for T = 1 min, an AITF node can reduce the effective bandwidth of an undesired flow by a factor r ≈ 0.00083.
Here we only demonstrate this result for n = 1, i.e., when the only non-cooperating node on the attack path is the attacker: At time 0 the attacker starts the undesired flow; at time T_d the victim detects it and sends a filtering request; at time T_d + T_r the victim's gateway temporarily blocks the flow and the victim stops receiving it; the flow is eventually blocked by the attacker's gateway and released after time T. Thus, if the original bandwidth of the undesired flow is B, its effective bandwidth is B_e ≈ B · (T_d + T_r)/T. When n > 1, i.e., when one or more AITF routers close to the attacker are also non-cooperating, the attacker can play "on-off" games: Pretend to stop the undesired flow to trick the victim's gateway into removing its filter, then resume the flow, etc. The victim's gateway detects and blocks such attackers by using its DRAM cache.
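Plugging the paper's example numbers into these formulas (a worked check only, assuming T_d ≈ 0 as the example implies):

# r = n * (T_d + T_r) / T with n = 1, T_d ~ 0, T_r = 50 msec, T = 1 min
n, T_d, T_r, T_total = 1, 0.0, 0.050, 60.0
r = n * (T_d + T_r) / T_total
print(round(r, 5))            # 0.00083, i.e. roughly a 1200-fold reduction
B = 10e6                      # a 10 Mbps undesired flow, as in the introduction
print(B * r)                  # is felt by the victim as about 8333 bps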
A.2 Number of undesired flows
An AITF node is guaranteed protection against a specific number of undesired flows, which depends on its contract with its service provider. Specifically, it can be shown that if a client is allowed to send R_1 filtering requests per time unit to the provider, then the client is protected against N_v = R_1 · T simultaneous undesired flows. For example, for R_1 = 100 filtering requests per second and T = 1 min, the client is protected against N_v = 6,000 simultaneous undesired flows.
B. Filtering close to the victim
AITF enables a service provider to protect a client against N_v undesired flows by using only n_v ≪ N_v filters. Specifically, it can be shown that if a client is allowed to send R_1 filtering requests per time unit to the provider, the provider needs n_v filters and a DRAM cache that can fit m_v filtering requests in order to satisfy all the requests, where
n_v = R_1 · T_tmp, m_v = R_1 · T
T_tmp is the amount of time that elapses from the moment the victim's gateway installs a temporary filter until it removes it. The purpose of the temporary filter is to block the undesired flow until the attacker's gateway takes over. Therefore, T_tmp should be large enough to allow the traceback from the victim's gateway to the attacker's gateway plus the 3-way handshake. For example, suppose we use an architecture like [CG00], where traceback is automatically provided inside each packet. Then traceback time is 0. If the 3-way handshake between the two gateways takes 600 msec, for R_1 = 100 filtering requests per second and T = 1 min, the service provider needs n_v = 60 filters to protect a client against N_v = 6,000 undesired flows.
C. Filtering close to the attacker
AITF requires a bounded amount of resources from the attacker's service provider. Specifically, if a service provider is allowed to send R_2 filtering requests per time unit to a client, then the provider needs n_a = R_2 · T filters in order to ensure that the client satisfies all the requests. Given these resources, the provider can filter N_a = n_a = R_2 · T simultaneous undesired flows generated by the client. For example, for R_2 = 1 filtering request per second and T = 1 min, the provider needs n_a = 60 filters for the client. This filtering request rate allows the provider to filter up to N_a = 60 simultaneous undesired flows generated by the client.
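The sizing formulas of sections IV-A through IV-C, collected into one throwaway calculator with the paper's example values (rates in requests per second, times in seconds; the function is ours):

def provision(R_1, R_2, T, T_tmp):
    return {
        "N_v": R_1 * T,      # undesired flows the client is protected against
        "n_v": R_1 * T_tmp,  # wire-speed filters at the victim's gateway
        "m_v": R_1 * T,      # DRAM cache entries at the victim's gateway
        "n_a": R_2 * T,      # filters at the attacker's gateway (and attacker)
    }

print(provision(R_1=100, R_2=1, T=60, T_tmp=0.6))
# {'N_v': 6000, 'n_v': 60.0, 'm_v': 6000, 'n_a': 60}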
D. The attacker's perspective
We have defined an attacker as the source of an undesired flow. By this definition, an attacker is not necessarily a malicious/compromised node; it is simply a node being asked to stop sending a certain flow. A legitimate AITF node must be provisioned to stop sending undesired flows when requested, in order to avoid disconnection.
AITF requires a bounded amount of resources from the attacker as well. Specifically, if a service provider is allowed to send R_2 filtering requests per time unit to a client, the client needs n_a = R_2 · T filters (as many as the provider) in order to satisfy all the requests. For example, for R_2 = 1 filtering request per second and T = 1 min, the client needs n_a = 60 filters.
VI. Conclusions
We presented AITF, an automatic filter propagation mechanism, according to which each Autonomous Domain (AD) has a filtering contract with each of its end-hosts and neighbor ADs. A filtering contract with a neighbor provides a guaranteed, significant level of protection against DoS attacks coming through that neighbor in exchange for a reasonable, bounded amount of router resources. (We believe that DoS attacks should be addressed separately from flash crowds: Flash crowd aggregates are created by legitimate traffic. Therefore, it makes sense to rate-limit them instead of completely blocking them. On the contrary, DoS attack traffic aims at disrupting the victim's operation. Therefore, it makes sense to block it. Blocking a traffic flow is simpler and cheaper than rate-limiting it. Moreover, DoS attack traffic is generated by malicious/compromised nodes. Therefore, it demands a more intelligent defense mechanism.)
Specifically: − Given a filtering contract between a client and a service provider, which allows the client to send R_1 filtering requests per time unit to the provider, the provider can protect the client against a large number of undesired flows N_v = R_1 · T, by significantly limiting the effective bandwidth of each undesired flow. The provider achieves this by using only a modest number of filters n_v = R_1 · T_tmp ≪ N_v. − Given a filtering contract between a client and a service provider, which allows the provider to send R_2 filtering requests per time unit to the client, both the client and the provider need a bounded number of filters n_a = R_2 · T to honor their contract.
We argued that AITF successfully deals with the biggest challenge to automatic filtering mechanisms: source address spoofing. Namely, we argued that it is not possible for any malicious/compromised node to abuse AITF in order to interrupt a legitimate traffic flow, unless the compromised node is responsible for routing that flow, in which case it can interrupt the flow anyway.
Finally, we argued that AITF scales with Internet size, because it pushes filtering of undesired traffic to the service providers of the attackers, unless the service providers are themselves compromised. Fortunately, compromised routers are a very small percentage of Internet infrastructure. Thus, in the vast majority of cases, AITF pushes filtering of undesired traffic to the leaves of the Internet, where filtering capacity follows Internet growth. | 4,513 |
cs0309054 | 2949702945 | Denial of Service (DoS) attacks are one of the most challenging threats to Internet security. An attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim's site with malicious traffic, clogging its tail circuit and interfering with normal traffic. At present, the network operator of a site under attack has no other resolution but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic. However, as DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. In this paper, we present Active Internet Traffic Filtering, a new automatic filter propagation protocol. We argue that this system provides a guaranteed, significant level of protection against DoS attacks in exchange for a reasonable, bounded amount of router resources. We also argue that the proposed system cannot be abused by a malicious node to interfere with normal Internet operation. Finally, we argue that it retains its efficiency in the face of continued Internet growth. | In @cite_1, Keromytis et al. propose SOS (Secure Overlay Services), an architecture for pro-actively protecting against DoS attacks the communication between a pre-determined location and a specific set of users who have authorized access to communicate with that location. In contrast, AITF addresses the more general problem of protecting against DoS attacks any location accessible to all Internet users. | {
"abstract": [
"Denial of service (DoS) attacks continue to threaten the reliability of networking systems. Previous approaches for protecting networks from DoS attacks are reactive in that they wait for an attack to be launched before taking appropriate measures to protect the network. This leaves the door open for other attacks that use more sophisticated methods to mask their traffic.We propose an architecture called Secure Overlay Services (SOS) that proactively prevents DoS attacks, geared toward supporting Emergency Services or similar types of communication. The architecture is constructed using a combination of secure overlay tunneling, routing via consistent hashing, and filtering. We reduce the probability of successful attacks by (i) performing intensive filtering near protected network edges, pushing the attack point perimeter into the core of the network, where high-speed routers can handle the volume of attack traffic, and (ii) introducing randomness and anonymity into the architecture, making it difficult for an attacker to target nodes along the path to a specific SOS-protected destination.Using simple analytical models, we evaluate the likelihood that an attacker can successfully launch a DoS attack against an SOS-protected network. Our analysis demonstrates that such an architecture reduces the likelihood of a successful attack to minuscule levels."
],
"cite_N": [
"@cite_1"
],
"mid": [
"2139153904"
]
} | Active Internet Traffic Filtering: Real-time Response to Denial-of-Service Attacks | 4,513
cs0307043 | 2953390777 | The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension/degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs. | To see what problems MIPs model, note, from constraints (i) and (iii) of MIPs, that for all @math, any feasible solution will make the set @math have precisely one 1, with all other elements being 0; MIPs thus model many "choice" scenarios. Consider, e.g., global routing in VLSI gate arrays @cite_22. Given are an undirected graph @math, a function @math, and @math, a set @math of paths in @math, each connecting @math to @math; we must connect each @math with @math using exactly one path from @math, so that the maximum number of times that any edge in @math is used is minimized; an MIP formulation is obvious, with @math being the indicator variable for picking the @math-th path in @math. This problem, the vector-selection problem of @cite_22, and the discrepancy-type problems of , are all modeled by MIPs; many MIP instances, e.g., global routing, are NP-hard. | {
"abstract": [
"We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be a of extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance."
],
"cite_N": [
"@cite_22"
],
"mid": [
"2022191808"
]
} | An Extension of the Lovász Local Lemma, and its Applications to Integer Programming | The powerful Lovász Local Lemma (LLL) is often used to show the existence of rare combinatorial structures by showing that a random sample from a suitable sample space produces them with positive probability [14]; see Alon & Spencer [4] and Motwani & Raghavan [27] for several such applications. We present an extension of this lemma, and demonstrate applications to rounding fractional solutions for certain families of integer programs.
Let e denote the base of natural logarithms as usual. The symmetric case of the LLL shows that all of a set of "bad" events E_i can be avoided under some conditions: Lemma 1.1. ([14]) Let E_1, E_2, . . . , E_m be any events with Pr(E_i) ≤ p ∀i. If each E_i is mutually independent of all but at most d of the other events E_j and if ep(d + 1) ≤ 1, then Pr(∧_{i=1}^{m} ¬E_i) > 0.
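As a quick illustration of the symmetric condition (a textbook use, not taken from this paper): for 2-coloring a k-uniform hypergraph, the "bad" event "edge i is monochromatic" has probability p = 2^(1-k) under a uniformly random coloring, and Lemma 1.1 applies whenever e·p·(d + 1) ≤ 1.

import math

def lll_condition_holds(p, d):
    """Symmetric Lovasz Local Lemma condition: e * p * (d + 1) <= 1."""
    return math.e * p * (d + 1) <= 1

k = 10                      # edge size of the hypergraph
p = 2 ** (1 - k)            # Pr(a fixed edge is monochromatic)
print(lll_condition_holds(p, d=150))   # True : a proper 2-coloring must exist
print(lll_condition_holds(p, d=200))   # False: the lemma is silent here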
Though the LLL is powerful, one problem is that the "dependency" d is high in some cases, precluding the use of the LLL if p is not small enough. We present a partial solution to this via an extension of the LLL (Theorem 3.1), which shows how to essentially reduce d for a class of events E_i; this works well when each E_i denotes some random variable deviating "much" from its mean. In a nutshell, we show that such events E_i can often be decomposed suitably into sub-events; although the sub-events may have a large dependency among themselves, we show that it suffices to have a small "bipartite dependency" between the set of events E_i and the set of sub-events. This, in combination with some other ideas, leads to the following applications in integer programming.
It is well-known that a large number of NP-hard combinatorial optimization problems can be cast as integer linear programming problems (ILPs). Due to their NP-hardness, good approximation algorithms are of much interest for such problems. Recall that a ρ-approximation algorithm for a minimization problem is a polynomial-time algorithm that delivers a solution whose objective function value is at most ρ times optimal; ρ is usually called the approximation guarantee, approximation ratio, or performance guarantee of the algorithm. Algorithmic work in this area typically focuses on achieving the smallest possible ρ in polynomial time. One powerful paradigm here is to start with the linear programming (LP) relaxation of the given ILP wherein the variables are allowed to be reals within their integer ranges; once an optimal solution is found for the LP, the main issue is how to round it to a good feasible solution for the ILP.
Rounding results in this context often have the following strong property: they present an integral solution of value at most y * · ρ, where y * will throughout denote the optimal solution value of the LP relaxation.
Since the optimal solution value OP T of the ILP is easily seen to be lower-bounded by y * , such rounding algorithms are also ρ-approximation algorithms. Furthermore, they provide an upper bound of ρ on the ratio OP T /y * , which is usually called the integrality gap or integrality ratio of the relaxation; the smaller this value, the better the relaxation.
This work presents improved upper bounds on the integrality gap of the natural LP relaxation for two families of ILPs: minimax integer programs (MIPs) and covering integer programs (CIPs). (The precise definitions and results are presented in § 2.) For the latter, we also provide the corresponding polynomialtime rounding algorithms. Our main improvements are in the case where the coefficient matrix of the given ILP is column-sparse: i.e., the number of nonzero entries in every column is bounded by a given parameter a. There are classical rounding theorems for such column-sparse problems (e.g., Beck & Fiala [6], Karp, Leighton, Rivest, Thompson, Vazirani & Vazirani [18]). Our results complement, and are incomparable with, these results. Furthermore, the notion of column-sparsity, which denotes no variable occurring in "too many" constraints, occurs naturally in combinatorial optimization: e.g., routing using "short" paths, and problems on hypergraphs with "small" degree. These issues are discussed further in § 2.
A key technique, randomized rounding of linear relaxations, was developed by Raghavan & Thompson [32] to get approximation algorithms for such ILPs. We use Theorem 3.1 to prove that this technique produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrix of the given MIP/CIP is column-sparse. (In the case of MIPs, our algorithm iterates randomized rounding several times with different choices of parameters, in order to achieve our result.) Such results cannot be got via Lemma 1.1, as the dependency d, in the sense of Lemma 1.1, can be as high as Θ(m) for these problems. Roughly speaking, Theorem 3.1 helps show that if no column in our given ILP has more than a nonzero entries, then the dependency can essentially be brought down to a polynomial in a; this is the key driver behind our improvements.
Theorem 3.1 works well in combination with an idea that has blossomed in the areas of derandomization and pseudorandomness, in the last two decades: (approximately) decomposing a function of several variables into a sum of terms, each of which depends on only a few of these variables. Concretely, suppose Z is a sum of random variables Z_i. Many tools have been developed to upper-bound Pr(Z − E[Z] ≥ z) and Pr(|Z − E[Z]| ≥ z) even if the Z_i s are only (almost) k-wise independent for some "small" k, rather than completely independent. The idea is to bound the probabilities by considering E[(Z − E[Z])^k] or similar expectations, which look at the Z_i only k or fewer at a time (via linearity of expectation). The main application of this has been that the Z_i can then be sampled using "few" random bits, yielding a derandomization/pseudorandomness result (e.g., [3,23,8,26,28,33]). Our results show that such ideas can in fact be used to show that some structures exist! This is one of our main contributions.
What about polynomial-time algorithms for our existential results? Typical applications of Lemma 1.1 are "nonconstructive" [i.e., do not directly imply (randomized) polynomial-time algorithmic versions], since the positive probability guaranteed by Lemma 1.1 can be exponentially small in the size of the input. However, certain algorithmic versions of the LLL have been developed starting with the seminal work of Beck [5]. These ideas do not seem to apply to our extension of the LLL, and hence our MIP result is nonconstructive. Following the preliminary version of this work [35], two main algorithmic versions related to our work have been obtained: (i) for a subclass of the MIPs [20], and (ii) for a somewhat different notion of approximation than the one we study, for certain families of MIPs [11].
Our main algorithmic contribution is for CIPs and multi-criteria versions thereof: we show, by a generalization of the method of pessimistic estimators [31], that we can efficiently construct the same structure as is guaranteed by our nonconstructive argument. We view this as interesting for two reasons. First, the generalized pessimistic estimator argument requires a quite delicate analysis, which we expect to be useful in other applications of developing constructive versions of existential arguments. Second, except for some of the algorithmic versions of the LLL developed in [24,25], most current algorithmic versions minimally require something like "pd^3 = O(1)" (see, e.g., [5,1]); the LLL only needs that pd = O(1). While this issue does not matter much in many applications, it crucially does, in some others. A good example of this is the existentially-optimal integrality gap for the edge-disjoint paths problem with "short" paths, shown using the LLL in [21]. The above-seen "pd^3 = O(1)" requirement of currently-known algorithmic approaches to the LLL leads to algorithms that will violate the edge-disjointness condition when applied in this context: specifically, they may route up to three paths on some edges of the graph. See [9] for a different - random-walk based - approach to low-congestion routing. An algorithmic version of this edge-disjoint paths result of [21] is still lacking. It is a very interesting open question whether there is an algorithmic version of the LLL that can construct the same structures as guaranteed to exist by the LLL. In particular, can one of the most successful derandomization tools - the method of conditional probabilities or its generalization, the pessimistic estimators method - be applied, fixing the underlying random choices of the probabilistic argument one-by-one? This intriguing question is open (and seems difficult) for now. As a step in this direction, we are able to show how such approaches can indeed be developed, in the context of CIPs.
Thus, our main contributions are as follows. (a) The LLL extension is of independent interest: it helps in certain settings where the "dependency" among the "bad" events is too high for the LLL to be directly applicable. We expect to see further applications/extensions of such ideas. (b) This work shows that certain classes of column-sparse ILPs have much better solutions than known before; such problems abound in practice (e.g., short paths are often desired/required in routing). (c) Our generalized method of pessimistic estimators should prove fruitful in other contexts also; it is a step toward complete algorithmic versions of the LLL.
The rest of this paper is organized as follows. Our results are first presented in § 2, along with a discussion of related work. The extended LLL, and some large-deviation methods that will be seen to work well with it, are shown in § 3. Sections 4 and 5 are devoted to our rounding applications. Finally, § 6 concludes.
Improvements achieved
For MIPs, we use the extended LLL and an idea of Éva Tardos that leads to a bootstrapping of the LLL extension, to show the existence of an integral solution of value y* + O(min{y*, m} · H(min{y*, m}, 1/a)) + O(1); see Theorem 4.5. Since a ≤ m, this is always as good as the y* + O(min{y*, m} · H(min{y*, m}, 1/m)) bound of [32], and is a good improvement if a ≪ m. It is also an improvement over the additive g factor of [18] in cases where g is not small compared to y*.
Consider, e.g., the global routing problem and its MIP formulation, sketched above; m here is the number of edges in G, and g = a is the maximum length of any path in ⋃_i P_i. To focus on a specific interesting case, suppose y*, the fractional congestion, is at most one. Then while the previous results ([32] and [18], resp.) give bounds of O(log m/ log log m) and O(a) on an integral solution, we get the improved bound of O(log a/ log log a). Similar improvements are easily seen for other ranges of y* also; e.g., if y* = O(log a), an integral solution of value O(log a) exists, improving on the previously known bounds of O(log m/ log(2 log m/ log a)) and O(a). Thus, routing along short paths (this is the notion of sparsity for the global routing problem) is very beneficial in keeping the congestion low. Section 4 presents a scenario where we get such improvements, for discrepancy-type problems [34,4]. In particular, we generalize a hypergraph-partitioning result of Füredi & Kahn [16].
Recall the bounds of [36] for CIPs mentioned in the paragraph preceding this subsection; our bounds for CIPs depend only on the set of constraints Ax ≥ b, i.e., they hold for any non-negative objective-function vector c. Our improvements over [36] get better as y* decreases. We show an integrality gap of 1 + O(max{ln(a + 1)/B, √(ln(a + 1)/B)}), once again improving on [36] for weighted CIPs. This CIP bound is better than that of [36] if y* ≤ mB/a: this inequality fails for unweighted CIPs and is generally true for weighted CIPs, since y* can get arbitrarily small in the latter case. In particular, we generalize the result of Chvátal [10] on weighted set cover. Consider, e.g., a facility location problem on a directed graph G = (V, A): given a cost c_i ∈ [0, 1] for each i ∈ V, we want a min-cost assignment of facilities to the nodes such that each node sees at least B facilities in its out-neighborhood--multiple facilities at a node are allowed. If ∆_in is the maximum in-degree of G, we show an integrality gap of 1 + O(max{ln(∆_in + 1)/B, √(ln(B(∆_in + 1))/B)}). This improves on [36] if y* ≤ |V|B/∆_in; it shows an O(1) (resp., 1 + o(1)) integrality gap if B grows as fast as (resp., strictly faster than) log ∆_in. Theorem 5.7 presents our covering results.
A key corollary of our results is that for families of instances of CIPs, we get a good (O(1) or 1 + o(1)) integrality gap if B grows at least as fast as log a. Bounds on the result of a greedy algorithm for CIPs, relative to the optimal integral solution, are known [12,13]. Our bound improves that of [12] and is incomparable with [13]; for any given A, c, and the unit vector b/‖b‖_2, our bound improves on [13] if B is more than a certain threshold. As it stands, randomized rounding produces such improved solutions for several CIPs only with a very low, sometimes exponentially small, probability. Thus, it often does not imply a randomized algorithm. To this end, we generalize Raghavan's method of pessimistic estimators to derive an algorithmic (polynomial-time) version of our results for CIPs, in § 5.3.
We also show via Theorem 5.9 and Corollary 5.10 that multi-criteria CIPs can be approximated well. In particular, Corollary 5.10 shows some interesting cases where the approximation guarantee for multi-criteria CIPs grows very sub-linearly with the number ℓ of given vectors c_i: the approximation ratio is at most O(log log ℓ) times what we show for CIPs (which correspond to the case where ℓ = 1). We are not aware of any earlier such work on multi-criteria CIPs.
The preliminary version of this work was presented in [35]. As mentioned in § 1, two main algorithmic versions related to our work have been obtained following [35]. First, for a subclass of the MIPs where the nonzero entries of the matrix A are "reasonably large", constructive versions of our results have been obtained in [20]. Second, for a notion of approximation that is different from the one we study, algorithmic results have been developed for certain families of MIPs in [11]. Furthermore, our Theorem 5.7 for CIPs has been used in [19] to develop approximation algorithms for CIPs that have given upper bounds on the variables x j .
The Extended LLL and an Approach to Large Deviations
We now present our LLL extension, Theorem 3.1. For any event E, define χ(E) to be its indicator r.v.:
1 if E holds and 0 otherwise. Suppose we have "bad" events E 1 , . . . , E m with a "dependency" d ′ (in the sense of Lemma 1.1) that is "large". Theorem 3.1 shows how to essentially replace d ′ by a possibly much-smaller d, under some conditions. It generalizes Lemma 1.1 (define one r.v., C i,1 = χ(E i ), for each i, to get Lemma 1.1), its proof is very similar to the classical proof of Lemma 1.1, and its motivation will be clarified by the applications.
Theorem 3.1. Let E_1, E_2, . . . , E_m be any events, and for I ⊆ [m] let Z(I) denote the event ⋀_{k∈I} ¬E_k. Suppose that, for some positive integer d and for each i ∈ [m], there is a finite number of non-negative r.v.s C_{i,1}, C_{i,2}, . . . such that:
(i) any C_{i,j} is mutually independent of all but at most d of the events E_k, k ≠ i, and
(ii) ∀I ⊆ ([m] − {i}), Pr(E_i | Z(I)) ≤ Σ_j E[C_{i,j} | Z(I)].
Let p_i denote Σ_j E[C_{i,j}]; clearly, Pr(E_i) ≤ p_i (set I = ∅ in (ii)). Suppose that for all i ∈ [m] we have e·p_i·(d + 1) ≤ 1. Then Pr(⋀_{i∈[m]} ¬E_i) ≥ (d/(d + 1))^m > 0.
Remark 3.2. C_{i,j} and C_{i,j'} can "depend" on different subsets of {E_k | k ≠ i}; the only restriction is that these subsets be of size at most d. Note that we have essentially reduced the dependency among the E_i s to just d: e·p_i·(d + 1) ≤ 1 suffices. Another important point is that the dependency among the r.v.s C_{i,j} could be much higher than d: all we count is the number of E_k that any C_{i,j} depends on.
Proof of Theorem 3.1. We prove by induction on |I| that if i ∉ I then Pr(E_i | Z(I)) ≤ e·p_i, which suffices to prove the theorem since
Pr(⋀_{i∈[m]} ¬E_i) = ∏_{i∈[m]} (1 − Pr(E_i | Z([i − 1]))).
For the base case where I = ∅, Pr(E_i | Z(I)) = Pr(E_i) ≤ p_i. For the inductive step, let S_{i,j,I} := {k ∈ I | C_{i,j} depends on E_k}, and S'_{i,j,I} = I − S_{i,j,I}; note that |S_{i,j,I}| ≤ d. If S_{i,j,I} = ∅, then E[C_{i,j} | Z(I)] = E[C_{i,j}]. Otherwise, letting S_{i,j,I} = {ℓ_1, . . . , ℓ_r}, we have
E[C_{i,j} | Z(I)] = E[C_{i,j} · χ(Z(S_{i,j,I})) | Z(S'_{i,j,I})] / Pr(Z(S_{i,j,I}) | Z(S'_{i,j,I})) ≤ E[C_{i,j} | Z(S'_{i,j,I})] / Pr(Z(S_{i,j,I}) | Z(S'_{i,j,I})),
since C_{i,j} is non-negative. The numerator of the last term is E[C_{i,j}], by assumption. The denominator can be lower-bounded as follows:
∏_{s∈[r]} (1 − Pr(E_{ℓ_s} | Z({ℓ_1, ℓ_2, . . . , ℓ_{s−1}} ∪ S'_{i,j,I}))) ≥ ∏_{s∈[r]} (1 − e·p_{ℓ_s}) ≥ (1 − 1/(d + 1))^r ≥ (d/(d + 1))^d > 1/e;
the first inequality follows from the induction hypothesis. Hence,
E[C_{i,j} | Z(I)] ≤ e·E[C_{i,j}], and thus Pr(E_i | Z(I)) ≤ Σ_j E[C_{i,j} | Z(I)] ≤ e·p_i ≤ 1/(d + 1).
The crucial point is that the events E i could have a large dependency d ′ , in the sense of the classical Lemma 1.1. The main utility of Theorem 3.1 is that if we can "decompose" each E i into the r.v.s C i,j that satisfy the conditions of the theorem, then there is the possibility of effectively reducing the dependency by much (d ′ can be replaced by the value d). Concrete instances of this will be studied in later sections.
The tools behind our MIP application are our new LLL, and a result of [33]. Define, for z = (z_1, . . . , z_n) ∈ ℜ^n, a family of polynomials S_j(z), j = 0, 1, . . . , n, where S_0(z) ≡ 1 and, for j ∈ [n],
S_j(z) := Σ_{1 ≤ i_1 < i_2 < ··· < i_j ≤ n} z_{i_1} z_{i_2} ··· z_{i_j}. (1)
Remark 3.3. For real x and non-negative integral r, we define \binom{x}{r} := x(x − 1) ··· (x − r + 1)/r! as usual; this is the sense meant in Theorem 3.4 below.
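For intuition, the S_k can be computed by the standard coefficient dynamic program for ∏_i (1 + z_i·x); the small Python sketch below (an illustrative aid, not from the paper) computes S_k and the estimator Y_{k,q} used in Theorem 3.4(a) below.

    from math import comb

    def elementary_symmetric(z, k):
        # Coefficients S_0..S_k of prod_i (1 + z_i * x), via the standard DP.
        S = [1.0] + [0.0] * k
        for zi in z:
            for j in range(k, 0, -1):
                S[j] += zi * S[j - 1]
        return S

    def Y_estimator(z, k, q):
        # Y_{k,q} = S_k(z) / binom(q, k), as in Theorem 3.4(a); integral q here
        # (Remark 3.3's generalized binomial coefficient handles real q).
        return elementary_symmetric(z, k)[k] / comb(q, k)

    z = [0.9, 0.8, 0.7, 0.95]        # sum(z) = 3.35 >= q = 3, each z_i in [0, 1]
    print(Y_estimator(z, k=2, q=3))  # ~1.397 >= 1, matching S_k(z) >= binom(q, k)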
We define a nonempty event to be any event with a nonzero probability of occurrence. The relevant theorem of [33] is the following:
Theorem 3.4. ([33]) Given r.v.s X_1, . . . , X_n ∈ [0, 1], let X = Σ_{i=1}^{n} X_i and µ = E[X]. Then:
(a) For any q > 0, any nonempty event Z and any non-negative integer k ≤ q, Pr(X ≥ q | Z) ≤ E[Y_{k,q} | Z], where Y_{k,q} = S_k(X_1, . . . , X_n)/\binom{q}{k}.
(b) If the X_i s are independent, δ > 0, and k = ⌈µδ⌉, then Pr(X ≥ µ(1 + δ)) ≤ E[Y_{k,µ(1+δ)}] ≤ G(µ, δ), where G(·, ·) is as in Lemma 2.4.
(c) If the X_i s are independent, then E[S_k(X_1, . . . , X_n)] ≤ \binom{n}{k} · (µ/n)^k ≤ µ^k/k!.
Proof. Suppose r_1, r_2, . . . , r_n ∈ [0, 1] satisfy Σ_{i=1}^{n} r_i ≥ q. Then, a simple proof is given in [33] for the fact that, for any non-negative integer k ≤ q, S_k(r_1, r_2, . . . , r_n) ≥ \binom{q}{k}. This clearly holds even given the occurrence of any nonempty event Z. Thus we get
Pr(X ≥ q | Z) ≤ Pr(Y_{k,q} ≥ 1 | Z) ≤ E[Y_{k,q} | Z],
where the second inequality follows from Markov's inequality. The proofs of (b) and (c) are given in [33].
We next present the proof of Lemma 2.4:
Proof of Lemma 2.4. Part (a) is the Chernoff–Hoeffding bound (see, e.g., Appendix A of [4], or [27]). For (b), we proceed as follows. For any µ > 0, it is easy to check that
G(µ, δ) = e^{−Θ(µδ^2)} if δ ∈ (0, 1); (2)
G(µ, δ) = e^{−Θ(µ(1+δ) ln(1+δ))} if δ ≥ 1. (3)
Now if µ ≤ log(p^{−1})/2, choose δ = C · log(p^{−1}) / (µ · log(log(p^{−1})/µ)) for a suitably large constant C. Note that δ is lower-bounded by some positive constant; hence, (3) holds (since the constant 1 in the conditions "δ ∈ (0, 1)" and "δ ≥ 1" of (2) and (3) can clearly be replaced by any other positive constant). Simple algebraic manipulation now shows that if C is large enough, then ⌈µδ⌉ · G(µ, δ) ≤ p holds. Similarly, if µ > log(p^{−1})/2, we set δ = C · √(log(µ + p^{−1})/µ) for a large enough constant C, and use (2).
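The function G of Lemma 2.4 is defined earlier in the paper; assuming the usual Chernoff–Hoeffding upper-tail form G(µ, δ) = (e^δ/(1 + δ)^{1+δ})^µ, which is consistent with the asymptotics (2) and (3), a quick numeric check in Python is:

    import math

    def G(mu: float, delta: float) -> float:
        # Assumed Chernoff-Hoeffding upper-tail bound:
        # Pr(X >= mu*(1+delta)) <= (e^delta / (1+delta)^(1+delta))^mu.
        return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

    # Illustrating (2): for fixed delta in (0,1), -ln G(mu,delta) scales as mu*delta^2.
    for mu in (10, 100, 1000):
        print(mu, -math.log(G(mu, 0.5)) / (mu * 0.5 ** 2))  # roughly constant (~0.43)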
Approximating Minimax Integer Programs
Suppose we are given an MIP conforming to Definition 2.1. Define t to be max_{i∈[n]} NZ_i, where NZ_i is the number of rows of A which have a non-zero coefficient corresponding to at least one variable among {x_{i,j} : j ∈ [ℓ_i]}. Note that
g ≤ a ≤ t ≤ min{m, a · max_{i∈[n]} ℓ_i}. (4)
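Concretely, t can be computed directly from A and the variable blocks; the following Python sketch (with hypothetical helper names) illustrates the definition:

    def mip_t(A, blocks):
        # A: m rows of coefficients over all N columns.
        # blocks[i]: column indices of the block {x_{i,j} : j in [l_i]}.
        # NZ_i = number of rows with a nonzero entry in block i; t = max_i NZ_i.
        nz = [sum(1 for row in A if any(row[c] != 0 for c in cols))
              for cols in blocks]
        return max(nz)

    A = [[1, 0, 0, 1],
         [0, 1, 1, 0],
         [1, 1, 0, 0]]
    print(mip_t(A, blocks=[[0, 1], [2, 3]]))  # block {0,1} meets all 3 rows: t = 3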
Theorem 4.2 now shows how Theorem 3.1 can help, for sparse MIPs-those where t ≪ m. We will then bootstrap Theorem 4.2 to get the further improved Theorem 4.5. We start with a proposition, whose proof is a simple calculus exercise:
Proposition 4.1. If 0 < µ_1 ≤ µ_2, then for any δ > 0, G(µ_1, µ_2·δ/µ_1) ≤ G(µ_2, δ).
Theorem 4.2. Given an MIP conforming to Definition 2.1, there exists an integral solution of value at most y* + O(min{y*, m} · H(min{y*, m}, 1/(et))).
Proof. Conduct randomized rounding: independently for each i, randomly round exactly one x_{i,j} to 1, guided by the "probabilities" {x*_{i,j}}. We may assume that {x*_{i,j}} is a basic feasible solution to the LP relaxation. Hence, at most m of the {x*_{i,j}} will be neither zero nor one, and only these variables will participate in the rounding. Thus, since all the entries of A are in [0, 1], we assume without loss of generality from now on that y* ≤ m (and that max_{i∈[n]} ℓ_i ≤ m); this explains the "min{y*, m}" term in our stated bounds. If z ∈ {0, 1}^N denotes the randomly rounded vector, then b_i := E[(Az)_i] equals (Ax*)_i by linearity of expectation, and is thus at most y*. Defining k = ⌈y* · H(y*, 1/(et))⌉ and events E_1, E_2, . . . , E_m by E_i ≡ "(Az)_i ≥ b_i + k", we decompose each E_i as suggested by Theorem 3.4(a): with S(1), S(2), . . . , S(u) denoting the k-element subsets of the variables appearing in row i of A, and Z_{i,v} the contribution of variable v to (Az)_i, define
C_{i,j} := (∏_{v∈S(j)} Z_{i,v}) / \binom{b_i + k}{k}. (5)
We now need to show that the r.v.s C_{i,j} satisfy the conditions of Theorem 3.1. For any i ∈ [m], let δ_i = k/b_i. Since b_i ≤ y*, we have, for each i ∈ [m], G(b_i, δ_i) ≤ G(y*, k/y*) ≤ 1/(ekt), by Proposition 4.1 and our choice of k. Theorem 3.4(a) shows that for any nonempty event Z, Pr(E_i | Z) ≤ Σ_{j∈[u]} E[C_{i,j} | Z]. Also, p_i := Σ_{j∈[u]} E[C_{i,j}] < G(b_i, δ_i) ≤ 1/(ekt). Next, since any C_{i,j} involves (a product of) k terms, each of which "depends" on at most (t − 1) of the events {E_v : v ∈ ([m] − {i})} by definition of t, we see the important
Fact 4.4. ∀i ∈ [m] ∀j ∈ [u], C_{i,j} ∈ [0, 1], and C_{i,j} "depends" on at most d = k(t − 1) of the set of events {E_v : v ∈ ([m] − {i})}.
Thus e·p_i·(d + 1) ≤ 1 holds for all i, and Theorem 3.1 shows that Pr(⋀_{i∈[m]} ¬E_i) > 0; i.e., an integral solution of value at most y* + k exists, as claimed.
Theorem 4.2 gives good results if t ≪ m, but can we improve it further, say by replacing t by a (≤ t) in it? As seen from (4), the key reason for t ≫ a^{Θ(1)} is that max_{i∈[n]} ℓ_i ≫ a^{Θ(1)}. If we can essentially "bring down" max_{i∈[n]} ℓ_i by forcing many x*_{i,j} to be zero for each i, then we effectively reduce t (t ≤ a · max_i ℓ_i, see (4)); this is so since only those x*_{i,j} that are neither zero nor one take part in the rounding. A way of bootstrapping Theorem 4.2 to achieve this is shown by:
Theorem 4.5. Given an MIP conforming to Definition 2.1, there exists an integral solution of value at most y* + O(min{y*, m} · H(min{y*, m}, 1/a)) + O(1).
Proof. Let K_0 > 0 be a sufficiently large absolute constant. Now if
(y* ≥ t^{1/7}) or (t ≤ max{K_0, 2}) or (t ≤ a^4) (6)
holds, then we will be done by Theorem 4.2. So we may assume that (6) is false. Also, if y* ≤ t^{−1/7}, Theorem 4.2 guarantees an integral solution of value O(1); thus, we also suppose that y* > t^{−1/7}. The basic idea now is, as sketched above, to set many x*_{i,j} to zero for each i (without losing too much on y*), so that max_i ℓ_i and hence t will essentially get reduced. Such an approach, whose performance will be validated by arguments similar to those of Theorem 4.2, is repeatedly applied until (6) holds, owing to the (continually reduced) t becoming small enough to satisfy (6). There are two cases:
Case I: y* ≥ 1. Solve the LP relaxation, and set x'_{i,j} := (y*)^2 (log^5 t) x*_{i,j}. Conduct randomized rounding on the x'_{i,j} now, rounding each x'_{i,j} independently to z_{i,j} ∈ {⌊x'_{i,j}⌋, ⌈x'_{i,j}⌉}. (Note the key difference from Theorem 4.2, where for each i, we round exactly one x*_{i,j} to 1.)
Let K_1 > 0 be a sufficiently large absolute constant. We now use ideas similar to those used in our proof of Theorem 4.2 to show that with nonzero probability, we have both of the following:
∀i ∈ [m], (Az)_i ≤ (y*)^3 log^5 t · (1 + K_1/((y*)^{1.5} log^2 t)), and (7)
∀i ∈ [n], |Σ_j z_{i,j} − (y*)^2 log^5 t| ≤ K_1 y* log^3 t. (8)
To show this, we proceed as follows. Let E_1, E_2, . . . , E_m be the "bad" events, one for each event in (7) not holding; similarly, let E_{m+1}, E_{m+2}, . . . , E_{m+n} be the "bad" events, one for each event in (8) not holding. We want to use our extended LLL to show that with positive probability, all these bad events can be avoided; specifically, we need a way of decomposing each E_i into a finite number of non-negative r.v.s C_{i,j}. For each event E_{m+ℓ} where ℓ ≥ 1, we define just one r.v. C_{m+ℓ,1}: this is the indicator variable for the occurrence of E_{m+ℓ}. For the events E_i where i ≤ m, we decompose E_i into r.v.s C_{i,j} just as in (5): each such C_{i,j} is now a scalar multiple of a product of at most O((y*)^3 log^5 t/((y*)^{1.5} log^2 t)) = O((y*)^{1.5} log^3 t) = O(t^{1.5/7} log^3 t) independent binary r.v.s that underlie our randomized rounding; the second equality (big-Oh bound) here follows since (6) has been assumed to not hold. Thus, it is easy to see that for all i, 1 ≤ i ≤ m + n, and for any j, the r.v. C_{i,j} depends on at most
O(t · t^{1.5/7} log^3 t) (9)
events E_k, where k ≠ i. Also, as in our proof of Theorem 4.2, Theorem 3.4 gives a direct proof of requirement (ii) of Theorem 3.1; part (b) of Theorem 3.4 shows that for any desired constant K, we can choose the constant K_1 large enough so that for all i, Σ_j E[C_{i,j}] ≤ t^{−K}. Thus, in view of (9), we see by Theorem 3.1 that Pr(⋀_{i=1}^{m+n} ¬E_i) > 0. Fix a rounding z satisfying (7) and (8). For each i ∈ [n] and j ∈ [ℓ_i], we renormalize as follows: x''_{i,j} := z_{i,j}/Σ_u z_{i,u}. Thus we have Σ_u x''_{i,u} = 1 for all i; we now see that we have two very useful properties. First, since Σ_j z_{i,j} ≥ (y*)^2 log^5 t · (1 − O(1/(y* log^2 t))) for all i from (8), we have
∀i ∈ [m], (Ax'')_i ≤ y* (1 + O(1/((y*)^{1.5} log^2 t))) / (1 − O(1/(y* log^2 t))) ≤ y* (1 + O(1/(y* log^2 t))). (10)
Second, since the z_{i,j} are non-negative integers summing to at most (y*)^2 log^5 t · (1 + O(1/(y* log^2 t))), at most O((y*)^2 log^5 t) values x''_{i,j} are nonzero, for each i ∈ [n]. Thus, by losing a little in y* (see (10)), our "scaling up-rounding-scaling down" method has given a fractional solution x'' with a much-reduced ℓ_i for each i; ℓ_i is now O((y*)^2 log^5 t), essentially. Thus, t has been reduced to O(a(y*)^2 log^5 t); i.e., t has been reduced to at most
K_2 t^{1/4 + 2/7} log^5 t (11)
for some constant K_2 > 0 that is independent of K_0, since (6) was assumed false. Repeating this scheme O(log log t) times makes t small enough to satisfy (6). More formally, define t_0 = t, and t_{i+1} = K_2 t_i^{1/4 + 2/7} log^5 t_i for i ≥ 0. Stop this sequence at the first point where either t = t_i satisfies (6), or t_{i+1} ≥ t_i holds. Thus, we finally have t small enough to satisfy (6) or to be bounded by some absolute constant. How much has max_{i∈[m]} (Ax)_i increased in the process? By (10), we see that at the end,
max_{i∈[m]} (Ax)_i ≤ y* · ∏_{j≥0} (1 + O(1/(y* log^2 t_j))) ≤ y* · e^{O(Σ_{j≥0} 1/(y* log^2 t_j))} ≤ y* + O(1), (12)
since the values log t j decrease geometrically and are lower-bounded by some absolute positive constant. We may now apply Theorem 4.2.
Case II: t^{−1/7} < y* < 1. The idea is the same here, with the scaling up of x*_{i,j} being by (log^5 t)/y*; the same "scaling up-rounding-scaling down" method works out. Since the ideas are very similar to Case I, we only give a proof sketch here. We now scale up all the x*_{i,j} first by (log^5 t)/y* and do a randomized rounding. The analogs of (7) and (8) are now:
∀i ∈ [m], (Az)_i ≤ (log^5 t) · (1 + K'_1/log^2 t), and (13)
∀i ∈ [n], |Σ_j z_{i,j} − (log^5 t)/y*| ≤ K'_1 log^3 t/√(y*). (14)
Proceeding identically as in Case I, we can show that with positive probability, (13) and (14) hold simultaneously. Fix a rounding where these two properties hold, and renormalize as before: x''_{i,j} := z_{i,j}/Σ_u z_{i,u}. Since (13) and (14) hold, it is easy to show that the following analogs of (10) and (11) hold:
(Ax'')_i ≤ y* (1 + O(1/log^2 t)) / (1 − O(√(y*)/log^2 t)) ≤ y* (1 + O(1/log^2 t));
and t has been reduced to O(a log^5 t/y*), i.e., to O(t^{1/4 + 1/7} log^5 t).
We thus only need O(log log t) iterations, again. Also, the analog of (12) now is that
max_{i∈[m]} (Ax)_i ≤ y* · ∏_{j≥0} (1 + O(1/log^2 t_j)) ≤ y* · e^{O(Σ_{j≥0} 1/log^2 t_j)} ≤ y* + O(1).
This completes the proof.
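To make the scaling up-rounding-scaling down step concrete, here is a minimal Python sketch of one Case-I iteration (illustrative only: constants are arbitrary, numpy is assumed, and the LLL-based guarantee that a good rounding exists is not modeled):

    import numpy as np

    def one_bootstrap_iteration(x_star, y_star, t, rng):
        # x_star[i][j] = x*_{i,j}, with sum_j x*_{i,j} = 1 for each block i.
        # Scale up by (y*)^2 * log^5(t), round each entry, then renormalize.
        scale = (y_star ** 2) * np.log(t) ** 5
        x_new = []
        for xi in x_star:
            xp = scale * np.asarray(xi, dtype=float)
            z = np.floor(xp) + (rng.random(xp.shape) < (xp - np.floor(xp)))
            x_new.append(z / z.sum())  # renormalized fractional solution x''
        return x_new  # tiny x*_{i,j} typically round to 0, shrinking l_i and hence t

    rng = np.random.default_rng(0)
    x = [[0.5, 0.3, 0.2, 0.0], [0.25, 0.25, 0.25, 0.25]]
    print(one_bootstrap_iteration(x, y_star=2.0, t=10_000, rng=rng))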
We now study our improvements for discrepancy-type problems, which are an important class of MIPs that, among other things, are useful in devising divide-and-conquer algorithms. Given is a set-system (X, F), where X = [n] and F = {D_1, D_2, . . . , D_M} ⊆ 2^X. Given a positive integer ℓ, the problem is to partition X into ℓ parts, so that each D_j is "split well": we want a function f : X → [ℓ] which minimizes max_{j∈[M], k∈[ℓ]} |{i ∈ D_j : f(i) = k}|. (The case ℓ = 2 is the standard set-discrepancy problem.) To motivate this problem, suppose we have a (di)graph (V, A); we want a partition of V into V_1, . . . , V_ℓ such that, ∀v ∈ V, the values {|N(v) ∩ V_k| : k ∈ [ℓ]} are "roughly the same", where N(v) is the (out-)neighborhood of v. See, e.g., [2,17] for how this helps construct divide-and-conquer approaches. This problem is naturally modeled by the above set-system problem.
Let ∆ be the degree of (X, F), i.e., max_{i∈[n]} |{j : i ∈ D_j}|, and let ∆' := max_{D_j∈F} |D_j|. Our problem is naturally written as an MIP with m = Mℓ, ℓ_i = ℓ for each i, and g = a = ∆, in the notation of Definition 2.1; y* = ∆'/ℓ here. The analysis of [32] gives an integral solution of value at most y*(1 + O(H(y*, 1/(Mℓ)))), while [18] presents a solution of value at most y* + ∆. Also, since any D_j ∈ F intersects at most (∆ − 1)∆' other elements of F, Lemma 1.1 shows that randomized rounding produces, with positive probability, a solution of value at most y*(1 + O(H(y*, 1/(e∆'∆ℓ)))). This is the approach taken by [16] for their case of interest: ∆ = ∆', ℓ = ∆/ log ∆. Theorem 4.5 shows the existence of an integral solution of value y*(1 + O(H(y*, 1/∆))) + O(1), i.e., removes the dependence on ∆'. This is an improvement on all the three results above. As a specific interesting case, suppose ℓ grows at most as fast as ∆'/ log ∆. Then we see that good integral solutions - those that grow at the rate of O(y*) or better - exist, and this was not known before. (The approach of [16] shows such a result for ℓ = O(∆'/ log(max{∆, ∆'})). Our bound of O(∆'/ log ∆) is always better than this, and especially so if ∆' ≫ ∆.)
Approximating Covering Integer Programs
One of the main ideas behind Theorem 3.1 was to extend the basic inductive proof behind the LLL by decomposing the "bad" events E i appropriately into the r.v.s C i,j . We now use this general idea in a different context, that of (multi-criteria) covering integer programs, with an additional crucial ingredient being a useful correlation inequality, the FKG inequality [15]. The reader is asked to recall the discussion of (multi-criteria) CIPs from § 2. We start with a discussion of randomized rounding for CIPs, the Chernoff lower-tail bound, and the FKG inequality in § 5.1. These lead to our improved, but nonconstructive, approximation bound for column-sparse (multi-criteria) CIPs, in § 5.2. This is then made constructive in § 5.3; we also discuss there what we view as novel about this constructive approach.
Preliminaries
Let us start with a simple and well-known approach to tail bounds. Suppose Y is a random variable and y is some value. Then, for any 0 ≤ δ < 1, we have
Pr(Y ≤ y) ≤ Pr((1 − δ)^Y ≥ (1 − δ)^y) ≤ E[(1 − δ)^Y] / (1 − δ)^y, (15)
where the latter inequality is a consequence of Markov's inequality.
We next set up some basic notions related to approximation algorithms for (multi-criteria) CIPs. Recall that in such problems, we have ℓ given non-negative vectors c_1, c_2, . . . , c_ℓ such that for all i, c_i ∈ [0, 1]^n with max_j c_{i,j} = 1; ℓ = 1 in the case of CIPs. Let x* = (x*_1, x*_2, . . . , x*_n) denote a given fractional solution that satisfies the system of constraints Ax ≥ b. We are not concerned here with how x* was found: typically, x* would be an optimal solution to the LP relaxation of the problem. (The LP relaxation is obvious if, e.g., ℓ = 1, or, say, if the given multi-criteria problem aims to minimize max_i c_i^T · x, or to keep each c_i^T · x bounded by some target value v_i.) We now consider how to round x* to some integral z so that:
(P1) the constraints Az ≥ b hold, and
(P2) for all i, c_i^T · z is "not much bigger" than c_i^T · x*: our approximation bound will be a measure of how small a "not much bigger value" we can achieve in this sense.
Let us now discuss the "standard" randomized rounding scheme for (multi-criteria) CIPs. We assume a fixed instance as well as x*, from now on. For an α > 1 to be chosen suitably, set x'_j = αx*_j, for each j ∈ [n]. We then construct a random integral solution z by setting, independently for each j ∈ [n],
z_j = ⌊x'_j⌋ + 1 with probability x'_j − ⌊x'_j⌋, and z_j = ⌊x'_j⌋ with probability 1 − (x'_j − ⌊x'_j⌋).
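In code, the standard scheme is just independent per-coordinate rounding of the scaled LP solution; a minimal sketch (assuming numpy, with α supplied by the analysis below):

    import numpy as np

    def standard_randomized_rounding(x_star, alpha, rng):
        # Scale the LP solution by alpha > 1, then round each coordinate
        # independently: z_j = floor(x'_j) + Bernoulli(frac(x'_j)).
        xp = alpha * np.asarray(x_star, dtype=float)
        frac = xp - np.floor(xp)
        return (np.floor(xp) + (rng.random(xp.shape) < frac)).astype(int)

    rng = np.random.default_rng(1)
    print(standard_randomized_rounding([0.2, 0.7, 1.4], alpha=1.5, rng=rng))
    # each output z_j equals floor(1.5 * x*_j) or that value plus one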
The aim then is to show that with positive (hopefully high) probability, (P1) and (P2) happen simultaneously. We now introduce some useful notation. For every j ∈ [n], let s_j = ⌊x'_j⌋. Let A_i denote the ith row of A, and let X_1, X_2, . . . , X_n ∈ {0, 1} be independent r.v.s with Pr(X_j = 1) = x'_j − s_j for all j. The bad event E_i that the ith constraint is violated by our randomized rounding is given by
E_i ≡ "A_i · X < µ_i(1 − δ_i)", where µ_i = E[A_i · X] and δ_i = 1 − (b_i − A_i · s)/µ_i.
We now bound Pr(E_i) for all i, when the standard randomized rounding is used. Define g(B, α) := (α · e^{−(α−1)})^B. Then:
Lemma 5.1. Under standard randomized rounding, for all i,
Pr(E_i) ≤ E[(1 − δ_i)^{A_i·X}] / (1 − δ_i)^{(1−δ_i)µ_i} ≤ g(B, α) ≤ e^{−B(α−1)^2/(2α)}.
Proof. The first inequality follows from (15). Next, the Chernoff–Hoeffding lower-tail approach [4,27] shows that
E[(1 − δ_i)^{A_i·X}] / (1 − δ_i)^{(1−δ_i)µ_i} ≤ (e^{−δ_i}/(1 − δ_i)^{1−δ_i})^{µ_i}.
It is observed in [36] (and is not hard to see) that this latter quantity is maximized when s_j = 0 for all j, and when each b_i equals its minimum value of B. Thus we see that Pr(E_i) ≤ g(B, α). The inequality g(B, α) ≤ e^{−B(α−1)^2/(2α)} for α ≥ 1 is well-known and easy to verify via elementary calculus.
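As a worked check of the extremal case just mentioned (s = 0 and b_i = B, so that µ_i = αB and δ_i = 1 − 1/α), the lower-tail expression indeed collapses to g(B, α); in LaTeX:

    \left(\frac{e^{-\delta_i}}{(1-\delta_i)^{1-\delta_i}}\right)^{\mu_i}
      = \left(e^{-(1-1/\alpha)}\,\alpha^{1/\alpha}\right)^{\alpha B}
      = e^{-(\alpha-1)B}\,\alpha^{B}
      = \left(\alpha\, e^{-(\alpha-1)}\right)^{B}
      = g(B,\alpha).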
Next, the FKG inequality is a useful correlation inequality, a special case of which is as follows [15]. Given binary vectors a = (a_1, a_2, . . . , a_ℓ) ∈ {0, 1}^ℓ and b = (b_1, b_2, . . . , b_ℓ) ∈ {0, 1}^ℓ, let us partially order them by coordinate-wise domination: a ⪯ b iff a_i ≤ b_i for all i. Now suppose Y_1, Y_2, . . . , Y_ℓ are independent r.v.s, each taking values in {0, 1}. Let Y denote the vector (Y_1, Y_2, . . . , Y_ℓ). Suppose an event A is completely defined by the value of Y. Define A to be increasing iff: for all a ∈ {0, 1}^ℓ such that A holds when Y = a, A also holds when Y = b, for any b such that a ⪯ b. Analogously, event A is decreasing iff: for all a ∈ {0, 1}^ℓ such that A holds when Y = a, A also holds when Y = b, for any b ⪯ a. The FKG inequality proves certain intuitively appealing bounds:
Lemma 5.2. Let I_1, I_2, . . . be increasing events and D_1, D_2, . . . be decreasing events, each completely determined by Y. Then, for any i and any index set S:
(i) Pr(I_i | ⋀_{j∈S} I_j) ≥ Pr(I_i) and Pr(D_i | ⋀_{j∈S} D_j) ≥ Pr(D_i);
(ii) Pr(I_i | ⋀_{j∈S} D_j) ≤ Pr(I_i) and Pr(D_i | ⋀_{j∈S} I_j) ≤ Pr(D_i).
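A quick Monte-Carlo illustration of part (i) (a hypothetical toy, assuming numpy): with Y_1, Y_2, Y_3 i.i.d. Bernoulli(1/2), the increasing events I_1 = {Y_1 + Y_2 ≥ 1} and I_2 = {Y_2 + Y_3 ≥ 1} are positively correlated.

    import numpy as np

    rng = np.random.default_rng(2)
    Y = rng.integers(0, 2, size=(200_000, 3))  # independent Bernoulli(1/2) bits
    I1 = Y[:, 0] + Y[:, 1] >= 1                # increasing in Y
    I2 = Y[:, 1] + Y[:, 2] >= 1                # increasing in Y

    print(I1.mean())      # ~0.75  = Pr(I1)
    print(I1[I2].mean())  # ~0.833 = Pr(I1 | I2) >= Pr(I1), as Lemma 5.2(i) asserts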
Returning to our random variables X_j and events E_i, we get the following lemma as an easy consequence of the FKG inequality, since each event of the form "¬E_i" or "X_j = 1" is an increasing event as a function of the vector (X_1, X_2, . . . , X_n):
Lemma 5.3. For all B_1, B_2 ⊆ [m] such that B_1 ∩ B_2 = ∅, and for any B_3 ⊆ [n],
Pr(⋀_{i∈B_1} ¬E_i | (⋀_{j∈B_2} ¬E_j) ∧ (⋀_{k∈B_3} (X_k = 1))) ≥ ∏_{i∈B_1} Pr(¬E_i).
Nonconstructive approximation bounds for (multi-criteria) CIPs
Definition 5.4. (The function R) For any s and any j 1 < j 2 < · · · < j s , let R(j 1 , j 2 , . . . , j s ) be the set of indices i such that row i of the constraint system "Ax ≥ b" has at least one of the variables j k , 1 ≤ k ≤ s, appearing with a nonzero coefficient. (Note from the definition of a in Defn. 2.2, that |R(j 1 , j 2 , . . . , j s )| ≤ a · s.)
Let the vector x * = (x * 1 , x * 2 , . . . , x * n ), the parameter α > 1, and the "standard" randomized rounding scheme, be as defined in § 5.1. The standard rounding scheme is sufficient for our (nonconstructive) purposes now; we generalize this scheme as follows, for later use in § 5.3.
Definition 5.5. (General randomized rounding) Given a vector p = (p 1 , p 2 , . . . , p n ) ∈ [0, 1] n , the general randomized rounding with parameter p generates independent random variables X 1 , X 2 , . . . , X n ∈ {0, 1} with Pr(X j = 1) = p j ; the rounded vector z is defined by z j = ⌊αx * j ⌋ + X j for all j. (As in the standard rounding, we set each z j to be either ⌊αx * j ⌋ or ⌈αx * j ⌉; the standard rounding is the special case in which E[z j ] = αx * j for all j.)
We now present an important lemma, Lemma 5.6, to get correlation inequalities which "point" in the "direction" opposite to FKG. Some ideas from the proof of Lemma 1.1 will play a crucial role in our proof of this lemma.
Lemma 5.6. Suppose we employ general randomized rounding with some parameter p, and that Pr(⋀_{i=1}^{m} ¬E_i) is nonzero under this rounding. The following hold for any q and any 1 ≤ j_1 < j_2 < ··· < j_q ≤ n.
(i)
Pr(X_{j_1} = X_{j_2} = ··· = X_{j_q} = 1 | ⋀_{i=1}^{m} ¬E_i) ≤ (∏_{t=1}^{q} p_{j_t}) / (∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i))); (16)
the events E_i ≡ ((Az)_i < b_i) are defined here w.r.t. the general randomized rounding.
(ii) In the special case of standard randomized rounding,
∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i)) ≥ (1 − g(B, α))^{aq}; (17)
the function g is as defined in Lemma 5.1.
Proof. (i) Note first that if we wanted a lower bound on the l.h.s., the FKG inequality would immediately imply that the l.h.s. is at least p_{j_1} p_{j_2} ··· p_{j_q}. We get around this "correlation problem" as follows. Let Q = R(j_1, j_2, . . . , j_q), and let Q' = [m] − Q. Let Z_1 ≡ ⋀_{i∈Q} ¬E_i and Z_2 ≡ ⋀_{i∈Q'} ¬E_i. Letting Y = ∏_{t=1}^{q} X_{j_t}, note that
|Q| ≤ aq, and (18)
Y is independent of Z_2. (19)
Now,
Pr(Y = 1 | (Z_1 ∧ Z_2)) = Pr(((Y = 1) ∧ Z_1) | Z_2) / Pr(Z_1 | Z_2)
≤ Pr((Y = 1) | Z_2) / Pr(Z_1 | Z_2)
= Pr(Y = 1) / Pr(Z_1 | Z_2) (by (19))
≤ (∏_{t=1}^{q} Pr(X_{j_t} = 1)) / (∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i))) (by Lemma 5.3).
(ii) We get (17) from Lemma 5.1 and (18).
We will use Lemmas 5.3 and 5.6 to prove Theorem 5.9. As a warmup, let us start with a result for the special case of CIPs; recall that y* denotes c^T · x*. Theorem 5.7. For any given CIP, suppose we choose α, β > 1 such that β(1 − g(B, α))^a > 1. Then, there exists a feasible solution of value at most y*αβ. In particular, there is an absolute constant K > 0 such that if α, β > 1 are chosen as:
α = K · ln(a + 1)/B and β = 2, if ln(a + 1) ≥ B, and (20)
α = β = 1 + K · √(ln(a + 1)/B), if ln(a + 1) < B; (21)
then, there exists a feasible solution of value at most y*αβ. Thus, the integrality gap is at most 1 + O(max{ln(a + 1)/B, √(ln(a + 1)/B)}).
Proof. Conduct standard randomized rounding, and let E be the event that c^T · z > y*αβ. Setting Z ≡ ⋀_{i∈[m]} ¬E_i and µ := E[c^T · z] = y*α, we see by Markov's inequality that Pr(E | Z) is at most R = (Σ_{j=1}^{n} c_j Pr(X_j = 1 | Z))/(µβ). Note that Pr(Z) > 0 since α > 1; so, we now seek to make R < 1, which will complete the proof. Lemma 5.6 shows that
R ≤ (Σ_j c_j·p_j) / (µβ · (1 − g(B, α))^a) = 1/(β(1 − g(B, α))^a);
thus, the condition β(1 − g(B, α))^a > 1 suffices. Simple algebra shows that choosing α, β > 1 as in (20) and (21) ensures that β(1 − g(B, α))^a > 1.
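A small numeric sketch of the parameter choice (20)/(21) (the constant K below is an arbitrary stand-in; the theorem only asserts that some absolute constant works):

    import math

    def g(B, alpha):
        return (alpha * math.exp(-(alpha - 1))) ** B

    def choose_alpha_beta(a, B, K=6.0):
        # (20): if ln(a+1) >= B, take alpha = K*ln(a+1)/B and beta = 2;
        # (21): otherwise alpha = beta = 1 + K*sqrt(ln(a+1)/B).
        if math.log(a + 1) >= B:
            return K * math.log(a + 1) / B, 2.0
        alpha = 1 + K * math.sqrt(math.log(a + 1) / B)
        return alpha, alpha

    for a, B in [(10, 1), (10, 100)]:
        alpha, beta = choose_alpha_beta(a, B)
        print(a, B, beta * (1 - g(B, alpha)) ** a > 1)  # Theorem 5.7's condition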
The basic approach of our proof of Theorem 5.7 is to follow the main idea of Theorem 3.1, and to decompose the event "E | Z" into a non-negative linear combination of events of the form "X_j = 1 | Z"; we then exploited the fact that each X_j depends on at most a of the events comprising Z. We now extend Theorem 5.7 and also generalize to multi-criteria CIPs. Instead of employing just a "first moment method" (Markov's inequality) as in the proof of Theorem 5.7, we will work with higher moments: the functions S_k defined in (1) and used in Theorem 3.4. Suppose some parameters λ_i > 0 are given, and that our goal is to round x* to z so that the event
A ≡ "(Az ≥ b) ∧ (∀i, c_i^T · z ≤ λ_i)" (22)
holds. We first give a sufficient condition for this to hold, in Theorem 5.9; we then derive some concrete consequences in Corollary 5.10. We need one further definition before presenting Theorem 5.9. Recall that A_i and b_i respectively denote the ith row of A and the ith component of b. Also, the vector s and the values δ_i will throughout be as in the definition of standard randomized rounding.
Definition 5.8. (The functions ch and ch') Suppose we conduct general randomized rounding with some parameter p; i.e., let X_1, X_2, . . . , X_n be independent binary random variables such that Pr(X_j = 1) = p_j. For each i ∈ [m], define
ch_i(p) := E[(1 − δ_i)^{A_i·X}] / (1 − δ_i)^{b_i − A_i·s} = (∏_{j∈[n]} E[(1 − δ_i)^{A_{i,j} X_j}]) / (1 − δ_i)^{b_i − A_i·s}, and ch'_i(p) := min{ch_i(p), 1}.
(Note from (15) that if we conduct general randomized rounding with parameter p, then Pr((Az)_i < b_i) ≤ ch'_i(p) ≤ ch_i(p); also, "ch" stands for "Chernoff–Hoeffding".) Theorem 5.9. Suppose we are given a multi-criteria CIP, as well as some parameters λ_1, λ_2, . . . , λ_ℓ > 0. Let A be as in (22). Then, for any sequence of positive integers (k_1, k_2, . . . , k_ℓ) such that k_i ≤ λ_i, the following hold.
(i) Suppose we employ general randomized rounding with parameter p = (p_1, p_2, . . . , p_n). Then, Pr(A) is at least
Φ(p) := ∏_{r∈[m]} (1 − ch'_r(p)) − Σ_{i=1}^{ℓ} (1/\binom{λ_i}{k_i}) · Σ_{j_1 < ··· < j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t}·p_{j_t}) · ∏_{r∉R(j_1,...,j_{k_i})} (1 − ch'_r(p)). (23)
(ii) Suppose we employ the standard randomized rounding to get a rounded vector z. Let λ_i = ν_i(1 + γ_i) for each i ∈ [ℓ], where ν_i = E[c_i^T · z] = α · (c_i^T · x*) and γ_i > 0 is some parameter. Then,
Φ(p) ≥ (1 − g(B, α))^m · [1 − Σ_{i=1}^{ℓ} \binom{n}{k_i} · (ν_i/n)^{k_i} / \binom{ν_i(1+γ_i)}{k_i} · (1 − g(B, α))^{−a·k_i}]. (24)
In particular, if the r.h.s. of (24) is positive, then Pr(A) > 0 for standard randomized rounding.
The proof is a simple generalization of that of Theorem 5.7, and is deferred to Section 5.4. Theorem 5.7 is the special case of Theorem 5.9 corresponding to ℓ = k 1 = 1. To make the general result of Theorem 5.9 more concrete, we now study an additional special case. We present this special case as one possible "proof of concept", rather than as an optimized one; e.g., the constant "3" in the bound "c T i · z ≤ 3ν i " can be improved.
Corollary 5.10. There is an absolute constant K' > 0 such that the following holds. Suppose we are given a multi-criteria CIP with notation as in part (ii) of Theorem 5.9. Define α = K' · max{(ln a + ln ln(2ℓ))/B, 1}. Now if ν_i ≥ log^2(2ℓ) for all i ∈ [ℓ], then standard randomized rounding produces a feasible solution z such that c_i^T · z ≤ 3ν_i for all i, with positive probability. In particular, this can be shown by setting k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i, in part (ii) of Theorem 5.9.
Proof. Let us employ Theorem 5.9(ii) with k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i. We just need to establish that the r.h.s. of (24) is positive. We need to show that
Σ_{i=1}^{ℓ} \binom{n}{k_i} · (ν_i/n)^{k_i} / \binom{3ν_i}{k_i} · (1 − g(B, α))^{−a·k_i} < 1;
it is sufficient to prove that for all i,
(ν_i^{k_i}/k_i!) / \binom{3ν_i}{k_i} · (1 − g(B, α))^{−a·k_i} < 1/ℓ. (25)
We make two observations now.
• Since k_i ∼ ln ℓ and ν_i ≥ log^2(2ℓ),
\binom{3ν_i}{k_i} = (1/k_i!) · ∏_{j=0}^{k_i−1} (3ν_i − j) = (1/k_i!) · (3ν_i)^{k_i} · e^{−Θ(Σ_{j=0}^{k_i−1} j/ν_i)} = Θ((1/k_i!) · (3ν_i)^{k_i}).
• (1 − g(B, α))^{−a·k_i} can be made arbitrarily close to 1 by choosing the constant K' large enough.
These two observations establish (25).
Constructive version
It can be shown that for many problems, randomized rounding produces the solutions shown to exist by Theorem 5.7 and Theorem 5.9 with very low probability: e.g., probability almost exponentially small in the input size. Thus we need to obtain constructive versions of these theorems. Our method will be a deterministic procedure that makes O(n) calls to the function Φ(·), in addition to poly(n, m) work. Now, if k' denotes the maximum of all the k_i, we see that Φ can be evaluated in poly(n^{k'}, m) time. Thus, our overall procedure runs in poly(n^{k'}, m) time. In particular, we get constructive versions of Theorem 5.7 and Corollary 5.10 that run in time poly(n, m) and poly(n^{log ℓ}, m), respectively.
Our approach is as follows. We start with a vector p that corresponds to standard randomized rounding, for which we know (say, as argued in Corollary 5.10) that Φ(p) > 0. In general, we have a vector of probabilities p = (p 1 , p 2 , . . . , p n ) such that Φ(p) > 0. If p ∈ {0, 1} n , we are done. Otherwise suppose some p j lies in (0, 1); by renaming the variables, we will assume without loss of generality that j = n. Define p ′ = (p 1 , p 2 , . . . , p n−1 , 0) and p ′′ = (p 1 , p 2 , . . . , p n−1 , 1). The main fact we wish to show is that Φ(p ′ ) > 0 or Φ(p ′′ ) > 0: we can then set p n to 0 or 1 appropriately, and continue. (As mentioned in the previous paragraph, we thus have O(n) calls to the function Φ(·) in total.) Note that although some of the p j will lie in {0, 1}, we can crucially continue to view the X j as independent random variables with Pr(X j = 1) = p j .
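The overall derandomization loop, with Φ serving as the (generalized) pessimistic estimator, can be sketched as follows; this is an illustrative Python rendering in which Phi is assumed to be a callable implementing (23) for the instance at hand (polynomial-time for constant k'):

    def derandomize(p, Phi):
        # Fix fractional coordinates one at a time, keeping Phi positive.
        # Invariant (established via (34) below): if Phi(p) > 0 and p_j is
        # fractional, then setting p_j to 0 or to 1 keeps Phi positive.
        p = list(p)
        assert Phi(p) > 0
        for j in range(len(p)):
            if 0 < p[j] < 1:
                for bit in (0.0, 1.0):
                    q = p[:j] + [bit] + p[j + 1:]
                    if Phi(q) > 0:
                        p = q
                        break
                else:
                    raise AssertionError("impossible if (34) holds")
        return p  # 0/1 vector; the rounded solution is z_j = floor(alpha*x*_j) + p_j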
So, our main goal is: assuming that
p_n ∈ (0, 1) and Φ(p) > 0, (26)
to show that Φ(p') > 0 or Φ(p'') > 0. In order to do so, we make some observations and introduce some simplifying notation. Define, for each i ∈ [m]:
q_i = ch'_i(p), q'_i = ch'_i(p'), and q''_i = ch'_i(p'').
Also define the vectors q := (q_1, q_2, . . . , q_m), q' := (q'_1, q'_2, . . . , q'_m), and q'' := (q''_1, q''_2, . . . , q''_m). We now present a useful lemma about these vectors:
Lemma 5.11. For all i ∈ [m], we have:
0 ≤ q''_i ≤ q'_i ≤ 1; (27)
q_i ≥ p_n·q''_i + (1 − p_n)·q'_i; and (28)
q'_i = q''_i = q_i if i ∉ R(n). (29)
Proof. The proofs of (27) and (29) are straightforward. As for (28), we proceed as in [36]. First of all, if q_i = 1, then we are done, since q''_i, q'_i ≤ 1. So suppose q_i < 1; in this case, q_i = ch_i(p). Now, Definition 5.8 shows that ch_i(p) = p_n·ch_i(p'') + (1 − p_n)·ch_i(p'). Therefore,
q_i = ch_i(p) = p_n·ch_i(p'') + (1 − p_n)·ch_i(p') ≥ p_n·ch'_i(p'') + (1 − p_n)·ch'_i(p').
Since we are mainly concerned with the vectors p, p' and p'' now, we will view the values p_1, p_2, . . . , p_{n−1} as arbitrary but fixed, subject to (26). The function Φ(·) now has a simple form; to see this, we first define, for a vector r = (r_1, r_2, . . . , r_m) and a set U ⊆ [m],
f(U, r) := ∏_{i∈U} (1 − r_i).
Recall that p_1, p_2, . . . , p_{n−1} are considered as constants now. Then, it is evident from (23) that there exist constants u_1, u_2, . . . , u_t and v_1, v_2, . . . , v_{t'}, as well as subsets U_1, U_2, . . . , U_t and V_1, V_2, . . . , V_{t'} of [m], such that:
Φ(p) = f([m], q) − (Σ_i u_i · f(U_i, q)) − (p_n · Σ_j v_j · f(V_j, q)); (30)
Φ(p') = f([m], q') − (Σ_i u_i · f(U_i, q')) − (0 · Σ_j v_j · f(V_j, q')) = f([m], q') − Σ_i u_i · f(U_i, q'); (31)
Φ(p'') = f([m], q'') − (Σ_i u_i · f(U_i, q'')) − (1 · Σ_j v_j · f(V_j, q'')) = f([m], q'') − (Σ_i u_i · f(U_i, q'')) − (Σ_j v_j · f(V_j, q'')). (32)
Importantly, we also have the following:
the constants u_i and v_j are non-negative; ∀j, V_j ∩ R(n) = ∅. (33)
Recall that our goal is to show that Φ(p') > 0 or Φ(p'') > 0. We will do so by proving that
Φ(p) ≤ p_n·Φ(p'') + (1 − p_n)·Φ(p'). (34)
Let us use the equalities (30), (31), and (32). In view of (29) and (33), the term "−p_n · Σ_j v_j · f(V_j, q)" on both sides of the inequality (34) cancels; defining
Δ(U) := (1 − p_n)·f(U, q') + p_n·f(U, q'') − f(U, q),
inequality (34) reduces to
Δ([m]) − Σ_i u_i·Δ(U_i) ≥ 0. (35)
Before proving this, we pause to note a challenge we face. Suppose we only had to show that, say, Δ([m]) is non-negative; this is exactly the issue faced in [36]. Then, we will immediately be done by part (i) of Lemma 5.12, which states that Δ(U) ≥ 0 for any set U. However, (35) also has terms such as "u_i · Δ(U_i)" with a negative sign in front. To deal with this, we need something more than just that Δ(U) ≥ 0 for all U; we handle this by part (ii) of Lemma 5.12. We view this as the main novelty in our constructive version here.
Lemma 5.12. (i) For all U ⊆ [m], Δ(U) ≥ 0. (ii) For all U ⊆ U' ⊆ [m], Δ(U)/f(U, q) ≤ Δ(U')/f(U', q).
Given Lemma 5.12, we can establish (35):
Δ([m]) − Σ_i u_i·Δ(U_i) ≥ Δ([m]) − (Δ([m])/f([m], q)) · Σ_i u_i·f(U_i, q) (by part (ii) of Lemma 5.12, as the u_i are non-negative)
= (Δ([m])/f([m], q)) · (f([m], q) − Σ_i u_i·f(U_i, q))
≥ 0 (by (26) and (30)),
since (30) and (33) give f([m], q) − Σ_i u_i·f(U_i, q) ≥ Φ(p) > 0, while Δ([m]) ≥ 0 by part (i).
Thus we have (35).
Proof of Lemma 5.12. It suffices to show the following. Assume U ≠ [m]; suppose u ∈ ([m] − U) and that U' = U ∪ {u}. Assuming by induction on |U| that Δ(U) ≥ 0, we show that Δ(U') ≥ 0, and that Δ(U)/f(U, q) ≤ Δ(U')/f(U', q). It is easy to check that this way, we will prove both claims of the lemma.
The base case of the induction is that |U| ∈ {0, 1}, where Δ(U) ≥ 0 is directly seen by using (28). Suppose inductively that Δ(U) ≥ 0. Using the definition of Δ(U) and the fact that f(U', q) = (1 − q_u)·f(U, q), we have
f(U', q) = (1 − q_u) · [(1 − p_n)·f(U, q') + p_n·f(U, q'') − Δ(U)]
≤ (1 − (1 − p_n)·q'_u − p_n·q''_u) · [(1 − p_n)·f(U, q') + p_n·f(U, q'')] − (1 − q_u)·Δ(U),
where this last inequality is a consequence of (28). Therefore, using the definition of Δ(U') and the facts f(U', q') = (1 − q'_u)·f(U, q') and f(U', q'') = (1 − q''_u)·f(U, q''),
Δ(U') = (1 − p_n)·(1 − q'_u)·f(U, q') + p_n·(1 − q''_u)·f(U, q'') − f(U', q)
≥ (1 − p_n)·(1 − q'_u)·f(U, q') + p_n·(1 − q''_u)·f(U, q'') + (1 − q_u)·Δ(U) − (1 − (1 − p_n)·q'_u − p_n·q''_u) · [(1 − p_n)·f(U, q') + p_n·f(U, q'')]
= (1 − q_u)·Δ(U) + p_n·(1 − p_n)·(f(U, q'') − f(U, q'))·(q'_u − q''_u)
≥ (1 − q_u)·Δ(U) (by (27)).
So, since we assumed that Δ(U) ≥ 0, we get Δ(U') ≥ 0; furthermore, we get that Δ(U') ≥ (1 − q_u)·Δ(U), which implies that Δ(U')/f(U', q) ≥ Δ(U)/f(U, q).
Proof of Theorem 5.9
(i) Let E_r ≡ ((Az)_r < b_r) be defined w.r.t. general randomized rounding with parameter p; as observed in Definition 5.8, Pr(E_r) ≤ ch'_r(p). Now if ch'_r(p) = 1 for some r, then part (i) is trivially true; so we assume that Pr(E_r) ≤ ch'_r(p) < 1 for all r. Let Z ≡ ⋀_{r∈[m]} ¬E_r; by the FKG inequality (Lemma 5.3), Pr(Z) ≥ ∏_{r∈[m]} (1 − Pr(E_r)).
Define, for i = 1, 2, . . . , ℓ, the "bad" event E_i ≡ (c_i^T · z > λ_i). Fix any i. Our plan is to show that
Pr(E_i | Z) ≤ (1/\binom{λ_i}{k_i}) · Σ_{j_1 < j_2 < ··· < j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t}·p_{j_t}) · [∏_{r∈R(j_1,j_2,...,j_{k_i})} (1 − Pr(E_r))]^{−1}. (36)
If we prove (36), then we will be done as follows. We have
Pr(A) ≥ Pr(Z) · (1 − Σ_i Pr(E_i | Z)) ≥ (∏_{r∈[m]} (1 − Pr(E_r))) · (1 − Σ_i Pr(E_i | Z)). (37)
Now, the term "( r∈ [m] (1 − Pr(E r )))" is a decreasing function of each of the values Pr(E r ); so is the lower bound on "− Pr(E i | Z)" obtained from (36). Hence, bounds (36) and (37), along with the bound Pr(E r ) ≤ ch ′ r (p), will complete the proof of part (i). We now prove (36) using Theorem 3.4(a) and Lemma 5.6. Recall the symmetric polynomials S k from (1). Define Y = S k i (c i,1 X 1 , c i,2 X 2 , . . . , c i,n X n )/ λ i k i . By Theorem 3.4(a), Pr(E i | Z) ≤ E[Y | Z]. Next, the typical term in E[Y | Z] can be upper bounded using Lemma 5.6:
E[(∏_{t=1}^{k_i} c_{i,j_t}·X_{j_t}) | ⋀_{r=1}^{m} ¬E_r] ≤ (∏_{t=1}^{k_i} c_{i,j_t}·p_{j_t}) / (∏_{r∈R(j_1,j_2,...,j_{k_i})} (1 − Pr(E_r))).
Thus we have (36), and the proof of part (i) is complete.
(ii) By part (i), Pr(A) is at least
κ := (∏_{r∈[m]} (1 − ch'_r(p))) · [1 − Σ_{i=1}^{ℓ} (1/\binom{λ_i}{k_i}) · Σ_{j_1 < ··· < j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t}·p_{j_t}) · (∏_{r∈R(j_1,...,j_{k_i})} (1 − ch'_r(p)))^{−1}]. (38)
Lemma 5.1 shows that under standard randomized rounding, ch'_r(p) ≤ g(B, α) < 1 for all r. So, the r.h.s. κ of (38) gets lower-bounded as follows:
κ ≥ (1 − g(B, α))^m · [1 − Σ_{i=1}^{ℓ} (1/\binom{ν_i(1+γ_i)}{k_i}) · Σ_{j_1 < ··· < j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t}·p_{j_t}) · [∏_{r∈R(j_1,...,j_{k_i})} (1 − g(B, α))]^{−1}]
≥ (1 − g(B, α))^m · [1 − Σ_{i=1}^{ℓ} (1/\binom{ν_i(1+γ_i)}{k_i}) · Σ_{j_1 < ··· < j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t}·p_{j_t}) · (1 − g(B, α))^{−a·k_i}]
≥ (1 − g(B, α))^m · [1 − Σ_{i=1}^{ℓ} \binom{n}{k_i}·(ν_i/n)^{k_i} / \binom{ν_i(1+γ_i)}{k_i} · (1 − g(B, α))^{−a·k_i}],
where the last line follows from Theorem 3.4(c). □
Conclusion
We have presented an extension of the LLL that basically helps reduce the "dependency" substantially in some settings; we have seen applications to two families of integer programming problems. It would be interesting to see how far these ideas can be pushed further. Two other open problems suggested by this work are: (i) developing a constructive version of our result for MIPs, and (ii) developing a poly(n, m)-time constructive version of Theorem 5.9, as opposed to the poly(n^{k'}, m)-time constructive version that we present in § 5.3. Finally, a very interesting question is to develop a theory of applications of the LLL that can be made constructive with (essentially) no loss. | 11,762
cs0307043 | 2953390777 | The Lovász Local Lemma due to Erdős and Lovász is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension/degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs. | Next, there is growing interest in multi-criteria optimization, since different participating individuals and/or organizations may have different objective functions in a given problem instance; see, e.g., @cite_32 . Motivated by this, we study multi-criteria optimization in the setting of covering problems: | {
"abstract": [
"We study problems in multiobjective optimization, in which solutions to a combinatorial optimization problem are evaluated with respect to several cost criteria, and we are interested in the trade-off between these objectives (the so-called Pareto curve). We point out that, under very general conditions, there is a polynomially succinct curve that spl epsiv -approximates the Pareto curve, for any spl epsiv >0. We give a necessary and sufficient condition under which this approximate Pareto curve can be constructed in time polynomial in the size of the instance and 1 spl epsiv . In the case of multiple linear objectives, we distinguish between two cases: when the underlying feasible region is convex, then we show that approximating the multi-objective problem is equivalent to approximating the single-objective problem. If however the feasible region is discrete, then we point out that the question reduces to an old and recurrent one: how does the complexity of a combinatorial optimization problem change when its feasible region is intersected with a hyperplane with small coefficients; we report some interesting new findings in this domain. Finally, we apply these concepts and techniques to formulate and solve approximately a cost-time-quality trade-off for optimizing access to the World-Wide Web, in a model first studied by (1996) (which was actually the original motivation for this work)."
],
"cite_N": [
"@cite_32"
],
"mid": [
"1928381443"
]
} | An Extension of the Lovász Local Lemma, and its Applications to Integer Programming * | The powerful Lovász Local Lemma (LLL) is often used to show the existence of rare combinatorial structures by showing that a random sample from a suitable sample space produces them with positive probability [14]; see Alon & Spencer [4] and Motwani & Raghavan [27] for several such applications. We present an extension of this lemma, and demonstrate applications to rounding fractional solutions for certain families of integer programs.
Let e denote the base of natural logarithms as usual. The symmetric case of the LLL shows that all of a set of "bad" events E_i can be avoided under some conditions: Lemma 1.1. ( [14]) Let E_1, E_2, . . . , E_m be any events with Pr(E_i) ≤ p ∀i. If each E_i is mutually independent of all but at most d of the other events E_j and if ep(d + 1) ≤ 1, then Pr(⋀_{i=1}^{m} ¬E_i) > 0.
Though the LLL is powerful, one problem is that the "dependency" d is high in some cases, precluding the use of the LLL if p is not small enough. We present a partial solution to this via an extension of the LLL (Theorem 3.1), which shows how to essentially reduce d for a class of events E i ; this works well when each E i denotes some random variable deviating "much" from its mean. In a nutshell, we show that such events E i can often be decomposed suitably into sub-events; although the sub-events may have a large dependency among themselves, we show that it suffices to have a small "bipartite dependency" between the set of events E i and the set of sub-events. This, in combination with some other ideas, leads to the following applications in integer programming.
It is well-known that a large number of NP-hard combinatorial optimization problems can be cast as integer linear programming problems (ILPs). Due to their NP-hardness, good approximation algorithms are of much interest for such problems. Recall that a ρ-approximation algorithm for a minimization problem is a polynomial-time algorithm that delivers a solution whose objective function value is at most ρ times optimal; ρ is usually called the approximation guarantee, approximation ratio, or performance guarantee of the algorithm. Algorithmic work in this area typically focuses on achieving the smallest possible ρ in polynomial time. One powerful paradigm here is to start with the linear programming (LP) relaxation of the given ILP wherein the variables are allowed to be reals within their integer ranges; once an optimal solution is found for the LP, the main issue is how to round it to a good feasible solution for the ILP.
Rounding results in this context often have the following strong property: they present an integral solution of value at most y * · ρ, where y * will throughout denote the optimal solution value of the LP relaxation.
Since the optimal solution value OP T of the ILP is easily seen to be lower-bounded by y * , such rounding algorithms are also ρ-approximation algorithms. Furthermore, they provide an upper bound of ρ on the ratio OP T /y * , which is usually called the integrality gap or integrality ratio of the relaxation; the smaller this value, the better the relaxation.
This work presents improved upper bounds on the integrality gap of the natural LP relaxation for two families of ILPs: minimax integer programs (MIPs) and covering integer programs (CIPs). (The precise definitions and results are presented in § 2.) For the latter, we also provide the corresponding polynomialtime rounding algorithms. Our main improvements are in the case where the coefficient matrix of the given ILP is column-sparse: i.e., the number of nonzero entries in every column is bounded by a given parameter a. There are classical rounding theorems for such column-sparse problems (e.g., Beck & Fiala [6], Karp, Leighton, Rivest, Thompson, Vazirani & Vazirani [18]). Our results complement, and are incomparable with, these results. Furthermore, the notion of column-sparsity, which denotes no variable occurring in "too many" constraints, occurs naturally in combinatorial optimization: e.g., routing using "short" paths, and problems on hypergraphs with "small" degree. These issues are discussed further in § 2.
A key technique, randomized rounding of linear relaxations, was developed by Raghavan & Thompson [32] to get approximation algorithms for such ILPs. We use Theorem 3.1 to prove that this technique produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrix of the given MIP/CIP is column-sparse. (In the case of MIPs, our algorithm iterates randomized rounding several times with different choices of parameters, in order to achieve our result.) Such results cannot be got via Lemma 1.1, as the dependency d, in the sense of Lemma 1.1, can be as high as Θ(m) for these problems. Roughly speaking, Theorem 3.1 helps show that if no column in our given ILP has more than a nonzero entries, then the dependency can essentially be brought down to a polynomial in a; this is the key driver behind our improvements.
Theorem 3.1 works well in combination with an idea that has blossomed in the areas of derandomization and pseudorandomness, in the last two decades: (approximately) decomposing a function of several variables into a sum of terms, each of which depends on only a few of these variables. Concretely, suppose Z is a sum of random variables Z_i. Many tools have been developed to upper-bound Pr(Z − E[Z] ≥ z) and Pr(|Z − E[Z]| ≥ z) even if the Z_i are only (almost) k-wise independent for some "small" k, rather than completely independent. The idea is to bound the probabilities by considering E[(Z − E[Z])^k] or similar expectations, which look at k or fewer of the Z_i at a time (via linearity of expectation). The main application of this has been that the Z_i can then be sampled using "few" random bits, yielding a derandomization/pseudorandomness result (e.g., [3,23,8,26,28,33]). Our results show that such ideas can in fact be used to show that some structures exist! This is one of our main contributions.
What about polynomial-time algorithms for our existential results? Typical applications of Lemma 1.1 are "nonconstructive" [i.e., do not directly imply (randomized) polynomial-time algorithmic versions], since the positive probability guaranteed by Lemma 1.1 can be exponentially small in the size of the input. However, certain algorithmic versions of the LLL have been developed starting with the seminal work of Beck [5]. These ideas do not seem to apply to our extension of the LLL, and hence our MIP result is nonconstructive. Following the preliminary version of this work [35], two main algorithmic versions related to our work have been obtained: (i) for a subclass of the MIPs [20], and (ii) for a somewhat different notion of approximation than the one we study, for certain families of MIPs [11].
Our main algorithmic contribution is for CIPs and multi-criteria versions thereof: we show, by a generalization of the method of pessimistic estimators [31], that we can efficiently construct the same structure as is guaranteed by our nonconstructive argument. We view this as interesting for two reasons. First, the generalized pessimistic estimator argument requires a quite delicate analysis, which we expect to be useful in other applications of developing constructive versions of existential arguments. Second, except for some of the algorithmic versions of the LLL developed in [24,25], most current algorithmic versions minimally require something like "pd^3 = O(1)" (see, e.g., [5,1]); the LLL only needs that pd = O(1). While this issue does not matter much in many applications, it crucially does, in some others. A good example of this is the existentially-optimal integrality gap for the edge-disjoint paths problem with "short" paths, shown using the LLL in [21]. The above-seen "pd^3 = O(1)" requirement of currently-known algorithmic approaches to the LLL leads to algorithms that will violate the edge-disjointness condition when applied in this context: specifically, they may route up to three paths on some edges of the graph. See [9] for a different, random-walk based approach to low-congestion routing. An algorithmic version of this edge-disjoint paths result of [21] is still lacking. It is a very interesting open question whether there is an algorithmic version of the LLL that can construct the same structures as guaranteed to exist by the LLL. In particular, can one of the most successful derandomization tools -- the method of conditional probabilities or its generalization, the pessimistic estimators method -- be applied, fixing the underlying random choices of the probabilistic argument one-by-one? This intriguing question is open (and seems difficult) for now. As a step in this direction, we are able to show how such approaches can indeed be developed, in the context of CIPs.
Thus, our main contributions are as follows. (a) The LLL extension is of independent interest: it helps in certain settings where the "dependency" among the "bad" events is too high for the LLL to be directly applicable. We expect to see further applications/extensions of such ideas. (b) This work shows that certain classes of column-sparse ILPs have much better solutions than known before; such problems abound in practice (e.g., short paths are often desired/required in routing). (c) Our generalized method of pessimistic estimators should prove fruitful in other contexts also; it is a step toward complete algorithmic versions of the LLL.
The rest of this paper is organized as follows. Our results are first presented in § 2, along with a discussion of related work. The extended LLL, and some large-deviation methods that will be seen to work well with it, are shown in § 3. Sections 4 and 5 are devoted to our rounding applications. Finally, § 6 concludes.
Improvements achieved
For MIPs, we use the extended LLL and an idea of Éva Tardos that leads to a bootstrapping of the LLL extension, to show the existence of an integral solution of value y* + O(min{y*, m} · H(min{y*, m}, 1/a)) + O(1); see Theorem 4.5. Since a ≤ m, this is always as good as the y* + O(min{y*, m} · H(min{y*, m}, 1/m)) bound of [32], and is a good improvement if a ≪ m. It also is an improvement over the additive g factor of [18] in cases where g is not small compared to y*.
Consider, e.g., the global routing problem and its MIP formulation, sketched above; m here is the number of edges in G, and g = a is the maximum length of any path in i P i . To focus on a specific interesting case, suppose y * , the fractional congestion, is at most one. Then while the previous results ( [32] and [18], resp.) give bounds of O(log m/ log log m) and O(a) on an integral solution, we get the improved bound of O(log a/ log log a). Similar improvements are easily seen for other ranges of y * also; e.g., if y * = O(log a), an integral solution of value O(log a) exists, improving on the previously known bounds of O(log m/ log(2 log m/ log a)) and O(a). Thus, routing along short paths (this is the notion of sparsity for the global routing problem) is very beneficial in keeping the congestion low. Section 4 presents a scenario where we get such improvements, for discrepancy-type problems [34,4]. In particular, we generalize a hypergraph-partitioning result of Füredi & Kahn [16].
Recall the bounds of [36] for CIPs mentioned in the paragraph preceding this subsection; our bounds for CIPs depend only on the set of constraints Ax ≥ b, i.e., they hold for any non-negative objective-function vector c. Our improvements over [36] get better as y* decreases. We show an integrality gap of $1 + O(\max\{\ln(a+1)/B,\ \sqrt{\ln(a+1)/B}\})$, once again improving on [36] for weighted CIPs. This CIP bound is better than that of [36] if y* ≤ mB/a: this inequality fails for unweighted CIPs and is generally true for weighted CIPs, since y* can get arbitrarily small in the latter case. In particular, we generalize the result of Chvátal [10] on weighted set cover. Consider, e.g., a facility location problem on a directed graph G = (V, A): given a cost c_i ∈ [0, 1] for each i ∈ V, we want a min-cost assignment of facilities to the nodes such that each node sees at least B facilities in its out-neighborhood (multiple facilities at a node are allowed). If ∆_in is the maximum in-degree of G, we show an integrality gap of $1 + O(\max\{\ln(\Delta_{in}+1)/B,\ \sqrt{\ln(B(\Delta_{in}+1))/B}\})$. This improves on [36] if y* ≤ |V|B/∆_in; it shows an O(1) (resp., 1 + o(1)) integrality gap if B grows as fast as (resp., strictly faster than) log ∆_in. Theorem 5.7 presents our covering results.
A key corollary of our results is that for families of instances of CIPs, we get a good (O(1) or 1 + o(1)) integrality gap if B grows at least as fast as log a. Bounds on the result of a greedy algorithm for CIPs relative to the optimal integral solution, are known [12,13]. Our bound improves that of [12] and is incomparable with [13]; for any given A, c, and the unit vector b/||b|| 2 , our bound improves on [13] if B is more than a certain threshold. As it stands, randomized rounding produces such improved solutions for several CIPs only with a very low, sometimes exponentially small, probability. Thus, it does not imply a randomized algorithm, often. To this end, we generalize Raghavan's method of pessimistic estimators to derive an algorithmic (polynomial-time) version of our results for CIPs, in § 5.3.
We also show via Theorem 5.9 and Corollary 5.10 that multi-criteria CIPs can be approximated well. In particular, Corollary 5.10 shows some interesting cases where the approximation guarantee for multi-criteria CIPs grows in a very much sub-linear fashion with the number ℓ of given vectors c i : the approximation ratio is at most O(log log ℓ) times what we show for CIPs (which correspond to the case where ℓ = 1). We are not aware of any such earlier work on multi-criteria CIPs.
The preliminary version of this work was presented in [35]. As mentioned in § 1, two main algorithmic versions related to our work have been obtained following [35]. First, for a subclass of the MIPs where the nonzero entries of the matrix A are "reasonably large", constructive versions of our results have been obtained in [20]. Second, for a notion of approximation that is different from the one we study, algorithmic results have been developed for certain families of MIPs in [11]. Furthermore, our Theorem 5.7 for CIPs has been used in [19] to develop approximation algorithms for CIPs that have given upper bounds on the variables x j .
The Extended LLL and an Approach to Large Deviations
We now present our LLL extension, Theorem 3.1. For any event E, define χ(E) to be its indicator r.v.: 1 if E holds and 0 otherwise. Suppose we have "bad" events E_1, …, E_m with a "dependency" d' (in the sense of Lemma 1.1) that is "large". Theorem 3.1 shows how to essentially replace d' by a possibly much-smaller d, under some conditions. It generalizes Lemma 1.1 (define one r.v., C_{i,1} = χ(E_i), for each i, to get Lemma 1.1), its proof is very similar to the classical proof of Lemma 1.1, and its motivation will be clarified by the applications.

Theorem 3.1. Let E_1, E_2, …, E_m be any events; for any I ⊆ [m], let Z(I) denote the event $\bigwedge_{k \in I} \overline{E_k}$. Suppose that, for some positive integer d, we can associate with each i ∈ [m] a finite set of non-negative r.v.s C_{i,1}, C_{i,2}, … such that:
(i) any C_{i,j} is mutually independent of all but at most d of the events E_k, k ≠ i, and
(ii) ∀I ⊆ ([m] − {i}), $\Pr(E_i \mid Z(I)) \le \sum_j E[C_{i,j} \mid Z(I)]$.
Let p_i denote $\sum_j E[C_{i,j}]$; clearly, Pr(E_i) ≤ p_i (set I = ∅ in (ii)). Suppose that for all i ∈ [m] we have ep_i(d + 1) ≤ 1. Then $\Pr(\bigwedge_{i=1}^m \overline{E_i}) \ge (d/(d+1))^m > 0$.
Remark 3.2. C_{i,j} and C_{i,j'} can "depend" on different subsets of {E_k | k ≠ i}; the only restriction is that these subsets be of size at most d. Note that we have essentially reduced the dependency among the E_i to just d: ep_i(d + 1) ≤ 1 suffices. Another important point is that the dependency among the r.v.s C_{i,j} could be much higher than d: all we count is the number of E_k that any C_{i,j} depends on.

Proof of Theorem 3.1. We prove by induction on |I| that if i ∉ I, then Pr(E_i | Z(I)) ≤ ep_i; this suffices to prove the theorem, since
$\Pr(\bigwedge_{i\in[m]} \overline{E_i}) = \prod_{i\in[m]} (1 - \Pr(E_i \mid Z([i-1]))).$
For the base case, where I = ∅, Pr(E_i | Z(I)) = Pr(E_i) ≤ p_i. For the inductive step, let S_{i,j,I} := {k ∈ I : C_{i,j} depends on E_k}, and S'_{i,j,I} = I − S_{i,j,I}; note that |S_{i,j,I}| ≤ d. If S_{i,j,I} = ∅, then E[C_{i,j} | Z(I)] = E[C_{i,j}]. Otherwise, letting S_{i,j,I} = {ℓ_1, …, ℓ_r}, we have
$E[C_{i,j} \mid Z(I)] = \frac{E[C_{i,j} \cdot \chi(Z(S_{i,j,I})) \mid Z(S'_{i,j,I})]}{\Pr(Z(S_{i,j,I}) \mid Z(S'_{i,j,I}))} \le \frac{E[C_{i,j} \mid Z(S'_{i,j,I})]}{\Pr(Z(S_{i,j,I}) \mid Z(S'_{i,j,I}))},$
since C_{i,j} is non-negative. The numerator of the last term is E[C_{i,j}], by assumption. The denominator can be lower-bounded as follows:
$\prod_{s\in[r]} \left(1 - \Pr(E_{\ell_s} \mid Z(\{\ell_1, \ell_2, \ldots, \ell_{s-1}\} \cup S'_{i,j,I}))\right) \ge \prod_{s\in[r]} (1 - e p_{\ell_s}) \ge (1 - 1/(d+1))^r \ge (d/(d+1))^d > 1/e;$
the first inequality follows from the induction hypothesis. Hence, E[C_{i,j} | Z(I)] ≤ e·E[C_{i,j}], and thus
$\Pr(E_i \mid Z(I)) \le \sum_j E[C_{i,j} \mid Z(I)] \le e p_i \le 1/(d+1).$
The crucial point is that the events E i could have a large dependency d ′ , in the sense of the classical Lemma 1.1. The main utility of Theorem 3.1 is that if we can "decompose" each E i into the r.v.s C i,j that satisfy the conditions of the theorem, then there is the possibility of effectively reducing the dependency by much (d ′ can be replaced by the value d). Concrete instances of this will be studied in later sections.
The tools behind our MIP application are our new LLL and a result of [33]. Define, for z = (z_1, …, z_n) ∈ ℜ^n, a family of polynomials S_j(z), j = 0, 1, …, n, where S_0(z) ≡ 1 and, for j ∈ [n],

$S_j(z) := \sum_{1 \le i_1 < i_2 < \cdots < i_j \le n} z_{i_1} z_{i_2} \cdots z_{i_j}.$ (1)

Remark 3.3. For real x and non-negative integral r, we define $\binom{x}{r} := x(x-1)\cdots(x-r+1)/r!$ as usual; this is the sense meant in Theorem 3.4 below.
We define a nonempty event to be any event with a nonzero probability of occurrence. The relevant theorem of [33] is the following:
Theorem 3.4. ([33]) Given r.v.s X_1, …, X_n ∈ [0, 1], let $X = \sum_{i=1}^n X_i$ and µ = E[X]. Then:
(a) For any q > 0, any nonempty event Z, and any non-negative integer k ≤ q, $\Pr(X \ge q \mid Z) \le E[Y_{k,q} \mid Z]$, where $Y_{k,q} = S_k(X_1, \ldots, X_n)/\binom{q}{k}$.
(b) If the X_i are independent, δ > 0, and k = ⌈µδ⌉, then $\Pr(X \ge \mu(1+\delta)) \le E[Y_{k,\mu(1+\delta)}] \le G(\mu, \delta)$, where G(·,·) is as in Lemma 2.4.
(c) If the X_i are independent, then $E[S_k(X_1, \ldots, X_n)] \le \binom{n}{k} \cdot (\mu/n)^k \le \mu^k/k!$.
Proof. Suppose r_1, r_2, …, r_n ∈ [0, 1] satisfy $\sum_{i=1}^n r_i \ge q$. Then, a simple proof is given in [33] for the fact that, for any non-negative integer k ≤ q, $S_k(r_1, r_2, \ldots, r_n) \ge \binom{q}{k}$. This clearly holds even given the occurrence of any nonempty event Z. Thus we get
$\Pr(X \ge q \mid Z) \le \Pr(Y_{k,q} \ge 1 \mid Z) \le E[Y_{k,q} \mid Z],$
where the second inequality follows from Markov's inequality. The proofs of (b) and (c) are given in [33].
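As a concrete sanity check of Theorem 3.4(a), the following self-contained sketch computes S_k via the standard dynamic program for elementary symmetric polynomials and verifies the inequality exactly, by enumerating all outcomes of a small instance; the parameter choices below are arbitrary.

```python
# Exact numerical check of Theorem 3.4(a) on a small instance; the choices
# of n, q, k and the Bernoulli parameters are illustrative.
import itertools
from math import comb

def elem_sym(vals, k):
    """k-th elementary symmetric polynomial S_k(vals), via the usual DP."""
    e = [1.0] + [0.0] * k
    for v in vals:
        for j in range(k, 0, -1):
            e[j] += e[j - 1] * v
    return e[k]

n, q, k = 10, 4, 3                     # requires k <= q
p = [0.25] * n                         # X_i ~ Bernoulli(0.25), independent
tail = bound = 0.0
for bits in itertools.product([0, 1], repeat=n):
    w = 1.0
    for b, pi in zip(bits, p):
        w *= pi if b else 1.0 - pi     # probability of this outcome
    if sum(bits) >= q:
        tail += w
    bound += w * elem_sym([float(b) for b in bits], k) / comb(q, k)
print(f"Pr(X >= {q}) = {tail:.4f} <= E[Y_k,q] = {bound:.4f}")
```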
We next present the proof of Lemma 2.4:
Proof of Lemma 2.4. Part (a) is the Chernoff-Hoeffding bound (see, e.g., Appendix A of [4], or [27]). For (b), we proceed as follows. For any µ > 0, it is easy to check that

$G(\mu, \delta) = e^{-\Theta(\mu\delta^2)}$ if δ ∈ (0, 1); (2)
$G(\mu, \delta) = e^{-\Theta(\mu(1+\delta)\ln(1+\delta))}$ if δ ≥ 1. (3)

Now if µ ≤ log(p^{-1})/2, choose
$\delta = C \cdot \frac{\log(p^{-1})}{\mu \log(\log(p^{-1})/\mu)}$
for a suitably large constant C. Note that δ is lower-bounded by some positive constant; hence, (3) holds (since the constant 1 in the conditions "δ ∈ (0, 1)" and "δ ≥ 1" of (2) and (3) can clearly be replaced by any other positive constant). Simple algebraic manipulation now shows that if C is large enough, then ⌈µδ⌉ · G(µ, δ) ≤ p holds. Similarly, if µ > log(p^{-1})/2, we set $\delta = C \cdot \sqrt{\log(\mu + p^{-1})/\mu}$ for a large enough constant C, and use (2).
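The following sketch spells out this choice of δ numerically; the concrete constant C = 8 is an assumed stand-in for the unspecified constants above, and the assertion checks the property ⌈µδ⌉ · G(µ, δ) ≤ p used in the proof.

```python
# Numerical sanity check of the delta chosen in the proof of Lemma 2.4(b);
# the constant C and the test points are illustrative assumptions.
import math

def G(mu, delta):
    # Chernoff-Hoeffding upper-tail bound of Lemma 2.4(a), assumed form.
    return (math.exp(delta) / (1.0 + delta) ** (1.0 + delta)) ** mu

def delta_for(mu, p, C=8.0):
    if mu <= math.log(1.0 / p) / 2.0:
        return C * math.log(1.0 / p) / (mu * math.log(math.log(1.0 / p) / mu))
    return C * math.sqrt(math.log(mu + 1.0 / p) / mu)

for mu, p in [(0.5, 1e-3), (40.0, 1e-3)]:
    d = delta_for(mu, p)
    assert math.ceil(mu * d) * G(mu, d) <= p   # the property used above
```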
Approximating Minimax Integer Programs
Suppose we are given an MIP conforming to Definition 2.1. Define t to be $\max_{i\in[n]} NZ_i$, where NZ_i is the number of rows of A which have a non-zero coefficient corresponding to at least one variable among {x_{i,j} : j ∈ [ℓ_i]}. Note that

$g \le a \le t \le \min\{m,\ a \cdot \max_{i\in[n]} \ell_i\}.$ (4)
Theorem 4.2 now shows how Theorem 3.1 can help for sparse MIPs, i.e., those where t ≪ m. We will then bootstrap Theorem 4.2 to get the further improved Theorem 4.5. We start with a proposition, whose proof is a simple calculus exercise:
Proposition 4.1. If 0 < µ_1 ≤ µ_2, then for any δ > 0, G(µ_1, µ_2δ/µ_1) ≤ G(µ_2, δ).

Theorem 4.2. For any given MIP, there exists an integral solution of value at most y* + O(min{y*, m} · H(min{y*, m}, 1/t)) + O(1).

Proof. Conduct randomized rounding: independently for each i, randomly round exactly one x_{i,j} to 1, guided by the "probabilities" {x*_{i,j}}. We may assume that {x*_{i,j}} is a basic feasible solution to the LP relaxation. Hence, at most m of the {x*_{i,j}} will be neither zero nor one, and only these variables will participate in the rounding. Thus, since all the entries of A are in [0, 1], we assume without loss of generality from now on that y* ≤ m (and that max_{i∈[n]} ℓ_i ≤ m); this explains the "min{y*, m}" term in our stated bounds. If z ∈ {0, 1}^N denotes the randomly rounded vector, then E[(Az)_i] = b_i by linearity of expectation, and b_i is at most y*. Define k = ⌈y* H(y*, 1/(et))⌉ and events E_1, E_2, …, E_m by E_i ≡ "(Az)_i ≥ b_i + k". Write (Az)_i = Σ_{v∈[n]} Z_{i,v}, where Z_{i,v} ∈ [0, 1] denotes the contribution of the variables {z_{v,j} : j ∈ [ℓ_v]} to (Az)_i; let $u = \binom{n}{k}$, and let S(1), …, S(u) be the k-element subsets of [n]. Define

$C_{i,j} := \frac{\prod_{v \in S(j)} Z_{i,v}}{\binom{b_i + k}{k}}.$ (5)

We now need to show that the r.v.s C_{i,j} satisfy the conditions of Theorem 3.1. For any i ∈ [m], let δ_i = k/b_i. Since b_i ≤ y*, we have, for each i ∈ [m], G(b_i, δ_i) ≤ G(y*, k/y*) ≤ 1/(ekt), by Proposition 4.1 and our choice of k. Theorem 3.4(a) shows that for any nonempty event Z, $\Pr(E_i \mid Z) \le \sum_{j\in[u]} E[C_{i,j} \mid Z]$. Also, $p_i := \sum_{j\in[u]} E[C_{i,j}] < G(b_i, \delta_i) \le 1/(ekt)$. Next, since any C_{i,j} involves (a product of) k terms, each of which "depends" on at most (t − 1) of the events {E_v : v ∈ ([m] − {i})} by definition of t, we see the important

Fact 4.4. ∀i ∈ [m] ∀j ∈ [u], C_{i,j} ∈ [0, 1], and C_{i,j} "depends" on at most d = k(t − 1) of the set of events {E_v : v ∈ ([m] − {i})}.

Thus ep_i(d + 1) ≤ e · (1/(ekt)) · (k(t − 1) + 1) ≤ 1, and Theorem 3.1 yields $\Pr(\bigwedge_{i\in[m]} \overline{E_i}) > 0$, which completes the proof.

Theorem 4.2 gives good results if t ≪ m, but can we improve it further, say by replacing t by a (≤ t) in it? As seen from (4), the key reason for t ≫ a^{Θ(1)} is that max_{i∈[n]} ℓ_i ≫ a^{Θ(1)}. If we can essentially "bring down" max_{i∈[n]} ℓ_i by forcing many x*_{i,j} to be zero for each i, then we effectively reduce t (t ≤ a · max_i ℓ_i, see (4)); this is so since only those x*_{i,j} that are neither zero nor one take part in the rounding. A way of bootstrapping Theorem 4.2 to achieve this is shown by:

Theorem 4.5. For any given MIP, there exists an integral solution of value at most y* + O(min{y*, m} · H(min{y*, m}, 1/a)) + O(1).

Proof. Let K_0 > 0 be a sufficiently large absolute constant. Now if

(y* ≥ t^{1/7}) or (t ≤ max{K_0, 2}) or (t ≤ a^4) (6)
holds, then we will be done by Theorem 4.2. So we may assume that (6) is false. Also, if y * ≤ t −1/7 , Theorem 4.2 guarantees an integral solution of value O(1); thus, we also suppose that y * > t −1/7 . The basic idea now is, as sketched above, to set many x * i,j to zero for each i (without losing too much on y * ), so that max i ℓ i and hence, t, will essentially get reduced. Such an approach, whose performance will be validated by arguments similar to those of Theorem 4.2, is repeatedly applied until (6) holds, owing to the (continually reduced) t becoming small enough to satisfy (6). There are two cases:
Case I: y * ≥ 1. Solve the LP relaxation, and set x ′ i,j := (y * ) 2 (log 5 t)x * i,j . Conduct randomized rounding on the x ′ i,j now, rounding each x ′ i,j independently to z i,j ∈ {⌊x ′ i,j ⌋, ⌈x ′ i,j ⌉}. (Note the key difference from Theorem 4.2, where for each i, we round exactly one x * i,j to 1.)
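For concreteness, here is a minimal sketch of the two rounding styles being contrasted: rounding exactly one variable per group as in Theorem 4.2, and independent floor/ceiling rounding of the scaled values as in Case I. The function names and input conventions are ours.

```python
# Sketch of the two rounding styles used for MIPs; inputs are lists of
# lists (one inner list per index i), with probabilities assumed to sum
# to 1 per group in the first routine.
import math
import random

def round_one_per_group(x_star):
    """Theorem 4.2-style rounding: for each i, set exactly one z_{i,j} = 1,
    chosen with probabilities x*_{i,j}."""
    z = []
    for probs in x_star:
        r, acc, pick = random.random(), 0.0, len(probs) - 1
        for j, pj in enumerate(probs):
            acc += pj
            if r < acc:
                pick = j
                break
        z.append([int(j == pick) for j in range(len(probs))])
    return z

def round_independently(x_scaled):
    """Case I-style rounding: round each scaled value x'_{i,j} independently
    to its floor or ceiling, preserving the expectation."""
    return [[math.floor(v) + int(random.random() < v - math.floor(v))
             for v in row] for row in x_scaled]
```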
Let K 1 > 0 be a sufficiently large absolute constant. We now use ideas similar to those used in our proof of Theorem 4.2 to show that with nonzero probability, we have both of the following:
$\forall i \in [m],\ (Az)_i \le (y^*)^3 \log^5 t \cdot (1 + K_1/((y^*)^{1.5} \log^2 t)),$ and (7)
$\forall i \in [n],\ |\sum_j z_{i,j} - (y^*)^2 \log^5 t| \le K_1 y^* \log^3 t.$ (8)
To show this, we proceed as follows. Let E_1, E_2, …, E_m be the "bad" events, one for each event in (7) not holding; similarly, let E_{m+1}, E_{m+2}, …, E_{m+n} be the "bad" events, one for each event in (8) not holding. We want to use our extended LLL to show that with positive probability, all these bad events can be avoided; specifically, we need a way of decomposing each E_i into a finite number of non-negative r.v.s C_{i,j}. For each event E_{m+ℓ} where ℓ ≥ 1, we define just one r.v., C_{m+ℓ,1}: this is the indicator variable for the occurrence of E_{m+ℓ}. For the events E_i where i ≤ m, we decompose E_i into r.v.s C_{i,j} just as in (5): each such C_{i,j} is now a scalar multiple of a product of at most O((y*)^3 log^5 t/((y*)^{1.5} log^2 t)) = O((y*)^{1.5} log^3 t) = O(t^{1.5/7} log^3 t) independent binary r.v.s that underlie our randomized rounding; the second equality (big-Oh bound) here follows since (6) has been assumed to not hold. Thus, it is easy to see that for all i, 1 ≤ i ≤ m + n, and for any j, the r.v. C_{i,j} depends on at most
$O(t \cdot t^{1.5/7} \log^3 t)$ (9)
events E_k with k ≠ i. Also, as in our proof of Theorem 4.2, Theorem 3.4 gives a direct proof of requirement (ii) of Theorem 3.1; part (b) of Theorem 3.4 shows that for any desired constant K, we can choose the constant K_1 large enough so that for all i, $\sum_j E[C_{i,j}] \le t^{-K}$. Thus, in view of (9), we see by Theorem 3.1 that $\Pr(\bigwedge_{i=1}^{m+n} \overline{E_i}) > 0$. Fix a rounding z satisfying (7) and (8). For each i ∈ [n] and j ∈ [ℓ_i], we renormalize as follows: x''_{i,j} := z_{i,j}/Σ_u z_{i,u}. Thus we have Σ_u x''_{i,u} = 1 for all i; we now see that we have two very useful properties. First, since $\sum_j z_{i,j} \ge (y^*)^2 \log^5 t \cdot (1 - O(1/(y^* \log^2 t)))$ for all i from (8), we have

$\forall i \in [m],\quad (Ax'')_i \le \frac{y^*(1 + O(1/((y^*)^{1.5} \log^2 t)))}{1 - O(1/(y^* \log^2 t))} \le y^*(1 + O(1/(y^* \log^2 t))).$ (10)
Second, since the z i,j are non-negative integers summing to at most (y * ) 2 log 5 t(1 + O(1/(y * log 2 t))), at most O((y * ) 2 log 5 t) values x ′′ i,j are nonzero, for each i ∈ [n]. Thus, by losing a little in y * (see (10)), our "scaling up-rounding-scaling down" method has given a fractional solution x ′′ with a much-reduced ℓ i for each i; ℓ i is now O((y * ) 2 log 5 t), essentially. Thus, t has been reduced to O(a(y * ) 2 log 5 t); i.e., t has been reduced to at most
$K_2\, t^{1/4 + 2/7} \log^5 t$ (11)
for some constant K_2 > 0 that is independent of K_0, since (6) was assumed false. Repeating this scheme O(log log t) times makes t small enough to satisfy (6). More formally, define t_0 = t, and t_{i+1} = K_2 t_i^{1/4+2/7} log^5 t_i for i ≥ 0. Stop this sequence at the first point where either t = t_i satisfies (6), or t_{i+1} ≥ t_i holds. Thus, we finally have t small enough to satisfy (6) or to be bounded by some absolute constant. How much has max_{i∈[m]} (Ax)_i increased in the process? By (10), we see that at the end,

$\max_{i\in[m]} (Ax)_i \le y^* \cdot \prod_{j\ge 0} (1 + O(1/(y^* \log^2 t_j))) \le y^* \cdot e^{O(\sum_{j\ge 0} 1/(y^* \log^2 t_j))} \le y^* + O(1),$ (12)
since the values log t j decrease geometrically and are lower-bounded by some absolute positive constant. We may now apply Theorem 4.2.
Case II: t^{-1/7} < y* < 1. The idea is the same here, with the scaling up of x*_{i,j} being by (log^5 t)/y*; the same "scaling up-rounding-scaling down" method works out. Since the ideas are very similar to Case I, we only give a proof sketch here. We now scale up all the x*_{i,j} first by (log^5 t)/y* and do a randomized rounding. The analogs of (7) and (8) are now:

$\forall i \in [m],\ (Az)_i \le \log^5 t \cdot (1 + K'_1/\log^2 t),$ and (13)
$\forall i \in [n],\ |\sum_j z_{i,j} - \log^5 t/y^*| \le K'_1 \log^3 t/\sqrt{y^*}.$ (14)
Proceeding identically as in Case I, we can show that with positive probability, (13) and (14) hold simultaneously. Fix a rounding where these two properties hold, and renormalize as before: x''_{i,j} := z_{i,j}/Σ_u z_{i,u}. Since (13) and (14) hold, it is easy to show that the following analogs of (10) and (11) hold:

$(Ax'')_i \le \frac{y^*(1 + O(1/\log^2 t))}{1 - O(\sqrt{y^*}/\log^2 t)} \le y^*(1 + O(1/\log^2 t));$

and t has been reduced to O(a log^5 t/y*), i.e., to O(t^{1/4+1/7} log^5 t).
We thus only need O(log log t) iterations, again. Also, the analog of (12) now is that
$\max_{i\in[m]} (Ax)_i \le y^* \cdot \prod_{j\ge 0} (1 + O(1/\log^2 t_j)) \le y^* \cdot e^{O(\sum_{j\ge 0} 1/\log^2 t_j)} \le y^* + O(1).$
This completes the proof.
We now study our improvements for discrepancy-type problems, which are an important class of MIPs that, among other things, are useful in devising divide-and-conquer algorithms. Given is a set-system (X, F), where X = [n] and F = {D_1, D_2, …, D_M} ⊆ 2^X. Given a positive integer ℓ, the problem is to partition X into ℓ parts, so that each D_j is "split well": we want a function f : X → [ℓ] which minimizes $\max_{j\in[M], k\in[\ell]} |\{i \in D_j : f(i) = k\}|$. (The case ℓ = 2 is the standard set-discrepancy problem.) To motivate this problem, suppose we have a (di)graph (V, A); we want a partition of V into V_1, …, V_ℓ such that, for all v ∈ V, the values {|N(v) ∩ V_k| : k ∈ [ℓ]} are "roughly the same", where N(v) is the (out-)neighborhood of v. See, e.g., [2,17] for how this helps construct divide-and-conquer approaches. This problem is naturally modeled by the above set-system problem.
Let ∆ be the degree of (X, F ), i.e., max i∈[n] |{j : i ∈ D j }|, and let ∆ ′ . = max D j ∈F |D j |. Our problem is naturally written as an MIP with m = M ℓ, ℓ i = ℓ for each i, and g = a = ∆, in the notation of Definition 2.1; y * = ∆ ′ /ℓ here. The analysis of [32] gives an integral solution of value at most y * (1 + O(H(y * , 1/(M ℓ)))), while [18] presents a solution of value at most y * + ∆. Also, since any D j ∈ F intersects at most (∆ − 1)∆ ′ other elements of F , Lemma 1.1 shows that randomized rounding produces, with positive probability, a solution of value at most y * (1 + O(H(y * , 1/(e∆ ′ ∆ℓ)))). This is the approach taken by [16] for their case of interest: ∆ = ∆ ′ , ℓ = ∆/ log ∆. Theorem 4.5 shows the existence of an integral solution of value y * (1+O(H(y * , 1/∆)))+O(1), i.e., removes the dependence on ∆ ′ . This is an improvement on all the three results above. As a specific interesting case, suppose ℓ grows at most as fast as ∆ ′ / log ∆. Then we see that good integral solutions-those that grow at the rate of O(y * ) or better-exist, and this was not known before. (The approach of [16] shows such a result for ℓ = O(∆ ′ / log(max{∆, ∆ ′ })). Our bound of O(∆ ′ / log ∆) is always better than this, and especially so if ∆ ′ ≫ ∆.)
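A small, self-contained sketch of this partitioning objective, together with the randomized rounding of the symmetric LP solution x*_{i,k} = 1/ℓ (which simply assigns each element a uniformly random class); the instance sizes below are illustrative.

```python
# Sketch of the partitioning objective and of randomized rounding from the
# symmetric LP solution x*_{i,k} = 1/ell; all sizes are illustrative.
import random

def partition_cost(sets, f, ell):
    """max over (D_j, k) of |{i in D_j : f(i) = k}|."""
    return max(sum(1 for i in D if f[i] == k) for D in sets for k in range(ell))

random.seed(0)
n, M, ell, size = 60, 30, 4, 12
sets = [random.sample(range(n), size) for _ in range(M)]
f = [random.randrange(ell) for _ in range(n)]    # round x*_{i,k} = 1/ell
print("cost =", partition_cost(sets, f, ell), " fractional value =", size / ell)
```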
Approximating Covering Integer Programs
One of the main ideas behind Theorem 3.1 was to extend the basic inductive proof behind the LLL by decomposing the "bad" events E i appropriately into the r.v.s C i,j . We now use this general idea in a different context, that of (multi-criteria) covering integer programs, with an additional crucial ingredient being a useful correlation inequality, the FKG inequality [15]. The reader is asked to recall the discussion of (multi-criteria) CIPs from § 2. We start with a discussion of randomized rounding for CIPs, the Chernoff lower-tail bound, and the FKG inequality in § 5.1. These lead to our improved, but nonconstructive, approximation bound for column-sparse (multi-criteria) CIPs, in § 5.2. This is then made constructive in § 5.3; we also discuss there what we view as novel about this constructive approach.
Preliminaries
Let us start with a simple and well-known approach to tail bounds. Suppose Y is a random variable and y is some value. Then, for any 0 ≤ δ < 1, we have
$\Pr(Y \le y) \le \Pr\left((1-\delta)^Y \ge (1-\delta)^y\right) \le \frac{E[(1-\delta)^Y]}{(1-\delta)^y},$ (15)

where the second inequality is a consequence of Markov's inequality.
We next set up some basic notions related to approximation algorithms for (multi-criteria) CIPs. Recall that in such problems, we have ℓ given non-negative vectors c_1, c_2, …, c_ℓ such that for all i, c_i ∈ [0, 1]^n with max_j c_{i,j} = 1; ℓ = 1 in the case of CIPs. Let x* = (x*_1, x*_2, …, x*_n) denote a given fractional solution that satisfies the system of constraints Ax* ≥ b. We are not concerned here with how x* was found: typically, x* would be an optimal solution to the LP relaxation of the problem. (The LP relaxation is obvious if, e.g., ℓ = 1, or, say, if the given multi-criteria problem aims to minimize max_i c_i^T · x, or to keep each c_i^T · x bounded by some target value v_i.) We now consider how to round x* to some integral z so that:
(P1) the constraints Az ≥ b hold, and (P2) for all i, c T i · z is "not much bigger" than c T i · x * : our approximation bound will be a measure of how small a "not much bigger value" we can achieve in this sense.
Let us now discuss the "standard" randomized rounding scheme for (multi-criteria) CIPs. We assume a fixed instance as well as x * , from now on. For an α > 1 to be chosen suitably, set x ′ j = αx * j , for each j ∈ [n]. We then construct a random integral solution z by setting, independently for each j ∈ [n],
z j = ⌊x ′ j ⌋ + 1 with probability x ′ j − ⌊x ′ j ⌋, and z j = ⌊x ′ j ⌋ with probability 1 − (x ′ j − ⌊x ′ j ⌋).
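In code, the standard rounding scheme just described can be sketched as follows (the function name and the demo values are ours):

```python
# Sketch of the standard randomized rounding scheme for CIPs.
import math
import random

def standard_round(x_star, alpha, rng=random):
    z = []
    for xj in x_star:
        v = alpha * xj
        s = math.floor(v)                       # s_j in the notation that follows
        z.append(s + int(rng.random() < v - s))
    return z

print(standard_round([0.3, 0.8, 1.4], alpha=1.5))
```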
The aim then is to show that with positive (hopefully high) probability, (P1) and (P2) happen simultaneously. We now introduce some useful notation. For every j ∈ [n], let s j = ⌊x ′ j ⌋. Let A i denote the ith row of A, and let X 1 , X 2 , . . . , X n ∈ {0, 1} be independent r.v.s with Pr(X j = 1) = x ′ j − s j for all j. The bad event E i that the ith constraint is violated by our randomized rounding is given by
E i ≡ "A i · X < µ i (1 − δ i )", where µ i = E[A i · X] and δ i = 1 − (b i − A i · s)/µ i .
We now bound Pr(E_i) for all i, when the standard randomized rounding is used. Define $g(B, \alpha) := (\alpha \cdot e^{-(\alpha-1)})^B$. Then:

Lemma 5.1. Under standard randomized rounding, for all i,
$\Pr(E_i) \le \frac{E[(1-\delta_i)^{A_i \cdot X}]}{(1-\delta_i)^{(1-\delta_i)\mu_i}} \le g(B, \alpha) \le e^{-B(\alpha-1)^2/(2\alpha)}.$
Proof. The first inequality follows from (15). Next, the Chernoff-Hoeffding lower-tail approach [4,27] shows that
$\frac{E[(1-\delta_i)^{A_i \cdot X}]}{(1-\delta_i)^{(1-\delta_i)\mu_i}} \le \left(\frac{e^{-\delta_i}}{(1-\delta_i)^{1-\delta_i}}\right)^{\mu_i}.$
It is observed in [36] (and is not hard to see) that this latter quantity is maximized when s_j = 0 for all j, and when each b_i equals its minimum value of B. Thus we see that Pr(E_i) ≤ g(B, α). The inequality g(B, α) ≤ e^{-B(α-1)^2/(2α)} for α ≥ 1 is well-known and easy to verify via elementary calculus.
Next, the FKG inequality is a useful correlation inequality, a special case of which is as follows [15]. Given binary vectors a = (a 1 , a 2 , . . . , a ℓ ) ∈ {0, 1} ℓ and b = (b 1 , b 2 , . . . , b ℓ ) ∈ {0, 1} ℓ , let us partially order them by coordinate-wise domination: a b iff a i ≤ b i for all i. Now suppose Y 1 , Y 2 , . . . , Y ℓ are independent r.v.s, each taking values in {0, 1}. Let Y denote the vector (Y 1 , Y 2 , . . . , Y ℓ ). Suppose an event A is completely defined by the value of Y . Define A to be increasing iff: for all a ∈ {0, 1} ℓ such that A holds when Y = a, A also holds when Y = b, for any b such that a b. Analogously, event A is decreasing iff: for all a ∈ {0, 1} ℓ such that A holds when Y = a, A also holds when Y = b, for any b a. The FKG inequality proves certain intuitively appealing bounds:
Lemma 5.2. ([15]) Let I_1, I_2, … be any increasing events and D_1, D_2, … be any decreasing events, each completely determined by Y. Then, for any i and any set S of indices with i ∉ S:
(i) $\Pr(I_i \mid \bigwedge_{j\in S} I_j) \ge \Pr(I_i)$ and $\Pr(D_i \mid \bigwedge_{j\in S} D_j) \ge \Pr(D_i)$;
(ii) $\Pr(I_i \mid \bigwedge_{j\in S} D_j) \le \Pr(I_i)$ and $\Pr(D_i \mid \bigwedge_{j\in S} I_j) \le \Pr(D_i)$.
Returning to our random variables X_j and events E_i, we get the following lemma as an easy consequence of the FKG inequality, since each event of the form "$\overline{E_i}$" or "X_j = 1" is an increasing event as a function of the vector (X_1, X_2, …, X_n):
Lemma 5.3. For all B_1, B_2 ⊆ [m] such that B_1 ∩ B_2 = ∅, and for any B_3 ⊆ [n],
$\Pr\left(\bigwedge_{i\in B_1} \overline{E_i} \,\Big|\, \left(\bigwedge_{j\in B_2} \overline{E_j}\right) \wedge \left(\bigwedge_{k\in B_3} (X_k = 1)\right)\right) \ge \prod_{i\in B_1} \Pr(\overline{E_i}).$
Nonconstructive approximation bounds for (multi-criteria) CIPs
Definition 5.4. (The function R) For any s and any j 1 < j 2 < · · · < j s , let R(j 1 , j 2 , . . . , j s ) be the set of indices i such that row i of the constraint system "Ax ≥ b" has at least one of the variables j k , 1 ≤ k ≤ s, appearing with a nonzero coefficient. (Note from the definition of a in Defn. 2.2, that |R(j 1 , j 2 , . . . , j s )| ≤ a · s.)
Let the vector x * = (x * 1 , x * 2 , . . . , x * n ), the parameter α > 1, and the "standard" randomized rounding scheme, be as defined in § 5.1. The standard rounding scheme is sufficient for our (nonconstructive) purposes now; we generalize this scheme as follows, for later use in § 5.3.
Definition 5.5. (General randomized rounding) Given a vector p = (p 1 , p 2 , . . . , p n ) ∈ [0, 1] n , the general randomized rounding with parameter p generates independent random variables X 1 , X 2 , . . . , X n ∈ {0, 1} with Pr(X j = 1) = p j ; the rounded vector z is defined by z j = ⌊αx * j ⌋ + X j for all j. (As in the standard rounding, we set each z j to be either ⌊αx * j ⌋ or ⌈αx * j ⌉; the standard rounding is the special case in which E[z j ] = αx * j for all j.)
We now present an important lemma, Lemma 5.6, to get correlation inequalities that "point" in the direction opposite to FKG. Some ideas from the proof of Lemma 1.1 will play a crucial role in our proof of this lemma.
Lemma 5.6. Suppose we employ general randomized rounding with some parameter p, and that $\Pr(\bigwedge_{i=1}^m \overline{E_i})$ is nonzero under this rounding. The following hold for any q and any 1 ≤ j_1 < j_2 < ⋯ < j_q ≤ n.

(i) $\Pr\left(X_{j_1} = X_{j_2} = \cdots = X_{j_q} = 1 \,\Big|\, \bigwedge_{i=1}^m \overline{E_i}\right) \le \frac{\prod_{t=1}^q p_{j_t}}{\prod_{i \in R(j_1, j_2, \ldots, j_q)} (1 - \Pr(E_i))};$ (16)

the events E_i ≡ ((Az)_i < b_i) are defined here w.r.t. the general randomized rounding.

(ii) In the special case of standard randomized rounding,
$\prod_{i \in R(j_1, j_2, \ldots, j_q)} (1 - \Pr(E_i)) \ge (1 - g(B, \alpha))^{aq};$ (17)

the function g is as defined in Lemma 5.1.
Proof. (i) Note first that if we wanted a lower bound on the l.h.s., the FKG inequality would immediately imply that the l.h.s. is at least p_{j_1} p_{j_2} ⋯ p_{j_q}. We get around this "correlation problem" as follows. Let Q = R(j_1, j_2, …, j_q), and let Q' = [m] − Q. Let $Z_1 \equiv \bigwedge_{i\in Q} \overline{E_i}$ and $Z_2 \equiv \bigwedge_{i\in Q'} \overline{E_i}$. Letting $Y = \prod_{t=1}^q X_{j_t}$, note that

|Q| ≤ aq, and (18)
Y is independent of Z_2. (19)

Now,
$\Pr(Y = 1 \mid (Z_1 \wedge Z_2)) = \frac{\Pr(((Y=1) \wedge Z_1) \mid Z_2)}{\Pr(Z_1 \mid Z_2)} \le \frac{\Pr((Y=1) \mid Z_2)}{\Pr(Z_1 \mid Z_2)} = \frac{\Pr(Y=1)}{\Pr(Z_1 \mid Z_2)}$ (by (19))
$\le \frac{\prod_{t=1}^q \Pr(X_{j_t} = 1)}{\prod_{i\in R(j_1, j_2, \ldots, j_q)} (1 - \Pr(E_i))}$ (by Lemma 5.3).

(ii) We get (17) from Lemma 5.1 and (18).
We will use Lemmas 5.3 and 5.6 to prove Theorem 5.9. As a warmup, let us start with a result for the special case of CIPs; recall that y* denotes c^T · x*.

Theorem 5.7. For any given CIP, suppose we choose α, β > 1 such that β(1 − g(B, α))^a > 1. Then, there exists a feasible solution of value at most y*αβ. In particular, there is an absolute constant K > 0 such that if α, β > 1 are chosen as:

α = K · ln(a + 1)/B and β = 2, if ln(a + 1) ≥ B, and (20)
$\alpha = \beta = 1 + K \cdot \sqrt{\ln(a+1)/B}$, if ln(a + 1) < B; (21)

then, there exists a feasible solution of value at most y*αβ. Thus, the integrality gap is at most $1 + O(\max\{\ln(a+1)/B,\ \sqrt{\ln(a+1)/B}\})$.
Proof. Conduct standard randomized rounding, and let E be the event that c^T · z > y*αβ. Setting $Z \equiv \bigwedge_{i\in[m]} \overline{E_i}$ and µ := E[c^T · z] = y*α, we see by Markov's inequality that Pr(E | Z) is at most $R = (\sum_{j=1}^n c_j \Pr(X_j = 1 \mid Z))/(\mu\beta)$. Note that Pr(Z) > 0 since α > 1; so, we now seek to make R < 1, which will complete the proof. Lemma 5.6 shows that

$R \le \frac{\sum_j c_j p_j}{\mu\beta \cdot (1 - g(B, \alpha))^a} \le \frac{1}{\beta(1 - g(B, \alpha))^a};$

thus, the condition β(1 − g(B, α))^a > 1 suffices. Simple algebra shows that choosing α, β > 1 as in (20) and (21) ensures that β(1 − g(B, α))^a > 1.
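As a numerical companion to (20) and (21), the following sketch picks α and β and checks the sufficient condition of Theorem 5.7; the constant K = 4 is an assumed stand-in for the unspecified absolute constant K.

```python
# Numerical sketch of the choices (20)/(21); K = 4 is an assumed stand-in
# for the unspecified absolute constant.
import math

def g(B, alpha):
    return (alpha * math.exp(-(alpha - 1.0))) ** B

def pick_alpha_beta(a, B, K=4.0):
    if math.log(a + 1.0) >= B:
        alpha, beta = K * math.log(a + 1.0) / B, 2.0                  # case (20)
    else:
        alpha = beta = 1.0 + K * math.sqrt(math.log(a + 1.0) / B)     # case (21)
    assert beta * (1.0 - g(B, alpha)) ** a > 1.0   # Theorem 5.7's condition
    return alpha, beta

print(pick_alpha_beta(a=8, B=1))   # feasible solution of value <= y* alpha beta
```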
The basic approach of our proof of Theorem 5.7 is to follow the main idea of Theorem 3.1, and to decompose the event "E | Z" into a non-negative linear combination of events of the form "X_j = 1 | Z"; we then exploited the fact that each X_j depends on at most a of the events comprising Z. We now extend Theorem 5.7 and also generalize to multi-criteria CIPs. Instead of employing just a "first moment method" (Markov's inequality) as in the proof of Theorem 5.7, we will work with higher moments: the functions S_k defined in (1) and used in Theorem 3.4. Suppose some parameters λ_i > 0 are given, and that our goal is to round x* to z so that the event

A ≡ "(Az ≥ b) ∧ (∀i, c_i^T · z ≤ λ_i)" (22)
holds. We first give a sufficient condition for this to hold, in Theorem 5.9; we then derive some concrete consequences in Corollary 5.10. We need one further definition before presenting Theorem 5.9. Recall that A i and b i respectively denote the ith row of A and the ith component of b. Also, the vector s and values δ i will throughout be as in the definition of standard randomized rounding.
Definition 5.8. (The functions ch and ch') Suppose we conduct general randomized rounding with some parameter p; i.e., let X_1, X_2, …, X_n be independent binary random variables such that Pr(X_j = 1) = p_j. For each i ∈ [m], define

$ch_i(p) := \frac{E[(1-\delta_i)^{A_i \cdot X}]}{(1-\delta_i)^{b_i - A_i \cdot s}} = \frac{\prod_{j\in[n]} E[(1-\delta_i)^{A_{i,j} X_j}]}{(1-\delta_i)^{b_i - A_i \cdot s}}$, and $ch'_i(p) := \min\{ch_i(p), 1\}$.

(Note from (15) that if we conduct general randomized rounding with parameter p, then Pr((Az)_i < b_i) ≤ ch'_i(p) ≤ ch_i(p); also, "ch" stands for "Chernoff-Hoeffding".)
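For later use in § 5.3, here is a small evaluation sketch of ch_i(p); the product form uses the independence of the X_j, and the function names and the guard for constraints already satisfied by the floors are our own additions.

```python
# Evaluation sketch of ch_i(p) from Definition 5.8; assumes entries of A_i
# in [0,1], s_j = floor(alpha * x*_j), and 0 <= delta_i < 1 (as standard
# rounding with alpha > 1 provides).
def ch_i(p, A_i, b_i, s):
    mu = sum(a * pj for a, pj in zip(A_i, p))
    slack = b_i - sum(a * sj for a, sj in zip(A_i, s))
    if slack <= 0:
        return 0.0                      # constraint already met by the floors
    delta = 1.0 - slack / mu
    num = 1.0
    for a, pj in zip(A_i, p):           # independence gives a product form
        num *= 1.0 - pj + pj * (1.0 - delta) ** a
    return num / (1.0 - delta) ** slack

def ch_prime_i(p, A_i, b_i, s):
    return min(ch_i(p, A_i, b_i, s), 1.0)
```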
Theorem 5.9. Suppose we are given a multi-criteria CIP, as well as some parameters λ_1, λ_2, …, λ_ℓ > 0. Let A be as in (22). Then, for any sequence of positive integers (k_1, k_2, …, k_ℓ) such that k_i ≤ λ_i, the following hold.

(i) Suppose we employ general randomized rounding with parameter p = (p_1, p_2, …, p_n). Then, Pr(A) is at least

$\Phi(p) := \prod_{r\in[m]} (1 - ch'_r(p)) - \sum_{i=1}^{\ell} \frac{1}{\binom{\lambda_i}{k_i}} \sum_{j_1 < \cdots < j_{k_i}} \left(\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}\right) \cdot \prod_{r \notin R(j_1, \ldots, j_{k_i})} (1 - ch'_r(p)).$ (23)

(ii) Suppose we employ the standard randomized rounding to get a rounded vector z. Let λ_i = ν_i(1 + γ_i) for each i ∈ [ℓ], where ν_i = E[c_i^T · z] = α · (c_i^T · x*) and γ_i > 0 is some parameter. Then,

$\Phi(p) \ge (1 - g(B, \alpha))^m \cdot \left(1 - \sum_{i=1}^{\ell} \frac{\binom{n}{k_i} \cdot (\nu_i/n)^{k_i}}{\binom{\nu_i(1+\gamma_i)}{k_i}} \cdot (1 - g(B, \alpha))^{-a \cdot k_i}\right).$ (24)

In particular, if the r.h.s. of (24) is positive, then Pr(A) > 0 for standard randomized rounding.
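The lower bound (24) is easy to evaluate numerically; in the sketch below, binom_real implements the real-valued binomial coefficient of Remark 3.3, and the sample parameters in the usage line are arbitrary.

```python
# Numeric evaluation sketch of the r.h.s. of (24); all arguments in the
# usage line are illustrative.
import math

def g(B, alpha):                       # as in Lemma 5.1
    return (alpha * math.exp(-(alpha - 1.0))) ** B

def binom_real(x, r):
    """x choose r for real x, in the sense of Remark 3.3."""
    out = 1.0
    for t in range(r):
        out *= (x - t) / (t + 1)
    return out

def phi_lower_bound(m, n, a, B, alpha, nus, ks, gammas):
    gb = g(B, alpha)
    total = sum(
        math.comb(n, k) * (nu / n) ** k / binom_real(nu * (1 + gam), k)
        * (1.0 - gb) ** (-a * k)
        for nu, k, gam in zip(nus, ks, gammas)
    )
    return (1.0 - gb) ** m * (1.0 - total)

# Positive here, so Pr(A) > 0 for these (made-up) parameters:
print(phi_lower_bound(m=50, n=100, a=5, B=10, alpha=2.0,
                      nus=[9.0], ks=[2], gammas=[2.0]))
```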
The proof is a simple generalization of that of Theorem 5.7, and is deferred to Section 5.4. Theorem 5.7 is the special case of Theorem 5.9 corresponding to ℓ = k 1 = 1. To make the general result of Theorem 5.9 more concrete, we now study an additional special case. We present this special case as one possible "proof of concept", rather than as an optimized one; e.g., the constant "3" in the bound "c T i · z ≤ 3ν i " can be improved.
Corollary 5.10. There is an absolute constant K' > 0 such that the following holds. Suppose we are given a multi-criteria CIP with notation as in part (ii) of Theorem 5.9. Define $\alpha = K' \cdot \max\{(\ln a + \ln\ln(2\ell))/B,\ 1\}$. Now, if ν_i ≥ log^2(2ℓ) for all i ∈ [ℓ], then standard randomized rounding produces a feasible solution z such that c_i^T · z ≤ 3ν_i for all i, with positive probability. In particular, this can be shown by setting k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i, in part (ii) of Theorem 5.9.
Proof. Let us employ Theorem 5.9(ii) with k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i. We just need to establish that the r.h.s. of (24) is positive. We need to show that

$\sum_{i=1}^{\ell} \frac{\binom{n}{k_i} \cdot (\nu_i/n)^{k_i}}{\binom{3\nu_i}{k_i}} \cdot (1 - g(B, \alpha))^{-a \cdot k_i} < 1;$

it is sufficient to prove that for all i,

$\frac{\nu_i^{k_i}/k_i!}{\binom{3\nu_i}{k_i}} \cdot (1 - g(B, \alpha))^{-a \cdot k_i} < 1/\ell.$ (25)
We make two observations now.
• Since k_i ∼ ln ℓ and ν_i ≥ log^2(2ℓ),
$\binom{3\nu_i}{k_i} = \frac{1}{k_i!} \prod_{j=0}^{k_i-1} (3\nu_i - j) = \frac{1}{k_i!} \cdot (3\nu_i)^{k_i} \cdot e^{-\Theta(\sum_{j=0}^{k_i-1} j/\nu_i)} = \Theta\left(\frac{(3\nu_i)^{k_i}}{k_i!}\right).$
• (1 − g(B, α))^{-a·k_i} can be made arbitrarily close to 1 by choosing the constant K' large enough.
These two observations establish (25).
Constructive version
It can be shown that for many problems, randomized rounding produces the solutions shown to exist by Theorem 5.7 and Theorem 5.9 with very low probability: e.g., probability almost exponentially small in the input size. Thus we need to obtain constructive versions of these theorems. Our method will be a deterministic procedure that makes O(n) calls to the function Φ(·), in addition to poly(n, m) work. Now, if k' denotes the maximum of all the k_i, we see that Φ can be evaluated in poly(n^{k'}, m) time. Thus, our overall procedure runs in poly(n^{k'}, m) time. In particular, we get constructive versions of Theorem 5.7 and Corollary 5.10 that run in time poly(n, m) and poly(n^{log ℓ}, m), respectively.
Our approach is as follows. We start with a vector p that corresponds to standard randomized rounding, for which we know (say, as argued in Corollary 5.10) that Φ(p) > 0. In general, we have a vector of probabilities p = (p 1 , p 2 , . . . , p n ) such that Φ(p) > 0. If p ∈ {0, 1} n , we are done. Otherwise suppose some p j lies in (0, 1); by renaming the variables, we will assume without loss of generality that j = n. Define p ′ = (p 1 , p 2 , . . . , p n−1 , 0) and p ′′ = (p 1 , p 2 , . . . , p n−1 , 1). The main fact we wish to show is that Φ(p ′ ) > 0 or Φ(p ′′ ) > 0: we can then set p n to 0 or 1 appropriately, and continue. (As mentioned in the previous paragraph, we thus have O(n) calls to the function Φ(·) in total.) Note that although some of the p j will lie in {0, 1}, we can crucially continue to view the X j as independent random variables with Pr(X j = 1) = p j .
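A minimal sketch of this coordinate-fixing loop follows; Phi is assumed to be a user-supplied evaluator of (23), and the guarantee that one of the two settings keeps Φ positive is exactly what (34) below establishes.

```python
# Sketch of the bit-fixing loop just described; Phi is assumed to be a
# user-supplied evaluator of (23) for a given probability vector, so at
# most two calls to Phi are made per coordinate, i.e., O(n) calls overall.
def fix_bits(p, Phi):
    p = list(p)
    for j in range(len(p)):
        if 0.0 < p[j] < 1.0:
            for bit in (1.0, 0.0):
                q = p[:j] + [bit] + p[j + 1:]
                if Phi(q) > 0.0:       # one of the two settings must work
                    p = q
                    break
    return p
```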
So, our main goal is: assuming that

p_n ∈ (0, 1) and Φ(p) > 0, (26)

to show that Φ(p') > 0 or Φ(p'') > 0. In order to do so, we make some observations and introduce some simplifying notation. Define, for each i ∈ [m]:
q_i = ch'_i(p), q'_i = ch'_i(p'), and q''_i = ch'_i(p''). Also define the vectors q := (q_1, q_2, …, q_m), q' := (q'_1, q'_2, …, q'_m), and q'' := (q''_1, q''_2, …, q''_m). We now present a useful lemma about these vectors:
Lemma 5.11. For all i ∈ [m], we have
0 ≤ q''_i ≤ q'_i ≤ 1; (27)
q_i ≥ p_n q''_i + (1 − p_n) q'_i; and (28)
q'_i = q''_i = q_i if i ∉ R(n). (29)
Proof. The proofs of (27) and (29) are straightforward. As for (28), we proceed as in [36]. First of all, if q i = 1, then we are done, since q ′′ i , q ′ i ≤ 1. So suppose q i < 1; in this case, q i = ch i (p). Now, Definition 5.8 shows that
ch i (p) = p n ch i (p ′′ ) + (1 − p n )ch i (p ′ ). Therefore, q i = ch i (p) = p n ch i (p ′′ ) + (1 − p n )ch i (p ′ ) ≥ p n ch ′ i (p ′′ ) + (1 − p n )ch ′ i (p ′ ).
Since we are mainly concerned with the vectors p, p' and p'' now, we will view the values p_1, p_2, …, p_{n−1} as arbitrary but fixed, subject to (26). The function Φ(·) now has a simple form; to see this, we first define, for a vector r = (r_1, r_2, …, r_m) and a set U ⊆ [m],

$f(U, r) = \prod_{i\in U} (1 - r_i).$
Recall that p_1, p_2, …, p_{n−1} are considered as constants now. Then, it is evident from (23) that there exist constants u_1, u_2, …, u_t and v_1, v_2, …, v_{t'}, as well as subsets U_1, U_2, …, U_t and V_1, V_2, …, V_{t'} of [m], such that

$\Phi(p) = f([m], q) - \left(\sum_i u_i \cdot f(U_i, q)\right) - \left(p_n \cdot \sum_j v_j \cdot f(V_j, q)\right);$ (30)
$\Phi(p') = f([m], q') - \left(\sum_i u_i \cdot f(U_i, q')\right) - \left(0 \cdot \sum_j v_j \cdot f(V_j, q')\right) = f([m], q') - \sum_i u_i \cdot f(U_i, q');$ (31)
$\Phi(p'') = f([m], q'') - \left(\sum_i u_i \cdot f(U_i, q'')\right) - \left(1 \cdot \sum_j v_j \cdot f(V_j, q'')\right) = f([m], q'') - \left(\sum_i u_i \cdot f(U_i, q'')\right) - \left(\sum_j v_j \cdot f(V_j, q'')\right).$ (32)
Importantly, we also have the following:

the constants u_i, v_j are non-negative; ∀j, V_j ∩ R(n) = ∅. (33)
Recall that our goal is to show that Φ(p ′ ) > 0 or Φ(p ′′ ) > 0. We will do so by proving that
$\Phi(p) \le p_n \Phi(p'') + (1 - p_n)\Phi(p').$ (34)
Let us use the equalities (30), (31), and (32). In view of (29) and (33), the term "-p_n · Σ_j v_j · f(V_j, q)" on both sides of the inequality (34) cancels; defining

$\Delta(U) := (1 - p_n) \cdot f(U, q') + p_n \cdot f(U, q'') - f(U, q),$

inequality (34) reduces to

$\Delta([m]) - \sum_i u_i \cdot \Delta(U_i) \ge 0.$ (35)
Before proving this, we pause to note a challenge we face. Suppose we only had to show that, say, ∆([m]) is non-negative; this is exactly the issue faced in [36]. Then, we will immediately be done by part (i) of Lemma 5.12, which states that ∆(U) ≥ 0 for any set U. However, (35) also has terms such as "u_i · ∆(U_i)" with a negative sign in front. To deal with this, we need something more than just that ∆(U) ≥ 0 for all U; we handle this by part (ii) of Lemma 5.12. We view this as the main novelty in our constructive version here.

Lemma 5.12. (i) For all U ⊆ [m], ∆(U) ≥ 0. (ii) If U ⊆ U' ⊆ [m], then ∆(U)/f(U, q) ≤ ∆(U')/f(U', q).

Given Lemma 5.12, we obtain (35) as follows:

$\Delta([m]) - \sum_i u_i \cdot \Delta(U_i) \ge \Delta([m]) - \frac{\Delta([m])}{f([m], q)} \cdot \sum_i u_i \cdot f(U_i, q)$ (by Lemma 5.12(ii))
$= \frac{\Delta([m])}{f([m], q)} \cdot \left(f([m], q) - \sum_i u_i \cdot f(U_i, q)\right) \ge \frac{\Delta([m])}{f([m], q)} \cdot \Phi(p)$ (by (30) and (33))
$\ge 0$ (by (26) and (30)).

Thus we have (35).
Proof of Lemma 5.12. It suffices to show the following. Assume U ≠ [m]; suppose u ∈ ([m] − U) and that U' = U ∪ {u}. Assuming by induction on |U| that ∆(U) ≥ 0, we show that ∆(U') ≥ 0, and that ∆(U)/f(U, q) ≤ ∆(U')/f(U', q). It is easy to check that this way, we will prove both claims of the lemma.
The base case of the induction is that |U| ∈ {0, 1}, where ∆(U) ≥ 0 is directly seen by using (28). Suppose inductively that ∆(U) ≥ 0. Using the definition of ∆(U) and the fact that f(U', q) = (1 − q_u) f(U, q), we have

$f(U', q) = (1 - q_u) \cdot [(1 - p_n) f(U, q') + p_n f(U, q'') - \Delta(U)]$
$\le (1 - (1 - p_n) q'_u - p_n q''_u) \cdot [(1 - p_n) f(U, q') + p_n f(U, q'')] - (1 - q_u) \cdot \Delta(U),$

where this last inequality is a consequence of (28). Therefore, using the definition of ∆(U') and the facts f(U', q') = (1 − q'_u) f(U, q') and f(U', q'') = (1 − q''_u) f(U, q''),

$\Delta(U') = (1 - p_n)(1 - q'_u) f(U, q') + p_n (1 - q''_u) f(U, q'') - f(U', q)$
$\ge (1 - p_n)(1 - q'_u) f(U, q') + p_n (1 - q''_u) f(U, q'') + (1 - q_u) \cdot \Delta(U) - (1 - (1 - p_n) q'_u - p_n q''_u) \cdot [(1 - p_n) f(U, q') + p_n f(U, q'')]$
$= (1 - q_u) \cdot \Delta(U) + p_n (1 - p_n) \cdot (f(U, q'') - f(U, q')) \cdot (q'_u - q''_u)$
$\ge (1 - q_u) \cdot \Delta(U)$ (by (27)).
So, since we assumed that ∆(U) ≥ 0, we get ∆(U') ≥ 0; furthermore, we get that ∆(U') ≥ (1 − q_u) · ∆(U), which implies that ∆(U')/f(U', q) ≥ ∆(U)/f(U, q).

Proof of Theorem 5.9. (i) Let E_r ≡ ((Az)_r < b_r) be defined w.r.t. general randomized rounding with parameter p; as observed in Definition 5.8, Pr(E_r) ≤ ch'_r(p). Now if ch'_r(p) = 1 for some r, then part (i) is trivially true; so we assume that ch'_r(p) < 1 for all r. Define $Z \equiv \bigwedge_{r\in[m]} \overline{E_r}$; by Lemma 5.3, $\Pr(Z) \ge \prod_{r\in[m]} (1 - \Pr(E_r)) > 0$.
Define, for i = 1, 2, . . . , ℓ, the "bad" event E i ≡ (c T i · z > λ i ). Fix any i. Our plan is to show that
$\Pr(E_i \mid Z) \le \frac{1}{\binom{\lambda_i}{k_i}} \cdot \sum_{j_1 < j_2 < \cdots < j_{k_i}} \left(\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}\right) \cdot \left[\prod_{r\in R(j_1, j_2, \ldots, j_{k_i})} (1 - \Pr(E_r))\right]^{-1}.$ (36)
If we prove (36), then we will be done as follows. We have
$\Pr(A) \ge \Pr(Z) \cdot \left(1 - \sum_i \Pr(E_i \mid Z)\right) \ge \left(\prod_{r\in[m]} (1 - \Pr(E_r))\right) \cdot \left(1 - \sum_i \Pr(E_i \mid Z)\right).$ (37)
Now, the term "( r∈ [m] (1 − Pr(E r )))" is a decreasing function of each of the values Pr(E r ); so is the lower bound on "− Pr(E i | Z)" obtained from (36). Hence, bounds (36) and (37), along with the bound Pr(E r ) ≤ ch ′ r (p), will complete the proof of part (i). We now prove (36) using Theorem 3.4(a) and Lemma 5.6. Recall the symmetric polynomials S k from (1). Define Y = S k i (c i,1 X 1 , c i,2 X 2 , . . . , c i,n X n )/ λ i k i . By Theorem 3.4(a), Pr(E i | Z) ≤ E[Y | Z]. Next, the typical term in E[Y | Z] can be upper bounded using Lemma 5.6:
$E\left[\prod_{t=1}^{k_i} c_{i,j_t} \cdot X_{j_t} \,\Big|\, Z\right] \le \frac{\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}}{\prod_{r\in R(j_1, j_2, \ldots, j_{k_i})} (1 - \Pr(E_r))}.$
Thus we have (36), and the proof of part (i) is complete.

(ii) By part (i) and (23), since ch'_r(p) < 1 for all r here,

$\Pr(A) \ge \Phi(p) = \prod_{r\in[m]} (1 - ch'_r(p)) \cdot \left(1 - \sum_{i=1}^{\ell} \frac{1}{\binom{\lambda_i}{k_i}} \sum_{j_1 < \cdots < j_{k_i}} \prod_{t=1}^{k_i} c_{i,j_t} p_{j_t} \cdot \Big[\prod_{r\in R(j_1, \ldots, j_{k_i})} (1 - ch'_r(p))\Big]^{-1}\right).$ (38)

Lemma 5.1 shows that under standard randomized rounding, ch'_r(p) ≤ g(B, α) < 1 for all r. So, the r.h.s. κ of (38) gets lower-bounded as follows:

$\kappa \ge (1 - g(B, \alpha))^m \cdot \left(1 - \sum_{i=1}^{\ell} \frac{1}{\binom{\nu_i(1+\gamma_i)}{k_i}} \sum_{j_1 < \cdots < j_{k_i}} \prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t} \cdot (1 - g(B, \alpha))^{-a k_i}\right)$
$\ge (1 - g(B, \alpha))^m \cdot \left(1 - \sum_{i=1}^{\ell} \frac{\binom{n}{k_i} \cdot (\nu_i/n)^{k_i}}{\binom{\nu_i(1+\gamma_i)}{k_i}} \cdot (1 - g(B, \alpha))^{-a k_i}\right),$

where the last line follows from Theorem 3.4(c). □
Conclusion
We have presented an extension of the LLL that basically helps reduce the "dependency" much in some settings; we have seen applications to two families of integer programming problems. It would be interesting to see how far these ideas can be pushed further. Two other open problems suggested by this work are: (i) developing a constructive version of our result for MIPs, and (ii) developing a poly(n, m)-time constructive version of Theorem 5.9, as opposed to the poly(n^{k'}, m)-time constructive version that we present in § 5.3. Finally, a very interesting question is to develop a theory of applications of the LLL that can be made constructive with (essentially) no loss.
cs0307043 | 2953390777 | The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs. | Given an ILP, we can find an optimal solution @math to its LP relaxation efficiently, but need to round fractional entries in @math to integers. The idea of randomized rounding is: given a real @math , round @math to @math with probability @math , and round @math to @math with probability @math . This has the nice property that the mean outcome is @math . Starting with this idea, the analysis of @cite_22 produces an integral solution of value at most @math for MIPs (though phrased a bit differently); this is derandomized in @cite_20 . But this does not exploit the sparsity of @math ; the previously-mentioned result of @cite_5 produces an integral solution of value at most @math . | {
"abstract": [
"",
"We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be a of extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.",
"We consider the problem of approximating an integer program by first solving its relaxation linear program and \"rounding\" the resulting solution. For several packing problems, we prove probabilistically that there exists an integer solution close to the optimum of the relaxation solution. We then develop a methodology for converting such a probabilistic existence proof to a deterministic approximation algorithm. The methodology mimics the existence proof in a very strong sense."
],
"cite_N": [
"@cite_5",
"@cite_22",
"@cite_20"
],
"mid": [
"2064779029",
"2022191808",
"2024631575"
]
} | An Extension of the Lovász Local Lemma, and its Applications to Integer Programming * | The powerful Lovász Local Lemma (LLL) is often used to show the existence of rare combinatorial structures by showing that a random sample from a suitable sample space produces them with positive probability [14]; see Alon & Spencer [4] and Motwani & Raghavan [27] for several such applications. We present an extension of this lemma, and demonstrate applications to rounding fractional solutions for certain families of integer programs.
Let e denote the base of natural logarithms as usual. The symmetric case of the LLL shows that all of a set of "bad" events E i can be avoided under some conditions: Lemma 1.1. ( [14]) Let E 1 , E 2 , . . . , E m be any events with Pr(E i ) ≤ p ∀i. If each E i is mutually independent of all but at most d of the other events E j and if ep(d + 1) ≤ 1, then Pr( m i=1 E i ) > 0.
Though the LLL is powerful, one problem is that the "dependency" d is high in some cases, precluding the use of the LLL if p is not small enough. We present a partial solution to this via an extension of the LLL (Theorem 3.1), which shows how to essentially reduce d for a class of events E i ; this works well when each E i denotes some random variable deviating "much" from its mean. In a nutshell, we show that such events E i can often be decomposed suitably into sub-events; although the sub-events may have a large dependency among themselves, we show that it suffices to have a small "bipartite dependency" between the set of events E i and the set of sub-events. This, in combination with some other ideas, leads to the following applications in integer programming.
It is well-known that a large number of NP-hard combinatorial optimization problems can be cast as integer linear programming problems (ILPs). Due to their NP-hardness, good approximation algorithms are of much interest for such problems. Recall that a ρ-approximation algorithm for a minimization problem is a polynomial-time algorithm that delivers a solution whose objective function value is at most ρ times optimal; ρ is usually called the approximation guarantee, approximation ratio, or performance guarantee of the algorithm. Algorithmic work in this area typically focuses on achieving the smallest possible ρ in polynomial time. One powerful paradigm here is to start with the linear programming (LP) relaxation of the given ILP wherein the variables are allowed to be reals within their integer ranges; once an optimal solution is found for the LP, the main issue is how to round it to a good feasible solution for the ILP.
Rounding results in this context often have the following strong property: they present an integral solution of value at most y * · ρ, where y * will throughout denote the optimal solution value of the LP relaxation.
Since the optimal solution value OP T of the ILP is easily seen to be lower-bounded by y * , such rounding algorithms are also ρ-approximation algorithms. Furthermore, they provide an upper bound of ρ on the ratio OP T /y * , which is usually called the integrality gap or integrality ratio of the relaxation; the smaller this value, the better the relaxation.
This work presents improved upper bounds on the integrality gap of the natural LP relaxation for two families of ILPs: minimax integer programs (MIPs) and covering integer programs (CIPs). (The precise definitions and results are presented in § 2.) For the latter, we also provide the corresponding polynomialtime rounding algorithms. Our main improvements are in the case where the coefficient matrix of the given ILP is column-sparse: i.e., the number of nonzero entries in every column is bounded by a given parameter a. There are classical rounding theorems for such column-sparse problems (e.g., Beck & Fiala [6], Karp, Leighton, Rivest, Thompson, Vazirani & Vazirani [18]). Our results complement, and are incomparable with, these results. Furthermore, the notion of column-sparsity, which denotes no variable occurring in "too many" constraints, occurs naturally in combinatorial optimization: e.g., routing using "short" paths, and problems on hypergraphs with "small" degree. These issues are discussed further in § 2.
A key technique, randomized rounding of linear relaxations, was developed by Raghavan & Thompson [32] to get approximation algorithms for such ILPs. We use Theorem 3.1 to prove that this technique produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrix of the given MIP/CIP is column-sparse. (In the case of MIPs, our algorithm iterates randomized rounding several times with different choices of parameters, in order to achieve our result.) Such results cannot be got via Lemma 1.1, as the dependency d, in the sense of Lemma 1.1, can be as high as Θ(m) for these problems. Roughly speaking, Theorem 3.1 helps show that if no column in our given ILP has more than a nonzero entries, then the dependency can essentially be brought down to a polynomial in a; this is the key driver behind our improvements.
Theorem 3.1 works well in combination with an idea that has blossomed in the areas of derandomization and pseudorandomness, in the last two decades: (approximately) decomposing a function of several variables into a sum of terms, each of which depends on only a few of these variables. Concretely, suppose Z is a sum of random variables Z i . Many tools have been developed to upper-bound Pr(Z − E[Z] ≥ z) and Pr(|Z − E[Z]| ≥ z) even if the Z i s are only (almost) k-wise independent for some "small" k, rather than completely independent. The idea is to bound the probabilities by considering E[(Z − E[Z]) k ] or similar expectations, which look at the Z i k or fewer at a time (via linearity of expectation). The main application of this has been that the Z i can then be sampled using "few" random bits, yielding a derandomization/pseudorandomness result (e.g., [3,23,8,26,28,33]). Our results show that such ideas can in fact be used to show that some structures exist! This is one of our main contributions.
What about polynomial-time algorithms for our existential results? Typical applications of Lemma 1.1 are "nonconstructive" [i.e., do not directly imply (randomized) polynomial-time algorithmic versions], since the positive probability guaranteed by Lemma 1.1 can be exponentially small in the size of the input. However, certain algorithmic versions of the LLL have been developed starting with the seminal work of Beck [5]. These ideas do not seem to apply to our extension of the LLL, and hence our MIP result is nonconstructive. Following the preliminary version of this work [35], two main algorithmic versions related to our work have been obtained: (i) for a subclass of the MIPs [20], and (ii) for a somewhat different notion of approximation than the one we study, for certain families of MIPs [11].
Our main algorithmic contribution is for CIPs and multi-criteria versions thereof: we show, by a generalization of the method of pessimistic estimators [31], that we can efficiently construct the same structure as is guaranteed by our nonconstructive argument. We view this as interesting for two reasons. First, the generalized pessimistic estimator argument requires a quite delicate analysis, which we expect to be useful in other applications of developing constructive versions of existential arguments. Second, except for some of the algorithmic versions of the LLL developed in [24,25], most current algorithmic versions minimally require something like "pd^3 = O(1)" (see, e.g., [5,1]); the LLL only needs that pd = O(1). While this issue does not matter much in many applications, it crucially does, in some others. A good example of this is the existentially-optimal integrality gap for the edge-disjoint paths problem with "short" paths, shown using the LLL in [21]. The "pd^3 = O(1)" requirement of currently-known algorithmic approaches to the LLL leads to algorithms that will violate the edge-disjointness condition when applied in this context: specifically, they may route up to three paths on some edges of the graph. See [9] for a different, random-walk-based, approach to low-congestion routing. An algorithmic version of this edge-disjoint paths result of [21] is still lacking. It is a very interesting open question whether there is an algorithmic version of the LLL that can construct the same structures as guaranteed to exist by the LLL. In particular, can one of the most successful derandomization tools (the method of conditional probabilities or its generalization, the pessimistic estimators method) be applied, fixing the underlying random choices of the probabilistic argument one-by-one? This intriguing question is open (and seems difficult) for now. As a step in this direction, we are able to show how such approaches can indeed be developed, in the context of CIPs.
Thus, our main contributions are as follows. (a) The LLL extension is of independent interest: it helps in certain settings where the "dependency" among the "bad" events is too high for the LLL to be directly applicable. We expect to see further applications/extensions of such ideas. (b) This work shows that certain classes of column-sparse ILPs have much better solutions than known before; such problems abound in practice (e.g., short paths are often desired/required in routing). (c) Our generalized method of pessimistic estimators should prove fruitful in other contexts also; it is a step toward complete algorithmic versions of the LLL.
The rest of this paper is organized as follows. Our results are first presented in § 2, along with a discussion of related work. The extended LLL, and some large-deviation methods that will be seen to work well with it, are shown in § 3. Sections 4 and 5 are devoted to our rounding applications. Finally, § 6 concludes.
Improvements achieved
For MIPs, we use the extended LLL and an idea of Éva Tardos that leads to a bootstrapping of the LLL extension, to show the existence of an integral solution of value y* + O(min{y*, m} · H(min{y*, m}, 1/a)) + O(1); see Theorem 4.5. Since a ≤ m, this is always as good as the y* + O(min{y*, m} · H(min{y*, m}, 1/m)) bound of [32], and is a good improvement if a ≪ m. It also is an improvement over the additive g factor of [18] in cases where g is not small compared to y*.
Consider, e.g., the global routing problem and its MIP formulation, sketched above; m here is the number of edges in G, and g = a is the maximum length of any path in ⋃_i P_i. To focus on a specific interesting case, suppose y*, the fractional congestion, is at most one. Then while the previous results ([32] and [18], resp.) give bounds of O(log m/log log m) and O(a) on an integral solution, we get the improved bound of O(log a/log log a). Similar improvements are easily seen for other ranges of y* also; e.g., if y* = O(log a), an integral solution of value O(log a) exists, improving on the previously known bounds of O(log m/log(2 log m/log a)) and O(a). Thus, routing along short paths (this is the notion of sparsity for the global routing problem) is very beneficial in keeping the congestion low. Section 4 presents a scenario where we get such improvements, for discrepancy-type problems [34,4]. In particular, we generalize a hypergraph-partitioning result of Füredi & Kahn [16].
Recall the bounds of [36] for CIPs mentioned in the paragraph preceding this subsection; our bounds for CIPs depend only on the set of constraints Ax ≥ b, i.e., they hold for any non-negative objective-function vector c. Our improvements over [36] get better as y* decreases. We show an integrality gap of 1 + O(max{ln(a + 1)/B, √(ln(a + 1)/B)}), once again improving on [36] for weighted CIPs. This CIP bound is better than that of [36] if y* ≤ mB/a: this inequality fails for unweighted CIPs and is generally true for weighted CIPs, since y* can get arbitrarily small in the latter case. In particular, we generalize the result of Chvátal [10] on weighted set cover. Consider, e.g., a facility location problem on a directed graph G = (V, A): given a cost c_i ∈ [0, 1] for each i ∈ V, we want a min-cost assignment of facilities to the nodes such that each node sees at least B facilities in its out-neighborhood (multiple facilities at a node are allowed). If ∆_in is the maximum in-degree of G, we show an integrality gap of 1 + O(max{ln(∆_in + 1)/B, √(ln(B(∆_in + 1))/B)}). This improves on [36] if y* ≤ |V|B/∆_in; it shows an O(1) (resp., 1 + o(1)) integrality gap if B grows as fast as (resp., strictly faster than) log ∆_in. Theorem 5.7 presents our covering results.
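To make the facility-location example concrete, the sketch below (our illustration; the helper name and the toy digraph are hypothetical) builds the covering system Ax ≥ b: row v sums the facilities in the out-neighborhood of v, so column u has one nonzero entry per in-edge of u, and the column sparsity a equals the maximum in-degree ∆_in.

```python
def facility_cover_system(adj, B):
    """Build Ax >= b for the facility-location example: node v must see
    at least B facilities among its out-neighbors (adj: adjacency lists)."""
    n = len(adj)
    A = [[0] * n for _ in range(n)]
    for v, outs in enumerate(adj):
        for u in outs:
            A[v][u] = 1          # a facility at u helps node v
    return A, [B] * n

adj = [[1, 2], [2, 3], [3, 0], [0, 1]]   # toy digraph on 4 nodes
A, b = facility_cover_system(adj, B=1)
print(A, b, "a =", max(sum(col) for col in zip(*A)))
```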
A key corollary of our results is that for families of instances of CIPs, we get a good (O(1) or 1 + o(1)) integrality gap if B grows at least as fast as log a. Bounds on the result of a greedy algorithm for CIPs, relative to the optimal integral solution, are known [12,13]. Our bound improves that of [12] and is incomparable with [13]; for any given A, c, and the unit vector b/‖b‖_2, our bound improves on [13] if B is more than a certain threshold. As it stands, randomized rounding produces such improved solutions for several CIPs only with a very low, sometimes exponentially small, probability. Thus, it often does not directly imply a randomized algorithm. To this end, we generalize Raghavan's method of pessimistic estimators to derive an algorithmic (polynomial-time) version of our results for CIPs, in § 5.3.
We also show via Theorem 5.9 and Corollary 5.10 that multi-criteria CIPs can be approximated well. In particular, Corollary 5.10 shows some interesting cases where the approximation guarantee for multi-criteria CIPs grows in a very much sub-linear fashion with the number ℓ of given vectors c i : the approximation ratio is at most O(log log ℓ) times what we show for CIPs (which correspond to the case where ℓ = 1). We are not aware of any such earlier work on multi-criteria CIPs.
The preliminary version of this work was presented in [35]. As mentioned in § 1, two main algorithmic versions related to our work have been obtained following [35]. First, for a subclass of the MIPs where the nonzero entries of the matrix A are "reasonably large", constructive versions of our results have been obtained in [20]. Second, for a notion of approximation that is different from the one we study, algorithmic results have been developed for certain families of MIPs in [11]. Furthermore, our Theorem 5.7 for CIPs has been used in [19] to develop approximation algorithms for CIPs that have given upper bounds on the variables x j .
The Extended LLL and an Approach to Large Deviations
We now present our LLL extension, Theorem 3.1. For any event E, define χ(E) to be its indicator r.v.: 1 if E holds and 0 otherwise. Also, for any I ⊆ [m], let Z(I) denote the event ⋀_{k∈I} Ē_k. Suppose we have "bad" events E_1, ..., E_m with a "dependency" d′ (in the sense of Lemma 1.1) that is "large". Theorem 3.1 shows how to essentially replace d′ by a possibly much-smaller d, under some conditions. It generalizes Lemma 1.1 (define one r.v., C_{i,1} = χ(E_i), for each i, to get Lemma 1.1); its proof is very similar to the classical proof of Lemma 1.1, and its motivation will be clarified by the applications.

Theorem 3.1. Let E_1, E_2, ..., E_m be any events, and suppose that for each i ∈ [m] there is a finite number of non-negative r.v.s C_{i,1}, C_{i,2}, ... such that:
(i) any C_{i,j} is mutually independent of all but at most d of the events E_k, k ≠ i, and
(ii) ∀I ⊆ ([m] − {i}), Pr(E_i | Z(I)) ≤ Σ_j E[C_{i,j} | Z(I)].
Let p_i denote Σ_j E[C_{i,j}]; clearly, Pr(E_i) ≤ p_i (set I = ∅ in (ii)). Suppose that for all i ∈ [m] we have e·p_i·(d + 1) ≤ 1. Then Pr(⋀_i Ē_i) ≥ (d/(d + 1))^m > 0.
Remark 3.2. C_{i,j} and C_{i,j′} can "depend" on different subsets of {E_k | k ≠ i}; the only restriction is that these subsets be of size at most d. Note that we have essentially reduced the dependency among the E_i's to just d: e·p_i·(d + 1) ≤ 1 suffices. Another important point is that the dependency among the r.v.s C_{i,j} could be much higher than d: all we count is the number of E_k that any C_{i,j} depends on.

Proof of Theorem 3.1. We prove by induction on |I| that if i ∉ I, then Pr(E_i | Z(I)) ≤ e·p_i; this suffices to prove the theorem, since Pr(⋀_i Ē_i) = ∏_{i∈[m]} (1 − Pr(E_i | Z([i − 1]))). For the base case I = ∅, Pr(E_i | Z(I)) = Pr(E_i) ≤ p_i. For the inductive step, let S_{i,j,I} ≐ {k ∈ I : C_{i,j} depends on E_k}, and S′_{i,j,I} = I − S_{i,j,I}; note that |S_{i,j,I}| ≤ d. If S_{i,j,I} = ∅, then E[C_{i,j} | Z(I)] = E[C_{i,j}]. Otherwise, letting S_{i,j,I} = {ℓ_1, ..., ℓ_r}, we have

E[C_{i,j} | Z(I)] = E[C_{i,j} · χ(Z(S_{i,j,I})) | Z(S′_{i,j,I})] / Pr(Z(S_{i,j,I}) | Z(S′_{i,j,I})) ≤ E[C_{i,j} | Z(S′_{i,j,I})] / Pr(Z(S_{i,j,I}) | Z(S′_{i,j,I})),

since C_{i,j} is non-negative. The numerator of the last term is E[C_{i,j}], by assumption (i). The denominator can be lower-bounded as follows:

∏_{s∈[r]} (1 − Pr(E_{ℓ_s} | Z({ℓ_1, ℓ_2, ..., ℓ_{s−1}} ∪ S′_{i,j,I}))) ≥ ∏_{s∈[r]} (1 − e·p_{ℓ_s}) ≥ (1 − 1/(d + 1))^r ≥ (d/(d + 1))^d > 1/e;

the first inequality follows from the induction hypothesis. Hence, E[C_{i,j} | Z(I)] ≤ e·E[C_{i,j}], and thus Pr(E_i | Z(I)) ≤ Σ_j E[C_{i,j} | Z(I)] ≤ e·p_i ≤ 1/(d + 1).
The crucial point is that the events E i could have a large dependency d ′ , in the sense of the classical Lemma 1.1. The main utility of Theorem 3.1 is that if we can "decompose" each E i into the r.v.s C i,j that satisfy the conditions of the theorem, then there is the possibility of effectively reducing the dependency by much (d ′ can be replaced by the value d). Concrete instances of this will be studied in later sections.
The tools behind our MIP application are our new LLL, and a result of [33]. Define, for z = (z_1, ..., z_n) ∈ ℜ^n, a family of polynomials S_j(z), j = 0, 1, ..., n, where S_0(z) ≡ 1, and for j ∈ [n],

S_j(z) ≐ Σ_{1 ≤ i_1 < i_2 < ··· < i_j ≤ n} z_{i_1} z_{i_2} ··· z_{i_j}.   (1)

Remark 3.3. For real x and non-negative integral r, we define C(x, r) ≐ x(x − 1)···(x − r + 1)/r! as usual (the binomial coefficient, with a possibly non-integral upper argument); this is the sense meant in Theorem 3.4 below.
We define a nonempty event to be any event with a nonzero probability of occurrence. The relevant theorem of [33] is the following:
Theorem 3.4. ([33]) Given r.v.s X_1, ..., X_n ∈ [0, 1], let X = Σ_{i=1}^n X_i and µ = E[X]. Then:
(a) For any q > 0, any nonempty event Z, and any non-negative integer k ≤ q, Pr(X ≥ q | Z) ≤ E[Y_{k,q} | Z], where Y_{k,q} = S_k(X_1, ..., X_n)/C(q, k).
(b) If the X_i's are independent, δ > 0, and k = ⌈µδ⌉, then Pr(X ≥ µ(1 + δ)) ≤ E[Y_{k,µ(1+δ)}] ≤ G(µ, δ), where G(·, ·) is as in Lemma 2.4.
(c) If the X_i's are independent, then E[S_k(X_1, ..., X_n)] ≤ C(n, k) · (µ/n)^k ≤ µ^k/k!.

Proof. Suppose r_1, r_2, ..., r_n ∈ [0, 1] satisfy Σ_{i=1}^n r_i ≥ q. Then, a simple proof is given in [33] for the fact that for any non-negative integer k ≤ q, S_k(r_1, r_2, ..., r_n) ≥ C(q, k). This clearly holds even given the occurrence of any nonempty event Z. Thus we get

Pr(X ≥ q | Z) ≤ Pr(Y_{k,q} ≥ 1 | Z) ≤ E[Y_{k,q} | Z],

where the second inequality follows from Markov's inequality. The proofs of (b) and (c) are given in [33].
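Theorem 3.4(a) can be sanity-checked by brute force on a tiny instance. The sketch below (ours; the Bernoulli parameters are hypothetical) enumerates all outcomes of independent X_i and confirms Pr(X ≥ q) ≤ E[S_k(X_1, ..., X_n)]/C(q, k); indeed, S_k of the bits is at least C(q, k) pointwise whenever the bits sum to at least q, which is the heart of the proof above.

```python
from itertools import combinations, product
from math import comb, prod

def S(k, vals):
    """Elementary symmetric polynomial S_k of the given values."""
    return sum(prod(c) for c in combinations(vals, k))

ps = [0.2, 0.5, 0.4, 0.7, 0.3]   # hypothetical Bernoulli parameters
q, k = 3, 2
lhs = rhs = 0.0
for bits in product((0, 1), repeat=len(ps)):
    pr = prod(p if b else 1 - p for p, b in zip(ps, bits))
    lhs += pr * (sum(bits) >= q)
    rhs += pr * S(k, bits) / comb(q, k)
print(f"Pr(X >= {q}) = {lhs:.4f} <= E[S_k]/C(q,k) = {rhs:.4f}")
```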
We next present the proof of Lemma 2.4:
Proof of Lemma 2.4. Part (a) is the Chernoff-Hoeffding bound (see, e.g., Appendix A of [4], or [27]). For (b), we proceed as follows. For any µ > 0, it is easy to check that

G(µ, δ) = e^{−Θ(µδ²)} if δ ∈ (0, 1);   (2)
G(µ, δ) = e^{−Θ(µ(1+δ) ln(1+δ))} if δ ≥ 1.   (3)

Now if µ ≤ log(p^{−1})/2, choose δ = C · log(p^{−1}) / (µ · log(log(p^{−1})/µ)) for a suitably large constant C. Note that δ is lower-bounded by some positive constant; hence, (3) holds (since the constant 1 in the conditions "δ ∈ (0, 1)" and "δ ≥ 1" of (2) and (3) can clearly be replaced by any other positive constant). Simple algebraic manipulation now shows that if C is large enough, then ⌈µδ⌉ · G(µ, δ) ≤ p holds. Similarly, if µ > log(p^{−1})/2, we set δ = C · √(log(µ + p^{−1})/µ) for a large enough constant C, and use (2).
Approximating Minimax Integer Programs
Suppose we are given an MIP conforming to Definition 2.1. Define t to be max_{i∈[n]} NZ_i, where NZ_i is the number of rows of A which have a non-zero coefficient corresponding to at least one variable among {x_{i,j} : j ∈ [ℓ_i]}. Note that

g ≤ a ≤ t ≤ min{m, a · max_{i∈[n]} ℓ_i}.   (4)
Theorem 4.2 now shows how Theorem 3.1 can help, for sparse MIPs-those where t ≪ m. We will then bootstrap Theorem 4.2 to get the further improved Theorem 4.5. We start with a proposition, whose proof is a simple calculus exercise:
Proposition 4.1. If 0 < µ_1 ≤ µ_2, then for any δ > 0, G(µ_1, µ_2·δ/µ_1) ≤ G(µ_2, δ).

Theorem 4.2. Given an MIP conforming to Definition 2.1, there exists an integral solution of value at most y* + O(min{y*, m} · H(min{y*, m}, 1/t)) + O(1).

Proof. Conduct randomized rounding: independently for each i, randomly round exactly one x_{i,j} to 1, guided by the "probabilities" {x*_{i,j}}. We may assume that {x*_{i,j}} is a basic feasible solution to the LP relaxation. Hence, at most m of the {x*_{i,j}} will be neither zero nor one, and only these variables will participate in the rounding. Thus, since all the entries of A are in [0, 1], we assume without loss of generality from now on that y* ≤ m (and that max_{i∈[n]} ℓ_i ≤ m); this explains the "min{y*, m}" term in our stated bounds. If z ∈ {0, 1}^N denotes the randomly rounded vector, then E[(Az)_i] = b_i ≤ y* by linearity of expectation. Defining k = ⌈y* · H(y*, 1/(et))⌉ and events E_1, E_2, ..., E_m by E_i ≡ "(Az)_i ≥ b_i + k", we decompose each E_i as suggested by Theorem 3.4(a): writing Z_{i,v} for the contribution of the vth variable to (Az)_i under the rounding, and letting S(1), S(2), ..., S(u) be the k-element subsets of the variables, define

C_{i,j} ≐ (∏_{v∈S(j)} Z_{i,v}) / C(b_i + k, k).   (5)

We now need to show that the r.v.s C_{i,j} satisfy the conditions of Theorem 3.1. For any i ∈ [m], let δ_i = k/b_i. Since b_i ≤ y*, we have, for each i ∈ [m], G(b_i, δ_i) ≤ G(y*, k/y*) ≤ 1/(ekt), by Proposition 4.1 and the choice of k. Theorem 3.4(a) shows that for any nonempty event Z, Pr(E_i | Z) ≤ Σ_{j∈[u]} E[C_{i,j} | Z]. Also, p_i ≐ Σ_{j∈[u]} E[C_{i,j}] < G(b_i, δ_i) ≤ 1/(ekt).
Next, since any C_{i,j} involves (a product of) k terms, each of which "depends" on at most (t − 1) of the events {E_v : v ∈ ([m] − {i})} by definition of t, we see the important

Fact 4.4. ∀i ∈ [m] ∀j ∈ [u], C_{i,j} ∈ [0, 1], and C_{i,j} "depends" on at most d = k(t − 1) of the set of events {E_v : v ∈ ([m] − {i})}.

Thus, since e·p_i·(d + 1) ≤ e·(1/(ekt))·(k(t − 1) + 1) ≤ 1, Theorem 3.1 shows that Pr(⋀_i Ē_i) > 0; so an integral solution of value at most max_i b_i + k ≤ y* + O(y* · H(y*, 1/(et))) + O(1) exists, as claimed.

Theorem 4.2 gives good results if t ≪ m, but can we improve it further, say by replacing t by a (≤ t) in it? As seen from (4), the key reason for t ≫ a^{Θ(1)} is that max_{i∈[n]} ℓ_i ≫ a^{Θ(1)}. If we can essentially "bring down" max_{i∈[n]} ℓ_i by forcing many x*_{i,j} to be zero for each i, then we effectively reduce t (t ≤ a · max_i ℓ_i, see (4)); this is so since only those x*_{i,j} that are neither zero nor one take part in the rounding. A way of bootstrapping Theorem 4.2 to achieve this is shown by:

Theorem 4.5. Given an MIP conforming to Definition 2.1, there exists an integral solution of value at most y* + O(min{y*, m} · H(min{y*, m}, 1/a)) + O(1).

Proof. Let K_0 > 0 be a sufficiently large absolute constant. Now if

(y* ≥ t^{1/7}) or (t ≤ max{K_0, 2}) or (t ≤ a^4)   (6)

holds, then we will be done by Theorem 4.2. So we may assume that (6) is false. Also, if y* ≤ t^{−1/7}, Theorem 4.2 guarantees an integral solution of value O(1); thus, we also suppose that y* > t^{−1/7}. The basic idea now is, as sketched above, to set many x*_{i,j} to zero for each i (without losing too much on y*), so that max_i ℓ_i, and hence t, will essentially get reduced. Such an approach, whose performance will be validated by arguments similar to those of Theorem 4.2, is repeatedly applied until (6) holds, owing to the (continually reduced) t becoming small enough to satisfy (6). There are two cases:
Case I: y* ≥ 1. Solve the LP relaxation, and set x′_{i,j} := (y*)^2 (log^5 t) · x*_{i,j}. Conduct randomized rounding on the x′_{i,j} now, rounding each x′_{i,j} independently to z_{i,j} ∈ {⌊x′_{i,j}⌋, ⌈x′_{i,j}⌉}. (Note the key difference from Theorem 4.2, where for each i, we round exactly one x*_{i,j} to 1.)
Let K_1 > 0 be a sufficiently large absolute constant. We now use ideas similar to those used in our proof of Theorem 4.2 to show that with nonzero probability, we have both of the following:

∀i ∈ [m], (Az)_i ≤ (y*)^3 log^5 t · (1 + K_1/((y*)^{1.5} log^2 t)), and   (7)
∀i ∈ [n], |Σ_j z_{i,j} − (y*)^2 log^5 t| ≤ K_1 y* log^3 t.   (8)
To show this, we proceed as follows. Let E_1, E_2, ..., E_m be the "bad" events, one for each event in (7) not holding; similarly, let E_{m+1}, E_{m+2}, ..., E_{m+n} be the "bad" events, one for each event in (8) not holding. We want to use our extended LLL to show that with positive probability, all these bad events can be avoided; specifically, we need a way of decomposing each E_i into a finite number of non-negative r.v.s C_{i,j}. For each event E_{m+ℓ} where ℓ ≥ 1, we define just one r.v. C_{m+ℓ,1}: this is the indicator variable for the occurrence of E_{m+ℓ}. For the events E_i where i ≤ m, we decompose E_i into r.v.s C_{i,j} just as in (5): each such C_{i,j} is now a scalar multiple of at most O((y*)^3 log^5 t/((y*)^{1.5} log^2 t)) = O((y*)^{1.5} log^3 t) = O(t^{1.5/7} log^3 t) independent binary r.v.s that underlie our randomized rounding; the second equality (big-Oh bound) here follows since (6) has been assumed to not hold. Thus, it is easy to see that for all i, 1 ≤ i ≤ m + n, and for any j, the r.v. C_{i,j} depends on at most

O(t · t^{1.5/7} log^3 t)   (9)
events E_k, where k ≠ i. Also, as in our proof of Theorem 4.2, Theorem 3.4 gives a direct proof of requirement (ii) of Theorem 3.1; part (b) of Theorem 3.4 shows that for any desired constant K, we can choose the constant K_1 large enough so that for all i, Σ_j E[C_{i,j}] ≤ t^{−K}. Thus, in view of (9), we see by Theorem 3.1 that Pr(⋀_{i=1}^{m+n} Ē_i) > 0. Fix a rounding z satisfying (7) and (8). For each i ∈ [n] and j ∈ [ℓ_i], we renormalize as follows: x″_{i,j} := z_{i,j} / Σ_u z_{i,u}. Thus we have Σ_u x″_{i,u} = 1 for all i; we now see that we have two very useful properties. First, since Σ_j z_{i,j} ≥ (y*)^2 log^5 t · (1 − O(1/(y* log^2 t))) for all i from (8), we have

∀i ∈ [m], (Ax″)_i ≤ y* (1 + O(1/((y*)^{1.5} log^2 t))) / (1 − O(1/(y* log^2 t))) ≤ y* (1 + O(1/(y* log^2 t))).   (10)
Second, since the z_{i,j} are non-negative integers summing to at most (y*)^2 log^5 t · (1 + O(1/(y* log^2 t))), at most O((y*)^2 log^5 t) values x″_{i,j} are nonzero, for each i ∈ [n]. Thus, by losing a little in y* (see (10)), our "scaling up-rounding-scaling down" method has given a fractional solution x″ with a much-reduced ℓ_i for each i; ℓ_i is now O((y*)^2 log^5 t), essentially. Thus, t has been reduced to O(a(y*)^2 log^5 t); i.e., t has been reduced to at most

K_2 t^{1/4 + 2/7} log^5 t   (11)
for some constant K_2 > 0 that is independent of K_0, since (6) was assumed false. Repeating this scheme O(log log t) times makes t small enough to satisfy (6). More formally, define t_0 = t, and t_{i+1} = K_2 t_i^{1/4 + 2/7} log^5 t_i for i ≥ 0. Stop this sequence at the first point where either t = t_i satisfies (6), or t_{i+1} ≥ t_i holds. Thus, we finally have t small enough to satisfy (6) or to be bounded by some absolute constant. How much has max_{i∈[m]} (Ax)_i increased in the process? By (10), we see that at the end,

max_{i∈[m]} (Ax)_i ≤ y* · ∏_{j≥0} (1 + O(1/(y* log^2 t_j))) ≤ y* · e^{O(Σ_{j≥0} 1/(y* log^2 t_j))} ≤ y* + O(1),   (12)
since the values log t j decrease geometrically and are lower-bounded by some absolute positive constant. We may now apply Theorem 4.2.
Case II: t^{−1/7} < y* < 1. The idea is the same here, with the scaling up of x*_{i,j} being by (log^5 t)/y*; the same "scaling up-rounding-scaling down" method works out. Since the ideas are very similar to Case I, we only give a proof sketch here. We now scale up all the x*_{i,j} first by (log^5 t)/y* and do a randomized rounding. The analogs of (7) and (8) now are:

∀i ∈ [m], (Az)_i ≤ log^5 t · (1 + K′_1/log^2 t), and   (13)
∀i ∈ [n], |Σ_j z_{i,j} − log^5 t/y*| ≤ K′_1 log^3 t/√(y*).   (14)
Proceeding identically as in Case I, we can show that with positive probability, (13) and (14) hold simultaneously. Fix a rounding where these two properties hold, and renormalize as before: x″_{i,j} := z_{i,j} / Σ_u z_{i,u}. Since (13) and (14) hold, it is easy to show that the following analogs of (10) and (11) hold:

(Ax″)_i ≤ y* (1 + O(1/log^2 t)) / (1 − O(√(y*)/log^2 t)) ≤ y* (1 + O(1/log^2 t));

and t has been reduced to O(a log^5 t/y*), i.e., to O(t^{1/4 + 1/7} log^5 t).
We thus only need O(log log t) iterations, again. Also, the analog of (12) now is that

max_{i∈[m]} (Ax)_i ≤ y* · ∏_{j≥0} (1 + O(1/log^2 t_j)) ≤ y* · e^{O(Σ_{j≥0} 1/log^2 t_j)} ≤ y* + O(1).
This completes the proof.
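In code, one iteration of the Case-I "scaling up-rounding-scaling down" step looks roughly as follows (a sketch of ours, not the paper's implementation; the handling of all-zero groups is simplistic):

```python
import math
import random

def scale_round_renormalize(xstar, ystar, t):
    """One Case-I iteration: scale each group by (y*)^2 log^5 t, round every
    entry to an adjacent integer, then renormalize each group to sum to 1."""
    factor = (ystar ** 2) * math.log(t) ** 5
    result = []
    for group in xstar:                       # xstar: list of groups x*_{i,.}
        z = []
        for v in group:
            w = factor * v
            fl = math.floor(w)
            z.append(fl + (1 if random.random() < w - fl else 0))
        tot = sum(z)
        if tot == 0:                          # degenerate; keep the old group
            result.append(list(group))
        else:
            result.append([zj / tot for zj in z])  # x''_{i,j} = z_{i,j}/sum_u z_{i,u}
    return result

xstar = [[0.5, 0.5], [0.2, 0.3, 0.5]]         # hypothetical fractional solution
print(scale_round_renormalize(xstar, ystar=1.0, t=100.0))
```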
We now study our improvements for discrepancy-type problems, which are an important class of MIPs that, among other things, are useful in devising divide-and-conquer algorithms. Given is a set-system (X, F), where X = [n] and F = {D_1, D_2, ..., D_M} ⊆ 2^X. Given a positive integer ℓ, the problem is to partition X into ℓ parts, so that each D_j is "split well": we want a function f : X → [ℓ] which minimizes max_{j∈[M], k∈[ℓ]} |{i ∈ D_j : f(i) = k}|. (The case ℓ = 2 is the standard set-discrepancy problem.) To motivate this problem, suppose we have a (di)graph (V, A); we want a partition of V into V_1, ..., V_ℓ such that ∀v ∈ V, the values {|N(v) ∩ V_k| : k ∈ [ℓ]} are "roughly the same", where N(v) is the (out-)neighborhood of v. See, e.g., [2,17] for how this helps construct divide-and-conquer approaches. This problem is naturally modeled by the above set-system problem.
Let ∆ be the degree of (X, F), i.e., max_{i∈[n]} |{j : i ∈ D_j}|, and let ∆′ ≐ max_{D_j∈F} |D_j|. Our problem is naturally written as an MIP with m = Mℓ, ℓ_i = ℓ for each i, and g = a = ∆, in the notation of Definition 2.1; y* = ∆′/ℓ here. The analysis of [32] gives an integral solution of value at most y*(1 + O(H(y*, 1/(Mℓ)))), while [18] presents a solution of value at most y* + ∆. Also, since any D_j ∈ F intersects at most (∆ − 1)∆′ other elements of F, Lemma 1.1 shows that randomized rounding produces, with positive probability, a solution of value at most y*(1 + O(H(y*, 1/(e∆′∆ℓ)))). This is the approach taken by [16] for their case of interest: ∆ = ∆′, ℓ = ∆/log ∆. Theorem 4.5 shows the existence of an integral solution of value y*(1 + O(H(y*, 1/∆))) + O(1), i.e., removes the dependence on ∆′. This is an improvement on all the three results above. As a specific interesting case, suppose ℓ grows at most as fast as ∆′/log ∆. Then we see that good integral solutions, those that grow at the rate of O(y*) or better, exist, and this was not known before. (The approach of [16] shows such a result for ℓ = O(∆′/log(max{∆, ∆′})). Our bound of O(∆′/log ∆) is always better than this, and especially so if ∆′ ≫ ∆.)
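A quick experiment for this partitioning problem is easy to set up (our sketch; the set system is hypothetical): draw a uniformly random ℓ-coloring f and report max_{j,k} |{i ∈ D_j : f(i) = k}|. The results above guarantee that some f of value y*(1 + O(H(y*, 1/∆))) + O(1) exists; a random f is exactly the object the probabilistic argument analyzes.

```python
import random

def random_partition_value(n, sets, ell, seed=0):
    """Objective value of a uniformly random ell-coloring f : [n] -> [ell]."""
    rng = random.Random(seed)
    f = [rng.randrange(ell) for _ in range(n)]
    return max(sum(1 for i in D if f[i] == k) for D in sets for k in range(ell))

sets = [{0, 1, 2, 3}, {2, 3, 4, 5}, {0, 4, 5, 6}]   # hypothetical (X, F)
print(random_partition_value(n=7, sets=sets, ell=2))
```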
Approximating Covering Integer Programs
One of the main ideas behind Theorem 3.1 was to extend the basic inductive proof behind the LLL by decomposing the "bad" events E i appropriately into the r.v.s C i,j . We now use this general idea in a different context, that of (multi-criteria) covering integer programs, with an additional crucial ingredient being a useful correlation inequality, the FKG inequality [15]. The reader is asked to recall the discussion of (multi-criteria) CIPs from § 2. We start with a discussion of randomized rounding for CIPs, the Chernoff lower-tail bound, and the FKG inequality in § 5.1. These lead to our improved, but nonconstructive, approximation bound for column-sparse (multi-criteria) CIPs, in § 5.2. This is then made constructive in § 5.3; we also discuss there what we view as novel about this constructive approach.
Preliminaries
Let us start with a simple and well-known approach to tail bounds. Suppose Y is a random variable and y is some value. Then, for any 0 ≤ δ < 1, we have

Pr(Y ≤ y) ≤ Pr((1 − δ)^Y ≥ (1 − δ)^y) ≤ E[(1 − δ)^Y] / (1 − δ)^y,   (15)

where the second inequality is a consequence of Markov's inequality.
We next set up some basic notions related to approximation algorithms for (multi-criteria) CIPs. Recall that in such problems, we have ℓ given non-negative vectors c_1, c_2, ..., c_ℓ such that for all i, c_i ∈ [0, 1]^n with max_j c_{i,j} = 1; ℓ = 1 in the case of CIPs. Let x* = (x*_1, x*_2, ..., x*_n) denote a given fractional solution that satisfies the system of constraints Ax ≥ b. We are not concerned here with how x* was found: typically, x* would be an optimal solution to the LP relaxation of the problem. (The LP relaxation is obvious if, e.g., ℓ = 1, or, say, if the given multi-criteria problem aims to minimize max_i c_i^T · x*, or to keep each c_i^T · x* bounded by some target value v_i.) We now consider how to round x* to some integral z so that:
(P1) the constraints Az ≥ b hold, and (P2) for all i, c T i · z is "not much bigger" than c T i · x * : our approximation bound will be a measure of how small a "not much bigger value" we can achieve in this sense.
Let us now discuss the "standard" randomized rounding scheme for (multi-criteria) CIPs. We assume a fixed instance as well as x * , from now on. For an α > 1 to be chosen suitably, set x ′ j = αx * j , for each j ∈ [n]. We then construct a random integral solution z by setting, independently for each j ∈ [n],
z j = ⌊x ′ j ⌋ + 1 with probability x ′ j − ⌊x ′ j ⌋, and z j = ⌊x ′ j ⌋ with probability 1 − (x ′ j − ⌊x ′ j ⌋).
The aim then is to show that with positive (hopefully high) probability, (P1) and (P2) happen simultaneously. We now introduce some useful notation. For every j ∈ [n], let s j = ⌊x ′ j ⌋. Let A i denote the ith row of A, and let X 1 , X 2 , . . . , X n ∈ {0, 1} be independent r.v.s with Pr(X j = 1) = x ′ j − s j for all j. The bad event E i that the ith constraint is violated by our randomized rounding is given by
E i ≡ "A i · X < µ i (1 − δ i )", where µ i = E[A i · X] and δ i = 1 − (b i − A i · s)/µ i .
We now bound Pr(E_i) for all i, when the standard randomized rounding is used. Define g(B, α) ≐ (α · e^{−(α−1)})^B.

Lemma 5.1. Under standard randomized rounding, for all i,

Pr(E_i) ≤ E[(1 − δ_i)^{A_i·X}] / (1 − δ_i)^{(1−δ_i)µ_i} ≤ g(B, α) ≤ e^{−B(α−1)²/(2α)}.
Proof. The first inequality follows from (15). Next, the Chernoff-Hoeffding lower-tail approach [4, 27] shows that

E[(1 − δ_i)^{A_i·X}] / (1 − δ_i)^{(1−δ_i)µ_i} ≤ (e^{−δ_i} / (1 − δ_i)^{1−δ_i})^{µ_i}.

It is observed in [36] (and is not hard to see) that this latter quantity is maximized when s_j = 0 for all j, and when each b_i equals its minimum value of B. Thus we see that Pr(E_i) ≤ g(B, α). The inequality g(B, α) ≤ e^{−B(α−1)²/(2α)} for α ≥ 1 is well-known and easy to verify via elementary calculus.
Next, the FKG inequality is a useful correlation inequality, a special case of which is as follows [15]. Given binary vectors a = (a_1, a_2, ..., a_ℓ) ∈ {0, 1}^ℓ and b = (b_1, b_2, ..., b_ℓ) ∈ {0, 1}^ℓ, let us partially order them by coordinate-wise domination: a ⪯ b iff a_i ≤ b_i for all i. Now suppose Y_1, Y_2, ..., Y_ℓ are independent r.v.s, each taking values in {0, 1}. Let Y denote the vector (Y_1, Y_2, ..., Y_ℓ). Suppose an event A is completely defined by the value of Y. Define A to be increasing iff: for all a ∈ {0, 1}^ℓ such that A holds when Y = a, A also holds when Y = b, for any b such that a ⪯ b. Analogously, event A is decreasing iff: for all a ∈ {0, 1}^ℓ such that A holds when Y = a, A also holds when Y = b, for any b ⪯ a. The FKG inequality proves certain intuitively appealing bounds:

Lemma 5.2. ([15]) Let I_1, I_2, ... be increasing events and D_1, D_2, ... be decreasing events, each completely determined by Y. Then, for any i and any set S of indices not containing i:
(i) Pr(I_i | ⋀_{j∈S} I_j) ≥ Pr(I_i) and Pr(D_i | ⋀_{j∈S} D_j) ≥ Pr(D_i);
(ii) Pr(I_i | ⋀_{j∈S} D_j) ≤ Pr(I_i) and Pr(D_i | ⋀_{j∈S} I_j) ≤ Pr(D_i).
Returning to our random variables X_j and events E_i, we get the following lemma as an easy consequence of the FKG inequality, since each event of the form "Ē_i" or "X_j = 1" is an increasing event as a function of the vector (X_1, X_2, ..., X_n):

Lemma 5.3. For all B_1, B_2 ⊆ [m] such that B_1 ∩ B_2 = ∅, and for any B_3 ⊆ [n],

Pr(⋀_{i∈B_1} Ē_i | (⋀_{j∈B_2} Ē_j) ∧ (⋀_{k∈B_3} (X_k = 1))) ≥ ∏_{i∈B_1} Pr(Ē_i).
Nonconstructive approximation bounds for (multi-criteria) CIPs
Definition 5.4. (The function R) For any s and any j 1 < j 2 < · · · < j s , let R(j 1 , j 2 , . . . , j s ) be the set of indices i such that row i of the constraint system "Ax ≥ b" has at least one of the variables j k , 1 ≤ k ≤ s, appearing with a nonzero coefficient. (Note from the definition of a in Defn. 2.2, that |R(j 1 , j 2 , . . . , j s )| ≤ a · s.)
Let the vector x * = (x * 1 , x * 2 , . . . , x * n ), the parameter α > 1, and the "standard" randomized rounding scheme, be as defined in § 5.1. The standard rounding scheme is sufficient for our (nonconstructive) purposes now; we generalize this scheme as follows, for later use in § 5.3.
Definition 5.5. (General randomized rounding) Given a vector p = (p 1 , p 2 , . . . , p n ) ∈ [0, 1] n , the general randomized rounding with parameter p generates independent random variables X 1 , X 2 , . . . , X n ∈ {0, 1} with Pr(X j = 1) = p j ; the rounded vector z is defined by z j = ⌊αx * j ⌋ + X j for all j. (As in the standard rounding, we set each z j to be either ⌊αx * j ⌋ or ⌈αx * j ⌉; the standard rounding is the special case in which E[z j ] = αx * j for all j.)
We now present an important lemma, Lemma 5.6, to get correlation inequalities which "point" in the "direction" opposite to FKG. Some ideas from the proof of Lemma 1.1 will play a crucial role in our proof of this lemma.
Lemma 5.6. Suppose we employ general randomized rounding with some parameter p, and that Pr(⋀_{i=1}^m Ē_i) is nonzero under this rounding. The following hold for any q and any 1 ≤ j_1 < j_2 < ··· < j_q ≤ n.

(i) We have

Pr(X_{j_1} = X_{j_2} = ··· = X_{j_q} = 1 | ⋀_{i=1}^m Ē_i) ≤ (∏_{t=1}^q p_{j_t}) / (∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i)));   (16)

the events E_i ≡ ((Az)_i < b_i) are defined here w.r.t. the general randomized rounding.

(ii) In the special case of standard randomized rounding,

∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i)) ≥ (1 − g(B, α))^{aq};   (17)

the function g is as defined in Lemma 5.1.
Proof. (i) Note first that if we wanted a lower bound on the l.h.s., the FKG inequality would immediately imply that the l.h.s. is at least p_{j_1} p_{j_2} ··· p_{j_q}. We get around this "correlation problem" as follows. Let Q = R(j_1, j_2, ..., j_q); let Q′ = [m] − Q. Let Z_1 ≡ (⋀_{i∈Q} Ē_i), and Z_2 ≡ (⋀_{i∈Q′} Ē_i). Letting Y = ∏_{t=1}^q X_{j_t}, note that

|Q| ≤ aq, and   (18)
Y is independent of Z_2.   (19)

Now,

Pr(Y = 1 | (Z_1 ∧ Z_2)) = Pr(((Y = 1) ∧ Z_1) | Z_2) / Pr(Z_1 | Z_2)
 ≤ Pr((Y = 1) | Z_2) / Pr(Z_1 | Z_2)
 = Pr(Y = 1) / Pr(Z_1 | Z_2)   (by (19))
 ≤ (∏_{t=1}^q Pr(X_{j_t} = 1)) / (∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i)))   (by Lemma 5.3).
(ii) We get (17) from Lemma 5.1 and (18).
We will use Lemmas 5.3 and 5.6 to prove Theorem 5.9. As a warmup, let us start with a result for the special case of CIPs; recall that y* denotes c^T · x*.

Theorem 5.7. For any given CIP, suppose we choose α, β > 1 such that β(1 − g(B, α))^a > 1. Then, there exists a feasible solution of value at most y*αβ. In particular, there is an absolute constant K > 0 such that if α, β > 1 are chosen as:

α = K · ln(a + 1)/B and β = 2, if ln(a + 1) ≥ B; and   (20)
α = β = 1 + K · √(ln(a + 1)/B), if ln(a + 1) < B,   (21)

then there exists a feasible solution of value at most y*αβ. Thus, the integrality gap is at most 1 + O(max{ln(a + 1)/B, √(ln(a + 1)/B)}).
Proof. Conduct standard randomized rounding, and let E be the event that c^T · z > y*αβ. Setting Z ≡ ⋀_{i∈[m]} Ē_i and µ ≐ E[c^T · z] = y*α, we see by Markov's inequality that Pr(E | Z) is at most R = (Σ_{j=1}^n c_j Pr(X_j = 1 | Z))/(µβ). Note that Pr(Z) > 0 since α > 1; so, we now seek to make R < 1, which will complete the proof. Lemma 5.6 shows that

R ≤ (Σ_j c_j p_j) / (µβ · (1 − g(B, α))^a) ≤ 1 / (β(1 − g(B, α))^a);

thus, the condition β(1 − g(B, α))^a > 1 suffices.
Simple algebra shows that choosing α, β > 1 as in (20) and (21) ensures that β(1 − g(B, α))^a > 1.
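The sufficient condition is easy to check numerically. The sketch below (ours) instantiates (20) and (21) with the concrete constant K = 6, a hypothetical choice since the paper only asserts that some absolute constant works, and asserts β(1 − g(B, α))^a > 1:

```python
import math

def g(B, alpha):
    return (alpha * math.exp(-(alpha - 1))) ** B

def choose_alpha_beta(a, B, K=6.0):
    """Parameters per (20)/(21); K = 6 is a hypothetical concrete constant."""
    if math.log(a + 1) >= B:
        alpha, beta = K * math.log(a + 1) / B, 2.0              # case (20)
    else:
        alpha = beta = 1 + K * math.sqrt(math.log(a + 1) / B)   # case (21)
    assert beta * (1 - g(B, alpha)) ** a > 1, "Theorem 5.7's condition"
    return alpha, beta

for a, B in [(10, 1), (10, 100), (1000, 5)]:
    alpha, beta = choose_alpha_beta(a, B)
    print(a, B, round(alpha, 3), round(beta, 3))
```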
The basic approach of our proof of Theorem 5.7 is to follow the main idea of Theorem 3.1, and to decompose the event "E | Z" into a non-negative linear combination of events of the form "X_j = 1 | Z"; we then exploited the fact that each X_j depends on at most a of the events comprising Z. We now extend Theorem 5.7 and also generalize to multi-criteria CIPs. Instead of employing just a "first moment method" (Markov's inequality) as in the proof of Theorem 5.7, we will work with higher moments: the functions S_k defined in (1) and used in Theorem 3.4. Suppose some parameters λ_i > 0 are given, and that our goal is to round x* to z so that the event

A ≡ "(Az ≥ b) ∧ (∀i, c_i^T · z ≤ λ_i)"   (22)
holds. We first give a sufficient condition for this to hold, in Theorem 5.9; we then derive some concrete consequences in Corollary 5.10. We need one further definition before presenting Theorem 5.9. Recall that A i and b i respectively denote the ith row of A and the ith component of b. Also, the vector s and values δ i will throughout be as in the definition of standard randomized rounding.
Definition 5.8. (The functions ch and ch′) Suppose we conduct general randomized rounding with some parameter p; i.e., let X_1, X_2, ..., X_n be independent binary random variables such that Pr(X_j = 1) = p_j. For each i ∈ [m], define

ch_i(p) ≐ E[(1 − δ_i)^{A_i·X}] / (1 − δ_i)^{b_i − A_i·s} = (∏_{j∈[n]} E[(1 − δ_i)^{A_{i,j} X_j}]) / (1 − δ_i)^{b_i − A_i·s}, and ch′_i(p) ≐ min{ch_i(p), 1}.

(Note from (15) that if we conduct general randomized rounding with parameter p, then Pr((Az)_i < b_i) ≤ min{ch_i(p), 1} = ch′_i(p); also, "ch" stands for "Chernoff-Hoeffding".)

Theorem 5.9. Suppose we are given a multi-criteria CIP, as well as some parameters λ_1, λ_2, ..., λ_ℓ > 0. Let A be as in (22). Then, for any sequence of positive integers (k_1, k_2, ..., k_ℓ) such that k_i ≤ λ_i, the following hold.
(i) Suppose we employ general randomized rounding with parameter p = (p_1, p_2, ..., p_n). Then, Pr(A) is at least

Φ(p) ≐ ∏_{r∈[m]} (1 − ch′_r(p)) − Σ_{i=1}^ℓ (1/C(λ_i, k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · ∏_{r∉R(j_1,...,j_{k_i})} (1 − ch′_r(p)).   (23)

(ii) Suppose we employ the standard randomized rounding to get a rounded vector z. Let λ_i = ν_i(1 + γ_i) for each i ∈ [ℓ], where ν_i = E[c_i^T · z] = α · (c_i^T · x*) and γ_i > 0 is some parameter. Then,

Φ(p) ≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (C(n, k_i) · (ν_i/n)^{k_i} / C(ν_i(1+γ_i), k_i)) · (1 − g(B, α))^{−a·k_i}).   (24)

In particular, if the r.h.s. of (24) is positive, then Pr(A) > 0 for standard randomized rounding.
The proof is a simple generalization of that of Theorem 5.7, and is deferred to Section 5.4. Theorem 5.7 is the special case of Theorem 5.9 corresponding to ℓ = k 1 = 1. To make the general result of Theorem 5.9 more concrete, we now study an additional special case. We present this special case as one possible "proof of concept", rather than as an optimized one; e.g., the constant "3" in the bound "c T i · z ≤ 3ν i " can be improved.
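For concreteness, (23) can be evaluated directly when n^{max_i k_i} is manageable. The sketch below (our code; all names are ours) takes the values ch′_r(p), a callback computing R(j_1, ..., j_k), the ℓ cost vectors, and the parameters λ_i and k_i, and uses the generalized binomial coefficient of Remark 3.3. The constructive version in § 5.3 only ever needs the single-criterion case ℓ = k_1 = 1.

```python
from itertools import combinations
from math import prod

def gen_binom(x, r):
    """Generalized C(x, r) = x(x-1)...(x-r+1)/r!, per Remark 3.3 (x may be real)."""
    out = 1.0
    for i in range(r):
        out *= (x - i) / (i + 1)
    return out

def Phi(chp, R_of, costs, lams, ks, p):
    """Evaluate Phi(p) from (23). chp[r] = ch'_r(p); R_of(J) = R(j_1,...,j_k)."""
    m, n = len(chp), len(p)
    val = prod(1 - q for q in chp)
    for ci, li, ki in zip(costs, lams, ks):
        for J in combinations(range(n), ki):
            rows = R_of(J)
            term = prod(ci[j] * p[j] for j in J) / gen_binom(li, ki)
            term *= prod(1 - chp[r] for r in range(m) if r not in rows)
            val -= term
    return val
```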
Corollary 5.10. There is an absolute constant K′ > 0 such that the following holds. Suppose we are given a multi-criteria CIP with notation as in part (ii) of Theorem 5.9. Define α = K′ · max{(ln(a) + ln ln(2ℓ))/B, 1}. Now if ν_i ≥ log²(2ℓ) for all i ∈ [ℓ], then standard randomized rounding produces a feasible solution z such that c_i^T · z ≤ 3ν_i for all i, with positive probability. In particular, this can be shown by setting k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i, in part (ii) of Theorem 5.9.
Proof. Let us employ Theorem 5.9(ii) with k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i. We just need to establish that the r.h.s. of (24) is positive. We need to show that

Σ_{i=1}^ℓ (C(n, k_i) · (ν_i/n)^{k_i} / C(3ν_i, k_i)) · (1 − g(B, α))^{−a·k_i} < 1;

it is sufficient to prove that for all i,

((ν_i^{k_i}/k_i!) / C(3ν_i, k_i)) · (1 − g(B, α))^{−a·k_i} < 1/ℓ.   (25)
We make two observations now.
• Since k_i ∼ ln ℓ and ν_i ≥ log²(2ℓ),

C(3ν_i, k_i) = (1/k_i!) · ∏_{j=0}^{k_i−1} (3ν_i − j) = (1/k_i!) · (3ν_i)^{k_i} · e^{−Θ(Σ_{j=0}^{k_i−1} j/ν_i)} = Θ((1/k_i!) · (3ν_i)^{k_i}).

• (1 − g(B, α))^{−a·k_i} can be made arbitrarily close to 1 by choosing the constant K′ large enough.
These two observations establish (25).
Constructive version
It can be shown that for many problems, randomized rounding produces the solutions shown to exist by Theorem 5.7 and Theorem 5.9, with very low probability: e.g., probability almost exponentially small in the input size. Thus we need to obtain constructive versions of these theorems. Our method will be a deterministic procedure that makes O(n) calls to the function Φ(·), in addition to poly(n, m) work. Now, if k′ denotes the maximum of all the k_i, we see that Φ can be evaluated in poly(n^{k′}, m) time. Thus, our overall procedure runs in poly(n^{k′}, m) time. In particular, we get constructive versions of Theorem 5.7 and Corollary 5.10 that run in time poly(n, m) and poly(n^{log ℓ}, m), respectively.
Our approach is as follows. We start with a vector p that corresponds to standard randomized rounding, for which we know (say, as argued in Corollary 5.10) that Φ(p) > 0. In general, we have a vector of probabilities p = (p 1 , p 2 , . . . , p n ) such that Φ(p) > 0. If p ∈ {0, 1} n , we are done. Otherwise suppose some p j lies in (0, 1); by renaming the variables, we will assume without loss of generality that j = n. Define p ′ = (p 1 , p 2 , . . . , p n−1 , 0) and p ′′ = (p 1 , p 2 , . . . , p n−1 , 1). The main fact we wish to show is that Φ(p ′ ) > 0 or Φ(p ′′ ) > 0: we can then set p n to 0 or 1 appropriately, and continue. (As mentioned in the previous paragraph, we thus have O(n) calls to the function Φ(·) in total.) Note that although some of the p j will lie in {0, 1}, we can crucially continue to view the X j as independent random variables with Pr(X j = 1) = p j .
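Putting the pieces together, here is a compact sketch of the whole procedure (ours, specialized to the single-criterion case ℓ = k_1 = 1 of Theorem 5.7; the instance at the end is hypothetical, and the careful treatment of the pre-rounded part s in the cost bound is elided): compute ch′_i(p) from Definition 5.8, evaluate Φ(p), and fix each fractional p_j to whichever of 0, 1 keeps Φ positive. The analysis below justifies the key step, namely that one of the two settings always works.

```python
import math

def ch_prime(A, b, s, delta, p):
    """ch'_i(p) per Definition 5.8; rows already covered by the floors get 0."""
    out = []
    for i, row in enumerate(A):
        need = b[i] - sum(r * sj for r, sj in zip(row, s))
        if need <= 0:
            out.append(0.0)
            continue
        d = delta[i]
        num = math.prod((1 - pj) + pj * (1 - d) ** aij for aij, pj in zip(row, p))
        out.append(min(1.0, num / (1 - d) ** need))
    return out

def phi(A, b, c, s, delta, p, lam):
    """Phi(p) of (23), specialized to l = k_1 = 1 (the Theorem 5.7 setting)."""
    q = ch_prime(A, b, s, delta, p)
    val = math.prod(1 - qi for qi in q)
    for j in range(len(p)):
        Rj = {i for i, row in enumerate(A) if row[j] > 0}
        val -= (c[j] * p[j] / lam) * math.prod(
            1 - q[i] for i in range(len(A)) if i not in Rj)
    return val

def derandomize(A, b, c, xstar, alpha, beta):
    n = len(xstar)
    s = [math.floor(alpha * v) for v in xstar]
    p = [alpha * v - sj for v, sj in zip(xstar, s)]
    mu = [sum(r * pj for r, pj in zip(row, p)) for row in A]
    delta = [1 - (b[i] - sum(r * sj for r, sj in zip(A[i], s))) / mu[i]
             if mu[i] > 0 else 0.0 for i in range(len(A))]
    lam = alpha * beta * sum(cj * v for cj, v in zip(c, xstar))
    assert phi(A, b, c, s, delta, p, lam) > 0   # needed to start (Theorem 5.7)
    for j in range(n):
        if 0 < p[j] < 1:
            for val in (0.0, 1.0):              # one choice must keep Phi > 0
                trial = p[:j] + [val] + p[j + 1:]
                if phi(A, b, c, s, delta, trial, lam) > 0:
                    p = trial
                    break
    return [sj + round(pj) for sj, pj in zip(s, p)]

A = [[1] * 6 + [0] * 6, [0] * 6 + [1] * 6]      # two disjoint constraints, B = 3
b = [3, 3]
c = [1.0] * 12
z = derandomize(A, b, c, xstar=[0.5] * 12, alpha=1.6, beta=2.0)
print(z, all(sum(r * zj for r, zj in zip(row, z)) >= bi
             for row, bi in zip(A, b)))
```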
So, our main goal is: assuming that

p_n ∈ (0, 1) and Φ(p) > 0,   (26)

to show that Φ(p′) > 0 or Φ(p″) > 0. In order to do so, we make some observations and introduce some simplifying notation. Define, for each i ∈ [m]: q_i = ch′_i(p), q′_i = ch′_i(p′), and q″_i = ch′_i(p″). Also define the vectors q ≐ (q_1, q_2, ..., q_m), q′ ≐ (q′_1, q′_2, ..., q′_m), and q″ ≐ (q″_1, q″_2, ..., q″_m). We now present a useful lemma about these vectors:
Lemma 5.11. For all i ∈ [m], we have:

0 ≤ q″_i ≤ q′_i ≤ 1;   (27)
q_i ≥ p_n q″_i + (1 − p_n) q′_i; and   (28)
q′_i = q″_i = q_i if i ∉ R(n).   (29)
Proof. The proofs of (27) and (29) are straightforward. As for (28), we proceed as in [36]. First of all, if q_i = 1, then we are done, since q″_i, q′_i ≤ 1. So suppose q_i < 1; in this case, q_i = ch_i(p). Now, Definition 5.8 shows that ch_i(p) = p_n · ch_i(p″) + (1 − p_n) · ch_i(p′). Therefore,

q_i = ch_i(p) = p_n · ch_i(p″) + (1 − p_n) · ch_i(p′) ≥ p_n · ch′_i(p″) + (1 − p_n) · ch′_i(p′).
Since we are mainly concerned with the vectors p, p′ and p″ now, we will view the values p_1, p_2, ..., p_{n−1} as arbitrary but fixed, subject to (26). The function Φ(·) now has a simple form; to see this, we first define, for a vector r = (r_1, r_2, ..., r_m) and a set U ⊆ [m],

f(U, r) = ∏_{i∈U} (1 − r_i).
Recall that p_1, p_2, ..., p_{n−1} are considered as constants now. Then, it is evident from (23) that there exist constants u_1, u_2, ..., u_t and v_1, v_2, ..., v_{t′}, as well as subsets U_1, U_2, ..., U_t and V_1, V_2, ..., V_{t′} of [m], such that:

Φ(p) = f([m], q) − (Σ_i u_i · f(U_i, q)) − (p_n · Σ_j v_j · f(V_j, q));   (30)
Φ(p′) = f([m], q′) − (Σ_i u_i · f(U_i, q′)) − (0 · Σ_j v_j · f(V_j, q′)) = f([m], q′) − Σ_i u_i · f(U_i, q′);   (31)
Φ(p″) = f([m], q″) − (Σ_i u_i · f(U_i, q″)) − (1 · Σ_j v_j · f(V_j, q″)) = f([m], q″) − (Σ_i u_i · f(U_i, q″)) − (Σ_j v_j · f(V_j, q″)).   (32)
Importantly, we also have the following:

the constants u_i, v_j are non-negative; and ∀j, V_j ∩ R(n) = ∅.   (33)
Recall that our goal is to show that Φ(p′) > 0 or Φ(p″) > 0. We will do so by proving that

Φ(p) ≤ p_n Φ(p″) + (1 − p_n) Φ(p′).   (34)

Let us use the equalities (30), (31), and (32). In view of (29) and (33), the term "−p_n · Σ_j v_j · f(V_j, q)" on both sides of the inequality (34) cancels; defining

∆(U) ≐ (1 − p_n) · f(U, q′) + p_n · f(U, q″) − f(U, q),

inequality (34) reduces to

∆([m]) − Σ_i u_i · ∆(U_i) ≥ 0.   (35)
Before proving this, we pause to note a challenge we face. Suppose we only had to show that, say, ∆([m]) is non-negative; this is exactly the issue faced in [36]. Then, we will immediately be done by part (i) of Lemma 5.12, which states that ∆(U) ≥ 0 for any set U. However, (35) also has terms such as "u_i · ∆(U_i)" with a negative sign in front. To deal with this, we need something more than just that ∆(U) ≥ 0 for all U; we handle this by part (ii) of Lemma 5.12. We view this as the main novelty in our constructive version here.

Lemma 5.12. (i) For all U ⊆ [m], ∆(U) ≥ 0. (ii) For all U ⊆ U′ ⊆ [m], ∆(U)/f(U, q) ≤ ∆(U′)/f(U′, q).

Given Lemma 5.12, we obtain (35) as follows:

∆([m]) − Σ_i u_i · ∆(U_i) ≥ ∆([m]) − (∆([m])/f([m], q)) · Σ_i u_i · f(U_i, q)   (by Lemma 5.12(ii))
 = (∆([m])/f([m], q)) · (f([m], q) − Σ_i u_i · f(U_i, q))
 ≥ (∆([m])/f([m], q)) · Φ(p)   (by (30) and (33))
 ≥ 0   (by (26) and (30)).
Thus we have (35).
Proof of Lemma 5.12. It suffices to show the following. Assume U ≠ [m]; suppose u ∈ ([m] − U) and that U′ = U ∪ {u}. Assuming by induction on |U| that ∆(U) ≥ 0, we show that ∆(U′) ≥ 0, and that ∆(U)/f(U, q) ≤ ∆(U′)/f(U′, q). It is easy to check that this way, we will prove both claims of the lemma.
The base case of the induction is that |U | ∈ {0, 1}, where ∆(U ) ≥ 0 is directly seen by using (28). Suppose inductively that ∆(U ) ≥ 0. Using the definition of ∆(U ) and the fact that f (U ′ , q) = (1 − q u )f (U, q), we have
f(U′, q) = (1 − q_u) · [(1 − p_n) f(U, q′) + p_n f(U, q″) − ∆(U)]
 ≤ (1 − (1 − p_n) q′_u − p_n q″_u) · [(1 − p_n) f(U, q′) + p_n f(U, q″)] − (1 − q_u) · ∆(U),

where this last inequality is a consequence of (28). Therefore, using the definition of ∆(U′) and the facts f(U′, q′) = (1 − q′_u) f(U, q′) and f(U′, q″) = (1 − q″_u) f(U, q″),

∆(U′) = (1 − p_n)(1 − q′_u) f(U, q′) + p_n (1 − q″_u) f(U, q″) − f(U′, q)
 ≥ (1 − p_n)(1 − q′_u) f(U, q′) + p_n (1 − q″_u) f(U, q″) + (1 − q_u) · ∆(U) − (1 − (1 − p_n) q′_u − p_n q″_u) · [(1 − p_n) f(U, q′) + p_n f(U, q″)]
 = (1 − q_u) · ∆(U) + p_n (1 − p_n) · (f(U, q″) − f(U, q′)) · (q′_u − q″_u)
 ≥ (1 − q_u) · ∆(U)   (by (27)).
So, since we assumed that ∆(U) ≥ 0, we get ∆(U′) ≥ 0; furthermore, we get that ∆(U′) ≥ (1 − q_u) · ∆(U), which implies that ∆(U′)/f(U′, q) ≥ ∆(U)/f(U, q).

Proof of Theorem 5.9

(i) Let E_r ≡ ((Az)_r < b_r) be defined w.r.t. general randomized rounding with parameter p; as observed in Definition 5.8, Pr(E_r) ≤ ch′_r(p). Now if ch′_r(p) = 1 for some r, then part (i) is trivially true; so we assume that Pr(E_r) ≤ ch′_r(p) < 1 for all r. Let Z ≡ ⋀_{r∈[m]} Ē_r; by Lemma 5.3, Pr(Z) ≥ ∏_{r∈[m]} (1 − Pr(E_r)) > 0.
Define, for i = 1, 2, ..., ℓ, the "bad" event E_i ≡ (c_i^T · z > λ_i). Fix any i. Our plan is to show that

Pr(E_i | Z) ≤ (1/C(λ_i, k_i)) · Σ_{j_1<j_2<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · (∏_{r∈R(j_1,j_2,...,j_{k_i})} (1 − Pr(E_r)))^{−1}.   (36)
If we prove (36), then we will be done as follows. We have

Pr(A) ≥ Pr(Z) · (1 − Σ_i Pr(E_i | Z)) ≥ (∏_{r∈[m]} (1 − Pr(E_r))) · (1 − Σ_i Pr(E_i | Z)).   (37)
Now, the term ∏_{r∈[m]} (1 − Pr(E_r)) is a decreasing function of each of the values Pr(E_r); so is the lower bound on −Pr(E_i | Z) obtained from (36). Hence, bounds (36) and (37), along with the bound Pr(E_r) ≤ ch′_r(p), will complete the proof of part (i). We now prove (36) using Theorem 3.4(a) and Lemma 5.6. Recall the symmetric polynomials S_k from (1). Define Y = S_{k_i}(c_{i,1} X_1, c_{i,2} X_2, ..., c_{i,n} X_n)/C(λ_i, k_i). By Theorem 3.4(a), Pr(E_i | Z) ≤ E[Y | Z]. Next, the typical term in E[Y | Z] can be upper-bounded using Lemma 5.6:
E[(∏_{t=1}^{k_i} c_{i,j_t} · X_{j_t}) | ⋀_{r=1}^m Ē_r] ≤ (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) / (∏_{r∈R(j_1,j_2,...,j_{k_i})} (1 − Pr(E_r))).
Thus we have (36), and the proof of part (i) is complete.

(ii) By part (i) and the choice λ_i = ν_i(1 + γ_i), Pr(A) is at least Φ(p), which by (23) can be rewritten as

κ ≐ ∏_{r∈[m]} (1 − ch′_r(p)) · (1 − Σ_{i=1}^ℓ (1/C(ν_i(1+γ_i), k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · [∏_{r∈R(j_1,...,j_{k_i})} (1 − ch′_r(p))]^{−1}).   (38)

Lemma 5.1 shows that under standard randomized rounding, ch′_r(p) ≤ g(B, α) < 1 for all r. So, the r.h.s. κ of (38) gets lower-bounded as follows:

κ ≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (1/C(ν_i(1+γ_i), k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · [∏_{r∈R(j_1,...,j_{k_i})} (1 − g(B, α))]^{−1})
 ≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (1/C(ν_i(1+γ_i), k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · (1 − g(B, α))^{−a·k_i})
 ≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (C(n, k_i) · (ν_i/n)^{k_i} / C(ν_i(1+γ_i), k_i)) · (1 − g(B, α))^{−a·k_i}),

where the last line follows from Theorem 3.4(c).
Conclusion
We have presented an extension of the LLL that basically helps reduce the "dependency" much in some settings; we have seen applications to two families of integer programming problems. It would be interesting to see how far these ideas can be pushed further. Two other open problems suggested by this work are: (i) developing a constructive version of our result for MIPs, and (ii) developing a poly(n, m)-time constructive version of Theorem 5.9, as opposed to the poly(n^{k′}, m)-time constructive version that we present in § 5.3. Finally, a very interesting question is to develop a theory of applications of the LLL that can be made constructive with (essentially) no loss. | 11,762 |
cs0307043 | 2953390777 | The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension/degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs. | For CIPs, the idea is to solve the LP relaxation, scale the components of @math suitably, and then perform randomized rounding; see for the details. Starting with this idea, the work of @cite_22 leads to certain approximation bounds; similar bounds are achieved through different means by Plotkin, Shmoys & Tardos @cite_14. Work of this author @cite_10 improved upon these results by observing a "correlation" property of CIPs, getting an approximation ratio of @math. Thus, while the work of @cite_22 gives a general approximation bound for MIPs, the result of @cite_5 gives good results for sparse MIPs. For CIPs, the current-best results are those of @cite_10; however, no better results were known for sparse CIPs. | {
"abstract": [
"",
"This paper presents fast algorithms that find approximate solutions for a general class of problems, which we call fractional packing and covering problems. The only previously known algorithms for solving these problems are based on general linear programming techniques. The techniques developed in this paper greatly outperform the general methods in many applications, and are extensions of a method previously applied to find approximate solutions to multicommodity flow problems. Our algorithm is a Lagrangian relaxation technique; an important aspect of our results is that we obtain a theoretical analysis of the running time of a Lagrangian relaxation-based algorithm.We give several applications of our algorithms. The new approach yields several orders of magnitude of improvement over the best previously known running times for algorithms for the scheduling of unrelated parallel machines in both the preemptive and the nonpreemptive models, for the job shop problem, for the Held and Karp bound for the traveling salesman problem, for the cutting-stock problem, for the network embedding problem, and for the minimum-cost multicommodity flow problem.",
"Several important NP-hard combinatorial optimization problems can be posed as packing covering integer programs; the randomized rounding technique of Raghavan and Thompson is a powerful tool with which to approximate them well. We present one elementary unifying property of all these integer linear programs and use the FKG correlation inequality to derive an improved analysis of randomized rounding on them. This yields a pessimistic estimator, thus presenting deterministic polynomial-time algorithms for them with approximation guarantees that are significantly better than those known.",
"We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be a of extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance."
],
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_10",
"@cite_22"
],
"mid": [
"2064779029",
"2134422938",
"1993119087",
"2022191808"
]
} | An Extension of the Lovász Local Lemma, and its Applications to Integer Programming * | The powerful Lovász Local Lemma (LLL) is often used to show the existence of rare combinatorial structures by showing that a random sample from a suitable sample space produces them with positive probability [14]; see Alon & Spencer [4] and Motwani & Raghavan [27] for several such applications. We present an extension of this lemma, and demonstrate applications to rounding fractional solutions for certain families of integer programs.
Let e denote the base of natural logarithms as usual. The symmetric case of the LLL shows that all of a set of "bad" events E_i can be avoided under some conditions:

Lemma 1.1. ([14]) Let E_1, E_2, ..., E_m be any events with Pr(E_i) ≤ p ∀i. If each E_i is mutually independent of all but at most d of the other events E_j and if ep(d + 1) ≤ 1, then Pr(⋀_{i=1}^m Ē_i) > 0.
Though the LLL is powerful, one problem is that the "dependency" d is high in some cases, precluding the use of the LLL if p is not small enough. We present a partial solution to this via an extension of the LLL (Theorem 3.1), which shows how to essentially reduce d for a class of events E i ; this works well when each E i denotes some random variable deviating "much" from its mean. In a nutshell, we show that such events E i can often be decomposed suitably into sub-events; although the sub-events may have a large dependency among themselves, we show that it suffices to have a small "bipartite dependency" between the set of events E i and the set of sub-events. This, in combination with some other ideas, leads to the following applications in integer programming.
It is well-known that a large number of NP-hard combinatorial optimization problems can be cast as integer linear programming problems (ILPs). Due to their NP-hardness, good approximation algorithms are of much interest for such problems. Recall that a ρ-approximation algorithm for a minimization problem is a polynomial-time algorithm that delivers a solution whose objective function value is at most ρ times optimal; ρ is usually called the approximation guarantee, approximation ratio, or performance guarantee of the algorithm. Algorithmic work in this area typically focuses on achieving the smallest possible ρ in polynomial time. One powerful paradigm here is to start with the linear programming (LP) relaxation of the given ILP wherein the variables are allowed to be reals within their integer ranges; once an optimal solution is found for the LP, the main issue is how to round it to a good feasible solution for the ILP.
Rounding results in this context often have the following strong property: they present an integral solution of value at most y * · ρ, where y * will throughout denote the optimal solution value of the LP relaxation.
Since the optimal solution value OPT of the ILP is easily seen to be lower-bounded by y*, such rounding algorithms are also ρ-approximation algorithms. Furthermore, they provide an upper bound of ρ on the ratio OPT/y*, which is usually called the integrality gap or integrality ratio of the relaxation; the smaller this value, the better the relaxation.
This work presents improved upper bounds on the integrality gap of the natural LP relaxation for two families of ILPs: minimax integer programs (MIPs) and covering integer programs (CIPs). (The precise definitions and results are presented in § 2.) For the latter, we also provide the corresponding polynomial-time rounding algorithms. Our main improvements are in the case where the coefficient matrix of the given ILP is column-sparse: i.e., the number of nonzero entries in every column is bounded by a given parameter a. There are classical rounding theorems for such column-sparse problems (e.g., Beck & Fiala [6], Karp, Leighton, Rivest, Thompson, Vazirani & Vazirani [18]). Our results complement, and are incomparable with, these results. Furthermore, the notion of column-sparsity, which denotes no variable occurring in "too many" constraints, occurs naturally in combinatorial optimization: e.g., routing using "short" paths, and problems on hypergraphs with "small" degree. These issues are discussed further in § 2.
A key technique, randomized rounding of linear relaxations, was developed by Raghavan & Thompson [32] to get approximation algorithms for such ILPs. We use Theorem 3.1 to prove that this technique produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrix of the given MIP/CIP is column-sparse. (In the case of MIPs, our algorithm iterates randomized rounding several times with different choices of parameters, in order to achieve our result.) Such results cannot be got via Lemma 1.1, as the dependency d, in the sense of Lemma 1.1, can be as high as Θ(m) for these problems. Roughly speaking, Theorem 3.1 helps show that if no column in our given ILP has more than a nonzero entries, then the dependency can essentially be brought down to a polynomial in a; this is the key driver behind our improvements.
Theorem 3.1 works well in combination with an idea that has blossomed in the areas of derandomization and pseudorandomness, in the last two decades: (approximately) decomposing a function of several variables into a sum of terms, each of which depends on only a few of these variables. Concretely, suppose Z is a sum of random variables Z_i. Many tools have been developed to upper-bound Pr(Z − E[Z] ≥ z) and Pr(|Z − E[Z]| ≥ z) even if the Z_i's are only (almost) k-wise independent for some "small" k, rather than completely independent. The idea is to bound the probabilities by considering E[(Z − E[Z])^k] or similar expectations, which look at the Z_i's, k or fewer at a time (via linearity of expectation). The main application of this has been that the Z_i's can then be sampled using "few" random bits, yielding a derandomization/pseudorandomness result (e.g., [3,23,8,26,28,33]). Our results show that such ideas can in fact be used to show that some structures exist! This is one of our main contributions.
What about polynomial-time algorithms for our existential results? Typical applications of Lemma 1.1 are "nonconstructive" [i.e., do not directly imply (randomized) polynomial-time algorithmic versions], since the positive probability guaranteed by Lemma 1.1 can be exponentially small in the size of the input. However, certain algorithmic versions of the LLL have been developed starting with the seminal work of Beck [5]. These ideas do not seem to apply to our extension of the LLL, and hence our MIP result is nonconstructive. Following the preliminary version of this work [35], two main algorithmic versions related to our work have been obtained: (i) for a subclass of the MIPs [20], and (ii) for a somewhat different notion of approximation than the one we study, for certain families of MIPs [11].
Our main algorithmic contribution is for CIPs and multi-criteria versions thereof: we show, by a generalization of the method of pessimistic estimators [31], that we can efficiently construct the same structure as is guaranteed by our nonconstructive argument. We view this as interesting for two reasons. First, the generalized pessimistic estimator argument requires a quite delicate analysis, which we expect to be useful in other applications of developing constructive versions of existential arguments. Second, except for some of the algorithmic versions of the LLL developed in [24,25], most current algorithmic versions minimally require something like "pd^3 = O(1)" (see, e.g., [5,1]); the LLL only needs that pd = O(1). While this issue does not matter much in many applications, it crucially does, in some others. A good example of this is the existentially-optimal integrality gap for the edge-disjoint paths problem with "short" paths, shown using the LLL in [21]. The "pd^3 = O(1)" requirement of currently-known algorithmic approaches to the LLL leads to algorithms that will violate the edge-disjointness condition when applied in this context: specifically, they may route up to three paths on some edges of the graph. See [9] for a different, random-walk-based, approach to low-congestion routing. An algorithmic version of this edge-disjoint paths result of [21] is still lacking. It is a very interesting open question whether there is an algorithmic version of the LLL that can construct the same structures as guaranteed to exist by the LLL. In particular, can one of the most successful derandomization tools (the method of conditional probabilities or its generalization, the pessimistic estimators method) be applied, fixing the underlying random choices of the probabilistic argument one-by-one? This intriguing question is open (and seems difficult) for now. As a step in this direction, we are able to show how such approaches can indeed be developed, in the context of CIPs.
Thus, our main contributions are as follows. (a) The LLL extension is of independent interest: it helps in certain settings where the "dependency" among the "bad" events is too high for the LLL to be directly applicable. We expect to see further applications/extensions of such ideas. (b) This work shows that certain classes of column-sparse ILPs have much better solutions than known before; such problems abound in practice (e.g., short paths are often desired/required in routing). (c) Our generalized method of pessimistic estimators should prove fruitful in other contexts also; it is a step toward complete algorithmic versions of the LLL.
The rest of this paper is organized as follows. Our results are first presented in § 2, along with a discussion of related work. The extended LLL, and some large-deviation methods that will be seen to work well with it, are shown in § 3. Sections 4 and 5 are devoted to our rounding applications. Finally, § 6 concludes.
Improvements achieved
For MIPs, we use the extended LLL and an idea of Éva Tardos that leads to a bootstrapping of the LLL extension, to show the existence of an integral solution of value $y^* + O(\min\{y^*, m\} \cdot H(\min\{y^*, m\}, 1/a)) + O(1)$; see Theorem 4.5. Since $a \leq m$, this is always as good as the $y^* + O(\min\{y^*, m\} \cdot H(\min\{y^*, m\}, 1/m))$ bound of [32], and is a good improvement if $a \ll m$. It is also an improvement over the additive $g$ factor of [18] in cases where $g$ is not small compared to $y^*$.
Consider, e.g., the global routing problem and its MIP formulation, sketched above; $m$ here is the number of edges in $G$, and $g = a$ is the maximum length of any path in $\bigcup_i P_i$. To focus on a specific interesting case, suppose $y^*$, the fractional congestion, is at most one. Then while the previous results ([32] and [18], resp.) give bounds of $O(\log m/\log\log m)$ and $O(a)$ on an integral solution, we get the improved bound of $O(\log a/\log\log a)$. Similar improvements are easily seen for other ranges of $y^*$ also; e.g., if $y^* = O(\log a)$, an integral solution of value $O(\log a)$ exists, improving on the previously known bounds of $O(\log m/\log(2\log m/\log a))$ and $O(a)$. Thus, routing along short paths (this is the notion of sparsity for the global routing problem) is very beneficial in keeping the congestion low. Section 4 presents a scenario where we get such improvements, for discrepancy-type problems [34,4]. In particular, we generalize a hypergraph-partitioning result of Füredi & Kahn [16].
Recall the bounds of [36] for CIPs mentioned in the paragraph preceding this subsection; our bounds for CIPs depend only on the set of constraints $Ax \geq b$, i.e., they hold for any non-negative objective-function vector $c$. Our improvements over [36] get better as $y^*$ decreases. We show an integrality gap of $1 + O(\max\{\ln(a+1)/B, \sqrt{\ln(a+1)/B}\})$, once again improving on [36] for weighted CIPs. This CIP bound is better than that of [36] if $y^* \leq mB/a$: this inequality fails for unweighted CIPs and is generally true for weighted CIPs, since $y^*$ can get arbitrarily small in the latter case. In particular, we generalize the result of Chvátal [10] on weighted set cover. Consider, e.g., a facility location problem on a directed graph $G = (V, A)$: given a cost $c_i \in [0,1]$ for each $i \in V$, we want a min-cost assignment of facilities to the nodes such that each node sees at least $B$ facilities in its out-neighborhood; multiple facilities at a node are allowed. If $\Delta_{\mathrm{in}}$ is the maximum in-degree of $G$, we show an integrality gap of $1 + O(\max\{\ln(\Delta_{\mathrm{in}}+1)/B, \sqrt{\ln(B(\Delta_{\mathrm{in}}+1))/B}\})$. This improves on [36] if $y^* \leq |V|B/\Delta_{\mathrm{in}}$; it shows an $O(1)$ (resp., $1 + o(1)$) integrality gap if $B$ grows as fast as (resp., strictly faster than) $\log \Delta_{\mathrm{in}}$. Theorem 5.7 presents our covering results.
A key corollary of our results is that for families of instances of CIPs, we get a good ($O(1)$ or $1 + o(1)$) integrality gap if $B$ grows at least as fast as $\log a$. Bounds on the result of a greedy algorithm for CIPs relative to the optimal integral solution are known [12,13]. Our bound improves that of [12] and is incomparable with [13]; for any given $A$, $c$, and the unit vector $b/\|b\|_2$, our bound improves on [13] if $B$ is more than a certain threshold. As it stands, randomized rounding produces such improved solutions for several CIPs only with a very low, sometimes exponentially small, probability; thus, it often does not imply a randomized algorithm. To this end, we generalize Raghavan's method of pessimistic estimators to derive an algorithmic (polynomial-time) version of our results for CIPs, in § 5.3.
We also show via Theorem 5.9 and Corollary 5.10 that multi-criteria CIPs can be approximated well. In particular, Corollary 5.10 exhibits some interesting cases where the approximation guarantee for multi-criteria CIPs grows very slowly (sublinearly) with the number $\ell$ of given vectors $c_i$: the approximation ratio is at most $O(\log\log \ell)$ times what we show for CIPs (which correspond to the case $\ell = 1$). We are not aware of any such earlier work on multi-criteria CIPs.
The preliminary version of this work was presented in [35]. As mentioned in § 1, two main algorithmic versions related to our work have been obtained following [35]. First, for a subclass of the MIPs where the nonzero entries of the matrix A are "reasonably large", constructive versions of our results have been obtained in [20]. Second, for a notion of approximation that is different from the one we study, algorithmic results have been developed for certain families of MIPs in [11]. Furthermore, our Theorem 5.7 for CIPs has been used in [19] to develop approximation algorithms for CIPs that have given upper bounds on the variables x j .
The Extended LLL and an Approach to Large Deviations
We now present our LLL extension, Theorem 3.1. For any event $E$, define $\chi(E)$ to be its indicator r.v.: $1$ if $E$ holds and $0$ otherwise. Also, for events $E_1, \ldots, E_m$ and any $I \subseteq [m]$, let $Z(I)$ denote the event $\bigwedge_{k \in I} \overline{E_k}$ (with $Z(\emptyset)$ the sure event). Suppose we have "bad" events $E_1, \ldots, E_m$ with a "dependency" $d'$ (in the sense of Lemma 1.1) that is "large". Theorem 3.1 shows how to essentially replace $d'$ by a possibly much smaller $d$, under some conditions. It generalizes Lemma 1.1 (define one r.v., $C_{i,1} = \chi(E_i)$, for each $i$, to get Lemma 1.1); its proof is very similar to the classical proof of Lemma 1.1, and its motivation will be clarified by the applications.

Theorem 3.1. Let $E_1, E_2, \ldots, E_m$ be any events, and suppose that for each $i \in [m]$ there is a finite set of non-negative r.v.s $C_{i,1}, C_{i,2}, \ldots$ such that:
(i) any $C_{i,j}$ is mutually independent of all but at most $d$ of the events $E_k$, $k \neq i$, and
(ii) $\forall I \subseteq ([m] - \{i\})$, $\Pr(E_i \mid Z(I)) \leq \sum_j E[C_{i,j} \mid Z(I)]$.
Let $p_i$ denote $\sum_j E[C_{i,j}]$; clearly, $\Pr(E_i) \leq p_i$ (set $I = \emptyset$ in (ii)). Suppose that for all $i \in [m]$ we have $e p_i (d+1) \leq 1$. Then $\Pr(\bigwedge_{i \in [m]} \overline{E_i}) \geq (d/(d+1))^m > 0$.
Remark 3.2. $C_{i,j}$ and $C_{i,j'}$ can "depend" on different subsets of $\{E_k : k \neq i\}$; the only restriction is that these subsets be of size at most $d$. Note that we have essentially reduced the dependency among the $E_i$'s to just $d$: $e p_i (d+1) \leq 1$ suffices. Another important point is that the dependency among the r.v.s $C_{i,j}$ could be much higher than $d$: all we count is the number of $E_k$ that any $C_{i,j}$ depends on.

Proof of Theorem 3.1. We prove by induction on $|I|$ that if $i \notin I$ then $\Pr(E_i \mid Z(I)) \leq e p_i$, which suffices to prove the theorem since
$$\Pr\Big(\bigwedge_{i \in [m]} \overline{E_i}\Big) = \prod_{i \in [m]} \big(1 - \Pr(E_i \mid Z([i-1]))\big).$$
For the base case $I = \emptyset$, $\Pr(E_i \mid Z(I)) = \Pr(E_i) \leq p_i$. For the inductive step, let $S_{i,j,I} := \{k \in I : C_{i,j} \text{ depends on } E_k\}$ and $S'_{i,j,I} := I - S_{i,j,I}$; note that $|S_{i,j,I}| \leq d$. If $S_{i,j,I} = \emptyset$, then $E[C_{i,j} \mid Z(I)] = E[C_{i,j}]$. Otherwise, letting $S_{i,j,I} = \{\ell_1, \ldots, \ell_r\}$, we have
$$E[C_{i,j} \mid Z(I)] = \frac{E[C_{i,j} \cdot \chi(Z(S_{i,j,I})) \mid Z(S'_{i,j,I})]}{\Pr(Z(S_{i,j,I}) \mid Z(S'_{i,j,I}))} \leq \frac{E[C_{i,j} \mid Z(S'_{i,j,I})]}{\Pr(Z(S_{i,j,I}) \mid Z(S'_{i,j,I}))},$$
since $C_{i,j}$ is non-negative. The numerator of the last term equals $E[C_{i,j}]$, by assumption (i). The denominator can be lower-bounded as follows:
$$\prod_{s \in [r]} \big(1 - \Pr(E_{\ell_s} \mid Z(\{\ell_1, \ell_2, \ldots, \ell_{s-1}\} \cup S'_{i,j,I}))\big) \geq \prod_{s \in [r]} (1 - e p_{\ell_s}) \geq (1 - 1/(d+1))^r \geq (d/(d+1))^d > 1/e;$$
the first inequality follows from the induction hypothesis. Hence $E[C_{i,j} \mid Z(I)] \leq e \cdot E[C_{i,j}]$, and thus
$$\Pr(E_i \mid Z(I)) \leq \sum_j E[C_{i,j} \mid Z(I)] \leq e p_i \leq 1/(d+1).$$
The crucial point is that the events $E_i$ could have a large dependency $d'$ in the sense of the classical Lemma 1.1. The main utility of Theorem 3.1 is that if we can "decompose" each $E_i$ into r.v.s $C_{i,j}$ that satisfy the conditions of the theorem, then we can effectively reduce the dependency by a lot ($d'$ gets replaced by the value $d$). Concrete instances of this will be studied in the later sections.
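As a toy illustration of this dependency reduction (our sketch; the numbers are made up), the following compares the hypothesis of Lemma 1.1 with that of Theorem 3.1 in a situation where every event depends on every other one, but each $C_{i,j}$ touches only a few events:

```python
import math

# Inputs: p[i] = sum_j E[C_{i,j}], d = max number of events E_k (k != i) that
# any single C_{i,j} depends on, and d_classical = the dependency among the
# E_i themselves in the sense of Lemma 1.1.

def classical_lll_applies(p_max, d_classical):
    return math.e * p_max * (d_classical + 1) <= 1   # Lemma 1.1's condition

def extended_lll_applies(p, d):
    return all(math.e * p_i * (d + 1) <= 1 for p_i in p)  # Theorem 3.1's

# m = 1000 events, each depending on all others (d_classical = 999),
# but with each C_{i,j} touching only d = 20 of the events.
p = [1e-3] * 1000
print(classical_lll_applies(max(p), d_classical=999))  # False: e*1e-3*1000 > 1
print(extended_lll_applies(p, d=20))                   # True:  e*1e-3*21  <= 1
```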
The tools behind our MIP application are our new LLL and a result of [33]. Define, for $z = (z_1, \ldots, z_n) \in \Re^n$, a family of polynomials $S_j(z)$, $j = 0, 1, \ldots, n$, where $S_0(z) \equiv 1$ and, for $j \in [n]$,
$$S_j(z) := \sum_{1 \leq i_1 < i_2 < \cdots < i_j \leq n} z_{i_1} z_{i_2} \cdots z_{i_j}. \qquad (1)$$

Remark 3.3. For real $x$ and non-negative integral $r$, we define $\binom{x}{r} := x(x-1)\cdots(x-r+1)/r!$ as usual; this is the sense meant in Theorem 3.4 below.
We define a nonempty event to be any event with a nonzero probability of occurrence. The relevant theorem of [33] is the following:
Theorem 3.4. ([33]) Given r.v.s $X_1, \ldots, X_n \in [0,1]$, let $X = \sum_{i=1}^n X_i$ and $\mu = E[X]$. Then:
(a) For any $q > 0$, any nonempty event $Z$, and any non-negative integer $k \leq q$, $\Pr(X \geq q \mid Z) \leq E[Y_{k,q} \mid Z]$, where $Y_{k,q} = S_k(X_1, \ldots, X_n)/\binom{q}{k}$.
(b) If the $X_i$'s are independent, $\delta > 0$, and $k = \lceil \mu\delta \rceil$, then $\Pr(X \geq \mu(1+\delta)) \leq E[Y_{k,\mu(1+\delta)}] \leq G(\mu, \delta)$, where $G(\cdot,\cdot)$ is as in Lemma 2.4.
(c) If the $X_i$'s are independent, then $E[S_k(X_1, \ldots, X_n)] \leq \binom{n}{k} \cdot (\mu/n)^k \leq \mu^k/k!$.

Proof. Suppose $r_1, r_2, \ldots, r_n \in [0,1]$ satisfy $\sum_{i=1}^n r_i \geq q$. Then, a simple proof is given in [33] for the fact that for any non-negative integer $k \leq q$, $S_k(r_1, r_2, \ldots, r_n) \geq \binom{q}{k}$. This clearly holds even given the occurrence of any nonempty event $Z$. Thus we get
$$\Pr(X \geq q \mid Z) \leq \Pr(Y_{k,q} \geq 1 \mid Z) \leq E[Y_{k,q} \mid Z],$$
where the second inequality follows from Markov's inequality. The proofs of (b) and (c) are given in [33].
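Theorem 3.4 is easy to deploy numerically: $S_k$ obeys the standard dynamic program for elementary symmetric polynomials, and for independent $X_i$, $E[S_k(X_1, \ldots, X_n)] = S_k(\mu_1, \ldots, \mu_n)$ by linearity of expectation (expectations of products over distinct indices factorize). A small sketch (ours):

```python
from math import comb

def elementary_symmetric(mu, k):
    # e[j] holds S_j of the prefix of mu processed so far.
    e = [1.0] + [0.0] * k
    for x in mu:
        for j in range(k, 0, -1):
            e[j] += e[j - 1] * x
    return e[k]

def tail_bound(mu, q, k):
    # Pr(X >= q) <= E[S_k(X_1,...,X_n)] / C(q, k) = S_k(mu) / C(q, k),
    # valid for independent X_i in {0,1} and any integer k <= q.
    return elementary_symmetric(mu, k) / comb(q, k)

mu = [0.05] * 200                    # X = sum of 200 coins, E[X] = 10
print(tail_bound(mu, q=25, k=15))    # an upper bound on Pr(X >= 25)
```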
We next present the proof of Lemma 2.4:
Proof of Lemma 2.4. Part (a) is the Chernoff-Hoeffding bound (see, e.g., Appendix A of [4], or [27]). For (b), we proceed as follows. For any $\mu > 0$, it is easy to check that
$$G(\mu, \delta) = e^{-\Theta(\mu\delta^2)} \quad \text{if } \delta \in (0,1); \qquad (2)$$
$$G(\mu, \delta) = e^{-\Theta(\mu(1+\delta)\ln(1+\delta))} \quad \text{if } \delta \geq 1. \qquad (3)$$
Now if $\mu \leq \log(p^{-1})/2$, choose
$$\delta = C \cdot \frac{\log(p^{-1})}{\mu \log(\log(p^{-1})/\mu)}$$
for a suitably large constant $C$. Note that $\delta$ is lower-bounded by some positive constant; hence, (3) holds (since the constant 1 in the conditions "$\delta \in (0,1)$" and "$\delta \geq 1$" of (2) and (3) can clearly be replaced by any other positive constant). Simple algebraic manipulation now shows that if $C$ is large enough, then $\lceil \mu\delta \rceil \cdot G(\mu, \delta) \leq p$ holds. Similarly, if $\mu > \log(p^{-1})/2$, we set $\delta = C \cdot \sqrt{\log(\mu + p^{-1})/\mu}$ for a large enough constant $C$, and use (2).
Approximating Minimax Integer Programs
Suppose we are given an MIP conforming to Definition 2.1. Define $t$ to be $\max_{i \in [n]} NZ_i$, where $NZ_i$ is the number of rows of $A$ that have a non-zero coefficient corresponding to at least one variable among $\{x_{i,j} : j \in [\ell_i]\}$. Note that
$$g \leq a \leq t \leq \min\{m,\; a \cdot \max_{i \in [n]} \ell_i\}. \qquad (4)$$
Theorem 4.2 now shows how Theorem 3.1 can help for sparse MIPs (those where $t \ll m$). We will then bootstrap Theorem 4.2 to get the further-improved Theorem 4.5. We start with a proposition whose proof is a simple calculus exercise:

Proposition 4.1. If $0 < \mu_1 \leq \mu_2$, then for any $\delta > 0$, $G(\mu_1, \mu_2\delta/\mu_1) \leq G(\mu_2, \delta)$.

Theorem 4.2. Given an MIP conforming to Definition 2.1, there exists an integral solution of value at most $y^* + O(\min\{y^*, m\} \cdot H(\min\{y^*, m\}, 1/t)) + O(1)$.

Proof. Conduct randomized rounding: independently for each $i$, randomly round exactly one $x_{i,j}$ to 1, guided by the "probabilities" $\{x^*_{i,j}\}$. We may assume that $\{x^*_{i,j}\}$ is a basic feasible solution to the LP relaxation. Hence, at most $m$ of the $\{x^*_{i,j}\}$ will be neither zero nor one, and only these variables will participate in the rounding. Thus, since all the entries of $A$ are in $[0,1]$, we assume without loss of generality from now on that $y^* \leq m$ (and that $\max_{i \in [n]} \ell_i \leq m$); this explains the "$\min\{y^*, m\}$" term in our stated bounds. If $z \in \{0,1\}^N$ denotes the randomly rounded vector, then $E[(Az)_i] = (Ax^*)_i =: b_i$ by linearity of expectation, which is at most $y^*$. Define $k = \lceil y^* H(y^*, 1/(et)) \rceil$ and events $E_1, E_2, \ldots, E_m$ by $E_i \equiv$ "$(Az)_i \geq b_i + k$". Writing $Z_{i,v}$ for the (random) contribution of the $v$th variable to $(Az)_i$, we decompose each $E_i$ into one r.v. per $k$-element subset $S(j)$ of the variables appearing in row $i$:
$$C_{i,j} := \frac{\prod_{v \in S(j)} Z_{i,v}}{\binom{b_i + k}{k}}. \qquad (5)$$
We now need to show that the r.v.s $C_{i,j}$ satisfy the conditions of Theorem 3.1; let $u$ denote the number of such subsets $S(j)$. For any $i \in [m]$, let $\delta_i = k/b_i$. Since $b_i \leq y^*$, Proposition 4.1 gives $G(b_i, \delta_i) \leq G(y^*, k/y^*)$ for each $i \in [m]$. Theorem 3.4(a) shows that for any nonempty event $Z$, $\Pr(E_i \mid Z) \leq \sum_{j \in [u]} E[C_{i,j} \mid Z]$. Also, by parts (b) and (c) of Theorem 3.4 and the choice of $k$,
$$p_i := \sum_{j \in [u]} E[C_{i,j}] \leq G(b_i, \delta_i) \leq G(y^*, k/y^*) \leq \frac{1}{ekt}.$$
Next, since any $C_{i,j}$ involves (a product of) $k$ terms, each of which "depends" on at most $t - 1$ of the events $\{E_v : v \in ([m] - \{i\})\}$ by the definition of $t$, we see the important

Fact 4.4. $\forall i \in [m]\ \forall j \in [u]$: $C_{i,j} \in [0,1]$, and $C_{i,j}$ "depends" on at most $d = k(t-1)$ of the events $\{E_v : v \in ([m] - \{i\})\}$.

Thus $e p_i (d+1) \leq e \cdot (k(t-1)+1)/(ekt) \leq 1$, so Theorem 3.1 shows that with positive probability all the $E_i$ are avoided; any such rounding has value at most $y^* + k$, which is the claimed bound.

Theorem 4.2 gives good results if $t \ll m$, but can we improve it further, say by replacing $t$ by $a$ ($\leq t$) in it? As seen from (4), the key reason for $t \gg a^{\Theta(1)}$ is that $\max_{i \in [n]} \ell_i \gg a^{\Theta(1)}$. If we can essentially "bring down" $\max_{i \in [n]} \ell_i$ by forcing many $x^*_{i,j}$ to be zero for each $i$, then we effectively reduce $t$ ($t \leq a \cdot \max_i \ell_i$; see (4)); this is so since only those $x^*_{i,j}$ that are neither zero nor one take part in the rounding. A way of bootstrapping Theorem 4.2 to achieve this is shown by:

Theorem 4.5. Given an MIP conforming to Definition 2.1, there exists an integral solution of value at most $y^* + O(\min\{y^*, m\} \cdot H(\min\{y^*, m\}, 1/a)) + O(1)$.

Proof. Let $K_0 > 0$ be a sufficiently large absolute constant. Now if
$$(y^* \geq t^{1/7}) \ \text{ or } \ (t \leq \max\{K_0, 2\}) \ \text{ or } \ (t \leq a^4) \qquad (6)$$
holds, then we are done by Theorem 4.2. So we may assume that (6) is false. Also, if $y^* \leq t^{-1/7}$, Theorem 4.2 guarantees an integral solution of value $O(1)$; thus, we also suppose that $y^* > t^{-1/7}$. The basic idea now is, as sketched above, to set many $x^*_{i,j}$ to zero for each $i$ (without losing too much on $y^*$), so that $\max_i \ell_i$, and hence $t$, will essentially get reduced. Such an approach, whose performance will be validated by arguments similar to those of Theorem 4.2, is repeatedly applied until (6) holds, owing to the (continually reduced) $t$ becoming small enough to satisfy (6). There are two cases:
Case I: $y^* \geq 1$. Solve the LP relaxation, and set $x'_{i,j} := (y^*)^2 (\log^5 t)\, x^*_{i,j}$. Conduct randomized rounding on the $x'_{i,j}$ now, rounding each $x'_{i,j}$ independently to $z_{i,j} \in \{\lfloor x'_{i,j} \rfloor, \lceil x'_{i,j} \rceil\}$. (Note the key difference from Theorem 4.2, where for each $i$, we round exactly one $x^*_{i,j}$ to 1.)

Let $K_1 > 0$ be a sufficiently large absolute constant. We now use ideas similar to those used in our proof of Theorem 4.2 to show that with nonzero probability, we have both of the following:
$$\forall i \in [m],\ (Az)_i \leq (y^*)^3 \log^5 t \cdot \big(1 + K_1/((y^*)^{1.5} \log^2 t)\big), \text{ and} \qquad (7)$$
$$\forall i \in [n],\ \Big|\sum_j z_{i,j} - (y^*)^2 \log^5 t\Big| \leq K_1 y^* \log^3 t. \qquad (8)$$

To show this, we proceed as follows. Let $E_1, E_2, \ldots, E_m$ be the "bad" events, one for each event in (7) not holding; similarly, let $E_{m+1}, E_{m+2}, \ldots, E_{m+n}$ be the "bad" events, one for each event in (8) not holding. We want to use our extended LLL to show that with positive probability, all these bad events can be avoided; specifically, we need a way of decomposing each $E_i$ into a finite number of non-negative r.v.s $C_{i,j}$. For each event $E_{m+\ell}$ where $\ell \geq 1$, we define just one r.v., $C_{m+\ell,1}$: the indicator variable for the occurrence of $E_{m+\ell}$. For the events $E_i$ where $i \leq m$, we decompose $E_i$ into r.v.s $C_{i,j}$ just as in (5): each such $C_{i,j}$ is now a scalar multiple of a product of at most $O((y^*)^3 \log^5 t/((y^*)^{1.5} \log^2 t)) = O((y^*)^{1.5} \log^3 t) = O(t^{1.5/7} \log^3 t)$ independent binary r.v.s that underlie our randomized rounding; the last equality (big-Oh bound) holds since (6) has been assumed not to hold. Thus, it is easy to see that for all $i$, $1 \leq i \leq m+n$, and for any $j$, the r.v. $C_{i,j}$ depends on at most
$$O(t \cdot t^{1.5/7} \log^3 t) \qquad (9)$$
events $E_k$, $k \neq i$. Also, as in our proof of Theorem 4.2, Theorem 3.4 gives a direct proof of requirement (ii) of Theorem 3.1; part (b) of Theorem 3.4 shows that for any desired constant $K$, we can choose the constant $K_1$ large enough so that for all $i$, $\sum_j E[C_{i,j}] \leq t^{-K}$. Thus, in view of (9), we see by Theorem 3.1 that $\Pr(\bigwedge_{i=1}^{m+n} \overline{E_i}) > 0$. Fix a rounding $z$ satisfying (7) and (8). For each $i \in [n]$ and $j \in [\ell_i]$, we renormalize as follows: $x''_{i,j} := z_{i,j}/\sum_u z_{i,u}$. Thus we have $\sum_u x''_{i,u} = 1$ for all $i$; we now see that we have two very useful properties. First, since $\sum_j z_{i,j} \geq (y^*)^2 \log^5 t \cdot (1 - O(1/(y^* \log^2 t)))$ for all $i$ from (8), we have
$$\forall i \in [m],\ (Ax'')_i \leq \frac{y^* \big(1 + O(1/((y^*)^{1.5} \log^2 t))\big)}{1 - O(1/(y^* \log^2 t))} \leq y^* \big(1 + O(1/(y^* \log^2 t))\big). \qquad (10)$$
Second, since the $z_{i,j}$ are non-negative integers summing to at most $(y^*)^2 \log^5 t \,(1 + O(1/(y^* \log^2 t)))$, at most $O((y^*)^2 \log^5 t)$ values $x''_{i,j}$ are nonzero, for each $i \in [n]$. Thus, by losing a little in $y^*$ (see (10)), our "scaling up-rounding-scaling down" method has given a fractional solution $x''$ with a much-reduced $\ell_i$ for each $i$: $\ell_i$ is now $O((y^*)^2 \log^5 t)$, essentially. Thus $t$ has been reduced to $O(a (y^*)^2 \log^5 t)$; i.e., since (6) was assumed false, $t$ has been reduced to at most
$$K_2\, t^{1/4 + 2/7} \log^5 t \qquad (11)$$
for some constant $K_2 > 0$ that is independent of $K_0$. Repeating this scheme $O(\log\log t)$ times makes $t$ small enough to satisfy (6). More formally, define $t_0 = t$ and $t_{i+1} = K_2\, t_i^{1/4 + 2/7} \log^5 t_i$ for $i \geq 0$. Stop this sequence at the first point where either $t = t_i$ satisfies (6), or $t_{i+1} \geq t_i$ holds. Thus, we finally have $t$ small enough to satisfy (6) or to be bounded by some absolute constant. How much has $\max_{i \in [m]} (Ax)_i$ increased in the process? By (10), we see that at the end,
$$\max_{i \in [m]} (Ax)_i \leq y^* \cdot \prod_{j \geq 0} \big(1 + O(1/(y^* \log^2 t_j))\big) \leq y^* \cdot e^{O(\sum_{j \geq 0} 1/(y^* \log^2 t_j))} \leq y^* + O(1), \qquad (12)$$
since the values $\log t_j$ decrease geometrically and are lower-bounded by some absolute positive constant. We may now apply Theorem 4.2.
Case II: $t^{-1/7} < y^* < 1$. The idea is the same here, with the scaling up of the $x^*_{i,j}$ being by $(\log^5 t)/y^*$; the same "scaling up-rounding-scaling down" method works out. Since the ideas are very similar to those of Case I, we only give a proof sketch here. We scale up all the $x^*_{i,j}$ first by $(\log^5 t)/y^*$ and do a randomized rounding. The analogs of (7) and (8) are now
$$\forall i \in [m],\ (Az)_i \leq \log^5 t \cdot (1 + K'_1/\log^2 t), \text{ and} \qquad (13)$$
$$\forall i \in [n],\ \Big|\sum_j z_{i,j} - \log^5 t/y^*\Big| \leq K'_1 \log^3 t/\sqrt{y^*}. \qquad (14)$$
Proceeding identically as in Case I, we can show that with positive probability, (13) and (14) hold simultaneously. Fix a rounding where these two properties hold, and renormalize as before: $x''_{i,j} := z_{i,j}/\sum_u z_{i,u}$. Since (13) and (14) hold, it is easy to show that the following analogs of (10) and (11) hold:
$$(Ax'')_i \leq \frac{y^* (1 + O(1/\log^2 t))}{1 - O(\sqrt{y^*}/\log^2 t)} \leq y^* (1 + O(1/\log^2 t)),$$
and $t$ has been reduced to $O(a \log^5 t/y^*)$, i.e., to $O(t^{1/4 + 1/7} \log^5 t)$. We thus only need $O(\log\log t)$ iterations, again. Also, the analog of (12) now is
$$\max_{i \in [m]} (Ax)_i \leq y^* \cdot \prod_{j \geq 0} (1 + O(1/\log^2 t_j)) \leq y^* \cdot e^{O(\sum_{j \geq 0} 1/\log^2 t_j)} \leq y^* + O(1).$$
This completes the proof.
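For concreteness, here is a bare-bones sketch (ours) of the rounding step at the heart of Theorem 4.2, in the global-routing view where choosing option $j$ for block $i$ loads a set of edges. All names and the toy instance are ours:

```python
import random

# For each block i, pick exactly one option j in [l_i] with probability
# x_star[i][j] (the fractional solution), then report the congestion
# max_i (Az)_i.  Here paths[i][j] lists the rows (edges) hit if block i
# chooses option j, i.e., all matrix entries are in {0, 1}.

def round_mip(x_star, paths, m, seed=None):
    rng = random.Random(seed)
    load = [0] * m
    for i, probs in enumerate(x_star):
        (j,) = rng.choices(range(len(probs)), weights=probs, k=1)
        for edge in paths[i][j]:
            load[edge] += 1
    return max(load)

# Toy instance: 3 blocks, 2 candidate paths each, 4 edges.
x_star = [[0.5, 0.5], [0.3, 0.7], [1.0, 0.0]]
paths = [[[0, 1], [2]], [[1, 3], [0]], [[2, 3], [1]]]
print(round_mip(x_star, paths, m=4, seed=1))
```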
We now study our improvements for discrepancy-type problems, an important class of MIPs that, among other things, are useful in devising divide-and-conquer algorithms. Given is a set-system $(X, F)$, where $X = [n]$ and $F = \{D_1, D_2, \ldots, D_M\} \subseteq 2^X$. Given a positive integer $\ell$, the problem is to partition $X$ into $\ell$ parts so that each $D_j$ is "split well": we want a function $f : X \to [\ell]$ that minimizes $\max_{j \in [M],\, k \in [\ell]} |\{i \in D_j : f(i) = k\}|$. (The case $\ell = 2$ is the standard set-discrepancy problem.) To motivate this problem, suppose we have a (di)graph $(V, A)$; we want a partition of $V$ into $V_1, \ldots, V_\ell$ such that for all $v \in V$, the values $|N(v) \cap V_k|$, $k \in [\ell]$, are "roughly the same", where $N(v)$ is the (out-)neighborhood of $v$. See, e.g., [2,17] for how this helps construct divide-and-conquer approaches. This problem is naturally modeled by the above set-system problem.

Let $\Delta$ be the degree of $(X, F)$, i.e., $\max_{i \in [n]} |\{j : i \in D_j\}|$, and let $\Delta' := \max_{D_j \in F} |D_j|$. Our problem is naturally written as an MIP with $m = M\ell$, $\ell_i = \ell$ for each $i$, and $g = a = \Delta$, in the notation of Definition 2.1; $y^* = \Delta'/\ell$ here. The analysis of [32] gives an integral solution of value at most $y^*(1 + O(H(y^*, 1/(M\ell))))$, while [18] presents a solution of value at most $y^* + \Delta$. Also, since any $D_j \in F$ intersects at most $(\Delta - 1)\Delta'$ other elements of $F$, Lemma 1.1 shows that randomized rounding produces, with positive probability, a solution of value at most $y^*(1 + O(H(y^*, 1/(e\Delta'\Delta\ell))))$. This is the approach taken by [16] for their case of interest: $\Delta = \Delta'$, $\ell = \Delta/\log \Delta$. Theorem 4.5 shows the existence of an integral solution of value $y^*(1 + O(H(y^*, 1/\Delta))) + O(1)$, i.e., it removes the dependence on $\Delta'$. This is an improvement on all three results above. As a specific interesting case, suppose $\ell$ grows at most as fast as $\Delta'/\log \Delta$. Then we see that good integral solutions (those that grow at the rate of $O(y^*)$ or better) exist, and this was not known before. (The approach of [16] shows such a result for $\ell = O(\Delta'/\log(\max\{\Delta, \Delta'\}))$. Our bound of $O(\Delta'/\log \Delta)$ is always better than this, and especially so if $\Delta' \gg \Delta$.)
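A sketch (ours) of the corresponding randomized rounding for this partitioning problem: color each point of $X$ independently and uniformly with one of $\ell$ colors and measure how unevenly the worst $(D_j, \text{color})$ pair is split. Theorem 4.5 guarantees that colorings of value $y^*(1 + O(H(y^*, 1/\Delta))) + O(1)$ exist; the toy instance below is made up:

```python
import random
from collections import Counter

def random_partition_value(n, sets, l, seed=None):
    rng = random.Random(seed)
    f = [rng.randrange(l) for _ in range(n)]        # f : X -> [l], uniform
    # Objective: max over sets D and colors k of |{i in D : f(i) = k}|.
    return max(max(Counter(f[i] for i in D).values()) for D in sets)

sets = [list(range(s, s + 30)) for s in range(0, 70, 10)]  # overlapping D_j's
print(random_partition_value(n=100, sets=sets, l=5, seed=0))
```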
Approximating Covering Integer Programs
One of the main ideas behind Theorem 3.1 was to extend the basic inductive proof behind the LLL by decomposing the "bad" events E i appropriately into the r.v.s C i,j . We now use this general idea in a different context, that of (multi-criteria) covering integer programs, with an additional crucial ingredient being a useful correlation inequality, the FKG inequality [15]. The reader is asked to recall the discussion of (multi-criteria) CIPs from § 2. We start with a discussion of randomized rounding for CIPs, the Chernoff lower-tail bound, and the FKG inequality in § 5.1. These lead to our improved, but nonconstructive, approximation bound for column-sparse (multi-criteria) CIPs, in § 5.2. This is then made constructive in § 5.3; we also discuss there what we view as novel about this constructive approach.
Preliminaries
Let us start with a simple and well-known approach to tail bounds. Suppose $Y$ is a random variable and $y$ is some value. Then, for any $0 \leq \delta < 1$, we have
$$\Pr(Y \leq y) \leq \Pr\big((1-\delta)^Y \geq (1-\delta)^y\big) \leq \frac{E[(1-\delta)^Y]}{(1-\delta)^y}, \qquad (15)$$
where the second inequality is a consequence of Markov's inequality.
We next set up some basic notions related to approximation algorithms for (multi-criteria) CIPs. Recall that in such problems, we are given $\ell$ non-negative vectors $c_1, c_2, \ldots, c_\ell$ such that for all $i$, $c_i \in [0,1]^n$ with $\max_j c_{i,j} = 1$; $\ell = 1$ in the case of CIPs. Let $x^* = (x^*_1, x^*_2, \ldots, x^*_n)$ denote a given fractional solution that satisfies the system of constraints $Ax \geq b$. We are not concerned here with how $x^*$ was found: typically, $x^*$ would be an optimal solution to the LP relaxation of the problem. (The LP relaxation is obvious if, e.g., $\ell = 1$, or, say, if the given multi-criteria problem aims to minimize $\max_i c_i^T \cdot x$, or to keep each $c_i^T \cdot x$ bounded by some target value $v_i$.) We now consider how to round $x^*$ to some integral $z$ so that:
(P1) the constraints $Az \geq b$ hold, and (P2) for all $i$, $c_i^T \cdot z$ is "not much bigger" than $c_i^T \cdot x^*$: our approximation bound will be a measure of how small a "not much bigger value" we can achieve in this sense.
Let us now discuss the "standard" randomized rounding scheme for (multi-criteria) CIPs. We assume a fixed instance, as well as $x^*$, from now on. For an $\alpha > 1$ to be chosen suitably, set $x'_j = \alpha x^*_j$ for each $j \in [n]$. We then construct a random integral solution $z$ by setting, independently for each $j \in [n]$,
$$z_j = \lfloor x'_j \rfloor + 1 \ \text{ with probability } x'_j - \lfloor x'_j \rfloor, \quad \text{and} \quad z_j = \lfloor x'_j \rfloor \ \text{ with probability } 1 - (x'_j - \lfloor x'_j \rfloor).$$
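In code, the scheme is the following few lines (our sketch; $g(B, \alpha)$, defined just below in Lemma 5.1, bounds each $\Pr(E_i)$ under this rounding):

```python
import math, random

def standard_rounding(x_star, alpha, seed=None):
    # Scale by alpha and round each coordinate independently to an
    # adjacent integer, with the fractional part as the "up" probability.
    rng = random.Random(seed)
    z = []
    for xs in x_star:
        xp = alpha * xs
        frac = xp - math.floor(xp)
        z.append(math.floor(xp) + (1 if rng.random() < frac else 0))
    return z

def g(B, alpha):
    # Per-constraint failure bound (alpha * e^{-(alpha-1)})^B of Lemma 5.1.
    return (alpha * math.exp(-(alpha - 1))) ** B

x_star = [0.4, 0.25, 0.8, 0.1]
print(standard_rounding(x_star, alpha=1.5, seed=2), g(B=3, alpha=1.5))
```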
The aim then is to show that with positive (hopefully high) probability, (P1) and (P2) happen simultaneously. We now introduce some useful notation. For every $j \in [n]$, let $s_j = \lfloor x'_j \rfloor$. Let $A_i$ denote the $i$th row of $A$, and let $X_1, X_2, \ldots, X_n \in \{0,1\}$ be independent r.v.s with $\Pr(X_j = 1) = x'_j - s_j$ for all $j$. The bad event $E_i$, that the $i$th constraint is violated by our randomized rounding, is given by
$$E_i \equiv \big(A_i \cdot X < \mu_i (1 - \delta_i)\big), \quad \text{where } \mu_i = E[A_i \cdot X] \text{ and } \delta_i = 1 - (b_i - A_i \cdot s)/\mu_i.$$
We now bound $\Pr(E_i)$ for all $i$, when the standard randomized rounding is used. Define $g(B, \alpha) := (\alpha \cdot e^{-(\alpha-1)})^B$.

Lemma 5.1. Under standard randomized rounding, for all $i$,
$$\Pr(E_i) \leq \frac{E[(1-\delta_i)^{A_i \cdot X}]}{(1-\delta_i)^{(1-\delta_i)\mu_i}} \leq g(B, \alpha) \leq e^{-B(\alpha-1)^2/(2\alpha)}.$$
Proof. The first inequality follows from (15). Next, the Chernoff-Hoeffding lower-tail approach [4,27] shows that
$$\frac{E[(1-\delta_i)^{A_i \cdot X}]}{(1-\delta_i)^{(1-\delta_i)\mu_i}} \leq \left(\frac{e^{-\delta_i}}{(1-\delta_i)^{1-\delta_i}}\right)^{\mu_i}.$$
It is observed in [36] (and is not hard to see) that this latter quantity is maximized when $s_j = 0$ for all $j$, and when each $b_i$ equals its minimum value of $B$. Thus we see that $\Pr(E_i) \leq g(B, \alpha)$. The inequality $g(B, \alpha) \leq e^{-B(\alpha-1)^2/(2\alpha)}$ for $\alpha \geq 1$ is well-known and easy to verify via elementary calculus.
Next, the FKG inequality is a useful correlation inequality, a special case of which is as follows [15]. Given binary vectors $a = (a_1, a_2, \ldots, a_\ell)$ and $b = (b_1, b_2, \ldots, b_\ell)$ in $\{0,1\}^\ell$, let us partially order them by coordinate-wise domination: $a \preceq b$ iff $a_i \leq b_i$ for all $i$. Now suppose $Y_1, Y_2, \ldots, Y_\ell$ are independent r.v.s, each taking values in $\{0,1\}$, and let $Y$ denote the vector $(Y_1, Y_2, \ldots, Y_\ell)$. Suppose an event $A$ is completely defined by the value of $Y$. Define $A$ to be increasing iff: for all $a \in \{0,1\}^\ell$ such that $A$ holds when $Y = a$, $A$ also holds when $Y = b$, for any $b$ with $a \preceq b$. Analogously, event $A$ is decreasing iff: for all $a \in \{0,1\}^\ell$ such that $A$ holds when $Y = a$, $A$ also holds when $Y = b$, for any $b \preceq a$. The FKG inequality proves certain intuitively appealing bounds:

Lemma 5.2. ([15]) Let $I_1, I_2, \ldots$ be any increasing events and $D_1, D_2, \ldots$ be any decreasing events, each completely determined by $Y$. Then, for any $i$ and any set $S$ of indices with $i \notin S$:
(i) $\Pr(I_i \mid \bigwedge_{j \in S} I_j) \geq \Pr(I_i)$ and $\Pr(D_i \mid \bigwedge_{j \in S} D_j) \geq \Pr(D_i)$;
(ii) $\Pr(I_i \mid \bigwedge_{j \in S} D_j) \leq \Pr(I_i)$ and $\Pr(D_i \mid \bigwedge_{j \in S} I_j) \leq \Pr(D_i)$.

Returning to our random variables $X_j$ and events $E_i$, we get the following lemma as an easy consequence of the FKG inequality, since each event of the form "$\overline{E_i}$" or "$X_j = 1$" is an increasing event as a function of the vector $(X_1, X_2, \ldots, X_n)$:
Lemma 5.3. For all $B_1, B_2 \subseteq [m]$ such that $B_1 \cap B_2 = \emptyset$, and for any $B_3 \subseteq [n]$,
$$\Pr\left(\bigwedge_{i \in B_1} \overline{E_i} \ \Big|\ \Big(\bigwedge_{j \in B_2} \overline{E_j}\Big) \wedge \Big(\bigwedge_{k \in B_3} (X_k = 1)\Big)\right) \geq \prod_{i \in B_1} \Pr(\overline{E_i}).$$
Nonconstructive approximation bounds for (multi-criteria) CIPs
Definition 5.4. (The function R) For any $s$ and any $j_1 < j_2 < \cdots < j_s$, let $R(j_1, j_2, \ldots, j_s)$ be the set of indices $i$ such that row $i$ of the constraint system "$Ax \geq b$" has at least one of the variables $x_{j_k}$, $1 \leq k \leq s$, appearing with a nonzero coefficient. (Note from the definition of $a$ in Defn. 2.2 that $|R(j_1, j_2, \ldots, j_s)| \leq a \cdot s$.)

Let the vector $x^* = (x^*_1, x^*_2, \ldots, x^*_n)$, the parameter $\alpha > 1$, and the "standard" randomized rounding scheme be as defined in § 5.1. The standard rounding scheme is sufficient for our (nonconstructive) purposes now; we generalize it as follows, for later use in § 5.3.

Definition 5.5. (General randomized rounding) Given a vector $p = (p_1, p_2, \ldots, p_n) \in [0,1]^n$, the general randomized rounding with parameter $p$ generates independent random variables $X_1, X_2, \ldots, X_n \in \{0,1\}$ with $\Pr(X_j = 1) = p_j$; the rounded vector $z$ is defined by $z_j = \lfloor \alpha x^*_j \rfloor + X_j$ for all $j$. (As in the standard rounding, we set each $z_j$ to be either $\lfloor \alpha x^*_j \rfloor$ or $\lceil \alpha x^*_j \rceil$; the standard rounding is the special case in which $E[z_j] = \alpha x^*_j$ for all $j$.)
We now present an important lemma, Lemma 5.6, to get correlation inequalities which "point" in the "direction" opposite to FKG. Some ideas from the proof of Lemma 1.1 will play a crucial role in our proof of this lemma.
Lemma 5.6. Suppose we employ general randomized rounding with some parameter $p$, and that $\Pr(\bigwedge_{i=1}^m \overline{E_i})$ is nonzero under this rounding. The following hold for any $q$ and any $1 \leq j_1 < j_2 < \cdots < j_q \leq n$.
(i)
$$\Pr\left(X_{j_1} = X_{j_2} = \cdots = X_{j_q} = 1 \ \Big|\ \bigwedge_{i=1}^m \overline{E_i}\right) \leq \frac{\prod_{t=1}^q p_{j_t}}{\prod_{i \in R(j_1, j_2, \ldots, j_q)} (1 - \Pr(E_i))}; \qquad (16)$$
the events $E_i \equiv ((Az)_i < b_i)$ are defined here w.r.t. the general randomized rounding.
(ii) In the special case of standard randomized rounding,
$$\prod_{i \in R(j_1, j_2, \ldots, j_q)} (1 - \Pr(E_i)) \geq (1 - g(B, \alpha))^{aq}; \qquad (17)$$
the function $g$ is as defined in Lemma 5.1.
Proof. (i) Note first that if we wanted a lower bound on the l.h.s., the FKG inequality would immediately imply that the l.h.s. is at least $p_{j_1} p_{j_2} \cdots p_{j_q}$. We get around this "correlation problem" as follows. Let $Q = R(j_1, j_2, \ldots, j_q)$ and $Q' = [m] - Q$. Let $Z_1 \equiv (\bigwedge_{i \in Q} \overline{E_i})$ and $Z_2 \equiv (\bigwedge_{i \in Q'} \overline{E_i})$. Letting $Y = \prod_{t=1}^q X_{j_t}$, note that
$$|Q| \leq aq, \text{ and} \qquad (18)$$
$$Y \text{ is independent of } Z_2. \qquad (19)$$
Now,
$$\Pr(Y = 1 \mid (Z_1 \wedge Z_2)) = \frac{\Pr(((Y=1) \wedge Z_1) \mid Z_2)}{\Pr(Z_1 \mid Z_2)} \leq \frac{\Pr((Y=1) \mid Z_2)}{\Pr(Z_1 \mid Z_2)} = \frac{\Pr(Y=1)}{\Pr(Z_1 \mid Z_2)} \quad \text{(by (19))}$$
$$\leq \frac{\prod_{t=1}^q \Pr(X_{j_t} = 1)}{\prod_{i \in R(j_1, j_2, \ldots, j_q)} (1 - \Pr(E_i))} \quad \text{(by Lemma 5.3)}.$$
(ii) We get (17) from Lemma 5.1 and (18).
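Since inequality (16) points opposite to FKG, a quick Monte Carlo sanity check can be reassuring. The following sketch (ours; the tiny covering instance is made up) estimates the left-hand side of (16) for $q = 2$ and compares it with the right-hand side:

```python
import random

def simulate(A, b, p, trials=200_000, seed=3):
    rng = random.Random(seed)
    hits = cond = 0
    fail = [0] * len(A)        # empirical counts of each bad event E_i
    for _ in range(trials):
        X = [1 if rng.random() < pj else 0 for pj in p]
        ok = all(sum(a * x for a, x in zip(row, X)) >= bi
                 for row, bi in zip(A, b))
        for i, (row, bi) in enumerate(zip(A, b)):
            fail[i] += sum(a * x for a, x in zip(row, X)) < bi
        if ok:                 # condition on all constraints holding
            cond += 1
            hits += X[0] and X[1]
    lhs = hits / cond
    R = [i for i, row in enumerate(A) if row[0] or row[1]]  # rows touching x1, x2
    denom = 1.0
    for i in R:
        denom *= 1 - fail[i] / trials
    return lhs, p[0] * p[1] / denom    # lhs should not exceed the bound

A = [[1, 1, 0], [0, 1, 1]]             # two covering constraints, a = 2
print(simulate(A, b=[1, 1], p=[0.6, 0.7, 0.8]))
```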
We will use Lemmas 5.3 and 5.6 to prove Theorem 5.9. As a warmup, let us start with a result for the special case of CIPs; recall that $y^*$ denotes $c^T \cdot x^*$.

Theorem 5.7. For any given CIP, suppose we choose $\alpha, \beta > 1$ such that $\beta(1 - g(B, \alpha))^a > 1$. Then, there exists a feasible solution of value at most $y^* \alpha \beta$. In particular, there is an absolute constant $K > 0$ such that if $\alpha, \beta > 1$ are chosen as:
$$\alpha = K \cdot \ln(a+1)/B \ \text{ and } \ \beta = 2, \quad \text{if } \ln(a+1) \geq B; \qquad (20)$$
$$\alpha = \beta = 1 + K \cdot \sqrt{\ln(a+1)/B}, \quad \text{if } \ln(a+1) < B; \qquad (21)$$
then there exists a feasible solution of value at most $y^* \alpha \beta$. Thus, the integrality gap is at most $1 + O(\max\{\ln(a+1)/B, \sqrt{\ln(a+1)/B}\})$.
Proof. Conduct standard randomized rounding, and let $E$ be the event that $c^T \cdot z > y^* \alpha \beta$. Setting $Z \equiv \bigwedge_{i \in [m]} \overline{E_i}$ and $\mu := E[c^T \cdot z] = y^* \alpha$, we see by Markov's inequality that $\Pr(E \mid Z)$ is at most $R = (\sum_{j=1}^n c_j \Pr(X_j = 1 \mid Z))/(\mu\beta)$. Note that $\Pr(Z) > 0$ since $\alpha > 1$; so, we now seek to make $R < 1$, which will complete the proof. Lemma 5.6 shows that
$$R \leq \frac{\sum_j c_j p_j}{\mu\beta \cdot (1 - g(B, \alpha))^a} \leq \frac{1}{\beta(1 - g(B, \alpha))^a};$$
thus, the condition $\beta(1 - g(B, \alpha))^a > 1$ suffices. Simple algebra shows that choosing $\alpha, \beta > 1$ as in (20) and (21) ensures that $\beta(1 - g(B, \alpha))^a > 1$.
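The parameter choice (20)-(21) is mechanical; here is a transcription (our sketch; $K$ is the unspecified absolute constant, so we expose it as an argument):

```python
import math

def choose_alpha_beta(a, B, K=3.0):
    r = math.log(a + 1) / B
    if math.log(a + 1) >= B:
        return K * r, 2.0                            # (20)
    return 1 + K * math.sqrt(r), 1 + K * math.sqrt(r)  # (21)

alpha, beta = choose_alpha_beta(a=8, B=25)
print(alpha * beta)   # the integrality-gap factor of Theorem 5.7
```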
The basic approach of our proof of Theorem 5.7 was to follow the main idea of Theorem 3.1 and decompose the event "$E \mid Z$" into a non-negative linear combination of events of the form "$X_j = 1 \mid Z$"; we then exploited the fact that each $X_j$ depends on at most $a$ of the events comprising $Z$. We now extend Theorem 5.7 and also generalize to multi-criteria CIPs. Instead of employing just a "first moment method" (Markov's inequality) as in the proof of Theorem 5.7, we will work with higher moments: the functions $S_k$ defined in (1) and used in Theorem 3.4. Suppose some parameters $\lambda_i > 0$ are given, and that our goal is to round $x^*$ to $z$ so that the event
$$A \equiv (Az \geq b) \wedge (\forall i,\ c_i^T \cdot z \leq \lambda_i) \qquad (22)$$
holds. We first give a sufficient condition for this to hold, in Theorem 5.9; we then derive some concrete consequences in Corollary 5.10. We need one further definition before presenting Theorem 5.9. Recall that $A_i$ and $b_i$ respectively denote the $i$th row of $A$ and the $i$th component of $b$. Also, the vector $s$ and the values $\delta_i$ will throughout be as in the definition of standard randomized rounding.
Definition 5.8. (The functions ch and ch′) Suppose we conduct general randomized rounding with some parameter $p$; i.e., let $X_1, X_2, \ldots, X_n$ be independent binary random variables such that $\Pr(X_j = 1) = p_j$. For each $i \in [m]$, define
$$ch_i(p) := \frac{E[(1-\delta_i)^{A_i \cdot X}]}{(1-\delta_i)^{b_i - A_i \cdot s}} = \frac{\prod_{j \in [n]} E[(1-\delta_i)^{A_{i,j} X_j}]}{(1-\delta_i)^{b_i - A_i \cdot s}}, \quad \text{and} \quad ch'_i(p) := \min\{ch_i(p), 1\}.$$
(Note from (15) that if we conduct general randomized rounding with parameter $p$, then $\Pr((Az)_i < b_i) \leq ch'_i(p) \leq ch_i(p)$; also, "ch" stands for "Chernoff-Hoeffding".)

Theorem 5.9. Suppose we are given a multi-criteria CIP, as well as some parameters $\lambda_1, \lambda_2, \ldots, \lambda_\ell > 0$. Let $A$ be as in (22). Then, for any sequence of positive integers $(k_1, k_2, \ldots, k_\ell)$ such that $k_i \leq \lambda_i$ for each $i$, the following hold.
(i) Suppose we employ general randomized rounding with parameter $p = (p_1, p_2, \ldots, p_n)$. Then, $\Pr(A)$ is at least
$$\Phi(p) := \prod_{r \in [m]} (1 - ch'_r(p)) \ - \ \sum_{i=1}^{\ell} \frac{1}{\binom{\lambda_i}{k_i}} \cdot \sum_{j_1 < \cdots < j_{k_i}} \left(\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}\right) \cdot \prod_{r \notin R(j_1, \ldots, j_{k_i})} (1 - ch'_r(p)). \qquad (23)$$
(ii) Suppose we employ the standard randomized rounding to get a rounded vector $z$, and let $\lambda_i = \nu_i(1 + \gamma_i)$ for each $i \in [\ell]$, where $\nu_i = E[c_i^T \cdot z] = \alpha \cdot (c_i^T \cdot x^*)$ and $\gamma_i > 0$ is some parameter. Then,
$$\Phi(p) \geq (1 - g(B, \alpha))^m \cdot \left(1 - \sum_{i=1}^{\ell} \frac{\binom{n}{k_i} \cdot (\nu_i/n)^{k_i}}{\binom{\nu_i(1+\gamma_i)}{k_i}} \cdot (1 - g(B, \alpha))^{-a \cdot k_i}\right). \qquad (24)$$
In particular, if the r.h.s. of (24) is positive, then $\Pr(A) > 0$ under standard randomized rounding.
The proof is a simple generalization of that of Theorem 5.7, and is deferred to § 5.4. Theorem 5.7 is the special case of Theorem 5.9 corresponding to $\ell = k_1 = 1$. To make the general result of Theorem 5.9 more concrete, we now study an additional special case. We present this special case as one possible "proof of concept," rather than as an optimized one; e.g., the constant "3" in the bound "$c_i^T \cdot z \leq 3\nu_i$" can be improved.

Corollary 5.10. There is an absolute constant $K' > 0$ such that the following holds. Suppose we are given a multi-criteria CIP with notation as in part (ii) of Theorem 5.9. Define $\alpha = K' \cdot \max\left\{\frac{\ln(a) + \ln\ln(2\ell)}{B},\ 1\right\}$. Now if $\nu_i \geq \log_2(2\ell)$ for all $i \in [\ell]$, then standard randomized rounding produces a feasible solution $z$ such that $c_i^T \cdot z \leq 3\nu_i$ for all $i$, with positive probability. In particular, this can be shown by setting $k_i = \lceil \ln(2\ell) \rceil$ and $\gamma_i = 2$ for all $i$, in part (ii) of Theorem 5.9.
Proof. Let us employ Theorem 5.9(ii) with $k_i = \lceil \ln(2\ell) \rceil$ and $\gamma_i = 2$ for all $i$. We just need to establish that the r.h.s. of (24) is positive, i.e., that
$$\sum_{i=1}^{\ell} \frac{\binom{n}{k_i} \cdot (\nu_i/n)^{k_i}}{\binom{3\nu_i}{k_i}} \cdot (1 - g(B, \alpha))^{-a \cdot k_i} < 1;$$
it is sufficient to prove that for all $i$,
$$\frac{\nu_i^{k_i}/k_i!}{\binom{3\nu_i}{k_i}} \cdot (1 - g(B, \alpha))^{-a \cdot k_i} < 1/\ell. \qquad (25)$$
We make two observations now.
• Since $k_i \sim \ln \ell$ and $\nu_i \geq \log_2(2\ell)$,
$$\binom{3\nu_i}{k_i} = \frac{1}{k_i!} \cdot \prod_{j=0}^{k_i - 1} (3\nu_i - j) = \frac{1}{k_i!} \cdot (3\nu_i)^{k_i} \cdot e^{-\Theta(\sum_{j=0}^{k_i-1} j/\nu_i)} = \Theta\left(\frac{(3\nu_i)^{k_i}}{k_i!}\right).$$
• $(1 - g(B, \alpha))^{-a \cdot k_i}$ can be made arbitrarily close to 1 by choosing the constant $K'$ large enough.
These two observations establish (25): modulo the $\Theta(1)$ and $1 + o(1)$ factors they control, the l.h.s. of (25) is at most $3^{-k_i} \leq 3^{-\ln(2\ell)} = (2\ell)^{-\ln 3} < 1/(2\ell)$.
Constructive version
It can be shown that for many problems, randomized rounding produces the solutions shown to exist by Theorems 5.7 and 5.9 only with very low probability: e.g., probability almost exponentially small in the input size. Thus we need to obtain constructive versions of these theorems. Our method will be a deterministic procedure that makes $O(n)$ calls to the function $\Phi(\cdot)$, in addition to $\mathrm{poly}(n, m)$ work. Now, if $k'$ denotes the maximum of all the $k_i$, we see that $\Phi$ can be evaluated in $\mathrm{poly}(n^{k'}, m)$ time; thus, our overall procedure runs in $\mathrm{poly}(n^{k'}, m)$ time. In particular, we get constructive versions of Theorem 5.7 and Corollary 5.10 that run in time $\mathrm{poly}(n, m)$ and $\mathrm{poly}(n^{\log \ell}, m)$, respectively.
Our approach is as follows. We start with a vector p that corresponds to standard randomized rounding, for which we know (say, as argued in Corollary 5.10) that Φ(p) > 0. In general, we have a vector of probabilities p = (p 1 , p 2 , . . . , p n ) such that Φ(p) > 0. If p ∈ {0, 1} n , we are done. Otherwise suppose some p j lies in (0, 1); by renaming the variables, we will assume without loss of generality that j = n. Define p ′ = (p 1 , p 2 , . . . , p n−1 , 0) and p ′′ = (p 1 , p 2 , . . . , p n−1 , 1). The main fact we wish to show is that Φ(p ′ ) > 0 or Φ(p ′′ ) > 0: we can then set p n to 0 or 1 appropriately, and continue. (As mentioned in the previous paragraph, we thus have O(n) calls to the function Φ(·) in total.) Note that although some of the p j will lie in {0, 1}, we can crucially continue to view the X j as independent random variables with Pr(X j = 1) = p j .
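To fix ideas, here is a compact sketch (ours) of this derandomization for the simplest case $\ell = k_1 = 1$, i.e., the setting of Theorem 5.7: `make_phi` evaluates $\Phi$ of (23) specialized to that case, and `derandomize` fixes the coordinates of $p$ one at a time while keeping $\Phi$ positive. All identifiers are ours; `delta` and `s` are as in Definition 5.8.

```python
import math

def make_phi(A, b, c, s, delta, lam):
    m, n = len(A), len(c)
    def ch_prime(i, p):
        # ch'_i(p) of Definition 5.8, via the product form over j.
        num = math.prod(1 - pj + pj * (1 - delta[i]) ** A[i][j]
                        for j, pj in enumerate(p))
        den = (1 - delta[i]) ** (b[i] - sum(a * sj for a, sj in zip(A[i], s)))
        return min(1.0, num / den)
    def phi(p):
        q = [ch_prime(i, p) for i in range(m)]
        whole = math.prod(1 - qi for qi in q)          # product over all rows
        loss = sum(c[j] * p[j] *
                   math.prod(1 - q[i] for i in range(m) if A[i][j] == 0)
                   for j in range(n))                  # rows outside R(j)
        return whole - loss / lam
    return phi

def derandomize(phi, p):
    p = list(p)
    assert phi(p) > 0            # guaranteed for suitable alpha, beta
    for j in range(len(p)):
        if 0 < p[j] < 1:
            # Inequality (34) below guarantees one of the two settings works.
            for bit in (1.0, 0.0):
                if phi(p[:j] + [bit] + p[j + 1:]) > 0:
                    p[j] = bit
                    break
    return p
```

By (34), at least one of the two settings of each $p_j$ preserves $\Phi > 0$, so the loop never gets stuck; by Theorem 5.9(i), the final 0-1 vector, added to $s$, then satisfies all covering constraints with objective value at most $\lambda$.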
So, our main goal is: assuming that
$$p_n \in (0,1) \ \text{ and } \ \Phi(p) > 0, \qquad (26)$$
to show that $\Phi(p') > 0$ or $\Phi(p'') > 0$. In order to do so, we make some observations and introduce some simplifying notation. Define, for each $i \in [m]$: $q_i = ch'_i(p)$, $q'_i = ch'_i(p')$, and $q''_i = ch'_i(p'')$.
Also define the vectors $q := (q_1, q_2, \ldots, q_m)$, $q' := (q'_1, q'_2, \ldots, q'_m)$, and $q'' := (q''_1, q''_2, \ldots, q''_m)$. We now present a useful lemma about these vectors:

Lemma 5.11. For all $i \in [m]$, we have
$$0 \leq q''_i \leq q'_i \leq 1; \qquad (27)$$
$$q_i \geq p_n q''_i + (1 - p_n) q'_i; \text{ and} \qquad (28)$$
$$q'_i = q''_i = q_i \ \text{ if } i \notin R(n). \qquad (29)$$
Proof. The proofs of (27) and (29) are straightforward. As for (28), we proceed as in [36]. First of all, if $q_i = 1$, then we are done, since $q''_i, q'_i \leq 1$. So suppose $q_i < 1$; in this case, $q_i = ch_i(p)$. Now, Definition 5.8 shows that $ch_i(p) = p_n \, ch_i(p'') + (1 - p_n) \, ch_i(p')$. Therefore,
$$q_i = ch_i(p) = p_n \, ch_i(p'') + (1 - p_n) \, ch_i(p') \geq p_n \, ch'_i(p'') + (1 - p_n) \, ch'_i(p').$$
Since we are mainly concerned with the vectors $p$, $p'$, and $p''$ now, we will view the values $p_1, p_2, \ldots, p_{n-1}$ as arbitrary but fixed, subject to (26). The function $\Phi(\cdot)$ now has a simple form; to see this, we first define, for a vector $r = (r_1, r_2, \ldots, r_m)$ and a set $U \subseteq [m]$,
$$f(U, r) = \prod_{i \in U} (1 - r_i).$$
Recall that $p_1, p_2, \ldots, p_{n-1}$ are considered as constants now. Then, it is evident from (23) that there exist constants $u_1, u_2, \ldots, u_t$ and $v_1, v_2, \ldots, v_{t'}$, as well as subsets $U_1, U_2, \ldots, U_t$ and $V_1, V_2, \ldots, V_{t'}$ of $[m]$, such that
$$\Phi(p) = f([m], q) - \Big(\sum_i u_i \cdot f(U_i, q)\Big) - \Big(p_n \cdot \sum_j v_j \cdot f(V_j, q)\Big); \qquad (30)$$
$$\Phi(p') = f([m], q') - \Big(\sum_i u_i \cdot f(U_i, q')\Big) - \Big(0 \cdot \sum_j v_j \cdot f(V_j, q')\Big) = f([m], q') - \sum_i u_i \cdot f(U_i, q'); \qquad (31)$$
$$\Phi(p'') = f([m], q'') - \Big(\sum_i u_i \cdot f(U_i, q'')\Big) - \Big(\sum_j v_j \cdot f(V_j, q'')\Big). \qquad (32)$$
Importantly, we also have the following:
$$\text{the constants } u_i, v_j \text{ are non-negative; and } \forall j,\ V_j \cap R(n) = \emptyset. \qquad (33)$$
Recall that our goal is to show that $\Phi(p') > 0$ or $\Phi(p'') > 0$. We will do so by proving that
$$\Phi(p) \leq p_n \Phi(p'') + (1 - p_n) \Phi(p'). \qquad (34)$$
Let us use the equalities (30), (31), and (32). In view of (29) and (33), the term "$-p_n \cdot \sum_j v_j \cdot f(V_j, q)$" appears on both sides of inequality (34) and cancels; defining
$$\Delta(U) := (1 - p_n) \cdot f(U, q') + p_n \cdot f(U, q'') - f(U, q),$$
inequality (34) reduces to
$$\Delta([m]) - \sum_i u_i \cdot \Delta(U_i) \geq 0. \qquad (35)$$
Before proving this, we pause to note a challenge we face. Suppose we only had to show that, say, $\Delta([m])$ is non-negative; this is exactly the issue faced in [36]. Then we would immediately be done by part (i) of Lemma 5.12 below, which states that $\Delta(U) \geq 0$ for any set $U$. However, (35) also has terms such as "$u_i \cdot \Delta(U_i)$" with a negative sign in front. To deal with this, we need something more than just that $\Delta(U) \geq 0$ for all $U$; we handle this by part (ii) of Lemma 5.12. We view this as the main novelty in our constructive version here.

Lemma 5.12. (i) For all $U \subseteq [m]$, $\Delta(U) \geq 0$. (ii) If $U \subseteq U' \subseteq [m]$, then $\Delta(U)/f(U, q) \leq \Delta(U')/f(U', q)$.

Given Lemma 5.12, we can establish (35) as follows (note that $\Phi(p) > 0$ forces $f([m], q) > 0$):
$$\Delta([m]) - \sum_i u_i \cdot \Delta(U_i) \geq \Delta([m]) - \sum_i u_i \cdot f(U_i, q) \cdot \frac{\Delta([m])}{f([m], q)} \quad \text{(by Lemma 5.12(ii) and (33))}$$
$$= \frac{\Delta([m])}{f([m], q)} \cdot \Big(f([m], q) - \sum_i u_i \cdot f(U_i, q)\Big) \geq \frac{\Delta([m])}{f([m], q)} \cdot \Phi(p) \geq 0 \quad \text{(by (26) and (30))}.$$
Thus we have (35).
Proof of Lemma 5.12. It suffices to show the following. Suppose $U \neq [m]$, $u \in ([m] - U)$, and $U' = U \cup \{u\}$. Assuming by induction on $|U|$ that $\Delta(U) \geq 0$, we show that $\Delta(U') \geq 0$ and that $\Delta(U)/f(U, q) \leq \Delta(U')/f(U', q)$. It is easy to check that this way, we will prove both claims of the lemma.

The base case of the induction is $|U| \in \{0, 1\}$, where $\Delta(U) \geq 0$ is directly seen by using (28). Suppose inductively that $\Delta(U) \geq 0$. Using the definition of $\Delta(U)$ and the fact that $f(U', q) = (1 - q_u) f(U, q)$, we have
$$f(U', q) = (1 - q_u) \cdot \big[(1 - p_n) f(U, q') + p_n f(U, q'') - \Delta(U)\big]$$
$$\leq \big(1 - (1 - p_n) q'_u - p_n q''_u\big) \cdot \big[(1 - p_n) f(U, q') + p_n f(U, q'')\big] - (1 - q_u) \cdot \Delta(U),$$
where this last inequality is a consequence of (28). Therefore, using the definition of $\Delta(U')$ and the facts $f(U', q') = (1 - q'_u) f(U, q')$ and $f(U', q'') = (1 - q''_u) f(U, q'')$,
$$\Delta(U') = (1 - p_n)(1 - q'_u) f(U, q') + p_n (1 - q''_u) f(U, q'') - f(U', q)$$
$$\geq (1 - p_n)(1 - q'_u) f(U, q') + p_n (1 - q''_u) f(U, q'') + (1 - q_u) \cdot \Delta(U) - \big(1 - (1 - p_n) q'_u - p_n q''_u\big) \cdot \big[(1 - p_n) f(U, q') + p_n f(U, q'')\big]$$
$$= (1 - q_u) \cdot \Delta(U) + p_n (1 - p_n) \cdot \big(f(U, q'') - f(U, q')\big) \cdot (q'_u - q''_u)$$
$$\geq (1 - q_u) \cdot \Delta(U) \quad \text{(by (27))}.$$
So, since we assumed that $\Delta(U) \geq 0$, we get $\Delta(U') \geq 0$; furthermore, we get that $\Delta(U') \geq (1 - q_u) \cdot \Delta(U)$, which implies that $\Delta(U')/f(U', q) \geq \Delta(U)/f(U, q)$.

Proof of Theorem 5.9

(i) Let $E_r \equiv ((Az)_r < b_r)$ be defined w.r.t. general randomized rounding with parameter $p$; as observed in Definition 5.8, $\Pr(E_r) \leq ch'_r(p)$. Now if $ch'_r(p) = 1$ for some $r$, then part (i) is trivially true; so we assume that for all $r$, $\Pr(E_r) \leq ch'_r(p) < 1$. Set $Z \equiv \bigwedge_{r \in [m]} \overline{E_r}$; Lemma 5.3 shows that $\Pr(Z) \geq \prod_{r \in [m]} (1 - \Pr(E_r)) > 0$.
Define, for $i = 1, 2, \ldots, \ell$, the "bad" event $E_i \equiv (c_i^T \cdot z > \lambda_i)$. Fix any $i$. Our plan is to show that
$$\Pr(E_i \mid Z) \leq \frac{1}{\binom{\lambda_i}{k_i}} \cdot \sum_{j_1 < j_2 < \cdots < j_{k_i}} \left(\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}\right) \cdot \left(\prod_{r \in R(j_1, j_2, \ldots, j_{k_i})} (1 - \Pr(E_r))\right)^{-1}. \qquad (36)$$
If we prove (36), then we will be done, as follows. We have
$$\Pr(A) \geq \Pr(Z) \cdot \Big(1 - \sum_i \Pr(E_i \mid Z)\Big) \geq \Big(\prod_{r \in [m]} (1 - \Pr(E_r))\Big) \cdot \Big(1 - \sum_i \Pr(E_i \mid Z)\Big). \qquad (37)$$
Now, the term "$\prod_{r \in [m]} (1 - \Pr(E_r))$" is a decreasing function of each of the values $\Pr(E_r)$; so is the lower bound on "$-\Pr(E_i \mid Z)$" obtained from (36). Hence, bounds (36) and (37), along with the bound $\Pr(E_r) \leq ch'_r(p)$, will complete the proof of part (i). We now prove (36) using Theorem 3.4(a) and Lemma 5.6. Recall the symmetric polynomials $S_k$ from (1). Define $Y = S_{k_i}(c_{i,1} X_1, c_{i,2} X_2, \ldots, c_{i,n} X_n)/\binom{\lambda_i}{k_i}$. By Theorem 3.4(a), $\Pr(E_i \mid Z) \leq E[Y \mid Z]$. Next, the typical term in $E[Y \mid Z]$ can be upper-bounded using Lemma 5.6:
$$E\left[\prod_{t=1}^{k_i} c_{i,j_t} \cdot X_{j_t} \ \Big|\ \bigwedge_{r=1}^m \overline{E_r}\right] \leq \frac{\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}}{\prod_{r \in R(j_1, j_2, \ldots, j_{k_i})} (1 - \Pr(E_r))}.$$
Thus we have (36), and the proof of part (i) is complete.

(ii) By part (i), $\Pr(A)$ is at least
$$\kappa := \prod_{r \in [m]} (1 - ch'_r(p)) \cdot \left(1 - \sum_{i=1}^{\ell} \frac{1}{\binom{\lambda_i}{k_i}} \cdot \sum_{j_1 < \cdots < j_{k_i}} \Big(\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}\Big) \cdot \Big(\prod_{r \in R(j_1, \ldots, j_{k_i})} (1 - ch'_r(p))\Big)^{-1}\right), \qquad (38)$$
which is just (23) rewritten in factored form. Lemma 5.1 shows that under standard randomized rounding, $ch'_r(p) \leq g(B, \alpha) < 1$ for all $r$. So the r.h.s. $\kappa$ of (38) gets lower-bounded as follows (using $\lambda_i = \nu_i(1+\gamma_i)$ and $|R(j_1, \ldots, j_{k_i})| \leq a k_i$):
$$\kappa \geq (1 - g(B, \alpha))^m \cdot \left(1 - \sum_{i=1}^{\ell} \frac{1}{\binom{\nu_i(1+\gamma_i)}{k_i}} \cdot \sum_{j_1 < \cdots < j_{k_i}} \Big(\prod_{t=1}^{k_i} c_{i,j_t} \cdot p_{j_t}\Big) \cdot (1 - g(B, \alpha))^{-a k_i}\right)$$
$$\geq (1 - g(B, \alpha))^m \cdot \left(1 - \sum_{i=1}^{\ell} \frac{\binom{n}{k_i} \cdot (\nu_i/n)^{k_i}}{\binom{\nu_i(1+\gamma_i)}{k_i}} \cdot (1 - g(B, \alpha))^{-a k_i}\right),$$
where the last line follows from Theorem 3.4(c).
Conclusion
We have presented an extension of the LLL that, in certain settings, substantially reduces the "dependency" one must deal with; we have seen applications to two families of integer programming problems. It would be interesting to see how far these ideas can be pushed further. Two other open problems suggested by this work are: (i) developing a constructive version of our result for MIPs, and (ii) developing a $\mathrm{poly}(n, m)$-time constructive version of Theorem 5.9, as opposed to the $\mathrm{poly}(n^{k'}, m)$-time constructive version that we present in § 5.3. Finally, a very interesting question is to develop a theory of applications of the LLL that can be made constructive with (essentially) no loss.
cs0307043 | 2953390777 | The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs. | A key corollary of our results is that for families of instances of CIPs, we get a good ( @math or @math ) integrality gap if @math grows at least as fast as @math . Bounds on the result of a greedy algorithm for CIPs relative to the optimal solution, are known @cite_29 @cite_7 . Our bound improves that of @cite_29 and is incomparable with @cite_7 ; for any given @math , @math , and the unit vector @math , our bound improves on @cite_7 if @math is more than a certain threshold. As it stands, randomized rounding produces such improved solutions for several CIPs only with a very low, sometimes exponentially small, probability. Thus, it does not imply a randomized algorithm, often. To this end, we generalize Raghavan's method of pessimistic estimators to derive an algorithmic (polynomial-time) version of our results for CIPs, in . | {
"abstract": [
"We give a worst-case analysis for two greedy heuristics for the integer programming problem minimize cx , Ax (ge) b , 0 (le) x (le) u , x integer, where the entries in A, b , and c are all nonnegative. The first heuristic is for the case where the entries in A and b are integral, the second only assumes the rows are scaled so that the smallest nonzero entry is at least 1. In both cases we compare the ratio of the value of the greedy solution to that of the integer optimal. The error bound grows logarithmically in the maximum column sum of A for both heuristics.",
"Worst-case bounds are given on the performance of the greedy heuristic for a continuous version of the set covering problem. This generalizes results of Chvatal, Johnson and Lovasz for the 0-1 covering problem. The results for the greedy heuristic and for other heuristics are obtained by treating the covering problem as a limiting case of a generalized location problem for which worst-case results are known. An alternative approach involving dual greedy heuristics leads also to worst-case bounds for continuous packing problems."
],
"cite_N": [
"@cite_29",
"@cite_7"
],
"mid": [
"2073127061",
"2001911901"
]
} | An Extension of the Lovász Local Lemma, and its Applications to Integer Programming * | The powerful Lovász Local Lemma (LLL) is often used to show the existence of rare combinatorial structures by showing that a random sample from a suitable sample space produces them with positive probability [14]; see Alon & Spencer [4] and Motwani & Raghavan [27] for several such applications. We present an extension of this lemma, and demonstrate applications to rounding fractional solutions for certain families of integer programs.
Let e denote the base of natural logarithms as usual. The symmetric case of the LLL shows that all of a set of "bad" events $E_i$ can be avoided under some conditions:

Lemma 1.1. ([14]) Let $E_1, E_2, \ldots, E_m$ be any events with $\Pr(E_i) \leq p$ for all $i$. If each $E_i$ is mutually independent of all but at most $d$ of the other events $E_j$, and if $ep(d+1) \leq 1$, then $\Pr(\bigwedge_{i=1}^m \overline{E_i}) > 0$.
Though the LLL is powerful, one problem is that the "dependency" d is high in some cases, precluding the use of the LLL if p is not small enough. We present a partial solution to this via an extension of the LLL (Theorem 3.1), which shows how to essentially reduce d for a class of events E i ; this works well when each E i denotes some random variable deviating "much" from its mean. In a nutshell, we show that such events E i can often be decomposed suitably into sub-events; although the sub-events may have a large dependency among themselves, we show that it suffices to have a small "bipartite dependency" between the set of events E i and the set of sub-events. This, in combination with some other ideas, leads to the following applications in integer programming.
It is well-known that a large number of NP-hard combinatorial optimization problems can be cast as integer linear programming problems (ILPs). Due to their NP-hardness, good approximation algorithms are of much interest for such problems. Recall that a ρ-approximation algorithm for a minimization problem is a polynomial-time algorithm that delivers a solution whose objective function value is at most ρ times optimal; ρ is usually called the approximation guarantee, approximation ratio, or performance guarantee of the algorithm. Algorithmic work in this area typically focuses on achieving the smallest possible ρ in polynomial time. One powerful paradigm here is to start with the linear programming (LP) relaxation of the given ILP wherein the variables are allowed to be reals within their integer ranges; once an optimal solution is found for the LP, the main issue is how to round it to a good feasible solution for the ILP.
Rounding results in this context often have the following strong property: they present an integral solution of value at most y * · ρ, where y * will throughout denote the optimal solution value of the LP relaxation.
Since the optimal solution value OP T of the ILP is easily seen to be lower-bounded by y * , such rounding algorithms are also ρ-approximation algorithms. Furthermore, they provide an upper bound of ρ on the ratio OP T /y * , which is usually called the integrality gap or integrality ratio of the relaxation; the smaller this value, the better the relaxation.
This work presents improved upper bounds on the integrality gap of the natural LP relaxation for two families of ILPs: minimax integer programs (MIPs) and covering integer programs (CIPs). (The precise definitions and results are presented in § 2.) For the latter, we also provide the corresponding polynomialtime rounding algorithms. Our main improvements are in the case where the coefficient matrix of the given ILP is column-sparse: i.e., the number of nonzero entries in every column is bounded by a given parameter a. There are classical rounding theorems for such column-sparse problems (e.g., Beck & Fiala [6], Karp, Leighton, Rivest, Thompson, Vazirani & Vazirani [18]). Our results complement, and are incomparable with, these results. Furthermore, the notion of column-sparsity, which denotes no variable occurring in "too many" constraints, occurs naturally in combinatorial optimization: e.g., routing using "short" paths, and problems on hypergraphs with "small" degree. These issues are discussed further in § 2.
A key technique, randomized rounding of linear relaxations, was developed by Raghavan & Thompson [32] to get approximation algorithms for such ILPs. We use Theorem 3.1 to prove that this technique produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrix of the given MIP/CIP is column-sparse. (In the case of MIPs, our algorithm iterates randomized rounding several times with different choices of parameters, in order to achieve our result.) Such results cannot be got via Lemma 1.1, as the dependency d, in the sense of Lemma 1.1, can be as high as Θ(m) for these problems. Roughly speaking, Theorem 3.1 helps show that if no column in our given ILP has more than a nonzero entries, then the dependency can essentially be brought down to a polynomial in a; this is the key driver behind our improvements.
Theorem 3.1 works well in combination with an idea that has blossomed in the areas of derandomization and pseudorandomness, in the last two decades: (approximately) decomposing a function of several variables into a sum of terms, each of which depends on only a few of these variables. Concretely, suppose Z is a sum of random variables Z i . Many tools have been developed to upper-bound Pr(Z − E[Z] ≥ z) and Pr(|Z − E[Z]| ≥ z) even if the Z i s are only (almost) k-wise independent for some "small" k, rather than completely independent. The idea is to bound the probabilities by considering E[(Z − E[Z]) k ] or similar expectations, which look at the Z i k or fewer at a time (via linearity of expectation). The main application of this has been that the Z i can then be sampled using "few" random bits, yielding a derandomization/pseudorandomness result (e.g., [3,23,8,26,28,33]). Our results show that such ideas can in fact be used to show that some structures exist! This is one of our main contributions.
What about polynomial-time algorithms for our existential results? Typical applications of Lemma 1.1 are "nonconstructive" [i.e., do not directly imply (randomized) polynomial-time algorithmic versions], since the positive probability guaranteed by Lemma 1.1 can be exponentially small in the size of the input. However, certain algorithmic versions of the LLL have been developed starting with the seminal work of Beck [5]. These ideas do not seem to apply to our extension of the LLL, and hence our MIP result is nonconstructive. Following the preliminary version of this work [35], two main algorithmic versions related to our work have been obtained: (i) for a subclass of the MIPs [20], and (ii) for a somewhat different notion of approximation than the one we study, for certain families of MIPs [11].
Our main algorithmic contribution is for CIPs and multi-criteria versions thereof: we show, by a generalization of the method of pessimistic estimators [31], that we can efficiently construct the same structure as is guaranteed by our nonconstructive argument. We view this as interesting for two reasons. First, the generalized pessimistic estimator argument requires a quite delicate analysis, which we expect to be useful in other applications of developing constructive versions of existential arguments. Second, except for some of the algorithmic versions of the LLL developed in [24,25], most current algorithmic versions minimally require something like "pd 3 = O(1)" (see, e.g., [5,1]); the LLL only needs that pd = O(1). While this issue does not matter much in many applications, it crucially does, in some others. A good example of this is the existentially-optimal integrality gap for the edge-disjoint paths problem with "short" paths, shown using the LLL in [21]. The above-seen "pd 3 = O(1)" requirement of currently-known algorithmic approaches to the LLL, leads to algorithms that will violate the edge-disjointness condition when applied in this context: specifically, they may route up to three paths on some edges of the graph. See [9] for a different -randomwalk based -approach to low-congestion routing. An algorithmic version of this edge-disjoint paths result of [21] is still lacking. It is a very interesting open question whether there is an algorithmic version of the LLL that can construct the same structures as guaranteed to exist by the LLL. In particular, can one of the most successful derandomization tools -the method of conditional probabilities or its generalization, the pessimistic estimators method -be applied, fixing the underlying random choices of the probabilistic argument one-by-one? This intriguing question is open (and seems difficult) for now. As a step in this direction, we are able to show how such approaches can indeed be developed, in the context of CIPs.
Thus, our main contributions are as follows. (a) The LLL extension is of independent interest: it helps in certain settings where the "dependency" among the "bad" events is too high for the LLL to be directly applicable. We expect to see further applications/extensions of such ideas. (b) This work shows that certain classes of column-sparse ILPs have much better solutions than known before; such problems abound in practice (e.g., short paths are often desired/required in routing). (c) Our generalized method of pessimistic estimators should prove fruitful in other contexts also; it is a step toward complete algorithmic versions of the LLL.
The rest of this paper is organized as follows. Our results are first presented in § 2, along with a discussion of related work. The extended LLL, and some large-deviation methods that will be seen to work well with it, are shown in § 3. Sections 4 and 5 are devoted to our rounding applications. Finally, § 6 concludes.
Improvements achieved
For MIPs, we use the extended LLL and an idea ofÉva Tardos that leads to a bootstrapping of the LLL extension, to show the existence of an integral solution of value y * + O(min{y * , m} · H(min{y * , m}, 1/a)) + O(1); see Theorem 4.5. Since a ≤ m, this is always as good as the y * +O(min{y * , m}·H(min{y * , m}, 1/m)) bound of [32] and is a good improvement, if a ≪ m. It also is an improvement over the additive g factor of [18] in cases where g is not small compared to y * .
Consider, e.g., the global routing problem and its MIP formulation, sketched above; m here is the number of edges in G, and g = a is the maximum length of any path in ∪_i P_i. To focus on a specific interesting case, suppose y*, the fractional congestion, is at most one. Then, while the previous results ([32] and [18], resp.) give bounds of O(log m/log log m) and O(a) on an integral solution, we get the improved bound of O(log a/log log a). Similar improvements are easily seen for other ranges of y* also; e.g., if y* = O(log a), an integral solution of value O(log a) exists, improving on the previously known bounds of O(log m/log(2 log m/log a)) and O(a). Thus, routing along short paths (this is the notion of sparsity for the global routing problem) is very beneficial in keeping the congestion low. Section 4 presents a scenario where we get such improvements, for discrepancy-type problems [34,4]. In particular, we generalize a hypergraph-partitioning result of Füredi & Kahn [16].
Recall the bounds of [36] for CIPs mentioned in the paragraph preceding this subsection; our bounds for CIPs depend only on the set of constraints Ax ≥ b, i.e., they hold for any non-negative objective-function vector c. Our improvements over [36] get better as y* decreases. We show an integrality gap of 1 + O(max{ln(a + 1)/B, √(ln(a + 1)/B)}), once again improving on [36] for weighted CIPs. This CIP bound is better than that of [36] if y* ≤ mB/a: this inequality fails for unweighted CIPs and is generally true for weighted CIPs, since y* can get arbitrarily small in the latter case. In particular, we generalize the result of Chvátal [10] on weighted set cover. Consider, e.g., a facility location problem on a directed graph G = (V, A): given a cost c_i ∈ [0, 1] for each i ∈ V, we want a min-cost assignment of facilities to the nodes such that each node sees at least B facilities in its out-neighborhood (multiple facilities at a node are allowed). If Δ_in is the maximum in-degree of G, we show an integrality gap of 1 + O(max{ln(Δ_in + 1)/B, √(ln(B(Δ_in + 1))/B)}). This improves on [36] if y* ≤ |V|B/Δ_in; it shows an O(1) (resp., 1 + o(1)) integrality gap if B grows as fast as (resp., strictly faster than) log Δ_in. Theorem 5.7 presents our covering results.
A key corollary of our results is that for families of instances of CIPs, we get a good (O(1) or 1 + o(1)) integrality gap if B grows at least as fast as log a. Bounds on the result of a greedy algorithm for CIPs relative to the optimal integral solution are known [12,13]. Our bound improves that of [12] and is incomparable with [13]; for any given A, c, and the unit vector b/||b||_2, our bound improves on [13] if B is more than a certain threshold. As it stands, randomized rounding produces such improved solutions for several CIPs only with a very low, sometimes exponentially small, probability. Thus, it often does not imply a randomized algorithm. To this end, we generalize Raghavan's method of pessimistic estimators to derive an algorithmic (polynomial-time) version of our results for CIPs, in § 5.3.
We also show via Theorem 5.9 and Corollary 5.10 that multi-criteria CIPs can be approximated well. In particular, Corollary 5.10 shows some interesting cases where the approximation guarantee for multi-criteria CIPs grows very sublinearly with the number ℓ of given vectors c_i: the approximation ratio is at most O(log log ℓ) times what we show for CIPs (which correspond to the case ℓ = 1). We are not aware of any such earlier work on multi-criteria CIPs.
The preliminary version of this work was presented in [35]. As mentioned in § 1, two main algorithmic versions related to our work have been obtained following [35]. First, for a subclass of the MIPs where the nonzero entries of the matrix A are "reasonably large", constructive versions of our results have been obtained in [20]. Second, for a notion of approximation that is different from the one we study, algorithmic results have been developed for certain families of MIPs in [11]. Furthermore, our Theorem 5.7 for CIPs has been used in [19] to develop approximation algorithms for CIPs that have given upper bounds on the variables x j .
The Extended LLL and an Approach to Large Deviations
We now present our LLL extension, Theorem 3.1. For any event E, define χ(E) to be its indicator r.v.:
1 if E holds and 0 otherwise. Suppose we have "bad" events E_1, . . . , E_m with a "dependency" d′ (in the sense of Lemma 1.1) that is "large". Theorem 3.1 shows how to essentially replace d′ by a possibly much smaller d, under some conditions. It generalizes Lemma 1.1 (define one r.v., C_{i,1} = χ(E_i), for each i, to get Lemma 1.1), its proof is very similar to the classical proof of Lemma 1.1, and its motivation will be clarified by the applications.
Theorem 3.1. Let E_1, . . . , E_m be any events, and for I ⊆ [m] let Z(I) denote the event ∧_{k∈I} ¬E_k. Suppose that, for some positive integer d, each E_i can be associated with finitely many non-negative r.v.s C_{i,1}, C_{i,2}, . . . such that: (i) any C_{i,j} is mutually independent of all but at most d of the events E_k, k ≠ i, and (ii) ∀I ⊆ ([m] − {i}), Pr(E_i | Z(I)) ≤ Σ_j E[C_{i,j} | Z(I)]. Let p_i denote Σ_j E[C_{i,j}]; clearly, Pr(E_i) ≤ p_i (set I = ∅ in (ii)). Suppose that for all i ∈ [m] we have ep_i(d + 1) ≤ 1. Then Pr(∧_{i∈[m]} ¬E_i) ≥ (d/(d + 1))^m > 0.
Remark 3.2. C_{i,j} and C_{i,j′} can "depend" on different subsets of {E_k | k ≠ i}; the only restriction is that these subsets be of size at most d. Note that we have essentially reduced the dependency among the E_i's to just d: ep_i(d + 1) ≤ 1 suffices. Another important point is that the dependency among the r.v.s C_{i,j} could be much higher than d: all we count is the number of E_k that any C_{i,j} depends on.

Proof of Theorem 3.1. We prove by induction on |I| that if i ∉ I, then Pr(E_i | Z(I)) ≤ ep_i, which suffices to prove the theorem since

Pr(∧_{i∈[m]} ¬E_i) = ∏_{i∈[m]} (1 − Pr(E_i | Z([i − 1]))).

For the base case, where I = ∅, Pr(E_i | Z(I)) = Pr(E_i) ≤ p_i. For the inductive step, let S_{i,j,I} := {k ∈ I | C_{i,j} depends on E_k}, and S′_{i,j,I} := I − S_{i,j,I}; note that |S_{i,j,I}| ≤ d. If S_{i,j,I} = ∅, then E[C_{i,j} | Z(I)] = E[C_{i,j}]. Otherwise, letting S_{i,j,I} = {ℓ_1, . . . , ℓ_r}, we have

E[C_{i,j} | Z(I)] = E[C_{i,j} · χ(Z(S_{i,j,I})) | Z(S′_{i,j,I})] / Pr(Z(S_{i,j,I}) | Z(S′_{i,j,I})) ≤ E[C_{i,j} | Z(S′_{i,j,I})] / Pr(Z(S_{i,j,I}) | Z(S′_{i,j,I})),

since C_{i,j} is non-negative. The numerator of the last term is E[C_{i,j}], by assumption. The denominator can be lower-bounded as follows:

∏_{s∈[r]} (1 − Pr(E_{ℓ_s} | Z({ℓ_1, ℓ_2, . . . , ℓ_{s−1}} ∪ S′_{i,j,I}))) ≥ ∏_{s∈[r]} (1 − ep_{ℓ_s}) ≥ (1 − 1/(d + 1))^r ≥ (d/(d + 1))^d > 1/e;

the first inequality follows from the induction hypothesis. Hence,

E[C_{i,j} | Z(I)] ≤ e·E[C_{i,j}], and thus Pr(E_i | Z(I)) ≤ Σ_j E[C_{i,j} | Z(I)] ≤ ep_i ≤ 1/(d + 1).
The crucial point is that the events E_i could have a large dependency d′, in the sense of the classical Lemma 1.1. The main utility of Theorem 3.1 is that if we can "decompose" each E_i into r.v.s C_{i,j} that satisfy the conditions of the theorem, then the dependency can effectively be reduced from d′ to the possibly much smaller d. Concrete instances of this will be studied in later sections.
The tools behind our MIP application are our new LLL, and a result of [33]. Define, for z = (z_1, . . . , z_n) ∈ ℜ^n, a family of polynomials S_j(z), j = 0, 1, . . . , n, where S_0(z) ≡ 1 and, for j ∈ [n],

S_j(z) := Σ_{1 ≤ i_1 < i_2 < ··· < i_j ≤ n} z_{i_1} z_{i_2} ··· z_{i_j}.  (1)

Remark 3.3. For real x and non-negative integral r, we define binom(x, r) := x(x − 1) ··· (x − r + 1)/r! as usual; this is the sense meant in Theorem 3.4 below.
We define a nonempty event to be any event with a nonzero probability of occurrence. The relevant theorem of [33] is the following:
Theorem 3.4. ([33]) Given r.v.s X_1, . . . , X_n ∈ [0, 1], let X = Σ_{i=1}^n X_i and µ = E[X]. Then:

(a) For any q > 0, any nonempty event Z, and any non-negative integer k ≤ q, Pr(X ≥ q | Z) ≤ E[Y_{k,q} | Z], where Y_{k,q} = S_k(X_1, . . . , X_n)/binom(q, k).

(b) If the X_i's are independent, δ > 0, and k = ⌈µδ⌉, then Pr(X ≥ µ(1 + δ)) ≤ E[Y_{k,µ(1+δ)}] ≤ G(µ, δ), where G(·, ·) is as in Lemma 2.4.

(c) If the X_i's are independent, then E[S_k(X_1, . . . , X_n)] ≤ binom(n, k) · (µ/n)^k ≤ µ^k/k!.
Proof. Suppose r_1, r_2, . . . , r_n ∈ [0, 1] satisfy Σ_{i=1}^n r_i ≥ q. Then, a simple proof is given in [33] for the fact that for any non-negative integer k ≤ q, S_k(r_1, r_2, . . . , r_n) ≥ binom(q, k). This clearly holds even given the occurrence of any nonempty event Z. Thus we get

Pr(X ≥ q | Z) ≤ Pr(Y_{k,q} ≥ 1 | Z) ≤ E[Y_{k,q} | Z],

where the second inequality follows from Markov's inequality. The proofs of (b) and (c) are given in [33].
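As a concrete illustration of the quantities in Theorem 3.4, the sketch below (ours, in Python; not code from [33]) evaluates S_k by the standard dynamic program and the estimator Y_{k,q}. For independent binary X_j, E[S_k(X_1, . . . , X_n)] = S_k(E[X_1], . . . , E[X_n]) by multilinearity (every monomial uses distinct indices), so evaluating S_k at the success probabilities gives the expectation used in part (c).

```python
from math import comb

def elementary_symmetric(z, k):
    """S_0(z), ..., S_k(z) via the textbook DP: after processing each z_i,
    S[j] holds S_j of the prefix seen so far."""
    S = [1.0] + [0.0] * k
    for zi in z:
        for j in range(k, 0, -1):   # descend so each z_i enters a monomial once
            S[j] += zi * S[j - 1]
    return S

def Y_expectation(p, k, q):
    """E[Y_{k,q}] = E[S_k(X_1,...,X_n)] / binom(q, k) for independent X_j with
    Pr(X_j = 1) = p_j; by Theorem 3.4(a) this bounds Pr(sum_j X_j >= q)."""
    return elementary_symmetric(p, k)[k] / comb(q, k)

print(Y_expectation([0.1] * 20, k=3, q=5))   # comb(20,3)*0.001/comb(5,3) = 0.114
```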
We next present the proof of Lemma 2.4:
Proof of Lemma 2.4. Part (a) is the Chernoff-Hoeffding bound (see, e.g., Appendix A of [4], or [27]). For (b), we proceed as follows. For any µ > 0, it is easy to check that

G(µ, δ) = e^{−Θ(µδ^2)} if δ ∈ (0, 1);  (2)
G(µ, δ) = e^{−Θ(µ(1+δ) ln(1+δ))} if δ ≥ 1.  (3)

Now if µ ≤ log(p^{−1})/2, choose

δ = C · log(p^{−1}) / (µ · log(log(p^{−1})/µ))

for a suitably large constant C. Note that δ is lower-bounded by some positive constant; hence, (3) holds (since the constant 1 in the conditions "δ ∈ (0, 1)" and "δ ≥ 1" of (2) and (3) can clearly be replaced by any other positive constant). Simple algebraic manipulation now shows that if C is large enough, then ⌈µδ⌉ · G(µ, δ) ≤ p holds. Similarly, if µ > log(p^{−1})/2, we set δ = C · √(log(µ + p^{−1})/µ) for a large enough constant C, and use (2).
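A numerical sketch of these two choices (ours; G is taken in the standard Chernoff form G(µ, δ) = (e^δ/(1+δ)^{1+δ})^µ, and C = 4 below is a placeholder for the "suitably large constant"):

```python
import math

def G(mu, delta):
    # standard Chernoff upper-tail function (assumed form from Lemma 2.4)
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

def pick_delta(mu, p, C=4.0):
    """The two choices of delta from the proof of Lemma 2.4(b)."""
    L = math.log(1 / p)
    if mu <= L / 2:
        return C * L / (mu * math.log(L / mu))       # regime of (3)
    return C * math.sqrt(math.log(mu + 1 / p) / mu)  # regime of (2)

mu, p = 2.0, 1e-6
d = pick_delta(mu, p)
print(math.ceil(mu * d) * G(mu, d) <= p)   # the target inequality; prints True
```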
Approximating Minimax Integer Programs
Suppose we are given an MIP conforming to Definition 2.1. Define t to be max_{i∈[n]} NZ_i, where NZ_i is the number of rows of A which have a non-zero coefficient corresponding to at least one variable among {x_{i,j} : j ∈ [ℓ_i]}. Note that

g ≤ a ≤ t ≤ min{m, a · max_{i∈[n]} ℓ_i}.  (4)
Theorem 4.2 now shows how Theorem 3.1 can help, for sparse MIPs-those where t ≪ m. We will then bootstrap Theorem 4.2 to get the further improved Theorem 4.5. We start with a proposition, whose proof is a simple calculus exercise:
Proposition 4.1. If 0 < µ_1 ≤ µ_2, then for any δ > 0, G(µ_1, µ_2δ/µ_1) ≤ G(µ_2, δ).

Proof (of Theorem 4.2). Conduct randomized rounding: independently for each i, randomly round exactly one x_{i,j} to 1, guided by the "probabilities" {x*_{i,j}}. We may assume that {x*_{i,j}} is a basic feasible solution to the LP relaxation. Hence, at most m of the {x*_{i,j}} will be neither zero nor one, and only these variables will participate in the rounding. Thus, since all the entries of A are in [0, 1], we assume without loss of generality from now on that y* ≤ m (and that max_{i∈[n]} ℓ_i ≤ m); this explains the "min{y*, m}" term in our stated bounds. If z ∈ {0, 1}^N denotes the randomly rounded vector, then E[(Az)_i] = b_i by linearity of expectation, which is at most y*. Defining k = ⌈y*·H(y*, 1/(et))⌉ and events E_1, E_2, . . . , E_m by E_i ≡ "(Az)_i ≥ b_i + k", we decompose each E_i into the r.v.s

C_{i,j} := (∏_{v∈S(j)} Z_{i,v}) / binom(b_i + k, k), j ∈ [u];  (5)

here, writing (Az)_i = Σ_v Z_{i,v} as a sum of [0, 1]-valued r.v.s, S(1), . . . , S(u) enumerate the k-element subsets of the indices v, so that Σ_{j∈[u]} C_{i,j} is the estimator Y_{k, b_i+k} of Theorem 3.4.
We now need to show that the r.v.s C_{i,j} satisfy the conditions of Theorem 3.1. For any i ∈ [m], let δ_i = k/b_i. Since b_i ≤ y*, we have, for each i ∈ [m], G(b_i, δ_i) ≤ G(y*, k/y*) by Proposition 4.1, and G(y*, k/y*) ≤ 1/(ekt) by the choice of k. By Theorem 3.4(a), for any nonempty event Z, Pr(E_i | Z) ≤ Σ_{j∈[u]} E[C_{i,j} | Z]. Also, p_i := Σ_{j∈[u]} E[C_{i,j}] < G(b_i, δ_i) ≤ 1/(ekt).
Next, since any C_{i,j} involves (a product of) k terms, each of which "depends" on at most (t − 1) of the events {E_v : v ∈ ([m] − {i})} by the definition of t, we see the important

Fact 4.4. ∀i ∈ [m] ∀j ∈ [u], C_{i,j} ∈ [0, 1], and C_{i,j} "depends" on at most d = k(t − 1) of the events {E_v : v ∈ ([m] − {i})}.

Since ep_i(d + 1) ≤ e · (1/(ekt)) · kt = 1, Theorem 3.1 now shows that Pr(∧_{i∈[m]} ¬E_i) > 0, completing the proof of Theorem 4.2.

Theorem 4.2 gives good results if t ≪ m, but can we improve it further, say by replacing t by a (≤ t) in it? As seen from (4), the key reason for t ≫ a^{Θ(1)} is that max_{i∈[n]} ℓ_i ≫ a^{Θ(1)}. If we can essentially "bring down" max_{i∈[n]} ℓ_i by forcing many x*_{i,j} to be zero for each i, then we effectively reduce t (t ≤ a · max_i ℓ_i; see (4)); this is so since only those x*_{i,j} that are neither zero nor one take part in the rounding. A way of bootstrapping Theorem 4.2 to achieve this is given by Theorem 4.5.

Proof (of Theorem 4.5). Let K_0 > 0 be a sufficiently large absolute constant. Now if

(y* ≥ t^{1/7}) or (t ≤ max{K_0, 2}) or (t ≤ a^4)  (6)

holds, then we will be done by Theorem 4.2. So we may assume that (6) is false. Also, if y* ≤ t^{−1/7}, Theorem 4.2 guarantees an integral solution of value O(1); thus, we also suppose that y* > t^{−1/7}. The basic idea now is, as sketched above, to set many x*_{i,j} to zero for each i (without losing too much on y*), so that max_i ℓ_i, and hence t, will essentially get reduced. Such an approach, whose performance will be validated by arguments similar to those of Theorem 4.2, is repeatedly applied until (6) holds, owing to the (continually reduced) t becoming small enough to satisfy (6). There are two cases:
Case I: y* ≥ 1. Solve the LP relaxation, and set x′_{i,j} := (y*)^2 (log^5 t) x*_{i,j}. Conduct randomized rounding on the x′_{i,j} now, rounding each x′_{i,j} independently to z_{i,j} ∈ {⌊x′_{i,j}⌋, ⌈x′_{i,j}⌉}. (Note the key difference from Theorem 4.2, where for each i, we round exactly one x*_{i,j} to 1.)
Let K_1 > 0 be a sufficiently large absolute constant. We now use ideas similar to those used in our proof of Theorem 4.2 to show that with nonzero probability, we have both of the following:

∀i ∈ [m], (Az)_i ≤ (y*)^3 log^5 t · (1 + K_1/((y*)^{1.5} log^2 t)), and  (7)
∀i ∈ [n], |Σ_j z_{i,j} − (y*)^2 log^5 t| ≤ K_1 y* log^3 t.  (8)
To show this, we proceed as follows. Let E_1, E_2, . . . , E_m be the "bad" events, one for each event in (7) not holding; similarly, let E_{m+1}, E_{m+2}, . . . , E_{m+n} be the "bad" events, one for each event in (8) not holding. We want to use our extended LLL to show that with positive probability, all these bad events can be avoided; specifically, we need a way of decomposing each E_i into a finite number of non-negative r.v.s C_{i,j}. For each event E_{m+ℓ} where ℓ ≥ 1, we define just one r.v. C_{m+ℓ,1}: this is the indicator variable for the occurrence of E_{m+ℓ}. For the events E_i where i ≤ m, we decompose E_i into r.v.s C_{i,j} just as in (5): each such C_{i,j} is now a scalar multiple of at most O((y*)^3 log^5 t/((y*)^{1.5} log^2 t)) = O((y*)^{1.5} log^3 t) = O(t^{1.5/7} log^3 t) independent binary r.v.s that underlie our randomized rounding; the second equality (big-Oh bound) here follows since (6) has been assumed to not hold. Thus, it is easy to see that for all i, 1 ≤ i ≤ m + n, and for any j, the r.v. C_{i,j} depends on at most

O(t · t^{1.5/7} log^3 t)  (9)

events E_k, where k ≠ i. Also, as in our proof of Theorem 4.2, Theorem 3.4 gives a direct proof of requirement (ii) of Theorem 3.1; part (b) of Theorem 3.4 shows that for any desired constant K, we can choose the constant K_1 large enough so that for all i, Σ_j E[C_{i,j}] ≤ t^{−K}. Thus, in view of (9), we see by Theorem 3.1 that Pr(∧_{i=1}^{m+n} ¬E_i) > 0. Fix a rounding z satisfying (7) and (8). For each i ∈ [n] and j ∈ [ℓ_i], we renormalize as follows: x′′_{i,j} := z_{i,j}/Σ_u z_{i,u}. Thus we have Σ_u x′′_{i,u} = 1 for all i; we now see that we have two very useful properties. First, since Σ_j z_{i,j} ≥ (y*)^2 log^5 t · (1 − O(1/(y* log^2 t))) for all i from (8), we have,

∀i ∈ [m], (Ax′′)_i ≤ y* (1 + O(1/((y*)^{1.5} log^2 t))) / (1 − O(1/(y* log^2 t))) ≤ y* (1 + O(1/(y* log^2 t))).  (10)
Second, since the z_{i,j} are non-negative integers summing to at most (y*)^2 log^5 t · (1 + O(1/(y* log^2 t))), at most O((y*)^2 log^5 t) values x′′_{i,j} are nonzero, for each i ∈ [n]. Thus, by losing a little in y* (see (10)), our "scaling up-rounding-scaling down" method has given a fractional solution x′′ with a much-reduced ℓ_i for each i; ℓ_i is now O((y*)^2 log^5 t), essentially. Thus, t has been reduced to O(a(y*)^2 log^5 t); i.e., t has been reduced to at most

K_2 t^{1/4+2/7} log^5 t  (11)
for some constant K_2 > 0 that is independent of K_0, since (6) was assumed false. Repeating this scheme O(log log t) times makes t small enough to satisfy (6). More formally, define t_0 = t, and t_{i+1} = K_2 t_i^{1/4+2/7} log^5 t_i for i ≥ 0. Stop this sequence at the first point where either t = t_i satisfies (6), or t_{i+1} ≥ t_i holds. Thus, we finally have t small enough to satisfy (6) or to be bounded by some absolute constant. How much has max_{i∈[m]} (Ax)_i increased in the process? By (10), we see that at the end,

max_{i∈[m]} (Ax)_i ≤ y* · ∏_{j≥0} (1 + O(1/(y* log^2 t_j))) ≤ y* · e^{O(Σ_{j≥0} 1/(y* log^2 t_j))} ≤ y* + O(1),  (12)

since the values log t_j decrease geometrically and are lower-bounded by some absolute positive constant. We may now apply Theorem 4.2.
Case II: t^{−1/7} < y* < 1. The idea is the same here, with the scaling up of the x*_{i,j} being by (log^5 t)/y*; the same "scaling up-rounding-scaling down" method works out. Since the ideas are very similar to Case I, we only give a proof sketch here. We now scale up all the x*_{i,j} first by (log^5 t)/y* and do a randomized rounding. The analogs of (7) and (8) are

∀i ∈ [m], (Az)_i ≤ log^5 t · (1 + K′_1/log^2 t), and  (13)
∀i ∈ [n], |Σ_j z_{i,j} − log^5 t/y*| ≤ K′_1 log^3 t/√(y*).  (14)
Proceeding identically as in Case I, we can show that with positive probability, (13) and (14) hold simultaneously. Fix a rounding where these two properties hold, and renormalize as before: x′′_{i,j} := z_{i,j}/Σ_u z_{i,u}. Since (13) and (14) hold, it is easy to show that the following analogs of (10) and (11) hold:
(Ax′′)_i ≤ y* (1 + O(1/log^2 t)) / (1 − O(√(y*)/log^2 t)) ≤ y* (1 + O(1/log^2 t));

and t has been reduced to O(a log^5 t/y*), i.e., to O(t^{1/4+1/7} log^5 t).
We thus only need O(log log t) iterations, again. Also, the analog of (12) now is that

max_{i∈[m]} (Ax)_i ≤ y* · ∏_{j≥0} (1 + O(1/log^2 t_j)) ≤ y* · e^{O(Σ_{j≥0} 1/log^2 t_j)} ≤ y* + O(1).
This completes the proof.
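To see numerically why O(log log t) rounds suffice, the sketch below (ours; K_2 = 1 and the crude smallness threshold are placeholders) runs the Case I recurrence with the stopping rule from the proof, halting once t satisfies the analog of (6) or stops shrinking:

```python
import math

def iterate_t(t, K2=1.0, floor=16.0):
    """t_{i+1} = K2 * t_i^(1/4 + 2/7) * log^5(t_i); stop when t is small
    (stand-in for condition (6)) or when t_{i+1} >= t_i."""
    i = 0
    while t > floor:
        nxt = K2 * t ** (1 / 4 + 2 / 7) * math.log(t) ** 5
        if nxt >= t:
            break
        t, i = nxt, i + 1
    return i, t

for t0 in (1e12, 1e24, 1e48, 1e96):
    print(t0, iterate_t(t0))   # the iteration count grows very slowly in t0
```

For moderate t the log^5 factor dominates and the loop exits immediately via the t_{i+1} ≥ t_i guard, which is exactly the second stopping condition in the proof.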
We now study our improvements for discrepancy-type problems, which are an important class of MIPs that, among other things, are useful in devising divide-and-conquer algorithms. Given is a set-system (X, F), where X = [n] and F = {D_1, D_2, . . . , D_M} ⊆ 2^X. Given a positive integer ℓ, the problem is to partition X into ℓ parts so that each D_j is "split well": we want a function f : X → [ℓ] which minimizes max_{j∈[M], k∈[ℓ]} |{i ∈ D_j : f(i) = k}|. (The case ℓ = 2 is the standard set-discrepancy problem.) To motivate this problem, suppose we have a (di)graph (V, A); we want a partition of V into V_1, . . . , V_ℓ such that, ∀v ∈ V, the values {|N(v) ∩ V_k| : k ∈ [ℓ]} are "roughly the same", where N(v) is the (out-)neighborhood of v. See, e.g., [2,17] for how this helps construct divide-and-conquer approaches. This problem is naturally modeled by the above set-system problem.
Let Δ be the degree of (X, F), i.e., max_{i∈[n]} |{j : i ∈ D_j}|, and let Δ′ := max_{D_j∈F} |D_j|. Our problem is naturally written as an MIP with m = Mℓ, ℓ_i = ℓ for each i, and g = a = Δ, in the notation of Definition 2.1; y* = Δ′/ℓ here. The analysis of [32] gives an integral solution of value at most y*(1 + O(H(y*, 1/(Mℓ)))), while [18] presents a solution of value at most y* + Δ. Also, since any D_j ∈ F intersects at most (Δ − 1)Δ′ other elements of F, Lemma 1.1 shows that randomized rounding produces, with positive probability, a solution of value at most y*(1 + O(H(y*, 1/(eΔ′Δℓ)))). This is the approach taken by [16] for their case of interest: Δ = Δ′, ℓ = Δ/log Δ. Theorem 4.5 shows the existence of an integral solution of value y*(1 + O(H(y*, 1/Δ))) + O(1), i.e., removes the dependence on Δ′. This is an improvement on all three results above. As a specific interesting case, suppose ℓ grows at most as fast as Δ′/log Δ. Then we see that good integral solutions (those that grow at the rate of O(y*) or better) exist, and this was not known before. (The approach of [16] shows such a result for ℓ = O(Δ′/log(max{Δ, Δ′})). Our bound of O(Δ′/log Δ) is always better than this, and especially so if Δ′ ≫ Δ.)
Approximating Covering Integer Programs
One of the main ideas behind Theorem 3.1 was to extend the basic inductive proof behind the LLL by decomposing the "bad" events E i appropriately into the r.v.s C i,j . We now use this general idea in a different context, that of (multi-criteria) covering integer programs, with an additional crucial ingredient being a useful correlation inequality, the FKG inequality [15]. The reader is asked to recall the discussion of (multi-criteria) CIPs from § 2. We start with a discussion of randomized rounding for CIPs, the Chernoff lower-tail bound, and the FKG inequality in § 5.1. These lead to our improved, but nonconstructive, approximation bound for column-sparse (multi-criteria) CIPs, in § 5.2. This is then made constructive in § 5.3; we also discuss there what we view as novel about this constructive approach.
Preliminaries
Let us start with a simple and well-known approach to tail bounds. Suppose Y is a random variable and y is some value. Then, for any 0 ≤ δ < 1, we have

Pr(Y ≤ y) ≤ Pr((1 − δ)^Y ≥ (1 − δ)^y) ≤ E[(1 − δ)^Y]/(1 − δ)^y,  (15)

where the second inequality is a consequence of Markov's inequality.
We next set up some basic notions related to approximation algorithms for (multi-criteria) CIPs. Recall that in such problems, we have ℓ given non-negative vectors c_1, c_2, . . . , c_ℓ such that for all i, c_i ∈ [0, 1]^n with max_j c_{i,j} = 1; ℓ = 1 in the case of CIPs. Let x* = (x*_1, x*_2, . . . , x*_n) denote a given fractional solution that satisfies the system of constraints Ax* ≥ b. We are not concerned here with how x* was found: typically, x* would be an optimal solution to the LP relaxation of the problem. (The LP relaxation is obvious if, e.g., ℓ = 1, or, say, if the given multi-criteria problem aims to minimize max_i c_i^T · x, or to keep each c_i^T · x bounded by some target value v_i.) We now consider how to round x* to some integral z so that:

(P1) the constraints Az ≥ b hold, and (P2) for all i, c_i^T · z is "not much bigger" than c_i^T · x*: our approximation bound will be a measure of how small a "not much bigger value" we can achieve in this sense.
Let us now discuss the "standard" randomized rounding scheme for (multi-criteria) CIPs. We assume a fixed instance as well as x * , from now on. For an α > 1 to be chosen suitably, set x ′ j = αx * j , for each j ∈ [n]. We then construct a random integral solution z by setting, independently for each j ∈ [n],
z j = ⌊x ′ j ⌋ + 1 with probability x ′ j − ⌊x ′ j ⌋, and z j = ⌊x ′ j ⌋ with probability 1 − (x ′ j − ⌊x ′ j ⌋).
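A minimal sketch (ours; names are illustrative) of this standard rounding scheme:

```python
import random

def standard_rounding(x_star, alpha, rng=random.Random(0)):
    """Scale x* by alpha > 1, then independently round each coordinate:
    z_j = floor(x'_j) + 1 with probability frac(x'_j), else floor(x'_j)."""
    z = []
    for xj in x_star:
        xp = alpha * xj
        base = int(xp)                        # floor, since xp >= 0
        z.append(base + (rng.random() < xp - base))
    return z

print(standard_rounding([0.3, 1.4, 0.0, 2.7], alpha=1.5))
```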
The aim then is to show that with positive (hopefully high) probability, (P1) and (P2) happen simultaneously. We now introduce some useful notation. For every j ∈ [n], let s_j = ⌊x′_j⌋. Let A_i denote the ith row of A, and let X_1, X_2, . . . , X_n ∈ {0, 1} be independent r.v.s with Pr(X_j = 1) = x′_j − s_j for all j. The bad event E_i that the ith constraint is violated by our randomized rounding is given by

E_i ≡ "A_i · X < µ_i(1 − δ_i)", where µ_i = E[A_i · X] and δ_i = 1 − (b_i − A_i · s)/µ_i.
We now bound Pr(E_i) for all i, when the standard randomized rounding is used. Define g(B, α) := (α · e^{−(α−1)})^B.

Lemma 5.1. Under standard randomized rounding, for all i,

Pr(E_i) ≤ E[(1 − δ_i)^{A_i·X}]/(1 − δ_i)^{(1−δ_i)µ_i} ≤ g(B, α) ≤ e^{−B(α−1)^2/(2α)}.
Proof. The first inequality follows from (15). Next, the Chernoff-Hoeffding lower-tail approach [4,27] shows that

E[(1 − δ_i)^{A_i·X}]/(1 − δ_i)^{(1−δ_i)µ_i} ≤ (e^{−δ_i}/(1 − δ_i)^{1−δ_i})^{µ_i}.

It is observed in [36] (and is not hard to see) that this latter quantity is maximized when s_j = 0 for all j, and when each b_i equals its minimum value of B. Thus we see that Pr(E_i) ≤ g(B, α). The inequality g(B, α) ≤ e^{−B(α−1)^2/(2α)} for α ≥ 1 is well-known and easy to verify via elementary calculus.
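A quick numerical spot-check (ours) of the last inequality of Lemma 5.1:

```python
import math

def g(B, alpha):
    return (alpha * math.exp(-(alpha - 1))) ** B

# check g(B, alpha) <= exp(-B(alpha-1)^2/(2*alpha)) over a small grid
ok = all(g(B, a) <= math.exp(-B * (a - 1) ** 2 / (2 * a)) + 1e-12
         for B in (1, 2, 5, 10) for a in (1.01, 1.5, 2.0, 4.0, 10.0))
print(ok)   # prints True
```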
Next, the FKG inequality is a useful correlation inequality, a special case of which is as follows [15]. Given binary vectors a = (a_1, a_2, . . . , a_ℓ) ∈ {0, 1}^ℓ and b = (b_1, b_2, . . . , b_ℓ) ∈ {0, 1}^ℓ, let us partially order them by coordinate-wise domination: a ⪯ b iff a_i ≤ b_i for all i. Now suppose Y_1, Y_2, . . . , Y_ℓ are independent r.v.s, each taking values in {0, 1}. Let Y denote the vector (Y_1, Y_2, . . . , Y_ℓ). Suppose an event A is completely defined by the value of Y. Define A to be increasing iff: for all a ∈ {0, 1}^ℓ such that A holds when Y = a, A also holds when Y = b, for any b such that a ⪯ b. Analogously, event A is decreasing iff: for all a ∈ {0, 1}^ℓ such that A holds when Y = a, A also holds when Y = b, for any b ⪯ a. The FKG inequality proves certain intuitively appealing bounds:

Lemma 5.2. (FKG inequality [15]) Let I_1, I_2, . . . be increasing events and D_1, D_2, . . . be decreasing events, each completely determined by Y. Then, for any i and any set of indices S:

(i) Pr(I_i | ∧_{j∈S} I_j) ≥ Pr(I_i) and Pr(D_i | ∧_{j∈S} D_j) ≥ Pr(D_i);
(ii) Pr(I_i | ∧_{j∈S} D_j) ≤ Pr(I_i) and Pr(D_i | ∧_{j∈S} I_j) ≤ Pr(D_i).
Returning to our random variables X_j and events E_i, we get the following lemma as an easy consequence of the FKG inequality, since each event of the form "¬E_i" or "X_j = 1" is an increasing event as a function of the vector (X_1, X_2, . . . , X_n):

Lemma 5.3. For all B_1, B_2 ⊆ [m] such that B_1 ∩ B_2 = ∅, and for any B_3 ⊆ [n],

Pr(∧_{i∈B_1} ¬E_i | ((∧_{j∈B_2} ¬E_j) ∧ (∧_{k∈B_3} (X_k = 1)))) ≥ ∏_{i∈B_1} Pr(¬E_i).
Nonconstructive approximation bounds for (multi-criteria) CIPs
Definition 5.4. (The function R) For any s and any j_1 < j_2 < ··· < j_s, let R(j_1, j_2, . . . , j_s) be the set of indices i such that row i of the constraint system "Ax ≥ b" has at least one of the variables x_{j_k}, 1 ≤ k ≤ s, appearing with a nonzero coefficient. (Note from the definition of a in Defn. 2.2 that |R(j_1, j_2, . . . , j_s)| ≤ a · s.)
Let the vector x* = (x*_1, x*_2, . . . , x*_n), the parameter α > 1, and the "standard" randomized rounding scheme be as defined in § 5.1. The standard rounding scheme is sufficient for our (nonconstructive) purposes now; we generalize this scheme as follows, for later use in § 5.3.

Definition 5.5. (General randomized rounding) Given a vector p = (p_1, p_2, . . . , p_n) ∈ [0, 1]^n, the general randomized rounding with parameter p generates independent random variables X_1, X_2, . . . , X_n ∈ {0, 1} with Pr(X_j = 1) = p_j; the rounded vector z is defined by z_j = ⌊αx*_j⌋ + X_j for all j. (As in the standard rounding, we set each z_j to be either ⌊αx*_j⌋ or ⌈αx*_j⌉; the standard rounding is the special case in which E[z_j] = αx*_j for all j.)
We now present an important lemma, Lemma 5.6, to get correlation inequalities which "point" in the "direction" opposite to FKG. Some ideas from the proof of Lemma 1.1 will play a crucial role in our proof of this lemma.
Lemma 5.6. Suppose we employ general randomized rounding with some parameter p, and that Pr(∧_{i=1}^m ¬E_i) is nonzero under this rounding. The following hold for any q and any 1 ≤ j_1 < j_2 < ··· < j_q ≤ n.

(i) Pr(X_{j_1} = X_{j_2} = ··· = X_{j_q} = 1 | ∧_{i=1}^m ¬E_i) ≤ (∏_{t=1}^q p_{j_t}) / (∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i)));  (16)

the events E_i ≡ ((Az)_i < b_i) are defined here w.r.t. the general randomized rounding.

(ii) In the special case of standard randomized rounding,

∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i)) ≥ (1 − g(B, α))^{aq};  (17)

the function g is as defined in Lemma 5.1.
Proof. (i) Note first that if we wanted a lower bound on the l.h.s., the FKG inequality would immediately imply that the l.h.s. is at least p_{j_1} p_{j_2} ··· p_{j_q}. We get around this "correlation problem" as follows. Let Q = R(j_1, j_2, . . . , j_q), and let Q′ = [m] − Q. Let Z_1 ≡ (∧_{i∈Q} ¬E_i) and Z_2 ≡ (∧_{i∈Q′} ¬E_i). Letting Y = ∏_{t=1}^q X_{j_t}, note that

|Q| ≤ aq, and  (18)
Y is independent of Z_2.  (19)

Now,

Pr(Y = 1 | (Z_1 ∧ Z_2)) = Pr(((Y = 1) ∧ Z_1) | Z_2) / Pr(Z_1 | Z_2)
≤ Pr((Y = 1) | Z_2) / Pr(Z_1 | Z_2)
= Pr(Y = 1) / Pr(Z_1 | Z_2)  (by (19))
≤ (∏_{t=1}^q Pr(X_{j_t} = 1)) / (∏_{i∈R(j_1,j_2,...,j_q)} (1 − Pr(E_i)))  (by Lemma 5.3).
(ii) We get (17) from Lemma 5.1 and (18).
We will use Lemmas 5.3 and 5.6 to prove Theorem 5.9. As a warmup, let us start with a result for the special case of CIPs; recall that y* denotes c^T · x*.

Theorem 5.7. For any given CIP, suppose we choose α, β > 1 such that β(1 − g(B, α))^a > 1. Then, there exists a feasible solution of value at most y*αβ. In particular, there is an absolute constant K > 0 such that if α, β > 1 are chosen as:

α = K · ln(a + 1)/B and β = 2, if ln(a + 1) ≥ B, and  (20)
α = β = 1 + K · √(ln(a + 1)/B), if ln(a + 1) < B;  (21)

then there exists a feasible solution of value at most y*αβ. Thus, the integrality gap is at most 1 + O(max{ln(a + 1)/B, √(ln(a + 1)/B)}).
Proof. Conduct standard randomized rounding, and let E be the event that c^T · z > y*αβ. Setting Z ≡ ∧_{i∈[m]} ¬E_i and µ := E[c^T · z] = y*α, we see by Markov's inequality that Pr(E | Z) is at most R = (Σ_{j=1}^n c_j Pr(X_j = 1 | Z))/(µβ). Note that Pr(Z) > 0 since α > 1; so, we now seek to make R < 1, which will complete the proof. Lemma 5.6 shows that

R ≤ (Σ_j c_j p_j) / (µβ · (1 − g(B, α))^a) = 1/(β(1 − g(B, α))^a);

thus, the condition β(1 − g(B, α))^a > 1 suffices. Simple algebra shows that choosing α, β > 1 as in (20) and (21) ensures that β(1 − g(B, α))^a > 1.
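A sketch (ours; K = 3 stands in for the unspecified absolute constant) of the parameter choices (20)-(21) and the resulting sufficient-condition check:

```python
import math

def g(B, alpha):
    return (alpha * math.exp(-(alpha - 1))) ** B

def choose_alpha_beta(a, B, K=3.0):
    """Choices (20)-(21) of Theorem 5.7."""
    r = math.log(a + 1) / B
    if r >= 1:                  # ln(a+1) >= B: choice (20)
        return K * r, 2.0
    v = 1 + K * math.sqrt(r)    # ln(a+1) < B: choice (21)
    return v, v

alpha, beta = choose_alpha_beta(a=10, B=5)
print(alpha, beta, beta * (1 - g(5, alpha)) ** 10 > 1)   # condition holds
```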
The basic approach of our proof of Theorem 5.7 was to follow the main idea of Theorem 3.1, and to decompose the event "E | Z" into a non-negative linear combination of events of the form "X_j = 1 | Z"; we then exploited the fact that each X_j depends on at most a of the events comprising Z. We now extend Theorem 5.7 and also generalize to multi-criteria CIPs. Instead of employing just a "first moment method" (Markov's inequality) as in the proof of Theorem 5.7, we will work with higher moments: the functions S_k defined in (1) and used in Theorem 3.4. Suppose some parameters λ_i > 0 are given, and that our goal is to round x* to z so that the event

A ≡ "(Az ≥ b) ∧ (∀i, c_i^T · z ≤ λ_i)"  (22)

holds. We first give a sufficient condition for this to hold, in Theorem 5.9; we then derive some concrete consequences in Corollary 5.10. We need one further definition before presenting Theorem 5.9. Recall that A_i and b_i respectively denote the ith row of A and the ith component of b. Also, the vector s and the values δ_i will throughout be as in the definition of standard randomized rounding.
Definition 5.8. (The functions ch and ch′) Suppose we conduct general randomized rounding with some parameter p; i.e., let X_1, X_2, . . . , X_n be independent binary random variables such that Pr(X_j = 1) = p_j. For each i ∈ [m], define

ch_i(p) := E[(1 − δ_i)^{A_i·X}] / (1 − δ_i)^{b_i − A_i·s} = (∏_{j∈[n]} E[(1 − δ_i)^{A_{i,j} X_j}]) / (1 − δ_i)^{b_i − A_i·s}, and ch′_i(p) := min{ch_i(p), 1}.
(Note from (15) that if we conduct general randomized rounding with parameter p, then Pr((Az)_i < b_i) ≤ ch′_i(p) ≤ ch_i(p); also, "ch" stands for "Chernoff-Hoeffding".)

Theorem 5.9. Suppose we are given a multi-criteria CIP, as well as some parameters λ_1, λ_2, . . . , λ_ℓ > 0. Let A be as in (22). Then, for any sequence of positive integers (k_1, k_2, . . . , k_ℓ) such that k_i ≤ λ_i, the following hold.

(i) Suppose we employ general randomized rounding with parameter p = (p_1, p_2, . . . , p_n). Then, Pr(A) is at least

Φ(p) := ∏_{r∈[m]} (1 − ch′_r(p)) − Σ_{i=1}^ℓ (1/binom(λ_i, k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · ∏_{r∉R(j_1,...,j_{k_i})} (1 − ch′_r(p)).  (23)

(ii) Suppose we employ the standard randomized rounding to get a rounded vector z. Let λ_i = ν_i(1 + γ_i) for each i ∈ [ℓ], where ν_i = E[c_i^T · z] = α · (c_i^T · x*) and γ_i > 0 is some parameter. Then,

Φ(p) ≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (binom(n, k_i) · (ν_i/n)^{k_i} / binom(ν_i(1+γ_i), k_i)) · (1 − g(B, α))^{−a·k_i}).  (24)

In particular, if the r.h.s. of (24) is positive, then Pr(A) > 0 for standard randomized rounding.
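The estimators ch_i of Definition 5.8, from which Φ is built, can be computed directly from the product form; the sketch below (ours; inputs are illustrative) does so for one constraint, with δ_i fixed as in standard rounding:

```python
def ch_prime(A_i, b_i, s, p):
    """ch'_i(p) = min{ch_i(p), 1} per Definition 5.8. Assumes mu_i > 0 and
    b_i - A_i.s in (0, mu_i], so that 0 < 1 - delta_i <= 1."""
    mu = sum(a * pj for a, pj in zip(A_i, p))          # mu_i = E[A_i . X]
    slack = b_i - sum(a * sj for a, sj in zip(A_i, s))
    if slack <= 0:
        return 0.0      # constraint already met by the floors: no bad event
    base = slack / mu   # base = 1 - delta_i
    num = 1.0
    for a, pj in zip(A_i, p):
        num *= pj * base ** a + (1 - pj)   # E[(1 - delta_i)^(A_ij * X_j)]
    return min(num / base ** slack, 1.0)

print(ch_prime(A_i=[1.0, 0.5, 1.0], b_i=2.0, s=[0, 0, 0], p=[0.9, 0.8, 0.9]))
```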
The proof is a simple generalization of that of Theorem 5.7, and is deferred to Section 5.4. Theorem 5.7 is the special case of Theorem 5.9 corresponding to ℓ = k_1 = 1. To make the general result of Theorem 5.9 more concrete, we now study an additional special case. We present this special case as one possible "proof of concept", rather than as an optimized one; e.g., the constant "3" in the bound "c_i^T · z ≤ 3ν_i" can be improved.
Corollary 5.10. There is an absolute constant K′ > 0 such that the following holds. Suppose we are given a multi-criteria CIP with notation as in part (ii) of Theorem 5.9. Define α = K′ · max{(ln a + ln ln(2ℓ))/B, 1}. Now if ν_i ≥ log_2(2ℓ) for all i ∈ [ℓ], then standard randomized rounding produces a feasible solution z such that c_i^T · z ≤ 3ν_i for all i, with positive probability. In particular, this can be shown by setting k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i, in part (ii) of Theorem 5.9.
Proof. Let us employ Theorem 5.9(ii) with k_i = ⌈ln(2ℓ)⌉ and γ_i = 2 for all i. We just need to establish that the r.h.s. of (24) is positive. We need to show that

Σ_{i=1}^ℓ (binom(n, k_i) · (ν_i/n)^{k_i} / binom(3ν_i, k_i)) · (1 − g(B, α))^{−a·k_i} < 1;

it is sufficient to prove that for all i,

((ν_i^{k_i}/k_i!) / binom(3ν_i, k_i)) · (1 − g(B, α))^{−a·k_i} < 1/ℓ.  (25)
We make two observations now.
• Since k_i ∼ ln ℓ and ν_i ≥ log_2(2ℓ),

binom(3ν_i, k_i) = (1/k_i!) · ∏_{j=0}^{k_i−1} (3ν_i − j) = (1/k_i!) · (3ν_i)^{k_i} · e^{−Θ(Σ_{j=0}^{k_i−1} j/ν_i)} = Θ((1/k_i!) · (3ν_i)^{k_i}).

• (1 − g(B, α))^{−a·k_i} can be made arbitrarily close to 1 by choosing the constant K′ large enough.
These two observations establish (25).
Constructive version
It can be shown that for many problems, randomized rounding produces the solutions shown to exist by Theorems 5.7 and 5.9 only with very low probability: e.g., probability almost exponentially small in the input size. Thus we need to obtain constructive versions of these theorems. Our method will be a deterministic procedure that makes O(n) calls to the function Φ(·), in addition to poly(n, m) work. Now, if k′ denotes the maximum of all the k_i, we see that Φ can be evaluated in poly(n^{k′}, m) time. Thus, our overall procedure runs in poly(n^{k′}, m) time. In particular, we get constructive versions of Theorem 5.7 and Corollary 5.10 that run in time poly(n, m) and poly(n^{log ℓ}, m), respectively.
Our approach is as follows. We start with a vector p that corresponds to standard randomized rounding, for which we know (say, as argued in Corollary 5.10) that Φ(p) > 0. In general, we have a vector of probabilities p = (p_1, p_2, . . . , p_n) such that Φ(p) > 0. If p ∈ {0, 1}^n, we are done. Otherwise, suppose some p_j lies in (0, 1); by renaming the variables, we will assume without loss of generality that j = n. Define p′ = (p_1, p_2, . . . , p_{n−1}, 0) and p′′ = (p_1, p_2, . . . , p_{n−1}, 1). The main fact we wish to show is that Φ(p′) > 0 or Φ(p′′) > 0: we can then set p_n to 0 or 1 appropriately, and continue. (As mentioned in the previous paragraph, we thus make O(n) calls to the function Φ(·) in total.) Note that although some of the p_j will lie in {0, 1}, we can crucially continue to view the X_j as independent random variables with Pr(X_j = 1) = p_j.
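The whole scheme then reads as follows (sketch, ours; Φ is treated as a black-box callable):

```python
def derandomize(p, Phi):
    """Round a fractional p with Phi(p) > 0 to a 0/1 vector, fixing one
    coordinate at a time; the inequality established below (34) guarantees
    that at least one of the two settings keeps Phi positive. Makes O(n)
    calls to Phi."""
    p = list(p)
    assert Phi(p) > 0
    for j in range(len(p)):
        if 0 < p[j] < 1:
            p[j] = 0.0
            if Phi(p) <= 0:     # then the other branch must work
                p[j] = 1.0
    return p
```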
So, our main goal is: assuming that

p_n ∈ (0, 1) and Φ(p) > 0,  (26)

to show that Φ(p′) > 0 or Φ(p′′) > 0. In order to do so, we make some observations and introduce some simplifying notation. Define, for each i ∈ [m]: q_i = ch′_i(p), q′_i = ch′_i(p′), and q′′_i = ch′_i(p′′). Also define the vectors q := (q_1, q_2, . . . , q_m), q′ := (q′_1, q′_2, . . . , q′_m), and q′′ := (q′′_1, q′′_2, . . . , q′′_m). We now present a useful lemma about these vectors:

Lemma 5.11. For all i ∈ [m], we have

0 ≤ q′′_i ≤ q′_i ≤ 1;  (27)
q_i ≥ p_n q′′_i + (1 − p_n) q′_i; and  (28)
q′_i = q′′_i = q_i if i ∉ R(n).  (29)
Proof. The proofs of (27) and (29) are straightforward. As for (28), we proceed as in [36]. First of all, if q_i = 1, then we are done, since q′′_i, q′_i ≤ 1. So suppose q_i < 1; in this case, q_i = ch_i(p). Now, Definition 5.8 shows that ch_i(p) = p_n ch_i(p′′) + (1 − p_n) ch_i(p′). Therefore,

q_i = ch_i(p) = p_n ch_i(p′′) + (1 − p_n) ch_i(p′) ≥ p_n ch′_i(p′′) + (1 − p_n) ch′_i(p′).
Since we are mainly concerned with the vectors p, p′ and p′′ now, we will view the values p_1, p_2, . . . , p_{n−1} as arbitrary but fixed, subject to (26). The function Φ(·) now has a simple form; to see this, we first define, for a vector r = (r_1, r_2, . . . , r_m) and a set U ⊆ [m],

f(U, r) := ∏_{i∈U} (1 − r_i).
Recall that p_1, p_2, . . . , p_{n−1} are considered as constants now. Then, it is evident from (23) that there exist constants u_1, u_2, . . . , u_t and v_1, v_2, . . . , v_{t′}, as well as subsets U_1, U_2, . . . , U_t and V_1, V_2, . . . , V_{t′} of [m], such that

Φ(p) = f([m], q) − (Σ_i u_i · f(U_i, q)) − (p_n · Σ_j v_j · f(V_j, q));  (30)
Φ(p′) = f([m], q′) − (Σ_i u_i · f(U_i, q′)) − (0 · Σ_j v_j · f(V_j, q′)) = f([m], q′) − Σ_i u_i · f(U_i, q′);  (31)
Φ(p′′) = f([m], q′′) − (Σ_i u_i · f(U_i, q′′)) − (1 · Σ_j v_j · f(V_j, q′′)) = f([m], q′′) − (Σ_i u_i · f(U_i, q′′)) − (Σ_j v_j · f(V_j, q′′)).  (32)
Importantly, we also have the following:

the constants u_i, v_j are non-negative; ∀j, V_j ∩ R(n) = ∅.  (33)
Recall that our goal is to show that Φ(p′) > 0 or Φ(p′′) > 0. We will do so by proving that

Φ(p) ≤ p_n Φ(p′′) + (1 − p_n) Φ(p′).  (34)
Let us use the equalities (30), (31), and (32). In view of (29) and (33), the term "−p_n · Σ_j v_j · f(V_j, q)" on both sides of the inequality (34) cancels; defining

Δ(U) := (1 − p_n) · f(U, q′) + p_n · f(U, q′′) − f(U, q),

inequality (34) reduces to

Δ([m]) − Σ_i u_i · Δ(U_i) ≥ 0.  (35)
Before proving this, we pause to note a challenge we face. Suppose we only had to show that, say, Δ([m]) is non-negative; this is exactly the issue faced in [36]. Then we would immediately be done by part (i) of Lemma 5.12, which states that Δ(U) ≥ 0 for any set U. However, (35) also has terms such as "u_i · Δ(U_i)" with a negative sign in front. To deal with this, we need something more than just that Δ(U) ≥ 0 for all U; we handle this by part (ii) of Lemma 5.12. We view this as the main novelty in our constructive version here.

Lemma 5.12. (i) For all U ⊆ [m], Δ(U) ≥ 0. (ii) If U ⊆ U′ ⊆ [m], then Δ(U)/f(U, q) ≤ Δ(U′)/f(U′, q).

Given Lemma 5.12, we can establish (35): by part (ii), Δ(U_i) ≤ f(U_i, q) · Δ([m])/f([m], q) for each i, so

Δ([m]) − Σ_i u_i · Δ(U_i) ≥ (Δ([m])/f([m], q)) · (f([m], q) − Σ_i u_i · f(U_i, q)) ≥ 0 (by (26) and (30)).
Thus we have (35).
Proof of Lemma 5.12. It suffices to show the following. Assume U ≠ [m]; suppose u ∈ ([m] − U) and that U′ = U ∪ {u}. Assuming by induction on |U| that Δ(U) ≥ 0, we show that Δ(U′) ≥ 0, and that Δ(U)/f(U, q) ≤ Δ(U′)/f(U′, q). It is easy to check that this way, we will prove both claims of the lemma.
The base case of the induction is |U| ∈ {0, 1}, where Δ(U) ≥ 0 is directly seen by using (28). Suppose inductively that Δ(U) ≥ 0. Using the definition of Δ(U) and the fact that f(U′, q) = (1 − q_u) f(U, q), we have

f(U′, q) = (1 − q_u) · [(1 − p_n) f(U, q′) + p_n f(U, q′′) − Δ(U)]
≤ (1 − (1 − p_n) q′_u − p_n q′′_u) · [(1 − p_n) f(U, q′) + p_n f(U, q′′)] − (1 − q_u) · Δ(U),

where this last inequality is a consequence of (28). Therefore, using the definition of Δ(U′) and the facts f(U′, q′) = (1 − q′_u) f(U, q′) and f(U′, q′′) = (1 − q′′_u) f(U, q′′),

Δ(U′) = (1 − p_n)(1 − q′_u) f(U, q′) + p_n (1 − q′′_u) f(U, q′′) − f(U′, q)
≥ (1 − p_n)(1 − q′_u) f(U, q′) + p_n (1 − q′′_u) f(U, q′′) + (1 − q_u) · Δ(U) − (1 − (1 − p_n) q′_u − p_n q′′_u) · [(1 − p_n) f(U, q′) + p_n f(U, q′′)]
= (1 − q_u) · Δ(U) + p_n (1 − p_n) · (f(U, q′′) − f(U, q′)) · (q′_u − q′′_u)
≥ (1 − q_u) · Δ(U) (by (27)).
So, since we assumed that Δ(U) ≥ 0, we get Δ(U′) ≥ 0; furthermore, we get that Δ(U′) ≥ (1 − q_u) · Δ(U), which implies that Δ(U′)/f(U′, q) ≥ Δ(U)/f(U, q).

Proof of Theorem 5.9

(i) Let E_r ≡ ((Az)_r < b_r) be defined w.r.t. general randomized rounding with parameter p; as observed in Definition 5.8, Pr(E_r) ≤ ch′_r(p). Now if ch′_r(p) = 1 for some r, then part (i) is trivially true; so we assume that Pr(E_r) ≤ ch′_r(p) < 1 for all r. Let Z ≡ ∧_{r∈[m]} ¬E_r; by Lemma 5.3, Pr(Z) ≥ ∏_{r∈[m]} (1 − Pr(E_r)).
Define, for i = 1, 2, . . . , ℓ, the "bad" event E_i ≡ (c_i^T · z > λ_i). Fix any i. Our plan is to show that

Pr(E_i | Z) ≤ (1/binom(λ_i, k_i)) · Σ_{j_1<j_2<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · (∏_{r∈R(j_1,j_2,...,j_{k_i})} (1 − Pr(E_r)))^{−1}.  (36)
If we prove (36), then we will be done as follows. We have

Pr(A) ≥ Pr(Z) · (1 − Σ_i Pr(E_i | Z)) ≥ (∏_{r∈[m]} (1 − Pr(E_r))) · (1 − Σ_i Pr(E_i | Z)).  (37)
Now, the term "( r∈ [m] (1 − Pr(E r )))" is a decreasing function of each of the values Pr(E r ); so is the lower bound on "− Pr(E i | Z)" obtained from (36). Hence, bounds (36) and (37), along with the bound Pr(E r ) ≤ ch ′ r (p), will complete the proof of part (i). We now prove (36) using Theorem 3.4(a) and Lemma 5.6. Recall the symmetric polynomials S k from (1). Define Y = S k i (c i,1 X 1 , c i,2 X 2 , . . . , c i,n X n )/ λ i k i . By Theorem 3.4(a), Pr(E i | Z) ≤ E[Y | Z]. Next, the typical term in E[Y | Z] can be upper bounded using Lemma 5.6:
E[(∏_{t=1}^{k_i} c_{i,j_t} · X_{j_t}) | ∧_{r=1}^m ¬E_r] ≤ (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) / (∏_{r∈R(j_1,j_2,...,j_{k_i})} (1 − Pr(E_r))).
Thus we have (36), and the proof of part (i) is complete.

(ii) Under standard randomized rounding, Lemma 5.1 shows that ch′_r(p) ≤ g(B, α) < 1 for all r; hence part (i) expresses Pr(A) ≥ Φ(p) = κ, where

κ := ∏_{r∈[m]} (1 − ch′_r(p)) · (1 − Σ_{i=1}^ℓ (1/binom(λ_i, k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · (∏_{r∈R(j_1,...,j_{k_i})} (1 − ch′_r(p)))^{−1}).  (38)

So, the r.h.s. κ of (38) gets lower-bounded as follows:

κ ≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (1/binom(ν_i(1+γ_i), k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · (∏_{r∈R(j_1,...,j_{k_i})} (1 − g(B, α)))^{−1})
≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (1/binom(ν_i(1+γ_i), k_i)) · Σ_{j_1<···<j_{k_i}} (∏_{t=1}^{k_i} c_{i,j_t} · p_{j_t}) · (1 − g(B, α))^{−a·k_i})
≥ (1 − g(B, α))^m · (1 − Σ_{i=1}^ℓ (binom(n, k_i) · (ν_i/n)^{k_i} / binom(ν_i(1+γ_i), k_i)) · (1 − g(B, α))^{−a·k_i}),

where the last line follows from Theorem 3.4(c). □
Conclusion
We have presented an extension of the LLL that can substantially reduce the "dependency" in some settings; we have seen applications to two families of integer programming problems. It would be interesting to see how far these ideas can be pushed further. Two other open problems suggested by this work are: (i) developing a constructive version of our result for MIPs, and (ii) developing a poly(n, m)-time constructive version of Theorem 5.9, as opposed to the poly(n^{k′}, m)-time constructive version that we present in § 5.3. Finally, a very interesting question is to develop a theory of applications of the LLL that can be made constructive with (essentially) no loss.
cs0306119 | 2952136536 | We present a method for solving service allocation problems in which a set of services must be allocated to a set of agents so as to maximize a global utility. The method is completely distributed so it can scale to any number of services without degradation. We first formalize the service allocation problem and then present a simple hill-climbing, a global hill-climbing, and a bidding-protocol algorithm for solving it. We analyze the expected performance of these algorithms as a function of various problem parameters such as the branching factor and the number of agents. Finally, we use the sensor allocation problem, an instance of a service allocation problem, to show the bidding protocol at work. The simulations also show that a phase transition on the expected quality of the solution exists as the amount of communication between agents increases. | There is ongoing work in the field of complexity that attempts to study the dynamics of complex adaptive systems @cite_7 . Our approach is based on ideas borrowed from the use of NK landscapes for the analysis of co-evolving systems. As such, we are using some of the results from that field. However, complexity theory is more concerned with explaining the dynamic behavior of existing systems, while we are more concerned with the engineering of multiagent systems for distributed service allocation. | {
"abstract": [
"1. Conceptual outline of current evolutionary theory PART I: ADAPTATION ON THE EDGE OF CHAOS 2. The structure of rugged fitness landscapes 3. Biological implications of rugged fitness landscapes 4. The structure of adaptive landscapes underlying protein evolution 5. Self organization and adaptation in complex systems 6. Coevolving complex systems PART II: THE CRYSTALLIZATION OF LIFE 7. The origins of life: a new view 8. The origin of a connected metabolism 9. Autocatalytic polynucleotide systems: hypercycles, spin glasses and coding 10. Random grammars PART III: ORDER AND ONTOGENY 11. The architecture of genetic regulatory circuits and its evolution 12. Differentiation: the dynamical behaviors of genetic regulatory networks 13. Selection for gene expression in cell type 14. Morphology, maps and the spatial ordering of integrated tissues"
],
"cite_N": [
"@cite_7"
],
"mid": [
"2108020239"
]
} | A Method for Solving Distributed Service Allocation Problems * | The problem of dynamically allocating services to a changing set of consumers arises in many applications. For example, in an e-commerce system, the service providers are always trying to determine which service to provide to whom, and at what price [5]; in an automated manufacturing for mass customization scenario, agents must decide which services will be more popular/profitable [1]; and in a dynamic sensor allocation problem, a set of sensors in a field must decide which area to cover, if any, while preserving their resources.
While these problems might not seem related, they are instances of a more general service allocation problem in which a finite set of resources must be allocated by a set of autonomous agents so as to maximize some global measure of utility. A general approach to solving these types of problems has been used in many successful systems , such as [2] [3] [11] [9]. The approach involves three general steps:
1. Assign each resource that needs to be preserved to an agent responsible for managing the resource.
2. Assign each goal of the problem domain to an agent responsible for achieving it. Achieving these goals requires the consumption of resources.
3. Have each agent take actions so as to maximize its own utility, but implement a coordination algorithm that encourages agents to take actions that also maximize the global utility.
In this paper we formalize this general approach by casting the problem as a search in a global fitness landscape which is defined as the sum of the agents' utilities. We show how the choice of a coordination/communication protocol disseminates information, which in turn "smoothes" the global utility landscape. This smooth global utility landscape allows the agents to easily find the global optimum by simply making selfish decisions to maximize their individual utility.
We also present experiments that pinpoint the location of a phase transition in the time it takes for the agents to find the optimal allocation. The transition can be seen when the amount of communication allowed among agents is manipulated. It exists because communication allows the agents to align their individual landscapes with the global landscape. At some amount of communication, the alignment between these landscapes is good enough to allow the agents to find the global optimum, but less communication drives the agents into a random behavior from which the system cannot recuperate.
Task Allocation
The service allocation problem we discuss in this paper is a superset of the well-known task allocation problem [10, chapter 5.7]. A task allocation problem is defined by a set of tasks that must be allocated among a set of agents. Each agent has a cost associated with each subset of tasks, which represents the cost the agent would incur if it had to perform those tasks. Coordination protocols are designed to allow agents to trade tasks so that the globally optimal allocation (the one that minimizes the sum of all the individual agent costs) is reached as soon as possible. It has been shown that this globally optimal allocation can be reached if the agents use the contract-net protocol [9] with OCSM contracts [8]. These OCSM contracts make it possible for the system to transition from any allocation to any other allocation in one step. As such, a simple hill-climbing search is guaranteed to eventually reach the global optimum.
In this paper we consider the service allocation problem, which is a superset of the task allocation problem because it allows for more than one agent to service a "task". The service allocation problem we study also has the characteristic that not every allocation can be reached from every other allocation in one step.
Service Allocation
In a service allocation problem there is a set of services, offered by service agents, and a set of consumers who use those services. A server can provide any one of a number of services, and some consumers will benefit from that service without depleting it. A server agent incurs a cost when providing a service and can choose not to provide any service.
For example, a server could be an agent that sets up a website with information about cats. All the consumer agents with interests in cats will benefit from this service, but those with other interests will not benefit. Since each server can provide, at most, one service, the problem is to find the allocation of services that maximizes the sum of all the agents' utilities, that is, an allocation that maximizes the global utility.
Sensor Allocation
Another instance of the service allocation problem is the sensor allocation problem, which we will use as an example throughout this paper. In the sensor allocation problem we have a number of sensors placed in fixed positions in a two-dimensional space. Each sensor has a limited viewing angle and distance but can point in any one of a number of directions. For example, a sensor might have a viewing angle of 120 degrees, a viewing distance of 3 feet, and be able to look in three directions, each one 120 degrees apart from the others. That is, it can "look" in any one of three directions. In each direction it can see everything that is in the 120-degree, 3-foot view cone. Each time a sensor looks in a particular direction it uses energy.
There are also targets that move around in the field. The goal is for the sensors to detect and track all the targets in the field. However, in order to determine the location of a target, two or more sensors have to look at it at the same time. We also wish to minimize the amount of energy spent by the sensors.
We consider the sensor agents as being able to provide three services, one for each sector, but only one at a time. We consider the target agents as consuming the services of the sensors.
A Formal Model for Service Allocation
We define a service allocation problem SA as a tuple SA = {C, S}, where C is the set of consumer agents C = {c_1, c_2, . . . , c_{|C|}}, and c_i has only one possible state, c_i = 0. The set of service agents is S = {s_1, s_2, . . . , s_{|S|}}, and the value of s_i is the value of that service. For the sensor domain, in which a sensor can observe any one of three 120-degree sectors or be turned off, we have s_i ∈ {0, 1, 2, off}. An allocation is an assignment of states to the services (since the consumers have only one possible state, we can ignore them). A particular allocation is denoted by a = {s_1, s_2, . . . , s_{|S|}}, where the s_i have some value taken from the domain of service states, and a ∈ A, where A is the set of all possible allocations. That is, an allocation tells us the state of all agents (since consumers have only one state, they can be omitted).
Each agent also has a utility function. The utility that an agent receives depends on the current allocation a, where we let a(s) be the state of service agent s under a. The agents' utilities will depend on their own state and the states of other agents. For example, in the sensor problem we define the utility of sensor s as U_s(a), where

U_s(a) = { 0 if a(s) = off; −K_1 otherwise }.  (1)
That is, a sensor receives no utility when it is off and must pay a penalty of −K 1 when it is running. The targets are the consumers, and each target's utility is defined as
U_c(a) = { 0 if f_c(a) = 0; K_2 if f_c(a) = 1; K_2 + n − 2 if f_c(a) = n, n ≥ 2 },  (2)

where

f_c(a) = the number of sensors s that see c given their states a(s).  (3)
Finally, given the individual agent utilities, we define the global utility GU(a) as the sum of the individual agents' utilities:

GU(a) = Σ_{c∈C} U_c(a) + Σ_{s∈S} U_s(a).  (4)
The service allocation problem is to find the allocation a that maximizes GU(a). In the sensor problem, there are 4^{|S|} possible allocations, which would make a simple generate-and-test approach take an exponential amount of time. We wish to find the global optimum much faster than that.
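To make equations (1)-(4) concrete, here is a sketch (ours; the `coverage` table is a stand-in for the geometric view-cone test, and the constants K_1, K_2 are illustrative):

```python
K1, K2 = 1.0, 5.0   # illustrative penalty/reward constants

def global_utility(allocation, coverage, targets):
    """GU(a) per equations (1)-(4). allocation maps sensor -> state;
    coverage[s][state] is the set of targets sensor s sees in that state."""
    gu = -K1 * sum(1 for st in allocation.values() if st != "off")   # eq. (1)
    for c in targets:                                                # eqs. (2)-(3)
        n = sum(1 for s, st in allocation.items()
                if st != "off" and c in coverage[s][st])
        if n == 1:
            gu += K2
        elif n >= 2:
            gu += K2 + n - 2
    return gu

coverage = {"s1": {0: {"t1"}, 1: set(), 2: set()},
            "s2": {0: {"t1"}, 1: {"t1"}, 2: set()}}
print(global_utility({"s1": 0, "s2": 1}, coverage, ["t1"]))   # -2*K1 + K2 = 3.0
```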
Search Algorithms
Our goal is to design an interaction protocol whereby an allocation a that maximizes the global utility GU (a) is reached in a small number of steps. In each step of our protocol one of the agents will change its state or send a message to another agent. The messages might contain the state or utilities of other agents. We assume that the agents do not have direct access to the other agents' states or utility values.
The simplest algorithm we can envision involves having each consumer, at each time, changing the state of a randomly chosen service agent so as to increase the consumer's own utility. That is, a consumer c will change the current allocation a into a ′ by changing the state of some sensor s such that U c (a ′ ) > U c (a). If the sensor's state cannot be changed so as to increase the utility, then the consumer does nothing. In the sensor domain this amounts to a target picking a sensor and changing its state so that the sensor can see the target. We refer to this algorithm as individual hill-climbing.
The individual hill-climbing algorithm is simple to implement, and the only communication needed is between the consumer and the chosen server. This simple algorithm makes every consumer agent increase its individual utility at each turn. However, the new allocation a′ might result in a lower global utility, since a′ might reduce the utility of several other agents. Therefore, it does not guarantee that an optimal allocation will eventually be reached.
Another approach is for each agent to change state so as to increase the global utility. We call this a global hill-climbing algorithm. In order to implement this algorithm, an agent would need to know how the proposed state change affects the global utility as well as the states of all the other agents. That is, it would need to be able to determine GU (a ′ ) which requires it to know the state of all the agents in a ′ as well as the utility functions of every other agent, as per the definition of global utility (4). In order for an agent to know the state of others, it would need to somehow communicate with all other agents. If the system implements a global broadcasting method then we would need for each agent to broadcast its state at each time. If the system uses more specialized communications such as point-to-point, limited broadcasting, etc., then more messages will be needed.
Any protocol that implements the global hill-climbing algorithm will reach a locally optimal allocation in the global utility. This is because it is always true that, for a new allocation a ′ and old allocation a, GU (a ′ ) ≥ GU (a). Whether or not this local optimum is also a global optimum will depend on the ruggedness of the global utility landscape. That is, if it consists of one smooth peak then it is likely that any local optimum is the global optimum. On the other hand, if the landscape is very rugged then there are likely many local peaks. Studies in NK landscapes [4] tell us that smoother landscapes result when an agent's utility depends on the state of smaller number of other agents.
Global hill-climbing is better than individual hill-climbing since it guarantees that we will find a local optimum. However, it requires agents to know each others' utility functions and to constantly communicate their state. Such a large amount of communication is often undesirable in multiagent systems. We need a better way to find the global optimum.
One way of correlating the individual landscapes to the global utility landscape is with the use of a bidding protocol in which each consumer agent tells each service the marginal utility the consumer would receive if the service switched its state so as to maximize the consumer's utility. The service agent can then choose to provide the service with the highest aggregate demand. Since the service is picking the value that maximizes the utility of everyone involved (all the consumers and the service) without decreasing the utility of anyone else (the other services), this protocol is guaranteed to never decrease the global utility. This bidding protocol is a simplified version of the contract-net [9] protocol in that it does not require contractors to send requests for bids.
However, in order for a consumer to determine the marginal utility it will receive from one sensor changing state, it still needs to know the state of all the other sensors. This means that a complete implementation of this protocol will still require a lot of communication (namely, the same amount as in global hill-climbing). We can reduce this number of messages by allowing agents to communicate with only a subset of the other agents and making their decisions based on only this subset of information. That is, instead of all services telling each consumer their state, a consumer could receive state information from only a subset of the services and make its decision based on this (assuming that the services chosen are representative of the whole). This strategy shows a lot of promise but its performance can only be evaluated on an instance-by-instance basis. We explore this strategy experimentally in Section 3 using the sensor domain.
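One round of the bidding protocol might look as follows (sketch, ours; `marginal_utility` and `service_cost` are assumed callables supplied by the model):

```python
from collections import defaultdict

def bidding_round(services, consumers, marginal_utility, service_cost):
    """Each consumer bids, for every service s and candidate state st, the
    marginal utility it would gain from s adopting st; each service then
    adopts the state with the highest aggregate bid net of its own cost."""
    new_states = {}
    for s, states in services.items():
        bids = defaultdict(float)
        for st in states:
            bids[st] -= service_cost(s, st)
            for c in consumers:
                bids[st] += marginal_utility(c, s, st)
        new_states[s] = max(bids, key=bids.get)
    return new_states

demo = bidding_round({"s1": [0, 1, 2, "off"]}, ["t1"],
                     lambda c, s, st: 5.0 if st == 1 else 0.0,
                     lambda s, st: 0.0 if st == "off" else 1.0)
print(demo)   # {'s1': 1}
```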
Theoretical Time Bounds of Global Hill-Climbing
Since we now know that global hill-climbing will always reach a local optimum, the next questions we must answer are:
1. How many local optima are there?
2. What is the probability that a local optimum is the global optimum?
3. How long does it take, on average, to reach a local optimum?
Let a be the current allocation and a′ be a neighboring allocation. We know that a is a local optimum if
∀ a′ ∈ N(a) : GU(a) > GU(a′)    (5)
where N(a) = {x | x is a Neighbor of a}.
We define a Neighbor allocation as an allocation where one, and only one, agent has a different state. The probability that some allocation a is a local optimum is simply the probability that (5) is true. If the utilities of all pairs of neighbors are uncorrelated, then this probability is
Pr[∀ a′ ∈ N(a) : GU(a) > GU(a′)] = Pr[GU(a) > GU(a′)]^b ,    (7)
where b is the branching factor. In the sensor problem b = 3 · |S|, where S is the set of all sensors: since each sensor can be in any of four states, each sensor contributes three neighboring allocations (one for each alternative state). In some systems it is safe to assume that the global utilities of a's neighbors are independent; however, most systems show some degree of correlation. Now we need to calculate Pr[GU(a) > GU(a′)], that is, the probability that some allocation a has a greater global utility than its neighbor a′, for all a and a′. This could be calculated via an exhaustive enumeration of all possible allocations. However, often we can find the expected value of this probability.
For example, in the sensor problem each sensor has four possible states. If a sensor changes its state from sector x to sector y, the utility of the target agents covered by x will decrease while the utility of those in y will increase. If we assume that, on average, the targets are evenly spaced on the field, then the two resulting global utilities are expected to be the same. That is, the expected probability that the global utility of one allocation is bigger than the other's is 1/2.
If, on the other hand, a sensor changes state from "off" to a sector, or from a sector to "off," the global utility is expected to decrease and increase, respectively. However, there are an equal number of opportunities to go from "off" to "on" and vice-versa. Therefore, we can also expect that for these cases the probability that the global utility of one allocation is bigger than the other is 1/2.
Based on these approximations, we can declare that for the sensor problem
Pr[∀ a′ ∈ N(a) : GU(a) > GU(a′)] = (1/2)^b = λ .    (8)
If we assume an even distribution of local optima, the total number of local optima is simply the total number of allocations times the probability that each one is a local optimum. That is,
Total number of local optima = λ|A| .
For the sensor problem, λ = 1/2^b, b = 3 · |S|, and |A| = 4^|S|, so the expected number of local optima is 4^|S|/2^{3|S|}.
Pr[a local optimum is also global] = 1/(λ|A|) = (1/2)^b .    (10)
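As a quick numeric illustration, the short program below (our own sketch, assuming b = 3 · |S| and |A| = 4^|S| as in the sensor model) tabulates λ and the expected number of local optima λ|A| for small sensor counts:

// Tabulates lambda = (1/2)^b (eq. 8) and the local-optima estimate lambda*|A|
// for the sensor problem, assuming b = 3|S| and |A| = 4^|S|.
final class OptimaEstimate {
    public static void main(String[] args) {
        for (int S = 1; S <= 7; S++) {
            int b = 3 * S;                           // branching factor
            double lambda = Math.pow(0.5, b);        // eq. (8)
            double allocations = Math.pow(4, S);     // |A| = 4^|S|
            System.out.printf("|S|=%d  b=%2d  lambda=%.3e  lambda*|A|=%.3e%n",
                              S, b, lambda, lambda * allocations);
        }
    }
}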
We can find the expected time the algorithm will take to reach a local optimum by determining the maximum number of steps from any allocation to the nearest local optimum. This gives us an upper bound on the number of steps needed to reach the nearest local optimum using global hill-climbing. Notice that, under either individual hill-climbing or the bidding protocol, it is possible that the local optimum is not reached, or is reached after more steps, since these algorithms can take steps that lower the global utility.
In order to find the expected number of steps to reach a local optimum, we start at any one of the local optima and then traverse all possible links at each depth d until all possible allocations have been visited. This occurs when
λ · |A| · b^d > |A| .    (11)
Solving for d, and remembering that λ = 1/2^b, we get
d > b log_b 2 .    (12)
The expected worst-case distance from any point to the nearest local optimum is, therefore, b log_b 2 (this number only makes sense for b ≥ 2, since a smaller number of neighbors does not form a searchable space). That is, the number of steps needed to reach the nearest local optimum in the sensor domain is proportional to the branching factor b, which is equal to 3 · |S|. We can expect search time to increase linearly with the number of sensors in the field.
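The bound in (12) is easy to tabulate. The sketch below (ours, using b = 3 · |S| as in the sensor domain) prints the expected worst-case depth for increasing numbers of sensors:

// Evaluates the depth bound d > b * log_b(2) of eq. (12), with b = 3|S|.
final class DepthBound {
    public static void main(String[] args) {
        for (int S = 1; S <= 10; S++) {
            int b = 3 * S;
            double d = b * Math.log(2) / Math.log(b);   // log_b(2) = ln 2 / ln b
            System.out.printf("|S|=%2d  b=%2d  d > %.2f%n", S, b, d);
        }
    }
}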
Simulations
While the theoretical results above give us some bounds on the number of iterations before the system is expected to converge to a local optimum, the bounds are rather loose and do not tell us much about the dynamics of the executing system. Also, we cannot show mathematically how changes in the amount of communication change the search. Therefore, we have implemented a service allocation simulator to answer these questions. It simulates the sensor allocation domain described in the introduction.
The simulator is written in Java and the source code is available upon request. It gathers and analyzes data from any desired number of runs. The program can analyze the behavior of any number of target and sensor agents on a two-dimensional space, and the agents can be given any desired utility function. The program is limited to static targets. That is, it only considers the one-shot service allocation problem. Each new allocation is completely independent of any previous one.
In the tests we performed, each run has seven sensors and seven targets, all of which are randomly placed on a two-dimensional grid. Each sensor can only point in one of three directions or sectors. These three sectors are the same for all sensors (specifically, the first sector is from 0 to 120 degrees, the second one from 120 to 240, and the third one from 240 to 360). All the sensors use the same utility function, which is given by (1), while the targets use (2). After a sensor agent receives all the bids it chooses the sector that has the highest aggregate demand, as described by the bidding protocol in Section 2.1.
During a run, each of the targets periodically sends a bid to a number of sensors asking them to turn to the sector that faces the target. We set the bid amount to a fixed number for these tests. Periodically, the sensors count the number of bids they have received for each sector and turn their detector (such as a radar) to face the sector with the highest aggregate demand. We assume that neither the targets nor the sensors can form coalitions.
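The target side of such a round can be sketched as follows. This is our illustration, not the simulator's source; the record names, the fixed bid amount, and the distance-based neighbor selection are assumptions that match the description above.

import java.util.*;

// Sketch of the target side: each target bids a fixed amount to its k closest
// sensors, asking each to turn to the sector that faces the target.
final class TargetBids {
    record P(double x, double y) {}
    record Bid(int sensor, int wantedSector, double amount) {}

    static List<Bid> bidsFor(P target, P[] sensors, int k, double amount) {
        Integer[] idx = new Integer[sensors.length];
        for (int i = 0; i < sensors.length; i++) idx[i] = i;
        // sort sensor indices by distance to the target, keep the k closest
        Arrays.sort(idx, Comparator.comparingDouble(
            i -> Math.hypot(sensors[i].x() - target.x(), sensors[i].y() - target.y())));
        List<Bid> bids = new ArrayList<>();
        for (int j = 0; j < k && j < sensors.length; j++) {
            P s = sensors[idx[j]];
            double ang = Math.toDegrees(Math.atan2(target.y() - s.y(), target.x() - s.x()));
            if (ang < 0) ang += 360;
            bids.add(new Bid(idx[j], (int) (ang / 120), amount)); // sector facing the target
        }
        return bids;
    }

    public static void main(String[] args) {
        P[] sensors = { new P(0, 0), new P(5, 0), new P(0, 5) };
        System.out.println(bidsFor(new P(1, 1), sensors, 2, 1.0));
    }
}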
We vary the number of sensors to which the targets send their bids in order to explore the quality of the solution that the system converges upon as the amount of communication changes. For example, at one extreme, if all the targets send their bids to all the sensors, then the sensors would always set their sector to be the one with the most targets. This particular service allocation should, usually, be the best. However, it might not always be the optimal solution. For example, if all but one of the targets are clustered together and the remaining target is on another part of the field, it would be better if most of the sensor agents pointed towards the cluster of targets while the remaining sensor agents pointed towards the stray target, rather than having all sensor agents point towards the cluster of targets. At the other extreme, if all the targets send their bids to only one sensor then they will minimize communications, but then the sensors will point to the sector from which they received a message-an allocation which is likely to be suboptimal.
These simulations explore the ruggedness of the system's global utility landscape and the dynamics of the agents' exploration of this landscape. If the agents were to always converge on a local (non-global) optimum then we would deduce that this problem domain has a very rugged utility landscape. On the other hand, if they usually manage to reach the global optimum then we could deduce a smooth utility landscape.
[Figure: Results with 4 Neighbors]
Test Results
In each of our tests we set the number of agents that each target will send its bid to, that is, the number of neighbors, to a fixed number. Given this fixed number of neighbors, we then generated 100 random placements of agents on the field and ran our bidding algorithm 10 times on each of those placements. Finally, we plotted the average solution quality, over the 10 runs, as a function of time for each of the 100 different placements. The solution quality is given by the ratio
α = Current Utility / Globally Optimal Utility ,    (13)
so if α = 1, the run has reached the global optimum. Since the number of agents is small, we were able to calculate the global optimum using a brute-force method. Specifically, there are 3^7 = 2187 possible configurations which, multiplied by the 100 random placements, gives 218,700 combinations that we had to check for each run in order to find the global optimum using brute force. Using more than 7 sensors made the test take too long. Notice, however, that our algorithm is much faster than this brute-force search, which we perform only to confirm that our search does find the global optimum. In our tests there were always seven target agents and seven sensor agents. We varied the number of neighbors from 1 to 7. If each target can only communicate with one sensor, the sensors will likely have very little information for making their decision, while if all targets communicate with all seven sensors, then each sensor will generally be able to point to the sector with the most targets. However, because these decisions are made in an asynchronous manner, it is possible that some sensor will sometimes not receive all the bids before it has to make a decision. The targets always send their bids to the sensors that are closest to them.
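A brute-force check at this scale is straightforward. The sketch below is ours, with a toy utility standing in for the real GU of eq. (4); it enumerates all 3^7 sector assignments and computes the quality ratio α of eq. (13):

import java.util.function.ToDoubleFunction;

// Brute-force baseline: enumerate all numSectors^numSensors assignments,
// record the best GU, and report alpha = current / optimal (eq. 13).
final class BruteForce {
    static double bestGU(int numSensors, int numSectors, ToDoubleFunction<int[]> gu) {
        int[] a = new int[numSensors];
        double best = Double.NEGATIVE_INFINITY;
        int total = (int) Math.pow(numSectors, numSensors);    // 3^7 = 2187 here
        for (int code = 0; code < total; code++) {
            int c = code;                                      // decode base-numSectors
            for (int i = 0; i < numSensors; i++) { a[i] = c % numSectors; c /= numSectors; }
            best = Math.max(best, gu.applyAsDouble(a));
        }
        return best;
    }

    static double alpha(double current, double optimal) { return current / optimal; }  // eq. (13)

    public static void main(String[] args) {
        // Toy stand-in for eq. (4): utility is just the sum of the sector indices.
        ToDoubleFunction<int[]> gu = s -> { double u = 0; for (int x : s) u += x; return u; };
        double opt = bestGU(7, 3, gu);                         // = 14.0
        System.out.println("optimum=" + opt + "  alpha(10)=" + alpha(10, opt));
    }
}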
The results from our experiments are shown in Figure 1, where we can see that there is a transition in the system's performance as the number of neighbors goes from three to five. That is, if the targets only send their bids to three sensors then it is almost certain that the system will stay in a configuration that has a very low global utility. However, if the targets send their bids to five sensors, then it is almost guaranteed (98% of the time) that the system will reach the globally optimal allocation. This is a huge difference in performance. We also notice in Figure 2 that for four neighbors there is a fairly even distribution in the utility of the final allocation.
Conclusions
We have formalized the service allocation problem and examined a general approach to solving problems of this type. The approach involves the use of utility-maximizing agents that represent the resources and the services. A simple form of bidding is used for communication. An analysis of this approach reveals that it implements a form of distributed hill-climbing, where each agent climbs its own utility landscape and not the global utility landscape. However, we showed that increasing the amount of communication among the agents forces each individual agent's landscape to become increasingly correlated to the global landscape.
These theoretical results were then verified in our implementation of a sensor allocation problem-an instance of a service allocation problem. Furthermore, the simulations allowed us to determine the location of a phase transition in the amount of communication needed for the system to consistently arrive at the globally optimal service allocation.
More generally, we have shown how a service allocation problem can be viewed as a distributed search by multiple agents over multiple landscapes. We also showed how the correlation between the global utility landscape and the individual agents' utility landscapes depends on the amount and type of inter-agent communication. Specifically, we have shown that increased communication leads to a higher correlation between the global and individual utility landscapes, which increases the probability that the global optimum will be reached. Of course, the success of the search still depends on the connectivity of the search space, which will vary from domain to domain. We expect that our general approach can be applied to the design of any multiagent system whose desired behavior is given by a global utility function but whose agents must act selfishly.
Our future work includes the study of how the system will behave under perturbations. For example, as the target moves it perturbs the current allocation and the global optimum might change. We also hope to characterize the local to global utility function correlation for different service allocation problems and the expected time to find the global optimum under various amounts of communication. | 4,553 |
cs0306119 | 2952136536 | We present a method for solving service allocation problems in which a set of services must be allocated to a set of agents so as to maximize a global utility. The method is completely distributed so it can scale to any number of services without degradation. We first formalize the service allocation problem and then present a simple hill-climbing, a global hill-climbing, and a bidding-protocol algorithm for solving it. We analyze the expected performance of these algorithms as a function of various problem parameters such as the branching factor and the number of agents. Finally, we use the sensor allocation problem, an instance of a service allocation problem, to show the bidding protocol at work. The simulations also show that a phase transition in the expected quality of the solution exists as the amount of communication between agents increases. | The Collective Intelligence (COIN) framework @cite_4 shares many of the same goals as our research. They start with a global utility function from which they derive the reward functions for each agent. The agents are assumed to use some form of reinforcement learning. They show that the global utility is maximized when using their prescribed reward functions. They do not, however, consider how agent communication might affect the individual agent's utility landscape. | {
"abstract": [
"This paper surveys the emerging science of how to design a “COllective INtelligence” (COIN). A COIN is a large multi-agent system where: i) There is little to no centralized communication or control. ii) There is a provided world utility function that rates the possible histories of the full system. In particular, we are interested in COINs in which each agent runs a reinforcement learning (RL) algorithm. The conventional approach to designing large distributed systems to optimize a world utility does not use agents running RL algorithms. Rather, that approach begins with explicit modeling of the dynamics of the overall system, followed by detailed hand-tuning of the interactions between the components to ensure that they “cooperate” as far as the world utility is concerned. This approach is labor-intensive, often results in highly nonrobust systems, and usually results in design techniques that have limited applicability. In contrast, we wish to solve the COIN design problem implicitly, via the “adaptive” character of the RL algorithms of each of the agents. This approach introduces an entirely new, profound design problem: Assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, Braess’s paradox, or the liquidity trap? Although still very young, research specifically concentrating on the COIN design problem has already resulted in successes in artificial domains, in particular in packet-routing, the leader-follower problem, and in variants of Arthur’s El Farol bar problem. It is expected that as it matures and draws upon other disciplines related to COINs, this research will greatly expand the range of tasks addressable by human engineers. Moreover, in addition to drawing on them, such a fully developed science of COIN design may provide much insight into other already established scientific fields, such as economics, game theory, and population biology."
],
"cite_N": [
"@cite_4"
],
"mid": [
"2171234133"
]
} | A Method for Solving Distributed Service Allocation Problems * | The problem of dynamically allocating services to a changing set of consumers arises in many applications. For example, in an e-commerce system, the service providers are always trying to determine which service to provide to whom, and at what price [5]; in an automated manufacturing for mass customization scenario, agents must decide which services will be more popular/profitable [1]; and in a dynamic sensor allocation problem, a set of sensors in a field must decide which area to cover, if any, while preserving their resources.
While these problems might not seem related, they are instances of a more general service allocation problem in which a finite set of resources must be allocated by a set of autonomous agents so as to maximize some global measure of utility. A general approach to solving these types of problems has been used in many successful systems, such as [2], [3], [11], and [9]. The approach involves three general steps:
1. Assign each resource that needs to be preserved to an agent responsible for managing the resource.
2. Assign each goal of the problem domain to an agent responsible for achieving it. Achieving these goals requires the consumption of resources.
3. Have each agent take actions so as to maximize its own utility, but implement a coordination algorithm that encourages agents to take actions that also maximize the global utility.
In this paper we formalize this general approach by casting the problem as a search in a global fitness landscape which is defined as the sum of the agents' utilities. We show how the choice of a coordination/communication protocol disseminates information, which in turn "smoothes" the global utility landscape. This smooth global utility landscape allows the agents to easily find the global optimum by simply making selfish decisions to maximize their individual utility.
We also present experiments that pinpoint the location of a phase transition in the time it takes for the agents to find the optimal allocation. The transition can be seen when the amount of communication allowed among agents is manipulated. It exists because communication allows the agents to align their individual landscapes with the global landscape. At some amount of communication, the alignment between these landscapes is good enough to allow the agents to find the global optimum, but less communication drives the agents into a random behavior from which the system cannot recover.
Task Allocation
The service allocation problem we discuss in this paper is a superset of the well known task allocation problem [10, chapter 5.7]. A task allocation problem is defined by a set of tasks that must be allocated among a set of agents. Each agent has a cost associated with each subset of tasks, which represents the cost the agent would incur if it had to perform those tasks. Coordination protocols are designed to allow agents to trade tasks so that the globally optimal allocation-the one that minimizes the sum of all the individual agent costs-is reached as soon as possible. It has been shown that this globally optimal allocation can be reached if the agents use the contract-net protocol [9] with OCSM contracts [8]. These OCSM contracts make it possible for the system to transition from any allocation to any other allocation in one step. As such, a simple hill-climbing search is guaranteed to eventually reach the global optimum.
In this paper we consider the service allocation problem, which is a superset of the task allocation problem because it allows for more than one agent to service a "task". The service allocation problem we study also has the characteristic that not every allocation can be reached from every other allocation in one step.
Service Allocation
In a service allocation problem there is a set of services, offered by service agents, and a set of consumers who use those services. A server can provide any one of a number of services, and consumers can benefit from the provided service without depleting it. A server agent incurs a cost when providing a service and can choose not to provide any service.
For example, a server could be an agent that sets up a website with information about cats. All the consumer agents with interests in cats will benefit from this service, but those with other interests will not benefit. Since each server can provide, at most, one service, the problem is to find the allocation of services that maximizes the sum of all the agents' utilities, that is, an allocation that maximizes the global utility.
Sensor Allocation
Another instance of the service allocation problem is the sensor allocation problem, which we will use as an example throughout this paper. In the sensor allocation problem we have a number of sensors placed in fixed positions in a two-dimensional space. Each sensor has a limited viewing angle and distance but can point in any one of a number of directions. For example, a sensor might have a viewing angle of 120 degrees, a viewing distance of 3 feet, and be able to look in three directions, each one 120 degrees apart from the others. That is, it can "look" in any one of three directions. In each direction it can see everything that is in the 120-degree, 3-foot-long view cone. Each time a sensor looks in a particular direction it uses energy.
There are also targets that move around in the field. The goal is for the sensors to detect and track all the targets in the field. However, in order to determine the location of a target, two or more sensors have to look at it at the same time. We also wish to minimize the amount of energy spent by the sensors.
We consider the sensor agents as being able to provide three services, one for each sector, but only one at a time. We consider the target agents as consuming the services of the sensors.
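Using the example numbers above (three 120-degree sectors and a viewing range of 3 units), a sensor's coverage test can be sketched as follows; this is our illustration, not the simulator's code, and the sector layout is the fixed one used later in the experiments.

// Sketch: can a sensor pointed at a given sector see a target? Assumes three
// fixed 120-degree sectors starting at angle 0 and a finite viewing range.
final class Visibility {
    static boolean sees(double sx, double sy, int sector,    // sensor position and sector 0..2
                        double tx, double ty, double range) {
        double dx = tx - sx, dy = ty - sy;
        if (Math.hypot(dx, dy) > range) return false;        // target out of range
        double ang = Math.toDegrees(Math.atan2(dy, dx));
        if (ang < 0) ang += 360;
        return (int) (ang / 120) == sector;                  // inside the 120-degree cone
    }

    public static void main(String[] args) {
        System.out.println(sees(0, 0, 0, 1, 1, 3));   // true: 45 degrees, distance ~1.41
        System.out.println(sees(0, 0, 1, 1, 1, 3));   // false: target is in sector 0
    }
}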
A Formal Model for Service Allocation
We define a service allocation problem SA as a tuple SA = {C, S}, where C is the set of consumer agents C = {c_1, c_2, . . . , c_{|C|}}, and c_i has only one possible state, c_i = 0. The set of service agents is S = {s_1, s_2, . . . , s_{|S|}}, and the value of s_i is the state of that service. For the sensor domain, in which a sensor can observe any one of three 120-degree sectors or be turned off, we have s_i ∈ {0, 1, 2, off}. An allocation is an assignment of states to the services (since the consumers have only one possible state we can ignore them). A particular allocation is denoted by a = {s_1, s_2, . . . , s_{|S|}}, where the s_i have some value taken from the domain of service states, and a ∈ A, where A is the set of all possible allocations. That is, an allocation tells us the state of all agents.
Each agent also has a utility function. The utility that an agent receives depends on the current allocation a, where we let a(s) be the state of service agent s under a. Each agent's utility depends on its own state and the states of other agents. For example, in the sensor problem we define the utility of sensor s as U_s(a), where
U_s(a) = { 0      if a(s) = off
         { −K_1   otherwise.    (1)
That is, a sensor receives no utility when it is off and must pay a penalty of −K_1 when it is running. The targets are the consumers, and each target's utility is defined as
U_c(a) = { 0             if f_c(a) = 0
         { K_2           if f_c(a) = 1
         { K_2 + n − 2   if f_c(a) = n    (2)
where f_c(a) = the number of sensors s that see c under allocation a.    (3)
Finally, given the individual agent utilities, we define the global utility GU (a) as the sum of the individual agents' utilities:
GU(a) = Σ_{c∈C} U_c(a) + Σ_{s∈S} U_s(a) .    (4)
The service allocation problem is to find the allocation a that maximizes GU(a). In the sensor problem there are 4^|S| possible allocations, which would make a simple generate-and-test approach take an exponential amount of time. We wish to find the global optimum much faster than that.
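The sensor-domain utilities transcribe directly into code. The sketch below is ours: the constants K_1 and K_2 are illustrative placeholders, and seenCounts[c] plays the role of f_c(a).

// Transcription of the sensor-domain utilities (eqs. 1, 2, 4).
// States: 0,1,2 = sectors, 3 = "off". K1 and K2 are illustrative values.
final class Utilities {
    static final double K1 = 1.0, K2 = 10.0;

    static double sensorUtility(int state) {                 // eq. (1)
        return (state == 3) ? 0 : -K1;
    }

    static double targetUtility(int n) {                     // eq. (2), n = f_c(a)
        if (n == 0) return 0;
        if (n == 1) return K2;
        return K2 + n - 2;
    }

    static double globalUtility(int[] sensorStates, int[] seenCounts) {  // eq. (4)
        double gu = 0;
        for (int s : sensorStates) gu += sensorUtility(s);
        for (int n : seenCounts)  gu += targetUtility(n);
        return gu;
    }

    public static void main(String[] args) {
        // Two sensors on, one off; one target seen by two sensors, one unseen:
        // GU = -1 - 1 + 0 + (10 + 2 - 2) + 0 = 8.0
        System.out.println(globalUtility(new int[]{0, 1, 3}, new int[]{2, 0}));
    }
}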
Search Algorithms
Our goal is to design an interaction protocol whereby an allocation a that maximizes the global utility GU(a) is reached in a small number of steps. In each step of our protocol one of the agents will change its state or send a message to another agent. The messages might contain the states or utilities of other agents. We assume that the agents do not have direct access to the other agents' states or utility values.
The simplest algorithm we can envision involves having each consumer, at each time step, change the state of a randomly chosen service agent so as to increase the consumer's own utility. That is, a consumer c will change the current allocation a into a′ by changing the state of some sensor s such that U_c(a′) > U_c(a). If the sensor's state cannot be changed so as to increase the utility, then the consumer does nothing. In the sensor domain this amounts to a target picking a sensor and changing its state so that the sensor can see the target. We refer to this algorithm as individual hill-climbing.
The individual hill-climbing algorithm is simple to implement and the only communication needed is between the consumer and the chosen server. This simple algorithm makes every consumer agent increase its individual utility at each turn. However, the new allocation a′ might result in a lower global utility, since a′ might reduce the utility of several other agents. Therefore, it does not guarantee that an optimal allocation will eventually be reached.
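A minimal sketch of one such step (ours, not the paper's implementation) makes the problem visible in code: the consumer consults only its own utility U_c, so nothing stops its move from lowering the global utility GU.

import java.util.Arrays;
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Sketch of one individual hill-climbing step: a consumer flips a randomly
// chosen sensor to a state that raises the consumer's OWN utility, ignoring GU.
final class IndividualHillClimb {
    static void step(int[] states, int numStates, ToDoubleFunction<int[]> uc, Random rnd) {
        int sensor = rnd.nextInt(states.length);     // pick a random service agent
        int old = states[sensor];
        double base = uc.applyAsDouble(states);
        for (int s = 0; s < numStates; s++) {
            states[sensor] = s;
            if (uc.applyAsDouble(states) > base) return;   // keep the first improving state
        }
        states[sensor] = old;                        // no improvement possible: do nothing
    }

    public static void main(String[] args) {
        int[] a = {3, 3};                            // both sensors "off"
        // This consumer's utility: one point per sensor pointing at sector 1.
        ToDoubleFunction<int[]> uc = s -> (s[0] == 1 ? 1 : 0) + (s[1] == 1 ? 1 : 0);
        Random rnd = new Random();
        for (int i = 0; i < 20; i++) step(a, 4, uc, rnd);
        System.out.println(Arrays.toString(a));      // almost surely [1, 1]
    }
}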
| 4,553 |
cs0306119 | 2952136536 | We present a method for solving service allocation problems in which a set of services must be allocated to a set of agents so as to maximize a global utility. The method is completely distributed so it can scale to any number of services without degradation. We first formalize the service allocation problem and then present a simple hill-climbing, a global hill-climbing, and a bidding-protocol algorithm for solving it. We analyze the expected performance of these algorithms as a function of various problem parameters such as the branching factor and the number of agents. Finally, we use the sensor allocation problem, an instance of a service allocation problem, to show the bidding protocol at work. The simulations also show that a phase transition in the expected quality of the solution exists as the amount of communication between agents increases. | The task allocation problem has been studied in @cite_0 , but the service allocation problem we present in this paper has received very little attention. There is also work being done on the analysis of the dynamics of multiagent systems for other domains such as e-commerce @cite_10 and automated manufacturing @cite_2 . It is possible that extensions to our approach will shed some light on the dynamics of these domains. | {
"abstract": [
"",
"Abstract We envision a future in which the global economy and the Internet will merge, evolving into an information economy bustling with billions of economically motivated software agents that exchange information goods and services with humans and other agents. Economic software agents will differ in important ways from their human counterparts, and these differences may have significant beneficial or harmful effects upon the global economy. It is therefore important to consider the economic incentives and behaviors of economic software agents, and to use every available means to anticipate their collective interactions. We survey research conducted by the Information Economies group at IBM Research aimed at understanding collective interactions among agents that dynamically price information goods or services. In particular, we study the potential impact of widespread shopbot usage on prices, the price dynamics that may ensue from various mixtures of automated pricing agents (or “pricebots”), the potential use of machine-learning algorithms to improve profits, and more generally the interplay among learning, optimization, and dynamics in agent-based information economies. These studies illustrate both beneficial and harmful collective behaviors that can arise in such systems, suggest possible cures for some of the undesired phenomena, and raise fundamental theoretical issues, particularly in the realms of multi-agent learning and dynamic optimization.",
"Agent architectures need to organize themselves and adapt dynamically to changing circumstances without top-down control from a system operator. Some researchers provide this capability with complex agents that emulate human intelligence and reason explicitly about their coordination, reintroducing many of the problems of complex system design and implementation that motivated increasing software localization in the first place. Naturally occurring systems of simple agents (such as populations of insects or other animals) suggest that this retreat is not necessary. This paper summarizes several studies of such systems, and derives from them a set of general principles that artificial multi-agent systems can use to support overall system behavior significantly more complex than the behavior of the individuals agents. Copyright Kluwer Academic Publishers 1997"
],
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_2"
],
"mid": [
"147685635",
"2145397675",
"1523176412"
]
} | A Method for Solving Distributed Service Allocation Problems * |
While these problems might not seem related, they are instances of a more general service allocation problem in which a finite set of resources must be allocated by a set of autonomous agents so as to maximize some global measure of utility. A general approach to solving these types of problems has been used in many successful systems , such as [2] [3] [11] [9]. The approach involves three general steps:
1. Assign each resource that needs to be preserved to an agent responsible for managing the resource.
2. Assign each goal of the problem domain to an agent responsible for achieving it. Achieving these goals requires the consumption of resources.
3. Have each agent take actions so as to maximize its own utility, but implement a coordination algorithm that encourages agents to take actions that also maximize the global utility.
In this paper we formalize this general approach by casting the problem as a search in a global fitness landscape which is defined as the sum of the agents' utilities. We show how the choice of a coordination/communication protocol disseminates information, which in turn "smoothes" the global utility landscape. This smooth global utility landscape allows the agents to easily find the global optimum by simply making selfish decisions to maximize their individual utility.
We also present experiments that pinpoint the location of a phase transition in the time it takes for the agents to find the optimal allocation. The transition can be seen when the amount of communication allowed among agents is manipulated. It exists because communication allows the agents to align their individual landscapes with the global landscape. At some amount of communication, the alignment between these landscapes is good enough to allow the agents to find the global optimum, but less communication drives the agents into a random behavior from which the system cannot recuperate.
Task Allocation
The service allocation problem we discuss in this paper is a superset of the well known task allocation problem [10, chapter 5.7]. A task allocation problem is defined by a set of tasks that must be allocated among a set of agents. Each agent has a cost associated with each subset of tasks, which represents the cost the agent would incur if it had to perform those tasks. Coordination protocols are designed to allow agents to trade tasks so that the globally optimal allocation-the one that minimizes the sum of all the individual agent costs-is reached as soon as possible. It has been shown that this globally optimal allocation can reached if the agents use the contract-net protocol [9] with OCSM contracts [8]. These OCSM contracts make it possible for the system to transition from any allocation to any other allocation in one step. As such, a simple hill-climbing search is guaranteed to eventually reach the global optimum.
In this paper we consider the service allocation problem, which is a superset of the task allocation because it allows for more than one agent to service a "task". The service allocation problem we study also has the characteristic that every allocation cannot be reached from every other allocation in one step.
Service Allocation
In a service allocation problem there are a set of services, offered by service agents, and a set of consumers who use those services. A server can provide any one of a number of services and some consumers will benefit from that service without depleting it. A server agent incurs a cost when providing a service and can choose not to provide any service.
For example, a server could be an agent that sets up a website with information about cats. All the consumer agents with interests in cats will benefit from this service, but those with other interests will not benefit. Since each server can provide, at most, one service, the problem is to find the allocation of services that maximizes the sum of all the agents' utilities, that is, an allocation that maximizes the global utility.
Sensor Allocation
Another instance of the service allocation problem is the sensor allocation problem, which we will use as an example throughout this paper. In the sensor allocation problem we have a number of sensors placed in fixed positions in a two-dimensional space. Each sensor has a limited viewing angle and distance but can point in any one of a number of directions. For example, a sensor might have a viewing angle of 120 degrees, viewing distance of 3 feet, and be able to look in three directions, each one 120 degrees apart from the others. That is, it can "look" in any one of three directions. On each direction it can see everything that is in the 120 degree and 3 feet long view cone. Each time a sensor looks in a particular direction is uses energy.
There are also targets that move around in the field. The goal is for the sensors to detect and track all the targets in the field. However, in order to determine the location of a target, two or more sensors have to look at it at the same time. We also wish to minimize the amount of energy spent by the sensors.
We consider the sensor agents as being able to provide three services, one for each sector, but only one at a time. We consider the target agents as consuming the services of the sensors.
A Formal Model for Service Allocation
We define a service allocation problem SA as a tuple SA = {C, S} where C is the set of consumer agents C = {c 1 , c 2 , . . . , c |C| }, and c i has only one possible state, c i = 0. The set of service agents is S = {s 1 , s 2 , . . . , s |S| } and the value of s i is the value of that service. For the sensor domain in which a sensor can observe any one of three 120-degree sectors or be turned off, we have s i ∈ {0, 1, 2, off}. An allocation is an assignment of states to the services (since the consumers have only one possible state we can ignore them). A particular allocation is denoted by a = {s 1 , s 2 , . . . , s |S| }, where the s i have some value taken from the domain of service states, and a ∈ A, where A is the set of all possible allocations. That is, an allocation tells us the state of all agents (since consumers have only one state they can be omitted).
Each agent also has a utility function. The utility that an agent receives depends on the current allocation a, where we let a(s) be the state of service agent s under a. The agent's utilities will depend on their state and the state of other agents. For example, in the sensor problem we define the utility of sensor s as U s (a), where
U s (a) = 0 if a(s) = off −K 1 otherwise.(1)
That is, a sensor receives no utility when it is off and must pay a penalty of −K 1 when it is running. The targets are the consumers, and each target's utility is defined as
U c (a) = 0 if f c (a) = 0 K 2 if f c (a) = 1 K 2 + n − 2 if f c (a) = n (2)
where f c (a) = number of sensors s that see c given their state a(c).
(
Finally, given the individual agent utilities, we define the global utility GU (a) as the sum of the individual agents' utilities:
GU (a) = c∈C U c (a) + s∈S U s (a).(4)
The service allocation problem is to find the allocation a that maximizes GU (a). In the sensor problem, there are 4 |S| possible allocations, which would make a simple generate-and-test approach take exponential amounts of time. We wish to find the global optimum much faster than that.
Search Algorithms
Our goal is to design an interaction protocol whereby an allocation a that maximizes the global utility GU (a) is reached in a small number of steps. In each step of our protocol one of the agents will change its state or send a message to another agent. The messages might contain the state or utilities of other agents. We assume that the agents do not have direct access to the other agents' states or utility values.
The simplest algorithm we can envision involves having each consumer, at each time, changing the state of a randomly chosen service agent so as to increase the consumer's own utility. That is, a consumer c will change the current allocation a into a ′ by changing the state of some sensor s such that U c (a ′ ) > U c (a). If the sensor's state cannot be changed so as to increase the utility, then the consumer does nothing. In the sensor domain this amounts to a target picking a sensor and changing its state so that the sensor can see the target. We refer to this algorithm as individual hill-climbing.
The individual hill-climbing algorithm is simple to implement and the only communication needed is between the consumer and the chose server. This simple algorithm makes every consumer agent increase its individual utility at each turn. However, the new allocation a ′ might result in a lower global utility, since a ′ might reduce the utility of several other agents. Therefore, it does not guarantee that an optimal allocation will be eventually reached.
Another approach is for each agent to change state so as to increase the global utility. We call this a global hill-climbing algorithm. In order to implement this algorithm, an agent would need to know how the proposed state change affects the global utility as well as the states of all the other agents. That is, it would need to be able to determine GU (a ′ ) which requires it to know the state of all the agents in a ′ as well as the utility functions of every other agent, as per the definition of global utility (4). In order for an agent to know the state of others, it would need to somehow communicate with all other agents. If the system implements a global broadcasting method then we would need for each agent to broadcast its state at each time. If the system uses more specialized communications such as point-to-point, limited broadcasting, etc., then more messages will be needed.
Any protocol that implements the global hill-climbing algorithm will reach a locally optimal allocation in the global utility. This is because it is always true that, for a new allocation a ′ and old allocation a, GU (a ′ ) ≥ GU (a). Whether or not this local optimum is also a global optimum will depend on the ruggedness of the global utility landscape. That is, if it consists of one smooth peak then it is likely that any local optimum is the global optimum. On the other hand, if the landscape is very rugged then there are likely many local peaks. Studies in NK landscapes [4] tell us that smoother landscapes result when an agent's utility depends on the state of smaller number of other agents.
Global hill-climbing is better than individual hill-climbing since it guarantees that we will find a local optima. However, it requires agents to know each others' utility function and to constantly communicate their state. Such large amount of communication is often undesirable in multiagent systems. We need a better way to find the global optimum.
One way of correlating the individual landscapes to the global utility landscape is with the use of a bidding protocol in which each consumer agent tells each service the marginal utility the consumer would receive if the service switched its state so as to maximize the consumer's utility. The service agent can then choose to provide the service with the highest aggregate demand. Since the service is picking the value that maximizes the utility of everyone involved (all the consumers and the service) without decreasing the utility of anyone else (the other services), this protocol is guaranteed to never decrease the global utility. This bidding protocol is a simplified version of the contract-net [9] protocol in that it does not require contractors to send requests for bids.
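A sketch of one auction round under this simplified protocol; the marginal_utility callback (the gain consumer c would realize if the sensor switched to a given state) is an assumed interface, not the paper's API.

```python
from collections import defaultdict

def auction_round(sensor, consumers, marginal_utility):
    """The sensor adopts the state with the highest aggregate demand."""
    demand = defaultdict(float)
    for c in consumers:                        # each consumer bids its gain
        for state in sensor["states"]:
            demand[state] += marginal_utility(c, sensor, state)
    sensor["state"] = max(demand, key=demand.get)
    return sensor["state"]
```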
However, in order for a consumer to determine the marginal utility it will receive from one sensor changing state, it still needs to know the state of all the other sensors. This means that a complete implementation of this protocol will still require a lot of communication (namely, the same amount as in global hill-climbing). We can reduce this number of messages by allowing agents to communicate with only a subset of the other agents and making their decisions based on only this subset of information. That is, instead of all services telling each consumer their state, a consumer could receive state information from only a subset of the services and make its decision based on this (assuming that the services chosen are representative of the whole). This strategy shows a lot of promise but its performance can only be evaluated on an instance-by-instance basis. We explore this strategy experimentally in Section 3 using the sensor domain.
Theoretical Time Bounds of Global Hill-Climbing
Since we now know that global hill-climbing will always reach a local optimum, the next questions we must answer are:
1. How many local optima are there?
2. What is the probability that a local optimum is the global optimum?
3. How long does it take, on average, to reach a local optimum?
Let a be the current allocation and a′ be a neighboring allocation. We know that a is a local optimum if
∀ a′ ∈ N(a): GU(a) > GU(a′)    (5)
where N(a) = {x | x is a neighbor of a}.
We define a neighbor allocation as an allocation in which one, and only one, agent has a different state. The probability that some allocation a is a local optimum is simply the probability that (5) is true. If the global utilities of all pairs of neighbors are uncorrelated, then this probability is
Pr[∀ a′ ∈ N(a): GU(a) > GU(a′)] = Pr[GU(a) > GU(a′)]^b,    (7)
where b is the branching factor. In the sensor problem b = 3 · |S|, where S is the set of all sensors. That is, since each sensor can be in any of four states, it has three neighbors from each state. In some systems it is safe to assume that the global utilities of a's neighbors are independent; however, most systems show some degree of correlation. Now we need to calculate Pr[GU(a) > GU(a′)], that is, the probability that some allocation a has a greater global utility than its neighbor a′, for all a and a′. This could be calculated via an exhaustive enumeration of all possible allocations. However, often we can find the expected value of this probability.
For example, in the sensor problem each sensor has four possible states. If a sensor changes its state from sector x to sector y the utility of the target agents covered by x will decrease while the utility of those in y will increase. If we assume that, on average, the targets are evenly spaced on the field, then the global utilities for both of these are expected to be the same. That is, the expected probability that the global utility of one allocation is bigger than the other is 1/2.
If, on the other hand, a sensor changes state from "off" to a sector, or from a sector to "off," the global utility is expected to decrease and increase, respectively. However, there are an equal number of opportunities to go from "off" to "on" and vice-versa. Therefore, we can also expect that for these cases the probability that the global utility of one allocation is bigger than the other is 1/2.
Based on these approximations, we can declare that for the sensor problem
Pr[∀ a′ ∈ N(a): GU(a) > GU(a′)] = (1/2)^b = λ.    (8)
If we assume an even distribution of local optima, the total number of local optima is simply the product of the total number of allocations and the probability that each one is a local optimum. That is,
Total number of local optima = λ · |A|    (9)
For the sensor problem, λ = 1/2^b, b = 3 · |S|, and |A| = b^|S|, so the expected number of local optima is b^|S| / 2^(3|S|).
Pr[a local optimum is also global] = 1/(λ · |A|) = 1/2^b.    (10)
We can find the expected time the algorithm will take to reach a local optimum by determining the maximum number of steps from every allocation to the nearest local optimum. This gives us an upper bound on the number of steps needed to reach the nearest local optimum using global hill-climbing. Notice that, under either individual hill-climbing or the bidding protocol it is possible that the local optimum is not reached, or is reached after more steps, since these algorithms can take steps that lower the global utility.
In order to find the expected number of steps to reach a local optimum, we start at any one of the local optima and then traverse all possible links at each depth d until all possible allocations have been visited. This occurs when
λ · |A| · b^d > |A|.    (11)
Solving for d, and remembering that λ = 1/2^b, we get
d > b · log_b(2).    (12)
The expected worst-case distance from any point to the nearest local optimum is, therefore, b · log_b(2) (this number only makes sense for b ≥ 2, since a smaller number of neighbors does not form a searchable space). That is, the number of steps to reach the nearest local optimum in the sensor domain is proportional to the branching factor b, which is equal to 3 · |S|. We can expect search time to increase linearly with the number of sensors in the field.
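To make the bound concrete, here is a quick computation for the seven-sensor setting used in the simulations below (so b = 3 · 7 = 21); the numbers are our own arithmetic, not from the paper.

```python
import math

S = 7                                  # number of sensors
b = 3 * S                              # branching factor b = 3·|S| = 21
lam = 0.5 ** b                         # λ = 1/2^b
d = b * math.log(2) / math.log(b)      # the bound d > b·log_b(2)
print(f"lambda = {lam:.2e}, worst-case distance d ≈ {d:.2f}")
# Prints roughly: lambda = 4.77e-07, worst-case distance d ≈ 4.78
```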
Simulations
While the theoretical results above give us some bounds on the number of iterations before the system is expected to converge to a local optimum, the bounds are rather loose and do not tell us much about the dynamics of the executing system. Also, we cannot show mathematically how changes in the amount of communication change the search. Therefore, we have implemented a service allocation simulator to answer these questions. It simulates the sensor allocation domain described in the introduction.
The simulator is written in Java and the source code is available upon request. It gathers and analyzes data from any desired number of runs. The program can analyze the behavior of any number of target and sensor agents on a two-dimensional space, and the agents can be given any desired utility function. The program is limited to static targets. That is, it only considers the one-shot service allocation problem. Each new allocation is completely independent of any previous one.
In the tests we performed, each run has seven sensors and seven targets, all of which are randomly placed on a two-dimensional grid. Each sensor can only point in one of three directions or sectors. These three sectors are the same for all sensors (specifically, the first sector is from 0 to 120 degrees, the second one from 120 to 240, and the third one from 240 to 360). All the sensors use the same utility function, which is given by (1), while the targets use (2). After a sensor agent receives all the bids it chooses the sector that has the highest aggregate demand, as described by the bidding protocol in Section 2.1.
During a run, each of the targets periodically sends a bid to a number of sensors asking them to turn to the sector that faces the target. We set the bid amount to a fixed number for these tests. Periodically, the sensors count the number of bids they have received for each sector and turn their detector (such as a radar) to face the sector with the highest aggregate demand. We assume that neither the targets nor the sensors can form coalitions.
We vary the number of sensors to which the targets send their bids in order to explore the quality of the solution that the system converges upon as the amount of communication changes. For example, at one extreme, if all the targets send their bids to all the sensors, then the sensors would always set their sector to be the one with the most targets. This particular service allocation should usually be the best. However, it might not always be the optimal solution. For example, if six targets are clustered together and the seventh is on another part of the field, it would be better if six sensor agents pointed towards the cluster of targets while the remaining sensor agent pointed towards the stray target, rather than having all sensor agents point towards the cluster of targets. At the other extreme, if all the targets send their bids to only one sensor then they will minimize communication, but the sensors will simply point to the sector from which they received a message, an allocation which is likely to be suboptimal.
These simulations explore the ruggedness of the system's global utility landscape and the dynamics of the agents' exploration of this landscape. If the agents were to always converge on a local (non-global) optimum then we would deduce that this problem domain has a very rugged utility landscape. On the other hand, if they usually manage to reach the global optimum then we could deduce a smooth utility landscape.
[Figure 2: Results with 4 neighbors.]
Test Results
In each of our tests we set the number of agents that each target will send its bid to, that is, the number of neighbors, to a fixed number. Given this fixed number of neighbors, we then generated 100 random placements of agents on the field and ran our bidding algorithm 10 times on each of those placements. Finally, we plotted the average solution quality, over the 10 runs, as a function of time for each of the 100 different placements. The solution quality is given by the ratio
α = Current Utility / Globally Optimal Utility,    (13)
so if α = 1, the run has reached the global optimum. Since the number of agents is small, we were able to calculate the global optimum using a brute-force method. Specifically, there are 3^7 = 2187 possible configurations times 100 random placements, leading to 218700 combinations that we had to check for each run in order to find the global optimum by brute force. Using more than 7 sensors made the test take too long. Notice, however, that our algorithm is much faster than this brute-force search, which we perform only to confirm that our search does find the global optimum. In our tests there were always seven target agents and seven sensor agents. We varied the number of neighbors from 1 to 7. If a target can only communicate with one sensor, the sensors will likely have very little information for making their decisions, while if all targets communicate with all seven sensors, then each sensor will generally be able to point to the sector with the most targets. However, because these decisions are made in an asynchronous manner, it is possible that a sensor will sometimes not receive all the bids before it has to make a decision. The targets always send their bids to the sensors that are closest to them.
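The brute-force check can be written compactly; global_utility is an assumed callback scoring a tuple of per-sensor sectors, and the 3^7 enumeration matches the configuration count above.

```python
from itertools import product

def brute_force_optimum(global_utility, n_sensors=7, sectors=(0, 1, 2)):
    """Exhaustively score all 3^7 = 2187 sector assignments."""
    return max(product(sectors, repeat=n_sensors), key=global_utility)

# Solution quality of a run, per Equation (13):
# alpha = global_utility(current) / global_utility(brute_force_optimum(...))
```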
The results from our experiments are shown in Figure 1 where we can see that there is a transition in the system's performance as the number of neighbors goes from three to five. That is, if the targets only send their bids to three sensors then it is almost certain that the system will stay in a configuration that has a very low global utility. However, if the targets send their bids to five sensors, then it is almost guaranteed (98% of the time) that the system will reach the globally optimal allocation. This is a huge difference in terms of the performance. We also notice in Figure 2 that for four neighbors there is a fairly even distribution in the utility of the final allocation.
Conclusions
We have formalized the service allocation problem and examined a general approach to solving problems of this type. The approach involves the use of utility-maximizing agents that represent the resources and the services. A simple form of bidding is used for communication. An analysis of this approach reveals that it implements a form of distributed hill-climbing, where each agent climbs its own utility landscape and not the global utility landscape. However, we showed that increasing the amount of communication among the agents forces each individual agent's landscape to become increasingly correlated to the global landscape.
These theoretical results were then verified in our implementation of a sensor allocation problem-an instance of a service allocation problem. Furthermore, the simulations allowed us to determine the location of a phase transition in the amount of communication needed for the system to consistently arrive at the globally optimal service allocation.
More generally, we have shown how a service allocation problem can be viewed as a distributed search by multiple agents over multiple landscapes. We also showed how the correlation between the global utility landscape and the individual agent's utility landscape depends on the amount and type of inter-agent communication. Specifically, we have shown that increased communications leads to a higher correlation between the global and individual utility landscapes, which increases the probability that the global optimum will be reached. Of course, the success of the search still depends on the connectivity of the search space, which will vary from domain to domain. We expect that our general approach can be applied to the design of any multiagent systems whose desired behavior is given by a global utility function but whose agents must act selfishly.
Our future work includes the study of how the system will behave under perturbations. For example, as the target moves it perturbs the current allocation and the global optimum might change. We also hope to characterize the local to global utility function correlation for different service allocation problems and the expected time to find the global optimum under various amounts of communication. | 4,553 |
math0301111 | 2157627854 | Let HN denote the problem of determining whether a system of multivariate polynomials with integer coefficients has a complex root. It has long been known that HN in P implies P=NP and, thanks to recent work of Koiran, it is now known that the truth of the Generalized Riemann Hypothesis (GRH) yields the implication that HN not in NP implies P is not equal to NP. We show that the assumption of GRH in the latter implication can be replaced by either of two more plausible hypotheses from analytic number theory. The first is an effective short interval Prime Ideal Theorem with explicit dependence on the underlying field, while the second can be interpreted as a quantitative statement on the higher moments of the zeroes of Dedekind zeta functions. In particular, both assumptions can still hold even if GRH is false. We thus obtain a new application of Dedekind zero estimates to computational algebraic geometry. Along the way, we also apply recent explicit algebraic and analytic estimates, some due to Silberman and Sombra, which may be of independent interest. | This result immediately implies that @math can be solved by solving a linear system with @math variables and equations over the rationals (with total bit-size @math ), thus easily yielding @math . That @math then follows immediately from the fact that linear algebra can be efficiently parallelized @cite_7 . | {
"abstract": [
"The parallel arithmetic complexities of matrix inversion, solving systems of linear equations, computing determinants and computing the characteristic polynomial of a matrix are shown to have the same growth rate. Algorithms are given that compute these problems in @math steps using a number of processors polynomial in n. (n is the order of the matrix of the problem.)"
],
"cite_N": [
"@cite_7"
],
"mid": [
"2113097540"
]
} | | | 0
1708.02989 | 2629354081 | The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system. | In the pilot task, we focus on citations and the text spans they cite in the original article. The importance of citations for summarization is discussed in @cite_23 , which compared summaries that were based on three different things: only the reference article; only the abstract; and only citations. The best results were based on citations. @cite_7 also showed that the information from citations is different from that which can be gleaned from just the abstract or reference article. However, it is cautioned that citations often focus on very specific aspects of a paper @cite_5 . | {
"abstract": [
"The old Asian legend about the blind men and the elephant comes to mind when looking at how different authors of scientific papers describe a piece of related prior work. It turns out that different citations to the same paper often focus on different aspects of that paper and that neither provides a full description of its full set of contributions. In this article, we will describe our investigation of this phenomenon. We studied citation summaries in the context of research papers in the biomedical domain. A citation summary is the set of citing sentences for a given article and can be used as a surrogate for the actual article in a variety of scenarios. It contains information that was deemed by peers to be important. Our study shows that citation summaries overlap to some extent with the abstracts of the papers and that they also differ from them in that they focus on different aspects of these papers than do the abstracts. In addition to this, co-cited articles (which are pairs of articles cited by another article) tend to be similar. We show results based on a lexical similarity metric called cohesion to justify our claims. © 2008 Wiley Periodicals, Inc.",
"The number of research publications in various disciplines is growing exponentially. Researchers and scientists are increasingly finding themselves in the position of having to quickly understand large amounts of technical material. In this paper we present the first steps in producing an automatically generated, readily consumable, technical survey. Specifically we explore the combination of citation information and summarization techniques. Even though prior work (, 2006) argues that citation text is unsuitable for summarization, we show that in the framework of multi-document survey creation, citation texts can play a crucial role.",
"Researchers and scientists increasingly find themselves in the position of having to quickly understand large amounts of technical material. Our goal is to effectively serve this need by using bibliometric text mining and summarization techniques to generate summaries of scientific literature. We show how we can use citations to produce automatically generated, readily consumable, technical extractive summaries. We first propose C-LexRank, a model for summarizing single scientific articles based on citations, which employs community detection and extracts salient information-rich sentences. Next, we further extend our experiments to summarize a set of papers, which cover the same scienti fic topic. We generate extractive summaries of a set of Question Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their citation sentences and show that citations have unique information amenable to creating a summary."
],
"cite_N": [
"@cite_5",
"@cite_7",
"@cite_23"
],
"mid": [
"2107972503",
"2153568396",
"2146593092"
]
} | Identifying Reference Spans: Topic Modeling and Word Embeddings help IR | The CL-SciSumm 2016 [12] shared task posed the problem of automatic summarization in the computational linguistics domain. Single document summarization is hardly new [30,5,4]; however, in addition to the reference document to be summarized, we are also given citances, i.e. sentences that cite our reference document. The usefulness of citances in the process of summarization is immediately apparent. A citance can hint at what is interesting about the document.
This objective was split into three tasks. Given a citance (a sentence containing a citation), in Task 1a we must identify the span of text in the reference document that best reflects what has been cited. Task 1b asks us to classify the cited aspect according to a predefined set of facets: hypothesis, aim, method, results, and implication. Finally, Task 2 is the generation of a structured summary for the reference document. Although the shared task is broken up into multiple tasks, this paper concerns itself solely with Task 1a.
Task 1a is quite interesting all by itself. We can think of Task 1a as a small scale summarization. Thus, being precise is incredibly important: the system must often find a single sentence among hundreds (in some cases, however, multiple sentences are correct). The results of the workshop [15] reveal that Task 1a is quite challenging. There was a varied selection of methods used for this problem: SVMs, neural networks, learningto-rank algorithms, and more. Regardless, our previous system had the best performance on the test set for CL-SciSumm: cosine similarity between weighted bagof-word vectors. The weighting used is well known in information retrieval: term frequency · inverse document frequency (TFIDF). Although TFIDF is a well known and understood method in information retrieval, it is surprising that it achieved better performance than more heavily engineered solutions. Thus, our goal in this paper is twofold: to analyze and improve on the performance of TFIDF and to push beyond its performance ceiling.
In the process of exploring different configurations, we have observed the performance of our TFIDF method vary substantially. Text preprocessing parameters can have a significant effect on the final performance. This variance also underscores the need to start with a basic system and then add complexity step-by-step in a reasoned manner. Another prior attempt employed SVMs with tree kernels but the performance never surpassed arXiv:1708.02989v1 [cs.CL] 9 Aug 2017 that of TFIDF. Therefore, we focus on improving the TFIDF approach.
Depending on the domain of your data, it can be necessary to start with simple models. In general, unbalanced classification tasks are hard to evaluate due to the performance of the baseline. For an example that is not a classification task, look no further than news articles: the first few sentences of a news article form an incredibly effective baseline for summaries of the whole article.
First, we study a few of the characteristics of the dataset. In particular, we look at the sparsity between reference sentences and citances, what are some of the hurdles in handling citances, and whether chosen reference sentences appear more frequently in a particular section. Then we cover improvements to TFIDF. We also introduce topic models learned through Latent Dirichlet Allocation (LDA) and word embeddings learned through word2vec. These systems are studied for their ability to augment our TFIDF system. Finally, we present an analysis of how humans perform at this task.
CL-SciSumm 2016
We present a short overview of the different approaches used to solve Task 1a.
Aggarwal and Sharma [3] use bag-of-words bigrams, syntactic dependency cues and a set of rules for extracting parts of referenced documents that are relevant to citances.
In [14], researchers generate three combinations of an unsupervised graph-based sentence ranking approach with a supervised classification approach. In the first approach, sentence ranking is modified to use information provided by citing documents. In the second, the ranking procedure is applied as a filter before supervised classification. In the third, supervised learning is used as a filter to the cited document, before sentence ranking.
Cao et al. [9] model Task 1a as a ranking problem and apply SVM Rank for this purpose.
In [19], the citance is treated as a query over the sentences of the reference document. They used learningto-rank algorithms (RankBoost, RankNet, AdaRank, and Coordinate Ascent) for this problem with lexical (bag-of-words features), topic features and TextRank for ranking sentences. WordNet is used to compute concept similarity between citation contexts and candidate spans.
Lei et al. [18] use SVMs and rule-based methods with lexicon features (high frequency words within the reference text, LDA to train the reference document and citing documents, and co-occurrence lexicon) and similarities (IDF, Jaccard, and context similarity).
In [24], authors propose a linear combination between a TFIDF model and a single layer neural network model. This paper is the most similar to our work.
Saggion et al. [28] use supervised algorithms with feature vectors representing the citance and reference document sentences. Features include positional, Word-Net similarity measures, and rhetorical features.
We have chosen to use topic modeling and word embeddings to overcome the weaknesses of the TFIDF approach. Another participant of the CL-SciSumm 2016 shared task did the same [18]. Their system performed well on the development set, but not as well on the heldout test set. We show how improving a system with a topic model or a word embedding is a lot less straightforward than expected.
Preliminaries
Following are brief explanations of terms that will be used throughout the paper.
Cosine Similarity. This is a measure of similarity between two non zero vectors, A and B that measure the cosine of angle between them. Equation 1 shows the formula for calculating cosine similarity.
similarity(A, B) = cos θ = A · B A B(1)
In the above formula, θ is the angle between the two vectors A and B. We use cosine similarity to measure how far or close two sentences are from each other and rank them based on their similarity. In our task, each vector represents TFIDF or LDA values for all the words in a sentence. The higher the value of similarity(A, B), the greater the similarity is between the two sentences.
TFIDF. This is short for term frequency-inverse document frequency, and is a common scoring metric used for words in a query across a corpus of documents. The metric tries to capture the importance of a word by valuing frequency of the words use in a document and devaluing its appearance in every document. This was originally a method for retrieving documents from a corpus (instead of sentences from a document). For our task of summarization, this scoring metric was adjusted to help select matching sentences, so each sentence is treated as a document for our purposes. Thus, our "document" level frequencies are the frequencies of words in a sentence. The "corpus" will be the whole reference document. Then, the term frequency can be calculated by counting a word's frequency within a sentence. The inverse document frequency of a word will be based on the number of sentences that contain that word. When using TFIDF for calculating similarity, we use Equation 1 where the vectors are defined as:
A = a 1 , a 2 , . . . , a n where a i = tf wi · idf wi (2) idf wi = log(N/df wi )(3)
where tf wi refers to the term frequency of w i , df wi refers to the document frequency of w i (number of documents in which w i appears), and N refers to the total number of documents.
WordNet. This is a large lexical dataset for the English language [22]. The main relation among words in WordNet is synonymy. However, it contains other relations like antonymy, hyperonymy, hyponymy, meronymy, etc. For our task of summarization, we use synonymy for expanding words in reference sentences and citances. Since reference sentences and citances are written by two different authors, adding synonyms increases the chance of a word occurring in both sentences if they are both indeed related.
LDA. Latent Dirichlet Allocation is a technique used for topic modeling. It learns a generative model of a document. Topics are assumed to have some prior distribution, normally a symmetric Dirichlet distribution. Terms in the corpus are assumed to have a multinomial distribution. These assumptions form the basis of the method. After learning the parameters from a corpus, each term will have a topic distribution which can be used to determine the topics of a document. When using LDA for calculating similarity, we use Equation 1 where the vectors are defined as topic membership probabilities:
A = a 1 , a 2 , . . . , a n where a i = P (doc A ∈ topic i )(4) F 1 -score. To evaluate our methods, we have chosen the F 1 -score. The F 1 -score is a weighted average of precision and recall, where precision and recall receive equal weighting. This kind of weighted average is also referred to as the harmonic mean. Precision is the proportion of correct results among the results that were returned. And recall is the proportion of correct results among all possible correct results.
Our system outputs the top 3 sentences and we compute recall, precision, and F 1 -score using these sentences. If a relevant sentence appears in the top 3, then it factors into recall, precision, and F 1 -score. Thus, we naturally present the precision at N measure (P @N ) used by [8]. Precision at N is simply the proportion of correct results in the top N ranks. In our evaluations, N = 3. Average precision and the area under the ROC curve are two other measures that present a more complete picture when there is a large imbalance between classes. To keep in line with the evaluation for the BIRNDL shared task we chose to use P@N. Regardless, we focus on the F 1 -score rather than P @3 when determining if one system is better than another.
If we look at the percentage of sentences that appear in the gold standard, we see that roughly 90% of the sentences in our dataset are never chosen by an annotator. This means our desired class is rare and diverse, similar to outliers or anomalies [8]. Therefore, we should expect low performance from our system since our task is similar to anomaly detection [8] which has a hard time achieving good performance in such cases.
Dataset
The dataset [13] consists of 30 total documents separated into three sets of 10 documents each: training, development, and test sets. For the following analysis, no preprocessing has been done (for instance, stemming).
There are 23356 unique words among the reference documents in the dataset. The citances contain 5520 unique words. The most frequent word among reference documents appears in 4120 sentences. The most frequent word among citances appears in 521 sentences. There are 6700 reference sentences and 704 citances (although a few of these should actually be broken up into multiple sentences). The average reference sentence has approximately 22 words in this dataset whereas citances have an average of approximately 34 words.
In Figure 1 we can see the sparsity of the dataset. At a particular (x, y) along the curves we know x% of all sentences contain at least some number of unique words -a number equal to y% of the vocabulary. All sentences contain at least one word, which is a very small sliver of the vocabulary (appearing as 0% in the graph). The quicker the decay, the greater the sparsity. Noise in the dataset is one of the factors for the sparsity. We can see that citances, seen as a corpus, are in general less sparse than the reference texts. This can be an indication that citances have some common structure or semantics.
One possibility is that citance must have some level of summarization ability. If we look at the annotations of a document as a whole we see a pattern: the annotators tend to choose from a small pool of reference sentences. Therefore, the sentences chosen are usually somewhat general and serve as tiny summaries of a single concept. Furthermore, the chosen reference sentences make up roughly 10% of all reference sentences from which we have to choose.
Citances
It should be noted that citances have a few peculiarities, such as an abundance of citation markers and proper names. Citation markers (cues in written text demarcating a citation) will sometimes include the names of authors, thus the vocabulary for these sentences will include more proper names. This could justify the lesser sparsity if authors reoccur across citances. However, it could also justify greater sparsity since these authors may be unique. Identifying and ignoring citation markers should reduce noise. A preprocessing step we employ with this goal is the removal of all text enclosed in brackets of any kind.
To demonstrate the differences in difficulty a citance can pose we present two examples: one that is relatively simple and another that is relatively hard. In both examples the original citance marker is in italics.
Easy Citance: "According to Sproat et al. (1996), most prior work in Chinese segmentation has exploited lexical knowledge bases; indeed, the authors assert that they were aware of only one previously published instance (the mutual-information method of Sproat and Shih (1990)) of a purely statistical approach."
Reference Span: "Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information. The present proposal falls into the last group. Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach."
In the "Easy" case, there are many salient words in common between the reference spans we must retrieve and the citance. This is the ideal case for TFIDF since matching based on these words should produce good results. However in the "Hard" case:
Hard Citance: "A lot of work has been done in English for the purpose of anaphora resolution and various algorithms have been devised for this purpose (Aone and Bennette, 1996; Brenan , Friedman and Pollard, 1987; Ge, Hale and Charniak, 1998; Grosz, Aravind and Weinstein, 1995; McCarthy and Lehnert, 1995; Lappins and Leass, 1994; Mitkov, 1998 ; Soon, Ng and Lim, 1999)."
Reference Span: "We have described a robust, knowledge-poor approach to pronoun resolution which operates on texts preprocessed by a part-of-speech tagger."
We can see that there is no overlap of salient words, between the citance and text span. Not only is the citance somewhat vague, but any semantic overlap is not exact. For instance, "anaphora resolution" and "pronoun resolution" refer to the same concept but do not match lexically.
Frequency of Section Titles
We analyzed the frequency of section titles for the chosen reference sentences. Our analysis only excludes document P05-1053 from consideration -the document whose answer key was withheld. For each cited reference sentence, we looked at the title of the section in which it appears. The titles that appeared with the greatest frequency can be seen in Table 1. To extract these section titles we looked at the parent nodes of sentences within the XML document. The "title" and "abstract" sections are special since they refer to parent nodes of type other than SECTION. Due to OCR noise, a few section names were wrong. We manually corrected these section names. Then, we performed a slight normalization by removing any 's' character that appeared in the end of a name. These results clearly show sentences that are cited are not uniformly distributed within a document.
Preprocessing
Before we use the dataset in our system we preprocessed the dataset to reduce the number of errors. The dataset has lots of errors due to the use of OCR techniques. Broken words, non-ascii characters, and formating problems in XML files are some examples of these problems. We performed the following preprocessing steps to reduce noise in the dataset. First, we manually went over the citances and reference sentences fixing broken words (those separated by hyphen, space, or some nonascii characters). We automatically removed all nonascii characters from citance and reference text. Finally, we manually fixed some misformatted XML files that were missing closing tags.
TFIDF Approach
Our best performing system for the CL-SciSumm 2016 task was based on TFIDF. It achieved 13.68% F 1 -score on the test set for Task 1a. Our approach compares the TFIDF vectors of the citance and the sentences in the reference document. Each reference sentence is assigned a score according to the cosine similarity between itself and the citance. There were several variations studied to improve our TFIDF system. Table 2 contains the abbreviations we use when discussing a particular configuration. Stopwords were removed for all configurations. These words serve mainly to add noise, so their removal helps improve performance. There are two lists of stopwords used: one from sklearn (sk stop) and one from NLTK (nltk stop).
To remove the effect of using words in their different forms we used stemming (st) to reduce words to their root form. For this purpose, we use the Snowball Stemmer, provided by the NLTK package [7].
WordNet has been utilized to expand the semantics of the sentence. We obtain the lemmas from the synsets of each word in the sentence. We use the Lesk algorithm, provided through the NLTK package [7], to perform wordsense disambiguation. This is a necessary step before obtaining the synset of a word from Word-Net. Each synset is a collection of lemmas. The lemmas that constitute each synset are added to the word vector of the sentence; this augmented vector is used when calculating the cosine similarity instead of the original vector. We consider three different methods of using WordNet: ref wn, cit wn, and both wn, which will be explained in Subsection 4.1.
Our first implementation of WordNet expansion increased coverage at the cost of performance. With the proper adjustments, we were able to improve performance as well. This is another example of how the details and tuning of the implementation are critical in dealing with short text. Our new implementation takes care to only add a word once, even though it may ap-pear in the synsets of multiple words of the sentence. Details are found in Subsection 4.1.
In the shared task, filtering candidate sentences by length improved the system's performance substantially. Short sentences are unlikely to be chosen; they are often too brief to encapsulate a concept completely. Longer sentences are usually artifacts from the PDF to text conversion (for instance, a table transformed into a sentence). We eliminate from consideration all sentences outside a certain range of number of words. In our preliminary experiments, we found two promising lower bounds on the number of words: 8 and 15. The only upper bound we consider is 70, which also reduces computation time since longer sentences take longer to score. Each range appears in the tables as an ordered pair (min, max); e.g. 8 to 70 words it would appear as (8,70). This process eliminates some of the sentences our system is supposed to retrieve so our maximum attainable F 1 -score is lowered.
Improvements on TFIDF
The main drawback of the TFIDF method is its inability to handle situations where there is no overlap between citance and reference sentence. Thus, we decided to work on improving the WordNet expansion. In Table 3 we can see the performance of various different configurations, some which improve upon our previously best system. The first improvement was to make sure the synsets do not flood the sentence with additional terms. Instead of adding all synsets to the sentence, we only added unique terms found in those synsets. Thus, if a term appeared in multiple synsets of words in the sentence it would still only contribute once to the modified sentence.
While running the experiments, the WordNet preprocessing was only applied to the citances instead of both citances and reference sentences by accident. This increased our performance to 14.11% (first entry on Table 3b). To further investigate this, we also ran the WordNet expansion on only the reference sentences. This led to another subtlety being discovered, but before we can elaborate we must explain how WordNet expansion is performed.
Conceptually, the goal of the WordNet preprocessing stage is to increase the overlap between the words that appear in the citance and those that appear in the reference sentences. By including synonyms, a sentence has a greater chance to match with the citance. The intended effect was for the citances and reference sentences to meet in the middle.
The steps taken in WordNet expansion are as follows: each sentence is tokenized into single word tokens; we search for the synsets of each token in WordNet; if a synset is found, then the lemmas that constitute that synset are added to the sentence. The small subtlety referred to before is the duplication of original tokens: if a synset is found, it must contain the original token, so the original token gets added once more to the sentence. This adds more weight to the original tokens. Before the discovery of one-sided WordNet expansion, this was a key factor in our TFIDF results.
In actuality, adding synonyms to all reference sentences was a step too far. We believe that the addition of WordNet synsets to both the reference sentences and citances only served to add more noise. Due to the number of reference sentences, these additional synsets impacted the TFIDF values derived. However, if we only apply this transformation to the citances, the impact on the TFIDF values is minimal.
We now had to experiment with applying Word-Net asymmetrically: adding synsets to citances only (cit wn) and adding synsets to reference sentences only (ref wn). In addition, we ran experiments to test the effect of duplicating original tokens. This would still serve the purpose of increasing overlap, but lessen the noise we introduced as a result. We can see the difference in performance in Table 3a and Table 3b. In the development set, applying WordNet only to the reference sentences with duplication performed the best with an F 1 -score of 16.41%. For the test set, WordNet applied to only the citances performs the best with 14.12%. Regardless, our experiments indicate one-sided WordNet leads to better results. In the following sections, experiments only consider one-sided WordNet use.
Topic Modeling
To overcome the limitations of using a single sentence we constructed topic models to better capture the semantic information of a citance. Using Latent Dirichlet Allocation (LDA), we created various topic models for the computational linguistics domain.
Corpus Creation
First, we gathered a set of 34273 documents from the ACL Anthology 1 website. This set is comprised of all PDFs available to download. The next step was to convert the PDFs to text. Unfortunately, we ran into the same problem as the organizers of the shared task: the conversion from PDF to text left a lot to be desired. Additionally, some PDFs used an internal encoding thus resulting in an undecipherable conversion. Instead of trying to fix these documents, we decided to select a subset that seemed sufficiently error-free. Since poorly converted documents contain more symbols than usual, we chose to cluster the documents according to character frequencies. Using K-means, we clustered the documents into twelve different clusters. After clustering, we manually selected the clusters that seemed to have articles with acceptable noise. Interestingly, tables of content are part of the PDFs available for download from the anthology. Since these documents contain more formatting than text, they ended up clustered together. We chose to disregard these clusters as well. In total, 26686 documents remained as part of our "cleaned corpus".
Latent Dirichlet Allocation
LDA can be seen as a probabilistic factorization method that splits a term-document matrix into term-topic and topic-document matrices. The main advantage of LDA is its soft clustering: a single document can be part of many topics to varying degree.
We are interested in the resulting term-topic matrix that is derived from the corpus. With this matrix, we can convert terms into topic vectors where each dimension represents the term's extent of membership. These topic vectors provides new opportunities for achieving overlap between citances and reference sentences, thus allowing us to score sentences that would have a cosine similarity of zero between TFIDF vectors. Similar to K-means, we must choose the number of topics beforehand. Since we are using online LDA [11] there are a few additional parameters, specifically: κ, a parameter to adjust the learning rate for online LDA; τ 0 , a parameter to slow down the learning for the first few iterations. The ranges for each parameter are [0.5, 0.9] in increments of 0.1 for κ and 1, 256, 512, 768 for τ 0 .
We also experimented with different parameters for the vocabulary. The minimum number of documents in which a word had to appear was an absolute number of documents (min df): 10 or 40. The maximum number of documents in which a word could appear was a percentage of the total corpus (max df): 0.87, 0.93, 0.99.
One way to evaluate the suitability of a learned topic model is through a measure known as perplexity [11]. Since LDA learns a distribution for topics and terms, we can calculate the probability of any document according to this distribution. Given an unseen collection of documents taken from the same domain, we calculate the probability of this collection according to the topic model. We expect a good topic model to be less "surprised" at these documents if they are a representative sample of the domain. In Figure 2, we graph the perplexity of our topic models when judged on the reference documents of the training and development set of the CL-SciSumm dataset.
Unfortunately, the implementation we used does not normalize these values, which means we cannot use the perplexity for comparing two models that have a different number of topics. Keep in mind the numbers in Figure 2 do not reflect the perplexity directly. Perplexity is still useful for evaluating our choice of κ and τ 0 . We omit plotting the perplexity for different τ 0 values since, with regards to perplexity, models with τ 0 > 1 always underperformed. Figure 2 makes a strong case for the choice of κ = 0.5. However, our experiments demonstrate that, for ranking, higher κ and higher τ 0 can be advantageous.
In order to compare these models across different number of topics we evaluated their performance at Task 1a. The results of these runs can be seen in Table 5a. Sentences were first converted to LDA topic vectors then ranked by their cosine similarity to the citance (also a topic vector). The performance of this method is worse than all TFIDF configurations, regardless of which LDA model is chosen. We merely use these results to compare the different models. Nevertheless, the topics learned by LDA were not immediately evidentsuggesting there is room for improvement in the choice of parameters. Table 4 has a selection of the most interpretable topics for one of the models.
Word Embeddings
Another way to augment the semantic information of the sentences is through word embeddings. The idea behind word embeddings is to assign each word a vector of real numbers. These vectors are chosen such that if two words a similar, their vectors should be similar as well. We learn these word embeddings in the same manner as word2vec [21].
We use DMTK [20] to learn our embeddings. DMTK provides a distributed implementation of word2vec. We trained two separate embeddings: WE-1 and WE-2. We only explored two different parameter settings. Both embeddings consist of a 200 dimensional vector space. Training was slightly more intensive for WE-1, which ran for 15 epochs sampling 5 negative examples. The second embedding, WE-2, ran for 13 epochs sampling only 4 negative examples. The minimum count for words in the vocabulary was also different: WE-1 required words to appear 40 times whereas WE-2 required words to appear 60 (thus, resulting in a smaller vocabulary).
To obtain similarity scores, we use the Word Mover's Distance [17]. Thus, instead of measuring the similarity to an averaged vector for each sentence, we consider the vector of each word separately. In summary, given two sentences, each composed of words which are represented as vectors, we want to move the vectors of one sentence atop those of the other by moving each vector as little as possible. The results obtained can be found in Table 5b.
Word embeddings outperformed topic models on the development set; while the highest scoring topic model achieved 7.90% F 1 -score on the development set, the highest scoring word embedding achieved 13.77%.
Tradeoff Parameterization
In order to combine the TFIDF systems with LDA or Word Embedding systems, we introduce a parameter to vary the importance of the added similarity compared to the TFIDF similarity: λ. The equation for the new scores is thus:
λ · T F IDF + (1 − λ) · other(5)
where other stands for either LDA or WE. Each sentence is scored by each system separately. These two values (the TFIDF similarity and the other system's similarity) are combined through Equation 5. The sentences are then ranked according to these adjusted values.
We evaluated this method by taking the 10 best performing systems on the development set for both TFIDF and LDA. Each combination of a TFIDF system with an LDA system was tested. We test these hybrid systems with values of λ between [0.7, 0.99] in 0.01 increments. There were only 6 different configura- tions for word embeddings so we used all of them with the same values for λ.
After obtaining the scores for the development set, we chose the 100 best systems to run on the test set (for LDA only). Systems consist of a choice of TFIDF system, a choice of LDA system, and a value for λ. The five highest scoring systems are shown in Table 7.
We can see that a particular topic model dominated. The LDA model that best complemented any TFIDF system was only the fourth best LDA system on the development set. There were multiple combinations with the same F 1 -score, so we had to choose which to display in Tabel 7. This obscures the results since other models attain F 1 -score as high as 14.64%. In particular, the second best performing topic model in this experiment was an 80 topic model that is not in Table 5a.
The following best topic model had 50 topics.
Interestingly, a TFIDF system coupled with word embeddings performs incredibly well on the development set as can be seen in Table 6 (values similar to LDA if we look at Fig 3). However, once we move to the test set, all improvements become meager. It is possible that word embeddings are more sensitive to the choice of λ.
Although we do not provide the numbers, if we analyze the distribution of scores given by word embeddings, we find that the distribution is much flatter. TFIDF scores drop rapidly; for some citances most sentences are scored with a zero. LDA improves upon that by having less zero-score sentences, but the scores still decay until they reach zero. Word embeddings, however, seem to grant a minimum score to all sentences (most scores are greater than 0.4). Furthermore, there is very little variability from lowest to highest score. This is further evidenced by the wide range of λ values that yield good performance on Table 6. We conjecture the shape of these distributions may be responsible for the differences in performance.
Statistical Analysis
Although the F 1 -scores have improved by augmenting the bare-bones TFIDF approach, we must still check whether this improvement is statistically significant. Since some of these systems have very similar F 1 -scores, we cannot simply provide a 95% confidence intervals for each F 1 -score individually; we are forced to perform paired t-tests which mitigate the variance inherent in the data.
Given two systems, A and B, we resample with replacement 10000 times the dataset tested on and calculate the F 1 -score for each new sample. By evaluating on the same sample, any variability due to the data (harder/easier citances, for instance) is ignored. Finally, these pairs of F 1 -scores are then used to calculate a pvalue for the paired t-test.
We calculate the significance of differences between the top entries for each category (TFIDF and two tradeoff variations) evaluated on the test set. For the best performing system (that is TFIDF + LDA at F 1 -score of 14.77%) the difference is statistically significant from TFIDF + WE (at 14.24%) and TFIDF (at 14.11%) with p-values of 0.0015 and 0.0003, respectively. However, the difference between TFIDF + WE and TFIDF is not statistically significant (p-value of 0.4830).
Human Annotators
In order to determine whether the performance of our system is much lower than what can be achieved, we ran an experiment with human annotators. Since human annotators require more time to perform the task, we had to truncate the test set to just three documents, chosen at random.
The subset used to evaluate the human annotators consists of three different articles from the test set: C00-2123, N06-2049, and J96-3004. The 20 citances that cite C00-2123 only select 24 distinct sentences from the reference article, which contains 203 sentences. Similarly, the 22 citances that cite N06-2049 select only 35 distinct reference sentences from 155 total. The last article, J96-3004, has 69 citances annotated that select 109 distinct reference sentences from the 471 sentences found in the article.
To avoid "guiding" the annotators to the correct answers, we provided minimal instructions. We explained the problem of matching citances to their relevant reference spans to each annotator. Since the objective was to compare to our system's performance, the annotators had at their disposal the XML files given to our system. Thus, the sentence boundaries are interpreted consistently by our system and the human annotators. We instructed them to choose one or many sentences, possibly including the title, as the reference span for a citance.
The performance of the human annotators and three of our best system configurations can be seen in Table 8. The raw score for two of the annotators had extremely low precision. Upon further analysis, we noticed outliers where more than ten different reference sentences had been chosen.
To provide a fairer assessment, the scores were adjusted for two of the annotators: if more than ten sentences were selected for a citance, we replace the sentences with simply the article title. We argue this is justified since any citance that requires that many sentences to be chosen is probably referencing the paper as a whole. After these adjustments, the score of the human annotators rose considerably.
Discussion
The fact that WordNet went from decreasing the performance of our system to increasing its performance shows the level of detail required to tune a system for the task of reference span identification. The performance of our human annotators demonstrate the difficulty of this task -it requires much more precision. Additionally, the human scores show there is room for improvement.
Task 1a can be framed as identifying semantically similar sentences. This perspective is best represented by the LDA systems of Section 5.2 and the word embeddings of Section 6. However, as can be seen by the results we obtained in Table 5, relying solely on semantic similarity is not the best approach.
Methods such as TFIDF and sentence limiting do not attempt to solve Task 1a head-on. Through a narrowing of possibilities, these methods improve the odds of choosing the correct sentences. Only after sifting through the candidate sentences with these methods can topic modeling be of use.
Combining TFIDF and LDA through a tradeoff parameter allowed us to test whether topic modeling does indeed improve our performance. Clearly, that is the case since our best performing system uses both TFIDF and LDA. The same experiment was performed with word embeddings, although the improvements were not as great.
Since word embeddings performed well alone but didn't provide much of a boost to TFIDF, it is possible the information captured by the embeddings overlaps with the information captured by TFIDF.
The question that remains is whether the topic modeling was done as well as it could have been. The results in Section 7 require further analysis. As we can see from Figure 3, very few combinations provide a net gain in performance. Likewise, it is possible that further tuning of word embedding parameters could improve our performance.
Conclusion
During the BIRNDL shared task, we were surprised by the result of our TFIDF system, which achieved an F1-score of 13.65%. More complex systems did not obtain a higher F1-score. In this paper, we show it is possible to improve our TFIDF system with additional semantic information.
Through the use of WordNet we achieve an F1-score of 14.11%. Word embeddings increase this F1-score to 14.24%. If we employ LDA topic models instead of word embeddings, our system attains its best performance: an F1-score of 14.77%. This improvement is statistically significant.
Although these increases seem modest, the difficulty of the task should be taken into account. We performed an experiment with human annotators to assess what F1-score would constitute a reasonable goal for our system. The best F1-score obtained by a human was 27.95%. This leads us to believe there is still room for improvement on this task.
In the future, the study of overlap between TFIDF and word embeddings could provide a better understanding of the limits of this task. Finally, we also propose the simultaneous combination of LDA topic models and word embeddings. | 6,316 |
1708.02989 | 2629354081 | The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system. | Because of this recognized importance of citation information, research has also been done on properly tagging or marking the actual citation. Powley and Dale @cite_6 give insight into recognizing text that is a citation. Siddharthan and Teufel demonstrate how this is useful in reducing the noise when comparing citation text to reference text @cite_4 . Siddharthan and Teufel also introduce scientific attribution'' which can help in discourse classification. The importance of discourse classification is further developed in @cite_9 : they were able to show how identifying the discourse facets helps produce coherent summaries. | {
"abstract": [
"In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the proposed approach outperforms several baselines in terms of both extraction quality and fluency.",
"Scientific papers revolve around citations, and for many discourse level tasks one needs to know whose work is being talked about at any point in the discourse. In this paper, we introduce the scientific attribution task, which links different linguistic expressions to citations. We discuss the suitability of different evaluation metrics and evaluate our classification approach to deciding attribution both intrinsically and in an extrinsic evaluation where information about scientific attribution is shown to improve performance on Argumentative Zoning, a rhetorical classification task.",
"Citations play an essential role in navigating academic literature and following chains of evidence in research. With the growing availability of large digital archives of scientific papers, the automated extraction and analysis of citations is becoming increasingly relevant. However, existing approaches to citation extraction still fall short of the high accuracy required to build more sophisticated and reliable tools for citation analysis and corpus navigation. In this paper, we present techniques for high accuracy extraction of citations and references from academic papers. By collecting multiple sources of evidence about entities from documents, and integrating citation extraction, reference segmentation, and citation-reference matching, we are able to significantly improve performance in subtasks including citation identification, author named entity recognition, and citation-reference matching. Applying our algorithm to previously-unseen documents, we demonstrate high F-measure performance of 0.980 for citation extraction, 0.983 for author named entity recognition, and 0.948 for citation-reference matching."
],
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_6"
],
"mid": [
"2157979304",
"36261270",
"2126953195"
]
} | Identifying Reference Spans: Topic Modeling and Word Embeddings help IR | The CL-SciSumm 2016 [12] shared task posed the problem of automatic summarization in the computational linguistics domain. Single document summarization is hardly new [30,5,4]; however, in addition to the reference document to be summarized, we are also given citances, i.e. sentences that cite our reference document. The usefulness of citances in the process of summarization is immediately apparent. A citance can hint at what is interesting about the document.
This objective was split into three tasks. Given a citance (a sentence containing a citation), in Task 1a we must identify the span of text in the reference document that best reflects what has been cited. Task 1b asks us to classify the cited aspect according to a predefined set of facets: hypothesis, aim, method, results, and implication. Finally, Task 2 is the generation of a structured summary for the reference document. Although the shared task is broken up into multiple tasks, this paper concerns itself solely with Task 1a.
Task 1a is quite interesting all by itself. We can think of Task 1a as a small scale summarization. Thus, being precise is incredibly important: the system must often find a single sentence among hundreds (in some cases, however, multiple sentences are correct). The results of the workshop [15] reveal that Task 1a is quite challenging. There was a varied selection of methods used for this problem: SVMs, neural networks, learning-to-rank algorithms, and more. Regardless, our previous system had the best performance on the test set for CL-SciSumm: cosine similarity between weighted bag-of-words vectors. The weighting used is well known in information retrieval: term frequency · inverse document frequency (TFIDF). Although TFIDF is a well known and understood method in information retrieval, it is surprising that it achieved better performance than more heavily engineered solutions. Thus, our goal in this paper is twofold: to analyze and improve on the performance of TFIDF and to push beyond its performance ceiling.
In the process of exploring different configurations, we have observed the performance of our TFIDF method vary substantially. Text preprocessing parameters can have a significant effect on the final performance. This variance also underscores the need to start with a basic system and then add complexity step-by-step in a reasoned manner. Another prior attempt employed SVMs with tree kernels but the performance never surpassed that of TFIDF. Therefore, we focus on improving the TFIDF approach.
Depending on the domain of your data, it can be necessary to start with simple models. In general, unbalanced classification tasks are hard to evaluate due to the performance of the baseline. For an example that is not a classification task, look no further than news articles: the first few sentences of a news article form an incredibly effective baseline for summaries of the whole article.
First, we study a few of the characteristics of the dataset. In particular, we look at the sparsity between reference sentences and citances, what are some of the hurdles in handling citances, and whether chosen reference sentences appear more frequently in a particular section. Then we cover improvements to TFIDF. We also introduce topic models learned through Latent Dirichlet Allocation (LDA) and word embeddings learned through word2vec. These systems are studied for their ability to augment our TFIDF system. Finally, we present an analysis of how humans perform at this task.
CL-SciSumm 2016
We present a short overview of the different approaches used to solve Task 1a.
Aggarwal and Sharma [3] use bag-of-words bigrams, syntactic dependency cues and a set of rules for extracting parts of referenced documents that are relevant to citances.
In [14], researchers generate three combinations of an unsupervised graph-based sentence ranking approach with a supervised classification approach. In the first approach, sentence ranking is modified to use information provided by citing documents. In the second, the ranking procedure is applied as a filter before supervised classification. In the third, supervised learning is used as a filter to the cited document, before sentence ranking.
Cao et al. [9] model Task 1a as a ranking problem and apply SVM Rank for this purpose.
In [19], the citance is treated as a query over the sentences of the reference document. They used learning-to-rank algorithms (RankBoost, RankNet, AdaRank, and Coordinate Ascent) for this problem with lexical (bag-of-words) features, topic features, and TextRank for ranking sentences. WordNet is used to compute concept similarity between citation contexts and candidate spans.
Lei et al. [18] use SVMs and rule-based methods with lexicon features (high frequency words within the reference text, LDA to train the reference document and citing documents, and co-occurrence lexicon) and similarities (IDF, Jaccard, and context similarity).
In [24], authors propose a linear combination between a TFIDF model and a single layer neural network model. This paper is the most similar to our work.
Saggion et al. [28] use supervised algorithms with feature vectors representing the citance and reference document sentences. Features include positional features, WordNet similarity measures, and rhetorical features.
We have chosen to use topic modeling and word embeddings to overcome the weaknesses of the TFIDF approach. Another participant of the CL-SciSumm 2016 shared task did the same [18]. Their system performed well on the development set, but not as well on the held-out test set. We show how improving a system with a topic model or a word embedding is a lot less straightforward than expected.
Preliminaries
Following are brief explanations of terms that will be used throughout the paper.
Cosine Similarity. This is a measure of similarity between two non-zero vectors, A and B, given by the cosine of the angle between them. Equation 1 shows the formula for calculating cosine similarity.
$$\text{similarity}(A, B) = \cos\theta = \frac{A \cdot B}{\|A\|\,\|B\|} \tag{1}$$
In the above formula, θ is the angle between the two vectors A and B. We use cosine similarity to measure how close two sentences are to each other and rank them based on their similarity. In our task, each vector represents TFIDF or LDA values for all the words in a sentence. The higher the value of similarity(A, B), the greater the similarity between the two sentences.
TFIDF. This is short for term frequency-inverse document frequency, a common scoring metric for words in a query across a corpus of documents. The metric tries to capture the importance of a word by valuing the frequency of the word's use in a document and devaluing its appearance in every document. This was originally a method for retrieving documents from a corpus (instead of sentences from a document). For our task of summarization, this scoring metric was adjusted to help select matching sentences, so each sentence is treated as a document for our purposes. Thus, our "document" level frequencies are the frequencies of words in a sentence. The "corpus" will be the whole reference document. Then, the term frequency can be calculated by counting a word's frequency within a sentence. The inverse document frequency of a word will be based on the number of sentences that contain that word. When using TFIDF for calculating similarity, we use Equation 1 where the vectors are defined as:
$$A = \langle a_1, a_2, \ldots, a_n \rangle \quad \text{where} \quad a_i = tf_{w_i} \cdot idf_{w_i} \tag{2}$$
$$idf_{w_i} = \log(N / df_{w_i}) \tag{3}$$
where $tf_{w_i}$ refers to the term frequency of $w_i$, $df_{w_i}$ refers to the document frequency of $w_i$ (number of documents in which $w_i$ appears), and $N$ refers to the total number of documents.
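The sentence-level TFIDF scoring above can be sketched in a few lines of Python. The sketch below uses scikit-learn, whose idf includes smoothing and so differs slightly from Equation 3; function and variable names are ours, not the authors'.

```python
# Minimal sketch of sentence-level TFIDF ranking: each reference
# sentence is treated as a "document", and the whole reference
# document serves as the corpus for the idf statistics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_reference_sentences(reference_sentences, citance, top_n=3):
    vectorizer = TfidfVectorizer(stop_words="english")
    ref_vectors = vectorizer.fit_transform(reference_sentences)
    cit_vector = vectorizer.transform([citance])
    scores = cosine_similarity(cit_vector, ref_vectors)[0]
    ranked = sorted(enumerate(scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_n]  # (sentence index, similarity) pairs
```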
WordNet. This is a large lexical dataset for the English language [22]. The main relation among words in WordNet is synonymy. However, it contains other relations like antonymy, hyperonymy, hyponymy, meronymy, etc. For our task of summarization, we use synonymy for expanding words in reference sentences and citances. Since reference sentences and citances are written by two different authors, adding synonyms increases the chance of a word occurring in both sentences if they are both indeed related.
LDA. Latent Dirichlet Allocation is a technique used for topic modeling. It learns a generative model of a document. Topics are assumed to have some prior distribution, normally a symmetric Dirichlet distribution. Terms in the corpus are assumed to have a multinomial distribution. These assumptions form the basis of the method. After learning the parameters from a corpus, each term will have a topic distribution which can be used to determine the topics of a document. When using LDA for calculating similarity, we use Equation 1 where the vectors are defined as topic membership probabilities:
$$A = \langle a_1, a_2, \ldots, a_n \rangle \quad \text{where} \quad a_i = P(\text{doc}_A \in \text{topic}_i) \tag{4}$$
F1-score. To evaluate our methods, we have chosen the F1-score. The F1-score is a weighted average of precision and recall, where precision and recall receive equal weighting. This kind of weighted average is also referred to as the harmonic mean. Precision is the proportion of correct results among the results that were returned. Recall is the proportion of correct results among all possible correct results.
Our system outputs the top 3 sentences and we compute recall, precision, and F1-score using these sentences. If a relevant sentence appears in the top 3, then it factors into recall, precision, and F1-score. Thus, we naturally present the precision at N measure (P@N) used by [8]. Precision at N is simply the proportion of correct results in the top N ranks. In our evaluations, N = 3. Average precision and the area under the ROC curve are two other measures that present a more complete picture when there is a large imbalance between classes. To keep in line with the evaluation for the BIRNDL shared task we chose to use P@N. Regardless, we focus on the F1-score rather than P@3 when determining if one system is better than another.
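A sketch of this top-3 evaluation for one citance, under the assumption that precision is computed over the three returned sentences and recall over the gold reference span; the helper name is ours.

```python
# Precision@3, recall, and F1 for one citance, given the predicted
# top-3 sentence ids and the gold sentence ids.
def prf_at_3(predicted_top3, gold_sentences):
    hits = len(set(predicted_top3) & set(gold_sentences))
    precision = hits / len(predicted_top3)
    recall = hits / len(gold_sentences) if gold_sentences else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```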
If we look at the percentage of sentences that appear in the gold standard, we see that roughly 90% of the sentences in our dataset are never chosen by an annotator. This means our desired class is rare and diverse, similar to outliers or anomalies [8]. Therefore, we should expect low performance from our system since our task is similar to anomaly detection [8] which has a hard time achieving good performance in such cases.
Dataset
The dataset [13] consists of 30 total documents separated into three sets of 10 documents each: training, development, and test sets. For the following analysis, no preprocessing has been done (for instance, stemming).
There are 23356 unique words among the reference documents in the dataset. The citances contain 5520 unique words. The most frequent word among reference documents appears in 4120 sentences. The most frequent word among citances appears in 521 sentences. There are 6700 reference sentences and 704 citances (although a few of these should actually be broken up into multiple sentences). The average reference sentence has approximately 22 words in this dataset whereas citances have an average of approximately 34 words.
In Figure 1 we can see the sparsity of the dataset. At a particular (x, y) along the curves we know x% of all sentences contain at least some number of unique words: a number equal to y% of the vocabulary. All sentences contain at least one word, which is a very small sliver of the vocabulary (appearing as 0% in the graph). The quicker the decay, the greater the sparsity. Noise in the dataset is one of the factors behind the sparsity. We can see that citances, seen as a corpus, are in general less sparse than the reference texts. This can be an indication that citances have some common structure or semantics.
One possibility is that citances must have some level of summarization ability. If we look at the annotations of a document as a whole we see a pattern: the annotators tend to choose from a small pool of reference sentences. Therefore, the sentences chosen are usually somewhat general and serve as tiny summaries of a single concept. Furthermore, the chosen reference sentences make up roughly 10% of all reference sentences from which we have to choose.
Citances
It should be noted that citances have a few peculiarities, such as an abundance of citation markers and proper names. Citation markers (cues in written text demarcating a citation) will sometimes include the names of authors, thus the vocabulary for these sentences will include more proper names. This could justify the lesser sparsity if authors recur across citances. However, it could also justify greater sparsity since these authors may be unique. Identifying and ignoring citation markers should reduce noise. A preprocessing step we employ with this goal is the removal of all text enclosed in brackets of any kind, as sketched below.
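A minimal sketch of that bracket-stripping step; the exact pattern the authors used is not given, so the regular expression below (which does not handle nested brackets) is an assumption.

```python
import re

# Remove text enclosed in round, square, or curly brackets, e.g.
# citation markers such as "(Sproat and Shih, 1990)" or "[12]".
BRACKETS = re.compile(r"\([^()]*\)|\[[^\[\]]*\]|\{[^{}]*\}")

def strip_citation_markers(text):
    return BRACKETS.sub("", text)

# strip_citation_markers("A lot of work (Aone and Bennette, 1996) .")
# -> "A lot of work  ."
```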
To demonstrate the differences in difficulty a citance can pose we present two examples: one that is relatively simple and another that is relatively hard. In both examples the original citance marker is in italics.
Easy Citance: "According to Sproat et al. (1996), most prior work in Chinese segmentation has exploited lexical knowledge bases; indeed, the authors assert that they were aware of only one previously published instance (the mutual-information method of Sproat and Shih (1990)) of a purely statistical approach."
Reference Span: "Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information. The present proposal falls into the last group. Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach."
In the "Easy" case, there are many salient words in common between the reference spans we must retrieve and the citance. This is the ideal case for TFIDF since matching based on these words should produce good results. However in the "Hard" case:
Hard Citance: "A lot of work has been done in English for the purpose of anaphora resolution and various algorithms have been devised for this purpose (Aone and Bennette, 1996; Brenan , Friedman and Pollard, 1987; Ge, Hale and Charniak, 1998; Grosz, Aravind and Weinstein, 1995; McCarthy and Lehnert, 1995; Lappins and Leass, 1994; Mitkov, 1998 ; Soon, Ng and Lim, 1999)."
Reference Span: "We have described a robust, knowledge-poor approach to pronoun resolution which operates on texts preprocessed by a part-of-speech tagger."
We can see that there is no overlap of salient words, between the citance and text span. Not only is the citance somewhat vague, but any semantic overlap is not exact. For instance, "anaphora resolution" and "pronoun resolution" refer to the same concept but do not match lexically.
Frequency of Section Titles
We analyzed the frequency of section titles for the chosen reference sentences. Our analysis excludes only document P05-1053, the document whose answer key was withheld. For each cited reference sentence, we looked at the title of the section in which it appears. The titles that appeared with the greatest frequency can be seen in Table 1. To extract these section titles we looked at the parent nodes of sentences within the XML document. The "title" and "abstract" sections are special since they refer to parent nodes of type other than SECTION. Due to OCR noise, a few section names were wrong. We manually corrected these section names. Then, we performed a slight normalization by removing any 's' character at the end of a name. These results clearly show sentences that are cited are not uniformly distributed within a document.
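A sketch of this extraction, assuming the shared task's XML layout of SECTION elements carrying a title attribute and containing S sentence nodes with sid attributes; the layout and normalization details here are our reading of the description, not the authors' code.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def section_title_counts(xml_path, cited_sids):
    # Count, for each cited sentence id, the title of its parent section.
    # Only SECTION nodes are covered; "title" and "abstract" parents
    # would need special handling, as noted above.
    root = ET.parse(xml_path).getroot()
    counts = Counter()
    for section in root.iter("SECTION"):
        # Light normalization: lowercase and drop a trailing 's'.
        title = section.get("title", "").strip().lower().rstrip("s")
        for sentence in section.iter("S"):
            if sentence.get("sid") in cited_sids:
                counts[title] += 1
    return counts
```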
Preprocessing
Before using the dataset in our system, we preprocessed it to reduce the number of errors. The dataset has many errors due to the use of OCR techniques. Broken words, non-ASCII characters, and formatting problems in XML files are some examples of these problems. We performed the following preprocessing steps to reduce noise in the dataset. First, we manually went over the citances and reference sentences fixing broken words (those separated by a hyphen, space, or some non-ASCII character). We automatically removed all non-ASCII characters from citance and reference text. Finally, we manually fixed some misformatted XML files that were missing closing tags.
TFIDF Approach
Our best performing system for the CL-SciSumm 2016 task was based on TFIDF. It achieved an F1-score of 13.68% on the test set for Task 1a. Our approach compares the TFIDF vectors of the citance and the sentences in the reference document. Each reference sentence is assigned a score according to the cosine similarity between itself and the citance. There were several variations studied to improve our TFIDF system. Table 2 contains the abbreviations we use when discussing a particular configuration. Stopwords were removed for all configurations. These words serve mainly to add noise, so their removal helps improve performance. There are two lists of stopwords used: one from sklearn (sk stop) and one from NLTK (nltk stop).
To remove the effect of using words in their different forms we used stemming (st) to reduce words to their root form. For this purpose, we use the Snowball Stemmer, provided by the NLTK package [7].
WordNet has been utilized to expand the semantics of the sentence. We obtain the lemmas from the synsets of each word in the sentence. We use the Lesk algorithm, provided through the NLTK package [7], to perform word-sense disambiguation. This is a necessary step before obtaining the synset of a word from WordNet. Each synset is a collection of lemmas. The lemmas that constitute each synset are added to the word vector of the sentence; this augmented vector is used when calculating the cosine similarity instead of the original vector. We consider three different methods of using WordNet: ref wn, cit wn, and both wn, which will be explained in Subsection 4.1.
Our first implementation of WordNet expansion increased coverage at the cost of performance. With the proper adjustments, we were able to improve performance as well. This is another example of how the details and tuning of the implementation are critical in dealing with short text. Our new implementation takes care to only add a word once, even though it may appear in the synsets of multiple words of the sentence. Details are found in Subsection 4.1.
In the shared task, filtering candidate sentences by length improved the system's performance substantially. Short sentences are unlikely to be chosen; they are often too brief to encapsulate a concept completely. Longer sentences are usually artifacts from the PDF to text conversion (for instance, a table transformed into a sentence). We eliminate from consideration all sentences outside a certain range of number of words. In our preliminary experiments, we found two promising lower bounds on the number of words: 8 and 15. The only upper bound we consider is 70, which also reduces computation time since longer sentences take longer to score. Each range appears in the tables as an ordered pair (min, max); e.g., a range of 8 to 70 words appears as (8, 70). This process eliminates some of the sentences our system is supposed to retrieve, so our maximum attainable F1-score is lowered. A sketch of this candidate preparation appears below.
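Putting the pieces above together, a candidate-preparation sketch might look as follows; whitespace tokenization and the function name are our simplifications, and the NLTK stopwords corpus is assumed to be available.

```python
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer

STOP = set(stopwords.words("english"))
STEM = SnowballStemmer("english")

def prepare_candidate(sentence, min_words=8, max_words=70):
    # Apply the (min, max) length filter, then remove stopwords and stem.
    tokens = sentence.split()
    if not (min_words <= len(tokens) <= max_words):
        return None  # sentence is excluded from consideration
    return [STEM.stem(t) for t in tokens if t.lower() not in STOP]
```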
Improvements on TFIDF
The main drawback of the TFIDF method is its inability to handle situations where there is no overlap between citance and reference sentence. Thus, we decided to work on improving the WordNet expansion. In Table 3 we can see the performance of various different configurations, some which improve upon our previously best system. The first improvement was to make sure the synsets do not flood the sentence with additional terms. Instead of adding all synsets to the sentence, we only added unique terms found in those synsets. Thus, if a term appeared in multiple synsets of words in the sentence it would still only contribute once to the modified sentence.
While running the experiments, the WordNet preprocessing was accidentally applied only to the citances instead of both citances and reference sentences. This increased our performance to 14.11% (first entry in Table 3b). To further investigate this, we also ran the WordNet expansion on only the reference sentences. This led to another subtlety being discovered, but before we can elaborate we must explain how WordNet expansion is performed.
Conceptually, the goal of the WordNet preprocessing stage is to increase the overlap between the words that appear in the citance and those that appear in the reference sentences. By including synonyms, a sentence has a greater chance to match with the citance. The intended effect was for the citances and reference sentences to meet in the middle.
The steps taken in WordNet expansion are as follows: each sentence is tokenized into single word tokens; we search for the synsets of each token in WordNet; if a synset is found, then the lemmas that constitute that synset are added to the sentence. The small subtlety referred to before is the duplication of original tokens: if a synset is found, it must contain the original token, so the original token gets added once more to the sentence. This adds more weight to the original tokens. Before the discovery of one-sided WordNet expansion, this was a key factor in our TFIDF results.
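A sketch of that expansion using NLTK's Lesk implementation. The duplicate flag reproduces the re-adding of original tokens described above; whether the authors structured their code this way is an assumption.

```python
from nltk.wsd import lesk

def expand_with_wordnet(tokens, duplicate=True):
    # Disambiguate each token with Lesk, then add each new lemma once.
    expanded = list(tokens)
    seen = set(tokens)
    for token in tokens:
        synset = lesk(tokens, token)  # may be None if no synset is found
        if synset is None:
            continue
        if duplicate:
            expanded.append(token)  # extra weight for the original token
        for lemma in synset.lemma_names():
            if lemma not in seen:
                seen.add(lemma)
                expanded.append(lemma)
    return expanded
```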
In actuality, adding synonyms to all reference sentences was a step too far. We believe that the addition of WordNet synsets to both the reference sentences and citances only served to add more noise. Due to the number of reference sentences, these additional synsets impacted the TFIDF values derived. However, if we only apply this transformation to the citances, the impact on the TFIDF values is minimal.
We now had to experiment with applying WordNet asymmetrically: adding synsets to citances only (cit wn) and adding synsets to reference sentences only (ref wn). In addition, we ran experiments to test the effect of duplicating original tokens. This would still serve the purpose of increasing overlap, but lessen the noise we introduced as a result. We can see the difference in performance in Table 3a and Table 3b. On the development set, applying WordNet only to the reference sentences with duplication performed the best, with an F1-score of 16.41%. For the test set, WordNet applied only to the citances performs the best with 14.12%. Regardless, our experiments indicate one-sided WordNet expansion leads to better results. In the following sections, experiments only consider one-sided WordNet use.
Topic Modeling
To overcome the limitations of using a single sentence we constructed topic models to better capture the semantic information of a citance. Using Latent Dirichlet Allocation (LDA), we created various topic models for the computational linguistics domain.
Corpus Creation
First, we gathered a set of 34273 documents from the ACL Anthology website. This set is comprised of all PDFs available to download. The next step was to convert the PDFs to text. Unfortunately, we ran into the same problem as the organizers of the shared task: the conversion from PDF to text left a lot to be desired. Additionally, some PDFs used an internal encoding, thus resulting in an undecipherable conversion. Instead of trying to fix these documents, we decided to select a subset that seemed sufficiently error-free. Since poorly converted documents contain more symbols than usual, we chose to cluster the documents according to character frequencies. Using K-means, we clustered the documents into twelve different clusters. After clustering, we manually selected the clusters that seemed to have articles with acceptable noise. Interestingly, tables of contents are part of the PDFs available for download from the anthology. Since these documents contain more formatting than text, they ended up clustered together. We chose to disregard these clusters as well. In total, 26686 documents remained as part of our "cleaned corpus".
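A sketch of this noise-screening step: represent each converted document by its character-frequency profile and cluster with K-means. Only the character features and k = 12 come from the description; the normalization choice is ours, and empty documents are assumed to have been dropped beforehand.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

def cluster_by_char_frequency(texts, n_clusters=12, seed=0):
    vec = CountVectorizer(analyzer="char")  # one feature per character
    counts = vec.fit_transform(texts)
    # Normalize counts to frequencies so document length is factored out.
    lengths = np.asarray(counts.sum(axis=1)).ravel()
    freqs = counts.multiply(1.0 / lengths[:, None]).tocsr()
    return KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(freqs)
```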
Latent Dirichlet Allocation
LDA can be seen as a probabilistic factorization method that splits a term-document matrix into term-topic and topic-document matrices. The main advantage of LDA is its soft clustering: a single document can be part of many topics to varying degree.
We are interested in the resulting term-topic matrix that is derived from the corpus. With this matrix, we can convert terms into topic vectors where each dimension represents the term's extent of membership. These topic vectors provide new opportunities for achieving overlap between citances and reference sentences, thus allowing us to score sentences that would have a cosine similarity of zero between TFIDF vectors. Similar to K-means, we must choose the number of topics beforehand. Since we are using online LDA [11] there are a few additional parameters, specifically: κ, a parameter to adjust the learning rate for online LDA; τ₀, a parameter to slow down learning for the first few iterations. The ranges for each parameter are [0.5, 0.9] in increments of 0.1 for κ and 1, 256, 512, 768 for τ₀.
We also experimented with different parameters for the vocabulary. The minimum number of documents in which a word had to appear was an absolute number of documents (min df): 10 or 40. The maximum number of documents in which a word could appear was a percentage of the total corpus (max df): 0.87, 0.93, 0.99.
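A training sketch using gensim's online LDA, where κ and τ₀ surface as decay and offset and the vocabulary cuts play the role of min df / max df; the concrete values are examples drawn from the ranges above, not the authors' chosen settings.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def train_topic_model(tokenized_docs, num_topics=100, kappa=0.6, tau0=256):
    dictionary = Dictionary(tokenized_docs)
    # min df = 40 documents, max df = 93% of the corpus (example values).
    dictionary.filter_extremes(no_below=40, no_above=0.93)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    return LdaModel(corpus, id2word=dictionary, num_topics=num_topics,
                    decay=kappa, offset=tau0)
```

A held-out collection of bag-of-words documents can then be scored with the model's log_perplexity method, which is one way to reproduce the kind of comparison discussed next.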
One way to evaluate the suitability of a learned topic model is through a measure known as perplexity [11]. Since LDA learns a distribution for topics and terms, we can calculate the probability of any document according to this distribution. Given an unseen collection of documents taken from the same domain, we calculate the probability of this collection according to the topic model. We expect a good topic model to be less "surprised" at these documents if they are a representative sample of the domain. In Figure 2, we graph the perplexity of our topic models when judged on the reference documents of the training and development set of the CL-SciSumm dataset.
Unfortunately, the implementation we used does not normalize these values, which means we cannot use the perplexity for comparing two models that have a different number of topics. Keep in mind the numbers in Figure 2 do not reflect the perplexity directly. Perplexity is still useful for evaluating our choice of κ and τ₀. We omit plotting the perplexity for different τ₀ values since, with regards to perplexity, models with τ₀ > 1 always underperformed. Figure 2 makes a strong case for the choice of κ = 0.5. However, our experiments demonstrate that, for ranking, higher κ and higher τ₀ can be advantageous.
In order to compare these models across different numbers of topics we evaluated their performance at Task 1a. The results of these runs can be seen in Table 5a. Sentences were first converted to LDA topic vectors then ranked by their cosine similarity to the citance (also a topic vector). The performance of this method is worse than all TFIDF configurations, regardless of which LDA model is chosen. We merely use these results to compare the different models. Nevertheless, the topics learned by LDA were not immediately evident, suggesting there is room for improvement in the choice of parameters. Table 4 has a selection of the most interpretable topics for one of the models.
Word Embeddings
Another way to augment the semantic information of the sentences is through word embeddings. The idea behind word embeddings is to assign each word a vector of real numbers. These vectors are chosen such that if two words are similar, their vectors should be similar as well. We learn these word embeddings in the same manner as word2vec [21].
We use DMTK [20] to learn our embeddings. DMTK provides a distributed implementation of word2vec. We trained two separate embeddings: WE-1 and WE-2. We only explored two different parameter settings. Both embeddings consist of a 200 dimensional vector space. Training was slightly more intensive for WE-1, which ran for 15 epochs sampling 5 negative examples. The second embedding, WE-2, ran for 13 epochs sampling only 4 negative examples. The minimum count for words in the vocabulary was also different: WE-1 required words to appear 40 times whereas WE-2 required words to appear 60 (thus, resulting in a smaller vocabulary).
To obtain similarity scores, we use the Word Mover's Distance [17]. Thus, instead of measuring the similarity to an averaged vector for each sentence, we consider the vector of each word separately. In summary, given two sentences, each composed of words which are represented as vectors, we want to move the vectors of one sentence atop those of the other by moving each vector as little as possible. The results obtained can be found in Table 5b.
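A scoring sketch with gensim, whose KeyedVectors expose Word Mover's Distance directly (an external solver such as pyemd or POT may be required, depending on the gensim version); the embedding path is hypothetical.

```python
from gensim.models import KeyedVectors

# "we-1.vec" is a hypothetical path to one of the trained embeddings.
embedding = KeyedVectors.load_word2vec_format("we-1.vec")

def wmd_similarities(citance_tokens, reference_token_lists):
    # WMD is a distance, so negate it to rank by similarity.
    return [-embedding.wmdistance(citance_tokens, tokens)
            for tokens in reference_token_lists]
```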
Word embeddings outperformed topic models on the development set: while the highest scoring topic model achieved an F1-score of 7.90% on the development set, the highest scoring word embedding achieved 13.77%.
Tradeoff Parameterization
In order to combine the TFIDF systems with LDA or Word Embedding systems, we introduce a parameter to vary the importance of the added similarity compared to the TFIDF similarity: λ. The equation for the new scores is thus:
$$\lambda \cdot TFIDF + (1 - \lambda) \cdot \text{other} \tag{5}$$
where other stands for either LDA or WE. Each sentence is scored by each system separately. These two values (the TFIDF similarity and the other system's similarity) are combined through Equation 5. The sentences are then ranked according to these adjusted values.
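In code, the blend of Equation 5 is a one-liner over per-sentence scores; this assumes both systems emit cosine-style similarities on comparable scales.

```python
def blend_scores(tfidf_scores, other_scores, lam=0.85):
    # Equation 5: lambda * TFIDF + (1 - lambda) * other, per sentence.
    return [lam * t + (1.0 - lam) * o
            for t, o in zip(tfidf_scores, other_scores)]
```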
We evaluated this method by taking the 10 best performing systems on the development set for both TFIDF and LDA. Each combination of a TFIDF system with an LDA system was tested. We test these hybrid systems with values of λ between [0.7, 0.99] in 0.01 increments. There were only 6 different configurations for word embeddings, so we used all of them with the same values for λ.
After obtaining the scores for the development set, we chose the 100 best systems to run on the test set (for LDA only). Systems consist of a choice of TFIDF system, a choice of LDA system, and a value for λ. The five highest scoring systems are shown in Table 7.
We can see that a particular topic model dominated. The LDA model that best complemented any TFIDF system was only the fourth best LDA system on the development set. There were multiple combinations with the same F1-score, so we had to choose which to display in Table 7. This obscures the results since other models attain F1-scores as high as 14.64%. In particular, the second best performing topic model in this experiment was an 80 topic model that is not in Table 5a.
The next best topic model had 50 topics.
Interestingly, a TFIDF system coupled with word embeddings performs incredibly well on the development set, as can be seen in Table 6 (values similar to LDA if we look at Figure 3). However, once we move to the test set, all improvements become meager. It is possible that word embeddings are more sensitive to the choice of λ.
Although we do not provide the numbers, if we analyze the distribution of scores given by word embeddings, we find that the distribution is much flatter. TFIDF scores drop rapidly; for some citances most sentences are scored with a zero. LDA improves upon that by having less zero-score sentences, but the scores still decay until they reach zero. Word embeddings, however, seem to grant a minimum score to all sentences (most scores are greater than 0.4). Furthermore, there is very little variability from lowest to highest score. This is further evidenced by the wide range of λ values that yield good performance on Table 6. We conjecture the shape of these distributions may be responsible for the differences in performance.
Statistical Analysis
Although the F1-scores have improved by augmenting the bare-bones TFIDF approach, we must still check whether this improvement is statistically significant. Since some of these systems have very similar F1-scores, we cannot simply provide 95% confidence intervals for each F1-score individually; we are forced to perform paired t-tests, which mitigate the variance inherent in the data.
Given two systems, A and B, we resample the test data with replacement 10000 times and calculate the F1-score for each new sample. By evaluating both systems on the same sample, any variability due to the data (harder or easier citances, for instance) is ignored. Finally, these pairs of F1-scores are then used to calculate a p-value for the paired t-test.
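A sketch of this procedure; f1_on is a placeholder for the end-to-end evaluation of a system on a set of citances, and the helper name is ours.

```python
import numpy as np
from scipy import stats

def paired_bootstrap_ttest(citances, system_a, system_b, f1_on,
                           n_resamples=10000, seed=0):
    rng = np.random.default_rng(seed)
    indices = np.arange(len(citances))
    scores_a, scores_b = [], []
    for _ in range(n_resamples):
        # Resample the test citances with replacement.
        sample = [citances[i] for i in rng.choice(indices, size=len(indices))]
        scores_a.append(f1_on(system_a, sample))
        scores_b.append(f1_on(system_b, sample))
    # Paired t-test over the per-replicate F1-score pairs.
    return stats.ttest_rel(scores_a, scores_b).pvalue
```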
We calculate the significance of differences between the top entries for each category (TFIDF and the two tradeoff variations) evaluated on the test set. For the best performing system (TFIDF + LDA, at an F1-score of 14.77%), the differences from TFIDF + WE (at 14.24%) and from TFIDF (at 14.11%) are statistically significant, with p-values of 0.0015 and 0.0003, respectively. However, the difference between TFIDF + WE and TFIDF is not statistically significant (p-value of 0.4830).
Human Annotators
In order to determine whether the performance of our system is much lower than what can be achieved, we ran an experiment with human annotators. Since human annotators require more time to perform the task, we had to truncate the test set to just three documents, chosen at random.
The subset used to evaluate the human annotators consists of three different articles from the test set: C00-2123, N06-2049, and J96-3004. The 20 citances that cite C00-2123 only select 24 distinct sentences from the reference article, which contains 203 sentences. Similarly, the 22 citances that cite N06-2049 select only 35 distinct reference sentences from 155 total. The last article, J96-3004, has 69 citances annotated that select 109 distinct reference sentences from the 471 sentences found in the article.
To avoid "guiding" the annotators to the correct answers, we provided minimal instructions. We explained the problem of matching citances to their relevant reference spans to each annotator. Since the objective was to compare to our system's performance, the annotators had at their disposal the XML files given to our system. Thus, the sentence boundaries are interpreted consistently by our system and the human annotators. We instructed them to choose one or many sentences, possibly including the title, as the reference span for a citance.
The performance of the human annotators and three of our best system configurations can be seen in Table 8. The raw score for two of the annotators had extremely low precision. Upon further analysis, we noticed outliers where more than ten different reference sentences had been chosen.
To provide a fairer assessment, the scores were adjusted for two of the annotators: if more than ten sentences were selected for a citance, we replaced the sentences with simply the article title. We argue this is justified since any citance that requires that many sentences to be chosen is probably referencing the paper as a whole. After these adjustments, the scores of the human annotators rose considerably.
Discussion
The fact that WordNet went from decreasing the performance of our system to increasing its performance shows the level of detail required to tune a system for the task of reference span identification. The performance of our human annotators demonstrates the difficulty of this task: it requires much more precision. Additionally, the human scores show there is room for improvement.
Task 1a can be framed as identifying semantically similar sentences. This perspective is best represented by the LDA systems of Section 5.2 and the word embeddings of Section 6. However, as can be seen by the results we obtained in Table 5, relying solely on semantic similarity is not the best approach.
Methods such as TFIDF and sentence limiting do not attempt to solve Task 1a head-on. Through a narrowing of possibilities, these methods improve the odds of choosing the correct sentences. Only after sifting through the candidate sentences with these methods can topic modeling be of use.
Combining TFIDF and LDA through a tradeoff parameter allowed us to test whether topic modeling does indeed improve our performance. Clearly, that is the case since our best performing system uses both TFIDF and LDA. The same experiment was performed with word embeddings, although the improvements were not as great.
Since word embeddings performed well alone but didn't provide much of a boost to TFIDF, it is possible the information captured by the embeddings overlaps with the information captured by TFIDF.
The question that remains is whether the topic modeling was done as well as it could have been. The results in Section 7 require further analysis. As we can see from Figure 3, very few combinations provide a net gain in performance. Likewise, it is possible that further tuning of word embedding parameters could improve our performance.
Conclusion
During the BIRNDL shared task, we were surprised by the result of our TFIDF system, which achieved an F1-score of 13.65%. More complex systems did not obtain a higher F1-score. In this paper, we show it is possible to improve our TFIDF system with additional semantic information.
Through the use of WordNet we achieve an F1-score of 14.11%. Word embeddings increase this F1-score to 14.24%. If we employ LDA topic models instead of word embeddings, our system attains its best performance: an F1-score of 14.77%. This improvement is statistically significant.
Although these increases seem modest, the difficulty of the task should be taken into account. We performed an experiment with human annotators to assess what F1-score would constitute a reasonable goal for our system. The best F1-score obtained by a human was 27.95%. This leads us to believe there is still room for improvement on this task.
In the future, the study of overlap between TFIDF and word embeddings could provide a better understanding of the limits of this task. Finally, we also propose the simultaneous combination of LDA topic models and word embeddings. | 6,316 |
1708.02989 | 2629354081 | The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system. | The choice of proper features is very important in handling citation text. Previous research @cite_25 @cite_15 gives insight into these features. We find in @cite_25 an in-depth analysis of the usefulness of certain features. As a result, we have used it to guide our selection of which features to include. | {
"abstract": [
"In this paper, a method based on part-of-speech tagging (PoS) is used for bibliographic reference structure. This method operates on a roughly structured ASCII file, produced by OCR. Because of the heterogeneity of the reference structure, the method acts in a bottom-up way, without an a priori model, gathering structural elements from basic tags to sub-fields and fields. Significant tags are first grouped in homogeneous classes according to their grammar categories and then reduced in canonical forms corresponding to record fields: \"authors\", \"title\", \"conference name\", \"date\", etc. Non labelled tokens are integrated in one or another field by either applying PoS correction rules or using a structure model generated from well-detected records. The designed prototype operates with a great satisfaction on different record layouts and character recognition qualities. Without manual intervention, 96.6 words are correctly attributed, and about 75.9 references are completely segmented from 2500 references.",
"●● ● ● ● To summarize is to reduce in complexity, and hence in length, while retaining some of the essential qualities of the original. This paper focusses on document extracts, a particular kind of computed document summary. Document extracts consisting of roughly 20 of the original cart be as informative as the full text of a document, which suggests that even shorter extracts may be useful indicative summmies. The trends in our results are in agreement with those of Edmundson who used a subjectively weighted combination of features as opposed to training the feature weights using a corpus."
],
"cite_N": [
"@cite_15",
"@cite_25"
],
"mid": [
"2109757083",
"2101390659"
]
} | Identifying Reference Spans: Topic Modeling and Word Embeddings help IR | The CL-SciSumm 2016 [12] shared task posed the problem of automatic summarization in the computational linguistics domain. Single document summarization is hardly new [30,5,4]; however, in addition to the reference document to be summarized, we are also given citances, i.e. sentences that cite our reference document. The usefulness of citances in the process of summarization is immediately apparent. A citance can hint at what is interesting about the document.
This objective was split into three tasks. Given a citance (a sentence containing a citation), in Task 1a we must identify the span of text in the reference document that best reflects what has been cited. Task 1b asks us to classify the cited aspect according to a predefined set of facets: hypothesis, aim, method, results, and implication. Finally, Task 2 is the generation of a structured summary for the reference document. Although the shared task is broken up into multiple tasks, this paper concerns itself solely with Task 1a.
Task 1a is quite interesting all by itself. We can think of Task 1a as a small scale summarization. Thus, being precise is incredibly important: the system must often find a single sentence among hundreds (in some cases, however, multiple sentences are correct). The results of the workshop [15] reveal that Task 1a is quite challenging. There was a varied selection of methods used for this problem: SVMs, neural networks, learning-to-rank algorithms, and more. Regardless, our previous system had the best performance on the test set for CL-SciSumm: cosine similarity between weighted bag-of-words vectors. The weighting used is well known in information retrieval: term frequency · inverse document frequency (TFIDF). Although TFIDF is a well known and understood method in information retrieval, it is surprising that it achieved better performance than more heavily engineered solutions. Thus, our goal in this paper is twofold: to analyze and improve on the performance of TFIDF and to push beyond its performance ceiling.
In the process of exploring different configurations, we have observed the performance of our TFIDF method vary substantially. Text preprocessing parameters can have a significant effect on the final performance. This variance also underscores the need to start with a basic system and then add complexity step-by-step in a reasoned manner. Another prior attempt employed SVMs with tree kernels but the performance never surpassed that of TFIDF. Therefore, we focus on improving the TFIDF approach.
Depending on the domain of your data, it can be necessary to start with simple models. In general, unbalanced classification tasks are hard to evaluate due to the performance of the baseline. For an example that is not a classification task, look no further than news articles: the first few sentences of a news article form an incredibly effective baseline for summaries of the whole article.
First, we study a few of the characteristics of the dataset. In particular, we look at the sparsity between reference sentences and citances, what are some of the hurdles in handling citances, and whether chosen reference sentences appear more frequently in a particular section. Then we cover improvements to TFIDF. We also introduce topic models learned through Latent Dirichlet Allocation (LDA) and word embeddings learned through word2vec. These systems are studied for their ability to augment our TFIDF system. Finally, we present an analysis of how humans perform at this task.
CL-SciSumm 2016
We present a short overview of the different approaches used to solve Task 1a.
Aggarwal and Sharma [3] use bag-of-words bigrams, syntactic dependency cues and a set of rules for extracting parts of referenced documents that are relevant to citances.
In [14], researchers generate three combinations of an unsupervised graph-based sentence ranking approach with a supervised classification approach. In the first approach, sentence ranking is modified to use information provided by citing documents. In the second, the ranking procedure is applied as a filter before supervised classification. In the third, supervised learning is used as a filter to the cited document, before sentence ranking.
Cao et al. [9] model Task 1a as a ranking problem and apply SVM Rank for this purpose.
In [19], the citance is treated as a query over the sentences of the reference document. They used learning-to-rank algorithms (RankBoost, RankNet, AdaRank, and Coordinate Ascent) for this problem with lexical (bag-of-words) features, topic features, and TextRank for ranking sentences. WordNet is used to compute concept similarity between citation contexts and candidate spans.
Lei et al. [18] use SVMs and rule-based methods with lexicon features (high frequency words within the reference text, LDA to train the reference document and citing documents, and co-occurrence lexicon) and similarities (IDF, Jaccard, and context similarity).
In [24], authors propose a linear combination between a TFIDF model and a single layer neural network model. This paper is the most similar to our work.
Saggion et al. [28] use supervised algorithms with feature vectors representing the citance and reference document sentences. Features include positional features, WordNet similarity measures, and rhetorical features.
We have chosen to use topic modeling and word embeddings to overcome the weaknesses of the TFIDF approach. Another participant of the CL-SciSumm 2016 shared task did the same [18]. Their system performed well on the development set, but not as well on the held-out test set. We show how improving a system with a topic model or a word embedding is a lot less straightforward than expected.
Preliminaries
Following are brief explanations of terms that will be used throughout the paper.
Cosine Similarity. This is a measure of similarity between two non-zero vectors, A and B, given by the cosine of the angle between them. Equation 1 shows the formula for calculating cosine similarity.
$$\text{similarity}(A, B) = \cos\theta = \frac{A \cdot B}{\|A\|\,\|B\|} \tag{1}$$
In the above formula, θ is the angle between the two vectors A and B. We use cosine similarity to measure how close two sentences are to each other and rank them based on their similarity. In our task, each vector represents TFIDF or LDA values for all the words in a sentence. The higher the value of similarity(A, B), the greater the similarity between the two sentences.
TFIDF. This is short for term frequency-inverse document frequency, a common scoring metric for words in a query across a corpus of documents. The metric tries to capture the importance of a word by valuing the frequency of the word's use in a document and devaluing its appearance in every document. This was originally a method for retrieving documents from a corpus (instead of sentences from a document). For our task of summarization, this scoring metric was adjusted to help select matching sentences, so each sentence is treated as a document for our purposes. Thus, our "document" level frequencies are the frequencies of words in a sentence. The "corpus" will be the whole reference document. Then, the term frequency can be calculated by counting a word's frequency within a sentence. The inverse document frequency of a word will be based on the number of sentences that contain that word. When using TFIDF for calculating similarity, we use Equation 1 where the vectors are defined as:
$$A = \langle a_1, a_2, \ldots, a_n \rangle \quad \text{where} \quad a_i = tf_{w_i} \cdot idf_{w_i} \tag{2}$$
$$idf_{w_i} = \log(N / df_{w_i}) \tag{3}$$
where $tf_{w_i}$ refers to the term frequency of $w_i$, $df_{w_i}$ refers to the document frequency of $w_i$ (number of documents in which $w_i$ appears), and $N$ refers to the total number of documents.
WordNet. This is a large lexical dataset for the English language [22]. The main relation among words in WordNet is synonymy. However, it contains other relations like antonymy, hyperonymy, hyponymy, meronymy, etc. For our task of summarization, we use synonymy for expanding words in reference sentences and citances. Since reference sentences and citances are written by two different authors, adding synonyms increases the chance of a word occurring in both sentences if they are both indeed related.
LDA. Latent Dirichlet Allocation is a technique used for topic modeling. It learns a generative model of a document. Topics are assumed to have some prior distribution, normally a symmetric Dirichlet distribution. Terms in the corpus are assumed to have a multinomial distribution. These assumptions form the basis of the method. After learning the parameters from a corpus, each term will have a topic distribution which can be used to determine the topics of a document. When using LDA for calculating similarity, we use Equation 1 where the vectors are defined as topic membership probabilities:
$$A = \langle a_1, a_2, \ldots, a_n \rangle \quad \text{where} \quad a_i = P(\text{doc}_A \in \text{topic}_i) \tag{4}$$
F1-score. To evaluate our methods, we have chosen the F1-score. The F1-score is a weighted average of precision and recall, where precision and recall receive equal weighting. This kind of weighted average is also referred to as the harmonic mean. Precision is the proportion of correct results among the results that were returned. Recall is the proportion of correct results among all possible correct results.
Our system outputs the top 3 sentences and we compute recall, precision, and F1-score using these sentences. If a relevant sentence appears in the top 3, then it factors into recall, precision, and F1-score. Thus, we naturally present the precision at N measure (P@N) used by [8]. Precision at N is simply the proportion of correct results in the top N ranks. In our evaluations, N = 3. Average precision and the area under the ROC curve are two other measures that present a more complete picture when there is a large imbalance between classes. To keep in line with the evaluation for the BIRNDL shared task we chose to use P@N. Regardless, we focus on the F1-score rather than P@3 when determining if one system is better than another.
If we look at the percentage of sentences that appear in the gold standard, we see that roughly 90% of the sentences in our dataset are never chosen by an annotator. This means our desired class is rare and diverse, similar to outliers or anomalies [8]. Therefore, we should expect low performance from our system since our task is similar to anomaly detection [8] which has a hard time achieving good performance in such cases.
Dataset
The dataset [13] consists of 30 total documents separated into three sets of 10 documents each: training, development, and test sets. For the following analysis, no preprocessing has been done (for instance, stemming).
There are 23356 unique words among the reference documents in the dataset. The citances contain 5520 unique words. The most frequent word among reference documents appears in 4120 sentences. The most frequent word among citances appears in 521 sentences. There are 6700 reference sentences and 704 citances (although a few of these should actually be broken up into multiple sentences). The average reference sentence has approximately 22 words in this dataset whereas citances have an average of approximately 34 words.
In Figure 1 we can see the sparsity of the dataset. At a particular point (x, y) along the curves, we know that x% of all sentences contain at least a certain number of unique words: a number equal to y% of the vocabulary. All sentences contain at least one word, which is a very small sliver of the vocabulary (appearing as 0% in the graph). The quicker the decay, the greater the sparsity. Noise in the dataset is one of the factors behind the sparsity. We can see that citances, seen as a corpus, are in general less sparse than the reference texts. This can be an indication that citances have some common structure or semantics.
One possibility is that citances must have some level of summarization ability. If we look at the annotations of a document as a whole, we see a pattern: the annotators tend to choose from a small pool of reference sentences. Therefore, the sentences chosen are usually somewhat general and serve as tiny summaries of a single concept. Furthermore, the chosen reference sentences make up roughly 10% of all reference sentences from which we have to choose.
Citances
It should be noted that citances have a few peculiarities, such as an abundance of citation markers and proper names. Citation markers (cues in written text demarcating a citation) will sometimes include the names of authors, so the vocabulary of these sentences will include more proper names. This could explain the lesser sparsity if authors recur across citances; however, it could also produce greater sparsity if these authors are unique. Identifying and ignoring citation markers should reduce noise. A preprocessing step we employ with this goal is the removal of all text enclosed in brackets of any kind.
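A minimal sketch of this step, assuming non-nested brackets, is a single regular expression:

```python
# Sketch: drop any text enclosed in (), [], or {} to remove most citation
# markers, e.g. "(Sproat and Shih, 1990)". Nested brackets are not handled.
import re

BRACKETS = re.compile(r"\([^()]*\)|\[[^\[\]]*\]|\{[^{}]*\}")

def strip_citation_markers(sentence):
    cleaned = BRACKETS.sub("", sentence)
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```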
To demonstrate the differences in difficulty a citance can pose, we present two examples: one that is relatively simple and another that is relatively hard. In both examples the original citance marker is in italics.
Easy Citance: "According to Sproat et al. (1996), most prior work in Chinese segmentation has exploited lexical knowledge bases; indeed, the authors assert that they were aware of only one previously published instance (the mutual-information method of Sproat and Shih (1990)) of a purely statistical approach."
Reference Span: "Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information. The present proposal falls into the last group. Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach."
In the "Easy" case, there are many salient words in common between the reference spans we must retrieve and the citance. This is the ideal case for TFIDF, since matching based on these words should produce good results. However, in the "Hard" case:
Hard Citance: "A lot of work has been done in English for the purpose of anaphora resolution and various algorithms have been devised for this purpose (Aone and Bennette, 1996; Brenan , Friedman and Pollard, 1987; Ge, Hale and Charniak, 1998; Grosz, Aravind and Weinstein, 1995; McCarthy and Lehnert, 1995; Lappins and Leass, 1994; Mitkov, 1998 ; Soon, Ng and Lim, 1999)."
Reference Span: "We have described a robust, knowledge-poor approach to pronoun resolution which operates on texts preprocessed by a part-of-speech tagger."
We can see that there is no overlap of salient words between the citance and the text span. Not only is the citance somewhat vague, but any semantic overlap is not exact. For instance, "anaphora resolution" and "pronoun resolution" refer to the same concept but do not match lexically.
Frequency of Section Titles
We analyzed the frequency of section titles for the chosen reference sentences. Our analysis only excludes document P05-1053 from consideration, the document whose answer key was withheld. For each cited reference sentence, we looked at the title of the section in which it appears. The titles that appeared with the greatest frequency can be seen in Table 1. To extract these section titles we looked at the parent nodes of sentences within the XML document. The "title" and "abstract" sections are special since they refer to parent nodes of a type other than SECTION. Due to OCR noise, a few section names were wrong; we manually corrected these. Then, we performed a slight normalization by removing any 's' character at the end of a name. These results clearly show that cited sentences are not uniformly distributed within a document.
Preprocessing
Before using the dataset in our system, we preprocessed it to reduce the number of errors. The dataset has many errors due to the use of OCR techniques: broken words, non-ASCII characters, and formatting problems in the XML files are some examples. We performed the following preprocessing steps to reduce noise in the dataset. First, we manually went over the citances and reference sentences fixing broken words (those separated by a hyphen, a space, or some non-ASCII character). We then automatically removed all non-ASCII characters from citance and reference text. Finally, we manually fixed some misformatted XML files that were missing closing tags.
TFIDF Approach
Our best performing system for the CL-SciSumm 2016 task was based on TFIDF. It achieved a 13.68% F1-score on the test set for Task 1a. Our approach compares the TFIDF vectors of the citance and the sentences in the reference document. Each reference sentence is assigned a score according to the cosine similarity between itself and the citance. There were several variations studied to improve our TFIDF system. Table 2 contains the abbreviations we use when discussing a particular configuration. Stopwords were removed for all configurations. These words serve mainly to add noise, so their removal helps improve performance. There are two lists of stopwords used: one from sklearn (sk stop) and one from NLTK (nltk stop).
To remove the effect of using words in their different forms we used stemming (st) to reduce words to their root form. For this purpose, we use the Snowball Stemmer, provided by the NLTK package [7].
WordNet has been utilized to expand the semantics of the sentence. We obtain the lemmas from the synsets of each word in the sentence. We use the Lesk algorithm, provided through the NLTK package [7], to perform word-sense disambiguation. This is a necessary step before obtaining the synset of a word from WordNet. Each synset is a collection of lemmas. The lemmas that constitute each synset are added to the word vector of the sentence; this augmented vector is used when calculating the cosine similarity instead of the original vector. We consider three different methods of using WordNet: ref wn, cit wn, and both wn, which will be explained in Subsection 4.1.
Our first implementation of WordNet expansion increased coverage at the cost of performance. With the proper adjustments, we were able to improve performance as well. This is another example of how the details and tuning of the implementation are critical when dealing with short text. Our new implementation takes care to only add a word once, even though it may appear in the synsets of multiple words of the sentence. Details are found in Subsection 4.1.
In the shared task, filtering candidate sentences by length improved the system's performance substantially. Short sentences are unlikely to be chosen; they are often too brief to encapsulate a concept completely. Longer sentences are usually artifacts from the PDF to text conversion (for instance, a table transformed into a sentence). We eliminate from consideration all sentences outside a certain range of word counts. In our preliminary experiments, we found two promising lower bounds on the number of words: 8 and 15. The only upper bound we consider is 70, which also reduces computation time since longer sentences take longer to score. Each range appears in the tables as an ordered pair (min, max); e.g., a range of 8 to 70 words appears as (8, 70). This process eliminates some of the sentences our system is supposed to retrieve, so our maximum attainable F1-score is lowered.
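The filter itself is small enough to sketch in full, using our (8, 70) bounds as defaults:

```python
# Sketch: keep only candidate sentences whose word count falls inside
# the (min, max) range, e.g. (8, 70).
def length_filter(sentences, min_words=8, max_words=70):
    return [s for s in sentences if min_words <= len(s.split()) <= max_words]
```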
Improvements on TFIDF
The main drawback of the TFIDF method is its inability to handle situations where there is no overlap between citance and reference sentence. Thus, we decided to work on improving the WordNet expansion. In Table 3 we can see the performance of various configurations, some of which improve upon our previously best system. The first improvement was to make sure the synsets do not flood the sentence with additional terms. Instead of adding all synsets to the sentence, we only added the unique terms found in those synsets. Thus, if a term appeared in multiple synsets of words in the sentence, it would still only contribute once to the modified sentence.
While running the experiments, the WordNet preprocessing was accidentally applied only to the citances instead of to both citances and reference sentences. This increased our performance to 14.11% (first entry in Table 3b). To investigate further, we also ran the WordNet expansion on only the reference sentences. This led to the discovery of another subtlety, but before we can elaborate we must explain how WordNet expansion is performed.
Conceptually, the goal of the WordNet preprocessing stage is to increase the overlap between the words that appear in the citance and those that appear in the reference sentences. By including synonyms, a sentence has a greater chance to match with the citance. The intended effect was for the citances and reference sentences to meet in the middle.
The steps taken in WordNet expansion are as follows: each sentence is tokenized into single word tokens; we search for the synsets of each token in WordNet; if a synset is found, then the lemmas that constitute that synset are added to the sentence. The small subtlety referred to before is the duplication of original tokens: if a synset is found, it must contain the original token, so the original token gets added once more to the sentence. This adds more weight to the original tokens. Before the discovery of one-sided WordNet expansion, this was a key factor in our TFIDF results.
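A sketch of this expansion, assuming NLTK's Lesk implementation and WordNet corpus, is shown below; the duplicate_originals flag reproduces the token duplication just described, while added synonyms enter at most once.

```python
# Sketch: WordNet expansion of a sentence. With duplicate_originals=True,
# the original token reappears in its own synset and is appended again,
# up-weighting original words; new synonyms are only appended once.
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

def expand_with_wordnet(sentence, duplicate_originals=True):
    tokens = word_tokenize(sentence)
    expanded, seen = list(tokens), set(tokens)
    for token in tokens:
        synset = lesk(tokens, token)  # word-sense disambiguation
        if synset is None:
            continue
        for lemma in synset.lemma_names():
            if lemma == token:
                if duplicate_originals:
                    expanded.append(lemma)
            elif lemma not in seen:
                expanded.append(lemma)
                seen.add(lemma)
    return " ".join(expanded)
```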
In actuality, adding synonyms to all reference sentences was a step too far. We believe that the addition of WordNet synsets to both the reference sentences and citances only served to add more noise. Due to the number of reference sentences, these additional synsets impacted the TFIDF values derived. However, if we only apply this transformation to the citances, the impact on the TFIDF values is minimal.
We now had to experiment with applying WordNet asymmetrically: adding synsets to citances only (cit wn) and adding synsets to reference sentences only (ref wn). In addition, we ran experiments to test the effect of duplicating original tokens. This would still serve the purpose of increasing overlap, but lessen the noise we introduced as a result. We can see the difference in performance in Table 3a and Table 3b. On the development set, applying WordNet only to the reference sentences with duplication performed the best, with an F1-score of 16.41%. On the test set, WordNet applied only to the citances performs the best, with 14.12%. Regardless, our experiments indicate that one-sided WordNet expansion leads to better results. In the following sections, experiments only consider one-sided WordNet use.
Topic Modeling
To overcome the limitations of using a single sentence we constructed topic models to better capture the semantic information of a citance. Using Latent Dirichlet Allocation (LDA), we created various topic models for the computational linguistics domain.
Corpus Creation
First, we gathered a set of 34273 documents from the ACL Anthology website. This set is comprised of all PDFs available for download. The next step was to convert the PDFs to text. Unfortunately, we ran into the same problem as the organizers of the shared task: the conversion from PDF to text left a lot to be desired. Additionally, some PDFs used an internal encoding, resulting in an undecipherable conversion. Instead of trying to fix these documents, we decided to select a subset that seemed sufficiently error-free. Since poorly converted documents contain more symbols than usual, we chose to cluster the documents according to character frequencies. Using K-means, we clustered the documents into twelve different clusters. After clustering, we manually selected the clusters that seemed to have articles with acceptable noise. Interestingly, tables of contents are part of the PDFs available for download from the anthology. Since these documents contain more formatting than text, they ended up clustered together. We chose to disregard these clusters as well. In total, 26686 documents remained as part of our "cleaned corpus".
Latent Dirichlet Allocation
LDA can be seen as a probabilistic factorization method that splits a term-document matrix into term-topic and topic-document matrices. The main advantage of LDA is its soft clustering: a single document can belong to many topics to varying degrees.
We are interested in the resulting term-topic matrix derived from the corpus. With this matrix, we can convert terms into topic vectors, where each dimension represents the term's degree of membership in a topic. These topic vectors provide new opportunities for achieving overlap between citances and reference sentences, allowing us to score sentences that would have a cosine similarity of zero between TFIDF vectors. As with K-means, we must choose the number of topics beforehand. Since we are using online LDA [11], there are a few additional parameters: κ, which adjusts the learning rate of online LDA, and τ0, which slows down learning for the first few iterations. The ranges for each parameter are [0.5, 0.9] in increments of 0.1 for κ, and 1, 256, 512, or 768 for τ0.
We also experimented with different parameters for the vocabulary. The minimum number of documents in which a word had to appear was an absolute number of documents (min df): 10 or 40. The maximum number of documents in which a word could appear was a percentage of the total corpus (max df): 0.87, 0.93, 0.99.
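As an illustration of the training grid, here is a sketch using gensim's online LDA, whose decay and offset arguments play the roles of κ and τ0; this is a hypothetical reconstruction of one configuration, not necessarily the implementation we used.

```python
# Sketch: train one online LDA configuration. filter_extremes applies the
# min_df / max_df vocabulary cutoffs described above.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def train_lda(tokenized_docs, num_topics, kappa=0.5, tau0=1.0,
              min_df=10, max_df=0.93):
    dictionary = Dictionary(tokenized_docs)
    dictionary.filter_extremes(no_below=min_df, no_above=max_df)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=num_topics, decay=kappa, offset=tau0)
    return lda, dictionary
```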
One way to evaluate the suitability of a learned topic model is through a measure known as perplexity [11]. Since LDA learns a distribution for topics and terms, we can calculate the probability of any document according to this distribution. Given an unseen collection of documents taken from the same domain, we calculate the probability of this collection according to the topic model. We expect a good topic model to be less "surprised" at these documents if they are a representative sample of the domain. In Figure 2, we graph the perplexity of our topic models when judged on the reference documents of the training and development set of the CL-SciSumm dataset.
Unfortunately, the implementation we used does not normalize these values, which means we cannot use the perplexity for comparing two models that have a different number of topics. Keep in mind the numbers in Figure 2 do not reflect the perplexity directly. Perplexity is still useful for evaluating our choice of κ and τ0. We omit plotting the perplexity for different τ0 values since, with regards to perplexity, models with τ0 > 1 always underperformed. Figure 2 makes a strong case for the choice of κ = 0.5. However, our experiments demonstrate that, for ranking, higher κ and higher τ0 can be advantageous.
In order to compare these models across different numbers of topics, we evaluated their performance on Task 1a. The results of these runs can be seen in Table 5a. Sentences were first converted to LDA topic vectors, then ranked by their cosine similarity to the citance (also a topic vector). The performance of this method is worse than all TFIDF configurations, regardless of which LDA model is chosen. We merely use these results to compare the different models. Nevertheless, the topics learned by LDA were not immediately evident, suggesting there is room for improvement in the choice of parameters. Table 4 has a selection of the most interpretable topics for one of the models.
Word Embeddings
Another way to augment the semantic information of the sentences is through word embeddings. The idea behind word embeddings is to assign each word a vector of real numbers. These vectors are chosen such that if two words are similar, their vectors should be similar as well. We learn these word embeddings in the same manner as word2vec [21].
We use DMTK [20] to learn our embeddings. DMTK provides a distributed implementation of word2vec. We trained two separate embeddings: WE-1 and WE-2. We only explored two different parameter settings. Both embeddings consist of a 200-dimensional vector space. Training was slightly more intensive for WE-1, which ran for 15 epochs sampling 5 negative examples. The second embedding, WE-2, ran for 13 epochs sampling only 4 negative examples. The minimum count for words in the vocabulary was also different: WE-1 required words to appear 40 times whereas WE-2 required words to appear 60 times (thus resulting in a smaller vocabulary).
To obtain similarity scores, we use the Word Mover's Distance [17]. Thus, instead of measuring the similarity to an averaged vector for each sentence, we consider the vector of each word separately. In summary, given two sentences, each composed of words which are represented as vectors, we want to move the vectors of one sentence atop those of the other by moving each vector as little as possible. The results obtained can be found in Table 5b.
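For illustration, gensim exposes this distance directly on loaded word vectors; the sketch below assumes vectors exported in word2vec format (the file name is hypothetical) and negates the distance so that larger scores mean more similar, matching our other scorers.

```python
# Sketch: score a candidate sentence against a citance with the
# Word Mover's Distance over pretrained word vectors.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("we1.bin", binary=True)

def wmd_score(citance_tokens, sentence_tokens):
    # Smaller distance means more similar, so negate for ranking.
    return -vectors.wmdistance(citance_tokens, sentence_tokens)
```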
Word embeddings outperformed topic models on the development set; while the highest scoring topic model achieved a 7.90% F1-score on the development set, the highest scoring word embedding achieved 13.77%.
Tradeoff Parameterization
In order to combine the TFIDF systems with LDA or Word Embedding systems, we introduce a parameter to vary the importance of the added similarity compared to the TFIDF similarity: λ. The equation for the new scores is thus:
score = λ · TFIDF + (1 − λ) · other    (5)
where other stands for either LDA or WE. Each sentence is scored by each system separately. These two values (the TFIDF similarity and the other system's similarity) are combined through Equation 5. The sentences are then ranked according to these adjusted values.
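A sketch of the blending and re-ranking step, assuming both systems score the same candidate list in the same order:

```python
# Sketch: Equation 5. Blend TFIDF scores with LDA or embedding scores,
# then return candidate indices ranked by the blended value.
def blend_and_rank(tfidf_scores, other_scores, lam=0.9):
    blended = [lam * t + (1 - lam) * o
               for t, o in zip(tfidf_scores, other_scores)]
    return sorted(range(len(blended)), key=blended.__getitem__, reverse=True)
```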
We evaluated this method by taking the 10 best performing systems on the development set for both TFIDF and LDA. Each combination of a TFIDF system with an LDA system was tested. We test these hybrid systems with values of λ in [0.7, 0.99] in 0.01 increments. There were only 6 different configurations for word embeddings, so we used all of them with the same values for λ.
After obtaining the scores for the development set, we chose the 100 best systems to run on the test set (for LDA only). Systems consist of a choice of TFIDF system, a choice of LDA system, and a value for λ. The five highest scoring systems are shown in Table 7.
We can see that a particular topic model dominated. The LDA model that best complemented any TFIDF system was only the fourth best LDA system on the development set. There were multiple combinations with the same F1-score, so we had to choose which to display in Table 7. This obscures the results, since other models attain F1-scores as high as 14.64%. In particular, the second best performing topic model in this experiment was an 80-topic model that is not in Table 5a, and the next best topic model had 50 topics.
Interestingly, a TFIDF system coupled with word embeddings performs incredibly well on the development set, as can be seen in Table 6 (with values similar to LDA if we look at Figure 3). However, once we move to the test set, all improvements become meager. It is possible that word embeddings are more sensitive to the choice of λ.
Although we do not provide the numbers, if we analyze the distribution of scores given by word embeddings, we find that the distribution is much flatter. TFIDF scores drop rapidly; for some citances most sentences are scored with a zero. LDA improves upon that by having fewer zero-score sentences, but the scores still decay until they reach zero. Word embeddings, however, seem to grant a minimum score to all sentences (most scores are greater than 0.4). Furthermore, there is very little variability from the lowest to the highest score. This is further evidenced by the wide range of λ values that yield good performance in Table 6. We conjecture the shape of these distributions may be responsible for the differences in performance.
Statistical Analysis
Although the F1-scores have improved by augmenting the bare-bones TFIDF approach, we must still check whether this improvement is statistically significant. Since some of these systems have very similar F1-scores, we cannot simply provide a 95% confidence interval for each F1-score individually; we are forced to perform paired t-tests, which mitigate the variance inherent in the data.
Given two systems, A and B, we resample the dataset with replacement 10000 times and calculate the F1-score of each system on each new sample. By evaluating both systems on the same sample, any variability due to the data (harder or easier citances, for instance) is ignored. Finally, these pairs of F1-scores are used to calculate a p-value for the paired t-test.
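A sketch of this procedure, under the simplifying assumption that the corpus-level F1-score can be approximated by averaging per-citance F1 values, is:

```python
# Sketch: paired bootstrap test. Both systems are evaluated on the same
# resample of citances so that citance difficulty cancels out.
import numpy as np
from scipy.stats import ttest_rel

def paired_bootstrap_pvalue(f1_a, f1_b, n_samples=10000, seed=0):
    # f1_a, f1_b: per-citance F1 values for systems A and B.
    rng = np.random.default_rng(seed)
    a, b = np.asarray(f1_a), np.asarray(f1_b)
    idx = rng.integers(0, len(a), size=(n_samples, len(a)))
    sample_a = a[idx].mean(axis=1)  # system A's score on each resample
    sample_b = b[idx].mean(axis=1)
    return ttest_rel(sample_a, sample_b).pvalue
```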
We calculate the significance of differences between the top entries for each category (TFIDF and the two tradeoff variations) evaluated on the test set. For the best performing system (TFIDF + LDA, at an F1-score of 14.77%), the difference from TFIDF + WE (at 14.24%) and from TFIDF (at 14.11%) is statistically significant, with p-values of 0.0015 and 0.0003, respectively. However, the difference between TFIDF + WE and TFIDF is not statistically significant (p-value of 0.4830).
Human Annotators
In order to determine whether the performance of our system is much lower than what can be achieved, we ran an experiment with human annotators. Since human annotators require more time to perform the task, we had to truncate the test set to just three documents, chosen at random.
The subset used to evaluate the human annotators consists of three different articles from the test set: C00-2123, N06-2049, and J96-3004. The 20 citances that cite C00-2123 only select 24 distinct sentences from the reference article, which contains 203 sentences. Similarly, the 22 citances that cite N06-2049 select only 35 distinct reference sentences from 155 total. The last article, J96-3004, has 69 citances annotated that select 109 distinct reference sentences from the 471 sentences found in the article.
To avoid "guiding" the annotators to the correct answers, we provided minimal instructions. We explained the problem of matching citances to their relevant reference spans to each annotator. Since the objective was to compare to our system's performance, the annotators had at their disposal the XML files given to our system. Thus, the sentence boundaries are interpreted consistently by our system and the human annotators. We instructed them to choose one or many sentences, possibly including the title, as the reference span for a citance.
The performance of the human annotators and three of our best system configurations can be seen in Table 8. The raw scores for two of the annotators had extremely low precision. Upon further analysis, we noticed outliers where more than ten different reference sentences had been chosen.
To provide a fairer assessment, the scores were adjusted for two of the annotators: if more than ten sentences were selected for a citance, we replaced the selection with simply the article title. We argue this is justified, since any citance that requires that many sentences is probably referencing the paper as a whole. After these adjustments, the scores of the human annotators rose considerably.
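The adjustment rule amounts to the following sketch:

```python
# Sketch: if an annotator selected more than ten reference sentences for
# a citance, treat the citance as citing the paper as a whole and collapse
# the selection to the article title.
def adjust_annotation(selected_sentences, title_sentence, cap=10):
    if len(selected_sentences) > cap:
        return [title_sentence]
    return selected_sentences
```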
Discussion
The fact that WordNet went from decreasing the performance of our system to increasing it shows the level of detail required to tune a system for the task of reference span identification. The performance of our human annotators demonstrates the difficulty of this task: it requires much more precision. Additionally, the human scores show there is room for improvement.
Task 1a can be framed as identifying semantically similar sentences. This perspective is best represented by the LDA systems of Section 5.2 and the word embeddings of Section 6. However, as can be seen by the results we obtained in Table 5, relying solely on semantic similarity is not the best approach.
Methods such as TFIDF and sentence limiting do not attempt to solve Task 1a head-on. Through a narrowing of possibilities, these methods improve the odds of choosing the correct sentences. Only after sifting through the candidate sentences with these methods can topic modeling be of use.
Combining TFIDF and LDA through a tradeoff parameter allowed us to test whether topic modeling does indeed improve our performance. Clearly, that is the case since our best performing system uses both TFIDF and LDA. The same experiment was performed with word embeddings, although the improvements were not as great.
Since word embeddings performed well alone but did not provide much of a boost to TFIDF, it is possible the information captured by the embeddings overlaps with the information captured by TFIDF.
The question that remains is whether the topic modeling was done as best as it could be. The results in Section 7 require further analysis. As we can see from Figure 3, very few combinations provide a net-gain in performance. Likewise, it is possible that further tuning of word embedding parameters could improve our performance.
Conclusion
During the BIRNDL shared task, we were surprised by the result of our TFIDF system, which achieved an F1-score of 13.65%. More complex systems did not obtain a higher F1-score. In this paper, we show it is possible to improve our TFIDF system with additional semantic information.
Through the use of WordNet we achieve an F1-score of 14.11%. Word embeddings increase this F1-score to 14.24%. If we employ LDA topic models instead of word embeddings, our system attains its best performance: an F1-score of 14.77%. This improvement is statistically significant.
Although these increases seem modest, the difficulty of the task should be taken into account. We performed an experiment with human annotators to assess what F1-score would constitute a reasonable goal for our system. The best F1-score obtained by a human was 27.95%. This leads us to believe there is still room for improvement on this task.
In the future, the study of overlap between TFIDF and word embeddings could provide a better understanding of the limits of this task. Finally, we also propose the simultaneous combination of LDA topic models and word embeddings. | 6,316 |
1708.02989 | 2629354081 | The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system's performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system. | In addition to these features, we have to consider that multiple citation markers may be present in a sentence. Thus, only certain parts of a sentence may be relevant to identifying the target of a particular citation marker. Qazvinian and Radev @cite_8 share an approach to find the fragment of a sentence that applies to a citation, especially in the case of sentences with multiple citation markers. The research of Abu-Jbara and Radev @cite_3 further argues that a fragment need not always be contiguous. | {
"abstract": [
"A citing sentence is one that appears in a scientific article and cites previous work. Citing sentences have been studied and used in many applications. For example, they have been used in scientific paper summarization, automatic survey generation, paraphrase identification, and citation function classification. Citing sentences that cite multiple papers are common in scientific writing. This observation should be taken into consideration when using citing sentences in applications. For instance, when a citing sentence is used in a summary of a scientific paper, only the fragments of the sentence that are relevant to the summarized paper should be included in the summary. In this paper, we present and compare three different approaches for identifying the fragments of a citing sentence that are related to a given target reference. Our methods are: word classification, sequence labeling, and segment classification. Our experiments show that segment classification achieves the best results.",
"Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as a Markov Random Field tuned to detect the patterns that context data create, and employ a Belief Propagation mechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone."
],
"cite_N": [
"@cite_3",
"@cite_8"
],
"mid": [
"62810358",
"2149801561"
]
} | Identifying Reference Spans: Topic Modeling and Word Embeddings help IR | The CL-SciSumm 2016 [12] shared task posed the problem of automatic summarization in the computational linguistics domain. Single document summarization is hardly new [30,5,4]; however, in addition to the reference document to be summarized, we are also given citances, i.e. sentences that cite our reference document. The usefulness of citances in the process of summarization is immediately apparent. A citance can hint at what is interesting about the document.
This objective was split into three tasks. Given a citance (a sentence containing a citation), in Task 1a we must identify the span of text in the reference document that best reflects what has been cited. Task 1b asks us to classify the cited aspect according to a predefined set of facets: hypothesis, aim, method, results, and implication. Finally, Task 2 is the generation of a structured summary for the reference document. Although the shared task is broken up into multiple tasks, this paper concerns itself solely with Task 1a.
Task 1a is quite interesting all by itself. We can think of Task 1a as a small scale summarization. Thus, being precise is incredibly important: the system must often find a single sentence among hundreds (in some cases, however, multiple sentences are correct). The results of the workshop [15] reveal that Task 1a is quite challenging. There was a varied selection of methods used for this problem: SVMs, neural networks, learning-to-rank algorithms, and more. Regardless, our previous system had the best performance on the test set for CL-SciSumm: cosine similarity between weighted bag-of-words vectors. The weighting used is well known in information retrieval: term frequency · inverse document frequency (TFIDF). Although TFIDF is a well known and understood method in information retrieval, it is surprising that it achieved better performance than more heavily engineered solutions. Thus, our goal in this paper is twofold: to analyze and improve on the performance of TFIDF and to push beyond its performance ceiling.
In the process of exploring different configurations, we have observed the performance of our TFIDF method vary substantially. Text preprocessing parameters can have a significant effect on the final performance. This variance also underscores the need to start with a basic system and then add complexity step-by-step in a reasoned manner. Another prior attempt employed SVMs with tree kernels, but the performance never surpassed that of TFIDF. Therefore, we focus on improving the TFIDF approach.
Depending on the domain of your data, it can be necessary to start with simple models. In general, unbalanced classification tasks are hard to evaluate because even a trivial baseline can score well. For an example that is not a classification task, look no further than news articles: the first few sentences of a news article form an incredibly effective baseline for summaries of the whole article.
First, we study a few of the characteristics of the dataset. In particular, we look at the sparsity between reference sentences and citances, what are some of the hurdles in handling citances, and whether chosen reference sentences appear more frequently in a particular section. Then we cover improvements to TFIDF. We also introduce topic models learned through Latent Dirichlet Allocation (LDA) and word embeddings learned through word2vec. These systems are studied for their ability to augment our TFIDF system. Finally, we present an analysis of how humans perform at this task.
CL-SciSumm 2016
We present a short overview of the different approaches used to solve Task 1a.
Aggarwal and Sharma [3] use bag-of-words bigrams, syntactic dependency cues and a set of rules for extracting parts of referenced documents that are relevant to citances.
In [14], researchers generate three combinations of an unsupervised graph-based sentence ranking approach with a supervised classification approach. In the first approach, sentence ranking is modified to use information provided by citing documents. In the second, the ranking procedure is applied as a filter before supervised classification. In the third, supervised learning is used as a filter to the cited document, before sentence ranking.
Cao et al. [9] model Task 1a as a ranking problem and apply SVM Rank for this purpose.
In [19], the citance is treated as a query over the sentences of the reference document. They used learning-to-rank algorithms (RankBoost, RankNet, AdaRank, and Coordinate Ascent) for this problem, with lexical (bag-of-words) features, topic features, and TextRank for ranking sentences. WordNet is used to compute concept similarity between citation contexts and candidate spans.
Lei et al. [18] use SVMs and rule-based methods with lexicon features (high frequency words within the reference text, LDA to train the reference document and citing documents, and co-occurrence lexicon) and similarities (IDF, Jaccard, and context similarity).
In [24], the authors propose a linear combination of a TFIDF model and a single layer neural network model. This paper is the most similar to our work.
Saggion et al. [28] use supervised algorithms with feature vectors representing the citance and reference document sentences. Features include positional features, WordNet similarity measures, and rhetorical features.
We have chosen to use topic modeling and word embeddings to overcome the weaknesses of the TFIDF approach. Another participant of the CL-SciSumm 2016 shared task did the same [18]. Their system performed well on the development set, but not as well on the heldout test set. We show how improving a system with a topic model or a word embedding is a lot less straightforward than expected.
Preliminaries
Following are brief explanations of terms that will be used throughout the paper.
Cosine Similarity. This is a measure of similarity between two non-zero vectors, A and B, that measures the cosine of the angle between them. Equation 1 shows the formula for calculating cosine similarity.
similarity(A, B) = cos θ = (A · B) / (‖A‖ ‖B‖)    (1)
In the above formula, θ is the angle between the two vectors A and B. We use cosine similarity to measure how close two sentences are to each other and rank them based on their similarity. In our task, each vector holds the TFIDF or LDA values for all the words in a sentence. The higher the value of similarity(A, B), the greater the similarity between the two sentences.
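A minimal sketch of Equation 1 over plain Python lists:

```python
# Sketch: cosine similarity between two equal-length vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```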
TFIDF. This is short for term frequency-inverse document frequency, and is a common scoring metric for the words of a query across a corpus of documents. The metric tries to capture the importance of a word by valuing the frequency of the word's use in a document and devaluing its appearance in every document. This was originally a method for retrieving documents from a corpus (instead of sentences from a document). For our task of summarization, this scoring metric was adjusted to help select matching sentences, so each sentence is treated as a document for our purposes. Thus, our "document" level frequencies are the frequencies of words in a sentence, and the "corpus" is the whole reference document. The term frequency is then calculated by counting a word's frequency within a sentence, and the inverse document frequency of a word is based on the number of sentences that contain that word. When using TFIDF for calculating similarity, we use Equation 1 where the vectors are defined as:
A = ⟨a_1, a_2, . . . , a_n⟩ where a_i = tf_{w_i} · idf_{w_i}    (2)
idf_{w_i} = log(N/df_{w_i})    (3)
where tf_{w_i} is the term frequency of w_i, df_{w_i} is the document frequency of w_i (the number of documents in which w_i appears), and N is the total number of documents.
WordNet. This is a large lexical database for the English language [22]. The main relation among words in WordNet is synonymy, but it also contains other relations such as antonymy, hypernymy, hyponymy, and meronymy. For our task of summarization, we use synonymy to expand the words in reference sentences and citances. Since reference sentences and citances are written by two different authors, adding synonyms increases the chance of a word occurring in both sentences if the two are indeed related.
LDA. Latent Dirichlet Allocation is a technique used for topic modeling. It learns a generative model of a document: topics are assumed to have some prior distribution, normally a symmetric Dirichlet distribution, and each topic is a multinomial distribution over the terms of the corpus. These assumptions form the basis of the method. After learning the parameters from a corpus, each term has a distribution over topics, which can be used to determine the topics of a document. When using LDA for calculating similarity, we use Equation 1 where the vectors are defined as topic membership probabilities:
A = ⟨a_1, a_2, . . . , a_n⟩ where a_i = P(doc_A ∈ topic_i)    (4)
F1-score. To evaluate our methods, we have chosen the F1-score. The F1-score is a weighted average of precision and recall in which precision and recall receive equal weighting; this kind of weighted average is also referred to as the harmonic mean. Precision is the proportion of correct results among the results that were returned, and recall is the proportion of correct results among all possible correct results.
Our system outputs the top 3 sentences and we compute recall, precision, and F1-score using these sentences. If a relevant sentence appears in the top 3, then it factors into recall, precision, and F1-score. Thus, we naturally present the precision at N measure (P@N) used by [8]. Precision at N is simply the proportion of correct results in the top N ranks; in our evaluations, N = 3. Average precision and the area under the ROC curve are two other measures that present a more complete picture when there is a large imbalance between classes, but to keep in line with the evaluation for the BIRNDL shared task we chose to use P@N. Regardless, we focus on the F1-score rather than P@3 when determining if one system is better than another.
If we look at the percentage of sentences that appear in the gold standard, we see that roughly 90% of the sentences in our dataset are never chosen by an annotator. This means our desired class is rare and diverse, similar to outliers or anomalies [8]. Therefore, we should expect low performance from our system, since our task is similar to anomaly detection [8], where good performance is hard to achieve under such conditions.
Dataset
The dataset [13] consists of 30 total documents separated into three sets of 10 documents each: training, development, and test sets. For the following analysis, no preprocessing has been done (for instance, stemming).
There are 23356 unique words among the reference documents in the dataset. The citances contain 5520 unique words. The most frequent word among reference documents appears in 4120 sentences. The most frequent word among citances appears in 521 sentences. There are 6700 reference sentences and 704 citances (although a few of these should actually be broken up into multiple sentences). The average reference sentence has approximately 22 words in this dataset whereas citances have an average of approximately 34 words.
In Figure 1 we can see the sparsity of the dataset. At a particular point (x, y) along the curves, we know that x% of all sentences contain at least a certain number of unique words: a number equal to y% of the vocabulary. All sentences contain at least one word, which is a very small sliver of the vocabulary (appearing as 0% in the graph). The quicker the decay, the greater the sparsity. Noise in the dataset is one of the factors behind the sparsity. We can see that citances, seen as a corpus, are in general less sparse than the reference texts. This can be an indication that citances have some common structure or semantics.
One possibility is that citances must have some level of summarization ability. If we look at the annotations of a document as a whole, we see a pattern: the annotators tend to choose from a small pool of reference sentences. Therefore, the sentences chosen are usually somewhat general and serve as tiny summaries of a single concept. Furthermore, the chosen reference sentences make up roughly 10% of all reference sentences from which we have to choose.
Citances
It should be noted that citances have a few peculiarities, such as an abundance of citation markers and proper names. Citation markers (cues in written text demarcating a citation) will sometimes include the names of authors, so the vocabulary of these sentences will include more proper names. This could explain the lesser sparsity if authors recur across citances; however, it could also produce greater sparsity if these authors are unique. Identifying and ignoring citation markers should reduce noise. A preprocessing step we employ with this goal is the removal of all text enclosed in brackets of any kind.
To demonstrate the differences in difficulty a citance can pose, we present two examples: one that is relatively simple and another that is relatively hard. In both examples the original citance marker is in italics.
Easy Citance: "According to Sproat et al. (1996), most prior work in Chinese segmentation has exploited lexical knowledge bases; indeed, the authors assert that they were aware of only one previously published instance (the mutual-information method of Sproat and Shih (1990)) of a purely statistical approach."
Reference Span: "Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information. The present proposal falls into the last group. Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach."
In the "Easy" case, there are many salient words in common between the reference spans we must retrieve and the citance. This is the ideal case for TFIDF, since matching based on these words should produce good results. However, in the "Hard" case:
Hard Citance: "A lot of work has been done in English for the purpose of anaphora resolution and various algorithms have been devised for this purpose (Aone and Bennette, 1996; Brenan , Friedman and Pollard, 1987; Ge, Hale and Charniak, 1998; Grosz, Aravind and Weinstein, 1995; McCarthy and Lehnert, 1995; Lappins and Leass, 1994; Mitkov, 1998 ; Soon, Ng and Lim, 1999)."
Reference Span: "We have described a robust, knowledge-poor approach to pronoun resolution which operates on texts preprocessed by a part-of-speech tagger."
We can see that there is no overlap of salient words between the citance and the text span. Not only is the citance somewhat vague, but any semantic overlap is not exact. For instance, "anaphora resolution" and "pronoun resolution" refer to the same concept but do not match lexically.
Frequency of Section Titles
We analyzed the frequency of section titles for the chosen reference sentences. Our analysis only excludes document P05-1053 from consideration, the document whose answer key was withheld. For each cited reference sentence, we looked at the title of the section in which it appears. The titles that appeared with the greatest frequency can be seen in Table 1. To extract these section titles we looked at the parent nodes of sentences within the XML document. The "title" and "abstract" sections are special since they refer to parent nodes of a type other than SECTION. Due to OCR noise, a few section names were wrong; we manually corrected these. Then, we performed a slight normalization by removing any 's' character at the end of a name. These results clearly show that cited sentences are not uniformly distributed within a document.
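For illustration, the tally can be sketched as below; the XML layout (sentences as S nodes with sid attributes under SECTION parents carrying a title attribute, with TITLE and ABSTRACT as the special parents) is an assumption about the shared task format, and the trailing-'s' normalization matches the step described above.

```python
# Sketch: count section titles of cited sentences in one reference document.
from collections import Counter
import xml.etree.ElementTree as ET

def section_title_counts(xml_path, cited_sids):
    counts = Counter()
    root = ET.parse(xml_path).getroot()
    for parent in root.iter():
        sentences = parent.findall("S")
        if not sentences:
            continue
        # SECTION nodes carry a title attribute; TITLE/ABSTRACT do not.
        name = parent.get("title", parent.tag).lower().rstrip("s")
        for s in sentences:
            if s.get("sid") in cited_sids:
                counts[name] += 1
    return counts
```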
Preprocessing
Before using the dataset in our system, we preprocessed it to reduce the number of errors. The dataset has many errors due to the use of OCR techniques: broken words, non-ASCII characters, and formatting problems in the XML files are some examples. We performed the following preprocessing steps to reduce noise in the dataset. First, we manually went over the citances and reference sentences fixing broken words (those separated by a hyphen, a space, or some non-ASCII character). We then automatically removed all non-ASCII characters from citance and reference text. Finally, we manually fixed some misformatted XML files that were missing closing tags.
TFIDF Approach
Our best performing system for the CL-SciSumm 2016 task was based on TFIDF. It achieved a 13.68% F1-score on the test set for Task 1a. Our approach compares the TFIDF vectors of the citance and the sentences in the reference document. Each reference sentence is assigned a score according to the cosine similarity between itself and the citance. There were several variations studied to improve our TFIDF system. Table 2 contains the abbreviations we use when discussing a particular configuration. Stopwords were removed for all configurations. These words serve mainly to add noise, so their removal helps improve performance. There are two lists of stopwords used: one from sklearn (sk stop) and one from NLTK (nltk stop).
To remove the effect of using words in their different forms we used stemming (st) to reduce words to their root form. For this purpose, we use the Snowball Stemmer, provided by the NLTK package [7].
WordNet has been utilized to expand the semantics of the sentence. We obtain the lemmas from the synsets of each word in the sentence. We use the Lesk algorithm, provided through the NLTK package [7], to perform word-sense disambiguation. This is a necessary step before obtaining the synset of a word from WordNet. Each synset is a collection of lemmas. The lemmas that constitute each synset are added to the word vector of the sentence; this augmented vector is used when calculating the cosine similarity instead of the original vector. We consider three different methods of using WordNet: ref wn, cit wn, and both wn, which will be explained in Subsection 4.1.
Our first implementation of WordNet expansion increased coverage at the cost of performance. With the proper adjustments, we were able to improve performance as well. This is another example of how the details and tuning of the implementation are critical when dealing with short text. Our new implementation takes care to only add a word once, even though it may appear in the synsets of multiple words of the sentence. Details are found in Subsection 4.1.
In the shared task, filtering candidate sentences by length improved the system's performance substantially. Short sentences are unlikely to be chosen; they are often too brief to encapsulate a concept completely. Longer sentences are usually artifacts from the PDF to text conversion (for instance, a table transformed into a sentence). We eliminate from consideration all sentences outside a certain range of word counts. In our preliminary experiments, we found two promising lower bounds on the number of words: 8 and 15. The only upper bound we consider is 70, which also reduces computation time since longer sentences take longer to score. Each range appears in the tables as an ordered pair (min, max); e.g., a range of 8 to 70 words appears as (8, 70). This process eliminates some of the sentences our system is supposed to retrieve, so our maximum attainable F1-score is lowered.
Improvements on TFIDF
The main drawback of the TFIDF method is its inability to handle situations where there is no overlap between citance and reference sentence. Thus, we decided to work on improving the WordNet expansion. In Table 3 we can see the performance of various configurations, some of which improve upon our previously best system. The first improvement was to make sure the synsets do not flood the sentence with additional terms. Instead of adding all synsets to the sentence, we only added the unique terms found in those synsets. Thus, if a term appeared in multiple synsets of words in the sentence, it would still only contribute once to the modified sentence.
While running the experiments, the WordNet preprocessing was accidentally applied only to the citances instead of to both citances and reference sentences. This increased our performance to 14.11% (first entry in Table 3b). To investigate further, we also ran the WordNet expansion on only the reference sentences. This led to the discovery of another subtlety, but before we can elaborate we must explain how WordNet expansion is performed.
Conceptually, the goal of the WordNet preprocessing stage is to increase the overlap between the words that appear in the citance and those that appear in the reference sentences. By including synonyms, a sentence has a greater chance to match with the citance. The intended effect was for the citances and reference sentences to meet in the middle.
The steps taken in WordNet expansion are as follows: each sentence is tokenized into single word tokens; we search for the synsets of each token in WordNet; if a synset is found, then the lemmas that constitute that synset are added to the sentence. The small subtlety referred to before is the duplication of original tokens: if a synset is found, it must contain the original token, so the original token gets added once more to the sentence. This adds more weight to the original tokens. Before the discovery of one-sided WordNet expansion, this was a key factor in our TFIDF results.
In actuality, adding synonyms to all reference sentences was a step too far. We believe that the addition of WordNet synsets to both the reference sentences and citances only served to add more noise. Due to the number of reference sentences, these additional synsets impacted the TFIDF values derived. However, if we only apply this transformation to the citances, the impact on the TFIDF values is minimal.
We now had to experiment with applying WordNet asymmetrically: adding synsets to citances only (cit wn) and adding synsets to reference sentences only (ref wn). In addition, we ran experiments to test the effect of duplicating original tokens. This would still serve the purpose of increasing overlap, but lessen the noise we introduced as a result. We can see the difference in performance in Table 3a and Table 3b. On the development set, applying WordNet only to the reference sentences with duplication performed the best, with an F1-score of 16.41%. On the test set, WordNet applied only to the citances performs the best, with 14.12%. Regardless, our experiments indicate that one-sided WordNet expansion leads to better results. In the following sections, experiments only consider one-sided WordNet use.
Topic Modeling
To overcome the limitations of using a single sentence we constructed topic models to better capture the semantic information of a citance. Using Latent Dirichlet Allocation (LDA), we created various topic models for the computational linguistics domain.
Corpus Creation
First, we gathered a set of 34273 documents from the ACL Anthology website. This set is comprised of all PDFs available for download. The next step was to convert the PDFs to text. Unfortunately, we ran into the same problem as the organizers of the shared task: the conversion from PDF to text left a lot to be desired. Additionally, some PDFs used an internal encoding, resulting in an undecipherable conversion. Instead of trying to fix these documents, we decided to select a subset that seemed sufficiently error-free. Since poorly converted documents contain more symbols than usual, we chose to cluster the documents according to character frequencies. Using K-means, we clustered the documents into twelve different clusters. After clustering, we manually selected the clusters that seemed to have articles with acceptable noise. Interestingly, tables of contents are part of the PDFs available for download from the anthology. Since these documents contain more formatting than text, they ended up clustered together. We chose to disregard these clusters as well. In total, 26686 documents remained as part of our "cleaned corpus".
Latent Dirichlet Allocation
LDA can be seen as a probabilistic factorization method that splits a term-document matrix into term-topic and topic-document matrices. The main advantage of LDA is its soft clustering: a single document can belong to many topics to varying degrees.
We are interested in the resulting term-topic matrix derived from the corpus. With this matrix, we can convert terms into topic vectors, where each dimension represents the term's degree of membership in a topic. These topic vectors provide new opportunities for achieving overlap between citances and reference sentences, allowing us to score sentences that would have a cosine similarity of zero between TFIDF vectors. As with K-means, we must choose the number of topics beforehand. Since we are using online LDA [11], there are a few additional parameters: κ, which adjusts the learning rate of online LDA, and τ0, which slows down learning for the first few iterations. The ranges for each parameter are [0.5, 0.9] in increments of 0.1 for κ, and 1, 256, 512, or 768 for τ0.
We also experimented with different parameters for the vocabulary. The minimum number of documents in which a word had to appear was an absolute number of documents (min df): 10 or 40. The maximum number of documents in which a word could appear was a percentage of the total corpus (max df): 0.87, 0.93, 0.99.
One way to evaluate the suitability of a learned topic model is through a measure known as perplexity [11]. Since LDA learns a distribution for topics and terms, we can calculate the probability of any document according to this distribution. Given an unseen collection of documents taken from the same domain, we calculate the probability of this collection according to the topic model. We expect a good topic model to be less "surprised" at these documents if they are a representative sample of the domain. In Figure 2, we graph the perplexity of our topic models when judged on the reference documents of the training and development set of the CL-SciSumm dataset.
Unfortunately, the implementation we used does not normalize these values, which means we cannot use the perplexity to compare two models that have a different number of topics. Keep in mind that the numbers in Figure 2 do not reflect the perplexity directly. Perplexity is still useful for evaluating our choice of κ and τ0. We omit plotting the perplexity for different τ0 values since, with regard to perplexity, models with τ0 > 1 always underperformed. Figure 2 makes a strong case for the choice of κ = 0.5. However, our experiments demonstrate that, for ranking, higher κ and higher τ0 can be advantageous.
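As a sketch of this evaluation (assuming the gensim model from above), gensim reports a per-word likelihood bound in log base 2, from which the perplexity can be recovered; `heldout_bow` is assumed to be the held-out reference documents in bag-of-words form.

```python
# The bound is per-word and not normalized across topic counts, so it is
# only comparable between models with the same number of topics.
bound = lda.log_perplexity(heldout_bow)
perplexity = 2 ** (-bound)
```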
In order to compare these models across different numbers of topics, we evaluated their performance at Task 1a. The results of these runs can be seen in Table 5a. Sentences were first converted to LDA topic vectors, then ranked by their cosine similarity to the citance (also a topic vector). The performance of this method is worse than all TFIDF configurations, regardless of which LDA model is chosen; we merely use these results to compare the different models. Nevertheless, the topics learned by LDA were not immediately evident, suggesting there is room for improvement in the choice of parameters. Table 4 has a selection of the most interpretable topics for one of the models.
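A minimal sketch of this ranking step, again assuming the gensim model from above; topic vectors are densified before taking cosine similarities, and all helper names are ours, not part of the original system:

```python
import numpy as np
from gensim import matutils

def topic_vector(lda, bow, num_topics):
    # Dense topic-membership vector for one sentence.
    topics = lda.get_document_topics(bow, minimum_probability=0.0)
    return matutils.sparse2full(topics, num_topics)

def rank_by_lda(lda, citance_bow, sentence_bows, num_topics=100):
    c = topic_vector(lda, citance_bow, num_topics)
    scores = []
    for s_bow in sentence_bows:
        s = topic_vector(lda, s_bow, num_topics)
        cos = float(np.dot(c, s) / (np.linalg.norm(c) * np.linalg.norm(s) + 1e-12))
        scores.append(cos)
    return np.argsort(scores)[::-1]  # best-matching reference sentences first
```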
Word Embeddings
Another way to augment the semantic information of the sentences is through word embeddings. The idea behind word embeddings is to assign each word a vector of real numbers. These vectors are chosen such that if two words are similar, their vectors should be similar as well. We learn these word embeddings in the same manner as word2vec [21].
We use DMTK [20] to learn our embeddings. DMTK provides a distributed implementation of word2vec. We trained two separate embeddings: WE-1 and WE-2. We only explored two different parameter settings. Both embeddings consist of a 200-dimensional vector space. Training was slightly more intensive for WE-1, which ran for 15 epochs sampling 5 negative examples. The second embedding, WE-2, ran for 13 epochs sampling only 4 negative examples. The minimum count for words in the vocabulary was also different: WE-1 required words to appear 40 times, whereas WE-2 required words to appear 60 times (thus resulting in a smaller vocabulary).
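For illustration, roughly equivalent configurations could be expressed with gensim's word2vec (parameter names are gensim 4's, and `sentences` is an assumed iterable of token lists; we actually trained with DMTK's distributed implementation):

```python
from gensim.models import Word2Vec

# WE-1: 15 epochs, 5 negative samples, vocabulary threshold of 40.
we_1 = Word2Vec(sentences, vector_size=200, negative=5, epochs=15,
                min_count=40, workers=8)
# WE-2: 13 epochs, 4 negative samples, vocabulary threshold of 60.
we_2 = Word2Vec(sentences, vector_size=200, negative=4, epochs=13,
                min_count=60, workers=8)
```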
To obtain similarity scores, we use the Word Mover's Distance [17]. Thus, instead of measuring the similarity to an averaged vector for each sentence, we consider the vector of each word separately. In summary, given two sentences, each composed of words which are represented as vectors, we want to move the vectors of one sentence atop those of the other by moving each vector as little as possible. The results obtained can be found in Table 5b.
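A sketch of the scoring step, assuming a trained gensim model and tokenized sentences; note that gensim's wmdistance needs an optimal-transport backend (pyemd or POT, depending on the version) and returns a distance, so we negate it to obtain a similarity-like score for ranking:

```python
def wmd_score(model, citance_tokens, sentence_tokens):
    # Smaller distance means more similar, hence the negation for ranking.
    return -model.wv.wmdistance(citance_tokens, sentence_tokens)
```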
Word embeddings outperformed topic models on the development set; while the highest scoring topic model achieved a 7.90% F1-score on the development set, the highest scoring word embedding achieved 13.77%.
Tradeoff Parameterization
In order to combine the TFIDF systems with LDA or Word Embedding systems, we introduce a parameter to vary the importance of the added similarity compared to the TFIDF similarity: λ. The equation for the new scores is thus:
λ · TFIDF + (1 − λ) · other    (5)
where other stands for either LDA or WE. Each sentence is scored by each system separately. These two values (the TFIDF similarity and the other system's similarity) are combined through Equation 5. The sentences are then ranked according to these adjusted values.
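In code, Equation 5 amounts to the following (both score lists are assumed to be aligned per candidate sentence and already on comparable scales):

```python
def combined_scores(tfidf_scores, other_scores, lam):
    # lam weights the TFIDF similarity against the LDA/WE similarity.
    return [lam * t + (1 - lam) * o
            for t, o in zip(tfidf_scores, other_scores)]
```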
We evaluated this method by taking the 10 best performing systems on the development set for both TFIDF and LDA. Each combination of a TFIDF system with an LDA system was tested. We test these hybrid systems with values of λ in the range [0.7, 0.99], in 0.01 increments. There were only 6 different configurations for word embeddings, so we used all of them with the same values for λ.
After obtaining the scores for the development set, we chose the 100 best systems to run on the test set (for LDA only). Systems consist of a choice of TFIDF system, a choice of LDA system, and a value for λ. The five highest scoring systems are shown in Table 7.
We can see that a particular topic model dominated. The LDA model that best complemented any TFIDF system was only the fourth best LDA system on the development set. There were multiple combinations with the same F1-score, so we had to choose which to display in Table 7. This obscures the results, since other models attain F1-scores as high as 14.64%. In particular, the second best performing topic model in this experiment was an 80-topic model that is not in Table 5a.
The next best topic model had 50 topics.
Interestingly, a TFIDF system coupled with word embeddings performs remarkably well on the development set, as can be seen in Table 6 (values similar to LDA if we look at Fig 3). However, once we move to the test set, all improvements become meager. It is possible that word embeddings are more sensitive to the choice of λ.
Although we do not provide the numbers, if we analyze the distribution of scores given by word embeddings, we find that the distribution is much flatter. TFIDF scores drop rapidly; for some citances, most sentences are scored with a zero. LDA improves upon that by having fewer zero-score sentences, but the scores still decay until they reach zero. Word embeddings, however, seem to grant a minimum score to all sentences (most scores are greater than 0.4). Furthermore, there is very little variability from the lowest to the highest score. This is further evidenced by the wide range of λ values that yield good performance in Table 6. We conjecture that the shape of these distributions may be responsible for the differences in performance.
Statistical Analysis
Although the F1-scores have improved by augmenting the bare-bones TFIDF approach, we must still check whether this improvement is statistically significant. Since some of these systems have very similar F1-scores, we cannot simply provide a 95% confidence interval for each F1-score individually; we are forced to perform paired t-tests, which mitigate the variance inherent in the data.
Given two systems, A and B, we resample the test dataset with replacement 10,000 times and calculate the F1-score for each new sample. By evaluating both systems on the same sample, any variability due to the data (harder/easier citances, for instance) is ignored. Finally, these pairs of F1-scores are used to calculate a p-value for the paired t-test.
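A sketch of this procedure, assuming per-citance true-positive/false-positive/false-negative counts have been stored for both systems (the bookkeeping itself is not shown, and all names are ours):

```python
import numpy as np
from scipy import stats

def f1_from_counts(counts):
    # counts: list of (tp, fp, fn) triples, one per citance.
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def paired_bootstrap_pvalue(per_citance_a, per_citance_b, n_samples=10000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(per_citance_a)
    f1_a, f1_b = [], []
    for _ in range(n_samples):
        idx = rng.integers(0, n, size=n)  # the same resample for both systems
        f1_a.append(f1_from_counts([per_citance_a[i] for i in idx]))
        f1_b.append(f1_from_counts([per_citance_b[i] for i in idx]))
    return stats.ttest_rel(f1_a, f1_b).pvalue
```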
We calculate the significance of differences between the top entries for each category (TFIDF and the two tradeoff variations) evaluated on the test set. For the best performing system (TFIDF + LDA, at an F1-score of 14.77%), the differences from TFIDF + WE (at 14.24%) and TFIDF (at 14.11%) are statistically significant, with p-values of 0.0015 and 0.0003, respectively. However, the difference between TFIDF + WE and TFIDF is not statistically significant (p-value of 0.4830).
Human Annotators
In order to determine whether the performance of our system is much lower than what can be achieved, we ran an experiment with human annotators. Since human annotators require more time to perform the task, we had to truncate the test set to just three documents, chosen at random.
The subset used to evaluate the human annotators consists of three different articles from the test set: C00-2123, N06-2049, and J96-3004. The 20 citances that cite C00-2123 only select 24 distinct sentences from the reference article, which contains 203 sentences. Similarly, the 22 citances that cite N06-2049 select only 35 distinct reference sentences from 155 total. The last article, J96-3004, has 69 citances annotated that select 109 distinct reference sentences from the 471 sentences found in the article.
To avoid "guiding" the annotators to the correct answers, we provided minimal instructions. We explained the problem of matching citances to their relevant reference spans to each annotator. Since the objective was to compare to our system's performance, the annotators had at their disposal the XML files given to our system. Thus, the sentence boundaries are interpreted consistently by our system and the human annotators. We instructed them to choose one or many sentences, possibly including the title, as the reference span for a citance.
The performance of the human annotators and three of our best system configurations can be seen in Table 8. The raw scores for two of the annotators had extremely low precision. Upon further analysis, we noticed outliers where more than ten different reference sentences had been chosen.
To provide a fairer assessment, the scores were adjusted for two of the annotators: if more than ten sentences were selected for a citance, we replace the sentences with simply the article title. We argue this is justified since any citance that requires that many sentences to be chosen is probably referencing the paper as a whole. After these adjustments, the score of the human annotators rose considerably.
Discussion
The fact that WordNet went from decreasing the performance of our system to increasing it shows the level of detail required to tune a system for the task of reference span identification. The performance of our human annotators demonstrates the difficulty of this task, which demands considerable precision. Additionally, the human scores show there is room for improvement.
Task 1a can be framed as identifying semantically similar sentences. This perspective is best represented by the LDA systems of Section 5.2 and the word embeddings of Section 6. However, as can be seen by the results we obtained in Table 5, relying solely on semantic similarity is not the best approach.
Methods such as TFIDF and sentence limiting do not attempt to solve Task 1a head-on. Through a narrowing of possibilities, these methods improve the odds of choosing the correct sentences. Only after sifting through the candidate sentences with these methods can topic modeling be of use.
Combining TFIDF and LDA through a tradeoff parameter allowed us to test whether topic modeling does indeed improve our performance. Clearly, that is the case since our best performing system uses both TFIDF and LDA. The same experiment was performed with word embeddings, although the improvements were not as great.
Since word embeddings performed well alone but did not provide much of a boost to TFIDF, it is possible that the information captured by the embeddings overlaps with the information captured by TFIDF.
The question that remains is whether the topic modeling was done as well as it could be. The results in Section 7 require further analysis. As we can see from Figure 3, very few combinations provide a net gain in performance. Likewise, it is possible that further tuning of word embedding parameters could improve our performance.
Conclusion
During the BIRNDL shared task, we were surprised by the result of our TFIDF system, which achieved an F1-score of 13.65%. More complex systems did not obtain a higher F1-score. In this paper, we show it is possible to improve our TFIDF system with additional semantic information.
Through the use of WordNet we achieve an F1-score of 14.11%. Word embeddings increase this F1-score to 14.24%. If we employ LDA topic models instead of word embeddings, our system attains its best performance: an F1-score of 14.77%. This improvement is statistically significant.
Although these increases seem modest, the difficulty of the task should be taken into account. We performed an experiment with human annotators to assess what F1-score would constitute a reasonable goal for our system. The best F1-score obtained by a human was 27.95%. This leads us to believe there is still room for improvement on this task.
In the future, the study of overlap between TFIDF and word embeddings could provide a better understanding of the limits of this task. Finally, we also propose the simultaneous combination of LDA topic models and word embeddings. | 6,316 |
1708.02863 | 2743620784 | The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the coupling module, which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Codes will be made publicly available. | In order to leverage the great success of deep neural networks for image classification @cite_9 @cite_36 , numerous object detection methods based on deep learning have been proposed @cite_24 @cite_13 @cite_33 @cite_26 @cite_29 . Although there are end-to-end detection frameworks, like SSD @cite_28 , YOLO @cite_6 and DenseBox @cite_14 , region-based systems (Fast/Faster R-CNN @cite_31 @cite_23 and R-FCN @cite_18 ) still dominate the detection accuracy on generic benchmarks @cite_8 @cite_21 . | {
"abstract": [
"",
"",
"How can a single fully convolutional neural network (FCN) perform on object detection? We introduce DenseBox, a unified end-to-end FCN framework that directly predicts bounding boxes and object class confidences through all locations and scales of an image. Our contribution is two-fold. First, we show that a single FCN, if designed and optimized carefully, can detect multiple different objects extremely accurately and efficiently. Second, we show that when incorporating with landmark localization during multi-task learning, DenseBox further improves object detection accuray. We present experimental results on public benchmark datasets including MALF face detection and KITTI car detection, that indicate our DenseBox is the state-of-the-art system for detecting challenging objects such as faces and cars.",
"",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"",
""
],
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_31",
"@cite_13"
],
"mid": [
"",
"",
"2129987527",
"",
"2179352600",
"",
"2193145675",
"2949650786",
"",
"",
"",
"2102605133",
"",
"",
""
]
} | CoupleNet: Coupling Global Structure with Local Parts for Object Detection | General object detection requires accurately locating and classifying all targets in the image or video. Compared to specific object detection, such as face, pedestrian and vehicle detection, general object detection often faces more challenges due to the large inter-class appearance differences. The variations arise not only from a variety of non-rigid deformations, but also from truncations, occlusions and inter-class interference.
[Figure 1 caption: Only considering the local part information or the global structure leads to a low confidence score. By coupling the two kinds of information together, we can detect the sofa accurately with a confidence score of 0.78. Best viewed in color.]
However, no matter how complicated the objects are, when humans identify a target, the recognition of object categories is subserved by both a global process that retrieves structural information and a local process that is sensitive to individual parts. This motivates us to build a detection model that fuses both global and local information.
With the revival of Convolutional Neural Networks (CNN) [15], CNN-based object detection pipelines [8,9,16,21] have been proposed consecutively and made impressive improvements on generic benchmarks, e.g. PASCAL VOC [5] and MS COCO [17]. As two representative region-based CNN approaches, Fast/Faster R-CNN [8,21] uses a certain subnetwork to predict the category of each region proposal, while R-FCN [16] conducts the inference with position-sensitive score maps. By removing the RoI-wise subnetwork, R-FCN has achieved higher detection speed while keeping the detection performance. However, the global structure information is ignored by the PSRoI pooling. As shown in Figure 1, using PSRoI pooling to extract local part information for the final object category prediction, R-FCN produces a low confidence score of 0.08 for the sofa, since the local responses of the sofa are disturbed by a woman and a dog (which are also categories that need to be detected). Conversely, the global structure of the sofa can be extracted by RoI pooling, but the confidence score is 0.45, which is also low given the incomplete structure of the sofa. By coupling the global confidence with the local part confidence, we obtain a more reliable prediction with a confidence score of 0.78.
In fact, the idea of fusing global and local information is widely used in many visual tasks. In fingerprint recognition, Gu et al. [10] combined the global orientation field and local minutiae cues to largely improve the performance. In clique-graph matching, Nie et al. [19] proposed a clique-graph matching method that preserves global clique-to-clique correspondence as well as local unary and pairwise correspondences. In scene parsing, Zhao et al. [27] designed a pyramid pooling module to effectively extract a hierarchical global contextual prior, which is then concatenated with the local FCN feature to improve performance. In traditional object detection, Felzenszwalb et al. [6] incorporated a global root model and several finer local part models to represent highly variable objects. All of these show that an effective combination of global structural properties and local fine-grained details can achieve complementary advantages.
Therefore, to fully explore the global and local clues, in this paper we propose a novel fully convolutional network, named CoupleNet, that couples the global structure and local parts to boost detection accuracy. Specifically, the object proposals obtained by the RPN are fed into the coupling module, which consists of two branches. One branch adopts PSRoI pooling to capture the local part information of the object, while the other employs RoI pooling to encode the global and context information. Moreover, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. With the coupling structure, our network can jointly learn the local, global and context expressions of the objects, which gives the model a more powerful representation capacity and generalization ability. Extensive experiments demonstrate that CoupleNet can significantly improve detection performance. Our detector shows competitive results on PASCAL VOC 07/12 and MS COCO compared to other state-of-the-art detectors, even those using model ensembles.
In summary, our main contributions are as follows: 1. We propose a unified fully convolutional network to jointly learn the local, global and context information for object detection.
2. We design different normalization methods and coupling strategies to mine the compatibility and complementarity between the global and local branches.
3. We achieve the state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on MS COCO.
CoupleNet
In this section, we first introduce the architecture of the proposed CoupleNet for object detection. Then we explain in detail how we incorporate local representations, global appearance and contextual information for robust object detection.
Network architecture
The architecture of our proposed CoupleNet is illustrated in Figure 2. Our CoupleNet includes two different branches: a) a local part-sensitive fully convolutional network to learn the object-specific parts, denoted as local FCN; b) a global region-sensitive fully convolutional network to encode the whole appearance structure and context prior of the object, denoted as global FCN. We first use the ImageNet pre-trained ResNet-101 released in [12] to initialize our network. For our detection task, we remove the last average pooling layer and the fc layer. Given an input image, we extract candidate proposals by using the Region Proposal Network (RPN), which also shares convolutional features with CoupleNet, following [21]. Then each proposal flows to two different branches: the local FCN and the global FCN. Finally, the outputs of the global and local FCN are coupled together as the final score of the object. We also perform class-agnostic bounding box regression in a similar way.
Local FCN
To effectively capture specific fine-grained parts in the local FCN, we construct a set of part-sensitive score maps by appending a 1x1 convolutional layer with k²(C + 1) channels, where k means we divide the object into k × k local parts (here k is set to the default value 7) and C + 1 is the number of object categories plus background. For each category, there are k² channels in total, and each channel is responsible for encoding a specific part of the object. The final score of a category is determined by voting over the k² responses. Here we use the position-sensitive RoI pooling layer in [16] to extract object-specific parts, and we simply perform average pooling for voting. Then, we obtain a (C + 1)-d vector which indicates the probability that the object belongs to each class. This procedure is equivalent to dividing a strong object category decision into the sum of multiple weak classifiers, which serves as an ensemble of several part models. Here we refer to this part ensemble as the local structure representation. As shown in Figure 3(a), for the truncated person, one can hardly get a strong response from the global description of the person due to truncation; on the contrary, our local FCN can effectively capture several specific parts, such as the human nose, mouth, etc., which correspond to the regions with large responses in the feature map.
[Figure 3 caption: An intuitive description of CoupleNet for object detection. (a) It is difficult to determine the target by using the global structure information alone for objects with truncations. (b) Moreover, for those having a simple spatial structure and encompassing considerable background in the bounding box, e.g. dining table, it is also not enough to use local parts alone to make robust predictions. Therefore, an intuitive idea is to simultaneously couple global structure with local parts to effectively boost the confidence. Best viewed in color.]
We argue that the local FCN is much concerned with
the internal structure and components, which can effectively reflect the local properties of the visual object, especially when the object is occluded or its boundary is incomplete. However, for objects having a simple spatial structure and encompassing considerable background in the bounding box, e.g. the dining table, the local FCN alone can hardly make robust predictions. Thus it is necessary to add global structure information to enhance the discrimination.
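As an illustration (not the authors' released code), the local branch can be sketched in PyTorch with torchvision's PSRoI pooling; the 1024-channel input, the stride-16 spatial scale and all names are assumptions based on the description above:

```python
import torch.nn as nn
from torchvision.ops import ps_roi_pool

C, k = 20, 7  # e.g. PASCAL VOC classes; 7x7 part grid
# 1x1 conv producing k*k*(C+1) part-sensitive score maps.
part_conv = nn.Conv2d(1024, k * k * (C + 1), kernel_size=1)

def local_branch(features, rois):
    # features: (N, 1024, H, W); rois: (R, 5) as (batch_idx, x1, y1, x2, y2).
    score_maps = part_conv(features)
    # PSRoI pooling picks a different channel group for each of the k x k bins.
    pooled = ps_roi_pool(score_maps, rois, output_size=k, spatial_scale=1.0 / 16)
    # Average pooling over the k x k grid implements the part voting.
    return pooled.mean(dim=(2, 3))  # (R, C+1) class scores per proposal
```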
Global FCN
For the global FCN, we aim to describe the object using whole region-level features. Firstly, we attach a 1024-d 1x1 convolutional layer after the last convolutional block of ResNet-101 to reduce the dimension. Due to the diverse sizes of objects, we insert a RoI pooling layer [8] to extract a fixed-length feature vector as the global structure description of the object. Secondly, we use two convolutional layers with kernel sizes k × k and 1 × 1 respectively (k is set to the default value 7) to further abstract the global representation of the RoI. Finally, the output of the 1x1 convolution is fed into the classifier, whose output is also a (C + 1)-d vector.
In addition, context priors are a basic and important factor for visual recognition tasks. For example, a boat usually travels in the water and is unlikely to fly in the sky. Although the higher layers of a deep neural network can involve the spatial context around objects due to their large receptive fields, Zhou et al. [28] have shown that the practical receptive field is actually much smaller than the theoretical one. Therefore, it is necessary to explicitly collect the surrounding information to reduce the chance of misclassification. To enhance the feature representation ability of the global FCN, here we introduce contextual information as an effective supplement. Specifically, we use a context region 2 times larger than the original proposal. The features RoI-pooled from the original region and the context region are then concatenated together and fed into the subsequent RoI-wise subnetwork. As shown in Figure 2, the context region is embedded into the global branch to extract a more complete appearance structure and a discriminative prior representation, which helps the classifier to better identify the object categories.
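The context-region construction can be sketched as follows (a hypothetical helper; the RoI layout with a leading batch index is an assumption): each RoI is enlarged by a factor of 2 around its center and clipped to the image.

```python
def expand_rois(rois, img_w, img_h, factor=2.0):
    # rois: (R, 5) torch tensor as (batch_idx, x1, y1, x2, y2).
    cx = (rois[:, 1] + rois[:, 3]) / 2
    cy = (rois[:, 2] + rois[:, 4]) / 2
    half_w = (rois[:, 3] - rois[:, 1]) * factor / 2
    half_h = (rois[:, 4] - rois[:, 2]) * factor / 2
    out = rois.clone()
    out[:, 1] = (cx - half_w).clamp(min=0)
    out[:, 2] = (cy - half_h).clamp(min=0)
    out[:, 3] = (cx + half_w).clamp(max=img_w - 1)
    out[:, 4] = (cy + half_h).clamp(max=img_h - 1)
    return out
```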
Due to the RoI pooling operation, the global FCN describes the proposal as a whole with CNN features, which can be seen as a global structure description of the object. Therefore, it can easily deal with the objects with intact structure and finer scale. As shown in Figure 3(b), our global FCN shows a large confidence for the dining table. However, in most cases, natural scenes consist of considerable objects with occlusions or truncations, making the detection more difficult. Figure 3(a) shows that using the global structure information alone can hardly make a confident prediction for the truncated person. By adding local part structural supports, the detection performance can be significantly boosted. Therefore, it is essential to combine both local and global descriptions for a robust detection.
Coupling structure
To match the same order of magnitude, we apply a normalization operation to the outputs of the local and global FCN before they are combined. We explored two different normalization methods: an L2 normalization layer, or a 1x1 convolutional layer to learn the scale. Meanwhile, how to couple the local and global outputs is also a question that needs investigation. Here, we examined three different coupling methods: element-wise sum, element-wise product and element-wise maximum. Our experiments show that using a 1x1 convolution along with element-wise sum achieves the best performance, and we discuss this in Section 4.1.
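A minimal sketch of the best-performing variant (1x1-convolution scaling followed by element-wise sum), keeping the (C+1)-d scores as 1x1 maps so that a convolution can act on them; this is an illustration of the idea, not the authors' code:

```python
import torch.nn as nn

class CouplingModule(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # Learned per-branch scaling in place of L2 normalization.
        self.scale_local = nn.Conv2d(num_classes, num_classes, kernel_size=1)
        self.scale_global = nn.Conv2d(num_classes, num_classes, kernel_size=1)

    def forward(self, local_scores, global_scores):
        # Scores have shape (R, C+1, 1, 1); element-wise sum worked best,
        # element-wise product and maximum are the alternatives discussed.
        return self.scale_local(local_scores) + self.scale_global(global_scores)
```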
With the coupling structure, CoupleNet simultaneously exploits local parts, global structure and context priors for object detection. The whole network is fully convolutional and benefits from approximate joint training and multi-task learning. We also note that the global branch can be regarded as a lightweight Faster R-CNN, in which all learnable parameters come from convolutional layers and the depth of the RoI-wise subnetwork is only two. Therefore, the computational complexity is far less than that of the subnetwork in the ResNet-based Faster R-CNN system, whose depth is ten. As a consequence, our CoupleNet can perform inference efficiently, running slightly slower than R-FCN but much faster than Faster R-CNN.
Experiments
We train and evaluate our method on three challenging object detection datasets: PASCAL VOC2007, VOC2012 and MS COCO. All three datasets contain a wide variety of circumstances, allowing us to sufficiently verify the effectiveness of our method. We demonstrate state-of-the-art results on all three datasets without bells and whistles.
Ablation studies on VOC2007
We first perform experiments on PASCAL VOC 2007 with 20 object categories for a detailed analysis of our proposed CoupleNet detector. We train the models on the union set of VOC 2007 trainval and VOC 2012 trainval ("07+12") following [21], and evaluate on the VOC 2007 test set. Object detection accuracy is measured by mean Average Precision (mAP). All ablation experiments use single-scale training and testing, and do not add the context prior.
Normalization. Since features extracted from different layers of a CNN exhibit various scales, it is essential to normalize different features before coupling them. Bell et al. [1] proposed to apply L2 normalization to each RoI-pooled feature and re-scale back up by an empirical scale, which shows a great gain on the VOC dataset. In this paper, we also explore two different ways to normalize the outputs of the local and global FCN: an L2 normalization layer, or a 1x1 convolutional layer to learn the scale.
As shown in Table 1, we find that the use of L2 normalization decreases the performance greatly, performing even worse than direct addition (without any normalization). To explain this phenomenon, we measured the outputs of the two branches before and after L2 normalization. We found that L2 normalization reduces the output gap between different categories, which results in a smaller score gap. As we know, a small score gap between different categories means the classifier cannot make a confident prediction. Therefore, we assume this is the reason for the performance degradation. Moreover, we also exploit a 1x1 convolution to adaptively learn the scales between the global and local branches. Table 1 shows that using the 1x1 convolution improves the mAP by 0.6 points compared to direct addition, and by 2.2 points over R-FCN. Therefore, we use the 1x1 convolution in place of L2 normalization in the following experiments.
Coupling strategy. We explore three different response coupling strategies: element-wise sum, element-wise product and element-wise maximum. Table 1 shows the comparison results for these three implementations. We can see that element-wise sum always achieves the best performance, even under different normalization methods. Notably, current advanced residual networks [12] also use element-wise sum as the way to integrate information from previous layers, which greatly facilitates the circulation of information and achieves complementary advantages. For element-wise product, we argue that the system is relatively unstable and susceptible to the weaker side, which produces a large gradient to update the weak branch and makes it difficult to converge. For element-wise maximum, it is equivalent to an ensemble model within the network to some extent, which loses the advantage of mutual support compared to element-wise sum when both branches fail to detect the object. Moreover, a better coupling strategy could be considered as future work to further improve the accuracy, such as designing a more subtle nonlinear structure to learn the coupling relationship. Model ensemble. Model ensembles are commonly used to improve the final detection performance, since diverse parameter initializations and the randomness of training samples both lead to different performance for the same model. Although the differences and complementarities are more pronounced for different models, the improvement is often very limited. As shown in Table 4, we also compare our CoupleNet with a model ensemble. For a fair comparison, we first re-implemented Faster R-CNN [12] using ResNet-101 and online hard example mining (OHEM) [22], which achieves a mAP of 79.0% on VOC07 (76.4% in the original paper without OHEM). We also re-implemented R-FCN with appropriate joint training using the publicly available code py-R-FCN 2, which achieves a slightly lower result compared to [16] (78.6% vs. 79.5%). We use our re-implemented models to conduct the comparisons for consistency. We found that the improvement brought by the model ensemble is less than 1 point. As shown in Table 4, this is far less than our method (81.7%).
On the one hand, we argue that naive model ensembling just combines the results together and does not essentially guide the learning process of the network, while our CoupleNet can simultaneously utilize the global and local information to update the network and to infer the final results.
[Table 2 caption: Comparisons with Faster R-CNN and R-FCN using ResNet-101. 128 samples are used for backpropagation and the top 300 proposals are selected for testing, following [16]. The input resolution is 600x1000. We also note that the TITAN X used here is the new Pascal architecture, along with CUDA 8.0 and cuDNN-v5.1. "07+12": VOC07 trainval union with VOC12 trainval. context: add the context prior to assist the global branch.]
On the other hand, our method enjoys end-to-end training and there is no need to train multiple models, thus greatly reducing the training time. Amount of parameters. Since our CoupleNet introduces a few more parameters than the single-branch detectors, to further verify the effectiveness of the coupling structure we increase the parameters of the prediction head of each single-branch implementation to match the amount of parameters of CoupleNet. In detail, we add a new residual variant block with three convolution layers, where the kernel sizes are 1x1x256, 3x3x256 and 1x1x1024 respectively, to the prediction sub-network. We found that the standard R-FCN with one or two extra heads achieved a mAP of 78.8% and 78.7% respectively on VOC07, which is slightly higher than our re-implemented version (78.6%) of [16], as shown in Table 4. Meanwhile, our global FCN, which performs RoI pooling on top of conv5, achieved a relatively higher gain (a mAP of 79.3% for one head, 79.0% for two heads). The results indicate that simply adding more prediction layers obtains a very limited performance gain, while our coupling structure shows more discriminative power with the same amount of parameters.
Results on VOC2007
Using the publicly available ResNet-101 as the initialization model, we note that our method is easy to follow, and the hyper-parameters for training are the same as in [16]. Similarly, we use the dilation strategy to reduce the effective stride of ResNet-101, just as in [16]; thus both the global and local branches have a stride of 16. We also use a 1-GPU implementation, and the effective mini-batch size is 2 images, obtained by setting the iter size to 2. The whole network is trained for 80k iterations with a learning rate of 0.001 and then for 30k iterations with 0.0001. In addition, the context prior is used to further boost the performance while keeping the iterations unchanged. Finally, we also perform multi-scale training, with the shorter sides of images randomly resized from 480 to 864. Table 2 shows the detailed comparisons with Faster R-CNN and R-FCN. We can see that our single model achieves a mAP of 81.7%, which outperforms R-FCN by 2.2 points. Moreover, when embedding the context prior into the global branch, our mAP rises to 82.1%, which is the best single-model detector to our knowledge. We also evaluate the inference time of our network using an NVIDIA TITAN X GPU (Pascal) along with CUDA 8.0 and cuDNN-v5.1. As shown in the last column of Table 2, our method is slightly slower than R-FCN, but it still reaches real-time speed (i.e. 8.2 fps, or 9.8 fps without context) and achieves the best trade-off between accuracy and speed. We argue that sharing the feature extraction between the two branches and the design of a lightweight RoI-wise subnetwork after RoI pooling both greatly reduce the model complexity. As shown in Table 3, we also compared our method with other state-of-the-art single models. We found that our method outperforms the others by a large margin, including the advanced end-to-end SSD method [18], which requires complicated data augmentation and careful training skills. Just as discussed earlier, CoupleNet shows a large gain on classes with occlusions, truncations and considerable background information, like sofa, person, table and chair, which verifies our analyses. We also observed a large improvement for airplane, bird, boat and pottedplant, which usually have class-specific backgrounds, e.g. the sky for airplane and bird, and water for boat. Therefore, the context surrounding the objects provides extra auxiliary discrimination.
Results on VOC2012
We also evaluate our method on the more challenging VOC2012 dataset by submitting results to the public evaluation server. We use VOC07 trainval, VOC07 test and VOC12 trainval as the training set, which consists of 21k images in total. We follow hyper-parameter settings similar to those for VOC07 but change the number of iterations, since there are more training images. We train our models with 4 GPUs, and the effective mini-batch size thus becomes 4 (1 per GPU). As a result, the network is trained for 60k iterations with a learning rate of 0.001, and 0.0001 for the following 20k iterations. Table 5 shows the results on the VOC2012 test set. Our method obtains a top mAP of 80.4%, which is 2.8 points higher than R-FCN. We note that, without using extra tricks in the testing phase, our detector is the first with a mAP higher than 80%. Improvements similar to those on the specific classes analysed for VOC07 are also observed, which once again validates the effectiveness of our method.
Results on MS COCO
Next we present more results on the Microsoft COCO object detection dataset. The dataset consists of an 80k training set, a 40k validation set and a 20k test-dev set, involving 80 object categories. All our models are trained on the union of the 80k training set and the 40k validation set, and evaluated on the 20k test-dev set. The standard COCO metric is denoted as AP, which is evaluated at IoU ∈ [0.5 : 0.05 : 0.95]. As for VOC2012, a 4-GPU implementation is used to accelerate the training process. We use an initial learning rate of 0.001 for the first 510k iterations and 0.0001 for the next 70k iterations. In addition, we conduct multi-scale training with scales randomly sampled from {480, 576, 672, 768, 864}, while testing at a single scale. Table 6 shows our results. Our single-scale trained detector already achieves a result of 33.1%, which outperforms R-FCN by 3.9 points. In addition, multi-scale training further improves the performance to 34.4%. Interestingly, we observed that the more challenging the dataset, the larger the improvement (e.g., 2.2% for VOC07, 2.8% for VOC12 and 4.5% for COCO, all with multi-scale training), which directly shows that our approach can effectively cope with a variety of complex situations.
Conclusion
In this paper, we present CoupleNet, a concise yet effective network that simultaneously couples global, local and context cues for accurate object detection. Our system naturally combines the advantages of different region-based approaches with the coupling structure. With the combination of local part representations, global structure information and contextual assistance, our CoupleNet achieves state-of-the-art results on the challenging PASCAL VOC and COCO datasets without using any extra tricks in the testing phase, which validates the effectiveness of our method. | 3,858
1708.02765 | 2744633726 | While prior work on context-based music recommendation focused on a fixed set of contexts (e.g. walking, driving, jogging), we propose to use multiple sensors and external data sources to describe momentary (ephemeral) context in a rich way, with a very large number of possible states (e.g. jogging fast alone in downtown Sydney under a heavy rain at night, being tired and angry). With our approach, we address the problems which current approaches face: 1) a limited ability to infer context from missing or faulty sensor data; 2) an inability to use contextual information to support novel content discovery. | The type of contextual recommendations that can be made is shaped by the sensors and signal processing used. Nowadays it is possible to accurately detect activities such as biking, driving, running, or walking based on smartphone sensors @cite_18 , or based on environmental sound cues @cite_14 . It is also possible to detect personality traits based on phone call patterns and social network data of the user @cite_6 . Similarly, interest in an object can be inferred based on ambient noise levels and the positions of people and objects in relation to each other @cite_13 . In the SenSay system, phone settings and preferences are set based on detected environmental and physiological states @cite_1 . | {
"abstract": [
"In this paper, we present Smart Diary, a novel smartphone based framework that analyzes mobile sensing data to infer, predict, and summarize people's daily activities, such as their behavioral patterns and life styles. Such activities are then used as the basis for knowledge representation, which generates personal digital diaries in an automatic manner. As users do not need to intentionally participate into this process, Smart Diary is able to make inferences and predictions based on a wide range of information sources, such as the phones' sensor readings, locations, and interaction history with the users, by integrating such information into a sustainable mining model. This model is specifically developed to handle heterogeneous and noisy sensing data, and is made to be extensible in that users can define their own logic rules to express short-term, mid-term, and long-term event patterns and predictions. Our evaluation results are based on the Android platform, and they demonstrate that the Smart Diary framework can provide accurate and easy-to-read diaries for end users without their interventions.",
"There are many studies that collect and store life log for personal memory. The paper explains how a system can create someone's life log in an inexpensive way to share daily life events with family or friends through socialnetwork or messaging. In the modern world where people are usually busier than ever, family members are geographically distributed due to globalization of companies and humans are inundated with more information than they can process, ambient communications through mobile media or internet based communication can provide rich social connections to friends and family. People can stay connected to their loving ones ubiquitously that they care about by sharing awareness information in a passive way. For users who wish to have a persistent existence in a virtual world - to let their friends know about their current activity or to inform their caretakers - new technology is needed. Research that aims to bridge real life and the virtual worlds (e.g., Second Life, Face book etc.) to simulate virtual living or logging daily events, while challenging and promising, is currently rare. Only very recently the mapping of real-world activities to virtual worlds has been attempted by processing multiple sensors data along with inference logic for realworld activities. Detecting or inferring human activity using such simple sensor data is often inaccurate, insufficient and expensive. Hence, this paper proposes to infer human activity from environmental sound cues and common sense knowledge, which is an inexpensive alternative to other sensors (e.g., accelerometers) based approaches. Because of their ubiquity, we believe that mobile phones or hand-held devices (HHD) are ideal channels to achieve a seamless integration between the physical and virtual worlds. Therefore, the paper presents a prototype to log daily events by a mobile phone based application by inferring activities from environmental sound cues. To the best of our knowledge, this system pioneers the use of environmental sound based activity recognition in mobile computing to reflect one's real-world activity in virtual worlds.",
"SenSay is a context-aware mobile phone that adapts to dynamically changing environmental and physiological states. In addition to manipulating ringer volume, vibration, and phone alerts, SenSay can provide remote callers with the ability to communicate the urgency of their calls, make call suggestions to users when they are idle, and provide the caller with feedback on the current status of the SenSay user. A number of sensors including accelerometers, light, and microphones are mounted at various points on the body to provide data about the user’s context. A decision module uses a set of rules to analyze the sensor data and manage a state machine composed of uninterruptible, idle, active and normal states. Results from our threshold analyses show a clear delineation can be made among several user states by examining sensor data trends. SenSay augments its contextual knowledge by tapping into applications such as electronic calendars, address books, and task lists.",
"Knowing the users' personality can be a strategic advantage for the design of adaptive and personalized user interfaces. In this paper, we present the results of a first trial conducted with the aim of inferring people's personality traits based on their mobile phone call behavior. Initial findings corroborate the efficacy of using call detail records (CDR) and Social Network Analysis (SNA) of the call graph to infer the Big Five personality factors. On-going work includes a large-scale study that shall refine the accuracy of the models with a reduced margin of error.",
"In many cases, visitors come to a museum in small groups. In such cases, the visitors’ social context has an impact on their museum visit experience. Knowing the social context may allow a system to provide socially aware services to the visitors. Evidence of the social context can be gained from observing monitoring the visitors’ social behavior. However, automatic identification of a social context requires, on the one hand, identifying typical social behavior patterns and, on the other, using relevant sensors that measure various signals and reason about them to detect the visitors’ social behavior. We present such typical social behavior patterns of visitor pairs, identified by observations, and then the instrumentation, detection process, reasoning, and analysis of measured signals that enable us to detect the visitors’ social behavior. Simple sensors’ data, such as proximity to other visitors, proximity to museum points of interest, and visitor orientation are used to detect social synchronization, attention to the social companion, and interest in museum exhibits. The presented approach may allow future research to offer adaptive services to museum visitors based on their social context to support their group visit experience better."
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_1",
"@cite_6",
"@cite_13"
],
"mid": [
"2055074231",
"2127639136",
"2150383149",
"1997993947",
"2029044896"
]
} | Ephemeral Context to Support Robust and Diverse Music Recommendations | Imagine the following persona: Anna is a university student in graphic design. She is active, easy-going, and organized, keeping a structured, well-planned calendar. She is 21 and high on the trait of Openness to Experience (Tkalcic & Chen, 2015), and enjoys traveling to new places and meeting new people, as well as discovering and listening to new music using online radio services. To improve Anna's listening experience, we consider her context. To decide which aspects of context to look at, we ran an exploratory crowdsourced survey. Out of 103 respondents, 97 listen to different music based on their mood (e.g., happy, calm, sad), 92 based on their activity (e.g., commuting, jogging), 38 based on the ambience (e.g. sunny, rainy, loud, quiet), and 32 based on the location (e.g., a city park, a beach). From free-text answers we know that the preferences of respondents also change according to the weather, the time of day, the people around them, the headphones or speakers used, upcoming concerts, the difficulty of the work they do, the languages they learn, reminiscence, and music that they just heard somewhere. Several respondents suggested combined causes, such as the friends who are around and the activity performed together.
We propose that Anna's online radio would suggest unexpected pleasant surprises based on her momentary, unique (and therefore ephemeral) context. The contribution of this paper is using multiple sensors and external data sources to make the inference of life-logging events richer (through combinations of inputs) and more reliable (using multiple sensors to improve fault tolerance). It describes how rich context information can be combined in a large number of ways to improve the diversity of recommendations, which will lead to more opportunities for music discovery.
Jogging fast alone in downtown Sydney under a heavy rain at night, being tired and angry
These examples use sensors and external data sources for music recommendation. Some of these context-aware music discovery systems recommend not just relevant but also new music to users (Wang et al., 2014). Our contribution is to combine rich context in a way that a) is fault tolerant and b) facilitates music discovery, by constructing a momentary ephemeral context.
Approach
Here we give an example of how our approach could work to recommend music to Anna by detecting her ephemeral context based on high-level features (e.g. activity, mood, or the weather), which are inferred from low-level sensor data (see Figure 1), and we discuss the benefits of such an approach.
From low-level to high-level features. We infer that Anna's activity is "jogging" from the pattern of her smartphone's accelerometer and GPS, and because this activity was also planned in her calendar. Her speed is classified as "fast" for jogging, because her speed is 15 km/h, while she usually runs at 13 km/h. For the social component she is classified as "alone", since her Bluetooth sensor does not see the Bluetooth sensors of her friends' smartphones and the microphone does not recognise voices around her. The location is "downtown Sydney" based on the coordinates given by her smartphone's GPS, the point of interest identified by the Google Maps API, and recent reviews about it from Foursquare. The weather is "heavy rain" according to the moisture sensor of her phone and the weather forecast for the location from a weather API. The time of day is "night", because her smartphone time is 23:56. Her physical state is "tired" based on the high heart rate measured by her smart bracelet and the respiratory pattern coming from her breathing sensor. Her mood is detected as "angry" by her smartphone's front camera (Busso et al., 2004) and based on her public interactions on social media (e.g., an angry emoticon). We combine these high-level features to construct a momentary ephemeral context, which becomes: "Jogging fast alone in downtown Sydney under a heavy rain at night, being tired and angry".
From individual recommenders to a hybrid one. We propose to use several individual recommenders focused on different sets of high-level features (e.g. a recommender looking only at location, weather, and time). A hybrid recommender then weights the recommendations of each individual one, based on Anna's explicit preferences and on the reliability of the underlying high-level features, if they are detected at all. Anna can change the weights to emphasize a certain aspect, such as location or activity, depending on the way she wants to explore music. We provide an interactive web-based demonstration 1 of how such a hybrid recommender might work based on ephemeral context.
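The following sketch illustrates one possible way to realize such a hybrid (all interfaces are hypothetical, not part of the demonstration): each recommender declares the high-level features it needs, contributes only when those features were detected reliably, and its scores are mixed with user-adjustable weights. Dropping recommenders whose inputs are missing or unreliable yields the fault tolerance discussed under Benefits below.

```python
# `context` maps a high-level feature name to a (value, confidence) pair
# produced by the inference layer; each recommender declares its needs.
def hybrid_scores(recommenders, context, user_weights, min_confidence=0.5):
    scores = {}
    for rec in recommenders:
        feats = {f: context[f] for f in rec.required_features if f in context}
        if len(feats) < len(rec.required_features):
            continue  # a required feature could not be detected at all
        if min(conf for _, conf in feats.values()) < min_confidence:
            continue  # sensors disagree or the reading is unreliable
        weight = user_weights.get(rec.name, 1.0)  # Anna's per-aspect emphasis
        for track, score in rec.score(feats).items():
            scores[track] = scores.get(track, 0.0) + weight * score
    return sorted(scores, key=scores.get, reverse=True)
```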
Benefits.
Our approach allows us to effectively address fault tolerance and leverage music discovery:
Fault tolerance. Agreement among different factors is used as a measure of reliability. For example, if the GPS and calendar locations differ, the system will omit location-based recommendations from the hybrid recommender.
Music discovery. Since ephemeral context frequently changes, the recommendations supplied will vary from moment to moment, leading to more opportunities for music discovery (e.g. 8 high-level features taking 8 values each, give more than 16mln combinations).
Outlook
Our next steps will be dedicated to identifying combinations of high-level features that influence music preferences, possibly via a user study; to evaluating the relevance of recommendations with cultural preferences in mind, potentially using crowdsourcing (von Ahn, 2008); and to studying how to make such rich user profiling compliant with privacy concerns (Coles-Kemp et al., 2011). We also plan to study how to improve the transparency of ephemeral recommendations using textual and visual explanations. As such, we aim to deliver a streaming music experience, driven by context, while giving the user a sense of transparency and control. We are confident that music discovery through rich context is a very promising research topic, allowing streaming services to provide better personalized experiences to their listeners. | 940 |
1708.02765 | 2744633726 | While prior work on context-based music recommendation focused on a fixed set of contexts (e.g. walking, driving, jogging), we propose to use multiple sensors and external data sources to describe momentary (ephemeral) context in a rich way with a very large number of possible states (e.g. jogging fast alone in downtown of Sydney under a heavy rain at night being tired and angry). With our approach, we address the problems which current approaches face: 1) a limited ability to infer context from missing or faulty sensor data; 2) an inability to use contextual information to support novel content discovery. | With improvements in smartphone technology, there is a lot of potential for using rich contextual information to improve recommendations, in particular considering that people prefer to listen to different music in different contexts @cite_11 @cite_7 @cite_15 . Among the first to propose a context-aware music recommendation system are @cite_5 . They used weather data (from sensors and external data sources), and user information, to predict the appropriate music genre, tempo, and mood. Music can also be recommended based on a user's heartbeat, to bring its rate to a normal level @cite_17 ; activities detected automatically (e.g. running, walking, sleeping, working, studying, and shopping) @cite_2 ; driving style, road type, landscape, sleepiness, traffic conditions, mood, weather, natural phenomena @cite_9 ; and emotional state, to help transition to a desired state @cite_0 . Soundtracks have also been recommended for smartphone videos based on location (using GPS and compass data for orientation), and extra information from 3rd party services such as Foursquare @cite_10 . | {
"abstract": [
"",
"",
"Mobile devices such as smart phones are becoming popular, and realtime access to multimedia data in different environments is getting easier. With properly equipped communication services, users can easily obtain the widely distributed videos, music, and documents they want. Because of its usability and capacity requirements, music is more popular than other types of multimedia data. Documents and videos are difficult to view on mobile phones' small screens, and videos' large data size results in high overhead for retrieval. But advanced compression techniques for music reduce the required storage space significantly and make the circulation of music data easier. This means that users can capture their favorite music directly from the Web without going to music stores. Accordingly, helping users find music they like in a large archive has become an attractive but challenging issue over the past few years.",
"",
"The amount of music consumed while on the move has been spiraling during the past couple of years, which requests for intelligent music recommendation techniques. In this demo paper, we introduce a context-aware mobile music player named \"Mobile Music Genius\" (MMG), which seamlessly adapts the music playlist on the fly, according to the user context. It makes use of a comprehensive set of features derived from sensor data, spatiotemporal information, and user interaction to learn which kind of music a listeners prefers in which context. We describe the automatic creation and adaptation of playlists and present results of a study that investigates the capabilities of the gathered user context features to predict the listener's music preference.",
"As the World Wide Web becomes a large source of digital music, the music recommendation system has got a great demand. There are several music recommendation systems for both commercial and academic areas, which deal with the user preference as fixed. However, since the music preferred by a user may change depending on the contexts, the conventional systems have inherent problems. This paper proposes a context-aware music recommendation system (CA-MRS) that exploits the fuzzy system, Bayesian networks and the utility theory in order to recommend appropriate music with respect to the context. We have analyzed the recommendation process and performed a subjective test to show the usefulness of the proposed system.",
"",
"We present a system to automatically generate soundtracks for user-generated outdoor videos (UGV) based on concurrently captured contextual sensor information with mobile apps for the ACM Multimedia 2012 Google challenge: Automatic Music Video Generation. Our method addresses the use case of making \"a video much more attractive for sharing by adding a matching soundtrack to it.\" Our system correlates viewable scene information from sensors with geographic contextual tags from OpenStreetMap. The co-occurance of geo-tags and mood tags are investigated from a set of categories of the web site Foursquare.com and a mapping from geo-tags to mood tags is obtained. Finally, a music retrieval component returns music based on matching mood tags. The experimental results show that our system can successfully create soundtracks that are related to the mood and situation of UGVs and therefore enhance the enjoyment of viewers. Our system sends only sensor data to a cloud service and is therefore bandwidth efficient since video data does not need to be transmitted for analysis.",
"In this paper, we present a new user heartbeat and preference aware music recommendation system. The system can not only recommend a music playlist based on the user’s music preference but also the music playlist is generated based on the user’s heartbeat. If the user’s heartbeat is higher than the normal heartbeat which is 60-100 beats per minutes (age 18 and over) or 70-100 beats per minutes (age 6-18), the system generates a user preferred music playlist using Markov decision process to transfer the user’s heartbeat back to the normal range with the minimum time cost; if the user’s heartbeat is normal, the system generates a user preferred music playlist to keep the user’s heartbeat within the normal range; If the user’s heartbeat is lower than the normal heartbeat, the system generates a user preferred music playlist using Markov decision process to uplift the user’s heartbeat back to the normal range with the minimum time cost."
],
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_9",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"",
"2037336711",
"",
"1985732434",
"1558086654",
"",
"2113409081",
"2106634459"
]
} | Ephemeral Context to Support Robust and Diverse Music Recommendations | Imagine the following persona: Anna is a university student in graphic design. She is active, easy-going, and organized, keeping a structured, well-planned calendar. She is 21 and high on the trait of Openness to Experience (Tkalcic & Chen, 2015), and enjoys traveling to new places and meeting new people, as well as discovering and listening to new music using online radio services. To improve Anna's listening experience, we consider her context. To decide which aspects of context to look at, we ran an exploratory crowdsourced survey. Out of 103 respondents, 97 listen to different music based on their mood (e.g., happy, calm, sad), 92 based on their activity (e.g., commuting, jogging), 38 based on the ambience (e.g., sunny, rainy, loud, quiet), and 32 based on the location (e.g., a city park, a beach). From free-text answers we know that preferences of respondents also change according to the weather, time of day, people around, headphones or speakers used, upcoming concerts, the difficulty of work they do, languages they learn, reminiscence, and music that they just heard somewhere. Several respondents suggested combined causes, such as friends that are around and the activity they performed together.
We propose that Anna's online radio would suggest unexpected pleasant surprises based on her momentary, unique (and therefore ephemeral) context. The contribution of this paper is to use multiple sensors and external data sources to make the inference of life-logging events richer (through combinations of inputs) and more reliable (using multiple sensors to improve fault tolerance). It describes how rich context information can be combined in a large number of ways to improve the diversity of recommendations, which leads to more opportunities for music discovery.
Jogging fast alone in downtown of Sydney under a heavy rain at night being tired and angry
These examples use sensors and external data sources for music recommendation. Some of these context-aware music discovery systems recommend not just relevant but new music to users (Wang et al., 2014). Our contribution is to combine rich context in a way that a) is fault tolerant and b) facilitates music discovery, by constructing a momentary ephemeral context.
Approach
Here we give an example of how our approach could work to recommend music to Anna by detecting her ephemeral context based on high-level features (e.g., activity, mood, or the weather), which are inferred from low-level sensor data (see Figure 1), and discuss the benefits of such an approach.
From low-level to high-level features. We infer that Anna's activity is "jogging" from the pattern of her smartphone's accelerometer and GPS, and because this activity was also planned in her calendar. Her speed is classified as "fast" for jogging because she is running at 15 km/h, while she usually runs at 13 km/h. For the social component she is classified as "alone", since her Bluetooth sensor does not detect the Bluetooth sensors of her friends' smartphones and the microphone does not recognise voices around her. The location is "downtown of Sydney" based on the coordinates given by her smartphone's GPS, the point of interest identified by the GoogleMaps API, and recent reviews about it from FourSquare. The weather is "heavy rain" according to the moisture sensor of her phone and the weather forecast for the location from a Weather API. The time of day is "night", because her smartphone time is 23:56. Her physical state is "tired" based on the high heart rate measured by her smart bracelet and the respiratory pattern coming from her breathing sensor. Her mood is detected as "angry" by her smartphone front camera (Busso et al., 2004) and based on her public interactions on social media (e.g., an angry emoticon). We combine these high-level features to construct a momentary ephemeral context, which becomes: "Jogging fast alone in downtown of Sydney under a heavy rain at night being tired and angry".
From individual recommenders to a hybrid one. We propose to use several individual recommenders focused on different sets of high-level features (e.g., a recommender looking only at location, weather, and time). A hybrid recommender then weights the recommendations of each individual one, based on Anna's explicit preferences and on the reliability of the underlying high-level features, if they were detected at all. Anna can change the weights to place emphasis on a certain aspect, such as location or activity, depending on the way she wants to explore music. We provide an interactive web-based demonstration 1 of how such a hybrid recommender might work based on ephemeral context.
Benefits.
Our approach allows us to effectively address fault tolerance and support music discovery:
Fault tolerance. Agreement between different factors is used as a measure of reliability. For example, if the GPS and calendar locations differ, the system omits location-based recommendations from the hybrid recommender.
Music discovery. Since the ephemeral context frequently changes, the recommendations supplied will vary from moment to moment, leading to more opportunities for music discovery (e.g., 8 high-level features taking 8 values each give more than 16 million combinations, since 8^8 = 16,777,216).
Outlook
Our next steps will be dedicated to identifying combinations of high-level features that influence music preferences, possibly via a user study; to evaluating the relevance of recommendations with cultural preferences in mind, potentially using crowdsourcing (von Ahn, 2008); and to studying how to make such rich user profiling compliant with privacy concerns (Coles-Kemp et al., 2011). We also plan to study how to improve the transparency of ephemeral recommendations using textual and visual explanations. As such, we aim to deliver a streaming music experience, driven by context, while giving the user a sense of transparency and control. We are confident that music discovery through rich context is a very promising research topic, allowing streaming services to provide better personalized experiences to their listeners. | 940 |
1708.02765 | 2744633726 | While prior work on context-based music recommendation focused on a fixed set of contexts (e.g. walking, driving, jogging), we propose to use multiple sensors and external data sources to describe momentary (ephemeral) context in a rich way with a very large number of possible states (e.g. jogging fast alone in downtown of Sydney under a heavy rain at night being tired and angry). With our approach, we address the problems which current approaches face: 1) a limited ability to infer context from missing or faulty sensor data; 2) an inability to use contextual information to support novel content discovery. | These examples use sensors and external data sources for music recommendation. Some of these context-aware music discovery systems recommend not just relevant but new music to users @cite_4 . Our contribution is to combine rich context in a way that a) is fault tolerant and b) facilitates music discovery, by constructing a momentary ephemeral context. | {
"abstract": [
"A goal for the creation and improvement of music recommendation is to retrieve users' preferences and select the music adapting to the preferences. Although the existing researches achieved a certain degree of success and inspired future researches to get more progress, problem of the cold start recommendation and the limitation to the similar music have been pointed out. Hence we incorporate concept of serendipity using 'renso' alignments over Linked Data to satisfy the users' music playing needs. We first collect music-related data from Last.fm, Yahoo! Local, Twitter and LyricWiki, and then create the 'renso' relation on the Music Linked Data. Our system proposes a way of finding suitable but novel music according to the users' contexts. Finally, preliminary experiments confirm balance of accuracy and serendipity of the music recommendation."
],
"cite_N": [
"@cite_4"
],
"mid": [
"306311707"
]
} | Ephemeral Context to Support Robust and Diverse Music Recommendations | Imagine the following persona: Anna is a university student in graphic design. She is active, easy-going, and organized, keeping a structured, well-planned calendar. She is 21 and high on the trait of Openness to Experience (Tkalcic & Chen, 2015), and enjoys traveling to new places and meeting new people, as well as discovering and listening to new music using online radio services. To improve Anna's listening experience, we consider her context. To decide which aspects of context to look at, we ran an exploratory crowdsourced survey. Out of 103 respondents, 97 listen to different music based on their mood (e.g., happy, calm, sad), 92 based on their activity (e.g., commuting, jogging), 38 based on the ambience (e.g., sunny, rainy, loud, quiet), and 32 based on the location (e.g., a city park, a beach). From free-text answers we know that preferences of respondents also change according to the weather, time of day, people around, headphones or speakers used, upcoming concerts, the difficulty of work they do, languages they learn, reminiscence, and music that they just heard somewhere. Several respondents suggested combined causes, such as friends that are around and the activity they performed together.
We propose that Anna's online radio would suggest unexpected pleasant surprises based on her momentary, unique (and therefore ephemeral) context. The contribution of this paper is to use multiple sensors and external data sources to make the inference of life-logging events richer (through combinations of inputs) and more reliable (using multiple sensors to improve fault tolerance). It describes how rich context information can be combined in a large number of ways to improve the diversity of recommendations, which leads to more opportunities for music discovery.
Jogging fast alone in downtown of Sydney under a heavy rain at night being tired and angry
These examples use sensors and external data sources for music recommendation. Some of these context-aware music discovery systems recommend not just relevant but new music to users (Wang et al., 2014). Our contribution is to combine rich context in a way that a) is fault tolerant and b) facilitates music discovery, by constructing a momentary ephemeral context.
Approach
Here we give an example of how our approach could work to recommend music to Anna by detecting her ephemeral context based on high-level features (e.g., activity, mood, or the weather), which are inferred from low-level sensor data (see Figure 1), and discuss the benefits of such an approach.
From low-level to high-level features. We infer that Anna's activity is "jogging" from the pattern of her smartphone's accelerometer and GPS, and because this activity was also planned in her calendar. Her speed is classified as "fast" for jogging because she is running at 15 km/h, while she usually runs at 13 km/h. For the social component she is classified as "alone", since her Bluetooth sensor does not detect the Bluetooth sensors of her friends' smartphones and the microphone does not recognise voices around her. The location is "downtown of Sydney" based on the coordinates given by her smartphone's GPS, the point of interest identified by the GoogleMaps API, and recent reviews about it from FourSquare. The weather is "heavy rain" according to the moisture sensor of her phone and the weather forecast for the location from a Weather API. The time of day is "night", because her smartphone time is 23:56. Her physical state is "tired" based on the high heart rate measured by her smart bracelet and the respiratory pattern coming from her breathing sensor. Her mood is detected as "angry" by her smartphone front camera (Busso et al., 2004) and based on her public interactions on social media (e.g., an angry emoticon). We combine these high-level features to construct a momentary ephemeral context, which becomes: "Jogging fast alone in downtown of Sydney under a heavy rain at night being tired and angry".
From individual recommenders to a hybrid one. We propose to use several individual recommenders focused on different sets of high-level features (e.g., a recommender looking only at location, weather, and time). A hybrid recommender then weights the recommendations of each individual one, based on Anna's explicit preferences and on the reliability of the underlying high-level features, if they were detected at all. Anna can change the weights to place emphasis on a certain aspect, such as location or activity, depending on the way she wants to explore music. We provide an interactive web-based demonstration 1 of how such a hybrid recommender might work based on ephemeral context.
Benefits.
Our approach allows us to effectively address fault tolerance and support music discovery:
Fault tolerance. Agreement between different factors is used as a measure of reliability. For example, if the GPS and calendar locations differ, the system omits location-based recommendations from the hybrid recommender.
Music discovery. Since the ephemeral context frequently changes, the recommendations supplied will vary from moment to moment, leading to more opportunities for music discovery (e.g., 8 high-level features taking 8 values each give more than 16 million combinations, since 8^8 = 16,777,216).
Outlook
Our next steps will be dedicated to identifying combinations of high-level features that influence music preferences, possibly via a user study; to evaluating the relevance of recommendations with cultural preferences in mind, potentially using crowdsourcing (von Ahn, 2008); and to studying how to make such rich user profiling compliant with privacy concerns (Coles-Kemp et al., 2011). We also plan to study how to improve the transparency of ephemeral recommendations using textual and visual explanations. As such, we aim to deliver a streaming music experience, driven by context, while giving the user a sense of transparency and control. We are confident that music discovery through rich context is a very promising research topic, allowing streaming services to provide better personalized experiences to their listeners. | 940 |
1708.02901 | 2743157634 | Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue for organizing and reasoning about the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task. | Unsupervised learning of visual representations is a research area of particular interest. Approaches to unsupervised learning can be roughly categorized into two main streams: (i) generative models, and (ii) self-supervised learning. Earlier methods for generative models include Auto-Encoders @cite_43 @cite_21 @cite_36 @cite_25 and Restricted Boltzmann Machines (RBMs) @cite_64 @cite_39 @cite_16 @cite_34 . For example, Le et al. @cite_25 trained a multi-layer auto-encoder on a large-scale dataset of YouTube videos: although no label is provided, some neurons in high-level layers can recognize cats and human faces. Recent generative models such as Generative Adversarial Networks @cite_53 and Variational Auto-Encoders @cite_31 are capable of generating more realistic images. The generated examples or the neural networks that learn to generate examples can be exploited to learn representations of data @cite_12 @cite_61 . | {
"abstract": [
"",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"",
"Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.",
"The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and ban@ass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. © 1997 Elsevier Science Ltd",
"",
"While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and in-painting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases.",
"We consider the problem of building high- level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 bil- lion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a clus- ter with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental re- sults reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bod- ies. Starting with these learned features, we trained our network to obtain 15.8 accu- racy in recognizing 20,000 object categories from ImageNet, a leap of 70 relative im- provement over the previous state-of-the-art.",
""
],
"cite_N": [
"@cite_61",
"@cite_64",
"@cite_36",
"@cite_53",
"@cite_21",
"@cite_34",
"@cite_39",
"@cite_43",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2100495367",
"2130325614",
"2099471712",
"2025768430",
"",
"2110798204",
"2105464873",
"",
"2054814877",
"2950789693",
""
]
} | Transitive Invariance for Self-supervised Visual Representation Learning | Visual invariance is a core issue in learning visual representations. Traditional features like SIFT [39] and HOG [6] are histograms of edges that are to an extent invariant to illumination, orientations, scales, and translations. Modern deep representations are capable of learning high-level invariance from large-scale data [47] , e.g., viewpoint, pose, deformation, and semantics. These can also be transferred
to complicated visual recognition tasks [17,38].
Figure 1: We propose to obtain rich invariance by applying simple transitive relations. In this example, two different cars A and B are linked by the features that are good for inter-instance invariance (e.g., using [9]); and each car is linked to another view (A′ and B′) by visual tracking [61]. Then we can obtain new invariance from the object pairs ⟨A, B′⟩, ⟨A′, B⟩, and ⟨A′, B′⟩ via transitivity. We show more examples in the bottom.
In the scheme of supervised learning, human annotations that map a variety of examples into a single label provide supervision for learning invariant representations. For example, two horses with different illumination, poses, and breeds are invariantly annotated as a category of "horse". Such human knowledge on invariance is expected to be learned by capable deep neural networks [33,28] through carefully annotated data. However, large-scale, high-quality annotations come at a cost of expensive human effort.
Unsupervised or "self-supervised" learning (e.g., [61,9,45,63,64,35,44,62,40,66]) has recently attracted increasing interest because the "labels" are free to obtain. Unlike supervised learning, which learns invariance from semantic labels, the self-supervised learning scheme mines it from the nature of the data. We observe that most self-supervised approaches learn representations that are invariant to: (i) inter-instance variations, which reflect the commonality among different instances. For example, relative positions of patches [9] (see also Figure 3) or channels of colors [63,64] can be predicted through the commonality shared by many object instances; (ii) intra-instance variations. Intra-instance invariance is learned from the pose, viewpoint, and illumination changes by tracking a single moving instance in videos [61,44]. However, neither source of invariance alone can be as rich as that provided by human annotations on large-scale datasets like ImageNet.
Even after significant advances in the field of self-supervised learning, there is still a long way to go compared to supervised learning. What should be the next steps? It seems that an obvious way is to obtain multiple sources of invariance by combining multiple self-supervised tasks, e.g., via multiple losses. Unfortunately, this naïve solution turns out to give little improvement (as we will show by experiments).
We argue that the trick lies not in the tasks but in the way of exploiting data. To leverage both intra-instance and inter-instance invariance, in this paper we construct a huge affinity graph consisting of two types of edges (see Figure 1): the first type of edges relates "different instances of similar viewpoints/poses and potentially the same category", and the second type of edges relates "different viewpoints/poses of an identical instance". We instantiate the first type of edges by learning commonalities across instances via the approach of [9], and the second type by unsupervised tracking of objects in videos [61]. We set up simple transitive relations on this graph to infer more complex invariance from the data, which are then used to train a Triplet-Siamese network for learning visual representations.
Experiments show that our representations learned without any annotations can be well transferred to the object detection task. Specifically, we achieve 63.2% mAP with VGG16 [50] when fine-tuning Fast R-CNN on VOC 2007, against the ImageNet pre-training baseline of 67.3%. More importantly, we also report the first-ever result of un-/self-supervised pre-training models fine-tuned on the challenging COCO object detection dataset [37], achieving 23.5% AP compared against the 24.4% AP that is fine-tuned from an ImageNet pre-trained counterpart (both using VGG16). To our knowledge, this is the closest accuracy to the ImageNet pre-training counterpart obtained on object detection tasks.
Overview
Our goal is to learn visual representations which capture: (i) inter-instance invariance (e.g., two instances of cats should have similar features), and (ii) intra-instance invariance (pose, viewpoint, deformation, illumination, and other variations of the same object instance). We have tried to formulate this as a multi-task (multi-loss) learning problem in our initial experiments (detailed in Tables 2 and 3) and observed unsatisfactory performance. Instead of doing so, we propose to obtain a richer set of invariance by performing transitive reasoning on the data.
Our first step is to construct a graph that describes the affinity among image patches. A node in the graph denotes an image patch. We define two types of edges in the graph that relate image patches to each other. The first type of edges, called inter-instance edges, link two nodes which correspond to different object instances of similar visual appearance; the second type of edges, called intra-instance edges, link two nodes which correspond to an identical object captured at different time steps of a track. The solid arrows in Figure 1 illustrate these two types of edges. Given the built graph, we want to transit the relations via the known edges and associate unconnected nodes that may provide under-explored invariance (Figure 1, dash arrows). Specifically, as shown in Figure 1, if patches A and B are linked via an inter-instance edge, and A, A′ and B, B′ respectively are linked via intra-instance edges, we hope to enrich the invariance by simple transitivity and relate three new pairs: ⟨A′, B⟩, ⟨A, B′⟩, and ⟨A′, B′⟩.
Figure 3: The context prediction task defined in [9]. Given two patches in an image, it learns to predict the relative position between them.
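The transitive expansion itself is straightforward to express in code. A minimal sketch, assuming the two edge types are given as plain lists of node pairs (the node names are hypothetical):

from collections import defaultdict

def expand_by_transitivity(inter_edges, intra_edges):
    """Derive new positive pairs <A', B>, <A, B'>, <A', B'> from the two edge types."""
    views = defaultdict(set)              # node -> other views of the same instance
    for u, v in intra_edges:
        views[u].add(v)
        views[v].add(u)
    new_pairs = set()
    for a, b in inter_edges:              # a, b: different instances, similar appearance
        for a2 in views[a]:
            new_pairs.add((a2, b))        # <A', B>
            for b2 in views[b]:
                new_pairs.add((a2, b2))   # <A', B'>
        for b2 in views[b]:
            new_pairs.add((a, b2))        # <A, B'>
    return new_pairs

print(expand_by_transitivity([("A", "B")], [("A", "A'"), ("B", "B'")]))
# -> {("A'", 'B'), ('A', "B'"), ("A'", "B'")} (set order may vary)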
We train a Triplet-Siamese network that encourages similar visual representations between the invariant samples (e.g., any pair drawn from {A, A′, B, B′}) and at the same time discourages similar visual representations to a third distractor sample (e.g., a random sample C unconnected to {A, A′, B, B′}). In all of our experiments, we apply VGG16 [50] as the backbone architecture for each branch of this Triplet-Siamese network. The visual representations learned by this backbone architecture are evaluated on other recognition tasks.
Graph Construction
We construct a graph with inter-instance and intra-instance edges. First, we apply the method of [61] on a large set of 100K unlabeled videos (introduced in [61]) and mine millions of moving objects using motion cues (Sec. 4.1). We use their image patches to construct the nodes of the graph.
We instantiate inter-instance edges by the self-supervised method of [9], which learns context predictions on a large set of still images; this provides features to cluster the nodes and set up inter-instance edges (Sec. 4.2). On the other hand, we connect the image patches in the same visual track by intra-instance edges (Sec. 4.3).
Mining Moving Objects
We follow the approach in [61] to find the moving objects in videos. As a brief introduction, this method first applies Improved Dense Trajectories (IDT) [58] on videos to extract SURF [2] feature points and their motion. The video frames are then pruned if there is too much motion (indicating camera motion) or too little motion (e.g., noisy signals). For the remaining frames, it crops a 227×227 bounding box (from ∼600×400 images) which includes the largest number of moving points as the foreground object. However, for computational efficiency, in this paper we rescale the image patches to 96×96 after cropping and use them as inputs for clustering and training.
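A toy sketch of the cropping step, assuming the coordinates of moving feature points in a frame are already available from IDT/SURF (the sliding stride is an arbitrary choice):

import numpy as np

def best_crop(points, frame_w=600, frame_h=400, box=227, stride=20):
    """points: (N, 2) array of moving-point coordinates -> (x, y) of the best box."""
    best, best_count = (0, 0), -1
    for x in range(0, frame_w - box + 1, stride):
        for y in range(0, frame_h - box + 1, stride):
            inside = ((points[:, 0] >= x) & (points[:, 0] < x + box) &
                      (points[:, 1] >= y) & (points[:, 1] < y + box)).sum()
            if inside > best_count:   # keep the window covering the most moving points
                best, best_count = (x, y), inside
    return best

pts = np.random.default_rng(1).uniform([0, 0], [600, 400], size=(200, 2))
print(best_crop(pts))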
Inter-instance Edges via Clustering
Given the extracted image patches which act as nodes, we want to link them with extra inter-instance edges. We rely on the visual representations learned from [9] to do this. We connect the nodes representing image patches which are close in the feature space. In addition, motivated by the mid-level clustering approaches [51,7], we want to obtain millions of object clusters with a small number of objects in each to maintain high "purity" of the clusters. We describe the implementation details of this step as follows.
We extract the pool5 features of the VGG16 network trained as in [9]. Following [9], we use ImageNet without labels to train this network. Note that because we use a patch size of 96×96, the dimension of our pool5 feature is 3×3×512 = 4608. The distance between samples is calculated by the cosine distance of these features. We want the object patches in each cluster to be close to each other in the feature space, and we care less about the differences between clusters. However, directly clustering millions of image patches into millions of small clusters (e.g., by K-means) is time consuming. So we apply a hierarchical clustering approach (2-stage in this paper) where we first group the images into a relatively small number of clusters, and then find small groups of examples inside each cluster via nearest-neighbor search.
Specifically, in the first stage of clustering, we apply K-means clustering with K = 5000 on the image patches. We then remove the clusters with fewer than 100 examples (this reduces K to 546 in our experiments on the image patches mined from the video dataset). We view these clusters as the "parent" clusters (blue circles in Figure 2). Then in the second stage of clustering, inside each parent cluster, we perform nearest-neighbor search for each sample and obtain its top 10 nearest neighbors in the feature space. We then find any group of samples with a group size of 4, inside which all the samples are each other's top-10 nearest neighbors. We call these small clusters with 4 samples "child" clusters (green circles in Figure 2). We then link these image patches with each other inside a child cluster via "inter-instance" edges. Note that different child clusters may overlap, i.e., we allow the same sample to appear in different groups. However, in our experiments we find that most samples appear only in one group. We show some results of clustering in Figure 4.
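One way to approximate this two-stage procedure with off-the-shelf tools is sketched below using scikit-learn; K = 5000, the pruning threshold of 100, the top-10 neighborhoods, and the group size of 4 follow the text, while the mutual-neighbor search here is a simplified stand-in for the exact grouping:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def two_stage_clustering(features, k=5000, min_cluster_size=100,
                         knn=10, group_size=4):
    """features: (N, 4608) pool5 features -> set of 4-sample 'child' clusters."""
    feats = normalize(features)          # cosine similarity = dot product on unit vectors
    parents = KMeans(n_clusters=k, n_init=1).fit_predict(feats)
    child_clusters = set()
    for c in np.unique(parents):
        idx = np.where(parents == c)[0]
        if len(idx) < min_cluster_size:  # prune small parent clusters
            continue
        sims = feats[idx] @ feats[idx].T
        np.fill_diagonal(sims, -np.inf)
        top = np.argsort(-sims, axis=1)[:, :knn]      # local top-k neighbor indices
        nbrs = [set(row) for row in top]
        for i in range(len(idx)):
            group = [i] + list(top[i][:group_size - 1])
            # keep the group only if all members are pairwise mutual top-k neighbors
            if all(a in nbrs[b] for a in group for b in group if a != b):
                child_clusters.add(tuple(sorted(int(idx[g]) for g in group)))
    return child_clusters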
Intra-instance Edges via Tracking
To obtain rich variations of viewpoint and deformation changes of the same object instance, we apply visual tracking on the mined moving objects in the videos as in [61]. More specifically, given a moving object in the video, this method applies KCF [23] to track the object for N = 30 frames and obtains another sample of the object at the end of the track. Note that the KCF tracker does not require any human supervision. We add these new objects as nodes to the graph and link the two samples in the same track with an intra-instance edge (purple in Figure 2).
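A sketch of how one intra-instance edge could be formed with an off-the-shelf KCF tracker; it assumes OpenCV with the contrib tracking module (in some builds the constructor lives under cv2.legacy), and the video path and initial box are placeholders:

import cv2

def track_patch(video_path, first_box, n_frames=30):
    """Track a patch for n_frames with KCF; return the final box or None on failure."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return None
    tracker = cv2.TrackerKCF_create()    # cv2.legacy.TrackerKCF_create() in some builds
    tracker.init(frame, first_box)       # first_box = (x, y, w, h)
    box = first_box
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        ok, box = tracker.update(frame)
        if not ok:                       # tracking failure: discard this edge
            cap.release()
            return None
    cap.release()
    return tuple(int(v) for v in box)

# The patch at frame 0 and the patch returned here become the two endpoints
# of one intra-instance edge.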
Learning with Transitions in the Graph
With the graph constructed, we want to link more image patches (see dotted links in Figure 1). To avoid the trivial solution of identical representations, we also encourage the network to generate dissimilar representations if a node is expected to be unrelated. Specifically, we constrain the image patches from different "parent" clusters (which are more likely to have different categories) to have different representations (which we call a negative pair of samples). We design a Triplet-Siamese network with a ranking loss function [59,61] such that the distance between related samples should be smaller than the distance between unrelated samples. Our Triplet-Siamese network includes three towers of a ConvNet with shared weights (Figure 6). For each tower, we adopt the standard VGG16 architecture [50] for the convolutional layers, after which we add two fully-connected layers with 4096-d and 1024-d outputs. The Triplet-Siamese network accepts a triplet sample as its input: the first two image patches in the triplet are a positive pair, and the last two are a negative pair. We extract their 1024-d features and calculate the ranking loss as follows.
Given an arbitrary pair of image patches A and B, we define their distance as:

D(A, B) = 1 − (F(A) · F(B)) / (‖F(A)‖ ‖F(B)‖),

where F(·) is the representation mapping of the network. With a triplet (X, X⁺, X⁻), where (X, X⁺) is a positive pair and (X, X⁻) is a negative pair as defined above, we minimize the ranking loss:

L(X, X⁺, X⁻) = max{0, D(X, X⁺) − D(X, X⁻) + m},
where m is a margin set to 0.5 in our experiments. Although we have only one objective function, we have different types of training examples. As illustrated in Figure 6, given the set of related samples {A, B, A′, B′} (see Figure 5) and a random distractor sample C from another parent cluster, we can train the network to handle, e.g., viewpoint invariance for the same instance via L(A, A′, C) and invariance to different objects sharing the same semantics via L(A, B′, C).
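The distance and ranking loss above translate directly into a few lines of PyTorch. This is a sketch operating on already-embedded 1024-d vectors rather than the full three-tower network:

import torch
import torch.nn.functional as nnf

def cosine_distance(a, b):
    # D(A, B) = 1 - cos(F(A), F(B)), on already-embedded vectors
    return 1.0 - nnf.cosine_similarity(a, b, dim=1)

def ranking_loss(x, x_pos, x_neg, margin=0.5):
    # L(X, X+, X-) = max{0, D(X, X+) - D(X, X-) + m}, averaged over the batch
    return torch.clamp(cosine_distance(x, x_pos)
                       - cosine_distance(x, x_neg) + margin, min=0.0).mean()

# toy batch of 8 triplets, e.g. (A, A', C) or (A, B', C)
x, x_pos, x_neg = (torch.randn(8, 1024) for _ in range(3))
print(ranking_loss(x, x_pos, x_neg))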
Besides exploring these relations, we have also tried to enforce the distance between different objects to be larger than the distance between two different viewpoints of the same object, e.g., D(A, A′) < D(A, B′). But we have not found that this extra relation brings any improvement. Interestingly, we found that the representations learned by our method can in general satisfy D(A, A′) < D(A, B′) after training.
Experiments
We perform extensive analysis on our self-supervised representations. We first evaluate our ConvNet as a feature extractor on different tasks without fine-tuning. We then show the results of transferring the representations to vision tasks including object detection and surface normal estimation with fine-tuning. Implementation Details. To prepare the data for training, we download the 100K videos from YouTube using the URLs provided by [36,61]. By mining the moving objects and tracking in the videos, we obtain ∼10 million image patches of objects. By applying the transitivity on the constructed graph, we obtain 7 million positive pairs of objects, where each pair consists of two different instances with different viewpoints. We also randomly sample 2 million object pairs connected by the intra-instance edges.
We train our network with these 9 million pairs of images using a learning rate of 0.001 and a mini-batch size of 100.
For each pair we sample the third distractor patch from a different "parent cluster" in the same mini-batch. We use the network pre-trained in [9] to initialize our convolutional layers and randomly initialize the fully-connected layers. We train the network for 200K iterations with our method.
Qualitative Results without Fine-tuning
We first perform nearest-neighbor search to show qualitative results. We adopt the pool5 feature of the VGG16 network for all methods without any fine-tuning (Figure 7). We do this experiment on the object instances cropped from the PASCAL VOC 2007 dataset [13] (trainval). As Figure 7 shows, given a query image on the left, the network pre-trained with the context prediction task [9] can retrieve objects of very similar viewpoints. On the other hand, our network shows more variations of objects and can often retrieve objects of the same class as the query. We also show the nearest-neighbor results using fully-supervised ImageNet pre-trained features as a comparison. We also visualize the features using the visualization technique of [65]. For each convolutional unit in conv5_3, we retrieve the objects which give the highest activation responses and highlight the receptive fields on the images. We visualize the top 6 images for 4 different convolutional units in Figure 8. We can see that these convolutional units correspond to different semantic object parts (e.g., fronts of cars or buses, wheels, animal legs, eyes or faces).
Analysis on Object Detection
We evaluate how well our representations can be transferred to object detection by fine-tuning Fast R-CNN [16] on PASCAL VOC 2007 [13]. We use the standard trainval set for training and the test set for testing, with VGG16 as the base architecture. For the detection network, we initialize the weights of the convolutional layers from our self-supervised network and randomly initialize the fully-connected layers using Gaussian noise with zero mean and 0.001 standard deviation.
During fine-tuning of Fast R-CNN, we use 0.00025 as the starting learning rate. We reduce the learning rate by 1/10 every 50K iterations. We fine-tune the network for 150K iterations. Unlike standard Fast R-CNN, where the first few convolutional layers of the ImageNet pre-trained network are fixed, we fine-tune all layers on the PASCAL data, as our model is pre-trained in a very different domain (e.g., video patches).
We report the results in Table 1. If we train Fast R-CNN from scratch without any pre-training, we can only obtain 39.7% mAP. With our self-supervised trained network as initialization, the detection mAP is increased to 63.2% (a 23.5-point improvement). Our result compares competitively (4.1 points lower) to the counterpart using ImageNet pre-training (67.3% with VGG16).
As we incorporate the invariance captured from [61] and [9], we also evaluate the results using these two approaches individually (Table 1). By fine-tuning the context prediction network of [9], we can obtain 61.5% mAP. To train the network of [61], we use exactly the same loss function and initialization as our approach, except that there are only training examples of the same instance in the same visual track (i.e., only the samples linked by intra-instance edges in our graph). Our result is better than both methods. This comparison indicates the effectiveness of exploiting a greater variety of invariance in representation learning.
Is multi-task learning sufficient? An alternative way of obtaining both intra- and inter-instance invariance is to apply multi-task learning with the two losses of [9] and [61].
Next we compare with this method. For the task in [61], we use the same network architecture as our approach; for the task in [9], we follow their design of a Siamese network. We apply different fully-connected layers for different tasks, but share the convolutional layers between these two tasks. Given a mini-batch of training samples, we perform ranking among these images as well as context prediction in each image simultaneously via two losses. The representations learned in this way, when fine-tuned with Fast R-CNN, obtain 62.1% mAP ("Multitask" in Table 2). Compared to only using context prediction [9] (61.5%), multi-task learning gives only a marginal improvement (0.6%). This result suggests that multi-task learning in this way is not sufficient; organizing and exploiting the relationships of data, as done by our method, is more effective for representation learning.
How important is tracking? To further understand how much visual tracking helps, we perform ablative analysis by making the visual tracks shorter: we track the moving objects for 15 frames instead of the default 30 frames. This is expected to reduce the viewpoint/pose/deformation variance contributed by tracking. Our model pre-trained in this way shows 61.5% mAP ("15-frame" in Table 2) when fine-tuned for detection. This number is similar to that of using context prediction only (Table 1). This result is not surprising, because it does not add much new information for training. It suggests that adding stronger viewpoint/pose/deformation invariance is important for learning better features for object detection.
How important is clustering? Furthermore, we want to understand how important it is to cluster images with features learned from still images [9]. We perform another ablative analysis by replacing the features of [9] with HOG [6] during clustering. The rest of the pipeline remains exactly the same. The final result is 60.4% mAP ("HOG" in Table 2). This shows that if the features for clustering are not invariant enough to handle different object instances, the transitivity in the graph becomes less reliable.
Object Detection with Faster R-CNN
Although Fast R-CNN [16] has been a popular testbed for un-/self-supervised features, it relies on Selective Search proposals [55] and thus is not fully end-to-end. We further evaluate the representations on object detection with the end-to-end Faster R-CNN [46], where the Region Proposal Network (RPN) may suffer if the features are low-quality.
PASCAL VOC 2007 Results. We fine-tune Faster R-CNN on 8 GPUs for 35K iterations with an initial learning rate of 0.00025, which is reduced by 1/10 after every 15K iterations; Table 3 shows the results. COCO Results. We further report results on the challenging COCO detection dataset [37]. To the best of our knowledge this is the first work of this kind presented on COCO detection. We fine-tune Faster R-CNN on 8 GPUs for 120K iterations with an initial learning rate of 0.001, which is reduced by 1/10 after 80K iterations. This is trained on the COCO trainval35k split and evaluated on the minival5k split, introduced by [3].
We report the COCO results in Table 4. Faster R-CNN fine-tuned with our self-supervised network obtains 23.5% AP using the COCO metric, which is very close (<1%) to fine-tuning Faster R-CNN with the ImageNet pre-trained counterpart (24.4%). Actually, if the fine-tuning of the ImageNet counterpart follows the "shorter" schedule in the public code (61.25K iterations on 8 GPUs, converted from 490K in 1 GPU), the ImageNet supervised pre-training version has 23.7% AP and is comparable with ours. This comparison also strengthens the significance of our result.
To the best of our knowledge, our model achieves the best performance reported to date on VOC 2007 and COCO using un-/self-supervised pre-training.
Adapting to Surface Normal Estimation
To show the generalization ability of our self-supervised representations, we adapt the learned network to the surface normal estimation task. In this task, given a single RGB image as input, we train the network to predict the normal/orientation of the pixels. We evaluate our method on the NYUv2 RGBD dataset [49]. We use the official split of 795 images for training and 654 images for testing. We follow the same protocols for generating surface normal ground truth and evaluations as [14,29,15].
Table 5: Results on NYU v2 for per-pixel surface normal estimation, evaluated over valid pixels.
To train the network for surface normal estimation, we apply the Fully Convolutional Network (FCN 32-s) proposed in [38] with the VGG16 network as the base architecture. For the loss function, we follow the design in [60]. Specifically, instead of direct regression to obtain the normal, we use a codebook of 40 codewords to encode the 3-dimensional normals. Each codeword represents one class, thus turning the problem into a 40-class classification for each pixel. We use the same hyperparameters as in [38] for training, and the network is fine-tuned for the same number of iterations (100K) for the different initializations.
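The codebook trick can be sketched as follows; building the codebook with k-means on ground-truth normals is an assumption about how the 40 codewords could be obtained, and the training normals here are random stand-ins:

import numpy as np
from sklearn.cluster import KMeans

def build_codebook(normals, k=40):
    """normals: (N, 3) unit vectors -> (k, 3) unit codeword directions."""
    centers = KMeans(n_clusters=k, n_init=4).fit(normals).cluster_centers_
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)

def encode(normals, codebook):
    """Per-pixel class target: index of the nearest codeword (max dot product)."""
    return np.argmax(normals @ codebook.T, axis=1)

rng = np.random.default_rng(0)
train = rng.normal(size=(10000, 3))                 # stand-in for ground-truth normals
train /= np.linalg.norm(train, axis=1, keepdims=True)
codebook = build_codebook(train)
print(encode(train[:5], codebook))                  # 40-way class labels per pixel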
To initialize the FCN model with self-supervised nets, we copy the weights of the convolutional layers to the corresponding layers in the FCN. For the ImageNet pre-trained network, we follow [38] by converting the fully-connected layers to convolutional layers and copying all the weights. For the model trained from scratch, we randomly initialize all the layers with "Xavier" initialization [18]. Table 5 shows the results. We report the mean and median error over all visible pixels (in degrees) and also the percentage of pixels with error less than 11.25, 22.5, and 30 degrees. Surprisingly, we obtain much better results with our self-supervised trained network than with ImageNet pre-training in this task (3 to 4% better in most metrics). As a comparison, the networks trained in [9,61] are slightly worse than the ImageNet pre-trained network. These results suggest that our learned representations are competitive with ImageNet pre-training for high-level semantic tasks, but outperform it on tasks such as surface normal estimation. This experiment suggests that different visual tasks may prefer different levels of visual invariance. | 3,903 |
1708.02901 | 2743157634 | Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue for organizing and reasoning over the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task. | Self-supervised learning is another popular stream for learning invariant features. Visual invariance can be captured from the same instance/scene observed across a sequence of video frames @cite_4 @cite_6 @cite_54 @cite_58 @cite_60 @cite_23 @cite_11 @cite_17 @cite_30 @cite_35 . For example, Wang and Gupta @cite_4 leverage tracking of objects in videos to learn visual invariance within individual objects; Jayaraman and Grauman @cite_54 train a Siamese network to model the ego-motion between two frames in a scene; Mathieu et al. @cite_60 propose to learn representations by predicting future frames; Pathak et al. @cite_17 train a network to segment the foreground objects, where the labels are acquired via motion cues. On the other hand, common characteristics of different object instances can also be mined from data @cite_40 @cite_63 @cite_9 @cite_1 @cite_55 . For example, relative positions of image patches @cite_40 may reflect feasible spatial layouts of objects; possible colors can be inferred @cite_63 @cite_9 if the networks can relate colors to object appearances. Rather than relying on temporal changes in video, these methods are able to exploit still images. | {
"abstract": [
"",
"Current state-of-the-art classification and detection algorithms train deep convolutional networks using labeled data. In this work we study unsupervised feature learning with convolutional networks in the context of temporally coherent unlabeled data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity priors. We establish a connection between slow feature learning and metric learning. Using this connection we define \"temporal coherence\" -- a criterion which can be used to set hyper-parameters in a principled and automated manner. In a transfer learning experiment, we show that the resulting encoder can be used to define a more semantically coherent metric without the use of labels.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset",
"Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.",
"We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.",
"This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"We investigate and improve self-supervision as a drop-in replacement for ImageNet pretraining, focusing on automatic colorization as the proxy task. Self-supervised training has been shown to be more promising for utilizing unlabeled data than other, traditional unsupervised learning methods. We build on this success and evaluate the ability of our self-supervised network in several contexts. On VOC segmentation and classification tasks, we present results that are state-of-the-art among methods not using ImageNet labels for pretraining representations. Moreover, we present the first in-depth analysis of self-supervision via colorization, concluding that formulation of the loss, training details and network architecture play important roles in its effectiveness. This investigation is further expanded by revisiting the ImageNet pretraining paradigm, asking questions such as: How much training data is needed? How many labels are needed? How much do features change when fine-tuned? We relate these questions back to self-supervision by showing that colorization provides a similarly powerful supervisory signal as various flavors of ImageNet pretraining.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"",
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"Data-driven approaches for edge detection have proven effective and achieve top results on modern benchmarks. However, all current data-driven edge detectors require manual supervision for training in the form of hand-labeled region segments or object boundaries. Specifically, human annotators mark semantically meaningful edges which are subsequently used for training. Is this form of strong, high-level supervision actually necessary to learn to accurately detect edges? In this work we present a simple yet effective approach for training edge detectors without human supervision. To this end we utilize motion, and more specifically, the only input to our method is noisy semi-dense matches between frames. We begin with only a rudimentary knowledge of edges (in the form of image gradients), and alternate between improving motion estimation and edge detection in turn. Using a large corpus of video data, we show that edge detectors trained using our unsupervised scheme approach the performance of the same methods trained with full supervision (within 3-5 ). Finally, we show that when using a deep network for the edge detector, our approach provides a novel pre-training scheme for object detection."
],
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_4",
"@cite_60",
"@cite_54",
"@cite_9",
"@cite_1",
"@cite_17",
"@cite_6",
"@cite_55",
"@cite_40",
"@cite_23",
"@cite_63",
"@cite_58",
"@cite_11"
],
"mid": [
"",
"1836533770",
"219040644",
"2248556341",
"2198618282",
"2949532563",
"2950064337",
"2575671312",
"2952453038",
"2949891561",
"2950187998",
"",
"2326925005",
"2951590555",
"2204233249"
]
} | Transitive Invariance for Self-supervised Visual Representation Learning | Visual invariance is a core issue in learning visual representations. Traditional features like SIFT [39] and HOG [6] are histograms of edges that are to an extent invariant to illumination, orientations, scales, and translations. Modern deep representations are capable of learning high-level invariance from large-scale data [47], e.g., viewpoint, pose, deformation, and semantics. These can also be transferred
Figure 1: We propose to obtain rich invariance by applying simple transitive relations. In this example, two different cars A and B are linked by the features that are good for inter-instance invariance (e.g., using [9]); and each car is linked to another view (A′ and B′) by visual tracking [61]. Then we can obtain new invariance from object pairs (A, B′), (A′, B), and (A′, B′) via transitivity. We show more examples in the bottom.
to complicated visual recognition tasks [17,38].
In the scheme of supervised learning, human annotations that map a variety of examples into a single label provide supervision for learning invariant representations. For example, two horses with different illumination, poses, and breeds are invariantly annotated as a category of "horse". Such human knowledge on invariance is expected to be learned by capable deep neural networks [33,28] through carefully annotated data. However, large-scale, high-quality annotations come at a cost of expensive human effort.
Unsupervised or "self-supervised" learning (e.g., [61,9,45,63,64,35,44,62,40,66]) recently has attracted increasing interests because the "labels" are free to obtain. Unlike supervised learning that learns invariance from the semantic labels, the self-supervised learning scheme mines it from the nature of the data. We observe that most selfsupervised approaches learn representations that are invariant to: (i) inter-instance variations, which reflects the commonality among different instances. For example, relative positions of patches [9] (see also Figure 3) or channels of colors [63,64] can be predicted through the commonality shared by many object instances; (ii) intra-instance variations. Intra-instance invariance is learned from the pose, viewpoint, and illumination changes by tracking a single moving instance in videos [61,44]. However, either source of invariance can be as rich as that provided by human annotations on large-scale datasets like ImageNet.
Even after significant advances in the field of self-supervised learning, there is still a long way to go compared to supervised learning. What should be the next steps? An obvious way forward is to obtain multiple sources of invariance by combining multiple self-supervised tasks, e.g., via multiple losses. Unfortunately, this naïve solution turns out to give little improvement (as we will show by experiments).
We argue that the trick lies not in the tasks but in the way of exploiting data. To leverage both intra-instance and inter-instance invariance, in this paper we construct a huge affinity graph consisting of two types of edges (see Figure 1): the first type of edges relates "different instances of similar viewpoints/poses and potentially the same category", and the second type of edges relates "different viewpoints/poses of an identical instance". We instantiate the first type of edges by learning commonalities across instances via the approach of [9], and the second type by unsupervised tracking of objects in videos [61]. We set up simple transitive relations on this graph to infer more complex invariance from the data, which are then used to train a Triplet-Siamese network for learning visual representations.
Experiments show that our representations learned without any annotations can be well transferred to the object detection task. Specifically, we achieve 63.2% mAP with VGG16 [50] when fine-tuning Fast R-CNN on VOC 2007, against the ImageNet pre-training baseline of 67.3%. More importantly, we also report the first-ever result of un-/self-supervised pre-training models fine-tuned on the challenging COCO object detection dataset [37], achieving 23.5% AP compared with the 24.4% AP that is fine-tuned from an ImageNet pre-trained counterpart (both using VGG16). To our knowledge, this is the closest accuracy to the ImageNet pre-training counterpart obtained on object detection tasks.
Overview
Our goal is to learn visual representations which capture: (i) inter-instance invariance (e.g., two instances of cats should have similar features), and (ii) intra-instance invariance (pose, viewpoint, deformation, illumination, and other variations of the same object instance). We tried to formulate this as a multi-task (multi-loss) learning problem in our initial experiments (detailed in Tables 2 and 3) and observed unsatisfactory performance. Instead of doing so, we propose to obtain a richer set of invariance by performing transitive reasoning on the data.
Our first step is to construct a graph that describes the affinity among image patches. A node in the graph denotes an image patch.

Figure 3: The context prediction task defined in [9]. Given two patches in an image, it learns to predict the relative position between them.

We define two types of edges in the graph that relate image patches to each other. The first type of edges, called inter-instance edges, link two nodes which correspond to different object instances of similar visual appearance; the second type of edges, called intra-instance edges, link two nodes which correspond to an identical object captured at different time steps of a track. The solid arrows in Figure 1 illustrate these two types of edges. Given the built graph, we want to propagate relations along the known edges and associate unconnected nodes that may provide under-explored invariance (Figure 1, dashed arrows). Specifically, as shown in Figure 1, if patches A and B are linked via an inter-instance edge, and (A, A′) and (B, B′) respectively are linked via "intra-instance" edges, we hope to enrich the invariance by simple transitivity and relate three new pairs: (A′, B′), (A, B′), and (A′, B) (Figure 1, dashed arrows).
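As a concrete illustration of this transitivity step, the sketch below (Python; the data structures and names are our own assumptions for illustration, not the authors' code) derives the three new positive pairs from one inter-instance edge and two intra-instance edges:

    # Derive new positive pairs by transitivity over the two edge types.
    def transitive_pairs(inter_edges, track_mate):
        """inter_edges: iterable of (A, B) node ids linked across instances.
        track_mate: dict mapping a node id to its tracked view, if any."""
        new_pairs = []
        for a, b in inter_edges:
            a2, b2 = track_mate.get(a), track_mate.get(b)
            if a2 is not None and b2 is not None:
                new_pairs += [(a2, b2), (a, b2), (a2, b)]
        return new_pairs

    # Example: car A tracks to A', car B tracks to B'.
    pairs = transitive_pairs([("A", "B")], {"A": "A'", "B": "B'"})
    # -> [("A'", "B'"), ("A", "B'"), ("A'", "B")]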
We train a Triplet-Siamese network that encourages similar visual representations between the invariant samples (e.g., any pair drawn from {A, A′, B, B′}) and at the same time discourages similar visual representations to a third distractor sample (e.g., a random sample C unconnected to {A, A′, B, B′}). In all of our experiments, we apply VGG16 [50] as the backbone architecture for each branch of this Triplet-Siamese network. The visual representations learned by this backbone architecture are evaluated on other recognition tasks.
Graph Construction
We construct a graph with inter-instance and intra-instance edges. First, we apply the method of [61] on a large set of 100K unlabeled videos (introduced in [61]) and mine millions of moving objects using motion cues (Sec. 4.1). We use their image patches to construct the nodes of the graph.
We instantiate inter-instance edges by the self-supervised method of [9], which learns context predictions on a large set of still images and provides features to cluster the nodes and set up inter-instance edges (Sec. 4.2). On the other hand, we connect the image patches in the same visual track by intra-instance edges (Sec. 4.3).
Mining Moving Objects
We follow the approach in [61] to find the moving objects in videos. As a brief introduction, this method first applies Improved Dense Trajectories (IDT) [58] on videos to extract SURF [2] feature points and their motion. The video frames are then pruned if there is too much motion (indicating camera motion) or too little motion (e.g., noisy signals). For the remaining frames, it crops a 227×227 bounding box (from ∼600×400 images) that includes the largest number of moving points as the foreground object. For computational efficiency, in this paper we rescale the image patches to 96 × 96 after cropping and use them as inputs for clustering and training.
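The cropping heuristic can be sketched as follows (Python/NumPy; a simplified stand-in for the actual pipeline, with illustrative names and integer point coordinates assumed). A 2-D prefix sum makes counting the moving points inside any candidate window O(1):

    import numpy as np

    def best_crop(points_xy, frame_w, frame_h, box=227):
        """Return the top-left corner of the box-by-box window covering the
        most moving feature points. points_xy: (N, 2) integer coordinates."""
        counts = np.zeros((frame_h, frame_w), dtype=np.int32)
        for x, y in points_xy:
            counts[y, x] += 1
        # Prefix sums: S[i, j] = number of points in counts[:i, :j].
        S = np.pad(counts.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
        best, best_xy = -1, (0, 0)
        for y0 in range(frame_h - box + 1):
            for x0 in range(frame_w - box + 1):
                n = (S[y0 + box, x0 + box] - S[y0, x0 + box]
                     - S[y0 + box, x0] + S[y0, x0])
                if n > best:
                    best, best_xy = n, (x0, y0)
        return best_xy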
Inter-instance Edges via Clustering
Given the extracted image patches which act as nodes, we want to link them with extra inter-instance edges. We rely on the visual representations learned from [9] to do this. We connect the nodes representing image patches which are close in the feature space. In addition, motivated by the mid-level clustering approaches [51,7], we want to obtain millions of object clusters with a small number of objects in each to maintain high "purity" of the clusters. We describe the implementation details of this step as follows.
We extract the pool5 features of the VGG16 network trained as in [9]. Following [9], we use ImageNet without labels to train this network. Note that because we use a patch size of 96×96, the dimension of our pool5 feature is 3×3×512=4608. The distance between samples is calculated by the cosine distance of these features. We want the object patches in each cluster to be close to each other in the feature space, and we care less about the differences between clusters. However, directly clustering millions of image patches into millions of small clusters (e.g., by K-means) is time-consuming. We therefore apply a hierarchical clustering approach (2-stage in this paper), where we first group the images into a relatively small number of clusters, and then find small groups of examples inside each cluster via nearest-neighbor search.
Specifically, in the first stage of clustering, we apply K-means clustering with K = 5000 on the image patches. We then remove the clusters with fewer than 100 examples (this reduces K to 546 in our experiments on the image patches mined from the video dataset). We view these clusters as the "parent" clusters (blue circles in Figure 2). Then in the second stage of clustering, inside each parent cluster, we perform nearest-neighbor search for each sample and obtain its top 10 nearest neighbors in the feature space. We then find any group of samples of size 4 inside which all the samples are each other's top-10 nearest neighbors. We call these small clusters of 4 samples "child" clusters (green circles in Figure 2). We then link these image patches with each other inside a child cluster via "inter-instance" edges. Note that different child clusters may overlap, i.e., we allow the same sample to appear in different groups. However, in our experiments we find that most samples appear only in one group. We show some results of clustering in Figure 4.
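The two-stage procedure can be condensed into the following sketch (Python with scikit-learn; the parameter values mirror the text, but the helper itself, its names, and the brute-force search for mutual-neighbor groups are our own simplifications, not the authors' implementation):

    import numpy as np
    from itertools import combinations
    from sklearn.cluster import KMeans
    from sklearn.neighbors import NearestNeighbors
    from sklearn.preprocessing import normalize

    def child_clusters(feats, k_parent=5000, min_size=100, topk=10, group=4):
        feats = normalize(feats)  # unit vectors: L2 ranking == cosine ranking
        parents = KMeans(n_clusters=k_parent, n_init=3).fit_predict(feats)
        groups = set()
        for p in np.unique(parents):
            idx = np.where(parents == p)[0]
            if len(idx) < min_size:          # drop underpopulated parents
                continue
            nn = NearestNeighbors(n_neighbors=topk + 1).fit(feats[idx])
            _, nbrs = nn.kneighbors(feats[idx])
            # Map each global id to the global ids of its top-k neighbors
            # (column 0 is the sample itself, so it is skipped).
            local = {g: set(idx[row[1:]]) for g, row in zip(idx, nbrs)}
            for i, s in local.items():
                for trio in combinations(sorted(s), group - 1):
                    members = (i,) + trio
                    if all(all(b in local[a] for b in members if b != a)
                           for a in members):
                        groups.add(tuple(sorted(members)))
        return groups  # each 4-tuple yields pairwise inter-instance edges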
Intra-instance Edges via Tracking
To obtain rich variations of viewpoint and deformation changes of the same object instance, we apply visual tracking on the mined moving objects in the videos as in [61]. More specifically, given a moving object in the video, we apply KCF [23] to track the object for N = 30 frames and obtain another sample of the object at the end of the track. Note that the KCF tracker does not require any human supervision. We add these new objects as nodes to the graph and link the two samples in the same track with an intra-instance edge (purple in Figure 2).
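For reference, a minimal tracking sketch looks like this (Python with OpenCV; we assume an opencv-contrib build, where the KCF tracker lives, and newer builds expose it under cv2.legacy — this is an illustration of obtaining the two endpoints of an intra-instance edge, not the authors' pipeline):

    import cv2

    def track_pair(frames, bbox, n=30):
        """frames: list of BGR images; bbox: (x, y, w, h) of a mined object.
        Returns the first and last crops of a track, or None on failure."""
        make = (cv2.legacy.TrackerKCF_create if hasattr(cv2, "legacy")
                else cv2.TrackerKCF_create)
        tracker = make()
        tracker.init(frames[0], bbox)
        last = bbox
        for f in frames[1:n + 1]:
            ok, last = tracker.update(f)
            if not ok:           # lost track: discard this candidate edge
                return None
        x, y, w, h = [int(v) for v in bbox]
        x2, y2, w2, h2 = [int(v) for v in last]
        first_crop = frames[0][y:y + h, x:x + w]
        last_crop = frames[min(n, len(frames) - 1)][y2:y2 + h2, x2:x2 + w2]
        return first_crop, last_crop  # two nodes joined by an intra-instance edge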
Learning with Transitions in the Graph
With the graph constructed, we want to link more image patches (see dotted links in Figure 1) and use the newly related patches as positive training pairs, encouraging them to have similar representations. To avoid the trivial solution of identical representations, we also encourage the network to generate dissimilar representations if a node is expected to be unrelated. Specifically, we constrain the image patches from different "parent" clusters (which are more likely to have different categories) to have different representations (which we call a negative pair of samples). We design a Triplet-Siamese network with a ranking loss function [59,61] such that the distance between related samples should be smaller than the distance between unrelated samples. Our Triplet-Siamese network includes three towers of a ConvNet with shared weights (Figure 6). For each tower, we adopt the standard VGG16 architecture [50] for the convolutional layers, after which we add two fully-connected layers with 4096-d and 1024-d outputs. The Triplet-Siamese network accepts a triplet sample as its input: the first two image patches in the triplet are a positive pair, and the last two are a negative pair. We extract their 1024-d features and calculate the ranking loss as follows.
Given an arbitrary pair of image patches A and B, we define their distance as:

D(A, B) = 1 − F(A)·F(B) / (‖F(A)‖ ‖F(B)‖),

where F(·) is the representation mapping of the network. With a triplet (X, X⁺, X⁻), where (X, X⁺) is a positive pair and (X, X⁻) is a negative pair as defined above, we minimize the ranking loss:

L(X, X⁺, X⁻) = max{0, D(X, X⁺) − D(X, X⁻) + m},
where m is a margin set to 0.5 in our experiments. Although we have only one objective function, we have different types of training examples. As illustrated in Figure 6, given the set of related samples {A, B, A′, B′} (see Figure 5) and a random distractor sample C from another parent cluster, we can train the network to handle, e.g., viewpoint invariance for the same instance via L(A, A′, C) and invariance to different objects sharing the same semantics via L(A, B′, C).
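The distance and loss above translate directly into a few lines of code. The sketch below (PyTorch, which we use only for illustration; embed stands for the shared VGG16 trunk plus the two fully-connected layers and is assumed, not provided) shows one way to compute them:

    import torch
    import torch.nn.functional as F

    def cosine_distance(a, b):
        # D(A, B) = 1 - F(A)·F(B) / (||F(A)|| ||F(B)||), computed per row.
        return 1.0 - F.cosine_similarity(a, b, dim=1)

    def ranking_loss(x, x_pos, x_neg, m=0.5):
        d_pos = cosine_distance(x, x_pos)
        d_neg = cosine_distance(x, x_neg)
        # Hinge on the margin: positives must be closer than negatives by m.
        return torch.clamp(d_pos - d_neg + m, min=0.0).mean()

    # With shared-weight towers, ranking_loss(embed(A), embed(A2), embed(C))
    # realizes L(A, A', C); the same embed() is applied to every branch.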
Besides exploring these relations, we have also tried to enforce the distance between different objects to be larger than the distance between two different viewpoints of the same object, e.g., D(A, A′) < D(A, B′). But we have not found that this extra relation brings any improvement. Interestingly, we found that the representations learned by our method in general satisfy D(A, A′) < D(A, B′) after training.
Experiments
We perform extensive analysis on our self-supervised representations. We first evaluate our ConvNet as a feature extractor on different tasks without fine-tuning. We then show the results of transferring the representations to vision tasks including object detection and surface normal estimation with fine-tuning. Implementation Details. To prepare the data for training, we download the 100K videos from YouTube using the URLs provided by [36,61]. By mining and tracking moving objects in the videos, we obtain ∼10 million image patches of objects. By applying transitivity on the constructed graph, we obtain 7 million positive pairs of objects, where each pair consists of two different instances with different viewpoints. We also randomly sample 2 million object pairs connected by the intra-instance edges.
We train our network with these 9 million pairs of images using a learning rate of 0.001 and a mini-batch size of 100.
For each pair we sample the third distractor patch from a different "parent" cluster in the same mini-batch. We use the network pre-trained in [9] to initialize our convolutional layers and randomly initialize the fully-connected layers. We train the network for 200K iterations with our method.
Qualitative Results without Fine-tuning
We first perform nearest-neighbor search to show qualitative results. We adopt the pool5 feature of the VGG16 network for all methods without any fine-tuning (Figure 7). We do this experiment on the object instances cropped from the PASCAL VOC 2007 dataset [13] (trainval). As Figure 7 shows, given a query image on the left, the network pre-trained with the context prediction task [9] can retrieve objects of very similar viewpoints. On the other hand, our network shows more variations of objects and can often retrieve objects of the same class as the query. We also show the nearest-neighbor results using fully-supervised ImageNet pre-trained features as a comparison. In addition, we visualize the features using the visualization technique of [65]. For each convolutional unit in conv5_3, we retrieve the objects which give the highest activation responses and highlight the receptive fields on the images. We visualize the top 6 images for 4 different convolutional units in Figure 8. We can see that these convolutional units correspond to different semantic object parts (e.g., fronts of cars or buses, wheels, animal legs, eyes, or faces).
Analysis on Object Detection
We evaluate how well our representations can be transferred to object detection by fine-tuning Fast R-CNN [16] on PASCAL VOC 2007 [13]. We use the standard trainval set for training and the test set for testing, with VGG16 as the base architecture. For the detection network, we initialize the weights of the convolutional layers from our self-supervised network and randomly initialize the fully-connected layers using Gaussian noise with zero mean and 0.001 standard deviation.
During fine-tuning of Fast R-CNN, we use 0.00025 as the starting learning rate. We reduce the learning rate by 1/10 every 50K iterations. We fine-tune the network for 150K iterations. Unlike standard Fast R-CNN, where the first few convolutional layers of the ImageNet pre-trained network are fixed, we fine-tune all layers on the PASCAL data, as our model is pre-trained in a very different domain (e.g., video patches).
We report the results in Table 1. If we train Fast R-CNN from scratch without any pre-training, we can only obtain 39.7% mAP. With our self-supervised trained network as the initialization, the detection mAP is increased to 63.2% (a 23.5-point improvement). Our result compares competitively (4.1 points lower) to the counterpart using ImageNet pre-training (67.3% with VGG16).
As we incorporate the invariance captured from [61] and [9], we also evaluate the results using these two approaches individually (Table 1). By fine-tuning the context prediction network of [9], we can obtain 61.5% mAP. To train the network of [61], we use exactly the same loss function and initialization as our approach, except that there are only training examples of the same instance in the same visual track (i.e., only the samples linked by intra-instance edges in our graph). Our result is better than both methods. This comparison indicates the effectiveness of exploiting a greater variety of invariance in representation learning.
Is multi-task learning sufficient? An alternative way of obtaining both intra- and inter-instance invariance is to apply multi-task learning with the two losses of [9] and [61].
Next we compare with this method. For the task in [61], we use the same network architecture as our approach; for the task in [9], we follow their design of a Siamese network. We apply different fully-connected layers for the different tasks, but share the convolutional layers between the two tasks. Given a mini-batch of training samples, we perform ranking among these images as well as context prediction in each image simultaneously via two losses. The representations learned in this way, when fine-tuned with Fast R-CNN, obtain 62.1% mAP ("Multitask" in Table 2). Compared to using only context prediction [9] (61.5%), multi-task learning gives only a marginal improvement (0.6%). This result suggests that multi-task learning in this way is not sufficient; organizing and exploiting the relationships of data, as done by our method, is more effective for representation learning.
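For completeness, the multi-task baseline can be sketched as a shared trunk with two heads trained under a summed loss (PyTorch; the layer sizes, names, and overall class are our own illustrative assumptions, not the exact configuration used in these experiments):

    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, trunk, feat_dim=4608):   # 4608 = 3x3x512 pool5
            super().__init__()
            self.trunk = trunk                       # shared conv layers (VGG16)
            self.rank_head = nn.Sequential(          # embedding for ranking [61]
                nn.Linear(feat_dim, 4096), nn.ReLU(), nn.Linear(4096, 1024))
            self.ctx_head = nn.Linear(2 * feat_dim, 8)  # 8 relative positions [9]

        def embed(self, x):                          # branch for the ranking loss
            return self.rank_head(torch.flatten(self.trunk(x), 1))

        def context_logits(self, patch_a, patch_b):  # branch for context prediction
            fa = torch.flatten(self.trunk(patch_a), 1)
            fb = torch.flatten(self.trunk(patch_b), 1)
            return self.ctx_head(torch.cat([fa, fb], dim=1))

    # total_loss = ranking_loss(...) + nn.CrossEntropyLoss()(logits, positions)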
How important is tracking? To further understand how much visual tracking helps, we perform an ablative analysis by making the visual tracks shorter: we track the moving objects for 15 frames instead of the default 30 frames. This is expected to reduce the viewpoint/pose/deformation variance contributed by tracking. Our model pre-trained in this way shows 61.5% mAP ("15-frame" in Table 2) when fine-tuned for detection. This number is similar to that of using context prediction only (Table 1). This result is not surprising, because shorter tracks do not add much new information for training. It suggests that adding stronger viewpoint/pose/deformation invariance is important for learning better features for object detection.
How important is clustering? Furthermore, we want to understand how important it is to cluster images with features learned from still images [9]. We perform another ablative analysis by replacing the features of [9] with HOG [6] during clustering. The rest of the pipeline remains exactly the same. The final result is 60.4% mAP ("HOG" in Table 2). This shows that if the features for clustering are not invariant enough to handle different object instances, the transitivity in the graph becomes less reliable.
Object Detection with Faster R-CNN
Although Fast R-CNN [16] has been a popular testbed of un-/self-supervised features, it relies on Selective Search proposals [55] and thus is not fully end-to-end. We further evaluate the representations on object detection with the end-to-end Faster R-CNN [46], where the Region Proposal Network (RPN) may suffer if the features are of low quality.
PASCAL VOC 2007 Results. We fine-tune Faster R-CNN in 8 GPUs for 35K iterations with an initial learning rate of 0.00025, which is reduced by 1/10 after every 15K iterations. Table 3 shows the results. COCO Results. We further report results on the challenging COCO detection dataset [37]. To the best of our knowledge this is the first work of this kind presented on COCO detection. We fine-tune Faster R-CNN in 8 GPUs for 120K iterations with an initial learning rate of 0.001, which is reduced by 1/10 after 80K iterations. This model is trained on the COCO trainval35k split and evaluated on the minival5k split introduced by [3].
We report the COCO results in Table 4. Faster R-CNN fine-tuned with our self-supervised network obtains 23.5% AP using the COCO metric, which is very close (<1%) to fine-tuning Faster R-CNN with the ImageNet pre-trained counterpart (24.4%). In fact, if the fine-tuning of the ImageNet counterpart follows the "shorter" schedule in the public code (61.25K iterations in 8 GPUs, converted from 490K in 1 GPU) 1 , the ImageNet supervised pre-training version reaches 23.7% AP and is comparable with ours. This comparison further strengthens the significance of our result.
To the best of our knowledge, our model achieves the best performance reported to date on VOC 2007 and COCO using un-/self-supervised pre-training.
Adapting to Surface Normal Estimation
To show the generalization ability of our self-supervised representations, we adapt the learned network to the surface normal estimation task. In this task, given a single RGB image as input, we train the network to predict the normal/orientation of the pixels.

Table 5: Results on NYU v2 for per-pixel surface normal estimation, evaluated over valid pixels.

We evaluate our method on the NYUv2 RGBD dataset [49]. We use the official split of 795 images for training and 654 images for testing. We follow the same protocols for generating surface normal ground truth and evaluations as [14,29,15].
To train the network for surface normal estimation, we apply the Fully Convolutional Network (FCN 32-s) proposed in [38] with the VGG16 network as the base architecture. For the loss function, we follow the design in [60]. Specifically, instead of direct regression to obtain the normal, we use a codebook of 40 codewords to encode the 3-dimensional normals. Each codeword represents one class; thus we turn the problem into a 40-class classification for each pixel. We use the same hyperparameters as in [38] for training, and the network is fine-tuned for the same number of iterations (100K) for different initializations.
To initialize the FCN model with self-supervised nets, we copy the weights of the convolutional layers to the corresponding layers in FCN. For the ImageNet pre-trained network, we follow [38] by converting the fully connected layers to convolutional layers and copying all the weights. For the model trained from scratch, we randomly initialize all the layers with "Xavier" initialization [18]. Table 5 shows the results. We report the mean and median error over all visible pixels (in degrees) and also the percentage of pixels with error less than 11.25, 22.5, and 30 degrees. Surprisingly, we obtain much better results with our self-supervised trained network than with ImageNet pre-training in this task (3 to 4% better in most metrics). As a comparison, the networks trained in [9,61] are slightly worse than the ImageNet pre-trained network. These results suggest that our learned representations are competitive with ImageNet pre-training on high-level semantic tasks, but outperform it on tasks such as surface normal estimation. This experiment suggests that different visual tasks may prefer different levels of visual invariance. | 3,903
1708.02901 | 2743157634 | Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue for organizing and reasoning over the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task. | Our work is also closely related to mid-level patch clustering @cite_51 @cite_32 @cite_22 and unsupervised discovery of semantic classes @cite_37 @cite_46 , as we attempt to find reliable clusters in our affinity graph. In addition, the ranking function used in this paper is related to deep metric learning with Siamese architectures @cite_29 @cite_7 @cite_48 @cite_62 @cite_50 . | {
"abstract": [
"Given a large dataset of images, we seek to automatically determine the visually similar object and scene classes together with their image segmentation. To achieve this we combine two ideas: (i) that a set of segmented objects can be partitioned into visual object classes using topic discovery models from statistical text analysis; and (ii) that visual object classes can be used to assess the accuracy of a segmentation. To tie these ideas together we compute multiple segmentations of each image and then: (i) learn the object classes; and (ii) choose the correct segmentations. We demonstrate that such an algorithm succeeds in automatically discovering many familiar objects in a variety of image datasets, including those from Caltech, MSRC and LabelMe.",
"Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.",
"This paper addresses the well-established problem of unsupervised object discovery with a novel method inspired by weakly-supervised approaches. In particular, the ability of an object patch to predict the rest of the object (its context) is used as supervisory signal to help discover visually consistent object clusters. The main contributions of this work are: 1) framing unsupervised clustering as a leave-one-out context prediction task; 2) evaluating the quality of context prediction by statistical hypothesis testing between thing and stuff appearance models; and 3) an iterative region prediction and context alignment approach that gradually discovers a visual object cluster together with a segmentation mask and fine-grained correspondences. The proposed method outperforms previous unsupervised as well as weakly-supervised object discovery approaches, and is shown to provide correspondences detailed enough to transfer keypoint annotations.",
"Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that 'similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distancemeasure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE.",
"Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications. While existing work usually use conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key components that lead to better performances. Specifically, we show that a significant performance gain could be obtained by combining convolutional architectures with approximate top- @math ranking objectives, as thye naturally fit the multilabel tagging problem. Our experiments on the NUS-WIDE dataset outperforms the conventional visual features by about 10 , obtaining the best reported performance in the literature.",
"",
"Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.",
"",
"We seek to discover the object categories depicted in a set of unlabelled images. We achieve this using a model developed in the statistical text literature: probabilistic latent semantic analysis (pLSA). In text analysis, this is used to discover topics in a corpus using the bag-of-words document representation. Here we treat object categories as topics, so that an image containing instances of several categories is modeled as a mixture of topics. The model is applied to images by using a visual analogue of a word, formed by vector quantizing SIFT-like region descriptors. The topic discovery approach successfully translates to the visual domain: for a small set of objects, we show that both the object categories and their approximate spatial layout are found without supervision. Performance of this unsupervised method is compared to the supervised approach of (2003) on a set of unseen images containing only one object per image. We also extend the bag-of-words vocabulary to include 'doublets' which encode spatially local co-occurring regions. It is demonstrated that this extended vocabulary gives a cleaner image segmentation. Finally, the classification and segmentation methods are applied to a set of images containing multiple objects per image. These results demonstrate that we can successfully build object class models from an unsupervised analysis of images.",
"The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset."
],
"cite_N": [
"@cite_37",
"@cite_62",
"@cite_22",
"@cite_7",
"@cite_48",
"@cite_29",
"@cite_32",
"@cite_50",
"@cite_46",
"@cite_51"
],
"mid": [
"2112301665",
"1975517671",
"1032684693",
"2138621090",
"1514027499",
"",
"2115628259",
"",
"2103658758",
"2951702175"
]
} | Transitive Invariance for Self-supervised Visual Representation Learning | Visual invariance is a core issue in learning visual representations. Traditional features like SIFT [39] and HOG [6] are histograms of edges that are to an extent invariant to illumination, orientations, scales, and translations. Modern deep representations are capable of learning high-level invariance from large-scale data [47], e.g., viewpoint, pose, deformation, and semantics. These can also be transferred
Figure 1: We propose to obtain rich invariance by applying simple transitive relations. In this example, two different cars A and B are linked by the features that are good for inter-instance invariance (e.g., using [9]); and each car is linked to another view (A′ and B′) by visual tracking [61]. Then we can obtain new invariance from object pairs (A, B′), (A′, B), and (A′, B′) via transitivity. We show more examples in the bottom.
to complicated visual recognition tasks [17,38].
In the scheme of supervised learning, human annotations that map a variety of examples into a single label provide supervision for learning invariant representations. For example, two horses with different illumination, poses, and breeds are invariantly annotated as a category of "horse". Such human knowledge on invariance is expected to be learned by capable deep neural networks [33,28] through carefully annotated data. However, large-scale, high-quality annotations come at a cost of expensive human effort.
Unsupervised or "self-supervised" learning (e.g., [61,9,45,63,64,35,44,62,40,66]) has recently attracted increasing interest because the "labels" are free to obtain. Unlike supervised learning, which learns invariance from semantic labels, the self-supervised learning scheme mines it from the nature of the data. We observe that most self-supervised approaches learn representations that are invariant to: (i) inter-instance variations, which reflect the commonality among different instances. For example, relative positions of patches [9] (see also Figure 3) or channels of colors [63,64] can be predicted through the commonality shared by many object instances; (ii) intra-instance variations. Intra-instance invariance is learned from the pose, viewpoint, and illumination changes by tracking a single moving instance in videos [61,44]. However, neither source of invariance alone is as rich as that provided by human annotations on large-scale datasets like ImageNet.
Even after significant advances in the field of self-supervised learning, there is still a long way to go compared to supervised learning. What should be the next steps? An obvious way forward is to obtain multiple sources of invariance by combining multiple self-supervised tasks, e.g., via multiple losses. Unfortunately, this naïve solution turns out to give little improvement (as we will show by experiments).
We argue that the trick lies not in the tasks but in the way of exploiting data. To leverage both intra-instance and inter-instance invariance, in this paper we construct a huge affinity graph consisting of two types of edges (see Figure 1): the first type of edges relates "different instances of similar viewpoints/poses and potentially the same category", and the second type of edges relates "different viewpoints/poses of an identical instance". We instantiate the first type of edges by learning commonalities across instances via the approach of [9], and the second type by unsupervised tracking of objects in videos [61]. We set up simple transitive relations on this graph to infer more complex invariance from the data, which are then used to train a Triplet-Siamese network for learning visual representations.
Experiments show that our representations learned without any annotations can be well transferred to the object detection task. Specifically, we achieve 63.2% mAP with VGG16 [50] when fine-tuning Fast R-CNN on VOC 2007, against the ImageNet pre-training baseline of 67.3%. More importantly, we also report the first-ever result of un-/self-supervised pre-training models fine-tuned on the challenging COCO object detection dataset [37], achieving 23.5% AP compared with the 24.4% AP that is fine-tuned from an ImageNet pre-trained counterpart (both using VGG16). To our knowledge, this is the closest accuracy to the ImageNet pre-training counterpart obtained on object detection tasks.
Overview
Our goal is to learn visual representations which capture: (i) inter-instance invariance (e.g., two instances of cats should have similar features), and (ii) intra-instance invariance (pose, viewpoint, deformation, illumination, and other variations of the same object instance). We tried to formulate this as a multi-task (multi-loss) learning problem in our initial experiments (detailed in Tables 2 and 3) and observed unsatisfactory performance. Instead of doing so, we propose to obtain a richer set of invariance by performing transitive reasoning on the data.
Our first step is to construct a graph that describes the affinity among image patches. A node in the graph denotes an image patch.

Figure 3: The context prediction task defined in [9]. Given two patches in an image, it learns to predict the relative position between them.

We define two types of edges in the graph that relate image patches to each other. The first type of edges, called inter-instance edges, link two nodes which correspond to different object instances of similar visual appearance; the second type of edges, called intra-instance edges, link two nodes which correspond to an identical object captured at different time steps of a track. The solid arrows in Figure 1 illustrate these two types of edges. Given the built graph, we want to propagate relations along the known edges and associate unconnected nodes that may provide under-explored invariance (Figure 1, dashed arrows). Specifically, as shown in Figure 1, if patches A and B are linked via an inter-instance edge, and (A, A′) and (B, B′) respectively are linked via "intra-instance" edges, we hope to enrich the invariance by simple transitivity and relate three new pairs: (A′, B′), (A, B′), and (A′, B) (Figure 1, dashed arrows).
We train a Triplet-Siamese network that encourages similar visual representations between the invariant samples (e.g., any pair drawn from {A, A', B, B'}) and at the same time discourages similar visual representations to a third distractor sample (e.g., a random sample C unconnected to {A, A', B, B'}). In all of our experiments, we apply VGG16 [50] as the backbone architecture for each branch of this Triplet-Siamese network. The visual representations learned by this backbone architecture are evaluated on other recognition tasks.
Graph Construction
We construct a graph with inter-instance and intra-instance edges. First, we apply the method of [61] on a large set of 100K unlabeled videos (introduced in [61]) and mine millions of moving objects using motion cues (Sec. 4.1). We use their image patches as the nodes of the graph.
We instantiate inter-instance edges by the self-supervised method of [9], which learns context predictions on a large set of still images; this provides features to cluster the nodes and set up inter-instance edges (Sec. 4.2). On the other hand, we connect the image patches in the same visual track by intra-instance edges (Sec. 4.3).
Mining Moving Objects
We follow the approach in [61] to find the moving objects in videos. As a brief introduction, this method first applies Improved Dense Trajectories (IDT) [58] on videos to extract SURF [2] feature points and their motion. The video frames are then pruned if there is too much motion (indicating camera motion) or too little motion (e.g., noisy signals). For the remaining frames, it crops a 227×227 bounding box (from ∼600×400 images) which contains the largest number of moving points, as the foreground object. However, for computational efficiency, in this paper we rescale the image patches to 96×96 after cropping and use them as inputs for clustering and training.
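To illustrate the cropping rule, here is a rough sketch that scores sliding 227×227 windows by the number of moving points they contain and keeps the best one; the point format and the search stride are our assumptions, and the actual procedure is the one in [61].

import numpy as np

def best_moving_crop(points, img_w, img_h, crop=227, stride=32):
    """Return the top-left corner (x0, y0) of the crop window that
    contains the largest number of moving SURF points.

    points: (N, 2) array of (x, y) coordinates of moving feature points.
    """
    best, best_xy = -1, (0, 0)
    for y0 in range(0, max(1, img_h - crop + 1), stride):
        for x0 in range(0, max(1, img_w - crop + 1), stride):
            inside = ((points[:, 0] >= x0) & (points[:, 0] < x0 + crop) &
                      (points[:, 1] >= y0) & (points[:, 1] < y0 + crop))
            n = int(inside.sum())
            if n > best:
                best, best_xy = n, (x0, y0)
    return best_xy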
Inter-instance Edges via Clustering
Given the extracted image patches which act as nodes, we want to link them with extra inter-instance edges. We rely on the visual representations learned from [9] to do this. We connect the nodes representing image patches which are close in the feature space. In addition, motivated by the mid-level clustering approaches [51,7], we want to obtain millions of object clusters with a small number of objects in each to maintain high "purity" of the clusters. We describe the implementation details of this step as follows.
We extract the pool5 features of the VGG16 network trained as in [9]. Following [9], we use ImageNet without labels to train this network. Note that because we use a patch size of 96×96, the dimension of our pool5 feature is 3×3×512=4608. The distance between samples is calculated by the cosine distance of these features. We want the object patches in each cluster to be close to each other in the feature space, and we care less about the differences between clusters. However, directly clustering millions of image patches into millions of small clusters (e.g., by K-means) is time-consuming. So we apply a hierarchical clustering approach (two-stage in this paper) where we first group the images into a relatively small number of clusters, and then find groups with a small number of examples inside each cluster via nearest-neighbor search.
Specifically, in the first stage of clustering, we apply K-means clustering with K = 5000 on the image patches. We then remove the clusters with fewer than 100 examples (this reduces K to 546 in our experiments on the image patches mined from the video dataset). We view these clusters as the "parent" clusters (blue circles in Figure 2). Then in the second stage of clustering, inside each parent cluster, we perform nearest-neighbor search for each sample and obtain its top 10 nearest neighbors in the feature space. We then find any group of samples with a group size of 4, inside which all the samples are each other's top-10 nearest neighbors. We call these small clusters with 4 samples "child" clusters (green circles in Figure 2). We then link these image patches with each other inside a child cluster via "inter-instance" edges. Note that different child clusters may overlap, i.e., we allow the same sample to appear in different groups. However, in our experiments we find that most samples appear only in one group. We show some results of clustering in Figure 4.
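The two-stage procedure can be sketched with scikit-learn primitives as follows. This is a sketch under our assumptions about data layout; features are L2-normalized so that Euclidean nearest neighbors coincide with the cosine distance used above, and the search for mutual-neighbor groups of 4 is restricted to each sample's own top-10 list to stay tractable.

import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def two_stage_clusters(feats, k=5000, min_size=100, knn=10, group=4):
    # L2-normalize so Euclidean neighbor order matches cosine distance.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    parents = KMeans(n_clusters=k).fit_predict(feats)     # stage 1
    children = set()
    for c in np.unique(parents):
        idx = np.where(parents == c)[0]
        if len(idx) < min_size:                # drop small parent clusters
            continue
        nn = NearestNeighbors(n_neighbors=knn + 1).fit(feats[idx])
        _, nbrs = nn.kneighbors(feats[idx])    # stage 2: per-sample top-k
        top = {int(idx[i]): {int(j) for j in idx[nbrs[i][1:]]}
               for i in range(len(idx))}       # nbrs[i][0] is the sample itself
        for a in idx:
            a = int(a)
            for rest in combinations(sorted(top[a]), group - 1):
                quad = (a,) + rest
                # keep the group only if all members are mutually top-k
                if all(u in top[v] and v in top[u]
                       for u, v in combinations(quad, 2)):
                    children.add(tuple(sorted(quad)))
    return children  # each child cluster yields inter-instance edges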
Intra-instance Edges via Tracking
To obtain rich variations of viewpoint and deformation changes of the same object instance, we apply visual tracking on the mined moving objects in the videos, as in [61]. More specifically, given a moving object in the video, it applies KCF [23] to track the object for N = 30 frames and obtains another sample of the object at the end of the track. Note that the KCF tracker does not require any human supervision. We add these new objects as nodes to the graph and link the two samples in the same track with an intra-instance edge (purple in Figure 2).
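A minimal sketch of the tracking step, here using OpenCV's KCF implementation (an assumption on our part; the text only specifies the KCF tracker of [23], not a particular library):

import cv2

def track_patch(frames, init_box, n_frames=30):
    """Track init_box = (x, y, w, h) through n_frames with KCF and
    return the final box; the crop there pairs with the initial patch
    via an intra-instance edge. Requires opencv-contrib; newer builds
    expose the tracker as cv2.legacy.TrackerKCF_create().
    """
    tracker = cv2.TrackerKCF_create()
    tracker.init(frames[0], init_box)
    box = init_box
    for frame in frames[1:n_frames + 1]:
        ok, box = tracker.update(frame)
        if not ok:                 # tracking failure: discard this sample
            return None
    return box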
Learning with Transitions in the Graph
With the graph constructed, we want to link more image patches (see the dashed links in Figure 1). To avoid the trivial solution of identical representations, we also encourage the network to generate dissimilar representations if a node is expected to be unrelated. Specifically, we constrain the image patches from different "parent" clusters (which are more likely to have different categories) to have different representations (which we call a negative pair of samples). We design a Triplet-Siamese network with a ranking loss function [59,61] such that the distance between related samples should be smaller than the distance between unrelated samples. Our Triplet-Siamese network includes three towers of a ConvNet with shared weights (Figure 6). For each tower, we adopt the standard VGG16 architecture [50] for the convolutional layers, after which we add two fully-connected layers with 4096-d and 1024-d outputs. The Triplet-Siamese network accepts a triplet sample as its input: the first two image patches in the triplet are a positive pair, and the last two are a negative pair. We extract their 1024-d features and calculate the ranking loss as follows.
Given an arbitrary pair of image patches A and B, we define their distance as:

D(A, B) = 1 − (F(A) · F(B)) / (‖F(A)‖ ‖F(B)‖),

where F(·) is the representation mapping of the network. With a triplet (X, X⁺, X⁻), where (X, X⁺) is a positive pair and (X, X⁻) is a negative pair as defined above, we minimize the ranking loss:

L(X, X⁺, X⁻) = max{0, D(X, X⁺) − D(X, X⁻) + m},
where m is a margin set as 0.5 in our experiments. Although we have only one objective function, we have different types of training examples. As illustrated in Figure 6, given the set of related samples {A, B, A', B'} (see Figure 5) and a random distractor sample C from another parent cluster, we can train the network to handle, e.g., viewpoint invariance for the same instance via L(A, A', C) and invariance to different objects sharing the same semantics via L(A, B', C).
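For concreteness, a minimal PyTorch sketch of this distance and ranking loss on batched 1024-d embeddings (PyTorch is our choice of framework here, not necessarily the original implementation):

import torch
import torch.nn.functional as F

def ranking_loss(x, x_pos, x_neg, margin=0.5):
    """L(X, X+, X-) = max{0, D(X, X+) - D(X, X-) + margin},
    with D(a, b) = 1 - cosine similarity, averaged over the batch."""
    d_pos = 1.0 - F.cosine_similarity(x, x_pos, dim=1)
    d_neg = 1.0 - F.cosine_similarity(x, x_neg, dim=1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

# e.g., ranking_loss(f(A), f(A_prime), f(C))  # same instance, new viewpoint
#       ranking_loss(f(A), f(B_prime), f(C))  # different instance, same semantics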
Besides exploring these relations, we have also tried to enforce the distance between different objects to be larger than the distance between two different viewpoints of the same object, e.g., D(A, A') < D(A, B'). However, we have not found that this extra relation brings any improvement. Interestingly, we found that the representations learned by our method can in general satisfy D(A, A') < D(A, B') after training.
Experiments
We perform extensive analysis on our self-supervised representations. We first evaluate our ConvNet as a feature extractor on different tasks without fine-tuning. We then show the results of transferring the representations to vision tasks including object detection and surface normal estimation with fine-tuning. Implementation Details. To prepare the data for training, we download the 100K videos from YouTube using the URLs provided by [36,61]. By mining the moving objects and tracking in the videos, we obtain ∼10 million image patches of objects. By applying the transitivity on the constructed graph, we obtain 7 million positive pairs of objects, where each pair consists of two different instances with different viewpoints. We also randomly sample 2 million object pairs connected by the intra-instance edges.
We train our network with these 9 million pairs of images using a learning rate of 0.001 and a mini-batch size of 100.
For each pair we sample the third distractor patch from a different "parent cluster" in the same mini-batch. We use the network pre-trained in [9] to initialize our convolutional layers and randomly initialize the fully-connected layers. We train the network for 200K iterations with our method.
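A small sketch of how triplets might be assembled inside a mini-batch according to the sampling rule above; the patch ids and the parent_of lookup are our own illustration:

import random

def make_triplets(batch_pairs, parent_of):
    """batch_pairs: list of (x, x_pos) positive pairs in the mini-batch.
    parent_of: maps a patch id to its parent-cluster id.
    The distractor x_neg is drawn from a different parent cluster
    within the same mini-batch."""
    patches = [p for pair in batch_pairs for p in pair]
    triplets = []
    for x, x_pos in batch_pairs:
        candidates = [p for p in patches if parent_of[p] != parent_of[x]]
        if candidates:
            triplets.append((x, x_pos, random.choice(candidates)))
    return triplets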
Qualitative Results without Fine-tuning
We first perform nearest-neighbor search to show qualitative results. We adopt the pool5 feature of the VGG16 network for all methods without any fine-tuning (Figure 7). We do this experiment on the object instances cropped from the PASCAL VOC 2007 dataset [13] (trainval). As Figure 7 shows, given a query image on the left, the network pre-trained with the context prediction task [9] can retrieve objects of very similar viewpoints. On the other hand, our network shows more variations of objects and can often retrieve objects of the same class as the query. We also show the nearest-neighbor results using fully-supervised ImageNet pre-trained features as a comparison. We also visualize the features using the visualization technique of [65]. For each convolutional unit in conv5_3, we retrieve the objects which give the highest activation responses and highlight the receptive fields on the images. We visualize the top 6 images for 4 different convolutional units in Figure 8. We can see these convolutional units correspond to different semantic object parts (e.g., fronts of cars or buses, wheels, animal legs, eyes or faces).
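This retrieval amounts to cosine-similarity search over the pre-extracted pool5 features; a minimal numpy sketch (array shapes are our assumption):

import numpy as np

def nearest_neighbors(query, database, topk=5):
    """query: (D,) pool5 feature; database: (N, D) features.
    Returns indices of the topk most cosine-similar patches."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:topk]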
Analysis on Object Detection
We evaluate how well our representations can be transferred to object detection by fine-tuning Fast R-CNN [16] on PASCAL VOC 2007 [13]. We use the standard trainval set for training and the test set for testing, with VGG16 as the base architecture. For the detection network, we initialize the weights of the convolutional layers from our self-supervised network and randomly initialize the fully-connected layers using Gaussian noise with zero mean and 0.001 standard deviation.
During fine-tuning Fast R-CNN, we use 0.00025 as the starting learning rate. We reduce the learning rate by 1/10 every 50K iterations. We fine-tune the network for 150K iterations. Unlike standard Fast R-CNN, where the first few convolutional layers of the ImageNet pre-trained network are fixed, we fine-tune all layers on the PASCAL data, as our model is pre-trained in a very different domain (e.g., video patches).
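A sketch of this initialization, assuming PyTorch-style state dicts and conv*/fc* layer naming (both assumptions of ours; the original detection code is Caffe-based Fast R-CNN):

import torch

@torch.no_grad()
def init_detection_net(det_net, ssl_state):
    """Copy convolutional weights from the self-supervised network and
    re-initialize fully-connected weights from N(0, 0.001)."""
    det_state = det_net.state_dict()
    for name, w in ssl_state.items():
        if name.startswith("conv") and name in det_state:
            det_state[name].copy_(w)            # reuse conv layers
    for name, w in det_state.items():
        if name.startswith("fc") and name.endswith("weight"):
            w.normal_(mean=0.0, std=0.001)      # Gaussian, zero mean
        elif name.startswith("fc") and name.endswith("bias"):
            w.zero_()                           # bias init: our assumption
    det_net.load_state_dict(det_state)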
We report the results in Table 1. If we train Fast R-CNN from scratch without any pre-training, we can only obtain 39.7% mAP. With our self-supervised trained network as initialization, the detection mAP is increased to 63.2% (with a 23.5 points improvement). Our result compares competitively (4.1 points lower) to the counterpart using ImageNet pre-training (67.3% with VGG16).
As we incorporate the invariance captured from [61] and [9], we also evaluate the results using these two approaches individually (Table 1). By fine-tuning the context prediction network of [9], we can obtain 61.5% mAP. To train the network of [61], we use exactly the same loss function and initialization as our approach, except that the training examples only cover the same instance in the same visual track (i.e., only the samples linked by intra-instance edges in our graph); see Table 1. Our result is better than both methods. This comparison indicates the effectiveness of exploiting a greater variety of invariance in representation learning.
Is multi-task learning sufficient? An alternative way of obtaining both intra- and inter-instance invariance is to apply multi-task learning with the two losses of [9] and [61].
Next we compare with this method. For the task in [61], we use the same network architecture as our approach; for the task in [9], we follow their design of a Siamese network. We apply different fully-connected layers for different tasks, but share the convolutional layers between these two tasks. Given a mini-batch of training samples, we perform ranking among these images as well as context prediction in each image simultaneously via two losses. The representations learned in this way, when fine-tuned with Fast R-CNN, obtain 62.1% mAP ("Multitask" in Table 2). Compared to using only context prediction [9] (61.5%), multi-task learning gives only a marginal improvement (0.6%). This result suggests that multi-task learning in this way is not sufficient; organizing and exploiting the relationships of data, as done by our method, is more effective for representation learning.
How important is tracking? To further understand how much visual tracking helps, we perform ablative analysis by making the visual tracks shorter: we track the moving objects for 15 frames instead of the default 30 frames. This is expected to reduce the viewpoint/pose/deformation variance contributed by tracking. Our model pre-trained in this way shows 61.5% mAP ("15-frame" in Table 2) when fine-tuned for detection. This number is similar to that of using context prediction only (Table 1). This result is not surprising, because it does not add much new information for training. It suggests that adding stronger viewpoint/pose/deformation invariance is important for learning better features for object detection.
How important is clustering? Furthermore, we want to understand how important it is to cluster images with features learned from still images [9]. We perform another ablative analysis by replacing the features of [9] with HOG [6] during clustering. The rest of the pipeline remains exactly the same. The final result is 60.4% mAP ("HOG" in Table 2). This shows that if the features for clustering are not invariant enough to handle different object instances, the transitivity in the graph becomes less reliable.
Object Detection with Faster R-CNN
Although Fast R-CNN [16] has been a popular testbed for un-/self-supervised features, it relies on Selective Search proposals [55] and thus is not fully end-to-end. We further evaluate the representations on object detection with the end-to-end Faster R-CNN [46], where the Region Proposal Network (RPN) may suffer if the features are of low quality.
PASCAL VOC 2007 Results. We fine-tune Faster R-CNN in 8 GPUs for 35K iterations with an initial learning rate of 0.00025, which is reduced by 1/10 after every 15K iterations; Table 3 shows the results. COCO Results. We further report results on the challenging COCO detection dataset [37]. To the best of our knowledge this is the first work of this kind presented on COCO detection. We fine-tune Faster R-CNN in 8 GPUs for 120K iterations with an initial learning rate of 0.001, which is reduced by 1/10 after 80K iterations. This is trained on the COCO trainval35k split and evaluated on the minival5k split, introduced by [3].
We report the COCO results in Table 4. Faster R-CNN fine-tuned with our self-supervised network obtains 23.5% AP using the COCO metric, which is very close (<1%) to fine-tuning Faster R-CNN with the ImageNet pre-trained counterpart (24.4%). Actually, if the fine-tuning of the ImageNet counterpart follows the "shorter" schedule in the public code (61.25K iterations in 8 GPUs, converted from 490K in 1 GPU), the ImageNet supervised pre-training version has 23.7% AP and is comparable with ours. This comparison also strengthens the significance of our result.
To the best of our knowledge, our model achieves the best performance reported to date on VOC 2007 and COCO using un-/self-supervised pre-training.
Adapting to Surface Normal Estimation
To show the generalization ability of our self-supervised representations, we adapt the learned network to the surface normal estimation task. In this task, given a single RGB image as input, we train the network to predict the normal/orientation of each pixel. We evaluate our method on the NYUv2 RGBD dataset [49]. We use the official split of 795 images for training and 654 images for testing. We follow the same protocols for generating surface normal ground truth and evaluations as [14,29,15].
(Table 5: Results on NYU v2 for per-pixel surface normal estimation, evaluated over valid pixels.)
To train the network for surface normal estimation, we apply the Fully Convolutional Network (FCN 32-s) proposed in [38] with the VGG16 network as the base architecture. For the loss function, we follow the design in [60]. Specifically, instead of direct regression to obtain the normal, we use a codebook of 40 codewords to encode the 3-dimensional normals. Each codeword represents one class; we thus turn the problem into a 40-class classification for each pixel. We use the same hyperparameters as in [38] for training, and the network is fine-tuned for the same number of iterations (100K) for different initializations.
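To illustrate the codeword encoding, the sketch below builds a 40-word codebook over unit normals with k-means (k-means is our assumption; the actual codebook design follows [60]) and converts per-pixel normals into the 40 class labels the network predicts:

import numpy as np
from sklearn.cluster import KMeans

def build_codebook(train_normals, k=40):
    """train_normals: (N, 3) surface normals; returns (k, 3) codewords."""
    n = train_normals / np.linalg.norm(train_normals, axis=1, keepdims=True)
    return KMeans(n_clusters=k).fit(n).cluster_centers_

def encode_normals(normals, codebook):
    """Map each pixel's normal to the index of its nearest codeword."""
    n = normals.reshape(-1, 3)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    d = ((n[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d, axis=1)   # per-pixel class in {0, ..., 39}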
To initialize the FCN model with self-supervised nets, we copy the weights of the convolutional layers to the corresponding layers in FCN. For the ImageNet pre-trained network, we follow [38] by converting the fully-connected layers to convolutional layers and copying all the weights. For the model trained from scratch, we randomly initialize all the layers with "Xavier" initialization [18]. Table 5 shows the results. We report mean and median error for all visible pixels (in degrees) and also the percentage of pixels with error less than 11.25, 22.5 and 30 degrees. Surprisingly, we obtain much better results with our self-supervised trained network than with ImageNet pre-training in this task (3 to 4% better in most metrics). As a comparison, the networks trained as in [9,61] are slightly worse than the ImageNet pre-trained network. These results suggest that our learned representations are competitive with ImageNet pre-training for high-level semantic tasks, but outperform it on tasks such as surface normal estimation. This experiment suggests that different visual tasks may prefer different levels of visual invariance. | 3,903
1708.02901 | 2743157634 | Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue for organizing and reasoning about the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task. | Our generic framework can be instantiated with any two self-supervised methods that respectively learn inter-/intra-instance invariance. In this paper we adopt Doersch et al.'s @cite_40 context prediction method to build inter-instance invariance, and Wang and Gupta's @cite_4 tracking method to build intra-instance invariance. We analyze their behaviors as follows. | {
"abstract": [
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
],
"cite_N": [
"@cite_40",
"@cite_4"
],
"mid": [
"2950187998",
"219040644"
]
} | Transitive Invariance for Self-supervised Visual Representation Learning | Visual invariance is a core issue in learning visual representations. Traditional features like SIFT [39] and HOG [6] are histograms of edges that are to an extent invariant to illumination, orientations, scales, and translations. Modern deep representations are capable of learning high-level invariance from large-scale data [47], e.g., viewpoint, pose, deformation, and semantics. These can also be transferred to complicated visual recognition tasks [17,38].
(Figure 1: We propose to obtain rich invariance by applying simple transitive relations. In this example, two different cars A and B are linked by the features that are good for inter-instance invariance (e.g., using [9]); and each car is linked to another view (A' and B') by visual tracking [61]. Then we can obtain new invariance from the object pairs (A, B'), (A', B), and (A', B') via transitivity. We show more examples at the bottom.)
In the scheme of supervised learning, human annotations that map a variety of examples into a single label provide supervision for learning invariant representations. For example, two horses with different illumination, poses, and breeds are invariantly annotated as a category of "horse". Such human knowledge on invariance is expected to be learned by capable deep neural networks [33,28] through carefully annotated data. However, large-scale, high-quality annotations come at the cost of expensive human effort.
Unsupervised or "self-supervised" learning (e.g., [61,9,45,63,64,35,44,62,40,66]) has recently attracted increasing interest because the "labels" are free to obtain. Unlike supervised learning, which learns invariance from semantic labels, the self-supervised learning scheme mines it from the nature of the data. We observe that most self-supervised approaches learn representations that are invariant to: (i) inter-instance variations, which reflect the commonality among different instances. For example, relative positions of patches [9] (see also Figure 3) or channels of colors [63,64] can be predicted through the commonality shared by many object instances; (ii) intra-instance variations. Intra-instance invariance is learned from the pose, viewpoint, and illumination changes by tracking a single moving instance in videos [61,44]. However, neither source of invariance alone is as rich as that provided by human annotations on large-scale datasets like ImageNet.
Even after significant advances in the field of self-supervised learning, there is still a long way to go compared to supervised learning. What should be the next steps? An obvious option is to obtain multiple sources of invariance by combining multiple self-supervised tasks, e.g., via multiple losses. Unfortunately, this naïve solution turns out to give little improvement (as we will show by experiments).
We argue that the trick lies not in the tasks but in the way of exploiting data. To leverage both intra-instance and inter-instance invariance, in this paper we construct a huge affinity graph consisting of two types of edges (see Figure 1): the first type of edges relates "different instances of similar viewpoints/poses and potentially the same category", and the second type of edges relates "different viewpoints/poses of an identical instance". We instantiate the first type of edges by learning commonalities across instances via the approach of [9], and the second type by unsupervised tracking of objects in videos [61]. We set up simple transitive relations on this graph to infer more complex invariance from the data, which are then used to train a Triplet-Siamese network for learning visual representations.
Experiments show that our representations learned without any annotations can be well transferred to the object detection task. Specifically, we achieve 63.2% mAP with VGG16 [50] when fine-tuning Fast R-CNN on VOC2007, against the ImageNet pre-training baseline of 67.3%. More importantly, we also report the first-ever result of un-/self-supervised pre-training models fine-tuned on the challenging COCO object detection dataset [37], achieving 23.5% AP compared with 24.4% AP fine-tuned from an ImageNet pre-trained counterpart (both using VGG16). To our knowledge, this is the closest accuracy to the ImageNet pre-training counterpart obtained on object detection tasks.
Overview
Our goal is to learn visual representations which capture: (i) inter-instance invariance (e.g., two instances of cats should have similar features), and (ii) intra-instance invariance (pose, viewpoint, deformation, illumination, and other variations of the same object instance). We have tried to formulate this as a multi-task (multi-loss) learning problem in our initial experiments (detailed in Tables 2 and 3) and observed unsatisfactory performance. Instead, we propose to obtain a richer set of invariance by performing transitive reasoning on the data.
(Figure: The context prediction task defined in [9]. Given two patches in an image, it learns to predict the relative position between them.)
Our first step is to construct a graph that describes the affinity among image patches. A node in the graph denotes an image patch. We define two types of edges in the graph that relate image patches to each other. The first type of edges, called inter-instance edges, link two nodes which correspond to different object instances of similar visual appearance; the second type of edges, called intra-instance edges, link two nodes which correspond to an identical object captured at different time steps of a track. The solid arrows in Figure 1 illustrate these two types of edges. Given the built graph, we want to propagate relations along the known edges and associate unconnected nodes that may provide under-explored invariance (Figure 1, dashed arrows). Specifically, as shown in Figure 1, if patches A and B are linked via an inter-instance edge, and (A, A') and (B, B') are respectively linked via intra-instance edges, we hope to enrich the invariance by simple transitivity and relate three new pairs: (A', B'), (A, B'), and (A', B) (Figure 1, dashed arrows).
We train a Triplet-Siamese network that encourages similar visual representations between the invariant samples (e.g., any pair drawn from {A, A', B, B'}) and at the same time discourages similar visual representations to a third distractor sample (e.g., a random sample C unconnected to {A, A', B, B'}). In all of our experiments, we apply VGG16 [50] as the backbone architecture for each branch of this Triplet-Siamese network. The visual representations learned by this backbone architecture are evaluated on other recognition tasks.
Graph Construction
We construct a graph with inter-instance and intra-instance edges. First, we apply the method of [61] on a large set of 100K unlabeled videos (introduced in [61]) and mine millions of moving objects using motion cues (Sec. 4.1). We use their image patches as the nodes of the graph.
We instantiate inter-instance edges by the self-supervised method of [9], which learns context predictions on a large set of still images; this provides features to cluster the nodes and set up inter-instance edges (Sec. 4.2). On the other hand, we connect the image patches in the same visual track by intra-instance edges (Sec. 4.3).
Mining Moving Objects
We follow the approach in [61] to find the moving objects in videos. As a brief introduction, this method first applies Improved Dense Trajectories (IDT) [58] on videos to extract SURF [2] feature points and their motion. The video frames are then pruned if there is too much motion (indicating camera motion) or too little motion (e.g., noisy signals). For the remaining frames, it crops a 227×227 bounding box (from ∼600×400 images) which contains the largest number of moving points, as the foreground object. However, for computational efficiency, in this paper we rescale the image patches to 96×96 after cropping and use them as inputs for clustering and training.
Inter-instance Edges via Clustering
Given the extracted image patches which act as nodes, we want to link them with extra inter-instance edges. We rely on the visual representations learned from [9] to do this. We connect the nodes representing image patches which are close in the feature space. In addition, motivated by the mid-level clustering approaches [51,7], we want to obtain millions of object clusters with a small number of objects in each to maintain high "purity" of the clusters. We describe the implementation details of this step as follows.
We extract the pool5 features of the VGG16 network trained as in [9]. Following [9], we use ImageNet without labels to train this network. Note that because we use a patch size of 96×96, the dimension of our pool5 feature is 3×3×512=4608. The distance between samples is calculated by the cosine distance of these features. We want the object patches in each cluster to be close to each other in the feature space, and we care less about the differences between clusters. However, directly clustering millions of image patches into millions of small clusters (e.g., by K-means) is time-consuming. So we apply a hierarchical clustering approach (two-stage in this paper) where we first group the images into a relatively small number of clusters, and then find groups with a small number of examples inside each cluster via nearest-neighbor search.
Specifically, in the first stage of clustering, we apply K-means clustering with K = 5000 on the image patches. We then remove the clusters with fewer than 100 examples (this reduces K to 546 in our experiments on the image patches mined from the video dataset). We view these clusters as the "parent" clusters (blue circles in Figure 2). Then in the second stage of clustering, inside each parent cluster, we perform nearest-neighbor search for each sample and obtain its top 10 nearest neighbors in the feature space. We then find any group of samples with a group size of 4, inside which all the samples are each other's top-10 nearest neighbors. We call these small clusters with 4 samples "child" clusters (green circles in Figure 2). We then link these image patches with each other inside a child cluster via "inter-instance" edges. Note that different child clusters may overlap, i.e., we allow the same sample to appear in different groups. However, in our experiments we find that most samples appear only in one group. We show some results of clustering in Figure 4.
Intra-instance Edges via Tracking
To obtain rich variations of viewpoint and deformation changes of the same object instance, we apply visual tracking on the mined moving objects in the videos, as in [61]. More specifically, given a moving object in the video, it applies KCF [23] to track the object for N = 30 frames and obtains another sample of the object at the end of the track. Note that the KCF tracker does not require any human supervision. We add these new objects as nodes to the graph and link the two samples in the same track with an intra-instance edge (purple in Figure 2).
Learning with Transitions in the Graph
With the graph constructed, we want to link more image patches (see the dashed links in Figure 1). To avoid the trivial solution of identical representations, we also encourage the network to generate dissimilar representations if a node is expected to be unrelated. Specifically, we constrain the image patches from different "parent" clusters (which are more likely to have different categories) to have different representations (which we call a negative pair of samples). We design a Triplet-Siamese network with a ranking loss function [59,61] such that the distance between related samples should be smaller than the distance between unrelated samples. Our Triplet-Siamese network includes three towers of a ConvNet with shared weights (Figure 6). For each tower, we adopt the standard VGG16 architecture [50] for the convolutional layers, after which we add two fully-connected layers with 4096-d and 1024-d outputs. The Triplet-Siamese network accepts a triplet sample as its input: the first two image patches in the triplet are a positive pair, and the last two are a negative pair. We extract their 1024-d features and calculate the ranking loss as follows.
Given an arbitrary pair of image patches A and B, we define their distance as:

D(A, B) = 1 − (F(A) · F(B)) / (‖F(A)‖ ‖F(B)‖),

where F(·) is the representation mapping of the network. With a triplet (X, X⁺, X⁻), where (X, X⁺) is a positive pair and (X, X⁻) is a negative pair as defined above, we minimize the ranking loss:

L(X, X⁺, X⁻) = max{0, D(X, X⁺) − D(X, X⁻) + m},
where m is a margin set as 0.5 in our experiments. Although we have only one objective function, we have different types of training examples. As illustrated in Figure 6, given the set of related samples {A, B, A', B'} (see Figure 5) and a random distractor sample C from another parent cluster, we can train the network to handle, e.g., viewpoint invariance for the same instance via L(A, A', C) and invariance to different objects sharing the same semantics via L(A, B', C).
Besides exploring these relations, we have also tried to enforce the distance between different objects to be larger than the distance between two different viewpoints of the same object, e.g., D(A, A') < D(A, B'). However, we have not found that this extra relation brings any improvement. Interestingly, we found that the representations learned by our method can in general satisfy D(A, A') < D(A, B') after training.
Experiments
We perform extensive analysis on our self-supervised representations. We first evaluate our ConvNet as a feature extractor on different tasks without fine-tuning. We then show the results of transferring the representations to vision tasks including object detection and surface normal estimation with fine-tuning. Implementation Details. To prepare the data for training, we download the 100K videos from YouTube using the URLs provided by [36,61]. By mining the moving objects and tracking in the videos, we obtain ∼10 million image patches of objects. By applying the transitivity on the constructed graph, we obtain 7 million positive pairs of objects, where each pair consists of two different instances with different viewpoints. We also randomly sample 2 million object pairs connected by the intra-instance edges.
We train our network with these 9 million pairs of images using a learning rate of 0.001 and a mini-batch size of 100.
For each pair we sample the third distractor patch from a different "parent cluster" in the same mini-batch. We use the network pre-trained in [9] to initialize our convolutional layers and randomly initialize the fully-connected layers. We train the network for 200K iterations with our method.
Qualitative Results without Fine-tuning
We first perform nearest-neighbor search to show qualitative results. We adopt the pool5 feature of the VGG16 network for all methods without any fine-tuning (Figure 7). We do this experiment on the object instances cropped from the PASCAL VOC 2007 dataset [13] (trainval). As Figure 7 shows, given a query image on the left, the network pre-trained with the context prediction task [9] can retrieve objects of very similar viewpoints. On the other hand, our network shows more variations of objects and can often retrieve objects of the same class as the query. We also show the nearest-neighbor results using fully-supervised ImageNet pre-trained features as a comparison. We also visualize the features using the visualization technique of [65]. For each convolutional unit in conv5_3, we retrieve the objects which give the highest activation responses and highlight the receptive fields on the images. We visualize the top 6 images for 4 different convolutional units in Figure 8. We can see these convolutional units correspond to different semantic object parts (e.g., fronts of cars or buses, wheels, animal legs, eyes or faces).
Analysis on Object Detection
We evaluate how well our representations can be transferred to object detection by fine-tuning Fast R-CNN [16] on PASCAL VOC 2007 [13]. We use the standard trainval set for training and the test set for testing, with VGG16 as the base architecture. For the detection network, we initialize the weights of the convolutional layers from our self-supervised network and randomly initialize the fully-connected layers using Gaussian noise with zero mean and 0.001 standard deviation.
During fine-tuning Fast R-CNN, we use 0.00025 as the starting learning rate. We reduce the learning rate by 1/10 every 50K iterations. We fine-tune the network for 150K iterations. Unlike standard Fast R-CNN, where the first few convolutional layers of the ImageNet pre-trained network are fixed, we fine-tune all layers on the PASCAL data, as our model is pre-trained in a very different domain (e.g., video patches).
We report the results in Table 1. If we train Fast R-CNN from scratch without any pre-training, we can only obtain 39.7% mAP. With our self-supervised trained network as initialization, the detection mAP is increased to 63.2% (with a 23.5 points improvement). Our result compares competitively (4.1 points lower) to the counterpart using ImageNet pre-training (67.3% with VGG16).
As we incorporate the invariance captured from [61] and [9], we also evaluate the results using these two approaches individually (Table 1). By fine-tuning the context prediction network of [9], we can obtain 61.5% mAP. To train the network of [61], we use exactly the same loss function and initialization as our approach, except that the training examples only cover the same instance in the same visual track (i.e., only the samples linked by intra-instance edges in our graph); see Table 1. Our result is better than both methods. This comparison indicates the effectiveness of exploiting a greater variety of invariance in representation learning.
Is multi-task learning sufficient? An alternative way of obtaining both intra- and inter-instance invariance is to apply multi-task learning with the two losses of [9] and [61].
Next we compare with this method. For the task in [61], we use the same network architecture as our approach; for the task in [9], we follow their design of a Siamese network. We apply different fully-connected layers for different tasks, but share the convolutional layers between these two tasks. Given a mini-batch of training samples, we perform ranking among these images as well as context prediction in each image simultaneously via two losses. The representations learned in this way, when fine-tuned with Fast R-CNN, obtain 62.1% mAP ("Multitask" in Table 2). Compared to using only context prediction [9] (61.5%), multi-task learning gives only a marginal improvement (0.6%). This result suggests that multi-task learning in this way is not sufficient; organizing and exploiting the relationships of data, as done by our method, is more effective for representation learning.
How important is tracking? To further understand how much visual tracking helps, we perform ablative analysis by making the visual tracks shorter: we track the moving objects for 15 frames instead of the default 30 frames. This is expected to reduce the viewpoint/pose/deformation variance contributed by tracking. Our model pre-trained in this way shows 61.5% mAP ("15-frame" in Table 2) when fine-tuned for detection. This number is similar to that of using context prediction only (Table 1). This result is not surprising, because it does not add much new information for training. It suggests that adding stronger viewpoint/pose/deformation invariance is important for learning better features for object detection.
How important is clustering? Furthermore, we want to understand how important it is to cluster images with features learned from still images [9]. We perform another ablative analysis by replacing the features of [9] with HOG [6] during clustering. The rest of the pipeline remains exactly the same. The final result is 60.4% mAP ("HOG" in Table 2). This shows that if the features for clustering are not invariant enough to handle different object instances, the transitivity in the graph becomes less reliable.
Object Detection with Faster R-CNN
Although Fast R-CNN [16] has been a popular testbed for un-/self-supervised features, it relies on Selective Search proposals [55] and thus is not fully end-to-end. We further evaluate the representations on object detection with the end-to-end Faster R-CNN [46], where the Region Proposal Network (RPN) may suffer if the features are of low quality.
PASCAL VOC 2007 Results. We fine-tune Faster R-CNN in 8 GPUs for 35K iterations with an initial learning rate of 0.00025, which is reduced by 1/10 after every 15K iterations; Table 3 shows the results. COCO Results. We further report results on the challenging COCO detection dataset [37]. To the best of our knowledge this is the first work of this kind presented on COCO detection. We fine-tune Faster R-CNN in 8 GPUs for 120K iterations with an initial learning rate of 0.001, which is reduced by 1/10 after 80K iterations. This is trained on the COCO trainval35k split and evaluated on the minival5k split, introduced by [3].
We report the COCO results in Table 4. Faster R-CNN fine-tuned with our self-supervised network obtains 23.5% AP using the COCO metric, which is very close (<1%) to fine-tuning Faster R-CNN with the ImageNet pre-trained counterpart (24.4%). Actually, if the fine-tuning of the ImageNet counterpart follows the "shorter" schedule in the public code (61.25K iterations in 8 GPUs, converted from 490K in 1 GPU), the ImageNet supervised pre-training version has 23.7% AP and is comparable with ours. This comparison also strengthens the significance of our result.
To the best of our knowledge, our model achieves the best performance reported to date on VOC 2007 and COCO using un-/self-supervised pre-training.
Adapting to Surface Normal Estimation
To show the generalization ability of our self-supervised representations, we adapt the learned network to the surface normal estimation task. In this task, given a single RGB image as input, we train the network to predict the normal/orientation of each pixel. We evaluate our method on the NYUv2 RGBD dataset [49]. We use the official split of 795 images for training and 654 images for testing. We follow the same protocols for generating surface normal ground truth and evaluations as [14,29,15].
(Table 5: Results on NYU v2 for per-pixel surface normal estimation, evaluated over valid pixels.)
To train the network for surface normal estimation, we apply the Fully Convolutional Network (FCN 32-s) proposed in [38] with the VGG16 network as the base architecture. For the loss function, we follow the design in [60]. Specifically, instead of direct regression to obtain the normal, we use a codebook of 40 codewords to encode the 3-dimensional normals. Each codeword represents one class; we thus turn the problem into a 40-class classification for each pixel. We use the same hyperparameters as in [38] for training, and the network is fine-tuned for the same number of iterations (100K) for different initializations.
To initialize the FCN model with self-supervised nets, we copy the weights of the convolutional layers to the corresponding layers in FCN. For the ImageNet pre-trained network, we follow [38] by converting the fully-connected layers to convolutional layers and copying all the weights. For the model trained from scratch, we randomly initialize all the layers with "Xavier" initialization [18]. Table 5 shows the results. We report mean and median error for all visible pixels (in degrees) and also the percentage of pixels with error less than 11.25, 22.5 and 30 degrees. Surprisingly, we obtain much better results with our self-supervised trained network than with ImageNet pre-training in this task (3 to 4% better in most metrics). As a comparison, the networks trained as in [9,61] are slightly worse than the ImageNet pre-trained network. These results suggest that our learned representations are competitive with ImageNet pre-training for high-level semantic tasks, but outperform it on tasks such as surface normal estimation. This experiment suggests that different visual tasks may prefer different levels of visual invariance. | 3,903
1708.02901 | 2743157634 | Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue for organizing and reasoning about the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task. | The context prediction task in @cite_40 randomly samples a patch (blue in Figure ) and one of its eight neighbors (red), and trains the network to predict their relative position, defined as an 8-way classification problem. In the first two examples in Figure , the context prediction model is able to predict that the "leg" patch is below the "face" patch of the cat, indicating that the model has learned some commonality of spatial layout from the training data. However, the model would fail if the pose, viewpoint, or deformation of the object is changed drastically, e.g., in the third example of Figure : unless the dataset is diversified and large enough to include gradually changing poses, it is hard for the models to learn that the changed pose can be of the same object type. | {
"abstract": [
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations."
],
"cite_N": [
"@cite_40"
],
"mid": [
"2950187998"
]
} | Transitive Invariance for Self-supervised Visual Representation Learning | Visual invariance is a core issue in learning visual representations. Traditional features like SIFT [39] and HOG [6] are histograms of edges that are to an extent invariant to illumination, orientations, scales, and translations. Modern deep representations are capable of learning high-level invariance from large-scale data [47], e.g., viewpoint, pose, deformation, and semantics. These can also be transferred to complicated visual recognition tasks [17,38].
(Figure 1: We propose to obtain rich invariance by applying simple transitive relations. In this example, two different cars A and B are linked by the features that are good for inter-instance invariance (e.g., using [9]); and each car is linked to another view (A' and B') by visual tracking [61]. Then we can obtain new invariance from the object pairs (A, B'), (A', B), and (A', B') via transitivity. We show more examples at the bottom.)
In the scheme of supervised learning, human annotations that map a variety of examples into a single label provide supervision for learning invariant representations. For example, two horses with different illumination, poses, and breeds are invariantly annotated as a category of "horse". Such human knowledge on invariance is expected to be learned by capable deep neural networks [33,28] through carefully annotated data. However, large-scale, high-quality annotations come at the cost of expensive human effort.
Unsupervised or "self-supervised" learning (e.g., [61,9,45,63,64,35,44,62,40,66]) has recently attracted increasing interest because the "labels" are free to obtain. Unlike supervised learning, which learns invariance from semantic labels, the self-supervised learning scheme mines it from the nature of the data. We observe that most self-supervised approaches learn representations that are invariant to: (i) inter-instance variations, which reflect the commonality among different instances. For example, relative positions of patches [9] (see also Figure 3) or channels of colors [63,64] can be predicted through the commonality shared by many object instances; (ii) intra-instance variations. Intra-instance invariance is learned from the pose, viewpoint, and illumination changes by tracking a single moving instance in videos [61,44]. However, neither source of invariance alone is as rich as that provided by human annotations on large-scale datasets like ImageNet.
Even after significant advances in the field of self-supervised learning, there is still a long way to go compared to supervised learning. What should be the next steps? An obvious option is to obtain multiple sources of invariance by combining multiple self-supervised tasks, e.g., via multiple losses. Unfortunately, this naïve solution turns out to give little improvement (as we will show by experiments).
We argue that the trick lies not in the tasks but in the way of exploiting data. To leverage both intra-instance and inter-instance invariance, in this paper we construct a huge affinity graph consisting of two types of edges (see Figure 1): the first type of edges relates "different instances of similar viewpoints/poses and potentially the same category", and the second type of edges relates "different viewpoints/poses of an identical instance". We instantiate the first type of edges by learning commonalities across instances via the approach of [9], and the second type by unsupervised tracking of objects in videos [61]. We set up simple transitive relations on this graph to infer more complex invariance from the data, which are then used to train a Triplet-Siamese network for learning visual representations.
Experiments show that our representations learned without any annotations can be well transferred to the object detection task. Specifically, we achieve 63.2% mAP with VGG16 [50] when fine-tuning Fast R-CNN on VOC2007, against the ImageNet pre-training baseline of 67.3%. More importantly, we also report the first-ever result of un-/self-supervised pre-training models fine-tuned on the challenging COCO object detection dataset [37], achieving 23.5% AP compared with 24.4% AP fine-tuned from an ImageNet pre-trained counterpart (both using VGG16). To our knowledge, this is the closest accuracy to the ImageNet pre-training counterpart obtained on object detection tasks.
Overview
Our goal is to learn visual representations which capture: (i) inter-instance invariance (e.g., two instances of cats should have similar features), and (ii) intra-instance invariance (pose, viewpoint, deformation, illumination, and other variations of the same object instance). We have tried to formulate this as a multi-task (multi-loss) learning problem in our initial experiments (detailed in Tables 2 and 3) and observed unsatisfactory performance. Instead, we propose to obtain a richer set of invariance by performing transitive reasoning on the data.
(Figure: The context prediction task defined in [9]. Given two patches in an image, it learns to predict the relative position between them.)
Our first step is to construct a graph that describes the affinity among image patches. A node in the graph denotes an image patch. We define two types of edges in the graph that relate image patches to each other. The first type of edges, called inter-instance edges, link two nodes which correspond to different object instances of similar visual appearance; the second type of edges, called intra-instance edges, link two nodes which correspond to an identical object captured at different time steps of a track. The solid arrows in Figure 1 illustrate these two types of edges. Given the built graph, we want to propagate relations along the known edges and associate unconnected nodes that may provide under-explored invariance (Figure 1, dashed arrows). Specifically, as shown in Figure 1, if patches A and B are linked via an inter-instance edge, and (A, A') and (B, B') are respectively linked via intra-instance edges, we hope to enrich the invariance by simple transitivity and relate three new pairs: (A', B'), (A, B'), and (A', B) (Figure 1, dashed arrows).
We train a Triplet-Siamese network that encourages similar visual representations between the invariant samples (e.g., any pair drawn from {A, A', B, B'}) and at the same time discourages similar visual representations to a third distractor sample (e.g., a random sample C unconnected to {A, A', B, B'}). In all of our experiments, we apply VGG16 [50] as the backbone architecture for each branch of this Triplet-Siamese network. The visual representations learned by this backbone architecture are evaluated on other recognition tasks.
Graph Construction
We construct a graph with inter-instance and intra-instance edges. First, we apply the method of [61] on a large set of 100K unlabeled videos (introduced in [61]) and mine millions of moving objects using motion cues (Sec. 4.1). We use their image patches as the nodes of the graph.
We instantiate inter-instance edges by the self-supervised method of [9] that learns context predictions on a large set of still images, which provides features to cluster the nodes and set up inter-instance edges (Sec. 4.2). On the other hand, we connect the image patches in the same visual track by intra-instance edges (Sec. 4.3).
Mining Moving Objects
We follow the approach in [61] to find the moving objects in videos. As a brief introduction, this method first applies Improved Dense Trajectories (IDT) [58] on videos to extract SURF [2] feature points and their motion. The video frames are then pruned if there is too much motion (indicating camera motion) or too little motion (e.g., noisy signals). For each remaining frame, it crops a 227×227 bounding box (from ∼600×400 frames) that contains the largest number of moving points as the foreground object. For computational efficiency, however, in this paper we rescale the image patches to 96×96 after cropping and use them as inputs for clustering and training.
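As an illustration of the crop-selection step, the sketch below scores every 227×227 window by the number of moving feature points it contains, using 2-D prefix sums. The function name and exact scoring are our assumptions about the pipeline of [61], not its actual code.

import numpy as np

def best_moving_crop(points, h, w, size=227):
    # points: (x, y) coordinates of moving SURF feature points in one frame.
    counts = np.zeros((h, w), dtype=np.int64)
    for x, y in points:
        counts[int(y), int(x)] += 1
    # 2-D prefix sums allow O(1) scoring of every size x size window.
    pad = np.zeros((h + 1, w + 1), dtype=np.int64)
    pad[1:, 1:] = counts.cumsum(0).cumsum(1)
    ys = np.arange(h - size + 1)
    xs = np.arange(w - size + 1)
    scores = (pad[ys + size][:, xs + size] - pad[ys][:, xs + size]
              - pad[ys + size][:, xs] + pad[ys][:, xs])
    iy, ix = np.unravel_index(scores.argmax(), scores.shape)
    return ix, iy  # top-left corner of the crop with the most moving points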
Inter-instance Edges via Clustering
Given the extracted image patches which act as nodes, we want to link them with extra inter-instance edges. We rely on the visual representations learned from [9] to do this. We connect the nodes representing image patches which are close in the feature space. In addition, motivated by the mid-level clustering approaches [51,7], we want to obtain millions of object clusters with a small number of objects in each to maintain high "purity" of the clusters. We describe the implementation details of this step as follows.
We extract the pool5 features of the VGG16 network trained as in [9]. Following [9], we use ImageNet without labels to train this network. Note that because we use a patch size of 96×96, the dimension of our pool5 feature is 3×3×512=4608. The distance between samples is calculated by the cosine distance of these features. We want the object patches in each cluster to be close to each other in the feature space, and we care less about the differences between clusters. However, directly clustering millions of image patches into millions of small clusters (e.g., by K-means) is time-consuming. So we apply a hierarchical clustering approach (two-stage in this paper) where we first group the images into a relatively small number of clusters, and then find small groups of examples inside each cluster via nearest-neighbor search.
Specifically, in the first stage of clustering, we apply K-means clustering with K = 5000 on the image patches. We then remove the clusters with fewer than 100 examples (this reduces K to 546 in our experiments on the image patches mined from the video dataset). We view these clusters as the "parent" clusters (blue circles in Figure 2). Then, in the second stage of clustering, inside each parent cluster, we perform a nearest-neighbor search for each sample and obtain its top-10 nearest neighbors in the feature space. We then find any group of 4 samples inside which all the samples are each other's top-10 nearest neighbors. We call these small clusters of 4 samples "child" clusters (green circles in Figure 2). We then link the image patches inside a child cluster with each other via inter-instance edges. Note that different child clusters may overlap, i.e., we allow the same sample to appear in different groups. However, in our experiments we find that most samples appear in only one group. We show some results of clustering in Figure 4.
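The two-stage clustering can be sketched as follows with scikit-learn. The thresholds follow the description above; the mutual-neighbor grouping is our reading of the "child cluster" construction, not the authors' code.

import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def child_clusters(feats, k_parents=5000, min_size=100, topk=10, group=4):
    # Cosine distance equals Euclidean distance on L2-normalized features.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    parents = KMeans(n_clusters=k_parents).fit_predict(feats)
    children = set()
    for p in np.unique(parents):
        idx = np.where(parents == p)[0]
        if len(idx) < min_size:        # drop parent clusters with <100 samples
            continue
        nn = NearestNeighbors(n_neighbors=topk + 1).fit(feats[idx])
        _, nbrs = nn.kneighbors(feats[idx])
        nbr_sets = [set(row[1:]) for row in nbrs]   # row[0] is the sample itself
        for i in range(len(idx)):
            for trio in combinations(nbr_sets[i], group - 1):
                members = (i,) + trio
                # keep a group of 4 only if all members are mutual top-k neighbors
                if all(a == b or b in nbr_sets[a]
                       for a in members for b in members):
                    children.add(tuple(sorted(int(idx[m]) for m in members)))
    return children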
Intra-instance Edges via Tracking
To obtain rich variations of viewpoint and deformation of the same object instance, we apply visual tracking on the mined moving objects in the videos as in [61]. More specifically, given a moving object in a video, we apply KCF [23] to track the object for N = 30 frames and obtain another sample of the object at the end of the track. Note that the KCF tracker does not require any human supervision. We add these new objects as nodes to the graph and link the two samples in the same track with an intra-instance edge (purple in Figure 2).
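A sketch of forming one intra-instance pair with OpenCV's KCF tracker is given below. The helper is our own; depending on the OpenCV build, the constructor may be cv2.legacy.TrackerKCF_create instead.

import cv2

def intra_instance_pair(frames, box, n=30):
    # box is (x, y, w, h); frames is a list of BGR images from the video.
    tracker = cv2.TrackerKCF_create()   # cv2.legacy.TrackerKCF_create on newer builds
    tracker.init(frames[0], tuple(box))
    x, y, w, h = box
    first = frames[0][y:y + h, x:x + w]
    for frame in frames[1:n + 1]:       # follow the object for N = 30 frames
        ok, box = tracker.update(frame)
        if not ok:
            return None                 # tracking failed; discard this track
    x, y, w, h = map(int, box)
    last = frames[n][y:y + h, x:x + w]
    return first, last                  # two views of the same instance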
Learning with Transitions in the Graph
With the graph constructed, we want to link more image patches via the transitive relations over the known edges (see the dotted links in Figure 1) and encourage the linked patches to have similar representations. To avoid the trivial solution of identical representations, we also encourage the network to generate dissimilar representations if a node is expected to be unrelated. Specifically, we constrain the image patches from different "parent" clusters (which are more likely to have different categories) to have different representations (which we call a negative pair of samples). We design a Triplet-Siamese network with a ranking loss function [59,61] such that the distance between related samples should be smaller than the distance between unrelated samples. Our Triplet-Siamese network includes three towers of a ConvNet with shared weights (Figure 6). For each tower, we adopt the standard VGG16 architecture [50] for the convolutional layers, after which we add two fully-connected layers with 4096-d and 1024-d outputs. The Triplet-Siamese network accepts a triplet sample as its input: the first two image patches in the triplet are a positive pair, and the last two are a negative pair. We extract their 1024-d features and calculate the ranking loss as follows.
Given an arbitrary pair of image patches A and B, we define their distance as:
D(A, B) = 1 - \frac{F(A) \cdot F(B)}{\|F(A)\| \, \|F(B)\|},
where F(·) is the representation mapping of the network. With a triplet (X, X^+, X^-), where (X, X^+) is a positive pair and (X, X^-) is a negative pair as defined above, we minimize the ranking loss:
L(X, X^+, X^-) = \max\{0,\, D(X, X^+) - D(X, X^-) + m\},
where m is a margin set to 0.5 in our experiments. Although we have only one objective function, we have different types of training examples. As illustrated in Figure 6, given the set of related samples {A, B, A', B'} (see Figure 5) and a random distractor sample C from another parent cluster, we can train the network to handle, e.g., viewpoint invariance for the same instance via L(A, A', C) and invariance across different objects sharing the same semantics via L(A, B', C).
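In PyTorch, the distance and the ranking loss above can be sketched as follows (our own minimal implementation of the stated formulas):

import torch
import torch.nn.functional as F

def cosine_distance(a, b):
    # D(A, B) = 1 - cosine similarity of the 1024-d embeddings.
    return 1.0 - F.cosine_similarity(a, b, dim=1)

def ranking_loss(x, x_pos, x_neg, m=0.5):
    # L(X, X+, X-) = max{0, D(X, X+) - D(X, X-) + m}, averaged over a batch.
    return torch.clamp(cosine_distance(x, x_pos)
                       - cosine_distance(x, x_neg) + m, min=0).mean()

# Given embeddings a, a2, b2, c of patches A, A', B', C, both
# ranking_loss(a, a2, c) and ranking_loss(a, b2, c) can be minimized.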
Besides exploring these relations, we have also tried to enforce the distance between different objects to be larger than the distance between two different viewpoints of the same object, e.g., D(A, A') < D(A, B'), but we have not found that this extra constraint brings any improvement. Interestingly, we found that the representations learned by our method in general satisfy D(A, A') < D(A, B') after training.
Experiments
We perform extensive analysis of our self-supervised representations. We first evaluate our ConvNet as a feature extractor on different tasks without fine-tuning. We then show the results of transferring the representations to vision tasks, including object detection and surface normal estimation, with fine-tuning. Implementation Details. To prepare the data for training, we download the 100K videos from YouTube using the URLs provided by [36,61]. By mining moving objects and tracking in the videos, we obtain ∼10 million image patches of objects. By applying transitivity on the constructed graph, we obtain 7 million positive pairs of objects, where the two objects in each pair are different instances with different viewpoints. We also randomly sample 2 million object pairs connected by the intra-instance edges.
We train our network with these 9 million pairs of images using a learning rate of 0.001 and a mini-batch size of 100.
For each pair we sample the third distractor patch from a different "parent" cluster in the same mini-batch. We use the network pre-trained in [9] to initialize our convolutional layers and randomly initialize the fully-connected layers. We train the network for 200K iterations with our method.
Qualitative Results without Fine-tuning
We first perform nearest-neighbor search to show qualitative results. We adopt the pool5 feature of the VGG16 network for all methods without any fine-tuning (Figure 7). We do this experiment on the object instances cropped from the PASCAL VOC 2007 dataset [13] (trainval). As Figure 7 shows, given a query image on the left, the network pre-trained with the context prediction task [9] can retrieve objects of very similar viewpoints. On the other hand, our network shows more variations of objects and can often retrieve objects of the same class as the query. We also show the nearest-neighbor results using fully-supervised ImageNet pre-trained features as a comparison. We also visualize the features using the visualization technique of [65]. For each convolutional unit in conv5_3, we retrieve the objects which give the highest activation responses and highlight the receptive fields on the images. We visualize the top 6 images for 4 different convolutional units in Figure 8. We can see that these convolutional units correspond to different semantic object parts (e.g., fronts of cars or buses, wheels, animal legs, eyes, or faces).
Analysis on Object Detection
We evaluate how well our representations can be transferred to object detection by fine-tuning Fast R-CNN [16] on PASCAL VOC 2007 [13]. We use the standard trainval set for training and the test set for testing, with VGG16 as the base architecture. For the detection network, we initialize the weights of the convolutional layers from our self-supervised network and randomly initialize the fully-connected layers using Gaussian noise with zero mean and 0.001 standard deviation.
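A PyTorch-style sketch of this initialization is below; the original experiments used Caffe-era tooling, so the helper and its naming conventions are ours.

import torch.nn as nn

def init_detection_net(det_net, pretrained_state):
    own = det_net.state_dict()
    # Copy only the convolutional weights from the self-supervised model.
    own.update({k: v for k, v in pretrained_state.items()
                if k in own and own[k].shape == v.shape and "conv" in k})
    det_net.load_state_dict(own)
    # Fully-connected layers: Gaussian with zero mean and 0.001 std.
    for mod in det_net.modules():
        if isinstance(mod, nn.Linear):
            nn.init.normal_(mod.weight, mean=0.0, std=0.001)
            if mod.bias is not None:
                nn.init.zeros_(mod.bias)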
During fine-tuning of Fast R-CNN, we use 0.00025 as the starting learning rate. We reduce the learning rate by 1/10 every 50K iterations. We fine-tune the network for 150K iterations. Unlike standard Fast R-CNN, where the first few convolutional layers of the ImageNet pre-trained network are fixed, we fine-tune all layers on the PASCAL data, as our model is pre-trained in a very different domain (e.g., video patches).
We report the results in Table 1. If we train Fast R-CNN from scratch without any pre-training, we can only obtain 39.7% mAP. With our self-supervised trained network as initialization, the detection mAP is increased to 63.2% (with a 23.5 points improvement). Our result compares competitively (4.1 points lower) to the counterpart using ImageNet pre-training (67.3% with VGG16).
As we incorporate the invariance captured from [61] and [9], we also evaluate the results using these two approaches individually (Table 1). By fine-tuning the context prediction network of [9], we can obtain 61.5% mAP. To train the network of [61], we use exactly the same loss function and initialization as our approach, except that the training examples only cover the same instance in the same visual track (i.e., only the samples linked by intra-instance edges in our graph). As Table 1 shows, our result is better than both methods. This comparison indicates the effectiveness of exploiting a greater variety of invariance in representation learning.
Is multi-task learning sufficient? An alternative way of obtaining both intra- and inter-instance invariance is to apply multi-task learning with the two losses of [9] and [61].
Next we compare with this method. For the task in [61], we use the same network architecture as our approach; for the task in [9], we follow their design of a Siamese network. We apply different fully-connected layers for the different tasks, but share the convolutional layers between the two tasks. Given a mini-batch of training samples, we perform ranking among these images as well as context prediction in each image simultaneously via the two losses. The representations learned in this way, when fine-tuned with Fast R-CNN, obtain 62.1% mAP ("Multitask" in Table 2). Compared to using only context prediction [9] (61.5%), multi-task learning gives only a marginal improvement (0.6%). This result suggests that multi-task learning in this way is not sufficient; organizing and exploiting the relationships of the data, as done by our method, is more effective for representation learning.
How important is tracking? To further understand how much visual tracking helps, we perform an ablative analysis by making the visual tracks shorter: we track the moving objects for 15 frames instead of the default 30 frames. This is expected to reduce the viewpoint/pose/deformation variance contributed by tracking. Our model pre-trained in this way shows 61.5% mAP ("15-frame" in Table 2) when fine-tuned for detection. This number is similar to that of using context prediction only (Table 1). This result is not surprising, because shorter tracks do not add much new information for training. It suggests that adding stronger viewpoint/pose/deformation invariance is important for learning better features for object detection.
How important is clustering? Furthermore, we want to understand how important it is to cluster images with features learned from still images [9]. We perform another ablative analysis by replacing the features of [9] with HOG [6] during clustering. The rest of the pipeline remains exactly the same. The final result is 60.4% mAP ("HOG" in Table 2). This shows that if the features for clustering are not invariant enough to handle different object instances, the transitivity in the graph becomes less reliable.
Object Detection with Faster R-CNN
Although Fast R-CNN [16] has been a popular testbed for un-/self-supervised features, it relies on Selective Search proposals [55] and thus is not fully end-to-end. We further evaluate the representations on object detection with the end-to-end Faster R-CNN [46], where the Region Proposal Network (RPN) may suffer if the features are of low quality.
PASCAL VOC 2007 Results. We fine-tune Faster R-CNN on 8 GPUs for 35K iterations with an initial learning rate of 0.00025, which is reduced by 1/10 after every 15K iterations. Table 3 shows the results. COCO Results. We further report results on the challenging COCO detection dataset [37]. To the best of our knowledge, this is the first work of this kind presented on COCO detection. We fine-tune Faster R-CNN on 8 GPUs for 120K iterations with an initial learning rate of 0.001, which is reduced by 1/10 after 80K iterations. This model is trained on the COCO trainval35k split and evaluated on the minival5k split, introduced by [3].
We report the COCO results in Table 4. Faster R-CNN fine-tuned with our self-supervised network obtains 23.5% AP using the COCO metric, which is very close (<1%) to fine-tuning Faster R-CNN with the ImageNet pre-trained counterpart (24.4%). Actually, if the fine-tuning of the ImageNet counterpart follows the "shorter" schedule in the public code (61.25K iterations on 8 GPUs, converted from 490K on 1 GPU) 1 , the ImageNet supervised pre-training version has 23.7% AP and is comparable with ours. This comparison also strengthens the significance of our result.
To the best of our knowledge, our model achieves the best performance reported to date on VOC 2007 and COCO using un-/self-supervised pre-training.
Adapting to Surface Normal Estimation
To show the generalization ability of our self-supervised representations, we adapt the learned network to the surface normal estimation task. In this task, given a single RGB image as input, we train the network to predict the normal/orientation of each pixel.
Table 5: Results on NYU v2 for per-pixel surface normal estimation, evaluated over valid pixels.
We evaluate our method on the NYUv2 RGBD dataset [49]. We use the official split of 795 images for training and 654 images for testing. We follow the same protocols for generating surface normal ground truth and evaluations as [14,29,15].
To train the network for surface normal estimation, we apply the Fully Convolutional Network (FCN 32-s) proposed in [38] with the VGG16 network as the base architecture. For the loss function, we follow the design in [60]. Specifically, instead of direct regression to obtain the normal, we use a codebook of 40 codewords to encode the 3-dimensional normals. Each codeword represents one class; we thus turn the problem into a 40-way classification for each pixel. We use the same hyperparameters as in [38] for training, and the network is fine-tuned for the same number of iterations (100K) for the different initializations.
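The codebook encoding can be sketched as follows; building the codebook with K-means over unit normals is our assumption about the details of [60].

import numpy as np
from sklearn.cluster import KMeans

def build_codebook(normals, k=40):
    # normals: (N, 3) array of unit surface normals sampled from training data.
    centers = KMeans(n_clusters=k).fit(normals).cluster_centers_
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)

def encode(normal_map, codebook):
    # (H, W, 3) unit normals -> (H, W) class labels for 40-way classification.
    sim = normal_map.reshape(-1, 3) @ codebook.T   # cosine similarity to codewords
    return sim.argmax(axis=1).reshape(normal_map.shape[:2])

def decode(labels, codebook):
    # Map predicted classes back to approximate unit normals.
    return codebook[labels]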
To initialize the FCN model with self-supervised nets, we copy the weights of the convolutional layers to the corresponding layers in FCN. For the ImageNet pre-trained network, we follow [38] by converting the fully-connected layers to convolutional layers and copying all the weights. For the model trained from scratch, we randomly initialize all the layers with "Xavier" initialization [18]. Table 5 shows the results. We report the mean and median error for all visible pixels (in degrees) and also the percentage of pixels with error less than 11.25, 22.5, and 30 degrees. Surprisingly, we obtain much better results with our self-supervised trained network than with ImageNet pre-training in this task (3 to 4% better in most metrics). As a comparison, the networks trained as in [9,61] are slightly worse than the ImageNet pre-trained network. These results suggest that our learned representations are competitive with ImageNet pre-training on high-level semantic tasks, but outperform it on tasks such as surface normal estimation. This experiment suggests that different visual tasks may prefer different levels of visual invariance. | 3,903
1708.02901 | 2743157634 | Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task. | On the other hand, these changes can be more successfully captured by the visual tracking method presented in @cite_4, e.g., see A→A' and B→B' in Figure 1. But by tracking an identical instance we cannot associate different instances of the same semantics. Thus we expect the representations learned in @cite_4 to be weak in handling the variations between different objects in the same category. | {
"abstract": [
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
],
"cite_N": [
"@cite_4"
],
"mid": [
"219040644"
]
} | Transitive Invariance for Self-supervised Visual Representation Learning | Visual invariance is a core issue in learning visual representations. Traditional features like SIFT [39] and HOG [6] are histograms of edges that are to an extent invariant to illumination, orientations, scales, and translations. Modern deep representations are capable of learning high-level invariance from large-scale data [47] , e.g., viewpoint, pose, deformation, and semantics. These can also be transferred
to complicated visual recognition tasks [17,38].
Figure 1: We propose to obtain rich invariance by applying simple transitive relations. In this example, two different cars A and B are linked by the features that are good for inter-instance invariance (e.g., using [9]); and each car is linked to another view (A' and B') by visual tracking [61]. Then we can obtain new invariance from the object pairs (A', B'), (A, B'), and (A', B) via transitivity. We show more examples at the bottom.
In the scheme of supervised learning, human annotations that map a variety of examples into a single label provide supervision for learning invariant representations. For example, two horses with different illumination, poses, and breeds are invariantly annotated as a category of "horse". Such human knowledge on invariance is expected to be learned by capable deep neural networks [33,28] through carefully annotated data. However, large-scale, high-quality annotations come at a cost of expensive human effort.
Unsupervised or "self-supervised" learning (e.g., [61,9,45,63,64,35,44,62,40,66]) has recently attracted increasing interest because the "labels" are free to obtain. Unlike supervised learning, which learns invariance from semantic labels, the self-supervised learning scheme mines it from the nature of the data. We observe that most self-supervised approaches learn representations that are invariant to: (i) inter-instance variations, which reflects the commonality among different instances. For example, relative positions of patches [9] (see also Figure 3) or channels of colors [63,64] can be predicted through the commonality shared by many object instances; (ii) intra-instance variations. Intra-instance invariance is learned from the pose, viewpoint, and illumination changes by tracking a single moving instance in videos [61,44]. However, neither source of invariance alone can be as rich as that provided by human annotations on large-scale datasets like ImageNet.
Even after significant advances in the field of selfsupervised learning, there is still a long way to go compared to supervised learning. What should be the next steps? It seems that an obvious way is to obtain multiple sources of invariance by combining multiple self-supervised tasks, e.g., via multiple losses. Unfortunately, this naïve solution turns out to give little improvement (as we will show by experiments).
1906.00181 | 2947665243 | Unsupervised domain translation has recently achieved impressive performance with the rapidly developed generative adversarial network (GAN) and the availability of sufficient training data. However, existing domain translation frameworks are built in a disposable way, where the learning experiences are ignored. In this work, we take this research direction toward the unsupervised meta domain translation problem. We propose a meta translation model called MT-GAN to find a parameter initialization of a conditional GAN, which can quickly adapt to a new domain translation task with limited training samples. In the meta-training procedure, MT-GAN is explicitly fine-tuned with a primary translation task and a synthesized dual translation task. Then we design a meta-optimization objective to require the fine-tuned MT-GAN to produce good generalization performance. We demonstrate the effectiveness of our model on ten diverse two-domain translation tasks and multiple face identity translation tasks. We show that our proposed approach significantly outperforms the existing domain translation methods when using no more than @math training samples in each image domain. | In recent years, the generative adversarial network (GAN) @cite_10 has gained a wide range of interest in generative modeling. In a GAN, a generator is trained to produce fake but plausible images, while a discriminator is trained to distinguish between real and fake images. The conditional generative adversarial network (CGAN) @cite_0 is the conditional version of GAN, in which the generator is fed with a noise vector together with additional data (e.g., class labels) that conditions both the generator and the discriminator. The deep convolutional generative adversarial network (DCGAN) @cite_19 is an extensive exploration of convolutional neural network architectures in GANs and contributes to improving the quality of image synthesis. GANs have been successfully leveraged in many image generation applications @cite_28 @cite_4 @cite_7 @cite_32 . Our method adopts the adversarial loss to make the images produced by the generators look real in the target domain, and uses the meta-training performance to improve the meta-learners' generalization. | {
"abstract": [
"We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.",
"We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.",
"Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for task of facial expression synthesis. The most successful architecture is StarGAN, that conditions GANs’ generation process with images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper, we introduce a novel GAN conditioning scheme based on Action Units (AU) annotations, which describes in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combine several of them. Additionally, we propose a fully unsupervised strategy to train the model, that only requires images annotated with their activated AUs, and exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. Extensive evaluation show that our approach goes beyond competing conditional generators both in the capability to synthesize a much wider range of expressions ruled by anatomically feasible muscle movements, as in the capacity of dealing with images in the wild.",
"Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https: github.com JiahuiYu generative_inpainting.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_10"
],
"mid": [
"2963800363",
"2546066744",
"2883861033",
"2963540914",
"2125389028",
"2173520492",
"2099471712"
]
} | LEARNING TO TRANSFER: UNSUPERVISED META DOMAIN TRANSLATION | Unsupervised domain translation tasks [1,2], which aim at learning a mapping that can transfer images from a source domain to a target domain using only unpaired training data, have been widely investigated in recent years. However, the recent literature focuses on learning a model for a specific translation task, which lacks the ability to generalize to other tasks. In comparison, human intelligence has the ability to quickly learn new concepts from prior learning experiences. Taking painting as an example, after being able to paint a natural scene in Monet's style, people have learned the basic skills of painting, such as using a painting brush and a palette. When learning to paint in Van Gogh's style, we do not need to learn how to draw from the beginning. Instead, we can quickly adapt to this new task by viewing a few of Van Gogh's paintings, since we have remembered the basic patterns of painting from the prior learning experiences. To this end, we would like domain translation agents to effectively utilize prior experiences and knowledge from other translation tasks when learning a new translation task.
In this paper, we take a step toward the unsupervised meta domain translation (UMDT) problem, which aims to effectively leverage learning experiences from domain translation tasks. Specifically, we propose a meta translation model called MT-GAN to develop strategies that are robust to different task contexts. In other words, we want to find an initialization of a conditional GAN's parameters that can be quickly adapted to a new domain translation task with a limited amount of training samples. Our model contains two meta-learners, i.e., a meta-generator G which keeps the memory of prior translation experiences and a meta-discriminator D which teaches G how to quickly generalize to a new task. Our approach combines the model-agnostic meta-learning (MAML) algorithm [3] with GAN training [4] to iteratively update G and D. Within a meta-training minibatch, for a specific translation task, we synthesize its dual translation task with the current state of MT-GAN and train these two tasks in a dual learning form [1,5]. Then we design a meta-optimization objective to evaluate the performance of the fine-tuned MT-GAN, and minimize the expected losses on the meta-testing samples with respect to the parameters of MT-GAN, which ensures that the direction taken during fine-tuning leads to good generalization performance.
We extensively evaluate the effectiveness and generalization ability of the proposed MT-GAN algorithm on two kinds of translation task distributions. The first contains 10 diverse two-domain translation tasks covering a wide range of scenarios, including labels↔photos, horses↔zebras, summer↔winter, etc. The second is established by repeatedly sampling two arbitrary identities from a multiple-identity dataset [6], each pair forming a two-domain translation task. For each translation task, we take the other 9 tasks as the training dataset and test the meta-learned parameter initialization on the held-out task. Our experiments use at most 10 samples per image domain of a translation task and show that the proposed meta-learning approach outperforms ordinary domain translation models such as CycleGAN [1] and StarGAN [2].
We summarize our contributions as follows: 1) we present a novel domain translation problem, UMDT; 2) we propose MT-GAN, which jointly trains two meta-learners in an adversarial and dual form, a setting not explored by existing meta-learning approaches; 3) we extensively verify the effectiveness of our meta-learning based approach on a wide range of translation tasks.
Meta Translation
Problem Formulation
The goal of unsupervised meta domain translation (UMDT) is to first leverage unpaired data for efficient training, so that the obtained model can then be applied to a wide range of new domain translation tasks. We formally define the UMDT problem as follows. Assume a series of tasks following a distribution $P(\mathcal{T})$. Each task $\mathcal{T}$ is associated with an underlying distribution $P^{X \times Y}_{\mathcal{T}}(x, y)$ over two domains $X$ and $Y$; its objective is to learn a function that maps samples from domain $X$ to domain $Y$. Concretely, we assume a set of training tasks $\{\mathcal{T}_i\}_{i=1}^{N}$, where $N$ is the number of training tasks. Each training task $\mathcal{T}$ is a tuple $\mathcal{T} = (S_{\mathcal{T}}, Q_{\mathcal{T}})$, where $S_{\mathcal{T}}$ denotes the support set and $Q_{\mathcal{T}}$ the query set. The support set $S_{\mathcal{T}} = \{\{x_i\}_{i=1}^{K} \in X^s_{\mathcal{T}}, \{y_i\}_{i=1}^{K} \in Y^s_{\mathcal{T}}\}$ contains $2K$ unpaired samples from the two domains, and the query set $Q_{\mathcal{T}} = \{\{x_i\}_{i=1}^{L} \in X^q_{\mathcal{T}}, \{y_i\}_{i=1}^{L} \in Y^q_{\mathcal{T}}\}$ contains $2L$ unpaired samples from the same two domains. A UMDT algorithm takes $\{\mathcal{T}_i\}_{i=1}^{N}$ as input and produces a learning strategy for two meta-learners, a meta-generator $G$ and a meta-discriminator $D$. When a new translation task $\mathcal{T}_{N+1} = \{\{x_i\}_{i=1}^{K} \in X_{\mathcal{T}_{N+1}}, \{y_i\}_{i=1}^{K} \in Y_{\mathcal{T}_{N+1}}\}$ arrives at test time, the learning strategy should produce a fine-tuned $G$ and a fine-tuned $D$ that accomplish $\mathcal{T}_{N+1}$. We refer to this process as $K$-shot domain translation. In general, the meta-learners iteratively adjust the meta-parameters on data from the support set and assess their generalization by computing the meta-objective on data from the query set.
Our Approach
We introduce the formulation of MT-GAN as follows. For a $K$-shot domain translation problem $P(\mathcal{T})$ and dataset $\{\mathcal{T}_i\}_{i=1}^{N}$, our goal is to find a meta-generator $G$ that transfers samples from a source domain to a target domain, and a meta-discriminator $D$ that discriminates whether a sample comes from a real image domain or from the generator. Formally, the generator $G$ and discriminator $D$ are parameterized by $\theta_g$ and $\theta_d$, respectively. For a specific translation task $\mathcal{T} \sim P(\mathcal{T})$, there are two image domains $X$ and $Y$. Our primary target is to learn a mapping function $F: X \to Y$ derived from $G$. Since no paired data is available for training, and observing that there naturally exists a dual task that learns the other mapping direction $H: Y \to X$, we treat the two translation tasks as a two-agent game for fine-tuning in the meta-training period. Specifically, two discriminators $D_Y$ and $D_X$ are used to push images from the generators to look real in their target domains. In the first iteration, we update the parameters of the discriminators and generators as:
$$\theta_{d_Y,0} = \theta_d + \alpha \nabla_{\theta_d} \mathcal{L}_{\mathcal{T},0}; \qquad \theta_{d_X,0} = \theta_{d,wc} + \alpha \nabla_{\theta_{d,wc}} \mathcal{L}_{\mathcal{T},0}, \tag{1}$$

$$\theta_{f,0} = \theta_g - \alpha \nabla_{\theta_g} \mathcal{L}_{\mathcal{T},0}; \qquad \theta_{h,0} = \theta_{g,wc} - \alpha \nabla_{\theta_{g,wc}} \mathcal{L}_{\mathcal{T},0}, \tag{2}$$
where $\alpha$ is the learning rate during the meta-training period, and $\theta_{d_Y,0}$, $\theta_{f,0}$, $\theta_{d_X,0}$, and $\theta_{h,0}$ are the parameters of $D_Y$, $F$, $D_X$, and $H$, respectively, at the first meta-training iteration $t = 0$. $\theta_{d,wc}$ and $\theta_{g,wc}$ are the parameters of $wc(D)$ and $wc(G)$, where $wc$ is a network weight-copy operation that detaches the back-propagation gradient from the meta-optimization objectives. In practice, $G$ and $D$ can serve as the parameter initialization for any translation task. However, within a specific task $\mathcal{T}$, using $G$ to fine-tune both $F$ and $H$ is ambiguous, since the meta-optimization objective would then require $G$ and $D$ to adapt well to both $X \to Y$ and $Y \to X$ at the same time. Therefore, we update $G$ and $D$ only for the $X \to Y$ translation and use the $wc$ operation for the dual direction. The overall objective $\mathcal{L}_{\mathcal{T},0}$ for training the discriminators and generators at iteration $t = 0$ is given as:
$$\begin{aligned} \mathcal{L}_{\mathcal{T},0}(G, D, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) ={}& \mathcal{L}_{adv}(G, D, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) + \mathcal{L}_{adv}(wc(G), wc(D), Y^s_{\mathcal{T}}, X^s_{\mathcal{T}}) \\ &+ \lambda_{cyc} \mathcal{L}_{cyc}(G, wc(G), X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) + \lambda_{idt} \mathcal{L}_{idt}(G, wc(G), X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}), \end{aligned} \tag{3}$$

$$\mathcal{L}_{adv}(G, D, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) = \mathbb{E}_{y \sim Y^s_{\mathcal{T}}}[\log D(y)] + \mathbb{E}_{x \sim X^s_{\mathcal{T}}}[\log(1 - D(G(x)))], \tag{4}$$

$$\mathcal{L}_{cyc}(G_1, G_2, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) = \mathbb{E}_{x \sim X^s_{\mathcal{T}}}[\| G_2(G_1(x)) - x \|_1] + \mathbb{E}_{y \sim Y^s_{\mathcal{T}}}[\| G_1(G_2(y)) - y \|_1], \tag{5}$$

$$\mathcal{L}_{idt}(G_1, G_2, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) = \mathbb{E}_{x \sim X^s_{\mathcal{T}}}[\| G_2(x) - x \|_1] + \mathbb{E}_{y \sim Y^s_{\mathcal{T}}}[\| G_1(y) - y \|_1], \tag{6}$$
where $\mathcal{L}_{adv}$, $\mathcal{L}_{cyc}$, and $\mathcal{L}_{idt}$ are the adversarial loss, cycle-consistency loss, and identity loss, respectively, and $\lambda_{cyc}$ and $\lambda_{idt}$ are weights that balance the loss terms. After iteration $t = 0$, we follow a popular unsupervised domain translation model, e.g., CycleGAN, and fine-tune the initialized $D_X$, $D_Y$, $F$, and $H$ to quickly adapt to task $\mathcal{T}$. Formally, we update the parameters as follows:
$$\theta_{d_Y,t+1} = \theta_{d_Y,t} + \alpha \nabla_{\theta_{d_Y,t}} \mathcal{L}_{\mathcal{T},t+1}; \qquad \theta_{d_X,t+1} = \theta_{d_X,t} + \alpha \nabla_{\theta_{d_X,t}} \mathcal{L}_{\mathcal{T},t+1}, \tag{7}$$

$$\theta_{f,t+1} = \theta_{f,t} - \alpha \nabla_{\theta_{f,t}} \mathcal{L}_{\mathcal{T},t+1}; \qquad \theta_{h,t+1} = \theta_{h,t} - \alpha \nabla_{\theta_{h,t}} \mathcal{L}_{\mathcal{T},t+1}, \tag{8}$$

$$\begin{aligned} \mathcal{L}_{\mathcal{T},t+1}(F_t, H_t, D_{X,t}, D_{Y,t}, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) ={}& \mathcal{L}_{adv}(F_t, D_{Y,t}, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) + \mathcal{L}_{adv}(H_t, D_{X,t}, Y^s_{\mathcal{T}}, X^s_{\mathcal{T}}) \\ &+ \lambda_{cyc} \mathcal{L}_{cyc}(F_t, H_t, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}) + \lambda_{idt} \mathcal{L}_{idt}(F_t, H_t, X^s_{\mathcal{T}}, Y^s_{\mathcal{T}}), \end{aligned} \tag{9}$$
where $D_{Y,t}$, $F_t$, $D_{X,t}$, and $H_t$ are the states of $D_Y$, $F$, $D_X$, and $H$ at meta-training iteration $t$. For meta-optimization, we minimize the expected loss on the query set $Q_{\mathcal{T}}$ with the updated discriminators and generators across task $\mathcal{T}$ in order to train the initial parameters of $D$ and $G$. Our MT-GAN model is trained as follows:
$$\theta_d = \theta_d + \beta \nabla_{\theta_d} \mathcal{L}^q_{\mathcal{T},T+1}, \tag{10}$$

$$\theta_g = \theta_g - \beta \nabla_{\theta_g} \mathcal{L}^q_{\mathcal{T},T+1}, \tag{11}$$

$$\begin{aligned} \mathcal{L}^q_{\mathcal{T},T+1}(F_T, H_T, D_{X,T}, D_{Y,T}, X^q_{\mathcal{T}}, Y^q_{\mathcal{T}}) ={}& \mathcal{L}_{adv}(F_T, D_{Y,T}, X^q_{\mathcal{T}}, Y^q_{\mathcal{T}}) + \mathcal{L}_{adv}(H_T, D_{X,T}, Y^q_{\mathcal{T}}, X^q_{\mathcal{T}}) \\ &+ \lambda_{cyc} \mathcal{L}_{cyc}(F_T, H_T, X^q_{\mathcal{T}}, Y^q_{\mathcal{T}}) + \lambda_{idt} \mathcal{L}_{idt}(F_T, H_T, X^q_{\mathcal{T}}, Y^q_{\mathcal{T}}) \end{aligned} \tag{12}$$
where $\beta$ is the learning rate for the meta-optimization and $T$ is the overall number of meta-training iterations. The full algorithm of MT-GAN, for the general case, is outlined in Algorithm 1:

Algorithm 1: MT-GAN meta-training
    while not done do
        sample a meta-batch of tasks $\mathcal{T}_i \sim P(\mathcal{T})$
        for t in iterations 0..T-1 do
            compute the fine-tuned parameters $\theta_{d_X,t+1}$, $\theta_{d_Y,t+1}$, $\theta_{F,t+1}$, $\theta_{H,t+1}$ via Eqns. (1)-(9)
        update $\theta_d \leftarrow \theta_d + \beta \nabla_{\theta_d} \sum_{\mathcal{T}_i \sim P(\mathcal{T})} \mathcal{L}^q_{\mathcal{T}_i,T+1}$   (Eq. (10))
        update $\theta_g \leftarrow \theta_g - \beta \nabla_{\theta_g} \sum_{\mathcal{T}_i \sim P(\mathcal{T})} \mathcal{L}^q_{\mathcal{T}_i,T+1}$   (Eq. (11))
    end while
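As a concrete reference for Eqns. (3)-(6) and the $wc$ operation, here is a minimal PyTorch sketch. All function names are ours, we interpret $wc$ as a detached deep copy, and we assume the discriminators output probabilities in (0, 1); this is an illustration, not the authors' code.

```python
import copy
import torch

def wc(net):
    """Weight copy: a detached clone whose parameters receive no gradient
    from the meta-optimization objective (the paper's wc operation,
    interpreted here as deepcopy + requires_grad=False)."""
    clone = copy.deepcopy(net)
    for p in clone.parameters():
        p.requires_grad_(False)
    return clone

def adv_loss(G, D, x, y, eps=1e-8):
    """Eq. (4); D is assumed to output probabilities in (0, 1)."""
    return torch.log(D(y) + eps).mean() + torch.log(1 - D(G(x)) + eps).mean()

def cyc_loss(G1, G2, x, y):
    """Eq. (5): L1 cycle-consistency in both directions."""
    return (G2(G1(x)) - x).abs().mean() + (G1(G2(y)) - y).abs().mean()

def idt_loss(G1, G2, x, y):
    """Eq. (6): L1 identity loss."""
    return (G2(x) - x).abs().mean() + (G1(y) - y).abs().mean()
```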
At inference time, for an unseen translation task $\mathcal{T}_{N+1} = \{\{x_i\}_{i=1}^{K} \in X_{\mathcal{T}_{N+1}}, \{y_i\}_{i=1}^{K} \in Y_{\mathcal{T}_{N+1}}\}$, we iteratively fine-tune the obtained $G$ and $D$ with the meta-training steps of Eqns. (1)-(9) to obtain an $F$ that transfers $\{x_i\}_{i=1}^{K}$ to the $Y$ domain and an $H$ that transfers $\{y_i\}_{i=1}^{K}$ to the $X$ domain.
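The inner adaptation and outer meta-update can be sketched as follows. This is a first-order simplification (a full MAML-style update would differentiate through the inner loop), the sign-flip shortcut for discriminator ascent and all names are ours, and `task_loss` is assumed to implement the Eq. (9)/(12)-style objective. The same `finetune_on_task` routine also serves the test-time fine-tuning described above.

```python
import copy
import itertools
import torch

def finetune_on_task(G, D, task, steps, alpha, task_loss):
    """Inner loop (Eqns. (1)-(2), then (7)-(9)): derive task-specific
    F, H, D_X, D_Y from the meta-learners and adapt them on the support
    set. Generators descend task_loss; discriminators ascend it."""
    F_, H_ = copy.deepcopy(G), copy.deepcopy(G)
    D_Y, D_X = copy.deepcopy(D), copy.deepcopy(D)
    g_opt = torch.optim.SGD(itertools.chain(F_.parameters(), H_.parameters()), lr=alpha)
    d_opt = torch.optim.SGD(itertools.chain(D_X.parameters(), D_Y.parameters()), lr=alpha)
    for _ in range(steps):
        loss = task_loss(F_, H_, D_X, D_Y, task.support_x, task.support_y)
        g_opt.zero_grad()
        d_opt.zero_grad()
        loss.backward()
        for p in itertools.chain(D_X.parameters(), D_Y.parameters()):
            if p.grad is not None:
                p.grad.neg_()  # discriminators take a gradient-ascent step
        g_opt.step()
        d_opt.step()
    return F_, H_, D_X, D_Y

def meta_step(G, D, tasks, steps, alpha, beta, task_loss):
    """Outer loop, first-order approximation of Eqns. (10)-(12): evaluate
    fine-tuned models on the query sets and move the meta-parameters
    along the accumulated query-loss gradients."""
    g_grads = [torch.zeros_like(p) for p in G.parameters()]
    d_grads = [torch.zeros_like(p) for p in D.parameters()]
    for task in tasks:
        F_, H_, D_X, D_Y = finetune_on_task(G, D, task, steps, alpha, task_loss)
        for net in (F_, H_, D_X, D_Y):
            net.zero_grad()
        q_loss = task_loss(F_, H_, D_X, D_Y, task.query_x, task.query_y)
        q_loss.backward()
        for g, p in zip(g_grads, F_.parameters()):
            if p.grad is not None:
                g += p.grad
        for g, p in zip(d_grads, D_Y.parameters()):
            if p.grad is not None:
                g += p.grad
    with torch.no_grad():
        for p, g in zip(G.parameters(), g_grads):
            p -= beta * g / len(tasks)  # Eq. (11): descent for the meta-generator
        for p, g in zip(D.parameters(), d_grads):
            p += beta * g / len(tasks)  # Eq. (10): ascent for the meta-discriminator
```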
Experiments
Experimental Setup
We extensively evaluate the effectiveness and generalization ability of the proposed MT-GAN algorithm for the UMDT problem on two kinds of translation task distributions. The first one (denoted $P_1(\mathcal{T})$) contains 10 diverse translation tasks collected by [1]: labels↔photos, horses↔zebras, summer↔winter, apple↔orange, monet↔photo, cezanne↔photo, ukiyoe↔photo, vangogh↔photo, photos↔maps, and labels↔facades. In addition, the Facescrub dataset [6], which comprises 531 different celebrities, is used as a second, less diverse collection of domain translation tasks (denoted $P_2(\mathcal{T})$), in which different identities are viewed as different domains. We can then sample any two identities to form a two-domain translation task that aims to transfer the identity of face images while preserving the original face orientation and expression.
In our experiments, for both $P_1(\mathcal{T})$ and $P_2(\mathcal{T})$, we simulate meta domain translation scenarios by randomly selecting $N = 9$ tasks as the training dataset and the remaining task as the testing task. We establish 10 training datasets and 10 corresponding testing datasets for both $P_1(\mathcal{T})$ and $P_2(\mathcal{T})$. For each training dataset, we randomly select 2000 meta-batches overall from the 9 tasks for model training, and we set the meta batch size to 2 to fit the memory limit of the GPU. Following the common settings of few-shot learning, we mainly focus on 5-shot and 10-shot domain translation in all experiments, and we set the query-set size $L$ to 10. For the testing task, we randomly select 5 meta-batches for model testing.
For each meta-training period, we use stochastic gradient descent (SGD) with learning rate $\alpha = 0.0001$ to fine-tune the generators and discriminators. At meta-optimization time, we use the Adam optimizer [27] with learning rate $\beta = 0.0002$ to update both the meta-generator and the meta-discriminator. For model fine-tuning on the testing tasks, we also use the Adam optimizer with learning rate $\beta = 0.0002$ to fine-tune both meta-learners. The overall number of meta-training iterations $T$ is set to 100, and the loss-balance parameters $\lambda_{cyc}$ and $\lambda_{idt}$ are set to 10 and 5, respectively.
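Collected in one place, the hyperparameters reported above might be organized as a small configuration dictionary; the key names are ours.

```python
# Hyperparameters as reported in the text (key names are ours).
CONFIG = {
    "alpha": 1e-4,         # inner-loop SGD learning rate (meta-training)
    "beta": 2e-4,          # Adam learning rate for meta-optimization / fine-tuning
    "T": 100,              # overall number of meta-training iterations
    "lambda_cyc": 10.0,    # weight of the cycle-consistency loss
    "lambda_idt": 5.0,     # weight of the identity loss
    "meta_batch_size": 2,  # tasks per meta-batch (GPU memory limit)
    "K": (5, 10),          # shots per domain in the support set
    "L": 10,               # query-set size per domain
    "meta_batches": 2000,  # meta-batches sampled per training dataset
}
```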
For each K-shot domain translation task, we fine-tune the trained MT-GAN on each meta-batch of the testing dataset and report the average score and its standard deviation. The Frechet Inception Distance (FID) [28], which measures the similarity between a generated image set and a real image set, is used to evaluate the quality of the translation results; the lower the FID, the better the translation results. In addition, we perform face-classification experiments on the face identity translation tasks: we retrained a VGG-16 network [29] on Facescrub and computed the top-1 and top-5 classification accuracy rates of the translation results.
Model configuration
We follow [1] to configure the models. The meta-generator G network consists of two convolution layers with stride 2 and kernel size 3 × 3, six residual blocks [30] with kernel size 3 × 3, and two transposed convolution layers with stride 1/2 and kernel size 3 × 3. For the meta-discriminator D, we use a PatchGAN [13] that consists of five convolution layers with stride 2 and kernel size 4 × 4. For both G and D, we use batch normalization [31] between network layers.
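A possible PyTorch rendering of the described architectures follows. The text only fixes layer counts, strides, and kernel sizes, so the channel widths, activation choices, and the sigmoid output head are our assumptions.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)  # residual connection

def make_generator(ch=64):
    """Two stride-2 3x3 convs, six residual blocks, two stride-1/2
    (transposed) 3x3 convs, as described in the text."""
    layers = [nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.BatchNorm2d(ch), nn.ReLU(True),
              nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.BatchNorm2d(2 * ch), nn.ReLU(True)]
    layers += [ResBlock(2 * ch) for _ in range(6)]
    layers += [nn.ConvTranspose2d(2 * ch, ch, 3, stride=2, padding=1, output_padding=1),
               nn.BatchNorm2d(ch), nn.ReLU(True),
               nn.ConvTranspose2d(ch, 3, 3, stride=2, padding=1, output_padding=1),
               nn.Tanh()]
    return nn.Sequential(*layers)

def make_discriminator(ch=64):
    """PatchGAN-style: five 4x4 stride-2 convolutions in total."""
    layers, in_ch = [], 3
    for i in range(4):
        out_ch = min(ch * 2 ** i, 512)
        layers += [nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                   nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.2, inplace=True)]
        in_ch = out_ch
    layers += [nn.Conv2d(in_ch, 1, 4, stride=2, padding=1), nn.Sigmoid()]
    return nn.Sequential(*layers)
```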
Results
Qualitative and Quantitative Evaluation. We compare our method with two baseline domain translation models, CycleGAN [1] and StarGAN [2]. We retrain both CycleGAN and StarGAN on each meta-batch in the testing dataset of a given K-shot domain translation task and report their average performance with the retrained models. Qualitative comparison results on the testing tasks are shown in Figure 1. We observe that StarGAN typically produces quite blurry and noisy outputs and clearly suffers from the limited training samples. CycleGAN maintains the main structure of the source inputs in most cases and transfers some domain-specific features of the target domains into the translation results. However, CycleGAN still fails to locate the precise regions into which domain-specific features should be transferred and produces unnatural images. For example, the apple translated by CycleGAN in 10-shot orange→apple is surrounded by spurious apple features, and the map translated by CycleGAN in 10-shot photos→maps mistakenly assigns land and houses to the water label. By contrast, most of our results preserve the domain-invariant features [32,33] well and accurately transfer the domain-specific features [32,33] into the translation results. Notably, even with limited unpaired training samples, our model is still able to detect semantic regions of the source inputs; for instance, it successfully detects the water area and land in the 10-shot domain translations of Figure 1 (f) and (g).
For the quantitative evaluation, we present the FID scores of the various testing tasks in Table 1. For face identity translation, we report the top-1 and top-5 face recognition accuracy of images generated by CycleGAN, StarGAN, and our model in Table 2. The quantitative results closely mirror the qualitative results in Figure 1: our model consistently outperforms CycleGAN and StarGAN. We also find that the improvement brought by our model on some natural image generation tasks, such as the four painting↔photo tasks, is less significant than on other tasks, such as photos↔maps and labels↔photos. This is not surprising: a limited number of samples can hardly cover all the patterns needed for natural image generation, whereas the patterns of photos↔maps or labels↔photos are simpler and more regular.
Comparing the performance of MT-GAN on the 5-shot and 10-shot domain translation tasks, we see that the proposed meta-learning approach is quite robust to a drop in the number of training samples. With only 5 training samples, MT-GAN still successfully transfers source inputs to the target domains in most cases, and its performance improves steadily as the number of training samples increases to 10.
Convergence Rate. The experiments above demonstrate that, given several domain translation tasks, the meta-learning based approach can incorporate the prior learning experience on those tasks and generalize to a new task with better performance than ordinary translation models. Meta-learning brings another benefit: a faster convergence rate during training. We show the training curves of the cycle-consistency loss with respect to training steps on the different testing tasks in Figure 2. We choose the cycle-consistency loss to reflect the convergence rate because a smaller cycle-consistency loss indicates that the two translation models relate the two domains well. Comparing the training curves of CycleGAN and our model, we observe that our model rapidly minimizes the cycle-consistency loss in the first several steps. In addition, our model reaches a lower cycle-consistency loss than CycleGAN after numerous iteration steps in most cases. When GAN training instability occurs, our model recovers to its previous training state more quickly than CycleGAN. These results demonstrate that our model indeed learns adaptation strategies from previous translation tasks, which helps it converge more quickly on current tasks.
Table 2: Average classification accuracy of 5-shot and 10-shot face identity translation tasks. The best top-1 and top-5 classification accuracies are in bold.
Conclusions
In this work, we formulate the unsupervised meta domain translation (UMDT) problem, which aims to effectively incorporate prior domain translation experience. Accordingly, we propose a model called MT-GAN that finds an initialization of a meta-generator and a meta-discriminator which can serve as the starting point for any translation task. We jointly train the two meta-learners in an adversarial and dual form. We demonstrate our model on ten diverse domain translation tasks and on face identity translation tasks; both qualitative and quantitative results show that the meta-learning based approach significantly outperforms ordinary translation models. In addition, we show that our model converges faster than CycleGAN, which further demonstrates that MT-GAN indeed learns adaptation strategies from previous learning experience.
For future work, it will be interesting to extend the training paradigm of MT-GAN to other image generation or domain transfer learning tasks. In addition, how to learn adaptation strategies from many-shot domain translation tasks is worth exploring. | 3,419 |
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm uses a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) have outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a fixed-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | Early works in lane detection and departure warning systems date back to the 1990s. Previously proposed methods in this area can be classified as low-level image feature based approaches, machine/deep learning (DL) based approaches, or hybrids of the two. The most widely used LDW systems are either vision-based (e.g., histogram analysis, Hough transformation) or, more recently, based on DL. In general, vision-based and DL lane detection systems start by capturing images using a selected type of sensor, pre-processing the image, and then performing lane line detection and tracking. While many types of sensors have been proposed for capturing lane images, such as radar, laser range finders, lidar, and active infrared, the most widely used device is a mobile camera. An alternative to vision- and DL-based systems is the use of global positioning systems (GPS) combined with Geographic Information Systems @cite_6 . However, current LDW systems based on GPS can be unreliable, mainly because of the often poor reliability and resolution of GPS location and speed detection, signal loss (e.g., in covered areas), and inaccurate map databases. Due to these limitations, most modern research in LDW involves neural-network-based solutions in some form. | {
"abstract": [
"A lane-detection system is an important component of many intelligent transportation systems. We present a robust lane-detection-and-tracking algorithm to deal with challenging scenarios such as a lane curvature, worn lane markings, lane changes, and emerging, ending, merging, and splitting lanes. We first present a comparative study to find a good real-time lane-marking classifier. Once detection is done, the lane markings are grouped into lane-boundary hypotheses. We group left and right lane boundaries separately to effectively handle merging and splitting lanes. A fast and robust algorithm, based on random-sample consensus and particle filtering, is proposed to generate a large number of hypotheses in real time. The generated hypotheses are evaluated and grouped based on a probabilistic framework. The suggested framework effectively combines a likelihood-based object-recognition algorithm with a Markov-style process (tracking) and can also be applied to general-part-based object-tracking problems. An experimental result on local streets and highways shows that the suggested algorithm is very reliable."
],
"cite_N": [
"@cite_6"
],
"mid": [
"2146575011"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | Motor vehicle collisions are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people worldwide die and 50 million are injured every year due to traffic-related accidents. Traffic accidents result in considerable economic cost, currently estimated at 1-2% of average gross national product ($518 billion globally per year) [1]. According to the European Accident Research and Safety Report 2013, more than 90% of driving accidents are caused by safety-critical driver errors [2]. Lane incursions due to driver error are a common cause of accidents: estimates from the U.S. National Highway Traffic Safety Administration indicate that 11% of accidents are due to the driver inappropriately departing from their lane while traveling [3]. To address this risk, Lane Departure Warning (LDW) systems are becoming a commonly deployed driver assistance technology aimed at improving on-road safety and reducing traffic accidents [4]. LDW systems typically detect lanes from low-level image features such as edges and contours, and several solutions aimed at detecting vehicle lane position and alerting drivers to potentially unsafe lane departure events have been developed [5]. For example, simple image feature based systems have been developed to detect straight lines, polynomials, cubic splines, piecewise linear curves, and circular arcs relevant to lane detection [6]. However, image feature based systems have predictable limitations and can become unreliable with increasing road scene complexity (e.g., shadows, low visibility, occlusions, and curves) [7]. Due to these limitations, researchers have been turning their attention to machine learning (ML) based methods, most recently deep neural networks (DNNs), to overcome the above-mentioned shortcomings. Region-based Convolutional Neural Networks (R-CNNs), a type of DNN architecture, outperform other DNN architectures in object detection and recognition applications. Due to these key advantages, we have chosen R-CNNs to build a simple but robust LDW system that overcomes the limitations of previous LDW systems.
The primary application of this project is to detect unsafe driver behaviors, like lane incursions or departures, in at-risk drivers with diabetes. Diabetes affects nearly 10% of the population in the USA and continues to increase with urbanization, obesity, and aging [8]. Drivers with diabetes have a significantly elevated crash risk compared to the general population, presenting a pressing problem of public health and patient safety. On-road risk in diabetes is linked to disease and unsafe physiologic states (e.g., hypoglycemia). These key factors make this population a prime target for improving safety with driver assistance systems like LDW.
Our model is capable of processing a large data collection representing multiple terabytes (TB) of video collected from at-risk drivers with diabetes. We present a lane detection model built on a Mask-RCNN architecture to analyze lane departures and incursions from lane detections in challenging, lower-resolution, and noisy video recordings. A lane incursion is defined as an incomplete lane departure in which the driver quickly returns to the original lane of travel. While previous literature addresses simple lane line detection, our model focuses on improving detection and segmentation of the driving lane area. Once the driving lane area is detected in a video frame, we track the centroid of the convex hull region representing the driving lane area. The centroid location with respect to the image's vertical center line is used to determine whether the driver is driving within the lane or deviating from it. Subsequently, the time series of relative lane position is used to infer driving behavior. This paper is organized as follows: Related Works describes previous work on lane departure using image-based features and machine learning based approaches; Custom Dataset provides general information about the data collected, annotated, and used for this project; Proposed Model and Lane Departure presents our approach for detecting lane departure events; finally, a summary and discussion of our work is presented in Conclusions.
A. Image Feature Based Methods
Image feature based lane detection is a well-researched area of computer vision [14]. The majority of existing image-based methods use detected lane line features, such as colors, gray-scale intensities, and textural information, to perform edge detection; this approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi [15], lane detection was done using inverse perspective mapping to remove the perspective effect, followed by detection of horizontal black-white-black transitions. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. Limitations of their proposed system included computational complexity, the need for well-painted lanes, and assumptions such as the lane lying within the region of interest and a fixed minimum lane width.
In 2005, Lee and Yi [16] introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). Their work built upon methods previously proposed by Lee [17], which used a linear lane model and an edge distribution function (EDF), as well as a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs) through linear regression analysis. Despite these improvements, the model performed poorly at detecting curved lanes. Some low-level image feature based models include an initial layer to normalize illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) [9]. Lately, approaches have been incorporating machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image feature based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in an inability to capture local image feature based information. End-to-end learning with deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution.
B. Deep Learning Based Methods
To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs are becoming an increasingly popular method. Lane detection in images such as those shown in Fig. 1 (a-d) is nearly impossible without a CNN. Kim and Lee [18] combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes that include roadside trees, fences, or intersections; in their method, the CNN was primarily used to enhance images. In [19], the authors showed how existing CNNs can be used to perform lane detection while running at the frame rates required for a real-time system. Ozcan et al. [20] discussed how they overcame the difficulties of detecting traffic signs in low-quality noisy videos using a chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN.
More recently, in [21], the authors used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter and simultaneously optimized the front-view and top-view cameras. The hat-like filter extracts all potential lane line candidates, removing most FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded; within the top-view image, structures other than lane lines, such as ground arrows and words, were also removed.
C. Lane departure models
The objective of Lane Departure Prediction (LDP) is to predict whether the driver is likely to leave the lane, with the goal of warning drivers in advance of a lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based approaches, vehicle-position estimation, and detection of the lane boundary using real-time captured road images; all of them use real-time captured images [22].
The time-to-line-crossing (TLC) model has been used extensively in production vehicles [23]. TLC systems evaluate the lane and vehicle state using vision-based equipment and perform TLC calculations online using a variety of algorithms; a TLC threshold is used to trigger an alert to the driver. Different computational methods are used depending on road geometry and vehicle type. Among these, the most common method is to predict the road boundary and the vehicle trajectory, and then calculate the time at which the two intersect at the current driving speed. On roads of small curvature, the TLC can be computed as the ratio of lateral distance to lateral velocity, or as the ratio of the distance to the line crossing [24]. Studies suggest that TLC systems tend to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary [22], [24]. Wang et al. [22] proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a Hidden Markov Model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a joint probability density distribution, modeled as a Gaussian mixture model (GMM), over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can discern the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07%.
III. CUSTOM DATASET
Our dataset was collected as part of a clinical study in which 77 legally licensed and active older drivers (ages 65-90, mean = 75.7; 36 female, 41 male) were recruited. The aim of that project is to study the driving behavior of individuals with disability conditions. Drivers who had physical limitations were permitted if they met state licensure standards, as these limitations are ubiquitous in older adults. Each driver drove in their typical environment with their typical strategies and driving behaviors for 3 months (the total data collection embodies nearly 19.3 years of driving). One of our contributions to this study was the detection of lane departures and incursions. For this task, we used 4,162 annotated images to train our model. The images had a resolution of 752x480 and the videos run at an average of 25 fps. These images were split into Training (70%), Validation (15%), and Test (15%) sets. Among all videos, we tested our lane crossing/departure algorithm on 30 diverse videos.
IV. PROPOSED MODEL AND LANE DEPARTURE
Lane detection in the presence of noisy, lower-resolution image data presents significant challenges. Illumination, color contrasts, and image resolution immediately prohibit the use of low-level image feature based algorithms for detecting the lanes. Consequently, we turned our attention to machine/DL based models to detect lane regions, as these models perform better than low-level image feature based algorithms given the lower-quality recordings in our custom dataset. We selected the Mask-RCNN [25] architecture since we were mainly interested in segmented lane regions within the image and could tolerate 5 fps [13], while it provides state-of-the-art mAP (mean average precision). The Mask-RCNN architecture, illustrated in Fig. 3, can be divided into two networks: the first is the region proposal network (RPN) used for generating region proposals, and the second uses these proposals to detect objects. The video processing pipeline, including detection and tracking, is given in Algo. 1.
Algorithm 1: Video Control Algorithm
    procedure VideoCapture(video)          ▷ video: mp4
        frame ← video
        while frame ≠ Null do              ▷ loop until video end
            M ← detection mask
            Display ← Tracking(M)
            frame ← video
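A minimal OpenCV rendering of Algorithm 1 might look as follows; `detect_mask` and `track` are hypothetical callables standing in for the Mask-RCNN detector and the convex-hull/Kalman tracker described below.

```python
import cv2

def process_video(path, detect_mask, track):
    """Read frames until the video ends, run lane-mask detection,
    and hand each mask to the tracker (cf. Algorithm 1)."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    while ok:                            # loop until video end
        mask = detect_mask(frame)        # M <- detection mask
        display = track(mask)            # Display <- Tracking(M)
        cv2.imshow("lanes", display)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        ok, frame = cap.read()           # frame <- video
    cap.release()
    cv2.destroyAllWindows()
```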
A. Lane Detection
Our Mask-RCNN based model was configured with a ResNet-50 backbone, a learning rate of 0.001, a learning momentum of 0.9, and 256 RPN anchors per image. It was trained to detect lane regions only, using a segmented mask, in contrast to other lane detection models whose main goal is to detect lane lines. Detecting the lane region in lieu of the lane lines was considerably easier given the quality of the images we were working with. This approach provided a lane segmentation mask, which was later used to track the lane regions. To mitigate FPs, we used a Region of Interest (ROI) skim mask that concealed areas not relevant to our view of interest. Fig. 1 (a-d) provides example detections during daytime, nighttime, and shadowy conditions on the road.
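For illustration, a lane-mask inference step with an ROI skim mask could be sketched with torchvision's Mask R-CNN as below. The two-class setup (background + lane region) and the score threshold are our assumptions, and trained weights would have to be loaded separately.

```python
import numpy as np
import torch
import torchvision

# Hypothetical stand-in for the trained detector: Mask-RCNN with a
# ResNet-50 backbone and two classes (background + lane region).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

def lane_mask(frame_bgr, roi_mask, score_thresh=0.5):
    """Return a binary lane-region mask restricted to a region-of-interest
    'skim mask' (roi_mask: uint8, same height/width) to suppress FPs."""
    rgb = frame_bgr[:, :, ::-1].copy()
    x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([x])[0]
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    for m, s in zip(out["masks"], out["scores"]):
        if s >= score_thresh:
            mask |= (m[0].numpy() > 0.5).astype(np.uint8)
    return mask & roi_mask  # conceal areas outside the view of interest
```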
B. Lane Tracking
The mask tracking algorithm used for lane departure and incursion predictions is explained in Algo. 2. Once the lane mask regions were detected, the point coordinates forming the mask were used to compute a convex hull enclosing the mask. For this purpose, we employed the Quickhull algorithm, shown in Algo. 3, to obtain a convex hull polygon. Next, the centroid of the convex hull was calculated. Our model used the centroid to track the vertical and horizontal offset of the vehicle within the lane, as shown in Fig. 2. For reference, the vertical offset was calculated with respect to an imaginary vertical line in the middle of the image, as illustrated in Fig. 2, and the horizontal reference was chosen to be an imaginary line between the vehicle and the detected mask. The horizontal offset was not used in this work; however, it was implemented to detect driving separation distance, which may be useful for analyzing acceleration and braking.
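A minimal sketch of the hull-and-centroid step, using scipy's Qhull-backed ConvexHull; averaging the hull vertices is a simplification of the exact polygon centroid, and all names are ours.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_centroid(mask):
    """Convex hull of the lane-mask pixels (scipy wraps Qhull, a
    Quickhull implementation) and the centroid of its vertices.
    Assumes a non-empty, non-degenerate mask."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys])
    hull = ConvexHull(pts)
    return pts[hull.vertices].mean(axis=0)  # (cx, cy) in pixels
```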
The offsets were calculated using the distance between a line and a point in 2D space, Eq. (1), and measured in pixels. These offsets were first tracked over time, then normalized by their means, centered at zero, and smoothed using a fixed-lag Kalman filter, as shown in Fig. 4 (a). We found that centering around zero allows better generalization among drivers, since cameras are not necessarily mounted at the same location in different vehicles.
$$\mathrm{distance}\big(ax + by + c = 0,\ (x_0, y_0)\big) = \frac{|ax_0 + by_0 + c|}{\sqrt{a^2 + b^2}} \tag{1}$$
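Eq. (1) in code, applied to the image's vertical midline. Eq. (1) itself is unsigned, so the sign term that distinguishes left from right of center is our addition, and the function names are ours.

```python
import numpy as np

def point_line_distance(a, b, c, x0, y0):
    """Eq. (1): distance from point (x0, y0) to the line ax + by + c = 0."""
    return abs(a * x0 + b * y0 + c) / np.hypot(a, b)

def normalized_offsets(centroids, width):
    """Vertical offsets (in pixels) of hull centroids from the image's
    vertical midline x = width/2, i.e. the line 1*x + 0*y - width/2 = 0,
    signed by side and centered at zero. `centroids` is a list of (cx, cy)."""
    d = np.array([point_line_distance(1.0, 0.0, -width / 2.0, cx, cy)
                  for cx, cy in centroids])
    side = np.sign([cx - width / 2.0 for cx, _ in centroids])
    signed = d * side
    return signed - signed.mean()  # zero-centering for cross-driver comparison
```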
C. Lane departure classification
The plots in Fig. 4 (b) and (c) illustrate the typical patterns observed when the lane lines are crossed toward the left or right, respectively. These two plots were obtained while a driver changed from the right lane to the left lane and back to the right lane. A high peak starts developing as the driver departs from the center of the lane, followed by a depression zone and a trend back toward zero. Detecting and measuring this pattern is the core idea of the algorithm in Algo. 2, which predicts the type of lane crossing that has occurred. Our test dataset had a limited sample of verified incursion events (N=3), so incursion results are based on the available experiments only.

Algorithm 3: Quickhull
    A ← leftmost point; B ← rightmost point
    S1 ← points in S right of oriented line AB
    S2 ← points in S right of oriented line BA
    FindHull(S1, A, B); FindHull(S2, B, A)
    function FindHull(Sk, P, Q)            ▷ points right of line P→Q
        for each point in Sk do
            C ← farthest point from PQ
            S0 ← points inside triangle PCQ
            S1 ← points right of line PC
            S2 ← points right of line CQ
            FindHull(S1, P, C)
            FindHull(S2, C, Q)
    Output ← ConvexHull
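Returning to the lane-crossing patterns of Fig. 4, a toy version of the peak-then-depression heuristic might read as follows; the pixel thresholds and the mapping from sign to direction are illustrative assumptions, not the paper's values.

```python
import numpy as np

def classify_crossing(offsets, peak=20.0, dip=-10.0):
    """Classify a smoothed offset trace over one candidate event window.
    A high peak followed by a depression indicates a full lane change;
    a lone excursion that returns to baseline suggests an incursion.
    Thresholds are in pixels and purely illustrative."""
    offsets = np.asarray(offsets)
    hi, lo = offsets.max(), offsets.min()
    if hi > peak and lo < dip:
        # direction depends on which extreme comes first (camera-dependent)
        return "left change" if offsets.argmax() < offsets.argmin() else "right change"
    if hi > peak or lo < dip:
        return "incursion"
    return "in-lane"
```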
A. Lane Detection Results

Lane detections were evaluated using the intersection over union (IoU), Eq. (2), and the mean average precision (mAP), Eq. (3):

$$\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|} \tag{2}$$

$$\mathrm{mAP} = \frac{1}{|\mathrm{thresholds}|} \sum_{t} \frac{TP(t)}{TP(t) + FP(t) + FN(t)} \tag{3}$$

With the IoU threshold set at 0.5, any predicted object is considered a TP if its IoU with respect to the ground truth is greater than 0.5. The overall mAP was calculated to be 0.82 for lane detections on the test dataset.
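Eqns. (2) and (3) translate directly into code; in this sketch `tp`, `fp`, and `fn` are assumed to map each IoU threshold to its detection counts, and the names are ours.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Eq. (2): intersection over union of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def mean_average_precision(tp, fp, fn, thresholds):
    """Eq. (3): mean over IoU thresholds of TP/(TP + FP + FN)."""
    return sum(tp[t] / (tp[t] + fp[t] + fn[t]) for t in thresholds) / len(thresholds)
```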
B. Lane Crossing Algorithm Results
We tested our lane departure algorithm using 30 short driving videos, each 1 to 3 minutes long. Diversity of drivers and environmental conditions was considered when selecting the videos. Each video had at least one occurrence of lane crossing, and in some circumstances one lane line, or a portion of it, was not visible (e.g., at a lane bifurcation on a highway exit). We used the algorithm to test for lane changes to the left, lane changes to the right, and lane incursions; the results are summarized in Table I. While we did not
VI. CONCLUSION
We proposed a novel algorithm to detect and differentiate lane departure events, including incursions, in lower-resolution video recordings under challenging conditions. In our implementation, the model detected lane departures with a sensitivity of 0.82. Future investigations will expand our model to a wider variety of vehicle classes, which will likely improve FP rates, and will use segmented masks to detect lane types and improve lane incursion detection. An area that should be further explored is the use of the horizontal offset as a means to detect proximity, even when image perspectives are subject to the chirp effect. While our implementation used only pre-recorded videos, the convex hull centroid offset may permit lane tracking in a real-time implementation on vehicles. Our results underscore the feasibility and utility of applying DL models to autonomous driving systems, LDW/LDP, advanced driver assistance systems, and on-road interventions to improve safety in medically at-risk populations. | 2,806 |
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm uses a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) have outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a fixed-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | Neural networks have been a subject of investigation in the autonomous vehicles field for some time. Among the very first attempts to use a neural network for vehicle navigation, ALVINN @cite_12 is considered a pioneer and one of the most influential works. This model comprises a shallow neural network that predicts actions from images captured by a forward-facing camera mounted on board a vehicle, with few obstacles, pointing to the potential use of neural networks for autonomous navigation. More recently, advances in object detection, such as the contributions of DL and Region-based Convolutional Neural Networks (R-CNN) @cite_14 in combination with Region Proposal Networks (RPN) @cite_4 , have produced models such as Mask R-CNN @cite_1 that provide state-of-the-art predictions. New trends in neural network object detection include segmentation, which we applied in our model as an estimator for LDW. | {
"abstract": [
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand."
],
"cite_N": [
"@cite_1",
"@cite_14",
"@cite_4",
"@cite_12"
],
"mid": [
"",
"2102605133",
"2613718673",
"2167224731"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | Motor vehicle collisions are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people worldwide die and 50 million are injured every year due to traffic-related accidents. Traffic accidents result in considerable economic cost, currently estimated at 1-2% of average gross national product ($518 billion globally per year) [1]. According to the European Accident Research and Safety Report 2013, more than 90% of driving accidents are caused by safetycritical driver errors [2]. Lane incursions due to driver error are a common cause of accidents. Estimates from the U.S. National Highway Traffic Safety Administration indicate that 11% of accidents are due to the driver inappropriately departing from their lane while traveling [3] To address this risk, Lane Departure Warning (LDW) systems are becoming a commonly deployed driver assistance technology aimed at improving on-road safety and reducing traffic accidents [4]. LDW systems typically detect lanes from low-level image features such as edges and contours. Several solutions aimed at detecting vehicle lane position and alerting drivers to potentially unsafe lane departure events have been developed [5]. For example, simple image feature based systems have been developed to detect straight lines, polynomial, cubic-spline, piecewise linear, and circular arcs-relevant to lane detection [6]. However, image feature based systems have predictable limitations and can become unreliable with increasing road scene complexity (e.g., shadows, low visibility, occlusions, and curves) [7]. Due to these limitations, researchers have been turning their attention to machine learning (ML) based methods to overcome the above-mentioned shortcomings, most recently deep neural networks (DNN). Region-based Convolution Neural Networks (RCNNs), a type of DNN architecture, outperform other DNN architectures in object detection and recognition applications. Due to these key advantages, we have chosen RCNNs to build a simple but robust LDW system that overcomes limitations of previous LDW systems.
The primary application of this project is to detect unsafe driver behaviors, like lane incursions or departures, in atrisk drivers with diabetes. Diabetes affects nearly 10% of the population in the USA and continues to increase with urbanization, obesity, and aging [8]. Drivers with diabetes have a significantly elevated crash risk compared to the general population-presenting a pressing problem of public health and patient safety. On-road risk in diabetes is linked to disease and unsafe physiologic states (e.g., hypoglycemia). These key factors make this population a prime target for improving safety with driver assistance systems like LDW.
This model is capable of processing large data collection representing multiple Terabytes (TBs) of video collected from at-risk drivers with diabetes. We present this lane detection model using a Mask-RCNN architecture to analyze lane departures and incursions from lane detections in challenging, lower-resolution, and noisy video recordings. Lane incursion is defined as performing an incomplete lane departure while quickly returning back to the original lane of travel. While previous literature addresses simple lane line detection, our model focuses on advancing these models by improving detection and segmentation of the driving lane area. Once the driving lane area is detected in the video frame, we tracked a centroid of convex hull region, representing the driving lane area. The centroid location with respect to the image vertical center line was used to determine if the driver was driving within the lane or s/he was denaturing from it. Subsequently, the time series relative lane position was used to infer driving behavior. This paper is organized as follows: Related Works describes previous related work done on lane departure using image-based features and machine learning based approaches. Custom Dataset provides general information about the data collected, annotated, and used for this project. Proposed Model and Lane Departure presents our approach for detecting lane departure events. Finally, the summary and discussion of our work is presented in Conclusions.
A. Image Feature Based Methods
Image feature-based lane detection is a well researched area of computer vision [14]. The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. On the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi [15], lane detection was done using inverse perspective mapping to remove the perspective effect and horizontal black-white-black transaction. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. Some of the limitations to their proposed system were computational complexity, which needed well painted lane, and assumptions such as having a lane within the region of interest and fixed minimum width of lane.
In 2005, Lee and Yi [16] introduced the use of Sobel operator plus non-local maximum suppression (NLMS). It was built upon methods previously proposed by Lee [17] proposing linear lane model and edge distribution function (EDF) as well as lane boundary pixel extractor (LBPE) plus Hough transform. The model was able to overcome weak points of the EDF based lane-departure identification (LDI) system by increasing lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs) by taking advantage of linear regression analysis. Despite improvements, the model performed poorly at detecting curved lanes. Some of the low-level image feature based models include an initial layer to normalize illumination across consecutive images, other methods rely on filters or statistic models such as random sample consensus (RANSAC) [9]. Lately, approaches have been incorporating machine learning, more specifically, deep learning in regards to increase image quality before detection is conducted. However, image feature-based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in inability to capture local image feature based information. End-to-end learning from deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution.
B. Deep Learning Based Methods
To create lane detection models that are robust to environmental (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNN is becoming an increasingly popular method. Lane detection on the images shown in Fig. 1 (a-d) are near to impossible without using CNN. Kim and Lee [18] combined a CNN with the RANSAC algorithm to detect lanes edges on complex scenes with includes roadside trees, fences, or intersections. In their method, CNN was primarily used to enhance images. In [19], they showed how existing CNNs can be used to perform lane detection while running at frame rates required for a real-time system. Also, Ozcan et al. [20] discussed how they overcame the difficulties of detecting traffic signs from low-quality noisy videos using chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN.
More recently, in [21], they used a Dual-View Convolutional Neural Network (DVCNN) with hat-like filter and optimized simultaneously the frontal-view and the top-view cameras. The hat-like filter extracts all potential lane line candidates, thus removing most of FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded. Within the top-view image, structures other than lane lines such as ground arrows and words were also removed.
C. Lane departure models
The objective of Lane Departure Prediction (LDP) is to predict if the driver is likely to leave the lane with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based, vehicle-position estimation, and detection of the lane boundary using realtime captured road images. They all use real-time captured images [22].
The TLC model has been extensively used on production vehicles [23]. TLC systems evaluate the lane and vehicle state relying on vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used with regard to the road geometries and vehicle types. Among these methods, the most common method used is to predict the road boundary, the vehicle trajectory, and then calculate intersection time of the two at the current driving speed. On small curvature roads, the TLC can be computed as the ratio of lateral distance to lateral velocity or the ratio of the distance to the line crossing. [24] Studies suggest that TLC tend to have a higher false alarm rate (FAR) when the vehicle is driven close to lane boundary [22], [24]. Wang et al. [22] proposed a online learning-based approach to predict unintended lane-departure behaviors (LDB) depending on personalized driver model (PDM) and Hidden Markov Model (HMM). The PDM describes the drivers lane-keeping and lane-departure behaviors by using a jointprobability density distribution of Gaussian mixture model (GMM) between vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. PDM can discern the characteristics of individuals driving style. In combination with HMM to estimate the vehicles lateral displacement, they were able to reduce the FAR by 3.07.
III. CUSTOM DATASET
Our dataset is was collected as part of a clinical study where 77 legally licensed and active older drivers (ages 65-90, =75.7; 36 female, 41 male) were recruited. The aim of that particular project is to study the driving behavior of individuals with disabilities condition. Drivers who had physical limitations were permitted if they met state licensure standards as these limitations are ubiquitous in older adults. Each driver drove in their typical environment with their typical strategies and driving behaviors for 3-months (total data collection embody nearly 19.3 years). One of our contribution to this study was the detection of lane departure and incursion. For this task, we used 4,162 annotated images to train our model. The images had a resolution of 752x480 and the videos run at an average of 25 fps. These images were split into Training (70%), Validation (15%), and Test (15%) sets. Amount all videos, we tested on our lane crossing/departure algorithm in 30 diverse videos.
IV. PROPOSED MODEL AND LANE DEPARTURE
Lane detection in presence of noisy, lower-resolution image data presents significant challenges. Illumination, color contrasts, and image resolution immediately prohibit the use of low-level image feature-based algorithms for detecting the lanes. Consequently, we turned our attention to machine/DL based models to detect lane regions as these models perform better than low-level image feature-based algorithms for given lower quality recordings on custom dataset. We selected Mask-RCNN [25] architecture since we were mainly interested in segmented lane regions within the image and we could tolerate 5 fps [13], while it provide a state-ofthe-art mAP (mean average precision). The Mask-RCNN architecture, illustrated in Fig. 3, can be divided into two networks. The first network is the region proposal network (RPN) used for generating region proposals and a second network that use these proposals to detect objects. Video processing pipeline including detection and tracking is given in Algo. 1.
algorithm 1 Video Control Algorithm procedure VIDEOCAPTURE(video) mp4 f rame ← video while f rame = Null do Loop until video end M ← detection Mask Display ← Tracking(M) f rame ← video
A. Lane Detection
Our Mask-RCNN based model was configured using ResNet-50 as backbone with a learning rate of 0.001, a learning momentum of 0.9, and 256 RPN Anchors per image. It was trained to detect lane regions only, using segmented mask, on the contrary to other lane detection models where their main goal is to detect the lane lines. Lane detection in lieu to lines detection was considerably easier given the quality of the images we were working with. This approach provided us a lane segmentation mask, which was later used to track the lane regions. To mitigate FPs, we used a Region of Interest (ROI) skim mask that concealed areas not relevant to our view of interest. Fig. 1 (a-d) provides some of the example detections during daytime, nighttime, and shadowy conditions on the road.
B. Lane Tracking
Mask tracking algorithm used for lane departure and incursion predictions as explained in Algo. 2. Once the lane mask regions were detected, the point coordinates conforming the mask were used to compute a convex hull enclosing the mask. For this purpose, we employed a Quickhull algorithm, which is shown in Algo. 3, in order to obtain a Convex Hull polygon. Next, a centroid of the convex hull was calculated. Our model used the centroid in order to track its vertical and horizontal offset of the vehicle within the lane as shown in Fig. 2. For reference, the vertical offset was calculated according to an imaginary vertical line in the middle of the image as illustrated in Fig. 2. Also, the horizontal reference was chosen to be a imaginary line between the vehicle and the detected mask. The horizontal offset was not used in our quest; however, it was implemented to detect driving separation distance possibly useful for acceleration and braking.
The offsets were calculated using the distance between a line and a point in 2D space, Eq. (1). The offset units were measured in number of pixels. These offsets were first tracked over time, then normalized by their means, centered at zero, and smoothed using a Fix-lag Kalman filter, as shown in Fig. 4 (a). We found that centering around zero allows better generalization across drivers, since cameras are not necessarily mounted in the same location on different vehicles.
\mathrm{distance}(ax + by + c = 0,\; (x_0, y_0)) = \frac{|a x_0 + b y_0 + c|}{\sqrt{a^2 + b^2}} \tag{1}
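A direct transcription of Eq. (1) is sketched below, applied to the image's vertical center line x = w/2, i.e. the line 1*x + 0*y - w/2 = 0; in practice a signed variant such as cx - w/2 distinguishes left from right offsets, which the zero-centering step above relies on.

import numpy as np

def point_line_distance(a, b, c, x0, y0):
    # Eq. (1): distance from the point (x0, y0) to the line ax + by + c = 0.
    return abs(a * x0 + b * y0 + c) / np.hypot(a, b)

def vertical_offset(cx, cy, image_width):
    # Distance of the hull centroid from the vertical center line.
    return point_line_distance(1.0, 0.0, -image_width / 2.0, cx, cy)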
C. Lane Departure Classification
The plots in Fig. 4 (b) and (c) illustrate the typical patterns observed when the lane lines are crossed towards the left or right, respectively. These two plots were obtained while a driver changes from the right lane to the left lane and back to the right lane. It is clear that a high peak starts developing as the driver departs from the center of the lane, followed by a depression zone and a trend back to zero. Detecting and measuring this pattern is the core idea of our algorithm (Algo. 2) that predicts the type of lane crossing that has occurred. Our test dataset had a limited sample of verified incursion events (N=3). Based on available experiments on

Algorithm 3 Quickhull Algorithm
function QUICKHULL(S)
    A ← leftmost point
    B ← rightmost point
    S1 ← points in S right of oriented line AB
    S2 ← points in S right of oriented line BA
    FindHull(S1, A, B)
    FindHull(S2, B, A)
function FINDHULL(Sk, P, Q)              ▷ get points right of P to Q
    for each point in Sk do
        C ← farthest point from PQ
        S0 ← points inside triangle PCQ
        S1 ← points right of line PC
        S2 ← points right of line CQ
        FindHull(S1, P, C)
        FindHull(S2, C, Q)
    Output ← ConvexHull

V. RESULTS
A. Lane Detection Results
The IoU threshold was set at 0.5; this means that any predicted object is considered a TP if its IoU with respect to the ground truth is greater than 0.5. The overall mAP was calculated to be 0.82 for lane detections on the test dataset.
\mathrm{IoU}(A, B) = \frac{A \cap B}{A \cup B} \tag{2}

\mathrm{mAP} = \frac{1}{|\mathrm{thresholds}|} \sum_{t} \frac{TP(t)}{TP(t) + FP(t) + FN(t)} \tag{3}
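A literal transcription of Eqs. (2) and (3) for binary masks, with hypothetical helper names:

import numpy as np

def mask_iou(pred, gt):
    # Eq. (2): intersection over union of two boolean masks.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

def mean_average_precision(tp, fp, fn, thresholds):
    # Eq. (3): average of TP / (TP + FP + FN) over the IoU thresholds,
    # where tp, fp, and fn map each threshold t to its counts.
    return sum(tp[t] / (tp[t] + fp[t] + fn[t]) for t in thresholds) / len(thresholds)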
B. Lane Crossing Algorithm Results
We tested our lane departure algorithm using 30 short driving videos from 1 to 3 minutes long each. Diversity of drivers and environmental conditions was considered when selecting the videos. Each of the videos had at least one occurrence of lane crossing, and in some circumstances one line of the lane or a portion of it was not visible (e.g., a lane bifurcation at a highway exit). We used the algorithm to test for lane changes to the left, lane changes to the right, and lane incursions; the results are summarized in Table I. While we did not
VI. CONCLUSION
We proposed a novel algorithm to detect and differentiate lane departure events, including incursions, on lower-resolution video recordings with challenging conditions. In our novel implementation, the model was trained to detect lane departures with a sensitivity of 0.82. Future investigations will expand our model to a wider variety of vehicle classes, which will likely improve FP rates, and use segmented masks to detect lane types and improve lane incursion detection. An area that should be further explored is the use of the horizontal offset as a means to detect proximity even when image perspectives are subject to chirp effect. While our implementation was performed using only prerecorded videos, utilizing a convex hull centroid offset may permit lane tracking during real-time implementation on vehicles. Our results underscore the feasibility and utility of applying DL models to autonomous driving systems, LDW/LDP, advanced driver assistance systems, and on-road interventions to improve safety in medically at-risk populations. | 2,806
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | Image feature-based lane detection is a well-researched area of computer vision @cite_13. The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi @cite_10, lane detection was done using inverse perspective mapping to remove the perspective effect and a horizontal black-white-black transition. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. In 2005, Lee and Yi @cite_2 introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). It was built upon methods previously proposed by Lee @cite_18, which proposed a linear lane model and an edge distribution function (EDF) as well as a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs) by taking advantage of linear regression analysis. Despite improvements, the model performed poorly at detecting curved lanes. | {
"abstract": [
"This paper describes the generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increment road safety. Based on a full-custom massively parallel hardware, it allows to detect both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings) at a rate of 10 Hz. Thanks to a geometrical transform supported by a specific hardware module, the perspective effect is removed from both left and right stereo images; the left is used to detect lane markings with a series of morphological filters, while both remapped stereo images are used for the detection of free-space in front of the vehicle. The output of the processing is displayed on both an on-board monitor and a control-panel to give visual feedbacks to the driver. The system was tested on the mobile laboratory (MOB-LAB) experimental land vehicle, which was driven for more than 3000 km along extra-urban roads and freeways at speeds up to 80 km h, and demonstrated its robustness with respect to shadows and changing illumination conditions, different road textures, and vehicle movement.",
"This paper presents a feature-based machine vision system for estimating lane-departure of a traveling vehicle on a road. The system uses edge information to define an edge distribution function (EDF), the histogram of edge magnitudes with respect to edge orientation angle. The EDF enables the edge-related information and the lane-related information to be connected. Examining the EDF by the shape parameters of the local maxima and the symmetry axis results in identifying whether a change in the traveling direction of a vehicle has occurred. The EDF minimizes the effect of noise and the use of heuristics, and eliminates the task of localizing lane marks. The proposed system enhances the adaptability to cope with the random and dynamic environment of a road scene and leads to a reliable lane-departure warning system.",
"Abstract Statistics show that worldwide motor vehicle collisions lead to significant deaths and disabilities as well as substantial financial costs to both society and the individuals involved. Unintended lane departure is a leading cause of road fatalities by the collision. To reduce the number of traffic accidents and to improve driver’s safety lane departure warning (LDW), the system has emerged as a promising tool. Vision-based lane detection and departure warning system has been investigated over two decades. During this period, many different problems related to lane detection and departure warning have been addressed. This paper provides an overview of current LDW system, describing in particular pre-processing, lane models, lane de Ntection techniques and departure warning system.",
"This paper presents a lane-departure identification (LDI) system of a traveling vehicle on a structured road with lane marks. As is the case with modified version of the previous EDF-based LDI approach [J.W. Lee, A machine vision system for lane-departure detection, CVIU 86 (2002) 52-78], the new system increases the number of lane-related parameters and introduces departure ratios to determine the instant of lane departure and a linear regression (LR) to minimize wrong decisions due to noise effects. To enhance the robustness of LDI, we conceive of a lane boundary pixel extractor (LBPE) capable of extracting pixels expected to be on lane boundaries. Then, the Hough transform utilizes the pixels from the LBPE to provide the lane-related parameters such as an orientation and a location parameter. The fundamental idea of the proposed LDI is based on an observation that the ratios of orientations and location parameters of left- and right-lane boundaries are equal to one as far as the optical axis of a camera mounted on a vehicle is coincident with the center of lane. The ratios enable the lane-related parameters and the symmetrical property of both lane boundaries to be connected. In addition, the LR of the lane-related parameters of a series of successive images plays the role of determining the trend of a vehicle's traveling direction and the error of the LR is used to avoid a wrong LDI. We show the efficiency of the proposed LDI system with some real images."
],
"cite_N": [
"@cite_10",
"@cite_18",
"@cite_13",
"@cite_2"
],
"mid": [
"2136929315",
"1966957946",
"2745410201",
"2053131138"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | Motor vehicle collisions are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people worldwide die and 50 million are injured every year due to traffic-related accidents. Traffic accidents result in considerable economic cost, currently estimated at 1-2% of average gross national product ($518 billion globally per year) [1]. According to the European Accident Research and Safety Report 2013, more than 90% of driving accidents are caused by safety-critical driver errors [2]. Lane incursions due to driver error are a common cause of accidents. Estimates from the U.S. National Highway Traffic Safety Administration indicate that 11% of accidents are due to the driver inappropriately departing from their lane while traveling [3]. To address this risk, Lane Departure Warning (LDW) systems are becoming a commonly deployed driver assistance technology aimed at improving on-road safety and reducing traffic accidents [4]. LDW systems typically detect lanes from low-level image features such as edges and contours. Several solutions aimed at detecting vehicle lane position and alerting drivers to potentially unsafe lane departure events have been developed [5]. For example, simple image feature-based systems have been developed to detect straight lines, polynomials, cubic splines, piecewise linear segments, and circular arcs relevant to lane detection [6]. However, image feature-based systems have predictable limitations and can become unreliable with increasing road scene complexity (e.g., shadows, low visibility, occlusions, and curves) [7]. Due to these limitations, researchers have been turning their attention to machine learning (ML) based methods to overcome the above-mentioned shortcomings, most recently deep neural networks (DNNs). Region-based Convolutional Neural Networks (RCNNs), a type of DNN architecture, outperform other DNN architectures in object detection and recognition applications. Due to these key advantages, we have chosen RCNNs to build a simple but robust LDW system that overcomes the limitations of previous LDW systems.
The primary application of this project is to detect unsafe driver behaviors, like lane incursions or departures, in at-risk drivers with diabetes. Diabetes affects nearly 10% of the population in the USA and continues to increase with urbanization, obesity, and aging [8]. Drivers with diabetes have a significantly elevated crash risk compared to the general population, presenting a pressing problem of public health and patient safety. On-road risk in diabetes is linked to disease and unsafe physiologic states (e.g., hypoglycemia). These key factors make this population a prime target for improving safety with driver assistance systems like LDW.
This model is capable of processing large data collections representing multiple terabytes (TB) of video collected from at-risk drivers with diabetes. We present this lane detection model using a Mask-RCNN architecture to analyze lane departures and incursions from lane detections in challenging, lower-resolution, and noisy video recordings. Lane incursion is defined as performing an incomplete lane departure while quickly returning to the original lane of travel. While previous literature addresses simple lane line detection, our model focuses on advancing these models by improving detection and segmentation of the driving lane area. Once the driving lane area is detected in the video frame, we tracked the centroid of a convex hull region representing the driving lane area. The centroid location with respect to the image's vertical center line was used to determine whether the driver was driving within the lane or deviating from it. Subsequently, the time-series relative lane position was used to infer driving behavior. This paper is organized as follows: Related Works describes previous related work done on lane departure using image-based features and machine learning based approaches. Custom Dataset provides general information about the data collected, annotated, and used for this project. Proposed Model and Lane Departure presents our approach for detecting lane departure events. Finally, the summary and discussion of our work is presented in Conclusions.
A. Image Feature Based Methods
Image feature-based lane detection is a well-researched area of computer vision [14]. The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi [15], lane detection was done using inverse perspective mapping to remove the perspective effect and a horizontal black-white-black transition. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. Some of the limitations of their proposed system were its computational complexity, the need for well-painted lanes, and assumptions such as having a lane within the region of interest and a fixed minimum lane width.
In 2005, Lee and Yi [16] introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). It was built upon methods previously proposed by Lee [17], which proposed a linear lane model and an edge distribution function (EDF) as well as a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs) by taking advantage of linear regression analysis. Despite improvements, the model performed poorly at detecting curved lanes. Some of the low-level image feature based models include an initial layer to normalize illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) [9]. Lately, approaches have been incorporating machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image feature-based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in an inability to capture local image feature based information. End-to-end learning from deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution.
B. Deep Learning Based Methods
To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs are becoming an increasingly popular method. Lane detection on the images shown in Fig. 1 (a-d) is nearly impossible without using a CNN. Kim and Lee [18] combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes which include roadside trees, fences, or intersections. In their method, the CNN was primarily used to enhance images. In [19], they showed how existing CNNs can be used to perform lane detection while running at frame rates required for a real-time system. Also, Ozcan et al. [20] discussed how they overcame the difficulties of detecting traffic signs in low-quality noisy videos using a chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN.
More recently, in [21], they used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter, simultaneously optimizing the front-view and top-view cameras. The hat-like filter extracts all potential lane line candidates, thus removing most of the FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded. Within the top-view image, structures other than lane lines such as ground arrows and words were also removed.
C. Lane Departure Models
The objective of Lane Departure Prediction (LDP) is to predict whether the driver is about to leave the lane, with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based, vehicle-position estimation, and detection of the lane boundary using real-time captured road images. All of them use real-time captured images [22].
The Time-to-Line-Crossing (TLC) model has been extensively used on production vehicles [23]. TLC systems evaluate the lane and vehicle state relying on vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used depending on the road geometry and vehicle type. Among these methods, the most common is to predict the road boundary and the vehicle trajectory, and then calculate the intersection time of the two at the current driving speed. On small-curvature roads, the TLC can be computed as the ratio of lateral distance to lateral velocity or the ratio of the distance to the line crossing [24]. Studies suggest that TLC tends to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary [22], [24]. Wang et al. [22] proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a Hidden Markov Model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a joint probability density distribution of a Gaussian mixture model (GMM) over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can discern the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07%.
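On straight, small-curvature segments the TLC computation mentioned above reduces to a simple ratio; writing y for the lateral distance to the lane boundary and \dot{y} for the lateral velocity (symbols ours, not the survey's):

\mathrm{TLC} = \frac{y}{\dot{y}}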
III. CUSTOM DATASET
Our dataset was collected as part of a clinical study in which 77 legally licensed and active older drivers (ages 65-90, mean = 75.7; 36 female, 41 male) were recruited. The aim of that project is to study the driving behavior of individuals with disabling conditions. Drivers who had physical limitations were permitted if they met state licensure standards, as these limitations are ubiquitous in older adults. Each driver drove in their typical environment with their typical strategies and driving behaviors for 3 months (the total data collection spans nearly 19.3 years). One of our contributions to this study was the detection of lane departures and incursions. For this task, we used 4,162 annotated images to train our model. The images had a resolution of 752x480 and the videos run at an average of 25 fps. These images were split into Training (70%), Validation (15%), and Test (15%) sets. Among all videos, we tested our lane crossing/departure algorithm on 30 diverse videos.
IV. PROPOSED MODEL AND LANE DEPARTURE
Lane detection in the presence of noisy, lower-resolution image data presents significant challenges. Illumination, color contrasts, and image resolution immediately prohibit the use of low-level image feature-based algorithms for detecting the lanes. Consequently, we turned our attention to machine learning/deep learning (DL) based models to detect lane regions, as these models perform better than low-level image feature-based algorithms on the lower-quality recordings in our custom dataset. We selected the Mask-RCNN [25] architecture since we were mainly interested in segmented lane regions within the image and we could tolerate 5 fps [13], while it provides state-of-the-art mAP (mean average precision). The Mask-RCNN architecture, illustrated in Fig. 3, can be divided into two networks. The first network is the region proposal network (RPN) used for generating region proposals, and the second network uses these proposals to detect objects. The video processing pipeline, including detection and tracking, is given in Algo. 1.
Algorithm 1 Video Control Algorithm
procedure VIDEOCAPTURE(video)            ▷ mp4 input
    frame ← video
    while frame ≠ Null do                ▷ loop until video end
        M ← detection mask
        Display ← Tracking(M)
        frame ← video
A. Lane Detection
Our Mask-RCNN based model was configured using ResNet-50 as the backbone, with a learning rate of 0.001, a learning momentum of 0.9, and 256 RPN anchors per image. It was trained to detect lane regions only, using a segmented mask, in contrast to other lane detection models whose main goal is to detect the lane lines. Detecting lane regions instead of lane lines was considerably easier given the quality of the images we were working with. This approach provided us with a lane segmentation mask, which was later used to track the lane regions. To mitigate FPs, we used a Region of Interest (ROI) skim mask that concealed areas not relevant to our view of interest. Fig. 1 (a-d) provides some example detections during daytime, nighttime, and shadowy conditions on the road.
B. Lane Tracking
The mask tracking algorithm used for lane departure and incursion predictions is explained in Algo. 2. Once the lane mask regions were detected, the point coordinates forming the mask were used to compute a convex hull enclosing the mask. For this purpose, we employed the Quickhull algorithm, shown in Algo. 3, to obtain a convex hull polygon. Next, the centroid of the convex hull was calculated. Our model used the centroid to track the vertical and horizontal offsets of the vehicle within the lane, as shown in Fig. 2. For reference, the vertical offset was calculated with respect to an imaginary vertical line in the middle of the image, as illustrated in Fig. 2. The horizontal reference was chosen to be an imaginary line between the vehicle and the detected mask. The horizontal offset was not used in this work; however, it was implemented to measure separation distance, which may be useful for studying acceleration and braking.
The offsets were calculated using the distance between a line and a point in 2D space, Eq. (1). The offset units were measured in number of pixels. These offsets were first tracked over time, then normalized by their means, centered at zero, and smoothed using a Fix-lag Kalman filter, as shown in Fig. 4 (a). We found that centering around zero allows better generalization across drivers, since cameras are not necessarily mounted in the same location on different vehicles.
\mathrm{distance}(ax + by + c = 0,\; (x_0, y_0)) = \frac{|a x_0 + b y_0 + c|}{\sqrt{a^2 + b^2}} \tag{1}
C. Lane Departure Classification
The plots in Fig. 4 (b) and (c) illustrate the typical patterns observed when the lane lines are crossed towards the left or right, respectively. These two plots were obtained while a driver changes from the right lane to the left lane and back to the right lane. It is clear that a high peak starts developing as the driver departs from the center of the lane, followed by a depression zone and a trend back to zero. Detecting and measuring this pattern is the core idea of our algorithm (Algo. 2) that predicts the type of lane crossing that has occurred. Our test dataset had a limited sample of verified incursion events (N=3). Based on available experiments on

Algorithm 3 Quickhull Algorithm
function QUICKHULL(S)
    A ← leftmost point
    B ← rightmost point
    S1 ← points in S right of oriented line AB
    S2 ← points in S right of oriented line BA
    FindHull(S1, A, B)
    FindHull(S2, B, A)
function FINDHULL(Sk, P, Q)              ▷ get points right of P to Q
    for each point in Sk do
        C ← farthest point from PQ
        S0 ← points inside triangle PCQ
        S1 ← points right of line PC
        S2 ← points right of line CQ
        FindHull(S1, P, C)
        FindHull(S2, C, Q)
    Output ← ConvexHull

V. RESULTS
A. Lane Detection Results
The IoU threshold was set at 0.5; this means that any predicted object is considered a TP if its IoU with respect to the ground truth is greater than 0.5. The overall mAP was calculated to be 0.82 for lane detections on the test dataset.
\mathrm{IoU}(A, B) = \frac{A \cap B}{A \cup B} \tag{2}

\mathrm{mAP} = \frac{1}{|\mathrm{thresholds}|} \sum_{t} \frac{TP(t)}{TP(t) + FP(t) + FN(t)} \tag{3}
B. Lane Crossing Algorithm Results
We tested our lane departure algorithm using 30 short driving videos from 1 to 3 minutes long each. Diversity of drivers and environmental conditions was considered when selecting the videos. Each of the videos had at least one occurrence of lane crossing, and in some circumstances one line of the lane or a portion of it was not visible (e.g., a lane bifurcation at a highway exit). We used the algorithm to test for lane changes to the left, lane changes to the right, and lane incursions; the results are summarized in Table I. While we did not
VI. CONCLUSION
We proposed a novel algorithm to detect and differentiate lane departure events, including incursions, on lower-resolution video recordings with challenging conditions. In our novel implementation, the model was trained to detect lane departures with a sensitivity of 0.82. Future investigations will expand our model to a wider variety of vehicle classes, which will likely improve FP rates, and use segmented masks to detect lane types and improve lane incursion detection. An area that should be further explored is the use of the horizontal offset as a means to detect proximity even when image perspectives are subject to chirp effect. While our implementation was performed using only prerecorded videos, utilizing a convex hull centroid offset may permit lane tracking during real-time implementation on vehicles. Our results underscore the feasibility and utility of applying DL models to autonomous driving systems, LDW/LDP, advanced driver assistance systems, and on-road interventions to improve safety in medically at-risk populations. | 2,806
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | Some of the low-level image feature based models include an initial layer to normalize illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) @cite_6. Lately, approaches have been incorporating machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image feature-based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in an inability to capture local image feature based information. End-to-end learning from deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution. | {
"abstract": [
"A lane-detection system is an important component of many intelligent transportation systems. We present a robust lane-detection-and-tracking algorithm to deal with challenging scenarios such as a lane curvature, worn lane markings, lane changes, and emerging, ending, merging, and splitting lanes. We first present a comparative study to find a good real-time lane-marking classifier. Once detection is done, the lane markings are grouped into lane-boundary hypotheses. We group left and right lane boundaries separately to effectively handle merging and splitting lanes. A fast and robust algorithm, based on random-sample consensus and particle filtering, is proposed to generate a large number of hypotheses in real time. The generated hypotheses are evaluated and grouped based on a probabilistic framework. The suggested framework effectively combines a likelihood-based object-recognition algorithm with a Markov-style process (tracking) and can also be applied to general-part-based object-tracking problems. An experimental result on local streets and highways shows that the suggested algorithm is very reliable."
],
"cite_N": [
"@cite_6"
],
"mid": [
"2146575011"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | Motor vehicle collisions are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people worldwide die and 50 million are injured every year due to traffic-related accidents. Traffic accidents result in considerable economic cost, currently estimated at 1-2% of average gross national product ($518 billion globally per year) [1]. According to the European Accident Research and Safety Report 2013, more than 90% of driving accidents are caused by safety-critical driver errors [2]. Lane incursions due to driver error are a common cause of accidents. Estimates from the U.S. National Highway Traffic Safety Administration indicate that 11% of accidents are due to the driver inappropriately departing from their lane while traveling [3]. To address this risk, Lane Departure Warning (LDW) systems are becoming a commonly deployed driver assistance technology aimed at improving on-road safety and reducing traffic accidents [4]. LDW systems typically detect lanes from low-level image features such as edges and contours. Several solutions aimed at detecting vehicle lane position and alerting drivers to potentially unsafe lane departure events have been developed [5]. For example, simple image feature-based systems have been developed to detect straight lines, polynomials, cubic splines, piecewise linear segments, and circular arcs relevant to lane detection [6]. However, image feature-based systems have predictable limitations and can become unreliable with increasing road scene complexity (e.g., shadows, low visibility, occlusions, and curves) [7]. Due to these limitations, researchers have been turning their attention to machine learning (ML) based methods to overcome the above-mentioned shortcomings, most recently deep neural networks (DNNs). Region-based Convolutional Neural Networks (RCNNs), a type of DNN architecture, outperform other DNN architectures in object detection and recognition applications. Due to these key advantages, we have chosen RCNNs to build a simple but robust LDW system that overcomes the limitations of previous LDW systems.
The primary application of this project is to detect unsafe driver behaviors, like lane incursions or departures, in at-risk drivers with diabetes. Diabetes affects nearly 10% of the population in the USA and continues to increase with urbanization, obesity, and aging [8]. Drivers with diabetes have a significantly elevated crash risk compared to the general population, presenting a pressing problem of public health and patient safety. On-road risk in diabetes is linked to disease and unsafe physiologic states (e.g., hypoglycemia). These key factors make this population a prime target for improving safety with driver assistance systems like LDW.
This model is capable of processing large data collections representing multiple terabytes (TB) of video collected from at-risk drivers with diabetes. We present this lane detection model using a Mask-RCNN architecture to analyze lane departures and incursions from lane detections in challenging, lower-resolution, and noisy video recordings. Lane incursion is defined as performing an incomplete lane departure while quickly returning to the original lane of travel. While previous literature addresses simple lane line detection, our model focuses on advancing these models by improving detection and segmentation of the driving lane area. Once the driving lane area is detected in the video frame, we tracked the centroid of a convex hull region representing the driving lane area. The centroid location with respect to the image's vertical center line was used to determine whether the driver was driving within the lane or deviating from it. Subsequently, the time-series relative lane position was used to infer driving behavior. This paper is organized as follows: Related Works describes previous related work done on lane departure using image-based features and machine learning based approaches. Custom Dataset provides general information about the data collected, annotated, and used for this project. Proposed Model and Lane Departure presents our approach for detecting lane departure events. Finally, the summary and discussion of our work is presented in Conclusions.
A. Image Feature Based Methods
Image feature-based lane detection is a well-researched area of computer vision [14]. The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi [15], lane detection was done using inverse perspective mapping to remove the perspective effect and a horizontal black-white-black transition. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. Some of the limitations of their proposed system were its computational complexity, the need for well-painted lanes, and assumptions such as having a lane within the region of interest and a fixed minimum lane width.
In 2005, Lee and Yi [16] introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). It was built upon methods previously proposed by Lee [17], which proposed a linear lane model and an edge distribution function (EDF) as well as a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs) by taking advantage of linear regression analysis. Despite improvements, the model performed poorly at detecting curved lanes. Some of the low-level image feature based models include an initial layer to normalize illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) [9]. Lately, approaches have been incorporating machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image feature-based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in an inability to capture local image feature based information. End-to-end learning from deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution.
B. Deep Learning Based Methods
To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs are becoming an increasingly popular method. Lane detection on the images shown in Fig. 1 (a-d) is nearly impossible without using a CNN. Kim and Lee [18] combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes which include roadside trees, fences, or intersections. In their method, the CNN was primarily used to enhance images. In [19], they showed how existing CNNs can be used to perform lane detection while running at frame rates required for a real-time system. Also, Ozcan et al. [20] discussed how they overcame the difficulties of detecting traffic signs in low-quality noisy videos using a chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN.
More recently, in [21], they used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter, simultaneously optimizing the front-view and top-view cameras. The hat-like filter extracts all potential lane line candidates, thus removing most of the FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded. Within the top-view image, structures other than lane lines such as ground arrows and words were also removed.
C. Lane Departure Models
The objective of Lane Departure Prediction (LDP) is to predict whether the driver is about to leave the lane, with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based, vehicle-position estimation, and detection of the lane boundary using real-time captured road images. All of them use real-time captured images [22].
The Time-to-Line-Crossing (TLC) model has been extensively used on production vehicles [23]. TLC systems evaluate the lane and vehicle state relying on vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used depending on the road geometry and vehicle type. Among these methods, the most common is to predict the road boundary and the vehicle trajectory, and then calculate the intersection time of the two at the current driving speed. On small-curvature roads, the TLC can be computed as the ratio of lateral distance to lateral velocity or the ratio of the distance to the line crossing [24]. Studies suggest that TLC tends to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary [22], [24]. Wang et al. [22] proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a Hidden Markov Model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a joint probability density distribution of a Gaussian mixture model (GMM) over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can discern the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07%.
III. CUSTOM DATASET
Our dataset was collected as part of a clinical study in which 77 legally licensed and active older drivers (ages 65-90, mean = 75.7; 36 female, 41 male) were recruited. The aim of that project is to study the driving behavior of individuals with disabling conditions. Drivers who had physical limitations were permitted if they met state licensure standards, as these limitations are ubiquitous in older adults. Each driver drove in their typical environment with their typical strategies and driving behaviors for 3 months (the total data collection spans nearly 19.3 years). One of our contributions to this study was the detection of lane departures and incursions. For this task, we used 4,162 annotated images to train our model. The images had a resolution of 752x480 and the videos run at an average of 25 fps. These images were split into Training (70%), Validation (15%), and Test (15%) sets. Among all videos, we tested our lane crossing/departure algorithm on 30 diverse videos.
IV. PROPOSED MODEL AND LANE DEPARTURE
Lane detection in the presence of noisy, lower-resolution image data presents significant challenges. Illumination, color contrasts, and image resolution immediately prohibit the use of low-level image feature-based algorithms for detecting the lanes. Consequently, we turned our attention to machine learning/deep learning (DL) based models to detect lane regions, as these models perform better than low-level image feature-based algorithms on the lower-quality recordings in our custom dataset. We selected the Mask-RCNN [25] architecture since we were mainly interested in segmented lane regions within the image and we could tolerate 5 fps [13], while it provides state-of-the-art mAP (mean average precision). The Mask-RCNN architecture, illustrated in Fig. 3, can be divided into two networks. The first network is the region proposal network (RPN) used for generating region proposals, and the second network uses these proposals to detect objects. The video processing pipeline, including detection and tracking, is given in Algo. 1.
Algorithm 1 Video Control Algorithm
procedure VIDEOCAPTURE(video)            ▷ mp4 input
    frame ← video
    while frame ≠ Null do                ▷ loop until video end
        M ← detection mask
        Display ← Tracking(M)
        frame ← video
A. Lane Detection
Our Mask-RCNN based model was configured using ResNet-50 as the backbone, with a learning rate of 0.001, a learning momentum of 0.9, and 256 RPN anchors per image. It was trained to detect lane regions only, using a segmented mask, in contrast to other lane detection models whose main goal is to detect the lane lines. Detecting lane regions instead of lane lines was considerably easier given the quality of the images we were working with. This approach provided us with a lane segmentation mask, which was later used to track the lane regions. To mitigate FPs, we used a Region of Interest (ROI) skim mask that concealed areas not relevant to our view of interest. Fig. 1 (a-d) provides some example detections during daytime, nighttime, and shadowy conditions on the road.
B. Lane Tracking
The mask tracking algorithm used for lane departure and incursion predictions is explained in Algo. 2. Once the lane mask regions were detected, the point coordinates forming the mask were used to compute a convex hull enclosing the mask. For this purpose, we employed the Quickhull algorithm, shown in Algo. 3, to obtain a convex hull polygon. Next, the centroid of the convex hull was calculated. Our model used the centroid to track the vertical and horizontal offsets of the vehicle within the lane, as shown in Fig. 2. For reference, the vertical offset was calculated with respect to an imaginary vertical line in the middle of the image, as illustrated in Fig. 2. The horizontal reference was chosen to be an imaginary line between the vehicle and the detected mask. The horizontal offset was not used in this work; however, it was implemented to measure separation distance, which may be useful for studying acceleration and braking.
The offsets were calculated using the distance between a line and a point in 2D space, Eq. (1). The offset units were measured in number of pixels. These offsets were first tracked over time, then normalized by their means, centered at zero, and smoothed using a Fix-lag Kalman filter, as shown in Fig. 4 (a). We found that centering around zero allows better generalization across drivers, since cameras are not necessarily mounted in the same location on different vehicles.
\mathrm{distance}(ax + by + c = 0,\; (x_0, y_0)) = \frac{|a x_0 + b y_0 + c|}{\sqrt{a^2 + b^2}} \tag{1}
C. Lane Departure Classification
The plots in Fig. 4 (b) and (c) illustrate the typical patterns observed when the lane lines are crossed towards the left or right, respectively. These two plots were obtained while a driver changes from the right lane to the left lane and back to the right lane. It is clear that a high peak starts developing as the driver departs from the center of the lane, followed by a depression zone and a trend back to zero. Detecting and measuring this pattern is the core idea of our algorithm (Algo. 2) that predicts the type of lane crossing that has occurred. Our test dataset had a limited sample of verified incursion events (N=3). Based on available experiments on

Algorithm 3 Quickhull Algorithm
function QUICKHULL(S)
    A ← leftmost point
    B ← rightmost point
    S1 ← points in S right of oriented line AB
    S2 ← points in S right of oriented line BA
    FindHull(S1, A, B)
    FindHull(S2, B, A)
function FINDHULL(Sk, P, Q)              ▷ get points right of P to Q
    for each point in Sk do
        C ← farthest point from PQ
        S0 ← points inside triangle PCQ
        S1 ← points right of line PC
        S2 ← points right of line CQ
        FindHull(S1, P, C)
        FindHull(S2, C, Q)
    Output ← ConvexHull

V. RESULTS
A. Lane Detection Results
The IoU threshold was set at 0.5; this means that any predicted object is considered a TP if its IoU with respect to the ground truth is greater than 0.5. The overall mAP was calculated to be 0.82 for lane detections on the test dataset.
\mathrm{IoU}(A, B) = \frac{A \cap B}{A \cup B} \tag{2}

\mathrm{mAP} = \frac{1}{|\mathrm{thresholds}|} \sum_{t} \frac{TP(t)}{TP(t) + FP(t) + FN(t)} \tag{3}
B. Lane Crossing Algorithm Results
We tested our lane departure algorithm using 30 short driving videos from 1 to 3 minutes long each. Diversity of drivers and environmental conditions was considered when selecting the videos. Each of the videos had at least one occurrence of lane crossing, and in some circumstances one line of the lane or a portion of it was not visible (e.g., a lane bifurcation at a highway exit). We used the algorithm to test for lane changes to the left, lane changes to the right, and lane incursions; the results are summarized in Table I. While we did not
VI. CONCLUSION
We proposed a novel algorithm to detect and differentiate lane departure events, including incursions, on lower-resolution video recordings with challenging conditions. In our novel implementation, the model was trained to detect lane departures with a sensitivity of 0.82. Future investigations will expand our model to a wider variety of vehicle classes, which will likely improve FP rates, and use segmented masks to detect lane types and improve lane incursion detection. An area that should be further explored is the use of the horizontal offset as a means to detect proximity even when image perspectives are subject to chirp effect. While our implementation was performed using only prerecorded videos, utilizing a convex hull centroid offset may permit lane tracking during real-time implementation on vehicles. Our results underscore the feasibility and utility of applying DL models to autonomous driving systems, LDW/LDP, advanced driver assistance systems, and on-road interventions to improve safety in medically at-risk populations. | 2,806
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs are becoming an increasingly popular method. Lane detection on the images shown in Fig. 1 (a-d) is nearly impossible without using a CNN. Kim and Lee @cite_3 combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes which include roadside trees, fences, or intersections. In their method, the CNN was primarily used to enhance images. In @cite_15, they showed how existing CNNs can be used to perform lane detection while running at frame rates required for a real-time system. Also, @cite_11 discussed how they overcame the difficulties of detecting traffic signs in low-quality noisy videos using a chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN. | {
"abstract": [
"Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.",
"In this paper, we introduce a robust lane detection method based on the combined convolutional neural network (CNN) with random sample consensus (RANSAC) algorithm. At first, we calculate edges in an image using a hat shape kernel and then detect lanes using the CNN combined with the RANSAC. If the road scene is simple, we can easily detect the lane by using the RANSAC algorithm only. But if the road scene is complex and includes roadside trees, fence, or intersection etc., then it is hard to detect lanes robustly because of noisy edges. To alleviate that problem, we use CNN in the lane detection before and after applying the RANSAC algorithm. In training process of CNN, input data consist of edge images in a region of interest (ROI) and target data become the images that have only drawn real white color lane in black background. The CNN structure consists of 8 layers with 3 convolutional layers, 2 subsampling layers and multi-layer perceptron (MLP) including 3 fully-connected layers. Convolutional and subsampling layers are hierarchically arranged and their arrangement represents a deep structure in deep learning. As a result, proposed lane detection algorithm successfully eliminates noise lines and the performance is found to be better than other formal line detection algorithms such as RANSAC and hough transform.",
"Accurate traffic sign detection, from vehicle-mounted cameras, is an important task for autonomous driving and driver assistance. It is a challenging task especially when the videos acquired from mobile cameras on portable devices are low-quality. In this paper, we focus on naturalistic videos captured from vehicle-mounted cameras. It has been shown that Region-based Convolutional Neural Networks provide high accuracy rates in object detection tasks. Yet, they are computationally expensive, and often require a GPU for faster training and processing. In this paper, we present a new method, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of much faster training and testing, and comparable or better performance without requiring specialized processors. Our test videos cover a range of different weather and daytime scenarios. The experimental results show the promise of the proposed method and a faster performance compared to the other detectors."
],
"cite_N": [
"@cite_15",
"@cite_3",
"@cite_11"
],
"mid": [
"1585377561",
"1829670322",
"2769260766"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | Motor vehicle collisions are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people worldwide die and 50 million are injured every year due to traffic-related accidents. Traffic accidents result in considerable economic cost, currently estimated at 1-2% of average gross national product ($518 billion globally per year) [1]. According to the European Accident Research and Safety Report 2013, more than 90% of driving accidents are caused by safety-critical driver errors [2]. Lane incursions due to driver error are a common cause of accidents. Estimates from the U.S. National Highway Traffic Safety Administration indicate that 11% of accidents are due to the driver inappropriately departing from their lane while traveling [3]. To address this risk, Lane Departure Warning (LDW) systems are becoming a commonly deployed driver assistance technology aimed at improving on-road safety and reducing traffic accidents [4]. LDW systems typically detect lanes from low-level image features such as edges and contours. Several solutions aimed at detecting vehicle lane position and alerting drivers to potentially unsafe lane departure events have been developed [5]. For example, simple image feature-based systems have been developed to detect straight lines, polynomials, cubic splines, piecewise linear segments, and circular arcs relevant to lane detection [6]. However, image feature-based systems have predictable limitations and can become unreliable with increasing road scene complexity (e.g., shadows, low visibility, occlusions, and curves) [7]. Due to these limitations, researchers have been turning their attention to machine learning (ML) based methods to overcome the above-mentioned shortcomings, most recently deep neural networks (DNNs). Region-based Convolutional Neural Networks (RCNNs), a type of DNN architecture, outperform other DNN architectures in object detection and recognition applications. Due to these key advantages, we have chosen RCNNs to build a simple but robust LDW system that overcomes the limitations of previous LDW systems.
The primary application of this project is to detect unsafe driver behaviors, like lane incursions or departures, in at-risk drivers with diabetes. Diabetes affects nearly 10% of the population in the USA and continues to increase with urbanization, obesity, and aging [8]. Drivers with diabetes have a significantly elevated crash risk compared to the general population, presenting a pressing problem of public health and patient safety. On-road risk in diabetes is linked to disease and unsafe physiologic states (e.g., hypoglycemia). These key factors make this population a prime target for improving safety with driver assistance systems like LDW.
This model is capable of processing a large data collection representing multiple terabytes (TB) of video collected from at-risk drivers with diabetes. We present this lane detection model, built on a Mask-RCNN architecture, to analyze lane departures and incursions from lane detections in challenging, lower-resolution, and noisy video recordings. Lane incursion is defined as performing an incomplete lane departure while quickly returning back to the original lane of travel. While previous literature addresses simple lane line detection, our model focuses on advancing these models by improving detection and segmentation of the driving lane area. Once the driving lane area is detected in the video frame, we tracked the centroid of the convex hull region representing the driving lane area. The centroid location with respect to the image's vertical center line was used to determine whether the driver was driving within the lane or departing from it. Subsequently, the time series of relative lane position was used to infer driving behavior. This paper is organized as follows: Related Works describes previous related work on lane departure using image-feature-based and machine learning based approaches. Custom Dataset provides general information about the data collected, annotated, and used for this project. Proposed Model and Lane Departure presents our approach for detecting lane departure events. Finally, the summary and discussion of our work is presented in Conclusions.
A. Image Feature Based Methods
Image feature-based lane detection is a well-researched area of computer vision [14]. The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi [15], lane detection was done using inverse perspective mapping to remove the perspective effect, together with the detection of horizontal black-white-black transitions. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. Limitations of their proposed system included computational complexity, the need for well-painted lanes, and assumptions such as having a lane within the region of interest and a fixed minimum lane width.
In 2005, Lee and Yi [16] introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). It was built upon methods previously proposed by Lee [17], who proposed a linear lane model and an edge distribution function (EDF), as well as a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs) by taking advantage of linear regression analysis. Despite improvements, the model performed poorly at detecting curved lanes. Some of the low-level image feature based models include an initial layer to normalize illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) [9]. Lately, approaches have been incorporating machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image feature-based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in an inability to capture local image-feature-based information. End-to-end learning with deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution.
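As an illustration of the RANSAC step referenced above, the following sketch fits a single line to noisy edge pixels by random sampling and inlier counting. This is a generic, numpy-only rendering of the technique cited in [9], not code from any of the systems discussed; the iteration count and inlier tolerance are illustrative assumptions.

```python
# Generic RANSAC line fit over edge pixels; thresholds are illustrative.
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, rng=None):
    """Fit a line ax + by + c = 0 to (N, 2) points, robust to noisy edges."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_line = 0, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        # Implicit line (a, b, c) through the two sampled points.
        a, b = p2[1] - p1[1], p1[0] - p2[0]
        norm = np.hypot(a, b)
        if norm == 0:
            continue  # degenerate sample: both points coincide
        c = -(a * p1[0] + b * p1[1])
        # Perpendicular distance of every point to the candidate line.
        dists = np.abs(points @ np.array([a, b]) + c) / norm
        inliers = int((dists < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers, best_line = inliers, (a / norm, b / norm, c / norm)
    return best_line, best_inliers
```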
B. Deep Learning Based Methods
To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs are becoming an increasingly popular method. Lane detection on images like those shown in Fig. 1 (a-d) is nearly impossible without using a CNN. Kim and Lee [18] combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes that include roadside trees, fences, or intersections. In their method, the CNN was primarily used to enhance images. In [19], they showed how existing CNNs can be used to perform lane detection while running at the frame rates required for a real-time system. Also, Ozcan et al. [20] discussed how they overcame the difficulties of detecting traffic signs in low-quality noisy videos using a chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN.
More recently, in [21], they used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter and simultaneously optimized the frontal-view and top-view camera images. The hat-like filter extracts all potential lane line candidates, thus removing most FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded. Within the top-view image, structures other than lane lines, such as ground arrows and words, were also removed.
C. Lane Departure Models
The objective of Lane Departure Prediction (LDP) is to predict whether the driver is about to leave the lane, with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based methods, vehicle-position estimation, and detection of the lane boundary from road images; all of them use real-time captured images [22].
The Time-to-Line-Crossing (TLC) model has been extensively used on production vehicles [23]. TLC systems evaluate the lane and vehicle state using vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used depending on the road geometry and vehicle type. Among these methods, the most common is to predict the road boundary and the vehicle trajectory, and then calculate the intersection time of the two at the current driving speed. On small-curvature roads, the TLC can be computed as the ratio of lateral distance to lateral velocity, or as the ratio of the distance to the line crossing to the current speed [24]. Studies suggest that TLC systems tend to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary [22], [24]. Wang et al. [22] proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a hidden Markov model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a joint probability density distribution built from a Gaussian mixture model (GMM) over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can discern the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07%.
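For concreteness, the two TLC approximations mentioned above can be sketched as follows; the function and variable names, and the example values, are illustrative assumptions rather than parameters from [23] or [24].

```python
# Two common small-curvature TLC approximations, as plain functions.
def tlc_lateral(lateral_distance_m, lateral_velocity_ms):
    """TLC as lateral distance to the lane boundary over lateral velocity."""
    if lateral_velocity_ms <= 0:        # not drifting toward the boundary
        return float("inf")
    return lateral_distance_m / lateral_velocity_ms

def tlc_longitudinal(distance_to_crossing_m, speed_ms):
    """TLC as distance to the predicted line-crossing point over speed."""
    return float("inf") if speed_ms <= 0 else distance_to_crossing_m / speed_ms

# Example: 0.5 m from the boundary, drifting at 0.25 m/s -> TLC = 2.0 s.
assert tlc_lateral(0.5, 0.25) == 2.0
```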
III. CUSTOM DATASET
Our dataset was collected as part of a clinical study in which 77 legally licensed and active older drivers (ages 65-90, mean 75.7; 36 female, 41 male) were recruited. The aim of that project is to study the driving behavior of individuals with disabling conditions. Drivers who had physical limitations were permitted if they met state licensure standards, as these limitations are ubiquitous in older adults. Each driver drove in their typical environment with their typical strategies and driving behaviors for 3 months (the total data collection embodies nearly 19.3 years of driving). One of our contributions to this study was the detection of lane departures and incursions. For this task, we used 4,162 annotated images to train our model. The images had a resolution of 752x480 and the videos ran at an average of 25 fps. These images were split into Training (70%), Validation (15%), and Test (15%) sets. Among all videos, we tested our lane crossing/departure algorithm on 30 diverse videos.
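A minimal sketch of the 70/15/15 split described above, assuming scikit-learn is available; the file-name pattern and random seed are placeholders, not artifacts of our dataset.

```python
# Two-stage split: 70% train, then the remaining 30% halved into val/test.
from sklearn.model_selection import train_test_split

images = [f"frame_{i:05d}.png" for i in range(4162)]   # 4,162 annotated images
train, rest = train_test_split(images, test_size=0.30, random_state=42)
val, test = train_test_split(rest, test_size=0.50, random_state=42)
# -> roughly 70% / 15% / 15% of the annotated images
```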
IV. PROPOSED MODEL AND LANE DEPARTURE
Lane detection in the presence of noisy, lower-resolution image data presents significant challenges. Illumination, color contrast, and image resolution immediately rule out low-level image-feature-based algorithms for detecting the lanes. Consequently, we turned our attention to machine/deep learning based models to detect lane regions, as these models perform better than low-level image-feature-based algorithms on the lower-quality recordings in our custom dataset. We selected the Mask-RCNN [25] architecture since we were mainly interested in segmented lane regions within the image and could tolerate 5 fps [13], while it provides state-of-the-art mAP (mean average precision). The Mask-RCNN architecture, illustrated in Fig. 3, can be divided into two networks. The first network is the region proposal network (RPN) used for generating region proposals, and the second network uses these proposals to detect objects. The video processing pipeline, including detection and tracking, is given in Algo. 1.
Algorithm 1 Video Control Algorithm
procedure VIDEOCAPTURE(video)    ▷ video: mp4 input
    frame ← video
    while frame ≠ Null do    ▷ loop until video end
        M ← detection mask
        Display ← Tracking(M)
        frame ← video
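A runnable sketch of the Algo. 1 loop using OpenCV is given below. Here detect_mask and track are hypothetical stand-ins for the Mask-RCNN detector and the centroid tracker described in the following subsections, not functions defined in this paper.

```python
# Video control loop mirroring Algo. 1: read frames until the stream ends,
# detect the lane mask, and hand it to the tracker for display.
import cv2

def process_video(path, detect_mask, track):
    cap = cv2.VideoCapture(path)          # mp4 input
    while True:
        ok, frame = cap.read()            # frame <- video
        if not ok:                        # loop until video end
            break
        mask = detect_mask(frame)         # M <- detection mask
        display = track(mask)             # Display <- Tracking(M)
        cv2.imshow("lanes", display)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```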
A. Lane Detection
Our Mask-RCNN based model was configured with ResNet-50 as the backbone, a learning rate of 0.001, a learning momentum of 0.9, and 256 RPN anchors per image. It was trained to detect lane regions only, using a segmented mask, in contrast to other lane detection models whose main goal is to detect the lane lines. Detecting the lane region instead of the lane lines was considerably easier given the quality of the images we were working with. This approach provided a lane segmentation mask, which was later used to track the lane regions. To mitigate FPs, we used a Region of Interest (ROI) skim mask that concealed areas not relevant to our view of interest. Fig. 1 (a-d) provides example detections during daytime, nighttime, and shadowy conditions on the road.
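The stated hyperparameters could be expressed as follows, assuming the widely used Matterport Mask_RCNN implementation, which exposes these settings by name; the paper does not say which implementation it used, and NAME and NUM_CLASSES below are illustrative.

```python
# Hypothetical training configuration matching the values stated above.
from mrcnn.config import Config

class LaneConfig(Config):
    NAME = "lane"                         # hypothetical experiment name
    BACKBONE = "resnet50"                 # ResNet-50 backbone
    LEARNING_RATE = 0.001
    LEARNING_MOMENTUM = 0.9
    RPN_TRAIN_ANCHORS_PER_IMAGE = 256     # 256 RPN anchors per image
    NUM_CLASSES = 1 + 1                   # background + lane region
    IMAGES_PER_GPU = 1
```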
B. Lane Tracking
The mask tracking algorithm used for lane departure and incursion prediction is explained in Algo. 2. Once the lane mask regions were detected, the point coordinates forming the mask were used to compute a convex hull enclosing the mask. For this purpose, we employed the Quickhull algorithm, shown in Algo. 3, to obtain a convex hull polygon. Next, the centroid of the convex hull was calculated. Our model used the centroid to track the vertical and horizontal offset of the vehicle within the lane, as shown in Fig. 2. For reference, the vertical offset was calculated with respect to an imaginary vertical line in the middle of the image, as illustrated in Fig. 2. The horizontal reference was chosen to be an imaginary line between the vehicle and the detected mask. The horizontal offset was not used in this work; however, it was implemented to detect the separation distance, which may be useful for analyzing acceleration and braking.
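A sketch of the hull-and-centroid step, assuming scipy (whose ConvexHull is backed by a Quickhull implementation, matching Algo. 3) and using the shoelace formula for the polygon centroid; the function name is illustrative.

```python
# Convex hull of the lane-mask pixels, then the hull polygon's centroid.
import numpy as np
from scipy.spatial import ConvexHull

def lane_centroid(mask_points):
    """mask_points: (N, 2) array of (x, y) pixels in the lane mask."""
    hull = ConvexHull(mask_points)            # Quickhull under the hood
    poly = mask_points[hull.vertices]         # hull polygon, CCW order
    x, y = poly[:, 0], poly[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)   # next vertex, wrapping around
    cross = x * yn - xn * y                   # shoelace cross terms
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return cx, cy                             # centroid in pixel coordinates
```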
The offsets were calculated using the distance between a line and a point in 2D space, Eq. (1). The offset units were measured in number of pixels. These offsets were first tracked over time, then normalized by their means, centered at zero, and smoothed using a fixed-lag Kalman filter, as shown in Fig. 4 (a). We found that centering around zero allows better generalization across drivers, since cameras are not necessarily mounted in the same location in different vehicles.
\mathrm{distance}(ax + by + c = 0,\,(x_0, y_0)) = \frac{|ax_0 + by_0 + c|}{\sqrt{a^2 + b^2}} \quad (1)
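Eq. (1) and the offset preprocessing can be sketched as below. The paper uses a fixed-lag Kalman filter; as a stand-in, this sketch applies a simple causal scalar Kalman filter, and the noise parameters q and r are illustrative assumptions.

```python
# Point-to-line distance (Eq. 1) and a simple offset-smoothing pass.
import numpy as np

def point_line_distance(a, b, c, x0, y0):
    return abs(a * x0 + b * y0 + c) / np.hypot(a, b)   # Eq. (1), in pixels

def center_and_smooth(offsets, q=1e-3, r=1.0):
    """Mean-center the offset series, then run a scalar Kalman filter.
    Note: a causal filter, not the fixed-lag smoother used in the paper."""
    z = np.asarray(offsets, dtype=float)
    z = z - z.mean()                  # normalize by the mean, center at zero
    x, p, out = 0.0, 1.0, []
    for zk in z:
        p += q                        # predict: grow state uncertainty
        k = p / (p + r)               # Kalman gain
        x += k * (zk - x)             # correct with the new measurement
        p *= (1 - k)
        out.append(x)
    return np.array(out)
```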
C. Lane Departure Classification
The plots in Fig. 4 (b) and (c) illustrate the typical patterns observed when the lane lines are crossed towards the left or right, respectively. These two plots were obtained while a driver changed from the right lane to the left lane and back to the right lane. It is clear that a high peak starts developing as the driver departs from the center of its lane, followed by a depression zone and a trend back to zero. Detecting and measuring this pattern is the core idea of our algorithm (Algo. 2) that predicts the type of lane crossing that has occurred. Our test dataset had a limited sample of verified incursion events (N=3). Based on available experiments on ...

Algorithm 3 Quickhull
procedure QUICKHULL(S)
    A ← leftmost point
    B ← rightmost point
    S1 ← points in S right of oriented line AB
    S2 ← points in S right of oriented line BA
    FindHull(S1, A, B)
    FindHull(S2, B, A)
function FINDHULL(Sk, P, Q)    ▷ points right of the line from P to Q
    for each point in Sk do
        C ← farthest point from PQ
        S0 ← points inside triangle PCQ
        S1 ← points right of line PC
        S2 ← points right of line CQ
        FindHull(S1, P, C)
        FindHull(S2, C, Q)
    Output ← ConvexHull

V. RESULTS
A. Lane Detection Results
The IoU threshold, Eq. (2), was set at 0.5; this means that any predicted object is considered a TP if its IoU with respect to the ground truth is greater than 0.5. The overall mAP, Eq. (3), was calculated to be 0.82 for lane detections on the test dataset.
IoU(A, B) = \frac{A \cap B}{A \cup B} \quad (2)

mAP = \frac{1}{|\mathrm{thresholds}|} \sum_t \frac{TP(t)}{TP(t) + FP(t) + FN(t)} \quad (3)
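A sketch of Eq. (2) for binary lane masks, together with the 0.5 TP threshold used above; the function names are illustrative.

```python
# Intersection-over-Union for binary masks (Eq. 2) and the TP test.
import numpy as np

def mask_iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

def is_true_positive(pred, gt, thresh=0.5):
    return mask_iou(pred, gt) > thresh    # TP iff IoU exceeds 0.5
```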
B. Lane Crossing Algorithm Results
We tested our lane departure algorithm using 30 short driving videos, each from 1 to 3 minutes long. Diversity of drivers and environmental conditions was considered when selecting the videos. Each of the videos had at least one occurrence of lane crossing, and in some circumstances one line of the lane, or a portion of it, was not visible (e.g., at a lane bifurcation on a highway exit). We used the algorithm to test for lane changes to the left, lane changes to the right, and lane incursions; the results are summarized in Table I. While we did not ...
VI. CONCLUSION
We proposed a novel algorithm to detect and differentiate lane departure events, including incursions, in lower-resolution video recordings with challenging conditions. In our novel implementation, the model was trained to detect lane departures with a sensitivity of 0.82. Future investigations will expand our model to a wider variety of vehicle classes, which will likely improve FP rates, and will use segmented masks to detect lane types and improve lane incursion detection. An area that should be further explored is the use of the horizontal offset as a means to detect proximity, even when image perspectives are subject to chirp effects. While our implementation was performed using only prerecorded videos, utilizing the convex hull centroid offset may permit lane tracking during real-time implementation on vehicles. Our results underscore the feasibility and utility of applying DL models to autonomous driving systems, LDW/LDP, advanced driver assistance systems, and on-road interventions to improve safety in medically at-risk populations. | 2,806
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a fixed-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | More recently, in @cite_5 , they used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter and simultaneously optimized the frontal-view and top-view camera images. The hat-like filter extracts all potential lane line candidates, thus removing most FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded. Within the top-view image, structures other than lane lines, such as ground arrows and words, were also removed. | {
"abstract": [
"In this paper, we propose a Dual-View Convolutional Neutral Network (DVCNN) framework for lane detection. First, to improve the low precision ratios of literature works, a novel DVCNN strategy is designed where the front-view image and the top-view one are optimized simultaneously. In the front-view image, we exclude false detections including moving vehicles, barriers and curbs, while in the top-view image non-club-shaped structures are removed such as ground arrows and words. Second, we present a weighted hat-like filter which not only recalls potential lane line candidates, but also alleviates the disturbance of the gradual textures and reduces most false detections. Third, different from other methods, a global optimization function is designed where the lane line probabilities, lengths, widths, orientations and the amount are all taken into account. After the optimization, the optimal combination composed of true lane lines can be explored. Experiments demonstrate that our algorithm is more accurate and robust than the state-of-the-art."
],
"cite_N": [
"@cite_5"
],
"mid": [
"2516555671"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | (text_except_rw identical to the row above) | 2,806
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a fixed-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | The objective of Lane Departure Prediction (LDP) is to predict whether the driver is about to leave the lane, with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based methods, vehicle-position estimation, and detection of the lane boundary from road images; all of them use real-time captured images @cite_17 . | {
"abstract": [
"Misunderstanding of driver correction behaviors is the primary reason for false warnings of lane-departure-prediction systems. We proposed a learning-based approach to predict unintended lane-departure behaviors and chances of drivers to bring vehicles back to the lane. First, a personalized driver model for lane-departure and lane-keeping behavior is established by combining the Gaussian mixture model and the hidden Markov model. Second, based on this model, we developed an online model-based prediction algorithm to predict the forthcoming vehicle trajectory and judge whether the driver will act a lane departure behavior or correction behavior. We also develop a warning strategy based on the model-based prediction algorithm that allows the lane-departure warning system to be acceptable for drivers according to the predicted trajectory. In addition, the naturalistic driving data of ten drivers were collected to train the personalized driver model and validate this approach. We compared the proposed method with a basic time-to-lane-crossing (TLC) method and a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. Experimental results show that the proposed approach can reduce the false-warning rate to 3.13 on average at 1-s prediction time."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2592152148"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | (text_except_rw identical to the first row above) | 2,806
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a fixed-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | The TLC model has been extensively used on production vehicles @cite_21 . TLC systems evaluate the lane and vehicle state relying on vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used with regard to the road geometries and vehicle types. Among these methods, the most common approach is to predict the road boundary and the vehicle trajectory, and then calculate the intersection time of the two at the current driving speed. On small-curvature roads, the TLC can be computed as the ratio of lateral distance to lateral velocity, or as the ratio of the distance to the line crossing @cite_20 . Studies suggest that TLC tends to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary @cite_17 @cite_0 . | {
"abstract": [
"",
"In this paper, a technique for the identification of the unwanted lane departure of a traveling vehicle on a road is proposed. A piecewise linear stretching function (PLSF) is used to improve the contrast level of the region of interest (ROI). Lane markings on the road are detected by dividing the ROI into two subregions and applying the Hough transform in each subregion independently. This segmentation approach improves the computational time required for lane detection. For lane departure identification, a distance-based departure measure is computed at each frame, and a necessary warning message is issued to the driver when such measure exceeds a threshold. The novelty of the proposed algorithm is the identification of the lane departure only using three lane-related parameters based on the Euclidean distance transform to estimate the departure measure. The use of the Euclidean distance transform in combination with the PLSF keeps the false alarm around 3 and the lane detection rate above 97 under various lighting conditions. Experimental results indicate that the proposed system can detect lane boundaries in the presence of several image artifacts, such as lighting changes, poor lane markings, and occlusions by a vehicle, and it issues an accurate lane departure warning in a short time interval. The proposed technique shows the efficiency with some real video sequences.",
"The main goal of this paper is to develop a distance to line crossing (DLC) based computation of time to line crossing (TLC). Different computation methods with increasing complexity are provided. A discussion develops the influence of assumptions generally assumed for approximation. A sensitivity analysis with respect to vehicle parameters and positioning is performed. For TLC computation, both straight and curved vehicle paths are considered. The road curvature being another important variable considered in the proposed computations, an observer for its estimation is then proposed. An evaluation over a digitalized test track is first performed. Real data are then collected through an experiment carried out in test tracks with the equipped prototype vehicle. Based on these real data, TLC is then computed with the theoretically proposed methods. The obtained results outlined the necessity to take into consideration vehicle dynamics to use the TLC as a lane departure indicator.",
"Misunderstanding of driver correction behaviors is the primary reason for false warnings of lane-departure-prediction systems. We proposed a learning-based approach to predict unintended lane-departure behaviors and chances of drivers to bring vehicles back to the lane. First, a personalized driver model for lane-departure and lane-keeping behavior is established by combining the Gaussian mixture model and the hidden Markov model. Second, based on this model, we developed an online model-based prediction algorithm to predict the forthcoming vehicle trajectory and judge whether the driver will act a lane departure behavior or correction behavior. We also develop a warning strategy based on the model-based prediction algorithm that allows the lane-departure warning system to be acceptable for drivers according to the predicted trajectory. In addition, the naturalistic driving data of ten drivers were collected to train the personalized driver model and validate this approach. We compared the proposed method with a basic time-to-lane-crossing (TLC) method and a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. Experimental results show that the proposed approach can reduce the false-warning rate to 3.13 on average at 1-s prediction time."
],
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2054970201",
"2098456462",
"2592152148"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | Motor vehicle collisions are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people worldwide die and 50 million are injured every year due to traffic-related accidents. Traffic accidents result in considerable economic cost, currently estimated at 1-2% of average gross national product ($518 billion globally per year) [1]. According to the European Accident Research and Safety Report 2013, more than 90% of driving accidents are caused by safety-critical driver errors [2]. Lane incursions due to driver error are a common cause of accidents. Estimates from the U.S. National Highway Traffic Safety Administration indicate that 11% of accidents are due to the driver inappropriately departing from their lane while traveling [3]. To address this risk, Lane Departure Warning (LDW) systems are becoming a commonly deployed driver assistance technology aimed at improving on-road safety and reducing traffic accidents [4]. LDW systems typically detect lanes from low-level image features such as edges and contours. Several solutions aimed at detecting vehicle lane position and alerting drivers to potentially unsafe lane departure events have been developed [5]. For example, simple image feature based systems have been developed to detect straight lines, polynomials, cubic splines, piecewise linear segments, and circular arcs relevant to lane detection [6]. However, image feature based systems have predictable limitations and can become unreliable with increasing road scene complexity (e.g., shadows, low visibility, occlusions, and curves) [7]. Due to these limitations, researchers have been turning their attention to machine learning (ML) based methods to overcome the above-mentioned shortcomings, most recently deep neural networks (DNNs). Region-based Convolutional Neural Networks (RCNNs), a type of DNN architecture, outperform other DNN architectures in object detection and recognition applications. Due to these key advantages, we have chosen RCNNs to build a simple but robust LDW system that overcomes the limitations of previous LDW systems.
The primary application of this project is to detect unsafe driver behaviors, like lane incursions or departures, in at-risk drivers with diabetes. Diabetes affects nearly 10% of the population in the USA and continues to increase with urbanization, obesity, and aging [8]. Drivers with diabetes have a significantly elevated crash risk compared to the general population, presenting a pressing problem of public health and patient safety. On-road risk in diabetes is linked to disease and unsafe physiologic states (e.g., hypoglycemia). These key factors make this population a prime target for improving safety with driver assistance systems like LDW.
This model is capable of processing a large data collection representing multiple terabytes (TBs) of video collected from at-risk drivers with diabetes. We present this lane detection model, built on a Mask-RCNN architecture, to analyze lane departures and incursions from lane detections in challenging, lower-resolution, and noisy video recordings. Lane incursion is defined as performing an incomplete lane departure while quickly returning back to the original lane of travel. While previous literature addresses simple lane line detection, our model focuses on advancing these models by improving detection and segmentation of the driving lane area. Once the driving lane area is detected in the video frame, we tracked the centroid of a convex hull region representing the driving lane area. The centroid location with respect to the image's vertical center line was used to determine whether the driver was driving within the lane or deviating from it. Subsequently, the time series of relative lane position was used to infer driving behavior. This paper is organized as follows: Related Works describes previous work on lane departure using image-based features and machine learning based approaches. Custom Dataset provides general information about the data collected, annotated, and used for this project. Proposed Model and Lane Departure presents our approach for detecting lane departure events. Finally, the summary and discussion of our work are presented in Conclusions.
A. Image Feature Based Methods
Image feature-based lane detection is a well researched area of computer vision [14]. The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi [15], lane detection was done using inverse perspective mapping to remove the perspective effect, together with horizontal black-white-black transitions. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. Limitations of their proposed system included its computational complexity, the need for well-painted lanes, and assumptions such as having a lane within the region of interest and a fixed minimum lane width.
In 2005, Lee and Yi [16] introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). It built upon methods previously proposed by Lee [17]: a linear lane model and an edge distribution function (EDF), as well as a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs), taking advantage of linear regression analysis. Despite these improvements, the model performed poorly at detecting curved lanes. Some low-level image feature based models include an initial layer to normalize illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) [9]. Lately, approaches have been incorporating machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image feature-based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in an inability to capture local image feature based information. End-to-end learning with deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution.
B. Deep Learning Based Methods
To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs are becoming an increasingly popular method. Lane detection in images such as those shown in Fig. 1 (a-d) is nearly impossible without using CNNs. Kim and Lee [18] combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes that include roadside trees, fences, or intersections. In their method, the CNN was primarily used to enhance images. In [19], the authors showed how existing CNNs can be used to perform lane detection while running at the frame rates required for a real-time system. Also, Ozcan et al. [20] discussed how they overcame the difficulties of detecting traffic signs from low-quality noisy videos using a chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN.
More recently, [21] used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter and simultaneously optimized the frontal-view and top-view cameras. The hat-like filter extracts all potential lane line candidates, thus removing most FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded. Within the top-view image, structures other than lane lines, such as ground arrows and words, were also removed.
C. Lane Departure Models
The objective of Lane Departure Prediction (LDP) is to predict whether the driver is likely to leave the lane, with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based, vehicle-position estimation, and detection of the lane boundary using real-time captured road images. All of them use real-time captured images [22].
The TLC model has been extensively used on production vehicles [23]. TLC systems evaluate the lane and vehicle state relying on vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used with regard to the road geometries and vehicle types. Among these methods, the most common approach is to predict the road boundary and the vehicle trajectory, and then calculate the intersection time of the two at the current driving speed. On small-curvature roads, the TLC can be computed as the ratio of lateral distance to lateral velocity, or as the ratio of the distance to the line crossing [24]. Studies suggest that TLC tends to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary [22], [24]. Wang et al. [22] proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a Hidden Markov Model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a joint probability density distribution, modeled as a Gaussian mixture model (GMM), over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can discern the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07%.
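For the ratio form of TLC mentioned above (lateral distance over lateral velocity on small-curvature roads), a minimal sketch follows; the numbers and the 1-s alert threshold are assumptions for illustration, not values from the cited systems.

def time_to_line_crossing(lateral_distance_m, lateral_velocity_mps):
    # Simplest TLC: remaining lateral distance to the lane boundary
    # divided by the lateral velocity toward that boundary.
    if lateral_velocity_mps <= 0:       # not drifting toward the boundary
        return float("inf")
    return lateral_distance_m / lateral_velocity_mps

tlc = time_to_line_crossing(0.6, 0.25)  # 0.6 m away, drifting at 0.25 m/s
print(tlc, tlc < 1.0)                   # -> 2.4 s, no alert at a 1-s threshold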
III. CUSTOM DATASET
Our dataset was collected as part of a clinical study in which 77 legally licensed and active older drivers (ages 65-90, mean = 75.7; 36 female, 41 male) were recruited. The aim of that project is to study the driving behavior of individuals with medical conditions. Drivers who had physical limitations were permitted if they met state licensure standards, as these limitations are ubiquitous in older adults. Each driver drove in their typical environment with their typical strategies and driving behaviors for 3 months (the total data collection embodies nearly 19.3 years). One of our contributions to this study was the detection of lane departures and incursions. For this task, we used 4,162 annotated images to train our model. The images had a resolution of 752x480 and the videos run at an average of 25 fps. These images were split into Training (70%), Validation (15%), and Test (15%) sets. Among all videos, we tested our lane crossing/departure algorithm on 30 diverse videos.
IV. PROPOSED MODEL AND LANE DEPARTURE
Lane detection in the presence of noisy, lower-resolution image data presents significant challenges. Illumination, color contrasts, and image resolution immediately prohibit the use of low-level image feature-based algorithms for detecting the lanes. Consequently, we turned our attention to machine/deep learning based models to detect lane regions, as these models perform better than low-level image feature-based algorithms on lower quality recordings such as our custom dataset. We selected the Mask-RCNN [25] architecture since we were mainly interested in segmented lane regions within the image and could tolerate 5 fps [13], while it provides state-of-the-art mAP (mean average precision). The Mask-RCNN architecture, illustrated in Fig. 3, can be divided into two networks. The first network is the region proposal network (RPN), used for generating region proposals, and the second network uses these proposals to detect objects. The video processing pipeline, including detection and tracking, is given in Algo. 1.
algorithm 1 Video Control Algorithm
  procedure VIDEOCAPTURE(video)    (mp4 input)
    frame ← video
    while frame ≠ Null do    (loop until video end)
      M ← detection mask
      Display ← Tracking(M)
      frame ← video
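A rough Python/OpenCV rendering of Algorithm 1 is sketched below; detect_mask and track are placeholders standing in for the Mask-RCNN inference and the tracking steps of the following subsections, not the authors' actual functions.

import cv2

def process_video(path, detect_mask, track):
    cap = cv2.VideoCapture(path)        # mp4 input
    while True:
        ok, frame = cap.read()
        if not ok:                      # loop until the video ends
            break
        mask = detect_mask(frame)       # lane segmentation mask (M)
        display = track(mask)           # tracking result to display
        cv2.imshow("lanes", display)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()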
A. Lane Detection
Our Mask-RCNN based model was configured with ResNet-50 as the backbone, a learning rate of 0.001, a learning momentum of 0.9, and 256 RPN anchors per image. It was trained to detect lane regions only, using a segmented mask, in contrast to other lane detection models whose main goal is to detect the lane lines. Detecting the lane region in lieu of the lane lines was considerably easier given the quality of the images we were working with. This approach provided a lane segmentation mask, which was later used to track the lane regions. To mitigate FPs, we used a Region of Interest (ROI) skim mask that concealed areas not relevant to our view of interest. Fig. 1 (a-d) provides example detections during daytime, nighttime, and shadowy conditions on the road.
B. Lane Tracking
The mask tracking algorithm used for lane departure and incursion predictions is explained in Algo. 2. Once the lane mask regions were detected, the point coordinates forming the mask were used to compute a convex hull enclosing the mask. For this purpose, we employed the Quickhull algorithm, shown in Algo. 3, to obtain a convex hull polygon (see the sketch below). Next, the centroid of the convex hull was calculated. Our model used the centroid to track the vertical and horizontal offsets of the vehicle within the lane, as shown in Fig. 2. For reference, the vertical offset was calculated with respect to an imaginary vertical line in the middle of the image, as illustrated in Fig. 2. Also, the horizontal reference was chosen to be an imaginary line between the vehicle and the detected mask. The horizontal offset was not used here; however, it was implemented to detect the driving separation distance, possibly useful for acceleration and braking.
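A minimal Python sketch of the Quickhull step (Algo. 3) and the centroid computation follows; the toy mask points are invented, and the shoelace formula is one standard way to realize the centroid step (the paper does not specify which variant was used).

import numpy as np

def _right_of(pts, p, q):
    # Points strictly to the right of the oriented line p -> q.
    cross = (q[0]-p[0])*(pts[:, 1]-p[1]) - (q[1]-p[1])*(pts[:, 0]-p[0])
    return pts[cross < 0]

def _find_hull(pts, p, q, hull):
    # FindHull of Algo. 3: recurse on the farthest point C from segment PQ.
    if pts.shape[0] == 0:
        return
    d = np.abs((q[0]-p[0])*(pts[:, 1]-p[1]) - (q[1]-p[1])*(pts[:, 0]-p[0]))
    c = pts[np.argmax(d)]
    _find_hull(_right_of(pts, p, c), p, c, hull)
    hull.append(tuple(c))
    _find_hull(_right_of(pts, c, q), c, q, hull)

def quickhull(points):
    # Convex hull (ordered vertices) of the 2-D lane-mask points.
    pts = np.asarray(points, dtype=float)
    a = pts[np.argmin(pts[:, 0])]       # leftmost point
    b = pts[np.argmax(pts[:, 0])]       # rightmost point
    hull = [tuple(a)]
    _find_hull(_right_of(pts, a, b), a, b, hull)
    hull.append(tuple(b))
    _find_hull(_right_of(pts, b, a), b, a, hull)
    return np.array(hull)

def polygon_centroid(verts):
    # Centroid of the hull polygon via the shoelace formula.
    v = np.asarray(verts, dtype=float)
    x, y = v[:, 0], v[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    area = cross.sum() / 2.0
    return (((x + xs) * cross).sum() / (6 * area),
            ((y + ys) * cross).sum() / (6 * area))

mask_pts = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 1]])  # toy mask
hull = quickhull(mask_pts)
print(hull)                    # -> the four corner points, in order
print(polygon_centroid(hull))  # -> (2.0, 1.5)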
The offsets were calculated as the distance between a line and a point in 2D space, Eq. (1), and were measured in pixels. These offsets were first tracked over time, then normalized by their means, centered at zero, and smoothed using a fixed-lag Kalman filter, as shown in Fig. 4 (a). We found that centering around zero allows better generalization among drivers, since cameras are not necessarily mounted at the same location in different vehicles.
distance(ax + by + c = 0, (x_0, y_0)) = |ax_0 + by_0 + c| / √(a² + b²)    (1)
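One possible Python reading of the preprocessing chain above is sketched here; a plain 1-D random-walk Kalman filter stands in for the fixed-lag smoother (which would additionally re-estimate a short window of past states), and the noise variances q and r are assumptions, since the paper does not give the filter parameters.

import numpy as np

def normalize_center(offsets):
    # Normalize by the mean, then shift so the series is centered at 0.
    x = np.asarray(offsets, dtype=float)
    return x / x.mean() - 1.0

def kalman_smooth_1d(z, q=1e-3, r=1e-2):
    # Scalar Kalman filter with a constant-state (random-walk) model.
    z = np.asarray(z, dtype=float)
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                    # predict
        g = p / (p + r)           # Kalman gain
        x += g * (zk - x)         # correct with the new measurement
        p *= 1.0 - g
        out[k] = x
    return out

raw = [120, 122, 119, 140, 180, 210, 190, 150, 125, 121]  # toy offsets (px)
print(kalman_smooth_1d(normalize_center(raw)))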
C. Lane Departure Classification
The plots in Fig. 4 (b) and (c) illustrate the typical patterns observed when the lane lines are crossed towards the left or the right, respectively. These two plots were obtained while a driver changed from the right lane to the left lane and back to the right lane. A high peak starts developing as the driver departs from the center of their lane, followed by a depression zone and a trend back to zero. Detecting and measuring this pattern is the core idea of the algorithm in Algo. 2, which predicts the type of lane crossing that has occurred. Our test dataset had a limited sample of verified incursion events (N=3). Based on available experiments on [...]

algorithm 3 Quickhull (convex hull of mask points S)
  A ← leftmost point
  B ← rightmost point
  S1 ← points in S right of oriented line AB
  S2 ← points in S right of oriented line BA
  FindHull(S1, A, B)
  FindHull(S2, B, A)
  function FINDHULL(Sk, P, Q)    (points right of P to Q)
    C ← farthest point in Sk from line PQ
    S0 ← points inside triangle PCQ
    S1 ← points on the right side of line PC
    S2 ← points on the right side of line CQ
    FindHull(S1, P, C)
    FindHull(S2, C, Q)
  Output ← ConvexHull

V. RESULTS

A. Lane Detection Results

[...] The IoU threshold was set at 0.5; this means that any predicted object is considered a TP if its IoU with respect to the ground truth is greater than 0.5. The overall mAP was calculated to be 0.82 for lane detections on the test dataset.
IoU(A, B) = |A ∩ B| / |A ∪ B|    (2)

mAP = (1 / |thresholds|) Σ_t TP(t) / (TP(t) + FP(t) + FN(t))    (3)
B. Lane Crossing Algorithm Results
We tested our lane departure algorithm on 30 short driving videos, each from 1 to 3 minutes long. Diversity of drivers and environmental conditions was considered when selecting the videos. Each video had at least one occurrence of lane crossing, and in some circumstances one line of the lane, or a portion of it, was not visible (e.g., at a lane bifurcation on a highway exit). We used the algorithm to test for lane changes to the left, lane changes to the right, and lane incursions; the results are summarized in Table I. While we did not [...]
VI. CONCLUSION
We proposed a novel algorithm to detect and differentiate lane departure events, including incursions, on lower-resolution video recordings with challenging conditions. In our novel implementation, the model was trained to detect lane departures with a sensitivity of 0.82. Future investigations will expand our model to a wider variety of vehicle classes, which will likely improve FP rates, and will use segmented masks to detect lane types and improve lane incursion detection. An area that should be further explored is the use of the horizontal offset as a means to detect proximity even when image perspectives are subject to the chirp effect. While our implementation was performed using only pre-recorded videos, utilizing a convex hull centroid offset may permit lane tracking during real-time implementation on vehicles. Our results underscore the feasibility and utility of applying DL models to autonomous driving systems, LDW/LDP, advanced driver assistance systems, and on-road interventions to improve safety in medically at-risk populations. | 2,806
1906.00093 | 2946949691 | In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a fixed-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%. | @cite_17 proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a Hidden Markov Model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a joint-probability density distribution, modeled as a Gaussian mixture model (GMM), over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can discern the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07%. | {
"abstract": [
"Misunderstanding of driver correction behaviors is the primary reason for false warnings of lane-departure-prediction systems. We proposed a learning-based approach to predict unintended lane-departure behaviors and chances of drivers to bring vehicles back to the lane. First, a personalized driver model for lane-departure and lane-keeping behavior is established by combining the Gaussian mixture model and the hidden Markov model. Second, based on this model, we developed an online model-based prediction algorithm to predict the forthcoming vehicle trajectory and judge whether the driver will act a lane departure behavior or correction behavior. We also develop a warning strategy based on the model-based prediction algorithm that allows the lane-departure warning system to be acceptable for drivers according to the predicted trajectory. In addition, the naturalistic driving data of ten drivers were collected to train the personalized driver model and validate this approach. We compared the proposed method with a basic time-to-lane-crossing (TLC) method and a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. Experimental results show that the proposed approach can reduce the false-warning rate to 3.13 on average at 1-s prediction time."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2592152148"
]
} | Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions* | Motor vehicle collisions are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people worldwide die and 50 million are injured every year due to traffic-related accidents. Traffic accidents result in considerable economic cost, currently estimated at 1-2% of average gross national product ($518 billion globally per year) [1]. According to the European Accident Research and Safety Report 2013, more than 90% of driving accidents are caused by safety-critical driver errors [2]. Lane incursions due to driver error are a common cause of accidents. Estimates from the U.S. National Highway Traffic Safety Administration indicate that 11% of accidents are due to the driver inappropriately departing from their lane while traveling [3]. To address this risk, Lane Departure Warning (LDW) systems are becoming a commonly deployed driver assistance technology aimed at improving on-road safety and reducing traffic accidents [4]. LDW systems typically detect lanes from low-level image features such as edges and contours. Several solutions aimed at detecting vehicle lane position and alerting drivers to potentially unsafe lane departure events have been developed [5]. For example, simple image feature based systems have been developed to detect straight lines, polynomials, cubic splines, piecewise linear segments, and circular arcs relevant to lane detection [6]. However, image feature based systems have predictable limitations and can become unreliable with increasing road scene complexity (e.g., shadows, low visibility, occlusions, and curves) [7]. Due to these limitations, researchers have been turning their attention to machine learning (ML) based methods to overcome the above-mentioned shortcomings, most recently deep neural networks (DNNs). Region-based Convolutional Neural Networks (RCNNs), a type of DNN architecture, outperform other DNN architectures in object detection and recognition applications. Due to these key advantages, we have chosen RCNNs to build a simple but robust LDW system that overcomes the limitations of previous LDW systems.
The primary application of this project is to detect unsafe driver behaviors, like lane incursions or departures, in at-risk drivers with diabetes. Diabetes affects nearly 10% of the population in the USA and continues to increase with urbanization, obesity, and aging [8]. Drivers with diabetes have a significantly elevated crash risk compared to the general population, presenting a pressing problem of public health and patient safety. On-road risk in diabetes is linked to disease and unsafe physiologic states (e.g., hypoglycemia). These key factors make this population a prime target for improving safety with driver assistance systems like LDW.
This model is capable of processing a large data collection representing multiple terabytes (TBs) of video collected from at-risk drivers with diabetes. We present this lane detection model, built on a Mask-RCNN architecture, to analyze lane departures and incursions from lane detections in challenging, lower-resolution, and noisy video recordings. Lane incursion is defined as performing an incomplete lane departure while quickly returning back to the original lane of travel. While previous literature addresses simple lane line detection, our model focuses on advancing these models by improving detection and segmentation of the driving lane area. Once the driving lane area is detected in the video frame, we tracked the centroid of a convex hull region representing the driving lane area. The centroid location with respect to the image's vertical center line was used to determine whether the driver was driving within the lane or deviating from it. Subsequently, the time series of relative lane position was used to infer driving behavior. This paper is organized as follows: Related Works describes previous work on lane departure using image-based features and machine learning based approaches. Custom Dataset provides general information about the data collected, annotated, and used for this project. Proposed Model and Lane Departure presents our approach for detecting lane departure events. Finally, the summary and discussion of our work are presented in Conclusions.
A. Image Feature Based Methods
Image feature-based lane detection is a well researched area of computer vision [14]. The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection system proposed by Bertozzi and Broggi [15], lane detection was done using inverse perspective mapping to remove the perspective effect, together with horizontal black-white-black transitions. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of the situations tested. Limitations of their proposed system included its computational complexity, the need for well-painted lanes, and assumptions such as having a lane within the region of interest and a fixed minimum lane width.
In 2005, Lee and Yi [16] introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). It built upon methods previously proposed by Lee [17]: a linear lane model and an edge distribution function (EDF), as well as a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs), taking advantage of linear regression analysis. Despite these improvements, the model performed poorly at detecting curved lanes. Some low-level image feature based models include an initial layer to normalize illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) [9]. Lately, approaches have been incorporating machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image feature-based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), which results in an inability to capture local image feature based information. End-to-end learning with deep neural networks substantially improves model robustness in the face of noisy images or roadway features by learning useful features from deeper layers of convolution.
B. Deep Learning Based Methods
To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs are becoming an increasingly popular method. Lane detection in images such as those shown in Fig. 1 (a-d) is nearly impossible without using CNNs. Kim and Lee [18] combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes that include roadside trees, fences, or intersections. In their method, the CNN was primarily used to enhance images. In [19], the authors showed how existing CNNs can be used to perform lane detection while running at the frame rates required for a real-time system. Also, Ozcan et al. [20] discussed how they overcame the difficulties of detecting traffic signs from low-quality noisy videos using a chain-code aggregated channel features (ACF)-based model and a CNN model, more specifically Fast-RCNN.
More recently, [21] used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter and simultaneously optimized the frontal-view and top-view cameras. The hat-like filter extracts all potential lane line candidates, thus removing most FPs. With the front-view camera, FPs such as moving vehicles, barriers, and curbs were excluded. Within the top-view image, structures other than lane lines, such as ground arrows and words, were also removed.
C. Lane Departure Models
The objective of Lane Departure Prediction (LDP) is to predict whether the driver is likely to leave the lane, with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver to the error after it has occurred. LDP algorithms can be classified into one of the following three categories: vehicle-variable-based, vehicle-position estimation, and detection of the lane boundary using real-time captured road images. All of them use real-time captured images [22].
The TLC model has been extensively used on production vehicles [23]. TLC systems evaluate the lane and vehicle state relying on vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used with regard to the road geometries and vehicle types. Among these methods, the most common approach is to predict the road boundary and the vehicle trajectory, and then calculate the intersection time of the two at the current driving speed. On small-curvature roads, the TLC can be computed as the ratio of lateral distance to lateral velocity, or as the ratio of the distance to the line crossing [24]. Studies suggest that TLC tends to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary [22], [24]. Wang et al. [22] proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a Hidden Markov Model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a joint probability density distribution, modeled as a Gaussian mixture model (GMM), over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can discern the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07%.
III. CUSTOM DATASET
Our dataset was collected as part of a clinical study in which 77 legally licensed and active older drivers (ages 65-90, mean = 75.7; 36 female, 41 male) were recruited. The aim of that project is to study the driving behavior of individuals with medical conditions. Drivers who had physical limitations were permitted if they met state licensure standards, as these limitations are ubiquitous in older adults. Each driver drove in their typical environment with their typical strategies and driving behaviors for 3 months (the total data collection embodies nearly 19.3 years). One of our contributions to this study was the detection of lane departures and incursions. For this task, we used 4,162 annotated images to train our model. The images had a resolution of 752x480 and the videos run at an average of 25 fps. These images were split into Training (70%), Validation (15%), and Test (15%) sets. Among all videos, we tested our lane crossing/departure algorithm on 30 diverse videos.
IV. PROPOSED MODEL AND LANE DEPARTURE
Lane detection in the presence of noisy, lower-resolution image data presents significant challenges. Illumination, color contrasts, and image resolution immediately prohibit the use of low-level image feature-based algorithms for detecting the lanes. Consequently, we turned our attention to machine/deep learning based models to detect lane regions, as these models perform better than low-level image feature-based algorithms on lower quality recordings such as our custom dataset. We selected the Mask-RCNN [25] architecture since we were mainly interested in segmented lane regions within the image and could tolerate 5 fps [13], while it provides state-of-the-art mAP (mean average precision). The Mask-RCNN architecture, illustrated in Fig. 3, can be divided into two networks. The first network is the region proposal network (RPN), used for generating region proposals, and the second network uses these proposals to detect objects. The video processing pipeline, including detection and tracking, is given in Algo. 1.
algorithm 1 Video Control Algorithm
  procedure VIDEOCAPTURE(video)    (mp4 input)
    frame ← video
    while frame ≠ Null do    (loop until video end)
      M ← detection mask
      Display ← Tracking(M)
      frame ← video
A. Lane Detection
Our Mask-RCNN based model was configured with ResNet-50 as the backbone, a learning rate of 0.001, a learning momentum of 0.9, and 256 RPN anchors per image. It was trained to detect lane regions only, using a segmented mask, in contrast to other lane detection models whose main goal is to detect the lane lines. Detecting the lane region in lieu of the lane lines was considerably easier given the quality of the images we were working with. This approach provided a lane segmentation mask, which was later used to track the lane regions. To mitigate FPs, we used a Region of Interest (ROI) skim mask that concealed areas not relevant to our view of interest. Fig. 1 (a-d) provides example detections during daytime, nighttime, and shadowy conditions on the road.
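If the widely used Matterport Mask R-CNN implementation were the basis (the paper does not state which implementation was used), the stated hyper-parameters could be expressed roughly as below; the class count, image dimensions, and batch settings are assumptions added to make the sketch self-contained.

from mrcnn.config import Config  # Matterport Mask R-CNN package

class LaneConfig(Config):
    NAME = "lane"
    NUM_CLASSES = 1 + 1                  # background + lane region (assumed)
    BACKBONE = "resnet50"                # ResNet-50 backbone
    LEARNING_RATE = 0.001
    LEARNING_MOMENTUM = 0.9
    RPN_TRAIN_ANCHORS_PER_IMAGE = 256    # 256 RPN anchors per image
    IMAGE_MIN_DIM = 448                  # near the 752x480 recordings (assumed)
    IMAGE_MAX_DIM = 768
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

LaneConfig().display()                   # print the resulting configuration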
B. Lane Tracking
The mask tracking algorithm used for lane departure and incursion predictions is explained in Algo. 2. Once the lane mask regions were detected, the point coordinates forming the mask were used to compute a convex hull enclosing the mask. For this purpose, we employed the Quickhull algorithm, shown in Algo. 3, to obtain a convex hull polygon. Next, the centroid of the convex hull was calculated. Our model used the centroid to track the vertical and horizontal offsets of the vehicle within the lane, as shown in Fig. 2. For reference, the vertical offset was calculated with respect to an imaginary vertical line in the middle of the image, as illustrated in Fig. 2. Also, the horizontal reference was chosen to be an imaginary line between the vehicle and the detected mask. The horizontal offset was not used here; however, it was implemented to detect the driving separation distance, possibly useful for acceleration and braking.
The offsets were calculated as the distance between a line and a point in 2D space, Eq. (1), and were measured in pixels. These offsets were first tracked over time, then normalized by their means, centered at zero, and smoothed using a fixed-lag Kalman filter, as shown in Fig. 4 (a). We found that centering around zero allows better generalization among drivers, since cameras are not necessarily mounted at the same location in different vehicles.
distance(ax + by + c = 0, (x_0, y_0)) = |ax_0 + by_0 + c| / √(a² + b²)    (1)
C. Lane Departure Classification
The plots in Fig. 4 (b) and (c) illustrate the typical patterns observed when the lane lines are crossed towards the left or the right, respectively. These two plots were obtained while a driver changed from the right lane to the left lane and back to the right lane. A high peak starts developing as the driver departs from the center of their lane, followed by a depression zone and a trend back to zero. Detecting and measuring this pattern is the core idea of the algorithm in Algo. 2, which predicts the type of lane crossing that has occurred. Our test dataset had a limited sample of verified incursion events (N=3). Based on available experiments on [...]

algorithm 3 Quickhull (convex hull of mask points S)
  A ← leftmost point
  B ← rightmost point
  S1 ← points in S right of oriented line AB
  S2 ← points in S right of oriented line BA
  FindHull(S1, A, B)
  FindHull(S2, B, A)
  function FINDHULL(Sk, P, Q)    (points right of P to Q)
    C ← farthest point in Sk from line PQ
    S0 ← points inside triangle PCQ
    S1 ← points on the right side of line PC
    S2 ← points on the right side of line CQ
    FindHull(S1, P, C)
    FindHull(S2, C, Q)
  Output ← ConvexHull

V. RESULTS

A. Lane Detection Results

[...] The IoU threshold was set at 0.5; this means that any predicted object is considered a TP if its IoU with respect to the ground truth is greater than 0.5. The overall mAP was calculated to be 0.82 for lane detections on the test dataset.
IoU(A, B) = |A ∩ B| / |A ∪ B|    (2)

mAP = (1 / |thresholds|) Σ_t TP(t) / (TP(t) + FP(t) + FN(t))    (3)
B. Lane Crossing Algorithm Results
We tested our lane departure algorithm on 30 short driving videos, each from 1 to 3 minutes long. Diversity of drivers and environmental conditions was considered when selecting the videos. Each video had at least one occurrence of lane crossing, and in some circumstances one line of the lane, or a portion of it, was not visible (e.g., at a lane bifurcation on a highway exit). We used the algorithm to test for lane changes to the left, lane changes to the right, and lane incursions; the results are summarized in Table I. While we did not [...]
VI. CONCLUSION
We proposed a novel algorithm to detect and differentiate lane departure events, including incursions, on lower-resolution video recordings with challenging conditions. In our novel implementation, the model was trained to detect lane departures with a sensitivity of 0.82. Future investigations will expand our model to a wider variety of vehicle classes, which will likely improve FP rates, and will use segmented masks to detect lane types and improve lane incursion detection. An area that should be further explored is the use of the horizontal offset as a means to detect proximity even when image perspectives are subject to the chirp effect. While our implementation was performed using only pre-recorded videos, utilizing a convex hull centroid offset may permit lane tracking during real-time implementation on vehicles. Our results underscore the feasibility and utility of applying DL models to autonomous driving systems, LDW/LDP, advanced driver assistance systems, and on-road interventions to improve safety in medically at-risk populations. | 2,806
1906.00284 | 2946988528 | Heterogeneity in wireless network architectures (i.e., the coexistence of 3G, LTE, 5G, WiFi, etc.) has become a key component of current and future generation cellular networks. Simultaneous aggregation of each client's traffic across multiple such radio access technology (RAT) base stations (BSs) can significantly increase the system throughput, and has become an important feature of cellular standards on multi-RAT integration. Distributed algorithms that can realize the full potential of this aggregation are thus of great importance to operators. In this paper, we study the problem of resource allocation for multi-RAT traffic aggregation in HetNets (heterogeneous networks). Our goal is to ensure that the resources at each BS are allocated so that the aggregate throughput achieved by each client across its RATs satisfies a proportional fairness (PF) criterion. In particular, we provide a simple distributed algorithm for resource allocation at each BS that extends the PF allocation algorithm for a single BS. Despite its simplicity and lack of coordination across the BSs, we show that our algorithm converges to the desired PF solution and provide (tight) bounds on its convergence speed. We also study the characteristics of the optimal solution and use its properties to prove the optimality of our algorithm's outcomes. | Single-RAT Multi-BS Communication. Prior works have studied the problem of traffic aggregation when a client can simultaneously communicate with multiple same-technology BSs. For example, @cite_26 uses game theory to model selfish traffic splitting by each client in WLANs. On the other hand, the resource allocation problem in HetNets is primarily addressed at the BS side. Similarly, @cite_17 proposes an approximation algorithm to address the problem of client association and traffic splitting in LTE DC. Our algorithm (AFRA) goes beyond this and other related work by guaranteeing optimal resource allocation for any number of RATs and BSs. Other works have developed centralized client association algorithms to achieve max-min @cite_21 and proportional fairness @cite_2 in multi-rate WLANs. In contrast, the problem of resource allocation in HetNets needs to be solved in a fully distributed manner. | {
"abstract": [
"We consider non-cooperative mobiles, each faced with the problem of which subset of WLANs access points (APs) to connect and multihome to, and how to split its traffic among them. Considering the many users regime, we obtain a potential game model and study its equilibrium. We obtain pricing for which the total throughput is maximized at equilibrium and study the convergence to equilibrium under various evolutionary dynamics. We also study the case where the Internet service provider (ISP) could charge prices greater than that of the cost price mechanism and show that even in this case multihoming is desirable.",
"In multi-rate wireless LANs, throughput-based fair bandwidth allocation can lead to drastically reduced aggregate throughput. To balance aggregate throughput while serving users in a fair manner, proportional fair or time-based fair scheduling has been proposed to apply at each access point (AP). However, since a realistic deployment of wireless LANs can consist of a network of APs, this paper considers proportional fairness in this much wider setting. Our technique is to intelligently associate users with APs to achieve optimal proportional fairness in a network of APs. We propose two approximation algorithms for periodical offline optimization. Our algorithms are the first approximation algorithms in the literature with a tight worst-case guarantee for the NP-hard problem. Our simulation results demonstrate that our algorithms can obtain an aggregate throughput which can be as much as 2.3 times more than that of the max-min fair allocation in 802.11b. While maintaining aggregate throughput, our approximation algorithms outperform the default user-AP association method in the 802.11b standard significantly in terms of fairness.",
"The traffic load of wireless LANs is often unevenly distributed among the access points (APs), which results in unfair bandwidth allocation among users. We argue that the load imbalance and consequent unfair bandwidth allocation can be greatly reduced by intelligent association control. In this paper, we present an efficient solution to determine the user-AP associations for max-min fair bandwidth allocation. We show the strong correlation between fairness and load balancing, which enables us to use load balancing techniques for obtaining optimal max-min fair bandwidth allocation. As this problem is NP-hard, we devise algorithms that achieve constant-factor approximation. In our algorithms, we first compute a fractional association solution, in which users can be associated with multiple APs simultaneously. This solution guarantees the fairest bandwidth allocation in terms of max-min fairness. Then, by utilizing a rounding method, we obtain the integral solution from the fractional solution. We also consider time fairness and present a polynomial-time algorithm for optimal integral solution. We further extend our schemes for the on-line case where users may join and leave dynamically. Our simulations demonstrate that the proposed algorithms achieve close to optimal load balancing (i.e., max-min fairness) and they outperform commonly used heuristics.",
"We consider network utility maximization problems over heterogeneous cellular networks (HetNets) that permit dual connectivity. Dual connectivity (DC) is a feature that targets emerging practical HetNet deployments that will comprise of non-ideal (higher latency) connections between transmission nodes, and has been recently introduced to the LTE-Advanced standard. DC allows for a user to be simultaneously served by a macro node as well as one other (typically micro or pico) node and requires relatively coarser level coordination among serving nodes. For such a DC enabled HetNet we comprehensively analyze the problem of determining an optimal user association that maximizes the weighted sum rate system utility subject to per-user rate constraints, over all feasible associations. Here, in any feasible association each user can be associated with (i.e., configured to receive data from) any one macro node (in a given set of macro nodes) and any one pico node that lies in the chosen macro node's coverage area. We show that, remarkably, this problem can be cast as a non-monotone submodular set function maximization problem, which allows us to construct a constant-factor approximation algorithm. We then consider the proportional fairness (PF) system utility and characterize the PF optimal resource allocation. This enables us to construct an efficient algorithm to determine an association that is optimal up-to an additive constant. We then validate the performance of our algorithms via numerical results."
],
"cite_N": [
"@cite_26",
"@cite_2",
"@cite_21",
"@cite_17"
],
"mid": [
"2161300749",
"2165715302",
"2143690081",
"2963847355"
]
} | Proportional Fair RAT Aggregation in HetNets | The increasing demand for wireless data has led to denser and more heterogeneous wireless network deployments. This heterogeneity manifests itself in terms of network deployments across multiple radio access technologies (e.g., 3G, LTE, WiFi, 5G), cell sizes (e.g., macro, pico, femto), and frequency bands (e.g., TV bands, 1.8-2.4 GHz, mmWave), etc. To realize the gains associated with such heterogeneous networks (HetNets), consumer (client) devices are also being equipped with an increasing number of radio access technologies (RATs), and some are already able to simultaneously aggregate the traffic across multiple RATs to increase throughput [1].
To support such traffic aggregation on the network side, the 3GPP (3rd Generation Partnership Project) has been actively developing multi-RAT integration solutions. The introduction of LWA (LTE-WiFi Aggregation) as part of 3GPP Release 13 [2] was a step in this direction. LWA allows using both LTE and WiFi links for a single traffic flow and is generally more efficient than transport layer aggregation protocols (e.g., MultiPath TCP), due to coordination at lower protocol stack layers. LWA's design primarily follows the LTE Dual Connectivity (DC) architecture (defined in 3GPP Release 12 [3]), which allows a wireless device to connect to two LTE eNBs that are on different carrier frequencies and utilize the radio resources that belong to both of them. Currently, the 3GPP is working on a solution to support below-IP (layer-2) multi-RAT integration across any combination of RATs, including LTE, WiFi, 802.11ad/ay, and 5G New Radio (NR) [4]. The proposed architecture would allow for dynamic traffic splitting across RATs for each client, which can lead to a significant increase in system performance (e.g., total throughput).
However, it is difficult to design resource allocation algorithms for each BS¹ that realize the performance benefits of such integrated HetNets. Specifically, (i) backhaul links from different BSs in HetNets show diverse capacity and latency characteristics that depend on the underlying backhauling technology. For example, cable and DSL have on average 28 and 62 ms round-trip latencies, respectively [5], [6]. The latency can be even higher when a network operator uses a third-party ISP to communicate with its BSs (e.g., a mobile operator that uses a wired ISP to control its WiFi BSs). Such latencies make it infeasible for BSs to communicate with each other or with a central controller for real-time resource allocation at each BS. As a result, any practical resource allocation algorithm for multi-RAT HetNets should be fully distributed (i.e., autonomously executed by each BS). (ii) Resource allocation has many practical constraints. Conventional BS hardware allows only minor modifications to existing resource allocation algorithms through software updates, limiting the algorithm design space. New algorithms should also incur minimal signaling overhead and computational complexity. Distributed algorithms based on the traditional network utility maximization framework [7], [8] do not meet these requirements because, as we will show later through simulations, the resulting algorithms are radically different from how conventional BSs operate, have significant over-the-air signaling overhead, and increase the computational complexity on the client side. (iii) In HetNets, each client has access to a client-specific set of RATs, and receives packets at a different PHY rate on each RAT. These rates are naturally different across clients. This multi-rate property of HetNets makes it particularly challenging to design resource allocation algorithms with performance guarantees. As a result, existing solutions in the literature are all limited to simple setups, e.g., when each client has only two RATs, as in the case of LWA [9] or LTE DC [10].
In this paper, we study the problem of resource allocation for traffic aggregation in multi-RAT HetNets. We focus on the proportional-fair (PF) fairness objective as it is widely used and implemented in BSs and provides a balance between fairness and throughput [11], [12]. We first consider PF resource allocation in a single BS, and then use our insights from this case to design a distributed algorithm that meets our three research challenges. We next show that our algorithm converges to an optimal PF resource allocation. The key contributions are as follows:
• Algorithm Design: We study the basics of PF resource allocation in a single BS to gain intuition for the distributed algorithm design. We show that PF resource allocation in a single BS can be viewed as a special type of water-filling. We generalize this observation to a new fully distributed water-filling algorithm (AFRA). We also show that replacing the conventional resource allocation algorithm on each BS with AFRA can substantially increase the system throughput and fairness.
• Performance: We conduct extensive simulations to characterize AFRA's convergence time properties as we scale the number of BSs and clients. We also introduce policies that reduce the convergence time by more than 30%. Finally, we compare the performance of AFRA against DDNUM, a dual decomposition algorithm that we derived from the NUM framework. We show that compared to DDNUM, AFRA is 2-3 times faster with 4-5 times less over-the-air overhead.
This paper is organized as follows. We discuss the related work in Section II. We present the system model and details of AFRA in Section III. In Sections IV and V we prove the convergence and optimality of AFRA. We present the results of our experiments, simulations, and comparisons against DDNUM in Section VI. We conclude the paper in Section VII.
III. SYSTEM MODEL
We discuss the system model and the resource allocation algorithm that is autonomously executed by each BS.
A. Network Model
We consider a HetNet composed of a set of BSs $M = \{1, \dots, M\}$ and a set of clients $N = \{1, \dots, N\}$. Each BS has a limited transmission range and can only serve clients within its range. Each client has a client-specific number of RATs, and therefore has access to a subset of BSs. We model clients that can aggregate traffic across BSs of the same technology (e.g., LTE DC) with multiple such RATs. Fig. 1 shows an example HetNet topology. We assume that clients split their traffic over the BSs and focus on the resource allocation problem at each BS. Determining which BS to associate with among same-technology BSs (e.g., choosing the optimal LTE BS if a client has an LTE RAT) is itself a challenging problem; we assume there exists a rule to pre-determine the client RAT to BS association. The pre-determination rule could, for instance, be any load balancing algorithm [24], [25], or be based on the received signal strength. Similar to [13]-[16], [24], we assume that transmissions in one BS do not interfere with those of adjacent BSs. This can be achieved through spectrum separation between BSs that belong to different access networks and frequency reuse among same-technology BSs.
B. Throughput Model
We consider a multi-rate system and use $R_{i,j}$ to denote the PHY rate of client i to BS j. Since each BS generally serves more than one client, clients of the same BS need to share resources such as time and frequency slots (e.g., in 3/4/5G) or transmission opportunities (e.g., in WiFi). The throughput achieved by client i from BS j thus depends on the load of the BS and will be a fraction of $R_{i,j}$. We assume that each BS employs a TDMA throughput sharing model (in Section VI-A, we discuss how we can extend our model and algorithm to capture practical implementation issues such as WiFi contention) and let $\lambda_{i,j}$ denote the fraction of time allocated to client i by BS j. Hence, the throughput achieved by client i from BS j is equal to $\lambda_{i,j} R_{i,j}$ and its total throughput across all its RATs is

$$\text{Total Throughput of Client } i = r_i = \sum_{j=1}^{M} \lambda_{i,j} R_{i,j} \qquad (1)$$
The total amount of time fractions available to each BS cannot exceed 1. Thus, for the $\lambda_{i,j}$s to be feasible we have

$$\sum_{i=1}^{N} \lambda_{i,j} \le 1 \quad \forall j \in M \qquad (2)$$
$$\lambda_{i,j} \ge 0 \quad \forall i \in N,\ j \in M \qquad (3)$$

C. PF Resource Allocation in a Single BS

We first describe the basics of the PF resource allocation that is conventionally implemented in today's BSs. Consider a network topology consisting of only a single BS j and n clients. Let $r_i$ denote the throughput of client i and $\omega_i$ a positive number that denotes its weight (or priority). A widely used objective function for PF is to maximize $\sum_{i=1}^{n} \omega_i \log(r_i)$ [11], [12]. It represents a tradeoff between throughput and fairness among the clients. Let $\lambda_i$ denote the time fraction allocated to client i by BS j. To maximize the PF objective function, the BS needs to solve the following problem

$$P_1:\ \max \sum_{i=1}^{n} \omega_i \log(R_{i,j}\lambda_i) \quad \text{s.t. } \sum_{i=1}^{n} \lambda_i \le 1, \quad \text{variables: } \lambda_i \ge 0$$
Problem $P_1$ can be easily solved through a simple algorithm. The Lagrangian of $P_1$ can be expressed as
$$\mathcal{L}(\lambda, \mu) = \sum_{i=1}^{n} \omega_i \log(R_{i,j}\lambda_i) + \mu\Big(1 - \sum_{i=1}^{n} \lambda_i\Big) \qquad (4)$$
where µ is a constant number (Lagrange multiplier) chosen to meet the time resource constraint. Differentiating with respect to the time fraction $\lambda_i$ and setting the derivative to zero gives
$$\frac{\omega_i R_{i,j}}{R_{i,j}\lambda_i} - \mu = 0 \;\Longrightarrow\; \frac{\omega_i}{\lambda_i} = \mu \quad \forall i \in \{1, \dots, n\} \qquad (5)$$
Since the sum of time fractions at optimality is equal to 1, we can conclude from Eq. (5) that $\mu = \sum_{i=1}^{n}\omega_i$. With known µ and $\omega_i$, we can derive the $\lambda_i$s from Eq. (5). Now, let $\theta_j$ be defined as $\frac{1}{\mu}$. Leveraging Eq. (5), we have
$$\frac{\lambda_i}{\omega_i} = \theta_j \;\Longrightarrow\; \frac{r_i}{\omega_i R_{i,j}} = \theta_j \quad \forall i \in \{1, \dots, n\} \qquad (6)$$

Eq. (6) has an interesting water-filling based interpretation: the time allocated to each client is such that the throughput of the client divided by its PHY rate times its weight is the same across all clients. We refer to this ratio (i.e., $\theta_j$) as the water-fill level of BS j. In the next section, we will turn this observation in a single BS into a distributed resource allocation algorithm in HetNets.
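To make this interpretation concrete, here is a minimal Python sketch of the single-BS allocation implied by Eqs. (5)-(6); the function name and interface are our own illustration, not code from the paper:

```python
# Single-BS PF allocation (Eqs. (5)-(6)).
# w: list of positive client weights omega_i at this BS.
def single_bs_pf(w):
    mu = sum(w)                   # Lagrange multiplier: mu = sum_i omega_i
    lam = [wi / mu for wi in w]   # Eq. (5): lambda_i = omega_i / mu
    theta = 1.0 / mu              # water-fill level theta_j = 1 / mu
    return lam, theta
```

Note that the PHY rates cancel out: with a single BS, each client's time share depends only on its weight, and every served client ends up at the common level $\theta_j$, which is exactly the water-filling reading of Eq. (6).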
D. Distributed Resource Allocation in HetNets
There are two approaches to designing a resource allocation algorithm for generic HetNets. One approach, as we show in the Appendix, is to extend the formulation in $P_1$ to include multiple BSs and client RATs, and use dual decomposition to derive a distributed algorithm. This approach converges to the optimal solution; however, the Lagrange multipliers across BSs would no longer correspond to BSs' water-fill levels. The second approach is to directly generalize the water-filling interpretation to derive an alternative algorithm, which still converges to the optimal solution (Section V) with far less overhead, convergence time, and complexity than the dual decomposition based algorithm (Section VI-C).
From Eq. (6), we observe that in a network with only a single BS, the BS allocates its time resources so that the clients who get the time resources reach the same water-fill level (i.e., throughput divided by $\omega_i R_{i,j}$). Thus, in generic HetNets, if each BS considers the total throughput of each client across all its RATs (i.e., $r_i$) divided by $\omega_i R_{i,j}$ in its water-fill definition, this should lead to a fair distributed algorithm. In other words, each BS j should share its time resources across its clients such that: (1) all clients who get the time resources reach the same water-fill level at BS j (i.e., $\theta_j$), and (2) if a client (e.g., i′) does not get any time resources from BS j, its $\frac{r_{i'}}{\omega_{i'} R_{i',j}}$ is greater than $\theta_j$. Fig. 2 illustrates this operation. There are 4 clients with non-zero PHY rates to BS j. Blue boxes denote contributions to $\frac{r_i}{\omega_i R_{i,j}}$ by BS j (when it allocates time resources) and white boxes show contributions to it by other BSs. BS j allocates its time resources so that all clients that get resources achieve the same water-fill level ($\theta_j$). Clients that do not get any resources from BS j have a higher $\frac{r_i}{\omega_i R_{i,j}}$ than $\theta_j$. Client $i_3$ is one such client in this example.
We next turn this idea into a distributed resource allocation algorithm. Consider slotted time for now. Algorithm AFRA (Fig. 3) summarizes the steps that are autonomously executed by each BS j. There are three main steps in the algorithm: (i) clients are sorted based on the total throughput they receive from other BSs ($r_i$) divided by $\omega_i R_{i,j}$ (Line 3), (ii) BS j finds the water-fill level ($\theta_j$) and allocates the time resources accordingly (Line 4), and (iii) finally, we introduce a randomization parameter to limit concurrent resource adaptation of a single client by multiple BSs (Line 5). We next elaborate on how each BS j finds its water-fill level and its clients' time resource fractions (Line 4). Let n′ denote the number of clients such that $R_{i,j} > 0$. Let $r_i$ denote the total throughput of client i from all BSs other than j. Consider an ordering of the clients by $\frac{r_i}{\omega_i R_{i,j}}$ according to Line 3 of AFRA. In order to solve the water-fill problem (i.e., Line 4 of AFRA), we need to find the water-fill level $\theta_j$, client index k, and time fractions $\lambda_{i,j}$ such that
$$\frac{r_1 + \lambda_{1,j}R_{1,j}}{\omega_1 R_{1,j}} = \frac{r_2 + \lambda_{2,j}R_{2,j}}{\omega_2 R_{2,j}} = \cdots = \frac{r_k + \lambda_{k,j}R_{k,j}}{\omega_k R_{k,j}} = \theta_j \qquad (7)$$
$$\frac{r_k}{\omega_k R_{k,j}} < \theta_j \le \frac{r_{k+1}}{\omega_{k+1} R_{k+1,j}} \qquad (8)$$
$$\sum_{i=1}^{k} \lambda_{i,j} = 1, \quad \lambda_{i,j} > 0 \qquad (9)$$
We can find these variables with a simple set of linear operations. First, we can find k by checking a set of inequalities
$$\begin{cases} \dfrac{\frac{r_2\,\omega_1 R_{1,j}}{\omega_2 R_{2,j}} - r_1}{R_{1,j}} \ge 1 \;\Rightarrow\; k = 1, \text{ else} \\[2mm] \dfrac{\frac{r_3\,\omega_1 R_{1,j}}{\omega_3 R_{3,j}} - r_1}{R_{1,j}} + \dfrac{\frac{r_3\,\omega_2 R_{2,j}}{\omega_3 R_{3,j}} - r_2}{R_{2,j}} \ge 1 \;\Rightarrow\; k = 2, \text{ else} \\[1mm] \quad\vdots \\[1mm] \dfrac{\frac{r_{n'}\,\omega_1 R_{1,j}}{\omega_{n'} R_{n',j}} - r_1}{R_{1,j}} + \cdots + \dfrac{\frac{r_{n'}\,\omega_{n'-1} R_{n'-1,j}}{\omega_{n'} R_{n',j}} - r_{n'-1}}{R_{n'-1,j}} \ge 1 \;\Rightarrow\; k = n'-1, \text{ else } k = n' \end{cases}$$
In the first inequality, we first check whether $\frac{r_1 + R_{1,j}}{\omega_1 R_{1,j}} \le \frac{r_2}{\omega_2 R_{2,j}}$. If this is true, from Eq. (7) we conclude that client 2 would still have a higher $\frac{r_2}{\omega_2 R_{2,j}}$ than client 1's $\frac{r_1}{\omega_1 R_{1,j}}$ even if BS j allocated all its time resources to client 1 (i.e., to the client with minimum $\frac{r_i}{\omega_i R_{i,j}}$ across all n′ clients). As a result, k should be equal to 1. This procedure (and logic) is continued until k is found.
With known k, we can find $\theta_j$ by combining Eqs. (7) and (9) and solving the following linear equation

$$\sum_{i=1}^{k} \frac{\theta_j\,\omega_i R_{i,j} - r_i}{R_{i,j}} = 1 \qquad (10)$$
With known k and $\theta_j$, the $\lambda_{i,j}$s can be found from Eq. (7).

AFRA's Computational Complexity and Message Passing Overhead. We calculate AFRA's computational complexity in finding the new time resource fractions ($\lambda_{i,j}$s) for a BS j. Let n′ denote the number of clients with non-zero PHY rates to j. The complexity of sorting clients (Line 3) is $O(n'\log(n'))$. The complexity of finding the water-fill level and the new time resource fractions (Line 4) is $O(n'\log(n'))$ (with a binary search to find k). Thus, the overall computational complexity is $O(n'\log(n'))$. If we assume that each client has on average K RATs, then on average n′ would be equal to $\frac{KN}{M}$. Thus, the computational complexity would also be equal to $O(\frac{KN}{M}\log(\frac{KN}{M}))$. Each BS uses the total throughput of each client across all its RATs in its calculations to find the water-fill level and the new $\lambda_{i,j}$s. Each time a client's time resources (and hence total throughput) change, the client needs to inform all BSs to which it is connected about its new total throughput. Thus, the total message passing overhead generated by clients of a single BS is at most $O(n'K)$, or alternatively $O(\frac{K^2N}{M})$.
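As an illustration of the water-fill step in Line 4, the sketch below solves Eqs. (7)-(10) for one BS. The name `water_fill` and the list-based interface are our own; the sketch also omits the randomization of Line 5 and, for clarity, scans linearly rather than using the prefix-sum/binary-search refinement that yields the $O(n'\log(n'))$ bound above.

```python
# One water-fill step of AFRA at a single BS (Eqs. (7)-(10)).
# For the n' clients with a non-zero PHY rate to this BS:
#   r[i]: total throughput client i receives from all *other* BSs,
#   w[i]: weight omega_i,  R[i]: PHY rate of client i to this BS.
def water_fill(r, w, R):
    order = sorted(range(len(r)), key=lambda i: r[i] / (w[i] * R[i]))
    k = len(order)
    # Find k per the inequality chain above: stop at the first candidate
    # level that cannot be reached with a total time budget of 1 (Eq. (8)).
    for m in range(1, len(order)):
        level = r[order[m]] / (w[order[m]] * R[order[m]])
        need = sum((level * w[i] * R[i] - r[i]) / R[i] for i in order[:m])
        if need >= 1.0:
            k = m
            break
    top = order[:k]
    # Eq. (10): sum_i (theta*w_i*R_i - r_i)/R_i = 1, solved for theta.
    theta = (1.0 + sum(r[i] / R[i] for i in top)) / sum(w[i] for i in top)
    lam = {i: (theta * w[i] * R[i] - r[i]) / R[i] for i in top}  # Eq. (7)
    return theta, lam
```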
IV. CONVERGENCE AND SPEED OF AFRA
In this section, we investigate the convergence properties of AFRA. We first show that as BSs autonomously execute AFRA, the system converges to an equilibrium. Next, we investigate the convergence time properties of AFRA and provide tight bounds to quantify it.
A. Convergence to an Equilibrium
Before we discuss convergence, we present a formal definition of an equilibrium. Definition 1 Equilibrium: The vector of time fractions across all the BSs and clients is an equilibrium outcome if none of the BSs can increase its water-fill level through unilateral change of its time resource allocations.
Our next theorem guarantees the convergence of AFRA.
Theorem 1 Let each BS autonomously execute AFRA. Then, the system converges to an equilibrium, i.e., $\forall i \in N$ and $j \in M$: $\lambda_{i,j} \to \lambda^{eq}_{i,j}$, $\theta_j \to \theta^{eq}_j$, and $r_i \to r^{eq}_i$.
Proof: Let λ denote the vector of time fractions ($\lambda_{i,j}$s) across all clients and BSs, and $f(\lambda) = \sum_{i=1}^{N}\omega_i\log(r_i)$ be the potential function. A potential function [26] is a useful tool to analyze equilibrium properties, as it maps the payoff (e.g., throughput) of all clients into a single function.
Since the number of clients and BSs is finite, f is bounded. The key step to prove convergence is to show that each time a BS j adjusts its time fractions (i.e., its $\lambda_{i,j}$s), the potential function f increases. This property, coupled with f's boundedness, guarantees its convergence. We will show later in Eq. (15) that the change in the potential function is proportional to the product of the change in water-fill levels and the change in the $\lambda_{i,j}$s. Since f converges (i.e., its variations converge to 0), one or both of these terms should converge to 0. Either condition guarantees the convergence of the $\lambda_{i,j}$s (and hence, the $\theta_j$s and $r_i$s).
Next, we show that each time a BS runs AFRA, f increases. When a BS runs AFRA, it takes some time resources from clients with high $\frac{r_i}{\omega_i R_{i,j}}$ and distributes them across clients with lower values. To ease the proof presentation, we focus on two clients and follow the changes in f as the BS adjusts the $\lambda_{i,j}$s dedicated to these clients.
Let i, i′ denote two clients who are currently receiving time resources from BS j. Assume the following initial (old) order between these two clients:

$$\frac{r_i}{\omega_i R_{i,j}} < \frac{r_{i'}}{\omega_{i'} R_{i',j}} \qquad (11)$$
Therefore, as BS j executes AFRA it changes the time resources from $\lambda_i$ and $\lambda_{i'}$ to $\lambda_i + \delta$ and $\lambda_{i'} - \delta$, respectively. This only changes the two corresponding terms in the potential function, i.e.
$$f(\lambda)^{new} - f(\lambda)^{old} = \omega_i\log(r_i + \delta R_{i,j}) - \omega_i\log(r_i) + \omega_{i'}\log(r_{i'} - \delta R_{i',j}) - \omega_{i'}\log(r_{i'}) = \omega_i\log\Big(1 + \delta\frac{R_{i,j}}{r_i}\Big) + \omega_{i'}\log\Big(1 - \delta\frac{R_{i',j}}{r_{i'}}\Big) \qquad (12)$$
Let g(δ) denote the variation in potential function, i.e.
$$g(\delta) = \omega_i\log\Big(1 + \delta\frac{R_{i,j}}{r_i}\Big) + \omega_{i'}\log\Big(1 - \delta\frac{R_{i',j}}{r_{i'}}\Big) \qquad (13)$$
Thus, to prove convergence, we need to prove that g(δ) is always positive. We prove this by showing two facts: first, $g'(\delta) \ge 0$, which shows that g(δ) is non-decreasing; second, g(δ) is positive for very small values of δ. Now,
$$g'(\delta) = \frac{\omega_i\frac{R_{i,j}}{r_i}}{1 + \delta\frac{R_{i,j}}{r_i}} - \frac{\omega_{i'}\frac{R_{i',j}}{r_{i'}}}{1 - \delta\frac{R_{i',j}}{r_{i'}}} = \frac{\omega_i R_{i,j}}{r_i + \delta R_{i,j}} - \frac{\omega_{i'} R_{i',j}}{r_{i'} - \delta R_{i',j}} = \frac{\omega_i R_{i,j}}{r_i^{new}} - \frac{\omega_{i'} R_{i',j}}{r_{i'}^{new}} \ge 0 \qquad (14)$$
Here $r_i^{new}$ and $r_{i'}^{new}$ are the new throughput values for clients i and i′, respectively. It is clear that after BS j adjusts the time resources, we still have $\frac{r_i^{new}}{\omega_i R_{i,j}} \le \frac{r_{i'}^{new}}{\omega_{i'} R_{i',j}}$. This is because after BS j reduces $\lambda_{i',j}$, $\frac{r_{i'}^{new}}{\omega_{i'} R_{i',j}}$ would be either equal to the new water-fill level or higher than it (if $\lambda_{i',j} = 0$). On the other hand, $\frac{r_i^{new}}{\omega_i R_{i,j}}$ would be equal to the new water-fill level. As a result, the final term in Eq. (14) is non-negative. Finally, g(δ) is greater than zero for small values of δ because
$$g(\delta) \overset{\text{Taylor approx.}}{\approx} \omega_i\delta\frac{R_{i,j}}{r_i} - \omega_{i'}\delta\frac{R_{i',j}}{r_{i'}} = \delta\Big(\frac{\omega_i R_{i,j}}{r_i} - \frac{\omega_{i'} R_{i',j}}{r_{i'}}\Big) > 0 \qquad (15)$$
The positivity of the last term in the above equation follows from Eq. (11).
B. Convergence Time
Before we can derive a bound on convergence time, we need to define a discretization factor on the time fractions (i.e., the $\lambda_{i,j}$s). This technicality is due to the fact that the $\lambda_{i,j}$s in our model are continuous variables, which can cause some BSs to continuously make infinitesimal adjustments to them. These adjustments converge to 0 as time goes to infinity.
In practice, operations always happen at discretized levels. For example, consider the following discretization policy: Definition 2 Discretization Policy: During the water-fill calculation by a BS j in AFRA, the time fraction allocated to the client with minimum $\frac{r_i}{\omega_i R_{i,j}}$ should increase by at least $\epsilon$. Otherwise, the BS does not update its time fractions.
Based on the above discretization policy, we can derive the following bound on the convergence time.

Theorem 2 Let each BS autonomously execute AFRA under the discretization policy of Definition 2. Then, the system converges to an equilibrium within $O\big(\frac{NM^2\log(MN)}{\epsilon^2}\big)$ steps.

Proof: Let $f(\lambda) = \sum_{i=1}^{N}\omega_i\log(r_i)$ be the potential function from the proof of Theorem 1. To compute a bound on the convergence time, we study the increments of f. The key step is to find a lower bound on f's increments. Since f increases whenever a BS makes adjustments to its $\lambda_{i,j}$s, the convergence time is then upper bounded by the difference between the maximum and minimum possible values of f divided by the lower bound on f's increments.
We take the following steps to find a lower bound on the potential function's increments. Let $\{i_1, i_2, \dots, i_q\}$ denote the set of clients with non-zero PHY rates to BS j and assume the following initial (old) order among the clients:

$$\frac{r^{old}_{i_1}}{\omega_{i_1} R_{i_1,j}} \le \frac{r^{old}_{i_2}}{\omega_{i_2} R_{i_2,j}} \le \cdots \le \frac{r^{old}_{i_q}}{\omega_{i_q} R_{i_q,j}} \qquad (16)$$
When BS j executes AFRA, it adjusts the time fractions in a way that increases the time resources allocated to client $i_1$. Let $\epsilon_{i_1}$ denote the increase in client $i_1$'s time resources and $r^{new}_{i_1} = r_{i_1}$ its new throughput. Let $\epsilon_{i_p}$ denote the change in client $i_p$'s ($i_p \in \{i_2, \dots, i_q\}$) time resources and $r^{new}_{i_p}$ its new throughput. Hence, we have

$$r^{new}_{i_1} = r_{i_1}, \qquad r^{old}_{i_1} = r_{i_1} - \epsilon_{i_1} R_{i_1,j} \qquad (17)$$
$$r^{new}_{i_p} = r_{i_p}, \qquad r^{old}_{i_p} = r_{i_p} + \epsilon_{i_p} R_{i_p,j} \quad \forall i_p \in \{i_2, \dots, i_q\} \qquad (18)$$
$$\epsilon_{i_1} = \epsilon_{i_2} + \epsilon_{i_3} + \cdots + \epsilon_{i_q} \qquad (19)$$
However, even after BS j adjusts its time resources, $i_1$ would still have the minimum $\frac{r_i}{\omega_i R_{i,j}}$ across all clients. This is due to the water-fill based operation in AFRA. As a result,

$$\frac{r_{i_1}}{\omega_{i_1} R_{i_1,j}} \le \frac{r_{i_p}}{\omega_{i_p} R_{i_p,j}} \quad \forall i_p \in \{i_2, \dots, i_q\} \qquad (20)$$
$$\Longrightarrow \frac{R_{i_p,j}}{r_{i_p}} \le \frac{\omega_{i_1}}{\omega_{i_p}} \cdot \frac{R_{i_1,j}}{r_{i_1}} \qquad (21)$$
Next, we find a lower bound on the potential function's increments
$$f(\lambda)^{old} - f(\lambda)^{new} = \omega_{i_1}\log\Big(1 - \epsilon_{i_1}\frac{R_{i_1,j}}{r_{i_1}}\Big) + \sum_{p=2}^{q}\omega_{i_p}\log\Big(1 + \epsilon_{i_p}\frac{R_{i_p,j}}{r_{i_p}}\Big) \overset{\text{Eq. (21)}}{\le} \omega_{i_1}\log\Big(1 - \epsilon_{i_1}\frac{R_{i_1,j}}{r_{i_1}}\Big) + \sum_{p=2}^{q}\omega_{i_p}\log\Big(1 + \epsilon_{i_p}\frac{R_{i_1,j}}{r_{i_1}}\frac{\omega_{i_1}}{\omega_{i_p}}\Big) \qquad (22)$$
Let $W = \sum_{p=2}^{q}\omega_{i_p}$ and $x_p = \epsilon_{i_p}\frac{R_{i_1,j}}{r_{i_1}}\cdot\frac{\omega_{i_1}}{\omega_{i_p}}$. Since the logarithm is a concave function, from Jensen's inequality [27],

$$\sum_{p=2}^{q}\omega_{i_p}\log(1 + x_p) = W\sum_{p=2}^{q}\frac{\omega_{i_p}}{W}\log(1 + x_p) \le W\log\Big(\sum_{p=2}^{q}\big(\tfrac{\omega_{i_p}}{W} + \tfrac{\omega_{i_p}}{W}x_p\big)\Big) = W\log\Big(1 + \sum_{p=2}^{q}\frac{\omega_{i_p}}{W}x_p\Big) \qquad (23)$$
Leveraging Eq. (23), we conclude that Eq. (22) is

$$\le \omega_{i_1}\log\Big(1 - \epsilon_{i_1}\frac{R_{i_1,j}}{r_{i_1}}\Big) + W\log\Big(1 + \frac{\omega_{i_1}}{W}\frac{R_{i_1,j}}{r_{i_1}}\underbrace{\sum_{p=2}^{q}\epsilon_{i_p}}_{=\,\epsilon_{i_1}}\Big) = \omega_{i_1}\Big[\log(1 - z) + \gamma\log\big(1 + \tfrac{z}{\gamma}\big)\Big] \overset{\text{Taylor series}}{\le} -\omega_{i_1}\frac{z^2}{2}$$
$$\Longrightarrow f(\lambda)^{new} - f(\lambda)^{old} \ge \omega_{i_1}\frac{z^2}{2} \qquad (24)$$

where $z = \epsilon_{i_1}\frac{R_{i_1,j}}{r_{i_1}}$ and $\gamma = \frac{W}{\omega_{i_1}}$
. Note that since we seek an upper bound on convergence time, we can choose a small enough $\epsilon_{i_1}$ so that $z, \frac{z}{\gamma} < 1$. These assumptions increase the upper bound but allow us to use the Taylor series in Eq. (24). If we let $R_{min}$ and $R_{max}$ denote the minimum and maximum PHY rates across all the clients and BSs, then we have
$$\text{Convergence Time} \le \frac{\max f(\lambda) - \min f(\lambda)}{\frac{1}{2}\omega_{min}\,\epsilon^2\big(\frac{R_{i_1,j}}{r_{i_1}}\big)^2} \le \frac{\big(\sum_{i=1}^{N}\omega_i\big)\big(\log(r_{max}) - \log(r_{min})\big)}{\frac{1}{2}\omega_{min}\,\epsilon^2\big(\frac{R_{min}}{M R_{max}}\big)^2} \le \frac{\big(\sum_i \omega_i\big)\Big(\log(M R_{max}) - \log\big(\frac{\omega_{min}}{\sum_i \omega_i} R_{min}\big)\Big)}{\frac{1}{2}\omega_{min}\,\epsilon^2\big(\frac{R_{min}}{M R_{max}}\big)^2} \equiv O\Big(\frac{N M^2 \log(M N)}{\epsilon^2}\Big) \qquad (25)$$
V. OPTIMALITY OF AFRA

Beyond convergence, we study the optimality properties of AFRA's equilibria. We first derive some useful properties of the equilibria that we leverage for optimality analysis. Next, we prove that the equilibria also maximize the global proportional fair resource allocation problem across all the BSs, and hence are globally optimal. Finally, we discuss the uniqueness of the equilibria and prove that while the equilibrium throughput vector across all the clients is unique, there could be infinitely many resource allocations that realize this outcome. For simplicity, we do not consider discretization in this section.
Theorem 3 Consider an equilibrium outcome of AFRA. Let $r^{eq}_i$ denote the throughput of client i, $\theta^{eq}_j$ the water-fill level of BS j, and $\lambda^{eq}_{i,j}$ the fraction of time allocated to client i by BS j. Then:

I. $\frac{\omega_i R_{i,j}}{r^{eq}_i} \le \frac{1}{\theta^{eq}_j} \quad \forall i \in N,\ j \in M$
II. $\sum_{i=1}^{N}\lambda^{eq}_{i,j} = 1 \quad \forall j \in M$
III. $\sum_{i=1}^{N}\omega_i = \sum_{j=1}^{M}\frac{1}{\theta^{eq}_j}$

Proof: Part 1. From the water-fill definition we have
$$R_{i,j},\ \lambda^{eq}_{i,j} > 0 \;\Longrightarrow\; \frac{r^{eq}_i}{\omega_i R_{i,j}} = \theta^{eq}_j \qquad (26)$$
$$R_{i,j} > 0,\ \lambda^{eq}_{i,j} = 0 \;\Longrightarrow\; \frac{r^{eq}_i}{\omega_i R_{i,j}} \ge \theta^{eq}_j \qquad (27)$$
Property I follows from Eqs. (26) and (27). Part 2. Every BS can always increase its water-fill level by distributing its unused time resources across its clients. The property follows, since at equilibrium the water-fill levels cannot be further increased. Part 3. We leverage I and II to derive property III as follows
$$\sum_{i=1}^{N}\omega_i = \sum_{i=1}^{N}\omega_i\frac{r^{eq}_i}{r^{eq}_i} = \sum_{i=1}^{N}\sum_{j=1}^{M}\frac{\omega_i\lambda^{eq}_{i,j}R_{i,j}}{r^{eq}_i} = \sum_{\lambda^{eq}_{i,j}>0}\frac{\omega_i\lambda^{eq}_{i,j}R_{i,j}}{r^{eq}_i} \overset{\text{Eq. (26)}}{=} \sum_{\lambda^{eq}_{i,j}>0}\frac{\lambda^{eq}_{i,j}}{\theta^{eq}_j} \overset{\text{II}}{=} \sum_{j=1}^{M}\frac{1}{\theta^{eq}_j} \qquad (28)$$
We next show that any equilibrium outcome of AFRA is globally optimal, i.e., it maximizes the global PF resource allocation problem.

Theorem 4 Any equilibrium outcome of AFRA maximizes the global PF resource allocation problem.

Proof: Let $r^{eq}_i$ and $\theta^{eq}_j$ denote the throughput of client i and the water-fill level of BS j at an equilibrium, respectively.
We prove that for any feasible selection of the $\lambda_{i,j}$s (i.e., $\lambda_{i,j}$s that satisfy the feasibility conditions in Eqs. (2) and (3)) and the corresponding clients' throughput values (i.e., $r_i$s as defined in Eq. (1)) we have

$$\sum_{i=1}^{N}\omega_i\log(r_i) \le \sum_{i=1}^{N}\omega_i\log(r^{eq}_i) \qquad (29)$$
Define $W = \sum_{i=1}^{N}\omega_i$. Eq. (29) can then be proved through the following chain of inequalities by leveraging properties I and III from Theorem 3:
$$\sum_{i=1}^{N}\omega_i\log(r_i) - \sum_{i=1}^{N}\omega_i\log(r^{eq}_i) = \sum_{i=1}^{N}\omega_i\log\Big(\frac{r_i}{r^{eq}_i}\Big) = W\sum_{i=1}^{N}\frac{\omega_i}{W}\log\Big(\frac{r_i}{r^{eq}_i}\Big) \overset{\text{Jensen}}{\le} W\log\Big(\sum_{i=1}^{N}\Big(\frac{\omega_i}{W}\times\frac{r_i}{r^{eq}_i}\Big)\Big) = W\log\Big(\frac{1}{W}\times\sum_{i=1}^{N}\frac{\omega_i r_i}{r^{eq}_i}\Big) = W\log\Big(\frac{1}{W}\times\sum_{i=1}^{N}\sum_{j=1}^{M}\frac{\omega_i\lambda_{i,j}R_{i,j}}{r^{eq}_i}\Big) \overset{\text{I}}{\le} W\log\Big(\frac{1}{W}\times\sum_{i=1}^{N}\sum_{j=1}^{M}\frac{\lambda_{i,j}}{\theta^{eq}_j}\Big) \overset{\text{Eq. (2)}}{\le} W\log\Big(\frac{1}{W}\times\sum_{j=1}^{M}\frac{1}{\theta^{eq}_j}\Big) \overset{\text{III}}{=} W\log\Big(\frac{1}{W}\times\sum_{i=1}^{N}\omega_i\Big) = 0 \qquad (30)$$
In our last theorem we prove that while the equilibrium throughput vector across all clients is unique, there could be infinitely many resource allocations that realize this outcome.
Theorem 5 Let $r^{eq} = (r^{eq}_1, \dots, r^{eq}_N)$ denote the vector of throughput rates across all clients at an equilibrium. Then, $r^{eq}$ is unique. However, there could be infinitely many resource allocations across the BSs that realize $r^{eq}$. Proof: Part 1. We first prove that $r^{eq}$ is unique. Let $r^{eq}$ maximize the global proportional-fair resource allocation across all clients and assume $r^{eq'}$ is a different equilibrium. From Theorem 4, we know that every other equilibrium should also maximize the global PF resource allocation. This means that all inequalities in Eq. (30) should be equalities for any equilibrium, including $r^{eq'}$. Now, for the first inequality to be an equality (i.e., the Jensen inequality of Eq. (30)), the following condition needs to be satisfied [27]:

$$\frac{r^{eq'}_i}{r^{eq}_i} = c \quad \forall i \in N, \text{ for some constant } c \qquad (31)$$
Further, since $\sum_{i=1}^{N}\omega_i\log(r^{eq'}_i) = \sum_{i=1}^{N}\omega_i\log(r^{eq}_i)$, Eq. (31) forces $c = 1$, and we conclude that

$$r^{eq'}_i = r^{eq}_i \quad \forall i \in N \qquad (32)$$
Part 2. To prove that there could be infinitely many resource allocations that realize $r^{eq}$, we provide an example. Consider a topology with two BSs ($j_1, j_2$) and two clients ($i_1, i_2$). Let $R_{i_1,j} = 1\ \forall j \in M$, $R_{i_2,j} = 2\ \forall j \in M$, and $\omega_{i_1} = \omega_{i_2} = 2$. Then, $\sum\omega_i\log(r_i)$ is maximized by the following time fractions for any $\alpha \in [0, 1]$:

$$\lambda_{i,j} = \alpha \ \text{ for } (i,j) \in \{(i_1, j_1), (i_2, j_2)\}, \qquad \lambda_{i,j} = 1 - \alpha \ \text{ for } (i,j) \in \{(i_1, j_2), (i_2, j_1)\} \qquad (33)$$

Here, irrespective of α, $r_{i_1} = 1$ and $r_{i_2} = 2$.
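The example in Eq. (33) is easy to verify numerically; the following snippet is our own check (not from the paper) that the equilibrium throughputs are independent of α:

```python
# Check Eq. (33): for any alpha in [0, 1], r_{i1} = 1 and r_{i2} = 2.
R = {('i1', 'j1'): 1.0, ('i1', 'j2'): 1.0,
     ('i2', 'j1'): 2.0, ('i2', 'j2'): 2.0}
for alpha in (0.0, 0.25, 0.5, 1.0):
    lam = {('i1', 'j1'): alpha, ('i2', 'j2'): alpha,
           ('i1', 'j2'): 1 - alpha, ('i2', 'j1'): 1 - alpha}
    r1 = sum(lam[('i1', j)] * R[('i1', j)] for j in ('j1', 'j2'))
    r2 = sum(lam[('i2', j)] * R[('i2', j)] for j in ('j1', 'j2'))
    assert (r1, r2) == (1.0, 2.0)
```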
VI. PERFORMANCE EVALUATION
In this section, we evaluate AFRA's performance through experiments and simulations. First, we investigate the benefits of MAC level traffic aggregation in a small testbed composed of four SDR (software-defined radio)-based BSs and clients. Next, we conduct simulations to evaluate AFRA's equilibria properties as we scale the number of clients and BSs. Finally, we compare AFRA's speed and over-the-air signaling overhead against DDNUM, a dual decomposition based algorithm that we derived from the NUM framework.
A. SDR-Based Implementation and Real-World Performance
Implementation. We construct a HetNet topology composed of a WiFi BS, a cellular BS, and two clients. The two BSs are physically separated from each other and are placed in an indoor lab environment (Fig. 4(a)). We use a WARP board [28] with 802.11a reference design as our WiFi BS. We use another WARP board with OFDM PHY (WARP OFDM reference design) and a custom TDMA (Time Division Multiple Access) MAC to mimic a cellular BS. We use two other WARP boards to construct our two clients. Each client has access to both WiFi and cellular radios, and remains static and connected to both BSs throughout the experiments.
A server running iPerf sessions is connected to both BSs through Ethernet. For each client, the server generates a single fully-backlogged UDP traffic flow with 500 byte packets. We implement a below-IP sublayer to split this traffic flow between the two BSs. This sublayer is responsible for selecting the BS to be used for each packet, and acts similarly to the LWA Adaptation Protocol (LWAAP) in the LWA standard [2]. In our implementation, we sequentially iterate between the WiFi and cellular BSs to route the packets of each traffic flow.
AFRA, as presented in Section III-D, does not account for various types of overhead (e.g., PHY/MAC headers, ACKs, idle slots, collisions) that exist in PHY/MAC protocols. To address this issue, we introduce the notion of effective rate ($R^{eff}$) and replace all $R_{i,j}$s in AFRA with $R^{eff}_{i,j}$s. For a single packet, $R^{eff}$ can be calculated as the number of bits in the packet divided by the total time it takes the BS to successfully transmit that packet (including all overhead). In our implementation, each BS keeps track of the total time spent in successfully transmitting the past 5 packets of each traffic flow (i.e., the past 5 packets of each client) to calculate its $R^{eff}_{i,j}$. The averaging over 5 packets is to account for channel fluctuations in our experiments, and can be adjusted based on the client mobility.
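A minimal sketch of this bookkeeping is shown below; the class name and window-based interface are assumptions of ours, but the computation mirrors the definition of $R^{eff}$ above:

```python
from collections import deque

# Tracks R_eff for one (client, BS) pair over the last few packets.
class EffRateTracker:
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)  # (payload_bits, airtime_sec)

    def record(self, payload_bits, airtime_sec):
        # airtime_sec includes all overhead (headers, ACKs, retries).
        self.samples.append((payload_bits, airtime_sec))

    def r_eff(self):
        bits = sum(b for b, _ in self.samples)
        time = sum(t for _, t in self.samples)
        return bits / time if time > 0 else 0.0
```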
We implement the following mechanisms: (i) WiFi only: the cellular BS is off but the WiFi BS is active; (ii) Cellular only: the WiFi BS is off; (iii) AGG-RR: this scheme uses aggregation but with a round robin (RR) scheduler at the WiFi BS and the conventional PF MAC at the cellular BS. With the RR scheduler, the WiFi BS maintains a different queue for each client and sequentially serves a single packet from each queue at every round. With the PF MAC at the cellular BS, the BS dedicates its time resources to each client according to Section III-C (single BS PF); (iv) AFRA: each BS uses its calculated $\lambda_{i,j}$s to determine the number of packets that should be served from each queue in WiFi and the number of time slots that should be dedicated to each queue (client) in cellular, at every round. In our implementation, both clients' weights $\omega_i$ are equal to 1 and the BSs update their $\lambda_{i,j}$s every 5 ms.
Performance Results. Fig. 4(c) shows the performance of the four schemes. In both the WiFi only and Cellular only options, only a single BS is active throughout the experiments. We observe that the Cellular only scheme provides a higher sum throughput than the WiFi only scheme. With careful evaluation of packet transmission traces, we discovered that this higher throughput is primarily due to the corresponding MAC protocols. In particular, the WiFi MAC provides the same transmission opportunity to each traffic flow (client). As a result, the client with the lower PHY rate occupies the channel for a longer duration than the other client. This decreases the throughput for both clients. In contrast, the cellular TDMA MAC provides the same transmission time for both clients (with 2 clients, single BS PF equally divides the time between the clients (Eq. (5))). As a result, the throughput of the client with the higher PHY rate does not drop because of the client with the lower PHY rate. This, along with other MAC issues such as WiFi contention, reduces the WiFi only throughput. Fig. 4(c) also shows that the two RAT aggregation schemes (AGG-RR and AFRA) can successfully aggregate WiFi and cellular capacities and provide a higher sum throughput than the WiFi only and Cellular only options. Further, AFRA increases the average total throughput by 45% (from 20 to 29 Mbps) with 18 and 11 Mbps per-client total throughput values (per-client throughput plots are shown in Fig. 4(d)). Let us define the proportional fairness index as $PF = \sum_{i=1}^{2}\log(r_i)$ ($r_i$ is the total throughput of each client across its RATs in Mbps). Then, the PF index in AFRA would be 2.3. With AGG-RR, the per-client throughput rates drop to 12.5 and 7.5 Mbps. Thus, the PF index reduces to 1.97. AGG-RR uses the conventional scheduling algorithms on each BS (i.e., it uses RR in WiFi and single BS PF in cellular), which reduce both the sum throughput and the PF fairness index.

Fig. 4 caption: We use two WARP boards to construct two BSs in our testbed. The BSs are connected to a server through Ethernet. The server runs a single fully-backlogged DL UDP iPerf session to each client. A sublayer implementation below the IP layer at the server selects the BS for each packet of every traffic flow. The clients (not shown in the photo) have access to both radios, and remain static and connected to both BSs throughout the experiments (a); Cellular TDMA and WiFi MACs: the PHY header and ACKs are sent at a fixed transmission rate, clients embed the throughput they receive from the other BS in their ACK packets, and the MAC header and payload are transmitted at a variable transmission rate; we define $R^{eff}$ as the total number of payload bits divided by the total time it takes to successfully transmit a packet, and replace all $R_{i,j}$s in AFRA with $R^{eff}_{i,j}$ to derive the $\lambda_{i,j}$s and determine the number of packets that should be served from each queue (b); Total throughput across the two clients for four schemes: WiFi only (WiFi), Cellular only (Cellular), AGG-RR, and AFRA; AFRA achieves a higher average total throughput (29 Mbps vs 20 Mbps) and PF index (2.3 vs 1.97) compared to AGG-RR (c); Per-client throughput values for both AFRA and AGG-RR (d).
B. AFRA's Equilibria Properties
Setup. We simulated network deployments with N clients and M BSs to evaluate AFRA's equilibria properties as we scale the number of clients and BSs. All clients' weights $\omega_i$ are equal to 1. Half of the BSs are WiFi and the other half are cellular. Each client has access to 4 RATs: two WiFi and two cellular. The PHY rates for the WiFi and cellular RATs are randomly selected from the sets {1, 2, 5.5, 11} Mbps and {5.2, 10.3, 25.5, 51} Mbps, respectively. In each simulation realization, we randomly associate clients' RATs with BSs. Next, we run AFRA until an equilibrium is reached. We set the discretization factor ε equal to 0.05, i.e., a BS adjusts its time fractions only if the increase in the time fraction (i.e., $\lambda_{i,j}$) of its client with minimum $\frac{r_i}{\omega_i R_{i,j}}$ is greater than or equal to 0.05. For the initial allocation, each BS equally divides its time across its clients. Unless otherwise specified, each of our simulation points is an average of 100 simulation realizations.
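For concreteness, the following sketch shows how one such realization can be simulated. It reuses the `water_fill` helper sketched in Section III-D; the initialization and the ε-guard follow the setup described above, while the surrounding scaffolding (function name, step counting) is our own:

```python
import random

def run_afra(R, w, eps=0.05, seed=0):
    # R[i][j]: PHY rate of client i to BS j (0 if not associated).
    N, M = len(R), len(R[0])
    rng = random.Random(seed)
    lam = [[0.0] * M for _ in range(N)]
    for j in range(M):                     # equal initial time split
        cl = [i for i in range(N) if R[i][j] > 0]
        for i in cl:
            lam[i][j] = 1.0 / len(cl)
    steps = 0
    while True:
        moved = False
        for j in rng.sample(range(M), M):  # visit BSs in random order
            cl = [i for i in range(N) if R[i][j] > 0]
            if not cl:
                continue
            # Throughput each client gets from all BSs other than j:
            r_other = [sum(lam[i][jj] * R[i][jj]
                           for jj in range(M) if jj != j) for i in cl]
            theta, new = water_fill(r_other, [w[i] for i in cl],
                                    [R[i][j] for i in cl])
            # Definition 2: act only if the minimum-ratio client's
            # time fraction grows by at least eps.
            p_min = min(range(len(cl)),
                        key=lambda p: r_other[p] / (w[cl[p]] * R[cl[p]][j]))
            if new.get(p_min, 0.0) - lam[cl[p_min]][j] >= eps:
                for p, i in enumerate(cl):
                    lam[i][j] = new.get(p, 0.0)
                moved, steps = True, steps + 1
        if not moved:
            return lam, steps
```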
AFRA's Convergence Time. Figs. 5(a) and 5(b) depict the impact of the number of clients and BSs on AFRA's convergence time. In each of these figures, we count the number of steps until convergence is reached. At each step, a single BS that needs to adjust its time fractions is randomly selected. In Fig. 5(a), we vary the number of clients from 10 to 100 and plot the corresponding convergence times for three different M values: 10, 20, and 50. We repeat this simulation by changing the N and M variables and plot the corresponding results in Fig. 5(b). From these two figures, we observe that the time to convergence is highest when the number of clients is between one and two times the number of BSs. As the ratio between the number of clients and BSs (i.e., $\frac{N}{M}$) leaves this range, the convergence time rapidly drops and then stabilizes. The results show that AFRA requires a small number of steps to reach an equilibrium.
Policies to Further Reduce AFRA's Convergence Time. Our next goal is to design policies that can further reduce AFRA's convergence time. To gain intuition on how to design such policies, we simulated a topology with 10 clients and 10 BSs and plotted the evolution of the potential function (i.e., $\sum_i \log(r_i)$) as BSs adjusted their time fractions. The results are shown in Fig. 5(c). Here, each Run corresponds to a different simulation realization. From these realizations we make two observations. First, there is a wide gap in the convergence times. Second, a high jump in the potential function pushes the system closer to equilibrium. Based on these observations, we designed a prioritization policy among the BSs to reduce the convergence time.
We let each BS calculate the increase in the potential function assuming that it is the only BS executing AFRA. Since in AFRA each BS knows the current total throughput of its clients, it has all the needed information to calculate the increase in the potential function due to its action. Next, each BS broadcasts its calculated value. Finally, the BS with the highest value gets priority in executing AFRA. This distributed policy can be easily implemented in networks where all the BSs are connected to the same backbone (e.g., Ethernet). The solid black curve in Fig. 5(c) shows the potential function's evolution with this policy. We observe that on average, the convergence time drops from 15 steps to 10, i.e., the prioritization policy reduces the convergence time by 33%. We repeated this simulation for another setup with 20 clients to increase the topological redundancy. The results are plotted in Fig. 5(d). Similarly, the average convergence time reduces from 19 steps to 13, i.e., a 32% reduction in convergence time.
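A sketch of this policy is given below. The function name and the `r_before`/`r_after` helpers are hypothetical stand-ins for each BS's local what-if computation, which, as noted above, only needs the clients' current total throughputs that AFRA already tracks:

```python
import math

# Each BS computes the potential-function gain of its own (hypothetical)
# water-fill update; the BS with the largest gain acts first.
def pick_bs_to_act(bs_list, w, r_before, r_after):
    # r_before(j) / r_after(j): dicts {client: total throughput}
    # before and after BS j's local what-if water-fill update.
    def gain(j):
        old, new = r_before(j), r_after(j)
        return sum(w[i] * (math.log(new[i]) - math.log(old[i]))
                   for i in old)
    return max(bs_list, key=gain)
```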
C. Comparison Against DDNUM
We have compared AFRA's performance against DDNUM, a distributed algorithm that we developed by leveraging dual decomposition and the NUM framework. Dual decomposition is appropriate for solving the multi-RAT PF allocation problem, because the coupling constraint (Eq. (2)) can be relaxed through the dual problem, and the problem then decouples into subproblems that can be iteratively solved by clients and BSs.
DDNUM is in essence similar to the standard dual algorithm presented in [7] to solve the basic NUM problem. We modified the algorithm in [7] to capture the constraints of our problem. At a high level, DDNUM has three main steps (for detailed algorithm derivation and discussions, refer to the Appendix):
• Step 1: Initialization: set t = 0 and $\boldsymbol{\mu}(0)$ to some nonnegative value for each BS. Here, $\boldsymbol{\mu}(t)$ is the vector of Lagrange multipliers that shows the cost or congestion across all BSs. Each BS broadcasts its $\mu_j(0)$ to clients with $R_{i,j} > 0$.
• Step 2: Each client i locally solves its Lagrangian problem, i.e., finds its time fractions ($\lambda^*_{i,j}(\mu_j(t))$) for each BS with $R_{i,j} > 0$ and informs those BSs.
• Step 3: Each BS updates its price with a step size γ and broadcasts the new price $\mu_j(t+1)$ to all its clients. This procedure is repeated until a satisfying termination point is reached (e.g., the solution is within a desired proximity of the optimal solution).
Similar to AFRA, DDNUM is guaranteed to converge and maximize the global optimization problem. However, there are several practicality and performance issues. We highlight a few of these issues next.
Setup. To compare AFRA to DDNUM, we used the simulation setup in Section VI-B (without the BS prioritization policy). We first run AFRA and let the system converge to an equilibrium. Next, we consider the 95% value of AFRA's potential function at equilibrium as the desired algorithm termination point. We count the number of steps to reach the termination point and the resulting over-the-air signaling overhead for each of these two schemes. In DDNUM, the step size γ (Step 3) provides a balance between the final throughput values and speed. We choose the γ that results in the fastest convergence time, subject to the potential function reaching the termination point. Finally, both AFRA and DDNUM can operate in either parallel or sequential mode with similar relative performance. We present the sequential mode results, i.e., at each time only a single BS adjusts its water-fill level (in AFRA) or announces a new price (in DDNUM). We assume that clients immediately update their BSs about their new throughput values (in AFRA) and desired $\lambda_{i,j}$s (in DDNUM) with no impact on the convergence time (similar to an FDD system in which uplink data is immediately available).

Speed. Fig. 6(a) shows the convergence time results for a scenario with 10 BSs and a varying number of clients. We observe that irrespective of the number of clients, DDNUM increases the convergence time by a factor of 2-3x with an average of 2.4x. In AFRA, each BS simultaneously calculates the water-fill level and finds the corresponding time fraction for each client. In DDNUM, the pricing mechanism requires a high number of iterations so that clients can find their optimal time fractions. This increases the convergence time.
Over-the-Air Overhead. Fig. 6(b) shows the wireless signaling overhead results of the two schemes. We observe that DDNUM increases the signaling overhead by a factor of 4-5x with an average of 4.5x. Several factors contribute to DDNUM's high signaling overhead. First, the increase in convergence time results in a similar multiplicative increase in overhead. Second, in DDNUM both BSs and clients contribute to the overhead: BSs continuously broadcast new prices and clients continuously inform each of their BSs about their desired time fractions. In contrast, in AFRA only clients update the BSs regarding their new throughput values. Third, with careful examination of simulation traces, we observed that in AFRA the water-fill operation only impacts a few of a BS's clients each time, whereas in DDNUM, each time a BS updates its price, most of its clients request new time fractions.
Practicality. In DDNUM, each BS broadcasts its price while each client finds its desired $\lambda^*_{i,j}$s and requests them from its BSs. However, in real wireless systems BSs are responsible for resource allocation. Note that in DDNUM, it is not practical to shift the calculation of the $\lambda^*_{i,j}$s (i.e., Step 2) to the BSs. This is because in order for a BS j to find the $\lambda^*_{i,j}$s for each of its clients (e.g., i), it would require knowledge of the client's $R_{i,j'}$ and $\mu_{j'}$ for every other BS j′ for which the client's rate (i.e., $R_{i,j'}$) is greater than zero. This information is only available at the client, and pushing it to the BS would significantly increase the overhead, which is already very high in DDNUM.
Complexity. In DDNUM, each client has to solve a complex Lagrangian subproblem to find its desired time fraction for each BS (step 2). This increases the computational complexity on the client devices. In contrast, AFRA identifies the time resources at the BSs, which have higher power and computing resources. Moreover, as we discussed in Section III-D, AFRA has a very low total computational complexity.
VII. CONCLUSION
We addressed the problem of proportional fair multi-RAT traffic aggregation in HetNets. We studied the conventional PF resource allocation in a single BS and showed that the problem can be viewed as a special type of water-filling. Based on this observation, we designed a new fully distributed water-filling algorithm for HetNets. We also studied the convergence, speed, and optimality of our algorithm. We proved that our algorithm quickly converges to equilibria and derived tight bounds to quantify its speed. We also studied the characteristics of the optimal outcome, and used these properties to prove that the outcomes of our algorithm are globally optimal.
APPENDIX
To maximize the PF objective function in generic multi-RAT HetNets we need to solve the following problem
$$P_2:\ \max \sum_{i=1}^{N}\omega_i\log(r_i) \quad \text{s.t. } r_i = \sum_{j=1}^{M}\lambda_{i,j}R_{i,j}\ \ \forall i \in N, \quad \sum_{i=1}^{N}\lambda_{i,j} \le 1\ \ \forall j \in M, \quad \text{variables: } \lambda_{i,j} \ge 0\ \ \forall i \in N,\ j \in M$$
By capturing the first constraint in the objective function, we can reformulate $P_2$ as
$$P_3:\ \max \sum_{i=1}^{N}\omega_i\log\Big(\sum_{j=1}^{M}\lambda_{i,j}R_{i,j}\Big) \quad \text{s.t. } \sum_{i=1}^{N}\lambda_{i,j} \le 1\ \ \forall j \in M, \quad \text{variables: } \lambda_{i,j} \ge 0\ \ \forall i \in N,\ j \in M$$
We can use dual decomposition to solve $P_3$ since the constraints that couple the $\lambda_{i,j}$ variables (i.e., the first line of constraints in $P_3$) can be relaxed using Lagrange duality, and the optimization problem then decouples into several subproblems that, as we show next, can be solved in a distributed manner.
Let $\mu_j$ be the Lagrange multiplier for the jth constraint. Then the Lagrangian of $P_3$ can be written as
$$L(\lambda, \boldsymbol{\mu}) = \sum_{i=1}^{N}\omega_i\log\Big(\sum_{j=1}^{M}\lambda_{i,j}R_{i,j}\Big) + \sum_{j=1}^{M}\mu_j\Big(1 - \sum_{i=1}^{N}\lambda_{i,j}\Big) = \sum_{i=1}^{N}\Big(\omega_i\log\Big(\sum_{j=1}^{M}\lambda_{i,j}R_{i,j}\Big) - \sum_{j=1}^{M}\mu_j\lambda_{i,j}\Big) + \sum_{j=1}^{M}\mu_j \qquad (34)$$
Here λ is the vector of original optimization variables, which are also referred to as primal variables. The Lagrange multipliers ($\mu_j$) are also referred to as dual variables. The problem now separates into two levels of optimization [7]. At the lower level, each client i needs to solve the following Lagrangian subproblem for a given $\boldsymbol{\mu}$:

$$\max_{\lambda_{i,j}}\ \omega_i\log\Big(\sum_{j=1}^{M}\lambda_{i,j}R_{i,j}\Big) - \sum_{j=1}^{M}\mu_j\lambda_{i,j} \quad \text{s.t. } \lambda_{i,j} \ge 0\ \ \forall i \in N,\ j \in M \qquad (35)$$
At a higher level, we have the master dual problem in charge of updating the dual variables ($\boldsymbol{\mu}$) by solving the following dual problem:

$$\min_{\boldsymbol{\mu} \ge 0}\ \sum_{i=1}^{N} g_i(\boldsymbol{\mu}) + \sum_{j=1}^{M}\mu_j \qquad (36)$$

where $g_i(\boldsymbol{\mu})$ is the dual function, obtained as the maximum value of the Lagrangian subproblem solved in (35) for a given $\boldsymbol{\mu}$. This approach solves the dual problem. However, since the original problem in $P_3$ is convex (and there exists a strictly feasible solution), solving the dual problem equivalently solves the primal problem in $P_3$.
Note that the objective function in (36) is convex and differentiable. Hence, we can use the following simple gradient method at each BS j to solve (36):
$$\mu_j(t+1) = \Big[\mu_j(t) - \gamma\Big(1 - \sum_{i=1}^{N}\lambda^*_{i,j}(t)\Big)\Big]^{+} \qquad (37)$$
where $\lambda^*_{i,j}$ is the solution to (35), t is the iteration index, $\gamma > 0$ is a positive step size, and $[\cdot]^{+}$ denotes the projection onto the non-negative orthant.
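To make the two levels concrete, here is a minimal sketch of DDNUM's building blocks. The closed-form client solution is our own reading of (35) via the KKT conditions (for strictly positive prices, a single client concentrates all of its requested time on the BS with the best rate-per-price) and is not spelled out in this form above:

```python
# Client side: solve (35) for client i, given prices mu[j] > 0
# (assumes client i reaches at least one BS with R[i][j] > 0).
def client_subproblem(i, R, w, mu):
    # KKT: the optimum puts all time on j* = argmax_j R[i][j] / mu[j],
    # with lambda_{i,j*} = w_i / mu_{j*} and 0 elsewhere.
    best = max((j for j in range(len(mu)) if R[i][j] > 0),
               key=lambda j: R[i][j] / mu[j])
    lam = [0.0] * len(mu)
    lam[best] = w[i] / mu[best]
    return lam

# BS side: projected gradient price update of Eq. (37).
def price_update(mu_j, total_requested_time, gamma):
    return max(0.0, mu_j - gamma * (1.0 - total_requested_time))
```

Here `total_requested_time` is $\sum_{i=1}^{N}\lambda^*_{i,j}(t)$; prices rise on overloaded BSs and fall otherwise.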
As $t \to \infty$, the dual variables converge to the dual optimal $\boldsymbol{\mu}^*$ and the primal variables $\lambda^*(\boldsymbol{\mu}(t))$ converge to the optimal primal variables $\lambda^*$. Algorithm DDNUM, shown below, summarizes the above steps.
DDNUM: Dual Decomposition Based Resource Allocation
Inputs: Known $R_{i,j}$ at each client i for every BS j for which $R_{i,j} > 0$. Initialization: Set t = 0 and $\boldsymbol{\mu}(0)$ to some nonnegative value for each BS.
• Step 1: Each client i locally solves its Lagrangian problem in (35), i.e., finds its time fractions ($\lambda^*_{i,j}(\boldsymbol{\mu}(t))$) for each BS j with $R_{i,j} > 0$, and informs those BSs.
• Step 2: Each BS updates its price according to Eq. (37) and broadcasts the new price to all its clients (i.e., clients with $R_{i,j} > 0$).
• Step 3: Set t ← t + 1 and go to Step 1 (until the satisfying termination point is reached). | 9,032 |
cs0206003 | 2949865365 | Representing defeasibility is an important issue in common sense reasoning. In reasoning about action and change, this issue becomes more difficult because domain and action related defeasible information may conflict with general inertia rules. Furthermore, different types of defeasible information may also interfere with each other during the reasoning. In this paper, we develop a prioritized logic programming approach to handle defeasibilities in reasoning about action. In particular, we propose three action languages AT^0, AT^1 and AT^2 which handle three types of defeasibilities in action domains named defeasible constraints, defeasible observations and actions with defeasible and abnormal effects respectively. Each language with a higher superscript can be viewed as an extension of the language with a lower superscript. These action languages inherit the simple syntax of the A language, but their semantics is developed in terms of transition systems where transition functions are defined based on prioritized logic programs. By illustrating various examples, we show that our approach eventually provides a powerful mechanism to handle various defeasibilities in temporal prediction and postdiction. We also investigate semantic properties of these three action languages and characterize classes of action domains that present more desirable solutions in reasoning about action within the underlying action languages. | An early effort on handling defeasible causal rules in reasoning about action was due to the author's previous work @cite_2 , in which the author identified the restriction of McCain and Turner's causal theory of actions @cite_11 and claimed that in general a causal rule should be treated as a defeasible rule in order to solve the ramification problem properly. In @cite_2 , constraints ) and ) simply correspond to defaults @math and @math respectively. By combining Reiter's default theory @cite_10 and Winslett's PMA @cite_21 , the author developed a causality-based minimal change principle for reasoning about action and change which subsumes McCain and Turner's causal theory. | {
"abstract": [
"",
"The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occuring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.",
"Ginsberg and Smith [6, 7] propose a new method for reasoning about action, which they term a possible worlds approach (PWA). The PWA is an elegant, simple, and potentially very powerful domain-independent technique that has proven fruitful in other areas of AI [13, 5]. In the domain of reasoning about action, Ginsberg and Smith offer the PWA as a solution to the frame problem (What facts about the world remain true when an action is performed?) and its dual, the ramification problem [3] (What facts about the world must change when an action is performed?). In addition, Ginsberg and Smith offer the PWA as a solution to the qualification problem (When is it reasonable to assume that an action will succeed?), and claim for the PWA computational advantages over other approaches such as situation calculus. Here and in [16] I show that the PWA fails to solve the frame, ramification, and qualification problems, even with additional simplifying restrictions not imposed by Ginsberg and Smith. The cause of the failure seems to be a lack of distinction in the PWA between the state of the world and the description of the state of the world. I introduce a new approach to reasoning about action, called the possible models approach, and show that the possible models approach works as well as the PWA on the examples of [6, 7] but does not suffer from its deficiencies.",
"Recent research on reasoning about action has shown that the traditional logic form of domain constraints is problematic to represent ramifications of actions that are related to causality of domains. To handle this problem properly, as proposed by some researchers, it is necessary to describe causal relations of domains explicitly in action theories. In this paper, we address this problem from a new point of view. Specifically, unlike other researchers viewing causal relations as some kind of inference rules, we distinguish causal relations between defeasible and non-defeasible cases. It turns out that a causal theory in our formalism can be specified by using Reiter's default logic. Based on this idea, we propose a causality-based minimal change approach for representing effects of actions, and argue that our approach provides more plausible solutions for the ramification and qualification problems compared with other related work. We also describe a logic programming approximation to compute causal theories of actions which provides an implementational basis for our approach."
],
"cite_N": [
"@cite_11",
"@cite_10",
"@cite_21",
"@cite_2"
],
"mid": [
"",
"2155322595",
"175258934",
"2017479918"
]
} | 0 |
||
cs0206003 | 2949865365 | Representing defeasibility is an important issue in common sense reasoning. In reasoning about action and change, this issue becomes more difficult because domain and action related defeasible information may conflict with general inertia rules. Furthermore, different types of defeasible information may also interfere with each other during the reasoning. In this paper, we develop a prioritized logic programming approach to handle defeasibilities in reasoning about action. In particular, we propose three action languages AT^0, AT^1 and AT^2 which handle three types of defeasibilities in action domains named defeasible constraints, defeasible observations and actions with defeasible and abnormal effects respectively. Each language with a higher superscript can be viewed as an extension of the language with a lower superscript. These action languages inherit the simple syntax of the A language, but their semantics is developed in terms of transition systems where transition functions are defined based on prioritized logic programs. By illustrating various examples, we show that our approach eventually provides a powerful mechanism to handle various defeasibilities in temporal prediction and postdiction. We also investigate semantic properties of these three action languages and characterize classes of action domains that present more desirable solutions in reasoning about action within the underlying action languages. | Although the work presented in @cite_2 provided a natural way to represent causality in reasoning about action, there were several restrictions in this action theory. First, due to technical restrictions, only normal defaults or defaults without justifications are the suitable forms to represent causal rules in problem domains. Second, this action theory did not handle the other two major defeasibilities: defeasible observations and actions with defeasible and abnormal effects. | {
"abstract": [
"Recent research on reasoning about action has shown that the traditional logic form of domain constraints is problematic to represent ramifications of actions that are related to causality of domains. To handle this problem properly, as proposed by some researchers, it is necessary to describe causal relations of domains explicitly in action theories. In this paper, we address this problem from a new point of view. Specifically, unlike other researchers viewing causal relations as some kind of inference rules, we distinguish causal relations between defeasible and non-defeasible cases. It turns out that a causal theory in our formalism can be specified by using Reiter's default logic. Based on this idea, we propose a causality-based minimal change approach for representing effects of actions, and argue that our approach provides more plausible solutions for the ramification and qualification problems compared with other related work. We also describe a logic programming approximation to compute causal theories of actions which provides an implementational basis for our approach."
],
"cite_N": [
"@cite_2"
],
"mid": [
"2017479918"
]
} | 0 |
||
1708.05884 | 2949490711 | Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups. | Learning to Navigate. Navigation has traditionally been approached by either employing supervised learning (SL) methods @cite_11 @cite_2 @cite_12 @cite_20 @cite_38 @cite_29 @cite_28 or reinforcement learning (RL) methods @cite_14 @cite_24 @cite_44 @cite_1 @cite_39 @cite_30 . Furthermore, combinations of the two have been proposed in an effort to leverage advantages of both techniques, e.g. for increasing sample efficiency for RL methods @cite_15 @cite_41 @cite_26 @cite_27 @cite_37 . For the case of controlling physics-driven vehicles, SL can be advantageous when acquiring labeled data is not too costly or inefficient, and has been proven to have relative success in the field of autonomous driving, among other applications, in recent years @cite_11 @cite_2 @cite_12 . However, the use of neural networks for SL in autonomous driving goes back to much earlier work @cite_20 @cite_38 . | {
"abstract": [
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"",
"Inspired by how humans learn dynamic motor skills through a progressive process of coaching and practices, we introduce an intuitive and interactive framework for developing dynamic controllers. The user only needs to provide a primitive initial controller and high-level, human-readable instructions as if s he is coaching a human trainee, while the character has the ability to interpret the abstract instructions, accumulate the knowledge from the coach, and improve its skill iteratively. We introduce “control rigs” as an intermediate layer of control module to facilitate the mapping between high-level instructions and low-level control variables. Control rigs also utilize the human coach's knowledge to reduce the search space for control optimization. In addition, we develop a new sampling-based optimization method, Covariance Matrix Adaptation with Classification (CMA-C), to efficiently compute-control rig parameters. Based on the observation of human ability to “learn from failure”, CMA-C utilizes the failed simulation trials to approximate an infeasible region in the space of control rig parameters, resulting a faster convergence for the CMA optimization. We demonstrate the design process of complex dynamic controllers using our framework, including precision jumps, turnaround jumps, monkey vaults, drop-and-rolls, and wall-backflips.",
"Abstract: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running.",
"",
"The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection. The Arcade Learning Environment (ALE) provides a set of Atari games that represent a useful benchmark set of such applications. A recent breakthrough in combining model-free reinforcement learning with deep learning, called DQN, achieves the best real-time agents thus far. Planning-based approaches achieve far higher scores than the best model-free approaches, but they exploit information that is not available to human players, and they are orders of magnitude slower than needed for real-time play. Our main goal in this work is to build a better real-time Atari game playing agent than DQN. The central idea is to use the slow planning-based agents to provide training data for a deep-learning architecture capable of real-time play. We proposed new agents based on this idea and show that they outperform DQN.",
"The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. But, scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale-up our compressed network encoding where network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very-large networks due to the high-dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus control task requiring networks with over 3 thousand weights, and (2) a version of the TORCS driving game where networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective.",
"",
"",
"",
"Learning physics-based locomotion skills is a difficult problem, leading to solutions that typically exploit prior knowledge of various forms. In this paper we aim to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge. We adopt a two-level hierarchical control framework. First, low-level controllers are learned that operate at a fine timescale and which achieve robust walking gaits that satisfy stepping-target and style objectives. Second, high-level controllers are then learned which plan at the timescale of steps by invoking desired step targets for the low-level controller. The high-level controller makes decisions directly based on high-dimensional inputs, including terrain maps or other suitable representations of the surroundings. Both levels of the control policy are trained using deep reinforcement learning. Results are demonstrated on a simulated 3D biped. Low-level controllers are learned for a variety of motion styles and demonstrate robustness with respect to force-based disturbances, terrain variations, and style interpolation. High-level controllers are demonstrated that are capable of following trails through terrains, dribbling a soccer ball towards a target location, and navigating through static or dynamic obstacles.",
"We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments.",
"Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years has seen growing interest in fast solver ...",
"",
"",
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)."
],
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_37",
"@cite_14",
"@cite_26",
"@cite_28",
"@cite_41",
"@cite_29",
"@cite_1",
"@cite_39",
"@cite_24",
"@cite_44",
"@cite_27",
"@cite_2",
"@cite_15",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"2260756217",
"",
"1993309788",
"2963864421",
"2104733512",
"",
"2151210636",
"2038794597",
"",
"",
"",
"2739330054",
"2952578114",
"2953248129",
"2605314490",
"",
"",
"2342840547"
]
} | Teaching UAVs to Race Using UE4Sim * | Unmanned aerial vehicles (UAVs) like drones and multicopters are attracting more attention in the graphics community. This development is stimulated by the merging of researchers from robotics, graphics, and computer vision into a common scientific community. Recent UAV-related contributions in computer graphics cover a wide spectrum from computational multicopter design, optimization, and fabrication [6] to state-of-the-art video capturing using quadrotor-based camera systems [15] and the generation of dynamically feasible trajectories [38]. While UAV design and point-to-point stabilized flight navigation is becoming a solved problem (as is evident from recent advances in UAV technology from industry leaders such as DJI, Amazon, and Intel), autonomous navigation of UAVs in more complex and real-world scenarios, such as unknown congested environments, GPS-denied areas, through narrow spaces, or around obstacles, is still far from being solved. (* www.airsim.org) In fact, only human pilots can reliably maneuver in these environments. This is a complex problem, since it requires both sensing real-world conditions and understanding appropriate response policies, i.e. perceiving obstacles and adjusting the navigation trajectory accordingly. There is perhaps no area where human pilots are more needed to control UAVs than in the emerging sport of UAV racing, where all these complex sense-and-understand tasks are conducted at breakneck speeds of over 100 km/h. Learning to control racing UAVs is a challenging task even for humans. It takes hours of practice and quite often hundreds of crashes. A more affordable approach to developing professional flight skills is to first train many hours on a flight simulator before going to the field. Since most of the fine motor skills of flight control are developed in simulators, the pilot is able to quickly transition to real-world flights.
In this contribution, we capitalize on this insight from how human pilots learn to sense and react with appropriate controls to their environment to train a deep network that can fly racing UAVs through challenging racing courses, many of which test the capabilities of even professional pilots. Inspired by recent work that trains artificial intelligence (AI) systems through the use of computer games [5], we create a photo-realistic UAV racing game with accurate physics using the Unreal Engine 4 (UE4) and integrate it with UE4Sim [27]. As this is the core learning environment, we develop a photo-realistic and customizable racing area in the form of a stadium based on a three-dimensional (3D) scanned real-world location to minimize discrepancy incurred from transitioning from the simulated to a real-world scenario. Inspired by recent work on self-driving cars [3], our automated racing UAV approach goes beyond simple pattern detection by learning the full control system required to fly a UAV through a racing course (arguably much more complicated than driving a car). As such, the proposed network extends the complexity of previous work to the control of a six degrees of freedom (6-DoF) UAV flying system, enabling the UAV to traverse tight spaces and make sharp turns at very high speeds (a task that cannot be performed by a ground vehicle). Our imitation learning based approach simultaneously addresses both problems of perception and policy selection as the UAV navigates through the course, after it is trained from human pilot input on how to control itself (exploitation) and how to correct itself in case of drift (exploration). Our developed simulator is multi-purpose, as it enables the evaluation of a trained network in real-time on racing courses it has not encountered before.
Contributions. Our specific contributions are as follows.
(1) We are the first to introduce a photo-realistic simulator that is based on a real-world 3D environment that can be easily customized to build increasingly challenging racing courses, enables realistic UAV physical behavior, and is integrated with a real-world UAV controller (powered by a human pilot or a synthetic one). Logging video data from the UAV point-of-view and pilot controls is seamless and can be used to effortlessly generate large-scale training data for AI systems targeting UAV flying in particular and self-driving vehicles in general (e.g. self-driving cars).
(2) To facilitate the training, parameter tuning, and evaluation of deep networks on this type of simulated data, we provide a full integration between the simulator and an end-to-end deep learning pipeline (based on TensorFlow) to be made publicly available to the community. Similar to other deep networks trained for game play, our integration will allow the community to fully explore many scenarios and tasks that go far beyond UAV racing in a rich and diverse photo-realistic gaming environment (e.g. obstacle avoidance and path planning).
(3) To the best of our knowledge, this paper is the first which fully demonstrates the capability of deep networks in learning how to master the complex control of UAVs at racing speeds through difficult flight scenarios. Experiments show that our trained network can reach near-expert performance, while outperforming inexperienced pilots, who can use our system in a learning game play mode to become better pilots.
Overview
The fundamental modules of our proposed system are summarized in Figure 2, which represents the end-to-end dataset generation, learning, and evaluation process. In what follows, we provide details for each of these modules, namely how datasets are automatically generated within the simulator, how our proposed DNN is designed and trained, and how the learned DNN is evaluated. Note that this generic architecture can also be applied to any other type of vision-based navigation task made possible using our simulator.
Simulation Centric Dataset Generation
Our simulation environment allows for the automatic generation of customizable datasets, which comprise rich expert data to robustly train a DNN through imitation learning.
UAV Flight Simulation and Real-World Creation. The core of the system is our UE4-based simulator. It is built on top of the open-source UE4 project for computer vision called UAVSim [28]. Several changes were made to adapt the simulator for training our proposed racing DNN. First, we replaced the UAV with the 3D model and specifications of a racing quadcopter (see Figure 3). We retuned the PID controller of the UAV to be more responsive and to function in a racing mode, where altitude control and stabilization are still enabled but with much higher rates and steeper pitch and roll angles. In fact, this is now a popular racing mode available on consumer UAVs, such as the DJI Mavic. The simulator frame rate is locked at 60 fps, and at every frame a log is recorded with the UAV position, orientation, velocity, and stick inputs from the pilot. To accommodate realistic input, we integrated the same UAV transmitter that would be used in real-world racing scenarios. We refer to the supplementary material for an example pilot recording. Following paradigms set by UAV racing norms, each racing course/track in our simulator comprises a sequence of gates connected by uniformly spaced cones. The track has a timing system that records the time between gates, per lap, and for completing the race. The gates have their own logic to detect whether the UAV has passed through them in the correct direction. This allows us to trigger both the start and end of the race, as well as determine the number of gates traversed by the UAV. These metrics (time and percentage of gates passed) constitute the overall per-track performance of a pilot, whether human or DNN.
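For concreteness, a minimal sketch of one such per-frame log record follows; the field names and types are our own illustrative assumptions, not the simulator's actual schema.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class FlightLogRecord:
    # One record per rendered frame (60 per second).
    frame: int                                  # frame index
    position: Tuple[float, float, float]        # UAV world position (x, y, z)
    orientation: Tuple[float, float, float]     # roll, pitch, yaw (degrees)
    velocity: Tuple[float, float, float]        # linear velocity (m/s)
    sticks: Tuple[float, float, float, float]   # throttle, yaw, pitch, roll in [-1, 1]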
Many professional pilots compete in time trials of well-known tracks such as those posted by the MultiGP Drone Racing League. Following this paradigm, our simulator race course is modeled after a football stadium, where local professional pilots regularly set up MultiGP tracks. Using a combination of LiDAR scanning and aerial photogrammetry, we captured the stadium with an accuracy of 0.5 cm; see Figure 5. A team of architects used the dense point cloud and textured mesh to create an accurate solid model with physics-based rendering (PBR) textures in 3DSmax for export to UE4. This resulted in a geometrically accurate and photo-realistic race course that remains low in poly count, so as to run in UE4 at 60 fps, in which all training and evaluation experiments are conducted. We refer to Figure 4 for a side-by-side comparison of the real and virtual stadiums. Moreover, we want the simulated race course to be as dynamic and photo-realistic as possible, since the eventual goal is to transition the trained DNN from the simulated environment to the real world, particularly starting with a similar venue as learned within the simulator. The concept of generating synthetic clones of real-world data for deep learning purposes has been adopted in previous work [8].
A key requirement for relatively straightforward simulated-to-real-world transition is the DNN's ability to learn to automatically detect the gates and cones in the track within a complexly textured and dynamic environment. To this end, we enrich the simulated environment and race track with customizable textures (e.g. grass, snow, and dirt), gates (different shapes and appearance), and lighting.
Automatic Track Generation. We developed a track editor, in which a user draws a 2D sketch of the overhead view of the track; the 3D track is automatically generated accordingly and integrated into the timing system. With this editor, we created eleven tracks: seven for training, and four for testing and evaluation. (Figure 4: left: aerial image captured from a UAV hovering above the stadium racing track; right: rendering of the reconstructed stadium generated at a similar altitude and viewing angle within the simulator.) Each track is defined by gate positions and track lanes delineated by uniformly spaced racing cones distributed along the splines connecting adjacent gates. To avoid user bias in designing the race tracks, we use images collected from the internet and trace their contours in the editor to create uniquely stylized tracks. Following trends in track design popular in the UAV racing community, both training and testing tracks have a large variety of turn types and straight lengths. From a learning point of view, this track diversity exposes the DNN to a large number of track variations, as well as their corresponding navigation controls. Obviously, the testing/evaluation tracks are never seen in training, neither by the human pilot nor the DNN.
Acquiring Large-Scale Ground-Truth Pilot Data. We record human pilot input from a Taranis flight transmitter integrated into the simulator through a joystick. This input is solicited from three pilots with different levels of skill: novice (has never flown before), intermediate (a moderately experienced pilot), and expert (a professional racing pilot). The pilots are given the opportunity to fly the seven training tracks as many times as needed until they successfully complete the tracks at their best time while passing through all gates. For the evaluation tracks, the pilots are allowed to fly the course only as many times as needed to complete the entire course without crashing. We automatically score pilot performance based on lap time and percentage of gates traversed.
The simulation environment allows us to log the images rendered from the UAV camera point-of-view and the UAV flight controls from the transmitter. As mentioned earlier, and to enable exploration, robust imitation learning requires the augmentation of these ground-truth logs with synthetic ones generated at a user-defined set of UAV offset positions and orientations, accompanied by the corresponding controls needed to correct for these offsets. Also, since the logs can be replayed at a later time in the simulator, we can augment the dataset further by changing environmental conditions, including lighting, cone spacing or appearance, and other environmental dynamics (e.g. clouds). Therefore, each pilot flight leads to a large number of image-control pairs (both original and augmented) that will be used to train the UAV to robustly recover from possible drift along each training track, as well as unseen evaluation tracks. Details of how our proposed DNN architecture is designed and trained are provided in Section 5. In general, more augmented data should improve UAV flight performance, assuming that the control mapping and original flight data are noise-free. However, in many scenarios this is not the case, so we find that there is a limit after which augmentation does not help (or even degrades) explorative learning. Empirical results validating this observation are detailed in Section 6.
Note that assigning corrective controls to the augmented data is quite complex in general, since they depend on many factors, including current UAV velocity, relative position on the track, its weight, and current attitude. While it is possible to get this data in the simulation, it is very difficult to obtain in the real world in real-time. Therefore, we employ a fairly simple but effective model to determine these augmented controls that also scales to real-world settings. We add or subtract a corrective value to the pilot roll and yaw stick inputs for each position or orientation offset that is applied. For rotational offsets, we not only apply a yaw correction but also couple it to roll, because the UAV is in motion while rotating, causing it to wash out due to its inertia.
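A minimal sketch of this corrective-control model is given below; the gain constants, the sign convention, and the function name are hypothetical placeholders, since the text does not specify exact values.

def corrected_sticks(throttle, yaw, pitch, roll,
                     roll_offset_deg=0.0, yaw_offset_deg=0.0,
                     k_roll=0.01, k_yaw=0.01, k_couple=0.005):
    # Hypothetical gains; a positive offset displaces/rotates the synthetic
    # camera view to the right, so the correction steers back to the left.
    roll -= k_roll * roll_offset_deg           # horizontal-offset correction
    yaw -= k_yaw * yaw_offset_deg              # rotational-offset correction
    roll -= k_couple * yaw_offset_deg          # couple yaw correction into roll
    clamp = lambda v: max(-1.0, min(1.0, v))   # keep sticks in [-1, 1]
    return clamp(throttle), clamp(yaw), clamp(pitch), clamp(roll)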
DNN Interface for Real-Time Evaluation. To evaluate the performance of a trained DNN in real-time at 60 fps, we establish a TCP socket connection between the UE4 simulator and the Python wrapper (TensorFlow) executing the DNN. In doing so, the simulator continuously sends rendered UAV camera images across TCP to the DNN, which in turn processes each image individually to predict the next UAV stick inputs (flight controls) that are fed back to the UAV in the simulator using the same connection. Another advantage of this TCP connection is that the DNN prediction can run on a separate system from the one running the simulator. We expect that this versatile and multi-purpose interface between the simulator and the DNN framework will enable opportunities for the research community to further develop DNN solutions not only for the task of automated UAV navigation (using imitation learning) but for the more general task of vehicle maneuvering and obstacle avoidance (possibly using other forms of learning, including RL).
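To illustrate the shape of this prediction loop, the following minimal Python sketch runs a TCP server that receives one rendered frame per request and replies with four stick values; the wire format (a 4-byte length prefix followed by raw RGB bytes in, four little-endian float32 values out) is our own assumption, as the actual protocol is not described here.

import socket
import struct
import numpy as np

def serve_predictions(model, host="127.0.0.1", port=9999, w=320, h=180):
    # model: any object whose predict() maps a batch of images to four sticks.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()                  # the simulator connects once
    try:
        while True:
            header = conn.recv(4)           # assumed 4-byte length prefix
            if not header:
                break
            n = struct.unpack("<I", header)[0]
            buf = b""
            while len(buf) < n:             # read exactly one RGB frame
                chunk = conn.recv(n - len(buf))
                if not chunk:
                    return
                buf += chunk
            img = np.frombuffer(buf, np.uint8).reshape(h, w, 3)
            sticks = model.predict(img[None] / 255.0)[0]  # four floats
            conn.sendall(struct.pack("<4f", *sticks))
    finally:
        conn.close()
        srv.close()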
Learning
In this section, we provide a detailed description of the learning strategy used to train our DNN, its network architecture and design. We also explore some of the inner workings of one of these trained DNNs to shed light on how this network is solving the problem of automated UAV racing.
Dataset Preparation and Augmentation
As is the case for DNN-based solutions to other tasks, careful construction of the training set is a key requirement for robust and effective DNN training. To this end, and as mentioned earlier, we dedicate seven racing tracks (with their corresponding image-control pairs logged from human pilot runs in our simulator) for training and four tracks for testing/evaluation. We design the tracks such that they are similar to what racing professionals are accustomed to and such that they offer enough diversity and capability for exploration for proper network generalization on the unseen tracks. Figure 6 illustrates an overhead view of all these tracks.
As mentioned in Section 4, we log the pilot flight inputs and all other necessary parameters so that we can accurately replay the flights. These log files are then augmented using the specified offsets and replayed within the simulator while saving all rendered images, thus providing exploratory insights to the racing DNN. For completeness, we summarize the details of the data generated for each training/testing track in Table 1. It is clear that the augmentation increases the size of the original dataset by approximately seven times. In Section 6, we show the effect of changing the amount of augmentation on the UAV's ability to generalize well. Moreover, for this dataset, we choose to use the intermediate pilot only, so as to strike a trade-off between style of flight and overall size of the dataset. In Section 6.3, we show the effects of training with different flying styles.
Network Architecture and Implementation Details
To train a DNN to predict stick controls to the UAV from images, we choose a regression network architecture similar in spirit to the one used by Bojarski et al. [3]; however, we make changes to accommodate the complexity of the task at hand and to improve robustness in training. Our DNN architecture is shown in Figure 7. The network consists of eight layers, five convolutional and three fully-connected. Since we implicitly want to localize the track and gates, we use striding in the convolutional layers instead of (max) pooling, which would add some degree of translation invariance. The DNN is given a single RGB image at a 320×180 pixel resolution as input and is trained to regress to the four control/stick inputs to the UAV using a standard L2 loss and a dropout ratio of 0.5. We find that the relatively high input resolution (i.e. higher network capacity), as compared to related methods [3,41], is useful for learning this more complicated maneuvering task and for enhancing the network's ability to look further ahead. This affords the network the additional robustness needed for long-term trajectory stability. We arrived at this compact network architecture by running extensive validation experiments; it strikes a reasonable tradeoff between computational complexity and predictive performance. This careful design makes the proposed DNN architecture feasible for real-time applications on embedded hardware (e.g. Nvidia TX1), unlike previous architectures [3] if they use the same input size. In Table 2, we report the evaluation time on, and technical details of, the NVIDIA Titan X, and how it compares to an NVIDIA TX-1. Based on [30], we expect our network to still run at real-time speed with over 60 frames per second on this embedded hardware.
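As a concrete illustration, the following Keras sketch instantiates a network of the stated shape (five strided convolutions, three fully-connected layers, a 320×180 RGB input, four regression outputs, dropout ratio 0.5); the filter counts, kernel sizes, learning rate, and dropout placement are our own assumptions, since the text fixes only the layer counts, input size, loss, and dropout ratio.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_racing_dnn():
    # Five strided conv layers (no pooling), then three fully-connected layers.
    net = models.Sequential([
        layers.Input(shape=(180, 320, 3)),               # H x W x RGB
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.5),                             # stated dropout ratio
        layers.Dense(50, activation="relu"),
        layers.Dense(4),                                 # four stick outputs
    ])
    net.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                loss="mse")                              # standard L2 loss
    return net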
For training, we use a standard stochastic gradient descent (SGD) optimization strategy (namely Adam) with the TensorFlow platform. As such, one instance of our DNN can be trained to convergence on our dataset in less than two hours on a single GPU. This relatively fast training time enables finer hyper-parameter tuning.
Table 2: Comparison of the NVIDIA Titan X and the NVIDIA TX-1. The performance of the TX-1 is approximated according to [30].
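A hypothetical training call for the sketch above could then look as follows; the placeholder arrays stand in for the logged and augmented image/stick pairs, and the batch size and epoch count are illustrative only.

import numpy as np

model = build_racing_dnn()                       # sketch defined above
X = np.zeros((1024, 180, 320, 3), np.float32)    # placeholder camera frames
y = np.zeros((1024, 4), np.float32)              # placeholder stick targets
model.fit(X, y, batch_size=64, epochs=10, validation_split=0.1)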
In contrast to other work where the frame rate is sampled down to 10 fps or lower [3,4,41], our racing environment is highly dynamic (with tight turns, high speed, and low inertia of the UAV), so we use a frame rate of 60 fps. This allows the UAV to be very responsive and move at high speeds, while maintaining a level of smoothness in controls. An alternative approach for temporally smooth controls is to include historical data in the training process (e.g. adding the previous controls as input to the DNN). This can make the network more complex, harder to train, and less responsive in the highly dynamic racing environment, where many time-critical decisions have to be made within a couple of frames (about 30 ms). Therefore, we find the high learning frame rate of 60 fps a good tradeoff between smooth controls and responsiveness.
Reinforcement vs. Imitation Learning. Of course, our simulator can also lend itself to training networks using reinforcement learning. This type of learning does not specifically require supervised pilot information, as it searches for an optimal policy that leads to the highest eventual reward (e.g. highest percentage of gates traversed or lowest lap time). Recent methods have made use of reinforcement learning for simpler tasks without supervision [5]; however, they require weeks of training and a much faster simulator (1,000 fps is possible in simple non-photo-realistic games). For UAV racing, the required task is much more complicated, and since the intent is to transfer the learned network into the real world, a (slower) photo-realistic simulator is mandatory. Because of these two constraints, we decided to train our DNN using imitation learning instead of reinforcement learning.
Network Visualization
After training our DNN to convergence, we visualize how parts of the network behave in order to gain additional insights. Figure 8 shows some feature maps in different layers of the trained DNN for the same input image. Note how the filters have automatically learned to extract all necessary information in the scene (i.e. gates and cones), while in higher-level layers they do not respond to other parts of the environment. Although the feature map resolution becomes very low in the higher DNN layers, the feature map in the fifth convolutional layer is interesting, as it marks the top, left, and right of parts of a gate with just a single activation each. This clearly demonstrates that our DNN is learning semantically intuitive features for the task of UAV racing.
Figure 8: Visualization of feature maps at different convolutional layers in our trained network. Notice how the network activates at locations of semantic meaning for the task of UAV racing, namely the gates and cones.
Evaluation
In order to evaluate the performance of our DNN, we create four testing tracks based on well-known race tracks found in TORCS and Gran Turismo (we refer to Figure 6 for an overhead view of these tracks). Since the tracks must fit within the football stadium environment, they are scaled down, leading to much sharper turns and shorter straightaways, with the UAV reaching top speeds of over 100 km/h. Therefore, the evaluation tracks are significantly more difficult than they were originally intended to be in their original racing environments. We rank the four tracks in terms of difficulty, ranging from easy (track 1), medium (track 2), hard (track 3), to very hard (track 4). For all the following evaluations, both the trained networks and human pilots are tasked to fly two laps in the testing tracks and are scored based on the total gates they fly through and the overall lap time.
Effects of Exploration
We find exploration to be the predominant factor influencing network performance. As mentioned earlier, we augment the pilot flight data with offsets and corresponding corrective controls. We conduct a grid search to find a suitable degree of augmentation and to analyze the effect it has on overall UAV racing performance. To do this, we define two sets of offset parameters: one that acts as a horizontal offset (roll-offset) and one that acts as a rotational offset (yaw-offset). Figure 9 shows how the racing accuracy (percentage of gates traversed) varies with different sets of these augmentation offsets across the four testing tracks. It is clear that increasing the number of rendered images with yaw-offsets has the greatest impact on performance. While it is possible for the DNN to complete tracks without being trained on roll-offsets, this is not the case for yaw-offsets. However, the huge gain from adding rotated camera views saturates quickly, and at a certain point the network does not benefit from more extensive augmentation. Therefore, we found four yaw-offsets to be sufficient. Including camera views with horizontal shifts is also beneficial, since the network is better equipped to recover once it is about to leave the track on straights. We found two roll-offsets to be sufficient to ensure this. Therefore, in the rest of our experiments, we use the following augmentation setup in training: horizontal roll-offset set {−50°, 50°} and rotational yaw-offset set {−30°, −15°, 15°, 30°}.
Figure 9: Effect of data augmentation in training on overall UAV racing performance. By augmenting the original flight logs with data captured at more offsets (roll and yaw) from the original trajectory, along with their corresponding corrective controls, our UAV DNN can learn to traverse almost all the gates of the testing tracks, since it has learned to correct for exploratory maneuvers. After a sufficient amount of augmentation, no additional benefit is realized in improved racing performance.
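The grid search over augmentation configurations can be pictured with the short sketch below; the offset values come from the text, while build_augmented_dataset, train, and percent_gates_passed are hypothetical helper functions.

import itertools

roll_sets = [(), (-50, 50)]                     # horizontal offsets (degrees)
yaw_sets = [(), (-30, 30), (-30, -15, 15, 30)]  # rotational offsets (degrees)

for rolls, yaws in itertools.product(roll_sets, yaw_sets):
    data = build_augmented_dataset(rolls, yaws)   # hypothetical helper
    net = train(data)                             # hypothetical helper
    score = percent_gates_passed(net)             # hypothetical helper
    print(f"rolls={rolls} yaws={yaws} accuracy={score:.1f}%")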
Comparison to State-of-the-Art
We compare our racing DNN to the two most related and recent network architectures, the first denoted as Nvidia (for self-driving cars [3]) and the second as MAV (for forest path navigating UAVs [41]). While the domains of these works are similar, it should be noted that flying a high-speed racing UAV is a particularly challenging task, especially since the effect of inertia is much more significant and there are more degrees of freedom. For fair comparison, we scale our dataset to the same input dimensionality and re-train each of the three networks. We then evaluate each of the trained models on the task of UAV racing in the testing tracks. It is noteworthy to point out that both the Nvidia and MAV networks (in their original implementation) use data augmentation as well, so when training, we maintain the same strategy. For the Nvidia network, the exact offset choices for training are not publicly known, so we use a rotational offset set of {−30°, 30°} to augment its data. As for the MAV network, we use the same augmentation parameters proposed in the paper, i.e. a rotational offset of {−30°, 30°}. We needed to modify the MAV network to allow for a regression output instead of its original classification (left, center and right controls). This is necessary, since our task is much more complex and discrete controls would lead to inadequate UAV racing performance.
It should be noted that in the original implementation of the Nvidia network [3] (based on real-world driving data), it was realized that additional augmentation was needed for reasonable automatic driving performance after the real-world data was acquired. To avoid recapturing the data again, synthetic viewpoints (generated by interpolation) were used to augment the training dataset, which introduced undesirable distortions. By using our simulator, we are able to extract any number of camera views without distortions. Therefore, we also wanted to gauge the effect of additional augmentation on the Nvidia and MAV networks, when they are trained using our default augmentation setting: horizontal roll-offset of {−50°, 50°} and rotational yaw-offset of {−30°, −15°, 15°, 30°}. We denote these trained networks as Nvidia++ and MAV++. Table 3 summarizes the results of these different network variants on the testing tracks. Results indicate that the original Nvidia and MAV networks suffer from insufficient data augmentation. They clearly do not make use of enough exploration. These networks improve in performance when our proposed data augmentation scheme (enabled by our simulator) is used. Regardless, our proposed DNN outperforms the Nvidia/Nvidia++ and MAV/MAV++ networks, where this improvement is less significant when more data augmentation or more exploratory behavior is learned. Unlike the other networks, our DNN performs consistently well on all the unseen tracks, owing to its sufficient network capacity for learning this complex task.
Pilot Diversity & Human vs. DNN
In this section, we investigate how the flying style of a pilot affects the network that is being learned. To do this, we compare the performance of the different networks on the testing set when each of them is trained with flight data captured from pilots of varying flight expertise (intermediate and expert). We also trained models using the Nvidia [3] and MAV [41] architectures, with and without our default data augmentation settings. Table 3 summarizes the lap time and accuracy of these networks. Clearly, the pilot's flying style can significantly affect the performance of the learned network. Figure 10 shows that there is a high correlation, in both performance and flying style, between the pilot used in training and the corresponding learned network. The trained networks clearly resemble the flying style and also the proficiency of their human trainers. Thus, our network that was trained on flights of the intermediate pilot achieves high accuracies but is quite slow, just as the expert network sometimes misses gates but achieves very good lap and overall times. Interestingly, although the networks perform similarly to their pilots, they fly more consistently, and therefore tend to outperform the human pilot with regards to overall time over multiple laps. This is especially true for our intermediate network. Both the intermediate and the expert networks clearly outperform the novice human pilot, who needs several hours of practice and several attempts to reach performance similar to the networks'. Even our expert pilots were not always able to complete the test tracks on the first attempt.
Table 3: Accuracy scores of different pilots and networks on the four test tracks. The accuracy score represents the percentage of completed racing gates. The networks ending with ++ are variants of the original networks with our augmentation strategy.
While the percentage of passed gates and the best lap time give a good indication of network performance, they do not convey any information about the style of the pilot. To this end, we visualize the performance of human pilots and the trained networks by plotting their trajectories onto the track (from a 2D overhead viewpoint). Moreover, we encode their speeds as a heatmap, where blue corresponds to the minimum speed and red to the maximum speed. Figure 11 shows a collection of heatmaps revealing several interesting insights. Firstly, the networks clearly imitate the style of the pilot they were trained on. This is especially true for the intermediate proficiency level, while the expert network sometimes overshoots, which causes it to lose speed and therefore not match the speed pattern as well as the intermediate one. We also note that the performance gap between network and human increases as the expertise of the pilot increases. Note that the flight path of the expert network is less smooth and centered than those of its human counterpart and the intermediate network, respectively. This is partly due to the fact that the networks were only trained on two laps of flying across seven training tracks. An expert pilot has a lot more training than that and is therefore able to generalize much better to unseen environments. However, the experience advantage of the intermediate pilot over the network is much smaller, and therefore the performance gap is smaller. We also show the performance of our novice pilot on these tracks. While the intermediate pilots accelerate on straights, the novice is clearly not able to control speed that well, creating a very narrow velocity range. Although flying quite slowly, he also gets off track several times. This underlines how challenging UAV racing is, especially for inexperienced pilots.
Conclusions and Future Work
In this paper, we proposed a robust imitation learning based framework to teach an unmanned aerial vehicle (UAV) to fly through challenging racing tracks at very high speeds, an unprecedented and difficult task. To do this, we trained a deep neural network (DNN) to predict the necessary UAV controls from raw image data, grounded in a photo-realistic simulator that also allows for realistic UAV physics. Training is made possible by logging data (rendered images from the UAV and stick controls) from human pilot flights, while they maneuver the UAV through racing tracks. This data is augmented with sufficient offsets so as to teach the network to recover from flight mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.
In the future, we aim to transfer the network we trained in our simulator to the real world to compete against human pilots in real-world racing scenarios. Although we accurately modeled the simulated racing environment, the differences in appearance between the simulated and real world will need to be reconciled. Therefore, we will investigate deep transfer learning techniques to enable a smooth transition between the simulator and the real world. Since our developed simulator and its seamless interface to deep learning platforms are generic in nature, we expect that this combination will open up unique opportunities for the community to develop better automated UAV flying methods, to expand its reach to other fields of autonomous navigation such as self-driving cars, and to benefit other interesting AI tasks (e.g. obstacle avoidance).
Figure 11: Visualization of human and automated UAV flights superimposed onto a 2D overhead view of different tracks. The color coding illustrates the instantaneous speed of the UAV. Notice how the UAV learns to speed up on straights and to slow down at turns, and how the flying style corresponds, especially with the intermediate pilots. | 5,330
1708.05884 | 2949490711 | Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups. | In the work of @cite_11 , a deep neural network (DNN) is trained to map recorded camera views to 3-DoF steering commands (steering wheel angle, throttle, and brake). Seventy-two hours of human-driven training data was tediously collected from a forward-facing camera and augmented with two additional views to provide data for simulated drifting and corrective maneuvering. The simulated and on-road results of this pioneering work demonstrate the ability of a DNN to learn (end-to-end) the control process of a self-driving car from raw video data. | {
"abstract": [
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)."
],
"cite_N": [
"@cite_11"
],
"mid": [
"2342840547"
]
} | Teaching UAVs to Race Using UE4Sim * | Unmanned aerial vehicles (UAVs) like drones and multicopters are attracting more attention in the graphics community. This development is stimulated by the merging of researchers from robotics, graphics, and computer vision into a common scientific community. Recent UAV-related contributions in computer graphics cover a wide spectrum from computational multicopter design, optimization, and fabrication [6] to state-of-the-art video capturing using quadrotor-based camera systems [15] and the generation of dynamically feasible trajectories [38]. While UAV design and point-to-point stabilized flight navigation is becoming a solved problem (as is evident from recent advances in UAV technology from industry leaders such as DJI, Amazon, and Intel), autonomous navigation of UAVs in more complex and real-world scenarios, such as unknown congested environments, GPS-denied areas, through narrow spaces, or around obstacles, is still far from being solved. (* www.airsim.org) In fact, only human pilots can reliably maneuver in these environments. This is a complex problem, since it requires both sensing real-world conditions and understanding appropriate response policies, i.e. perceiving obstacles and adjusting the navigation trajectory accordingly. There is perhaps no area where human pilots are more needed to control UAVs than in the emerging sport of UAV racing, where all these complex sense-and-understand tasks are conducted at breakneck speeds of over 100 km/h. Learning to control racing UAVs is a challenging task even for humans. It takes hours of practice and quite often hundreds of crashes. A more affordable approach to developing professional flight skills is to first train many hours on a flight simulator before going to the field. Since most of the fine motor skills of flight control are developed in simulators, the pilot is able to quickly transition to real-world flights.
In this contribution, we capitalize on this insight from how human pilots learn to sense and react with appropriate controls to their environment to train a deep network that can fly racing UAVs through challenging racing courses, many of which test the capabilities of even professional pilots. Inspired by recent work that trains artificial intelligence (AI) systems through the use of computer games [5], we create a photo-realistic UAV racing game with accurate physics using the Unreal Engine 4 (UE4) and integrate it with UE4Sim [27]. As this is the core learning environment, we develop a photo-realistic and customizable racing area in the form of a stadium based on a three-dimensional (3D) scanned real-world location to minimize discrepancy incurred from transitioning from the simulated to a real-world scenario. Inspired by recent work on self-driving cars [3], our automated racing UAV approach goes beyond simple pattern detection by learning the full control system required to fly a UAV through a racing course (arguably much more complicated than driving a car). As such, the proposed network extends the complexity of previous work to the control of a six degrees of freedom (6-DoF) UAV flying system, enabling the UAV to traverse tight spaces and make sharp turns at very high speeds (a task that cannot be performed by a ground vehicle). Our imitation learning based approach simultaneously addresses both problems of perception and policy selection as the UAV navigates through the course, after it is trained from human pilot input on how to control itself (exploitation) and how to correct itself in case of drift (exploration). Our developed simulator is multi-purpose, as it enables the evaluation of a trained network in real-time on racing courses it has not encountered before.
Contributions. Our specific contributions are as follows.
(1) We are the first to introduce a photo-realistic simulator that is based on a real-world 3D environment that can be easily customized to build increasingly challenging racing courses, enables realistic UAV physical behavior, and is integrated with a real-world UAV controller (powered by a human pilot or a synthetic one). Logging video data from the UAV point-of-view and pilot controls is seamless and can be used to effortlessly generate large-scale training data for AI systems targeting UAV flying in particular and self-driving vehicles in general (e.g. self-driving cars).
(2) To facilitate the training, parameter tuning, and evaluation of deep networks on this type of simulated data, we provide a full integration between the simulator and an end-to-end deep learning pipeline (based on TensorFlow) to be made publicly available to the community. Similar to other deep networks trained for game play, our integration will allow the community to fully explore many scenarios and tasks that go far beyond UAV racing in a rich and diverse photo-realistic gaming environment (e.g. obstacle avoidance and path planning).
(3) To the best of our knowledge, this paper is the first which fully demonstrates the capability of deep networks in learning how to master the complex control of UAVs at racing speeds through difficult flight scenarios. Experiments show that our trained network can reach near-expert performance, while outperforming inexperienced pilots, who can use our system in a learning game play mode to become better pilots.
Overview
The fundamental modules of our proposed system are summarized in Figure 2, which represents the end-to-end dataset generation, learning, and evaluation process. In what follows, we provide details for each of these modules, namely how datasets are automatically generated within the simulator, how our proposed DNN is designed and trained, and how the learned DNN is evaluated. Note that this generic architecture can also be applied to any other type of vision-based navigation task made possible using our simulator.
Simulation Centric Dataset Generation
Our simulation environment allows for the automatic generation of customizable datasets, which comprise rich expert data to robustly train a DNN through imitation learning.
UAV Flight Simulation and Real-World Creation. The core of the system is our UE4-based simulator. It is built on top of the open-source UE4 project for computer vision called UAVSim [28]. Several changes were made to adapt the simulator for training our proposed racing DNN. First, we replaced the UAV with the 3D model and specifications of a racing quadcopter (see Figure 3). We retuned the PID controller of the UAV to be more responsive and to function in a racing mode, where altitude control and stabilization are still enabled but with much higher rates and steeper pitch and roll angles. In fact, this is now a popular racing mode available on consumer UAVs, such as the DJI Mavic. The simulator frame rate is locked at 60 fps, and at every frame a log is recorded with the UAV position, orientation, velocity, and stick inputs from the pilot. To accommodate realistic input, we integrated the same UAV transmitter that would be used in real-world racing scenarios. We refer to the supplementary material for an example pilot recording. Following paradigms set by UAV racing norms, each racing course/track in our simulator comprises a sequence of gates connected by uniformly spaced cones. The track has a timing system that records the time between gates, per lap, and for completing the race. The gates have their own logic to detect whether the UAV has passed through them in the correct direction. This allows us to trigger both the start and end of the race, as well as determine the number of gates traversed by the UAV. These metrics (time and percentage of gates passed) constitute the overall per-track performance of a pilot, whether human or DNN.
Many professional pilots compete in time trials of well-known tracks such as those posted by the MultiGP Drone Racing League. Following this paradigm, our simulator race course is modeled after a football stadium, where local professional pilots regularly set up MultiGP tracks. Using a combination of LiDAR scanning and aerial photogrammetry, we captured the stadium with an accuracy of 0.5 cm; see Figure 5. A team of architects used the dense point cloud and textured mesh to create an accurate solid model with physics-based rendering (PBR) textures in 3DSmax for export to UE4. This resulted in a geometrically accurate and photo-realistic race course that remains low in poly count, so as to run in UE4 at 60 fps, in which all training and evaluation experiments are conducted. We refer to Figure 4 for a side-by-side comparison of the real and virtual stadiums. Moreover, we want the simulated race course to be as dynamic and photo-realistic as possible, since the eventual goal is to transition the trained DNN from the simulated environment to the real world, particularly starting with a similar venue as learned within the simulator. The concept of generating synthetic clones of real-world data for deep learning purposes has been adopted in previous work [8].
A key requirement for relatively straightforward simulated-to-real-world transition is the DNN's ability to learn to automatically detect the gates and cones in the track within a complexly textured and dynamic environment. To this end, we enrich the simulated environment and race track with customizable textures (e.g. grass, snow, and dirt), gates (different shapes and appearance), and lighting.
Automatic Track Generation. We developed a track editor, in which a user draws a 2D sketch of the overhead view of the track; the 3D track is automatically generated accordingly and integrated into the timing system. With this editor, we created eleven tracks: seven for training, and four for testing and evaluation. (Figure 4: left: aerial image captured from a UAV hovering above the stadium racing track; right: rendering of the reconstructed stadium generated at a similar altitude and viewing angle within the simulator.) Each track is defined by gate positions and track lanes delineated by uniformly spaced racing cones distributed along the splines connecting adjacent gates. To avoid user bias in designing the race tracks, we use images collected from the internet and trace their contours in the editor to create uniquely stylized tracks. Following trends in track design popular in the UAV racing community, both training and testing tracks have a large variety of turn types and straight lengths. From a learning point of view, this track diversity exposes the DNN to a large number of track variations, as well as their corresponding navigation controls. Obviously, the testing/evaluation tracks are never seen in training, neither by the human pilot nor the DNN.
Acquiring Large-Scale Ground-Truth Pilot Data. We record human pilot input from a Taranis flight transmitter integrated into the simulator through a joystick. This input is solicited from three pilots with different levels of skill: novice (has never flown before), intermediate (a moderately experienced pilot), and expert (a professional racing pilot). The pilots are given the opportunity to fly the seven training tracks as many times as needed until they successfully complete the tracks at their best time while passing through all gates. For the evaluation tracks, the pilots are allowed to fly the course only as many times as needed to complete the entire course without crashing. We automatically score pilot performance based on lap time and percentage of gates traversed.
The simulation environment allows us to log the images rendered from the UAV camera point-of-view and the UAV flight controls from the transmitter. As mentioned earlier, and to enable exploration, robust imitation learning requires the augmentation of these ground-truth logs with synthetic ones generated at a user-defined set of UAV offset positions and orientations, accompanied by the corresponding controls needed to correct for these offsets. Also, since the logs can be replayed at a later time in the simulator, we can augment the dataset further by changing environmental conditions, including lighting, cone spacing or appearance, and other environmental dynamics (e.g. clouds). Therefore, each pilot flight leads to a large number of image-control pairs (both original and augmented) that will be used to train the UAV to robustly recover from possible drift along each training track, as well as unseen evaluation tracks. Details of how our proposed DNN architecture is designed and trained are provided in Section 5. In general, more augmented data should improve UAV flight performance, assuming that the control mapping and original flight data are noise-free. However, in many scenarios this is not the case, so we find that there is a limit after which augmentation does not help (or even degrades) explorative learning. Empirical results validating this observation are detailed in Section 6.
Note that assigning corrective controls to the augmented data is quite complex in general, since they depend on many factors, including current UAV velocity, relative position on the track, its weight, and current attitude. While it is possible to get this data in the simulation, it is very difficult to obtain in the real world in real-time. Therefore, we employ a fairly simple but effective model to determine these augmented controls that also scales to real-world settings. We add or subtract a corrective value to the pilot roll and yaw stick inputs for each position or orientation offset that is applied. For rotational offsets, we not only apply a yaw correction but also couple it to roll, because the UAV is in motion while rotating, causing it to wash out due to its inertia.
DNN Interface for Real-Time Evaluation. To evaluate the performance of a trained DNN in real-time at 60 fps, we establish a TCP socket connection between the UE4 simulator and the Python wrapper (TensorFlow) executing the DNN. In doing so, the simulator continuously sends rendered UAV camera images across TCP to the DNN, which in turn processes each image individually to predict the next UAV stick inputs (flight controls) that are fed back to the UAV in the simulator using the same connection. Another advantage of this TCP connection is that the DNN prediction can run on a separate system from the one running the simulator. We expect that this versatile and multi-purpose interface between the simulator and the DNN framework will enable opportunities for the research community to further develop DNN solutions not only for the task of automated UAV navigation (using imitation learning) but for the more general task of vehicle maneuvering and obstacle avoidance (possibly using other forms of learning, including RL).
Learning
In this section, we provide a detailed description of the learning strategy used to train our DNN, its network architecture and design. We also explore some of the inner workings of one of these trained DNNs to shed light on how this network is solving the problem of automated UAV racing.
Dataset Preparation and Augmentation
As is the case for DNN-based solutions to other tasks, careful construction of the training set is a key requirement for robust and effective DNN training. To this end, and as mentioned earlier, we dedicate seven racing tracks (with their corresponding image-control pairs logged from human pilot runs in our simulator) for training and four tracks for testing/evaluation. We design the tracks such that they are similar to what racing professionals are accustomed to, and such that they offer enough diversity and opportunity for exploration to allow proper network generalization to the unseen tracks. Figure 6 illustrates an overhead view of all these tracks.
As mentioned in Section 4, we log the pilot flight inputs and all other necessary parameters so that we can accurately replay the flights. These log files are then augmented using the specified offsets and replayed within the simulator while saving all rendered images, thus providing exploratory data to the racing DNN. For completeness, we summarize the details of the data generated for each training/testing track in Table 1. The augmentation increases the size of the original dataset by approximately seven times. In Section 6, we show the effect of changing the amount of augmentation on the UAV's ability to generalize well. Moreover, for this dataset, we choose to use the intermediate pilot only, so as to strike a trade-off between flying style and overall dataset size. In Section 6.3, we show the effects of training with different flying styles.
Network Architecture and Implementation Details
To train a DNN to predict stick controls for the UAV from images, we choose a regression network architecture similar in spirit to the one used by Bojarski et al. [3]; however, we make changes to accommodate the complexity of the task at hand and to improve robustness in training. Our DNN architecture is shown in Figure 7. The network consists of eight layers: five convolutional and three fully-connected. Since we implicitly want to localize the track and gates, we use striding in the convolutional layers instead of (max) pooling, which would add some degree of translation invariance. The DNN is given a single RGB image at 320×180 pixel resolution as input and is trained to regress to the four control/stick inputs of the UAV using a standard L2 loss and a dropout ratio of 0.5. We find that the relatively high input resolution (i.e. higher network capacity), as compared to related methods [3,41], is useful for learning this more complicated maneuvering task and for enhancing the network's ability to look further ahead. This affords the network the robustness needed for long-term trajectory stability. We arrived at this compact network architecture through extensive validation experiments; it strikes a reasonable trade-off between computational complexity and predictive performance. This careful design makes the proposed DNN architecture feasible for real-time applications on embedded hardware (e.g. the Nvidia TX1), unlike previous architectures [3] at the same input size. In Table 2, we show the evaluation time and technical details of the NVIDIA Titan X and how it compares to the NVIDIA TX1. Based on [30], we expect our network to still run at real-time speed, with over 60 frames per second, on this embedded hardware.
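A sketch of such an architecture in Keras is shown below. The layer structure (five strided convolutions, three fully-connected layers, 320×180 RGB input, dropout of 0.5, four regression outputs) follows the description above; the filter counts and kernel sizes are assumptions, since they are not given in the text.

```python
# Hedged Keras sketch of the eight-layer regression DNN described above.
# Filter counts and kernel sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_racing_dnn():
    return models.Sequential([
        layers.Input(shape=(180, 320, 3)),             # 320x180 RGB input
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.5),                           # dropout ratio 0.5
        layers.Dense(50, activation="relu"),
        layers.Dense(4),                               # four stick outputs
    ])
```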
For training, we use a standard stochastic gradient descent (SGD) optimization strategy (namely Adam) within the TensorFlow platform. As such, one instance of our DNN can be trained to convergence on our dataset in less than two hours on a single GPU. This relatively fast training time enables finer hyper-parameter tuning.

Table 2: Comparison of the NVIDIA Titan X and the NVIDIA TX1. The performance of the TX1 is approximated according to [30].
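Given the model sketch above, training reduces to a few lines; the loss is the L2 (mean squared error) objective stated earlier, while the learning rate, batch size, and epoch count below are assumptions, as are the placeholder dataset arrays.

```python
# Hedged training sketch: Adam optimizer and L2 (MSE) regression loss.
# train_images/train_controls stand in for the logged and augmented
# image-control pairs; hyper-parameter values are assumptions.
model = build_racing_dnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")
model.fit(train_images, train_controls,
          batch_size=64, epochs=10,
          validation_data=(val_images, val_controls))
```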
In contrast to other work where the frame rate is sampled down to 10 fps or lower [3,4,41], our racing environment is highly dynamic (with tight turns, high speed, and low inertia of the UAV), so we use a frame rate of 60 fps. This allows the UAV to be very responsive and move at high speeds, while maintaining a level of smoothness in controls. An alternative approach for temporally smooth controls is to include historical data in the training process (e.g. adding the previous controls as input to the DNN). This can make the network more complex, harder to train, and less responsive in the highly dynamic racing environment, where many time-critical decisions have to be made within a couple of frames (about 30 ms). Therefore, we find the high learning frame rate of 60 fps to be a good trade-off between smooth controls and responsiveness.
Reinforcement vs. Imitation Learning. Of course, our simulator can also be used to train networks with reinforcement learning. This type of learning does not specifically require supervised pilot information, as it searches for an optimal policy that leads to the highest eventual reward (e.g. highest percentage of gates traversed or lowest lap time). Recent methods have used reinforcement learning for simpler tasks without supervision [5]; however, they require weeks of training and a much faster simulator (1,000 fps is possible in simple, non-photo-realistic games). UAV racing is a much more complicated task, and since the intent is to transfer the learned network into the real world, a (slower) photo-realistic simulator is mandatory. Because of these two constraints, we train our DNN using imitation learning instead of reinforcement learning.
Network Visualization
After training our DNN to convergence, we visualize how parts of the network behave in order to gain additional insights. Figure 8 shows some feature maps in different layers of the trained DNN for the same input image. Note how the filters have automatically learned to extract all necessary information in the scene (i.e. gates and cones), while the higher-level layers do not respond to other parts of the environment. Although the feature map resolution becomes very low in the higher DNN layers, the feature map in the fifth convolutional layer is interesting, as it marks the top, left, and right parts of a gate with just a single activation each. This clearly demonstrates that our DNN is learning semantically intuitive features for the task of UAV racing.

Figure 8: Visualization of feature maps at different convolutional layers in our trained network. Notice how the network activates at locations of semantic meaning for the task of UAV racing, namely the gates and cones.
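Such visualizations can be produced by probing the intermediate activations of the trained model. A hedged sketch using the Keras model from the earlier example follows; `model` and `frame` (one preprocessed 180×320×3 input image) are assumed to exist.

```python
# Sketch: extract and display one channel of every conv-layer activation.
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

conv_outputs = [l.output for l in model.layers
                if isinstance(l, layers.Conv2D)]
probe = models.Model(inputs=model.inputs, outputs=conv_outputs)
activations = probe.predict(frame[None], verbose=0)

for i, fmap in enumerate(activations):
    plt.subplot(1, len(activations), i + 1)
    plt.imshow(fmap[0, :, :, 0], cmap="viridis")  # first channel per layer
    plt.axis("off")
plt.show()
```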
Evaluation
In order to evaluate the performance of our DNN, we create four testing tracks based on well-known race tracks found in TORCS and Gran Turismo (we refer to Figure 6 for an overhead view of these tracks). Since the tracks must fit within the football stadium environment, they are scaled down, leading to much sharper turns and shorter straightaways, with the UAV reaching top speeds of over 100 km/h. Therefore, the evaluation tracks are significantly more difficult than originally intended in their source racing environments. We rank the four tracks in terms of difficulty, ranging from easy (track 1), medium (track 2), and hard (track 3), to very hard (track 4). For all the following evaluations, both the trained networks and human pilots are tasked to fly two laps on the testing tracks and are scored based on the total number of gates they fly through and overall lap time.
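The two scores used throughout the evaluation can be computed with a trivial helper like the one below (a sketch; the gate-traversal events themselves come from the simulator's timing system).

```python
# Scoring sketch: gate-traversal accuracy (%) and overall time.
def score_run(gates_passed, total_gates, lap_times):
    accuracy = 100.0 * gates_passed / total_gates
    overall_time = sum(lap_times)      # two laps in our evaluations
    return accuracy, overall_time

# Example: 22 of 24 gates over two laps of 31.2 s and 30.8 s.
print(score_run(22, 24, [31.2, 30.8]))  # -> (91.66..., 62.0)
```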
Effects of Exploration
We find exploration to be the predominant factor influencing network performance. As mentioned earlier, we augment the pilot flight data with offsets and corresponding corrective controls. We conduct a grid search to find a suitable degree of augmentation and to analyze the effect it has on overall UAV racing performance. To do this, we define two sets of offset parameters: one that acts as a horizontal offset (roll-offset) and one that acts as a rotational offset (yaw-offset). Figure 9 shows how the racing accuracy (percentage of gates traversed) varies with different sets of these augmentation offsets across the four testing tracks. It is clear that increasing the number of rendered images with yaw-offset has the greatest impact on performance. While it is possible for the DNN to complete tracks without being trained on roll-offsets, this is not the case for yaw-offsets. However, the large gain from adding rotated camera views saturates quickly, and at a certain point the network does not benefit from more extensive augmentation. Therefore, we found four yaw-offsets to be sufficient. Including camera views with horizontal shifts is also beneficial, since the network is better equipped to recover once it is about to leave the track on straights. We found two roll-offsets to be sufficient to ensure this. Therefore, in the rest of our experiments, we use the following augmentation setup in training: horizontal roll-offset set {−50°, 50°} and rotational yaw-offset set {−30°, −15°, 15°, 30°}.

Figure 9: Effect of data augmentation in training on overall UAV racing performance. By augmenting the original flight logs with data captured at more offsets (roll and yaw) from the original trajectory, along with their corresponding corrective controls, our UAV DNN can learn to traverse almost all the gates of the testing tracks, since it has learned to correct for exploratory maneuvers. After a sufficient amount of augmentation, no additional benefit is realized in improved racing performance.
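The grid search itself can be organized as in the sketch below, where `train_and_eval` is a hypothetical stand-in for the full pipeline of augmenting the logs, training a network, and measuring gate-traversal accuracy on the four testing tracks.

```python
# Sketch of the augmentation grid search; train_and_eval is a stand-in
# for the full augment/train/evaluate pipeline described above.
import itertools

roll_sets = [(), (-50, 50)]                     # horizontal offsets
yaw_sets = [(), (-30, 30), (-30, -15, 15, 30)]  # rotational offsets

results = {}
for rolls, yaws in itertools.product(roll_sets, yaw_sets):
    results[(rolls, yaws)] = train_and_eval(rolls, yaws)  # returns accuracy

best = max(results, key=results.get)
print("best offset sets:", best, "accuracy:", results[best])
```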
Comparison to State-of-the-Art
We compare our racing DNN to the two most related and recent network architectures, the first denoted as Nvidia (for self-driving cars [3]) and the second as MAV (for forest-path-navigating UAVs [41]). While the domains of these works are similar, it should be noted that flying a high-speed racing UAV is a particularly challenging task, especially since the effect of inertia is much more significant and there are more degrees of freedom. For a fair comparison, we scale our dataset to the same input dimensionality and re-train each of the three networks. We then evaluate each of the trained models on the task of UAV racing on the testing tracks. It is worth pointing out that both the Nvidia and MAV networks (in their original implementations) use data augmentation as well, so we maintain the same strategy when training. For the Nvidia network, the exact offset choices for training are not publicly known, so we use a rotational offset set of {−30°, 30°} to augment its data. As for the MAV network, we use the same augmentation parameters proposed in the paper, i.e. a rotational offset set of {−30°, 30°}. We needed to modify the MAV network to allow for a regression output instead of its original classification (left, center, and right controls). This is necessary since our task is much more complex, and discrete controls would lead to inadequate UAV racing performance.
It should be noted that in the original implementation of the Nvidia network [3] (based on real-world driving data), it was realized that additional augmentation was needed for reasonable automatic driving performance after the real-world data had been acquired. To avoid recapturing the data, synthetic viewpoints (generated by interpolation) were used to augment the training dataset, which introduced undesirable distortions. By using our simulator, we are able to extract any number of camera views without distortions. Therefore, we also wanted to gauge the effect of additional augmentation on both the Nvidia and MAV networks when they are trained using our default augmentation setting: horizontal roll-offsets of {−50°, 50°} and rotational yaw-offsets of {−30°, −15°, 15°, 30°}. We denote these trained networks as Nvidia++ and MAV++. Table 3 summarizes the results of these different network variants on the testing tracks. Results indicate that the original Nvidia and MAV networks suffer from insufficient data augmentation: they clearly do not make use of enough exploration. These networks improve in performance when our proposed data augmentation scheme (enabled by our simulator) is used. Regardless, our proposed DNN outperforms the Nvidia/Nvidia++ and MAV/MAV++ networks, although the improvement is less significant when more data augmentation (i.e. more exploratory behavior) is learned. Unlike the other networks, our DNN performs consistently well on all the unseen tracks, owing to the network capacity needed to learn this complex task.
Pilot Diversity & Human vs. DNN
In this section, we investigate how the flying style of a pilot affects the network that is being learned. To do this, we compare the performance of the different networks on the testing set, when each of them is trained with flight data captured from pilots of varying flight expertise (intermediate and expert). We also trained models using the Nvidia [3] and MAV [41] architectures, with and without our default data augmentation settings. Table 3 summarizes the lap time and accuracy of these networks. Clearly, the pilot flying style can significantly affect the performance of the learned network. Figure 10 shows that there is a high correlation, regarding both performance and flying style, between the pilot used in training and the corresponding learned network. The trained networks clearly resemble the flying style and also the proficiency of their human trainers. Thus, our network that was trained on flights of the intermediate pilot achieves high accuracies but is quite slow, just as the expert network sometimes misses gates but achieves very good lap and overall times. Interestingly, although the networks perform similarly to their pilots, they fly more consistently and therefore tend to outperform the human pilot with regard to overall time over multiple laps. This is especially true for our intermediate network. Both the intermediate and the expert networks clearly outperform the novice human pilot, who takes several hours of practice and several attempts to reach performance similar to the networks. Even our expert pilots were not always able to complete the test tracks on the first attempt.

Table 3: Accuracy scores of different pilots and networks on the four test tracks. The accuracy score represents the percentage of completed racing gates. The networks ending with ++ are variants of the original networks trained with our augmentation strategy.
While the percentage of passed gates and the best lap time give a good indication of network performance, they do not convey any information about the style of the pilot. To this end, we visualize the performance of human pilots and the trained networks by plotting their trajectories onto the track (from a 2D overhead viewpoint). Moreover, we encode their speeds as a heatmap, where blue corresponds to the minimum speed and red to the maximum speed. Figure 11 shows a collection of heatmaps revealing several interesting insights. Firstly, the networks clearly imitate the style of the pilot they were trained on. This is especially true for the intermediate proficiency level, while the expert network sometimes overshoots, which causes it to lose speed and therefore to not match the speed pattern as well as the intermediate one. We also note that the performance gap between network and human increases as the expertise of the pilot increases. Note that the flight path of the expert network is less smooth and centered than its human counterpart and the intermediate network, respectively. This is partly due to the fact that the networks were only trained on two laps of flying across seven training tracks. An expert pilot has far more training than that and is therefore able to generalize much better to unseen environments. However, the experience advantage of the intermediate pilot over the network is much smaller, and therefore the performance gap is smaller. We also show the performance of our novice pilot on these tracks. While the intermediate pilots accelerate on straights, the novice is clearly not able to control speed that well, creating a very narrow velocity range. Despite flying quite slowly, he also goes off track several times. This underlines how challenging UAV racing is, especially for inexperienced pilots.
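The heatmaps of Figure 11 can be reproduced from the flight logs with a short script such as the sketch below, where `log` is assumed to hold one row per frame with the 2D position and velocity of the UAV.

```python
# Sketch: overhead trajectory colored by instantaneous speed
# (blue = slow, red = fast). `log` is an (N, 4) array of x, y, vx, vy.
import numpy as np
import matplotlib.pyplot as plt

def plot_speed_heatmap(log):
    speed = np.linalg.norm(log[:, 2:4], axis=1) * 3.6  # m/s -> km/h
    plt.scatter(log[:, 0], log[:, 1], c=speed, cmap="jet", s=2)
    plt.colorbar(label="speed (km/h)")
    plt.gca().set_aspect("equal")
    plt.show()
```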
Conclusions and Future Work
In this paper, we proposed a robust imitation learning based framework to teach an unmanned aerial vehicle (UAV) to fly through challenging racing tracks at very high speeds, an unprecedented and difficult task. To do this, we trained a deep neural network (DNN) to predict the necessary UAV controls from raw image data, grounded in a photo-realistic simulator that also allows for realistic UAV physics. Training is made possible by logging data (rendered images from the UAV and stick controls) from human pilot flights, while they maneuver the UAV through racing tracks. This data is augmented with sufficient offsets so as to teach the network to recover from flight mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.
In the future, we aim to transfer the network we trained in our simulator to the real world, to compete against human pilots in real-world racing scenarios. Although we accurately modeled the simulated racing environment, the differences in appearance between the simulated and real worlds will need to be reconciled. Therefore, we will investigate deep transfer learning techniques to enable a smooth transition between the simulator and the real world. Since our developed simulator and its seamless interface to deep learning platforms are generic in nature, we expect that this combination will open up unique opportunities for the community to develop better automated UAV flying methods, to expand its reach to other fields of autonomous navigation such as self-driving cars, and to benefit other interesting AI tasks (e.g. obstacle avoidance).

Figure 11: Visualization of human and automated UAV flights superimposed onto a 2D overhead view of different tracks. The color coding illustrates the instantaneous speed of the UAV. Notice how the UAV learns to speed up on straights and slow down at turns, and how the flying style corresponds, especially with the intermediate pilots. | 5,330
1708.05884 | 2949490711 | Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups. | Similar to our work but for cars, @cite_2 use TORCS (The Open Racing Car Simulator) @cite_4 to train a DNN to drive at casual speeds through a course and properly pass or follow other vehicles in its lane. This work builds on earlier work using TORCS, which focused on keeping the car on a track @cite_29 . In contrast to our work, the vehicle controls to be predicted in the work of @cite_2 are limited, since only a small discrete set of expected control outputs are available: turn-left, turn-right, throttle, and brake. Recently, TORCS has also been successfully used in several RL approaches for autonomous car driving @cite_30 @cite_21 @cite_46 ; however, in these cases, RL was used to teach the agent to drive specific tracks or all available tracks rather than learning to drive never-before-seen tracks. | {
"abstract": [
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"",
"The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. But, scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale-up our compressed network encoding where network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very-large networks due to the high-dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus control task requiring networks with over 3 thousand weights, and (2) a version of the TORCS driving game where networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective.",
"",
"Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper we extend the approach in [16]. The Max-Pooling Convolutional Neural Network (MPCNN) compressor is evolved online, maximizing the distances between normalized feature vectors computed from the images collected by the recurrent neural network (RNN) controllers during their evaluation in the environment. These two interleaved evolutionary searches are used to find MPCNN compressors and RNN controllers that drive a race car in the TORCS racing simulator using only visual input."
],
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_29",
"@cite_21",
"@cite_2",
"@cite_46"
],
"mid": [
"2260756217",
"",
"2038794597",
"",
"2953248129",
"276602078"
]
} | Teaching UAVs to Race Using UE4Sim * (* www.airsim.org) | Unmanned aerial vehicles (UAVs) like drones and multicopters are attracting more attention in the graphics community. This development is stimulated by the merging of researchers from robotics, graphics, and computer vision into a common scientific community. Recent UAV-related contributions in computer graphics cover a wide spectrum from computational multicopter design, optimization and fabrication [6] to state-of-the-art video capturing using quadrotor-based camera systems [15] and the generation of dynamically feasible trajectories [38]. While UAV design and point-to-point stabilized flight navigation are becoming solved problems (as is evident from recent advances in UAV technology from industry leaders such as DJI, Amazon, and Intel), autonomous navigation of UAVs in more complex and real-world scenarios, such as unknown congested environments, GPS-denied areas, through narrow spaces, or around obstacles, is still far from being solved. In fact, only human pilots can reliably maneuver in these environments. This is a complex problem, since it requires both the sensing of real-world conditions and the understanding of appropriate policies of response to perceived obstacles through optimal navigation trajectory adjustment. There is perhaps no area where human pilots are more required to control UAVs than in the emerging sport of UAV racing, where all these complex sense-and-understand tasks are conducted at breakneck speeds of over 100 km/h. Learning to control racing UAVs is a challenging task even for humans. It takes hours of practice and quite often hundreds of crashes. A more affordable approach to developing professional flight skills is to first train many hours on a flight simulator before going to the field. Since most of the fine motor skills of flight control are developed in simulators, the pilot is able to quickly transition to real-world flights.
In this contribution, we capitalize on this insight from how human pilots learn to sense and react with appropriate controls to their environment, in order to train a deep network that can fly racing UAVs through challenging racing courses, many of which test the capabilities of even professional pilots. Inspired by recent work that trains artificial intelligence (AI) systems through the use of computer games [5], we create a photo-realistic UAV racing game with accurate physics using the Unreal Engine 4 (UE4) and integrate it with UE4Sim [27]. As this is the core learning environment, we develop a photo-realistic and customizable racing area in the form of a stadium, based on a three-dimensional (3D) scanned real-world location, to minimize the discrepancy incurred when transitioning from the simulated to a real-world scenario. Inspired by recent work on self-driving cars [3], our automated racing UAV approach goes beyond simple pattern detection by learning the full control system required to fly a UAV through a racing course (arguably much more complicated than driving a car). As such, the proposed network extends the complexity of previous work to the control of a six-degrees-of-freedom (6-DoF) UAV flying system, enabling the UAV to traverse tight spaces and make sharp turns at very high speeds (a task that cannot be performed by a ground vehicle). Our imitation learning based approach simultaneously addresses both problems of perception and policy selection as the UAV navigates through the course, after it is trained from human pilot input on how to control itself (exploitation) and how to correct itself in case of drift (exploration). Our developed simulator is multi-purpose, as it enables the evaluation of a trained network in real time on racing courses it has not encountered before.
Contributions. Our specific contributions are as follows.
(1) We are the first to introduce a photo-realistic simulator that is based on a real-world 3D environment, can be easily customized to build increasingly challenging racing courses, enables realistic UAV physical behavior, and is integrated with a real-world UAV controller (powered by a human pilot or a synthetic one). Logging video data from the UAV point-of-view and pilot controls is seamless and can be used to effortlessly generate large-scale training data for AI systems targeting UAV flying in particular and self-driving vehicles in general (e.g. self-driving cars).
(2) To facilitate the training, parameter tuning, and evaluation of deep networks on this type of simulated data, we provide a full integration between the simulator and an end-to-end deep learning pipeline (based on TensorFlow) to be made publicly available to the community. Similar to other deep networks trained for game play, our integration will allow the community to fully explore many scenarios and tasks that go far beyond UAV racing in a rich and diverse photo-realistic gaming environment (e.g. obstacle avoidance and path planning).
(3) To the best of our knowledge, this paper is the first to fully demonstrate the capability of deep networks in learning how to master the complex control of UAVs at racing speeds through difficult flight scenarios. Experiments show that our trained network can reach near-expert performance, while outperforming inexperienced pilots, who can use our system in a learning game-play mode to become better pilots.
Overview
The fundamental modules of our proposed system are summarized in Figure 2, which represents the end-to-end dataset generation, learning, and evaluation process. In what follows, we provide details for each of these modules, namely how datasets are automatically generated within the simulator, how our proposed DNN is designed and trained, and how the learned DNN is evaluated. Note that this generic architecture can also be applied to any other type of vision-based navigation task made possible using our simulator.
Simulation Centric Dataset Generation
Our simulation environment allows for the automatic generation of customizable datasets, which comprise rich expert data to robustly train a DNN through imitation learning.
UAV Flight Simulation and Real-World Creation. The core of the system is the application of our UE4-based simulator. It is built on top of the open-source UE4 project for computer vision called UAVSim [28]. Several changes were made to adapt the simulator for training our proposed racing DNN. First, we replaced the UAV with the 3D model and specifications of a racing quadcopter (see Figure 3). We retuned the PID controller of the UAV to be more responsive and to function in a racing mode, where altitude control and stabilization are still enabled but with much higher rates and steeper pitch and roll angles. In fact, this is now a popular racing mode available on consumer UAVs, such as the DJI Mavic. The simulator frame rate is locked at 60 fps, and at every frame a log is recorded with the UAV position, orientation, velocity, and stick inputs from the pilot. To accommodate realistic input, we integrated the same UAV transmitter that would be used in real-world racing scenarios. We refer to the supplementary material for an example pilot recording. Following paradigms set by UAV racing norms, each racing course/track in our simulator comprises a sequence of gates connected by uniformly spaced cones. The track has a timing system that records the time between gates, per lap, and for completion of the race. The gates have their own logic to detect whether the UAV has passed through in the correct direction. This allows us to trigger both the start and end of the race, as well as determine the number of gates traversed by the UAV. These metrics (time and percentage of gates passed) constitute the overall per-track performance of a pilot, whether human or DNN.
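A per-frame log entry of the kind described above might look like the following sketch; the field names and CSV format are assumptions, as the log schema is not published.

```python
# Hypothetical 60 fps flight-log writer: pose, velocity, and the four
# transmitter stick inputs per frame. Field layout is an assumption.
import csv

def log_flight(path, frames):
    """frames: iterable of (t, position, orientation, velocity, sticks)."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["t", "x", "y", "z", "roll", "pitch", "yaw",
                    "vx", "vy", "vz", "s_thr", "s_roll", "s_pitch", "s_yaw"])
        for t, pos, rot, vel, sticks in frames:
            w.writerow([t, *pos, *rot, *vel, *sticks])
```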
Many professional pilots compete in time trials of well-known tracks such as those posted by the MultiGP Drone Racing League. Following this paradigm, our simulator race course is modeled after a football stadium, where local professional pilots regularly set up MultiGP tracks. Using a combination of LiDAR scanning and aerial photogrammetry, we captured the stadium with an accuracy of 0.5 cm; see Figure 5. A team of architects used the dense point cloud and textured mesh to create an accurate solid model with physics-based rendering (PBR) textures in 3DSmax for export to UE4. This resulted in a geometrically accurate and photo-realistic race course that remains low in poly count, so as to run in UE4 at 60 fps, in which all training and evaluation experiments are conducted. We refer to Figure 4 for a side-by-side comparison of the real and virtual stadiums. Moreover, we want the simulated race course to be as dynamic and photo-realistic as possible, since the eventual goal is to transition the trained DNN from the simulated environment to the real world, particularly starting with a venue similar to the one learned within the simulator. The concept of generating synthetic clones of real-world data for deep learning purposes has been adopted in previous work [8].
A key requirement for a relatively straightforward simulated-to-real-world transition is the DNN's ability to learn to automatically detect the gates and cones of the track within a complexly textured and dynamic environment. To this end, we enrich the simulated environment and race track with customizable textures (e.g. grass, snow, and dirt), gates (different shapes and appearance), and lighting.
Automatic Track Generation. We developed a track editor, where a user draws a 2D sketch of the overhead view of the track, the 3D track is automatically generated accordingly, and it is integrated into the timing system. With this editor, we created eleven tracks: seven for training, and four for testing and evaluation. Each track is defined by gate positions and track lanes delineated by uniformly spaced racing cones distributed along the splines connecting adjacent gates; a sketch of this cone-placement step is shown below. To avoid user bias in designing the race tracks, we use images collected from the internet and trace their contours in the editor to create uniquely stylized tracks. Following trends in track design popular in the UAV racing community, both training and testing tracks have a large variety of turn types and straight lengths. From a learning point of view, this track diversity exposes the DNN to a large number of track variations, as well as their corresponding navigation controls. Obviously, the testing/evaluation tracks are never seen in training, neither by the human pilot nor by the DNN.

Figure 4: left: Aerial image captured from a UAV hovering above the stadium racing track. right: Rendering of the reconstructed stadium generated at a similar altitude and viewing angle within the simulator.
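The cone-placement step can be sketched as follows: fit a closed spline through the gate positions and sample it at uniform arc length. The use of scipy and the 5 m spacing are assumptions for illustration.

```python
# Sketch: place cones at uniform arc-length spacing along a closed
# spline through the 2D gate positions (spacing value is an assumption).
import numpy as np
from scipy.interpolate import splprep, splev

def cones_from_gates(gates, spacing=5.0):
    """gates: (N, 2) array of 2D gate positions from the track editor."""
    tck, _ = splprep(gates.T, s=0, per=True)        # closed spline fit
    u = np.linspace(0.0, 1.0, 2000)
    pts = np.stack(splev(u, tck), axis=1)           # dense samples
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative length
    targets = np.arange(0.0, arc[-1], spacing)
    return pts[np.searchsorted(arc, targets)]       # cone positions
```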
Acquiring Large-Scale Ground-Truth Pilot Data. We record human pilot input from a Taranis flight transmitter integrated into the simulator through a joystick. This input is solicited from three pilots with different levels of skill: novice (has never flown before), intermediate (a moderately experienced pilot), and expert (a professional racing pilot). The pilots are given the opportunity to fly the seven training tracks as many times as needed until they successfully complete the tracks at their best time while passing through all gates. For the evaluation tracks, the pilots are allowed to fly the course only as many times needed to complete the entire course without crashing. We automatically score pi-lot performance based on lap time and percentage of gates traversed.
The simulation environment allows us to log the images rendered from the UAV camera point-of-view and the UAV flight controls from the transmitter. As mentioned earlier and to enable exploration, robust imitation learning requires the augmentation of these ground-truth logs with synthetic ones generated at a user-defined set of UAV offset positions and orientations accompanied by the corresponding controls needed to correct for these offsets. Also, since the logs can be replayed at a later time in the simulator, we can augment the dataset further by changing environmental conditions, including lighting, cone spacing or appearance, and other environmental dynamics (e.g. clouds). Therefore, each pilot flight leads to a large number of image-control pairs (both original and augmented) that will be used to train the UAV to robustly recover from possible drift along each training track, as well as, unseen evaluation tracks. Details of how our proposed DNN architecture is designed and trained are provided in Section 5. In general, more augmented data should improve UAV flight performance assuming that the control mapping and original flight data are noise-free. However, in many scenarios, this is not the case, so we find that there is a limit after which augmentation does not help (or even degrades) explorative learning. Empirical results validating this observation are detailed in Section 6.
Note that assigning corrective controls to the augmented data is quite complex in general, since they depend on many factors, including current UAV velocity, relative position on the track, its weight and current attitude. While it is possible to get this data in the simulation, it is very difficult to obtain it in the real-world in real-time. Therefore, we employ a fairly simple but effective model to determine these augmented controls that also scales to real-world settings. We add or subtract a corrective value to the pilot roll and yaw stick inputs for each position or orientation offset that is applied. For rotational offsets, we do not only apply a yaw correction but also couple it to roll because the UAV is in motion while rotating causing it to wash out due to its inertia.
DNN Interface for Real-Time Evaluation. To evaluate the performance of a trained DNN in real-time at 60 fps, we establish a TCP socket connection between the UE4 simulator and the Python wrapper (TensorFlow) executing the DNN. In doing so, the simulator continuously sends rendered UAV camera images across TCP to the DNN, which in turn processes each image individually to predict the next UAV stick inputs (flight controls) that are fed back to the UAV in the simulator using the same connection. Another advantage of this TCP connection is that the DNN prediction can be run on a separate system than the one running the simulator. We expect that this versatile and multi-purpose interface between the simulator and DNN framework will enable opportunities for the research community to further develop DNN solutions to not only the task of automated UAV navigation (using imitation learning) but to the more general task of vehicle maneuvering and obstacle avoidance (possibly using other forms of learning including RL).
Learning
In this section, we provide a detailed description of the learning strategy used to train our DNN, its network architecture and design. We also explore some of the inner workings of one of these trained DNNs to shed light on how this network is solving the problem of automated UAV racing.
Dataset Preparation and Augmentation
As it is the case for DNN-based solutions to other tasks, a careful construction of the training set is a key requirement to robust and effective DNN training. To this end and as mentioned earlier, we dedicate seven racing tracks (with their corresponding image-control pairs logged from human pilot runs in our simulator) for training and four tracks for testing/evaluation. We design the tracks such that they are similar to what racing professionals are accustomed to and such that they offer enough diversity and capability for exploration for proper network generalization on the unseen tracks. Figure 6 illustrates an overhead view of all these tracks.
As mentioned in Section 4, we log the pilot flight inputs and all other necessary parameters so that we can accurately replay the flights. These log files are then augmented using the specified offsets, replayed within the simulator while saving all rendered images and thus, providing exploratory insights to the racing DNN. For completeness, we summarize the details of the data generated for each training/testing track in Table 1. It is clear that the augmentation increases the size of the original dataset by approximately seven times. In Section 6, we show the effect of changing the amount of augmentation on the UAV's ability to generalize well. Moreover, for this dataset, we choose to use the intermediate pilot only so as to strike a trade-off between style of flight and overall size of the dataset. In Section 6.3, we show the effects of training with different flying styles.
Network Architecture and Implementation Details
To train a DNN to predict stick controls to the UAV from images, we choose a regression network architecture similar in spirit to the one used by Bojarski et al. [3]; however, we make changes to accommodate the complexity of the task at hand and to improve robustness in training. Our DNN architecture is shown in Figure7. The network consists of eight layers, five convolutional and three fully-connected. Since we implicitly want to localize the track and gates, we use striding in the convolutional layers instead of (max) pooling, which would add some degree of translation invariance. The DNN is given a single RGB-image with a 320×180 pixel resolution as input and is trained to regress to the four control/stick inputs to the UAV using a standard L 2 -loss and dropout ratio of 0.5. We find that the relatively high input resolution (i.e. higher network capacity), as compared to related methods [3,41], is useful to learn this more complicated maneuvering task and to enhance the network's ability to look further ahead. This affords the network with more robustness needed for long-term trajectory stability. We arrived to this compact network architecture by running extensive validation experiments that strikes a reasonable tradeoff between computational complexity and predictive performance. This careful design makes the proposed DNN architecture feasible for real-time applications on embedded hardware (e.g. Nvidia TX1) unlike previous architectures [3], if they use the same input size. In Table 2, we show both evaluation time on and technical details of the NVIDIA Titan X, and how it compares to a NVIDIA TX-1. Based on [30], we expect our network to still run at real-time speed with over 60 frames per second on this embedded hardware.
For training, we exploit a standard stochastic gradient descent (SGD) optimization strategy (namely Adam) using the TensorFlow platform. As such, one instance of our DNN can be trained to convergence on our dataset in less than two hours on a single GPU. This relatively fast training time enables finer hyper-parameter tuning. Table 2: Comparison of the NVIDIA Titan X and the NVIDIA TX-1. The performance of the TX-1 is approximated according to [30].
In contrast to other work where the frame rate is sampled down to 10 fps or lower [3,4,41], our racing environment is highly dynamic (with tight turns, high speed, and low inertia of the UAV), so we use a frame rate of 60 fps. This allows the UAV to be very responsive and move at high speeds, while maintaining a level of smoothness in controls. An al-ternative approach for temporally smooth controls is to include historic data in the training process (e.g. add the previous controls as input to the DNN). This can make the network more complex, harder to train, and less responsive in the highly dynamic racing environment, where many time critical decisions have to be made within a couple of frames (about 30 ms). Therefore, we find the high learning frame rate of 60 fps a good tradeoff between smooth controls and responsiveness.
Reinforcement vs. Imitation Learning. Of course, our simulator can lend itself useful in training networks using reinforcement learning. This type of learning does not specifically require supervised pilot information, as it searches for an optimal policy that leads to the highest eventual reward (e.g. highest percentage of gates traversed or lowest lap time). Recent methods have made use of reinforcement to learn simpler tasks without supervision [5]; however, they require weeks of training and a much faster simulator (1,000fps is possible in simple non photo-realistic games). For UAV racing, the required task is much more complicated and since the intent is to transfer the learned network into the real-world, a (slower) photo-realistic simulator is mandatory. Because of these two constraints, we decided to train our DNN using imitation learning instead of reinforcement learning.
Network Visualization
After training our DNN to convergence, we visualize how parts of the network behave in order to get additional insights. Figure8 shows some feature maps in different layers of the trained DNN for the same input image. Note how the filters have automatically learned to extract all necessary information in the scene (i.e. gates and cones), while in higher-level layers they are not responding to other parts of the environment. Although the feature map resolution becomes very low in the higher DNN layers, the feature map in the fifth convolutional layer is interesting as it marks the top, left, and right of parts of a gate with just a single activation each. This clearly demonstrates that our DNN is learning semantically intuitive features for the task of UAV racing. Figure 8: Visualization of feature maps at different convolutional layers in our trained network. Notice how the network activates in locations of semantic meaning for the task of UAV racing, namely the gates and cones.
Evaluation
In order to evaluate the performance of our DNN, we create four testing tracks based on well-known race tracks found in TORCS and Gran Turismo. We refer to Figure 6 for an overhead view of these tracks). Since the tracks must fit within the football stadium environment, they are scaled down leading to much sharper turns and shorter straightaways with the UAV reaching top speeds of over 100 km/h. Therefore, the evaluation tracks are significantly more difficult than they may have been originally intended in their original racing environments. We rank the four tracks in terms of difficulty ranging from easy (track 1), medium (track 2), hard (track 3), to very hard (track 4). For all the following evaluations, both the trained networks and human pilots are tasked to fly two laps in the testing tracks and are scored based on the total gates they fly through and overall lap time.
Effects of Exploration
We find exploration to be the predominant factor influencing network performance. As mentioned earlier, we augment the pilot flight data with offsets and corresponding corrective controls. We conduct grid search to find a suitable degree of augmentation and to analyze the effect it has on overall UAV racing performance. To do this, we define two sets of offset parameters: one that acts as a horizontal offset (roll-offset) and one that acts as a rotational offset (yaw-offset). Figure9 shows how the racing accuracy (percentage of gates traversed) varies with different sets of these augmentation offsets across the four testing tracks. It is clear that increasing the number of rendered images with yaw-offset has the greatest impact on performance. While it is possible for the DNN to complete tracks without being trained on roll-offsets, this is not the case for yaw-offsets. However, the huge gain in adding rotated camera views saturates quickly, and at a certain point the network does not benefit from more extensive augmentation. Therefore, we found four yaw-offsets to be sufficient. Including camera views with horizontal shifts is also beneficial, since the network is better equipped to recover once it is about to leave the track on straights. We found two roll-offsets to be sufficient to ensure this. Therefore, in the rest of our experiments, we use the following augmentation setup in training: horizontal roll-offset set {−50 • , 50 • } and rotational yaw- Figure 9: Effect of data augmentation in training to overall UAV racing performance. By augmenting the original flight logs with data captured at more offsets (roll and yaw) from the original trajectory along with their corresponding corrective controls, our UAV DNN can learn to traverse almost all the gates of the testing tracks, since it has learned to correct for exploratory maneuvers. After a sufficient amount of augmentation, no additional benefit is realized in improved racing performance.
offset set {−30 • , −15 • , 15 • , 30 • }.
Comparison to State-of-the-Art
We compare our racing DNN to the two most related and recent network architectures, the first denoted as Nvidia (for self-driving cars [3]) and the second as MAV (for forest path navigating UAVs [41]). While the domains of these works are similar, it should be noted that flying a high-speed racing UAV is a particularly challenging task, especially since the effect of inertia is much more significant and there are more degrees of freedom. For fair comparison, we scale our dataset to the same input dimensionality and re-train each of the three networks. We then evaluate each of the trained models on the task of UAV racing in the testing tracks. It is noteworthy to point out that both the Nvidia and MAV networks (in their original implementation) use data augmentation as well, so when training, we maintain the same strategy. For the Nvidia network, the exact offset choices for training are not publicly known, so we use a rotational offset set of {−30 • , 30 • } to augment its data. As for the MAV network, we use the same augmentation parameters proposed in the paper, i.e. a rotational offset of {−30 • , 30 • }. We needed to modify the MAV network to allow for a regression output instead of its original classification (left, center and right controls). This is necessary, since our task is much more complex and discrete controls would lead to inadequate UAV racing performance.
It should be noted that in the original implementation of the Nvidia network [3] (based on real-world driving data), it was realized that additional augmentation was needed for reasonable automatic driving performance after the realworld data was acquired. To avoid recapturing the data again, synthetic viewpoints (generated by interpolation) were used to augment the training dataset, which introduced undesirable distortions. By using our simulator, we are able to extract any number of camera views without distortions. Therefore, we wanted to also gauge the effect of additional augmentation to both the Nvidia and MAV networks, when they are trained using our default augmentation setting: horizontal roll-offset of {−50 • , 50 • } and rotational yaw-offset of {−30 • , −15 • , 15 • , 30 • }. We denote these trained networks as Nvidia++ and MAV++. Table 3 summarizes the results of these different network variants on the testing tracks. Results indicate that the performance of the original Nvidia and MAV networks suffer from insufficient data augmentation. They clearly do not make use of enough exploration. These networks improve in performance when our proposed data augmentation scheme (enabled by our simulator) is used. Regardless, our proposed DNN outperforms the Nvidia/Nvidia++ and MAV/MAV++ networks, where this improvement is less significant when more data augmentation or more exploratory behavior is learned. Unlike the other networks, our DNN performs consistently well on all the unseen tracks, owing to its sufficient network capacity needed to learn this complex task.
Pilot Diversity & Human vs. DNN
In this section, we investigate how the flying style of a pilot affects the network that is being learned. To do this, Table 3: Accuracy score of different pilots and networks on the four test tracks. The accuracy score represents the percentage of completed racing gates. The networks ending with ++ are variants of the original network with our augmentation strategy.
we compare the performance of the different networks on the testing set, when each of them is trained with flight data captured from pilots of varying flight expertise (intermediate, and expert). We also trained models using the Nvidia [3] and MAV [41] architectures, with and without our default data augmentation settings. Table 3 summarizes the lap time and accuracy of these networks. Clearly, the pilot flight style can significantly affect the performance of the learned network. Figure 10 shows that there is a high correlation regarding both performance and flying style of the pilot used in training and the corresponding learned network. The trained networks clearly resemble the flying style and also the proficiency of their human trainers. Thus, our network that was trained on flights of the intermediate pilot achieves high accuracies but is quite slow, just as the expert network sometimes misses gates but achieves very good lap and overall times. Interestingly, although the networks perform similar to their pilot, they fly more consistently, and therefore tend to outperform the human pilot with regards to overall time on multiple laps. This is especially true for our intermediate network. Both the intermediate and the expert network clearly outperforms the novice human pilot, who takes several hours of practice and several attempts to reach similar performance to the network. Even our expert pilots were not always able to complete the test tracks on the first attempt.
While the percentage of passed gates and best lap time give a good indication about the network performance, they do not convey any information about the style of the pilot. To this end, we visualize the performance of human pilots and the trained networks by plotting their trajectories onto the track (from a 2D overhead viewpoint). Moreover, we encode their speeds as a heatmap, where blue corresponds to the minimum speed and red to the maximum speed. Figure 11 shows a collection of heatmaps revealing several interesting insights. Firstly, the networks clearly imitate the style of the pilot they were trained on. This is especially true for the intermediate proficiency level, while the expert network sometimes overshoots, which causes it to loose speed and therefore to not match the speed pattern as well as the intermediate one. We also note that the performance gap between network and human increases as the expertise of the pilot increases. Note that the flight path of the expert network is less smooth and centered than its human correspondent and the intermediate network, respectively. This is partly due to the fact that the networks were only trained on two laps of flying across seven training tracks. An expert pilot has a lot more training than that and is therefore able to generalize much better to unseen environments. However, the experience advantage of the intermediate pilot over the network is much less and therefore the performance gap is smaller. We also show the performance of our novice pilot on these tracks. While the intermediate pilots accelerate on straights, the novice clearly is not able to control speed that well, creating a very narrow velocity range. Albeit flying quite slow, he also gets of track several times. This underlines how challenging UAV racing is, especially for unexperienced pilots.
Conclusions and Future Work
In this paper, we proposed a robust imitation learning based framework to teach an unmanned aerial vehicle (UAV) to fly through challenging racing tracks at very high speeds, an unprecedented and difficult task. To do this, we trained a deep neural network (DNN) to predict the necessary UAV controls from raw image data, grounded in a photo-realistic simulator that also allows for realistic UAV physics. Training is made possible by logging data (rendered images from the UAV and stick controls) from human pilot flights, while they maneuver the UAV through racing tracks. This data is augmented with sufficient offsets so as to teach the network to recover from flight mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.
In the future, we aim to transfer the network we trained in our simulator to the real world to compete against human pilots in real-world racing scenarios. Although we accurately modeled the simulated racing environment, the differences in appearance between the simulated and the real world will need to be reconciled. Therefore, we will investigate deep transfer learning techniques to enable a smooth transition between the simulator and the real world. Since our developed simulator and its seamless interface to deep learning platforms is generic in nature, we expect that this combination will open up unique opportunities for the community to develop better automated UAV flying methods, to expand its reach to other fields of autonomous navigation such as self-driving cars, and to benefit other interesting AI tasks (e.g. obstacle avoidance). Figure 11: Visualization of human and automated UAV flights super-imposed onto a 2D overhead view of different tracks. The color coding illustrates the instantaneous speed of the UAV. Notice how the UAV learns to speed up on straights and to slow down at turns, and how the flying style corresponds, especially with the intermediate pilots. | 5,330 |
1708.05884 | 2949490711 | Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups. | @cite_0 trained a network on autonomous car datasets and then deployed it to control a drone. For this, they used full supervision by providing image and measured steering angle pairs from pre-collected datasets, and collecting their own dataset containing image and binary obstacle indication pairs. While they demonstrate an ability to transfer successfully to other environments, their approach does not model and exploit the full six degrees of freedom available. It also focuses on slow and safe navigation, rather than optimizing for speed as is the case for racing. Finally, with their network being fairly complex, they report an inference speed of 20 fps (CPU) for remote processing, which is more than three times lower than the estimated frame rate for our proposed method when running on-board processing, and more than 27 times lower than that of our method running remotely on a GPU. | {
"abstract": [
"Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world. In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics. Using a 3D game engine, we create small towers of wooden blocks whose stability is randomized and render them collapsing (or remaining upright). This data allows us to train large convolutional network models which can accurately predict the outcome, as well as estimating the block trajectories. The models are also able to generalize in two important ways: (i) to new physical scenarios, e.g. towers with an additional block and (ii) to images of real wooden blocks, where it obtains a performance comparable to human subjects."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2293598046"
]
} | Teaching UAVs to Race Using UE4Sim * (www.airsim.org) | Unmanned aerial vehicles (UAVs) like drones and multicopters are attracting more attention in the graphics community. This development is stimulated by the merging of researchers from robotics, graphics, and computer vision into a common scientific community. Recent UAV-related contributions in computer graphics cover a wide spectrum from computational multicopter design, optimization, and fabrication [6] to state-of-the-art video capturing using quadrotor-based camera systems [15] and the generation of dynamically feasible trajectories [38]. While UAV design and point-to-point stabilized flight navigation are becoming solved problems (as is evident from recent advances in UAV technology from industry leaders such as DJI, Amazon, and Intel), autonomous navigation of UAVs in more complex and real-world scenarios, such as unknown congested environments, GPS-denied areas, through narrow spaces, or around obstacles, is still far from being solved. In fact, only human pilots can reliably maneuver in these environments. This is a complex problem, since it requires both sensing real-world conditions and understanding appropriate response policies, i.e. perceiving obstacles and adjusting the navigation trajectory accordingly. There is perhaps no area where human pilots are more required to control UAVs than in the emerging sport of UAV racing, where all these complex sense-and-understand tasks are conducted at breakneck speeds of over 100 km/h. Learning to control racing UAVs is a challenging task even for humans. It takes hours of practice and quite often hundreds of crashes. A more affordable approach to developing professional flight skills is to first train many hours on a flight simulator before going to the field. Since most of the fine motor skills of flight control are developed in simulators, the pilot is able to quickly transition to real-world flights.
In this contribution, we capitalize on this insight into how human pilots learn to sense and react with appropriate controls to their environment to train a deep network that can fly racing UAVs through challenging racing courses, many of which test the capabilities of even professional pilots. Inspired by recent work that trains artificial intelligence (AI) systems through the use of computer games [5], we create a photo-realistic UAV racing game with accurate physics using the Unreal Engine 4 (UE4) and integrate it with UE4Sim [27]. As this is the core learning environment, we develop a photo-realistic and customizable racing area in the form of a stadium based on a three-dimensional (3D) scanned real-world location to minimize the discrepancy incurred when transitioning from the simulated to a real-world scenario. Inspired by recent work on self-driving cars [3], our automated racing UAV approach goes beyond simple pattern detection by learning the full control system required to fly a UAV through a racing course (arguably much more complicated than driving a car). As such, the proposed network extends the complexity of previous work to the control of a six degrees of freedom (6-DoF) UAV flying system, enabling the UAV to traverse tight spaces and make sharp turns at very high speeds (a task that cannot be performed by a ground vehicle). Our imitation learning based approach simultaneously addresses both problems of perception and policy selection as the UAV navigates through the course, after it is trained from human pilot input on how to control itself (exploitation) and how to correct itself in case of drift (exploration). Our developed simulator is multi-purpose, as it enables the evaluation of a trained network in real-time on racing courses it has not encountered before.
Contributions. Our specific contributions are as follows.
(1) We are the first to introduce a photo-realistic simulator that is based on a real-world 3D environment, can be easily customized to build increasingly challenging racing courses, enables realistic UAV physical behavior, and is integrated with a real-world UAV controller (powered by a human pilot or a synthetic one). Logging video data from the UAV point-of-view and pilot controls is seamless and can be used to effortlessly generate large-scale training data for AI systems targeting UAV flying in particular and autonomous vehicles in general (e.g. self-driving cars).
(2) To facilitate the training, parameter tuning, and evaluation of deep networks on this type of simulated data, we provide a full integration between the simulator and an end-to-end deep learning pipeline (based on TensorFlow) to be made publicly available to the community. Similar to other deep networks trained for game play, our integration will allow the community to fully explore many scenarios and tasks that go far beyond UAV racing in a rich and diverse photo-realistic gaming environment (e.g. obstacle avoidance and path planning).
(3) To the best of our knowledge, this paper is the first to fully demonstrate the capability of deep networks in learning how to master the complex control of UAVs at racing speeds through difficult flight scenarios. Experiments show that our trained network can reach near-expert performance, while outperforming inexperienced pilots, who can use our system in a learning game-play mode to become better pilots.
Overview
The fundamental modules of our proposed system are summarized in Figure 2, which represents the end-to-end dataset generation, learning, and evaluation process. In what follows, we provide details for each of these modules, namely how datasets are automatically generated within the simulator, how our proposed DNN is designed and trained, and how the learned DNN is evaluated. Note that this generic architecture can also be applied to any other type of vision-based navigation task made possible using our simulator.
Simulation Centric Dataset Generation
Our simulation environment allows for the automatic generation of customizable datasets, which comprise rich expert data to robustly train a DNN through imitation learning.
UAV Flight Simulation and Real-World Creation. The core of the system is the application of our UE4-based simulator. It is built on top of the open-source UE4 project for computer vision called UAVSim [28]. Several changes were made to adapt the simulator for training our proposed racing DNN. First, we replaced the UAV with the 3D model and specifications of a racing quadcopter (see Figure 3). We retuned the PID controller of the UAV to be more responsive and to function in a racing mode, where altitude control and stabilization are still enabled but with much higher rates and steeper pitch and roll angles. In fact, this is now a popular racing mode available on consumer UAVs, such as the DJI Mavic. The simulator frame rate is locked at 60 fps, and at every frame a log is recorded with UAV position, orientation, velocity, and stick inputs from the pilot. To accommodate realistic input, we integrated the same UAV transmitter that would be used in real-world racing scenarios. We refer to the supplementary material for an example pilot recording. Following paradigms set by UAV racing norms, each racing course/track in our simulator comprises a sequence of gates connected by uniformly spaced cones. The track has a timing system that records the time between each gate, lap, and completion time of the race. The gates have their own logic to detect whether the UAV has passed through the gate in the correct direction. This allows us to trigger both the start and end of the race, as well as determine the number of gates traversed by the UAV. These metrics (time and percentage of gates passed) constitute the overall per-track performance of a pilot, whether human or DNN.
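As a concrete illustration of the per-frame log, consider the following sketch; the field names and CSV layout are our own assumptions (the text only states which quantities are recorded at 60 fps):

```python
import csv
import dataclasses

@dataclasses.dataclass
class FrameLog:
    """One record per rendered frame at the locked 60 fps rate."""
    t: float                                # simulation time in seconds
    x: float; y: float; z: float            # UAV position
    roll: float; pitch: float; yaw: float   # UAV orientation
    vx: float; vy: float; vz: float         # UAV velocity
    s_throttle: float; s_yaw: float; s_roll: float; s_pitch: float  # stick inputs

def write_log(path, records):
    # Persist the flight log so it can be replayed and augmented later.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([fld.name for fld in dataclasses.fields(FrameLog)])
        for r in records:
            writer.writerow(dataclasses.astuple(r))
```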
Many professional pilots compete in time trials of well-known tracks such as those posted by the MultiGP Drone Racing League. Following this paradigm, our simulator race course is modeled after a football stadium, where local professional pilots regularly set up MultiGP tracks. Using a combination of LiDAR scanning and aerial photogrammetry, we captured the stadium with an accuracy of 0.5 cm; see Figure 5. A team of architects used the dense point cloud and textured mesh to create an accurate solid model with physics-based rendering (PBR) textures in 3DSmax for export to UE4. This resulted in a geometrically accurate and photo-realistic race course that remains low in poly count, so as to run in UE4 at 60 fps, in which all training and evaluation experiments are conducted. We refer to Figure 4 for a side-by-side comparison of the real and virtual stadiums. Moreover, we want the simulated race course to be as dynamic and photo-realistic as possible, since the eventual goal is to transition the trained DNN from the simulated environment to the real world, particularly starting with a venue similar to the one learned within the simulator. The concept of generating synthetic clones of real-world data for deep learning purposes has been adopted in previous work [8].
A key requirement for a relatively straightforward simulated-to-real-world transition is the DNN's ability to learn to automatically detect the gates and cones in the track within a complexly textured and dynamic environment. To this end, we enrich the simulated environment and race track with customizable textures (e.g. grass, snow, and dirt), gates (different shapes and appearance), and lighting.
Automatic Track Generation. We developed a track editor, where a user draws a 2D sketch of the overhead view of the track, the 3D track is automatically generated accordingly, and it is integrated into the timing system. With this editor, we created eleven tracks: seven for training and four for testing and evaluation. Figure 4: left: Aerial image captured from a UAV hovering above the stadium racing track. right: Rendering of the reconstructed stadium generated at a similar altitude and viewing angle within the simulator. Each track is defined by gate positions and track lanes delineated by uniformly spaced racing cones distributed along the splines connecting adjacent gates (see the sketch below). To avoid user bias in designing the race tracks, we use images collected from the internet and trace their contours in the editor to create uniquely stylized tracks. Following design trends for tracks popular in the UAV racing community, both training and testing tracks have a large variety of turn types and straight lengths. From a learning point of view, this track diversity exposes the DNN to a large number of track variations, as well as their corresponding navigation controls. Obviously, the testing/evaluation tracks are never seen in training, neither by the human pilot nor by the DNN.
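A possible sketch of the uniform cone placement, assuming SciPy is used to fit a closed spline through the 2D gate positions (the function name and the spacing value are hypothetical):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def place_cones(gate_xy, spacing=5.0):
    """Distribute cones at (approximately) uniform spacing along the closed
    spline through the 2D gate positions; spacing uses the same units as gate_xy."""
    pts = np.asarray(gate_xy, dtype=float).T          # shape (2, n_gates)
    tck, _ = splprep(pts, s=0.0, per=True)            # closed spline through gates
    # Densely sample the spline and compute cumulative arc length.
    u = np.linspace(0.0, 1.0, 2000)
    x, y = splev(u, tck)
    seg = np.hypot(np.diff(x), np.diff(y))
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    # Pick parameter values at uniform arc-length intervals.
    targets = np.arange(0.0, arc[-1], spacing)
    u_cones = np.interp(targets, arc, u)
    cx, cy = splev(u_cones, tck)
    return np.stack([cx, cy], axis=1)                 # (n_cones, 2) positions
```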
Acquiring Large-Scale Ground-Truth Pilot Data. We record human pilot input from a Taranis flight transmitter integrated into the simulator through a joystick. This input is solicited from three pilots with different levels of skill: novice (has never flown before), intermediate (a moderately experienced pilot), and expert (a professional racing pilot). The pilots are given the opportunity to fly the seven training tracks as many times as needed until they successfully complete the tracks at their best time while passing through all gates. For the evaluation tracks, the pilots are allowed to fly the course only as many times as needed to complete the entire course without crashing. We automatically score pilot performance based on lap time and percentage of gates traversed.
The simulation environment allows us to log the images rendered from the UAV camera point-of-view and the UAV flight controls from the transmitter. As mentioned earlier and to enable exploration, robust imitation learning requires the augmentation of these ground-truth logs with synthetic ones generated at a user-defined set of UAV offset positions and orientations, accompanied by the corresponding controls needed to correct for these offsets. Also, since the logs can be replayed at a later time in the simulator, we can augment the dataset further by changing environmental conditions, including lighting, cone spacing or appearance, and other environmental dynamics (e.g. clouds). Therefore, each pilot flight leads to a large number of image-control pairs (both original and augmented) that will be used to train the UAV to robustly recover from possible drift along each training track, as well as unseen evaluation tracks. Details of how our proposed DNN architecture is designed and trained are provided in Section 5. In general, more augmented data should improve UAV flight performance, assuming that the control mapping and original flight data are noise-free. However, in many scenarios this is not the case, so we find that there is a limit after which augmentation does not help (or even degrades) explorative learning. Empirical results validating this observation are detailed in Section 6.
Note that assigning corrective controls to the augmented data is quite complex in general, since they depend on many factors, including the current UAV velocity, its relative position on the track, its weight, and its current attitude. While it is possible to get this data in the simulation, it is very difficult to obtain it in the real world in real time. Therefore, we employ a fairly simple but effective model to determine these augmented controls that also scales to real-world settings. We add or subtract a corrective value to the pilot roll and yaw stick inputs for each position or orientation offset that is applied. For rotational offsets, we not only apply a yaw correction but also couple it to roll, because the UAV is in motion while rotating, which causes it to wash out due to its inertia.
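The heuristic can be sketched as follows; the gain values and sign conventions are assumptions, since the text only states that a corrective value is added to or subtracted from the roll and yaw sticks and that rotational offsets are coupled to roll:

```python
def corrective_controls(pilot_sticks, roll_offset_deg=0.0, yaw_offset_deg=0.0,
                        k_roll=0.01, k_yaw=0.01, k_couple=0.005):
    # pilot_sticks: (throttle, yaw, roll, pitch) stick values in [-1, 1].
    throttle, yaw, roll, pitch = pilot_sticks
    # Horizontal offsets are corrected by steering back with opposite roll.
    roll -= k_roll * roll_offset_deg
    # Rotational offsets are corrected with opposite yaw, coupled to roll so
    # the moving UAV does not wash out due to its inertia.
    yaw -= k_yaw * yaw_offset_deg
    roll -= k_couple * yaw_offset_deg
    clamp = lambda v: max(-1.0, min(1.0, v))
    return (clamp(throttle), clamp(yaw), clamp(roll), clamp(pitch))
```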
DNN Interface for Real-Time Evaluation. To evaluate the performance of a trained DNN in real-time at 60 fps, we establish a TCP socket connection between the UE4 simulator and the Python wrapper (TensorFlow) executing the DNN. In doing so, the simulator continuously sends rendered UAV camera images across TCP to the DNN, which in turn processes each image individually to predict the next UAV stick inputs (flight controls) that are fed back to the UAV in the simulator over the same connection. Another advantage of this TCP connection is that the DNN prediction can be run on a separate system from the one running the simulator. We expect that this versatile and multi-purpose interface between the simulator and the DNN framework will enable opportunities for the research community to further develop DNN solutions not only to the task of automated UAV navigation (using imitation learning) but also to the more general task of vehicle maneuvering and obstacle avoidance (possibly using other forms of learning, including RL).
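A minimal sketch of the Python side of this interface is shown below; the wire format (one raw 320×180 RGB frame per message, four float32 stick values in response) is an assumption, as the text does not specify UE4Sim's actual framing:

```python
import socket
import numpy as np

FRAME_BYTES = 320 * 180 * 3  # one raw RGB frame (assumed framing)

def serve(model, host="0.0.0.0", port=9999):
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        buf = b""
        while True:
            while len(buf) < FRAME_BYTES:          # accumulate one full frame
                chunk = conn.recv(65536)
                if not chunk:
                    return                          # simulator disconnected
                buf += chunk
            frame, buf = buf[:FRAME_BYTES], buf[FRAME_BYTES:]
            img = np.frombuffer(frame, np.uint8).reshape(180, 320, 3)
            sticks = model.predict(img[None], verbose=0)[0]  # 4 control values
            conn.sendall(sticks.astype(np.float32).tobytes())
```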
Learning
In this section, we provide a detailed description of the learning strategy used to train our DNN, its network architecture and design. We also explore some of the inner workings of one of these trained DNNs to shed light on how this network is solving the problem of automated UAV racing.
Dataset Preparation and Augmentation
As it is the case for DNN-based solutions to other tasks, a careful construction of the training set is a key requirement to robust and effective DNN training. To this end and as mentioned earlier, we dedicate seven racing tracks (with their corresponding image-control pairs logged from human pilot runs in our simulator) for training and four tracks for testing/evaluation. We design the tracks such that they are similar to what racing professionals are accustomed to and such that they offer enough diversity and capability for exploration for proper network generalization on the unseen tracks. Figure 6 illustrates an overhead view of all these tracks.
As mentioned in Section 4, we log the pilot flight inputs and all other necessary parameters so that we can accurately replay the flights. These log files are then augmented using the specified offsets and replayed within the simulator while saving all rendered images, thus providing exploratory insights to the racing DNN. For completeness, we summarize the details of the data generated for each training/testing track in Table 1. It is clear that the augmentation increases the size of the original dataset by approximately seven times. In Section 6, we show the effect of changing the amount of augmentation on the UAV's ability to generalize well. Moreover, for this dataset, we choose to use the intermediate pilot only, so as to strike a trade-off between style of flight and overall size of the dataset. In Section 6.3, we show the effects of training with different flying styles.
Network Architecture and Implementation Details
To train a DNN to predict stick controls to the UAV from images, we choose a regression network architecture similar in spirit to the one used by Bojarski et al. [3]; however, we make changes to accommodate the complexity of the task at hand and to improve robustness in training. Our DNN architecture is shown in Figure 7. The network consists of eight layers, five convolutional and three fully-connected. Since we implicitly want to localize the track and gates, we use striding in the convolutional layers instead of (max) pooling, which would add some degree of translation invariance. The DNN is given a single RGB image with a 320×180 pixel resolution as input and is trained to regress to the four control/stick inputs to the UAV using a standard L2 loss and a dropout ratio of 0.5. We find that the relatively high input resolution (i.e. higher network capacity), as compared to related methods [3,41], is useful for learning this more complicated maneuvering task and enhances the network's ability to look further ahead. This affords the network more robustness, which is needed for long-term trajectory stability. We arrived at this compact network architecture, which strikes a reasonable tradeoff between computational complexity and predictive performance, by running extensive validation experiments. This careful design makes the proposed DNN architecture feasible for real-time applications on embedded hardware (e.g. Nvidia TX1), unlike previous architectures [3] if they use the same input size. In Table 2, we show both the evaluation time on and technical details of the NVIDIA Titan X, and how it compares to an NVIDIA TX-1. Based on [30], we expect our network to still run at real-time speed with over 60 frames per second on this embedded hardware.
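To make the architecture concrete, a minimal TensorFlow/Keras sketch is given below. The layer counts (five strided convolutions, three fully-connected layers), the 320×180 RGB input, the four regression outputs, and the dropout ratio of 0.5 follow the text; the filter counts, kernel sizes, hidden-layer widths, and input normalization are assumptions, since they are only specified in Figure 7.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_racing_net():
    # Single RGB frame at 320x180 (width x height) as stated in the text.
    inputs = layers.Input(shape=(180, 320, 3))
    x = layers.Rescaling(1.0 / 255)(inputs)  # pixel normalization (assumption)
    # Five strided convolutions instead of pooling, to preserve localization.
    x = layers.Conv2D(24, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(36, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(48, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = layers.Flatten()(x)
    # Three fully-connected layers (the last one outputs the four stick values),
    # with the dropout ratio of 0.5 mentioned in the text.
    x = layers.Dense(100, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(50, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    return models.Model(inputs, layers.Dense(4)(x))
```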
For training, we exploit a standard stochastic gradient descent (SGD) optimization strategy (namely Adam) using the TensorFlow platform. As such, one instance of our DNN can be trained to convergence on our dataset in less than two hours on a single GPU. This relatively fast training time enables finer hyper-parameter tuning. Table 2: Comparison of the NVIDIA Titan X and the NVIDIA TX-1. The performance of the TX-1 is approximated according to [30].
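A minimal training call under these settings might look as follows; the learning rate, batch size, epoch count, and file names are assumptions (the text only specifies Adam, the L2 objective, and convergence within two hours on a single GPU):

```python
import numpy as np
import tensorflow as tf

model = build_racing_net()  # from the architecture sketch above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")  # mean squared error, i.e. the standard L2 objective

# Hypothetical file names: frames.npy holds (N, 180, 320, 3) images and
# controls.npy the matching (N, 4) stick values from the augmented logs.
images = np.load("frames.npy")
controls = np.load("controls.npy")
model.fit(images, controls, batch_size=64, epochs=10, validation_split=0.1)
```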
In contrast to other work where the frame rate is sampled down to 10 fps or lower [3,4,41], our racing environment is highly dynamic (with tight turns, high speed, and low inertia of the UAV), so we use a frame rate of 60 fps. This allows the UAV to be very responsive and move at high speeds, while maintaining a level of smoothness in controls. An alternative approach for temporally smooth controls is to include historic data in the training process (e.g. add the previous controls as input to the DNN). This can make the network more complex, harder to train, and less responsive in the highly dynamic racing environment, where many time-critical decisions have to be made within a couple of frames (about 30 ms). Therefore, we find the high learning frame rate of 60 fps a good tradeoff between smooth controls and responsiveness.
Reinforcement vs. Imitation Learning. Of course, our simulator can lend itself to training networks using reinforcement learning. This type of learning does not specifically require supervised pilot information, as it searches for an optimal policy that leads to the highest eventual reward (e.g. highest percentage of gates traversed or lowest lap time). Recent methods have made use of reinforcement learning to learn simpler tasks without supervision [5]; however, they require weeks of training and a much faster simulator (1,000 fps is possible in simple, non-photo-realistic games). For UAV racing, the required task is much more complicated, and since the intent is to transfer the learned network into the real world, a (slower) photo-realistic simulator is mandatory. Because of these two constraints, we decided to train our DNN using imitation learning instead of reinforcement learning.
Network Visualization
After training our DNN to convergence, we visualize how parts of the network behave in order to get additional insights. Figure 8 shows some feature maps in different layers of the trained DNN for the same input image. Note how the filters have automatically learned to extract all necessary information in the scene (i.e. gates and cones), while in higher-level layers they do not respond to other parts of the environment. Although the feature map resolution becomes very low in the higher DNN layers, the feature map in the fifth convolutional layer is interesting, as it marks the top, left, and right of parts of a gate with just a single activation each. This clearly demonstrates that our DNN is learning semantically intuitive features for the task of UAV racing. Figure 8: Visualization of feature maps at different convolutional layers in our trained network. Notice how the network activates in locations of semantic meaning for the task of UAV racing, namely the gates and cones.
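Such feature maps can be extracted with a small Keras probe model over the convolutional layers; the snippet below is a sketch (the zero frame is a stand-in for an actual rendered image):

```python
import numpy as np
import tensorflow as tf

model = build_racing_net()  # or a model restored from a trained checkpoint
conv_outputs = [l.output for l in model.layers
                if isinstance(l, tf.keras.layers.Conv2D)]
probe = tf.keras.Model(inputs=model.input, outputs=conv_outputs)

frame = np.zeros((180, 320, 3), np.float32)  # stand-in for a rendered frame
feature_maps = probe.predict(frame[None], verbose=0)
for i, fmap in enumerate(feature_maps, 1):
    print(f"conv{i}: {fmap.shape}")  # each channel can be saved as a grayscale image
```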
Evaluation
In order to evaluate the performance of our DNN, we create four testing tracks based on well-known race tracks found in TORCS and Gran Turismo (we refer to Figure 6 for an overhead view of these tracks). Since the tracks must fit within the football stadium environment, they are scaled down, leading to much sharper turns and shorter straightaways, with the UAV reaching top speeds of over 100 km/h. Therefore, the evaluation tracks are significantly more difficult than they were originally intended to be in their native racing environments. We rank the four tracks in terms of difficulty, ranging from easy (track 1) and medium (track 2) to hard (track 3) and very hard (track 4). For all the following evaluations, both the trained networks and human pilots are tasked to fly two laps on the testing tracks and are scored based on the total gates they fly through and the overall lap time.
Effects of Exploration
We find exploration to be the predominant factor influencing network performance. As mentioned earlier, we augment the pilot flight data with offsets and corresponding corrective controls. We conduct a grid search to find a suitable degree of augmentation and to analyze the effect it has on overall UAV racing performance. To do this, we define two sets of offset parameters: one that acts as a horizontal offset (roll-offset) and one that acts as a rotational offset (yaw-offset). Figure 9 shows how the racing accuracy (percentage of gates traversed) varies with different sets of these augmentation offsets across the four testing tracks. It is clear that increasing the number of rendered images with yaw-offset has the greatest impact on performance. While it is possible for the DNN to complete tracks without being trained on roll-offsets, this is not the case for yaw-offsets. However, the huge gain from adding rotated camera views saturates quickly, and at a certain point the network does not benefit from more extensive augmentation. Therefore, we found four yaw-offsets to be sufficient. Including camera views with horizontal shifts is also beneficial, since the network is better equipped to recover once it is about to leave the track on straights. We found two roll-offsets to be sufficient to ensure this. Therefore, in the rest of our experiments, we use the following augmentation setup in training: horizontal roll-offset set {−50°, 50°} and rotational yaw-offset set {−30°, −15°, 15°, 30°}. Figure 9: Effect of data augmentation in training on overall UAV racing performance. By augmenting the original flight logs with data captured at more offsets (roll and yaw) from the original trajectory, along with their corresponding corrective controls, our UAV DNN can learn to traverse almost all the gates of the testing tracks, since it has learned to correct for exploratory maneuvers. After a sufficient amount of augmentation, no additional benefit is realized in improved racing performance.
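The grid search itself can be sketched as follows, with train_with_offsets() and gate_accuracy() as hypothetical helpers standing in for the retraining and track-evaluation steps described above:

```python
import itertools

# Hypothetical helpers: train_with_offsets() retrains the DNN on logs augmented
# with the given offset sets, and gate_accuracy() flies the four testing tracks
# and returns the mean percentage of gates traversed.
roll_sets = [(), (-50, 50)]
yaw_sets = [(), (-30, 30), (-30, -15, 15, 30)]

results = {}
for rolls, yaws in itertools.product(roll_sets, yaw_sets):
    model = train_with_offsets(roll_offsets=rolls, yaw_offsets=yaws)
    results[(rolls, yaws)] = gate_accuracy(model)

best = max(results, key=results.get)
print("best augmentation:", best, "-> accuracy:", results[best])
```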
Comparison to State-of-the-Art
We compare our racing DNN to the two most related and recent network architectures, the first denoted as Nvidia (for self-driving cars [3]) and the second as MAV (for forest-path navigating UAVs [41]). While the domains of these works are similar, it should be noted that flying a high-speed racing UAV is a particularly challenging task, especially since the effect of inertia is much more significant and there are more degrees of freedom. For fair comparison, we scale our dataset to the same input dimensionality and re-train each of the three networks. We then evaluate each of the trained models on the task of UAV racing on the testing tracks. It is worth pointing out that both the Nvidia and MAV networks (in their original implementations) use data augmentation as well, so we maintain the same strategy when training. For the Nvidia network, the exact offset choices for training are not publicly known, so we use a rotational offset set of {−30°, 30°} to augment its data. As for the MAV network, we use the same augmentation parameters proposed in the paper, i.e. a rotational offset of {−30°, 30°}. We needed to modify the MAV network to allow for a regression output instead of its original classification (left, center, and right controls). This is necessary, since our task is much more complex and discrete controls would lead to inadequate UAV racing performance.
It should be noted that in the original implementation of the Nvidia network [3] (based on real-world driving data), it was realized that additional augmentation was needed for reasonable automatic driving performance after the real-world data was acquired. To avoid recapturing the data, synthetic viewpoints (generated by interpolation) were used to augment the training dataset, which introduced undesirable distortions. By using our simulator, we are able to extract any number of camera views without distortions. Therefore, we also wanted to gauge the effect of additional augmentation on both the Nvidia and MAV networks, when they are trained using our default augmentation setting: horizontal roll-offset of {−50°, 50°} and rotational yaw-offset of {−30°, −15°, 15°, 30°}. We denote these trained networks as Nvidia++ and MAV++. Table 3 summarizes the results of these different network variants on the testing tracks. Results indicate that the performance of the original Nvidia and MAV networks suffers from insufficient data augmentation. They clearly do not make use of enough exploration. These networks improve in performance when our proposed data augmentation scheme (enabled by our simulator) is used. Regardless, our proposed DNN outperforms the Nvidia/Nvidia++ and MAV/MAV++ networks, where this improvement is less significant when more data augmentation or more exploratory behavior is learned. Unlike the other networks, our DNN performs consistently well on all the unseen tracks, owing to its sufficient network capacity needed to learn this complex task.
Pilot Diversity & Human vs. DNN
In this section, we investigate how the flying style of a pilot affects the network that is being learned. Table 3: Accuracy score of different pilots and networks on the four test tracks. The accuracy score represents the percentage of completed racing gates. The networks ending with ++ are variants of the original network with our augmentation strategy. To do this,
we compare the performance of the different networks on the testing set, when each of them is trained with flight data captured from pilots of varying flight expertise (intermediate and expert). We also trained models using the Nvidia [3] and MAV [41] architectures, with and without our default data augmentation settings. Table 3 summarizes the lap time and accuracy of these networks. Clearly, the pilot flight style can significantly affect the performance of the learned network. Figure 10 shows that there is a high correlation between the performance and flying style of the pilot used in training and those of the corresponding learned network. The trained networks clearly resemble the flying style and also the proficiency of their human trainers. Thus, our network that was trained on flights of the intermediate pilot achieves high accuracy but is quite slow, just as the expert network sometimes misses gates but achieves very good lap and overall times. Interestingly, although the networks perform similarly to their pilots, they fly more consistently, and therefore tend to outperform the human pilot with regard to overall time on multiple laps. This is especially true for our intermediate network. Both the intermediate and the expert network clearly outperform the novice human pilot, who needs several hours of practice and several attempts to reach performance similar to the networks'. Even our expert pilots were not always able to complete the test tracks on the first attempt.
While the percentage of passed gates and best lap time give a good indication of network performance, they do not convey any information about the style of the pilot. To this end, we visualize the performance of human pilots and the trained networks by plotting their trajectories onto the track (from a 2D overhead viewpoint). Moreover, we encode their speeds as a heatmap, where blue corresponds to the minimum speed and red to the maximum speed. Figure 11 shows a collection of heatmaps revealing several interesting insights. Firstly, the networks clearly imitate the style of the pilot they were trained on. This is especially true for the intermediate proficiency level, while the expert network sometimes overshoots, which causes it to lose speed and therefore to not match the speed pattern as well as the intermediate one. We also note that the performance gap between network and human increases as the expertise of the pilot increases. Note that the flight path of the expert network is less smooth and centered than those of its human counterpart and the intermediate network, respectively. This is partly due to the fact that the networks were only trained on two laps of flying across seven training tracks. An expert pilot has much more training than that and is therefore able to generalize much better to unseen environments. However, the experience advantage of the intermediate pilot over the network is much smaller, and therefore the performance gap is smaller. We also show the performance of our novice pilot on these tracks. While the intermediate pilots accelerate on straights, the novice is clearly not able to control speed that well, resulting in a very narrow velocity range. Despite flying quite slowly, he also gets off track several times. This underlines how challenging UAV racing is, especially for inexperienced pilots.
Conclusions and Future Work
In this paper, we proposed a robust imitation learning based framework to teach an unmanned aerial vehicle (UAV) to fly through challenging racing tracks at very high speeds, an unprecedented and difficult task. To do this, we trained a deep neural network (DNN) to predict the necessary UAV controls from raw image data, grounded in a photo-realistic simulator that also allows for realistic UAV physics. Training is made possible by logging data (rendered images from the UAV and stick controls) from human pilot flights while they maneuver the UAV through racing tracks. This data is augmented with sufficient offsets so as to teach the network to recover from flight mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.
In the future, we aim to transfer the network we trained in our simulator to the real world to compete against human pilots in real-world racing scenarios. Although we accurately modeled the simulated racing environment, the differences in appearance between the simulated and the real world will need to be reconciled. Therefore, we will investigate deep transfer learning techniques to enable a smooth transition between the simulator and the real world. Since our developed simulator and its seamless interface to deep learning platforms is generic in nature, we expect that this combination will open up unique opportunities for the community to develop better automated UAV flying methods, to expand its reach to other fields of autonomous navigation such as self-driving cars, and to benefit other interesting AI tasks (e.g. obstacle avoidance). Figure 11: Visualization of human and automated UAV flights super-imposed onto a 2D overhead view of different tracks. The color coding illustrates the instantaneous speed of the UAV. Notice how the UAV learns to speed up on straights and to slow down at turns, and how the flying style corresponds, especially with the intermediate pilots. | 5,330
1708.05884 | 2949490711 | Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups. | Simulation. As mentioned earlier, generating diverse 'natural' training data for sequential decision making through SL is tedious. Generating additional data for exploration purposes (i.e. in scenarios where both input and output pairs have to be generated) is much more so. Therefore, a lot of attention from the community is being given to simulators (or games) for this source of data. In fact, a broad range of work has exploited them recently for these types of learning, namely in animation and motion planning @cite_42 @cite_17 @cite_14 @cite_32 @cite_1 @cite_7 @cite_37, scene understanding @cite_25 @cite_3, pedestrian detection @cite_5, and identification of 2D/3D objects @cite_36 @cite_45 @cite_16. For instance, the authors of @cite_42 used Unity, a video game engine similar to Unreal Engine, to teach a bird how to fly in simulation. | {
"abstract": [
"Inspired by how humans learn dynamic motor skills through a progressive process of coaching and practices, we introduce an intuitive and interactive framework for developing dynamic controllers. The user only needs to provide a primitive initial controller and high-level, human-readable instructions as if s he is coaching a human trainee, while the character has the ability to interpret the abstract instructions, accumulate the knowledge from the coach, and improve its skill iteratively. We introduce “control rigs” as an intermediate layer of control module to facilitate the mapping between high-level instructions and low-level control variables. Control rigs also utilize the human coach's knowledge to reduce the search space for control optimization. In addition, we develop a new sampling-based optimization method, Covariance Matrix Adaptation with Classification (CMA-C), to efficiently compute-control rig parameters. Based on the observation of human ability to “learn from failure”, CMA-C utilizes the failed simulation trials to approximate an infeasible region in the space of control rig parameters, resulting a faster convergence for the CMA optimization. We demonstrate the design process of complex dynamic controllers using our framework, including precision jumps, turnaround jumps, monkey vaults, drop-and-rolls, and wall-backflips.",
"Abstract: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"We present a novel, general-purpose Model-Predictive Control (MPC) algorithm that we call Control Particle Belief Propagation (C-PBP). C-PBP combines multimodal, gradient-free sampling and a Markov Random Field factorization to effectively perform simultaneous path finding and smoothing in high-dimensional spaces. We demonstrate the method in online synthesis of interactive and physically valid humanoid movements, including balancing, recovery from both small and extreme disturbances, reaching, balancing on a ball, juggling a ball, and fully steerable locomotion in an environment with obstacles. Such a large repertoire of movements has not been demonstrated before at interactive frame rates, especially considering that all our movement emerges from simple cost functions. Furthermore, we abstain from using any precomputation to train a control policy offline, reference data such as motion capture clips, or state machines that break the movements down into more manageable subtasks. Operating under these conditions enables rapid and convenient iteration when designing the cost functions.",
"We introduce a new approach for recognizing and reconstructing 3D objects in images. Our approach is based on an analysis by synthesis strategy. A forward synthesis model constructs possible geometric interpretations of the world, and then selects the interpretation that best agrees with the measured visual evidence. The forward model synthesizes visual templates defined on invariant (HOG) features. These visual templates are discriminatively trained to be accurate for inverse estimation. We introduce an efficient \"brute-force\" approach to inference that searches through a large number of candidate reconstructions, returning the optimal one. One benefit of such an approach is that recognition is inherently (re)constructive. We show state of the art performance for detection and reconstruction on two challenging 3D object recognition datasets of cars and cuboids.",
"",
"",
"",
"In this work we address the problem of indoor scene understanding from RGB-D images. Specifically, we propose to find instances of common furniture classes, their spatial extent, and their pose with respect to generalized class models. To accomplish this, we use a deep, wide, multi-output convolutional neural network (CNN) that predicts class, pose, and location of possible objects simultaneously. To overcome the lack of large annotated RGB-D training sets (especially those with pose), we use an on-the-fly rendering pipeline that generates realistic cluttered room scenes in parallel to training. We then perform transfer learning on the relatively small amount of publicly available annotated RGB-D data, and find that our model is able to successfully annotate even highly challenging real scenes. Importantly, our trained network is able to understand noisy and sparse observations of highly cluttered scenes with a remarkable degree of accuracy, inferring class and pose from a very limited set of cues. Additionally, our neural network is only moderately deep and computes class, pose and position in tandem, so the overall run-time is significantly faster than existing methods, estimating all output parameters simultaneously in parallel.",
"",
"",
"Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets, such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrowing the representational gap between the ideal input of a scene understanding system and object class detector output, by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at minimal performance loss w.r.t. 2D bounding box localization, but outperforms prior work in 3D viewpoint estimation and ultra-wide baseline matching.",
"In a glance, we can perceive whether a stack of dishes will topple, a branch will support a child’s weight, a grocery bag is poorly packed and liable to tear or crush its contents, or a tool is firmly attached to a table or free to be lifted. Such rapid physical inferences are central to how people interact with the world and with each other, yet their computational underpinnings are poorly understood. We propose a model based on an “intuitive physics engine,” a cognitive mechanism similar to computer engines that simulate rich physics in video games and graphics, but that uses approximate, probabilistic simulations to make robust and fast inferences in complex natural scenes where crucial information is unobserved. This single model fits data from five distinct psychophysical tasks, captures several illusions and biases, and explains core aspects of human mental models and common-sense reasoning that are instrumental to how humans understand their everyday world.",
"We present a Model-Predictive Control (MPC) system for online synthesis of interactive and physically valid character motion. Our system enables a complex (36-DOF) 3D human character model to balance in a given pose, dodge projectiles, and improvise a get up strategy if forced to lose balance, all in a dynamic and unpredictable environment. Such contact-rich, predictive and reactive motions have previously only been generated offline or using a handcrafted state machine or a dataset of reference motions, which our system does not require. For each animation frame, our system generates trajectories of character control parameters for the near future --- a few seconds --- using Sequential Monte Carlo sampling. Our main technical contribution is a multimodal, tree-based sampler that simultaneously explores multiple different near-term control strategies represented as parameter splines. The strategies represented by each sample are evaluated in parallel using a causal physics engine. The best strategy, as determined by an objective function measuring goal achievement, fluidity of motion, etc., is used as the control signal for the current frame, but maintaining multiple hypotheses is crucial for adapting to dynamically changing environments."
],
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_7",
"@cite_36",
"@cite_42",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_45",
"@cite_5",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"1993309788",
"2963864421",
"2053621240",
"2059704894",
"",
"",
"",
"2203820691",
"",
"",
"1964201035",
"2059100041",
"1996626756"
]
} | Teaching UAVs to Race Using UE4Sim * (www.airsim.org) | Unmanned aerial vehicles (UAVs) like drones and multicopters are attracting more attention in the graphics community. This development is stimulated by the merging of researchers from robotics, graphics, and computer vision into a common scientific community. Recent UAV-related contributions in computer graphics cover a wide spectrum from computational multicopter design, optimization, and fabrication [6] to state-of-the-art video capturing using quadrotor-based camera systems [15] and the generation of dynamically feasible trajectories [38]. While UAV design and point-to-point stabilized flight navigation are becoming solved problems (as is evident from recent advances in UAV technology from industry leaders such as DJI, Amazon, and Intel), autonomous navigation of UAVs in more complex and real-world scenarios, such as unknown congested environments, GPS-denied areas, through narrow spaces, or around obstacles, is still far from being solved. In fact, only human pilots can reliably maneuver in these environments. This is a complex problem, since it requires both sensing real-world conditions and understanding appropriate response policies, i.e. perceiving obstacles and adjusting the navigation trajectory accordingly. There is perhaps no area where human pilots are more required to control UAVs than in the emerging sport of UAV racing, where all these complex sense-and-understand tasks are conducted at breakneck speeds of over 100 km/h. Learning to control racing UAVs is a challenging task even for humans. It takes hours of practice and quite often hundreds of crashes. A more affordable approach to developing professional flight skills is to first train many hours on a flight simulator before going to the field. Since most of the fine motor skills of flight control are developed in simulators, the pilot is able to quickly transition to real-world flights.
In this contribution, we capitalize on this insight into how human pilots learn to sense and react with appropriate controls to their environment to train a deep network that can fly racing UAVs through challenging racing courses, many of which test the capabilities of even professional pilots. Inspired by recent work that trains artificial intelligence (AI) systems through the use of computer games [5], we create a photo-realistic UAV racing game with accurate physics using the Unreal Engine 4 (UE4) and integrate it with UE4Sim [27]. As this is the core learning environment, we develop a photo-realistic and customizable racing area in the form of a stadium based on a three-dimensional (3D) scanned real-world location to minimize the discrepancy incurred when transitioning from the simulated to a real-world scenario. Inspired by recent work on self-driving cars [3], our automated racing UAV approach goes beyond simple pattern detection by learning the full control system required to fly a UAV through a racing course (arguably much more complicated than driving a car). As such, the proposed network extends the complexity of previous work to the control of a six degrees of freedom (6-DoF) UAV flying system, enabling the UAV to traverse tight spaces and make sharp turns at very high speeds (a task that cannot be performed by a ground vehicle). Our imitation learning based approach simultaneously addresses both problems of perception and policy selection as the UAV navigates through the course, after it is trained from human pilot input on how to control itself (exploitation) and how to correct itself in case of drift (exploration). Our developed simulator is multi-purpose, as it enables the evaluation of a trained network in real-time on racing courses it has not encountered before.
Contributions. Our specific contributions are as follows.
(1) We are the first to introduce a photo-realistic simulator that is based on a real-world 3D environment, can be easily customized to build increasingly challenging racing courses, enables realistic UAV physical behavior, and is integrated with a real-world UAV controller (powered by a human pilot or a synthetic one). Logging video data from the UAV point-of-view and pilot controls is seamless and can be used to effortlessly generate large-scale training data for AI systems targeting UAV flying in particular and autonomous vehicles in general (e.g. self-driving cars).
(2) To facilitate the training, parameter tuning, and evaluation of deep networks on this type of simulated data, we provide a full integration between the simulator and an end-to-end deep learning pipeline (based on TensorFlow) to be made publicly available to the community. Similar to other deep networks trained for game play, our integration will allow the community to fully explore many scenarios and tasks that go far beyond UAV racing in a rich and diverse photo-realistic gaming environment (e.g. obstacle avoidance and path planning).
(3) To the best of our knowledge, this paper is the first to fully demonstrate the capability of deep networks in learning how to master the complex control of UAVs at racing speeds through difficult flight scenarios. Experiments show that our trained network can reach near-expert performance, while outperforming inexperienced pilots, who can use our system in a learning game-play mode to become better pilots.
Overview
The fundamental modules of our proposed system are summarized in Figure 2, which represents the end-to-end dataset generation, learning, and evaluation process. In what follows, we provide details for each of these modules, namely how datasets are automatically generated within the simulator, how our proposed DNN is designed and trained, and how the learned DNN is evaluated. Note that this generic architecture can also be applied to any other type of vision-based navigation task made possible using our simulator.
Simulation Centric Dataset Generation
Our simulation environment allows for the automatic generation of customizable datasets, which comprise rich expert data to robustly train a DNN through imitation learning.
UAV Flight Simulation and Real-World Creation. The core of the system is the application of our UE4-based simulator. It is built on top of the open-source UE4 project for computer vision called UAVSim [28]. Several changes were made to adapt the simulator for training our proposed racing DNN. First, we replaced the UAV with the 3D model and specifications of a racing quadcopter (see Figure 3). We retuned the PID controller of the UAV to be more responsive and to function in a racing mode, where altitude control and stabilization are still enabled but with much higher rates and steeper pitch and roll angles. In fact, this is now a popular racing mode available on consumer UAVs, such as the DJI Mavic. The simulator frame rate is locked at 60 fps, and at every frame a log is recorded with UAV position, orientation, velocity, and stick inputs from the pilot. To accommodate realistic input, we integrated the same UAV transmitter that would be used in real-world racing scenarios. We refer to the supplementary material for an example pilot recording. Following paradigms set by UAV racing norms, each racing course/track in our simulator comprises a sequence of gates connected by uniformly spaced cones. The track has a timing system that records the time between each gate, lap, and completion time of the race. The gates have their own logic to detect whether the UAV has passed through the gate in the correct direction. This allows us to trigger both the start and end of the race, as well as determine the number of gates traversed by the UAV. These metrics (time and percentage of gates passed) constitute the overall per-track performance of a pilot, whether human or DNN.
Many professional pilots compete in time trials of well-known tracks such as those posted by the MultiGP Drone Racing League. Following this paradigm, our simulator race course is modeled after a football stadium, where local professional pilots regularly set up MultiGP tracks. Using a combination of LiDAR scanning and aerial photogrammetry, we captured the stadium with an accuracy of 0.5 cm; see Figure 5. A team of architects used the dense point cloud and textured mesh to create an accurate solid model with physics-based rendering (PBR) textures in 3DSmax for export to UE4. This resulted in a geometrically accurate and photo-realistic race course that remains low in poly count, so as to run in UE4 at 60 fps, in which all training and evaluation experiments are conducted. We refer to Figure 4 for a side-by-side comparison of the real and virtual stadiums. Moreover, we want the simulated race course to be as dynamic and photo-realistic as possible, since the eventual goal is to transition the trained DNN from the simulated environment to the real world, particularly starting with a venue similar to the one learned within the simulator. The concept of generating synthetic clones of real-world data for deep learning purposes has been adopted in previous work [8].
A key requirement for a relatively straightforward simulated-to-real-world transition is the DNN's ability to learn to automatically detect the gates and cones in the track within a complexly textured and dynamic environment. To this end, we enrich the simulated environment and race track with customizable textures (e.g. grass, snow, and dirt), gates (different shapes and appearance), and lighting.
Automatic Track Generation. We developed a track editor, in which a user draws a 2D sketch of the overhead view of the track; the 3D track is automatically generated accordingly and integrated into the timing system. With this editor, we created eleven tracks: seven for training and four for testing and evaluation. Each track is defined by gate positions and track lanes delineated by uniformly spaced racing cones distributed along the splines connecting adjacent gates. To avoid user bias in designing the race tracks, we use images collected from the internet and trace their contours in the editor to create uniquely stylized tracks. Following track-design trends popular in the UAV racing community, both training and testing tracks contain a large variety of turn types and straight-segment lengths. From a learning point of view, this track diversity exposes the DNN to a large number of track variations, as well as their corresponding navigation controls. Obviously, the testing/evaluation tracks are never seen in training, neither by the human pilot nor by the DNN.
Figure 4: Left: aerial image captured from a UAV hovering above the stadium racing track. Right: rendering of the reconstructed stadium generated at a similar altitude and viewing angle within the simulator.
Acquiring Large-Scale Ground-Truth Pilot Data. We record human pilot input from a Taranis flight transmitter integrated into the simulator through a joystick. This input is solicited from three pilots with different levels of skill: novice (has never flown before), intermediate (a moderately experienced pilot), and expert (a professional racing pilot). The pilots are given the opportunity to fly the seven training tracks as many times as needed until they successfully complete the tracks at their best time while passing through all gates. For the evaluation tracks, the pilots are allowed to fly the course only as many times as needed to complete the entire course without crashing. We automatically score pilot performance based on lap time and percentage of gates traversed.
The simulation environment allows us to log the images rendered from the UAV camera point-of-view and the UAV flight controls from the transmitter. As mentioned earlier, and to enable exploration, robust imitation learning requires the augmentation of these ground-truth logs with synthetic ones generated at a user-defined set of UAV offset positions and orientations, accompanied by the corresponding controls needed to correct for these offsets. Also, since the logs can be replayed at a later time in the simulator, we can augment the dataset further by changing environmental conditions, including lighting, cone spacing or appearance, and other environmental dynamics (e.g. clouds). Therefore, each pilot flight leads to a large number of image-control pairs (both original and augmented) that will be used to train the UAV to robustly recover from possible drift along each training track, as well as on unseen evaluation tracks. Details of how our proposed DNN architecture is designed and trained are provided in Section 5. In general, more augmented data should improve UAV flight performance, assuming that the control mapping and original flight data are noise-free. However, in many scenarios this is not the case, so we find that there is a limit after which augmentation no longer helps (and can even degrade) explorative learning. Empirical results validating this observation are detailed in Section 6.
Note that assigning corrective controls to the augmented data is quite complex in general, since they depend on many factors, including the current UAV velocity, its relative position on the track, its weight, and its current attitude. While it is possible to get this data in the simulation, it is very difficult to obtain in the real world in real-time. Therefore, we employ a fairly simple but effective model to determine these augmented controls that also scales to real-world settings. We add or subtract a corrective value to the pilot roll and yaw stick inputs for each position or orientation offset that is applied. For rotational offsets, we do not only apply a yaw correction but also couple it to roll, because the UAV is in motion while rotating, which would otherwise cause it to wash out due to its inertia.
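A minimal sketch of this correction model is given below; the gain values and the stick convention (all channels in [−1, 1]) are hypothetical choices for illustration, since the text above fixes only the structure of the model: additive roll/yaw corrections, with yaw offsets additionally coupled into roll.

```python
# Hypothetical gains: stick correction per degree of applied offset.
K_ROLL = 0.010    # horizontal (roll) offsets
K_YAW = 0.015     # rotational (yaw) offsets
K_COUPLE = 0.005  # roll correction coupled to yaw offsets (inertia compensation)

def corrective_controls(sticks, roll_offset_deg=0.0, yaw_offset_deg=0.0):
    """Map a logged pilot sample (throttle, pitch, roll, yaw) to the corrective
    sticks paired with a synthetically offset camera view."""
    throttle, pitch, roll, yaw = sticks
    roll -= K_ROLL * roll_offset_deg   # steer back toward the track center
    yaw -= K_YAW * yaw_offset_deg      # rotate back toward the track heading
    roll -= K_COUPLE * yaw_offset_deg  # couple yaw to roll so the moving UAV
                                       # does not wash out due to its inertia
    roll = max(-1.0, min(1.0, roll))
    yaw = max(-1.0, min(1.0, yaw))
    return (throttle, pitch, roll, yaw)
```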
DNN Interface for Real-Time Evaluation. To evaluate the performance of a trained DNN in real-time at 60 fps, we establish a TCP socket connection between the UE4 simulator and the Python wrapper (TensorFlow) executing the DNN. In doing so, the simulator continuously sends rendered UAV camera images across TCP to the DNN, which in turn processes each image individually to predict the next UAV stick inputs (flight controls) that are fed back to the UAV in the simulator over the same connection. Another advantage of this TCP connection is that the DNN prediction can be run on a system separate from the one running the simulator. We expect that this versatile and multi-purpose interface between the simulator and the DNN framework will enable opportunities for the research community to further develop DNN solutions not only to the task of automated UAV navigation (using imitation learning) but also to the more general tasks of vehicle maneuvering and obstacle avoidance (possibly using other forms of learning, including RL).
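The sketch below illustrates the Python/TensorFlow side of this loop. The port number, the raw-RGB framing, and the little-endian float packing are assumptions made for illustration; the actual wire protocol between UE4 and the wrapper is not specified above.

```python
import socket
import struct
import numpy as np

HOST, PORT = "127.0.0.1", 9000   # hypothetical address of the UE4 simulator
W, H = 320, 180                  # DNN input resolution

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("simulator closed the connection")
        buf += chunk
    return buf

def prediction_loop(model):
    """Receive rendered frames and send back the four predicted stick values."""
    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            raw = recv_exact(sock, W * H * 3)            # one raw RGB frame
            img = np.frombuffer(raw, np.uint8).reshape(H, W, 3)
            img = img[None].astype(np.float32) / 255.0   # batch of one
            sticks = model.predict(img, verbose=0)[0]    # four floats
            sock.sendall(struct.pack("<4f", *sticks))    # controls back
```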
Learning
In this section, we provide a detailed description of the learning strategy used to train our DNN, as well as of its network architecture and design. We also explore some of the inner workings of one of these trained DNNs to shed light on how the network solves the problem of automated UAV racing.
Dataset Preparation and Augmentation
As is the case for DNN-based solutions to other tasks, careful construction of the training set is a key requirement for robust and effective DNN training. To this end, and as mentioned earlier, we dedicate seven racing tracks (with their corresponding image-control pairs logged from human pilot runs in our simulator) for training and four tracks for testing/evaluation. We design the tracks such that they are similar to what racing professionals are accustomed to and such that they offer enough diversity and exploration capability for proper network generalization on the unseen tracks. Figure 6 illustrates an overhead view of all these tracks.
As mentioned in Section 4, we log the pilot flight inputs and all other necessary parameters so that we can accurately replay the flights. These log files are then augmented using the specified offsets and replayed within the simulator while saving all rendered images, thus providing exploratory insights to the racing DNN. For completeness, we summarize the details of the data generated for each training/testing track in Table 1. It is clear that the augmentation increases the size of the original dataset approximately seven-fold (consistent with one original view plus two roll-offset and four yaw-offset variants per frame). In Section 6, we show the effect of changing the amount of augmentation on the UAV's ability to generalize well. Moreover, for this dataset, we choose to use the intermediate pilot only, so as to strike a trade-off between style of flight and overall size of the dataset. In Section 6.3, we show the effects of training with different flying styles.
Network Architecture and Implementation Details
To train a DNN to predict stick controls to the UAV from images, we choose a regression network architecture similar in spirit to the one used by Bojarski et al. [3]; however, we make changes to accommodate the complexity of the task at hand and to improve robustness in training. Our DNN architecture is shown in Figure 7. The network consists of eight layers: five convolutional and three fully connected. Since we implicitly want to localize the track and gates, we use striding in the convolutional layers instead of (max) pooling, which would add some degree of translation invariance. The DNN is given a single RGB image with a 320×180 pixel resolution as input and is trained to regress the four control/stick inputs to the UAV using a standard L2 loss and a dropout ratio of 0.5. We find that the relatively high input resolution (i.e. higher network capacity), as compared to related methods [3,41], is useful for learning this more complicated maneuvering task and for enhancing the network's ability to look further ahead. This affords the network the robustness needed for long-term trajectory stability. We arrived at this compact network architecture through extensive validation experiments; it strikes a reasonable tradeoff between computational complexity and predictive performance. This careful design makes the proposed DNN architecture feasible for real-time applications on embedded hardware (e.g. the NVIDIA TX1), unlike previous architectures [3] at the same input size. In Table 2, we report the evaluation time on the NVIDIA Titan X along with its technical details, and how it compares to an NVIDIA TX1. Based on [30], we expect our network to still run at real-time speed, with over 60 frames per second, on this embedded hardware.
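The text above fixes the layer count, the use of striding, the input size, the loss, and the dropout ratio; the sketch below fills in filter counts and layer widths with values borrowed from the Nvidia architecture [3], so those particular numbers should be read as assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_racing_dnn():
    """Five strided convolutions followed by three fully connected layers,
    regressing the four stick inputs from a single 320x180 RGB frame."""
    model = models.Sequential([
        layers.Input(shape=(180, 320, 3)),                  # H x W x RGB
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.5),                       # dropout ratio of 0.5
        layers.Dense(50, activation="relu"),
        layers.Dense(4, activation="tanh"),        # throttle, pitch, roll, yaw
    ])
    model.compile(optimizer="adam", loss="mse")    # standard L2 loss
    return model
```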
For training, we use a standard stochastic gradient descent (SGD) style optimization strategy (namely Adam) within the TensorFlow platform. As such, one instance of our DNN can be trained to convergence on our dataset in less than two hours on a single GPU. This relatively fast training time enables finer hyper-parameter tuning.
Table 2: Comparison of the NVIDIA Titan X and the NVIDIA TX1. The performance of the TX1 is approximated according to [30].
In contrast to other work where the frame rate is sampled down to 10 fps or lower [3,4,41], our racing environment is highly dynamic (with tight turns, high speed, and low inertia of the UAV), so we use a frame rate of 60 fps. This allows the UAV to be very responsive and move at high speeds, while maintaining a level of smoothness in controls. An alternative approach for temporally smooth controls is to include historic data in the training process (e.g. adding the previous controls as input to the DNN). This can make the network more complex, harder to train, and less responsive in the highly dynamic racing environment, where many time-critical decisions have to be made within a couple of frames (about 30 ms). Therefore, we find the high learning frame rate of 60 fps a good tradeoff between smooth controls and responsiveness.
Reinforcement vs. Imitation Learning. Of course, our simulator can also lend itself to training networks using reinforcement learning. This type of learning does not specifically require supervised pilot information, as it searches for an optimal policy that leads to the highest eventual reward (e.g. highest percentage of gates traversed or lowest lap time). Recent methods have made use of reinforcement learning to learn simpler tasks without supervision [5]; however, they require weeks of training and a much faster simulator (1,000 fps is possible in simple, non-photo-realistic games). For UAV racing, the required task is much more complicated, and since the intent is to transfer the learned network into the real world, a (slower) photo-realistic simulator is mandatory. Because of these two constraints, we decided to train our DNN using imitation learning instead of reinforcement learning.
Network Visualization
After training our DNN to convergence, we visualize how parts of the network behave in order to gain additional insights. Figure 8 shows some feature maps in different layers of the trained DNN for the same input image. Note how the filters have automatically learned to extract all necessary information in the scene (i.e. gates and cones), while higher-level layers do not respond to other parts of the environment. Although the feature map resolution becomes very low in the higher DNN layers, the feature map in the fifth convolutional layer is interesting, as it marks the top, left, and right parts of a gate with just a single activation each. This clearly demonstrates that our DNN is learning semantically intuitive features for the task of UAV racing.
Figure 8: Visualization of feature maps at different convolutional layers in our trained network. Notice how the network activates at locations of semantic meaning for the task of UAV racing, namely the gates and cones.
Evaluation
In order to evaluate the performance of our DNN, we create four testing tracks based on well-known race tracks found in TORCS and Gran Turismo (see Figure 6 for an overhead view of these tracks). Since the tracks must fit within the football stadium environment, they are scaled down, leading to much sharper turns and shorter straightaways, with the UAV reaching top speeds of over 100 km/h. The evaluation tracks are therefore significantly more difficult than originally intended in their source racing environments. We rank the four tracks in terms of difficulty, ranging from easy (track 1), medium (track 2), and hard (track 3), to very hard (track 4). For all the following evaluations, both the trained networks and the human pilots are tasked to fly two laps in the testing tracks and are scored based on the total number of gates they fly through and the overall lap time.
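For clarity, the per-run scores can be summarized by a small helper like the following (the field names are illustrative):

```python
def score_run(gates_passed, total_gates, lap_times):
    """Summarize a two-lap run by gate accuracy and timing."""
    return {
        "accuracy_pct": 100.0 * gates_passed / total_gates,
        "best_lap_s": min(lap_times) if lap_times else float("inf"),
        "overall_s": sum(lap_times) if lap_times else float("inf"),
    }
```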
Effects of Exploration
We find exploration to be the predominant factor influencing network performance. As mentioned earlier, we augment the pilot flight data with offsets and corresponding corrective controls. We conduct a grid search to find a suitable degree of augmentation and to analyze its effect on overall UAV racing performance. To do this, we define two sets of offset parameters: one that acts as a horizontal offset (roll-offset) and one that acts as a rotational offset (yaw-offset). Figure 9 shows how the racing accuracy (percentage of gates traversed) varies with different sets of these augmentation offsets across the four testing tracks. It is clear that increasing the number of rendered images with yaw-offsets has the greatest impact on performance. While it is possible for the DNN to complete tracks without being trained on roll-offsets, this is not the case for yaw-offsets. However, the large gain from adding rotated camera views saturates quickly, and at a certain point the network does not benefit from more extensive augmentation. Therefore, we found four yaw-offsets to be sufficient. Including camera views with horizontal shifts is also beneficial, since the network is better equipped to recover once it is about to leave the track on straights. We found two roll-offsets to be sufficient to ensure this. Therefore, in the rest of our experiments, we use the following augmentation setup in training: horizontal roll-offset set {−50°, 50°} and rotational yaw-offset set {−30°, −15°, 15°, 30°}.
Figure 9: Effect of data augmentation in training on overall UAV racing performance. By augmenting the original flight logs with data captured at more offsets (roll and yaw) from the original trajectory, along with their corresponding corrective controls, our UAV DNN can learn to traverse almost all the gates of the testing tracks, since it has learned to correct for exploratory maneuvers. After a sufficient amount of augmentation, no additional benefit is realized in improved racing performance.
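A sketch of this grid search is shown below; train_fn and evaluate_fn stand in for the dataset-generation/training/evaluation pipeline of Figure 2, and the candidate offset sets are illustrative rather than the exact grid we searched.

```python
import itertools

ROLL_SETS = [(), (-50, 50)]                     # degrees
YAW_SETS = [(), (-30, 30), (-30, -15, 15, 30)]  # degrees

def offset_grid_search(train_fn, evaluate_fn, test_tracks):
    """Retrain and score one DNN per augmentation configuration; the score is
    the mean percentage of gates traversed over the test tracks."""
    results = {}
    for rolls, yaws in itertools.product(ROLL_SETS, YAW_SETS):
        model = train_fn(roll_offsets=rolls, yaw_offsets=yaws)
        scores = [evaluate_fn(model, track) for track in test_tracks]
        results[(rolls, yaws)] = sum(scores) / len(scores)
    best = max(results, key=results.get)
    return best, results
```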
Comparison to State-of-the-Art
We compare our racing DNN to the two most related and recent network architectures: the first denoted as Nvidia (for self-driving cars [3]) and the second as MAV (for forest-path navigating UAVs [41]). While the domains of these works are similar, it should be noted that flying a high-speed racing UAV is a particularly challenging task, especially since the effect of inertia is much more significant and there are more degrees of freedom. For a fair comparison, we scale our dataset to the same input dimensionality and re-train each of the three networks. We then evaluate each of the trained models on the task of UAV racing on the testing tracks. It is worth noting that both the Nvidia and MAV networks (in their original implementation) use data augmentation as well, so we maintain the same strategy when training. For the Nvidia network, the exact offset choices for training are not publicly known, so we use a rotational offset set of {−30°, 30°} to augment its data. As for the MAV network, we use the same augmentation parameters proposed in the paper, i.e. a rotational offset set of {−30°, 30°}. We needed to modify the MAV network to allow for a regression output instead of its original classification (left, center, and right controls). This is necessary since our task is much more complex, and discrete controls would lead to inadequate UAV racing performance.
It should be noted that in the original implementation of the Nvidia network [3] (based on real-world driving data), it was realized that additional augmentation was needed for reasonable automatic driving performance after the real-world data was acquired. To avoid recapturing the data, synthetic viewpoints (generated by interpolation) were used to augment the training dataset, which introduced undesirable distortions. By using our simulator, we are able to extract any number of camera views without distortions. Therefore, we also wanted to gauge the effect of additional augmentation on both the Nvidia and MAV networks, when they are trained using our default augmentation setting: a horizontal roll-offset set of {−50°, 50°} and a rotational yaw-offset set of {−30°, −15°, 15°, 30°}. We denote these trained networks as Nvidia++ and MAV++. Table 3 summarizes the results of these different network variants on the testing tracks. The results indicate that the original Nvidia and MAV networks suffer from insufficient data augmentation; they clearly do not make use of enough exploration. These networks improve in performance when our proposed data augmentation scheme (enabled by our simulator) is used. Regardless, our proposed DNN outperforms the Nvidia/Nvidia++ and MAV/MAV++ networks, although the improvement is less pronounced when more data augmentation is used and more exploratory behavior is learned. Unlike the other networks, our DNN performs consistently well on all the unseen tracks, owing to its sufficient network capacity for learning this complex task.
Pilot Diversity & Human vs. DNN
In this section, we investigate how the flying style of a pilot affects the network that is being learned. To do this, we compare the performance of the different networks on the testing set, when each of them is trained with flight data captured from pilots of varying flight expertise (intermediate and expert). We also trained models using the Nvidia [3] and MAV [41] architectures, with and without our default data augmentation settings. Table 3 summarizes the lap times and accuracies of these networks. Clearly, the pilot flight style can significantly affect the performance of the learned network. Figure 10 shows that there is a high correlation, regarding both performance and flying style, between the pilot used in training and the corresponding learned network. The trained networks clearly resemble the flying style and also the proficiency of their human trainers. Thus, our network that was trained on flights of the intermediate pilot achieves high accuracies but is quite slow, just as the expert network sometimes misses gates but achieves very good lap and overall times. Interestingly, although the networks perform similarly to their pilots, they fly more consistently, and therefore tend to outperform the human pilot with regard to overall time across multiple laps. This is especially true for our intermediate network. Both the intermediate and the expert network clearly outperform the novice human pilot, who takes several hours of practice and several attempts to reach similar performance to the network. Even our expert pilots were not always able to complete the test tracks on the first attempt.
Table 3: Accuracy scores of different pilots and networks on the four test tracks. The accuracy score represents the percentage of completed racing gates. Networks ending in ++ are variants of the original networks trained with our augmentation strategy.
While the percentage of passed gates and the best lap time give a good indication of network performance, they do not convey any information about the style of the pilot. To this end, we visualize the performance of human pilots and the trained networks by plotting their trajectories onto the track (from a 2D overhead viewpoint). Moreover, we encode their speeds as a heatmap, where blue corresponds to the minimum speed and red to the maximum speed. Figure 11 shows a collection of heatmaps revealing several interesting insights. Firstly, the networks clearly imitate the style of the pilot they were trained on. This is especially true for the intermediate proficiency level, while the expert network sometimes overshoots, which causes it to lose speed and therefore not to match the speed pattern as well as the intermediate one. We also note that the performance gap between network and human increases as the expertise of the pilot increases. Note that the flight path of the expert network is less smooth and centered than those of its human counterpart and the intermediate network, respectively. This is partly due to the fact that the networks were only trained on two laps of flying across seven training tracks. An expert pilot has much more training than that and is therefore able to generalize much better to unseen environments. However, the experience advantage of the intermediate pilot over the network is much smaller, and therefore the performance gap is smaller. We also show the performance of our novice pilot on these tracks. While the intermediate pilot accelerates on straights, the novice is clearly not able to control speed that well, resulting in a very narrow velocity range. Despite flying quite slowly, the novice also goes off track several times. This underlines how challenging UAV racing is, especially for inexperienced pilots.
Conclusions and Future Work
In this paper, we proposed a robust imitation-learning-based framework to teach an unmanned aerial vehicle (UAV) to fly through challenging racing tracks at very high speeds, an unprecedented and difficult task. To do this, we trained a deep neural network (DNN) to predict the necessary UAV controls from raw image data, grounded in a photo-realistic simulator that also allows for realistic UAV physics. Training is made possible by logging data (rendered images from the UAV and stick controls) from human pilot flights, while they maneuver the UAV through racing tracks. This data is augmented with sufficient offsets to teach the network to recover from flight mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.
In the future, we aim to transfer the network we trained in our simulator to the real world to compete against human pilots in real-world racing scenarios. Although we accurately modeled the simulated racing environment, the differences in appearance between the simulated and the real world will need to be reconciled. Therefore, we will investigate deep transfer learning techniques to enable a smooth transition between the simulator and the real world. Since our developed simulator and its seamless interface to deep learning platforms are generic in nature, we expect that this combination will open up unique opportunities for the community to develop better automated UAV flying methods, to expand its reach to other fields of autonomous navigation such as self-driving cars, and to benefit other interesting AI tasks (e.g. obstacle avoidance).
Figure 11: Visualization of human and automated UAV flights superimposed onto a 2D overhead view of different tracks. The color coding illustrates the instantaneous speed of the UAV. Notice how the UAV learns to speed up on straights and to slow down at turns, and how the flying style corresponds, especially with the intermediate pilots. | 5,330
1708.05884 | 2949490711 | Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups. | Moreover, there is another line of work that uses hardware-in-the-loop (HIL) simulation. Examples include JMAVSim @cite_18 @cite_33 , which was used to develop and evaluate controllers, and RotorS @cite_8 , which was used to study visual servoing. The visual quality of most HIL simulators is very basic and far from photo-realistic, with the exception of AirSim @cite_13 . While there are multiple established simulators such as RealFlight, FlightGear, or X-Plane for simulating aerial platforms, they have several limitations. In contrast to Unreal Engine, advanced shading and post-processing settings are not available, and the selection of assets and textures is limited. Recent work @cite_40 @cite_19 @cite_48 @cite_10 @cite_9 highlights how modern game engines can be used to generate photo-realistic training datasets and pixel-accurate segmentation masks. The goal of this work is to build an automated UAV flying system (based on imitation learning) that can relatively easily be transitioned from a simulated world to the real one. Therefore, we choose Sim4CV @cite_47 @cite_9 as our simulator, which uses the open-source game engine UE4 and provides a full software-in-the-loop UAV simulation. The simulator also provides a lot of flexibility in terms of assets, textures, and communication interfaces. | {
"abstract": [
"",
"Purpose – The purpose of this paper is to present the development of hardware‐in‐the‐loop simulation (HILS) for visual target tracking of an octorotor unmanned aerial vehicle (UAV) with onboard computer vision.Design methodology approach – HILS for visual target tracking of an octorotor UAV is developed by integrating real embedded computer vision hardware and camera to software simulation of the UAV dynamics, flight control and navigation systems run on Simulink. Visualization of the visual target tracking is developed using FlightGear. The computer vision system is used to recognize and track a moving target using feature correlation between captured scene images and object images stored in the database. Features of the captured images are extracted using speed‐up robust feature (SURF) algorithm, and subsequently matched with features extracted from object image using fast library for approximate nearest neighbor (FLANN) algorithm. Kalman filter is applied to predict the position of the moving target on...",
"In this chapter we present a modular Micro Aerial Vehicle (MAV) simulation framework, which enables a quick start to perform research on MAVs. After reading this chapter, the reader will have a ready to use MAV simulator, including control and state estimation. The simulator was designed in a modular way, such that different controllers and state estimators can be used interchangeably, while incorporating new MAVs is reduced to a few steps. The provided controllers can be adapted to a custom vehicle by only changing a parameter file. Different controllers and state estimators can be compared with the provided evaluation framework. The simulation framework is a good starting point to tackle higher level tasks, such as collision avoidance, path planning, and vision based problems, like Simultaneous Localization and Mapping (SLAM), on MAVs. All components were designed to be analogous to its real world counterparts. This allows the usage of the same controllers and state estimators, including their parameters, in the simulation as on the real MAV.",
"",
"",
"",
"",
"Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.",
"In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https: ivul.kaust.edu.sa Pages pub-benchmark-simulator-uav.aspx.).",
"Developing and testing algorithms for autonomous vehicles in real world is an expensive and time consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms and software protocols. In addition, the modular design enables various components to be easily usable independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights."
],
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_48",
"@cite_10",
"@cite_9",
"@cite_19",
"@cite_40",
"@cite_47",
"@cite_13"
],
"mid": [
"",
"2050155800",
"2465948386",
"",
"",
"",
"",
"2949907962",
"2518876086",
"2615547864"
]
} | Teaching UAVs to Race Using UE4Sim (www.airsim.org) | Unmanned aerial vehicles (UAVs) like drones and multicopters are attracting more attention in the graphics community. This development is stimulated by the merging of researchers from robotics, graphics, and computer vision into a common scientific community. Recent UAV-related contributions in computer graphics cover a wide spectrum, from computational multicopter design, optimization, and fabrication [6] to state-of-the-art video capture using quadrotor-based camera systems [15] and the generation of dynamically feasible trajectories [38]. While UAV design and point-to-point stabilized flight navigation are becoming a solved problem (as is evident from recent advances in UAV technology from industry leaders such as DJI, Amazon, and Intel), autonomous navigation of UAVs in more complex and real-world scenarios, such as unknown congested environments, GPS-denied areas, narrow spaces, or around obstacles, is still far from being solved. In fact, only human pilots can reliably maneuver in these environments. This is a complex problem, since it requires both sensing real-world conditions and understanding appropriate response policies, so as to avoid perceived obstacles through optimal trajectory adjustment. There is perhaps no area where human pilots are more indispensable for controlling UAVs than in the emerging sport of UAV racing, where all these complex sense-and-understand tasks are conducted at breakneck speeds of over 100 km/h. Learning to control racing UAVs is a challenging task even for humans. It takes hours of practice and quite often hundreds of crashes. A more affordable approach to developing professional flight skills is to first train for many hours on a flight simulator before going to the field. Since most of the fine motor skills of flight control are developed in simulators, the pilot is able to quickly transition to real-world flights.
In this contribution, we capitalize on this insight into how human pilots learn to sense and react with appropriate controls to their environment, and train a deep network that can fly racing UAVs through challenging racing courses, many of which test the capabilities of even professional pilots. Inspired by recent work that trains artificial intelligence (AI) systems through the use of computer games [5], we create a photo-realistic UAV racing game with accurate physics using the Unreal Engine 4 (UE4) and integrate it with UE4Sim [27]. As this is the core learning environment, we develop a photo-realistic and customizable racing area in the form of a stadium, based on a three-dimensional (3D) scanned real-world location, to minimize the discrepancy incurred when transitioning from the simulated to a real-world scenario. Inspired by recent work on self-driving cars [3], our automated racing UAV approach goes beyond simple pattern detection by learning the full control system required to fly a UAV through a racing course (arguably much more complicated than driving a car). As such, the proposed network extends the complexity of previous work to the control of a six-degrees-of-freedom (6-DoF) UAV flying system, enabling the UAV to traverse tight spaces and make sharp turns at very high speeds (a task that cannot be performed by a ground vehicle). Our imitation-learning-based approach simultaneously addresses both problems of perception and policy selection as the UAV navigates through the course, after it is trained from human pilot input on how to control itself (exploitation) and how to correct itself in case of drift (exploration). Our developed simulator is multi-purpose, as it enables the evaluation of a trained network in real-time on racing courses it has not encountered before.
Contributions. Our specific contributions are as follows.
(1) We are the first to introduce a photo-realistic simulator that is based on a real-world 3D environment, can be easily customized to build increasingly challenging racing courses, enables realistic UAV physical behavior, and is integrated with a real-world UAV controller (powered by a human pilot or a synthetic one). Logging video data from the UAV point-of-view along with the pilot controls is seamless and can be used to effortlessly generate large-scale training data for AI systems targeting UAV flying in particular and autonomous vehicles in general (e.g. self-driving cars).
(2) To facilitate the training, parameter tuning, and evaluation of deep networks on this type of simulated data, we provide a full integration between the simulator and an end-to-end deep learning pipeline (based on TensorFlow) to be made publicly available to the community. Similar to other deep networks trained for game play, our integration will allow the community to fully explore many scenarios and tasks that go far beyond UAV racing in a rich and diverse photo-realistic gaming environment (e.g. obstacle avoidance and path planning).
(3) To the best of our knowledge, this paper is the first to fully demonstrate the capability of deep networks to learn the complex control of UAVs at racing speeds through difficult flight scenarios. Experiments show that our trained network can reach near-expert performance, while outperforming inexperienced pilots, who can use our system in a learning game-play mode to become better pilots.
Overview
The fundamental modules of our proposed system are summarized in Figure 2, which represents the end-to-end dataset generation, learning, and evaluation process. In what follows, we provide details for each of these modules, namely how datasets are automatically generated within the simulator, how our proposed DNN is designed and trained, and how the learned DNN is evaluated. Note that this generic architecture can also be applied to any other type of vision-based navigation task that our simulator makes possible.
Simulation-Centric Dataset Generation
Our simulation environment allows for the automatic generation of customizable datasets, which comprise rich expert data to robustly train a DNN through imitation learning.
UAV Flight Simulation and Real-World Creation. The core of the system is our UE4-based simulator. It is built on top of the open-source UE4 project for computer vision called UAVSim [28]. Several changes were made to adapt the simulator for training our proposed racing DNN. First, we replaced the UAV with the 3D model and specifications of a racing quadcopter (see Figure 3). We retuned the PID controller of the UAV to be more responsive and to function in a racing mode, where altitude control and stabilization are still enabled but with much higher rates and steeper pitch and roll angles. In fact, this is now a popular racing mode available on consumer UAVs, such as the DJI Mavic. The simulator frame rate is locked at 60 fps, and at every frame a log entry is recorded with the UAV position, orientation, velocity, and stick inputs from the pilot. To accommodate realistic input, we integrated the same UAV transmitter that would be used in real-world racing scenarios. We refer to the supplementary material for an example pilot recording. Following paradigms set by UAV racing norms, each racing course/track in our simulator comprises a sequence of gates connected by uniformly spaced cones. The track has a timing system that records the time between gates, the lap time, and the completion time of the race. The gates have their own logic to detect whether the UAV has passed through the gate in the correct direction. This allows us to trigger both the start and the end of the race, as well as determine the number of gates traversed by the UAV. These metrics (time and percentage of gates passed) constitute the overall per-track performance of a pilot, whether human or DNN.
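As an illustration of what such a log contains, one entry per rendered frame could look as follows; the field names and the CSV serialization are stand-ins for the simulator's actual log schema.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class FlightLogEntry:
    """One record per rendered frame (60 per second)."""
    t: float           # simulation time in seconds
    x: float           # UAV position
    y: float
    z: float
    roll: float        # UAV orientation
    pitch: float
    yaw: float
    vx: float          # UAV velocity
    vy: float
    vz: float
    s_throttle: float  # pilot stick inputs
    s_pitch: float
    s_roll: float
    s_yaw: float

def write_log(path, entries):
    """Persist a pilot run so it can be replayed and augmented later."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(FlightLogEntry)])
        writer.writeheader()
        writer.writerows(asdict(e) for e in entries)
```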
Many professional pilots compete in time trials of well-known tracks such as those posted by the MultiGP Drone Racing League. Following this paradigm, our simulator race course is modeled after a football stadium, where local professional pilots regularly set up MultiGP tracks. Using a combination of LiDAR scanning and aerial photogrammetry, we captured the stadium with an accuracy of 0.5 cm; see Figure 5. A team of architects used the dense point cloud and textured mesh to create an accurate solid model with physically based rendering (PBR) textures in 3ds Max for export to UE4. This resulted in a geometrically accurate and photo-realistic race course that remains low in polygon count, so as to run in UE4 at 60 fps; all training and evaluation experiments are conducted in this environment. We refer to Figure 4 for a side-by-side comparison of the real and virtual stadiums. Moreover, we want the simulated race course to be as dynamic and photo-realistic as possible, since the eventual goal is to transition the trained DNN from the simulated environment to the real world, starting with a venue similar to the one learned within the simulator. The concept of generating synthetic clones of real-world data for deep learning purposes has been adopted in previous work [8].
A key requirement for a relatively straightforward simulated-to-real-world transition is the DNN's ability to learn to automatically detect the gates and cones in the track within a complexly textured and dynamic environment. To this end, we enrich the simulated environment and race track with customizable textures (e.g. grass, snow, and dirt), gates (different shapes and appearance), and lighting.
Automatic Track Generation. We developed a track editor, in which a user draws a 2D sketch of the overhead view of the track; the 3D track is automatically generated accordingly and integrated into the timing system. With this editor, we created eleven tracks: seven for training and four for testing and evaluation. Each track is defined by gate positions and track lanes delineated by uniformly spaced racing cones distributed along the splines connecting adjacent gates. To avoid user bias in designing the race tracks, we use images collected from the internet and trace their contours in the editor to create uniquely stylized tracks. Following track-design trends popular in the UAV racing community, both training and testing tracks contain a large variety of turn types and straight-segment lengths. From a learning point of view, this track diversity exposes the DNN to a large number of track variations, as well as their corresponding navigation controls. Obviously, the testing/evaluation tracks are never seen in training, neither by the human pilot nor by the DNN.
Figure 4: Left: aerial image captured from a UAV hovering above the stadium racing track. Right: rendering of the reconstructed stadium generated at a similar altitude and viewing angle within the simulator.
Acquiring Large-Scale Ground-Truth Pilot Data. We record human pilot input from a Taranis flight transmitter integrated into the simulator through a joystick. This input is solicited from three pilots with different levels of skill: novice (has never flown before), intermediate (a moderately experienced pilot), and expert (a professional racing pilot). The pilots are given the opportunity to fly the seven training tracks as many times as needed until they successfully complete the tracks at their best time while passing through all gates. For the evaluation tracks, the pilots are allowed to fly the course only as many times as needed to complete the entire course without crashing. We automatically score pilot performance based on lap time and percentage of gates traversed.
The simulation environment allows us to log the images rendered from the UAV camera point-of-view and the UAV flight controls from the transmitter. As mentioned earlier, and to enable exploration, robust imitation learning requires the augmentation of these ground-truth logs with synthetic ones generated at a user-defined set of UAV offset positions and orientations, accompanied by the corresponding controls needed to correct for these offsets. Also, since the logs can be replayed at a later time in the simulator, we can augment the dataset further by changing environmental conditions, including lighting, cone spacing or appearance, and other environmental dynamics (e.g. clouds). Therefore, each pilot flight leads to a large number of image-control pairs (both original and augmented) that will be used to train the UAV to robustly recover from possible drift along each training track, as well as on unseen evaluation tracks. Details of how our proposed DNN architecture is designed and trained are provided in Section 5. In general, more augmented data should improve UAV flight performance, assuming that the control mapping and original flight data are noise-free. However, in many scenarios this is not the case, so we find that there is a limit after which augmentation no longer helps (and can even degrade) explorative learning. Empirical results validating this observation are detailed in Section 6.
Note that assigning corrective controls to the augmented data is quite complex in general, since they depend on many factors, including the current UAV velocity, its relative position on the track, its weight, and its current attitude. While it is possible to get this data in the simulation, it is very difficult to obtain in the real world in real-time. Therefore, we employ a fairly simple but effective model to determine these augmented controls that also scales to real-world settings. We add or subtract a corrective value to the pilot roll and yaw stick inputs for each position or orientation offset that is applied. For rotational offsets, we do not only apply a yaw correction but also couple it to roll, because the UAV is in motion while rotating, which would otherwise cause it to wash out due to its inertia.
DNN Interface for Real-Time Evaluation. To evaluate the performance of a trained DNN in real-time at 60 fps, we establish a TCP socket connection between the UE4 simulator and the Python wrapper (TensorFlow) executing the DNN. In doing so, the simulator continuously sends rendered UAV camera images across TCP to the DNN, which in turn processes each image individually to predict the next UAV stick inputs (flight controls) that are fed back to the UAV in the simulator over the same connection. Another advantage of this TCP connection is that the DNN prediction can be run on a system separate from the one running the simulator. We expect that this versatile and multi-purpose interface between the simulator and the DNN framework will enable opportunities for the research community to further develop DNN solutions not only to the task of automated UAV navigation (using imitation learning) but also to the more general tasks of vehicle maneuvering and obstacle avoidance (possibly using other forms of learning, including RL).
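Because the DNN side only ever sees a socket, a stand-in server can replay logged frames to test the prediction loop offline, without launching UE4. The framing below (raw RGB frames one way, four little-endian floats back) mirrors the client sketch given earlier and is likewise an assumption about the wire format.

```python
import socket
import struct

FRAME_BYTES = 320 * 180 * 3   # one raw RGB frame at the DNN input resolution

def replay_server(frames, host="127.0.0.1", port=9000):
    """Stand-in for the simulator: stream logged frames to the DNN wrapper and
    read back the four predicted stick values (offline testing only)."""
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for frame in frames:         # each frame: FRAME_BYTES of raw RGB
                conn.sendall(frame)
                buf = b""
                while len(buf) < 16:     # four little-endian float32 values
                    chunk = conn.recv(16 - len(buf))
                    if not chunk:
                        return
                    buf += chunk
                print("predicted sticks:", struct.unpack("<4f", buf))
```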
Learning
In this section, we provide a detailed description of the learning strategy used to train our DNN, as well as of its network architecture and design. We also explore some of the inner workings of one of these trained DNNs to shed light on how the network solves the problem of automated UAV racing.
Dataset Preparation and Augmentation
As is the case for DNN-based solutions to other tasks, careful construction of the training set is a key requirement for robust and effective DNN training. To this end, and as mentioned earlier, we dedicate seven racing tracks (with their corresponding image-control pairs logged from human pilot runs in our simulator) for training and four tracks for testing/evaluation. We design the tracks such that they are similar to what racing professionals are accustomed to and such that they offer enough diversity and exploration capability for proper network generalization on the unseen tracks. Figure 6 illustrates an overhead view of all these tracks.
As mentioned in Section 4, we log the pilot flight inputs and all other necessary parameters so that we can accurately replay the flights. These log files are then augmented using the specified offsets and replayed within the simulator while saving all rendered images, thus providing exploratory insights to the racing DNN. For completeness, we summarize the details of the data generated for each training/testing track in Table 1. It is clear that the augmentation increases the size of the original dataset approximately seven-fold (consistent with one original view plus two roll-offset and four yaw-offset variants per frame). In Section 6, we show the effect of changing the amount of augmentation on the UAV's ability to generalize well. Moreover, for this dataset, we choose to use the intermediate pilot only, so as to strike a trade-off between style of flight and overall size of the dataset. In Section 6.3, we show the effects of training with different flying styles.
Network Architecture and Implementation Details
To train a DNN to predict stick controls to the UAV from images, we choose a regression network architecture similar in spirit to the one used by Bojarski et al. [3]; however, we make changes to accommodate the complexity of the task at hand and to improve robustness in training. Our DNN architecture is shown in Figure 7. The network consists of eight layers: five convolutional and three fully connected. Since we implicitly want to localize the track and gates, we use striding in the convolutional layers instead of (max) pooling, which would add some degree of translation invariance. The DNN is given a single RGB image with a 320×180 pixel resolution as input and is trained to regress the four control/stick inputs to the UAV using a standard L2 loss and a dropout ratio of 0.5. We find that the relatively high input resolution (i.e. higher network capacity), as compared to related methods [3,41], is useful for learning this more complicated maneuvering task and for enhancing the network's ability to look further ahead. This affords the network the robustness needed for long-term trajectory stability. We arrived at this compact network architecture through extensive validation experiments; it strikes a reasonable tradeoff between computational complexity and predictive performance. This careful design makes the proposed DNN architecture feasible for real-time applications on embedded hardware (e.g. the NVIDIA TX1), unlike previous architectures [3] at the same input size. In Table 2, we report the evaluation time on the NVIDIA Titan X along with its technical details, and how it compares to an NVIDIA TX1. Based on [30], we expect our network to still run at real-time speed, with over 60 frames per second, on this embedded hardware.
For training, we use a standard stochastic gradient descent (SGD) style optimization strategy (namely Adam) within the TensorFlow platform. As such, one instance of our DNN can be trained to convergence on our dataset in less than two hours on a single GPU. This relatively fast training time enables finer hyper-parameter tuning.
Table 2: Comparison of the NVIDIA Titan X and the NVIDIA TX1. The performance of the TX1 is approximated according to [30].
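A minimal sketch of the corresponding update step is shown below; the learning rate is a hypothetical choice, since the text specifies only the Adam optimizer and the L2 objective.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)  # hypothetical rate

@tf.function
def train_step(model, images, sticks):
    """One Adam update minimizing the L2 loss between predicted and
    recorded stick inputs."""
    with tf.GradientTape() as tape:
        pred = model(images, training=True)              # dropout active
        loss = tf.reduce_mean(tf.square(pred - sticks))  # L2 / MSE loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```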
In contrast to other work where the frame rate is sampled down to 10 fps or lower [3,4,41], our racing environment is highly dynamic (with tight turns, high speed, and low inertia of the UAV), so we use a frame rate of 60 fps. This allows the UAV to be very responsive and move at high speeds, while maintaining a level of smoothness in controls. An alternative approach for temporally smooth controls is to include historic data in the training process (e.g. adding the previous controls as input to the DNN). This can make the network more complex, harder to train, and less responsive in the highly dynamic racing environment, where many time-critical decisions have to be made within a couple of frames (about 30 ms). Therefore, we find the high learning frame rate of 60 fps a good tradeoff between smooth controls and responsiveness.
Reinforcement vs. Imitation Learning. Of course, our simulator can also lend itself to training networks using reinforcement learning. This type of learning does not specifically require supervised pilot information, as it searches for an optimal policy that leads to the highest eventual reward (e.g. highest percentage of gates traversed or lowest lap time). Recent methods have made use of reinforcement learning to learn simpler tasks without supervision [5]; however, they require weeks of training and a much faster simulator (1,000 fps is possible in simple, non-photo-realistic games). For UAV racing, the required task is much more complicated, and since the intent is to transfer the learned network into the real world, a (slower) photo-realistic simulator is mandatory. Because of these two constraints, we decided to train our DNN using imitation learning instead of reinforcement learning.
Network Visualization
After training our DNN to convergence, we visualize how parts of the network behave in order to gain additional insights. Figure 8 shows some feature maps in different layers of the trained DNN for the same input image. Note how the filters have automatically learned to extract all necessary information in the scene (i.e. gates and cones), while higher-level layers do not respond to other parts of the environment. Although the feature map resolution becomes very low in the higher DNN layers, the feature map in the fifth convolutional layer is interesting, as it marks the top, left, and right parts of a gate with just a single activation each. This clearly demonstrates that our DNN is learning semantically intuitive features for the task of UAV racing.
Figure 8: Visualization of feature maps at different convolutional layers in our trained network. Notice how the network activates at locations of semantic meaning for the task of UAV racing, namely the gates and cones.
Evaluation
In order to evaluate the performance of our DNN, we create four testing tracks based on well-known race tracks found in TORCS and Gran Turismo (see Figure 6 for an overhead view of these tracks). Since the tracks must fit within the football stadium environment, they are scaled down, leading to much sharper turns and shorter straightaways, with the UAV reaching top speeds of over 100 km/h. The evaluation tracks are therefore significantly more difficult than originally intended in their source racing environments. We rank the four tracks in terms of difficulty, ranging from easy (track 1), medium (track 2), and hard (track 3), to very hard (track 4). For all the following evaluations, both the trained networks and the human pilots are tasked to fly two laps in the testing tracks and are scored based on the total number of gates they fly through and the overall lap time.
Effects of Exploration
We find exploration to be the predominant factor influencing network performance. As mentioned earlier, we augment the pilot flight data with offsets and corresponding corrective controls. We conduct a grid search to find a suitable degree of augmentation and to analyze the effect it has on overall UAV racing performance. To do this, we define two sets of offset parameters: one that acts as a horizontal offset (roll-offset) and one that acts as a rotational offset (yaw-offset). Figure 9 shows how the racing accuracy (percentage of gates traversed) varies with different sets of these augmentation offsets across the four testing tracks. It is clear that increasing the number of rendered images with yaw-offset has the greatest impact on performance. While it is possible for the DNN to complete tracks without being trained on roll-offsets, this is not the case for yaw-offsets. However, the large gain from adding rotated camera views saturates quickly, and at a certain point the network does not benefit from more extensive augmentation. Therefore, we found four yaw-offsets to be sufficient. Including camera views with horizontal shifts is also beneficial, since the network is better equipped to recover once it is about to leave the track on straights. We found two roll-offsets to be sufficient to ensure this. Therefore, in the rest of our experiments, we use the following augmentation setup in training: horizontal roll-offset set {−50°, 50°} and rotational yaw-offset set {−30°, −15°, 15°, 30°}.
Figure 9: Effect of data augmentation in training on overall UAV racing performance. By augmenting the original flight logs with data captured at more offsets (roll and yaw) from the original trajectory, along with their corresponding corrective controls, our UAV DNN can learn to traverse almost all the gates of the testing tracks, since it has learned to correct for exploratory maneuvers. After a sufficient amount of augmentation, no additional benefit is realized in improved racing performance.
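To make the augmentation procedure concrete, here is a minimal sketch of how the offset camera views above could be paired with corrective control labels. The offset sets are the ones quoted in the text; the proportional gains, the control dictionary keys, and the function names are hypothetical illustrations, not the paper's implementation.

```python
# Sketch: pairing rendered offset views with corrective control labels.
# The offset sets come from the text; K_YAW and K_ROLL are assumed
# proportional gains that map an offset back to a stick correction.

def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

ROLL_OFFSETS = [-50.0, 50.0]                # horizontal (roll) offsets
YAW_OFFSETS = [-30.0, -15.0, 15.0, 30.0]    # rotational (yaw) offsets, degrees
K_YAW, K_ROLL = 1.0 / 30.0, 1.0 / 50.0      # hypothetical gains -> sticks in [-1, 1]

def augmented_labels(controls):
    """Yield (axis, offset, corrected_controls) for one logged frame.

    `controls` holds the pilot's stick values in [-1, 1]. A view rendered
    with a positive yaw offset is labeled with an opposing yaw command, so
    the network learns to steer back onto the original trajectory.
    """
    for off in YAW_OFFSETS:
        c = dict(controls)
        c["yaw"] = clamp(c["yaw"] - K_YAW * off)
        yield "yaw", off, c
    for off in ROLL_OFFSETS:
        c = dict(controls)
        c["roll"] = clamp(c["roll"] - K_ROLL * off)
        yield "roll", off, c

for axis, off, c in augmented_labels({"yaw": 0.0, "roll": 0.0, "pitch": 0.5, "throttle": 0.8}):
    print(axis, off, c)
```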
Comparison to State-of-the-Art
We compare our racing DNN to the two most related and recent network architectures: the first denoted as Nvidia (for self-driving cars [3]) and the second as MAV (for forest-path navigating UAVs [41]). While the domains of these works are similar, it should be noted that flying a high-speed racing UAV is a particularly challenging task, especially since the effect of inertia is much more significant and there are more degrees of freedom. For a fair comparison, we scale our dataset to the same input dimensionality and re-train each of the three networks. We then evaluate each of the trained models on the task of UAV racing on the testing tracks. It is worth pointing out that both the Nvidia and MAV networks (in their original implementations) use data augmentation as well, so we maintain the same strategy when training. For the Nvidia network, the exact offset choices for training are not publicly known, so we use a rotational offset set of {−30°, 30°} to augment its data. As for the MAV network, we use the same augmentation parameters proposed in the paper, i.e. a rotational offset of {−30°, 30°}. We needed to modify the MAV network to allow for a regression output instead of its original classification (left, center, and right controls). This is necessary since our task is much more complex, and discrete controls would lead to inadequate UAV racing performance.
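As an illustration of the head swap just described, the following sketch replaces a 3-way classification output with a continuous regression output. The feature dimensionality, layer sizes, and number of control channels are placeholder assumptions, not the published MAV architecture.

```python
import torch
import torch.nn as nn

def regression_head(in_features, n_controls=4):
    """Replace a left/center/right classifier with continuous stick outputs."""
    return nn.Sequential(
        nn.Linear(in_features, 64),
        nn.ReLU(inplace=True),
        nn.Linear(64, n_controls),
        nn.Tanh(),  # stick commands normalized to [-1, 1]
    )

# e.g. model.classifier = regression_head(in_features=256), trained with an
# L2 loss against the recorded pilot controls instead of cross-entropy.
features = torch.randn(8, 256)
print(regression_head(256)(features).shape)  # torch.Size([8, 4])
```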
It should be noted that in the original implementation of the Nvidia network [3] (based on real-world driving data), it was realized that additional augmentation was needed for reasonable automatic driving performance after the real-world data was acquired. To avoid recapturing the data, synthetic viewpoints (generated by interpolation) were used to augment the training dataset, which introduced undesirable distortions. By using our simulator, we are able to extract any number of camera views without distortions. Therefore, we also wanted to gauge the effect of additional augmentation on both the Nvidia and MAV networks, when they are trained using our default augmentation setting: horizontal roll-offset of {−50°, 50°} and rotational yaw-offset of {−30°, −15°, 15°, 30°}. We denote these trained networks as Nvidia++ and MAV++. Table 3 summarizes the results of these different network variants on the testing tracks. The results indicate that the performance of the original Nvidia and MAV networks suffers from insufficient data augmentation; they clearly do not make use of enough exploration. These networks improve in performance when our proposed data augmentation scheme (enabled by our simulator) is used. Regardless, our proposed DNN outperforms the Nvidia/Nvidia++ and MAV/MAV++ networks, where this improvement is less significant when more data augmentation or more exploratory behavior is learned. Unlike the other networks, our DNN performs consistently well on all the unseen tracks, owing to its sufficient network capacity for learning this complex task.
Pilot Diversity & Human vs. DNN
In this section, we investigate how the flying style of a pilot affects the network that is being learned. To do this, we compare the performance of the different networks on the testing set, when each of them is trained with flight data captured from pilots of varying flight expertise (intermediate and expert). We also trained models using the Nvidia [3] and MAV [41] architectures, with and without our default data augmentation settings. Table 3 summarizes the lap time and accuracy of these networks. Clearly, the pilot's flight style can significantly affect the performance of the learned network. Figure 10 shows a high correlation, in both performance and flying style, between the pilot used in training and the corresponding learned network. The trained networks clearly resemble the flying style, and also the proficiency, of their human trainers. Thus, our network that was trained on flights of the intermediate pilot achieves high accuracy but is quite slow, just as the expert network sometimes misses gates but achieves very good lap and overall times. Interestingly, although the networks perform similarly to their pilots, they fly more consistently, and therefore tend to outperform the human pilot with regards to overall time over multiple laps. This is especially true for our intermediate network. Both the intermediate and the expert networks clearly outperform the novice human pilot, who takes several hours of practice and several attempts to reach performance similar to the networks'. Even our expert pilots were not always able to complete the test tracks on the first attempt.
Table 3: Accuracy score of different pilots and networks on the four test tracks. The accuracy score represents the percentage of completed racing gates. The networks ending with ++ are variants of the original network with our augmentation strategy.
While the percentage of passed gates and the best lap time give a good indication of network performance, they do not convey any information about the style of the pilot. To this end, we visualize the performance of human pilots and the trained networks by plotting their trajectories onto the track (from a 2D overhead viewpoint). Moreover, we encode their speeds as a heatmap, where blue corresponds to the minimum speed and red to the maximum speed. Figure 11 shows a collection of heatmaps revealing several interesting insights. Firstly, the networks clearly imitate the style of the pilot they were trained on. This is especially true for the intermediate proficiency level, while the expert network sometimes overshoots, which causes it to lose speed and therefore to not match the speed pattern as well as the intermediate one. We also note that the performance gap between network and human increases as the expertise of the pilot increases. Note that the flight path of the expert network is less smooth and centered than those of its human counterpart and the intermediate network, respectively. This is partly due to the fact that the networks were only trained on two laps of flying across seven training tracks. An expert pilot has much more training than that and is therefore able to generalize much better to unseen environments. However, the experience advantage of the intermediate pilot over the network is much smaller, and therefore the performance gap is smaller. We also show the performance of our novice pilot on these tracks. While the intermediate pilots accelerate on straights, the novice is clearly not able to control speed that well, creating a very narrow velocity range. Although flying quite slowly, he also gets off the track several times. This underlines how challenging UAV racing is, especially for inexperienced pilots.
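The speed heatmaps described above can be reproduced with a few lines of plotting code. The synthetic trajectory below is a stand-in for a logged flight, and the colormap choice is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_speed_heatmap(xy, speed, ax=None):
    """Overlay a 2D flight path colored by speed (blue = slow, red = fast)."""
    ax = ax or plt.gca()
    sc = ax.scatter(xy[:, 0], xy[:, 1], c=speed, cmap="jet", s=4)
    plt.colorbar(sc, ax=ax, label="speed (km/h)")
    ax.set_aspect("equal")
    return ax

# Synthetic stand-in for a logged trajectory: a figure-eight "track".
t = np.linspace(0.0, 2.0 * np.pi, 500)
xy = np.column_stack([np.cos(t), np.sin(2.0 * t)])
speed = 40.0 + 60.0 * np.abs(np.cos(t))  # fake speed profile
plot_speed_heatmap(xy, speed)
plt.show()
```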
Conclusions and Future Work
In this paper, we proposed a robust imitation learning based framework to teach an unmanned aerial vehicle (UAV) to fly through challenging racing tracks at very high speeds, an unprecedented and difficult task. To do this, we trained a deep neural network (DNN) to predict the necessary UAV controls from raw image data, grounded in a photo-realistic simulator that also allows for realistic UAV physics. Training is made possible by logging data (rendered images from the UAV and stick controls) from human pilot flights, while they maneuver the UAV through racing tracks. This data is augmented with sufficient offsets so as to teach the network to recover from flight mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.
In the future, we aim to transfer the network we trained in our simulator to the real world, to compete against human pilots in real-world racing scenarios. Although we accurately modeled the simulated racing environment, the differences in appearance between the simulated and real world will need to be reconciled. Therefore, we will investigate deep transfer learning techniques to enable a smooth transition between the simulator and the real world. Since our developed simulator and its seamless interface to deep learning platforms are generic in nature, we expect that this combination will open up unique opportunities for the community to develop better automated UAV flying methods, to expand its reach to other fields of autonomous navigation such as self-driving cars, and to benefit other interesting AI tasks (e.g. obstacle avoidance).
Figure 11: Visualization of human and automated UAV flights superimposed onto a 2D overhead view of different tracks. The color coding illustrates the instantaneous speed of the UAV. Notice how the UAV learns to speed up on straights and slow down at turns, and how the flying style corresponds, especially for the intermediate pilots.
1708.05552 | 2950142842 | Convolutional neural networks have achieved remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with an epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent, which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early-stop strategy. The block-wise generation brings unique advantages: (1) it performs competitively in comparison to the hand-crafted state-of-the-art networks on image classification; additionally, the best network generated by BlockQNN achieves a 3.54% top-1 error rate on CIFAR-10, which beats all existing auto-generated networks; (2) meanwhile, it offers a tremendous reduction of the search space in designing networks, requiring only 3 days with 32 GPUs; and (3) moreover, it has strong generalizability, in that a network built on CIFAR also performs well on the larger-scale ImageNet dataset. | Early works, from the @math s, have made efforts to automate neural network design, often searching for good architectures with genetic or other evolutionary algorithms @cite_6 @cite_21 @cite_24 @cite_12 @cite_8 @cite_20 @cite_19 . Nevertheless, these works, to the best of our knowledge, cannot perform competitively compared with hand-crafted networks. Recent works, Neural Architecture Search (NAS) @cite_31 and MetaQNN @cite_4 , adopted reinforcement learning to automatically search for a good network architecture. Although they can yield good performance on small datasets such as CIFAR- @math and CIFAR- @math , the direct use of MetaQNN or NAS for architecture design on big datasets like ImageNet @cite_13 is computationally expensive, since it searches in a huge space. Besides, the networks generated by these methods are task-specific or dataset-specific; that is, they cannot be well transferred to other tasks or to datasets with different input data sizes. For example, the network designed for CIFAR- @math cannot be generalized to ImageNet. | {
"abstract": [
"At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"Despite the success of CNNs, selecting the optimal architecture for a given task remains an open problem. Instead of aiming to select a single optimal architecture, we propose a \"fabric\" that embeds an exponentially large number of architectures. The fabric consists of a 3D trellis that connects response maps at different layers, scales, and channels with a sparse homogeneous local connectivity pattern. The only hyper-parameters of a fabric are the number of channels and layers. While individual architectures can be recovered as paths, the fabric can in addition ensemble all embedded architectures together, sharing their weights where their paths overlap. Parameters can be learned using standard methods based on back-propagation, at a cost that scales linearly in the fabric size. We present benchmark results competitive with the state of the art for image classification on MNIST and CIFAR10, and for semantic segmentation on the Part Labels dataset.",
"",
"Various schemes for combining genetic algorithms and neural networks have been proposed and tested in recent years, but the literature is scattered among a variety of journals, proceedings and technical reports. Activity in this area is clearly increasing. The authors provide an overview of this body of literature drawing out common themes and providing, where possible, the emerging wisdom about what seems to work and what does not. >",
"Research in neuroevolution---that is, evolving artificial neural networks (ANNs) through evolutionary algorithms---is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.",
"",
"Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.",
"Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models."
],
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_31",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"2556833785",
"2418011751",
"2148872333",
"1653273292",
"2119814172",
"",
"2553303224",
"2266822037",
"2108598243",
"2606006859"
]
} | 0 |
||
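To ground the ε-greedy, Q-learning-based layer selection described in the BlockQNN record above, here is a minimal toy sketch. The action vocabulary, the Monte-Carlo-style reward backup, and all names are simplifying assumptions: the real agent encodes layer type, kernel size, and predecessor links, and uses a Bellman-style update with experience replay rather than the backup shown here.

```python
import random

ACTIONS = ["conv3x3", "conv5x5", "maxpool", "identity", "terminal"]  # toy vocabulary
Q = {}  # Q[(step, action)] -> estimated accuracy of blocks built through it

def choose_layer(step, epsilon):
    """epsilon-greedy: explore a random layer, else exploit the best known one."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((step, a), 0.0))

def update_q(trajectory, reward, alpha=0.1):
    """Back up the trained block's validation accuracy to every choice made."""
    for key in trajectory:  # key = (step, action)
        q = Q.get(key, 0.0)
        Q[key] = q + alpha * (reward - q)

# One simulated "design episode": pick 4 layers, pretend accuracy was 0.93.
traj = [(t, choose_layer(t, epsilon=0.9)) for t in range(4)]
update_q(traj, reward=0.93)
```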
1708.05552 | 2950142842 | Convolutional neural networks have achieved remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with an epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent, which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early-stop strategy. The block-wise generation brings unique advantages: (1) it performs competitively in comparison to the hand-crafted state-of-the-art networks on image classification; additionally, the best network generated by BlockQNN achieves a 3.54% top-1 error rate on CIFAR-10, which beats all existing auto-generated networks; (2) meanwhile, it offers a tremendous reduction of the search space in designing networks, requiring only 3 days with 32 GPUs; and (3) moreover, it has strong generalizability, in that a network built on CIFAR also performs well on the larger-scale ImageNet dataset. | Instead, our approach aims to design network block architectures via an efficient search method, with a distributed asynchronous Q-learning framework as well as an early-stop strategy. The block design conception follows modern convolutional neural networks such as Inception @cite_5 @cite_34 @cite_15 and ResNet @cite_32 @cite_16 . The Inception-based networks construct the inception blocks via a hand-crafted multi-level feature extractor strategy by computing @math , @math , and @math convolutions, while ResNet uses residual blocks with shortcut connections to make it easier to represent the identity mapping, which allows very deep networks. The blocks automatically generated by our approach have similar structures: some blocks contain shortcut connections and Inception-like multi-branch combinations. We will discuss the details in . | {
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https: github.com KaimingHe resnet-1k-layers.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters."
],
"cite_N": [
"@cite_32",
"@cite_16",
"@cite_5",
"@cite_15",
"@cite_34"
],
"mid": [
"2949650786",
"2302255633",
"2950179405",
"2949605076",
"2949117887"
]
} | 0 |
||
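The shortcut and multi-branch patterns discussed in the record above can be made concrete with a small PyTorch sketch. The channel counts and exact layer ordering are illustrative, not the blocks BlockQNN actually discovers.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Identity-shortcut pattern: out = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class MultiBranchBlock(nn.Module):
    """Inception-like pattern: parallel 1x1/3x3/5x5 branches, concatenated."""
    def __init__(self, channels, branch=16):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, branch, k, padding=k // 2) for k in (1, 3, 5)]
        )

    def forward(self, x):
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

x = torch.randn(1, 32, 16, 16)
print(ResidualBlock(32)(x).shape)     # torch.Size([1, 32, 16, 16])
print(MultiBranchBlock(32)(x).shape)  # torch.Size([1, 48, 16, 16])
```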
1708.05552 | 2950142842 | Convolutional neural networks have achieved remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with an epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent, which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early-stop strategy. The block-wise generation brings unique advantages: (1) it performs competitively in comparison to the hand-crafted state-of-the-art networks on image classification; additionally, the best network generated by BlockQNN achieves a 3.54% top-1 error rate on CIFAR-10, which beats all existing auto-generated networks; (2) meanwhile, it offers a tremendous reduction of the search space in designing networks, requiring only 3 days with 32 GPUs; and (3) moreover, it has strong generalizability, in that a network built on CIFAR also performs well on the larger-scale ImageNet dataset. | Another line of related work includes hyper-parameter optimization @cite_1 , meta-learning @cite_22 , and learning-to-learn methods @cite_30 @cite_23 . However, the goal of these works is to use meta-data to improve the performance of existing algorithms, such as finding the optimal learning rate for optimization methods or the optimal number of hidden layers to construct a network. In this paper, we focus on learning the entire topological architecture of network blocks to improve performance. | {
"abstract": [
"This paper introduces the application of gradient descent methods to meta-learning. The concept of \"meta-learning\", i.e. of a system that improves or discovers a learning algorithm, has been of interest in machine learning for decades because of its appealing applications. Previous meta-learning approaches have been based on evolutionary methods and, therefore, have been restricted to small models with few free parameters. We make meta-learning in large systems feasible by using recurrent neural networks withth eir attendant learning routines as meta-learning systems. Our system derived complex well performing learning algorithms from scratch. In this paper we also show that our approachp erforms non-stationary time series prediction.",
"",
"Different researchers hold different views of what the term meta-learning exactly means. The first part of this paper provides our own perspective view in which the goal is to build self-adaptive learners (i.e. learning algorithms that improve their bias dynamically through experience by accumulating meta-knowledge). The second part provides a survey of meta-learning as reported by the machine-learning literature. We find that, despite different views and research lines, a question remains constant: how can we exploit knowledge about learning (i.e. meta-knowledge) to improve the performance of learning algorithms? Clearly the answer to this question is key to the advancement of the field and continues being the subject of intensive research.",
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art."
],
"cite_N": [
"@cite_30",
"@cite_1",
"@cite_22",
"@cite_23"
],
"mid": [
"2137825550",
"",
"2145680191",
"2963775850"
]
} | 0 |
||
1906.00410 | 2947469668 | Domain randomization (DR) is a successful technique for learning robust policies for robot systems, when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real world data in order to carefully select the DR distribution, or incorporate real world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; and 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite-capacity models. These two properties aim to ensure the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements on jump-start and asymptotic performance when transferring a learned policy to the target environment. | @cite_5 present an empirical study of generalization in Deep-RL, testing interpolation and extrapolation performance of state-of-the-art algorithms when varying simulation parameters in control tasks. The authors provide an experimental assessment of generalization under varying training and testing distributions. Our work extends these results by providing results for the case when the training distribution parameters are learned and change during policy training. | {
"abstract": [
"Deep reinforcement learning (RL) has achieved breakthrough results on many tasks, but has been shown to be sensitive to system changes at test time. As a result, building deep RL agents that generalize has become an active research area. Our aim is to catalyze and streamline community-wide progress on this problem by providing the first benchmark and a common experimental protocol for investigating generalization in RL. Our benchmark contains a diverse set of environments and our evaluation methodology covers both in-distribution and out-of-distribution generalization. To provide a set of baselines for future research, we conduct a systematic evaluation of deep RL algorithms, including those that specifically tackle the problem of generalization."
],
"cite_N": [
"@cite_5"
],
"mid": [
"2898436992"
]
} | Learning Domain Randomization Distributions for Transfer of Locomotion Policies | Deep Reinforcement Learning (Deep-RL) is a powerful technique for synthesizing locomotion controllers for robot systems. Inspired by successes in video games (Mnih et al., 2015) and board games (Silver et al., 2016), recent work has demonstrated the applicability of Deep-RL in robotics. Since the data requirements for Deep-RL make its direct application to real robot systems costly, or even infeasible, a large body of recent work has focused on training controllers in simulation and deploying them on a real robot system. This is particularly challenging, but it is crucial for realizing real-world deployment of these systems.
Robot simulators provide a solution to the data requirements of Deep-RL. Except for simple robot systems in controlled environments, however, real robot experience may not correspond to the situations that were used in simulation; an issue known as the reality gap (Jakobi et al., 1995). One way to address the reality gap is to perform system identification to tune the simulation parameters. This approach works if collecting data on the target system is not prohibitively expensive and the number of parameters of the simulation is small. The reality gap may still exist, however, due to a mis-specification of the simulation model.
Another method to shrink the reality gap is to train policies to maximize performance over a diverse set of simulation models, where the parameters of each model are sampled randomly, an approach known as domain randomization (DR). This aims to address the issue of model mis-specification by providing diverse simulated experience. Domain randomization has been demonstrated to effectively produce controllers that can be trained in simulation, with a high likelihood of successful outcomes on a real robot system after deployment (Andrychowicz et al., 2018) and after fine-tuning with real-world data (Chen et al., 2018).
While successful, an aspect that has not been addressed in depth is the selection of the domain randomization distribution. For vision-based components, DR should be tuned so that features learned in simulation do not depend strongly on the appearance of simulated environments. For the control components, the focus of this work, there is a dependency between optimal behaviour and the dynamics of the environment. In this case, the DR distribution should be selected carefully to ensure that the real robot experience is represented in the simulated experience sampled under DR. If real robot data is available, one could use gradient-free search (Chebotar et al., 2018) or Bayesian inference (Rajeswaran et al., 2016) to update the DR distribution after executing the learned policy on the target system. These methods are based on the assumption that there is a set of simulators from which real world experience can be synthesized.
In this work we propose to learn the parameters of the simulator distribution, such that the policy is trained over the most diverse set of simulator parameters in which it can plausibly succeed. By making the simulation distribution as wide as possible, we aim to encode the largest possible set of behaviours in a single policy with fixed capacity. As shown in our experiments, training on the widest possible distribution has two problems: our models usually have finite capacity, and picking a domain randomization that is too varied slows down convergence, as shown in Figure 9.
Instead, we let the optimization process focus on environments where the task is feasible. We propose an algorithm that simultaneously learns the domain randomization distribution while optimizing the policy to maximize performance over the learned distribution. To operate over a wide range of possible simulator parameters, we train context-aware policies which take as input the current state of the environment, alongside contextual information describing the sampled parameters of the simulator. This enables our policies to learn context-specific strategies that consider the current dynamics of the environment, rather than an average over all possible simulator parameters. When deployed on the target environment, we concurrently fine-tune the policy parameters while searching for the context that maximizes performance. We evaluate our method on a variety of control problems from the OpenAI Gym suite of benchmarks. We find that our method is able to improve on the performance of fixed domain randomization. Furthermore, we demonstrate our model's robustness to initial simulator distribution parameters, showing that our method repeatably converges to similar domain randomization distributions across different experiments. (Packer et al., 2018) present an empirical study of generalization in Deep-RL, testing interpolation and extrapolation performance of state-of-the-art algorithms when varying simulation parameters in control tasks. The authors provide an experimental assessment of generalization under varying training and testing distributions. Our work extends these results by providing results for the case when the training distribution parameters are learned and change during policy training. (Chebotar et al., 2018) propose training policies on a distribution of simulators, whose parameters are fit to real-world data. Their proposed algorithm switches back and forth between optimizing the policy under the DR distribution and updating the DR distribution by minimizing the discrepancy between simulated and real-world trajectories. In contrast, we aim to learn policies that maximize performance over a diverse distribution of environments where the task is feasible, as a way of minimizing the interactions with the real robot system. (Rajeswaran et al., 2016) propose a related approach for learning robust policies over a distribution of simulator models. The proposed approach, based on the ε-percentile conditional value at risk (CVaR) (Tamar et al., 2015) objective, improves the policy performance on a small proportion of environments where the policy performs the worst. The authors propose an algorithm that updates the distribution of simulation models to maximize the likelihood of real-world trajectories, via Bayesian inference. The combination of worst-case performance optimization and Bayesian updates ensures that the resulting policy is robust to errors in the estimation of the simulation model parameters. Our method can be combined with the CVaR objective to encourage diversity of the learned DR distribution.
Problem Statement
We consider parametric Markov Decision Processes (MDPs) (Sutton & Barto, 2018). An MDP M is defined by the tuple ⟨S, A, p, r, γ, ρ_0⟩, where S is the set of possible states and A is the set of actions, p : S × A × S → R_+ encodes the state transition dynamics, r : S × A → R_+ is the task-dependent reward function, γ is a discount factor, and ρ_0 : S → R_+ is the initial state distribution. Let s_t and a_t be the state and action taken at time t. At the beginning of each episode, s_0 ∼ ρ_0(·). Trajectories τ are obtained by iteratively sampling actions from the current policy, a_t ∼ π(a_t | s_t), and evaluating next states according to the transition dynamics, s_{t+1} ∼ p(s_{t+1} | s_t, a_t). Given an MDP M, the goal is then to learn a policy π to maximize the expected sum of rewards
J_M(π) = E_τ[R(τ) | π] = E_τ[Σ_{t=0}^{∞} γ^t r_t], where r_t = r(s_t, a_t).
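As a quick illustration of the return just defined, the following sketch computes the discounted sum of rewards for one sampled trajectory; averaging it over sampled trajectories gives a Monte-Carlo estimate of J_M(π).

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * r_t for one trajectory, by backward recursion."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```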
In our work, we aim to maximize performance over a distribution of MDPs, each described by a context vector z representing the variables that change over the distribution: changes in transition dynamics, rewards, initial state distribution, etc. Thus, our objective is to maximize E_{z∼p(z)}[J_{M_z}(π)], where p(z) is the domain randomization distribution. Similar to (Yu et al., 2018; Chen et al., 2018; Rakelly et al., 2019), we condition the policy on the context vector, π(a_t | s_t, z). In the experiments reported in this paper, we let z encode the parameters of the transition model in a physically based simulator; e.g. mass, friction or damping.
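A context-conditioned policy π(a_t | s_t, z) can be implemented by simply concatenating the context to the state at the network input. The sketch below is one such Gaussian policy; the layer sizes and dimensionalities are illustrative assumptions rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn

class ContextPolicy(nn.Module):
    """Gaussian policy pi(a | s, z): the simulator context z is an extra input."""
    def __init__(self, state_dim, context_dim, action_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim + context_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, s, z):
        h = self.trunk(torch.cat([s, z], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

pi = ContextPolicy(state_dim=11, context_dim=1, action_dim=3)  # Hopper-like sizes
dist = pi(torch.randn(1, 11), torch.randn(1, 1))
action = dist.sample()
```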
Proposed Method
In practice, making the context distribution p(z) as wide as possible may be detrimental to the objective of maximizing performance. For instance, if the distribution has infinite support and wide variance, there may be more environments sampled from the context distribution for which the desired task is impossible (e.g. reaching a target state). Thus, sampling trajectories from a wide context distribution results in high variance in the directions of improvement, slowing progress on policy learning. On the other hand, if we make the context distribution too narrow, policy learning can progress more rapidly but may not generalize to the whole set of possible contexts.
We introduce the LSDR (Learning the Sweet-spot Distribution Range) algorithm for concurrently learning a domain randomization distribution and a robust policy that maximizes performance over it. Instead of directly sampling from p(z), we use a surrogate distribution p_φ(z) with trainable parameters φ. Our goal is to find appropriate parameters φ to optimize π(·|s, z). LSDR proceeds by updating the policy with trajectories sampled from p_φ(z), and updating φ based on the performance of the policy on p(z). To avoid the collapse of the learned distribution, we propose using a regularizer that encourages the distribution to be diverse. The idea is to sample more data from environments where improvement of the policy is possible, without collapsing to environments that are trivial to solve. We summarize our training procedure in Algorithms 1 and 2, and the test-time procedure in Algorithm 3.
In our experiments, we use Proximal Policy Optimization (PPO) (Schulman et al., 2017) for the UpdatePolicy procedure in Algorithm 1.
Algorithm 1 Learning the policy and training distribution
Input: testing distribution p(z), initial parameters of the learned distribution φ, initial policy π, buffer size B, total iterations N
for i ∈ {1, ..., N} do
    z ∼ p_φ(z); s_0 ∼ ρ_0(s); B ← {}
    while |B| < B do
        a_t ∼ π(a_t | s_t, z)
        s_{t+1}, r_t ∼ p(s_{t+1}, r_t | s_t, a_t, z)
        append (s_t, z, a_t, r_t, s_{t+1}) to B
        if s_{t+1} is terminal then
            z ∼ p_φ(z); s_{t+1} ∼ ρ_0(s)
        end if
        s_t ← s_{t+1}
    end while
    φ ← UpdateDistribution(φ, p(z), π)
    π ← UpdatePolicy(π, B)
end for
Algorithm 2 UpdateDistribution
Input: learned distribution parameters φ, testing distribution p(z), policy π, total iterations M, total trajectory samples K
for i ∈ {1, ..., M} do
    sample z_{1:K} from p(z)
    obtain a Monte-Carlo estimate of L_DR(φ) by executing π on environments with z_{1:K}
    φ ← φ + λ ∇_φ (L_DR(φ) − α D_KL(p_φ(z) || p(z)))
end for
Algorithm 3 Fine-tuning the policy at test time
Input: learned distribution parameters φ, policy π, buffer size B, total iterations N
initialize a guess for the context vector, ẑ ∼ p_φ(z)
for i ∈ {0, ..., N} do
    collect B samples into B by executing policy π(a | s, ẑ)
    π ← UpdatePolicy(π, B)
    ẑ ← UpdatePolicy(ẑ, B)
end for
Learning the Sweet-spot Distribution Range
The goal of our method is to find a training distribution p_φ(z) that maximizes the expected reward of the policy under the test distribution, while gradually reducing the sampling frequency of environments that make the task unsolvable (we consider a task solvable if there exists a policy that brings the environment to a set of desired goal states). Such a situation is common in physics-based simulations, where a bad selection of simulation parameters may lead to environments where the task is impossible due to physical limits or unstable simulations.
We start by assuming that the test distribution p(z) has wide but bounded support, such that we get a distribution of solvable and unsolvable tasks. To update the training distribution, we use an objective of the following form
argmax_φ  L_DR(φ) − α D_KL(p_φ(z) || p(z))    (1)
where the first term is designed to encourage improvement on environments that are more likely to be solvable, while the second term is a regularizer that keeps the distribution from collapsing. In our experiments, we set
L_DR(φ) = E_{z∼p(z)}[J_{M_z}(π) log p_φ(z)].
Optimizing this objective encourages focusing on environments where the current policy performs the best. Other suitable objectives are the improvement over the previous policy, E_z[J_{M_z}(π_i) − J_{M_z}(π_{i−1})], or an estimate of the performance of the context-dependent optimal policy, E_z[Ĵ_{M_z}(π*)].
If we use the performance of the policy as a way of determining whether the task is solvable for a given context z, then a trivial solution would be to make p_φ(z) concentrate on a few easy environments. The second term in Eq. (1) helps to avoid this issue by penalizing distributions that deviate too much from p(z), which is assumed to be wide. When p(z) is uniform, this is equivalent to maximizing the entropy of p_φ(z).
To estimate the gradient of Eq. (1) with respect to φ, we use the log-derivative score function gradient estimator (Fu, 2006), resulting in the following Monte-Carlo update:
φ ← φ + λ [ (1/K) Σ_{i=1}^{K} J_{M_{z_i}}(π) ∇_φ log p_φ(z_i) − α ∇_φ D_KL(p_φ(z) || p(z)) ]    (2)
where z_i ∼ p(z). Updating φ with samples from the distribution we are learning has the problem that we never get information about the performance of the policy in low-probability contexts under p_φ(z). This is problematic since, if a context z_k were assigned a low probability early in training, we would require a large number of samples to update its probability, even if the policy performs well on z_k during later stages of training. To address this issue, we use samples from p(z) to evaluate the gradient of L_DR(φ). While changing the sampling distribution introduces bias, which could be corrected by using importance sampling, we find that both the second term in Eq. (1) and sampling from p(z) are crucial to avoid the collapse of the learned distribution (see Fig. 4). To ensure that the two terms in Eq. (1) have a similar scale, we standardize the evaluations of J_{M_{z_i}} with exponentially averaged batch statistics and set α to the fixed value H(p(z))^{-1}.
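The following is a minimal sketch of the update in Eq. (2) for a discrete p_φ(z), parameterizing the bin probabilities with logits φ so that p_φ = softmax(φ). The learning rate, batch shapes, and the assumption of a uniform p(z) are illustrative; the default α matches the H(p(z))^{-1} value mentioned above.

```python
import math
import torch

N_BINS = 100
phi = torch.zeros(N_BINS, requires_grad=True)          # logits of p_phi(z)
opt = torch.optim.SGD([phi], lr=1e-2)
log_p_test = torch.full((N_BINS,), -math.log(N_BINS))  # uniform p(z)

def update_distribution(bins, returns, alpha=1.0 / math.log(N_BINS)):
    """bins: LongTensor of context bins sampled from p(z);
    returns: standardized J_{M_z}(pi) evaluated at those contexts."""
    log_p = torch.log_softmax(phi, dim=0)
    score = (returns * log_p[bins]).mean()              # E_{z~p(z)}[J log p_phi(z)]
    kl = torch.sum(log_p.exp() * (log_p - log_p_test))  # D_KL(p_phi || p)
    loss = -(score - alpha * kl)
    opt.zero_grad()
    loss.backward()
    opt.step()

update_distribution(torch.randint(0, N_BINS, (10,)), torch.randn(10))
```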
Experiments
We evaluate the impact of learning the DR distribution on two standard benchmark locomotion tasks: Hopper and Half-Cheetah from the MuJoCo tasks (illustrated in Fig. 1) in the OpenAI Gym suite (Brockman et al., 2016). We use an explicit encoding of the context vector z, corresponding to the torso size, density, foot friction and joint damping of the environments. In this work, we focus on uni-dimensional domain randomization contexts and run experiments for each context variable independently (to enable distribution learning with multi-dimensional contexts, we are exploring the use of parameterizations, different from the discrete distribution, that do not suffer from the curse of dimensionality). We selected p(z) as a uniform distribution over ranges that include both solvable and unsolvable environments. We initialize p_φ(z) to be the same as p(z). In these experiments, both distributions are implemented as discrete distributions with 100 bins. When sampling from this distribution, we first select a bin according to the discrete probabilities, then select a continuous context value uniformly at random from the ranges of the corresponding bin.
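The two-stage sampling just described (discrete bin choice, then a uniform draw within the bin) can be written compactly; the range endpoints below are placeholders.

```python
import numpy as np

def sample_context(probs, lo, hi, rng=None):
    """Sample z from a binned distribution over [lo, hi): pick a bin with
    the learned discrete probabilities, then draw uniformly within it."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(lo, hi, len(probs) + 1)
    k = rng.choice(len(probs), p=probs)
    return rng.uniform(edges[k], edges[k + 1])

probs = np.full(100, 1.0 / 100)              # p_phi initialized uniform, like p(z)
z = sample_context(probs, lo=0.01, hi=0.09)  # e.g. a torso-size context
```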
We compare the test-time jump-start and asymptotic performance of policies learned with p_φ(z) (learned domain randomization) and p(z) (fixed domain randomization). At test time, we sample (uniformly at random) a test set of 50 samples from the support of p(z) and run policy search optimization, initializing the policy with the parameters obtained at training time. The questions we aim to answer with our experiments are: 1) does learning policies with wide DR distributions affect the performance of the policy in the environments where the task is solvable? 2) does learning the DR distribution converge to the actual ranges where the task is solvable? 3) is learning the DR distribution beneficial?
Results
Learned Distribution Ranges: Table 1 shows the ranges for p(z) and the final equivalent ranges for the distributions found by our method. Figures 2 and 3 show the evolution of p_φ(z) during training using our method. Each plot corresponds to a separate domain randomization experiment, where we randomized a different simulator parameter while keeping the rest fixed. Initially, each of these distributions is uniform. As the agent becomes better over the training distribution, it becomes easier to discriminate between promising environments (where the task is solvable) and impossible ones, where rewards stay at low values. After around 1500 epochs, the distributions have converged to their final form. For Hopper, the learned distributions correspond closely with the environments where we can find a policy using vanilla policy gradient methods from scratch. To determine the consistency of these results, we ran the Hopper torso size experiment 7 times, and fitted the parameters of a uniform distribution to the resulting p_φ(z). The mean ranges (± one standard deviation) across the 7 experiments were [0.00086 ± 0.00159, 0.09275 ± 0.00342], which provides some evidence for the reproducibility of our method.
Learned vs Fixed Domain Randomization: We compare the jump-start and asymptotic performance between learning the domain randomization distribution and keeping it fixed. Our results compare our method using PPO as the policy optimizer (LSDR) against keeping the domain randomization distribution fixed (Fixed-DR). For these methods, we also compare whether a context-conditioned policy or a robust (context-unaware) policy generalizes better. We ran the same experiments for Hopper and Half-Cheetah.
Figures 5 and 6 depict learning curves when fine-tuning the policy at test time, for torso size randomization. All the methods start with the same random seed at training time. The policies are trained for 3000 epochs, where we collect B = 4000 samples per epoch for the policy update. For the distribution update, we collect K = 10 additional trajectories and run the gradient update M = 10 times (without re-sampling new trajectories). We report averages over 50 different environments (corresponding to samples from p(z), one random seed per environment). For clarity of presentation, we report the comparison over a "reasonable" torso size range (where the locomotion task is feasible) and a "hard" range, where the policy fails to make the robot move forward. For Hopper, within the reasonable torso size range, LSDR improves jump-start and asymptotic performance over using fixed domain randomization. On the hard ranges, LSDR performs slightly worse; but in most of the contexts in this range the task is not actually solvable, i.e. the optimal policy found by vanilla PPO does not result in successful locomotion on the hard range. Contextual policy: Figures 5 and 6 also compare the performance of a contextual policy to that of a non-contextual policy. Our results show that training a contextual policy boosts performance in both scenarios: when the domain randomization distribution is fixed, and when the distribution is being learned.
Using a different policy optimizer: We also experimented with using EPOpt-PPO (Rajeswaran et al., 2016) as the policy optimizer in Algorithm 1. The motivation for this is to mitigate the bias towards environments with higher cumulative rewards J_{M_z}(π) early during training. EPOpt encourages improving the policy on the worst performing environments, at the expense of collecting more data per policy update. From the resulting 100 trajectories, we use the 10% of trajectories that resulted in the lowest rewards to fill the buffer for a PPO policy update, discarding the rest of the trajectories. Figure 9 compares the effect of learning the domain randomization distribution vs using a fixed wide range in this setting.
Figure 9: Learning curves for torso randomization on the Hopper task, using EPOpt as the policy optimizer. Lines represent mean performance, while the shaded regions correspond to the maximum and minimum performance over the [0.01, 0.09] torso size range. Learning the domain randomization distribution results in faster convergence and higher asymptotic performance.
We found that learning the domain randomization distribution resulted in faster convergence to high-reward policies over the evaluation range [0.01, 0.09], while resulting in slightly better asymptotic performance. We believe this could be a consequence of lower variance in the policy gradient estimates, as the learned p_φ(z) has lower variance than p(z). Interestingly, using EPOpt resulted in a distribution with a wider torso size range than vanilla PPO, from approximately 0.0 to 0.14, demonstrating that optimizing worst-case performance does help in alleviating the bias towards high-reward environments.
Discussion
By allowing the agent to learn a good representative distribution, we are able to learn to solve difficult control tasks that heavily rely on a good initial domain randomization range. Our main experimental validation of domain randomization distribution learning is in the domain of simulated robotic locomotion. As shown in our experiments, our method is not sensitive to the initial domain randomization distribution and is able to converge to a more diverse range, while staying within the feasible range.
In this work, we study uni-dimensional context distribution learning. Due to the curse of dimensionality, there are limitations to using a discrete distribution; we are currently experimenting with alternative distributions such as truncated normal distributions, approximations of discrete distributions, etc. Using multidimensional contexts should enable an agent trained in simulation to obtain experience that is closer to that of a real world robot, which is the goal of this work. An issue that requires further investigation is the fact that we use the same reward function over all environments, without considering the effect of the simulation parameters on the reward scale. For instance, in a challenging environment, the agent may obtain low rewards but still manage to produce a policy that successfully solves the task; e.g. successful forward locomotion in the Hopper task. A poorly constructed reward may not only lead to undesirable behavior, but may also complicate distribution learning if the scale of the rewards for successful policies varies across contexts. | 3,678
1906.00410 | 2947469668 | Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real world data in order to carefully select the DR distribution, or incorporate real world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements on jump-start and asymptotic performance when transferring a learned policy to the target environment. | @cite_12 propose training policies on a distribution of simulators, whose parameters are fit to real-world data. Their proposed algorithm switches back and forth between optimizing the policy under the DR distribution and updating the DR distribution by minimizing the discrepancy between simulated and real world trajectories. In contrast, we aim to learn policies that maximize performance over a diverse distribution of environments where the task is feasible, as a way of minimizing the interactions with the real robot system. | {
"abstract": [
"We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at this https URL"
],
"cite_N": [
"@cite_12"
],
"mid": [
"2897345632"
]
} | Learning Domain Randomization Distributions for Transfer of Locomotion Policies | Deep Reinforcement Learning (Deep-RL) is a powerful technique for synthesizing locomotion controllers for robot systems. Inspired by successes in video games (Mnih et al., 2015) and board games (Silver et al., 2016), recent work has demonstrated the applicability of Deep-RL in robotics. Since the data requirements of Deep-RL make its direct application to real robot systems costly, or even infeasible, a large body of recent work has focused on training controllers in simulation and deploying them on a real robot system. This is particularly challenging, but crucial for the real-world development of these systems.
Robot simulators provide a solution to the data requirements of Deep-RL. Except for simple robot systems in controlled environments, however, real robot experience may not correspond to the situations that were used in simulation; an issue known as the reality gap (Jakobi et al., 1995). One way to address the reality gap is to perform system identification to tune the simulation parameters. This approach works if collecting data on the target system is not prohibitively expensive and the number of parameters of the simulation is small. The reality gap may still exist, however, due to a mis-specification of the simulation model.
Another method to shrink the reality gap is to train policies to maximize performance over a diverse set of simulation models, where the parameters of each model are sampled randomly, an approach known as domain randomization (DR). This aims to address the issue of model misspecification by providing diverse simulated experience. Domain randomization has been demonstrated to effectively produce controllers that can be trained in simulation with a high likelihood of successful outcomes on a real robot system, either directly after deployment (Andrychowicz et al., 2018) or after fine-tuning with real world data (Chen et al., 2018).
While successful, an aspect that has not been addressed in depth is the selection of the domain randomization distribution. For vision-based components, DR should be tuned so that features learned in simulation do not depend strongly on the appearance of simulated environments. For the control components, the focus of this work, there is a dependency between optimal behaviour and the dynamics of the environment. In this case, the DR distribution should be selected carefully to ensure that the real robot experience is represented in the simulated experience sampled under DR. If real robot data is available, one could use gradient-free search (Chebotar et al., 2018) or Bayesian inference (Rajeswaran et al., 2016) to update the DR distribution after executing the learned policy on the target system. These methods are based on the assumption that there is a set of simulators from which real world experience can be synthesized.
In this work we propose to learn the parameters of the simulator distribution, such that the policy is trained over the most diverse set of simulator parameters in which it can plausibly succeed. By making the simulation distribution as wide as possible, we aim to encode the largest set of behaviours possible in a single policy with fixed capacity. As shown in our experiments, training on the widest distribution possible has two problems: our models usually have finite capacity, and picking a domain randomization distribution that is too varied slows down convergence, as shown in Figure 9.
Instead, we let the optimization process focus on environments where the task is feasible. We propose an algorithm that simultaneously learns the domain randomization distribution while optimizing the policy to maximize performance over the learned distribution. To operate over a wide range of possible simulator parameters, we train context-aware policies which take as input the current state of the environment, alongside contextual information describing the sampled parameters of the simulator. This enables our policies to learn context-specific strategies which consider the current dynamics of the environment, rather than an average over all possible simulator parameters. When deployed on the target environment, we concurrently fine-tune the policy parameters while searching for the context that maximizes performance. We evaluate our method on a variety of control problems from the OpenAI Gym suite of benchmarks. We find that our method is able to improve on the performance of fixed domain randomization. Furthermore, we demonstrate our model's robustness to initial simulator distribution parameters, showing that our method repeatably converges to similar domain randomization distributions across different experiments.

(Packer et al., 2018) present an empirical study of generalization in Deep-RL, testing interpolation and extrapolation performance of state-of-the-art algorithms when varying simulation parameters in control tasks. The authors provide an experimental assessment of generalization under varying training and testing distributions. Our work extends these results by providing results for the case when the training distribution parameters are learned and change during policy training.

(Chebotar et al., 2018) propose training policies on a distribution of simulators, whose parameters are fit to real-world data. Their proposed algorithm switches back and forth between optimizing the policy under the DR distribution and updating the DR distribution by minimizing the discrepancy between simulated and real world trajectories. In contrast, we aim to learn policies that maximize performance over a diverse distribution of environments where the task is feasible, as a way of minimizing the interactions with the real robot system.

(Rajeswaran et al., 2016) propose a related approach for learning robust policies over a distribution of simulator models. The proposed approach, based on the ε-percentile conditional value at risk (CVaR) (Tamar et al., 2015) objective, improves the policy performance on a small proportion of environments where the policy performs the worst. The authors propose an algorithm that updates the distribution of simulation models to maximize the likelihood of real-world trajectories, via Bayesian inference. The combination of worst-case performance optimization and Bayesian updates ensures that the resulting policy is robust to errors in the estimation of the simulation model parameters. Our method can be combined with the CVaR objective to encourage diversity of the learned DR distribution.
Problem Statement
We consider parametric Markov Decision Processes (MDPs) (Sutton & Barto, 2018). An MDP M is defined by the tuple ⟨S, A, p, r, γ, ρ_0⟩, where S is the set of possible states and A is the set of actions, p : S × A × S → R+ encodes the state transition dynamics, r : S × A → R+ is the task-dependent reward function, γ is a discount factor, and ρ_0 : S → R is the initial state distribution. Let s_t and a_t be the state and action taken at time t. At the beginning of each episode, s_0 ∼ ρ_0(·). Trajectories τ are obtained by iteratively sampling actions from the current policy, a_t ∼ π(a_t|s_t), and evaluating next states according to the transition dynamics, s_{t+1} ∼ p(s_{t+1}|s_t, a_t). Given an MDP M, the goal is then to learn a policy π that maximizes the expected sum of rewards
J_M(π) = E_τ[R(τ) | π] = E_τ[ Σ_{t=0}^{∞} γ^t r_t ],  where r_t = r(s_t, a_t).
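As a quick illustration of this quantity, the following minimal Python sketch (the helper name and the averaging comment are ours, not from the paper) computes the discounted return of a single trajectory:

```python
def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum_t gamma^t * r_t for one trajectory's reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# J_M(pi) is then estimated by averaging over sampled trajectories, e.g.
# J_hat = sum(discounted_return(r) for r in reward_sequences) / len(reward_sequences)
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```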
In our work, we aim to maximize performance over a distribution of MDPs, each described by a context vector z representing the variables that change over the distribution: changes in transition dynamics, rewards, initial state distribution, etc. Thus, our objective is to maximize E_{z∼p(z)}[ J_{M_z}(π) ], where p(z) is the domain randomization distribution. Similar to (Yu et al., 2018; Chen et al., 2018; Rakelly et al., 2019), we condition the policy on the context vector, π(a_t | s_t, z). In the experiments reported in this paper, we let z encode the parameters of the transition model in a physically based simulator; e.g. mass, friction or damping.
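For concreteness, a context-conditioned policy of this form can be sketched as follows; this is an assumption-laden illustration (we assume PyTorch and a simple Gaussian MLP with the context concatenated to the state; the paper does not prescribe a specific architecture):

```python
import torch
import torch.nn as nn

class ContextConditionedPolicy(nn.Module):
    """Gaussian policy pi(a | s, z): the simulator context z is concatenated
    to the state before a small MLP. This is a common conditioning choice,
    not an architecture prescribed by the paper."""

    def __init__(self, state_dim, context_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + context_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state, context):
        h = self.body(torch.cat([state, context], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

# During rollouts in an environment with context z: a_t = policy(s_t, z).sample()
```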
Proposed Method
In practice, making the context distribution p(z) as wide as possible may be detrimental to the objective of maximizing performance. For instance, if the distribution has infinite support and wide variance, there may be many environments sampled from the context distribution for which the desired task is impossible (e.g. reaching a target state). Thus, sampling trajectories from a wide context distribution results in high variance in the directions of improvement, slowing progress on policy learning. On the other hand, if we make the context distribution too narrow, policy learning can progress more rapidly but may not generalize to the whole set of possible contexts.
We introduce the LSDR (Learning the Sweet-spot Distribution Range) algorithm for concurrently learning a domain randomization distribution and a robust policy that maximizes performance over it. Instead of directly sampling from p(z), we use a surrogate distribution p_φ(z) with trainable parameters φ. Our goal is to find appropriate parameters φ for optimizing π(·|s, z). LSDR proceeds by updating the policy with trajectories sampled from p_φ(z), and updating φ based on the performance of the policy under p(z). To avoid the collapse of the learned distribution, we propose using a regularizer that encourages the distribution to stay diverse. The idea is to sample more data from environments where improvement of the policy is possible, without collapsing onto environments that are trivial to solve. We summarize our training and testing procedures in Algorithms 1-3.
In our experiments, we use Proximal Policy Optimization (PPO) (Schulman et al., 2017) for the UpdatePolicy procedure in Algorithm 1.
Algorithm 1 Learning the policy and training distribution
Input: testing distribution p(z), initial parameters of the learned distribution φ, initial policy π, buffer size B, total iterations N
for i ∈ {1, ..., N} do
    z ∼ p_φ(z)
    s_0 ∼ ρ_0(s)
    B = {}
    while |B| < B do
        a_t ∼ π(a_t | s_t, z)
        s_{t+1}, r_t ∼ p(s_{t+1}, r_t | s_t, a_t, z)
        append (s_t, z, a_t, r_t, s_{t+1}) to B
        if s_{t+1} is terminal then
            z ∼ p_φ(z)
            s_{t+1} ∼ ρ_0(s)
        end if
        s_t ← s_{t+1}
    end while
    φ ← UpdateDistribution(φ, p(z), π)
    π ← UpdatePolicy(π, B)
end for
Algorithm 2 UpdateDistribution
Input: learned distribution parameters φ, testing distribution p(z), policy π, total iterations M, total trajectory samples K
for i ∈ {1, ..., M} do
    sample z_{1:K} from p(z)
    obtain a Monte-Carlo estimate of L_DR(φ) by executing π on environments with z_{1:K}
    φ ← φ + λ ∇_φ ( L_DR(φ) − α D_KL(p_φ(z) ‖ p(z)) )
end for
Algorithm 3 Fine-tuning the policy at test-time
Input: learned distribution parameters φ, policy π, buffer size B, total iterations N
Initialize the guess for the context vector: ẑ ∼ p_φ(z)
for i ∈ {0, ..., N} do
    collect B samples into B by executing policy π(a|s, ẑ)
    π ← UpdatePolicy(π, B)
    ẑ ← UpdatePolicy(ẑ, B)
end for
Learning the Sweet-spot Distribution Range
The goal of our method is to find a training distribution p_φ(z) to maximize the expected reward of the policy under the test distribution, while gradually reducing the sampling frequency of environments that make the task unsolvable (in this work, we consider a task solvable if there exists a policy that brings the environment to a set of desired goal states). Such a situation is common in physics-based simulations, where a bad selection of simulation parameters may lead to environments where the task is impossible due to physical limits or unstable simulations.
We start by assuming that the test distribution p(z) has wide but bounded support, such that we get a distribution of solvable and unsolvable tasks. To update the training distribution, we use an objective of the following form
arg max_φ  L_DR(φ) − α D_KL(p_φ(z) ‖ p(z))        (1)
where the first term is designed to encourage improvement on environments that are more likely to be solvable, while the second term is a regularizer that keeps the distribution from collapsing. In our experiments, we set
L_DR(φ) = E_{z∼p(z)}[ J_{M_z}(π) log p_φ(z) ].
Optimizing this objective encourages focusing on environments where the current policy performs the best. Other suitable objectives are the improvement over the previous policy, E_z[ J_{M_z}(π_i) − J_{M_z}(π_{i−1}) ], or an estimate of the performance of the context-dependent optimal policy, E_z[ Ĵ_{M_z}(π*) ].
If we use the performance of the policy as a way of determining whether the task is solvable for a given context z, then a trivial solution would be to make p_φ(z) concentrate on a few easy environments. The second term in Eq. (1) helps to avoid this issue by penalizing distributions that deviate too much from p(z), which is assumed to be wide. When p(z) is uniform, this is equivalent to maximizing the entropy of p_φ(z).
To estimate the gradient of Eq. (1) with respect to φ, we use the log-derivative score function gradient estimator (Fu, 2006), resulting in the following Monte-Carlo update:
φ ← φ + λ [ (1/K) Σ_{i=1}^{K} J_{M_{z_i}}(π) ∇_φ log p_φ(z_i) − α ∇_φ D_KL(p_φ(z) ‖ p(z)) ]        (2)
where z_i ∼ p(z). Updating φ with samples from the distribution we are learning has the problem that we never get information about the performance of the policy in low-probability contexts under p_φ(z). This is problematic since, if a context z_k were assigned a low probability early in training, we would require a large number of samples to update its probability, even if the policy performs well on z_k during later stages of training. To address this issue, we use samples from p(z) to evaluate the gradient of L_DR(φ). While changing the sampling distribution introduces bias, which could be corrected by using importance sampling, we find that both the second term in Eq. (1) and sampling from p(z) are crucial to avoid the collapse of the learned distribution (see Fig. 4). To ensure that the two terms in Eq. (1) have a similar scale, we standardize the evaluations of J_{M_{z_i}} with exponentially averaged batch statistics and set α to the fixed value H(p(z))^{-1}.
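To make the update in Eq. (2) concrete, here is a minimal numpy sketch for a discrete bin parameterization of p_φ(z) (as used in the experiments below), assuming p_φ(z) = softmax(φ) over bins and a uniform testing distribution p(z); names and the learning-rate default are ours:

```python
import numpy as np

def lsdr_distribution_update(phi, z_idx, J, alpha, lr=0.01):
    """One step of Eq. (2) for a discrete p_phi(z) = softmax(phi) over N bins.

    phi   : (N,) unnormalized log-probabilities of the context bins
    z_idx : (K,) bin indices z_i sampled from the testing distribution p(z)
    J     : (K,) standardized returns J_{M_{z_i}}(pi) of the current policy
    """
    phi = np.asarray(phi, dtype=float)
    J = np.asarray(J, dtype=float)
    p = np.exp(phi - phi.max())
    p /= p.sum()

    # Score-function term: (1/K) sum_i J_i * grad_phi log p_phi(z_i),
    # where grad_phi log softmax(phi)[z_i] = one_hot(z_i) - p.
    grad_logp = -np.tile(p, (len(J), 1))
    grad_logp[np.arange(len(J)), z_idx] += 1.0
    score_term = (J[:, None] * grad_logp).mean(axis=0)

    # Gradient of D_KL(p_phi || uniform) = log N - H(p_phi) wrt phi.
    log_ratio = np.log(p * len(p))
    kl_grad = p * (log_ratio - np.sum(p * log_ratio))

    # The paper fixes alpha = H(p(z))^{-1}, i.e. 1/log(N) for a uniform p(z).
    return phi + lr * (score_term - alpha * kl_grad)
```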
Experiments
We evaluate the impact of learning the DR distribution on two standard benchmark locomotion tasks: Hopper and Half-Cheetah from the MuJoCo tasks (illustrated in Fig. 1) in the OpenAI Gym suite (Brockman et al., 2016). We use an explicit encoding of the context vector z, corresponding to the torso size, density, foot friction and joint damping of the environments. In this work, we focus on uni-dimensional domain randomization contexts, running experiments for each context variable independently (to enable distribution learning with multi-dimensional contexts, we are exploring the use of parameterizations, different from the discrete distribution, that do not suffer from the curse of dimensionality). We selected p(z) as a uniform distribution over ranges that include both solvable and unsolvable environments. We initialize p_φ(z) to be the same as p(z). In these experiments, both distributions are implemented as discrete distributions with 100 bins. When sampling from this distribution, we first select a bin according to the discrete probabilities, then select a continuous context value uniformly at random from the range of the corresponding bin.
We compare the test-time jump-start and asymptotic performance of policies learned with p_φ(z) (learned domain randomization) and p(z) (fixed domain randomization). At test time, we sample (uniformly at random) a test set of 50 samples from the support of p(z) and run policy search optimization, initializing the policy with the parameters obtained at training time. The questions we aim to answer with our experiments are: 1) does learning policies with wide DR distributions affect the performance of the policy in the environments where the task is solvable? 2) does the learned DR distribution converge to the actual ranges where the task is solvable? 3) is learning the DR distribution beneficial?
Results
Learned Distribution Ranges: Table 1 shows the ranges for p(z) and the final equivalent ranges for the distributions found by our method. Figures 2 and 3 show the evolution of p_φ(z) during training, using our method. Each plot corresponds to a separate domain randomization experiment, where we randomized one simulator parameter while keeping the rest fixed. Initially, each of these distributions is uniform. As the agent becomes better over the training distribution, it becomes easier to discriminate between promising environments (where the task is solvable) and impossible ones, where rewards stay at low values. After around 1500 epochs, the distributions have converged to their final shapes. For Hopper, the learned distributions correspond closely with the environments where we can find a policy using vanilla policy gradient methods from scratch. To determine the consistency of these results, we ran the Hopper torso size experiment 7 times, and fitted the parameters of a uniform distribution to the resulting p_φ(z). The mean ranges (± one standard deviation) across the 7 experiments were [0.00086 ± 0.00159, 0.09275 ± 0.00342], which provides some evidence for the reproducibility of our method.
Learned vs Fixed Domain Randomization: We compare the jump-start and asymptotic performance between learning the domain randomization distribution and keeping it fixed. We evaluate our method with PPO as the policy optimizer (LSDR) against keeping the domain randomization distribution fixed (Fixed-DR). For both methods, we also compare whether training a context-conditioned policy or a robust (non-contextual) policy is better at generalization. We ran the same experiments for Hopper and Half-Cheetah.
Figures 5 and 6 depict learning curves when fine-tuning the policy at test-time, for torso size randomization. All the methods start with the same random seed at training time. The policies are trained for 3000 epochs, where we collect B = 4000 samples per epoch for the policy update. For the distribution update, we collect K = 10 additional trajectories and run the gradient update M = 10 times (without re-sampling new trajectories). We report averages over 50 different environments (corresponding to samples from p(z), one random seed per environment). For clarity of presentation, we report the comparison over a "reasonable" torso size range (where the locomotion task is feasible) and a "hard" range, where the policy fails to make the robot move forward. For Hopper, the reasonable torso size range corresponds to [0.01, 0.09]; within it, LSDR obtains improvements on both jump-start and asymptotic performance over using fixed domain randomization. On the hard ranges, LSDR performs slightly worse, but in most of the contexts in this range the task is not actually solvable; i.e., the optimal policy found by vanilla PPO does not result in successful locomotion on the hard range. Figures 5 and 6 also compare the performance of a contextual policy to that of a non-contextual policy. Our results show that training a contextual policy boosts performance in both scenarios: when the domain randomization distribution is fixed, and when it is being learned.
Using a different policy optimizer We also experimented with using EPOpt-PPO (Rajeswaran et al., 2016) as the policy optimizer in Algorithm 1. The motivation for this is to mitigate the bias towards environments with higher cumulative rewards J_{M_z}(π) early during training. EPOpt encourages improving the policy on the worst-performing environments, at the expense of collecting more data per policy update. From the resulting 100 trajectories, we use the 10% of trajectories that resulted in the lowest rewards to fill the buffer for a PPO policy update, discarding the rest. Figure 9 compares the effect of learning the domain randomization distribution vs. using a fixed wide range in this setting, showing learning curves for torso randomization on the Hopper task with EPOpt as the policy optimizer; lines represent mean performance, while shaded regions correspond to the maximum and minimum performance over the [0.01, 0.09] torso size range. We found that learning the domain randomization distribution resulted in faster convergence to high-reward policies over the evaluation range [0.01, 0.09], while also achieving slightly better asymptotic performance. We believe this could be a consequence of lower variance in the policy gradient estimates, as the learned p_φ(z) has lower variance than p(z). Interestingly, using EPOpt resulted in a distribution with a wider torso size range than vanilla PPO, from approximately 0.0 to 0.14, demonstrating that optimizing worst-case performance does help in alleviating the bias towards high-reward environments.
Discussion
By allowing the agent to learn a good representative distribution, we are able to solve difficult control tasks whose solution would otherwise rely heavily on a good initial domain randomization range. Our main experimental validation of domain randomization distribution learning is in the domain of simulated robotic locomotion. As shown in our experiments, our method is not sensitive to the initial domain randomization distribution and is able to converge to a more diverse range, while staying within the feasible range.
In this work, we study uni-dimensional context distribution learning. Due to the curse of dimensionality, there are limitations in using a discrete distribution; we are currently experimenting with alternative parameterizations such as truncated normal distributions, approximations of discrete distributions, etc. Using multidimensional contexts should enable an agent trained in simulation to obtain experience that is closer to that of a real world robot, which is the goal of this work. An issue that requires further investigation is the fact that we use the same reward function over all environments, without considering the effect of the simulation parameters on the reward scale. For instance, in a challenging environment, the agent may obtain low rewards but still manage to produce a policy that successfully solves the task; e.g. successful forward locomotion in the Hopper task. A poorly constructed reward may not only lead to undesirable behavior, but may also complicate distribution learning if the scale of the rewards for successful policies varies across contexts. | 3,678
1906.00410 | 2947469668 | Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real world data in order to carefully select the DR distribution, or incorporate real world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements on jump-start and asymptotic performance when transferring a learned policy to the target environment. | @cite_10 propose a related approach for learning robust policies over a distribution of simulator models. The proposed approach, based on the @math -percentile conditional value at risk (CVaR) @cite_11 objective, improves the policy performance on a small proportion of environments where the policy performs the worst. The authors propose an algorithm that updates the distribution of simulation models to maximize the likelihood of real-world trajectories, via Bayesian inference. The combination of worst-case performance optimization and Bayesian updates ensures that the resulting policy is robust to errors in the estimation of the simulation model parameters. Our method can be combined with the CVaR objective to encourage diversity of the learned DR distribution. | {
"abstract": [
"Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning adaptation.",
"Conditional Value at Risk (CVaR) is a prominent risk measure that is being used extensively in various domains. We develop a new formula for the gradient of the CVaR in the form of a conditional expectation. Based on this formula, we propose a novel sampling-based estimator for the CVaR gradient, in the spirit of the likelihood-ratio method. We analyze the bias of the estimator, and prove the convergence of a corresponding stochastic gradient descent algorithm to a local CVaR optimum. Our method allows to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risk-sensitive controller for the game of Tetris."
],
"cite_N": [
"@cite_10",
"@cite_11"
],
"mid": [
"2529477964",
"2949963197"
]
} | Learning Domain Randomization Distributions for Transfer of Locomotion Policies | Deep Reinforcement Learning (Deep-RL) is a powerful technique for synthesizing locomotion controllers for robot systems. Inspired by successes in video games (Mnih et al., 2015) and board games (Silver et al., 2016), recent work has demonstrated the applicability of Deep-RL in robotics. Since the data requirements of Deep-RL make its direct application to real robot systems costly, or even infeasible, a large body of recent work has focused on training controllers in simulation and deploying them on a real robot system. This is particularly challenging, but crucial for the real-world development of these systems.
Robot simulators provide a solution to the data requirements of Deep-RL. Except for simple robot systems in controlled environments, however, real robot experience may not correspond to the situations that were used in simulation; an issue known as the reality gap (Jakobi et al., 1995). One way to address the reality gap is to perform system identification to tune the simulation parameters. This approach works if collecting data on the target system is not prohibitively expensive and the number of parameters of the simulation is small. The reality gap may still exist, however, due to a mis-specification of the simulation model.
Another method to shrink the reality gap is to train policies to maximize performance over a diverse set of simulation models, where the parameters of each model are sampled randomly, an approach known as domain randomization (DR). This aims to address the issue of model misspecification by providing diverse simulated experience. Domain randomization has been demonstrated to effectively produce controllers that can be trained in simulation with a high likelihood of successful outcomes on a real robot system, either directly after deployment (Andrychowicz et al., 2018) or after fine-tuning with real world data (Chen et al., 2018).
While successful, an aspect that has not been addressed in depth is the selection of the domain randomization distribution. For vision-based components, DR should be tuned so that features learned in simulation do not depend strongly on the appearance of simulated environments. For the control components, the focus of this work, there is a dependency between optimal behaviour and the dynamics of the environment. In this case, the DR distribution should be selected carefully to ensure that the real robot experience is represented in the simulated experience sampled under DR. If real robot data is available, one could use gradient-free search (Chebotar et al., 2018) or Bayesian inference (Rajeswaran et al., 2016) to update the DR distribution after executing the learned policy on the target system. These methods are based on the assumption that there is a set of simulators from which real world experience can be synthesized.
In this work we propose to learn the parameters of the simulator distribution, such that the policy is trained over the most diverse set of simulator parameters in which it can plausibly succeed. By making the simulation distribution as wide as possible, we aim to encode the largest set of behaviours possible in a single policy with fixed capacity. As shown in our experiments, training on the widest distribution possible has two problems: our models usually have finite capacity, and picking a domain randomization distribution that is too varied slows down convergence, as shown in Figure 9.
Instead, we let the optimization process focus on environments where the task is feasible. We propose an algorithm that simultaneously learns the domain randomization distribution while optimizing the policy to maximize performance over the learned distribution. To operate over a wide range of possible simulator parameters, we train context-aware policies which take as input the current state of the environment, alongside contextual information describing the sampled parameters of the simulator. This enables our policies to learn context-specific strategies which consider the current dynamics of the environment, rather than an average over all possible simulator parameters. When deployed on the target environment, we concurrently fine-tune the policy parameters while searching for the context that maximizes performance. We evaluate our method on a variety of control problems from the OpenAI Gym suite of benchmarks. We find that our method is able to improve on the performance of fixed domain randomization. Furthermore, we demonstrate our model's robustness to initial simulator distribution parameters, showing that our method repeatably converges to similar domain randomization distributions across different experiments.

(Packer et al., 2018) present an empirical study of generalization in Deep-RL, testing interpolation and extrapolation performance of state-of-the-art algorithms when varying simulation parameters in control tasks. The authors provide an experimental assessment of generalization under varying training and testing distributions. Our work extends these results by providing results for the case when the training distribution parameters are learned and change during policy training.

(Chebotar et al., 2018) propose training policies on a distribution of simulators, whose parameters are fit to real-world data. Their proposed algorithm switches back and forth between optimizing the policy under the DR distribution and updating the DR distribution by minimizing the discrepancy between simulated and real world trajectories. In contrast, we aim to learn policies that maximize performance over a diverse distribution of environments where the task is feasible, as a way of minimizing the interactions with the real robot system.

(Rajeswaran et al., 2016) propose a related approach for learning robust policies over a distribution of simulator models. The proposed approach, based on the ε-percentile conditional value at risk (CVaR) (Tamar et al., 2015) objective, improves the policy performance on a small proportion of environments where the policy performs the worst. The authors propose an algorithm that updates the distribution of simulation models to maximize the likelihood of real-world trajectories, via Bayesian inference. The combination of worst-case performance optimization and Bayesian updates ensures that the resulting policy is robust to errors in the estimation of the simulation model parameters. Our method can be combined with the CVaR objective to encourage diversity of the learned DR distribution.
Problem Statement
We consider parametric Markov Decision Processes (MDPs) (Sutton & Barto, 2018). An MDP M is defined by the tuple ⟨S, A, p, r, γ, ρ_0⟩, where S is the set of possible states and A is the set of actions, p : S × A × S → R+ encodes the state transition dynamics, r : S × A → R+ is the task-dependent reward function, γ is a discount factor, and ρ_0 : S → R is the initial state distribution. Let s_t and a_t be the state and action taken at time t. At the beginning of each episode, s_0 ∼ ρ_0(·). Trajectories τ are obtained by iteratively sampling actions from the current policy, a_t ∼ π(a_t|s_t), and evaluating next states according to the transition dynamics, s_{t+1} ∼ p(s_{t+1}|s_t, a_t). Given an MDP M, the goal is then to learn a policy π that maximizes the expected sum of rewards
J_M(π) = E_τ[R(τ) | π] = E_τ[ Σ_{t=0}^{∞} γ^t r_t ],  where r_t = r(s_t, a_t).
In our work, we aim to maximize performance over a distribution of MDPs, each described by a context vector z representing the variables that change over the distribution: changes in transition dynamics, rewards, initial state distribution, etc. Thus, our objective is to maximize E_{z∼p(z)}[ J_{M_z}(π) ], where p(z) is the domain randomization distribution. Similar to (Yu et al., 2018; Chen et al., 2018; Rakelly et al., 2019), we condition the policy on the context vector, π(a_t | s_t, z). In the experiments reported in this paper, we let z encode the parameters of the transition model in a physically based simulator; e.g. mass, friction or damping.
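As an illustration of this objective, the following sketch (our own helper names; `make_env`, `policy` and `sample_context` are placeholders for the simulator factory, the context-conditioned policy, and a sampler for p(z)) estimates E_{z∼p(z)}[J_{M_z}(π)] by Monte-Carlo:

```python
import numpy as np

def estimate_dr_objective(make_env, policy, sample_context, n_contexts=50, gamma=0.99):
    """Monte-Carlo estimate of E_{z ~ p(z)}[ J_{M_z}(pi) ]: sample a context,
    instantiate the corresponding simulator, roll out the context-conditioned
    policy, and average the discounted returns (classic Gym-style API assumed)."""
    returns = []
    for _ in range(n_contexts):
        z = sample_context()                  # z ~ p(z)
        env = make_env(z)                     # simulator with parameters z
        s, done, ret, t = env.reset(), False, 0.0, 0
        while not done:
            a = policy(s, z)                  # a_t ~ pi(a | s, z)
            s, r, done, _ = env.step(a)
            ret += (gamma ** t) * r
            t += 1
        returns.append(ret)
    return float(np.mean(returns))
```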
Proposed Method
In practice, making the context distribution p(z) as wide as possible may be detrimental to the objective of maximizing performance. For instance, if the distribution has infinite support and wide variance, there may be many environments sampled from the context distribution for which the desired task is impossible (e.g. reaching a target state). Thus, sampling trajectories from a wide context distribution results in high variance in the directions of improvement, slowing progress on policy learning. On the other hand, if we make the context distribution too narrow, policy learning can progress more rapidly but may not generalize to the whole set of possible contexts.
We introduce the LSDR (Learning the Sweet-spot Distribution Range) algorithm for concurrently learning a domain randomization distribution and a robust policy that maximizes performance over it. Instead of directly sampling from p(z), we use a surrogate distribution p_φ(z) with trainable parameters φ. Our goal is to find appropriate parameters φ for optimizing π(·|s, z). LSDR proceeds by updating the policy with trajectories sampled from p_φ(z), and updating φ based on the performance of the policy under p(z). To avoid the collapse of the learned distribution, we propose using a regularizer that encourages the distribution to stay diverse. The idea is to sample more data from environments where improvement of the policy is possible, without collapsing onto environments that are trivial to solve. We summarize our training and testing procedures in Algorithms 1-3.
In our experiments, we use Proximal Policy Optimization (PPO) (Schulman et al., 2017) for the UpdatePolicy procedure in Algorithm 1.
Algorithm 1 Learning the policy and training distribution
Input: testing distribution p(z), initial parameters of the learned distribution φ, initial policy π, buffer size B, total iterations N
for i ∈ {1, ..., N} do
    z ∼ p_φ(z)
    s_0 ∼ ρ_0(s)
    B = {}
    while |B| < B do
        a_t ∼ π(a_t | s_t, z)
        s_{t+1}, r_t ∼ p(s_{t+1}, r_t | s_t, a_t, z)
        append (s_t, z, a_t, r_t, s_{t+1}) to B
        if s_{t+1} is terminal then
            z ∼ p_φ(z)
            s_{t+1} ∼ ρ_0(s)
        end if
        s_t ← s_{t+1}
    end while
    φ ← UpdateDistribution(φ, p(z), π)
    π ← UpdatePolicy(π, B)
end for
Algorithm 2 UpdateDistribution
Input: learned distribution parameters φ, testing distribution p(z), policy π, total iterations M, total trajectory samples K
for i ∈ {1, ..., M} do
    sample z_{1:K} from p(z)
    obtain a Monte-Carlo estimate of L_DR(φ) by executing π on environments with z_{1:K}
    φ ← φ + λ ∇_φ ( L_DR(φ) − α D_KL(p_φ(z) ‖ p(z)) )
end for
Algorithm 3 Fine-tuning the policy at test-time
Input: learned distribution parameters φ, policy π, buffer size B, total iterations N
Initialize the guess for the context vector: ẑ ∼ p_φ(z)
for i ∈ {0, ..., N} do
    collect B samples into B by executing policy π(a|s, ẑ)
    π ← UpdatePolicy(π, B)
    ẑ ← UpdatePolicy(ẑ, B)
end for
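The pseudocode above leaves the update of ẑ abstract. One plausible realization, sketched below under our own assumptions (PyTorch; `collect_rollouts` and `ppo_loss` are placeholder callables supplied by the user, not APIs from the paper), treats ẑ as an extra trainable parameter optimized jointly with the policy:

```python
import torch

def finetune_at_test_time(policy, z_hat, collect_rollouts, ppo_loss,
                          n_iters=100, lr=3e-4):
    """Sketch of the test-time procedure: adapt both the policy parameters
    and the context guess z_hat on the target environment."""
    z_hat = z_hat.clone().requires_grad_(True)
    opt = torch.optim.Adam(list(policy.parameters()) + [z_hat], lr=lr)
    for _ in range(n_iters):
        batch = collect_rollouts(policy, z_hat)   # B samples from the target env
        loss = ppo_loss(policy, z_hat, batch)     # surrogate policy objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy, z_hat.detach()
```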
Learning the Sweet-spot Distribution Range
The goal of our method is to find a training distribution p_φ(z) to maximize the expected reward of the policy under the test distribution, while gradually reducing the sampling frequency of environments that make the task unsolvable (in this work, we consider a task solvable if there exists a policy that brings the environment to a set of desired goal states). Such a situation is common in physics-based simulations, where a bad selection of simulation parameters may lead to environments where the task is impossible due to physical limits or unstable simulations.
We start by assuming that the test distribution p(z) has wide but bounded support, such that we get a distribution of solvable and unsolvable tasks. To update the training distribution, we use an objective of the following form
arg max_φ  L_DR(φ) − α D_KL(p_φ(z) ‖ p(z))        (1)
where the first term is designed to encourage improvement on environments that are more likely to be solvable, while the second term is a regularizer that keeps the distribution from collapsing. In our experiments, we set
L_DR(φ) = E_{z∼p(z)}[ J_{M_z}(π) log p_φ(z) ].
Optimizing this objective encourages focusing on environments where the current policy performs the best. Other suitable objectives are the improvement over the previous policy, E_z[ J_{M_z}(π_i) − J_{M_z}(π_{i−1}) ], or an estimate of the performance of the context-dependent optimal policy, E_z[ Ĵ_{M_z}(π*) ].
If we use the performance of the policy as a way of determining whether the task is solvable for a given context z, then a trivial solution would be to make p_φ(z) concentrate on a few easy environments. The second term in Eq. (1) helps to avoid this issue by penalizing distributions that deviate too much from p(z), which is assumed to be wide. When p(z) is uniform, this is equivalent to maximizing the entropy of p_φ(z).
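To see why the KL regularizer reduces to an entropy bonus for uniform p(z), here is a small numpy check (a self-contained illustration, not code from the paper):

```python
import numpy as np

# For a uniform p(z) over N bins, D_KL(p_phi || p) = log N - H(p_phi), so
# penalizing this KL term is exactly an entropy bonus on p_phi.
N = 100
p_phi = np.random.dirichlet(np.ones(N))       # an arbitrary learned distribution
uniform = np.full(N, 1.0 / N)

kl = np.sum(p_phi * np.log(p_phi / uniform))
entropy = -np.sum(p_phi * np.log(p_phi))
assert np.isclose(kl, np.log(N) - entropy)
```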
To estimate the gradient of Eq. (1) with respect to φ, we use the log-derivative score function gradient estimator (Fu, 2006), resulting in the following Monte-Carlo update:
φ ← φ + λ [ (1/K) Σ_{i=1}^{K} J_{M_{z_i}}(π) ∇_φ log p_φ(z_i) − α ∇_φ D_KL(p_φ(z) ‖ p(z)) ]        (2)
where z_i ∼ p(z). Updating φ with samples from the distribution we are learning has the problem that we never get information about the performance of the policy in low-probability contexts under p_φ(z). This is problematic since, if a context z_k were assigned a low probability early in training, we would require a large number of samples to update its probability, even if the policy performs well on z_k during later stages of training. To address this issue, we use samples from p(z) to evaluate the gradient of L_DR(φ). While changing the sampling distribution introduces bias, which could be corrected by using importance sampling, we find that both the second term in Eq. (1) and sampling from p(z) are crucial to avoid the collapse of the learned distribution (see Fig. 4). To ensure that the two terms in Eq. (1) have a similar scale, we standardize the evaluations of J_{M_{z_i}} with exponentially averaged batch statistics and set α to the fixed value H(p(z))^{-1}.
Experiments
We evaluate the impact of learning the DR distribution on two standard benchmark locomotion tasks: Hopper and Half-Cheetah from the MuJoCo tasks (illustrated in Fig. 1) in the OpenAI Gym suite (Brockman et al., 2016). We use an explicit encoding of the context vector z, corresponding to the torso size, density, foot friction and joint damping of the environments. In this work, we focus on uni-dimensional domain randomization contexts, running experiments for each context variable independently (to enable distribution learning with multi-dimensional contexts, we are exploring the use of parameterizations, different from the discrete distribution, that do not suffer from the curse of dimensionality). We selected p(z) as a uniform distribution over ranges that include both solvable and unsolvable environments. We initialize p_φ(z) to be the same as p(z). In these experiments, both distributions are implemented as discrete distributions with 100 bins. When sampling from this distribution, we first select a bin according to the discrete probabilities, then select a continuous context value uniformly at random from the range of the corresponding bin.
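As a small illustration of this sampling scheme, here is a hedged Python sketch (the helper name and the example range are ours; the actual supports of p(z) are those listed in Table 1):

```python
import numpy as np

def sample_context(probs, lo, hi):
    """Sample z from the discrete bin parameterization: pick one of the
    len(probs) bins according to p_phi, then draw uniformly within it."""
    edges = np.linspace(lo, hi, len(probs) + 1)
    b = np.random.choice(len(probs), p=probs)     # bin ~ discrete p_phi
    return np.random.uniform(edges[b], edges[b + 1])

# Hypothetical usage with 100 bins over an illustrative torso-size support:
# z = sample_context(p_phi, 0.01, 0.09)
```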
We compare the test-time jump-start and asymptotic performance of policies learned with p_φ(z) (learned domain randomization) and p(z) (fixed domain randomization). At test time, we sample (uniformly at random) a test set of 50 samples from the support of p(z) and run policy search optimization, initializing the policy with the parameters obtained at training time. The questions we aim to answer with our experiments are: 1) does learning policies with wide DR distributions affect the performance of the policy in the environments where the task is solvable? 2) does the learned DR distribution converge to the actual ranges where the task is solvable? 3) is learning the DR distribution beneficial?
Results
Learned Distribution Ranges: Table 1 shows the ranges for p(z) and the final equivalent ranges for the distributions found by our method. Figures 2 and 3 show the evolution of p_φ(z) during training, using our method. Each plot corresponds to a separate domain randomization experiment, where we randomized one simulator parameter while keeping the rest fixed. Initially, each of these distributions is uniform. As the agent becomes better over the training distribution, it becomes easier to discriminate between promising environments (where the task is solvable) and impossible ones, where rewards stay at low values. After around 1500 epochs, the distributions have converged to their final shapes. For Hopper, the learned distributions correspond closely with the environments where we can find a policy using vanilla policy gradient methods from scratch. To determine the consistency of these results, we ran the Hopper torso size experiment 7 times, and fitted the parameters of a uniform distribution to the resulting p_φ(z). The mean ranges (± one standard deviation) across the 7 experiments were [0.00086 ± 0.00159, 0.09275 ± 0.00342], which provides some evidence for the reproducibility of our method.
Learned vs Fixed Domain Randomization: We compare the jump-start and asymptotic performance between learning the domain randomization distribution and keeping it fixed. We evaluate our method with PPO as the policy optimizer (LSDR) against keeping the domain randomization distribution fixed (Fixed-DR). For both methods, we also compare whether training a context-conditioned policy or a robust (non-contextual) policy is better at generalization. We ran the same experiments for Hopper and Half-Cheetah.
Figures 5 and 6 depict learning curves when fine-tuning the policy at test-time, for torso size randomization. All the methods start with the same random seed at training time. The policies are trained for 3000 epochs, where we collect B = 4000 samples per epoch for the policy update. For the distribution update, we collect K = 10 additional trajectories and run the gradient update M = 10 times (without re-sampling new trajectories). We report averages over 50 different environments (corresponding to samples from p(z), one random seed per environment). For clarity of presentation, we report the comparison over a "reasonable" torso size range (where the locomotion task is feasible) and a "hard" range, where the policy fails to make the robot move forward. For Hopper, the reasonable torso size range corresponds to [0.01, 0.09]; within it, LSDR obtains improvements on both jump-start and asymptotic performance over using fixed domain randomization. On the hard ranges, LSDR performs slightly worse, but in most of the contexts in this range the task is not actually solvable; i.e., the optimal policy found by vanilla PPO does not result in successful locomotion on the hard range. Figures 5 and 6 also compare the performance of a contextual policy to that of a non-contextual policy. Our results show that training a contextual policy boosts performance in both scenarios: when the domain randomization distribution is fixed, and when it is being learned.
Using a different policy optimizer We also experimented with using EPOpt-PPO (Rajeswaran et al., 2016) as the policy optimizer in Algorithm 1. The motivation for this is to mitigate the bias towards environments with higher cumulative rewards J_{M_z}(π) early during training. EPOpt encourages improving the policy on the worst-performing environments, at the expense of collecting more data per policy update. From the resulting 100 trajectories, we use the 10% of trajectories that resulted in the lowest rewards to fill the buffer for a PPO policy update, discarding the rest. Figure 9 compares the effect of learning the domain randomization distribution vs. using a fixed wide range in this setting, showing learning curves for torso randomization on the Hopper task with EPOpt as the policy optimizer; lines represent mean performance, while shaded regions correspond to the maximum and minimum performance over the [0.01, 0.09] torso size range. We found that learning the domain randomization distribution resulted in faster convergence to high-reward policies over the evaluation range [0.01, 0.09], while also achieving slightly better asymptotic performance. We believe this could be a consequence of lower variance in the policy gradient estimates, as the learned p_φ(z) has lower variance than p(z). Interestingly, using EPOpt resulted in a distribution with a wider torso size range than vanilla PPO, from approximately 0.0 to 0.14, demonstrating that optimizing worst-case performance does help in alleviating the bias towards high-reward environments.
Discussion
By allowing the agent to learn a good representative distribution, we are able to solve difficult control tasks whose solution would otherwise rely heavily on a good initial domain randomization range. Our main experimental validation of domain randomization distribution learning is in the domain of simulated robotic locomotion. As shown in our experiments, our method is not sensitive to the initial domain randomization distribution and is able to converge to a more diverse range, while staying within the feasible range.
In this work, we study uni-dimensional context distribution learning. Due to the curse of dimensionality, there are limitations in using a discrete distribution; we are currently experimenting with alternative parameterizations such as truncated normal distributions, approximations of discrete distributions, etc. Using multidimensional contexts should enable an agent trained in simulation to obtain experience that is closer to that of a real world robot, which is the goal of this work. An issue that requires further investigation is the fact that we use the same reward function over all environments, without considering the effect of the simulation parameters on the reward scale. For instance, in a challenging environment, the agent may obtain low rewards but still manage to produce a policy that successfully solves the task; e.g. successful forward locomotion in the Hopper task. A poorly constructed reward may not only lead to undesirable behavior, but may also complicate distribution learning if the scale of the rewards for successful policies varies across contexts. | 3,678
1906.00410 | 2947469668 | Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real world data in order to carefully select the DR distribution, or incorporate real world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements on jump-start and asymptotic performance when transferring a learned policy to the target environment. | @cite_17 propose using Bayesian Optimization (BO) to update the simulation model distribution. This is done by evaluating the improvement over the current policy obtained when running a policy gradient algorithm with data sampled from the current simulator distribution. The parameters of the simulator distribution for the next iteration are selected to maximize said improvement. | {
"abstract": [
"Policy gradient methods have been successfully applied to a variety of reinforcement learning tasks. However, while learning in a simulator, these methods do not utilise the opportunity to improve learning by adjusting certain environment variables: unobservable state features that are randomly determined by the environment in a physical setting, but that are controllable in a simulator. This can lead to slow learning, or convergence to highly suboptimal policies. In this paper, we present contextual policy optimisation (CPO). The central idea is to use Bayesian optimisation to actively select the distribution of the environment variable that maximises the improvement generated by each iteration of the policy gradient method. To make this Bayesian optimisation practical, we contribute two easy-to-compute low-dimensional fingerprints of the current policy. We apply CPO to a number of continuous control tasks of varying difficulty and show that CPO can efficiently learn policies that are robust to significant rare events, which are unlikely to be observable under random sampling but are key to learning good policies."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2804860445"
]
} | Learning Domain Randomization Distributions for Transfer of Locomotion Policies | Deep Reinforcement Learning (Deep-RL) is a powerful technique for synthesizing locomotion controllers for robot systems. Inspired by successes in video games (Mnih et al., 2015) and board games (Silver et al., 2016), recent work has demonstrated the applicability of Deep-RL in robotics. Since the data requirements of Deep-RL make its direct application to real robot systems costly, or even infeasible, a large body of recent work has focused on training controllers in simulation and deploying them on a real robot system. This is particularly challenging, but crucial for the real-world development of these systems.
Robot simulators provide a solution to the data requirements of Deep-RL. Except for simple robot systems in controlled environments, however, real robot experience may not correspond to the situations that were used in simulation; an issue known as the reality gap (Jakobi et al., 1995). One way to address the reality gap is to perform system identification to tune the simulation parameters. This approach works if collecting data on the target system is not prohibitively expensive and the number of parameters of the simulation is small. The reality gap may still exist, however, due to a mis-specification of the simulation model.
Another method to shrink the reality gap is to train policies to maximize performance over a diverse set of simulation models, where the parameters of each model are sampled randomly, an approach known as domain randomization (DR). This aims to address the issue of model misspecification by providing diverse simulated experience. Domain randomization has been demonstrated to effectively produce controllers that can be trained in simulation with a high likelihood of successful outcomes on a real robot system after deployment (Andrychowicz et al., 2018) and fine-tuning with real world data (Chen et al., 2018).
While successful, an aspect that has not been addressed in depth is the selection of the domain randomization distribution. For vision-based components, DR should be tuned so that features learned in simulation do not depend strongly on the appearance of simulated environments. For the control components, the focus of this work, there is a dependency between optimal behaviour and the dynamics of the environment. In this case, the DR distribution should be selected carefully to ensure that the real robot experience is represented in the simulated experience sampled under DR. If real robot data is available, one could use gradient-free search (Chebotar et al., 2018) or Bayesian inference (Rajeswaran et al., 2016) to update the DR distribution after executing the learned policy on the target system. These methods are based on the assumption that there is a set of simulators from which real world experience can be synthesized.
In this work we propose to learn the parameters of the simulator distribution, such that the policy is trained over the most diverse set of simulator parameters in which it can plausibly succeed. By making the simulation distribution as wide as possible, we aim to encode the largest set of behaviours possible in a single policy with fixed capacity. As shown in our experiments, training on the widest possible distribution has two problems: our models usually have finite capacity, and picking a domain randomization distribution that is too varied slows down convergence, as shown in Figure 9.
Instead, we let the optimization process focus on environments where the task is feasible. We propose an algorithm that simultaneously learns the domain randomization distribution while optimizing the policy to maximize performance over the learned distribution. To operate over a wide range of possible simulator parameters, we train context-aware policies which take as input the current state of the environment, alongside contextual information describing the sampled parameters of the simulator. This enables our policies to learn context-specific strategies which consider the current dynamics of the environment, rather than an average over all possible simulator parameters. When deployed on the target environment, we concurrently fine-tune the policy parameters while searching for the context that maximizes performance. We evaluate our method on a variety of control problems from the OpenAI Gym suite of benchmarks. We find that our method is able to improve on the performance of fixed domain randomization. Furthermore, we demonstrate our model's robustness to initial simulator distribution parameters, showing that our method repeatably converges to similar domain randomization distributions across different experiments.

Related Work

(Packer et al., 2018) present an empirical study of generalization in Deep-RL, testing interpolation and extrapolation performance of state-of-the-art algorithms when varying simulation parameters in control tasks. The authors provide an experimental assessment of generalization under varying training and testing distributions. Our work extends these results by providing results for the case when the training distribution parameters are learned and change during policy training. (Chebotar et al., 2018) propose training policies on a distribution of simulators, whose parameters are fit to real-world data. Their proposed algorithm switches back and forth between optimizing the policy under the DR distribution and updating the DR distribution by minimizing the discrepancy between simulated and real world trajectories. In contrast, we aim to learn policies that maximize performance over a diverse distribution of environments where the task is feasible, as a way of minimizing the interactions with the real robot system. (Rajeswaran et al., 2016) propose a related approach for learning robust policies over a distribution of simulator models. The proposed approach, based on the ε-percentile conditional value at risk (CVaR) objective (Tamar et al., 2015), improves the policy performance on a small proportion of environments where the policy performs the worst. The authors propose an algorithm that updates the distribution of simulation models to maximize the likelihood of real-world trajectories, via Bayesian inference. The combination of worst-case performance optimization and Bayesian updates ensures that the resulting policy is robust to errors in the estimation of the simulation model parameters. Our method can be combined with the CVaR objective to encourage diversity of the learned DR distribution.
Problem Statement
We consider parametric Markov Decision Processes (MDPs) (Sutton & Barto, 2018). An MDP M is defined by the tuple ⟨S, A, p, r, γ, ρ_0⟩, where S is the set of possible states, A is the set of actions, p : S × A × S → R+ encodes the state transition dynamics, r : S × A → R+ is the task-dependent reward function, γ is a discount factor, and ρ_0 : S → R is the initial state distribution. Let s_t and a_t be the state and action taken at time t. At the beginning of each episode, s_0 ∼ ρ_0(·). Trajectories τ are obtained by iteratively sampling actions from the current policy, a_t ∼ π(a_t|s_t), and evaluating next states according to the transition dynamics, s_{t+1} ∼ p(s_{t+1}|s_t, a_t). Given an MDP M, the goal is then to learn a policy π that maximizes the expected sum of rewards
$J_M(\pi) = \mathbb{E}_\tau[R(\tau) \mid \pi] = \mathbb{E}_\tau\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where $r_t = r(s_t, a_t)$.
In our work, we aim to maximize performance over a distribution of MDPs, each described by a context vector z representing the variables that change over the distribution: changes in transition dynamics, rewards, initial state distribution, etc. Thus, our objective is to maximize $\mathbb{E}_{z \sim p(z)}[J_{M_z}(\pi)]$, where p(z) is the domain randomization distribution. Similar to (Yu et al., 2018; Chen et al., 2018; Rakelly et al., 2019), we condition the policy on the context vector, π(a_t|s_t, z). In the experiments reported in this paper, we let z encode the parameters of the transition model in a physically based simulator; e.g. mass, friction or damping.
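To make the context-conditioning concrete, the following is a minimal PyTorch sketch of a Gaussian policy π(a|s, z) that takes the simulator parameters as an extra input; the architecture and dimensions are illustrative assumptions, not the network used in the paper.

```python
import torch
import torch.nn as nn

class ContextConditionedPolicy(nn.Module):
    """Gaussian policy pi(a | s, z) conditioned on the simulator context z."""

    def __init__(self, state_dim, context_dim, action_dim, hidden=64):
        super().__init__()
        # The context z (e.g. mass, friction, damping) is concatenated
        # with the state before the first layer.
        self.net = nn.Sequential(
            nn.Linear(state_dim + context_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state, context):
        mean = self.net(torch.cat([state, context], dim=-1))
        return torch.distributions.Normal(mean, self.log_std.exp())

# Usage: sample an action for a state under simulator parameters z.
policy = ContextConditionedPolicy(state_dim=11, context_dim=1, action_dim=3)
s, z = torch.randn(1, 11), torch.rand(1, 1)
action = policy(s, z).sample()
```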
Proposed Method
In practice, making the context distribution p(z) as wide as possible may be detrimental to the objective of maximizing performance. For instance, if the distribution has infinite support and wide variance, there may be many environments sampled from the context distribution for which the desired task is impossible (e.g. reaching a target state). Thus, sampling trajectories from a wide context distribution results in high variance in the directions of improvement, slowing progress on policy learning. On the other hand, if we make the context distribution too narrow, policy learning can progress more rapidly but may not generalize to the whole set of possible contexts.
We introduce the LSDR (Learning the Sweet-spot Distribution Range) algorithm for concurrently learning a domain randomization distribution and a robust policy that maximizes performance over it. Instead of directly sampling from p(z), we use a surrogate distribution p_φ(z), with trainable parameters φ. Our goal is to find appropriate parameters φ to optimize π(·|s, z). LSDR proceeds by updating the policy with trajectories sampled from p_φ(z), and updating φ based on the performance of the policy under p(z). To avoid the collapse of the learned distribution, we propose using a regularizer that encourages the distribution to be diverse. The idea is to sample more data from environments where improvement of the policy is possible, without collapsing to environments that are trivial to solve. We summarize our training and testing procedures in Algorithms 1 and 3.
In our experiments, we use Proximal Policy Optimization (PPO) (Schulman et al., 2017) for the UpdatePolicy procedure in Algorithm (1).
Algorithm 1 Learning the policy and training distribution
Input: testing distribution p(z), initial parameters of the learned distribution φ, initial policy π, buffer size B, total iterations N
for i ∈ {1, ..., N} do
  z ∼ p_φ(z); s_0 ∼ ρ_0(s); B = {}
  while |B| < B do
    a_t ∼ π(a_t | s_t, z)
    s_{t+1}, r_t ∼ p(s_{t+1}, r_t | s_t, a_t, z)
    append (s_t, z, a_t, r_t, s_{t+1}) to B
    if s_{t+1} is terminal then
      z ∼ p_φ(z); s_{t+1} ∼ ρ_0(s)
    end if
    s_t ← s_{t+1}
  end while
  φ ← UpdateDistribution(φ, p(z), π)
  π ← UpdatePolicy(π, B)
end for
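For concreteness, the inner data-collection loop of Algorithm 1 might look as follows in a gym-style setting; `sample_context`, `set_sim_params`, and `policy.act` are hypothetical helpers standing in for sampling z ∼ p_φ(z), configuring the simulator, and querying π(a|s, z).

```python
# A minimal sketch of Algorithm 1's data-collection loop; the context is
# resampled from p_phi whenever an episode terminates.
def collect_batch(env, policy, sample_context, set_sim_params, batch_size):
    buffer = []
    z = sample_context()            # z ~ p_phi(z)
    set_sim_params(env, z)
    s = env.reset()                 # s_0 ~ rho_0(s)
    while len(buffer) < batch_size:
        a = policy.act(s, z)
        s_next, r, done, _ = env.step(a)
        buffer.append((s, z, a, r, s_next))
        if done:                    # resample the context on termination
            z = sample_context()
            set_sim_params(env, z)
            s_next = env.reset()
        s = s_next
    return buffer
```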
Algorithm 2 UpdateDistribution
Input: learned distribution parameters φ, testing distribution p(z), policy π, total iterations M, total trajectory samples K
for i ∈ {1, ..., M} do
  sample z_{1:K} from p(z)
  obtain a Monte-Carlo estimate of L_DR(φ) by executing π on environments with contexts z_{1:K}
  φ ← φ + λ∇_φ (L_DR(φ) − α D_KL(p_φ(z) || p(z)))
end for
Algorithm 3 Fine-tuning the policy at test-time
Input: learned distribution parameters φ, policy π, buffer size B, total iterations N
Initialize guess for context vector ẑ ∼ p_φ(z)
for i ∈ {0, ..., N} do
  collect B samples into B by executing policy π(a | s, ẑ)
  π ← UpdatePolicy(π, B)
  ẑ ← UpdatePolicy(ẑ, B)
end for
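One plausible reading of Algorithm 3, assuming the policy is differentiable in its context input, is to treat the context guess ẑ as an extra trainable parameter updated with the same surrogate objective as the policy. In the sketch below, `ppo_loss`, `collect`, `env`, `policy`, `initial_context_guess`, and `num_iters` are all hypothetical stand-ins.

```python
import torch

# Minimal sketch of test-time adaptation: jointly fine-tune the policy
# parameters and the context guess z_hat on data from the target system.
z_hat = torch.tensor(initial_context_guess, requires_grad=True)  # z_hat ~ p_phi(z)
policy_opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
context_opt = torch.optim.Adam([z_hat], lr=1e-2)

for i in range(num_iters):
    batch = collect(env, policy, z_hat.detach())    # B samples with pi(a|s, z_hat)
    loss = ppo_loss(policy, batch, context=z_hat)   # differentiable surrogate
    policy_opt.zero_grad(); context_opt.zero_grad()
    loss.backward()
    policy_opt.step()      # pi    <- UpdatePolicy(pi, B)
    context_opt.step()     # z_hat <- updated context guess
```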
Learning the Sweet-spot Distribution Range
The goal of our method is to find a training distribution p_φ(z) that maximizes the expected reward of the policy under the test distribution, while gradually reducing the sampling frequency of environments that make the task unsolvable (we consider a task solvable if there exists a policy that brings the environment to a set of desired goal states). Such a situation is common in physics-based simulations, where a bad selection of simulation parameters may lead to environments where the task is impossible due to physical limits or unstable simulations.
We start by assuming that the test distribution p(z) has wide but bounded support, such that we get a distribution of solvable and unsolvable tasks. To update the training distribution, we use an objective of the following form
$\arg\max_{\phi} \; L_{DR}(\phi) - \alpha D_{KL}(p_\phi(z) \,\|\, p(z)) \qquad (1)$
where the first term is designed to encourage improvement on environments that are more likely to be solvable, while the second term is a regularizer that keeps the distribution from collapsing. In our experiments, we set
$L_{DR}(\phi) = \mathbb{E}_{z \sim p(z)}[J_{M_z}(\pi) \log p_\phi(z)]$.
Optimizing this objective encourages focusing on environments where the current policy performs the best. Other suitable objectives are the improvement over the previous policy, $\mathbb{E}_z[J_{M_z}(\pi_i) - J_{M_z}(\pi_{i-1})]$, or an estimate of the performance of the context-dependent optimal policy, $\mathbb{E}_z[\hat{J}_{M_z}(\pi^*)]$.
If we use the performance of the policy as a way of determining whether the task is solvable for a given context z, then a trivial solution would be to make p_φ(z) concentrate on a few easy environments. The second term in Eq. (1) helps to avoid this issue by penalizing distributions that deviate too much from p(z), which is assumed to be wide. When p(z) is uniform, this is equivalent to maximizing the entropy of p_φ(z).
To estimate the gradient of Eq. (1) with respect to φ, we use the log-derivative score function gradient estimator (Fu, 2006), resulting in the following Monte-Carlo update:
$\phi \leftarrow \phi + \lambda \left( \frac{1}{K} \sum_{i=1}^{K} J_{M_{z_i}}(\pi) \nabla_\phi \log p_\phi(z_i) - \alpha \nabla_\phi D_{KL}(p_\phi(z) \,\|\, p(z)) \right) \qquad (2)$
where z_i ∼ p(z). Updating φ with samples from the distribution we are learning has the problem that we never obtain information about the performance of the policy in contexts with low probability under p_φ(z). This is problematic since, if a context z_k were assigned a low probability early in training, we would require a large number of samples to update its probability, even if the policy performs well on z_k during later stages of training. To address this issue, we use samples from p(z) to evaluate the gradient of L_DR(φ). While changing the sampling distribution introduces bias, which could be corrected by using importance sampling, we find that both the second term in Eq. (1) and sampling from p(z) are crucial to avoid the collapse of the learned distribution (see Fig. 4). To ensure that the two terms in Eq. (1) have similar scale, we standardize the evaluations of $J_{M_{z_i}}$ with exponentially averaged batch statistics and set α to the fixed value of $H(p(z))^{-1}$.

We evaluate the impact of learning the DR distribution on two standard benchmark locomotion tasks: Hopper and Half-Cheetah, MuJoCo tasks (illustrated in Fig. 1) from the OpenAI Gym suite (Brockman et al., 2016). We use an explicit encoding of the context vector z, corresponding to the torso size, density, foot friction and joint damping of the environments. In this work, we focus on uni-dimensional domain randomization contexts and run experiments for each context variable independently (to enable learning distributions over multi-dimensional contexts, we are exploring parameterizations, different from the discrete distribution, that do not suffer from the curse of dimensionality). We selected p(z) as a uniform distribution over ranges that include both solvable and unsolvable environments. We initialize p_φ(z) to be the same as p(z). In these experiments, both distributions are implemented as discrete distributions with 100 bins. When sampling from this distribution, we first select a bin according to the discrete probabilities, then select a continuous context value uniformly at random from the range of the corresponding bin.
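A minimal NumPy sketch of this binned distribution and of the update in Eq. (2) is given below. The softmax (logits) parameterization of p_φ(z) is our assumption (the paper only states that p_φ is a 100-bin discrete distribution), the support bounds are illustrative, and the returns are assumed to be standardized before being passed in. Since p(z) is uniform, the gradient of −D_KL(p_φ ∥ p) reduces to the entropy gradient.

```python
import numpy as np

n_bins, lo, hi = 100, 0.01, 0.3          # illustrative support of p(z)
edges = np.linspace(lo, hi, n_bins + 1)
phi = np.zeros(n_bins)                    # logits; initially uniform

def p_phi():
    e = np.exp(phi - phi.max())
    return e / e.sum()

def sample_context(rng):
    b = rng.choice(n_bins, p=p_phi())     # pick a bin...
    return rng.uniform(edges[b], edges[b + 1]), b  # ...then uniform inside it

def update_phi(bins, returns, lr=0.1, alpha=1.0):
    p = p_phi()
    grad = np.zeros_like(phi)
    for b, J in zip(bins, returns):       # score-function term of Eq. (2),
        grad += J * (np.eye(n_bins)[b] - p) / len(returns)  # with z_i ~ p(z)
    # With uniform p(z), -grad KL(p_phi || p) equals the entropy gradient,
    # pushed through the softmax parameterization:
    g = -p * (np.log(p + 1e-12) + 1.0)
    ent_grad = g - p * g.sum()
    return phi + lr * (grad + alpha * ent_grad)
```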
Experiments
We compare the test-time jump-start and asymptotic performance of policies learned with p φ (z) (learned domain randomization) and p(z) (fixed domain randomization). At test time, we sample (uniformly at random) a test set of 50 samples from the support of p(z) and run policy search optimization, initializing the policy with the parameters obtained at training time. The questions we aim to answer with our experiments are: 1) does learning policies with wide DR distributions affect the performance of the policy in the environments where the task is solvable? 2) does learning the DR distribution converge to the actual ranges where the task is solvable? 3) Is learning the DR distribution beneficial?
Results
Learned Distribution Ranges: Table 1 shows the ranges for p(z) and the final equivalent ranges for the distributions found by our method. Figures 2 and 3 show the evolution of p_φ(z) during training, using our method. Each plot corresponds to a separate domain randomization experiment, where we randomized a different simulator parameter while keeping the rest fixed. Initially, each of these distributions is uniform. As the agent becomes better over the training distribution, it becomes easier to discriminate between promising environments (where the task is solvable) and impossible ones, where rewards stay at low values. After around 1500 epochs, the distributions have converged to their final shapes. For Hopper, the learned distributions correspond closely with the environments where we can find a policy using vanilla policy gradient methods from scratch. To determine the consistency of these results, we ran the Hopper torso size experiment 7 times, and fitted the parameters of a uniform distribution to the resulting p_φ(z). The mean ranges (± one standard deviation) across the 7 experiments were [0.00086 ± 0.00159, 0.09275 ± 0.00342], which provides some evidence for the reproducibility of our method.
Learned vs Fixed Domain Randomization: We compare the jump-start and asymptotic performance between learning the domain randomization distribution and keeping it fixed. Our results compare our method using PPO as the policy optimizer (LSDR) against keeping the domain randomization distribution fixed (Fixed-DR). For these methods, we also compare whether training a context-conditioned policy or a robust (non-contextual) policy is better at generalization. We ran the same experiments for Hopper and Half-Cheetah.
Figures 5 and 6 depict learning curves when fine-tuning the policy at test time, for torso size randomization. All the methods start with the same random seed at training time. The policies are trained for 3000 epochs, where we collect B = 4000 samples per epoch for the policy update. For the distribution update, we collect K = 10 additional trajectories and run the gradient update M = 10 times (without re-sampling new trajectories). We report averages over 50 different environments (corresponding to samples from p(z), one random seed per environment). For clarity of presentation, we report the comparison over a "reasonable" torso size range (where the locomotion task is feasible) and a "hard" range, where the policy fails to make the robot move forward. For Hopper, the reasonable torso size range corresponds to [0.01, 0.09]. LSDR results in improved jump-start and asymptotic performance over using fixed domain randomization within the reasonable range. On the hard ranges, LSDR performs slightly worse, but in most of the contexts in this range the task is not actually solvable; i.e. the optimal policy found by vanilla-PPO does not result in successful locomotion on the hard range. Figures 5 and 6 also compare the performance of a contextual policy to that of a non-contextual policy. Our results show that training a contextual policy boosts performance in both scenarios: when the domain randomization distribution is fixed, and when the distribution is being learned.
Using a different policy optimizer: We also experimented with using EPOpt-PPO (Rajeswaran et al., 2016) as the policy optimizer in Algorithm (1). The motivation for this is to mitigate the bias towards environments with higher cumulative rewards J_{M_z}(π) early during training. EPOpt encourages improving the policy on the worst performing environments, at the expense of collecting more data per policy update. From the resulting 100 trajectories, we use the 10% of trajectories that resulted in the lowest rewards to fill the buffer for a PPO policy update, discarding the rest of the trajectories. Figure 9 compares the effect of learning the domain randomization distribution vs using a fixed wide range in this setting. [Figure 9: Learning curves for torso randomization on the Hopper task, using EPOpt as the policy optimizer. Lines represent mean performance, while the shaded regions correspond to the maximum and minimum performance over the [0.01, 0.09] torso size range. Learning the domain randomization distribution results in faster convergence and higher asymptotic performance.] We found that learning the domain randomization distribution resulted in faster convergence to high-reward policies over the evaluation range [0.01, 0.09], while resulting in slightly better asymptotic performance. We believe this could be a consequence of lower variance in the policy gradient estimates, as the learned p_φ(z) has lower variance than p(z). Interestingly, using EPOpt resulted in a distribution with a wider torso size range than vanilla PPO, from approximately 0.0 to 0.14, demonstrating that optimizing worst-case performance does help in alleviating the bias towards high-reward environments.
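The worst-case filtering step of EPOpt-PPO described above can be sketched as follows; `rollout` is a hypothetical helper returning a trajectory together with its total reward.

```python
# Minimal sketch of EPOpt-style filtering: keep only the worst-performing
# 10% of the sampled trajectories to fill the buffer for the PPO update.
def epopt_batch(env, policy, sample_context, n_traj=100, keep_frac=0.1):
    trajs = [rollout(env, policy, sample_context()) for _ in range(n_traj)]
    trajs.sort(key=lambda t: t[1])                 # ascending total reward
    worst = trajs[: int(keep_frac * n_traj)]       # bottom 10%
    return [traj for traj, _ in worst]             # discard the rest
```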
Discussion
By allowing the agent to learn a good representative distribution, we are able to solve difficult control tasks whose solution heavily relies on a good initial domain randomization range. Our main experimental validation of domain randomization distribution learning is in the domain of simulated robotic locomotion. As shown in our experiments, our method is not sensitive to the initial domain randomization distribution and is able to converge to a more diverse range, while staying within the feasible range.
In this work, we study uni-dimensional context distribution learning. Due to the curse of dimensionality, there are limitations in using a discrete distribution; we are currently experimenting with alternative distributions such as truncated normal distributions, approximations of discrete distributions, etc. Using multidimensional contexts should enable an agent trained in simulation to obtain experience that is closer to a real-world robot, which is the goal of this work. An issue that requires further investigation is the fact that we use the same reward function over all environments, without considering the effect of the simulation parameters on the reward scale. For instance, in a challenging environment, the agent may obtain low rewards but still manage to produce a policy that successfully solves the task; e.g. successful forward locomotion in the Hopper task. A poorly constructed reward may not only lead to undesirable behavior, but may also complicate distribution learning if the scale of the rewards for successful policies varies across contexts. | 3,678
1906.00410 | 2947469668 | Domain randomization (DR) is a successful technique for learning robust policies for robot systems, when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real world data in order to carefully select the DR distribution, or incorporate real world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization distribution with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; and 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite capacity models. These two properties aim to ensure the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements on jump-start and asymptotic performance when transferring a learned policy to the target environment. | @cite_15 also use context-conditioned policies, where the context is implicitly encoded into a vector @math . During the training phase, their proposed algorithm improves on the performance of the policy while learning a probabilistic mapping from trajectory data to context vectors. At test time, the learned mapping is used for online inference of the context vector. This is similar in spirit to the Universal Policies with Online System Identification method @cite_1 , which instead uses deterministic context inference with an explicit context encoding. Again, these methods use a fixed DR distribution and could benefit from adapting it during training, as we propose in this work. | {
"abstract": [
"Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. The also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.",
"We present a new method of learning control policies that successfully operate under unknown dynamic models. We create such policies by leveraging a large number of training examples that are generated using a physical simulator. Our system is made of two components: a Universal Policy (UP) and a function for Online System Identification (OSI). We describe our control policy as universal because it is trained over a wide array of dynamic models. These variations in the dynamic model may include differences in mass and inertia of the robots' components, variable friction coefficients, or unknown mass of an object to be manipulated. By training the Universal Policy with this variation, the control policy is prepared for a wider array of possible conditions when executed in an unknown environment. The second part of our system uses the recent state and action history of the system to predict the dynamics model parameters mu. The value of mu from the Online System Identification is then provided as input to the control policy (along with the system state). Together, UP-OSI is a robust control policy that can be used across a wide range of dynamic models, and that is also responsive to sudden changes in the environment. We have evaluated the performance of this system on a variety of tasks, including the problem of cart-pole swing-up, the double inverted pendulum, locomotion of a hopper, and block-throwing of a manipulator. UP-OSI is effective at these tasks across a wide range of dynamic models. Moreover, when tested with dynamic models outside of the training range, UP-OSI outperforms the Universal Policy alone, even when UP is given the actual value of the model dynamics. In addition to the benefits of creating more robust controllers, UP-OSI also holds out promise of narrowing the Reality Gap between simulated and real physical systems."
],
"cite_N": [
"@cite_15",
"@cite_1"
],
"mid": [
"2952526277",
"2586431331"
]
} | Learning Domain Randomization Distributions for Transfer of Locomotion Policies | Deep Reinforcement Learning (Deep-RL) is a powerful technique for synthesizing locomotion controllers for robot systems. Inspired by successes in video games (Mnih et al., 2015) and board games (Silver et al., 2016), recent work has demonstrated the applicability of Deep-RL in robotics. Since the data requirements for Deep-RL make its direct application to real robot systems costly, or even infeasible, a large body of recent work has focused on training controllers in simulation and deploying them on a real robot system. This is particularly challenging, but it is crucial for the real-world development of these systems.
Robot simulators provide a solution to the data requirements of Deep-RL. Except for simple robot systems in controlled environments, however, real robot experience may not correspond to situations that were used in simulation, an issue known as the reality gap (Jakobi et al., 1995). One way to address the reality gap is to perform system identification to tune the simulation parameters. This approach works if collecting data on the target system is not prohibitively expensive and the number of parameters of the simulation is small. The reality gap may still exist, however, due to a misspecification of the simulation model.
Another method to shrink the reality gap is to train policies to maximize performance over a diverse set of simulation models, where the parameters of each model are sampled randomly, an approach known as domain randomization (DR). This aims to address the issue of model misspecification by providing diverse simulated experience. Domain randomization has been demonstrated to effectively produce controllers that can be trained in simulation with a high likelihood of successful outcomes on a real robot system after deployment (Andrychowicz et al., 2018) and fine-tuning with real world data (Chen et al., 2018).
While successful, an aspect that has not been addressed in depth is the selection of the domain randomization distribution. For vision-based components, DR should be tuned so that features learned in simulation do not depend strongly on the appearance of simulated environments. For the control components, the focus of this work, there is a dependency between optimal behaviour and the dynamics of the environment. In this case, the DR distribution should be selected carefully to ensure that the real robot experience is represented in the simulated experience sampled under DR. If real robot data is available, one could use gradient-free search (Chebotar et al., 2018) or Bayesian inference (Rajeswaran et al., 2016) to update the DR distribution after executing the learned policy on the target system. These methods are based on the assumption that there is a set of simulators from which real world experience can be synthesized.
In this work we propose to learn the parameters of the simulator distribution, such that the policy is trained over the most diverse set of simulator parameters in which it can plausibly succeed. By making the simulation distribution as wide as possible, we aim to encode the largest set of behaviours possible in a single policy with fixed capacity. As shown in our experiments, training on the widest possible distribution has two problems: our models usually have finite capacity, and picking a domain randomization distribution that is too varied slows down convergence, as shown in Figure 9.
Instead, we let the optimization process focus on environments where the task is feasible. We propose an algorithm that simultaneously learns the domain randomization distribution while optimizing the policy to maximize performance over the learned distribution. To operate over a wide range of possible simulator parameters, we train context-aware policies which take as input the current state of the environment, alongside contextual information describing the sampled parameters of the simulator. This enables our policies to learn context-specific strategies which consider the current dynamics of the environment, rather than an average over all possible simulator parameters. When deployed on the target environment, we concurrently fine-tune the policy parameters while searching for the context that maximizes performance. We evaluate our method on a variety of control problems from the OpenAI Gym suite of benchmarks. We find that our method is able to improve on the performance of fixed domain randomization. Furthermore, we demonstrate our model's robustness to initial simulator distribution parameters, showing that our method repeatably converges to similar domain randomization distributions across different experiments.

Related Work

(Packer et al., 2018) present an empirical study of generalization in Deep-RL, testing interpolation and extrapolation performance of state-of-the-art algorithms when varying simulation parameters in control tasks. The authors provide an experimental assessment of generalization under varying training and testing distributions. Our work extends these results by providing results for the case when the training distribution parameters are learned and change during policy training. (Chebotar et al., 2018) propose training policies on a distribution of simulators, whose parameters are fit to real-world data. Their proposed algorithm switches back and forth between optimizing the policy under the DR distribution and updating the DR distribution by minimizing the discrepancy between simulated and real world trajectories. In contrast, we aim to learn policies that maximize performance over a diverse distribution of environments where the task is feasible, as a way of minimizing the interactions with the real robot system. (Rajeswaran et al., 2016) propose a related approach for learning robust policies over a distribution of simulator models. The proposed approach, based on the ε-percentile conditional value at risk (CVaR) objective (Tamar et al., 2015), improves the policy performance on a small proportion of environments where the policy performs the worst. The authors propose an algorithm that updates the distribution of simulation models to maximize the likelihood of real-world trajectories, via Bayesian inference. The combination of worst-case performance optimization and Bayesian updates ensures that the resulting policy is robust to errors in the estimation of the simulation model parameters. Our method can be combined with the CVaR objective to encourage diversity of the learned DR distribution.
Problem Statement
We consider parametric Markov Decision Processes (MDPs) (Sutton & Barto, 2018). An MDP M is defined by the tuple ⟨S, A, p, r, γ, ρ_0⟩, where S is the set of possible states, A is the set of actions, p : S × A × S → R+ encodes the state transition dynamics, r : S × A → R+ is the task-dependent reward function, γ is a discount factor, and ρ_0 : S → R is the initial state distribution. Let s_t and a_t be the state and action taken at time t. At the beginning of each episode, s_0 ∼ ρ_0(·). Trajectories τ are obtained by iteratively sampling actions from the current policy, a_t ∼ π(a_t|s_t), and evaluating next states according to the transition dynamics, s_{t+1} ∼ p(s_{t+1}|s_t, a_t). Given an MDP M, the goal is then to learn a policy π that maximizes the expected sum of rewards
$J_M(\pi) = \mathbb{E}_\tau[R(\tau) \mid \pi] = \mathbb{E}_\tau\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where $r_t = r(s_t, a_t)$.
In our work, we aim to maximize performance over a distribution of MDPs, each described by a context vector z representing the variables that change over the distribution: changes in transition dynamics, rewards, initial state distribution, etc. Thus, our objective is to maximize $\mathbb{E}_{z \sim p(z)}[J_{M_z}(\pi)]$, where p(z) is the domain randomization distribution. Similar to (Yu et al., 2018; Chen et al., 2018; Rakelly et al., 2019), we condition the policy on the context vector, π(a_t|s_t, z). In the experiments reported in this paper, we let z encode the parameters of the transition model in a physically based simulator; e.g. mass, friction or damping.
Proposed Method
In practice, making the context distribution p(z) as wide as possible may be detrimental to the objective of maximizing performance. For instance, if the distribution has infinite support and wide variance, there may be many environments sampled from the context distribution for which the desired task is impossible (e.g. reaching a target state). Thus, sampling trajectories from a wide context distribution results in high variance in the directions of improvement, slowing progress on policy learning. On the other hand, if we make the context distribution too narrow, policy learning can progress more rapidly but may not generalize to the whole set of possible contexts.
We introduce the LSDR (Learning the Sweet-spot Distribution Range) algorithm for concurrently learning a domain randomization distribution and a robust policy that maximizes performance over it. Instead of directly sampling from p(z), we use a surrogate distribution p_φ(z), with trainable parameters φ. Our goal is to find appropriate parameters φ to optimize π(·|s, z). LSDR proceeds by updating the policy with trajectories sampled from p_φ(z), and updating φ based on the performance of the policy under p(z). To avoid the collapse of the learned distribution, we propose using a regularizer that encourages the distribution to be diverse. The idea is to sample more data from environments where improvement of the policy is possible, without collapsing to environments that are trivial to solve. We summarize our training and testing procedures in Algorithms 1 and 3.
In our experiments, we use Proximal Policy Optimization (PPO) (Schulman et al., 2017) for the UpdatePolicy procedure in Algorithm (1).
Algorithm 1 Learning the policy and training distribution
Input: testing distribution p(z), initial parameters of the learned distribution φ, initial policy π, buffer size B, total iterations N
for i ∈ {1, ..., N} do
  z ∼ p_φ(z); s_0 ∼ ρ_0(s); B = {}
  while |B| < B do
    a_t ∼ π(a_t | s_t, z)
    s_{t+1}, r_t ∼ p(s_{t+1}, r_t | s_t, a_t, z)
    append (s_t, z, a_t, r_t, s_{t+1}) to B
    if s_{t+1} is terminal then
      z ∼ p_φ(z); s_{t+1} ∼ ρ_0(s)
    end if
    s_t ← s_{t+1}
  end while
  φ ← UpdateDistribution(φ, p(z), π)
  π ← UpdatePolicy(π, B)
end for
Algorithm 2 UpdateDistribution
Input: learned distribution parameters φ, testing distribution p(z), policy π, total iterations M, total trajectory samples K
for i ∈ {1, ..., M} do
  sample z_{1:K} from p(z)
  obtain a Monte-Carlo estimate of L_DR(φ) by executing π on environments with contexts z_{1:K}
  φ ← φ + λ∇_φ (L_DR(φ) − α D_KL(p_φ(z) || p(z)))
end for
Algorithm 3 Fine-tuning the policy at test-time
Input: learned distribution parameters φ, policy π, buffer size B, total iterations N
Initialize guess for context vector ẑ ∼ p_φ(z)
for i ∈ {0, ..., N} do
  collect B samples into B by executing policy π(a | s, ẑ)
  π ← UpdatePolicy(π, B)
  ẑ ← UpdatePolicy(ẑ, B)
end for
Learning the Sweet-spot Distribution Range
The goal of our method is to find a training distribution p_φ(z) that maximizes the expected reward of the policy under the test distribution, while gradually reducing the sampling frequency of environments that make the task unsolvable (we consider a task solvable if there exists a policy that brings the environment to a set of desired goal states). Such a situation is common in physics-based simulations, where a bad selection of simulation parameters may lead to environments where the task is impossible due to physical limits or unstable simulations.
We start by assuming that the test distribution p(z) has wide but bounded support, such that we get a distribution of solvable and unsolvable tasks. To update the training distribution, we use an objective of the following form
$\arg\max_{\phi} \; L_{DR}(\phi) - \alpha D_{KL}(p_\phi(z) \,\|\, p(z)) \qquad (1)$
where the first term is designed to encourage improvement on environments that are more likely to be solvable, while the second term is a regularizer that keeps the distribution from collapsing. In our experiments, we set
$L_{DR}(\phi) = \mathbb{E}_{z \sim p(z)}[J_{M_z}(\pi) \log p_\phi(z)]$.
Optimizing this objective encourages focusing on environments where the current policy performs the best. Other suitable objectives are the improvement over the previous policy, $\mathbb{E}_z[J_{M_z}(\pi_i) - J_{M_z}(\pi_{i-1})]$, or an estimate of the performance of the context-dependent optimal policy, $\mathbb{E}_z[\hat{J}_{M_z}(\pi^*)]$.
If we use the performance of the policy as a way of determining whether the task is solvable for a given context z, then a trivial solution would be to make p_φ(z) concentrate on a few easy environments. The second term in Eq. (1) helps to avoid this issue by penalizing distributions that deviate too much from p(z), which is assumed to be wide. When p(z) is uniform, this is equivalent to maximizing the entropy of p_φ(z).
To estimate the gradient of Eq. (1) with respect to φ, we use the log-derivative score function gradient estimator (Fu, 2006), resulting in the following Monte-Carlo update:
$\phi \leftarrow \phi + \lambda \left( \frac{1}{K} \sum_{i=1}^{K} J_{M_{z_i}}(\pi) \nabla_\phi \log p_\phi(z_i) - \alpha \nabla_\phi D_{KL}(p_\phi(z) \,\|\, p(z)) \right) \qquad (2)$
where z_i ∼ p(z). Updating φ with samples from the distribution we are learning has the problem that we never obtain information about the performance of the policy in contexts with low probability under p_φ(z). This is problematic since, if a context z_k were assigned a low probability early in training, we would require a large number of samples to update its probability, even if the policy performs well on z_k during later stages of training. To address this issue, we use samples from p(z) to evaluate the gradient of L_DR(φ). While changing the sampling distribution introduces bias, which could be corrected by using importance sampling, we find that both the second term in Eq. (1) and sampling from p(z) are crucial to avoid the collapse of the learned distribution (see Fig. 4). To ensure that the two terms in Eq. (1) have similar scale, we standardize the evaluations of $J_{M_{z_i}}$ with exponentially averaged batch statistics and set α to the fixed value of $H(p(z))^{-1}$.

We evaluate the impact of learning the DR distribution on two standard benchmark locomotion tasks: Hopper and Half-Cheetah, MuJoCo tasks (illustrated in Fig. 1) from the OpenAI Gym suite (Brockman et al., 2016). We use an explicit encoding of the context vector z, corresponding to the torso size, density, foot friction and joint damping of the environments. In this work, we focus on uni-dimensional domain randomization contexts and run experiments for each context variable independently (to enable learning distributions over multi-dimensional contexts, we are exploring parameterizations, different from the discrete distribution, that do not suffer from the curse of dimensionality). We selected p(z) as a uniform distribution over ranges that include both solvable and unsolvable environments. We initialize p_φ(z) to be the same as p(z). In these experiments, both distributions are implemented as discrete distributions with 100 bins. When sampling from this distribution, we first select a bin according to the discrete probabilities, then select a continuous context value uniformly at random from the range of the corresponding bin.
Experiments
We compare the test-time jump-start and asymptotic performance of policies learned with p φ (z) (learned domain randomization) and p(z) (fixed domain randomization). At test time, we sample (uniformly at random) a test set of 50 samples from the support of p(z) and run policy search optimization, initializing the policy with the parameters obtained at training time. The questions we aim to answer with our experiments are: 1) does learning policies with wide DR distributions affect the performance of the policy in the environments where the task is solvable? 2) does learning the DR distribution converge to the actual ranges where the task is solvable? 3) Is learning the DR distribution beneficial?
Results
Learned Distribution Ranges: Table 1 shows the ranges for p(z) and the final equivalent ranges for the distributions found by our method. Figures 2 and 3 show the evolution of p_φ(z) during training, using our method. Each plot corresponds to a separate domain randomization experiment, where we randomized a different simulator parameter while keeping the rest fixed. Initially, each of these distributions is uniform. As the agent becomes better over the training distribution, it becomes easier to discriminate between promising environments (where the task is solvable) and impossible ones, where rewards stay at low values. After around 1500 epochs, the distributions have converged to their final shapes. For Hopper, the learned distributions correspond closely with the environments where we can find a policy using vanilla policy gradient methods from scratch. To determine the consistency of these results, we ran the Hopper torso size experiment 7 times, and fitted the parameters of a uniform distribution to the resulting p_φ(z). The mean ranges (± one standard deviation) across the 7 experiments were [0.00086 ± 0.00159, 0.09275 ± 0.00342], which provides some evidence for the reproducibility of our method.
Learned vs Fixed Domain Randomization: We compare the jump-start and asymptotic performance between learning the domain randomization distribution and keeping it fixed. Our results compare our method using PPO as the policy optimizer (LSDR) against keeping the domain randomization distribution fixed (Fixed-DR). For these methods, we also compare whether training a context-conditioned policy or a robust (non-contextual) policy is better at generalization. We ran the same experiments for Hopper and Half-Cheetah.
Figures 5 and 6 depict learning curves when fine-tuning the policy at test time, for torso size randomization. All the methods start with the same random seed at training time. The policies are trained for 3000 epochs, where we collect B = 4000 samples per epoch for the policy update. For the distribution update, we collect K = 10 additional trajectories and run the gradient update M = 10 times (without re-sampling new trajectories). We report averages over 50 different environments (corresponding to samples from p(z), one random seed per environment). For clarity of presentation, we report the comparison over a "reasonable" torso size range (where the locomotion task is feasible) and a "hard" range, where the policy fails to make the robot move forward. For Hopper, the reasonable torso size range corresponds to [0.01, 0.09]. LSDR results in improved jump-start and asymptotic performance over using fixed domain randomization within the reasonable range. On the hard ranges, LSDR performs slightly worse, but in most of the contexts in this range the task is not actually solvable; i.e. the optimal policy found by vanilla-PPO does not result in successful locomotion on the hard range. Figures 5 and 6 also compare the performance of a contextual policy to that of a non-contextual policy. Our results show that training a contextual policy boosts performance in both scenarios: when the domain randomization distribution is fixed, and when the distribution is being learned.
Using a different policy optimizer: We also experimented with using EPOpt-PPO (Rajeswaran et al., 2016) as the policy optimizer in Algorithm (1). The motivation for this is to mitigate the bias towards environments with higher cumulative rewards J_{M_z}(π) early during training. EPOpt encourages improving the policy on the worst performing environments, at the expense of collecting more data per policy update. From the resulting 100 trajectories, we use the 10% of trajectories that resulted in the lowest rewards to fill the buffer for a PPO policy update, discarding the rest of the trajectories. Figure 9 compares the effect of learning the domain randomization distribution vs using a fixed wide range in this setting. [Figure 9: Learning curves for torso randomization on the Hopper task, using EPOpt as the policy optimizer. Lines represent mean performance, while the shaded regions correspond to the maximum and minimum performance over the [0.01, 0.09] torso size range. Learning the domain randomization distribution results in faster convergence and higher asymptotic performance.] We found that learning the domain randomization distribution resulted in faster convergence to high-reward policies over the evaluation range [0.01, 0.09], while resulting in slightly better asymptotic performance. We believe this could be a consequence of lower variance in the policy gradient estimates, as the learned p_φ(z) has lower variance than p(z). Interestingly, using EPOpt resulted in a distribution with a wider torso size range than vanilla PPO, from approximately 0.0 to 0.14, demonstrating that optimizing worst-case performance does help in alleviating the bias towards high-reward environments.
Discussion
By allowing the agent to learn a good representative distribution, we are able to solve difficult control tasks whose solution heavily relies on a good initial domain randomization range. Our main experimental validation of domain randomization distribution learning is in the domain of simulated robotic locomotion. As shown in our experiments, our method is not sensitive to the initial domain randomization distribution and is able to converge to a more diverse range, while staying within the feasible range.
In this work, we study uni-dimensional context distribution learning. Due to the curse of dimensionality, there are limitations in using a discrete distribution; we are currently experimenting with alternative distributions such as truncated normal distributions, approximations of discrete distributions, etc. Using multidimensional contexts should enable an agent trained in simulation to obtain experience that is closer to a real-world robot, which is the goal of this work. An issue that requires further investigation is the fact that we use the same reward function over all environments, without considering the effect of the simulation parameters on the reward scale. For instance, in a challenging environment, the agent may obtain low rewards but still manage to produce a policy that successfully solves the task; e.g. successful forward locomotion in the Hopper task. A poorly constructed reward may not only lead to undesirable behavior, but may also complicate distribution learning if the scale of the rewards for successful policies varies across contexts. | 3,678
1906.00495 | 2903254834 | Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low-rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers. In this paper, we propose a Truncated CauchyNMF loss that handles outliers by truncating large errors, and develop a Truncated CauchyNMF model to robustly learn the subspace on noisy datasets contaminated by outliers. We theoretically analyze the robustness of Truncated CauchyNMF compared with competing models and prove that Truncated CauchyNMF has a generalization bound which converges at a rate of order @math , where @math is the sample size. We evaluate Truncated CauchyNMF by image clustering on both simulated and real datasets. The experimental results on the datasets containing gross corruptions validate the effectiveness and robustness of Truncated CauchyNMF for learning robust subspaces. | Traditional NMF @cite_9 assumes that noise obeys a Gaussian distribution and derives the following squared @math -norm based objective function: @math , where @math signifies the matrix Frobenius norm. It is commonly known that NMF can be solved by using the multiplicative update rule (MUR, @cite_9 ). Because of the nice mathematical properties of the squared @math -norm and the efficiency of MUR, NMF has been extended to various applications @cite_18 @cite_7 @cite_16 . However, NMF and its extensions are non-robust because the @math -norm is sensitive to outliers. | {
"abstract": [
"Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.",
"Nonnegative matrix factorization (NMF) approximates a given data matrix as a product of two low-rank nonnegative matrices, usually by minimizing the L2 or the KL distance between the data matrix and the matrix product. This factorization was shown to be useful for several important computer vision applications. We propose here two new NMF algorithms that minimize the Earth mover's distance (EMD) error between the data and the matrix product. The algorithms (EMD NMF and bilateral EMD NMF) are iterative and based on linear programming methods. We prove their convergence, discuss their numerical difficulties, and propose efficient approximations. Naturally, the matrices obtained with EMD NMF are different from those obtained with L2-NMF. We discuss these differences in the context of two challenging computer vision tasks, texture classification and face recognition, perform actual NMF-based image segmentation for the first time, and demonstrate the advantages of the new methods with common benchmarks.",
"Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.",
"Let A be a real m×n matrix with m≧n. It is well known (cf. [4]) that @math (1) where @math The matrix U consists of n orthonormalized eigenvectors associated with the n largest eigenvalues of AA T , and the matrix V consists of the orthonormalized eigenvectors of A T A. The diagonal elements of ∑ are the non-negative square roots of the eigenvalues of A T A; they are called singular values. We shall assume that @math Thus if rank(A)=r, σ r+1 = σ r+2=⋯=σ n = 0. The decomposition (1) is called the singular value decomposition (SVD)."
],
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_18",
"@cite_7"
],
"mid": [
"2135029798",
"2105431676",
"2108119513",
"2078841894"
]
} | Truncated Cauchy Non-negative Matrix Factorization | NON-NEGATIVE matrix factorization (NMF, [16]) explores the non-negativity property of data and has received considerable attention in many fields, such as text mining [25], hyper-spectral imaging [26], and gene expression clustering [38]. It decomposes a data matrix into the product of two lower-dimensional non-negative factor matrices by minimizing the Euclidean distance between their product and the original data matrix. Since NMF only allows additive, non-subtractive combinations, it obtains a natural parts-based representation of the data. NMF is optimal when the dataset contains additive Gaussian noise, and so it fails on grossly corrupted datasets, e.g., the AR database [22] where face images are partially occluded by sunglasses or scarves. This is because the corruptions or outliers seriously violate the noise assumption.
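For reference, classical NMF with Lee and Seung's multiplicative update rules for the Euclidean loss can be sketched in a few lines of NumPy; the initialization and iteration count below are illustrative choices.

```python
import numpy as np

# Minimal sketch of NMF: V (m x n, non-negative) ~ W (m x r) @ H (r x n),
# minimizing ||V - WH||_F^2 with multiplicative updates, which keep W and H
# non-negative by construction.
def nmf(V, r, n_iters=200, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```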
Many models have been proposed to improve the robustness of NMF. Hamza and Brady [12] proposed a hypersurface cost based NMF (HCNMF) which minimizes the hypersurface cost function between the data matrix and its approximation (the hypersurface cost function is defined as $h(x) = \sqrt{1+x^2} - 1$; it is quadratic when its argument is small and linear when its argument is large). HCNMF is a significant contribution for improving the robustness of NMF, but its optimization algorithm is time-consuming because the Armijo's rule based line search that it employs is complex. Lam [15] proposed L1-NMF to model the noise in a data matrix by a Laplace distribution (when the noise is modeled by a Laplace distribution, maximum likelihood estimation yields an L1-norm based objective function; we therefore term the method in [15] L1-NMF). Although L1-NMF is less sensitive to outliers than NMF, its optimization is expensive because the L1-norm based loss function is non-smooth. This problem is largely reduced by Manhattan NMF (MahNMF, [11]), which solves L1-NMF by approximating the non-smooth loss function with a smooth one and minimizing the approximated loss function with Nesterov's method [36]. Zhang et al. [29] proposed an L1-norm regularized Robust NMF (RNMF-L1) to recover the uncorrupted data matrix by subtracting a sparse error matrix from the corrupted data matrix. Kong et al. [14] proposed L2,1-NMF to minimize the L2,1-norm of an error matrix to prevent noise of large magnitude from dominating the objective function. Gao et al. [47] further proposed robust capped norm NMF (RCNMF) to filter out the effect of outlier samples by limiting their proportions in the objective function. However, the iterative algorithms utilized in L2,1-NMF and RCNMF converge slowly because they involve a successive use of the power method [1]. Recently, Bhattacharyya et al. [48] proposed an important robust variant of convex NMF which only requires the average L1-norm of noise over large subsets of columns to be small; Pan et al. [49] proposed an L1-norm based robust dictionary learning model; and Gillis and Luce [50] proposed a robust near-separable NMF which can determine the low rank, avoid normalizing data, and filter out outliers. HCNMF, L1-NMF, RNMF-L1, L2,1-NMF, RCNMF, [48], [49] and [50] share a common drawback: they all fail when the dataset is contaminated by serious corruptions, because the breakdown point of the L1-norm based models is determined by the dimensionality of the data [7].
In this paper, we propose a Truncated Cauchy non-negative matrix factorization (Truncated CauchyNMF) model (see footnote 2) to learn a subspace on a dataset contaminated by large magnitude noise or corruption. (Footnote 2: when the noise is modeled by a Laplace distribution, maximum likelihood estimation yields an L1-norm based objective function; we therefore term the method in [15] L1-NMF.) In particular, we propose a Truncated Cauchy loss that simultaneously and appropriately models moderate outliers (because the loss corresponds to a fat-tailed distribution in between the truncation points) and extreme outliers (because the truncation directly cuts off large errors). Based on the proposed loss function, we develop a novel Truncated CauchyNMF model. We theoretically analyze the robustness of Truncated CauchyNMF and show that Truncated CauchyNMF is more robust than a family of NMF models, and derive a theoretical guarantee for its generalization ability and show that Truncated CauchyNMF converges at a rate of order O(√(ln n/n)), where n is the sample size. Truncated CauchyNMF is difficult to optimize because the loss function includes a nonlinear logarithmic function. To address this, we optimize Truncated CauchyNMF by half-quadratic (HQ) programming based on the theory of convex conjugation. HQ introduces a weight for each entry of the data matrix and alternately and analytically updates the weight and updates both factor matrices by easily solving a weighted non-negative least squares problem with Nesterov's method [23]. Intuitively, the introduced weight reflects the magnitude of the error. The heavier the corruption, the smaller the weight, and the less an entry contributes to learning the subspace. By performing truncation on magnitudes of errors, we prove that HQ introduces zero weights for entries with extreme outliers, and thus HQ is able to learn the intrinsic subspace on the inlier entries.
In summary, the contributions of this paper are threefold: (1) we propose a robust subspace learning framework called Truncated CauchyNMF, and develop a Nesterov-based HQ algorithm to solve it; (2) we theoretically analyze the robustness of Truncated CauchyNMF in comparison with a family of NMF models, and provide insight as to why Truncated CauchyNMF is the most robust method; and (3) we theoretically analyze the generalization ability of Truncated CauchyNMF, and provide performance guarantees for the proposed model. We evaluate Truncated CauchyNMF by image clustering on both simulated and real datasets. The experimental results on the datasets containing gross corruptions validate the effectiveness and robustness of Truncated CauchyNMF for learning the subspace.
The rest of this paper is organized as follows: Section 2 describes the proposed Truncated CauchyNMF, Section 3 develops the Nesterov-based half-quadratic (HQ) programming algorithm for solving Truncated CauchyNMF. Section 4 surveys the related works and Section 5 verifies Truncated CauchyNMF on simulated and real datasets. Section 6 concludes this paper. All the proofs are given in the supplementary material.
TRUNCATED CAUCHY NON-NEGATIVE MATRIX FACTORIZATION
Classical NMF [16] is not robust because its loss function e₂(x) = x² is sensitive to outliers, considering that errors of large magnitude dominate the loss function. Although some robust loss functions, such as e₁(x) = |x| for L1-NMF [15], the hypersurface cost e_h(x) = √(1 + x²) − 1 [12], and the Cauchy loss e_c(x; γ) = ln(1 + (x/γ)²), are less sensitive to outliers, they introduce infinite energy for infinitely large noise in the extreme case. To remedy this problem, we propose a Truncated Cauchy loss by truncating the magnitudes of large errors to limit the effects of extreme outliers, i.e.,
e_t(x; γ, ε) = ln(1 + (x/γ)²) if |x| ≤ ε, and e_t(x; γ, ε) = ln(1 + (ε/γ)²) if |x| > ε, (1)
where γ is the scale parameter of the Cauchy distribution and ε is a constant.
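A minimal NumPy sketch of the Truncated Cauchy loss in Eq. (1) makes its two regimes concrete; the default values gamma=1 and eps=5 below simply match the illustrative comparison e_t(x; 1, 5) discussed next, and the function name is ours, not from the paper.

```python
import numpy as np

def truncated_cauchy_loss(x, gamma=1.0, eps=5.0):
    """Eq. (1): ln(1 + (x/gamma)^2) for |x| <= eps, and the constant
    ln(1 + (eps/gamma)^2) once the error magnitude exceeds eps."""
    x = np.asarray(x, dtype=float)
    loss = np.log1p((x / gamma) ** 2)
    cap = np.log1p((eps / gamma) ** 2)
    return np.where(np.abs(x) <= eps, loss, cap)

# Moderate errors grow slowly; extreme errors saturate at the same cap.
print(truncated_cauchy_loss([0.0, 1.0, 5.0, 100.0]))
```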
To study the behavior of the Truncated Cauchy loss, we compare the loss functions e₂(x), e₁(x), e_c(x; 1), e_t(x; 1, 5), and the loss function of the L0-norm, i.e., e₀(x) = 1 if x ≠ 0 and e₀(x) = 0 if x = 0, in Figure 1, because the L0-norm induces robust models. Figure 1(a) shows that when the error is moderately large, e.g., |x| ≤ 5, e_t(x; 1, 5) shifts from e₂(x) to e₁(x) and corresponds to a fat-tailed distribution, which implies that the Truncated Cauchy loss can model moderate outliers well, while e₂(x) cannot because it makes the outliers dominate the objective function. When the error gets larger and larger, e_t(x; 1, 5) gets away from e₁(x) and behaves like e₀(x), and e_t(x; 1, 5) keeps constant once the error exceeds a threshold, e.g., |x| > 5, which implies that the Truncated Cauchy loss can model extreme outliers, whereas neither e₁(x) nor e_c(x; 1) can because they assign infinite energy to infinitely large errors. Intuitively, the Truncated Cauchy loss can model both moderate and extreme outliers well. Figure 1(b) plots the curves of both e_t(x; γ, ε) and e₀(x) with γ varying from 0.001 to 1 and ε accordingly varying from 25 to 200. It shows that e_t(x; γ, ε) behaves closer and closer to e₀(x) as γ approaches zero. By comparing the behaviors of the loss functions, we believe that the Truncated Cauchy loss can induce a robust NMF model. Given n high-dimensional samples arranged in a non-negative matrix V = [v₁, . . . , vₙ] ∈ R^{m×n}_+, Truncated Cauchy non-negative matrix factorization (Truncated CauchyNMF) approximately decomposes V into the product of two lower dimensional non-negative matrices, i.e., V = WH + E, where W ∈ R^{m×r}_+ signifies the basis, H = [h₁, . . . , hₙ] ∈ R^{r×n}_+ signifies the coefficients, and E ∈ R^{m×n} signifies the error matrix, which is measured by using the proposed Truncated Cauchy loss. The objective function of Truncated CauchyNMF can be written as min_{W≥0, H≥0}
(1/2) Σ_ij g(((V − WH)/γ)²_ij), (2)
where g(x) = ln(1 + x) for 0 ≤ x ≤ σ and g(x) = ln(1 + σ) for x > σ is utilized for the convenience of derivation, σ is a truncation parameter, and γ is the scale parameter. We will next show that the truncation parameter σ can be implicitly determined by robust statistics and the scale parameter γ can be estimated by the Nagy algorithm [32]. It is not hard to see that Truncated CauchyNMF includes CauchyNMF as a special case when σ = +∞. Since (2) assigns fixed energy to any large error whose magnitude exceeds γ√σ, Truncated CauchyNMF can filter out any extreme outliers.
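For concreteness, the objective (2) can be evaluated as follows. This is a hedged sketch, with g implemented exactly as the truncated logarithm defined above; it is not the authors' reference code.

```python
import numpy as np

def g(x, sigma):
    # g(x) = ln(1 + x) for 0 <= x <= sigma, and ln(1 + sigma) for x > sigma
    return np.where(x <= sigma, np.log1p(x), np.log1p(sigma))

def truncated_cauchy_objective(V, W, H, gamma, sigma):
    """Objective (2): half the sum of g applied to the squared scaled residuals."""
    R = (V - W @ H) / gamma
    return 0.5 * np.sum(g(R ** 2, sigma))
```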
To illustrate the ability of Truncated CauchyNMF to model outliers, Figure 2 gives an illustrative example that demonstrates its application to corrupted face images. In this example, we select 26 frontal face images of an individual in two sessions from the Purdue AR database [22] (see all face images in Figure 2(a)). In each session, there are 13 frontal face images with different facial expressions, captured under different illumination conditions, with sunglasses, and with a scarf. Each image is cropped into a 165 × 120-dimensional pixel array and reshaped into a 19800-dimensional vector. The face images together compose a 19800 × 26-dimensional non-negative matrix because the pixel values are non-negative. In this experiment, we aim at learning the intrinsically clean face images from the contaminated images. This task is quite challenging because more than half the images are contaminated. Since these images were taken in two sessions, we set the dimensionality low (r = 2) to learn two basis images. Figure 2(b) shows that Truncated CauchyNMF robustly recovers all face images even when they are contaminated by a variety of facial expressions, illumination, and occlusion. Figure 2(c) presents the reconstruction errors and Figure 2(d) shows the basis images, which confirms that Truncated CauchyNMF is able to learn clean basis images with the outliers filtered out.
In the following subsections, we will analyze the generalization ability and robustness of Truncated CauchyNMF. Before that, we introduce Lemma 1, which states that the new representations generated by Truncated CauchyNMF are bounded if the input observations are bounded. This lemma will be utilized in the following analysis with the only assumption that each basis vector is a unit vector. Such an assumption is typical in NMF because the bases W are usually normalized to limit the variance of its local minimizers. We use ‖·‖_p to represent the L_p-norm and ‖·‖ to represent the Euclidean norm. Lemma 1. Assume ‖W_i‖ = 1, i = 1, . . . , r, and that the input observations are bounded, i.e., ‖v‖ ≤ α for some α > 0. Then the new representations are also bounded, i.e., ‖h‖ ≤ 2α + (σα)/(√2 γ).
Although Truncated CauchyNMF (2) has a differentiable objective function, solving it is difficult because the natural logarithmic function is nonlinear. Section 3 will present a half-quadratic (HQ) programming algorithm for solving Truncated CauchyNMF.
Generalization Ability
To analyze the generalization ability of Truncated CauchyNMF, we further assume that samples [v₁, . . . , vₙ] are independent and identically distributed and drawn from a space V with a Borel measure ρ. We use A_·j and A_ij to denote the j-th column and the (i, j)-th entry of a matrix A, respectively, and a_i is the i-th entry of a vector a.
For any W ∈ R^{m×r}_+, we define the reconstruction error of a sample v as follows:
f_W(v) = min_{h ∈ R^r_+} Σ_j g(((v − Wh)/γ)²_j). (4)
Therefore, the objective function of Truncated CauchyNMF (2) can be written as
min_{W≥0, H≥0} (1/2) Σ_ij g(((V − WH)/γ)²_ij) = min_{W≥0} (1/2) Σ_i f_W(v_i). (5)
Let us define the empirical reconstruction error of Truncated CauchyNMF as R_n(f_W) = (1/n) Σ_{i=1}^n f_W(v_i), and the expected reconstruction error of Truncated CauchyNMF as R(f_W) = E_v[(1/n) Σ_{i=1}^n f_W(v_i)]. Intuitively, we want to learn
W* = arg min_{W≥0} R(f_W). (6)
However, since the distribution of v is unknown, we cannot minimize R(f_W) directly. Instead, we use the empirical risk minimization (ERM, [2]) algorithm to learn W_n to approximate W*, as follows:
W_n = arg min_{W≥0} R_n(f_W). (7)
We are interested in the difference between W_n and W*. If the distance is small, we can say that W_n is a good approximation of W*. Here, we measure the distance of their reduced expected reconstruction error as follows:
R(f_{W_n}) − R(f_{W*}) ≤ 2 sup_{f_W ∈ F_W} |R(f_W) − R_n(f_W)|, where F_W = {f_W | W ∈ W = R^{m×r}_+}.
The right hand side is known as the generalization error. Note that since NMF is convex with respect to either W or H but not both, the minimizer f_{W_n} is hard to obtain. In practice, a local minimizer is used as an approximation. Measuring the distance between the local minimizer and the global minimizer is also an interesting and challenging problem.
By analyzing the covering number [30] of the function class F_W and Lemma 1, we derive a generalization error bound for Truncated CauchyNMF as follows: Theorem 1. Let ‖W_·i‖ = 1, i = 1, . . . , r, and F_W = {f_W | W ∈ W = R^{m×r}_+}. Assume that ‖v‖ ≤ α. For any δ > 0, with probability at least 1 − δ, equation (3) holds, where Γ(1/2) = √π, Γ(1) = 1, and Γ(x + 1) = xΓ(x).
Remark 1. Theorem 1 shows that under the setting of our proposed Truncated CauchyNMF, the expected reconstruction error R(f_{W_n}) will converge to R(f_{W*}) with a fast rate of order O(√(ln n/n)), which means that when the sample size n is large, the distance between R(f_{W_n}) and R(f_{W*}) will be small. Moreover, if n is large and a local minimizer W (obtained by optimizing the non-convex objective of Truncated CauchyNMF) is close to the global minimizer W_n, the local minimizer will also be close to the optimal W*. Remark 2. Theorem 1 also implies that for any W learned from (2), the corresponding empirical reconstruction error R_n(f_W) will converge to its expectation with a specific rate guarantee, which means our proposed Truncated CauchyNMF can generalize well to unseen data.
Note that the noise sampled from the Cauchy distribution should not be bounded because the Cauchy distribution is heavy-tailed, and bounded observations always imply bounded noise. However, Theorem 1 keeps the boundedness assumption on the observations for two reasons: (1) the truncated loss function indicates that the observations corresponding to unbounded noise are discarded, and (2) in real applications, the energy of observations should be bounded, which means their L2-norms are bounded.
Robustness Analysis
We next compare the robustness of Truncated CauchyNMF with those of other NMF models by using a sample-weighted procedure interpretation [20]. The sample-weighted procedure compares the robustness of different algorithms from the optimization viewpoint.
Let F(WH) denote the objective function of any NMF problem and f(t) = F(tWH), where t ∈ R. We can verify that the NMF problem is equivalent to finding a pair W, H such that
f′(1) = 0, where f′(t) denotes the derivative of f(t). Let c(V_ij, WH) = (V − WH)_ij (−WH)_ij be the contribution of the j-th entry of the i-th training example to the optimization procedure and e(V_ij, WH) = |V − WH|_ij be an error function. Note that we choose c(V_ij, WH) as the basis of contribution because we choose NMF, which aims to find a pair W, H such that Σ_ij c(V_ij, WH) = 0 and is sensitive to noise, as the baseline for comparing the robustness. Also note that e(V_ij, WH) represents the noise added to the (i, j)-th entry of V. The interpretation of the sample-weighted procedure explains the optimization procedure as being contribution-weighted with respect to the noise.
We compare f′(1) of a family of NMF models in Table 1. Note that since multiplying f′(1) by a constant will not change its zero points, we can normalize the weights of different NMF models to unity when the noise is equal to zero. During the optimization procedures, robust algorithms should assign a small weight to an entry of the training set with large noise. Therefore, by comparing the derivative f′(1), we can easily make the following statements:
(1) L1-NMF (see footnote 4) is more robust to noise and outliers than NMF; Huber-NMF combines the ideas of NMF and L1-NMF; (2) HCNMF, L2,1-NMF, RCNMF, and RNMF-L1 work similarly to L1-NMF because their weights are of order O(1/e(V_ij, WH)) with respect to the noise. It also becomes clear that HCNMF, L2,1-NMF, and RCNMF exploit some data structure information because their weights include the neighborhood information of e(V_ij, WH), and that RNMF-L1 is less sensitive to noise because it employs a sparse matrix S to adjust the weights; (3) the interpretation of the sample-weighted procedure also illustrates why CIM-NMF works well for heavy noise: its weights decrease exponentially when the noise is large; and (4) for the proposed Truncated CauchyNMF, when the noise is larger than a threshold, its weights drop directly to zero, which decreases far faster than the weights of CIM-NMF, and thus Truncated CauchyNMF is very robust to extreme outliers. Finally, we conclude that Truncated CauchyNMF is more robust than the other NMF models with respect to extreme outliers because it has the power to assign smaller weights to heavily corrupted entries.
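The weight comparison above can be made tangible by evaluating, for a few representative rows of Table 1, the per-entry multiplier of c(V_ij, WH) as a function of the error magnitude. The sketch below is illustrative only: the function name and the parameter values gamma=1 and sigma=25 are our own choices.

```python
import numpy as np

def table1_weights(e, gamma=1.0, sigma=25.0):
    """Per-entry multipliers of c(V_ij, WH) versus error magnitude e = |V - WH|_ij."""
    e = np.asarray(e, dtype=float)
    w_nmf = 2.0 * np.ones_like(e)                    # NMF: constant weight
    w_l1 = 1.0 / np.maximum(e, 1e-12)                # L1-NMF: weight ~ 1/e
    w_cauchy = 2.0 / (gamma ** 2 + e ** 2)           # CauchyNMF
    # Truncated CauchyNMF: Cauchy weight inside the threshold, zero beyond it.
    w_trunc = np.where(e <= gamma * np.sqrt(sigma), w_cauchy, 0.0)
    return w_nmf, w_l1, w_cauchy, w_trunc

# At e = 10 (> gamma * sqrt(sigma) = 5) only the truncated weight is exactly zero.
print(table1_weights(np.array([0.1, 1.0, 10.0])))
```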
HALF-QUADRATIC PROGRAMMING ALGORITHM FOR TRUNCATED CAUCHYNMF
Note that Truncated CauchyNMF (2) cannot be solved directly because the energy function g(x) is non-quadratic. We present a half-quadratic (HQ) programming algorithm based on conjugate function theory [9]. (Footnote 4: for the soundness of defining the subgradient of the L1-norm, we state that 0/0 can be any value in [−1, 1].)
Equation (3) (the generalization bound of Theorem 1):
R(f_{W_n}) − R(f_{W*}) ≤ sup_{f_W ∈ F_W} [ E_v (1/(2n)) Σ_ij g(((V − WH)/γ)²_ij) − (1/(2n)) Σ_ij g(((V − WH)/γ)²_ij) ]
≤ min_ε { 2ε + (α²/γ²) √( [ mr ln( 4^{1/m} π^{1/2} (8rα² + 2α²r² + σα²√(2r³) + 2rσ²α²r⁴/ε) √(mr) / Γ(m/2)^{1/m} ) + ln(2/δ) ] / (2n) ) }
≤ 2/n + (α²/γ²) √( [ mr ln( 4^{1/m} π^{1/2} (8rα² + 2α²r² + σα²√(2r³) + 2rσ²α²r⁴) √(mr) n / Γ(m/2)^{1/m} ) + ln(2/δ) ] / (2n) ). (3)
TABLE 1: Objective functions F(WH) and derivatives f′(1) of a family of NMF models (c(V_ij, WH) and e(V_ij, WH) are defined above).
- NMF: F(WH) = ‖V − WH‖²_F; f′(1) = Σ_ij 2c(V_ij, WH).
- HCNMF: F(WH) = Σ_ij (√(1 + (V − WH)²_ij) − 1); f′(1) = Σ_ij [1/√(1 + (V − WH)²_ij)] c(V_ij, WH).
- L2,1-NMF: F(WH) = ‖V − WH‖_{2,1}; f′(1) = Σ_ij [1/√(Σ_l (V − WH)²_lj)] c(V_ij, WH).
- RCNMF: F(WH) = Σ_{j=1}^n min{‖V_·j − WH_·j‖, θ}; f′(1) = Σ_j Σ_i [1/√(Σ_l (V − WH)²_lj)] c(V_ij, WH) if ‖V_·j − WH_·j‖ ≤ θ, and 0 if ‖V_·j − WH_·j‖ ≥ θ.
- RNMF-L1: F(WH) = ‖V − WH − S‖²_F + λ‖S‖₁; f′(1) = Σ_ij 2(1 − S_ij/(V − WH)_ij) c(V_ij, WH).
- L1-NMF: F(WH) = ‖V − WH‖₁; f′(1) = Σ_ij [1/|V − WH|_ij] c(V_ij, WH).
- Huber-NMF: F(WH) = Σ_{i=1}^m Σ_{j=1}^n l((V − WH)_ij, σ), where l(x, σ) = x² for |x| ≤ σ and 2σ|x| − σ² for |x| ≥ σ; f′(1) = Σ_ij 2c(V_ij, WH) for |V − WH|_ij ≤ σ, and Σ_ij [2σ/|V − WH|_ij] c(V_ij, WH) for |V − WH|_ij ≥ σ.
- CIM-NMF: F(WH) = Σ_{i=1}^m Σ_{j=1}^n (1 − (1/(√(2π)σ)) e^{−(V−WH)²_ij/(2σ²)}); f′(1) = Σ_ij [1/(√(2π)σ³)] e^{−(V−WH)²_ij/(2σ²)} c(V_ij, WH).
- CauchyNMF: F(WH) = Σ_ij ln(1 + ((V − WH)/γ)²_ij); f′(1) = Σ_ij [2/(γ² + (V − WH)²_ij)] c(V_ij, WH).
- Truncated CauchyNMF: F(WH) = Σ_ij g(((V − WH)/γ)²_ij), where g(x) = ln(1 + x) for 0 ≤ x ≤ σ and ln(1 + σ) for x > σ; f′(1) = Σ_ij [2/(γ² + (V − WH)²_ij)] c(V_ij, WH) for |V − WH|_ij ≤ γ√σ, and 0 · c(V_ij, WH) for |V − WH|_ij > γ√σ.
To adopt the HQ algorithm, we transform (2) to the following maximization form:
max_{W≥0, H≥0} (1/2) Σ_ij f(((V − WH)/γ)²_ij), (8)
where f(x) = −g(x) is the core function utilized in HQ. Since the negative logarithmic function is convex, f(x) is also convex.
HQ-based Alternating Optimization
Generally speaking, the half-quadratic (HQ) programming algorithm [9] reformulates the non-quadratic loss function as an augmented loss function in an enlarged parameter space by introducing an additional auxiliary variable based on the convex conjugation theory [3]. HQ is equivalent to the quasi-Newton method [24] and has been widely applied in non-quadratic optimization.
Note that the function f(x): R₊ → R is continuous and, according to [3], its conjugate f*(y): R → R ∪ {+∞} is defined as f*(y) = max_{x ∈ R₊} {xy − f(x)}. Since f(x) is convex and closed (although the domain R₊ is open, f(x) is closed; see Section A.3.3 in [3]), the conjugate of its conjugate function is f itself [3], i.e., f** = f. We then have:
Theorem 2. The core function f(x) and its conjugate f*(y) satisfy
f(x) = max_y {yx − f*(y)}, x ∈ R₊, (9)
and the maximizer is y* = −1/(1 + x) if 0 ≤ x ≤ σ, and y* = 0 if x > σ.
By (9), substituting x = ((V − WH)/γ)²_ij, we have the augmented loss function
f(((V − WH)/γ)²_ij) = max_{Y_ij} {Y_ij ((V − WH)/γ)²_ij − f*(Y_ij)}, (10)
where Y_ij is the auxiliary variable introduced by HQ for ((V − WH)/γ)²_ij.
By substituting (10) into (8), we have the objective function in an enlarged parameter space
max_{W≥0, H≥0} {(1/2) Σ_ij max_{Y_ij} {Y_ij ((V − WH)/γ)²_ij − f*(Y_ij)}} = max_{W≥0, H≥0, Y} {(1/2) Σ_ij {Y_ij ((V − WH)/γ)²_ij − f*(Y_ij)}}, (11)
where the equality comes from the separability of the optimization problems with respect to Y_ij. Although the objective function in (8) is non-quadratic, its equivalent problem (11) is essentially a quadratic optimization. In this paper, HQ solves (11) based on the block coordinate descent framework. In particular, HQ recursively optimizes the following three problems. At the t-th iteration,
Y^{t+1}: max_Y (1/2) Σ_ij (Y_ij ((V − W^t H^t)/γ)²_ij − f*(Y_ij)), (12)
H^{t+1}: max_{H≥0} (1/2) Σ_ij Y^{t+1}_ij ((V − W^t H)/γ)²_ij, (13)
W^{t+1}: max_{W≥0} (1/2) Σ_ij Y^{t+1}_ij ((V − W H^{t+1})/γ)²_ij. (14)
Using Theorem 2, we know that the solution of (12) can be expressed analytically as
Y^{t+1}_ij = −1/(1 + ((V − W^t H^t)/γ)²_ij) if |(V − W^t H^t)_ij| ≤ γ√σ, and Y^{t+1}_ij = 0 if |(V − W^t H^t)_ij| > γ√σ.
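A direct NumPy transcription of this analytical update might look as follows; the function name update_Y is ours, not from the paper.

```python
import numpy as np

def update_Y(V, W, H, gamma, sigma):
    """HQ auxiliary-variable update of Eq. (12): -1/(1 + ((V - WH)/gamma)^2)
    on inlier entries, and 0 where |V - WH| exceeds gamma * sqrt(sigma)."""
    E = V - W @ H
    Y = -1.0 / (1.0 + (E / gamma) ** 2)
    Y[np.abs(E) > gamma * np.sqrt(sigma)] = 0.0
    return Y
```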
Since (13) and (14) are symmetric and intrinsically weighted non-negative least squares (WNLS) problems, they can be optimized in the same way using the Nesterov method [10]. Taking (13) as an example, the procedure of its Nesterov based optimization is summarized in Algorithm 1, and its derivative is derived in the supplementary material. Considering that (13) is a constrained optimization problem, similar to [18], we use the following projected gradient-based criterion to check the stationarity of the search point, i.e., ∇^P_j(h_k) = 0, where [∇^P_j(h_k)]_l = [∇_j(h_k)]_l if (h_k)_l > 0, and [∇^P_j(h_k)]_l = min{0, [∇_j(h_k)]_l} if (h_k)_l = 0. Since the above stopping criterion will make OGM run unnecessarily long, similar to [18], we use a relaxed version
‖∇^P_j(h_k)‖_F ≤ max{ε₁, 10⁻³} × ‖∇^P_j(h₀)‖_F, (15)
where ε₁ is a tolerance that controls how far the search point is from a stationary point.
Algorithm 1 Optimal Gradient Method (OGM) for WNLS
Input: V_·j ∈ R^m_+, W^t ∈ R^{m×r}_+, H^t_·j ∈ R^r_+, D^{t+1}_j. Output: H^{t+1}_·j.
1: Initialize z₀ = H^t_·j, h₀ = H^t_·j, α₀ = 1, k = 0.
2: Calculate L_j = ‖W^{tT} D^{t+1}_j W^t‖₂.
repeat
3: ∇_j(z_k) = W^{tT} D^{t+1}_j W^t z_k − W^{tT} D^{t+1}_j V_·j.
4: h_{k+1} = Π₊(z_k − ∇_j(z_k)/L_j).
5: α_{k+1} = (1 + √(4α²_k + 1))/2.
6: z_{k+1} = h_{k+1} + ((α_k − 1)/α_{k+1})(h_{k+1} − h_k).
7: k ← k + 1.
until the stopping criterion (15) is satisfied
8: H^{t+1}_·j = z_k.
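The following is a hedged Python sketch of Algorithm 1, assuming the per-column weighted objective 0.5 Σ_i d_i (v_i − (Wh)_i)²; the stopping rule is a simplified stand-in for criterion (15), and ogm_wnls is our own name for the routine.

```python
import numpy as np

def ogm_wnls(v, W, h0, d, tol=1e-3, max_iter=500):
    """Optimal gradient method for weighted non-negative least squares.
    v: (m,) target column, W: (m, r) basis, h0: (r,) warm start, d: (m,) weights."""
    WDW = W.T @ (d[:, None] * W)                 # W^T D W
    WDv = W.T @ (d * v)                          # W^T D v
    L = max(np.linalg.norm(WDW, 2), 1e-12)       # Lipschitz constant (spectral norm)
    z, h, alpha = h0.copy(), h0.copy(), 1.0
    pg0 = None
    for _ in range(max_iter):
        grad = WDW @ z - WDv
        h_new = np.maximum(z - grad / L, 0.0)    # projection onto the orthant
        alpha_new = (1.0 + np.sqrt(4.0 * alpha ** 2 + 1.0)) / 2.0
        z = h_new + (alpha - 1.0) / alpha_new * (h_new - h)
        h, alpha = h_new, alpha_new
        # projected gradient, in the spirit of the relaxed criterion (15)
        pg = np.where(h > 0, grad, np.minimum(grad, 0.0))
        if pg0 is None:
            pg0 = np.linalg.norm(pg) + 1e-12
        elif np.linalg.norm(pg) <= tol * pg0:
            break
    return h
```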
The complete procedure of the HQ algorithm is summarized in Algorithm 2. The weights of entries and factor matrices are updated recursively until the objective function does not change. We use the following stopping criterion to check the convergence in Algorithm 2:
|F(W^t, H^t) − F(W*, H*)| / |F(W⁰, H⁰) − F(W^t, H^t)| ≤ ε₂, (16)
where ε₂ signifies the tolerance, F(W, H) signifies the objective function of (8), and (W*, H*) signifies a local minimizer (see footnote 5). The stopping criterion (16) implies that HQ stops when the search point is sufficiently close to the minimizer and sufficiently far from the initial point. Line 3 updates the scale parameter by the Nagy algorithm and will be further presented in Section 3.2. Line 4 detects outliers by robust statistics and will be presented in Section 3.3.
The main time cost of Algorithm 2 is incurred on lines 2, 4, 5, 6, 7, 8, and 9. The time complexities of lines 2 and 7 are both O(mnr). According to Algorithm 1, the time complexities of lines 6 and 9 are O(mr²) and O(nr²), respectively. Since line 4 introduces a median operator, its time complexity is O(mn ln(mn)). In summary, the total complexity of Algorithm 2 is O(mn ln(mn) + mnr²).
Scale Estimation
The parameter estimation problem for the Cauchy distribution has been studied for several decades [32], [33], [34]. Nagy [32] proposed an I-divergence based method, termed the Nagy algorithm for short, to simultaneously estimate the location and scale parameters. (Footnote 5: since any local minimizer is unknown beforehand, we instead utilize (W^{t−1}, H^{t−1}) in our experiments.)
Algorithm 2 Half-quadratic (HQ) Programming Algorithm for Truncated CauchyNMF
Input: V ∈ R^{m×n}_+, r ≪ min{m, n}. Output: W, H.
1: Initialize W⁰ ∈ R^{m×r}_+, H⁰ ∈ R^{r×n}_+, t = 0.
repeat
2: Calculate E^t = V − W^t H^t and Q^{t+1} = 1/(1 + (E^t/γ)²).
3: Update the scale parameter γ based on E^t.
4: Detect the indices Ω(t) of outliers and set Q^{t+1}_{Ω(t)} = 0.
for j = 1, . . . , n do
5: Calculate D^{t+1}_j = diag(Q^{t+1}_·j).
6: Update H^{t+1}_·j by Algorithm 1.
end for
7: Calculate E^t = V − W^t H^{t+1} and Q^{t+1} = 1/(1 + (E^t/γ)²).
for i = 1, . . . , m do
8: Calculate D^{t+1}_i = diag(Q^{t+1}_i·).
9: Update W^{t+1}_i· by Algorithm 1.
end for
10: t ← t + 1.
until the stopping criterion (16) is satisfied
11: W = W^t, H = H^t.
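Putting the pieces together, a sketch of the outer HQ loop of Algorithm 2 could look as follows. It reuses the ogm_wnls helper sketched after Algorithm 1, omits the Nagy update of γ (Section 3.2), and uses the simplified three-sigma outlier test of Section 3.3, so it is illustrative rather than a faithful reimplementation.

```python
import numpy as np

def truncated_cauchy_nmf(V, r, gamma=1.0, n_outer=50, seed=0):
    """Outer HQ loop: alternately reweight entries and solve WNLS subproblems."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_outer):
        E = V - W @ H
        Q = 1.0 / (1.0 + (E / gamma) ** 2)          # step 2: HQ weights
        A = np.abs(E)
        low = A[A <= np.median(A)]                   # step 4: robust outlier test
        Q[A > low.mean() + 3.0 * low.std()] = 0.0
        for j in range(n):                           # steps 5-6: columns of H
            H[:, j] = ogm_wnls(V[:, j], W, H[:, j], Q[:, j])
        E = V - W @ H
        Q = 1.0 / (1.0 + (E / gamma) ** 2)           # step 7: refresh weights
        for i in range(m):                           # steps 8-9: rows of W
            W[i, :] = ogm_wnls(V[i, :], H.T, W[i, :], Q[i, :])
    return W, H
```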
The Nagy algorithm minimizes the discrimination information (see footnote 6) between the empirical distribution of the data points and the prior Cauchy distribution with respect to the parameters. In our Truncated CauchyNMF model (2), the location parameter of the Cauchy distribution is assumed to be zero, and thus we only need to estimate the scale parameter γ.
Here we employ the Nagy algorithm to estimate the scale parameter based on all the residual errors of the data. According to [32], supposing there exists a large number of residual errors, the scale-parameter estimation problem can be formulated as
min_γ D(η_n | f_{0,γ}) = min_γ ∫_{−∞}^{+∞} ln(1/f_{0,γ}(x)) dF_n(x) = min_γ Σ_{k=1}^N (1/N) ln(1/f_{0,γ}(x_k)), (17)
where D(·|·) denotes the discrimination information, the first equality is due to the independence of η_n and γ, and the second equality is due to the law of large numbers. By substituting the probability density function f_{0,γ} of the Cauchy distribution (see footnote 7) into (17), the problem reduces to a one-dimensional optimization over γ. To solve this problem, Nagy [32] proposed an efficient iterative algorithm, i.e.,
γ_{k+1} = γ_k √(1/e⁰_k − 1), k = 0, 1, 2, . . . , (18)
6. The discrimination information of random variable ξ₁ given random variable ξ₂ is defined as D(ξ₁|ξ₂) = ∫_{−∞}^{+∞} ln(f₁(x)/f₂(x)) dF₁(x), where f₁ and f₂ are the PDFs of ξ₁ and ξ₂, and F₁ is the distribution function of ξ₁.
7. The probability density function (PDF) of the Cauchy distribution is f_{x₀,γ}(x) = 1/(πγ(1 + ((x − x₀)/γ)²)), where x₀ is the location parameter, specifying the location of the peak of the distribution, and γ is the scale parameter, specifying the half-width at half-maximum.
where γ₀ > 0 and e⁰_k = (1/(mn)) Σ_{i=1}^m Σ_{j=1}^n 1/(1 + (E_ij/γ_k)²). In [32], Nagy proved that the algorithm (18) converges to a fixed point assuming the number of data points is large enough, and this assumption is reasonable in Truncated CauchyNMF.
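The fixed-point iteration (18) is straightforward to implement. The sketch below assumes the update takes the form γ_{k+1} = γ_k √(1/e⁰_k − 1), which is consistent with e⁰_k ∈ (0, 1); the function name and the stopping tolerance are our own choices.

```python
import numpy as np

def nagy_scale(E, gamma0=1.0, n_iter=100, tol=1e-6):
    """Estimate the Cauchy scale parameter from the residual matrix E via Eq. (18)."""
    gamma = gamma0
    for _ in range(n_iter):
        e0 = np.mean(1.0 / (1.0 + (E / gamma) ** 2))   # e_k^0 in Eq. (18)
        gamma_new = gamma * np.sqrt(1.0 / e0 - 1.0)
        if abs(gamma_new - gamma) <= tol * gamma:       # fixed point reached
            return gamma_new
        gamma = gamma_new
    return gamma
```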
Outlier Rejection
Looking more carefully at (12), (13) and (14), HQ intrinsically assigns a weight to each entry of V with both factor matrices H^{t+1} and W^{t+1} fixed, i.e., Q^{t+1}_ij = 1/(1 + (E^t/γ)²_ij) if |E^t_ij| ≤ γ√σ, and Q^{t+1}_ij = 0 if |E^t_ij| > γ√σ, where E^t denotes the error matrix at the t-th iteration. The larger the magnitude of error for a particular entry, the lighter the weight assigned to it by HQ. Intuitively, the corrupted entry contributes less in learning the intrinsic subspace. If the magnitude of error exceeds the threshold γ√σ, Truncated CauchyNMF assigns zero weights to the corrupted entries to inhibit their contribution to the learned subspace. That is how Truncated CauchyNMF filters out extreme outliers.
However, it is non-trivial to estimate the threshold γ√σ. Here, we introduce a robust statistics-based method to explicitly detect the support of the outliers instead of estimating the threshold. Since the energy function of Truncated CauchyNMF gets close to that of NMF as the error tends towards zero, i.e., lim_{x→0}(ln(1 + x²) − x²) = 0, Truncated CauchyNMF encourages the small-magnitude errors to have a Gaussian distribution. Let Θ^t denote the set of magnitudes of error at the t-th iteration of HQ, i.e., Θ^t = {|E^t_ij| : 1 ≤ i ≤ m, 1 ≤ j ≤ n}, where E^t = V − W^t H^t. It is reasonable to believe that a subset of Θ^t, i.e., Γ^t = {θ ∈ Θ^t : θ ≤ med{Θ^t}}, obeys a Gaussian distribution, where med{Θ^t} signifies the median of Θ^t. Since |Γ^t| = mn/2, it suffices to estimate both the mean µ_t and standard deviation δ_t from Γ^t. According to the three-sigma rule, we detect the outliers as O^t = {τ ∈ Θ^t : |τ − µ_t| > 3δ_t} and output their indices Ω(t).
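A compact implementation of this detection rule, under the simplifying assumption that the one-sided test |E|_ij > µ + 3δ captures the intended outlier set, is:

```python
import numpy as np

def detect_outlier_indices(E):
    """Fit mean and standard deviation on the lower half of |E| (assumed
    roughly Gaussian) and flag entries violating the three-sigma rule."""
    A = np.abs(E)
    lower = A[A <= np.median(A)]      # Gamma^t: the lower half of the magnitudes
    mu, sd = lower.mean(), lower.std()
    return np.argwhere(A > mu + 3.0 * sd)   # indices Omega(t)
```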
To illustrate the effect of outlier rejection, Figure 3 presents a sequence of weighting matrices generated by HQ for the motivating example described in Figure 2. It shows that HQ correctly assigns zero weights for the corrupted entries in only a few iterations and finally detects almost all outliers including illumination, sunglasses, and scarves (see the last column in Figure 3) in the end.
NMF
Traditional NMF [17] assumes that noise obeys a Gaussian distribution and derives the following squared L 2 -norm based objective function:
min_{W≥0, H≥0} ‖V − WH‖²_F, where ‖X‖_F = √(Σ_ij X²_ij) signifies the matrix Frobenius norm.
It is commonly known that NMF can be solved by using the multiplicative update rule (MUR, [17]). Because of the nice mathematical property of squared L 2 -norm and the efficiency of MUR, NMF has been extended for various applications [4] [6] [28]. However, NMF and its extensions are non-robust because the L 2 -norm is sensitive to outliers.
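As a point of reference for the robust variants discussed below, a minimal MUR implementation of classical NMF is sketched here; the small constant eps guards against division by zero and is our own choice.

```python
import numpy as np

def nmf_mur(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative update rule for min ||V - WH||_F^2,
    shown only as the non-robust baseline discussed above."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```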
Hypersurface Cost Based NMF
Hamza and Brady [12] proposed a hypersurface cost based NMF (HCNMF) by minimizing the summation of hypersurface costs of errors, i.e., min_{W≥0, H≥0} Σ_ij δ((V − WH)_ij), where δ(x) = √(1 + x²) − 1 is the hypersurface cost function. According to [12], the hypersurface cost function has a differentiable and bounded influence function. Since the hypersurface cost function is differentiable, HCNMF can be directly solved by using the projected gradient method. However, the optimization of HCNMF is difficult because the Armijo rule based line search is time consuming [12].
L 1 -Norm Based NMF
To improve the robustness of NMF, Lam [15] assumed that noise is independent and identically distributed from a Laplace distribution and proposed L1-NMF as follows: min_{W≥0, H≥0} ‖V − WH‖₁, where ‖X‖₁ = Σ_ij |X_ij| and |·| signifies the absolute value function. Since the L1-norm based loss function is non-smooth, the optimization algorithm in [15] is not scalable on large-scale datasets. Manhattan NMF (MahNMF, [11]) remedies this problem by approximating the loss function of L1-NMF with a smooth function and minimizing the approximated loss function using Nesterov's method. Although L1-NMF is less sensitive to outliers than NMF, it is not sufficiently robust because its breakdown point is related to the dimensionality of data [7].
L 1 -Norm Regularized Robust NMF
Zhang et al. [29] assumed that the dataset contains both Laplace distributed noise and Gaussian distributed noise and proposed an L1-norm regularized Robust NMF (RNMF-L1) as follows: min_{W≥0, H≥0, S} {‖V − WH − S‖²_F + λ‖S‖₁}, where λ is a positive constant that trades off the sparsity of S. Similar to L1-NMF, RNMF-L1 is also less sensitive to outliers than NMF, but they are both non-robust to large numbers of outliers because the L1-minimization model has a low breakdown point. Moreover, it is non-trivial to determine the tradeoff parameter λ.
L 2,1 -Norm Based NMF
Since NMF is substantially a summation of the squared L2-norms of the errors, the large magnitude errors dominate the objective function and cause NMF to be non-robust. To solve this problem, Kong et al. [14] proposed the L2,1-norm based NMF (L2,1-NMF), which minimizes the L2,1-norm of the error matrix, i.e., min_{W≥0, H≥0} ‖V − WH‖_{2,1}, where the L2,1-norm is defined as ‖E‖_{2,1} = Σ_{j=1}^n ‖E_·j‖₂. In contrast to NMF, L2,1-NMF is more robust because the influences of noisy examples are inhibited in learning the subspace.
Robust Capped Norm NMF
Gao et al. [47] proposed a robust capped norm NMF (RCNMF) to completely filter out the effect of outliers by instead minimizing the following objective function: min_{W≥0, H≥0} Σ_{j=1}^n min{‖V_·j − WH_·j‖, θ}, where θ is a threshold that chooses the outlier samples. RCNMF cannot be applied in practical applications because it is non-trivial to determine the pre-defined threshold, and the utilized iterative algorithms in both [14] and [47] converge slowly with the successive use of the power method [1].
Correntropy Induced Metric Based NMF
The most closely-related work is the half-quadratic algorithm for optimizing robust NMF, which includes the Correntropy-Induced Metric (CIM)-based NMF (CIM-NMF) and Huber-NMF by Du et al. [8]. CIM-NMF measures the approximation errors by using CIM [19], i.e., min_{W≥0, H≥0} Σ_{i=1}^m Σ_{j=1}^n ρ((V − WH)_ij, δ), where ρ(x, δ) = 1 − (1/(√(2π)δ)) e^{−x²/(2δ²)}. Since the energy function ρ(x, δ) increases slowly as the error increases, CIM-NMF is insensitive to outliers. In a similar way, Huber-NMF [8] measures the approximation errors by using the Huber function, i.e., min_{W≥0, H≥0} Σ_{i=1}^m Σ_{j=1}^n l((V − WH)_ij, c), where l(x, c) = x² for |x| ≤ c and 2c|x| − c² for |x| ≥ c, and the cutoff c is automatically determined by c = med{|(V − WH)_ij|}.
Truncated CauchyNMF is different from both CIM-NMF and Huber-NMF in four aspects: (1) Truncated CauchyNMF is derived from the proposed Truncated Cauchy loss, which can model both moderate and extreme outliers, whereas neither CIM-NMF nor Huber-NMF can do that; (2) Truncated CauchyNMF demonstrates strong evidence of both robustness and generalization ability, whereas neither CIM-NMF nor Huber-NMF demonstrates evidence of either;
(3) Truncated CauchyNMF iteratively detects outliers by the robust statistics on the magnitude of errors, and thus performs more robustly than CIM-NMF and Huber-NMF in practice; And (4) Truncated CauchyNMF obtains the optima for each factor in each iteration round by solving the weighted non-negative least squares (WNLS) problems, whereas the multiplicative update rules for CIM-NMF and Huber-NMF do not.
EXPERIMENTAL VERIFICATION
We explore both the robustness and the effectiveness of Truncated CauchyNMF on two popular face image datasets, ORL [27] and AR [22], and one object image dataset, i.e., Caltech 101 [44], by comparing with six typical NMF models: (1) L 2 -NMF [16] optimized by NeNMF [10]; (2) L 1 -NMF [15] optimized by MahNMF [11]; (3) RNMF-L 1 [29]; (4) L 2,1 -NMF [14]; (5) CIM-NMF [8]; and (6) Huber-NMF [8]. We first present a toy example to intuitively show the robustness of Truncated CauchyNMF and several clustering experiments on the contaminated ORL dataset to confirm its robustness. We then analyze the effectiveness of Truncated CauchyNMF by clustering and recognizing face images in the AR dataset, and clustering object images in the Caltech 101 dataset.
An Illustrative Study
To illustrate Truncated CauchyNMF's ability to learn a subspace, we apply Truncated CauchyNMF on a synthetic dataset composed of 180 two-dimensional data points (see Figure 4(a)). All data points are distributed in a one-dimensional subspace, i.e., a straight line (y = 0.2x). Both L2-NMF and L1-NMF are applied on this synthetic dataset for comparison. Figure 4(a) shows that all methods learn the intrinsic subspace correctly on the clean dataset. Figures 4(b) to 4(d) demonstrate the robustness of Truncated CauchyNMF on a noisy dataset. First, we randomly select 20 data points and contaminate their x-coordinates, with their y-coordinates retained to simulate outliers. Figure 4(b) shows that L2-NMF fails to recover the subspace in the presence of 1/9 outliers, while both Truncated CauchyNMF and L1-NMF perform robustly in this case. However, the robustness of L1-NMF decreases as the outliers increase. To study this point, we randomly select another 20 data points and contaminate their x-coordinates. Figure 4(c) shows that both L2-NMF and L1-NMF fail to recover the subspace, but Truncated CauchyNMF succeeds. To study the robustness of Truncated CauchyNMF on seriously corrupted datasets, we randomly select an additional 40 data points as outliers.
[Table 2: Relative reconstruction error (%) of L2-NMF, L2,1-NMF, RNMF-L1, L1-NMF, Huber-NMF, CIM-NMF, and CauchyNMF on the ORL dataset contaminated by Laplace noise with deviation varying from 40 to 280.]
We contaminate their y-coordinates while keeping their x-coordinates consistent. Figure 4(d) shows that Truncated CauchyNMF still recovers the intrinsic subspace in the presence of 4/9 outliers while both L2-NMF and L1-NMF fail in this case. In other words, the breakdown point of Truncated CauchyNMF is greater than 44.4%, which is quite close to the highest breakdown point of 50%.
Simulated Corruption
We first evaluate Truncated CauchyNMF's robustness to simulated corruptions. To this end, we add three typical corruptions, i.e., Laplace noise, Salt & Pepper noise, and randomly positioned blocks, to frontal face images from the Cambridge ORL database and compare the clustering performance of our methods with the performance of other methods on these contaminated images. Figure 5 shows example face images contaminated by these corruptions.
The Cambridge ORL database [27] contains 400 frontal face photos of 40 individuals. There are 10 photos of each individual with a variety of lighting, facial expressions and facial details (with-glasses or without-glasses). All photos were taken against the same dark background and each photo was cropped to a 32 × 32 pixel array and normalized to a long vector. The clustering performance is evaluated by two metrics, namely accuracy and normalized mutual information [22]. The number of clusters is set equal to the number of individuals, i.e., 40. Intuitively, the better a model clusters contaminated images, the more robust it is for learning the subspace. In this experiment, we utilize K-means [21] as a baseline. To quantify the robustness of all NMF models, we compare their relative reconstruction errors, i.e., ‖V̄ − WH‖_F/‖V̄‖_F, where V̄ denotes the clean dataset, and W and H signify the factorization results on the contaminated dataset.
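The robustness metric just defined is a one-liner; in the sketch below, V_clean plays the role of V̄ above and the function name is ours.

```python
import numpy as np

def relative_reconstruction_error(V_clean, W, H):
    """||V_clean - WH||_F / ||V_clean||_F, with W, H learned on corrupted data."""
    return np.linalg.norm(V_clean - W @ H) / np.linalg.norm(V_clean)
```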
Laplace Noise
Laplace noise exists in many types of observation, e.g., gradient-based image features such as SIFT [31], but the classical NMF cannot deal with such data because the distributions violate the assumption of classical NMF. In this experiment, we study Truncated CauchyNMF's capacity to deal with Laplace noisy data. We simulate Laplace noise by adding random noise to each pixel of each face image from ORL, where the noise obeys a Laplace distribution Laplace(0, δ). For the purpose of verifying the robustness of Truncated CauchyNMF, we vary the deviation δ from 40 to 280 because the maximum pixel value is 255. Figure 5(a) gives an example face image and its seven noisy versions by adding Laplace noise. Figures 6(a) and 6(b) present the means and standard deviations of accuracy and normalized mutual information of Truncated CauchyNMF and the representative models.
[Table 3: Relative reconstruction error (%) of L2-NMF, L2,1-NMF, RNMF-L1, L1-NMF, Huber-NMF, CIM-NMF, and Truncated CauchyNMF on the ORL dataset contaminated by Salt & Pepper noise with the percentage of corrupted pixels varying from 5% to 60%.]
Figure 6 confirms that NMF models outperform K-means in terms of accuracy and normalized mutual information. L1-NMF outperforms L2-NMF and L2,1-NMF because L1-NMF models Laplace noise better. L1-NMF outperforms RNMF-L1 because L1-NMF assigns smaller weights to large noise than RNMF-L1. CIM-NMF and Huber-NMF perform comparably with L1-NMF when the deviation of Laplace noise is moderate. However, as the deviation increases, their performance is dramatically reduced because large-magnitude outliers seriously influence the factorization results. In contrast, Truncated CauchyNMF outperforms all the representative NMF models and remains stable as deviation varies.
The clustering performance in Figure 6 confirms Truncated CauchyNMF's effectiveness in learning the subspace on the ORL dataset contaminated by Laplace noise. Table 2 compares the relative reconstruction errors of Truncated CauchyNMF and the representative algorithms. It shows that CauchyNMF performs the most robustly in all situations. That is because Truncated CauchyNMF can not only model the simulated Laplace noise but also models the underlying outliers, e.g., glasses, in the ORL dataset.
Salt & Pepper Noise
Salt & Pepper noise is a common type of corruption in images. The removal of Salt & Pepper noise is a challenging task in computer vision since this type of noise contaminates each pixel by zero or the maximum pixel value, and the noise distribution violates the noise assumption of traditional learning models. In this experiment, we verify Truncated CauchyNMF's capacity to handle Salt & Pepper noise. We add Salt & Pepper noise to each frontal face image of the ORL dataset (see Figure 5(b) for the contaminated face images of a certain individual) and compare the clustering performance of Truncated CauchyNMF on the contaminated dataset with that of the representative algorithms. To demonstrate the robustness of Truncated CauchyNMF, we vary the percentage of corrupted pixels from 5% to 60%. For each case of additive Salt & Pepper noise, we repeat the clustering test 10 times and report the average accuracy and average normalized mutual information to eliminate the effect of initial points. Figure 7 shows that all models perform satisfactorily when 5% of the pixels of each image are corrupted. As the number of corrupted pixels increases, the classical L2-NMF is seriously influenced by the Salt & Pepper noise and its performance is dramatically reduced. Although L1-NMF, Huber-NMF and CIM-NMF perform more robustly than L2-NMF, their performance is also degraded when more than 40% of pixels are corrupted. Truncated CauchyNMF performs quite stably even when 40% of pixels are corrupted and outperforms all the representative models in most cases. All the models fail when 60% of pixels are corrupted, because it is difficult to distinguish inliers from outliers in this case. Table 3 gives a comparison of Truncated CauchyNMF and the representative algorithms in terms of relative reconstruction error. It shows that L1-NMF, Huber-NMF, CIM-NMF and Truncated CauchyNMF perform comparably when less than 20% of the pixels are corrupted, but the robustness of L1-NMF, Huber-NMF, and CIM-NMF is unstable as the percentage of corrupted pixels increases. Truncated CauchyNMF performs stably when 30% ∼ 50% of the pixels are corrupted by Salt & Pepper noise. This confirms the robustness of Truncated CauchyNMF.
Contiguous Occlusion
The removal of contiguous segments of an object due to occlusion is a challenging problem in computer vision. Many techniques such as L1-norm minimization and nuclear norm minimization are unable to handle this problem. In this experiment, we utilize contiguous occlusion to simulate extreme outliers. Specifically, we randomly position a b × b-sized block on each face image of the ORL dataset and fill each block with a pixel array whose pixel values equal 550.
To verify the effectiveness of subspace learning, we apply both K-means and all NMF models to the contaminated dataset and compare the clustering performance in terms of both accuracy and normalized mutual information. This task is quite challenging because large numbers of outliers with large magnitudes must be ignored to learn a clean subspace. To study the influence of outliers, we vary the block size b from 10 to 22, where the minimum block size and maximum block size imply 10% and 50% outliers, respectively. Figure 5(c) shows the occluded face images of a certain individual.
[Table 4: Average accuracy (%) and average normalized mutual information (%) of K-means, L2-NMF, L2,1-NMF, RNMF-L1, L1-NMF, Huber-NMF, CauchyNMF, CIM-NMF, and Truncated CauchyNMF on the occluded ORL dataset with block size b varying from 10 to 22 with step size 2.]
Table 4 shows that K-means, L2-NMF, L2,1-NMF, RNMF-L1, L1-NMF, Huber-NMF, and CauchyNMF (see footnote 8) are seriously degraded by the added continuous occlusions. Although CIM-NMF performs robustly when the percentage of outliers is moderate, i.e., 10% (corresponding to b = 10) and 14% (corresponding to b = 12), its performance is unstable when the percentage of outliers reaches 20% (corresponding to b = 14). This is because CIM-NMF keeps energies for extreme outliers and makes a large number of extreme outliers dominate the objective function. By contrast, Truncated CauchyNMF reduces the energies of extreme outliers to zero, and thus performs robustly when the percentage of outliers is less than 40% (corresponding to b = 20).
Real-life Corruption
The previous section has evaluated the robustness of Truncated CauchyNMF under several types of synthetic outliers including Laplace noise, Salt & Pepper noise, and contiguous occlusion. The experimental results show that our methods consistently learn the subspace even when half the pixels in each image are corrupted, while other NMF models fail under this extreme condition. In this section, we evaluate Truncated CauchyNMF's ability to learn the subspace under natural sources of corruption, e.g., contiguous disguise in the AR dataset and object variations in the Caltech-101 dataset.
Contiguous Disguise
The Purdue AR dataset [22] contains 2600 frontal face images taken from 100 individuals comprising 50 males and 50 females in two sessions. (Footnote 8: in this experiment, we compare with CauchyNMF to show the effect of truncation. For CauchyNMF, we set σ = +∞ and adopt the proposed HQ algorithm to solve it.) There is a total of 13 images in each session, including one normal image, three images depicting different facial expressions, three images under varying illumination conditions, three images with sunglasses, and three images with a scarf for each individual. Each image is cropped into a 55×40-dimensional pixel array and reshaped into a 2200-dimensional long vector. Figure 8 gives 20 example images of two individuals and shows that the images with disguises, i.e., sunglasses and scarf, are seriously contaminated by outliers. Therefore, it is quite challenging to correctly group these contaminated images, e.g., the 4th, 5th, 9th and 10th columns in Figure 8, with the clean images, e.g., the 1st and 6th columns in Figure 8. According to the results in Section 5.2.3, Truncated CauchyNMF can handle contiguous occlusions with extreme outliers well; we will therefore show the effectiveness of Truncated CauchyNMF in doing this job.
To evaluate the effectiveness of Truncated CauchyNMF in clustering, we randomly select between two and ten images of each individual to comprise the dataset. By concatenating all the long vectors, we obtain an image intensity matrix denoted as V . We then apply NMF to V to learn the subspace, i.e., V ≈ W H, where the rank of W and H equals the number of clusters. Lastly, we output the cluster labels by performing K-means on H. To eliminate the influence of randomness, we repeat this trial 50 times and report the averaged accuracy and averaged normalized mutual information for comparison. Figure 9 gives both average accuracy and average normalized mutual information in relation to the number of clusters of Truncated CauchyNMF and other NMF models. It shows that Truncated CauchyNMF consistently achieves the highest clustering performance on the AR dataset. This result confirms that Truncated CauchyNMF learns the subspace more effectively than other NMF models, even when the images are contaminated by contiguous disguises such as sunglasses and a scarf.
We further conduct the face recognition experiment on the AR dataset to evaluate the effectiveness of Truncated CauchyNMF. In this experiment, we treat the images taken in the first session as the training set and the images taken in the second session as the test set. This task is challenging because (1) the distribution of the training set is different from that of the test set, and (2) both training and test sets are seriously contaminated by outliers. We first learn a subspace by conducting Truncated CauchyNMF on the whole dataset and then classify each test image by the sparse representation classification method (SRC) [45] on the coefficients of both training images and test images in the learned subspace. Since there are 100 individuals in total and the images of each individual were taken in two sessions, we set the reduced dimensionality of Truncated CauchyNMF to 200. We also conduct other NMF variants with the same setting for comparison. To filter out the influence of continuous occlusions in face recognition, Zhou et al. [46] proposed a sparse error correction method (SEC) which labels each pixel of a test image as occluded or non-occluded by using a Markov random field (MRF) and learns a representation of each test image on the non-occluded pixels. Although SEC succeeds in filtering out the continuous occlusions in the test set, it cannot handle outliers in the training set. By contrast, Truncated CauchyNMF can remove the occlusions from both training and test images, and thus boosts the performance of the subsequent classification. Table 5 shows the face recognition accuracies of NMF variants and SEC. In the AR dataset, each individual contains one normal image and twelve contaminated images under different conditions including varying facial expressions, illuminations, wearing sunglasses, and wearing scarves. In this experiment, we not only show the results on the total test set but also show the results on the test images taken under different conditions separately. Table 5 shows that Truncated CauchyNMF performs the best in most cases; in particular, it performs almost perfectly on normal images. It validates that Truncated CauchyNMF can learn an effective subspace from the contaminated data. In most situations, SEC performs excellently, but the last two columns indicate that the contaminated training images seriously weaken SEC. Truncated CauchyNMF performs well in such situations because it effectively removes the influence of outliers in the subspace learning stage.
Object Variation
The Caltech 101 dataset [44] contains pictures of objects captured from 101 categories. The number of pictures for each category varies from 40 to 800. Figure 10 shows example images from 6 different categories including dolphin, butterfly, sunflower, watch, pizza and cougar body. We extract convolutional neural network (CNN) feature for each image using the Caffe framework [39] and pre-trained model of Imagenet with AlexNet [42]. As objects from the same categories may vary in shape, color and size, and the pictures are taken from different viewpoints, clustering objects of the same category together is a very challenging task. We will show the good performance of Truncated CauchyNMF compared to other methods such as CIM-NMF, Huber-NMF, L 2,1 -NMF, RNMF-L 1 , L 2 -NMF, L 1 -NMF, and K-means.
Following a similar protocol to that in Section 5.3.1, we demonstrate the effectiveness of Truncated CauchyNMF in clustering objects. We test with 2 to 10 randomly selected categories. The image feature matrix is denoted as V. NMFs are applied to V to compute the subspace, i.e., V ≈ WH, where the rank of W and H equals the number of clusters. Cluster labels are obtained by performing K-means on H. We repeated each trial 50 times and computed the averaged accuracies and normalized mutual information among all trials for comparison. Figure 11 presents the accuracy and normalized mutual information versus the number of clusters for different NMF models. Truncated CauchyNMF significantly outperforms other approaches. As the number of categories increases, the accuracy achieved by other NMF models decreases quickly, while Truncated CauchyNMF maintains a strong subspace learning ability. We can see from the figure that Truncated CauchyNMF is more robust to the object variations compared to other models.
[Fig. 11: The clustering performance, in terms of both accuracy and normalized mutual information, of Truncated CauchyNMF, CIM-NMF, Huber-NMF, L2,1-NMF, RNMF-L1, L2-NMF, L1-NMF, and K-means on the Caltech101 dataset with the number of clusters varying from 2 to 10.]
Note that, in all above experiments, we optimized the Truncated CauchyNMF and the other NMF models with different types of algorithms. However, the high performance is not due to the optimization algorithm. To study this point, we applied the Nesterov based HQ algorithm to optimize the representative NMF models and compared their clustering performance on the AR dataset. The results show that Truncated CauchyNMF consistently outperforms the other NMF models. See the supplementary materials for detailed discussions.
CONCLUSION
This paper proposes a Truncated CauchyNMF framework for learning subspaces from corrupted data. We propose a Truncated Cauchy loss which can simultaneously and appropriately model both moderate and extreme outliers, and develop a novel Truncated CauchyNMF model. We theoretically analyze the robustness of Truncated CauchyNMF by comparing it with a family of NMF models, and provide the performance guarantees of Truncated CauchyNMF. Considering that the objective function is neither convex nor quadratic, we optimize Truncated CauchyNMF by using half-quadratic programming and alternately updating both factor matrices. We experimentally verify the robustness and effectiveness of our methods on both synthetic and natural datasets and confirm that Truncated CauchyNMF is robust for learning subspaces even when half the data points are contaminated. Naiyang Guan is currently an Associate Professor with the College of Computer at the National University of Defense Technology of China. He received the BS, MS, and PhD degrees from the National University of Defense Technology. His research interests include machine learning, computer vision, and data mining. He has authored and co-authored 20+ research papers including IEEE T-NNLS, T-IP, T-SP, Neurocomputing, Genes, BMC Genomics, ICDM, IJCAI, ECCV | 10,208
1906.00495 | 2903254834 | Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers. In this paper, we propose a Truncated Cauchy loss that handles outliers by truncating large errors, and develop a Truncated CauchyNMF to robustly learn the subspace on noisy datasets contaminated by outliers. We theoretically analyze the robustness of Truncated CauchyNMF in comparison with the competing models and theoretically prove that Truncated CauchyNMF has a generalization bound which converges at a rate of order @math , where @math is the sample size. We evaluate Truncated CauchyNMF by image clustering on both simulated and real datasets. The experimental results on the datasets containing gross corruptions validate the effectiveness and robustness of Truncated CauchyNMF for learning robust subspaces. | Zhang et al. @cite_17 assumed that the dataset contains both Laplace distributed noise and Gaussian distributed noise and proposed an @math -norm regularized Robust NMF (RNMF- @math ) as follows: @math , where @math is a positive constant that trades off the sparsity of @math . Similar to @math -NMF, RNMF- @math is also less sensitive to outliers than NMF, but they are both non-robust to large numbers of outliers because the @math -minimization model has a low breakdown point. Moreover, it is non-trivial to determine the tradeoff parameter @math . | {
"abstract": [
"Non-negative matrix factorization (NMF) is a recently popularized technique for learning parts-based, linear representations of non-negative data. The traditional NMF is optimized under the Gaussian noise or Poisson noise assumption, and hence not suitable if the data are grossly corrupted. To improve the robustness of NMF, a novel algorithm named robust nonnegative matrix factorization (RNMF) is proposed in this paper. We assume that some entries of the data matrix may be arbitrarily corrupted, but the corruption is sparse. RNMF decomposes the non-negative data matrix as the summation of one sparse error matrix and the product of two non-negative matrices. An efficient iterative approach is developed to solve the optimization problem of RNMF. We present experimental results on two face databases to verify the effectiveness of the proposed method."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2118375674"
]
} | Truncated Cauchy Non-negative Matrix Factorization | NON-NEGATIVE matrix factorization (NMF, [16]) explores the non-negativity property of data and has received considerable attention in many fields, such as text mining [25], hyper-spectral imaging [26], and gene expression clustering [38]. It decomposes a data matrix into the product of two lower dimensional non-negative factor matrices by minimizing the Euclidean distance between their product and the original data matrix. Since NMF only allows additive, non-subtractive combinations, it obtains a natural parts-based representation of the data. NMF is optimal when the dataset contains additive Gaussian noise, and so it fails on grossly corrupted datasets, e.g., the AR database [22], where face images are partially occluded by sunglasses or scarves. This is because the corruptions or outliers seriously violate the noise assumption.
Many models have been proposed to improve the robustness of NMF. Hamza and Brady [12] proposed a hypersurface cost based NMF (HCNMF) which minimizes the hypersurface cost function (see footnote 1) between the data matrix and its approximation. (Footnote 1: the hypersurface cost function is defined as h(x) = √(1 + x²) − 1, which is quadratic when its argument is small and linear when its argument is large.) HCNMF is a significant contribution for improving the robustness of NMF, but its optimization algorithm is time-consuming because the Armijo rule based line search that it employs is complex. Lam [15] proposed L1-NMF (see footnote 2) to model the noise in a data matrix by a Laplace distribution. Although L1-NMF is less sensitive to outliers than NMF, its optimization is expensive because the L1-norm based loss function is non-smooth. This problem is largely reduced by Manhattan NMF (MahNMF, [11]), which solves L1-NMF by approximating the non-smooth loss function with a smooth one and minimizing the approximated loss function with Nesterov's method [36]. Zhang et al. [29] proposed an L1-norm regularized Robust NMF (RNMF-L1) to recover the uncorrupted data matrix by subtracting a sparse error matrix from the corrupted data matrix. Kong et al. [14] proposed L2,1-NMF to minimize the L2,1-norm of an error matrix to prevent noise of large magnitude from dominating the objective function. Gao et al. [47] further proposed robust capped norm NMF (RCNMF) to filter out the effect of outlier samples by limiting their proportions in the objective function. However, the iterative algorithms utilized in L2,1-NMF and RCNMF converge slowly because they involve a successive use of the power method [1]. Recently, Bhattacharyya et al. [48] proposed an important robust variant of convex NMF which only requires the average L1-norm of noise over large subsets of columns to be small; Pan et al. [49] proposed an L1-norm based robust dictionary learning model; and Gillis and Luce [50] proposed a robust near-separable NMF which can determine the low rank, avoid normalizing data, and filter out outliers. HCNMF, L1-NMF, RNMF-L1, L2,1-NMF, RCNMF, [48], [49] and [50] share a common drawback, i.e., they all fail when the dataset is contaminated by serious corruptions because the breakdown point of the L1-norm based models is determined by the dimensionality of the data [7].
In this paper, we propose a Truncated Cauchy non-negative matrix factorization (Truncated CauchyNMF) model to learn a subspace on a dataset contaminated by large magnitude noise or corruption. In particular, we propose a Truncated Cauchy loss that simultaneously and appropriately models moderate outliers (because the loss corresponds to a fat-tailed distribution in-between the truncation points) and extreme outliers (because the truncation directly cuts off large errors). Based on the proposed loss function, we develop a novel Truncated CauchyNMF model. We theoretically analyze the robustness of Truncated CauchyNMF, showing that it is more robust than a family of NMF models, and derive a theoretical guarantee for its generalization ability, showing that Truncated CauchyNMF converges at a rate of order $O(\sqrt{\ln n/n})$, where $n$ is the sample size. Truncated CauchyNMF is difficult to optimize because the loss function includes a nonlinear logarithmic function. To address this, we optimize Truncated CauchyNMF by half-quadratic (HQ) programming based on the theory of convex conjugation. HQ introduces a weight for each entry of the data matrix and alternately updates the weights analytically and updates both factor matrices by solving weighted non-negative least squares problems with Nesterov's method [23]. Intuitively, the introduced weight reflects the magnitude of the error: the heavier the corruption, the smaller the weight, and the less an entry contributes to learning the subspace. By performing truncation on the magnitudes of errors, we prove that HQ introduces zero weights for entries with extreme outliers, and thus HQ is able to learn the intrinsic subspace on the inlier entries.
In summary, the contributions of this paper are threefold: (1) we propose a robust subspace learning framework called Truncated CauchyNMF, and develop a Nesterov-based HQ algorithm to solve it; (2) we theoretically analyze the robustness of Truncated CauchyNMF in comparison with a family of NMF models, and provide insight as to why Truncated CauchyNMF is the most robust method; and (3) we theoretically analyze the generalization ability of Truncated CauchyNMF, and provide performance guarantees for the proposed model. We evaluate Truncated CauchyNMF by image clustering on both simulated and real datasets. The experimental results on the datasets containing gross corruptions validate the effectiveness and robustness of Truncated CauchyNMF for learning the subspace.
The rest of this paper is organized as follows: Section 2 describes the proposed Truncated CauchyNMF, Section 3 develops the Nesterov-based half-quadratic (HQ) programming algorithm for solving Truncated CauchyNMF. Section 4 surveys the related works and Section 5 verifies Truncated CauchyNMF on simulated and real datasets. Section 6 concludes this paper. All the proofs are given in the supplementary material.
TRUNCATED CAUCHY NON-NEGATIVE MATRIX FACTORIZATION
Classical NMF [16] is not robust because its loss function $e_2(x)=x^2$ is sensitive to outliers, considering that errors of large magnitude dominate the loss function. Although some robust loss functions, such as $e_1(x)=|x|$ for $L_1$-NMF [15], the hypersurface cost $e_h(x)=\sqrt{1+x^2}-1$ [12], and the Cauchy loss $e_c(x;\gamma)=\ln(1+(x/\gamma)^2)$, are less sensitive to outliers, they introduce infinite energy for infinitely large noise in the extreme case. To remedy this problem, we propose a Truncated Cauchy loss by truncating the magnitudes of large errors to limit the effects of extreme outliers, i.e.,
$$e_t(x;\gamma,\varepsilon)=\begin{cases}\ln\big(1+(x/\gamma)^2\big), & |x|\le\varepsilon\\ \ln\big(1+(\varepsilon/\gamma)^2\big), & |x|>\varepsilon,\end{cases}\qquad(1)$$
where γ is the scale parameter of the Cauchy distribution and ε is a constant.
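To make the shape of the loss concrete, the following NumPy sketch evaluates $e_t(x;\gamma,\varepsilon)$; the function name and the sample inputs are our own illustration, not code from the paper.

import numpy as np

def truncated_cauchy_loss(x, gamma=1.0, eps=5.0):
    # e_t(x; gamma, eps) from equation (1): Cauchy loss inside the
    # truncation points, constant energy outside them.
    x = np.asarray(x, dtype=float)
    inside = np.log1p((x / gamma) ** 2)
    capped = np.log1p((eps / gamma) ** 2)
    return np.where(np.abs(x) <= eps, inside, capped)

# Moderate errors grow slowly; extreme errors are capped at the same value.
print(truncated_cauchy_loss([0.0, 1.0, 5.0, 100.0]))

The last two printed values coincide, which is exactly the truncation effect used later to reject extreme outliers.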
To study the behavior of the Truncated Cauchy loss, we compare the loss functions $e_2(x)$, $e_1(x)$, $e_c(x;1)$, $e_t(x;1,5)$, and the loss function of the $L_0$-norm, i.e., $e_0(x)=\begin{cases}1,&x\neq 0\\0,&x=0\end{cases}$ in Figure 1, because the $L_0$-norm induces robust models. Figure 1(a) shows that when the error is moderately large, e.g., $|x|\le 5$, $e_t(x;1,5)$ shifts from $e_2(x)$ to $e_1(x)$ and corresponds to a fat-tailed distribution; this implies that the Truncated Cauchy loss can model moderate outliers well, while $e_2(x)$ cannot, because it lets the outliers dominate the objective function. When the error gets larger and larger, $e_t(x;1,5)$ moves away from $e_1(x)$ and behaves like $e_0(x)$: it stays constant once the error exceeds a threshold, e.g., $|x|>5$. This implies that the Truncated Cauchy loss can model extreme outliers, whereas neither $e_1(x)$ nor $e_c(x;1)$ can, because they assign infinite energy to infinitely large errors. Intuitively, the Truncated Cauchy loss can model both moderate and extreme outliers well. Figure 1(b) plots the curves of both $e_t(x;\gamma,\varepsilon)$ and $e_0(x)$ with $\gamma$ varying from 0.001 to 1 and $\varepsilon$ accordingly varying from 25 to 200. It shows that $e_t(x;\gamma,\varepsilon)$ behaves closer and closer to $e_0(x)$ as $\gamma$ approaches zero. By comparing the behaviors of these loss functions, we believe that the Truncated Cauchy loss can induce a robust NMF model. Given $n$ high-dimensional samples arranged in a non-negative matrix $V=[v_1,\dots,v_n]\in\mathbb{R}^{m\times n}_+$, Truncated Cauchy non-negative matrix factorization (Truncated CauchyNMF) approximately decomposes $V$ into the product of two lower-dimensional non-negative matrices, i.e., $V=WH+E$, where $W\in\mathbb{R}^{m\times r}_+$ signifies the basis, $H=[h_1,\dots,h_n]\in\mathbb{R}^{r\times n}_+$ signifies the coefficients, and $E\in\mathbb{R}^{m\times n}$ signifies the error matrix, which is measured using the proposed Truncated Cauchy loss. The objective function of Truncated CauchyNMF can be written as
$$\min_{W\ge 0,\,H\ge 0}\ \frac{1}{2}\sum_{ij} g\Big(\Big(\frac{V-WH}{\gamma}\Big)_{ij}^{2}\Big),\qquad(2)$$
where $g(x)=\begin{cases}\ln(1+x),&0\le x\le\sigma\\\ln(1+\sigma),&x>\sigma\end{cases}$ is utilized for the convenience of derivation, $\sigma$ is a truncation parameter, and $\gamma$ is the scale parameter. We will next show that the truncation parameter $\sigma$ can be implicitly determined by robust statistics and the scale parameter $\gamma$ can be estimated by the Nagy algorithm [32]. It is not hard to see that Truncated CauchyNMF includes CauchyNMF as a special case when $\sigma=+\infty$. Since (2) assigns fixed energy to any large error whose magnitude exceeds $\gamma\sqrt{\sigma}$, Truncated CauchyNMF can filter out any extreme outliers.
To illustrate the ability of Truncated CauchyNMF to model outliers, Figure 2 gives an illustrative example that demonstrates its application to corrupted face images. In this example, we select 26 frontal face images of an individual in two sessions from the Purdue AR database [22] (see all face images in Figure 2(a)). In each session, there are 13 frontal face images with different facial expressions, captured under different illumination conditions, with sunglasses, and with a scarf. Each image is cropped into a $165\times120$-dimensional pixel array and reshaped into a 19800-dimensional vector. Together, the face images compose a $19800\times26$-dimensional non-negative matrix, because the pixel values are non-negative. In this experiment, we aim at learning the intrinsically clean face images from the contaminated images. This task is quite challenging because more than half the images are contaminated. Since these images were taken in two sessions, we set the dimensionality low ($r=2$) to learn two basis images. Figure 2(b) shows that Truncated CauchyNMF robustly recovers all face images even when they are contaminated by a variety of facial expressions, illumination conditions, and occlusions. Figure 2(c) presents the reconstruction errors and Figure 2(d) shows the basis images, which confirms that Truncated CauchyNMF is able to learn clean basis images with the outliers filtered out.
In the following subsections, we will analyze the generalization ability and robustness of Truncated CauchyNMF. Before that, we introduce Lemma 1, which states that the new representations generated by Truncated CauchyNMF are bounded if the input observations are bounded. This lemma will be utilized in the following analysis with the only assumption that each basis vector is a unit vector. Such an assumption is typical in NMF because the bases $W$ are usually normalized to limit the variance of its local minimizers. We use $\|\cdot\|_p$ to represent the $L_p$-norm and $\|\cdot\|$ to represent the Euclidean norm. Lemma 1. Assuming $\|W_{\cdot i}\|=1$, $i=1,\dots,r$, and that the input observations are bounded, i.e., $\|v\|\le\alpha$ for some $\alpha>0$, the new representations are also bounded, i.e., $\|h\|\le 2\alpha+(\sigma\alpha)/(\sqrt{2}\gamma)$.
[Figure 2: (a) the 26 face images; (b) the images recovered by Truncated CauchyNMF; (c) the reconstruction errors; (d) the learned basis images.]
Although Truncated CauchyNMF (2) has a differentiable objective function, solving it is difficult because the natural logarithmic function is nonlinear. Section 3 will present a half-quadratic (HQ) programming algorithm for solving Truncated CauchyNMF.
Generalization Ability
To analyze the generalization ability of Truncated CauchyNMF, we further assume that the samples $v_1,\dots,v_n$ are independent and identically distributed and drawn from a space $\mathcal{V}$ with a Borel measure $\rho$. We use $A_{\cdot j}$ and $A_{ij}$ to denote the $j$-th column and the $(i,j)$-th entry of a matrix $A$, respectively, and $a_i$ is the $i$-th entry of a vector $a$.
For any $W\in\mathbb{R}^{m\times r}_+$, we define the reconstruction error of a sample $v$ as follows:
$$f_W(v)=\min_{h\in\mathbb{R}^r_+}\sum_j g\Big(\Big(\frac{v-Wh}{\gamma}\Big)_j^2\Big).\qquad(4)$$
Therefore, the objective function of Truncated CauchyNMF (2) can be written as
$$\min_{W\ge0,H\ge0}\frac{1}{2}\sum_{ij}g\Big(\Big(\frac{V-WH}{\gamma}\Big)_{ij}^2\Big)=\min_{W\ge0}\frac{1}{2}\sum_i f_W(v_i).\qquad(5)$$
Let us define the empirical reconstruction error of Truncated CauchyNMF as $R_n(f_W)=\frac{1}{n}\sum_{i=1}^n f_W(v_i)$, and the expected reconstruction error of Truncated CauchyNMF as $R(f_W)=\mathbb{E}_v\frac{1}{n}\sum_{i=1}^n f_W(v_i)$. Intuitively, we want to learn
$$W^\ast=\arg\min_{W\ge0}R(f_W).\qquad(6)$$
However, since the distribution of $v$ is unknown, we cannot minimize $R(f_W)$ directly. Instead, we use the empirical risk minimization (ERM, [2]) algorithm to learn $W_n$ to approximate $W^\ast$, as follows:
$$W_n=\arg\min_{W\ge0}R_n(f_W).\qquad(7)$$
We are interested in the difference between $W_n$ and $W^\ast$. If the distance is small, we can say that $W_n$ is a good approximation of $W^\ast$. Here, we measure the distance through their reduced expected reconstruction error as follows:
$$R(f_{W_n})-R(f_{W^\ast})\le 2\sup_{f_W\in\mathcal{F}_W}|R(f_W)-R_n(f_W)|,\quad\text{where }\mathcal{F}_W=\{f_W\,|\,W\in\mathcal{W}=\mathbb{R}^{m\times r}_+\}.$$
The right-hand side is known as the generalization error. Note that since NMF is convex with respect to either $W$ or $H$ but not both, the minimizer $f_{W_n}$ is hard to obtain. In practice, a local minimizer is used as an approximation. Measuring the distance between the local minimizer and the global minimizer is also an interesting and challenging problem.
By analyzing the covering number [30] of the function class $\mathcal{F}_W$ and Lemma 1, we derive a generalization error bound for Truncated CauchyNMF as follows: Theorem 1. Let $\|W_{\cdot i}\|=1$, $i=1,\dots,r$, and $\mathcal{F}_W=\{f_W\,|\,W\in\mathcal{W}=\mathbb{R}^{m\times r}_+\}$. Assume that $\|v\|\le\alpha$. For any $\delta>0$, with probability at least $1-\delta$, equation (3) holds, where $\Gamma(\frac12)=\sqrt{\pi}$, $\Gamma(1)=1$, and $\Gamma(x+1)=x\Gamma(x)$.
Remark 1. Theorem 1 shows that under the setting of our proposed Truncated CauchyNMF, the expected reconstruction error $R(f_{W_n})$ will converge to $R(f_{W^\ast})$ with a fast rate of order $O(\sqrt{\ln n/n})$, which means that when the sample size $n$ is large, the distance between $R(f_{W_n})$ and $R(f_{W^\ast})$ will be small. Moreover, if $n$ is large and a local minimizer $W$ (obtained by optimizing the non-convex objective of Truncated CauchyNMF) is close to the global minimizer $W_n$, the local minimizer will also be close to the optimal $W^\ast$. Remark 2. Theorem 1 also implies that for any $W$ learned from (2), the corresponding empirical reconstruction error $R_n(f_W)$ will converge to its expectation with a specific rate guarantee, which means our proposed Truncated CauchyNMF can generalize well to unseen data.
Note that noise sampled from the Cauchy distribution need not be bounded because the Cauchy distribution is heavy-tailed, whereas bounded observations always imply bounded noise. However, Theorem 1 keeps the boundedness assumption on the observations for two reasons: (1) the truncated loss function indicates that the observations corresponding to unbounded noise are discarded, and (2) in real applications, the energy of observations should be bounded, which means their $L_2$-norms are bounded.
Robustness Analysis
We next compare the robustness of Truncated CauchyNMF with those of other NMF models by using a sample-weighted procedure interpretation [20]. The sample-weighted procedure compares the robustness of different algorithms from the optimization viewpoint.
Let $F(WH)$ denote the objective function of any NMF problem and $f(t)=F(tWH)$ where $t\in\mathbb{R}$. We can verify that the NMF problem is equivalent to finding a pair $WH$ such that $f'(1)=0$, where $f'(t)$ denotes the derivative of $f(t)$. Let $c(V_{ij},WH)=(V-WH)_{ij}\cdot(-WH)_{ij}$ be the contribution of the $j$-th entry of the $i$-th training example to the optimization procedure, and let $e(V_{ij},WH)=|V-WH|_{ij}$ be an error function. Note that we choose $c(V_{ij},WH)$ as the basis of contribution because we choose NMF, which aims to find a pair $WH$ such that $\sum_{ij}c(V_{ij},WH)=0$ and is sensitive to noise, as the baseline for comparing the robustness. Also note that $e(V_{ij},WH)$ represents the noise added to the $(i,j)$-th entry of $V$. The sample-weighted procedure interpretation explains the optimization procedure as being contribution-weighted with respect to the noise.
We compare $f'(1)$ of a family of NMF models in Table 1. Note that since multiplying $f'(1)$ by a constant will not change its zero points, we can normalize the weights of different NMF models to unity when the noise is equal to zero. During the optimization procedures, robust algorithms should assign a small weight to an entry of the training set with large noise. Therefore, by comparing the derivatives $f'(1)$, we can easily make the following statements:
(1) $L_1$-NMF is more robust to noise and outliers than NMF, and Huber-NMF combines the ideas of NMF and $L_1$-NMF; (2) HCNMF, $L_{2,1}$-NMF, RCNMF, and RNMF-$L_1$ work similarly to $L_1$-NMF because their weights are of order $O(1/e(V_{ij},WH))$ with respect to the noise. It also becomes clear that HCNMF, $L_{2,1}$-NMF, and RCNMF exploit some data structure information, because their weights include the neighborhood information of $e(V_{ij},WH)$, and that RNMF-$L_1$ is less sensitive to noise because it employs a sparse matrix $S$ to adjust the weights; (3) the sample-weighted procedure interpretation also illustrates why CIM-NMF works well for heavy noise: its weights decrease exponentially when the noise is large; and (4) for the proposed Truncated CauchyNMF, when the noise is larger than a threshold, its weights drop directly to zero, which decreases far faster than the weights of CIM-NMF, and thus Truncated CauchyNMF is very robust to extreme outliers. Finally, we conclude that Truncated CauchyNMF is more robust than any other NMF model with respect to extreme outliers because it has the power to assign smaller weights to corrupted examples.
HALF-QUADRATIC PROGRAMMING ALGORITHM FOR TRUNCATED CAUCHYNMF
Note that Truncated CauchyNMF (2) cannot be solved directly because the energy function $g(x)$ is non-quadratic. We present a half-quadratic (HQ) programming algorithm based on conjugate function theory [9].
$$\begin{aligned} R(f_{W_n})-R(f_{W^\ast}) &\le 2\sup_{f_W\in\mathcal{F}_W}\Big|\,\mathbb{E}_v\frac{1}{2n}\sum_{ij}g\Big(\Big(\frac{V-WH}{\gamma}\Big)_{ij}^{2}\Big)-\frac{1}{2n}\sum_{ij}g\Big(\Big(\frac{V-WH}{\gamma}\Big)_{ij}^{2}\Big)\Big|\\ &\le \min_{\epsilon>0}\bigg\{2\epsilon+\frac{\alpha^{2}}{\gamma^{2}}\sqrt{\Big(mr\ln\big(4^{\frac1m}\pi^{\frac12}\big(8r\alpha^{2}+2\alpha^{2}r^{2}+\sigma\alpha^{2}\sqrt{2r^{3}}+2r\sigma^{2}\alpha^{2}r^{4}\big)\sqrt{mr}\big/\epsilon\,\Gamma(\tfrac m2)^{\frac1m}\big)+\ln\tfrac2\delta\Big)\big/2n}\bigg\}\\ &\le \frac2n+\frac{\alpha^{2}}{\gamma^{2}}\sqrt{\Big(mr\ln\big(4^{\frac1m}\pi^{\frac12}\big(8r\alpha^{2}+2\alpha^{2}r^{2}+\sigma\alpha^{2}\sqrt{2r^{3}}+2r\sigma^{2}\alpha^{2}r^{4}\big)\sqrt{mr}\,n\big/\Gamma(\tfrac m2)^{\frac1m}\big)+\ln\tfrac2\delta\Big)\big/2n}.\qquad(3) \end{aligned}$$
Table 1. Objective functions $F(WH)$ and derivatives $f'(1)$ of a family of NMF models:
- NMF: $F=\|V-WH\|_F^2$; $f'(1)=\sum_{ij}2c(V_{ij},WH)$.
- HCNMF: $F=\sum_{ij}(\sqrt{1+(V-WH)_{ij}^2}-1)$; $f'(1)=\sum_{ij}\frac{1}{\sqrt{1+(V-WH)_{ij}^2}}c(V_{ij},WH)$.
- $L_{2,1}$-NMF: $F=\|V-WH\|_{2,1}$; $f'(1)=\sum_{ij}\frac{1}{\sqrt{\sum_l(V-WH)_{lj}^2}}c(V_{ij},WH)$.
- RCNMF: $F=\sum_{j=1}^n\min\{\|V_{\cdot j}-WH_{\cdot j}\|,\theta\}$; $f'(1)=\sum_j\sum_i\frac{1}{\sqrt{\sum_l(V-WH)_{lj}^2}}c(V_{ij},WH)$ if $\|V_{\cdot j}-WH_{\cdot j}\|\le\theta$, and $0$ if $\|V_{\cdot j}-WH_{\cdot j}\|\ge\theta$.
- RNMF-$L_1$: $F=\|V-WH-S\|_F^2+\lambda\|S\|_1$; $f'(1)=\sum_{ij}2\big(1-\frac{S_{ij}}{(V-WH)_{ij}}\big)c(V_{ij},WH)$.
- $L_1$-NMF: $F=\|V-WH\|_1$; $f'(1)=\sum_{ij}\frac{1}{|V-WH|_{ij}}c(V_{ij},WH)$.
- Huber-NMF: $F=\sum_{i=1}^m\sum_{j=1}^n l((V-WH)_{ij},\sigma)$ with $l(x,\sigma)=x^2$ for $|x|\le\sigma$ and $l(x,\sigma)=2\sigma|x|-\sigma^2$ for $|x|\ge\sigma$; $f'(1)=\sum_{ij}2c(V_{ij},WH)$ if $|V-WH|_{ij}\le\sigma$, and $\sum_{ij}\frac{2\sigma}{|V-WH|_{ij}}c(V_{ij},WH)$ if $|V-WH|_{ij}\ge\sigma$.
- CIM-NMF: $F=\sum_{i=1}^m\sum_{j=1}^n\big(1-\frac{1}{\sqrt{2\pi}\sigma}e^{-(V-WH)_{ij}^2/2\sigma^2}\big)$; $f'(1)=\sum_{ij}\frac{1}{\sqrt{2\pi}\sigma^3}e^{-(V-WH)_{ij}^2/2\sigma^2}c(V_{ij},WH)$.
- CauchyNMF: $F=\sum_{ij}\ln\big(1+(\frac{V-WH}{\gamma})_{ij}^2\big)$; $f'(1)=\sum_{ij}\frac{2}{\gamma^2+(V-WH)_{ij}^2}c(V_{ij},WH)$.
- Truncated CauchyNMF: $F=\sum_{ij}g\big((\frac{V-WH}{\gamma})_{ij}^2\big)$ with $g(x)=\ln(1+x)$ for $0\le x\le\sigma$ and $g(x)=\ln(1+\sigma)$ for $x>\sigma$; $f'(1)=\sum_{ij}\frac{2}{\gamma^2+(V-WH)_{ij}^2}c(V_{ij},WH)$ if $|V-WH|_{ij}\le\gamma\sqrt{\sigma}$, and $0\cdot c(V_{ij},WH)$ if $|V-WH|_{ij}>\gamma\sqrt{\sigma}$.
(For the soundness of defining the subgradient of the $L_1$-norm, we state that $0/0$ can be any value in $[-1,1]$.)
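To see how the weights in Table 1 behave numerically, the following hedged sketch evaluates the factor multiplying $c(V_{ij},WH)$ for a few representative models at different error magnitudes; the parameter values and function names are ours.

import numpy as np

def entry_weights(e, gamma=1.0, sigma=25.0, delta=1.0):
    # Factors multiplying c(V_ij, WH) in f'(1) for some rows of Table 1.
    w_nmf = 2.0                                          # NMF: constant weight
    w_l1 = 1.0 / max(e, 1e-12)                           # L1-NMF: order O(1/e)
    w_cim = np.exp(-e**2 / (2 * delta**2)) / (np.sqrt(2 * np.pi) * delta**3)
    w_tc = 2.0 / (gamma**2 + e**2) if e <= gamma * np.sqrt(sigma) else 0.0
    return w_nmf, w_l1, w_cim, w_tc

for e in (0.1, 1.0, 10.0):                               # small, moderate, extreme
    print(e, entry_weights(e))

With $\gamma=1$ and $\sigma=25$ the truncation threshold is $\gamma\sqrt{\sigma}=5$, so the Truncated CauchyNMF weight drops to exactly zero at $e=10$, while the CIM weight merely decays exponentially.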
To adopt the HQ algorithm, we transform (2) to the following maximization form:
$$\max_{W\ge0,H\ge0}\frac{1}{2}\sum_{ij}f\Big(\Big(\frac{V-WH}{\gamma}\Big)_{ij}^2\Big),\qquad(8)$$
where $f(x)=-g(x)$ is the core function utilized in HQ. Since the negative logarithmic function is convex, $f(x)$ is also convex.
HQ-based Alternating Optimization
Generally speaking, the half-quadratic (HQ) programming algorithm [9] reformulates the non-quadratic loss function as an augmented loss function in an enlarged parameter space by introducing an additional auxiliary variable based on the convex conjugation theory [3]. HQ is equivalent to the quasi-Newton method [24] and has been widely applied in non-quadratic optimization.
Note that the function $f(x):\mathbb{R}_+\to\mathbb{R}$ is continuous, and according to [3], its conjugate $f^\ast(y):\mathbb{R}\to\mathbb{R}\cup\{+\infty\}$ is defined as $f^\ast(y)=\max_{x\in\mathbb{R}_+}\{xy-f(x)\}$. Since $f(x)$ is convex and closed (although the domain $\mathbb{R}_+$ is open, $f(x)$ is closed; see Section A.3.3 in [3]), the conjugate of its conjugate function is itself [3], i.e., $f^{\ast\ast}=f$. We then have:
Theorem 2.
The core function $f(x)$ and its conjugate $f^\ast(y)$ satisfy
$$f(x)=\max_y\{yx-f^\ast(y)\},\quad x\in\mathbb{R}_+,\qquad(9)$$
and the maximizer is
$$y^\ast=\begin{cases}-1/(1+x),&0\le x\le\sigma\\0,&x>\sigma.\end{cases}$$
By substituting $x=\big(\frac{V-WH}{\gamma}\big)_{ij}^2$ into (9), we have the augmented loss function
$$f\Big(\Big(\frac{V-WH}{\gamma}\Big)_{ij}^2\Big)=\max_{Y_{ij}}\Big\{Y_{ij}\Big(\frac{V-WH}{\gamma}\Big)_{ij}^2-f^\ast(Y_{ij})\Big\},\qquad(10)$$
where $Y_{ij}$ is the auxiliary variable introduced by HQ for $\big(\frac{V-WH}{\gamma}\big)_{ij}^2$.
By substituting (10) into (8), we have the objective function in an enlarged parameter space
$$\max_{W\ge0,H\ge0}\Big\{\frac12\sum_{ij}\max_{Y_{ij}}\big\{Y_{ij}\big(\tfrac{V-WH}{\gamma}\big)_{ij}^2-f^\ast(Y_{ij})\big\}\Big\}=\max_{W\ge0,H\ge0,Y}\Big\{\frac12\sum_{ij}\big\{Y_{ij}\big(\tfrac{V-WH}{\gamma}\big)_{ij}^2-f^\ast(Y_{ij})\big\}\Big\},\qquad(11)$$
where the equality comes from the separability of the optimization problems with respect to $Y_{ij}$. Although the objective function in (8) is non-quadratic, its equivalent problem (11) is essentially a quadratic optimization. In this paper, HQ solves (11) within the block coordinate descent framework. In particular, HQ recursively optimizes the following three problems. At the $t$-th iteration,
$$Y^{t+1}:\ \max_Y\frac12\sum_{ij}\Big(Y_{ij}\Big(\frac{V-W^tH^t}{\gamma}\Big)_{ij}^2-f^\ast(Y_{ij})\Big),\qquad(12)$$
$$H^{t+1}:\ \max_{H\ge0}\frac12\sum_{ij}Y^{t+1}_{ij}\Big(\frac{V-W^tH}{\gamma}\Big)_{ij}^2,\qquad(13)$$
$$W^{t+1}:\ \max_{W\ge0}\frac12\sum_{ij}Y^{t+1}_{ij}\Big(\frac{V-WH^{t+1}}{\gamma}\Big)_{ij}^2.\qquad(14)$$
Using Theorem 2, we know that the solution of (12) can be expressed analytically as
$$Y^{t+1}_{ij}=\begin{cases}-\dfrac{1}{1+\big(\frac{V-W^tH^t}{\gamma}\big)_{ij}^2},&\text{if }|(V-W^tH^t)_{ij}|\le\gamma\sqrt{\sigma}\\[4pt]0,&\text{if }|(V-W^tH^t)_{ij}|>\gamma\sqrt{\sigma}.\end{cases}$$
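In matrix form, this analytic update is a two-liner; the sketch below (our naming) computes the auxiliary variables for the whole error matrix at once.

import numpy as np

def update_auxiliary(V, W, H, gamma, sigma):
    # Analytic HQ update of equation (12): Cauchy-type weights with
    # hard truncation of extreme outliers.
    E = V - W @ H
    Y = -1.0 / (1.0 + (E / gamma) ** 2)
    Y[np.abs(E) > gamma * np.sqrt(sigma)] = 0.0
    return Y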
Since (13) and (14) are symmetric and are intrinsically weighted non-negative least squares (WNLS) problems, they can be optimized in the same way using Nesterov's method [10]. Taking (13) as an example, the procedure of its Nesterov-based optimization is summarized in Algorithm 1, and its derivative is derived in the supplementary material. Considering that (13) is a constrained optimization problem, similar to [18], we use the following projected-gradient-based criterion to check the stationarity of the search point $h_k$, i.e.,
$$\nabla^P_j(h_k)=0,\quad\text{where}\quad\big(\nabla^P_j(h_k)\big)_l=\begin{cases}\big(\nabla_j(h_k)\big)_l,&(h_k)_l>0\\\min\{0,(\nabla_j(h_k))_l\},&(h_k)_l=0.\end{cases}$$
Since the above stopping criterion would make OGM run unnecessarily long, similar to [18], we use a relaxed version
$$\|\nabla^P_j(h_k)\|_F\le\max\{\epsilon_1,10^{-3}\}\times\|\nabla^P_j(h_0)\|_F,\qquad(15)$$
where $\epsilon_1$ is a tolerance that controls how far the search point is from a stationary point.
Algorithm 1 Optimal Gradient Method (OGM) for WNLS
Input: $V_{\cdot j}\in\mathbb{R}^m_+$, $W^t\in\mathbb{R}^{m\times r}_+$, $H^t_{\cdot j}\in\mathbb{R}^r_+$, $D^{t+1}_j$. Output: $H^{t+1}_{\cdot j}$.
1: Initialize $z_0=H^t_{\cdot j}$, $h_0=H^t_{\cdot j}$, $\alpha_0=1$, $k=0$.
2: Calculate $L_j=\|W^{tT}D^{t+1}_jW^t\|_2$.
repeat
3: $\nabla_j(z_k)=W^{tT}D^{t+1}_jW^tz_k-W^{tT}D^{t+1}_jV_{\cdot j}$.
4: $h_{k+1}=\Pi_+\big(z_k-\nabla_j(z_k)/L_j\big)$.
5: $\alpha_{k+1}=\frac{1+\sqrt{4\alpha_k^2+1}}{2}$.
6: $z_{k+1}=h_{k+1}+\frac{\alpha_k-1}{\alpha_{k+1}}(h_{k+1}-h_k)$.
7: $k\leftarrow k+1$.
until the stopping criterion (15) is satisfied.
8: $H^{t+1}_{\cdot j}=z_k$.
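A direct NumPy transcription of Algorithm 1 might look as follows; it is a sketch under our own naming (ogm_wnls), with a fixed tolerance standing in for the relaxed criterion (15).

import numpy as np

def ogm_wnls(v, W, h0, d, n_iter=200, tol=1e-3):
    # A sketch of Algorithm 1: Nesterov's optimal gradient method for one
    # column of the WNLS subproblem (13),
    #   min_{h >= 0} 0.5 * (v - W h)^T D (v - W h),  D = diag(d),
    # where d holds the positive HQ weights Q_{.j} of Algorithm 2.
    D = np.diag(d)
    A = W.T @ D @ W
    b = W.T @ D @ v
    L = max(np.linalg.norm(A, 2), 1e-12)        # Lipschitz constant (line 2)
    h = h0.astype(float)
    z = h.copy()
    alpha = 1.0
    pg0 = None
    for _ in range(n_iter):
        grad = A @ z - b                        # line 3
        h_new = np.maximum(z - grad / L, 0.0)   # line 4: projection Pi_+
        alpha_new = (1.0 + np.sqrt(4.0 * alpha ** 2 + 1.0)) / 2.0   # line 5
        z = h_new + (alpha - 1.0) / alpha_new * (h_new - h)         # line 6
        h, alpha = h_new, alpha_new
        # relaxed projected-gradient test in the spirit of (15),
        # evaluated at z here for brevity
        pg = np.where((h > 0) | (grad < 0), grad, 0.0)
        if pg0 is None:
            pg0 = np.linalg.norm(pg) + 1e-12
        if np.linalg.norm(pg) <= tol * pg0:
            break
    return h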
The complete procedure of the HQ algorithm is summarized in Algorithm 2. The weights of entries and factor matrices are updated recursively until the objective function does not change. We use the following stopping criterion to check the convergence in Algorithm 2:
$$\frac{|F(W^t,H^t)-F(W^\ast,H^\ast)|}{|F(W^0,H^0)-F(W^t,H^t)|}\le\epsilon_2,\qquad(16)$$
where $\epsilon_2$ signifies the tolerance, $F(W,H)$ signifies the objective function of (8), and $(W^\ast,H^\ast)$ signifies a local minimizer (since any local minimizer is unknown beforehand, we instead utilize $(W^{t-1},H^{t-1})$ in our experiments). The stopping criterion (16) implies that HQ stops when the search point is sufficiently close to the minimizer and sufficiently far from the initial point. Line 3 updates the scale parameter by the Nagy algorithm and will be further presented in Section 3.2. Line 4 detects outliers by robust statistics and will be presented in Section 3.3.
The main time cost of Algorithm 2 is incurred on lines 2, 4, 5, 6, 7, 8, and 9. The time complexities of lines 2 and 7 are both $O(mnr)$. According to Algorithm 1, the time complexities of lines 6 and 9 are $O(mr^2)$ and $O(nr^2)$, respectively. Since line 4 introduces a median operator, its time complexity is $O(mn\ln(mn))$. In summary, the total complexity of Algorithm 2 is $O(mn\ln(mn)+mnr^2)$.
Scale Estimation
The parameter estimation problem for the Cauchy distribution has been studied for several decades [32], [33], [34]. Nagy [32] proposed an I-divergence based method, termed the Nagy algorithm for short, to simultaneously estimate the location and scale parameters.
Algorithm 2 Half-quadratic (HQ) Programming Algorithm for Truncated CauchyNMF
Input: $V\in\mathbb{R}^{m\times n}_+$, $r\ll\min\{m,n\}$. Output: $W$, $H$.
1: Initialize $W^0\in\mathbb{R}^{m\times r}_+$, $H^0\in\mathbb{R}^{r\times n}_+$, $t=0$.
repeat
2: Calculate $E^t=V-W^tH^t$ and $Q^{t+1}=\frac{1}{1+(E^t/\gamma)^2}$ (entry-wise).
3: Update the scale parameter $\gamma$ based on $E^t$.
4: Detect the indices $\Omega(t)$ of outliers and set $Q^{t+1}_{\Omega(t)}=0$.
for $j=1,\dots,n$ do
5: Calculate $D^{t+1}_j=\mathrm{diag}(Q^{t+1}_{\cdot j})$.
6: Update $H^{t+1}_{\cdot j}$ by Algorithm 1.
end for
7: Calculate $E^t=V-W^tH^{t+1}$ and $Q^{t+1}=\frac{1}{1+(E^t/\gamma)^2}$ (entry-wise).
for $i=1,\dots,m$ do
8: Calculate $D^{t+1}_i=\mathrm{diag}(Q^{t+1}_{i\cdot})$.
9: Update $W^{t+1}_{i\cdot}$ by Algorithm 1.
end for
10: $t\leftarrow t+1$.
until the stopping criterion (16) is satisfied.
11: $W=W^t$, $H=H^t$.
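Putting the pieces together, a compact sketch of the outer HQ loop of Algorithm 2 follows, reusing ogm_wnls from the sketch above. For brevity it keeps $\gamma$ and $\sigma$ fixed and uses a simple relative-change stopping test, whereas the paper re-estimates $\gamma$ by the Nagy algorithm and detects outliers by robust statistics at every iteration.

import numpy as np

def truncated_cauchy_nmf(V, r, gamma=1.0, sigma=25.0, n_outer=50, tol=1e-4):
    # A sketch of Algorithm 2 (HQ for Truncated CauchyNMF); naming is ours.
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    obj_prev = None
    for _ in range(n_outer):
        E = V - W @ H                                  # line 2: errors and weights
        Q = 1.0 / (1.0 + (E / gamma) ** 2)
        Q[np.abs(E) > gamma * np.sqrt(sigma)] = 0.0    # line 4: reject outliers
        for j in range(n):                             # lines 5-6: update H via (13)
            H[:, j] = ogm_wnls(V[:, j], W, H[:, j], Q[:, j])
        E = V - W @ H                                  # line 7: refresh weights
        Q = 1.0 / (1.0 + (E / gamma) ** 2)
        Q[np.abs(E) > gamma * np.sqrt(sigma)] = 0.0
        for i in range(m):                             # lines 8-9: update W via (14)
            W[i, :] = ogm_wnls(V[i, :], H.T, W[i, :], Q[i, :])
        E = V - W @ H
        # truncated objective (2): g(x) = min(log(1+x), log(1+sigma))
        obj = 0.5 * np.sum(np.minimum(np.log1p((E / gamma) ** 2), np.log1p(sigma)))
        if obj_prev is not None and abs(obj_prev - obj) <= tol * max(abs(obj), 1.0):
            break
        obj_prev = obj
    return W, H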
The Nagy algorithm minimizes the discrimination information (see Footnote 6 below) between the empirical distribution of the data points and the prior Cauchy distribution with respect to the parameters. In our Truncated CauchyNMF model (2), the location parameter of the Cauchy distribution is assumed to be zero, and thus we only need to estimate the scale parameter $\gamma$.
Here we employ the Nagy algorithm to estimate the scale parameter based on all the residual errors of the data. According to [32], supposing there is a large number of residual errors, the scale-parameter estimation problem can be formulated as
$$\min_\gamma D(\eta_n\,|\,f_{0,\gamma})=\min_\gamma\int_{-\infty}^{+\infty}\ln\frac{1}{f_{0,\gamma}(x)}\,dF_n(x)=\min_\gamma\sum_{k=1}^{N}\frac{1}{N}\ln\frac{1}{f_{0,\gamma}(x_k)},\qquad(17)$$
where $D(\cdot|\cdot)$ denotes the discrimination information; the first equality is due to the independence of $\eta_n$ and $\gamma$, and the second equality is due to the law of large numbers. By substituting the probability density function $f_{0,\gamma}$ of the Cauchy distribution (see Footnote 7 below) into (17), the problem becomes a one-dimensional optimization over $\gamma$. To solve this problem, Nagy [32] proposed an efficient iterative algorithm, i.e.,
$$\gamma_{k+1}=\gamma_k\sqrt{1/e^0_k-1},\quad k=0,1,2,\dots,\qquad(18)$$
Footnote 6: The discrimination information of a random variable $\xi_1$ given a random variable $\xi_2$ is defined as $D(\xi_1|\xi_2)=\int_{-\infty}^{+\infty}\ln\frac{f_1(x)}{f_2(x)}\,dF_1(x)$, where $f_1$ and $f_2$ are the PDFs of $\xi_1$ and $\xi_2$, and $F_1$ is the distribution function of $\xi_1$.
Footnote 7: The probability density function (PDF) of the Cauchy distribution is $f_{x_0,\gamma}(x)=1/\big(\pi\gamma\big(1+(\frac{x-x_0}{\gamma})^2\big)\big)$, where $x_0$ is the location parameter, specifying the location of the peak of the distribution, and $\gamma$ is the scale parameter, specifying the half-width at half-maximum.
where $\gamma_0>0$ and $e^0_k=\frac{1}{mn}\sum_{i=1}^m\sum_{j=1}^n\frac{1}{1+(E_{ij}/\gamma_k)^2}$. In [32], Nagy proved that the algorithm (18) converges to a fixed point assuming the number of data points is large enough, and this assumption is reasonable in Truncated CauchyNMF.
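A few lines of NumPy implement this fixed-point iteration; the function name and stopping rule are ours, and the sketch assumes the residual matrix E is not identically zero.

import numpy as np

def nagy_scale(E, gamma0=1.0, n_iter=100, tol=1e-8):
    # Fixed-point iteration (18) for the Cauchy scale parameter gamma.
    gamma = gamma0
    for _ in range(n_iter):
        e0 = np.mean(1.0 / (1.0 + (E / gamma) ** 2))   # e_k^0 averaged over entries
        gamma_new = gamma * np.sqrt(1.0 / e0 - 1.0)
        if abs(gamma_new - gamma) <= tol * gamma:
            return gamma_new
        gamma = gamma_new
    return gamma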
Outlier Rejection
Looking more carefully at (12), (13) and (14), HQ intrinsically assigns a weight to each entry of $V$ with both factor matrices $H^{t+1}$ and $W^{t+1}$ fixed, i.e.,
$$Q^{t+1}_{ij}=\begin{cases}\dfrac{1}{1+\big(\frac{E^t}{\gamma}\big)_{ij}^2},&\text{if }|E^t_{ij}|\le\gamma\sqrt{\sigma}\\[4pt]0,&\text{if }|E^t_{ij}|>\gamma\sqrt{\sigma},\end{cases}$$
where $E^t$ denotes the error matrix at the $t$-th iteration. The larger the magnitude of the error at a particular entry, the lighter the weight HQ assigns to it. Intuitively, a corrupted entry contributes less to learning the intrinsic subspace. If the magnitude of the error exceeds the threshold $\gamma\sqrt{\sigma}$, Truncated CauchyNMF assigns zero weight to the corrupted entry to inhibit its contribution to the learned subspace. That is how Truncated CauchyNMF filters out extreme outliers.
However, it is non-trivial to estimate the threshold $\gamma\sqrt{\sigma}$. Here, we introduce a robust-statistics-based method to explicitly detect the support of the outliers instead of estimating the threshold. Since the energy function of Truncated CauchyNMF gets close to that of NMF as the error tends towards zero, i.e., $\lim_{x\to0}(\ln(1+x^2)-x^2)=0$, Truncated CauchyNMF encourages the small-magnitude errors to have a Gaussian distribution. Let $\Theta^t$ denote the set of magnitudes of errors at the $t$-th iteration of HQ, i.e., $\Theta^t=\{|E^t_{ij}|:1\le i\le m,\,1\le j\le n\}$, where $E^t=V-W^tH^t$. It is reasonable to believe that the subset $\Gamma^t=\{\theta\in\Theta^t:\theta\le\mathrm{med}\{\Theta^t\}\}$ of $\Theta^t$ obeys a Gaussian distribution, where $\mathrm{med}\{\Theta^t\}$ signifies the median of $\Theta^t$. Since $|\Gamma^t|=\frac{mn}{2}$, it suffices to estimate both the mean $\mu^t$ and the standard deviation $\delta^t$ from $\Gamma^t$. According to the three-sigma rule, we detect the outliers as $O^t=\{\tau\in\Theta^t:|\tau-\mu^t|>3\delta^t\}$ and output their indices $\Omega(t)$.
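The detection rule translates directly into code; this sketch (our naming) fits the Gaussian statistics on the lower half of the error magnitudes and returns a boolean outlier mask, applying the three-sigma test to all magnitudes in $\Theta^t$.

import numpy as np

def detect_outliers(E):
    # Robust-statistics outlier rejection of Section 3.3.
    mag = np.abs(E)
    lower = mag[mag <= np.median(mag)]     # Gamma^t: the small-error half
    mu, delta = lower.mean(), lower.std()  # Gaussian statistics of inlier errors
    return np.abs(mag - mu) > 3.0 * delta  # Omega(t): three-sigma rule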
To illustrate the effect of outlier rejection, Figure 3 presents a sequence of weighting matrices generated by HQ for the motivating example described in Figure 2. It shows that HQ correctly assigns zero weights to the corrupted entries within only a few iterations, and finally detects almost all outliers, including illumination, sunglasses, and scarves (see the last column in Figure 3).
RELATED WORK
NMF
Traditional NMF [17] assumes that noise obeys a Gaussian distribution and derives the following squared $L_2$-norm based objective function:
$$\min_{W\ge0,H\ge0}\|V-WH\|_F^2,$$
where $\|X\|_F=\sqrt{\sum_{ij}X_{ij}^2}$ signifies the matrix Frobenius norm.
It is commonly known that NMF can be solved by using the multiplicative update rule (MUR, [17]). Because of the nice mathematical properties of the squared $L_2$-norm and the efficiency of MUR, NMF has been extended to various applications [4], [6], [28]. However, NMF and its extensions are non-robust because the $L_2$-norm is sensitive to outliers.
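For comparison with the HQ scheme above, here is a minimal sketch of the classical multiplicative update rule for the Frobenius objective; the hyperparameters and naming are ours, not code from [17].

import numpy as np

def nmf_mur(V, r, n_iter=500, eps=1e-12, seed=0):
    # Lee-Seung multiplicative updates for min ||V - WH||_F^2.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)) + eps, rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # updates preserve non-negativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H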
Hypersurface Cost Based NMF
Hamza and Brady [12] proposed a hypersurface cost based NMF (HCNMF) by minimizing the summation of hypersurface costs of errors, i.e., $\min_{W\ge0,H\ge0}\sum_{ij}\delta((V-WH)_{ij})$, where $\delta(x)=\sqrt{1+x^2}-1$ is the hypersurface cost function. According to [12], the hypersurface cost function has a differentiable and bounded influence function. Since the hypersurface cost function is differentiable, HCNMF can be directly solved by using the projected gradient method. However, the optimization of HCNMF is difficult because the Armijo-rule-based line search is time-consuming [12].
$L_1$-Norm Based NMF
To improve the robustness of NMF, Lam [15] assumed that noise is independent and identically distributed from a Laplace distribution and proposed $L_1$-NMF as follows: $\min_{W\ge0,H\ge0}\|V-WH\|_1$, where $\|X\|_1=\sum_{ij}|X_{ij}|$ and $|\cdot|$ signifies the absolute value function. Since the $L_1$-norm based loss function is non-smooth, the optimization algorithm in [15] is not scalable on large-scale datasets. Manhattan NMF (MahNMF, [11]) remedies this problem by approximating the loss function of $L_1$-NMF with a smooth function and minimizing the approximated loss function using Nesterov's method. Although $L_1$-NMF is less sensitive to outliers than NMF, it is not sufficiently robust because its breakdown point is related to the dimensionality of the data [7].
$L_1$-Norm Regularized Robust NMF
Zhang et al. [29] assumed that the dataset contains both Laplace distributed noise and Gaussian distributed noise and proposed an $L_1$-norm regularized Robust NMF (RNMF-$L_1$) as follows: $\min_{W\ge0,H\ge0,S}\{\|V-WH-S\|_F^2+\lambda\|S\|_1\}$, where $\lambda$ is a positive constant that trades off the sparsity of $S$. Similar to $L_1$-NMF, RNMF-$L_1$ is also less sensitive to outliers than NMF, but they are both non-robust to large numbers of outliers because the $L_1$-minimization model has a low breakdown point. Moreover, it is non-trivial to determine the tradeoff parameter $\lambda$.
$L_{2,1}$-Norm Based NMF
Since NMF is essentially a summation of the squared $L_2$-norms of the errors, the large-magnitude errors dominate the objective function and cause NMF to be non-robust. To solve this problem, Kong et al. [14] proposed the $L_{2,1}$-norm based NMF ($L_{2,1}$-NMF), which minimizes the $L_{2,1}$-norm of the error matrix, i.e., $\min_{W\ge0,H\ge0}\|V-WH\|_{2,1}$, where the $L_{2,1}$-norm is defined as $\|E\|_{2,1}=\sum_{j=1}^n\|E_{\cdot j}\|_2$. In contrast to NMF, $L_{2,1}$-NMF is more robust because the influences of noisy examples are inhibited in learning the subspace.
Robust Capped Norm NMF
Gao et al. [47] proposed a robust capped norm NMF (RCNMF) to completely filter out the effect of outliers by instead minimizing the following objective function: $\min_{W\ge0,H\ge0}\sum_{j=1}^n\min\{\|V_{\cdot j}-WH_{\cdot j}\|,\theta\}$, where $\theta$ is a threshold that selects the outlier samples. RCNMF is hard to apply in practice because it is non-trivial to determine the pre-defined threshold; moreover, the iterative algorithms utilized in both [14] and [47] converge slowly with the successive use of the power method [1].
Correntropy Induced Metric Based NMF
The most closely related work is the half-quadratic algorithm for optimizing robust NMF, which includes the Correntropy-Induced Metric (CIM) based NMF (CIM-NMF) and Huber-NMF by Du et al. [8]. CIM-NMF measures the approximation errors by using the CIM [19], i.e.,
$\min_{W\ge0,H\ge0}\sum_{i=1}^m\sum_{j=1}^n\rho((V-WH)_{ij},\delta)$, where $\rho(x,\delta)=1-\frac{1}{\sqrt{2\pi}\delta}e^{-x^2/2\delta^2}$. Since the energy function $\rho(x,\delta)$ increases slowly as the error increases, CIM-NMF is insensitive to outliers. In a similar way, Huber-NMF [8] measures the approximation errors by using the Huber function, i.e., $\min_{W\ge0,H\ge0}\sum_{i=1}^m\sum_{j=1}^n l((V-WH)_{ij},c)$, where $l(x,c)=\begin{cases}x^2,&|x|\le c\\2c|x|-c^2,&|x|\ge c\end{cases}$ and the cutoff $c$ is automatically determined by $c=\mathrm{med}\{|(V-WH)_{ij}|\}$.
Truncated CauchyNMF is different from both CIM-NMF and Huber-NMF in four aspects: (1) Truncated CauchyNMF is derived from the proposed Truncated Cauchy loss, which can model both moderate and extreme outliers, whereas neither CIM-NMF nor Huber-NMF can do that; (2) Truncated CauchyNMF demonstrates strong evidence of both robustness and generalization ability, whereas neither CIM-NMF nor Huber-NMF demonstrates evidence of either;
(3) Truncated CauchyNMF iteratively detects outliers by robust statistics on the magnitudes of errors, and thus performs more robustly than CIM-NMF and Huber-NMF in practice; and (4) Truncated CauchyNMF obtains the optimum for each factor in each iteration round by solving weighted non-negative least squares (WNLS) problems, whereas the multiplicative update rules for CIM-NMF and Huber-NMF do not.
EXPERIMENTAL VERIFICATION
We explore both the robustness and the effectiveness of Truncated CauchyNMF on two popular face image datasets, ORL [27] and AR [22], and one object image dataset, Caltech 101 [44], by comparing with six typical NMF models: (1) $L_2$-NMF [16] optimized by NeNMF [10]; (2) $L_1$-NMF [15] optimized by MahNMF [11]; (3) RNMF-$L_1$ [29]; (4) $L_{2,1}$-NMF [14]; (5) CIM-NMF [8]; and (6) Huber-NMF [8]. We first present a toy example to intuitively show the robustness of Truncated CauchyNMF, followed by several clustering experiments on the contaminated ORL dataset to confirm its robustness. We then analyze the effectiveness of Truncated CauchyNMF by clustering and recognizing face images in the AR dataset, and by clustering object images in the Caltech 101 dataset.
An Illustrative Study
To illustrate Truncated CauchyNMF's ability to learn a subspace, we apply Truncated CauchyNMF on a synthetic dataset composed of 180 two-dimensional data points (see Figure 4(a)). All data points are distributed in a one-dimensional subspace, i.e., a straight line ($y=0.2x$). Both $L_2$-NMF and $L_1$-NMF are applied on this synthetic dataset for comparison. Figure 4(a) shows that all methods learn the intrinsic subspace correctly on the clean dataset. Figures 4(b) to 4(d) demonstrate the robustness of Truncated CauchyNMF on a noisy dataset. First, we randomly select 20 data points and contaminate their x-coordinates, with their y-coordinates retained, to simulate outliers. Figure 4(b) shows that $L_2$-NMF fails to recover the subspace in the presence of 1/9 outliers, while both Truncated CauchyNMF and $L_1$-NMF perform robustly in this case. However, the robustness of $L_1$-NMF decreases as the outliers increase. To study this point, we randomly select another 20 data points and contaminate their x-coordinates. Figure 4(c) shows that both $L_2$-NMF and $L_1$-NMF fail to recover the subspace, but Truncated CauchyNMF succeeds. To study the robustness of Truncated CauchyNMF on seriously corrupted datasets, we randomly select an additional 40 data points as outliers.
[Table 2: Relative reconstruction error (%) of $L_2$-NMF, $L_{2,1}$-NMF, RNMF-$L_1$, $L_1$-NMF, Huber-NMF, CIM-NMF, and CauchyNMF on the ORL dataset contaminated by Laplace noise with deviation $\delta$ varying from 40 to 280.]
We contaminate their y-coordinates while keeping their x-coordinates unchanged. Figure 4(d) shows that Truncated CauchyNMF still recovers the intrinsic subspace in the presence of 4/9 outliers, while both $L_2$-NMF and $L_1$-NMF fail in this case. In other words, the breakdown point of Truncated CauchyNMF is greater than 44.4%, which is quite close to the highest possible breakdown point of 50%.
Simulated Corruption
We first evaluate Truncated CauchyNMF's robustness to simulated corruptions. To this end, we add three typical corruptions, i.e., Laplace noise, Salt & Pepper noise, and randomly positioned blocks, to frontal face images from the Cambridge ORL database, and compare the clustering performance of our method with that of other methods on these contaminated images. Figure 5 shows example face images contaminated by these corruptions.
The Cambridge ORL database [27] contains 400 frontal face photos of 40 individuals. There are 10 photos of each individual with a variety of lighting conditions, facial expressions, and facial details (with glasses or without glasses). All photos were taken against the same dark background, and each photo was cropped to a $32\times32$ pixel array and normalized to a long vector. The clustering performance is evaluated by two metrics, namely accuracy and normalized mutual information [22]. The number of clusters is set equal to the number of individuals, i.e., 40. Intuitively, the better a model clusters contaminated images, the more robust it is for learning the subspace. In this experiment, we utilize K-means [21] as a baseline. To quantify the robustness of all NMF models, we compare their relative reconstruction errors, i.e., $\|\hat V-WH\|_F/\|\hat V\|_F$, where $\hat V$ denotes the clean dataset, and $W$ and $H$ signify the factorization results on the contaminated dataset.
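The robustness metric itself is a one-line computation; a hedged sketch with our own naming:

import numpy as np

def relative_reconstruction_error(V_clean, W, H):
    # ||V_hat - WH||_F / ||V_hat||_F, with V_hat the clean data matrix.
    return np.linalg.norm(V_clean - W @ H) / np.linalg.norm(V_clean)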
Laplace Noise
Laplace noise exists in many types of observations, e.g., gradient-based image features such as SIFT [31], but the classical NMF cannot deal with such data because the distributions violate the assumption of classical NMF. In this experiment, we study Truncated CauchyNMF's capacity to deal with Laplace-noisy data. We simulate Laplace noise by adding random noise to each pixel of each face image from ORL, where the noise obeys a Laplace distribution $\mathrm{Laplace}(0,\delta)$. For the purpose of verifying the robustness of Truncated CauchyNMF, we vary the deviation $\delta$ from 40 to 280, given that the maximum pixel value is 255. Figure 5(a) gives an example face image and its seven noisy versions obtained by adding Laplace noise. Figures 6(a) and 6(b) present the means and standard deviations of the accuracy and normalized mutual information of Truncated CauchyNMF and the representative models. Figure 6 confirms that the NMF models outperform K-means in terms of accuracy and normalized mutual information. $L_1$-NMF outperforms $L_2$-NMF and $L_{2,1}$-NMF because $L_1$-NMF models Laplace noise better. $L_1$-NMF outperforms RNMF-$L_1$ because $L_1$-NMF assigns smaller weights to large noise than RNMF-$L_1$. CIM-NMF and Huber-NMF perform comparably with $L_1$-NMF when the deviation of the Laplace noise is moderate. However, as the deviation increases, their performance is dramatically reduced because large-magnitude outliers seriously influence the factorization results. In contrast, Truncated CauchyNMF outperforms all the representative NMF models and remains stable as the deviation varies.
[Table 3: Relative reconstruction error (%) of $L_2$-NMF, $L_{2,1}$-NMF, RNMF-$L_1$, $L_1$-NMF, Huber-NMF, CIM-NMF, and Truncated CauchyNMF on the ORL dataset contaminated by Salt & Pepper noise with the percentage $p$ of corrupted pixels varying from 5% to 60%.]
The clustering performance in Figure 6 confirms Truncated CauchyNMF's effectiveness in learning the subspace on the ORL dataset contaminated by Laplace noise. Table 2 compares the relative reconstruction errors of Truncated CauchyNMF and the representative algorithms. It shows that CauchyNMF performs the most robustly in all situations. That is because Truncated CauchyNMF can not only model the simulated Laplace noise but also model the underlying outliers, e.g., glasses, in the ORL dataset.
Salt & Pepper Noise
Salt & Pepper noise is a common type of corruption in images. The removal of Salt & Pepper noise is a challenging task in computer vision, since this type of noise contaminates each pixel with either zero or the maximum pixel value, and the noise distribution violates the noise assumption of traditional learning models. In this experiment, we verify Truncated CauchyNMF's capacity to handle Salt & Pepper noise. We add Salt & Pepper noise to each frontal face image of the ORL dataset (see Figure 5(b) for the contaminated face images of a certain individual) and compare the clustering performance of Truncated CauchyNMF on the contaminated dataset with that of the representative algorithms. To demonstrate the robustness of Truncated CauchyNMF, we vary the percentage of corrupted pixels from 5% to 60%. For each case of additive Salt & Pepper noise, we repeat the clustering test 10 times and report the average accuracy and average normalized mutual information to eliminate the effect of initial points. Figure 7 shows that all models perform satisfactorily when 5% of the pixels of each image are corrupted. As the number of corrupted pixels increases, the classical $L_2$-NMF is seriously influenced by the Salt & Pepper noise and its performance is dramatically reduced. Although $L_1$-NMF, Huber-NMF and CIM-NMF perform more robustly than $L_2$-NMF, their performance also degrades when more than 40% of the pixels are corrupted. Truncated CauchyNMF performs quite stably even when 40% of the pixels are corrupted and outperforms all the representative models in most cases. All the models fail when 60% of the pixels are corrupted, because it is difficult to distinguish inliers from outliers in this case. Table 3 gives a comparison of Truncated CauchyNMF and the representative algorithms in terms of relative reconstruction error. It shows that $L_1$-NMF, Huber-NMF, CIM-NMF and Truncated CauchyNMF perform comparably when less than 20% of the pixels are corrupted, but the robustness of $L_1$-NMF, Huber-NMF and CIM-NMF is unstable as the percentage of corrupted pixels increases. Truncated CauchyNMF performs stably when 30% to 50% of the pixels are corrupted by Salt & Pepper noise. This confirms the robustness of Truncated CauchyNMF.
Contiguous Occlusion
The removal of contiguous segments of an object due to occlusion is a challenging problem in computer vision. Many techniques, such as $L_1$-norm minimization and nuclear norm minimization, are unable to handle this problem. In this experiment, we utilize contiguous occlusion to simulate extreme outliers. Specifically, we randomly position a $b\times b$-sized block on each face image of the ORL dataset and fill each block with a pixel array whose pixel values equal 550.
To verify the effectiveness of subspace learning, we apply both K-means and all the NMF models to the contaminated dataset and compare the clustering performance in terms of both accuracy and normalized mutual information. This task is quite challenging because large numbers of outliers with large magnitudes must be ignored to learn a clean subspace. [Table 4: Average accuracy (%) and average normalized mutual information (%) of K-means, $L_2$-NMF, $L_{2,1}$-NMF, RNMF-$L_1$, $L_1$-NMF, Huber-NMF, CauchyNMF, CIM-NMF, and Truncated CauchyNMF on the occluded ORL dataset with block size $b$ varying from 10 to 22 with step size 2.] To study the influence of outliers, we vary the block size $b$ from 10 to 22, where the minimum block size and maximum block size imply 10% and 50% outliers, respectively. Figure 5(c) shows the occluded face images of a certain individual. Table 4 shows that K-means, $L_2$-NMF, $L_{2,1}$-NMF, RNMF-$L_1$, $L_1$-NMF, Huber-NMF, and CauchyNMF (in this experiment we compare with CauchyNMF to show the effect of truncation; for CauchyNMF, we set $\sigma=+\infty$ and adopt the proposed HQ algorithm to solve it) are seriously deteriorated by the added contiguous occlusions. Although CIM-NMF performs robustly when the percentage of outliers is moderate, i.e., 10% (corresponding to $b=10$) and 14% (corresponding to $b=12$), its performance is unstable when the percentage of outliers reaches 20% (corresponding to $b=14$). This is because CIM-NMF keeps energies for extreme outliers and thus lets a large number of extreme outliers dominate the objective function. By contrast, Truncated CauchyNMF reduces the energies of extreme outliers to zero, and thus performs robustly when the percentage of outliers is less than 40% (corresponding to $b=20$).
Real-life Corruption
The previous section evaluated the robustness of Truncated CauchyNMF under several types of synthetic outliers, including Laplace noise, Salt & Pepper noise, and contiguous occlusion. The experimental results show that our method consistently learns the subspace even when half the pixels in each image are corrupted, while the other NMF models fail under this extreme condition. In this section, we evaluate Truncated CauchyNMF's ability to learn the subspace under natural sources of corruption, e.g., contiguous disguise in the AR dataset and object variations in the Caltech-101 dataset.
Contiguous Disguise
The Purdue AR dataset [22] contains 2600 frontal face images taken from 100 individuals, comprising 50 males and 50 females, in two sessions. There is a total of 13 images per individual in each session, including one normal image, three images depicting different facial expressions, three images under varying illumination conditions, three images with sunglasses, and three images with a scarf. Each image is cropped into a $55\times40$-dimensional pixel array and reshaped into a 2200-dimensional long vector. Figure 8 gives 20 example images of two individuals and shows that the images with disguises, i.e., sunglasses and a scarf, are seriously contaminated by outliers. Therefore, it is quite challenging to correctly group these contaminated images, e.g., the 4th, 5th, 9th and 10th columns in Figure 8, with the clean images, e.g., the 1st and 6th columns in Figure 8. According to the results in Section 5.2.3, Truncated CauchyNMF can handle contiguous occlusions with extreme outliers well; we will therefore show the effectiveness of Truncated CauchyNMF on this task.
To evaluate the effectiveness of Truncated CauchyNMF in clustering, we randomly select the images of between two and ten individuals to comprise the dataset. By concatenating all the long vectors, we obtain an image intensity matrix denoted as $V$. We then apply NMF to $V$ to learn the subspace, i.e., $V\approx WH$, where the rank of $W$ and $H$ equals the number of clusters. Lastly, we output the cluster labels by performing K-means on $H$. To eliminate the influence of randomness, we repeat this trial 50 times and report the averaged accuracy and averaged normalized mutual information for comparison. Figure 9 gives both the average accuracy and the average normalized mutual information in relation to the number of clusters for Truncated CauchyNMF and the other NMF models. It shows that Truncated CauchyNMF consistently achieves the highest clustering performance on the AR dataset. This result confirms that Truncated CauchyNMF learns the subspace more effectively than the other NMF models, even when the images are contaminated by contiguous disguises such as sunglasses and a scarf.
We further conduct a face recognition experiment on the AR dataset to evaluate the effectiveness of Truncated CauchyNMF. In this experiment, we treat the images taken in the first session as the training set and the images taken in the second session as the test set. This task is challenging because (1) the distribution of the training set differs from that of the test set, and (2) both the training and test sets are seriously contaminated by outliers. We first learn a subspace by conducting Truncated CauchyNMF on the whole dataset and then classify each test image by the sparse representation classification method (SRC) [45] on the coefficients of both the training images and the test images in the learned subspace. Since there are 100 individuals in total and the images of each individual were taken in two sessions, we set the reduced dimensionality of Truncated CauchyNMF to 200. We also run the other NMF variants with the same setting for comparison. To filter out the influence of contiguous occlusions in face recognition, Zhou et al. [46] proposed a sparse error correction method (SEC) which labels each pixel of a test image as occluded or non-occluded by using a Markov random field (MRF) and learns a representation of each test image on the non-occluded pixels. Although SEC succeeds in filtering out the contiguous occlusions in the test set, it cannot handle outliers in the training set. By contrast, Truncated CauchyNMF can take the occlusions off both training and test images, and thus boosts the performance of the subsequent classification. Table 5 shows the face recognition accuracies of the NMF variants and SEC. In the AR dataset, each individual contains one normal image and twelve contaminated images under different conditions, including varying facial expressions, illumination, wearing sunglasses, and wearing scarves. In this experiment, we show the results not only on the total test set but also on the test images taken under the different conditions separately. Table 5 shows that Truncated CauchyNMF performs the best in most cases; in particular, it performs almost perfectly on the normal images. This validates that Truncated CauchyNMF can learn an effective subspace from contaminated data. In most situations, SEC performs excellently, but the last two columns indicate that the contaminated training images seriously weaken SEC. Truncated CauchyNMF performs well in such situations because it effectively removes the influence of outliers in the subspace learning stage.
Object Variation
The Caltech 101 dataset [44] contains pictures of objects from 101 categories. The number of pictures per category varies from 40 to 800. Figure 10 shows example images from 6 different categories, including dolphin, butterfly, sunflower, watch, pizza and cougar body. We extract convolutional neural network (CNN) features for each image using the Caffe framework [39] and the AlexNet model [42] pre-trained on ImageNet. As objects from the same category may vary in shape, color and size, and the pictures are taken from different viewpoints, clustering objects of the same category together is a very challenging task. We will show the good performance of Truncated CauchyNMF compared to other methods such as CIM-NMF, Huber-NMF, $L_{2,1}$-NMF, RNMF-$L_1$, $L_2$-NMF, $L_1$-NMF, and K-means.
Following a protocol similar to that of Section 5.3.1, we demonstrate the effectiveness of Truncated CauchyNMF in clustering objects. We test with 2 to 10 randomly selected categories. The image feature matrix is denoted as $V$. The NMF models are applied to $V$ to compute the subspace, i.e., $V\approx WH$, where the rank of $W$ and $H$ equals the number of clusters. Cluster labels are obtained by performing K-means on $H$. We repeat each trial 50 times and compute the averaged accuracy and normalized mutual information over all trials for comparison. Figure 11 presents the accuracy and normalized mutual information versus the number of clusters for the different NMF models. Truncated CauchyNMF significantly outperforms the other approaches. As the number of categories increases, the accuracy achieved by the other NMF models decreases quickly, while Truncated CauchyNMF maintains a strong subspace learning ability. We can see from the figure that Truncated CauchyNMF is more robust to the object variations than the other models. [Fig. 11: The clustering performance, in terms of both accuracy and normalized mutual information, of Truncated CauchyNMF, CIM-NMF, Huber-NMF, $L_{2,1}$-NMF, RNMF-$L_1$, $L_2$-NMF, $L_1$-NMF, and K-means on the Caltech 101 dataset with the number of clusters varying from 2 to 10.]
Note that, in all the above experiments, we optimized Truncated CauchyNMF and the other NMF models with different types of algorithms. However, the high performance is not due to the optimization algorithm. To study this point, we applied the Nesterov-based HQ algorithm to optimize the representative NMF models and compared their clustering performance on the AR dataset. The results show that Truncated CauchyNMF consistently outperforms the other NMF models. See the supplementary material for detailed discussions.
CONCLUSION
This paper proposes a Truncated CauchyNMF framework for learning subspaces from corrupted data. We propose a Truncated Cauchy loss which can simultaneously and appropriately model both moderate and extreme outliers, and develop a novel Truncated CauchyNMF model. We theoretically analyze the robustness of Truncated CauchyNMF by comparing it with a family of NMF models, and provide performance guarantees for Truncated CauchyNMF. Considering that the objective function is neither convex nor quadratic, we optimize Truncated CauchyNMF by using half-quadratic programming and alternately updating both factor matrices. We experimentally verify the robustness and effectiveness of our method on both synthetic and natural datasets and confirm that Truncated CauchyNMF is robust for learning the subspace even when half the data points are contaminated. | 10,208 |
1906.00801 | 2948014727 | We introduce a global Landau-Ginzburg model which is mirror to several toric Deligne-Mumford stacks and describe the change of the Gromov-Witten theories under discrepant transformations. We prove a formal decomposition of the quantum cohomology D-modules (and of the all-genus Gromov-Witten potentials) under a discrepant toric wall-crossing. In the case of weighted blowups of weak-Fano compact toric stacks along toric centres, we show that an analytic lift of the formal decomposition corresponds, via the Gamma-integral structure, to an Orlov-type semiorthogonal decomposition of topological K-groups. We state a conjectural functoriality of Gromov-Witten theories under discrepant transformations in terms of a Riemann-Hilbert problem. | Recently, Clingempeel-Le Floch-Romo @cite_18 compared the hemisphere partition functions (which in our language correspond to certain solutions of the quantum D-modules) of the cyclic quotient singularities @math and their Hirzebruch-Jung resolutions. They discussed the relation to a semiorthogonal decomposition of the derived categories, extending the work of Herbst-Hori-Page @cite_54 to the anomalous (discrepant) case. Their examples are complementary to ours: the Hirzebruch-Jung resolutions are type (II-ii) discrepant transformations whereas transformations in Theorem are of type (II-i) or (III) (see Remark for these types). | {
"abstract": [
"We study B-branes in two-dimensional N=(2,2) anomalous models, and their behaviour as we vary bulk parameters in the quantum Kahler moduli space. We focus on the case of (2,2) theories defined by abelian gauged linear sigma models (GLSM). We use the hemisphere partition function as a guide to find how B-branes split in the IR into components supported on Higgs, mixed and Coulomb branches: this generalizes the band restriction rule of Herbst-Hori-Page to anomalous models. As a central example, we work out in detail the case of GLSMs for Hirzebruch-Jung resolutions of cyclic surface singularities. In these non-compact models we explain how to compute and regularize the hemisphere partition function for a brane with compact support, and check that its Higgs branch component explicitly matches with the geometric central charge of an object in the derived category.",
"We study B-type D-branes in linear sigma models with Abelian gauge groups. The most important finding is the grade restriction rule. It classifies representations of the gauge group on the Chan-Paton factor, which can be used to define a family of D-branes over a region of the Kahler moduli space that connects special points of different character. As an application, we find a precise, transparent relation between D-branes in various geometric phases as well as free orbifold and Landau-Ginzburg points. The result reproduces and unifies many of the earlier mathematical results on equivalences of D-brane categories, including the McKay correspondence and Orlov's construction."
],
"cite_N": [
"@cite_18",
"@cite_54"
],
"mid": [
"2902333126",
"1579955660"
]
} | GLOBAL MIRRORS AND DISCREPANT TRANSFORMATIONS FOR TORIC DELIGNE-MUMFORD STACKS | 0 |
|
cs0006046 | 2953374640 | We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems. 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP; there is also a natural duality transformation from (a,b)-CSP to (b,a)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas. | For three-coloring, we know of several relevant references. Lawler @cite_15 is primarily concerned with the general chromatic number, but he also gives the following very simple algorithm for 3-coloring: for each maximal independent set, test whether the complement is bipartite. The maximal independent sets can be listed with polynomial delay @cite_5 , and there are at most @math such sets @cite_21 , so this algorithm takes time @math . Schiermeyer @cite_18 gives a complicated algorithm for solving 3-colorability in time @math , based on the following idea: if there is one vertex @math of degree @math then the graph is 3-colorable iff @math is bipartite, and the problem is easily solved. Otherwise, Schiermeyer performs certain reductions involving maximal independent sets that attempt to increase the degree of @math while partitioning the problem into subproblems, at least one of which will remain solvable. Our @math bound significantly improves both of these results. | {
"abstract": [
"Abstract We present an algorithm that generates all maximal independent sets of a graph in lexicographic order, with only polynomial delay between the output of two successive independent sets. We also show that there is no polynomial-delay algorithm for generating all maximal independent sets in reverse lexicographic order, unless P=NP.",
"",
"A clique is a maximal complete subgraph of a graph. The maximum number of cliques possible in a graph withn nodes is determined. Also, bounds are obtained for the number of different sizes of cliques possible in such a graph.",
"In this paper we describe and analyze an improved algorithm for deciding the 3-Colourability problem. If G is a simple graph on n vertices then we will show that this algorithm tests a graph for 3-Colourability, i.e. an assignment of three colours to the vertices of G such that two adjacent vertices obtain different colours, in less than O(1.415n) steps."
],
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_21",
"@cite_18"
],
"mid": [
"2007940874",
"2007200873",
"2013568176",
"1526568622"
]
} | 3-Coloring in Time O(1.3289^n) | There are many known NP-complete problems, including such important graph theoretic problems as coloring and independent sets. Unless P=NP, we know that no polynomial time algorithm for these problems can exist, but that does not obviate the need to solve them as efficiently as possible; indeed, the fact that these problems are hard makes efficient algorithms for them especially important.
We are interested in this paper in worst case analysis of algorithms for 3-coloring, a basic NP-complete problem. We will also discuss other related problems, including 3-SAT, 3-edge-coloring, and 3-list-coloring.
Our algorithms for these problems are based on the following simple idea: to find a solution to a 3-coloring problem, it is not necessary to choose a color for each vertex (giving something like O(3^n) time). Instead, it suffices to only partially solve the problem by restricting each vertex to two of the three colors. We can then test whether the partial solution can be extended to a complete coloring in polynomial time (e.g. as a 2-SAT instance). This idea applied naively already gives a simple O(1.5^n) time randomized algorithm (sketched below); we improve this by taking advantage of local structure (if we choose a color for one vertex, this restricts the colors of several neighbors at once). It seems likely that our idea of only searching for a partial solution can be applied to many other combinatorial search problems.
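To make this concrete, here is a minimal Monte Carlo sketch of the naive method, in Python with names of our own choosing; none of this code appears in the paper. The graph is an adjacency dict, and each trial's two-colors-per-vertex restriction is checked by the standard reduction to 2-SAT, solved here with Kosaraju's strongly-connected-components algorithm.

```python
import random

def solvable_two_colors(doms, cons):
    """2-SAT check: doms maps each vertex to exactly two allowed colors, and
    cons lists forbidden pairs ((v, X), (w, Y)).  Build the implication graph
    (choosing (v, X) forces w away from Y, i.e. onto its other color) and use
    Kosaraju's SCC algorithm: unsatisfiable iff some vertex has both of its
    two remaining options in one strongly connected component."""
    other = {(v, c): (v, (cs - {c}).pop()) for v, cs in doms.items() for c in cs}
    graph = {lit: [] for lit in other}
    for p, q in cons:
        if p in other and q in other:   # constraints on dropped colors are moot
            graph[p].append(other[q])
            graph[q].append(other[p])
    order, seen = [], set()
    for start in graph:                 # first pass: record finish order
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            nxt = next((y for y in it if y not in seen), None)
            if nxt is None:
                order.append(node)
                stack.pop()
            else:
                seen.add(nxt)
                stack.append((nxt, iter(graph[nxt])))
    rev = {lit: [] for lit in graph}
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    comp = {}
    for root in reversed(order):        # second pass: components of reverse graph
        if root in comp:
            continue
        comp[root] = root
        stack = [root]
        while stack:
            u = stack.pop()
            for v in rev[u]:
                if v not in comp:
                    comp[v] = root
                    stack.append(v)
    return all(comp[lit] != comp[other[lit]] for lit in other)

def naive_3coloring(adj, colors=("R", "G", "B"), trials=10000):
    """Restrict each vertex to a random two of the three colors, then test the
    restriction as 2-SAT.  A fixed proper coloring survives one trial with
    probability (2/3)^n, so about 1.5^n trials suffice with high probability
    (Monte Carlo: a False answer can be wrong, with small probability)."""
    # vertices are assumed comparable so each edge is generated exactly once
    cons = [((u, c), (w, c)) for u in adj for w in adj[u] if u < w for c in colors]
    return any(solvable_two_colors({v: set(random.sample(colors, 2)) for v in adj},
                                   cons)
               for _ in range(trials))

print(naive_3coloring({1: [2, 3], 2: [1, 3], 3: [1, 2]}))  # triangle: True
```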
If we perform local reductions as above in a 3-coloring problem, we eventually reach a situation in which some uncolored vertices are surrounded by partially colored neighbors, and we run out of good local configurations to use. To avoid this problem, we translate our 3-coloring problem to one that also generalizes the other problems listed above: constraint satisfaction (CSP). In an (a, b)-CSP instance, we are given a collection of n variables, each of which can be given one of a different colors. However certain color combinations are disallowed: we also have input a collection of m constraints, each of which forbids one coloring of some b-tuple of variables. Thus 3-satisfiability is exactly (2, 3)-CSP, and 3-coloring is a special case of (3, 2)-CSP in which the constraints disallow adjacent vertices from having the same color.
As we show, (a, b)-CSP instances can be transformed in certain interesting and useful ways: in particular, one can transform (a, b)-CSP into (b, a)-CSP and vice versa, one can transform (a, b)-CSP into (max(a, b), 2)-CSP, and in any (a, 2)-CSP instance one can eliminate variables for which only two colors are allowed, reducing the problem to a smaller one of the same form. Because of this ability to eliminate partially colored variables immediately rather than saving them for a later 2-SAT instance, we can solve a (3, 2)-CSP instance without running out of good local configurations.
Our actual algorithm solves (3, 2)-CSP by applying such reductions only until we reach instances with a certain simplified structure, which can then be solved in polynomial time as an instance of graph matching. We further improve our time bound for graph 3-vertex-coloring by using methods involving network flow to find a large set of good local reductions which we apply before treating the remaining problem as a (3, 2)-CSP instance. And similarly, we solve 3-edge-coloring by using graph matching methods to find a large set of good local reductions which we apply before treating the remaining problem as a 3-vertex-coloring instance.
New Results
We show the following. Except where otherwise specified, n denotes the number of vertices in a graph or variables in a SAT or CSP instance, while m denotes the number of edges in a graph, constraints in a CSP instance, or clauses in a SAT problem.
Constraint Satisfaction Problems
We now describe a common generalization of satisfiability and graph coloring as a constraint satisfaction problem (CSP) [16,28]. We are given a collection of n variables, each of which has a list of possible colors allowed. We are also given a collection of m constraints, consisting of a tuple of variables and a color for each variable. A constraint is satisfied by a coloring if not every variable in the tuple is colored in the way specified by the constraint. We would like to choose one color from the allowed list of each variable, in a way not conflicting with any constraints. For instance, 3-satisfiability can easily be expressed in this form. Each variable of the satisfiability problem may be colored (assigned the value) either true (T) or false (F). For each clause like (x_1 ∨ x_2 ∨ ¬x_3), we make a constraint ((v_1, F), (v_2, F), (v_3, T)). Such a constraint is satisfied if and only if at least one of the corresponding clause's terms is true.
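As a concrete illustration (ours, not the paper's), a constraint can be stored as a tuple of (variable, color) pairs, with the satisfaction test and the clause translation just described:

```python
# A constraint lists (variable, color) pairs and is violated only when the
# coloring assigns every listed variable its listed color.

def satisfies(coloring, constraints):
    return all(any(coloring[v] != c for v, c in con) for con in constraints)

def clause_to_constraint(clause):
    """A 3-SAT clause given as [(variable, sign), ...]; e.g. x1 or x2 or (not x3)
    is [(1, True), (2, True), (3, False)].  The constraint forbids the single
    assignment (colors are truth values) that falsifies the clause."""
    return tuple((v, not sign) for v, sign in clause)

con = clause_to_constraint([(1, True), (2, True), (3, False)])
print(con)                                               # ((1, False), (2, False), (3, True))
print(satisfies({1: False, 2: False, 3: True}, [con]))   # False: clause falsified
print(satisfies({1: False, 2: True, 3: True}, [con]))    # True
```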
In the (a, b)-CSP problem, we restrict our attention to instances in which each variable has at most a possible colors and each constraint involves at most b variables. The CSP instance constructed above from a 3-SAT instance is then a (2, 3)-CSP instance, and in fact 3-SAT is easily seen to be equivalent to (2, 3)-CSP.
In this paper, we will concentrate our attention instead on (3, 2)-CSP and (4, 2)-CSP. We can represent a (d, 2)-CSP instance graphically, by interpreting each variable as a vertex containing up to d possible colors, and by drawing edges connecting incompatible pairs of vertex colors ( Figure 1). Note that this graphical structure is not actually a graph, as the edges connect colors within a vertex rather than the vertices themselves. However, graph 3-colorability and graph 3-list-colorability can be translated directly to a form of (3, 2)-CSP: we keep the original vertices of the graph and their possible colors, and add up to three constraints for each edge of the graph to enforce the condition that the edge's endpoints have different colors ( Figure 2).
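A sketch of this translation in the same representation follows; for 3-list-coloring one would simply pass each vertex its own list instead of the common three colors.

```python
def graph_to_csp(vertices, edges, colors=("R", "G", "B")):
    """3-coloring as (3,2)-CSP: every vertex keeps all three colors, and each
    edge (u, w) contributes one binary constraint per color, forbidding u and
    w from sharing that color."""
    doms = {v: set(colors) for v in vertices}
    cons = [((u, c), (w, c)) for u, w in edges for c in colors]
    return doms, cons

doms, cons = graph_to_csp([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
print(len(cons))  # 9: three constraints per edge of a triangle
```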
Of course, since these problems are all NP-complete, the theory of NP-completeness provides translations from one problem to the other, but the translations above are size-preserving and very simple. We will later describe more complicated translations from 3-coloring and 3-edge-coloring to (3, 2)-CSP in which the input graph is partially colored before treating the remaining graph as an CSP instance, leading to improved time bounds over our pure CSP algorithm.
As we now show, (a, b)-CSP instances can be transformed in certain interesting and useful ways. We first describe a form of duality that transforms (a, b)-CSP instances into (b, a)-CSP instances, exchanging constraints for variables and vice versa.

Lemma 1 Any (a, b)-CSP instance can be transformed into an equivalent (b, a)-CSP instance with one variable per constraint of the original instance.

Proof: An assignment of colors to the original (a, b)-CSP instance's variables solves the problem if and only if, for each constraint, there is at least one pair (V, C) in the constraint that does not appear in the coloring. In our transformed problem, we choose one variable per original constraint, with the colors available to the new variable being these pairs (V, C) in the corresponding constraint in the original problem. Choosing such a pair in a coloring of the transformed problem is interpreted as ruling out C as a possible color for V in the original problem. We then add constraints to our transformed problem to ensure that for each V there remains at least one color that is not ruled out: we add one constraint for each a-tuple of colors of new variables (recall that each such color is a pair (V, C)) such that all colors in the a-tuple involve the same original variable V and exhaust all the choices of colors for V. □

Figure 4: (3, 2)-CSP instance with a two-color variable (left) and reduced instance after application of Lemma 2 (right).
This duality may be easier to understand with a small example. As discussed above, 3-SAT is essentially the same as (2, 3)-CSP, so Lemma 1 can be used to translate 3-SAT to (3, 2)-CSP. Suppose we start with the 3-SAT instance (x_1 ∨ x_2 ∨ ¬x_3) ∧ (¬x_1 ∨ x_3 ∨ x_4) ∧ (x_1 ∨ ¬x_2 ∨ ¬x_4). Then we make a (3, 2)-CSP instance (Figure 3) with three variables v_i, one for each 3-SAT clause. Each variable has three possible colors: (1, 2, 3) for v_1, (1, 3, 4) for v_2, and (1, 2, 4) for v_3. The requirement that value T or F be available to x_1 corresponds to the constraints ((v_1, 1), (v_2, 1)) and ((v_2, 1), (v_3, 1)); we similarly get constraints ((v_1, 2), (v_3, 2)), ((v_1, 3), (v_2, 3)), and ((v_2, 4), (v_3, 4)). One possible coloring of this (3, 2)-CSP instance would be to color v_1 with 1, v_2 with 3, and v_3 with 4; this would give satisfying assignments in which x_1 and x_3 are T, x_4 is F, and x_2 can be either T or F.
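The duality transformation itself is short enough to sketch directly; the code below (our own naming) reproduces the example's three variables and five constraints.

```python
from itertools import combinations

def dualize(doms, cons):
    """Lemma 1 sketch, (a,b)-CSP -> (b,a)-CSP: one new variable per old
    constraint, whose colors are that constraint's (variable, color) pairs;
    choosing a pair rules that color out for the old variable.  For each old
    variable v with a colors, forbid every a-tuple of choices that together
    rules out all of v's colors."""
    new_doms = {i: set(con) for i, con in enumerate(cons)}
    new_cons = []
    for v, colors in doms.items():
        # choices (new_var, (v, c)) that would rule out color c of v
        choices = [(i, pair) for i, con in enumerate(cons)
                   for pair in con if pair[0] == v]
        for combo in combinations(choices, len(colors)):
            if {pair[1] for _, pair in combo} == set(colors):
                new_cons.append(combo)
    return new_doms, new_cons

# The 3-SAT instance in the text; colors are truth values.
doms = {i: {True, False} for i in (1, 2, 3, 4)}
cons = [((1, False), (2, False), (3, True)),    # x1 v x2 v (not x3)
        ((1, True), (3, False), (4, False)),    # (not x1) v x3 v x4
        ((1, False), (2, True), (4, True))]     # x1 v (not x2) v (not x4)
new_doms, new_cons = dualize(doms, cons)
print(len(new_doms), len(new_cons))             # 3 variables, 5 constraints
```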
We can similarly translate an (a, a)-CSP instance into an (a, 2)-CSP instance in which each variable corresponds to either a constraint or a variable, and each constraint forces the variable colorings to match up with the dual constraint colorings; we omit the details as we do not use this construction in our algorithms.
Simplification of CSP Instances
Before we describe our CSP algorithms, we describe some situations in which the number of variables in an CSP instance may be reduced with little computational effort.
Lemma 2
Let v be a variable in an (a, 2)-CSP instance, such that only two of the a colors are allowed at v. Then we can find an equivalent (a, 2)-CSP instance with one fewer variable.
Proof: Let the two colors allowed at v be R and G. Define conflict(C) to be the set of pairs {(u, A) : ((u, A), (v, C)) is a constraint}. We then add conflict(R) × conflict(G) to our set of constraints.

Any pair ((u, A), (w, B)) ∈ conflict(R) × conflict(G) does not reduce the space of solutions to the original problem since if both (u, A) and (w, B) were present in a coloring there would be no possible color left for v. Conversely if all such constraints are satisfied, one of the two colors for v must be available. Therefore we can now find a smaller equivalent problem by removing v, as shown in Figure 4. □
When we apply this variable elimination scheme, the number of constraints can increase, but there can exist only (an)^2 distinct constraints, which in our applications will be a small polynomial.
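A sketch of the Lemma 2 elimination in the same representation (binary constraints only; our naming). A degenerate pair (p, p) produced here simply forbids p outright, which the constraint semantics already expresses; a real implementation would also deduplicate, since only (an)^2 distinct constraints exist.

```python
def eliminate_two_color_variable(doms, cons, v):
    """Lemma 2: v has exactly two colors R and G left.  Add every pair from
    conflict(R) x conflict(G) as a new constraint, then drop v and all
    constraints mentioning it.  Mutates doms; returns the reduced instance."""
    R, G = doms.pop(v)
    conflict = {c: [p for con in cons if (v, c) in con for p in con if p != (v, c)]
                for c in (R, G)}
    kept = [con for con in cons if all(u != v for u, _ in con)]
    return doms, kept + [(p, q) for p in conflict[R] for q in conflict[G]]

doms = {1: {"R", "G"}, 2: {"R", "G", "B"}, 3: {"R", "G", "B"}}
cons = [((1, "R"), (2, "R")), ((1, "G"), (3, "G"))]
doms, cons = eliminate_two_color_variable(doms, cons, 1)
print(doms, cons)   # variable 1 is gone; one new constraint pairs (2,'R') with (3,'G')
```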
Lemma 3 Let (v, X) and (w, Y) be (variable,color) pairs in an (a, 2)-CSP instance, such that v ≠ w and the only constraints involving these pairs are either of the form ((v, X), (w, Z)) with Y ≠ Z, or ((v, Z), (w, Y)) with X ≠ Z. Then we can find an equivalent (a, 2)-CSP instance with two fewer variables.
Proof: It is safe to choose the colors (v, X) and (w, Y), since these two choices do not conflict with each other nor with anything else in the CSP instance. 2 Lemma 4 Let (v, R) and (v, B) be (variable,color) pairs in an (a, 2)-CSP instance, such that whenever the instance contains a constraint ((v, R), (w, X)) it also contains a constraint ((v, B), (w, X)). Then we can find an equivalent (a, 2)-CSP instance with one fewer variable.
Proof: Any solution involving (v, B) can be changed to one involving (v, R) without violating any additional constraints, so it is safe to remove the option of coloring v with color B. Once we remove this option, v is restricted to two colors, and we can apply Lemma 2. □

Lemma 5 Let (v, R) be a (variable,color) pair in an (a, 2)-CSP instance that is involved in no constraints. Then we can find an equivalent (a, 2)-CSP instance with one fewer variable.

Proof: We may safely assign color R to v and remove it from the instance. □

Lemma 6 Let (v, R) be a (variable,color) pair in an (a, 2)-CSP instance that is involved in constraints with all three color options of another variable w. Then we can find an equivalent (a, 2)-CSP instance with one fewer variable.
Proof: No coloring of the instance can use (v, R), so we can restrict v to the remaining two colors and apply Lemma 2. □
We say that a CSP instance in which none of Lemmas 2-6 applies is reduced.
Simple Randomized CSP Algorithm
We first demonstrate the usefulness of Lemma 2 by describing a very simple randomized algorithm for solving (3, 2)-CSP instances in expected time O(2^{n/2} n^{O(1)}).
Lemma 7
If we are given a (3, 2)-CSP instance I, then in random polynomial time we can find an instance I′ with two fewer variables, such that if I′ is solvable then so is I, and if I is solvable then with probability at least 1/2 so is I′.
Proof: If no constraint exists, we can solve the problem immediately. Otherwise choose some constraint ((v, X), (w, Y)). Rename the colors if necessary so that both v and w have available the same three colors R, G, and B, and so that X = Y = R. Restrict the colorings of v and w to two colors each in one of four ways, chosen uniformly at random from the four possible such restrictions in which exactly one of v and w is restricted to colors G and B (Figure 5). Then it can be verified by examination of cases that any valid coloring of the problem remains valid for exactly two of these four restrictions, so with probability 1/2 it continues to be a solution to the restricted problem. Now apply Lemma 2 and eliminate both v and w from the problem. □
Corollary 1 In expected time O(2^{n/2} n^{O(1)}) we can find a solution to a (3, 2)-CSP instance if one exists.
Proof: We perform the reduction above n/2 times, taking polynomial time and giving probability at least 2^{-n/2} of finding a correct solution. If we repeat this method until a solution is found, the expected number of repetitions is 2^{n/2}. □
Faster CSP Algorithm
We now describe a more complicated method of solving (3, 2)-CSP instances deterministically with the somewhat better time bound of O(1.36443^n). More generally, our algorithm can actually handle (4, 2)-CSP instances. Any (4, 2)-CSP instance can be transformed into a (3, 2)-CSP instance by expanding each of its four-color variables to two three-color variables, each having two of the original four colors, with a constraint connecting the third color of each new variable (Figure 6). Therefore, the natural definition of the "size" of a (4, 2)-CSP instance is n = n_3 + 2n_4, where n_i denotes the number of variables with i colors. However, we instead define the size to be n = n_3 + (2 − ǫ)n_4, where ǫ ≈ 0.095543 is a constant to be determined more precisely later. In any case, the size of a (3, 2)-CSP instance remains equal to its number of variables, so any bound on the running time of our algorithm in terms of n applies directly to (3, 2)-CSP. The basic idea of our algorithm is to find a set of local configurations that must occur within any (4, 2)-CSP instance I, such that any instance containing such a configuration can be replaced by a small number of smaller instances.

Figure 6: Isolated constraint between two three-color variables (left) can be replaced by a single four-color variable (right).
In more detail, for each configuration we describe a set of smaller instances I_i of size |I| − r_i such that I is solvable if and only if at least one of the instances I_i is solvable. If one particular configuration occurred at each step of the algorithm, this would lead to a recurrence of the form

T(n) = Σ_i T(n − r_i) + poly(n) = O(λ(r_1, r_2, . . .)^n)

for the worst-case running time of our algorithm, where the base λ(r_1, r_2, . . .) of the exponent in the running time is the largest zero of the function f(x) = 1 − Σ_i x^{-r_i} (such a function is not necessarily a polynomial because the r_i will not necessarily be integers). We call this value λ(r_1, r_2, . . .) the work factor of the given local configuration. The overall time bound will be λ^n where λ is the largest work factor among the configurations we have identified. This value λ will depend on our previous choice of ǫ; we will choose ǫ in such a way as to minimize λ.
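Work factors are easy to evaluate numerically, since f(x) = 1 − Σ_i x^{-r_i} is increasing for x > 1; here is a small bisection sketch (a helper of our own, not from the paper):

```python
def work_factor(*r):
    """lambda(r1, ..., rk): the unique x > 1 with sum(x**-ri) = 1, i.e. the
    base of the recurrence T(n) = sum_i T(n - ri) + poly(n)."""
    lo, hi = 1.0, 2.0
    while sum(hi ** -ri for ri in r) > 1:   # grow the bracket if needed
        hi *= 2
    for _ in range(100):                     # bisection: f increases on x > 1
        mid = (lo + hi) / 2
        if sum(mid ** -ri for ri in r) > 1:
            lo = mid
        else:
            hi = mid
    return hi

print(round(work_factor(4, 4, 5, 5), 5))  # ~1.36443, the Lambda used below
print(round(work_factor(2, 2), 5))        # ~1.41421 = sqrt(2), for comparison
```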
Single Constraints and Multiple Adjacencies
We first consider local configurations in which some (variable,color) pair is incident on only one constraint, or has multiple constraints to the same variable. First, suppose that (variable,color) pair (v, R) is involved in only a single constraint ((v, R), (w, R)). If this is also the only constraint involving (w, R), we call it an isolated constraint. Otherwise, we call it a dangling constraint.
Lemma 8 Let ((v, R), (w, R)) be an isolated constraint in a (4, 2)-CSP instance, and let ǫ ≤ 0.545. Then the instance can be replaced by smaller instances with work factor at most λ(2 − ǫ, 3 − ǫ).
Proof: If v and w are both three-color variables, then the instance can be colored if and only if we can color the instance formed by replacing them with a single four-color variable, in which the four colors are the remaining choices for v and w other than R (Figure 6). Thus in this case we can reduce the problem size by ǫ, with no additional work.

Otherwise, if there exists a coloring of the given instance, there exists one in which exactly one of v and w is given color R. Suppose first that v has four colors while w has only three. Thus we can reduce the problem to two instances, in one of which (v, R) is used (so v is removed from the problem, and (w, R) is removed as a choice for variable w, allowing us to remove the variable by Lemma 2) and in the other of which (w, R) is used (Figure 7). The first subproblem has its size reduced by 3 − ǫ since both variables are removed, while the second's size is reduced by 2 − ǫ since w is removed while v loses one of its colors but is not removed. Thus the work factor is λ(2 − ǫ, 3 − ǫ). Similarly, if both are four-color variables, the work factor is λ(3 − 2ǫ, 3 − 2ǫ). For the given range of ǫ, this second work factor is smaller than the first. □

Lemma 9 Let ((v, R), (w, R)) be a dangling constraint in a reduced (4, 2)-CSP instance. Then the instance can be replaced by smaller instances with work factor at most λ(2 − ǫ, 3 − ǫ).
Proof: The second constraint for (w, R) cannot involve v, or we would be able to apply Lemma 4. We choose either to use color (w, R) or to restrict w to avoid that color (Figure 8). If we use color (w, R), we eliminate choice (v, R) and another choice on the other neighbor of w. If we avoid color (w, R), we may safely use color (v, R).

Figure 9: Implication from (v, R) to (w, R), such that (w, R) has two distinct neighbors. Restricting w eliminates v and w (top right) while assigning w color R eliminates three variables (bottom right).
In the worst case, the other neighbor of (w, R) has four colors, so removing one only reduces the problem size by 1 − ǫ. There are four cases depending on the number of colors of v and w: if both have three colors, the work factor is λ(2, 3 − ǫ). If only v has four colors, the work factor is λ(3 − ǫ, 3 − 2ǫ). If only w has four colors, the work factor is λ(2 − ǫ, 4 − 2ǫ). If both have four colors, the work factor is λ(3 − 2ǫ, 4 − 3ǫ). These factors are all dominated by the one in the statement of the lemma. □

Lemma 10 Suppose a reduced (4, 2)-CSP instance includes two constraints such as ((v, R), (w, B)) and ((v, R), (w, G)) that connect one color of variable v with two colors of variable w, and let ǫ ≤ 0.4. Then the instance can be replaced by smaller instances with work factor at most λ(2 − ǫ, 3 − 2ǫ).
Proof:
We assume that the instance has no color choice with only a single constraint, or we could apply one of Lemmas 8 and 9 to achieve the given work factor.
We say that (v, R) implies (w, R) if there are constraints from (v, R) to every other color choice of w. If the target (w, R) of an implication is not the source of another implication, then using (w, R) eliminates w and at least two other colors, while avoiding (w, R) forces us to also avoid (v, R) (Figure 9). Thus, in this case we achieve work factor either λ(2 − ǫ, 3 − 2ǫ) if w has three color choices, or λ(2 − 2ǫ, 4 − 3ǫ) if it has four.

If the target of every implication is the source of another, then we can find a cycle of colors each of which implies the next in the cycle (Figure 10). If no other constraints involve colors in the cycle (as is true in the figure), we can use them all, reducing the problem by the length of the cycle for free. Otherwise, let (v, R) be a color in the cycle that has an outside constraint. If we use (v, R), we must use the colors in the rest of the cycle, and eliminate the (variable,color) pair outside the cycle constrained by (v, R). If we avoid (v, R), we must also avoid the colors in the rest of the cycle. The maximum work factor for this case is λ(2, 3 − ǫ), and arises when the cycle consists of only two variables, both of which have only three allowed colors.

Finally, if the situation described in the lemma exists without forming any implication, then w must have four color choices, exactly two of which are constrained by (v, R). In this case restricting w to those two choices reduces the size by at least 3 − 2ǫ, while restricting it to the remaining two choices reduces the size by 2 − ǫ, again giving work factor λ(2 − ǫ, 3 − 2ǫ). □
Highly Constrained Colors
We next consider cases in which choosing one color for a variable eliminates many other choices, or in which adjacent (variable,color) pairs have different numbers of constraints.
Lemma 11
Suppose a reduced (4, 2)-CSP instance includes a color pair (v, R) involved in three or more constraints, where v has four color choices, or a pair (v, R) involved in four or more constraints, where v has three color choices. Then the instance can be replaced by smaller instances with work factor at most λ(1 − ǫ, 5 − 4ǫ).
Proof:
We can assume from Lemma 10 that each constraint connects (v, R) to a different variable. Then if we choose to use color (v, R), we eliminate v and remove a choice from each of its neighbors, either eliminating them or reducing their number of choices from four to three. If we don't use (v, R), we eliminate that color only. So if v has four choices, the work factor is at most λ(1 − ǫ, 5 − 4ǫ), and if it has three choices and four or more constraints, the work factor is at most λ(1, 5 − 4ǫ). □

Lemma 12 Suppose a reduced (4, 2)-CSP instance includes a (variable,color) pair (v, R) with three constraints, one of which connects it to a variable with four color choices, and let ǫ ≤ 0.3576. Suppose also that none of the previous lemmas applies. Then the instance can be replaced by smaller instances with work factor at most λ(3 − ǫ, 4 − ǫ, 4 − ǫ).
Proof: For convenience suppose that the four-color neighbor is (w, R). We can assume (w, R) has only two constraints, else it would be covered by a previous lemma. Then, if (v, R) and (w, R) do not form a triangle with a third (variable,color) pair ( Figure 11, left), we choose either to use or avoid color (v, R). If we use (v, R), we eliminate v and the three adjacent color choices. If we avoid (v, R), we create a dangling constraint at (w, R), which we have seen in Lemma 9 allows us to further subdivide the instance with work factor λ(3 − ǫ, 3 − 2ǫ) in addition to the elimination of v. Thus, the overall work factor in this case is λ(4 − ǫ, 4 − 2ǫ, 4 − 3ǫ).
On the other hand, suppose we have a triangle of constraints formed by (v, R), (w, R), and a third (variable,color) pair (x, R), as shown in Figure 11, right. Then (v, R) and (x, R) are the only choices constraining (w, R), so if (v, R) and (x, R) are both not chosen, we can safely choose to use color (w, R). Therefore, we make three smaller instances, in each of which we choose to use one of the three choices in the triangle. We can assume from the previous cases that (v, R) has only three choices, and further its third neighbor (other than (w, R) and (x, R)) must also have only three choices or we could apply the previous case of the lemma. In the worst case, (x, R) has only two constraints and x has only three color choices. Therefore, the size of the subproblems formed by choosing (v, R), (w, R), and (x, R) is reduced by at least 4 − ǫ, 4 − ǫ, and 3 − ǫ respectively, leading to a work factor of λ(3 − ǫ, 4 − ǫ, 4 − ǫ). If instead x has four color choices, we get the better work factor λ(4 − 2ǫ, 4 − 2ǫ, 4 − 2ǫ).
For the given range of ǫ, the largest of these work factors is λ(3 − ǫ, 4 − ǫ, 4 − ǫ). □
Lemma 13
Suppose a reduced (4, 2)-CSP instance includes a (variable,color) pair (v, R) with three constraints, one of which connects it to a variable with two constraints. Suppose also that none of the previous lemmas applies. Then the instance can be replaced by smaller instances with work factor at most max{λ(1 + ǫ, 4), λ(3, 4 − ǫ, 4)}.
Proof: Let (w, R) be the neighbor with two constraints. Note that (since the previous lemma is assumed not to apply) all neighbors of (v, R) have only three color choices. First, suppose (v, R) and (w, R) are not part of a triangle of constraints (Figure 12, top). Then, if we choose to use color (v, R) we eliminate four variables, while if we avoid using it we create a dangling constraint on (w, R) which we further subdivide into two more instances according to Lemma 9. Thus, the work factor in this case is λ(3, 4 − ǫ, 4).

Second, suppose that (v, R) and (w, R) are part of a triangle with a third (variable,color) pair (x, R), and that (x, R) has three constraints (Figure 12, bottom left). Then (as in the previous lemma) we may choose to use one of the three choices in the triangle, resulting in work factor λ(3, 4, 4).
Finally, suppose that (v, R), (w, R), and (x, R) form a triangle as above, but that (x, R) has only two constraints (Figure 12, bottom right). Then if we choose to use (v, R) we eliminate four variables, while if we avoid using it we create an isolated constraint between (w, R) and (x, R). Thus in this case the work factor is λ(1 + ǫ, 4). □
If none of the above lemmas applies to an instance, then each color choice in the instance must have either two or three constraints, and each neighbor of that choice must have the same number of constraints.
Triply-Constrained Colors
Within this section we assume that we have a (4, 2)-CSP instance in which none of the previous reduction lemmas applies, so any (variable,color) pair must be involved in exactly as many constraints as each of its neighbors.
We now consider the remaining (variable,color) pairs that have three constraints each. Define a three-component to be a subset of such pairs such that any pair in the subset is connected to any other by a path of constraints. We distinguish two such types of components: a small three-component is one that involves only four distinct variables, while a large three-component involves five or more variables. Note that we can assume by the previous lemmas that each variable in a component has only three color choices.

Figure 13: The two possible small three-components with k = 8.
Lemma 14 Let C be a small three-component involving k (variable,color) pairs. Then k must be a multiple of four, and each variable involved in the component has exactly k/4 pairs in C.
Proof: Let v and w be variables in a small component C. Then each (variable,color) pair in C from variable v has exactly one constraint to a distinct (variable,color) pair from variable w, so the number of pairs from v equals the number of pairs from w. The assertions that each variable has the same number of pairs, and that the total number of pairs is a multiple of four, then follow. □
We say that a small three-component is good if k = 4 in the lemma above.
Lemma 15
Let C be a small three-component that is not good. Then the instance can be replaced by smaller instances with work factor at most λ(4, 4, 4).
Proof: A component with k = 12 uses up all color choices for all four variables. Thus we may consider these variables in isolation from the rest of the instance, and either color them all (if possible) or determine that the instance is unsolvable. The remaining small components have k = 8. Such a component may be drawn with the four variables at the corners of a square, and the top, left, and right pairs of edges uncrossed ( Figure 13). If only the center two pairs were crossed, we would actually have two k = 4 components, and if any other two or three of the remaining pairs were crossed, we could reduce the number of crossings in the drawing by swapping the colors at one of the variables. Thus, the only possible small components with k = 8 are the one with all six pairs uncrossed, and the one with only one pair crossed.
The first of these allows all four variables to be colored and removed, while in the other case there exist only three maximal subsets of variables that can be colored. (In the figure, these three sets are formed by the bottom two vertices, and the two sets formed by removing one bottom vertex.) We split into instances by choosing to color each of these maximal subsets, eliminating all four variables in the component and giving work factor λ(4, 4, 4). □
Define a witness to a large three-component to be a set of five (variable,color) pairs with five distinct variables, such that there exist constraints from one pair to three others, and from at least one of those three to the fifth. By convention we use (v, R) to denote the first pair, (w, R), (x, R), and (y, R) to denote the pairs connected by constraints to (v, R), and (z, R) to be the fifth pair in the witness.
Lemma 16 Every large three-component has a witness.
Proof: Choose some arbitrary pair (u, R) as a starting point, and perform a breadth first search in the graph formed by the pairs and constraints in the component. Let (z, R) be the first pair reached by this search where z is not one of the variables adjacent to (u, R), let (v, R) be the grandparent of (z, R) in the breadth first search tree, and let the other three pairs be the neighbors of (v, R).
Then it is easy to see that (v, R) and its neighbors must use the same four variables as (u, R) and its neighbors, while z by definition uses a different variable. □
Lemma 17
Suppose that a (4, 2)-CSP instance contains a large three-component. Then the instance can be replaced by smaller instances with work factor at most λ(4, 4, 5, 5).
Proof: Let (v, R), (w, R), (x, R), (y, R), and (z, R) be a witness for the component. Then we distinguish subcases according to how many of the neighbors of (z, R) are pairs in the witness.
1. If (z, R) has a constraint with only one pair in the witness, say (w, R), then we choose either to use color (z, R) or to avoid it. If we use it, we eliminate some four variables. If we avoid it, then we cause (w, R) to have only two constraints. If (w, R) is also constrained by one of (x, R) or (y, R), we then have a triangle of constraints ( Figure 14, top left). We can assume without loss of generality that the remaining constraint from this triangle does not connect to a different color of variable z, for if it did we could instead use the same five variables in a different order to get a witness of this form. We then further subdivide into three more instances, in each of which we choose to use one of the pairs in the triangle, as in the second case of Lemma 13. This gives overall work factor λ(4, 4, 5, 5).
On the other hand, if (v, R) and (w, R) are not part of a triangle ( Figure 14, top right), then (after avoiding (z, R)) we can apply the first case of Lemma 13 again achieving the same work factor.
2. If (z, R) has constraints with two pairs in the witness ( Figure 14, bottom left), then choosing to use (z, R) eliminates four variables and causes (v, R) to dangle, while avoiding (z, R) eliminates a single variable. The work factor is thus λ(1, 6, 7).
3. If (z, R) has constraints with all three of (w, R), (x, R), and (y, R) (Figure 14, bottom right), then choosing to use (z, R) also allows us to use (v, R), eliminating five variables. The work factor is λ(1, 5).
The largest of the three work factors arising in these cases is the first one, λ(4, 4, 5, 5). □
Doubly-Constrained Colors
As in the previous section, we define a two-component to be a subset of (variable,color) pairs such that each has two constraints, and any pair in the subset is connected to any other by a path of constraints. A two-component must have the form of a cycle of pairs, but it is possible for more than one pair in the cycle to involve the same variable. We distinguish two such types of components: a small two-component is one that involves only three pairs, while a large two-component involves four or more pairs.
Lemma 18 Suppose a reduced (4, 2)-CSP instance includes a large two-component, and let ǫ ≤ 0.287. Then the instance can be replaced by smaller instances with work factor at most λ(3, 3, 5).
Proof: We split into subcases:
1. Suppose the cycle passes through five consecutive distinct variables, say (v, R), (w, R), (x, R), (y, R), and (z, R). We can assume that, if any of these five variables has four color choices, then this is true of one of the first four variables. Any coloring that does not use both (v, R) and (y, R) can be made to use at least one of the two colors (w, R) or (x, R) without violating any of the constraints. Therefore, we can divide into three subproblems: one in which we use (w, R), eliminating three variables, one in which we use (x, R), again eliminating three variables, and one in which we use both (v, R) and (y, R), eliminating all five variables. If all five variables have only three color choices, the work factor resulting from this subdivision is λ(3, 3, 5). If some of the variables have four color choices, the work factor is at most λ(3 − ǫ, 4 − ǫ, 5 − 2ǫ), which is smaller for the given range of ǫ.
2. Suppose two colors three constraints apart on a cycle belong to the same variable; for instance, the sequence of colors may be (v, R), (w, R), (x, R), (v, G). Then any coloring can be made to use one of (w, R) or (x, R) without violating any constraints. If we form one subproblem in which we use (w, R) and one in which we use (x, R), we get work factor at most λ(3−ǫ, 3−ǫ) (the worst case occurring when only v has four color choices).
3. Any long cycle which does not contain one of the previous two subcases must pass through the same four variables in the same order one, two, or three times. If it passes through two or three times, all four variables may be safely colored using colors from the cycle, reducing the problem with work factor one. And if the cycle has length exactly four, we may choose one of two ways to use two diagonally opposite colors from the cycle, giving work factor at most λ(4, 4).
For the given range of ǫ, the largest of these work factors is λ(3, 3, 5). □
Matching
Suppose we have a (4, 2)-CSP instance to which none of the preceding reduction lemmas applies. Then, every constraint must be part of a good three-component or a small two-component. As we now show, this simple structure enables us to solve the remaining problem quickly.
Lemma 19
If we are given a (4, 2)-CSP instance in which every constraint must be part of a good three-component or a small two-component, then we can solve it or determine that it is not solvable in polynomial time.
Proof:
We form a bipartite graph, in which the vertices correspond to the variables and components of the instance. We connect a variable to a component by an edge if there is a (variable,color) pair using that variable and belonging to that component. Since each pair in a good three-component or small two-component is connected by a constraint to every other pair in the component, any solution to the instance can use at most one (variable,color) pair per component. Thus, a solution consists of a set of (variable,color) pairs, covering each variable once, and covering each component at most once. In terms of the bipartite graph constructed above, this is simply a matching. So, we can solve the problem by using a graph maximum matching algorithm to determine the existence of a matching that covers all the variables. □
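For illustration, the matching test can be sketched with Kuhn's augmenting-path algorithm (a hypothetical helper of our own; any maximum matching routine would do):

```python
def has_perfect_variable_matching(variables, components):
    """Lemma 19 sketch: variables on one side, components on the other, with
    an edge when some (variable, color) pair of the variable lies in the
    component.  components: dict component_id -> set of (variable, color)
    pairs.  Returns True iff a matching covers every variable."""
    adj = {v: [cid for cid, pairs in components.items()
               if any(u == v for u, _ in pairs)] for v in variables}
    match = {}  # component_id -> matched variable

    def augment(v, visited):
        for cid in adj[v]:
            if cid not in visited:
                visited.add(cid)
                if cid not in match or augment(match[cid], visited):
                    match[cid] = v
                    return True
        return False

    return all(augment(v, set()) for v in variables)

comps = {0: {(1, "R"), (2, "R")}, 1: {(1, "G"), (2, "G")}, 2: {(1, "B"), (2, "B")}}
print(has_perfect_variable_matching([1, 2], comps))  # True
```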
Overall CSP Algorithm
This completes the case analysis needed for our result.

Theorem 1 We can solve any (4, 2)-CSP instance (and hence any (3, 2)-CSP instance) of size n in time O(λ(4, 4, 5, 5)^n) ≈ O(1.36443^n).

Proof: We employ a backtracking (depth first) search in a state space consisting of (3, 2)-CSP instances. At each point in the search, we examine the current state, and attempt to find a set of smaller instances to replace it with, using one of the reduction lemmas above. Such a replacement can always be found in polynomial time by searching for various simple local configurations in the instance. We then recursively search each smaller instance in succession. If we ever reach an instance in which Lemma 19 applies, we perform a matching algorithm to test whether it is solvable. If so, we find a solution and terminate the search. If not, we backtrack to the most recent branching point of the search and continue with the next alternative at that point.
A bound of λ^n on the number of recursive calls in this search algorithm, where λ is the maximum work factor occurring in our reduction lemmas, can be proven by induction on the size of an instance. The work within each call is polynomial and does not add appreciably to the overall time bound.
To determine the maximum work factor, we need to set a value for the parameter ǫ. We used Mathematica to find a numerical value of ǫ minimizing the maximum of the work factors involving ǫ, and found that for ǫ ≈ 0.095543 the work factor is ≈ 1.36443 ≈ λ(4, 4, 5, 5). For ǫ near this value, the two largest work factors are λ(3 − ǫ, 4 − ǫ, 4 − ǫ) (from Lemma 12) and λ(1 + ǫ, 4) (from Lemma 13); the remaining work factors are below 1.36. The true optimum value of ǫ is thus the one for which λ(3 − ǫ, 4 − ǫ, 4 − ǫ) = λ(1 + ǫ, 4).
As we now show, for this optimum ǫ, λ(3 − ǫ, 4 − ǫ, 4 − ǫ) = λ(1 + ǫ, 4) = λ(4, 4, 5, 5), which also arises as a work factor in Lemma 17. Consider subdividing an instance of size n into one of size n − (1 + ǫ) and another of size n − 4, and then further subdividing the first instance into subinstances of size n − (1 + ǫ) − (3 − ǫ), n − (1 + ǫ) − (4 − ǫ), and n − (1 + ǫ) − (4 − ǫ). This four-way subdivision combines subdivisions of type λ(1 + ǫ, 4) and λ(3 − ǫ, 4 − ǫ, 4 − ǫ), so it must have a work factor between those two values. But by assumption those two values equal each other, so they also equal the work factor of the four-way subdivision, which is just λ(4, 4, 5, 5). □
We use the quantity λ(4, 4, 5, 5) frequently in the remainder of the paper, so we use Λ to denote this value. Theorem 1 immediately gives algorithms for some more well known problems, some of which we improve later. Of these, the least familiar is likely to be list k-coloring: given at each vertex of a graph a list of k colors chosen from some larger set, find a coloring of the whole graph in which each vertex color is chosen from the corresponding list [11].
Corollary 2 We can solve the 3-coloring and 3-list coloring problems in time O(Λ^n) ≈ O(1.36443^n).
Vertex Coloring
Simply by translating a 3-coloring problem into a (3, 2)-CSP instance, as described above, we can test 3-colorability in time O(Λ^n). We now describe some methods to reduce this time bound even further.
The basic idea is as follows: we find a small set of vertices S ⊂ V(G) with a large set N of neighbors, and choose one of the 3^{|S|} colorings for all vertices in S. For each such coloring, we translate the remaining problem to a (3, 2)-CSP instance. The vertices in S are already colored and need not be included in the (3, 2)-CSP instance. The vertices in N now have a colored neighbor, so for each such vertex at most two possible colors remain; therefore we can eliminate them from the (3, 2)-CSP instance using Lemma 2. The remaining instance has k = |V(G) − S − N| vertices, and can be solved in time O(Λ^k) by Theorem 1. The total time is thus O(3^{|S|} Λ^k). By choosing S appropriately we can make this quantity smaller than O(Λ^n).
We can assume without loss of generality that all vertices in G have degree three or more, since smaller degree vertices can be removed without changing 3-colorability.
As a first cut at our algorithm, choose X to be any set of vertices, no two adjacent or sharing a neighbor, and maximal with this property. Let Y be the set of neighbors of X. We define a rooted forest F covering G as follows: let the roots of F be the vertices in X, let each vertex in Y be connected to its unique neighbor in X, and let each remaining vertex v in G be connected to some neighbor of v in Y. (Such a neighbor must exist or v could have been added to X). We let the set S of vertices to be colored consist of all of X, together with each vertex in Y having three or more children in F.
We classify the subtrees of F rooted at vertices in Y as follows (Figure 15). If a vertex v in Y has no children, we call the subtree rooted at v a club. If v has one child, we call its subtree a stick. If it has two children, we call its subtree a fork. And if it has three or more children, we call its subtree a broom.
We can now compute the total time of our algorithm by multiplying together a factor of 3 for each vertex in S (that is, the roots of the trees of F and of broom subtrees) and a factor of Λ for each leaf in a stick or fork. We define the cost of a vertex in a tree T to be the product p of such factors involving vertices of T, spread evenly among the vertices: if T contains k vertices the cost is p^{1/k}. The total time of the algorithm will then be O(c^n) where c is the maximum cost of any vertex. It is not hard to show that this maximum is achieved in trees consisting of three forks (Figure 16), for which the cost is (3Λ^6)^{1/10} ≈ 1.34488. Therefore we can three-color any graph in time O(1.34488^n). We can improve this somewhat with some more work.
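A quick numeric check of the cost claimed for the three-fork tree (one factor of 3 for the root, six leaf factors of Λ, spread over ten vertices):

```python
Lam = 1.36443                        # Lambda = lambda(4, 4, 5, 5) from Theorem 1
print((3 * Lam ** 6) ** (1 / 10))    # ~1.34488, the per-vertex cost above
```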
Cycles of Degree-Three Vertices
We begin by showing that we can assume that our graph has a special structure: the degree-three vertices do not form any cycles. For if they do form a cycle, we can remove it cheaply as follows.
Lemma 20 Let G be a 3-coloring instance in which some cycle consists only of degree-three vertices. Then we can replace G by smaller instances with work factor at most λ(5, 6, 7, 8) ≈ 1.2433.
Proof:
Let the cycle C consist of vertices v_1, v_2, . . ., v_k. We can assume without loss of generality that it has no chords, since otherwise we could find a shorter cycle in G; therefore each v_i has a unique neighbor w_i outside the cycle, although the w_i need not be distinct from each other. Note that, if any w_i and w_{i+1} are adjacent, then G is 3-colorable iff G \ C is; for, if we have a coloring of G \ C, then we can color C by giving v_{i+1} the same color as w_i, and then proceeding to color the remaining cycle vertices in order v_{i+2}, v_{i+3}, . . ., v_k, v_1, v_2, . . ., v_i. Each successive vertex has only two previously-colored neighbors, so there remains at least one free color to use, until we return to v_i. When we color v_i, all three of its neighbors are colored, but two of them have the same color, so again there is a free color. As a consequence, if C has even length, then G is 3-colorable iff G \ C is; for if some w_i and w_{i+1} are given different colors, then the above argument colors C, while if all w_i have the same color, then the other two colors can be used in alternation around C.
The first remaining case is that k = 3 (Figure 17, left). Then we divide the problem into two smaller instances, by forcing w_1 and w_2 to have different colors in one instance (by adding an edge between them, Figure 17 top right) while forcing them to have the same color in the other instance (by collapsing the two vertices into a single supervertex, Figure 17 bottom right). If we add an edge between w_1 and w_2, we may remove C, reducing the problem size by three. If we give them the same color as each other, the instance is only colorable if v_3 is also given the same color, so we can collapse v_3 into the supervertex and remove the other two cycle vertices, reducing the problem size by four. Thus the work factor in this case is λ(3, 4) ≈ 1.2207.

If k is odd and larger than three, we form three smaller instances, as shown in Figure 18. In the first, we add an edge between w_1 and w_2, and remove C, reducing the problem size by k. In the second, we collapse w_1 and w_2, add an edge between the new supervertex and w_3, and again remove C, reducing the problem size by k + 1. In the third instance, we collapse w_1, w_2, and w_3. This forces v_1 and v_3 to have the same color as each other, so we also collapse those two vertices into another supervertex and remove v_2, reducing the problem size by four. For k ≥ 7 this gives work factor at most λ(4, 7, 8) ≈ 1.1987. For k = 5 the subproblem with n − 4 vertices contains a triangle of degree-three vertices, and can be further subdivided into two subproblems of n − 7 and n − 8 vertices, giving the claimed work factor. □
Any degree-three vertices remaining after the application of this lemma must form components that are trees. As we now show, we can also limit the size of these trees.
Lemma 21
Let G be a 3-coloring instance containing a connected subset of eight or more degree-three vertices. Then we can replace G by smaller instances with work factor at most λ(2, 5, 6) ≈ 1.3247.
Proof: Suppose the subset forms a k-vertex tree, and let v be a vertex in this tree such that each subtree formed by removing v has at most k/2 vertices. Then, if G is 3-colored, some two of the three neighbors of v must be given the same color, so we can split the instance into three smaller instances, each of which collapses two of the three neighbors into a single supervertex. This collapse reduces the number of vertices by one, and allows the removal of v (since after the collapse v has degree two) and the subtree connected to the third vertex. Thus we achieve work factor λ(a, b, c) where a + b + c = k + 3 and max{a, b, c} ≤ k/2. The worst case is λ(2, 5, 6), achieved when k = 8 and the tree is a path. □
Planting Good Trees
We define a bushy forest to be an unrooted forest within a given instance graph, such that each internal node has degree four or more (for an example, see the top three levels of Figure 21). A bushy forest is maximal if no internal node is adjacent to a vertex outside the forest, no leaf has three or more neighbors outside the forest, and no vertex outside the forest has four or more neighbors outside the forest. If a leaf v does have three or more neighbors outside the forest, we could add those neighbors to the tree containing v, producing a bushy forest with more vertices. Similarly, if a vertex outside the forest has four or more neighbors outside the forest, we could extend the forest by adding another tree consisting of that vertex and its neighbors.
As we now show, a maximal bushy forest must cover at least a constant fraction of a 3-coloring instance graph (Lemma 22): each leaf in F has at most two edges outside F, or F would not be maximal, so |X ∪ Y| ≤ 10m_{F,X∪Y}/3 ≤ 20r/3, where r denotes the number of leaves of F, m_{F,X∪Y} denotes the number of edges between F and X ∪ Y, and the sets X and Y are defined as follows.

Let X denote the set of vertices in G \ (T ∪ F) that are adjacent to vertices in F, where T is a maximal collection of vertex-disjoint K_{1,3} stars (height-one trees) in G \ F. By the maximality of F, each vertex in F is adjacent to at most two vertices in X. Let Y = G \ (X ∪ T ∪ F) denote the remaining vertices. By the maximality of T, each vertex in Y is adjacent to at most two vertices in X ∪ Y, and so must have a neighbor in T. Since G \ F contains no degree-four vertices, each vertex in T must have at most two neighbors in Y. As we now show, we can assign vertices in Y to trees in T, extending each tree in T to a tree of height at most two, in such a way that we do not form any tree with three forks, which would otherwise be the worst case for our algorithm.
Pruning Bad Trees
Lemma 23
Let F, T, X, and Y be as above. Then there exists a forest H of height two trees with three branches each, such that the vertices of H are exactly those of S ∪ Y, such that each tree in H has at most five grandchildren, and such that any tree with four or more grandchildren contains at least one vertex with degree four or more in G.
Proof:
We first show how to form a set H′ of non-disjoint trees in T ∪ Y, and a set of weights on the grandchildren of these trees, such that each tree's grandchildren have weight at most five.

To do this, let each tree in H′ be formed by one of the K_{1,3} trees in T, together with all possible grandchildren in Y that are adjacent to the K_{1,3} leaves. We assign each vertex in Y unit weight, which we divide equally among the trees it belongs to.

Then, suppose for a contradiction that some tree h in H′ has grandchildren with total weight more than five. Then, its grandchildren must form three forks, and at least five of its six grandchildren must have unit weight; i.e., they belong only to tree h. Note that each vertex in Y must have degree three, or we could have added it to the bushy forest, and all its neighbors must be in S ∪ Y, or we could have added it to X. The unit weight grandchildren each have one neighbor in h and two other neighbors in Y. These two other neighbors must be one each from the two other forks in h, for, if to the contrary some unit-weight grandchild v does not have neighbors in both forks, we could have increased the number of trees in T by removing h and adding new trees rooted at v and at the missed fork.
Thus, these five grandchildren each connect to two other grandchildren, and (since no grandchild connects to three grandchildren) the six grandchildren together form a degree-two graph, that is, a union of cycles of degree-three vertices. But after applying Lemma 20 to G, it contains no such cycles. This contradiction implies that the weight of h must be at most five.
Similarly, if the weight of h is more than three, it must have at least one fork, at least one unit-weight grandchild outside that fork, and at least one edge connecting that grandchild to a grandchild within the fork. This edge together with a path in h forms a cycle, which must contain a high degree vertex.
We are not quite done, because the assignment of grandchildren to trees in H′ is fractional and non-disjoint. To form the desired forest H, construct a network flow problem in which the flow source is connected to a node representing each tree t ∈ T by an edge with capacity w(t) = 5 if t contains a high degree vertex and capacity w(t) = 3 otherwise. The node corresponding to tree t is connected by unit-capacity edges to nodes corresponding to the vertices in Y that are adjacent to t, and each of these nodes is connected by a unit-capacity edge to a flow sink. Then the fractional weight system above defines a flow that saturates all edges into the flow sink and is therefore maximum (Figure 19, middle top). But any maximum flow problem with integer edge capacities has an integer solution (Figure 19, middle bottom). This solution must continue to saturate the sink edges, so each vertex in Y will have one unit of flow to some tree t, and no flow to the other adjacent trees. Thus, the flow corresponds to an assignment of vertices in Y to adjacent trees in T such that each tree is assigned at most w(t) vertices. We then simply let each tree in H consist of a tree in T together with its assigned vertices in Y (Figure 19, bottom). □

Figure 20: Coloring a tree with two forks and one stick. If the two fork vertices are colored the same (left), five neighbors (dashed) are restricted to two colors, leaving the two stick vertices for the (3, 2)-CSP instance. If the two forks are colored differently (right), they force the tree root to have the third color, leaving only one vertex for the (3, 2)-CSP instance.
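The integral-flow step can be sketched as follows, assuming the networkx library; the tree capacities w(t) and the adjacency between Y-vertices and trees are hypothetical inputs of our own devising.

```python
import networkx as nx

def assign_vertices_to_trees(tree_capacity, adjacent_trees):
    """tree_capacity: dict tree -> w(t) (3, or 5 if the tree contains a
    high-degree vertex); adjacent_trees: dict y_vertex -> list of adjacent
    trees.  Returns dict y_vertex -> tree if every y can be assigned, else
    None.  Integer capacities give an integral maximum flow."""
    G = nx.DiGraph()
    for t, w in tree_capacity.items():
        G.add_edge("source", ("tree", t), capacity=w)
    for y, trees in adjacent_trees.items():
        for t in trees:
            G.add_edge(("tree", t), ("y", y), capacity=1)
        G.add_edge(("y", y), "sink", capacity=1)
    value, flow = nx.maximum_flow(G, "source", "sink")
    if value < len(adjacent_trees):   # some sink edge unsaturated
        return None
    return {y: t for y, trees in adjacent_trees.items()
            for t in trees if flow[("tree", t)][("y", y)] > 0}

caps = {"t1": 3, "t2": 5}
adj = {"y1": ["t1"], "y2": ["t1", "t2"], "y3": ["t2"]}
print(assign_vertices_to_trees(caps, adj))  # e.g. {'y1': 't1', 'y2': 't1', 'y3': 't2'}
```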
Improved Tree Coloring
We now discuss how to color the trees in the height-two forest H constructed in the previous subsection. As in the discussion at the start of this section, we color some vertices (typically just the root) of each tree in H, leave some vertices (typically the grandchildren) to be part of a later (3, 2)-CSP instance, and average the costs over all the vertices in the tree. However, we average the costs in the following strange way: a cost of Λ is assigned to any vertex with degree four or higher in G, as if it were handled as part of the (3, 2)-CSP instance. The remaining costs are then divided equally among the remaining vertices.
Lemma 24
Let T be a tree with three children and at most five grandchildren. Then T can be colored with cost per degree-three vertex at most (3Λ^3)^{1/7} ≈ 1.3366.
Proof: First, suppose that T has exactly five grandchildren. At least one vertex of T has high degree. Two of the children x and y must be the roots of forks, while the third child z is the root of a stick. We test each of the nine possible colorings of x and y. In six of the cases, x and y are different, forcing the root to have one particular color (Figure 20, right). In these cases the only remaining vertex after translation to a (3, 2)-CSP instance and application of Lemma 2 will be the child of z, so in each such case T accumulates a further cost of Λ. In the three cases in which x and y are colored the same (Figure 20, left), we must also take an additional factor of Λ for z itself. One of these Λ factors goes to a high degree vertex, while the remaining work is split among the remaining eight vertices. The cost per vertex in this case is then at most (6 + 3Λ)^{1/8} ≈ 1.3351.
If T has fewer than five grandchildren, we choose a color for the root of the tree as described at the start of the section. The worst case occurs when the number of grandchildren is either three or four, and is (3Λ^3)^{1/7} ≈ 1.3366. □

Figure 21: Partition of vertices into five groups: p bushy forest roots, q other bushy forest internal nodes, r bushy forest leaves, s vertices adjacent to bushy forest leaves, and t degree-three vertices in height-two forest.

Theorem 2 We can solve the 3-coloring problem on an n-vertex graph in time O(1.3289^n).
Proof: As described in the preceding sections, we find a maximal bushy forest, then cover the remaining vertices by height-two trees. We choose colors for each internal vertex in the bushy forest, and for certain vertices in the height-two trees as described in Lemma 24. Vertices adjacent to these colored vertices are restricted to two colors, while the remaining vertices form a (3, 2)-CSP instance and can be colored using our general (3, 2)-CSP algorithm. Let p denote the number of vertices that are roots in the bushy forest; q denote the number of non-root internal vertices; r denote the number of bushy forest leaves; s denote the number of vertices adjacent to bushy forest leaves; and t denote the number of remaining vertices, which must all be degree-three vertices in the height-two forest (Figure 21). Then the total time for the algorithm is at most 3^p 2^q Λ^s (3Λ^3)^{t/7}.
We now consider which values of these parameters give the worst case for this time bound, subject to the constraints p, q, r, s, t ≥ 0, p + q + r + s + t = n, 4p + 2q ≤ r (from the definition of a bushy forest), 2r ≥ s (from the maximality of the forest), and 20r/3 ≥ s + t (Lemma 22). We ignore the slightly tighter constraint p ≥ 1 since it only complicates the overall solution.
Since the work per vertex in s and t is larger than that in the bushy forests, the time bound is maximized when s and t are as large as possible; that is, when s + t = 20r/3. Further since the work per vertex in s is larger than that in t, s should be as large as possible; that is, s = 2r and t = 14r/3. Increasing p or q and correspondingly decreasing r, s, and t only increases the time bound, since we pay a factor of 2 or more per vertex in p and q and at most Λ for the remaining vertices, so in the worst case the constraint 4p + 2q ≤ r becomes an equality.
It remains only to set the balance between parameters p and q. There are two candidate solutions: one in which q = 0, so r = 4p, and one in which p = 0, so r = 2q. In the former case n = p + 4p + 8p + 56p/3 = 95p/3 and the time bound is 3^p Λ^{8p} (3Λ^3)^{8p/3} = 3^{11p/3} Λ^{16p} ≈ 1.3287^n. In the latter case n = q + 2q + 4q + 28q/3 = 49q/3 and the time bound is 2^q Λ^{4q} (3Λ^3)^{4q/3} = 2^q 3^{4q/3} Λ^{8q} ≈ 1.3289^n, which is the larger of the two bounds and gives the time bound stated in the theorem. □
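Both candidate bounds are easy to check numerically:

```python
Lam = 1.36443                                        # lambda(4, 4, 5, 5)
case_q0 = (3 ** (11 / 3) * Lam ** 16) ** (3 / 95)    # q = 0: n = 95p/3
case_p0 = (2 * 3 ** (4 / 3) * Lam ** 8) ** (3 / 49)  # p = 0: n = 49q/3
print(round(case_q0, 4), round(case_p0, 4))          # 1.3287 1.3289
```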
Edge Coloring
We now describe an algorithm for finding edge colorings of undirected graphs, using at most three colors, if such colorings exist. We can assume without loss of generality that the graph has vertex degree at most three, since a graph with a vertex of degree four or more has no 3-edge-coloring. Then m ≤ 3n/2, so by applying our vertex coloring algorithm to the line graph of G we could achieve time bound 1.3289^{3n/2} ≈ 1.5319^n. Just as we improved our vertex coloring algorithm by performing some reductions in the vertex coloring model before treating the problem as a (3, 2)-CSP instance, we improve this edge coloring bound by performing some reductions in the edge coloring model before treating the problem as a vertex coloring instance.
The main idea is to solve a problem intermediate in generality between 3-edge-coloring and 3-vertex-coloring: 3-edge-coloring with some added constraints that certain pairs of edges should not be the same color.
Lemma 25
Suppose a constrained 3-edge-coloring instance contains an unconstrained edge connecting two degree-three vertices. Then the instance can be replaced by two smaller instances with three fewer edges and two fewer vertices each.
Proof: Let the given edge be (w, x), and let its four neighbors be (u, w), (v, w), (x, y), and (x, z). Then (w, x) can be colored only if its four neighbors together use two of the three colors, which forces these neighbors to be matched into equally colored pairs in one of two ways. Thus, we can replace the instance by two smaller instances: one in which we replace the five edges by the two edges (u, y) and (v, z), and one in which we replace the five edges by the two edges (u, z) and (v, y); in each case we add a constraint between the two new edges. 2
The reduction operation described in Lemma 25 is depicted in Figure 22. We let m_3 denote the number of edges with three neighbors in an unconstrained 3-edge-coloring instance, and m_4 denote the number of edges with four neighbors. Edges with fewer neighbors can be removed at no cost, so we can assume without loss of generality that m = m_3 + m_4.
Lemma 26 In an unconstrained 3-edge-coloring instance, we can find in polynomial time a set S of m_4/3 edges such that Lemma 25 can be applied independently to each edge in S.
Proof: Use a maximum matching algorithm in the graph induced by the edges with four neighbors. In any 3-edge-coloring, the color classes partition the edges into three matchings, so if the graph is 3-edge-colorable the largest class contains at least m_4/3 of these edges, and the resulting maximum matching must therefore contain at least m_4/3 edges. Applying Lemma 25 to an edge in a matching neither constrains any other edge in the matching, nor causes the remaining edges to stop being a matching.
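In code, the selection step might look like this sketch (assuming the networkx library; the function name is ours):

import networkx as nx

def reduction_set(G):
    # edges with four neighbors are exactly those whose endpoints both
    # have degree 3 (each endpoint contributes deg - 1 = 2 neighbors)
    four = [(u, v) for u, v in G.edges()
            if G.degree(u) == 3 and G.degree(v) == 3]
    H = nx.Graph()
    H.add_edges_from(four)
    # a maximum-cardinality matching among the four-neighbor edges
    return nx.max_weight_matching(H, maxcardinality=True)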
cs0006046 | 2953374640 | We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems. 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP; there is also a natural duality transformation from (a,b)-CSP to (b,a)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas. | There has also been some related work on approximate or heuristic 3-coloring algorithms. Blum and Karger @cite_7 show that any 3-chromatic graph can be colored with @math colors in polynomial time. Alon and Kahale @cite_25 describe a technique for coloring random 3-chromatic graphs in expected polynomial time, and Petford and Welsh @cite_9 present a randomized algorithm for 3-coloring graphs which also works well empirically on random graphs although they prove no bounds on its running time. Finally, Vlasie @cite_16 has described a class of instances which are (unlike random 3-chromatic graphs) difficult to color. | {
"abstract": [
"This paper describes a randomised algorithm for the NP-complete problem of 3-colouring the vertices of a graph. The method is based on a model of repulsion in interacting particle systems. Although it seems to work well on most random inputs there is a critical phenomenon apparent reminiscent of critical behaviour in other areas of statistical mechanics.",
"We present a simple generation procedure which turns out to be an effective source of very hard cases for graph 3-colorability. The graphs distributed according to this generation procedure are much denser in very hard cases than previously reported for the same problem size. The coloring cost for these instances is also orders of magnitude bigger. This ability is issued from the fact that the procedure favors-inside the class of graphs with given connectivity and free of 4-cliques-the generation of graphs with relatively few paths of length three (that we call 3-paths). There is a critical value of the ratio between the number of 3-paths and the number of edges, independent of the number of nodes, which separates the graphs having the same connectivity in two regions: one contains almost all graphs free of 4-cliques while the other contains almost no such graphs. The generated very hard cases are near this phase transition, and have a regular structure, witnessed by the low variance in node degrees, as opposite to the random graphs. This regularity in the graph structure seems to confuse the coloring algorithm by inducing an uniform search space, with no clue for the search.",
"Let G3n,p,3 be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G3n,p,3 with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c.",
"We show how the results of Karger, Motwani, and Sudan (1994) and Blum (1994) can be combined in a natural manner to yield a polynomial-time algorithm for O(n314)-coloring any n-node 3-colorable graph. This improves on the previous best bound of O(n14) colors (, 1994)."
],
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_25",
"@cite_7"
],
"mid": [
"2068179205",
"2161779091",
"2079035346",
"1529433769"
]
} | 3-Coloring in Time O(1.3289^n) | There are many known NP-complete problems including such important graph theoretic problems as coloring and independent sets. Unless P=NP, we know that no polynomial time algorithm for these problems can exist, but that does not obviate the need to solve them as efficiently as possible; indeed, the fact that these problems are hard makes efficient algorithms for them especially important.
We are interested in this paper in worst-case analysis of algorithms for 3-coloring, a basic NP-complete problem. We will also discuss other related problems including 3-SAT, 3-edge-coloring and 3-list-coloring.
Our algorithms for these problems are based on the following simple idea: to find a solution to a 3-coloring problem, it is not necessary to choose a color for each vertex (giving something like O(3^n) time). Instead, it suffices to only partially solve the problem by restricting each vertex to two of the three colors. We can then test whether the partial solution can be extended to a complete coloring in polynomial time (e.g. as a 2-SAT instance). This idea applied naively already gives a simple O(1.5^n) time randomized algorithm; we improve this by taking advantage of local structure (if we choose a color for one vertex, this restricts the colors of several neighbors at once). It seems likely that our idea of only searching for a partial solution can be applied to many other combinatorial search problems.
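To make this concrete, here is a minimal sketch of the naive randomized algorithm (our code, assuming the networkx library; it tests each random two-color restriction as a 2-SAT instance via the standard implication-graph criterion):

import random
import networkx as nx

def two_sat(clauses, variables):
    # Implication-graph test: a 2-SAT instance is satisfiable iff no
    # variable shares a strongly connected component with its negation.
    # A literal is a pair (variable, boolean value).
    g = nx.DiGraph()
    for a, b in clauses:                        # clause (a or b)
        g.add_edge((a[0], not a[1]), b)         # not a  implies  b
        g.add_edge((b[0], not b[1]), a)         # not b  implies  a
    for v in variables:
        g.add_node((v, True))
        g.add_node((v, False))
    comp = {}
    for i, scc in enumerate(nx.strongly_connected_components(g)):
        for node in scc:
            comp[node] = i
    return all(comp[(v, True)] != comp[(v, False)] for v in variables)

def check_restriction(graph, allowed):
    # allowed[v] is a two-color list; boolean variable v is True exactly
    # when v takes allowed[v][1]. Forbid equal colors across each edge.
    clauses = []
    for u, w in graph.edges():
        for i, cu in enumerate(allowed[u]):
            for j, cw in enumerate(allowed[w]):
                if cu == cw:
                    clauses.append(((u, i != 1), (w, j != 1)))
    return two_sat(clauses, list(graph.nodes()))

def naive_randomized_3coloring(graph, colors=(0, 1, 2), max_rounds=10**6):
    # Each round keeps a valid coloring alive with probability (2/3)^n,
    # so on 3-colorable inputs O(1.5^n) rounds succeed in expectation.
    for _ in range(max_rounds):
        allowed = {v: random.sample(colors, 2) for v in graph.nodes()}
        if check_restriction(graph, allowed):
            return allowed
    return None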
If we perform local reductions as above in a 3-coloring problem, we eventually reach a situation in which some uncolored vertices are surrounded by partially colored neighbors, and we run out of good local configurations to use. To avoid this problem, we translate our 3-coloring problem to one that also generalizes the other problems listed above: constraint satisfaction (CSP). In an (a, b)-CSP instance, we are given a collection of n variables, each of which can be given one of a different colors. However, certain color combinations are disallowed: we are also given as input a collection of m constraints, each of which forbids one coloring of some b-tuple of variables. Thus 3-satisfiability is exactly (2, 3)-CSP, and 3-coloring is a special case of (3, 2)-CSP in which the constraints disallow adjacent vertices from having the same color.
As we show, (a, b)-CSP instances can be transformed in certain interesting and useful ways: in particular, one can transform (a, b)-CSP into (b, a)-CSP and vice versa, one can transform (a, b)-CSP into (max(a, b), 2)-CSP, and in any (a, 2)-CSP instance one can eliminate variables for which only two colors are allowed, reducing the problem to a smaller one of the same form. Because of this ability to eliminate partially colored variables immediately rather than saving them for a later 2-SAT instance, we can solve a (3, 2)-CSP instance without running out of good local configurations.
Our actual algorithm solves (3, 2)-CSP by applying such reductions only until we reach instances with a certain simplified structure, which can then be solved in polynomial time as an instance of graph matching. We further improve our time bound for graph 3-vertex-coloring by using methods involving network flow to find a large set of good local reductions which we apply before treating the remaining problem as a (3, 2)-CSP instance. And similarly, we solve 3-edge-coloring by using graph matching methods to find a large set of good local reductions which we apply before treating the remaining problem as a 3-vertex-coloring instance.
New Results
We show how to solve (3, 2)-CSP, and with it 3-list-coloring, in time O(1.36443^n), and how to solve 3-coloring in time O(1.3289^n), with corresponding improvements for 3-SAT and 3-edge-coloring. Except where otherwise specified, n denotes the number of vertices in a graph or variables in a SAT or CSP instance, while m denotes the number of edges in a graph, constraints in a CSP instance, or clauses in a SAT problem.
Constraint Satisfaction Problems
We now describe a common generalization of satisfiability and graph coloring as a constraint satisfaction problem (CSP) [16,28]. We are given a collection of n variables, each of which has a list of possible colors allowed. We are also given a collection of m constraints, consisting of a tuple of variables and a color for each variable. A constraint is satisfied by a coloring if not every variable in the tuple is colored in the way specified by the constraint. We would like to choose one color from the allowed list of each variable, in a way not conflicting with any constraints. For instance, 3-satisfiability can easily be expressed in this form. Each variable of the satisfiability problem may be colored (assigned the value) either true (T) or false (F). For each clause like (x_1 ∨ x_2 ∨ ¬x_3), we make a constraint ((v_1, F), (v_2, F), (v_3, T)). Such a constraint is satisfied if and only if at least one of the corresponding clause's terms is true.
In the (a, b)-CSP problem, we restrict our attention to instances in which each variable has at most a possible colors and each constraint involves at most b variables. The CSP instance constructed above from a 3-SAT instance is then a (2, 3)-CSP instance, and in fact 3-SAT is easily seen to be equivalent to (2, 3)-CSP.
In this paper, we will concentrate our attention instead on (3, 2)-CSP and (4, 2)-CSP. We can represent a (d, 2)-CSP instance graphically, by interpreting each variable as a vertex containing up to d possible colors, and by drawing edges connecting incompatible pairs of vertex colors ( Figure 1). Note that this graphical structure is not actually a graph, as the edges connect colors within a vertex rather than the vertices themselves. However, graph 3-colorability and graph 3-list-colorability can be translated directly to a form of (3, 2)-CSP: we keep the original vertices of the graph and their possible colors, and add up to three constraints for each edge of the graph to enforce the condition that the edge's endpoints have different colors ( Figure 2).
Of course, since these problems are all NP-complete, the theory of NP-completeness provides translations from one problem to the other, but the translations above are size-preserving and very simple. We will later describe more complicated translations from 3-coloring and 3-edge-coloring to (3, 2)-CSP in which the input graph is partially colored before treating the remaining graph as an CSP instance, leading to improved time bounds over our pure CSP algorithm.
As we now show, (a, b)-CSP instances can be transformed in certain interesting and useful ways. We first describe a form of duality that transforms (a, b)-CSP instances into (b, a)-CSP instances, exchanging constraints for variables and vice versa.

Lemma 1 Any (a, b)-CSP instance with m constraints can be transformed in polynomial time into an equivalent (b, a)-CSP instance with m variables.

Proof: An assignment of colors to the original (a, b)-CSP instance's variables solves the problem if and only if, for each constraint, there is at least one pair (V, C) in the constraint that does not appear in the coloring. In our transformed problem, we choose one variable per original constraint, with the colors available to the new variable being these pairs (V, C) in the corresponding constraint in the original problem. Choosing such a pair in a coloring of the transformed problem is interpreted as ruling out C as a possible color for V in the original problem. We then add constraints to our transformed problem to ensure that for each V there remains at least one color that is not ruled out: we add one constraint for each a-tuple of colors of new variables (recall that each such color is a pair (V, C)) such that all colors in the a-tuple involve the same original variable V and exhaust all the choices of colors for V. 2

Figure 4: (3, 2)-CSP instance with a two-color variable (left) and reduced instance after application of Lemma 2 (right).
This duality may be easier to understand with a small example. As discussed above, 3-SAT is essentially the same as (2, 3)-CSP, so Lemma 1 can be used to translate 3-SAT to (3, 2)-CSP. Suppose we start with the 3-SAT instance (x_1 ∨ x_2 ∨ ¬x_3) ∧ (¬x_1 ∨ x_3 ∨ x_4) ∧ (x_1 ∨ ¬x_2 ∨ ¬x_4). Then we make a (3, 2)-CSP instance (Figure 3) with three variables v_i, one for each 3-SAT clause. Each variable has three possible colors: (1, 2, 3) for v_1, (1, 3, 4) for v_2, and (1, 2, 4) for v_3. The requirement that value T or F be available to x_1 corresponds to the constraints ((v_1, 1), (v_2, 1)) and ((v_2, 1), (v_3, 1)); we similarly get constraints ((v_1, 2), (v_3, 2)), ((v_1, 3), (v_2, 3)), and ((v_2, 4), (v_3, 4)). One possible coloring of this (3, 2)-CSP instance would be to give v_1 color 1, v_2 color 3, and v_3 color 4; this would give satisfying assignments in which x_1 and x_3 are T, x_4 is F, and x_2 can be either T or F.
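The duality is mechanical enough to code directly; the sketch below (ours) builds the dual (3, 2)-CSP of a 3-SAT instance given as tuples of (variable, is_positive) literals, and on the instance above it reproduces exactly the five constraints just listed:

def sat_to_csp(clauses):
    # One dual variable per clause; its colors are pairs (x, c) meaning
    # "rule out value c for SAT variable x", i.e. the clause's literal
    # on x is the one made true. A positive literal is falsified by
    # x = False, so it contributes the color (x, False).
    colors = {i: [(x, not positive) for (x, positive) in clause]
              for i, clause in enumerate(clauses)}
    constraints = []
    for i in colors:
        for j in colors:
            if i < j:
                for a in colors[i]:
                    for b in colors[j]:
                        # never rule out both values of the same variable
                        if a[0] == b[0] and a[1] != b[1]:
                            constraints.append(((i, a), (j, b)))
    return colors, constraints

example = [((1, True), (2, True), (3, False)),
           ((1, False), (3, True), (4, True)),
           ((1, True), (2, False), (4, False))]
print(sat_to_csp(example)[1])   # the five binary constraints listed above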
We can similarly translate an (a, a)-CSP instance into an (a, 2)-CSP instance in which each variable corresponds to either a constraint or a variable, and each constraint forces the variable colorings to match up with the dual constraint colorings; we omit the details as we do not use this construction in our algorithms.
Simplification of CSP Instances
Before we describe our CSP algorithms, we describe some situations in which the number of variables in an CSP instance may be reduced with little computational effort.
Lemma 2
Let v be a variable in an (a, 2)-CSP instance, such that only two of the a colors are allowed at v. Then we can find an equivalent (a, 2)-CSP instance with one fewer variable.
Proof: Let the two colors allowed at v be R and G. Define conflict(C) to be the set of pairs {(u, A) : ((u, A), (v, C)) is a constraint}. We then add the constraints conflict(R) × conflict(G) to our set of constraints.
Any pair ((u, A), (w, B)) ∈ conflict(R) × conflict(G) does not reduce the space of solutions to the original problem since if both (u, A) and (w, B) were present in a coloring there would be no possible color left for v. Conversely if all such constraints are satisfied, one of the two colors for v must be available. Therefore we can now find a smaller equivalent problem by removing v, as shown in Figure 4. 2
When we apply this variable elimination scheme, the number of constraints can increase, but there can exist only (an)^2 distinct constraints, which in our applications will be a small polynomial.
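As a concrete illustration, Lemma 2's elimination might be coded as follows (a sketch under our own data layout, with constraints stored as frozensets of two (variable, color) pairs):

def eliminate_two_color_variable(v, allowed, constraints):
    # allowed maps variables to color lists; each constraint forbids
    # one combination of two (variable, color) pairs
    R, G = allowed[v]
    conflict = {c: [p for con in constraints if (v, c) in con
                    for p in con if p != (v, c)] for c in (R, G)}
    for a in conflict[R]:
        for b in conflict[G]:
            if a == b:
                # a conflicts with both colors of v, so it can never be
                # used; deleting it may enable further eliminations
                if a[1] in allowed[a[0]]:
                    allowed[a[0]].remove(a[1])
            else:
                constraints.add(frozenset((a, b)))
    # drop every constraint mentioning v, then drop v itself
    for con in [c for c in constraints if any(p[0] == v for p in c)]:
        constraints.discard(con)
    del allowed[v]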
Lemma 3 Let (v, X) and (w, Y) be (variable,color) pairs in an (a, 2)-CSP instance, such that v ≠ w and the only constraints involving these pairs are either of the form ((v, X), (w, Z)) with Y ≠ Z, or ((v, Z), (w, Y)) with X ≠ Z. Then we can find an equivalent (a, 2)-CSP instance with two fewer variables.
Proof: It is safe to choose the colors (v, X) and (w, Y), since these two choices do not conflict with each other nor with anything else in the CSP instance. 2 Lemma 4 Let (v, R) and (v, B) be (variable,color) pairs in an (a, 2)-CSP instance, such that whenever the instance contains a constraint ((v, R), (w, X)) it also contains a constraint ((v, B), (w, X)). Then we can find an equivalent (a, 2)-CSP instance with one fewer variable.
Proof: Any solution involving (v, B) can be changed to one involving (v, R) without violating any additional constraints, so it is safe to remove the option of coloring v with color B. Once we remove this option, v is restricted to two colors, and we can apply Lemma 2. 2 Lemma 5 Let (v, R) be a (variable,color) pair in an (a, 2)-CSP instance that is involved in no constraints. Then we can find an equivalent (a, 2)-CSP instance with one fewer variable. Proof: We may safely assign color R to v and remove it from the instance. 2 Lemma 6 Let (v, R) be a (variable,color) pair in an (a, 2)-CSP instance that is involved in constraints with all three color options of another variable w. Then we can find an equivalent (a, 2)-CSP instance with one fewer variable.
Proof: No coloring of the instance can use (v, R), so we can restrict v to the remaining two colors and apply Lemma 2. 2
We say that a CSP instance in which none of Lemmas 2-6 applies is reduced.
Simple Randomized CSP Algorithm
We first demonstrate the usefulness of Lemma 2 by describing a very simple randomized algorithm for solving (3, 2)-CSP instances in expected time O(2^{n/2} n^{O(1)}).
Lemma 7
If we are given a (3, 2)-CSP instance I, then in random polynomial time we can find an instance I′ with two fewer variables, such that if I′ is solvable then so is I, and if I is solvable then with probability at least 1/2 so is I′.
Proof: If no constraint exists, we can solve the problem immediately. Otherwise choose some constraint ((v, X), (w, Y)). Rename the colors if necessary so that both v and w have available the same three colors R, G, and B, and so that X = Y = R. Restrict the colorings of v and w to two colors each in one of four ways, chosen uniformly at random from the four possible such restrictions in which exactly one of v and w is restricted to colors G and B (Figure 5). Then it can be verified by examination of cases that any valid coloring of the problem remains valid for exactly two of these four restrictions, so with probability 1/2 it continues to be a solution to the restricted problem. Now apply Lemma 2 and eliminate both v and w from the problem. 2
Corollary 1 In expected time O(2^{n/2} n^{O(1)}) we can find a solution to a (3, 2)-CSP instance if one exists.
Proof: We perform the reduction above n/2 times, taking polynomial time and giving probability at least 2^{−n/2} of finding a correct solution. If we repeat this method until a solution is found, the expected number of repetitions is 2^{n/2}. 2
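The reduction step of Lemma 7 is short to express; the sketch below (ours) assumes the colors of v and w have already been renamed so that the shared constrained color R appears first in both lists:

import random

def restrict_pair(v, w, allowed):
    # Choose uniformly among the four restrictions in which exactly one
    # of v, w is limited to {G, B}; a valid coloring of the instance
    # survives this step with probability 1/2.
    R, G, B = allowed[v]
    if random.random() < 0.5:
        allowed[v] = [G, B]
        allowed[w] = random.choice(([R, G], [R, B]))
    else:
        allowed[w] = [G, B]
        allowed[v] = random.choice(([R, G], [R, B]))
    # both variables now have two colors each; Lemma 2 then removes them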
Faster CSP Algorithm
We now describe a more complicated method of solving (3, 2)-CSP instances deterministically with the somewhat better time bound of O(1.36443^n). More generally, our algorithm can actually handle (4, 2)-CSP instances. Any (4, 2)-CSP instance can be transformed into a (3, 2)-CSP instance by expanding each of its four-color variables to two three-color variables, each having two of the original four colors, with a constraint connecting the third color of each new variable (Figure 6). Therefore, the natural definition of the "size" of a (4, 2)-CSP instance is n = n_3 + 2n_4, where n_i denotes the number of variables with i colors. However, we instead define the size to be n = n_3 + (2 − ǫ)n_4, where ǫ ≈ 0.095543 is a constant to be determined more precisely later. In any case, the size of a (3, 2)-CSP instance remains equal to its number of variables, so any bound on the running time of our algorithm in terms of n applies directly to (3, 2)-CSP. The basic idea of our algorithm is to find a set of local configurations that must occur within any (4, 2)-CSP instance I, such that any instance containing such a configuration can be replaced by a small number of smaller instances.

Figure 6: Isolated constraint between two three-color variables (left) can be replaced by a single four-color variable (right).
In more detail, for each configuration we describe a set of smaller instances I_i of size |I| − r_i such that I is solvable if and only if at least one of the instances I_i is solvable. If one particular configuration occurred at each step of the algorithm, this would lead to a recurrence of the form
T(n) = Σ_i T(n − r_i) + poly(n), giving T(n) = O(λ(r_1, r_2, . . .)^n)
for the worst-case running time of our algorithm, where the base λ(r_1, r_2, . . .) of the exponent in the running time is the largest zero of the function f(x) = 1 − Σ_i x^{−r_i} (such a function is not necessarily a polynomial because the r_i will not necessarily be integers). We call this value λ(r_1, r_2, . . .) the work factor of the given local configuration. The overall time bound will be λ^n where λ is the largest work factor among the configurations we have identified. This value λ will depend on our previous choice of ǫ; we will choose ǫ in such a way as to minimize λ.
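Work factors are straightforward to evaluate numerically: since Σ_i x^{−r_i} decreases monotonically for x > 1, bisection finds the zero. A small sketch (ours):

def work_factor(*rs, lo=1.0, hi=4.0, iters=200):
    # lambda(r1, ..., rk): the unique zero above 1 of 1 - sum(x**-ri);
    # the sum decreases from k toward 0, so bisection converges to it
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(mid ** -r for r in rs) > 1:
            lo = mid      # still below the zero
        else:
            hi = mid
    return (lo + hi) / 2

print(work_factor(4, 4, 5, 5))   # ~1.36443, the quantity later called Lambda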
Single Constraints and Multiple Adjacencies
We first consider local configurations in which some (variable,color) pair is incident on only one constraint, or has multiple constraints to the same variable. First, suppose that (variable,color) pair (v, R) is involved in only a single constraint ((v, R), (w, R)). If this is also the only constraint involving (w, R), we call it an isolated constraint. Otherwise, we call it a dangling constraint.
Lemma 8 Let ((v, R), (w, R)) be an isolated constraint in a (4, 2)-CSP instance, and let ǫ ≤ 0.545. Then the instance can be replaced by smaller instances with work factor at most λ(2 − ǫ, 3 − ǫ).
Proof:
If v and w are both three-color variables, then the instance can be colored if and only if we can color the instance formed by replacing them with a single four-color variable, in which the four colors are the remaining choices for v and w other than R (Figure 6). Thus in this case we can reduce the problem size by ǫ, with no additional work.
Otherwise, if there exists a coloring of the given instance, there exists one in which exactly one of v and w is given color R. Suppose first that v has four colors while w has only three. Thus we can reduce the problem to two instances, in one of which (v, R) is used (so v is removed from the problem, and (w, R) is removed as a choice for variable w, allowing us to remove the variable by Lemma 2) and in the other of which (w, R) is used (Figure 7). The first subproblem has its size reduced by 3 − ǫ since both variables are removed, while the second's size is reduced by 2 − ǫ since w is removed while v loses one of its colors but is not removed. Thus the work factor is λ(2 − ǫ, 3 − ǫ). Similarly, if both are four-color variables, the work factor is λ(3 − 2ǫ, 3 − 2ǫ). For the given range of ǫ, this second work factor is smaller than the first. 2 Lemma 9 Let ((v, R), (w, R)) be a dangling constraint in a reduced (4, 2)-CSP instance. Then the instance can be replaced by smaller instances with work factor at most λ(2 − ǫ, 3 − ǫ).
Proof:
The second constraint for (w, R) cannot involve v, or we would be able to apply Lemma 4. We choose either to use color (w, R) or to restrict w to avoid that color (Figure 8). If we use color (w, R), we eliminate choice (v, R) and another choice on the other neighbor of w. If we avoid color (w, R), we may safely use color (v, R). In the worst case, the other neighbor of (w, R) has four colors, so removing one only reduces the problem size by 1 − ǫ. There are four cases depending on the number of colors of v and w: If both have three colors, the work factor is λ(2, 3 − ǫ). If only v has four colors, the work factor is λ(3 − ǫ, 3 − 2ǫ). If only w has four colors, the work factor is λ(2 − ǫ, 4 − 2ǫ). If both have four colors, the work factor is λ(3 − 2ǫ, 4 − 3ǫ). These factors are all dominated by the one in the statement of the lemma. 2

Figure 9: Implication from (v, R) to (w, R), such that (w, R) has two distinct neighbors. Restricting w eliminates v and w (top right) while assigning w color R eliminates three variables (bottom right).

Lemma 10 Suppose a reduced (4, 2)-CSP instance includes two constraints such as ((v, R), (w, B)) and ((v, R), (w, G)) that connect one color of variable v with two colors of variable w, and let ǫ ≤ 0.4. Then the instance can be replaced by smaller instances with work factor at most λ(2 − ǫ, 3 − 2ǫ).
Proof:
We assume that the instance has no color choice with only a single constraint, or we could apply one of Lemmas 8 and 9 to achieve the given work factor.
We say that (v, R) implies (w, R) if there are constraints from (v, R) to every other color choice of w. If the target (w, R) of an implication is not the source of another implication, then using (w, R) eliminates w and at least two other colors, while avoiding (w, R) forces us to also avoid (v, R) (Figure 9). Thus, in this case we achieve work factor either λ(2 − ǫ, 3 − 2ǫ) if w has three color choices, or λ(2 − 2ǫ, 4 − 3ǫ) if it has four.

If the target of every implication is the source of another, then we can find a cycle of colors each of which implies the next in the cycle (Figure 10). If no other constraints involve colors in the cycle (as is true in the figure), we can use them all, reducing the problem by the length of the cycle for free. Otherwise, let (v, R) be a color in the cycle that has an outside constraint. If we use (v, R), we must use the colors in the rest of the cycle, and eliminate the (variable,color) pair outside the cycle constrained by (v, R). If we avoid (v, R), we must also avoid the colors in the rest of the cycle. The maximum work factor for this case is λ(2, 3 − ǫ), and arises when the cycle consists of only two variables, both of which have only three allowed colors.

Finally, if the situation described in the lemma exists without forming any implication, then w must have four color choices, exactly two of which are constrained by (v, R). In this case restricting w to those two choices reduces the size by at least 3 − 2ǫ, while restricting it to the remaining two choices reduces the size by 2 − ǫ, again giving work factor λ(2 − ǫ, 3 − 2ǫ). 2
Highly Constrained Colors
We next consider cases in which choosing one color for a variable eliminates many other choices, or in which adjacent (variable,color) pairs have different numbers of constraints.
Lemma 11
Suppose a reduced (4, 2)-CSP instance includes a color pair (v, R) involved in three or more constraints, where v has four color choices, or a pair (v, R) involved in four or more constraints, where v has three color choices. Then the instance can be replaced by smaller instances with work factor at most λ(1 − ǫ, 5 − 4ǫ).
Proof:
We can assume from Lemma 10 that each constraint connects (v, R) to a different variable. Then if we choose to use color (v, R), we eliminate v and remove a choice from each of its neighbors, either eliminating them or reducing their number of choices from four to three. If we don't use (v, R), we eliminate that color only. So if v has four choices, the work factor is at most λ(1 − ǫ, 5 − 4ǫ), and if it has three choices and four or more constraints, the work factor is at most λ(1, 5 − 4ǫ). 2 Lemma 12 Suppose a reduced (4, 2)-CSP instance includes a (variable,color) pair (v, R) with three constraints, one of which connects it to a variable with four color choices, and let ǫ ≤ 0.3576. Suppose also that none of the previous lemmas applies. Then the instance can be replaced by smaller instances with work factor at most λ(3 − ǫ, 4 − ǫ, 4 − ǫ).
Proof: For convenience suppose that the four-color neighbor is (w, R). We can assume (w, R) has only two constraints, else it would be covered by a previous lemma. Then, if (v, R) and (w, R) do not form a triangle with a third (variable,color) pair ( Figure 11, left), we choose either to use or avoid color (v, R). If we use (v, R), we eliminate v and the three adjacent color choices. If we avoid (v, R), we create a dangling constraint at (w, R), which we have seen in Lemma 9 allows us to further subdivide the instance with work factor λ(3 − ǫ, 3 − 2ǫ) in addition to the elimination of v. Thus, the overall work factor in this case is λ(4 − ǫ, 4 − 2ǫ, 4 − 3ǫ).
On the other hand, suppose we have a triangle of constraints formed by (v, R), (w, R), and a third (variable,color) pair (x, R), as shown in Figure 11, right. Then (v, R) and (x, R) are the only choices constraining (w, R), so if (v, R) and (x, R) are both not chosen, we can safely choose to use color (w, R). Therefore, we make three smaller instances, in each of which we choose to use one of the three choices in the triangle. We can assume from the previous cases that (v, R) has only three choices, and further its third neighbor (other than (w, R) and (x, R)) must also have only three choices or we could apply the previous case of the lemma. In the worst case, (x, R) has only two constraints and x has only three color choices. Therefore, the size of the subproblems formed by choosing (v, R), (w, R), and (x, R) is reduced by at least 4 − ǫ, 4 − ǫ, and 3 − ǫ respectively, leading to a work factor of λ(3 − ǫ, 4 − ǫ, 4 − ǫ). If instead x has four color choices, we get the better work factor λ(4 − 2ǫ, 4 − 2ǫ, 4 − 2ǫ).
For the given range of ǫ, the largest of these work factors is λ(3 − ǫ, 4 − ǫ, 4 − ǫ). 2
Lemma 13
Suppose a reduced (4, 2)-CSP instance includes a (variable,color) pair (v, R) with three constraints, one of which connects it to a variable with two constraints. Suppose also that none of the previous lemmas applies. Then the instance can be replaced by smaller instances with work factor at most max{λ(1 + ǫ, 4), λ(3, 4 − ǫ, 4)}.
Proof: Let (w, R) be the neighbor with two constraints. Note that (since the previous lemma is assumed not to apply) all neighbors of (v, R) have only three color choices. First, suppose (v, R) and (w, R) are not part of a triangle of constraints (Figure 12, top). Then, if we choose to use color (v, R) we eliminate four variables, while if we avoid using it we create a dangling constraint on (w, R) which we further subdivide into two more instances according to Lemma 9. Thus, the work factor in this case is λ(3, 4 − ǫ, 4). Second, suppose that (v, R) and (w, R) are part of a triangle with a third (variable,color) pair (x, R), and that (x, R) has three constraints (Figure 12, bottom left). Then (as in the previous lemma) we may choose to use one of the three choices in the triangle, resulting in work factor λ(3, 4, 4).
Finally, suppose that (v, R), (w, R), and (x, R) form a triangle as above, but that (x, R) has only two constraints ( Figure 12, bottom right). Then if we choose to use (v, R) we eliminate four variables, while if we avoid using it we create an isolated constraint between (w, R) and (x, R). Thus in this case the work factor is λ(1 + ǫ, 4). 2
If none of the above lemmas applies to an instance, then each color choice in the instance must have either two or three constraints, and each neighbor of that choice must have the same number of constraints.
Triply-Constrained Colors
Within this section we assume that we have a (4, 2)-CSP instance in which none of the previous reduction lemmas applies, so any (variable,color) pair must be involved in exactly as many constraints as each of its neighbors.
We now consider the remaining (variable,color) pairs that have three constraints each. Define a three-component to be a subset of such pairs such that any pair in the subset is connected to any other by a path of constraints. We distinguish two such types of components: a small three-component is one that involves only four distinct variables, while a large three-component involves five or more variables. Note that we can assume by the previous lemmas that each variable in a component has only three color choices.

Figure 13: The two possible small three-components with k = 8.
Lemma 14 Let C be a small three-component involving k (variable,color) pairs. Then k must be a multiple of four, and each variable involved in the component has exactly k/4 pairs in C.
Proof: Let v and w be variables in a small component C. Then each (variable,color) pair in C from variable v has exactly one constraint to a distinct (variable,color) pair from variable w, so the numbers of pairs from v equals the number of pairs from w. The assertions that each variable has the same number of pairs, and that the total number of pairs is a multiple of four, then follow. 2
We say that a small three-component is good if k = 4 in the lemma above.
Lemma 15
Let C be a small three-component that is not good. Then the instance can be replaced by smaller instances with work factor at most λ(4, 4, 4).
Proof: A component with k = 12 uses up all color choices for all four variables. Thus we may consider these variables in isolation from the rest of the instance, and either color them all (if possible) or determine that the instance is unsolvable. The remaining small components have k = 8. Such a component may be drawn with the four variables at the corners of a square, and the top, left, and right pairs of edges uncrossed ( Figure 13). If only the center two pairs were crossed, we would actually have two k = 4 components, and if any other two or three of the remaining pairs were crossed, we could reduce the number of crossings in the drawing by swapping the colors at one of the variables. Thus, the only possible small components with k = 8 are the one with all six pairs uncrossed, and the one with only one pair crossed.
The first of these allows all four variables to be colored and removed, while in the other case there exist only three maximal subsets of variables that can be colored. (In the figure, these three sets are formed by the bottom two vertices, and the two sets formed by removing one bottom vertex). We split into instances by choosing to color each of these maximal subsets, eliminating all four variables in the component and giving work factor λ(4, 4, 4). 2
Define a witness to a large three-component to be a set of five (variable,color) pairs with five distinct variables, such that there exist constraints from one pair to three others, and from at least one of those three to the fifth. By convention we use (v, R) to denote the first pair, (w, R), (x, R), and (y, R) to denote the pairs connected by constraints to (v, R), and (z, R) to be the fifth pair in the witness.
Lemma 16 Every large three-component has a witness.
Proof: Choose some arbitrary pair (u, R) as a starting point, and perform a breadth first search in the graph formed by the pairs and constraints in the component. Let (z, R) be the first pair reached by this search where z is not one of the variables adjacent to (u, R), let (v, R) be the grandparent of (z, R) in the breadth first search tree, and let the other three pairs be the neighbors of (v, R).
Then it is easy to see that (v, R) and its neighbors must use the same four variables as (u, R) and its neighbors, while z by definition uses a different variable. 2
Lemma 17
Suppose that a (4, 2)-CSP instance contains a large three-component. Then the instance can be replaced by smaller instances with work factor at most λ(4, 4, 5, 5).
Proof: Let (v, R), (w, R), (x, R), (y, R), and (z, R) be a witness for the component. Then we distinguish subcases according to how many of the neighbors of (z, R) are pairs in the witness.
1. If (z, R) has a constraint with only one pair in the witness, say (w, R), then we choose either to use color (z, R) or to avoid it. If we use it, we eliminate some four variables. If we avoid it, then we cause (w, R) to have only two constraints. If (w, R) is also constrained by one of (x, R) or (y, R), we then have a triangle of constraints ( Figure 14, top left). We can assume without loss of generality that the remaining constraint from this triangle does not connect to a different color of variable z, for if it did we could instead use the same five variables in a different order to get a witness of this form. We then further subdivide into three more instances, in each of which we choose to use one of the pairs in the triangle, as in the second case of Lemma 13. This gives overall work factor λ(4, 4, 5, 5).
On the other hand, if (v, R) and (w, R) are not part of a triangle ( Figure 14, top right), then (after avoiding (z, R)) we can apply the first case of Lemma 13 again achieving the same work factor.
2. If (z, R) has constraints with two pairs in the witness ( Figure 14, bottom left), then choosing to use (z, R) eliminates four variables and causes (v, R) to dangle, while avoiding (z, R) eliminates a single variable. The work factor is thus λ(1, 6, 7).
3. If (z, R) has constraints with all three of (w, R), (x, R), and (y, R) (Figure 14, bottom right), then choosing to use (z, R) also allows us to use (v, R), eliminating five variables. The work factor is λ(1, 5).
The largest of the three work factors arising in these cases is the first one, λ(4, 4, 5, 5). 2
Doubly-Constrained Colors
As in the previous section, we define a two-component to be a subset of (variable,color) pairs such that each has two constraints, and any pair in the subset is connected to any other by a path of constraints. A two-component must have the form of a cycle of pairs, but it is possible for more than one pair in the cycle to involve the same variable. We distinguish two such types of components: a small two-component is one that involves only three pairs, while a large two-component involves four or more pairs.
Lemma 18 Suppose a reduced (4, 2)-CSP instance includes a large two-component, and let ǫ ≤ 0.287. Then the instance can be replaced by smaller instances with work factor at most λ(3, 3, 5).
Proof: We split into subcases:
1. Suppose the cycle passes through five consecutive distinct variables, say (v, R), (w, R), (x, R), (y, R), and (z, R). We can assume that, if any of these five variables has four color choices, then this is true of one of the first four variables. Any coloring that does not use both (v, R) and (y, R) can be made to use at least one of the two colors (w, R) or (x, R) without violating any of the constraints. Therefore, we can divide into three subproblems: one in which we use (w, R), eliminating three variables, one in which we use (x, R), again eliminating three variables, and one in which we use both (v, R) and (y, R), eliminating all five variables. If all five variables have only three color choices, the work factor resulting from this subdivision is λ(3, 3, 5). If some of the variables have four color choices, the work factor is at most λ(3 − ǫ, 4 − ǫ, 5 − 2ǫ), which is smaller for the given range of ǫ.
2. Suppose two colors three constraints apart on a cycle belong to the same variable; for instance, the sequence of colors may be (v, R), (w, R), (x, R), (v, G). Then any coloring can be made to use one of (w, R) or (x, R) without violating any constraints. If we form one subproblem in which we use (w, R) and one in which we use (x, R), we get work factor at most λ(3−ǫ, 3−ǫ) (the worst case occurring when only v has four color choices).
3. Any long cycle which does not contain one of the previous two subcases must pass through the same four variables in the same order one, two, or three times. If it passes through two or three times, all four variables may be safely colored using colors from the cycle, reducing the problem with work factor one. And if the cycle has length exactly four, we may choose one of two ways to use two diagonally opposite colors from the cycle, giving work factor at most λ(4, 4).
For the given range of ǫ, the largest of these work factors is λ(3, 3, 5). 2
Matching
Suppose we have a (4, 2)-CSP instance to which none of the preceding reduction lemmas applies. Then, every constraint must be part of a good three-component or a small two-component. As we now show, this simple structure enables us to solve the remaining problem quickly.
Lemma 19
If we are given a (4, 2)-CSP instance in which every constraint must be part of a good three-component or a small two-component, then we can solve it or determine that it is not solvable in polynomial time.
Proof:
We form a bipartite graph, in which the vertices correspond to the variables and components of the instance. We connect a variable to a component by an edge if there is a (variable,color) pair using that variable and belonging to that component. Since each pair in a good three-component or small two-component is connected by a constraint to every other pair in the component, any solution to the instance can use at most one (variable,color) pair per component. Thus, a solution consists of a set of (variable,color) pairs, covering each variable once, and covering each component at most once. In terms of the bipartite graph constructed above, this is simply a matching. So, we can solve the problem by using a graph maximum matching algorithm to determine the existence of a matching that covers all the variables. 2
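In code, the final step might look like the following sketch (assuming the networkx library; the instance layout and function name are ours):

import networkx as nx

def solvable_by_matching(variables, components):
    # components: component id -> set of (variable, color) pairs in it;
    # a solution uses at most one pair per component and exactly one
    # pair per variable, i.e. a matching covering all variables.
    # (variable ids are assumed distinct from component ids)
    B = nx.Graph()
    B.add_nodes_from(variables, bipartite=0)
    B.add_nodes_from(components, bipartite=1)
    for cid, pairs in components.items():
        for v, _color in pairs:
            B.add_edge(v, cid)
    matching = nx.bipartite.maximum_matching(B, top_nodes=variables)
    return all(v in matching for v in variables)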
Overall CSP Algorithm
This completes the case analysis needed for our result.

Theorem 1 We can solve (4, 2)-CSP instances of size n, and in particular (3, 2)-CSP instances with n variables, in time O(1.36443^n).

Proof: We employ a backtracking (depth first) search in a state space consisting of (3, 2)-CSP instances. At each point in the search, we examine the current state, and attempt to find a set of smaller instances to replace it with, using one of the reduction lemmas above. Such a replacement can always be found in polynomial time by searching for various simple local configurations in the instance. We then recursively search each smaller instance in succession. If we ever reach an instance in which Lemma 19 applies, we perform a matching algorithm to test whether it is solvable. If so, we find a solution and terminate the search. If not, we backtrack to the most recent branching point of the search and continue with the next alternative at that point.
A bound of λ^n on the number of recursive calls in this search algorithm, where λ is the maximum work factor occurring in our reduction lemmas, can be proven by induction on the size of an instance. The work within each call is polynomial and does not add appreciably to the overall time bound.
To determine the maximum work factor, we need to set a value for the parameter ǫ. We used Mathematica to find a numerical value of ǫ minimizing the maximum of the work factors involving ǫ, and found that for ǫ ≈ 0.095543 the work factor is ≈ 1.36443 ≈ λ(4, 4, 5, 5). For ǫ near this value, the two largest work factors are λ(3 − ǫ, 4 − ǫ, 4 − ǫ) (from Lemma 12) and λ(1 + ǫ, 4) (from Lemma 13); the remaining work factors are below 1.36. The true optimum value of ǫ is thus the one for which λ(3 − ǫ, 4 − ǫ, 4 − ǫ) = λ(1 + ǫ, 4).
As we now show, for this optimum ǫ, λ(3 − ǫ, 4 − ǫ, 4 − ǫ) = λ(1 + ǫ, 4) = λ(4, 4, 5, 5), which also arises as a work factor in Lemma 17. Consider subdividing an instance of size n into one of size n − (1 + ǫ) and another of size n − 4, and then further subdividing the first instance into subinstances of size n − (1 + ǫ) − (3 − ǫ), n − (1 + ǫ) − (4 − ǫ), and n − (1 + ǫ) − (4 − ǫ). This four-way subdivision combines subdivisions of type λ(1 + ǫ, 4) and λ(3 − ǫ, 4 − ǫ, 4 − ǫ), so it must have a work factor between those two values. But by assumption those two values equal each other, so they also equal the work factor of the four-way subdivision, which is just λ(4, 4, 5, 5). 2
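This balancing step is easy to reproduce numerically; the sketch below (ours, repeating the bisection helper so that it is self-contained) exploits the facts that λ(3 − ǫ, 4 − ǫ, 4 − ǫ) increases with ǫ while λ(1 + ǫ, 4) decreases:

def work_factor(*rs, lo=1.0, hi=4.0, iters=200):
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(mid ** -r for r in rs) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def balance_epsilon(lo=0.0, hi=0.2, iters=60):
    # the difference below increases with e and crosses zero exactly once
    gap = lambda e: work_factor(3 - e, 4 - e, 4 - e) - work_factor(1 + e, 4)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(balance_epsilon())   # ~0.095543; both work factors then equal ~1.36443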
We use the quantity λ(4, 4, 5, 5) frequently in the remainder of the paper, so we use Λ to denote this value. Theorem 1 immediately gives algorithms for some more well known problems, some of which we improve later. Of these, the least familiar is likely to be list k-coloring: given at each vertex of a graph a list of k colors chosen from some larger set, find a coloring of the whole graph in which each vertex color is chosen from the corresponding list [11].
Corollary 2 We can solve the 3-coloring and 3-list-coloring problems in time O(1.36443^n).
Vertex Coloring
Simply by translating a 3-coloring problem into a (3, 2)-CSP instance, as described above, we can test 3-colorability in time O(Λ^n). We now describe some methods to reduce this time bound even further.
The basic idea is as follows: we find a small set of vertices S ⊂ V(G) with a large set N of neighbors, and choose one of the 3^{|S|} colorings for all vertices in S. For each such coloring, we translate the remaining problem to a (3, 2)-CSP instance. The vertices in S are already colored and need not be included in the (3, 2)-CSP instance. The vertices in N now have a colored neighbor, so for each such vertex at most two possible colors remain; therefore we can eliminate them from the (3, 2)-CSP instance using Lemma 2. The remaining instance has k = |V(G) \ (S ∪ N)| vertices, and can be solved in time O(Λ^k) by Theorem 1. The total time is thus O(3^{|S|} Λ^k). By choosing S appropriately we can make this quantity smaller than O(Λ^n).
We can assume without loss of generality that all vertices in G have degree three or more, since smaller degree vertices can be removed without changing 3-colorability.
As a first cut at our algorithm, choose X to be any set of vertices, no two adjacent or sharing a neighbor, and maximal with this property. Let Y be the set of neighbors of X. We define a rooted forest F covering G as follows: let the roots of F be the vertices in X, let each vertex in Y be connected to its unique neighbor in X, and let each remaining vertex v in G be connected to some neighbor of v in Y. (Such a neighbor must exist or v could have been added to X). We let the set S of vertices to be colored consist of all of X, together with each vertex in Y having three or more children in F.
We classify the subtrees of F rooted at vertices in Y as follows (Figure 15). If a vertex v in Y has no children, we call the subtree rooted at v a club. If v has one child, we call its subtree a stick. If it has two children, we call its subtree a fork. And if it has three or more children, we call its subtree a broom.
We can now compute the total time of our algorithm by multiplying together a factor of 3 for the root of each tree of F, a factor of 2 for each broom root (which is adjacent to the already colored tree root), and a factor of Λ for each leaf in a stick or fork. We define the cost of a vertex in a tree T to be the product p of such factors involving vertices of T, spread evenly among the vertices: if T contains k vertices the cost is p^{1/k}. The total time of the algorithm will then be O(c^n) where c is the maximum cost of any vertex. It is not hard to show that this maximum is achieved in trees consisting of three forks (Figure 16), for which the cost is (3Λ^6)^{1/10} ≈ 1.34488. Therefore we can three-color any graph in time O(1.34488^n). We can improve this somewhat with some more work.
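The maximization over tree shapes can be checked mechanically; the sketch below (ours, with Λ hard-coded and brooms omitted, since their cheaper roots only lower the cost) confirms that three forks give the worst ratio among trees whose root has exactly three subtrees; additional subtrees only dilute the cost further:

LAM = 1.36443   # the (3,2)-CSP work factor Lambda

def tree_cost(forks, sticks, clubs):
    # root colored 3 ways; each stick leaf and each fork leaf costs LAM;
    # the product is spread evenly over the tree's vertices
    vertices = 1 + 3 * forks + 2 * sticks + clubs
    work = 3 * LAM ** (2 * forks + sticks)
    return work ** (1 / vertices)

shapes = [(f, s, c) for f in range(4) for s in range(4) for c in range(4)
          if f + s + c == 3]    # roots have at least three subtrees
print(max(shapes, key=lambda t: tree_cost(*t)))   # (3, 0, 0): ~1.34488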
Cycles of Degree-Three Vertices
We begin by showing that we can assume that our graph has a special structure: the degree-three vertices do not form any cycles. For if they do form a cycle, we can remove it cheaply as follows.
Lemma 20 Let G be a 3-coloring instance in which some cycle consists only of degree-three vertices. Then we can replace G by smaller instances with work factor at most λ(5, 6, 7, 8) ≈ 1.2433.
Proof:
Let the cycle C consist of vertices v_1, v_2, . . ., v_k. We can assume without loss of generality that it has no chords, since otherwise we could find a shorter cycle in G; therefore each v_i has a unique neighbor w_i outside the cycle, although the w_i need not be distinct from each other. Note that, if any w_i and w_{i+1} are adjacent, then G is 3-colorable iff G \ C is; for, if we have a coloring of G \ C, then we can color C by giving v_{i+1} the same color as w_i, and then proceeding to color the remaining cycle vertices in order v_{i+2}, v_{i+3}, . . ., v_k, v_1, v_2, . . ., v_i. Each successive vertex has only two previously-colored neighbors, so there remains at least one free color to use, until we return to v_i. When we color v_i, all three of its neighbors are colored, but two of them have the same color, so again there is a free color. As a consequence, if C has even length, then G is 3-colorable iff G \ C is; for if some w_i and w_{i+1} are given different colors, then the above argument colors C, while if all w_i have the same color, then the other two colors can be used in alternation around C.
The first remaining case is that k = 3 (Figure 17, left). Then we divide the problem into two smaller instances, by forcing w_1 and w_2 to have different colors in one instance (by adding an edge between them, Figure 17 top right) while forcing them to have the same color in the other instance (by collapsing the two vertices into a single supervertex, Figure 17 bottom right). If we add an edge between w_1 and w_2, we may remove C, reducing the problem size by three. If we give them the same color as each other, the instance is only colorable if v_3 is also given the same color, so we can collapse v_3 into the supervertex and remove the other two cycle vertices, reducing the problem size by four. Thus the work factor in this case is λ(3, 4) ≈ 1.2207.
If k is odd and larger than three, we form three smaller instances, as shown in Figure 18. In the first, we add an edge between w_1 and w_2, and remove C, reducing the problem size by k. In the second, we collapse w_1 and w_2, add an edge between the new supervertex and w_3, and again remove C, reducing the problem size by k + 1. In the third instance, we collapse w_1, w_2, and w_3. This forces v_1 and v_3 to have the same color as each other, so we also collapse those two vertices into another supervertex and remove v_2, reducing the problem size by four. For k ≥ 7 this gives work factor at most λ(4, 7, 8) ≈ 1.1987. For k = 5 the subproblem with n − 4 vertices contains a triangle of degree-three vertices, and can be further subdivided into two subproblems of n − 7 and n − 8 vertices, giving the claimed work factor. 2
Any degree-three vertices remaining after the application of this lemma must form components that are trees. As we now show, we can also limit the size of these trees.
Lemma 21
Let G be a 3-coloring instance containing a connected subset of eight or more degree-three vertices. Then we can replace G by smaller instances with work factor at most λ(2, 5, 6) ≈ 1.3247.
Proof: Suppose the subset forms a k-vertex tree, and let v be a vertex in this tree such that each subtree formed by removing v has at most k/2 vertices. Then, if G is 3-colored, some two of the three neighbors of v must be given the same color, so we can split the instance into three smaller instances, each of which collapses two of the three neighbors into a single supervertex. This collapse reduces the number of vertices by one, and allows the removal of v (since after the collapse v has degree two) and the subtree connected to the third vertex. Thus we achieve work factor λ(a, b, c) where a + b + c = k + 5 and max{a, b, c} ≤ k/2 + 2. The worst case is λ(2, 5, 6), achieved when k = 8 and the tree is a path. 2
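The splitting vertex always exists (it is the standard centroid argument); a sketch of finding it (our code, taking an adjacency dict of the tree as input):

def split_vertex(tree):
    # tree: dict mapping each vertex to the list of its tree neighbors;
    # returns a vertex whose removal leaves pieces of size <= k/2
    k = len(tree)
    root = next(iter(tree))
    parent, order, stack = {root: None}, [], [root]
    while stack:                      # iterative DFS records a preorder
        u = stack.pop()
        order.append(u)
        for w in tree[u]:
            if w != parent[u]:
                parent[w] = u
                stack.append(w)
    size = {}
    for u in reversed(order):         # children are sized before parents
        size[u] = 1 + sum(size[w] for w in tree[u] if parent.get(w) == u)
    for u in tree:
        parts = [size[w] for w in tree[u] if parent.get(w) == u]
        parts.append(k - size[u])     # the piece on u's parent side
        if max(parts) <= k / 2:
            return u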
Planting Good Trees
We define a bushy forest to be an unrooted forest within a given instance graph, such that each internal node has degree four or more (for an example, see the top three levels of Figure 21). A bushy forest is maximal if no internal node is adjacent to a vertex outside the forest, no leaf has three or more neighbors outside the forest, and no vertex outside the forest has four or more neighbors outside the forest. If a leaf v does have three or more neighbors outside the forest, we could add those neighbors to the tree containing v, producing a bushy forest with more vertices. Similarly, if a vertex outside the forest has four or more neighbors outside the forest, we could extend the forest by adding another tree consisting of that vertex and its neighbors.
As we now show, a maximal bushy forest must cover at least a constant fraction of a 3-coloring instance graph.

Lemma 22 Let F be a maximal bushy forest with r leaves in a 3-coloring instance graph G as above, and let Z denote the set of vertices not in F. Then |Z| ≤ 20r/3.

Proof: Let m_{F,Z} denote the number of edges between F and Z. A counting argument using the minimum degree of G and the maximality of F shows that |Z| ≤ 10m_{F,Z}/3. However, each leaf in F has at most two edges outside F, or F would not be maximal, so |Z| ≤ 10m_{F,Z}/3 ≤ 20r/3. 2

Let T be a maximal collection of vertex-disjoint K_{1,3} subgraphs (height-one trees with three leaves each) of G \ F, and let S denote the set of vertices of the trees in T. Let X denote the set of vertices in G \ (T ∪ F) that are adjacent to vertices in F. By the maximality of F, each vertex in F is adjacent to at most two vertices in X. Let Y = G \ (X ∪ T ∪ F) denote the remaining vertices. By the maximality of T, each vertex in Y is adjacent to at most two vertices in X ∪ Y, and so must have a neighbor in T. Since G \ F contains no degree-four vertices, each vertex in T must have at most two neighbors in Y. As we now show, we can assign vertices in Y to trees in T, extending each tree in T to a tree of height at most two, in such a way that we do not form any tree with three forks, which would otherwise be the worst case for our algorithm.
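The bushy forest itself can be grown greedily; a sketch (ours, over an adjacency dict, with the bookkeeping of the actual tree edges omitted):

def maximal_bushy_forest(G):
    # G: dict vertex -> iterable of neighbors. Maintains the vertex set
    # of the forest and the set of its internal nodes.
    forest, internal = set(), set()
    def outside(v):
        return [w for w in G[v] if w not in forest]
    changed = True
    while changed:
        changed = False
        for v in list(G):
            out = outside(v)
            if v not in forest and len(out) >= 4:
                forest.update([v] + out)      # start a new tree at v
                internal.add(v)
                changed = True
            elif v in internal and out:
                forest.update(out)            # internal nodes absorb all
                changed = True                # remaining outside neighbors
            elif v in forest and v not in internal and len(out) >= 3:
                forest.update(out)            # a leaf with three outside
                internal.add(v)               # neighbors becomes internal
                changed = True
    return forest, internal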
Pruning Bad Trees
Lemma 23
Let F, T, X, and Y be as above. Then there exists a forest H of height two trees with three branches each, such that the vertices of H are exactly those of S ∪ Y, such that each tree in H has at most five grandchildren, and such that any tree with four or more grandchildren contains at least one vertex with degree four or more in G.
Proof:
We first show how to form a set H ′ of non-disjoint trees in T ∪ Y, and a set of weights on the grandchildren of these trees, such that each tree's grandchildren have weight at most five.
To do this, let each tree in H′ be formed by one of the K_{1,3} trees in T, together with all possible grandchildren in Y that are adjacent to the K_{1,3} leaves. We assign each vertex in Y unit weight, which we divide equally among the trees it belongs to.
Then, suppose for a contradiction that some tree h in H ′ has grandchildren with total weight more than five. Then, its grandchildren must form three forks, and at least five of its six grandchildren must have unit weight; i.e., they belong only to tree h. Note that each vertex in Y must have degree three, or we could have added it to the bushy forest, and all its neighbors must be in S ∪ Y, or we could have added it to X. The unit weight grandchildren each have one neighbor in h and two other neighbors in Y. These two other neighbors must be one each from the two other forks in h, for, if to the contrary some unit-weight grandchild v does not have neighbors in both forks, we could have increased the number of trees in T by removing h and adding new trees rooted at v and at the missed fork.
Thus, these five grandchildren each connect to two other grandchildren, and (since no grandchild connects to three grandchildren) the six grandchildren together form a degree-two graph, that is, a union of cycles of degree-three vertices. But after applying Lemma 20 to G, it contains no such cycles. This contradiction implies that the weight of h must be at most five.
Similarly, if the weight of h is more than three, it must have at least one fork, at least one unit-weight grandchild outside that fork, and at least one edge connecting that grandchild to a grandchild within the fork. This edge together with a path in h forms a cycle, which must contain a high degree vertex.
We are not quite done, because the assignment of grandchildren to trees in H′ is fractional and non-disjoint. To form the desired forest H, construct a network flow problem in which the flow source is connected to a node representing each tree t ∈ T by an edge with capacity w(t) = 5 if t contains a high degree vertex and capacity w(t) = 3 otherwise. The node corresponding to tree t is connected by unit-capacity edges to nodes corresponding to the vertices in Y that are adjacent to t, and each of these nodes is connected by a unit-capacity edge to a flow sink. Then the fractional weight system above defines a flow that saturates all edges into the flow sink and is therefore maximum (Figure 19, middle top). But any maximum flow problem with integer edge capacities has an integer solution (Figure 19, middle bottom). This solution must continue to saturate the sink edges, so each vertex in Y will have one unit of flow to some tree t, and no flow to the other adjacent trees. Thus, the flow corresponds to an assignment of vertices in Y to adjacent trees in T such that each tree is assigned at most w(t) vertices. We then simply let each tree in H consist of a tree in T together with its assigned vertices in Y (Figure 19, bottom). 2

Figure 20: Coloring a tree with two forks and one stick. If the two fork vertices are colored the same (left), five neighbors (dashed) are restricted to two colors, leaving the two stick vertices for the (3, 2)-CSP instance. If the two forks are colored differently (right), they force the tree root to have the third color, leaving only one vertex for the (3, 2)-CSP instance.
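The flow construction is direct to express with an off-the-shelf max-flow routine; the sketch below (ours, assuming networkx) returns the integral assignment:

import networkx as nx

def assign_grandchildren(trees, high_degree_trees, adjacent):
    # adjacent[t]: the Y-vertices adjacent to tree t. Capacities follow
    # Lemma 23: w(t) = 5 if t contains a high-degree vertex, else 3.
    g = nx.DiGraph()
    for t in trees:
        g.add_edge('source', ('tree', t),
                   capacity=5 if t in high_degree_trees else 3)
        for y in adjacent[t]:
            g.add_edge(('tree', t), ('y', y), capacity=1)
            g.add_edge(('y', y), 'sink', capacity=1)
    value, flow = nx.maximum_flow(g, 'source', 'sink')
    # an integral maximum flow saturating the sink edges yields the
    # assignment: each y receives its unit of flow from exactly one tree
    return {y: t for t in trees for y in adjacent[t]
            if flow[('tree', t)][('y', y)] > 0}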
Improved Tree Coloring
We now discuss how to color the trees in the height-two forest H constructed in the previous subsection. As in the discussion at the start of this section, we color some vertices (typically just the root) of each tree in H, leave some vertices (typically the grandchildren) to be part of a later (3, 2)-CSP instance, and average the costs over all the vertices in the tree. However, we average the costs in the following strange way: a cost of Λ is assigned to any vertex with degree four or higher in G, as if it were handled as part of the (3, 2)-CSP instance. The remaining costs are then divided equally among the remaining vertices.
Lemma 24
Let T be a tree with three children and at most five grandchildren. Then T can be colored with cost per degree-three vertex at most (3Λ^3)^{1/7} ≈ 1.3366.
Proof: First, suppose that T has exactly five grandchildren. At least one vertex of T has high degree (since T then has weight more than three). Two of the children x and y must be the roots of forks, while the third child z is the root of a stick. We test each of the nine possible colorings of x and y. In six of the cases, x and y are different, forcing the root to have one particular color (Figure 20, right). In these cases the only remaining vertex after translation to a (3, 2)-CSP instance and application of Lemma 2 will be the child of z, so in each such case T accumulates a further cost of Λ. In the three cases in which x and y are colored the same (Figure 20, left), we must also take an additional factor of Λ for z itself. One of these Λ factors goes to a high-degree vertex, while the remaining work is split among the remaining eight vertices. The cost per vertex in this case is then at most (6 + 3Λ)^{1/8} ≈ 1.3351.
If T has fewer than five grandchildren, we choose a color for the root of the tree as described at the start of the section. The worst case occurs when the number of grandchildren is either three or four, and is (3Λ^3)^{1/7} ≈ 1.3366. □

Figure 21: Partition of vertices into five groups: p bushy forest roots, q other bushy forest internal nodes, r bushy forest leaves, s vertices adjacent to bushy forest leaves, and t degree-three vertices in the height-two forest.

Theorem
We can 3-color any 3-colorable graph on n vertices in time O(1.3289^n).
Proof: As described in the preceding sections, we find a maximal bushy forest, then cover the remaining vertices by height-two trees. We choose colors for each internal vertex in the bushy forest, and for certain vertices in the height-two trees as described in Lemma 24. Vertices adjacent to these colored vertices are restricted to two colors, while the remaining vertices form a (3, 2)-CSP instance and can be colored using our general (3, 2)-CSP algorithm. Let p denote the number of vertices that are roots in the bushy forest; q denote the number of non-root internal vertices; r denote the number of bushy forest leaves; s denote the number of vertices adjacent to bushy forest leaves; and t denote the number of remaining vertices, which must all be degree-three vertices in the height-two forest (Figure 21). Then the total time for the algorithm is at most 3^p 2^q Λ^s (3Λ^3)^{t/7}.
We now consider which values of these parameters give the worst case for this time bound, subject to the constraints p, q, r, s, t ≥ 0, p + q + r + s + t = n, 4p + 2q ≤ r (from the definition of a bushy forest), 2r ≥ s (from the maximality of the forest), and 20r/3 ≥ s + t (Lemma 22). We ignore the slightly tighter constraint p ≥ 1 since it only complicates the overall solution.
Since the work per vertex in s and t is larger than that in the bushy forests, the time bound is maximized when s and t are as large as possible; that is, when s + t = 20r/3. Further since the work per vertex in s is larger than that in t, s should be as large as possible; that is, s = 2r and t = 14r/3. Increasing p or q and correspondingly decreasing r, s, and t only increases the time bound, since we pay a factor of 2 or more per vertex in p and q and at most Λ for the remaining vertices, so in the worst case the constraint 4p + 2q ≤ r becomes an equality.
It remains only to set the balance between parameters p and q. There are two candidate solutions: one in which q = 0, so r = 4p, and one in which p = 0, so r = 2q. In the former case n = p + 4p + 8p + 56p/3 = 95p/3 and the time bound is 3^p Λ^{8p} (3Λ^3)^{8p/3} = 3^{11p/3} Λ^{16p} ≈ 1.3287^n. In the latter case n = q + 2q + 4q + 28q/3 = 49q/3 and the time bound is 2^q Λ^{4q} (3Λ^3)^{4q/3} = 2^q 3^{4q/3} Λ^{8q} ≈ 1.3289^n, which is the larger of the two and gives the overall bound. □
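As a numeric sanity check, the quantities above can be evaluated directly. The sketch below assumes Λ ≈ 1.36443 for the (3, 2)-CSP work factor, the value consistent with the numerical approximations quoted above:

# Sketch: evaluate the per-vertex cost bounds quoted above,
# assuming Lambda ~ 1.36443 (the (3,2)-CSP work factor).
L = 1.36443

lemma24_five = (6 + 3 * L) ** (1 / 8)             # ~1.3351
lemma24_few = (3 * L ** 3) ** (1 / 7)             # ~1.3366
former = (3 ** (11 / 3) * L ** 16) ** (3 / 95)    # q = 0 case, ~1.3287
latter = (2 * 3 ** (4 / 3) * L ** 8) ** (3 / 49)  # p = 0 case, ~1.3289
print(lemma24_five, lemma24_few, former, latter)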
Edge Coloring
We now describe an algorithm for finding edge colorings of undirected graphs, using at most three colors, if such colorings exist. We can assume without loss of generality that the graph has vertex degree at most three. Then m ≤ 3n/2, so by applying our vertex coloring algorithm to the line graph of G we could achieve time bound 1.3289^{3n/2} ≈ 1.5319^n. Just as we improved our vertex coloring algorithm by performing some reductions in the vertex coloring model before treating the problem as a (3, 2)-CSP instance, we improve this edge coloring bound by performing some reductions in the edge coloring model before treating the problem as a vertex coloring instance.
The main idea is to solve a problem intermediate in generality between 3-edge-coloring and 3-vertex-coloring: 3-edge-coloring with some added constraints that certain pairs of edges should not be the same color.
Lemma 25
Suppose a constrained 3-edge-coloring instance contains an unconstrained edge connecting two degree-three vertices. Then the instance can be replaced by two smaller instances with three fewer edges and two fewer vertices each.
Proof: Let the given edge be (w, x), and let its four neighbors be (u, w), (v, w), (x, y), and (x, z). Then (w, x) can be colored only if its four neighbors together use two of the three colors, which forces these neighbors to be matched into equally colored pairs in one of two ways. Thus, we can replace the instance by two smaller instances: one in which we replace the five edges by the two edges (u, y) and (v, z), and one in which we replace the five edges by the two edges (u, z) and (v, y); in each case we add a constraint between the two new edges. □
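A minimal sketch of this branching step, over an assumed data model (an instance is a set of edges stored as ordered tuples, plus a set of constrained edge pairs):

# Sketch of the Lemma 25 branching step. Given the unconstrained
# edge (w, x) with neighbors (u, w), (v, w), (x, y), (x, z), return
# the two smaller instances described in the proof.
def reduce_unconstrained_edge(edges, constraints, w, x, u, v, y, z):
    removed = {(u, w), (v, w), (w, x), (x, y), (x, z)}
    base = edges - removed
    inst1 = (base | {(u, y), (v, z)}, constraints | {((u, y), (v, z))})
    inst2 = (base | {(u, z), (v, y)}, constraints | {((u, z), (v, y))})
    return inst1, inst2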
The reduction operation described in Lemma 25 is depicted in Figure 22. We let m_3 denote the number of edges with three neighbors in an unconstrained 3-edge-coloring instance, and m_4 denote the number of edges with four neighbors. Edges with fewer neighbors can be removed at no cost, so we can assume without loss of generality that m = m_3 + m_4.
Lemma 26
In an unconstrained 3-edge-coloring instance, we can find in polynomial time a set S of m_4/3 edges such that Lemma 25 can be applied independently to each edge in S.
Proof: Use a maximum matching algorithm in the graph induced by the edges with four neighbors. If the graph is 3-colorable, the resulting matching must contain at least m_4/3 edges. Applying Lemma 25 to an edge in a matching neither constrains any other edge in the matching, nor causes the remaining edges to stop being a matching.
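A sketch of this selection using networkx; `has_four_neighbors` is a hypothetical helper (not from the paper) that tests whether an edge of G has four neighboring edges:

# Sketch of Lemma 26: a maximum-cardinality matching among the
# four-neighbor edges gives a set of independently reducible edges.
import networkx as nx

def independent_reducible_edges(G, has_four_neighbors):
    H = nx.Graph()
    H.add_edges_from(e for e in G.edges() if has_four_neighbors(G, e))
    # Matched edges share no endpoints, so Lemma 25 applies to each
    # of them independently.
    return nx.max_weight_matching(H, maxcardinality=True)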
cs0007005 | 2951188190 | In this paper, we present a new methodology for developing systematic and automatic test generation algorithms for multipoint protocols. These algorithms attempt to synthesize network topologies and sequences of events that stress the protocol's correctness or performance. This problem can be viewed as a domain-specific search problem that suffers from the state space explosion problem. One goal of this work is to circumvent the state space explosion problem utilizing knowledge of network and fault modeling, and multipoint protocols. The two approaches investigated in this study are based on forward and backward search techniques. We use an extended finite state machine (FSM) model of the protocol. The first algorithm uses forward search to perform reduced reachability analysis. Using domain-specific information for multicast routing over LANs, the algorithm complexity is reduced from exponential to polynomial in the number of routers. This approach, however, does not fully automate topology synthesis. The second algorithm, the fault-oriented test generation, uses backward search for topology synthesis and uses backtracking to generate event sequences instead of searching forward from initial states. Using these algorithms, we have conducted studies for correctness of the multicast routing protocol PIM. We propose to extend these algorithms to study end-to-end multipoint protocols using a virtual LAN that represents delays of the underlying multicast distribution tree. | Several attempts to apply formal verification to network protocols have been made. Assertional proof techniques were used to prove distance vector routing @cite_13, path vector routing @cite_10, and route diffusion algorithms @cite_22 @cite_44, and @cite_24 using communicating finite state machines. An example point-to-point mobile application was proved using assertional reasoning in @cite_40 using UNITY @cite_42. Axiomatic reasoning was used in proving a simple transmission protocol in @cite_38. Algebraic systems based on the calculus of communicating systems (CCS) @cite_20 have been used to prove CSMA/CD @cite_19. Formal verification has been applied to TCP and T/TCP in @cite_39.
"abstract": [
"In order for the nodes of a distributed computer network to communicate, each node must have information about the network's topology. Since nodes and links sometimes crash, a scheme is needed to update this information. One of the major constraints on such a topology information scheme is that it may not involve a central controller. The Topology Information Protocol that was implemented on the MERIT Computer Network is presented and explained; this protocol is quite general and could be implemented on any computer network. It is based on Baran's “Hot Potato Heuristic Routing Doctrine.” A correctness proof of this Topology Information Protocol is also presented.",
"Aho, Ullman, and Yannakakis have proposed a set of protocols that ensure reliable transmission of data across an error-prone channel. They have obtained lower bounds on the complexity required of the protocols to assure reliability for different classes of errors. They specify these protocols with finite-state machines. Although the protocol machines have only a small number of states, they are nontrivial to prove correct. In this paper we present proofs of one of these protocols using the finite-state-machine approach and the abstract-program approach. We also show that the abstract-program approach gives special insight into the operation of the protocol.",
"A new distributed algorithm is presented for dynamically determining weighted shortest paths used for message routing in computer networks. The major features of the algorithm are that the paths defined do not form transient loops when weights change and the number of steps required to find new shortest paths when network links fail is less than for previous algorithms. Specifically, the worst case recovery time is proportional to the largest number of hops h in any of the weighted shortest paths. For previous loop-free distributed algorithms this recovery time is proportional to h2.",
"My goal is to propose a set of questions that I think are important. J. Misra and I are working on these questions.",
"",
"",
"An algorithm for constructing and adaptively maintaining routing tables in communication networks is presented. The algorithm can be employed in message as well as circuit switching networks, uses distributed computation, provides routing tables that are loop-free for each destination at all times, adapts to changes in network flows, and is completely failsafe. The latter means that after arbitrary failures and additions, the network recovers in finite time in the sense of providing routing paths between all physically connected nodes. For each destination, the routes are independently updated by an update cycle triggered by the destination.",
"Mobile computing represents a major point of departure from the traditional distributed computing paradigm. The potentially very large number of independent computing units, a decoupled computing style, frequent disconnections, continuous position changes, and the location-dependent nature of the behavior and communication patterns of the individual components present designers with unprecedented challenges in the areas of modularity and dependability. The paper describes two ideas regarding a modular approach to specifying and reasoning about mobile computing. The novelty of our approach rests with the notion of allowing transient interactions among programs which move in space. We restrict our concern to pairwise interactions involving variable sharing and action synchronization. The motivation behind the transient nature of the interactions comes from the fact that components can communicate with each other only when they are within a certain range. The notation we propose is meant to simplify the writing of mobile applications and is a direct extension of that used in UNITY. Reasoning about mobile computations relies on the UNITY proof logic.",
"",
"This paper deals with a distributed adaptive routing strategy which is very simple and effective, and is free of a ping-pong-type looping in the presence of network failures. Using the number of time intervals required for a node to recover from a network failure as the measure of network's adaptability, performance of this strategy and the ARPANET's previous routing strategy (APRS) is comparatively analyzed without resorting to simulation. Formulas of the exact number of time intervals required for failure recovery under both strategies are also derived. We show that i)the performance of the strategy is always better than, or at least as good as, that of APRS, and ii) network topology has significant effects on the performance of both strategies.",
"0. Introduction.- 1. Experimenting on nondeterministic machines.- 2. Synchronization.- 3. A case study in synchronization and proof techniques.- 4. Case studies in value-communication.- 5. Syntax and semantics of CCS.- 6. Communication trees (CTs) as a model of CCS.- 7. Observation equivalence and its properties.- 8. Some proofs about data structures.- 9. Translation into CCS.- 10. Determinancy and confluence.- 11. Conclusion."
],
"cite_N": [
"@cite_13",
"@cite_38",
"@cite_22",
"@cite_42",
"@cite_39",
"@cite_44",
"@cite_24",
"@cite_40",
"@cite_19",
"@cite_10",
"@cite_20"
],
"mid": [
"2029187109",
"2167056631",
"1985988333",
"2157118812",
"",
"",
"2165681304",
"2051922524",
"91133127",
"2056643723",
"2137865376"
]
} | Systematic Testing of Multicast Routing Protocols: Analysis of Forward and Backward Search Techniques | Network protocols are becoming more complex with the exponential growth of the Internet, and the introduction of new services at the network, transport and application levels. In particular, the advent of IP multicast and the MBone enabled applications ranging from multi-player games to distance learning and teleconferencing, among others. To date, little effort has been exerted to formulate systematic methods and tools that aid in the design and characterization of these protocols.
In addition, researchers are observing new and obscure, yet all too frequent, failure modes over the internets [1] [2]. Such failures are becoming more frequent, mainly due to the increased heterogeneity of technologies, interconnects and configuration of various network components. Due to the synergy and interaction between different network protocols and components, errors at one layer may lead to failures at other layers of the protocol stack. Furthermore, degraded performance of low level network protocols may have ripple effects on end-to-end protocols and applications.
Network protocol errors are often detected by application failure or performance degradation. Such errors are hardest to diagnose when the behavior is unexpected or unfamiliar. Even if a protocol is proven to be correct in isolation, its behavior may be unpredictable in an operational network, where interaction with other protocols and the presence of failures may affect its operation. Protocol errors may be very costly to repair if discovered after deployment. Hence, endeavors should be made to capture protocol flaws early in the design cycle before deployment. To provide an effective solution to the above problems, we present a framework for the systematic design and testing of multicast protocols. The framework integrates test generation algorithms with simulation and implementation. We propose a suite of practical methods and tools for automatic test generation for network protocols.
Many researchers [3] [4] have developed protocol verification methods to ensure certain properties of protocols, like freedom from deadlocks or unspecified receptions. Much of this work, however, was based on assumptions about network conditions that may not always hold in today's Internet, and hence may become invalid. Other approaches, such as reachability analysis, attempt to check the protocol state space, and generally suffer from the 'state explosion' problem. This problem is exacerbated with the increased complexity of the protocol. Much of the previous work on protocol verification targets correctness. We target protocol performance and robustness in the presence of network failures. In addition, we provide new methods for studying multicast protocols and topology synthesis that previous works do not provide.
We investigate two approaches for test generation. The first approach, called the fault-independent test generation, uses a forward search algorithm to explore a subset of the protocol state space to generate the test events automatically. State and fault equivalence relations are used in this approach to reduce the state space. The second approach is called the fault-oriented test generation, and uses a mix of forward and backward search techniques to synthesize test events and topologies automatically.
We have applied these methods to multicast routing. Our case studies revealed several design errors, for which we have formulated solutions with the aid of this systematic process.
We further suggest an extension of the model to include end-to-end delays using the notion of virtual LAN. Such extension, in conjunction with the fault-oriented test generation, can be used for performance evaluation of end-to-end multipoint protocols.
The rest of this document is organized as follows. Section VI presents related work in protocol verification, conformance testing and VLSI chip testing. Section II introduces the proposed framework and system definition. Sections III, IV, V present the search-based approaches and problem complexity, the fault-independent test generation and the fault-oriented test generation, respectively. Section VII concludes. (We include appendices for completeness.)
• Multicast Routing Overview
Multicast protocols are the class of protocols that support group communication. Multicast routing protocols include DVMRP [5], MOSPF [6], PIM-DM [7], CBT [8], and PIM-SM [9]. Multicast routing aims to deliver packets efficiently to group members by establishing distribution trees. Figure 1 shows a very simple example of a source S sending to a group of receivers Ri. Multicast distribution trees may be established by either broadcast-and-prune or explicit join protocols. In the former, such as DVMRP or PIM-DM, a multicast packet is broadcast to all leaf subnetworks. Subnetworks with no local members for the group send prune messages towards the source(s) of the packets to stop further broadcasts. Link state protocols, such as MOSPF, broadcast membership information to all nodes. In contrast, in explicit join protocols, such as CBT or PIM-SM, routers send hop-by-hop join messages for the groups and sources for which they have local members. We conduct robustness case studies for PIM-DM. We are particularly interested in multicast routing protocols, because they are vulnerable to failure modes, such as selective loss, that have not been traditionally studied in the area of protocol design. For most multicast protocols, when routers are connected via a multi-access network (or LAN; we use the term LAN to designate a connected network with respect to IP multicast, including shared media such as Ethernet or FDDI, hubs, switches, etc.), hop-by-hop messages are multicast on the LAN, and may experience selective loss; i.e. they may be received by some nodes but not others. The likelihood of selective loss is increased by the fact that LANs often contain hubs, bridges, switches, and other network devices. Selective loss may affect protocol robustness. Similarly, end-to-end multicast protocols and applications must deal with situations of selective loss. This differentiates these applications most clearly from their unicast counterparts, and raises interesting robustness questions. Our case studies illustrate why selective loss should be considered when evaluating protocol robustness. This lesson is likely to extend to the design of higher layer protocols that operate on top of multicast and can have similar selective loss.
II. Framework Overview
Protocols may be evaluated for correctness or performance. We refer to correctness studies that are conducted in the absence of network failures as verification. In contrast, robustness studies consider the presence of network failures (such as packet loss or crashes). In general, the robustness of a protocol is its ability to respond correctly in the face of network component failures and packet loss. This work presents a methodology for studying and evaluating multicast protocols, specifically addressing robustness and performance issues. We propose a framework that integrates automatic test generation as a basic component for protocol design, along with protocol modeling, simulation and implementation testing. The major contribution of this work lies in developing new methods for generating stress test scenarios that target robustness and correctness violation, or worst case performance.
Instead of studying protocol behavior in isolation, we incorporate the protocol model with network dynamics and failures in order to reveal more realistic behavior of protocols in operation.
This section presents an overview of the framework and its constituent components. The model used to represent the protocol and the system is presented along with definitions of the terms used.
Our framework integrates test generation with simulation and implementation code. It is used for Systematic Testing of Robustness by Evaluation of Synthesized Scenarios (STRESS). As the name implies, systematic methods for scenario synthesis are a core part of the framework. We use the term scenarios to denote the test-suite consisting of the topology and events.
The input to this framework is the specification of a protocol, and a definition of its design requirements, in terms of correctness or performance. Usually robustness is defined in terms of network dynamics or fault models. A fault model represents various component faults; such as packet loss, corruption, re-ordering, or machine crashes. The desired output is a set of test-suites that stress the protocol mechanisms according to the robustness criteria.
As shown in Figure 2, the STRESS framework includes test generation, detailed simulation driven by the synthesized tests, and protocol implementation driven through an emulation interface to the simulator. In this work we focus on the test generation (TG) component.

Figure 2: The STRESS framework.
A. Test Generation
The core contribution of our work lies in the development of systematic test generation algorithms for protocol robustness. We investigate two such algorithms, each using a different approach.
In general test generation may be random or deterministic. Generation of random tests is simple but a large set of tests is needed to achieve a high measure of error coverage. Deterministic test generation (TG), on the other hand, produces tests based on a model of the protocol. The knowledge built into the protocol model enables the production of shorter and higher-quality test sequences. Deterministic TG can be: a) fault-independent, or b) fault-oriented. Fault-independent TG works without targeting individual faults as defined by the fault model. Such an approach may employ a forward search technique to inspect the protocol state space (or an equivalent subset thereof), after integrating the fault into the protocol model. In this sense, it may be considered a variant of reachability analysis. We use the notion of equivalence to reduce the search complexity. Section IV describes our fault-independent approach.
In contrast, fault-oriented tests are generated for specified faults. Fault-oriented test generation starts from the fault (e.g. a lost message) and synthesizes the necessary topology and sequence of events that trigger the error. This algorithm uses a mix of forward and backward searches. We present our fault-oriented algorithm in Section V.
We conduct case studies for the multicast routing protocol PIM-DM to illustrate differences between the approaches, and provide a basis for comparison.
In the remainder of this section, we describe the system model and definition.
B. The system model
We define our target system in terms of network and topology elements and a fault model.
B.1 Elements of the network
Elements of the network consist of multicast-capable nodes and bi-directional symmetric links. Nodes run the same multicast routing protocol, but not necessarily the same unicast routing. The topology is an N-router LAN modeled at the network level; we do not model the MAC layer.
For end-to-end performance evaluation, the multicast distribution tree is abstracted out as delays between end systems and patterns of loss for the multicast messages. Cascade of LANs or uniform topologies are addressed in future research.
B.2 The fault model
We distinguish between the terms error and fault. An error is a failure of the protocol as defined in the protocol design requirement and specification. For example, duplication in packet delivery is an error for multicast routing. A fault is a low level (e.g. physical layer) anomalous behavior, that may affect the behavior of the protocol under test. Note that a fault may not necessarily be an error for the low level protocol.
The fault model may include: (a) Loss of packets, such as packet loss due to congestion or link failures. We take into consideration selective packet loss, where a multicast packet may be received by some members of the group but not others, (b) Loss of state, such as multicast and/or unicast routing tables, due to machine crashes or insufficient memory resources, (c) The delay model, such as transmission, propagation, or queuing delays. For end-to-end multicast protocols, the delays are those of the multicast distribution tree and depend upon the multicast routing protocol, and (d) Unicast routing anomalies, such as route inconsistencies, oscillations or flapping.
Usually, a fault model is defined in conjunction with the robustness criteria for the protocol under study. For our robustness studies we study PIM. The design robustness goal for PIM is to be able to recover gracefully (i.e. without going into erroneous stable states) from single protocol message loss. That is, being robust to a single message loss implies that transitions cause the protocol to move from one correct stable state to another, even in the presence of selective message loss. In addition, we study PIM protocol behavior in the presence of crashes and route inconsistencies.
C. Test Sequence Definition
A fault model may include a single fault or multiple faults. For our robustness studies we adopt a single-fault model, where only a single fault may occur during a scenario or a test sequence.
We define two sequences, T = ⟨e_1, e_2, ..., e_n⟩ and T′ = ⟨e_1, e_2, ..., e_j, f, e_k, ..., e_n⟩, where e_i is an event and f is a fault. Let P(q, T) be the sequence of states and stimuli of protocol P under test T starting from the initial state q. T′ is a test sequence if final P(q, T′) is incorrect; i.e. the stable state reached after the occurrence of the fault does not satisfy the protocol correctness conditions (see Section II-E), irrespective of P(q, T). In the case of a fault-free sequence, where T = T′, the error is attributed to a protocol design error. Whereas when T ≠ T′ and final P(q, T) is correct, the error is manifested by the fault. This definition ignores transient protocol behavior. We are only concerned with the stable (i.e. non-transient) behavior of a protocol.
D. Test Scenario
A test scenario is defined by a sequence of (host) events, a topology, and a fault model, as shown in Figure 3. The events are actions performed by the host and act as input to the system; for example, join, leave, or send packet. The topology is the routed topology of a set of nodes and links. The nodes run the set of protocols under test or other supporting protocols. The links can be either point-to-point links or LANs. This model may be extended later to represent various delays and bandwidths between pairs of nodes, by using a virtual LAN matrix (see [10]). The fault model is used to inject the fault into the test. According to our single-message loss model, for example, a fault may denote the 'loss of the second message of type prune traversing a certain link'. Knowing the location and the triggering action of the fault is important in analyzing the protocol behavior.
E. Brief description of PIM-DM
For our robustness studies, we apply our automatic test generation algorithms to a version of the Protocol Independent Multicast-Dense Mode, or PIM-DM. The description given here is useful for Sections III through V.
PIM-DM uses broadcast-and-prune to establish the multicast distribution trees. In this mode of operation, a multicast packet is broadcast to all leaf subnetworks. Subnetworks with no local members send prune messages towards the source(s) of the packets to stop further broadcasts.
Routers with new members joining the group trigger Graft messages towards previously pruned sources to re-establish the branches of the delivery tree. Graft messages are acknowledged explicitly at each hop using the Graft-Ack message.
PIM-DM uses the underlying unicast routing tables to get the next-hop information needed for the RPF (reverse-pathforwarding) checks. This may lead to situations where there are multiple forwarders for a LAN. The Assert mechanism prevents these situations and ensures there is at most one forwarder for a LAN.
The correct function of a multicast routing protocol in general, is to deliver data from senders to group members (only those that have joined the group) without any data loss. For our methods, we only assume that a correctness definition is given by the protocol designer or specification. For illustration, we discuss the protocol errors and the correctness conditions.
E.1 PIM Protocol Errors
In this study we target protocol design and specification errors. We are interested mainly in erroneous stable (i.e. non-transient) states. In general, the protocol errors may be defined in terms of the end-to-end behavior as functional correctness requirements. In our case, for PIM-DM, an error may manifest itself in one of the following ways: 1) black holes: consecutive packet loss between periods of packet delivery, 2) packet looping: the same packet traverses the same set of links multiple times, 3) packet duplication: multiple copies of the same packet are received by the same receiver(s), 4) join latency: lack of packet delivery after a receiver joins the group, 5) leave latency: unnecessary packet delivery after a receiver leaves the group (join and leave latencies may be considered in other contexts as performance issues; in our study we treat them as errors), and 6) wasted bandwidth: unnecessary packet delivery to network links that do not lead to group members.
E.2 Correctness Conditions
We assume that correctness conditions are provided by the protocol designer or the protocol specification. These conditions are necessary to avoid the above protocol errors in a LAN environment. (They are correctness conditions for stable states, i.e. not during transients, and are defined in terms of protocol states as opposed to end-point behavior; the mapping from the functional correctness requirements for multicast routing to this definition in terms of the protocol model is currently done by the designer, and the automation of this process is part of future research.) The conditions include: 1. If one (or more) of the routers is expecting to receive packets from the LAN, then one other router must be a forwarder for the LAN. Violation of this condition may lead to data loss (e.g. join latency or black holes). 2. The LAN must have at most one forwarder at a time. Violation of this condition may lead to data packet duplication. 3. The delivery tree must be loop-free:
(a) Any router should accept packets from one incoming interface only for each routing entry. This condition is enforced by the RPF (Reverse Path Forwarding) check.
(b) The underlying unicast topology should be loop-free. (Some esoteric scenarios of route flapping may lead to multicast loops in spite of RPF checks; currently, our study does not address this issue, as it does not pertain to a localized behavior.) Violation of this condition may lead to data packet looping. 4. If one of the routers is a forwarder for the LAN, then there must be at least one router expecting packets from the LAN. Violation of this condition may lead to leave latency.
III. Search-based Approaches
The problem of test synthesis can be viewed as a search problem. By searching the possible sequences of events and faults over network topologies and checking for design requirements (either correctness or performance), we can construct the test scenarios that stress the protocol. However, due to the state space explosion, techniques must be used to reduce the complexity of the space to be searched. We attempt to use these techniques to achieve high test quality and protocol coverage. In the following, we present the GFSM model for the case study protocol (PIM-DM), and use it as an illustrative example to analyze the complexity of the state space and the search problem, as well as to illustrate the algorithmic details and principles involved in FITG and FOTG.
A. The Protocol Model
We represent the protocol as a finite state machine (FSM) and the overall LAN system by a global FSM (GFSM). I. FSM model: Every instance of the protocol, running on a single router, is modeled by a deterministic FSM consisting of: (i) a set of states, (ii) a set of stimuli causing state transitions, and (iii) a state transition function (or table) describing the state transition rules. For a system i, this is represented by the machine M_i = (S, τ_i, δ_i), where S is a finite set of state symbols, τ_i is the set of stimuli, and δ_i is the state transition function S × τ_i → S.
II. Global FSM model: The global state is defined as the composition of individual router states. The output messages from one router may become input messages to other routers. Such interaction is captured by the GFSM model in the global transition table. The behavior of a system with n routers may be described by M_G = (S_G, τ_G, δ_G), where S_G = S_1 × S_2 × · · · × S_n is the global state space, τ_G = ∪_{i=1}^{n} τ_i is the set of stimuli, and δ_G is the global state transition function S_G × τ_G → S_G.
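To make the composition concrete, here is a minimal Python sketch of a per-router FSM and a global step function. The transition fragment is an illustrative assumed subset, not the full PIM-DM table:

# Sketch: per-router transition fragment and a global step.
DELTA = {
    ("NM", "HJ"): "M",     # host joins: non-member becomes member
    ("M", "FPkt"): "NH",   # member sees a forwarded packet
    ("EU", "SPkt"): "F",   # empty upstream router starts forwarding
    ("NH", "L"): "NC",     # host leaves: negative cache
}

def step(global_state, router, stimulus):
    """Advance one router of the global state tuple on `stimulus`."""
    q = global_state[router]
    nxt = DELTA.get((q, stimulus), q)  # unlisted pairs: no change
    return global_state[:router] + (nxt,) + global_state[router + 1:]

gs = ("EU", "NM", "NM")    # a 3-router LAN in an initial state
gs = step(gs, 1, "HJ")     # ('EU', 'M', 'NM')
gs = step(gs, 0, "SPkt")   # ('F', 'M', 'NM')
print(gs)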
The fault model is integrated into the GFSM model. For message loss, the transition caused by the message is either nullified or modified, depending on the selective loss pattern. Crashes may be treated as stimuli causing the routers affected by the crash to transit into a crashed state (the crashed state may be one of the states already defined for the protocol, like the empty state, or may be a new state that was not defined previously for the protocol). Network delays are modeled (when needed) through the delay matrix presented in Section VII. For a given group and a given source (i.e., for a specific source-group pair), we define the states w.r.t. a specific LAN to which the router R_i is attached. For example, a state may indicate that a router is a forwarder for (or a receiver expecting packets from) the LAN.
B. PIM-DM Model
B.1.a States (S).
The possible states for upstream and downstream routers are as follows:
S_i = {F_i, F_i_Timer, NF_i, EU_i}, if the router is upstream;
S_i = {NH_i, NH_i_Timer, NC_i, M_i, NM_i, ED_i}, if the router is downstream.
B.1.b Stimuli (τ).
The stimuli considered here include transmitting and receiving protocol messages, timer events, and external host events. Only stimuli leading to a change of state are considered. For example, transmitting messages per se (vs. receiving messages) does not cause any change of state, except for the Graft, in which case the Rtx timer is set. Following are the stimuli considered in our study:
1. Transmitting messages: Graft transmission (Graft_Tx).
2. Receiving messages: Graft reception (Graft_Rcv), Join reception (Join), Prune reception (Prune), Graft Acknowledgement reception (GAck), Assert reception (Assert), and forwarded packets reception (FPkt).
3. Timer events: these events occur due to timer expiration (Exp) and include the Graft re-transmission timer (Rtx), the event of its expiration (RtxExp), the forwarder-deletion timer (Del), and the event of its expiration (DelExp). We refer to the event of timer expiration as (TimerImplication).
4. External host events (Ext): include a host sending packets (SPkt), a host joining a group (HJoin or HJ), and a host leaving a group (Leave or L).
τ = {Join, Prune, Graft_Tx, Graft_Rcv, GAck, Assert, FPkt, Rtx, Del, SPkt, HJ, L}.
B.2 Global FSM model
Subscripts are added to distinguish different routers. These subscripts are used to describe router semantics and how routers interact on a LAN. An example global state for a topology of 4 routers connected to a LAN, with router 1 as a forwarder, router 2 expecting packets from the LAN, and routers 3 and 4 having negative caches, is given by {F_1, NH_2, NC_3, NC_4}. For the global stimuli τ_G, subscripts are added to stimuli to denote their originators and recipients (if any). The global transition rules δ_G are extended to encompass the router and stimuli subscripts (semantics of the global stimuli and global transitions will be described as needed; see Section V).
C. Defining stable states
We are concerned with stable state (i.e. non-transient) behavior, defined in this section. To obtain erroneous stable states, we need to define the transition mechanisms between such states. We introduce the concept of transition classification and completion to distinguish between transient and stable states.
C.1 Classification of Transitions
We identify two types of transitions; externally triggered (ET) and internally triggered (IT) transitions. The former is stimulated by events external to the system (e.g., HJoin or Leave), whereas the latter is stimulated by events internal to the system (e.g., F P kt or Graf t).
We note that some transitions may be triggered due to either internal and external events, depending on the scenario. For example, a P rune may be triggered due to forwarding packets by an upstream router F P kt (which is an internal event), or a Leave (which is an external event).
A global state is checked for correctness at the end of an externally triggered transition after completing its dependent internally triggered transitions.
Therefore, we should be able to identify a transition based upon its stimuli (either external or internal).
At the end of each complete transition sequence the system exists in either a correct or erroneous stable state. Event-triggered timers (e.g., Del, Rtx) fire at the end of a complete transition.
D. Problem Complexity
The problem of finding test scenarios leading to protocol error can be viewed as a search problem of the protocol state space. Conventional reachability analysis [11] attempts to investigate this space exhaustively and incurs the 'state space explosion' problem. To circumvent this problem we use search reduction techniques using domain-specific information of multicast routing.
In this section, we give the complexity of exhaustive search, then discuss the reduction techniques we employ based on the notion of equivalence, and give the complexity of the state space.
D.1 Complexity of exhaustive search
Exhaustive search attempts to generate all states reachable from the initial system states. In the symbolic representation, r routers in state q are represented by q^r. The global state for a system of n routers is represented by G = (q_1^{r_1}, q_2^{r_2}, ..., q_m^{r_m}), where m = |S| and Σ r_i = n. For symbolic representation of topologies where n is unknown, r_i ∈ [0, 1, 2, 1+, *] (where '1+' is 1 or more, '2+' is 2 or more, and '*' is 0 or more). (Crashes force any state to the empty state.)
Two system states (q_1, q_2, ..., q_n) and (p_1, p_2, ..., p_n) are strictly equivalent iff q_i = p_i, where q_i, p_i ∈ S, ∀ 1 ≤ i ≤ n. However, all routers use the same deterministic FSM model, hence all n! permutations of (q_1, q_2, ..., q_n) are equivalent. A global state for a system with n routers may therefore be represented as Π_{i=1}^{|S|} s_i^{k_i}, where k_i is the number of routers in state s_i ∈ S and Σ_{i=1}^{|S|} k_i = n. Formally, Counting Equivalence states that two system states Π_{i=1}^{|S|} s_i^{k_i} and Π_{i=1}^{|S|} s_i^{l_i} are equivalent if k_i = l_i ∀i. Those transitions or faults leading to equivalent states are considered equivalent; the notion of counting equivalence also applies to transitions and faults.
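Counting equivalence is easy to exploit programmatically: a global state reduces to a canonical representative that keeps only the multiset of per-router states. A minimal sketch (the state representation is an assumption of this sketch):

# Sketch: canonical form of a global state under counting
# equivalence; two states are equivalent iff they have the same
# multiset of per-router states.
from collections import Counter

def canonical(global_state):
    return tuple(sorted(Counter(global_state).items()))

a = ("F", "NH", "NC", "NC")
b = ("NC", "F", "NC", "NH")
print(canonical(a) == canonical(b))  # True: both are F^1 NH^1 NC^2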
To satisfy the correctness conditions for PIM-DM, the correct stable global states are those containing no forwarders and no routers expecting packets, or those containing one forwarder and one or more routers expecting packets from the link; symbolically this may be given by:
G_1 = (F^0, NH^0, NC^*), and G_2 = (F^1, NH^{1+}, NC^*).
(For convenience, we may represent these two states as G_1 = (NC^*) and G_2 = (F, NH^{1+}, NC^*).) We have found these conditions to be reasonably sufficient to meet the functional correctness requirements; however, they may not be necessary, hence the search may generate false errors. Proving necessity is part of future work.
We use X to denote any state si ∈ S. The correct space and the erroneous space must be disjoint and they must be complete (i.e. add up to the complete space), otherwise the specification is incorrect. See Appendix I-A for details.
We present two correctness definitions that are used in our case.
• The first definition considers the forwarder states as F and the routers expecting packets from the LAN as NH. Hence, the symbolic representation of the correct states becomes:
({X − NH − F}^*), or (NH, F, {X − F}^*),
and the number of correct states is:
C(n + s − 3, n) + C(n + s − 4, n − 2).
• The second definition considers the forwarder states as {F_i, F_i_Del}, or simply F_X, and the states expecting packets from the LAN as {NH_i, NH_i_Rtx}, or simply NH_X. Hence, the symbolic representation of the correct states becomes:
({X − NH_X − F_X}^*), or (NH_X, F_X, {X − F_X}^*),
and the number of correct states is:
C(n + s − 5, n) + 4 · C(n + s − 5, n − 2) − 2 · C(n + s − 6, n − 3).
Refer to Appendix I-B for more details on deriving the number of correct states.
In general, we find that the size of the error state space, according to both definitions, constitutes the major portion of the whole state space. This means that search techniques explicitly exploring the error states are likely to be more complex than others. We take this into consideration when designing our methods.
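These binomial formulae can be evaluated directly; a small sketch using math.comb (the values of n and s below are illustrative assumptions):

# Sketch: counting correct stable global states for an n-router LAN,
# where s = |S| is the number of per-router states.
from math import comb

def correct_states_def1(n, s):
    return comb(n + s - 3, n) + comb(n + s - 4, n - 2)

def correct_states_def2(n, s):
    return (comb(n + s - 5, n) + 4 * comb(n + s - 5, n - 2)
            - 2 * comb(n + s - 6, n - 3))

def total_states(n, s):
    # number of multisets of size n over s per-router states
    return comb(n + s - 1, n)

n, s = 4, 10  # e.g., 4 routers, 10 per-router states (assumed)
t = total_states(n, s)
c = correct_states_def2(n, s)
print(t, c, t - c)  # error states = total - correct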
IV. Fault-independent Test Generation
Fault-independent test generation (FITG) uses the forward search technique to investigate parts of the state space. As in reachability analysis, forward search starts from initial states and applies the stimuli repeatedly to produce the reachable state space (or part thereof). Conventionally, an exhaustive search is conducted to explore the state space. In the exhaustive approach all reachable states are expanded until the reachable state space is exhausted. We use several manifestations of the notion of counting equivalence introduced earlier to reduce the complexity of the exhaustive algorithm, expanding only one representative of each class of equivalent subspaces. To examine robustness of the protocol, we incorporate selective loss scenarios into the search.
A. Reduction Using Equivalences
The search procedure starts from the initial states and keeps a list of states visited to prevent looping. (For our case study the routers start as either non-members (NM) or empty upstream routers (EU); that is, the initial states are I.S. = {NM, EU}.) Each state is expanded by applying the stimuli and advancing the state machine forward by implementing the transition rules and returning a new stable state each time (for details of these procedures, see Appendix II-A). We use the counting equivalence notion to reduce the complexity of the search in three stages of the search:
1. The first reduction we use is to investigate only the equivalent initial states. To achieve this we simply treat the set of states constituting the global state as an unordered set. One procedure that produces such an equivalent initial state space is given in Appendix II-B. The complexity of this algorithm is given by C(n + i.s. − 1, n), as was shown in Section III-D.2 and verified through simulation.
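A sketch of the reduced forward search, with counting equivalence used to prune revisits; `delta` and `is_error` are hypothetical stand-ins for the global transition function (returning stable successor states) and the correctness check:

# Sketch: breadth-first forward search over counting-equivalence
# classes of global states.
from collections import Counter, deque

def forward_search(initial_states, stimuli, delta, is_error):
    canon = lambda gs: frozenset(Counter(gs).items())
    seen = {canon(gs) for gs in initial_states}
    errors = []
    queue = deque(initial_states)
    while queue:
        gs = queue.popleft()
        for stim in stimuli:
            for nxt in delta(gs, stim):
                key = canon(nxt)
                if key in seen:
                    continue  # an equivalent state was already expanded
                seen.add(key)
                if is_error(nxt):
                    errors.append((gs, stim, nxt))
                queue.append(nxt)
    return errors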
B. Applying the Method
In this section we discuss how the fault-independent test generation can be applied to the model of PIM-DM. We apply forward search techniques to study correctness of PIM-DM. We first study the complexity of the algorithms without faults. Then we apply selective message loss to study the protocol behavior and analyze the protocol errors.
B.1 Method input
The protocol model is provided by the designer or protocol specification, in terms of a transition table.

[Table: simulation statistics for the forward algorithms, exhaustive vs. reduced]

Rtrs  Exhaustive   Reduced  Exhaustive   Reduced
3     178          30       2840         263
4     644          48       14385        503
6     7480         106      271019       1430
8     80830        200      4122729      3189
10    843440       338      55951533     6092
12    8621630      528      708071468    10483
14    86885238     778      8.546E+09    16738

      Transitions           Errors
Rtrs  Exhaustive   Reduced  Exhaustive   Reduced
3     343          65       33           6
4     1293         119      191          13
6     14962        307      3235         43
8     158913       633      41977        101
10    1638871      1133     491195       195
12    16666549     1843     5441177      333
14    167757882    2799     58220193     523

Similar results were obtained for the number of forwards, expanded states and number of error states. The reduction gained by using the counting equivalence is exponential.
A more detailed presentation of the algorithmic details and results is given in Appendix II.
For robustness analysis (vs. verification), faults are included in the GFSM model. Intuitively, an increase in the overall complexity of the algorithms will be observed. Other kinds of equivalence may be investigated to reduce complexity in these cases; an example is fault dominance, where a system is proven to necessarily reach one error before reaching another, thus the former error dominates the latter. Also, other techniques for complexity reduction may be investigated, such as statistical sampling based on randomization or hashing as used in SPIN [14]. However, sampling techniques do not achieve full coverage of the state space. (In one case we found that the correctness conditions for the model are sufficient but not necessary to meet the functional requirements for correctness, thus leading to a false error; sufficiency and necessity proofs are a subject of future work.)
• The topology used in this study is limited to a single-hop LAN. Although we found it quite useful to study multicast routing over LANs, the method needs to be extended to multi-hop LAN to be more general. Our work in [10] introduces the notion of virtual LAN, and future work addresses multi-LAN topologies.
In sum, the fault-independent test generation may be used for protocol verification given the symmetry inherent in the system studied (i.e., protocol and topology). For robustness studies, where the fault model is included in the search, the complexity of the search grows. In this approach we did not address performance issues or topology synthesis. These issues are addressed in the coming sections. However, we shall re-use the notion of forward search and the use of counting equivalence in the method discussed next.
V. Fault-oriented Test Generation
In this section, we investigate the fault-oriented test generation (FOTG), where the tests are generated for specific faults. In this method, the test generation algorithm starts from the fault(s) and searches for a possible error, establishing the necessary topology and events to produce the error.
Once the error is established, a backward search technique produces a test sequence leading to the erroneous state, if such a state is reachable. We use the FSM formalism presented in Section III to represent the protocol. We also re-use some ideas from the FITG algorithm previously presented, such as forward search and the notion of equivalence for search reduction.
A. FOTG Method Overview
Fault-oriented test generation (FOTG) targets specific faults or conditions, and so is better suited to study robustness in the presence of faults in general. FOTG has three main stages: a) topology synthesis, b) forward implication and error detection, and c) backward implication.
The topology synthesis establishes the necessary components (e.g., routers and hosts) of the system to trigger the given condition (e.g., to trigger a protocol message).
HJoin (Ext): NM → M, Graft_Tx.(NC → NH)
Leave (Ext): M → NM, Prune.(NH → NC), Prune.(NH_Rtx → NC)
Such pre-conditions can be derived automatically from the post-conditions. In Appendix III, we describe the 'PreConditions' procedure that takes as input one form of the conventional post-condition transition table and generates the set of pre-conditions:
F_i:       Join ←− F_i_Del, Join ←− NF_i, Graft_Rcv ←− NF_i, SPkt ←− EU_i
F_i_Del:   Prune ←− F_i
NF_i:      Del ←− F_i_Del, Assert ←− F_i
NH_i:      Rtx, GAck ←− NH_i_Rtx, HJ ←− NC_i, FPkt ←− M_i, FPkt ←− ED_i
NH_i_Rtx:  Graft_Tx ←− NH_i
NC_i:      FPkt ←− NM_i, L ←− NH_i_Rtx, L ←− NH_i
EU_i:      ← I.S.
ED_i:      ← I.S.
M_i:       HJ ←− NM_i
NM_i:      L ←− M_i, ← I.S.
In cases where the stimulus affects more than one router (e.g., a multicast Prune), multiple states need to be simultaneously implied in one backward step; otherwise an I.S. may not be reached.
B. FOTG details
As previously mentioned, our FOTG approach consists of three phases: I) synthesis of the global state to inspect, II) forward implication, and III) backward implication. These phases are explained in more detail in this section. In Section V-C we present an illustrative example for these phases.
B.1 Synthesizing the Global State
Starting from a condition (e.g., protocol message or stimulus), and using the information in the protocol model (i.e. the transition table), a global state is synthesized for investigation. We refer to this state as the global-state inspected (G_I), and it is obtained as follows:
C. Applying The Method
In this section we discuss how the fault-oriented test generation can be applied to the model of PIM-DM, using the Join message as an example of topology synthesis and forward/backward implication.

Backward implication:
G_I = {NH_i, NF_k, NC_j}
←(Prune)− G_{I−1} = {NH_i, F_k, NC_j}
←(FPkt)− G_{I−2} = {M_i, F_k, NM_j}
←(SPkt)− G_{I−3} = {M_i, EU_k, NM_j}
←(HJ_i)− G_{I−4} = {NM_i, EU_k, NM_j} = I.S.
Losing the Join by the forwarding router R_k leads to an error state where router R_i is expecting packets from the LAN, but the LAN has no forwarder.
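A sketch of the backward implication step driven by such pre-condition rules; the rule fragment below is an assumed subset, and unlike the full algorithm this toy version implies one router state at a time (so it does not handle multicast stimuli that must imply several states at once):

# Sketch: backward implication from a synthesized global state
# toward an initial state, with backtracking on dead ends.
PRE = {
    "NH": [("FPkt", "M"), ("HJ", "NC")],
    "F":  [("SPkt", "EU")],
    "NC": [("FPkt", "NM")],
    "M":  [("HJ", "NM")],
}
INITIAL = {"NM", "EU"}

def backward(global_state, path=()):
    """Return one forward event sequence leading from an initial
    state to `global_state` (a tuple of router states), or None."""
    if all(q in INITIAL for q in global_state):
        return list(reversed(path))
    for i, q in enumerate(global_state):
        for stim, prev in PRE.get(q, []):
            prior = global_state[:i] + (prev,) + global_state[i + 1:]
            seq = backward(prior, path + (f"{stim}@{i}",))
            if seq is not None:
                return seq
    return None  # backtrack: no rule leads back to an initial state

print(backward(("NH", "F", "NC")))
# ['FPkt@2', 'SPkt@1', 'HJ@0', 'FPkt@0']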
C.3 Summary of Results
In this section we briefly discuss the results of applying the method. Our method as presented here, however, may not be generalized to transform any type of timing problem into a sequencing problem. This topic bears more research in the future.
We have used the sequences of events generated automatically by the algorithm to analyze protocol errors and suggest fixes. For the crash studies, forward implication is then applied, and the behavior after the crash is checked for correct packet delivery. To achieve this, host stimuli (i.e. SPkt, HJ and L) are applied, then the system state is checked for correctness.
In many of the cases studied, the system recovered from the crash (i.e. the system state was eventually correct). The recovery is mainly due to the nature of PIM-DM, where protocol states are re-created upon reception of data packets. This result is not likely to extend to protocols of other natures, e.g. PIM Sparse-Mode [15].
However, in violation of the robustness requirements, there existed cases in which the system did not recover. In Figure 7 (crash leading to join latency), the host joining in (II, a) did not have sufficient state to send a Graft and hence experiences join latency until the negative-cache state times out upstream and packets are forwarded onto the LAN as in (II, b).
In Figure 8 (II, a), the downstream router incurs join latency due to the crash of the upstream router. The state is not corrected until the periodic broadcast takes place, and packets are forwarded onto the LAN as in (II, b).
Fig. 8: Crash leading to black holes.
D. Challenges and Limitations
Although we have been able to apply FOTG to PIM-DM successfully, a discussion of the open issues and challenges is called for. In this section we address some of these issues.
• The analysis for our case studies did not consider network delays. In order to study end-to-end protocols, network delays must be considered in the model. In [10] we introduce the notion of virtual LAN to include end-to-end delay semantics.
• Minimal topologies that are necessary and sufficient to trigger the stimuli may not be sufficient to capture all correctness violations. For example, in some cases it may require one member to trigger a Join, but two members to experience an error caused by Join loss. Hence, the topology synthesis stage must be complete in order to capture all possible errors. To achieve this we propose to use the symbolic representation. For example, to cover all topologies with one or more members we use (M^{1+}). Integration of this notation with the full method is part of future work. We believe that the strength of our fault-oriented method, as was demonstrated, lies in its ability to construct the necessary conditions for erroneous behavior by starting directly from the fault and avoiding the exhaustive walk of the state space. Also, converting timing problems into sequencing problems (as was shown for the Graft analysis) reduces the complexity required to study timers. FOTG as presented here seems best fit to study protocol robustness in the presence of faults. Faults presented in our studies include single selective loss of protocol messages and router crashes.
VII. Conclusions
In this study we have proposed the STRESS framework to integrate test generation into the protocol design process. More case studies are needed to show more general applicability of our methodology.
Appendix
I. State Space Complexity
In this appendix we present an analysis of the state space complexity of our target system. Specifically, we present a completeness proof of the state space and the formulae to compute the size of the correct state space.
A. State Space Completeness
We define the space of all states as X^*, denoting zero or more routers in any state. We also define the algebraic operators for the space, where
X^* = X^0 ∪ X^1 ∪ X^{2+}    (1)
(Y^n, X^*) = (Y^{n+}, {X − Y}^*)    (2)
A.1 Error states
In general, an error may manifest itself as packet duplicates, packet loss, or wasted bandwidth. This is mapped onto the state of the global FSM as follows:
1. The existence of two or more forwarders on the LAN with one or more routers expecting packets from the LAN (e.g., in the NH_X state) indicates duplicate delivery of packets.
2. The existence of one or more routers expecting packets from the LAN with no forwarders on the LAN indicates a deficiency in packet delivery (join latency or black holes).
3. The existence of one or more forwarders for the LAN with no routers expecting packets from the LAN indicates wasted bandwidth (leave latency or extra overhead).
- for duplicates: one or more NH_X with two or more F_X:
(NH_X, F_X^{2+}, X^*)    (3)
- for extra bandwidth: one or more F_X with zero NH_X:
(F_X, {X − NH_X}^*)    (4)
- for black holes or packet loss: one or more NH_X with zero F_X:
(NH_X, {X − F_X}^*)    (5)
A.2 Correct states
As described earlier, the correct states can be described by the following rule:
∃ exactly one forwarder for the LAN iff ∃ one or more routers expecting packets from the LAN.
-zero N HX with zero FX ;
({X − N HX − FX } * )(6)
-one or more N HX with exactly one FX ;
(N HX , FX , {X − FX } * )(7)
from (B.2) and (B.3) we get:
N HX , F 2+ X , {X − FX } *(8)
if we take the union of (B.8), (B.5) and (B.7), and apply (B.1) we get:
(N HX, X * ) = N H 1+ X , {X − N HX } *(9)
also, from (B.4) and (B.2) we get:
F 1+ X , {X − N HX − FX } *(10)
if we take the union of (B.10) and (B.6) we get:
(F * X , {X − N HX − FX } * ) = ({X − N HX } * )(11)
taking the union of (B.9) and (B.11) we get:
(N H * X , {X − N HX} * ) = (X * )(12)
which is the complete state space. To obtain the ErrorStates we can use: The percentage of the correct and error states
B. Number of Correct and Error State Spaces
ErrorStates = T otalStates − CorrectStates.
II. Forward Search Algorithms
This appendix includes detailed procedures that implement the forward search method as described in Section IV.
It also includes detailed statistics collected for the case study on PIM-DM.
A. Exhaustive Search
The ExpandSpace procedure given below implements an
Fig. 11
Simulation statistics for forward algorithms. The complexity of this procedure is given by (i.s.) n .
B. Reduction Using Equivalence
We use the counting equivalence notion to reduce the complexity of the search in 3 ways:
1. The first reduction we use is to investigate only the equivalent initial states, we call this algorithm Equiv. One procedure that produces such equivalent initial state space is the EquivInit procedure given below. We call the algorithm after the third reduction the reduced algorithm.
C. Complexity analysis of forward search for PIM-DM
The number of reachable states visited, the number of transitions and the number of erroneous states found were recorded. The result is given in Figures 10, 11, 12, 13. The reduction is the ratio of the numbers obtained using the exhaustive algorithm to those obtained using the reduced algorithm.
The This means that we have obtained exponential reduction in complexity, as shown in Figure 14.
A. Pre-Conditions
The procedure described below takes as input the set of post-conditions for the FSM stimuli and genrates the set of pre-conditions. The 'conds' array contains the postconditions (i.e., the effects of the stimuli on the system) and
is indexed by the stimulus.
C. Topology Synthesis
The following procedure synthesizes minimum topologies necessary to trigger the various stimuli of the protocol. It performs the third and forth steps of the topology synthesis procedure explained in Section V-B. were reachable. The statistics for the total and average number of backward calls, rewind calls and backtracks is given in Figure 15.
Although the topology synthesis study we have presented above is not complete, we have covered a large number of corner cases using only a manageable number of topologies and search steps.
To obtain a complete representation of the topologies, we suggest to use the symbolic representation 31 presented in Section III. Based on our initial estimates we expect the number of symbolic topology representations to be approximately 224 topologies, ranging from 2 to 8-router LAN topologies, for the single selective loss and single crash models.
F. Experimental statistics for PIM-DM
To investigate the utility of FOTG as a verification tool we ran this set of simulations. This is not, however, how FOTG is used to study protocol robustness (see previous section for case study analysis).
We also wanted to study the effect of unreachable states on the complexity of the verification. The simulations for our case study show that unreachable states do not contribute in a significant manner to the complexity of the backward search for larger topologies. Hence, in order to use FOTG as a verification tool, it is not sufficient to add the reachability detection capability to FOTG.
The backward search was applied to the equivalent error states (for LANs with 2 to 5 routers connected). The simulation setup involved a call to a procedure similar to 'EquivInit'
in Appendix II-B, with the parameter S as the set of state 31 We have used the repetition constructs '0', '1', '*'. Simulation statistics for backward algorithms symbols, and after an error check was done a call is made to the 'Backward' procedure instead of 'ExpandSpace'.
States were classified as reachable or unreachable. For the four topologies studied (LANs with 2 to 5 routers) statistics were measured (e.g., max, min, median, average, and total) for number of calls to the 'Backward' and 'Rewind' procedures, and the number of backTracks were measured. As shown in Figure 16, the statistics show that, as the topology grows, all the numbers for the reachable states get significantly larger than those for the unreachable states (as in Figure 17), despite the fact that that the percentage of unreachable states increases with the topology as in Figure 18.
The reason for such behavior is due to the fact that when the state is unreachable the algorithm reaches a dead-end relatively early (by exhausting one branch of the search tree).
However, for reachable states, the algorithm keeps on searching until it reaches an initial global state. Hence the reachable states search constitutes the major component that contributes to the complexity of the algorithm.
G. Results
We have implemented an early version of the algorithm in the NS/VINT environment (see http://catarina.usc.edu/vint) and used it to drive detailed simulations of PIM-DM therein, to verify our findings. In this section we discuss the results of applying our method to PIM-DM. The analysis is conducted for single selective message loss.
For the following analyzed messages, we present the steps for topology synthesis, forward and backward implication.
G.3 Graft
Following are the resulting steps for the Graf t loss:
= {N H, N F } Graf t T x −→ G I+1 = {N H Rtx , N F } T imer −→ G I+2 = {N H, N F } Graf t T x −→ G I+3 = {N H Rtx , N F } Graf t Rcv −→ G I+4 = {N H Rtx , F } GAck −→ G I+5 = {N H, F } correct state
We did not reach an error state when the Graf t was lost, with non-interleaving external events.
H. Interleaving events and Sequencing
A Graf t message is acknowledged by the Graf t − Ack (GAck) message, and if not acknowledged it is retransmitted when the retransmission timer expires. In an attempt to create an erroneous scenario, the algorithm generates sequences to clear the retransmission timer, and insert an adverse event.
Since the Graf t reception causes an upstream router to become a forwarder for the LAN, the algorithm interleaves a
Leave event as an adversary event to cause that upstream router to become a non-forwarder. Backward Implication:
Using backward implication, we can construct a sequence of events leading to conditions sufficient to trigger the GAck. Hence, when a Graf t followed by a P rune is interleaved with the Graf t loss, the retransmission timer is reset with the receipt of the GAck for the first Graf t, and the systems ends up in an error state. | 8,424 |
cs0007005 | 2951188190 | In this paper, we present a new methodology for developing systematic and automatic test generation algorithms for multipoint protocols. These algorithms attempt to synthesize network topologies and sequences of events that stress the protocol's correctness or performance. This problem can be viewed as a domain-specific search problem that suffers from the state space explosion problem. One goal of this work is to circumvent the state space explosion problem utilizing knowledge of network and fault modeling, and multipoint protocols. The two approaches investigated in this study are based on forward and backward search techniques. We use an extended finite state machine (FSM) model of the protocol. The first algorithm uses forward search to perform reduced reachability analysis. Using domain-specific information for multicast routing over LANs, the algorithm complexity is reduced from exponential to polynomial in the number of routers. This approach, however, does not fully automate topology synthesis. The second algorithm, the fault-oriented test generation, uses backward search for topology synthesis and backtracking to generate event sequences instead of searching forward from initial states. Using these algorithms, we have conducted studies for correctness of the multicast routing protocol PIM. We propose to extend these algorithms to study end-to-end multipoint protocols using a virtual LAN that represents delays of the underlying multicast distribution tree. | The combination of timed automata, invariants, simulation mappings, automaton composition, and temporal logic @cite_29 seems to provide very useful tools for proving (or disproving) and reasoning about safety or liveness properties of distributed algorithms. It may also be used to establish asymptotic bounds on the complexity of distributed algorithms. It is not clear, however, how theorem-proving techniques can be used in test synthesis to construct event sequences and topologies that stress network protocols. Parts of our work draw from distributed-algorithm verification principles. Yet we feel that our work complements such work, as we focus on test synthesis problems. | {
"abstract": [
"The temporal logic of actions (TLA) is a logic for specifying and reasoning about concurrent systems. Systems and their properties are represented in the same logic, so the assertion that a system meets its specification and the assertion that one system implements another are both expressed by logical implication. TLA is very simple; its syntax and complete formal semantics are summarized in about a page. Yet, TLA is not just a logician's toy; it is extremely powerful, both in principle and in practice. This report introduces TLA and describes how it is used to specify and verify concurrent algorithms. The use of TLA to specify and reason about open systems will be described elsewhere."
],
"cite_N": [
"@cite_29"
],
"mid": [
"2015688007"
]
} | Systematic Testing of Multicast Routing Protocols: Analysis of Forward and Backward Search Techniques | Network protocols are becoming more complex with the exponential growth of the Internet, and the introduction of new services at the network, transport and application levels. In particular, the advent of IP multicast and the MBone enabled applications ranging from multi-player games to distance learning and teleconferencing, among others. To date, little effort has been exerted to formulate systematic methods and tools that aid in the design and characterization of these protocols.
In addition, researchers are observing new and obscure, yet all too frequent, failure modes over the Internet [1] [2]. Such failures are becoming more frequent, mainly due to the increased heterogeneity of technologies, interconnects, and configurations of various network components. Due to the synergy and interaction between different network protocols and components, errors at one layer may lead to failures at other layers of the protocol stack. Furthermore, degraded performance of low-level network protocols may have ripple effects on end-to-end protocols and applications.
Network protocol errors are often detected by application failure or performance degradation. Such errors are hardest to diagnose when the behavior is unexpected or unfamiliar. Even if a protocol is proven to be correct in isolation, its behavior may be unpredictable in an operational network, where interaction with other protocols and the presence of failures may affect its operation. Protocol errors may be very costly to repair if discovered after deployment. Hence, endeavors should be made to capture protocol flaws early in the design cycle before deployment. To provide an effective solution to the above problems, we present a framework for the systematic design and testing of multicast protocols. The framework integrates test generation algorithms with simulation and implementation. We propose a suite of practical methods and tools for automatic test generation for network protocols.
Many researchers [3] [4] have developed protocol verification methods to ensure certain properties of protocols, like freedom from deadlocks or unspecified receptions. Much of this work, however, was based on assumptions about network conditions that may not always hold in today's Internet, and hence may become invalid. Other approaches, such as reachability analysis, attempt to check the protocol state space, and generally suffer from the 'state explosion' problem. This problem is exacerbated with the increased complexity of the protocol. Much of the previous work on protocol verification targets correctness. We target protocol performance and robustness in the presence of network failures. In addition, we provide new methods, not found in previous work, for studying multicast protocols and synthesizing topologies.
We investigate two approaches for test generation. The first approach, called the fault-independent test generation, uses a forward search algorithm to explore a subset of the protocol state space to generate the test events automatically. State and fault equivalence relations are used in this approach to reduce the state space. The second approach is called the fault-oriented test generation, and uses a mix of forward and backward search techniques to synthesize test events and topologies automatically.
We have applied these methods to multicast routing. Our case studies revealed several design errors, for which we have formulated solutions with the aid of this systematic process.
We further suggest an extension of the model to include end-to-end delays using the notion of virtual LAN. Such extension, in conjunction with the fault-oriented test generation, can be used for performance evaluation of end-to-end multipoint protocols.
The rest of this document is organized as follows. Section VI presents related work in protocol verification, conformance testing, and VLSI chip testing. Section II introduces the proposed framework and system definition. Sections III, IV, and V present the search-based approaches and problem complexity, the fault-independent test generation, and the fault-oriented test generation, respectively. Section VII concludes; appendices are included for completeness.
• Multicast Routing Overview. Multicast protocols are the class of protocols that support group communication. Multicast routing protocols include DVMRP [5], MOSPF [6], PIM-DM [7], CBT [8], and PIM-SM [9]. Multicast routing aims to deliver packets efficiently to group members by establishing distribution trees. Figure 1 shows a very simple example of a source S sending to a group of receivers Ri. Multicast distribution trees may be established by either broadcast-and-prune or explicit-join protocols. In the former, such as DVMRP or PIM-DM, a multicast packet is broadcast to all leaf subnetworks. Subnetworks with no local members for the group send prune messages towards the source(s) of the packets to stop further broadcasts. Link-state protocols, such as MOSPF, broadcast membership information to all nodes. In contrast, in explicit-join protocols, such as CBT or PIM-SM, routers send hop-by-hop join messages for the groups and sources for which they have local members. We conduct robustness case studies for PIM-DM. We are particularly interested in multicast routing protocols because they are vulnerable to failure modes, such as selective loss, that have not traditionally been studied in the area of protocol design. For most multicast protocols, when routers are connected via a multi-access network (or LAN; we use the term LAN to designate a connected network with respect to IP multicast, which includes shared media such as Ethernet or FDDI, hubs, switches, etc.), hop-by-hop messages are multicast on the LAN and may experience selective loss, i.e., they may be received by some nodes but not others. The likelihood of selective loss is increased by the fact that LANs often contain hubs, bridges, switches, and other network devices. Selective loss may affect protocol robustness. Similarly, end-to-end multicast protocols and applications must deal with situations of selective loss. This differentiates these applications most clearly from their unicast counterparts, and raises interesting robustness questions. Our case studies illustrate why selective loss should be considered when evaluating protocol robustness. This lesson is likely to extend to the design of higher-layer protocols that operate on top of multicast and can experience similar selective loss.
II. Framework Overview
Protocols may be evaluated for correctness or performance. We refer to correctness studies that are conducted in the absence of network failures as verification. In contrast, robustness studies consider the presence of network failures (such as packet loss or crashes). In general, the robustness of a protocol is its ability to respond correctly in the face of network component failures and packet loss. This work presents a methodology for studying and evaluating multicast protocols, specifically addressing robustness and performance issues. We propose a framework that integrates automatic test generation as a basic component for protocol design, along with protocol modeling, simulation and implementation testing. The major contribution of this work lies in developing new methods for generating stress test scenarios that target robustness and correctness violation, or worst case performance.
Instead of studying protocol behavior in isolation, we incorporate the protocol model with network dynamics and failures in order to reveal more realistic behavior of protocols in operation.
This section presents an overview of the framework and its constituent components. The model used to represent the protocol and the system is presented along with definitions of the terms used.
Our framework integrates test generation with simulation and implementation code. It is used for Systematic Testing of Robustness by Evaluation of Synthesized Scenarios (STRESS). As the name implies, systematic methods for scenario synthesis are a core part of the framework. We use the term scenarios to denote the test-suite consisting of the topology and events.
The input to this framework is the specification of a protocol, and a definition of its design requirements, in terms of correctness or performance. Usually robustness is defined in terms of network dynamics or fault models. A fault model represents various component faults; such as packet loss, corruption, re-ordering, or machine crashes. The desired output is a set of test-suites that stress the protocol mechanisms according to the robustness criteria.
As shown in Figure 2, the STRESS framework includes test generation, detailed simulation driven by the synthesized tests, and protocol implementation driven through an emulation interface to the simulator. In this work we focus on the test generation (TG) component.
Fig. 2
The STRESS framework
A. Test Generation
The core contribution of our work lies in the development of systematic test generation algorithms for protocol robustness. We investigate two such algorithms, each using a different approach.
In general test generation may be random or deterministic. Generation of random tests is simple but a large set of tests is needed to achieve a high measure of error coverage. Deterministic test generation (TG), on the other hand, produces tests based on a model of the protocol. The knowledge built into the protocol model enables the production of shorter and higher-quality test sequences. Deterministic TG can be: a) fault-independent, or b) fault-oriented. Fault-independent TG works without targeting individual faults as defined by the fault model. Such an approach may employ a forward search technique to inspect the protocol state space (or an equivalent subset thereof), after integrating the fault into the protocol model. In this sense, it may be considered a variant of reachability analysis. We use the notion of equivalence to reduce the search complexity. Section IV describes our fault-independent approach.
In contrast, fault-oriented tests are generated for specified faults. Fault-oriented test generation starts from the fault (e.g. a lost message) and synthesizes the necessary topology and sequence of events that trigger the error. This algorithm uses a mix of forward and backward searches. We present our fault-oriented algorithm in Section V.
We conduct case studies for the multicast routing protocol PIM-DM to illustrate differences between the approaches, and provide a basis for comparison.
In the remainder of this section, we describe the system model and definition.
B. The system model
We define our target system in terms of network and topology elements and a fault model.
B.1 Elements of the network
Elements of the network consist of multicast-capable nodes and bi-directional symmetric links. Nodes run the same multicast routing protocol, but not necessarily the same unicast routing. The topology is an N-router LAN modeled at the network level; we do not model the MAC layer.
For end-to-end performance evaluation, the multicast distribution tree is abstracted out as delays between end systems and patterns of loss for the multicast messages. Cascades of LANs and uniform topologies are addressed in future research.
B.2 The fault model
We distinguish between the terms error and fault. An error is a failure of the protocol as defined in the protocol design requirement and specification. For example, duplication in packet delivery is an error for multicast routing. A fault is a low level (e.g. physical layer) anomalous behavior, that may affect the behavior of the protocol under test. Note that a fault may not necessarily be an error for the low level protocol.
The fault model may include: (a) Loss of packets, such as packet loss due to congestion or link failures. We take into consideration selective packet loss, where a multicast packet may be received by some members of the group but not others, (b) Loss of state, such as multicast and/or unicast routing tables, due to machine crashes or insufficient memory resources, (c) The delay model, such as transmission, propagation, or queuing delays. For end-to-end multicast protocols, the delays are those of the multicast distribution tree and depend upon the multicast routing protocol, and (d) Unicast routing anomalies, such as route inconsistencies, oscillations or flapping.
Usually, a fault model is defined in conjunction with the robustness criteria for the protocol under study. For our robustness studies we study PIM. The robustness design goal for PIM is to be able to recover gracefully (i.e., without going into erroneous stable states) from single protocol message loss. That is, being robust to a single message loss implies that transitions cause the protocol to move from one correct stable state to another, even in the presence of selective message loss. In addition, we study PIM protocol behavior in the presence of crashes and route inconsistencies.
C. Test Sequence Definition
A fault model may include a single fault or multiple faults. For our robustness studies we adopt a single-fault model, where only a single fault may occur during a scenario or a test sequence.
We define two sequences, T = <e_1, e_2, ..., e_n> and T' = <e_1, e_2, ..., e_j, f, e_k, ..., e_n>, where e_i is an event and f is a fault. Let P(q, T) be the sequence of states and stimuli of protocol P under test T starting from the initial state q. T' is a test sequence if final P(q, T') is incorrect; i.e., the stable state reached after the occurrence of the fault does not satisfy the protocol correctness conditions (see Section II-E), irrespective of P(q, T). In the case of a fault-free sequence, where T = T', the error is attributed to a protocol design error. Whereas when T ≠ T' and final P(q, T) is correct, the error is manifested by the fault. This definition ignores transient protocol behavior. We are only concerned with the stable (i.e., non-transient) behavior of a protocol.
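This definition maps directly onto an executable check. Below is a minimal sketch, assuming a hypothetical step function that advances the GFSM (defined in Section III) to its next stable global state, and a predicate is_correct encoding the correctness conditions of Section II-E:

def final_state(q0, seq, step):
    # Run the GFSM from initial global state q0 over a sequence of
    # events/faults, returning the stable state reached at the end.
    state = q0
    for stimulus in seq:
        state = step(state, stimulus)
    return state

def is_test_sequence(q0, t_prime, step, is_correct):
    # T' is a test sequence iff the stable state it reaches is erroneous.
    return not is_correct(final_state(q0, t_prime, step))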
D. Test Scenario
A test scenario is defined by a sequence of (host) events, a topology, and a fault model, as shown in Figure 3. The events are actions performed by the host and act as input to the system; for example, join, leave, or send packet. The topology is the routed topology of a set of nodes and links. The nodes run the set of protocols under test or other supporting protocols. The links can be either point-to-point links or LANs. This model may be extended later to represent various delays and bandwidths between pairs of nodes, by using a virtual LAN matrix (see [10]). The fault model is used to inject the fault into the test. According to our single-message loss model, for example, a fault may denote the 'loss of the second message of type prune traversing a certain link'. Knowing the location and the triggering action of the fault is important in analyzing the protocol behavior.
E. Brief description of PIM-DM
For our robustness studies, we apply our automatic test generation algorithms to a version of the Protocol Independent Multicast-Dense Mode, or PIM-DM. The description given here is useful for Sections III through V.
PIM-DM uses broadcast-and-prune to establish the multicast distribution trees. In this mode of operation, a multicast packet is broadcast to all leaf subnetworks. Subnetworks with no local members send prune messages towards the source(s) of the packets to stop further broadcasts.
Routers with new members joining the group trigger Graft messages towards previously pruned sources to re-establish the branches of the delivery tree. Graft messages are acknowledged explicitly at each hop using the Graft-Ack message.
PIM-DM uses the underlying unicast routing tables to get the next-hop information needed for the RPF (reverse-pathforwarding) checks. This may lead to situations where there are multiple forwarders for a LAN. The Assert mechanism prevents these situations and ensures there is at most one forwarder for a LAN.
The correct function of a multicast routing protocol, in general, is to deliver data from senders to group members (only those that have joined the group) without any data loss. For our methods, we only assume that a correctness definition is given by the protocol designer or specification. For illustration, we discuss the protocol errors and the correctness conditions.
E.1 PIM Protocol Errors
In this study we target protocol design and specification errors. We are interested mainly in erroneous stable (i.e., non-transient) states. In general, the protocol errors may be defined in terms of the end-to-end behavior as functional correctness requirements. In our case, for PIM-DM, an error may manifest itself in one of the following ways: 1) black holes: consecutive packet loss between periods of packet delivery, 2) packet looping: the same packet traverses the same set of links multiple times, 3) packet duplication: multiple copies of the same packet are received by the same receiver(s), 4) join latency: lack of packet delivery after a receiver joins the group, 5) leave latency: unnecessary packet delivery after a receiver leaves the group (join and leave latencies may be considered performance issues in other contexts; in our study, however, we treat them as errors), and 6) wasted bandwidth: unnecessary packet delivery to network links that do not lead to group members.
E.2 Correctness Conditions
We assume that correctness conditions are provided by the protocol designer or the protocol specification. These conditions are necessary to avoid the above protocol errors in a LAN environment. (They are correctness conditions for stable states, i.e., not during transients, and are defined in terms of protocol states as opposed to end-point behavior. The mapping from the functional correctness requirements for multicast routing to this protocol-model definition is currently done by the designer; automating this process is part of future research.) The conditions include: 1. If one (or more) of the routers is expecting to receive packets from the LAN, then one other router must be a forwarder for the LAN. Violation of this condition may lead to data loss (e.g., join latency or black holes). 2. The LAN must have at most one forwarder at a time. Violation of this condition may lead to data packet duplication. 3. The delivery tree must be loop-free:
(a) Any router should accept packets from one incoming interface only for each routing entry. This condition is enforced by the RPF (Reverse Path Forwarding) check.
(b) The underlying unicast topology should be loop-free. (Some esoteric scenarios of route flapping may lead to multicast loops in spite of RPF checks; our study does not currently address this issue, as it does not pertain to localized behavior.) Violation of this condition may lead to data packet looping. 4. If one of the routers is a forwarder for the LAN, then there must be at least one router expecting packets from the LAN. Violation of this condition may lead to leave latency.
III. Search-based Approaches
The problem of test synthesis can be viewed as a search problem. By searching the possible sequences of events and faults over network topologies and checking for design requirements (either correctness or performance), we can construct the test scenarios that stress the protocol. However, due to the state space explosion, techniques must be used to reduce the complexity of the space to be searched. We attempt to use these techniques to achieve high test quality and protocol coverage. In the following, we present the GFSM model for the case-study protocol (PIM-DM), and use it as an illustrative example to analyze the complexity of the state space and the search problem, as well as to illustrate the algorithmic details and principles involved in FITG and FOTG.
A. The Protocol Model
We represent the protocol as a finite state machine (FSM) and the overall LAN system by a global FSM (GFSM). I. FSM model: Every instance of the protocol, running on a single router, is modeled by a deterministic FSM consisting of: (i) a set of states, (ii) a set of stimuli causing state transitions, and (iii) a state transition function (or table) describing the state transition rules. For a system i, this is represented by the machine M_i = (S, τ_i, δ_i), where S is a finite set of state symbols, τ_i is the set of stimuli, and δ_i is the state transition function S × τ_i → S.
II. Global FSM model: The global state is defined as the composition of individual router states. The output messages from one router may become input messages to other routers. Such interaction is captured by the GFSM model in the global transition table. The behavior of a system with n routers may be described by M_G = (S_G, τ_G, δ_G), where S_G = S_1 × S_2 × ··· × S_n is the global state space, τ_G = ∪_{i=1}^{n} τ_i is the set of stimuli, and δ_G is the global state transition function S_G × τ_G → S_G.
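As an illustrative sketch of this composition (the helper names are ours and purely illustrative), a global transition under the selective-loss fault model can be written as a map of the per-router transition function over the global state:

def global_step(global_state, stimulus, delta, lost_at=frozenset()):
    # Apply one stimulus to every router; 'delta' is the per-router
    # transition table {(state, stimulus): next_state}, and 'lost_at' is
    # the set of router indices at which the message is selectively lost.
    next_state = []
    for i, local in enumerate(global_state):
        if i in lost_at:
            next_state.append(local)  # router i never sees the message
        else:
            next_state.append(delta.get((local, stimulus), local))
    return tuple(next_state)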
The fault model is integrated into the GFSM model. For message loss, the transition caused by the message is either nullified or modified, depending on the selective loss pattern. Crashes may be treated as stimuli causing the routers affected by the crash to transit into a crashed state (the crashed state may be one of the states already defined for the protocol, like the empty state, or may be a new state that was not defined previously for the protocol). Network delays are modeled (when needed) through the delay matrix presented in Section VII. For a given group and a given source (i.e., for a specific source-group pair), we define the states w.r.t. a specific LAN to which the router R_i is attached. For example, a state may indicate that a router is a forwarder for (or a receiver expecting packets from) the LAN. The possible states for upstream and downstream routers are as follows:
B. PIM-DM Model
S_i = {F_i, F_i_Del, NF_i, EU_i} if the router is upstream;
S_i = {NH_i, NH_i_Rtx, NC_i, M_i, NM_i, ED_i} if the router is downstream.
(Here F_i_Del denotes a forwarder whose deletion timer is running, and NH_i_Rtx a next-hop router whose Graft retransmission timer is set.)
B.1.b Stimuli (τ).
The stimuli considered here include transmitting and receiving protocol messages, timer events, and external host events. Only stimuli leading to a change of state are considered. For example, transmitting messages per se (vs. receiving messages) does not cause any change of state, except for the Graft, in which case the Rtx timer is set. Following are the stimuli considered in our study:
1. Transmitting messages: Graft transmission (Graft_Tx). 2. Receiving messages: Graft reception (Graft_Rcv), Join reception (Join), Prune reception (Prune), Graft Acknowledgement reception (GAck), Assert reception (Assert), and forwarded packets reception (FPkt).
3. Timer events: these events occur due to timer expiration (Exp) and include the Graft retransmission timer (Rtx), the event of its expiration (RtxExp), the forwarder-deletion timer (Del), and the event of its expiration (DelExp). We refer to the event of timer expiration as (TimerImplication).
4. External host events (Ext): include host sending packets (SPkt), host joining a group (HJoin or HJ), and host leaving a group (Leave or L). τ = {Join, Prune, Graft_Tx, Graft_Rcv, GAck, Assert, FPkt, Rtx, Del, SPkt, HJ, L}.
B.2 Global FSM model
Subscripts are added to distinguish different routers. These subscripts are used to describe router semantics and how routers interact on a LAN. An example global state for a topology of 4 routers connected to a LAN, with router 1 as a forwarder, router 2 expecting packets from the LAN, and routers 3 and 4 having negative caches, is given by {F_1, NH_2, NC_3, NC_4}. For the global stimuli τ_G, subscripts are added to stimuli to denote their originators and recipients (if any). The global transition rules δ_G are extended to encompass the router and stimuli subscripts. (Semantics of the global stimuli and global transitions will be described as needed; see Section V.)
C. Defining stable states
We are concerned with stable state (i.e. non-transient) behavior, defined in this section. To obtain erroneous stable states, we need to define the transition mechanisms between such states. We introduce the concept of transition classification and completion to distinguish between transient and stable states.
C.1 Classification of Transitions
We identify two types of transitions: externally triggered (ET) and internally triggered (IT) transitions. The former are stimulated by events external to the system (e.g., HJoin or Leave), whereas the latter are stimulated by events internal to the system (e.g., FPkt or Graft).
We note that some transitions may be triggered by either internal or external events, depending on the scenario. For example, a Prune may be triggered by an upstream router forwarding packets (FPkt, an internal event), or by a Leave (an external event).
A global state is checked for correctness at the end of an externally triggered transition after completing its dependent internally triggered transitions.
Therefore, we should be able to identify a transition based upon its stimuli (either external or internal).
At the end of each complete transition sequence the system exists in either a correct or an erroneous stable state. Event-triggered timers (e.g., Del, Rtx) fire at the end of a complete transition.
D. Problem Complexity
The problem of finding test scenarios leading to protocol error can be viewed as a search problem of the protocol state space. Conventional reachability analysis [11] attempts to investigate this space exhaustively and incurs the 'state space explosion' problem. To circumvent this problem we use search reduction techniques using domain-specific information of multicast routing.
In this section, we give the complexity of exhaustive search, then discuss the reduction techniques we employ based on notion of equivalence, and give the complexity of the state space.
D.1 Complexity of exhaustive search
Exhaustive search attempts to generate all states reachable from the initial system states. In the symbolic representation, r routers in state q are represented by q^r. The global state for a system of n routers is represented by G = (q_1^{r_1}, q_2^{r_2}, ..., q_m^{r_m}), where m = |S| and Σ r_i = n. For symbolic representation of topologies where n is unknown, r_i ∈ {0, 1, 2, 1+, *} ('1+' is 1 or more, and '*' is 0 or more). (Crashes force any state to the empty state.)
Two system states (q_1, q_2, ..., q_n) and (p_1, p_2, ..., p_n) are strictly equivalent iff q_i = p_i, where q_i, p_i ∈ S, ∀ 1 ≤ i ≤ n. However, all routers use the same deterministic FSM model, hence all n! permutations of (q_1, q_2, ..., q_n) are equivalent. A global state for a system with n routers may therefore be represented as Π_{i=1}^{|S|} s_i^{k_i}, where k_i is the number of routers in state s_i ∈ S and Σ_{i=1}^{|S|} k_i = n. Formally, counting equivalence states that two system states Π_{i=1}^{|S|} s_i^{k_i} and Π_{i=1}^{|S|} s_i^{l_i} are equivalent if k_i = l_i ∀i. The notion of counting equivalence also applies to transitions and faults: those transitions or faults leading to equivalent states are considered equivalent.
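A compact way to realize counting equivalence in code is to canonicalize each global state into a multiset; a minimal sketch (helper names are illustrative):

from collections import Counter

def canonical(global_state):
    # Canonical representative of the n! permutations of a global state.
    return tuple(sorted(global_state))

def counting_equivalent(g1, g2):
    # Two global states are counting-equivalent iff each FSM state symbol
    # occurs the same number of times in both.
    return Counter(g1) == Counter(g2)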
To satisfy the correctness conditions for PIM-DM, the correct stable global states are those containing no forwarders and no routers expecting packets, or those containing one forwarder and one or more routers expecting packets from the link; symbolically this may be given by:
G_1 = (F^0, NH^0, NC^*) and G_2 = (F^1, NH^{1+}, NC^*).
(For convenience, we may represent these two states as G_1 = (NC^*) and G_2 = (F, NH^{1+}, NC^*). We have found these conditions to be reasonably sufficient to meet the functional correctness requirements; however, they may not be necessary, hence the search may generate false errors. Proving necessity is part of future work.)
We use X to denote any state s_i ∈ S. The correct space and the erroneous space must be disjoint and they must be complete (i.e., add up to the complete space), otherwise the specification is incorrect. See Appendix I-A for details.
We present two correctness definitions that are used in our case.
• The first definition considers the forwarder states as F and the routers expecting packets from the LAN as NH. Hence, the symbolic representation of the correct states becomes:
({X − NH − F}^*), or (NH, F, {X − F}^*),
and the number of correct states is:
C(n + s − 3, n) + C(n + s − 4, n − 2).
• The second definition considers the forwarder states as {F_i, F_i_Del}, or simply F_X, and the states expecting packets from the LAN as {NH_i, NH_i_Rtx}, or simply NH_X. Hence, the symbolic representation of the correct states becomes:
({X − NH_X − F_X}^*), or (NH_X, F_X, {X − F_X}^*),
and the number of correct states is:
C(n + s − 5, n) + 4 · C(n + s − 5, n − 2) − 2 · C(n + s − 6, n − 3).
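These counts are straightforward to evaluate mechanically. The sketch below assumes, consistently with the initial-state count C(n + i.s. − 1, n) used in Section IV, that the total number of states under counting equivalence is the number of multisets of size n over the s state symbols:

from math import comb

def total_states(n, s):
    # Multisets of size n over s state symbols.
    return comb(n + s - 1, n)

def correct_states_first_def(n, s):
    # Valid for n >= 2 (math.comb rejects negative arguments).
    return comb(n + s - 3, n) + comb(n + s - 4, n - 2)

def correct_states_second_def(n, s):
    # Valid for n >= 3.
    return (comb(n + s - 5, n) + 4 * comb(n + s - 5, n - 2)
            - 2 * comb(n + s - 6, n - 3))

def error_states(n, s, correct):
    return total_states(n, s) - correct(n, s)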
Refer to Appendix I-B for more details on deriving the number of correct states.
In general, we find that the size of the error state space, according to both definitions, constitutes the major portion of the whole state space. This means that search techniques explicitly exploring the error states are likely to be more complex than others. We take this into consideration when designing our methods.
IV. Fault-independent Test Generation
Fault-independent test generation (FITG) uses the forward search technique to investigate parts of the state space. As in reachability analysis, forward search starts from initial states and applies the stimuli repeatedly to produce the reachable state space (or part thereof). Conventionally, an exhaustive search is conducted to explore the state space. In the exhaustive approach all reachable states are expanded until the reachable state space is exhausted. We use several manifestations of the notion of counting equivalence introduced earlier to reduce the complexity of the exhaustive algorithm and expand only equivalent subspaces. To examine robustness of the protocol, we incorporate selective loss scenarios into the search.
A. Reduction Using Equivalences
The search procedure starts from the initial states and keeps a list of states visited to prevent looping. (For our case study the routers start as either non-members (NM) or empty upstream routers (EU); that is, the initial states are I.S. = {NM, EU}.) Each state is expanded by applying the stimuli and advancing the state machine forward by implementing the transition rules and returning a new stable state each time; for details of these procedures, see Appendix II-A. We use the counting equivalence notion to reduce the complexity of the search in three stages:
1. The first reduction we use is to investigate only the equivalent initial states. To achieve this we simply treat the set of states constituting the global state as an unordered set. One procedure that produces such an equivalent initial state space is given in Appendix II-B. The complexity of this algorithm is C(n + i.s. − 1, n), as shown in Section III-D.2 and verified through simulation.
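One way such a procedure can be realized is by enumerating the multisets directly; a minimal sketch (the symbol names follow the case study):

from itertools import combinations_with_replacement

def equiv_init(n, initial_symbols=("NM", "EU")):
    # All initial global states up to counting equivalence: the multisets
    # of size n over the initial state symbols, C(n + i.s. - 1, n) of them.
    return [tuple(combo)
            for combo in combinations_with_replacement(initial_symbols, n)]

# e.g., equiv_init(2) -> [('NM', 'NM'), ('NM', 'EU'), ('EU', 'EU')]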
B. Applying the Method
In this section we discuss how the fault-independent test generation can be applied to the model of PIM-DM. We apply forward search techniques to study correctness of PIM-DM. We first study the complexity of the algorithms without faults. Then we apply selective message loss to study the protocol behavior and analyze the protocol errors.
B.1 Method input
The protocol model is provided by the designer or protocol specification, in terms of a transition table.

Rtrs  Exhaustive  Reduced   Exhaustive  Reduced
3     178         30        2840        263
4     644         48        14385       503
6     7480        106       271019      1430
8     80830       200       4122729     3189
10    843440      338       55951533    6092
12    8621630     528       708071468   10483
14    86885238    778       8.546E+09   16738

            Transitions           Errors
Rtrs  Exhaustive  Reduced   Exhaustive  Reduced
3     343         65        33          6
4     1293        119       191         13
6     14962       307       3235        43
8     158913      633       41977       101
10    1638871     1133      491195      195
12    16666549    1843      5441177     333
14    167757882   2799      58220193    523

Similar results were obtained for the number of forwards, expanded states, and number of error states. The reduction gained by using the counting equivalence is exponential.
A more detailed presentation of the algorithmic details and results is given in Appendix II.
For robustness analysis (vs. verification), faults are included in the GFSM model. Intuitively, an increase in the overall complexity of the algorithms will be observed. Other kinds of equivalence may be investigated to reduce complexity in these cases; an example is fault dominance, where a system is proven to necessarily reach one error before reaching another, thus the former error dominates the latter. Also, other techniques for complexity reduction may be investigated, such as statistical sampling based on randomization or hashing, as used in SPIN [14]. However, sampling techniques do not achieve full coverage of the state space.
(Notes: This is one case where the correctness conditions for the model are sufficient but not necessary to meet the functional requirements for correctness, thus leading to a false error; sufficiency and necessity proofs are a subject of future work. Repetition constructs include, for example, '*' to represent zero or more states, '1+' to represent one or more states, '2+' two or more, and so on.)
• The topology used in this study is limited to a single-hop LAN. Although we found it quite useful to study multicast routing over LANs, the method needs to be extended to multi-hop LAN to be more general. Our work in [10] introduces the notion of virtual LAN, and future work addresses multi-LAN topologies.
In sum, the fault-independent test generation may be used for protocol verification given the symmetry inherent in the system studied (i.e., protocol and topology). For robustness studies, where the fault model is included in the search, the complexity of the search grows. In this approach we did not address performance issues or topology synthesis. These issues are addressed in the coming sections. However, we shall re-use the notion of forward search and the use of counting equivalence in the method discussed next.
V. Fault-oriented Test Generation
In this section, we investigate the fault-oriented test generation (FOTG), where the tests are generated for specific faults. In this method, the test generation algorithm starts from the fault(s) and searches for a possible error, establishing the necessary topology and events to produce the error.
Once the error is established, a backward search technique produces a test sequence leading to the erroneous state, if such a state is reachable. We use the FSM formalism presented in Section III to represent the protocol. We also re-use some ideas from the FITG algorithm previously presented, such as forward search and the notion of equivalence for search reduction.
A. FOTG Method Overview
Fault-oriented test generation (FOTG) targets specific faults or conditions, and so is better suited to study robustness in the presence of faults in general. FOTG has three main stages: a) topology synthesis, b) forward implication and error detection, and c) backward implication.
The topology synthesis establishes the necessary components (e.g., routers and hosts) of the system to trigger the given condition (e.g., trigger a protocol message). For example:
HJoin (Ext): NM → M, Graft_Tx.(NC → NH)
Leave (Ext): M → NM, Prune.(NH → NC), Prune.(NH_Rtx → NC)
The above pre-conditions can be derived automatically from the post-conditions. In Appendix III, we describe the 'PreConditions' procedure that takes as input one form of the conventional post-condition transition table and generates the pre-conditions. The backward transition rules are summarized below:
F_i: Join ← F_i_Del, Join ← NF_i, Graft_Rcv ← NF_i, SPkt ← EU_i
F_i_Del: Prune ← F_i
NF_i: Del ← F_i_Del, Assert ← F_i
NH_i: GAck ← NH_i_Rtx, HJ ← NC_i, FPkt ← M_i, FPkt ← ED_i
NH_i_Rtx: Graft_Tx ← NH_i
NC_i: FPkt ← NM_i, L ← NH_i_Rtx, L ← NH_i
EU_i: ← I.S.
ED_i: ← I.S.
M_i: HJ ← NM_i
NM_i: L ← M_i, ← I.S.
In cases where the stimulus affects more than one router (e.g., a multicast Prune), multiple states need to be simultaneously implied in one backward step; otherwise an I.S. may not be reachable.
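One backward-implication step over these rules can be sketched as follows. Here back_rules is a hypothetical dictionary built from the table above, e.g. back_rules["F"] = [("Join", "F_Del"), ("Join", "NF"), ("Graft_Rcv", "NF"), ("SPkt", "EU")]:

def backward_step(global_state, i, back_rules):
    # Yield (stimulus, predecessor_global_state) pairs obtained by moving
    # router i one step backward; stimuli that affect several routers at
    # once would need a joint step over all affected routers.
    for stimulus, predecessor in back_rules.get(global_state[i], []):
        prev = list(global_state)
        prev[i] = predecessor
        yield stimulus, tuple(prev)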
B. FOTG details
As previously mentioned, our FOTG approach consists of three phases: I) synthesis of the global state to inspect, II) forward implication, and III) backward implication. These phases are explained in more detail in this section. In Section V-C we present an illustrative example of these phases.
B.1 Synthesizing the Global State
Starting from a condition (e.g., protocol message or stimulus), and using the information in the protocol model (i.e., the transition table), a global state is synthesized for investigation. We refer to this state as the global state inspected (G_I), and it is obtained as follows:
C. Applying The Method
In this section we discuss how the fault-oriented test generation can be applied to the model of PIM-DM. For the Join message, the algorithm performs topology synthesis and forward/backward implication. Backward implication proceeds as follows:
G_I = {NH_i, NF_k, NC_j} ←(Prune) G_{I−1} = {NH_i, F_k, NC_j} ←(FPkt) G_{I−2} = {M_i, F_k, NM_j} ←(SPkt) G_{I−3} = {M_i, EU_k, NM_j} ←(HJ_i) G_{I−4} = {NM_i, EU_k, NM_j} = I.S.
Losing the Join by the forwarding router R_k leads to an error state where router R_i is expecting packets from the LAN, but the LAN has no forwarder.
C.3 Summary of Results
In this section we briefly discuss the results of applying the method to PIM-DM. Our method as presented here, however, may not be generalized to transform any type of timing problem into a sequencing problem; this topic bears more research in the future.
We have used the sequences of events generated automatically by the algorithm to analyze protocol errors and suggest fixes. For crash scenarios, forward implication is then applied, and the behavior after the crash is checked for correct packet delivery. To achieve this, host stimuli (i.e., SPkt, HJ, and L) are applied, then the system state is checked for correctness.
In many of the cases studied, the system recovered from the crash (i.e., the system state was eventually correct). The recovery is mainly due to the nature of PIM-DM, where protocol states are re-created with the reception of data packets. This result is not likely to extend to protocols of other natures, e.g., PIM Sparse-Mode [15].
However, in violation of the robustness requirements, there existed cases in which the system did not recover. In Figure 7, the host joining in (II, a) did not have sufficient state to send a Graft and hence experiences join latency until the negative cache state times out upstream and packets are forwarded onto the LAN, as in (II, b).
Fig. 7
Crash leading to join latency
In Figure 8 (II, a), the downstream router incurs join latency due to the crash of the upstream router. The state is not corrected until the periodic broadcast takes place, and packets are forwarded onto the LAN as in (II, b).
Fig. 8
Crash leading to black holes
D. Challenges and Limitations
Although we have been able to apply FOTG to PIM-DM successfully, a discussion of the open issues and challenges is called for. In this section we address some of these issues. • The analysis for our case studies did not consider network delays. In order to study end-to-end protocols, network delays must be considered in the model. In [10] we introduce the notion of a virtual LAN to include end-to-end delay semantics.
• Minimal topologies that are necessary and sufficient to trigger the stimuli may not be sufficient to capture all correctness violations. For example, in some cases it may require one member to trigger a Join, but two members to experience an error caused by Join loss. Hence, the topology synthesis stage must be complete in order to capture all possible errors. To achieve this we propose to use the symbolic representation; for example, to cover all topologies with one or more members we use (M^{1+}). Integration of this notation with the full method is part of future work. We believe that the strength of our fault-oriented method, as was demonstrated, lies in its ability to construct the necessary conditions for erroneous behavior by starting directly from the fault and avoiding an exhaustive walk of the state space. Also, converting timing problems into sequencing problems (as was shown for the Graft analysis) reduces the complexity required to study timers. FOTG as presented here seems best fit to study protocol robustness in the presence of faults. Faults presented in our studies include single selective loss of protocol messages and router crashes.
VII. Conclusions
In this study we have proposed the STRESS framework to integrate test generation into the protocol design process. More case studies are needed to show more general applicability of our methodology.
Appendix
I. State Space Complexity
In this appendix we present an analysis of the state space complexity of our target system. Specifically, we present a completeness proof of the state space and the formulae to compute the size of the correct state space.
A. State Space Completeness
We define the space of all states as X^*, denoting zero or more routers in any state. We also define the algebraic operators for the space, where
X^* = X^0 ∪ X^1 ∪ X^{2+}   (1)
(Y^n, X^*) = (Y^{n+}, {X − Y}^*)   (2)
A.1 Error states
In general, an error may manifest itself as packet duplicates, packet loss, or wasted bandwidth. This is mapped onto the state of the global FSM as follows:
1. The existence of two or more forwarders on the LAN with one or more routers expecting packets from the LAN (e.g., in the NH_X state) indicates duplicate delivery of packets.
2. The existence of one or more routers expecting packets from the LAN with no forwarders on the LAN indicates a deficiency in packet delivery (join latency or black holes).
3. The existence of one or more forwarders for the LAN with no routers expecting packets from the LAN indicates wasted bandwidth (leave latency or extra overhead).
- for duplicates: one or more NH_X with two or more F_X:
(NH_X, F_X^{2+}, X^*)   (3)
- for extra bandwidth: one or more F_X with zero NH_X:
(F_X, {X − NH_X}^*)   (4)
- for black holes or packet loss: one or more NH_X with zero F_X:
(NH_X, {X − F_X}^*)   (5)
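These symbolic patterns translate directly into a check over concrete global states; a minimal sketch (state symbol sets as defined above):

NH_X = {"NH", "NH_Rtx"}   # routers expecting packets from the LAN
F_X = {"F", "F_Del"}      # forwarders for the LAN

def classify_errors(global_state):
    nh = sum(s in NH_X for s in global_state)
    fw = sum(s in F_X for s in global_state)
    errors = []
    if nh >= 1 and fw >= 2:
        errors.append("duplicates")        # pattern (3)
    if fw >= 1 and nh == 0:
        errors.append("wasted bandwidth")  # pattern (4)
    if nh >= 1 and fw == 0:
        errors.append("black hole/loss")   # pattern (5)
    return errors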
A.2 Correct states
As described earlier, the correct states can be described by the following rule:
∃ exactly one forwarder for the LAN iff ∃ one or more routers expecting packets from the LAN.
- zero NH_X with zero F_X:
({X − NH_X − F_X}^*)   (6)
- one or more NH_X with exactly one F_X:
(NH_X, F_X, {X − F_X}^*)   (7)
From (2) and (3) we get:
(NH_X, F_X^{2+}, {X − F_X}^*)   (8)
If we take the union of (8), (5), and (7), and apply (1), we get:
(NH_X, X^*) = (NH_X^{1+}, {X − NH_X}^*)   (9)
Also, from (4) and (2) we get:
(F_X^{1+}, {X − NH_X − F_X}^*)   (10)
If we take the union of (10) and (6) we get:
(F_X^*, {X − NH_X − F_X}^*) = ({X − NH_X}^*)   (11)
Taking the union of (9) and (11) we get:
(NH_X^*, {X − NH_X}^*) = (X^*)   (12)
which is the complete state space. To obtain the error states we can use:
ErrorStates = TotalStates − CorrectStates
(Figure: the percentage of the correct and error states.)
B. Number of Correct and Error State Spaces
II. Forward Search Algorithms
This appendix includes detailed procedures that implement the forward search method as described in Section IV.
It also includes detailed statistics collected for the case study on PIM-DM.
A. Exhaustive Search
The ExpandSpace procedure given below implements an exhaustive forward search; the complexity of this procedure is given by (i.s.)^n.
Fig. 11
Simulation statistics for forward algorithms
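A minimal sketch of such an exhaustive forward expansion, assuming global_step is a two-argument closure over the transition table and is_error encodes the error-state patterns of Appendix I:

def expand_space(initial_states, stimuli, global_step, is_error):
    # Forward expansion of the reachable global state space, recording
    # the erroneous states encountered; 'visited' prevents looping.
    visited, errors = set(), []
    frontier = list(initial_states)
    while frontier:
        state = frontier.pop()
        if state in visited:
            continue
        visited.add(state)
        if is_error(state):
            errors.append(state)
        for stimulus in stimuli:
            frontier.append(global_step(state, stimulus))
    return visited, errors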
B. Reduction Using Equivalence
We use the counting equivalence notion to reduce the complexity of the search in three ways:
1. The first reduction we use is to investigate only the equivalent initial states; we call this algorithm Equiv. One procedure that produces such an equivalent initial state space is the EquivInit procedure given below. We call the algorithm after the third reduction the reduced algorithm.
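A sketch of the reduced expansion, combining the ExpandSpace skeleton above with the canonical multiset representation (helper names as in the earlier sketches):

def expand_space_reduced(initial_states, stimuli, global_step, is_error):
    # Same forward expansion, but every state is canonicalized first, so
    # all n! permutations of a global state collapse to one representative.
    visited, errors = set(), []
    frontier = [tuple(sorted(s)) for s in initial_states]
    while frontier:
        state = frontier.pop()
        if state in visited:
            continue
        visited.add(state)
        if is_error(state):
            errors.append(state)
        for stimulus in stimuli:
            frontier.append(tuple(sorted(global_step(state, stimulus))))
    return visited, errors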
C. Complexity analysis of forward search for PIM-DM
The number of reachable states visited, the number of transitions, and the number of erroneous states found were recorded. The results are given in Figures 10, 11, 12, and 13. The reduction is the ratio of the numbers obtained using the exhaustive algorithm to those obtained using the reduced algorithm.
This means that we have obtained an exponential reduction in complexity, as shown in Figure 14.
A. Pre-Conditions
The procedure described below takes as input the set of post-conditions for the FSM stimuli and generates the set of pre-conditions. The 'conds' array contains the post-conditions (i.e., the effects of the stimuli on the system) and is indexed by the stimulus.
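A minimal sketch of such an inversion (the data layout of 'conds' is an assumption; the example entries follow the Leave post-conditions shown in Section V):

def pre_conditions(conds):
    # Invert the post-condition table: for each stimulus, collect the
    # local states in which that stimulus can fire.
    pre = {}
    for stimulus, transitions in conds.items():
        pre[stimulus] = sorted({before for before, after in transitions})
    return pre

# e.g., with conds["Leave"] = [("M", "NM"), ("NH", "NC"), ("NH_Rtx", "NC")],
# pre_conditions(conds)["Leave"] -> ["M", "NH", "NH_Rtx"]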
C. Topology Synthesis
The following procedure synthesizes the minimum topologies necessary to trigger the various stimuli of the protocol. It performs the third and fourth steps of the topology synthesis procedure explained in Section V-B. The statistics for the total and average number of backward calls, rewind calls, and backtracks are given in Figure 15.
Although the topology synthesis study we have presented above is not complete, we have covered a large number of corner cases using only a manageable number of topologies and search steps.
To obtain a complete representation of the topologies, we suggest using the symbolic representation presented in Section III (we have used the repetition constructs '0', '1', and '*'). Based on our initial estimates we expect the number of symbolic topology representations to be approximately 224 topologies, ranging from 2- to 8-router LAN topologies, for the single selective loss and single crash models.
F. Experimental statistics for PIM-DM
To investigate the utility of FOTG as a verification tool we ran this set of simulations. This is not, however, how FOTG is used to study protocol robustness (see previous section for case study analysis).
We also wanted to study the effect of unreachable states on the complexity of the verification. The simulations for our case study show that unreachable states do not contribute in a significant manner to the complexity of the backward search for larger topologies. Hence, in order to use FOTG as a verification tool, it is sufficient to add the reachability detection capability to FOTG.
The backward search was applied to the equivalent error states (for LANs with 2 to 5 routers connected). The simulation setup involved a call to a procedure similar to 'EquivInit' in Appendix II-B, with the parameter S as the set of state symbols; after an error check was done, a call is made to the 'Backward' procedure instead of 'ExpandSpace'.
Fig. 15
Simulation statistics for backward algorithms
States were classified as reachable or unreachable. For the four topologies studied (LANs with 2 to 5 routers), statistics (e.g., max, min, median, average, and total) were measured for the number of calls to the 'Backward' and 'Rewind' procedures and for the number of backtracks. As shown in Figure 16, as the topology grows, all the numbers for the reachable states get significantly larger than those for the unreachable states (as in Figure 17), despite the fact that the percentage of unreachable states increases with the topology, as in Figure 18.
The reason for this behavior is that when the state is unreachable the algorithm reaches a dead-end relatively early (by exhausting one branch of the search tree).
However, for reachable states, the algorithm keeps on searching until it reaches an initial global state. Hence the reachable states search constitutes the major component that contributes to the complexity of the algorithm.
G. Results
We have implemented an early version of the algorithm in the NS/VINT environment (see http://catarina.usc.edu/vint) and used it to drive detailed simulations of PIM-DM therein, to verify our findings. In this section we discuss the results of applying our method to PIM-DM. The analysis is conducted for single selective message loss.
For the following analyzed messages, we present the steps for topology synthesis, forward and backward implication.
G.3 Graft
Following are the resulting steps for the Graft loss:
G_I = {NH, NF} --Graft_Tx--> G_{I+1} = {NH_Rtx, NF} --Timer--> G_{I+2} = {NH, NF} --Graft_Tx--> G_{I+3} = {NH_Rtx, NF} --Graft_Rcv--> G_{I+4} = {NH_Rtx, F} --GAck--> G_{I+5} = {NH, F} correct state
We did not reach an error state when the Graft was lost with non-interleaving external events.
H. Interleaving events and Sequencing
A Graft message is acknowledged by the Graft-Ack (GAck) message, and if not acknowledged it is retransmitted when the retransmission timer expires. In an attempt to create an erroneous scenario, the algorithm generates sequences to clear the retransmission timer and inserts an adverse event.
Since the Graft reception causes an upstream router to become a forwarder for the LAN, the algorithm interleaves a Leave event as an adversary event to cause that upstream router to become a non-forwarder.
Backward Implication:
Using backward implication, we can construct a sequence of events leading to conditions sufficient to trigger the GAck. Hence, when a Graft followed by a Prune is interleaved with the Graft loss, the retransmission timer is reset with the receipt of the GAck for the first Graft, and the system ends up in an error state. | 8,424
cs0109072 | 2077870744 | We address the problem of complementing higher-order patterns without repetitions of existential variables. Differently from the first-order case, the complement of a pattern cannot, in general, be described by a pattern, or even by a finite set of patterns. We therefore generalize the simply-typed λ-calculus to include an internal notion of strict function so that we can directly express that a term must depend on a given variable. We show that, in this more expressive calculus, finite sets of patterns without repeated variables are closed under complement and intersection. Our principal application is the transformational approach to negation in higher-order logic programs. | Lassez87 proposed the seminal algorithm for computing relative complements and introduced the now familiar restriction to linear terms. We quote the definition of the @math '' algorithm for the (singleton) complement problem given in @cite_22 which we generalize in Definition . Given a finite signature @math and a linear term @math they define: [ ] The relative complement problem is then solved by composing the above complement operation with term intersection implemented via first-order unification. | {
"abstract": [
"Abstract A transformation technique is introduced which, given the Horn-clause definition of a set of predicates p i , synthesizes the definitions of new predicate p i which can be used, under a suitable refutation procedure, to compute the finite failure set of p i . This technique exhibits some computational advantages, such as the possibility of computing nonground negative goals still preserving the capability of producing answers. The refutation procedure, named SLDN refutation, is proved sound and complete with respect to the completed program."
],
"cite_N": [
"@cite_22"
],
"mid": [
"2082994045"
]
} | 0 |
||
cs0109072 | 2077870744 | We address the problem of complementing higher-order patterns without repetitions of existential variables. Differently from the first-order case, the complement of a pattern cannot, in general, be described by a pattern, or even by a finite set of patterns. We therefore generalize the simply-typed λ-calculus to include an internal notion of strict function so that we can directly express that a term must depend on a given variable. We show that, in this more expressive calculus, finite sets of patterns without repeated variables are closed under complement and intersection. Our principal application is the transformational approach to negation in higher-order logic programs. | The class of higher-order patterns inherits many properties from first-order terms. However, as we will see, it is not closed under complement, but a special subclass is. We call a canonical pattern @math if each occurrence of an existential variable @math under binders @math is applied to some permutation of the variables in @math and @math . Fully applied patterns play an important role in functional logic programming and rewriting @cite_6 , because any fully applied existential variable @math denotes all canonical terms of type @math with parameters from @math . It is this property which makes complementation particularly simple. | {
"abstract": [
"Functional logic languages with a sound and complete operational semantics are mainly based on narrowing. Due to the huge search space of simple narrowing, steadily improved narrowing strategies have been developed in the past. Needed narrowing is currently the best narrowing strategy for first-order functional logic programs due to its optimality properties w.r.t. the length of derivations and the number of computed solutions. In this paper, we extend the needed narrowing strategy to higher-order functions and λ-terms as data structures. By the use of definitional trees, our strategy computes only incomparable solutions. Thus, it is the first calculus for higher-order functional logic programming which provides for such an optimality result. Since we allow higher-order logical variables denoting λ-terms, applications go beyond current functional and logic programming languages."
],
"cite_N": [
"@cite_6"
],
"mid": [
"1550461844"
]
} | 0 |
||
cs0109033 | 1863841548 | Nomadic applications create replicas of shared objects that evolve independently while they are disconnected. When reconnecting, the system has to reconcile the divergent replicas. In the log-based approach to reconciliation, such as in the IceCube system, the input is a common initial state and logs of actions that were performed on each replica. The output is a consistent global schedule that maximises the number of accepted actions. The reconciler merges the logs according to the schedule, and replays the operations in the merged log against the initial state, yielding to a reconciled common final state. In this paper, we show the NP-completeness of the log-based reconciliation problem and present two programs for solving it. Firstly, a constraint logic program (CLP) that uses integer constraints for expressing precedence constraints, boolean constraints for expressing dependencies between actions, and some heuristics for guiding the search. Secondly, a stochastic local search method with Tabu heuristic (LS), that computes solutions in an incremental fashion but does not prove optimality. One difficulty in the LS modeling lies in the handling of both boolean variables and integer variables, and in the handling of the objective function which differs from a max-CSP problem. Preliminary evaluation results indicate better performance for the CLP program which, on somewhat realistic benchmarks, finds nearly optimal solutions up to a thousands of actions and proves optimality up to a hundreds of actions. | Log-based reconciliation is a new topic for which few algorithms have been developed. The only implementation we know of is the IceCube system reported in @cite_4 . It is worth noting that the objective function of maximizing the number of accepted actions, is different from maximizing the number of satisfied constraints. For that reason, the modeling of log-based reconciliation as a max-CSP problem is inadequate. This is also the main reason why in our second program based on local search, the min-conflict heuristics @cite_10 or the adaptive search method of @cite_0 do not perform well in our modeling, and we use instead a randomized Tabu heuristics. | {
"abstract": [
"",
"Abstract The paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic , that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n -queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.",
"We describe a novel approach to log-based reconciliation called IceCube. It is general and is parameterised by application and object semantics. IceCube considers more flexible orderings and is designed to ease the burden of reconciliation on the application programmers. IceCube captures the static and dynamic reconciliation constraints between all pairs of actions, proposes schedules that satisfy the static constraints, and validates them against the dynamic constraints. Preliminary experience indicates that strong static constraints successfully contain the potential combinatorial explosion of the simulation stage. With weaker static constraints, the system still finds good solutions in a reasonable time."
],
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_4"
],
"mid": [
"",
"2121766240",
"2154193287"
]
} | CLP versus LS on Log-based Reconciliation Problems | Data replication is a standard technique in distributed systems to make data available in different sites. The different sites may be disconnected (mobile computing) or connected (groupware), in which case shared data are replicated for efficiency reasons in order to avoid access through the network. Obviously, the replication of mutable shared data may cause conflicts: the replicas may diverge into inconsistent states that have to be reconciled. Nomadic applications create replicas of shared objects that evolve independently while they are disconnected; when reconnecting, the system has to reconcile the divergent replicas.
Statement of the optimization problem
We have to reconcile a set of logs of actions that have been realized independently, by trying to accept as many actions as possible.
Input: A finite set of L initial logs of actions {[T_i^1, ..., T_i^{n_i}] | 1 ≤ i ≤ L}, some dependencies between actions T_i^j ⇒ T_k^l, meaning that if T_i^j is accepted then T_k^l must be accepted, and some precedence constraints T_i^j < T_k^l, meaning that if the actions are accepted they must be executed in that order. The precedence constraints are supposed to be satisfied inside the initial logs.
Output: A subset of accepted actions, of maximal cardinality, satisfying the dependency constraints, given with a global schedule T_i^j < ... < T_k^l satisfying the precedence constraints.
Note that the output depends solely on the precedence constraints between actions given in the input. In particular, it is independent of the precise structure of the initial logs. The initial consistent logs can thus be used as starting solutions in some algorithms, but can be forgotten as well without affecting the output.
Complexity
Proposition 1. The decision problem, i.e. finding a schedule of a given length, is NP-complete, even without dependency constraints.
Proof. The decision problem is obviously in NP. Indeed, for any guessed schedule, one can check in polynomial time whether the schedule is consistent.
NP-completeness is shown by encoding SAT into a reconciliation problem with singleton initial logs and precedence constraints only.
Let us assume a SAT problem over N boolean variables with C clauses. For each boolean variable p, we associate 2*C actions p_0^1, p_1^1, ..., p_0^C, p_1^C, with precedence constraints p_0^i < p_1^j and p_1^j < p_0^i for all clause indices i, j in [1, C].
The actions p_0^i and p_1^j are thus mutually exclusive for all clause indices i, j. We represent the valuation false for p by accepting the actions p_0^i for all 1 ≤ i ≤ C, and the valuation true by accepting p_1^i for all 1 ≤ i ≤ C. This defines a one-to-one mapping σ between valuations over N boolean variables and the accepted actions in schedules of length N*C satisfying the mutual exclusion constraints.
For each clause, such as p ∨ q ∨ ¬r, we associate the precedence constraints p_0^i < q_0^i < r_1^i < p_0^i, where i is the index of the clause. Being cyclic, these precedence constraints forbid taking simultaneously the actions p_0^i, q_0^i and r_1^i, that is, they encode the equivalent formula ¬(¬p ∧ ¬q ∧ r). Hence a valuation η satisfies a clause if and only if the actions in σ(η) satisfy all the precedence constraints associated to the clause. Note that, unlike the mutual exclusion constraints, these precedence constraints are posted between action variables with the same clause index only. Now we prove that a set of C clauses over N variables is satisfiable if and only if there exists a schedule accepting N*C actions and satisfying the mutual exclusion constraints and the precedence constraints associated to the clauses.
The implication is clear: if η is a valuation which satisfies all the clauses, then σ(η) is a set of N*C actions which satisfies the mutual exclusion constraints, and which can be ordered with increasing clause indices and according to the precedence constraints for variables with the same clause index.
For the converse, let us suppose that we have a consistent schedule of N*C actions. Because of the mutual exclusion constraints, the schedule defines a valuation of the SAT problem: indeed, for each propositional variable p, either p_0^i is accepted for all i, and p is false, or p_1^i is accepted for all i, and p is true. Furthermore, the precedence constraints between actions of index i establish that this valuation satisfies the i-th clause. Therefore the valuation associated to the schedule satisfies all the clauses.
QED.
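To make the reduction concrete, here is a small Python sketch (ours, not part of the paper) that builds the precedence constraints of this encoding; the representation of clauses as lists of signed variable indices is an assumption:

    # Clauses are lists of signed variable indices, e.g. [1, 2, -3] for p or q or not-r.
    # An action (p, b, i) reads: "variable p takes value b, witnessed at clause index i".
    def encode_sat(num_vars, clauses):
        precedences = []
        C = len(clauses)
        # Mutual exclusion: (p,0,i) and (p,1,j) can never both be scheduled.
        for p in range(1, num_vars + 1):
            for i in range(C):
                for j in range(C):
                    precedences.append(((p, 0, i), (p, 1, j)))
                    precedences.append(((p, 1, j), (p, 0, i)))
        # One precedence cycle per clause, over actions carrying that clause's index.
        for i, clause in enumerate(clauses):
            # Literal p yields the "falsifying" action (p, 0, i); not-p yields (p, 1, i).
            cycle = [(abs(l), 0 if l > 0 else 1, i) for l in clause]
            for a, b in zip(cycle, cycle[1:] + cycle[:1]):
                precedences.append((a, b))
        return precedences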
A CLP(FD,B) approach
Modeling with mixed boolean and integer constraints
In this modeling of the problem, we forget the initial (consistent) logs of actions and consider that all actions are at the same level. We have n elementary actions to which we associate:
n boolean variables a 1 , ..., a n which say whether the action is accepted or not n integer variables p 1 , ..., p n which give the position of the accepted actions in the global schedule
We have some dependency constraints a_i ⇒ a_j, and some precedence constraints
    a_i ∧ a_j ⇒ (p_i < p_j)
or equivalently, assuming false is 0 and true is 1,
    a_i * a_j * p_i < p_j
We want to maximize a_1 + ... + a_n. The search for solutions goes through an enumeration of the boolean variables a_i, with the heuristic of instantiating first the variable a_i which has the greatest number of constraints on it (i.e. first-fail principle w.r.t. the number of posted constraints) and trying first the value 1 (i.e. best-first search for the maximization problem).
This leads to a straightforward CLP(FD,B) program, given in GNU-Prolog syntax (a sketch of the same model, in a different system, is shown below). Note that the labeling done in the optimization predicate proceeds through the boolean variables only. It is well known indeed that interval propagation algorithms provide a complete procedure for checking the satisfiability of precedence constraints. It is thus not necessary to enumerate the possible values of the position variables in the schedule, as we know that the earliest dates are consistent. The labeling of the positions is done outside the optimization predicate, just to compute a ground schedule by taking the earliest dates for each action, without backtracking.
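As an illustration only (ours, not the authors' GNU-Prolog program), the same model can be sketched in Python with the OR-tools CP-SAT solver; the solver choice and all identifiers below are assumptions:

    from ortools.sat.python import cp_model

    def reconcile(n, dependencies, precedences):
        # dependencies: pairs (i, j) meaning a_i => a_j.
        # precedences:  pairs (i, j) meaning a_i /\ a_j => p_i < p_j.
        model = cp_model.CpModel()
        a = [model.NewBoolVar(f'a{i}') for i in range(n)]           # accepted?
        p = [model.NewIntVar(0, n - 1, f'p{i}') for i in range(n)]  # position
        for i, j in dependencies:
            model.AddImplication(a[i], a[j])
        for i, j in precedences:
            # Only enforced when both actions are accepted.
            model.Add(p[i] < p[j]).OnlyEnforceIf([a[i], a[j]])
        model.Maximize(sum(a))
        solver = cp_model.CpSolver()
        solver.Solve(model)
        accepted = [i for i in range(n) if solver.Value(a[i])]
        return sorted(accepted, key=lambda i: solver.Value(p[i]))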
Benchmarks
At the present time, we do not have benchmarks of real-life log-based reconciliation problems. Nevertheless, we expect that in real-life reconciliation problems, the optimal solutions typically accept more than 80% of the actions. These considerations, plus some preliminary inspections of calendar applications and the jigsaw problem presented in [9], led us to create a benchmark of randomly generated problems with density 1.5 for both precedence constraints and dependency constraints. We added a second series of more difficult benchmarks generated with the same density 1.5 for precedence constraints but without dependency constraints. Table 1 depicts the experimental results on both series of benchmarks. The size given in the second column is the total number of actions. The numbers of dependency constraints and precedence constraints are given in the following columns. These constraints are generated for each pair of actions randomly, with probability 1.5/size, which gives 1.5 * size constraints of each type on average. The second series of benchmarks contains no dependency constraints.
The running times of this section have been measured in GNU-Prolog on an 866 MHz Pentium III PC under Linux. We indicate, in order, the number of accepted actions in the first schedule found, the running time for finding this solution, the number of accepted actions in the optimal schedule, the running time for finding the optimal schedule (from the start), and finally the total running time including the proof of optimality.
The first solution found in these benchmarks is always very near the optimal solution. This indicates that the most-constrained variable choice heuristic combined with the best-first search heuristic performs very well in this modeling of the problem. Optimal solutions with their optimality proof are always computed in less than a second for problems with up to a hundred actions. On larger problems optimality proofs become difficult to obtain, but the first solution found is always satisfying and fast to compute. The problems without dependency constraints are harder to solve. Optimality proofs are nevertheless always obtained on problems of size up to 50.
A stochastic local search approach
Approximated solutions to hard optimization problems can be found with heuristics. Local search is one of the fundamental heuristics that has been shown particularly effective for many classes of applications, including some classical benchmarks of constraint programming [10,2,7]. Local search methods iterate the transformation of some initial solution s_0, by choosing at each step the next solution s_{i+1} in some neighborhood N(s_i) of s_i. Descent methods choose at each iteration the neighbour which minimizes the objective function f, s_{i+1} = argmin_{s ∈ N(s_i)} f(s), and stop whenever f(s_{i+1}) > f(s_i). Descent methods thus stop in the first encountered local optimum. Multi-start methods iterate the application of the descent method from different initial solutions, and stop with the best encountered local optimum. Simulated annealing, variable neighborhood methods and Tabu search are local search methods that escape from local minima in an iterative fashion, without restarting descents. Tabu search [8] consists in choosing at each step i the state s_{i+1} = argmin_{s ∈ N(s_i)} f(s), even if f(s_{i+1}) is greater than the best solution found. A Tabu list L records the already visited states. The Tabu states cannot be revisited, except if they improve the objective function. Furthermore, the Tabu list is a short-term memory: after some number of iterations, already visited states are deleted from the Tabu list and can be freely reconsidered. In order to prevent cycles in the same set of solutions, the size of the Tabu list, as well as the neighborhoods, can be changed dynamically during the iterations. The log-based reconciliation problem can be modeled quite naturally as a local search problem for finding schedules that maximize the number of accepted actions. But there is one difficulty in mixing boolean variables and position variables, and in defining an appropriate evaluation function for guiding the search.
In this section we shall treat reconciliation problems with precedence constraints only. One solution is to represent in the states the positions of the actions in the current schedule, and to count in the objective function the number of actions which have all their precedence constraints satisfied. For guiding the search, however, a more refined evaluation function is needed in order to have a measure of progress towards better solutions that do not yet improve the objective function. We thus use an evaluation function that counts, for each violated precedence constraint p_i < p_j, the error 1 + (p_i − p_j).
The local moves are simply the incrementation or the decrementation by 1 of the position of an action in the schedule. The min-conflict heuristic [11] consists in choosing for a move the variable with the highest error and the move which minimizes that error. Nevertheless, in the reconciliation problem, improving the evaluation function on the variables with the highest error does not necessarily lead to a good solution w.r.t. the objective function, as actions with high errors can simply be not accepted. Therefore we do not use the min-conflict heuristic, and perform local moves on all actions as long as they improve the evaluation function. The value of a configuration w.r.t. the objective function is the number of actions with 0 errors. Note that this value is in fact a lower bound of the cost of the configuration w.r.t. the objective function, as constraints with unaccepted actions should be ignored. The cost of a configuration w.r.t. the objective function is thus computed as the number of actions remaining after successively removing the actions with the greatest number of violated constraints with non-removed actions.
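For concreteness, the evaluation function and the cost computation just described can be sketched as follows (our illustration, not the paper's code; actions are indexed 0..n−1 and precedences is a list of pairs (i, j) standing for p_i < p_j):

    def evaluation(positions, precedences):
        # Sum of errors 1 + (p_i - p_j) over the violated constraints p_i < p_j.
        return sum(1 + positions[i] - positions[j]
                   for i, j in precedences if positions[i] >= positions[j])

    def cost(positions, precedences):
        # Number of actions left after repeatedly removing the action with the
        # greatest number of violated constraints against non-removed actions.
        alive = set(range(len(positions)))
        def violated(k):
            return sum(1 for i, j in precedences
                       if k in (i, j) and i in alive and j in alive
                       and positions[i] >= positions[j])
        while alive:
            worst = max(alive, key=violated)
            if violated(worst) == 0:
                break
            alive.remove(worst)
        return len(alive)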
This defines the descent method evaluated in the next section. The Tabu search method adds an adaptive memory to escape from local minima. When a local minimum is reached on a variable, it is marked in the Tabu list for a while and is not reconsidered, except if it is the only way to improve the solution. In order to increase diversification, the number of iterations for which the variable is marked Tabu is randomly generated between 1 and some maximum Tabu length value (10 in the experiments). Furthermore, if a local minimum is reached on all variables, one variable is chosen randomly to make a move that degrades the evaluation function.
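One Tabu iteration under these rules can be sketched as follows (again our illustration; it reuses the evaluation function above, and the aspiration rule letting a Tabu move through when it improves the solution is omitted for brevity):

    import random

    def tabu_step(positions, precedences, tabu, it, max_tenure=10):
        # Examine all non-tabu unit moves (shift one position by +/-1).
        current = evaluation(positions, precedences)
        best = None
        for k in range(len(positions)):
            if tabu.get(k, 0) > it:
                continue  # variable k is still marked Tabu
            for d in (-1, 1):
                cand = positions[:]
                cand[k] += d
                e = evaluation(cand, precedences)
                if best is None or e < best[0]:
                    best = (e, k, cand)
        if best is None or best[0] >= current:
            # Local minimum on all admissible variables: random degrading move.
            k = random.randrange(len(positions))
            cand = positions[:]
            cand[k] += random.choice((-1, 1))
            tabu[k] = it + random.randint(1, max_tenure)
            return cand
        e, k, cand = best
        tabu[k] = it + random.randint(1, max_tenure)  # randomized tenure
        return cand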
The user interface of the program visualizes the movements of the actions in the schedule during the search; see Figure 1. The graphical interface represents each action on a different line. The position of the action in the line indicates the scheduling of the action. Precedence constraints are materialized by lines between actions; lines in green represent satisfied precedence constraints, those in yellow are violated.
Table 2. Experimental results (benchmarks without dependency constraints).
Evaluation
The local search algorithm described in the previous section has been implemented in Java. Table 2 depicts the performance results on the previous series of benchmarks without dependency constraints. The running times have been measured with the Java 2 v1.3 JDK compiler. For each benchmark we recall the number of accepted actions in the optimal solution found by the CLP program, and present the best solution found by the (deterministic) descent method and the best solution found by a run of the (randomized) Tabu search method.
These results indicate that the descent method leads to local minima of poor quality, and that the Tabu search program succeeds in escaping from these local minima. However, on large instances, the convergence to better solutions is very slow.
Better tuning of the Tabu search method might still improve its performance on these benchmarks, in particular by improving the diversification strategies. Experimental results obtained with the min-conflict heuristic were worse, for the reasons explained in the previous section. A different modeling seems necessary to use these heuristics, to improve the handling of the objective function, and to treat dependency constraints.
Conclusion
Reconciliation problems in nomadic applications present interesting combinatorial optimization problems, with both static and on-line versions. We have studied an NP-hard log-based reconciliation problem. Its modeling with boolean and precedence constraints leads to a straightforward CLP(FD,B) program that finds nearly optimal solutions for up to a thousand actions, and proves optimality for up to a hundred actions on realistic benchmarks. Enumeration proceeds through the boolean variables only. The precedence constraints are propagated in a complete way, which makes enumeration superfluous. This program could still benefit, however, from more efficient algorithms for detecting cycles in precedence constraints, with a complexity independent of the size of the variables' domain [3].
We have developed a second program based on local search with Tabu heuristic. One potential advantage of the local search approach is that it can benefit from the initial logs as starting solution, and can provide solutions incrementally for the on-line reconciliation problem. Currently, however, this program performs poorly in comparison with the CLP program. One difficulty lies in the handling of both the boolean variables and the integer variables which represent the positions of the actions in the schedule. In our modeling of the problem, the adaptive search method of [2] did not perform well because of the difficulty of handling the objective function, which differs from a max-CSP problem. Other modelings are thus currently under investigation. | 2,667
0706.2520 | 1820838581 | The traffic behavior of University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-VLAN traffic. The cross-correlation matrix C of the traffic rate changes between different traffic time series is calculated and tested against null-hypothesis of random interactions. The majority of the eigenvalues i of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The distribution of eigenvalues and eigenvectors outside of the RMT bounds displays prominent and systematic deviations from the RMT predictions. Moreover, these deviations are stable in time. The method we use provides a unique possibility to accomplish three concurrent tasks of traffic analysis. The method verifies the uncongested state of the network, by establishing the profile of random interactions. It recognizes the system-specific large-scale interactions, by establishing the profile of stable in time non-random interactions. Finally, by looking into the eigenstatistics we are able to detect and allocate anomalies of network traffic interactions. | The urgent need for a network-wide, scalable approach to the problem of healthy network traffic profile creation is expressed in works of @cite_20 @cite_15 @cite_23 @cite_10 @cite_0 @cite_25 . There are several studies with the promising results, which demonstrate that the traffic anomalous events cause the temporal changes in statistical properties of traffic features. Lakhina, Crovella and Diot presented the characterization of the network-wide anomalies of the traffic flows. The authors studied three different types of traffic flows and fused the information from flow measurements taken throughout the entire network. They obtained and classified a different set of anomalies for different traffic types using the subspace method @cite_15 . | {
"abstract": [
"IP forwarding anomalies, triggered by equipment failures, implementation bugs, or configuration errors, can significantly disrupt and degrade network service. Robust and reliable detection of such anomalies is essential to rapid problem diagnosis, problem mitigation, and repair. We propose a simple, robust method that integrates routing and traffic data streams to reliably detect forwarding anomalies. The overall method is scalable, automated and self-training. We find this technique effectively identifies forwarding anomalies, while avoiding the high false alarms rate that would otherwise result if either stream were used unilaterally.",
"Hidden semi-Markov Model (HsMM) has been well studied and widely applied to many areas. The advantage of using an HsMM is its efficient forward-backward algorithm for estimating model parameters to best account for an observed sequence. In this paper, we propose an HsMM to model the distribution of network-wide traffic and use an observation window to distinguish DoS flooding attacks mixed within the normal background traffic. Several experiments are conducted to validate our method.",
"Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.",
"",
"",
"The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveals both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework."
],
"cite_N": [
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"1964890720",
"2004469928",
"2144936818",
"",
"140023516",
"2164210932"
]
} | Analysis of Inter-Domain Traffic Correlations: Random Matrix Theory Approach | The infrastructure, applications and protocols of the system of communicating computers and networks are constantly evolving. The traffic, which is an essence of the communication, presently is a voluminous data generated on minute-byminute basis within multi-layered structure by different applications and according to different protocols. As a consequence, there are two general approaches in analysis of the traffic and in modeling of its healthy behavior. In the first approach, the traffic analysis considers the protocols, applications, traffic matrix and routing matrix estimates, independence of ingress and egress points and much more. The second approach treats the infrastructure between the points from which the traffic is obtained as a "black box" [33], [34].
Measuring interactions between logically and architecturally equivalent substructures of the system is a natural extension of the "black box" approach. A certain amount of work in this direction has already been done. Studies of statistical traffic flow properties revealed "congested", "fluid" and "transitional" regimes of the flow at a large scale [1], [2]. The observed collective behavior suggests the existence of large-scale, network-wide correlations between the network subparts. Indeed, [3] showed large-scale cross-correlations between different connections of the Renater scientific network. Moreover, the analysis of correlations across all simultaneous network-wide traffic has been used in the detection of distributed network attacks [4].
The distributions and the time stability of the statistics of established interactions represent characteristic features of the system and may be exploited in creating a healthy network traffic profile, which is an essential part of network anomaly detection. As successfully demonstrated in [5], all tested traffic anomalies change the distribution of traffic features.
Among numerous types of traffic monitoring variables, time series of traffic counts are free of applications "semantics" and thus more preferable for "black box" analysis. To extract the meaningful information about underlying interactions contained in time series, the empirical correlation matrix is a usual tool at hand. In addition, there are various classes of statistical tools, such as principal component analysis, singular value decomposition, and factor analysis, which in turn strongly rely on the validity of the correlation matrix and obtain the meaningful part of the time series. Thus, it is important to understand quantitatively the effect of noise, i.e. to separate the noisy, random interactions from meaningful ones. In addition, it is crucial to consider the finiteness of the time series in the determination of the empirical correlation, since the finite length of time series available to estimate cross correlations introduces "measurement noise" [19]. Statistically, it is also advisable to develop null-hypothesis tests in order to check the degree of statistical validity of the results obtained against cases of purely random interactions.
The methodology of random matrix theory (RMT) was developed for studying the complex energy levels of heavy nuclei and is given a detailed account in [6], [7], [8], [9], [10], [11]. For our purposes this methodology comes in as a series of statistical tests run on the eigenvalues and eigenvectors of the "system matrix", which in our case is the traffic time series cross-correlation matrix C (and is the Hamiltonian matrix in the case of nuclei and other RMT systems [6], [7], [8], [9], [10], [11]).
In our study, we propose to investigate the network traffic as a complex system with a certain degree of mutual interactions of its constituents, i.e. single-link traffic time series, using the RMT approach. We concentrate on the large-scale correlations between the time series generated by Simple Network Management Protocol (SNMP) traffic counters at every router-router and router-VLAN subnet connection of the University of Louisville backbone router system.
The contributions of this study are as follows:
• We propose an application-constraints-free methodology for the analysis of network-wide traffic time series interactions. Even though in this particular study we know in advance that VLANs represent separate broadcast domains, that VLAN-router incoming traffic is traffic intended for other VLANs, and that VLAN-router outgoing traffic is routed traffic from other VLANs, this information is irrelevant for our analysis and is used only in the interpretation of the analysis results.
• Using the RMT, we are able to separate the random interactions from the system-specific interactions. The vast majority of traffic time series interact in a random fashion. The time-stable random interactions signify healthy, congestion-free traffic. The proposed analysis of the eigenvector distribution allows us to verify the time series content of uncongested traffic.
• The time-stable non-random interactions provide us with information about large-scale system-specific interactions.
• Finally, the temporal changes in random and non-random interactions can be detected and allocated with the eigenvalue and eigenvector statistics of interactions.
The organization of this paper is as follows. Section II presents the survey of related work. We describe the RMT methodology in Section III. Section IV contains the explanation of the data analyzed. In Section V we test the eigenvalue distribution of the inter-VLAN traffic time series cross-correlation matrix C against the RMT predictions. In Section VI we analyze the content of inter-VLAN traffic interactions by means of the eigenvalues and eigenvectors deviating from RMT. Section VII discusses the characteristic traffic interaction parameters of the system, such as the time stability of the deviating eigenvalues and eigenvectors, the inverse participation ratio (IPR) of the eigenvalue spectra, localization points in the IPR plot, and the overlap matrices of the deviating eigenvectors. With a series of different experiments, we demonstrate how traffic interaction anomalies can be detected and allocated in time and space using various visualization techniques on eigenvalue and eigenvector statistics in Section VIII. We present our conclusions and prospective research steps in Section IX.
III. RMT METHODOLOGY
The RMT was employed in the financial studies of stock correlations [18], [19], communication theory of wireless systems [20], array signal processing [21], bioinformatics studies of protein folding [22]. We are not aware of any work, except for [3], where RMT techniques were applied to the Internet traffic system.
We adopt the methodology used in works on financial time series correlations (see [18], [19] and references therein) and later in [3], which discusses cross-correlations in Internet traffic. In particular, we quantify correlations between N traffic count time series of L time points by calculating the traffic rate change of every time series T_i, i = 1, . . . , N, over a time scale ∆t,

    G_i(t) ≡ ln T_i(t + ∆t) − ln T_i(t)    (1)

where T_i(t) denotes the traffic rate of time series i. This measure is independent of the volume of the traffic exchange and allows capturing subtle changes in the traffic rate [3]. The normalized traffic rate change is

    g_i(t) ≡ (G_i(t) − ⟨G_i⟩) / σ_i    (2)

where σ_i ≡ sqrt(⟨G_i²⟩ − ⟨G_i⟩²) is the standard deviation of G_i and ⟨· · ·⟩ denotes a time average over the period studied. The equal-time cross-correlation matrix C can then be computed as

    C_ij ≡ ⟨g_i(t) g_j(t)⟩    (3)
The properties of the traffic interactions matrix C have to be compared with those of a random cross-correlation matrix [23]. In matrix notation, the interaction matrix C can be expressed as

    C = (1/L) G Gᵀ    (4)

where G is an N × L matrix with elements {g_i^m ≡ g_i(m∆t); i = 1, . . . , N; m = 0, . . . , L − 1}, and Gᵀ denotes the transpose of G. Just as was done in [19], we consider a random correlation matrix

    R = (1/L) A Aᵀ    (5)

where A is an N × L matrix containing N time series of L random elements a_i^m with zero mean and unit variance, which are mutually uncorrelated, as a null hypothesis. Statistical properties of the random matrices R have been known for years in the physics literature [6], [10], [7], [8], [9], [11]. In particular, it was shown analytically [24] that, in the limit N → ∞, L → ∞ with Q ≡ L/N (> 1) fixed, the probability density function P_rm(λ) of the eigenvalues λ of the random matrix R is given by

    P_rm(λ) = (Q/2π) sqrt((λ+ − λ)(λ − λ−)) / λ    (6)

where λ+ and λ− are the maximum and minimum eigenvalues of R, respectively, and λ− ≤ λ_i ≤ λ+; λ+ and λ− are given analytically by

    λ± = 1 + 1/Q ± 2 sqrt(1/Q)    (7)
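As an illustration only (ours; the authors used MATLAB), Eqs. (1)-(4) and the RMT bounds of Eq. (7) translate into a few lines of Python/NumPy:

    import numpy as np

    def correlation_matrix(T):
        # T: array of shape (N, L+1) of traffic counts, with 1 byte already
        # added everywhere to avoid log(0) and constant series removed.
        G = np.log(T[:, 1:]) - np.log(T[:, :-1])                                 # Eq. (1)
        g = (G - G.mean(axis=1, keepdims=True)) / G.std(axis=1, keepdims=True)  # Eq. (2)
        return g @ g.T / g.shape[1]                                              # Eqs. (3)-(4)

    def rmt_bounds(N, L):
        Q = L / N                                                                # Eq. (7)
        return 1 + 1/Q - 2*np.sqrt(1/Q), 1 + 1/Q + 2*np.sqrt(1/Q)

    # Eigenvalues and eigenvectors of the symmetric matrix C:
    # lam, U = np.linalg.eigh(correlation_matrix(T))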
Random matrices display universal functional forms for eigenvalue correlations which depend only on the general symmetries of the matrix. The first step in testing the data for such universal properties is to find a transformation called "unfolding", which maps the eigenvalues λ_i to new variables, the "unfolded eigenvalues" ξ_i, whose distribution is uniform [9], [10], [11]. Unfolding ensures that the distances between eigenvalues are expressed in units of the local mean eigenvalue spacing [9], and thus facilitates the comparison with analytical results. We define the cumulative distribution function of eigenvalues, which counts the number of eigenvalues in the interval λ_i ≤ λ,

    F(λ) = N ∫_{−∞}^{λ} P(x) dx    (8)

where P(x) denotes the probability density of eigenvalues and N is the total number of eigenvalues. The function F(λ) can be decomposed into an average and a fluctuating part,

    F(λ) = F_av(λ) + F_fluc(λ)    (9)

Since P_fluc ≡ dF_fluc(λ)/dλ = 0 on average,

    P_rm(λ) ≡ dF_av(λ)/dλ    (10)

is the averaged eigenvalue density. The dimensionless, unfolded eigenvalues are then given by

    ξ_i ≡ F_av(λ_i)    (11)
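A minimal sketch of this unfolding step (our illustration; the Gaussian broadening procedure used later by the authors, with parameter a, is approximated here by a fixed broadening width sigma):

    import numpy as np
    from math import erf, sqrt

    def unfold(eigenvalues, sigma):
        lam = np.sort(eigenvalues)
        # Gaussian-broadened cumulative count: each eigenvalue contributes a
        # smoothed step, giving a smooth F_av; then xi_i = F_av(lambda_i), Eq. (11).
        def F_av(x):
            return sum(0.5 * (1 + erf((x - l) / (sigma * sqrt(2)))) for l in lam)
        return np.array([F_av(l) for l in lam])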
Three known universal properties of GOE matrices (matrices whose elements are distributed according to a Gaussian probability measure) are: (i) the distribution of nearest-neighbor eigenvalue spacings,

    P_GOE(s) = (πs/2) exp(−πs²/4)    (12)

(ii) the distribution of next-nearest-neighbor eigenvalue spacings, which, according to a theorem due to [8], is identical to the distribution of nearest-neighbor spacings of the Gaussian symplectic ensemble (GSE),

    P_GSE(s) = (2¹⁸/(3⁶π³)) s⁴ exp(−(64/(9π)) s²)    (13)

and finally (iii) the "number variance" statistic Σ², defined as the variance of the number of unfolded eigenvalues in intervals of length l around each ξ_i [9], [11], [10],

    Σ²(l) = ⟨[n(ξ, l) − l]²⟩_ξ    (14)

where n(ξ, l) is the number of unfolded eigenvalues in the interval [ξ − l/2, ξ + l/2]. The number variance is expressed as

    Σ²(l) = l − 2 ∫₀ˡ (l − x) Y(x) dx    (15)

where Y(x) for the GOE case is given by [9]

    Y(x) = s²(x) + (ds/dx) ∫ₓ^∞ s(x′) dx′    (16)

and

    s(x) = sin(πx)/(πx)    (17)
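The empirical number variance of Eq. (14) can be estimated directly from the unfolded eigenvalues; a short sketch (ours):

    import numpy as np

    def number_variance(xi, l, n_windows=1000):
        # Variance of the number of unfolded eigenvalues falling in windows
        # of length l placed at random across the unfolded spectrum.
        xi = np.sort(xi)
        starts = np.random.uniform(xi[0], xi[-1] - l, n_windows)
        counts = [np.sum((xi >= s) & (xi < s + l)) for s in starts]
        return np.var(counts)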
Just as was stressed in [19], [18], [25], the overall time of observation is crucial for explaining the empirical cross-correlation coefficients. On one hand, the longer we observe the traffic, the more information about the correlations we obtain and the less "noise" we introduce. On the other hand, the correlations are not stationary, i.e. they can change with time. To differentiate the "random" contribution to the empirical correlation coefficients from the "genuine" contribution, the eigenvalue statistics of C are contrasted with the eigenvalue statistics of a correlation matrix taken from the so-called "chiral" Gaussian Orthogonal Ensemble [19]. Such an ensemble is one of the ensembles of RMT [25], [26], briefly discussed in Appendix A. A random cross-correlation matrix, which is a matrix filled with uncorrelated Gaussian random numbers, is supposed to represent transient network activity uncorrelated in time, that is, a completely noisy environment. In case the cross-correlation matrix C obeys the same eigenstatistical properties as the RMT matrix, the network traffic is equilibrated and deemed universal in the sense that every single connection interacts with the rest in a completely chaotic manner. It also means a complete absence of congestions and anomalies. Meantime, any stable-in-time deviations from the universal predictions of RMT signify system-specific, non-random properties of the system, providing clues about the nature of the underlying interactions. That allows us to establish the profile of system-specific correlations.
IV. DATA
In this paper, we study the averaged traffic count data collected from all router-router and router-VLAN subnet connections of the University of Louisville backbone router system. The system consists of nine interconnected multigigabit backbone routers, over 200 Ethernet segments and over 300 VLAN subnets. We collected the traffic count data for 3 months, for the period from September 21, 2006 to December 20, 2006, from 7 routers, since two routers are reserved for server farms. The overall data amounted to approximately 18 GB.
The traffic count data is provided by the Multi Router Traffic Grapher (MRTG) tool, which reads the SNMP traffic counters. The MRTG log file never grows in size due to its data consolidation algorithm: it contains records of the average incoming, outgoing, max and min transfer rates in bytes per second at time intervals of 300 seconds, 30 minutes, 1 day and 1 month. We extracted the 300-second interval data for seven days. Then, we separated the incoming and outgoing traffic count time series and considered them as independent. For 352 connections we formed L = 2015 records of N = 704 time series with a 300-second interval.
We pursued the changes in the traffic rate; thus, we excluded from consideration the connections where the channel is open but traffic is not established, or where there is just a constant-rate, equally low amount of test traffic. Another reason for excluding the "empty" traffic time series is that they make the time series cross-correlation matrix unnecessarily sparse. The exclusion does not influence the analysis and results. After the exclusions the number of traffic time series became N = 497.
To calculate the traffic rate change G_i(t) we used the logarithm of the ratio of two successive counts. As stated earlier, the log-transformation makes the ratio independent of the traffic volume and allows capturing subtle changes in the traffic rate. We added 1 byte to all data points to avoid manipulations with log(0) in cases where the traffic count is equal to zero bytes. This measure did not affect the changes in the traffic rate.
V. EIGENVALUE DISTRIBUTION OF CROSS-CORRELATION MATRIX, COMPARISON WITH RMT
We constructed the inter-VLAN traffic cross-correlation matrix C with number of time series N = 497 and number of observations per series L = 2015 (Q = 4.0625), so that λ+ = 2.23843 and λ− = 0.253876. Our first goal is to compare the eigenvalue distribution P(λ) of C with P_rm(λ) [23]. To compute the eigenvalues of C we used a standard MATLAB function. The empirical probability distribution P(λ) is then given by the corresponding histogram. We display the resulting distribution P(λ) in Figure 1 and compare it to the probability distribution P_rm(λ) of Eq. (6), calculated for the same values of the traffic time series parameters (Q = 4.0625). The solid curve demonstrates P_rm(λ) of Eq. (6). The largest eigenvalue, shown in the inset, has the value λ_497 = 8.99. We zoom in on the deviations from the RMT predictions in the inset to Figure 1. We note the presence of "bulk" (RMT-like) eigenvalues, which fall within the bounds [λ−, λ+] of P_rm(λ), and the presence of eigenvalues which lie outside of the "bulk", representing deviations from the RMT predictions. In particular, the largest eigenvalue λ_497 = 8.99 for the seven-day period is approximately four times larger than the RMT upper bound λ+.
The histogram of the well-defined bulk agrees with P_rm(λ), suggesting that the cross-correlations of matrix C are mostly random. We observe that the inter-VLAN traffic time series interact mostly in a random fashion.
Nevertheless, the agreement of the empirical probability distribution P(λ) of the bulk with P_rm(λ) is not sufficient to claim that the bulk of the eigenvalue spectrum is random. Therefore, further RMT tests are needed [19].
To do that, we obtained the unfolded eigenvalues ξ_i by following the phenomenological procedure referred to as Gaussian broadening [27] (see [27], [35], [19], [18]). The empirical cumulative distribution function of eigenvalues F(λ) agrees well with F_av(λ) (see Figure 2), where the ξ_i are obtained with the Gaussian broadening procedure with broadening parameter a = 8. The first independent RMT test is the comparison of the distribution of nearest-neighbor unfolded eigenvalue spacings P_nn(s), where s ≡ ξ_{k+1} − ξ_k, with P_GOE(s) [9], [10], [11]. The empirical probability distribution of nearest-neighbor unfolded eigenvalue spacings P_nn(s) and P_GOE(s) are presented in Figure 3. The Gaussian decay of P_GOE(s) for large s suggests that P_GOE(s) "probes" scales only of the order of one eigenvalue spacing. The agreement between the empirical probability distribution P_nn(s) and the distribution of nearest-neighbor eigenvalue spacings of the GOE matrices P_GOE(s) testifies that the positions of two adjacent empirical unfolded eigenvalues at distance s are correlated just as the eigenvalues of the GOE matrices.
Next, we examined the distribution P_nnn(s′) of next-nearest-neighbor spacings s′ ≡ ξ_{k+2} − ξ_k between the unfolded eigenvalues. According to [8], this distribution should fit the distribution of nearest-neighbor spacings of the GSE. We demonstrate this correspondence in Figure 4. The solid line shows P_GSE(s). Finally, the long-range two-point eigenvalue correlations were tested. It is known [9], [10], [11] that if the eigenvalues are uncorrelated, the number variance is expected to scale with l, Σ² ∼ l. Meanwhile, when the unfolded eigenvalues of C are correlated, Σ² approaches a constant value, revealing "spectral rigidity" [9], [10], [11]. In Figure 5, we contrast the Poissonian number variance with the one we observed, and conclude that the eigenvalues belonging to the "bulk" clearly exhibit universal RMT properties. The broadening parameter a = 8 was used in the Gaussian broadening procedure to unfold the eigenvalues λ_i [27], [35], [19], [18]. The dashed line corresponds to the case of uncorrelated eigenvalues. These findings show that the system of inter-VLAN traffic has a universal part of eigenvalue spectral correlations, shared by a broad class of systems, including chaotic and disordered systems, nuclei, atoms and molecules. Thus it can be concluded that the bulk eigenvalue statistics of the inter-VLAN traffic cross-correlation matrix C are consistent with those of the real symmetric random matrix R given by Eq. (5) [24]. Meantime, the deviations from the RMT contain information about the system-specific correlations. The next section is entirely devoted to the analysis of the eigenvalues and eigenvectors deviating from the RMT, which signify the meaningful inter-VLAN traffic interactions.
VI. INTER-VLAN TRAFFIC INTERACTIONS ANALYSIS
We overview the points of interest in the eigenvectors of the inter-VLAN traffic cross-correlation matrix C, which are determined according to C u^k = λ_k u^k, where λ_k is the k-th eigenvalue. A particularly important characteristic of eigenvectors, proven useful in the physics of disordered conductors, is the inverse participation ratio (IPR) (see, for example, Ref. [11]). In such systems, the IPR, being a function of an eigenstate (eigenvector), allows one to judge whether the corresponding eigenstate, and therefore the electron, is extended or localized.
A. Inverse participation ratio of eigenvectors components
For our purposes, it is sufficient to know that the IPR quantifies the reciprocal of the number of significant components of an eigenvector. For the eigenvector u^k it is defined as

    I_k ≡ Σ_{l=1}^{N} [u_l^k]⁴    (18)

where u_l^k, l = 1, . . . , 497, are the components of the eigenvector u^k. In particular, a vector with one significant component has I_k = 1, while a vector with identical components u_l^k = 1/√N has I_k = 1/N. Consequently, the inverse of the IPR gives us the number of significant participants of the eigenvector. In Figure 6 we plot the IPR of the cross-correlation matrix C as a function of the eigenvalue λ. The control plot is the IPR of the eigenvectors of the random cross-correlation matrix R of Eq. (5). As we can see, the eigenvectors corresponding to eigenvalues from 0.25 to 3.5, which is within the RMT boundaries, have IPR close to 0. This means that almost all components of the eigenvectors in the bulk interact in a random fashion. The number of significant components of the eigenvectors deviating from the RMT is typically twenty times smaller than that of the eigenvectors within the RMT boundaries: around twenty. For instance, the IPR of the eigenvector u^492, which corresponds to the eigenvalue 5.9 in Figure 6, is 0.05, i.e. twenty time series contribute significantly to u^492. Another observation we derive from Figure 6 is that the number of significant eigenvector participants is considerably smaller at both edges of the eigenvalue spectrum. These findings resemble the results of [19], where eigenvectors with a few participating components were referred to as localized vectors. The theory of localization is explained in the context of random band matrices, whose elements are independently drawn from different probability distributions [19]. These matrices, despite their randomness, still contain probabilistic information. The localization in inter-VLAN traffic is explained as follows. The separated broadcast domains, i.e. VLANs, forward traffic from one to another only through the router, reducing the routing for broadcast containment. Although the optimal VLAN deployment is to keep as much traffic as possible from traversing the router, the bottleneck at a large number of VLANs is unavoidable.
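In NumPy, Eq. (18) is a one-liner (our illustration; np.linalg.eigh returns unit-norm eigenvectors as columns, so no extra normalization is needed):

    import numpy as np

    def ipr(U):
        # IPR of each eigenvector (columns of U); 1/ipr estimates the
        # number of significant components of that eigenvector.
        return np.sum(U**4, axis=0)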
B. Distribution of eigenvectors components
Another target of interest is the distribution of the components u_l^k, l = 1, . . . , N, of the eigenvector u^k of the interactions matrix C. To calculate the vectors u we used the MATLAB routine again and obtained the distribution p(u) of the eigenvector components. Then, we contrasted it with the RMT prediction for the eigenvector distribution p_rm(u) of the random correlation matrix R. According to [11], p_rm(u) has a Gaussian distribution with mean zero and unit variance, i.e.

    p_rm(u) = (1/√(2π)) exp(−u²/2)    (19)

The weights of the randomly interacting traffic count time series, which are represented by the eigenvector components, have to be distributed normally. The results are presented in Figure 7.
One can see (from Figures 7a and 7b) that p(u) for two u^k taken from the bulk is in accord with p_rm(u). The distribution p(u) corresponding to an eigenvalue λ_i which exceeds the RMT upper bound (λ_i > λ+) is shown in Figure 7c.
C. Deviating eigenvalues and significant inter-VLAN traffic series contributing to the deviating eigenvectors.
The distribution of u^497, the eigenvector corresponding to the largest eigenvalue λ_497, deviates significantly from the Gaussian (as follows from Figure 7d). While the Gaussian kurtosis has the value 3, the kurtosis of p(u^497) comes out to 23.22. The smaller number of significant components of the eigenvector also influences the difference between the Gaussian distribution and the empirical distribution of eigenvector components. More than half of the u^497 components have the same sign, thus slightly shifting p(u) to one side. This result suggests the existence of common VLAN traffic intended for inter-VLAN communication that affects all of the significant participants of the eigenvector u^497 with the same bias. We know that the number of significant components of u^497 is twenty-two, since the IPR of u^497 is 0.045. Hence, the content of the largest eigenvector reveals 22 traffic time series which are affected by the same event. We obtain the time series which affects these 22 traffic time series by the following procedure. First of all, we calculate the projection G^497(t) of the time series G_i(t) on the eigenvector u^497,

    G^497(t) ≡ Σ_{i=1}^{497} u_i^497 G_i(t)    (20)

Next, we compare G^497(t) with G_i(t) by finding the correlation coefficient ⟨(G^497(t)/σ_497)(G_i(t)/σ_i)⟩. The Fiber Distributed Data Interface (FDDI)-VLAN internet switch at one of the routers demonstrates the largest correlation coefficient, 0.89 (see Figure 8). The eigenvector u^497 has the following content: the seven most significant participants are the seven FDDI-VLAN switches at the seven routers. The presence of the FDDI-VLAN switch provides us with information about the VLAN membership definition. FDDI is a layer-2 protocol, which means that at least one of the two layer-2 membership schemes is used: port group or/and MAC address membership. The next group of significant participants comprises VLAN traffic intended for routing and already-routed traffic from different VLANs. The final group of significant participants constitutes open switches, which pick up any "leaking" traffic on the router. Usually, the "leaking" traffic is network management traffic, a very low-level traffic which spikes when queried by the management systems.
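This projection-and-correlation step can be sketched in NumPy as follows (our illustration; G is the N × L array of rate changes and U the eigenvector matrix from np.linalg.eigh, whose last column corresponds to the largest eigenvalue):

    import numpy as np

    def most_correlated(G, U, k=-1):
        # Project the series onto eigenvector k, Eq. (20), then compute the
        # normalized correlation coefficient of the projection with each series.
        proj = U[:, k] @ G
        proj = (proj - proj.mean()) / proj.std()
        g = (G - G.mean(axis=1, keepdims=True)) / G.std(axis=1, keepdims=True)
        corr = g @ proj / G.shape[1]
        return np.argsort(corr)[::-1], corr  # series ranked by correlation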
If every deviating eigenvalue signals a particular sub-model of non-random interactions in the network, then every corresponding eigenvector identifies the significant dimensions of that sub-model. Thus, we can think of every deviating eigenvector as a representative network-wide "snapshot" of interactions within those dimensions.
The analysis of the significant participants of the deviating eigenvectors revealed three types of inter-VLAN traffic time series groupings. One group contains time series that are interlinked on the router. We recognize them as router1-VLAN_1000 traffic, router1-firewall traffic and VLAN_1000-router1 traffic. The time series listed as router1-vlan_2000, router2-VLAN_2000, router3-VLAN_2000, etc., are reserved for the same service VLAN on every router and comprise another group. The content of these groups suggests the VLAN implementation: it is a mixture of the infrastructural approach, where functional groups (departments, schools, etc.) are considered, and the service approach, where a VLAN provides a particular service (network management, firewall, etc.).
VII. STABILITY OF INTER-VLAN TRAFFIC INTERACTIONS IN TIME
We expect to observe stability of the inter-VLAN traffic interactions over the period of time used to compute the traffic cross-correlation matrix C. The eigenvalue distribution at different time periods provides information about the system stabilization, i.e. about the time after which the fluctuations of the eigenvalues are no longer significant. Time periods of 1 hour, 3 hours and 6 hours are not sufficient to gain this knowledge about the system, as demonstrated in Figure 9a. In Figure 9b the system stabilizes after a 1-day period. To observe the time stability of the meaningful inter-VLAN interactions we computed the "overlap matrix" of the deviating eigenvectors for the time period t and the deviating eigenvectors for the time period t + τ, where t = 60h, τ = {0h, 3h, 12h, 24h, 36h, 48h}.
First, we obtained the matrix D from the p = 57 eigenvectors which correspond to the p eigenvalues outside of the RMT upper bound λ_+. Then we computed the "overlap matrix" O(t, τ) = D_A D_B^T, where O_ij is the scalar product of the eigenvector u^i of period A (starting at time t) with u^j of period B (starting at time t + τ),
O_{ij}(t, \tau) \equiv \sum_{k=1}^{N} D_{ik}(t)\, D_{jk}(t + \tau). \qquad (21)
The diagonal elements of O(t, τ), i.e. the elements O_ij with i = j, will equal 1 if the matrix D(t + τ) is identical to the matrix D(t). Clearly, the diagonal of the "overlap matrix" O can serve as an indicator of the time stability of the p eigenvectors outside of the RMT upper bound λ_+. A gray-scale colormap of the "overlap matrices" O(t = 60h, τ = {0h, 3h, 12h, 24h, 36h, 48h}) is presented in Figure 10.
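A compact sketch of Eq. (21) (helper and window names are ours; it assumes the eigenvector columns are sorted by ascending eigenvalue, as numpy.linalg.eigh returns them):

```python
import numpy as np

def overlap_matrix(eigvecs_t, eigvecs_tau, p=57):
    """Overlap O of Eq. (21) between the p deviating eigenvectors of two
    observation windows; diagonal entries near 1 indicate stable vectors."""
    D_t = eigvecs_t[:, -p:].T          # p x N matrix D(t)
    D_tau = eigvecs_tau[:, -p:].T      # p x N matrix D(t + tau)
    return D_t @ D_tau.T

# O = overlap_matrix(eigvecs_window1, eigvecs_window2)
# print(np.abs(np.diag(O)))            # time-stability indicator
```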
VIII. DETECTING ANOMALIES OF TRAFFIC INTERACTIONS
We assume that the health of the inter-VLAN traffic is expressed by the stability of its interactions in time. Temporary critical events, or anomalies, will cause temporary instabilities. The "deviating" eigenvalues and eigenvectors provide us with time-stable snapshots of interactions representative of the entire network. Therefore, these eigenvectors, judged on the basis of their IPR, can serve as monitoring parameters of the system stability.
Among the essential anomalous events of a VLAN infrastructure we can list violations in VLAN membership assignment, in the address resolution protocol, in the VLAN trunking protocol, and router misconfiguration. Violations of membership assignment and router misconfiguration will cause changes in the picture of random and non-random interactions of inter-VLAN traffic. To shed more light on the possibilities of anomaly detection, we conducted experiments to establish the spatial-temporal traces of instabilities caused by an artificial, temporary increase of correlation in normal, non-congested inter-VLAN traffic. We explored the possibility of distinguishing different types of increased temporal correlations. Finally, we observed the consequences of breaking the interactions between time series by injecting traffic counts obtained from a sample of a random distribution.
Experiment 1
We selected the traffic counts time series representing the components of an eigenvector which lies within the RMT bounds and temporarily increased the correlation between these series for a three-hour period. The proposed monitoring parameters show the dependence of system stability on the number of temporarily correlated time series (see Figure 11). Presented in Figure 11, left to right, are (a) the eigenvalue distribution of interactions with two temporarily correlated time series, (b) the IPR of eigenvectors of interactions with two temporarily correlated time series, (c) the overlap matrix of deviating eigenvectors with two temporarily correlated time series. Top to bottom, the layout shows these monitoring parameters when correlation is temporarily increased between 10 connections (d, e and f) and between 20 connections (g, h and i). One can conclude that increased temporal correlation between two time series and between ten time series does not affect system stability. Meanwhile, when the number of temporarily correlated time series reaches the typical number of significant participants of a deviating eigenvector, around twenty, the monitoring parameters register the disturbance.
Experiment 2
In the second experiment we temporarily induced correlations of different types: one type of correlation among ten time series and another type of correlation among the other ten time series. Three different types of three-hour correlations are induced among twenty traffic time series in Figure 15b. The content of the significant components, sorted in decreasing order, shows that time series tend to group according to the type of correlation they are involved in.
Experiment 3
Next we turn our attention to the disruption of the normal picture of inter-VLAN traffic interactions. This can be done by injecting traffic drawn from a random distribution into non-randomly interacting time series for three hours. We demonstrate it by examining the eigenvalue distribution, the IPR and the deviating-eigenvectors overlap matrix plotted in Figure 16. After 60 hours of uninterrupted traffic, we injected elements from a random distribution into the significant participants of u^497 for three hours. The largest eigenvalue increases from 10 to 12. The extended IPR tail shows a larger number of localized eigenvectors, and we observe a dramatic break in the stability of the deviating eigenvectors.
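A sketch of such an injection on the raw counts (all names and the Poisson choice are our illustrative assumptions; with 300 s bins, 60 hours corresponds to 720 samples and 3 hours to 36):

```python
import numpy as np

def inject_random_counts(counts, rows, start=720, width=36, seed=0):
    """Overwrite `width` samples of the selected rows with draws from a
    random distribution, mimicking the three-hour disruption of Experiment 3."""
    rng = np.random.default_rng(seed)
    noisy = counts.astype(float)
    scale = counts[rows].mean()                       # keep a comparable volume
    noisy[rows, start:start + width] = rng.poisson(scale, (len(rows), width))
    return noisy

# counts_anom = inject_random_counts(counts, significant_rows)
# Recompute C on counts_anom and compare its largest eigenvalue, IPR tail and
# overlap-matrix diagonal with the unperturbed baseline.
```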
IX. CONCLUSION AND FUTURE WORK
The RMT methodology we used in this paper enables us to analyze the behavior of a complex system without consideration of the system constraints, type and structure. Our goal was to investigate the characteristics of the day-to-day temporal dynamics of the system of interconnected routers with VLAN subnets of the University of Louisville. The type and structure of the system at hand suggest a natural interpretation of the RMT-like behavior and of the results deviating from the RMT. The time-stable random interactions signify healthy, congestion-free traffic. The time-stable non-random interactions provide us with information about large-scale network-wide traffic interactions. Changes in the stable picture of random and non-random interactions signify temporary traffic anomalies.
In general, the fact that the bulk of the eigenvalue spectrum of inter-VLAN traffic interactions shares the universal properties of random matrices opens a new avenue in network-wide traffic modeling. As stated in [19], in physical systems it is common to start with a model of the dynamics of the system. This way, one would model the traffic time series interactions with a family of stochastic differential equations [28], [29], which describe the "instantaneous" traffic counts
g_i(t) = \frac{d}{dt} \ln T_i(t), \qquad (22)
as a random walk with couplings. Then one would relate the revealed interactions to the correlated "modes" of the system. An additional question that the RMT findings raise in network-wide traffic analysis is whether the observed eigenvalue spectrum correlations and the localized eigenvectors outside of the RMT bulk can add to the explanation of fundamental properties of network traffic, such as self-similarity [30].
To summarize, we have tested the eigenvalue statistics of the inter-VLAN traffic cross-correlation matrix C against the null hypothesis of a random correlation matrix. By separating out the eigenvalue spectrum correlations of random matrices that are present in this system, the uncongested state of the network traffic is verified. We analyzed the time-stable system-specific correlations. The eigenvalues and eigenvectors deviating from the RMT revealed the principal groups of VLAN-router switches, groups of traffic time series interlinked through the firewalls, and groups of same-service VLANs at every router. With straightforward experiments on the traffic time series, we demonstrated that the eigenvalue distribution, the IPR of eigenvectors, the overlap matrix and the spatial-temporal patterns of deviating eigenvectors can monitor the stability of inter-VLAN traffic interactions, and detect and locate in time and space any network-wide changes in normal traffic time series interactions.
As a direction for future work, we would like to investigate the behavior of the delayed traffic time series cross-correlation matrix C_d in RMT terms. The importance of delay in measurement-based analysis of the Internet is emphasized in [31]. To understand and quantify the effect of one time series on another at a later time, one can calculate the delay correlation matrix, whose entries are the cross-correlations of one time series with another at a time delay τ [32]. In addition, we are interested in testing the fruitfulness of the RMT approach on a larger system of inter-domain interactions, for instance, on the 5-minute averaged traffic count time series of the underlying backbone circuits of the Abilene backbone network.
APPENDIX A
Random matrix ensembles have recently penetrated into econophysics, finance [26] and network traffic analysis [3].
For the statistical description of complex physical systems, such as an atomic nucleus or an acoustical reverberant structure, the RMT serves as a guiding light when one is interested in the degree of mutual interaction of the constituents. As it turns out, uncorrelated energy levels or acoustic eigenfrequencies produce qualitatively different results from those obeying RMT-like correlations [25]. Therefore, real (experimentally measured) spectra can help to decide on the nature of interactions in the underlying system. To be specific, an ideally symmetric system is expected to exhibit spectral properties drastically different from those of a generic one, and if the spectral properties are those of RMT systems, other ideas of RMT can be brought to the researcher's aid.
To describe the "awareness" of the structural constituents about each other, scientists in different fields use similar constructs. Physicists use the Hamiltonian matrix, engineers the stiffness matrix, and finance and network analysts the equal-time cross-correlation matrix. Although the physical meaning of the mentioned operators can differ, eigenvalue/eigenvector analysis seems to be a universally accepted tool. The eigenvalues have a direct connection to the spectrum of physical systems, while the eigenvectors can be used for the description of excitation/signal/information propagation inside the system. In physics, the RMT approaches come about whenever the system of interest demonstrates certain qualitative features in its spectral behavior. For example, if one looks at the nearest-neighbor spacing distribution of eigenvalues and, instead of the Poisson law P(s) = exp(−s), discovers the "Wigner surmise" P(s) = (πs/2) exp(−πs²/4), one concludes (upon running several additional statistical tests) that the apparatus of RMT can be used for the system at hand, and the system matrix can be replaced by a matrix with random entries. For mathematical convenience, these entries are given Gaussian weight. The only other ingredient of this rather succinct phenomenological model is recognizing the physical situation. For example, systems with and without magnetic field and/or central symmetry are described by different matrix ensembles (that is, sets of matrices) with elements distributed according to the distribution corresponding to the same β,
P^{(\beta)}(H) \propto \exp\left(-\frac{\beta}{4v^2}\, \mathrm{tr}\, H^2\right),
where the constant v sets the length of the resulting eigenvalues spectrum.
The very fact that RMT can be helpful in statistical description of the broad range of systems suggests that these systems are analyzed in a certain special universal regime, in which physical or other laws are undermined by equilibrated and ergodic evolution. In most physical applications, a Hamiltonian matrix is rather sparse, indicating lack of interaction between different subparts of the corresponding object. However, if the universal regime is inferred from the above mentioned statistical tests, it is very beneficial to replace this single matrix with the ensemble of random matrices. Then, one can proceed with statistical analysis using matrix ensemble for calculation of statistical averages more relevant for the physical problem at hand than the statistics of eigenvalues. The latter can be mean or variance of the response to external or internal excitation. | 6,197 |
0706.2520 | 1820838581 | The traffic behavior of the University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-VLAN traffic. The cross-correlation matrix C of the traffic rate changes between different traffic time series is calculated and tested against the null hypothesis of random interactions. The majority of the eigenvalues λ_i of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The distribution of eigenvalues and eigenvectors outside of the RMT bounds displays prominent and systematic deviations from the RMT predictions. Moreover, these deviations are stable in time. The method we use provides a unique possibility to accomplish three concurrent tasks of traffic analysis. The method verifies the uncongested state of the network, by establishing the profile of random interactions. It recognizes the system-specific large-scale interactions, by establishing the profile of stable in time non-random interactions. Finally, by looking into the eigenstatistics we are able to detect and allocate anomalies of network traffic interactions. | The same group of researchers extended their work in @cite_20. Under the new assumption that any network anomaly induces changes in the distributional aspects of packet header fields, they detected and identified a large set of anomalies using entropy as the measurement tool. | {
"abstract": [
"The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveals both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework."
],
"cite_N": [
"@cite_20"
],
"mid": [
"2164210932"
]
} | Analysis of Inter-Domain Traffic Correlations: Random Matrix Theory Approach | The infrastructure, applications and protocols of the system of communicating computers and networks are constantly evolving. The traffic, which is the essence of the communication, is presently voluminous data generated on a minute-by-minute basis within a multi-layered structure by different applications and according to different protocols. As a consequence, there are two general approaches to the analysis of the traffic and to the modeling of its healthy behavior. In the first approach, the traffic analysis considers the protocols, applications, traffic matrix and routing matrix estimates, independence of ingress and egress points and much more. The second approach treats the infrastructure between the points from which the traffic is obtained as a "black box" [33], [34].
Measuring interactions between logically and architecturally equivalent substructures of the system is a natural extension of the "black box" approach. A certain amount of work in this direction has already been done. Studies of statistical traffic flow properties revealed the "congested", "fluid" and "transitional" regimes of the flow at a large scale [1], [2]. The observed collective behavior suggests the existence of large-scale network-wide correlations between the network subparts. Indeed, the work [3] showed large-scale cross-correlations between different connections of the Renater scientific network. Moreover, the analysis of correlations across all simultaneous network-wide traffic has been used in the detection of distributed network attacks [4].
The distributions and stability of the established interaction statistics represent characteristic features of the system and may be exploited in the creation of a healthy network traffic profile, which is an essential part of network anomaly detection. As successfully demonstrated in [5], all tested traffic anomalies change the distribution of the traffic features.
Among the numerous types of traffic monitoring variables, time series of traffic counts are free of application "semantics" and thus preferable for "black box" analysis. To extract the meaningful information about underlying interactions contained in time series, the empirical correlation matrix is the usual tool at hand. In addition, there are various classes of statistical tools, such as principal component analysis, singular value decomposition, and factor analysis, which in turn strongly rely on the validity of the correlation matrix to obtain the meaningful part of the time series. Thus, it is important to understand quantitatively the effect of noise, i.e. to separate the noisy, random interactions from the meaningful ones. In addition, it is crucial to consider the finiteness of the time series in the determination of the empirical correlations, since the finite length of the time series available to estimate cross correlations introduces "measurement noise" [19]. Statistically, it is also advisable to develop null-hypothesis tests in order to check the degree of statistical validity of the results obtained against cases of purely random interactions.
The methodology of random matrix theory (RMT) was developed for studying the complex energy levels of heavy nuclei and is given a detailed account in [6], [7], [8], [9], [10], [11]. For our purposes this methodology comes in as a series of statistical tests run on the eigenvalues and eigenvectors of the "system matrix", which in our case is the traffic time series cross-correlation matrix C (and is the Hamiltonian matrix in the case of nuclei and other RMT systems [6], [7], [8], [9], [10], [11]).
In our study, we propose to investigate the network traffic as a complex system with a certain degree of mutual interaction of its constituents, i.e. single-link traffic time series, using the RMT approach. We concentrate on the large-scale correlations between the time series generated by Simple Network Management Protocol (SNMP) traffic counters at every router-router and router-VLAN subnet connection of the University of Louisville backbone router system.
The contributions of this study are as follows:
• We propose an application-constraint-free methodology for the analysis of network-wide traffic time series interactions. Even though in this particular study we know in advance that VLANs represent separate broadcast domains, that VLAN-router incoming traffic is traffic intended for other VLANs, and that VLAN-router outgoing traffic is routed traffic from other VLANs, this information is irrelevant for our analysis and is used only in the interpretation of the analysis results.
• Using the RMT, we are able to separate the random interactions from the system-specific interactions. The vast majority of traffic time series interact in a random fashion. The time-stable random interactions signify healthy, congestion-free traffic. The proposed analysis of the eigenvector distribution allows us to verify the time series content of uncongested traffic.
• The time-stable non-random interactions provide us with information about large-scale system-specific interactions.
• Finally, the temporal changes in random and non-random interactions can be detected and located with the eigenvalue and eigenvector statistics of the interactions.
The organization of this paper is as follows. Section II presents the survey of related work. We describe the RMT methodology in Section III. Section IV contains the explanation of the data analyzed. In Section V we test the eigenvalue distribution of the inter-VLAN traffic time series cross-correlation matrix C against the RMT predictions. In Section VI we analyze the content of inter-VLAN traffic interactions by means of the eigenvalues and eigenvectors deviating from the RMT. Section VII discusses the characteristic traffic interaction parameters of the system, such as the time stability of the deviating eigenvalues and eigenvectors, the inverse participation ratio (IPR) of the eigenvalue spectra, localization points in the IPR plot, and overlap matrices of the deviating eigenvectors. With a series of different experiments, we demonstrate how traffic interaction anomalies can be detected and located in time and space using various visualization techniques on the eigenvalue and eigenvector statistics in Section VIII. We present our conclusions and prospective research steps in Section IX.
III. RMT METHODOLOGY
The RMT has been employed in financial studies of stock correlations [18], [19], in the communication theory of wireless systems [20], in array signal processing [21], and in bioinformatics studies of protein folding [22]. We are not aware of any work, except for [3], where RMT techniques were applied to the Internet traffic system.
We adopt the methodology used in works on financial time series correlations (see [18], [19] and references therein) and later in [3], which discusses cross-correlations in Internet traffic. In particular, we quantify correlations between N traffic count time series of L time points by calculating the traffic rate change of every time series T_i, i = 1, …, N, over a time scale ∆t,
G_i(t) \equiv \ln T_i(t + \Delta t) - \ln T_i(t), \qquad (1)
where T_i(t) denotes the traffic rate of time series i. This measure is independent of the volume of the traffic exchange and allows capturing subtle changes in the traffic rate [3]. The normalized traffic rate change is
g_i(t) \equiv \frac{G_i(t) - \langle G_i \rangle}{\sigma_i}, \qquad (2)
where \sigma_i \equiv \sqrt{\langle G_i^2 \rangle - \langle G_i \rangle^2} is the standard deviation of G_i.
The equal-time cross-correlation matrix C can be computed as follows
C_{ij} \equiv \langle g_i(t)\, g_j(t) \rangle. \qquad (3)
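As a minimal sketch of Eqs. (1)-(4) (the helper name and the shape of the hypothetical counts array are our assumptions; the +1 byte offset mirrors the treatment of zero counts described in Section IV):

```python
import numpy as np

def correlation_matrix(counts):
    """counts: (N, L+1) array of raw SNMP byte counters per connection.
    Returns the N x N equal-time cross-correlation matrix C."""
    counts = counts + 1.0                                   # avoid log(0)
    G = np.log(counts[:, 1:]) - np.log(counts[:, :-1])      # Eq. (1)
    g = (G - G.mean(axis=1, keepdims=True)) / G.std(axis=1, keepdims=True)  # Eq. (2)
    return g @ g.T / g.shape[1]                             # Eqs. (3)-(4)

# C = correlation_matrix(counts)         # counts: e.g. a 497 x 2016 array
# eigvals, eigvecs = np.linalg.eigh(C)
```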
The properties of the traffic interactions matrix C have to be compared with those of a random cross-correlation matrix [23]. In matrix notation, the interaction matrix C can be expressed as
C = \frac{1}{L}\, G G^{T}, \qquad (4)
where G is the N × L matrix with elements {g_{im} ≡ g_i(m∆t); i = 1, …, N; m = 0, …, L − 1}, and G^T denotes the transpose of G. Just as was done in [19], we consider a random correlation matrix
R = \frac{1}{L}\, A A^{T}, \qquad (5)
where A is an N × L matrix containing N time series of L random elements a_{im} with zero mean and unit variance, which are mutually uncorrelated, as a null hypothesis. Statistical properties of the random matrices R have been known for years in the physics literature [6], [10], [7], [8], [9], [11]. In particular, it was shown analytically [24] that, in the limit N → ∞, L → ∞ with Q ≡ L/N (> 1) fixed, the probability density function P_rm(λ) of the eigenvalues λ of the random matrix R is given by
P_{\mathrm{rm}}(\lambda) = \frac{Q}{2\pi} \frac{\sqrt{(\lambda_{+} - \lambda)(\lambda - \lambda_{-})}}{\lambda}, \qquad (6)
where λ_+ and λ_− are the maximum and minimum eigenvalues of R, respectively, and λ_− ≤ λ_i ≤ λ_+. λ_+ and λ_− are given analytically by
\lambda_{\pm} = 1 + \frac{1}{Q} \pm 2\sqrt{\frac{1}{Q}}. \qquad (7)
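A small sketch of Eqs. (6)-(7), useful for flagging which empirical eigenvalues deviate from the random bulk (function names are our own):

```python
import numpy as np

def rmt_bounds(N, L):
    """Analytic bulk edges of Eq. (7) for Q = L / N."""
    Q = L / N
    return 1 + 1/Q - 2*np.sqrt(1/Q), 1 + 1/Q + 2*np.sqrt(1/Q)

def p_rm(lam, N, L):
    """Random-matrix eigenvalue density of Eq. (6), zero outside the bulk."""
    Q = L / N
    lo, hi = rmt_bounds(N, L)
    lam = np.asarray(lam, dtype=float)
    dens = np.zeros_like(lam)
    inside = (lam > lo) & (lam < hi)
    dens[inside] = Q / (2*np.pi*lam[inside]) * np.sqrt((hi - lam[inside]) * (lam[inside] - lo))
    return dens

# lo, hi = rmt_bounds(497, 2015)
# deviating = eigvals[eigvals > hi]      # candidates for system-specific modes
```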
Random matrices display universal functional forms of the eigenvalue correlations which depend only on the general symmetries of the matrix. The first step in testing the data for such universal properties is to find a transformation called "unfolding", which maps the eigenvalues λ_i to new variables, the "unfolded eigenvalues" ξ_i, whose distribution is uniform [9], [10], [11]. Unfolding ensures that the distances between eigenvalues are expressed in units of the local mean eigenvalue spacing [9], and thus facilitates comparison with analytical results. We define the cumulative distribution function of eigenvalues, which counts the number of eigenvalues in the interval
λ_i ≤ λ,
F(\lambda) = N \int_{-\infty}^{\lambda} P(x)\, dx, \qquad (8)
where P (x) denotes the probability density of eigenvalues and N is the total number of eigenvalues. The function F (λ) can be decomposed into an average and a fluctuating part,
F(\lambda) = F_{\mathrm{av}}(\lambda) + F_{\mathrm{fluc}}(\lambda). \qquad (9)
Since P_{\mathrm{fluc}} \equiv dF_{\mathrm{fluc}}(\lambda)/d\lambda = 0 on average,
P_{\mathrm{rm}}(\lambda) \equiv \frac{dF_{\mathrm{av}}(\lambda)}{d\lambda} \qquad (10)
is the averaged eigenvalues density. The dimensionless, unfolded eigenvalues are then given by
\xi_i \equiv F_{\mathrm{av}}(\lambda_i). \qquad (11)
Three known universal properties of GOE matrices (matrices whose elements are distributed according to a Gaussian probability measure) are: (i) the distribution of nearest-neighbor eigenvalue spacings P_GOE(s),
P_{\mathrm{GOE}}(s) = \frac{\pi s}{2} \exp\left(-\frac{\pi}{4} s^2\right), \qquad (12)
(ii) the distribution of next-nearest-neighbor eigenvalue spacings, which, according to a theorem due to [8], is identical to the distribution of nearest-neighbor spacings of the Gaussian symplectic ensemble (GSE),
P_{\mathrm{GSE}}(s) = \frac{2^{18}}{3^6 \pi^3}\, s^4 \exp\left(-\frac{64}{9\pi} s^2\right), \qquad (13)
and finally (iii) the "number variance" statistic Σ², defined as the variance of the number of unfolded eigenvalues in intervals of length l around each ξ_i [9], [11], [10],
\Sigma^2(l) = \left\langle \left[ n(\xi, l) - l \right]^2 \right\rangle_{\xi}, \qquad (14)
where n(ξ, l) is the number of unfolded eigenvalues in the interval [ξ − l/2, ξ + l/2]. The number variance is expressed as follows:
\Sigma^2(l) = l - 2 \int_{0}^{l} (l - x)\, Y(x)\, dx, \qquad (15)
where Y(x) for the GOE case is given by [9]
Y(x) = s^2(x) + \frac{ds}{dx} \int_{x}^{\infty} s(x')\, dx', \qquad (16)
and
s(x) = \frac{\sin(\pi x)}{\pi x}. \qquad (17)
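A rough numpy sketch of test (i) (helper names are our own; the moving-average unfolding below is a crude stand-in for the Gaussian-broadening procedure actually used in the paper):

```python
import numpy as np

def unfold(eigvals, window=9):
    """Approximate unfolded eigenvalues xi_i = F_av(lambda_i), Eq. (11),
    by smoothing the empirical eigenvalue staircase."""
    lam = np.sort(eigvals)
    stair = np.arange(1, lam.size + 1, dtype=float)          # F(lambda_i)
    return np.convolve(stair, np.ones(window) / window, mode="same")

def nn_spacing_hist(eigvals, bins=30):
    """Histogram of nearest-neighbor spacings of the unfolded spectrum."""
    s = np.diff(unfold(eigvals))
    s = s / s.mean()                                         # mean spacing -> 1
    return np.histogram(s, bins=bins, density=True)

def p_goe(s):
    return (np.pi * s / 2) * np.exp(-np.pi * s**2 / 4)       # Eq. (12)

# hist, edges = nn_spacing_hist(eigvals)   # compare hist with p_goe(bin midpoints)
```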
Just as was stressed in [19], [18], [25], the overall time of observation is crucial for explaining the empirical cross-correlation coefficients. On one hand, the longer we observe the traffic, the more information about the correlations we obtain and the less "noise" we introduce. On the other hand, the correlations are not stationary, i.e. they can change with time. To differentiate the "random" contribution to the empirical correlation coefficients from the "genuine" contribution, the eigenvalue statistics of C are contrasted with the eigenvalue statistics of a correlation matrix taken from the so-called "chiral" Gaussian Orthogonal Ensemble [19]. Such an ensemble is one of the ensembles of RMT [25], [26], briefly discussed in Appendix A. A random cross-correlation matrix, which is a matrix filled with uncorrelated Gaussian random numbers, is supposed to represent transient, uncorrelated-in-time network activity, that is, a completely noisy environment. In case the cross-correlation matrix C obeys the same eigenstatistical properties as the RMT matrix, the network traffic is equilibrated and deemed universal in the sense that every single connection interacts with the rest in a completely chaotic manner. It also means a complete absence of congestions and anomalies. Meanwhile, any time-stable deviations from the universal predictions of the RMT signify system-specific, non-random properties of the system, providing clues about the nature of the underlying interactions. That allows us to establish the profile of system-specific correlations.
IV. DATA
In this paper, we study the averaged traffic count data collected from all router-router and router-VLAN subnet connections of the University of Louisville backbone router system. The system consists of nine interconnected multi-gigabit backbone routers, over 200 Ethernet segments and over 300 VLAN subnets. We collected the traffic count data for 3 months, for the period from September 21, 2006 to December 20, 2006, from 7 routers, since two routers are reserved for server farms. The overall data amounted to approximately 18 GB.
The traffic count data is provided by the Multi Router Traffic Grapher (MRTG) tool, which reads the SNMP traffic counters. An MRTG log file never grows in size due to its data consolidation algorithm: it contains records of the average incoming, outgoing, maximum and minimum transfer rates in bytes per second at time intervals of 300 seconds, 30 minutes, 1 day and 1 month. We extracted the 300-second interval data for seven days. Then, we separated the incoming and outgoing traffic count time series and considered them as independent. For 352 connections we formed L = 2015 records of N = 704 time series with a 300-second interval.
We pursued the changes in the traffic rate; thus, we excluded from consideration the connections where the channel is open but traffic is not established, or where there is only constant-rate, equally low-volume test traffic. Another reason for excluding the "empty" traffic time series is that they make the time series cross-correlation matrix unnecessarily sparse. The exclusion does not influence the analysis or the results. After the exclusions, the number of traffic time series became N = 497.
To calculate the traffic rate change G_i(t) we used the logarithm of the ratio of two successive counts. As stated earlier, the log-transformation makes the ratio independent of the traffic volume and allows capturing subtle changes in the traffic rate. We added 1 byte to all data points, to avoid manipulations with log(0) in cases where the traffic count is equal to zero bytes. This measure did not affect the changes in the traffic rate.
V. EIGENVALUE DISTRIBUTION OF CROSS-CORRELATION MATRIX, COMPARISON WITH RMT
We constructed the inter-VLAN traffic cross-correlation matrix C with the number of time series N = 497 and the number of observations per series L = 2015 (Q = 4.0625), so that λ_+ = 2.23843 and λ_− = 0.253876. Our first goal is to compare the eigenvalue distribution P(λ) of C with P_rm(λ) [23]. To compute the eigenvalues of C we used a standard MATLAB function. The empirical probability distribution P(λ) is then given by the corresponding histogram. We display the resulting distribution P(λ) in Figure 1 and compare it to the probability distribution P_rm(λ) of Eq. (6) calculated for the same value of the traffic time series parameters (Q = 4.0625). The solid curve demonstrates P_rm(λ) of Eq. (6). The largest eigenvalue, shown in the inset, has the value λ_497 = 8.99. We zoom in on the deviations from the RMT predictions in the inset to Figure 1. We note the presence of "bulk" (RMT-like) eigenvalues which fall within the bounds [λ_−, λ_+] of P_rm(λ), and the presence of eigenvalues which lie outside of the "bulk", representing deviations from the RMT predictions. In particular, the largest eigenvalue λ_497 = 8.99 for the seven-day period is approximately four times larger than the RMT upper bound λ_+.
The histogram for the well-defined bulk agrees with P_rm(λ), suggesting that the cross-correlations of matrix C are mostly random. We observe that inter-VLAN traffic time series interact mostly in a random fashion.
Nevertheless, the agreement of the empirical probability distribution P(λ) of the bulk with P_rm(λ) is not sufficient to claim that the bulk of the eigenvalue spectrum is random. Therefore, further RMT tests are needed [19].
To do that, we obtained the unfolded eigenvalues ξ_i by following the phenomenological procedure referred to as Gaussian broadening [27] (see [27], [35], [19], [18]). The empirical cumulative distribution function of eigenvalues F(λ) agrees well with F_av(λ) (see Figure 2), where the ξ_i are obtained with the Gaussian broadening procedure with broadening parameter a = 8. The first independent RMT test is the comparison of the distribution P_nn(s) of the nearest-neighbor unfolded eigenvalue spacings, where s ≡ ξ_{k+1} − ξ_k, with P_GOE(s) [9], [10], [11]. The empirical probability distribution P_nn(s) of the nearest-neighbor unfolded eigenvalue spacings and P_GOE(s) are presented in Figure 3. The Gaussian decay of P_GOE(s) for large s suggests that P_GOE(s) "probes" scales of the order of only one eigenvalue spacing. Thus, the agreement between the empirical probability distribution P_nn(s) and the distribution of nearest-neighbor eigenvalue spacings of the GOE matrices P_GOE(s) testifies that the positions of two adjacent empirical unfolded eigenvalues at the distance s are correlated just as the eigenvalues of the GOE matrices.
Next, we took on the distribution P_nnn(s′) of next-nearest-neighbor spacings s′ ≡ ξ_{k+2} − ξ_k between the unfolded eigenvalues. According to [8], this distribution should fit the distribution of nearest-neighbor spacings of the GSE. We demonstrate this correspondence in Figure 4. The solid line shows P_GSE(s). Finally, the long-range two-point eigenvalue correlations were tested. It is known [9], [10], [11] that if the eigenvalues are uncorrelated, we expect the number variance to scale with l, Σ² ∼ l. Meanwhile, when the unfolded eigenvalues of C are correlated, Σ² approaches a constant value, revealing "spectral rigidity" [9], [10], [11]. In Figure 5, we contrasted the Poissonian number variance with the one we observed, and came to the conclusion that the eigenvalues belonging to the "bulk" clearly exhibit universal RMT properties. The broadening parameter a = 8 was used in the Gaussian broadening procedure to unfold the eigenvalues λ_i [27], [35], [19], [18]. The dashed line corresponds to the case of uncorrelated eigenvalues. These findings show that the system of inter-VLAN traffic has a universal part of the eigenvalue spectral correlations, shared by a broad class of systems, including chaotic and disordered systems, nuclei, atoms and molecules. Thus it can be concluded that the bulk eigenvalue statistics of the inter-VLAN traffic cross-correlation matrix C are consistent with those of the real symmetric random matrix R given by Eq. (5) [24]. Meanwhile, the deviations from the RMT contain the information about the system-specific correlations. The next section is entirely devoted to the analysis of the eigenvalues and eigenvectors deviating from the RMT, which signify the meaningful inter-VLAN traffic interactions.
VI. INTER-VLAN TRAFFIC INTERACTIONS ANALYSIS
We overview the points of interest in the eigenvectors of the inter-VLAN traffic cross-correlation matrix C, which are determined according to C u^k = λ_k u^k, where λ_k is the k-th eigenvalue. A particularly important characteristic of eigenvectors, proven useful in the physics of disordered conductors, is the inverse participation ratio (IPR) (see, for example, Ref. [11]). In such systems the IPR, as a function of an eigenstate (eigenvector), allows one to judge whether the corresponding eigenstate, and therefore the electron, is extended or localized.
A. Inverse participation ratio of eigenvector components
For our purposes, it is sufficient to know that IPR quantifies the reciprocal of the number of significant components of the eigenvector. For the eigenvector u k it is defined as
I_k \equiv \sum_{l=1}^{N} \left( u_l^k \right)^4, \qquad (18)
where u^k_l, l = 1, …, 497, are the components of the eigenvector u^k. In particular, a vector with one significant component has I_k = 1, while a vector with identical components u^k_l = 1/√N has I_k = 1/N. Consequently, the inverse of the IPR gives the number of significant participants of the eigenvector. In Figure 6 we plot the IPR of the cross-correlation matrix C as a function of the eigenvalue λ. The control plot is the IPR of the eigenvectors of the random cross-correlation matrix R of Eq. (5). As we can see, the eigenvectors corresponding to eigenvalues from 0.25 to 3.5, which is within the RMT boundaries, have IPR close to 0. This means that almost all components of the eigenvectors in the bulk interact in a random fashion. The number of significant components of the eigenvectors deviating from the RMT is typically twenty times smaller than that of the eigenvectors within the RMT boundaries, namely around twenty. For instance, the IPR of eigenvector u^492, which corresponds to the eigenvalue 5.9 in Figure 6, is 0.05, i.e. twenty time series contribute significantly to u^492. Another observation we derive from Figure 6 is that the number of significant participants of the eigenvectors is considerably smaller at both edges of the eigenvalue spectrum. These findings resemble the results of [19], where eigenvectors with a few participating components were referred to as localized vectors. The theory of localization is developed in the context of random band matrices, whose elements are independently drawn from different probability distributions [19]. These matrices, despite their randomness, still contain probabilistic information. The localization in inter-VLAN traffic is explained as follows. VLANs are separate broadcast domains that forward traffic to one another only through the router, trading routing overhead for broadcast containment. Although the optimal VLAN deployment keeps as much traffic as possible from traversing the router, a bottleneck at a large number of VLANs is unavoidable.
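As a minimal numpy sketch of this diagnostic (the helper name and the assumption that C has already been diagonalized are ours, not the paper's; numpy.linalg.eigh returns unit-normalized eigenvectors, as Eq. (18) requires):

```python
import numpy as np

def ipr(eigvecs):
    """Inverse participation ratio, Eq. (18), for each eigenvector.

    eigvecs: N x N array whose k-th column is the eigenvector u^k."""
    return np.sum(eigvecs ** 4, axis=0)

# eigvals, eigvecs = np.linalg.eigh(C)     # C: 497 x 497 cross-correlation matrix
# n_significant = 1.0 / ipr(eigvecs)       # ~N in the bulk, ~20 for deviating vectors
```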
B. Distribution of eigenvector components
Another target of interest is the distribution of the components u^k_l, l = 1, …, N, of an eigenvector u^k of the interactions matrix C. To calculate the vectors u we used the MATLAB routine again and obtained the distribution p(u) of the eigenvector components. Then, we contrasted it with the RMT prediction p_rm(u) for the eigenvector component distribution of the random correlation matrix R. According to [11], p_rm(u) is Gaussian with mean zero and unit variance, i.e.
p_{\mathrm{rm}}(u) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right). \qquad (19)
The weights of randomly interacting traffic-count time series, represented by the eigenvector components, should therefore be normally distributed. The results are presented in Figure 7.
One can see (from Figures 7a and 7b) that p(u) for two u^k taken from the bulk is in accord with p_rm(u). The distribution p(u) corresponding to an eigenvalue λ_i which exceeds the RMT upper bound (λ_i > λ_+) is shown in Figure 7c.
C. Deviating eigenvalues and significant inter-VLAN traffic series contributing to the deviating eigenvectors.
The distribution of u^497, the eigenvector corresponding to the largest eigenvalue λ_497, deviates significantly from the Gaussian (as follows from Figure 7d). While the Gaussian kurtosis has the value 3, the kurtosis of p(u^497) comes out to 23.22. The small number of significant components of the eigenvector also contributes to the difference between the Gaussian distribution and the empirical distribution of eigenvector components. More than half of the u^497 components have the same sign, thus slightly shifting p(u) to one side. This result suggests the existence of common VLAN traffic, intended for inter-VLAN communication, that affects all of the significant participants of the eigenvector u^497 with the same bias. We know that the number of significant components of u^497 is twenty-two, since the IPR of u^497 is 0.045. Hence, the content of the largest eigenvector reveals 22 traffic time series that are affected by the same event. We identify the time series that drives these 22 traffic time series by the following procedure. First, we calculate the projection G^497(t) of the time series G_i(t) on the eigenvector u^497,
G^{497}(t) \equiv \sum_{i=1}^{497} u_i^{497} G_i(t). \qquad (20)
Next, we compare G^497(t) with each G_i(t) by computing the correlation coefficient

\rho_i = \frac{\left\langle G^{497}(t)\, G_i(t) \right\rangle}{\sigma_{497}\, \sigma_i}.

The Fiber Distributed Data Interface (FDDI)-VLAN internet switch at one of the routers demonstrates the largest correlation coefficient of 0.89 (see Figure 8). The eigenvector u^497 has the following content: the seven most significant participants are the seven FDDI-VLAN switches at the seven routers. The presence of the FDDI-VLAN switches provides us with information about the VLAN membership definition. FDDI is a layer 2 protocol, which means that at least one of the two layer 2 membership schemes is used: port group and/or MAC address membership. The next group of significant participants comprises VLAN traffic intended for routing and already-routed traffic from different VLANs. The final group of significant participants consists of open switches, which pick up any "leaking" traffic on the router. Usually, the "leaking" traffic is network management traffic, a very low-level traffic which spikes when queried by the management systems.
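A short numpy sketch of this procedure (the helper name is our own; it assumes G holds the rate changes of Eq. (1) row-wise and that eigvecs comes from numpy.linalg.eigh, so its last column corresponds to the largest eigenvalue):

```python
import numpy as np

def projection_correlations(G, u):
    """Project G (N x L) onto eigenvector u, Eq. (20), then return the
    correlation coefficient of the projection with each original series."""
    proj = u @ G                                            # G^497(t)
    proj = (proj - proj.mean()) / proj.std()
    Gn = (G - G.mean(axis=1, keepdims=True)) / G.std(axis=1, keepdims=True)
    return Gn @ proj / G.shape[1]                           # rho_i for every i

# rho = projection_correlations(G, eigvecs[:, -1])          # eigvecs[:, -1] ~ u^497
# print(rho.argmax(), rho.max())   # strongest contributor, ~0.89 in the paper
```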
If every deviating eigenvalue signals a particular sub-model of non-random interactions in the network, then every corresponding eigenvector identifies the significant dimensions of that sub-model. Thus, we can think of every deviating eigenvector as a representative network-wide "snapshot" of interactions within those dimensions.
The analysis of the significant participants of the deviating eigenvectors revealed three types of inter-VLAN traffic time series groupings. One group contains time series that are interlinked on the router. We recognize them as router1-VLAN_1000 traffic, router1-firewall traffic and VLAN_1000-router1 traffic. The time series listed as router1-vlan_2000, router2-VLAN_2000, router3-VLAN_2000, etc., are reserved for the same service VLAN on every router and comprise another group. The content of these groups suggests the VLAN implementation: it is a mixture of the infrastructural approach, where functional groups (departments, schools, etc.) are considered, and the service approach, where a VLAN provides a particular service (network management, firewall, etc.).
VII. STABILITY OF INTER-VLAN TRAFFIC INTERACTIONS IN TIME
We expect to observe stability of the inter-VLAN traffic interactions over the period of time used to compute the traffic cross-correlation matrix C. The eigenvalue distribution at different time periods provides information about the system stabilization, i.e. about the time after which the fluctuations of the eigenvalues are no longer significant. Time periods of 1 hour, 3 hours and 6 hours are not sufficient to gain this knowledge about the system, as demonstrated in Figure 9a. In Figure 9b the system stabilizes after a 1-day period. To observe the time stability of the meaningful inter-VLAN interactions we computed the "overlap matrix" of the deviating eigenvectors for the time period t and the deviating eigenvectors for the time period t + τ, where t = 60h, τ = {0h, 3h, 12h, 24h, 36h, 48h}.
First, we obtained the matrix D from the p = 57 eigenvectors which correspond to the p eigenvalues outside of the RMT upper bound λ_+. Then we computed the "overlap matrix" O(t, τ) = D_A D_B^T, where O_ij is the scalar product of the eigenvector u^i of period A (starting at time t) with u^j of period B (starting at time t + τ),
O_{ij}(t, \tau) \equiv \sum_{k=1}^{N} D_{ik}(t)\, D_{jk}(t + \tau). \qquad (21)
The diagonal elements of O(t, τ), i.e. the elements O_ij with i = j, will equal 1 if the matrix D(t + τ) is identical to the matrix D(t). Clearly, the diagonal of the "overlap matrix" O can serve as an indicator of the time stability of the p eigenvectors outside of the RMT upper bound λ_+. A gray-scale colormap of the "overlap matrices" O(t = 60h, τ = {0h, 3h, 12h, 24h, 36h, 48h}) is presented in Figure 10.
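A compact sketch of Eq. (21) (helper and window names are ours; it assumes the eigenvector columns are sorted by ascending eigenvalue, as numpy.linalg.eigh returns them):

```python
import numpy as np

def overlap_matrix(eigvecs_t, eigvecs_tau, p=57):
    """Overlap O of Eq. (21) between the p deviating eigenvectors of two
    observation windows; diagonal entries near 1 indicate stable vectors."""
    D_t = eigvecs_t[:, -p:].T          # p x N matrix D(t)
    D_tau = eigvecs_tau[:, -p:].T      # p x N matrix D(t + tau)
    return D_t @ D_tau.T

# O = overlap_matrix(eigvecs_window1, eigvecs_window2)
# print(np.abs(np.diag(O)))            # time-stability indicator
```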
VIII. DETECTING ANOMALIES OF TRAFFIC INTERACTIONS
We assume that the health of the inter-VLAN traffic is expressed by the stability of its interactions in time. Temporary critical events, or anomalies, will cause temporary instabilities. The "deviating" eigenvalues and eigenvectors provide us with time-stable snapshots of interactions representative of the entire network. Therefore, these eigenvectors, judged on the basis of their IPR, can serve as monitoring parameters of the system stability.
Among the essential anomalous events of a VLAN infrastructure we can list violations in VLAN membership assignment, in the address resolution protocol, in the VLAN trunking protocol, and router misconfiguration. Violations of membership assignment and router misconfiguration will cause changes in the picture of random and non-random interactions of inter-VLAN traffic. To shed more light on the possibilities of anomaly detection, we conducted experiments to establish the spatial-temporal traces of instabilities caused by an artificial, temporary increase of correlation in normal, non-congested inter-VLAN traffic. We explored the possibility of distinguishing different types of increased temporal correlations. Finally, we observed the consequences of breaking the interactions between time series by injecting traffic counts obtained from a sample of a random distribution.
Experiment 1
We selected the traffic counts time series representing the components of an eigenvector which lies within the RMT bounds and temporarily increased the correlation between these series for a three-hour period. The proposed monitoring parameters show the dependence of system stability on the number of temporarily correlated time series (see Figure 11). Presented in Figure 11, left to right, are (a) the eigenvalue distribution of interactions with two temporarily correlated time series, (b) the IPR of eigenvectors of interactions with two temporarily correlated time series, (c) the overlap matrix of deviating eigenvectors with two temporarily correlated time series. Top to bottom, the layout shows these monitoring parameters when correlation is temporarily increased between 10 connections (d, e and f) and between 20 connections (g, h and i). One can conclude that increased temporal correlation between two time series and between ten time series does not affect system stability. Meanwhile, when the number of temporarily correlated time series reaches the typical number of significant participants of a deviating eigenvector, around twenty, the monitoring parameters register the disturbance.
Experiment 2
In the second experiment we temporarily induced correlations of different types: one type of correlation among ten time series and another type of correlation among the other ten time series. Three different types of three-hour correlations are induced among twenty traffic time series in Figure 15b. The content of the significant components, sorted in decreasing order, shows that time series tend to group according to the type of correlation they are involved in.
Experiment 3
Next we turn our attention to the disruption of the normal picture of inter-VLAN traffic interactions. This can be done by injecting traffic drawn from a random distribution into non-randomly interacting time series for three hours. We demonstrate it by examining the eigenvalue distribution, the IPR and the deviating-eigenvectors overlap matrix plotted in Figure 16. After 60 hours of uninterrupted traffic, we injected elements from a random distribution into the significant participants of u^497 for three hours. The largest eigenvalue increases from 10 to 12. The extended IPR tail shows a larger number of localized eigenvectors, and we observe a dramatic break in the stability of the deviating eigenvectors.
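A sketch of such an injection on the raw counts (all names and the Poisson choice are our illustrative assumptions; with 300 s bins, 60 hours corresponds to 720 samples and 3 hours to 36):

```python
import numpy as np

def inject_random_counts(counts, rows, start=720, width=36, seed=0):
    """Overwrite `width` samples of the selected rows with draws from a
    random distribution, mimicking the three-hour disruption of Experiment 3."""
    rng = np.random.default_rng(seed)
    noisy = counts.astype(float)
    scale = counts[rows].mean()                       # keep a comparable volume
    noisy[rows, start:start + width] = rng.poisson(scale, (len(rows), width))
    return noisy

# counts_anom = inject_random_counts(counts, significant_rows)
# Recompute C on counts_anom and compare its largest eigenvalue, IPR tail and
# overlap-matrix diagonal with the unperturbed baseline.
```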
IX. CONCLUSION AND FUTURE WORK
The RMT methodology we used in this paper enables us to analyze the behavior of a complex system without consideration of the system constraints, type and structure. Our goal was to investigate the characteristics of the day-to-day temporal dynamics of the system of interconnected routers with VLAN subnets of the University of Louisville. The type and structure of the system at hand suggest a natural interpretation of the RMT-like behavior and of the results deviating from the RMT. The time-stable random interactions signify healthy, congestion-free traffic. The time-stable non-random interactions provide us with information about large-scale network-wide traffic interactions. Changes in the stable picture of random and non-random interactions signify temporary traffic anomalies.
In general, the fact that the bulk of the eigenvalue spectrum of inter-VLAN traffic interactions shares the universal properties of random matrices opens a new avenue in network-wide traffic modeling. As stated in [19], in physical systems it is common to start with a model of the dynamics of the system. This way, one would model the traffic time series interactions with a family of stochastic differential equations [28], [29], which describe the "instantaneous" traffic counts
g_i(t) = \frac{d}{dt} \ln T_i(t), \qquad (22)
as a random walk with couplings. Then one would relate the revealed interactions to the correlated "modes" of the system. An additional question that the RMT findings raise in network-wide traffic analysis is whether the observed eigenvalue spectrum correlations and the localized eigenvectors outside of the RMT bulk can add to the explanation of fundamental properties of network traffic, such as self-similarity [30].
To summarize, we have tested the eigenvalue statistics of the inter-VLAN traffic cross-correlation matrix C against the null hypothesis of a random correlation matrix. By separating out the eigenvalue spectrum correlations of random matrices that are present in this system, the uncongested state of the network traffic is verified. We analyzed the time-stable system-specific correlations. The eigenvalues and eigenvectors deviating from the RMT revealed the principal groups of VLAN-router switches, groups of traffic time series interlinked through the firewalls, and groups of same-service VLANs at every router. With straightforward experiments on the traffic time series, we demonstrated that the eigenvalue distribution, the IPR of eigenvectors, the overlap matrix and the spatial-temporal patterns of deviating eigenvectors can monitor the stability of inter-VLAN traffic interactions, and detect and locate in time and space any network-wide changes in normal traffic time series interactions.
As a direction for future work, we would like to investigate the behavior of the delayed traffic time series cross-correlation matrix C_d in RMT terms. The importance of delay in measurement-based analysis of the Internet is emphasized in [31]. To understand and quantify the effect of one time series on another at a later time, one can calculate the delay correlation matrix, whose entries are the cross-correlations of one time series with another at a time delay τ [32]. In addition, we are interested in testing the fruitfulness of the RMT approach on a larger system of inter-domain interactions, for instance, on the 5-minute averaged traffic count time series of the underlying backbone circuits of the Abilene backbone network.
APPENDIX A
Random matrix ensembles have recently penetrated into econophysics, finance [26] and network traffic analysis [3].
For the statistical description of complex physical systems, such as an atomic nucleus or an acoustical reverberant structure, the RMT serves as a guiding light when one is interested in the degree of mutual interaction of the constituents. As it turns out, uncorrelated energy levels or acoustic eigenfrequencies produce qualitatively different results from those obeying RMT-like correlations [25]. Therefore, real (experimentally measured) spectra can help to decide on the nature of interactions in the underlying system. To be specific, an ideally symmetric system is expected to exhibit spectral properties drastically different from those of a generic one, and if the spectral properties are those of RMT systems, other ideas of RMT can be brought to the researcher's aid.
To describe the "awareness" of the structural constituents about each other, scientists in different fields use similar constructs. Physicists use the Hamiltonian matrix, engineers the stiffness matrix, and finance and network analysts the equal-time cross-correlation matrix. Although the physical meaning of the mentioned operators can differ, eigenvalue/eigenvector analysis seems to be a universally accepted tool. The eigenvalues have a direct connection to the spectrum of physical systems, while the eigenvectors can be used for the description of excitation/signal/information propagation inside the system. In physics, the RMT approaches come about whenever the system of interest demonstrates certain qualitative features in its spectral behavior. For example, if one looks at the nearest-neighbor spacing distribution of eigenvalues and, instead of the Poisson law P(s) = exp(−s), discovers the "Wigner surmise" P(s) = (πs/2) exp(−πs²/4), one concludes (upon running several additional statistical tests) that the apparatus of RMT can be used for the system at hand, and the system matrix can be replaced by a matrix with random entries. For mathematical convenience, these entries are given Gaussian weight. The only other ingredient of this rather succinct phenomenological model is recognizing the physical situation. For example, systems with and without magnetic field and/or central symmetry are described by different matrix ensembles (that is, sets of matrices) with elements distributed according to the distribution corresponding to the same β,
P^{(\beta)}(H) \propto \exp\left(-\frac{\beta}{4v^2}\, \mathrm{tr}\, H^2\right),
where the constant v sets the length of the resulting eigenvalues spectrum.
The very fact that RMT can be helpful in statistical description of the broad range of systems suggests that these systems are analyzed in a certain special universal regime, in which physical or other laws are undermined by equilibrated and ergodic evolution. In most physical applications, a Hamiltonian matrix is rather sparse, indicating lack of interaction between different subparts of the corresponding object. However, if the universal regime is inferred from the above mentioned statistical tests, it is very beneficial to replace this single matrix with the ensemble of random matrices. Then, one can proceed with statistical analysis using matrix ensemble for calculation of statistical averages more relevant for the physical problem at hand than the statistics of eigenvalues. The latter can be mean or variance of the response to external or internal excitation. | 6,197 |
0706.2520 | 1820838581 | The traffic behavior of the University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-VLAN traffic. The cross-correlation matrix C of the traffic rate changes between different traffic time series is calculated and tested against the null hypothesis of random interactions. The majority of the eigenvalues λ_i of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The distribution of eigenvalues and eigenvectors outside of the RMT bounds displays prominent and systematic deviations from the RMT predictions. Moreover, these deviations are stable in time. The method we use provides a unique possibility to accomplish three concurrent tasks of traffic analysis. The method verifies the uncongested state of the network, by establishing the profile of random interactions. It recognizes the system-specific large-scale interactions, by establishing the profile of stable in time non-random interactions. Finally, by looking into the eigenstatistics we are able to detect and allocate anomalies of network traffic interactions. | A hidden semi-Markov model has been proposed to model the distribution of network-wide traffic in @cite_23. An observation window is used to distinguish denial-of-service (DoS) flooding attacks mixed with the normal background traffic. | {
"abstract": [
"Hidden semi-Markov Model (HsMM) has been well studied and widely applied to many areas. The advantage of using an HsMM is its efficient forward-backward algorithm for estimating model parameters to best account for an observed sequence. In this paper, we propose an HsMM to model the distribution of network-wide traffic and use an observation window to distinguish DoS flooding attacks mixed within the normal background traffic. Several experiments are conducted to validate our method."
],
"cite_N": [
"@cite_23"
],
"mid": [
"2004469928"
]
} | Analysis of Inter-Domain Traffic Correlations: Random Matrix Theory Approach | The infrastructure, applications and protocols of the system of communicating computers and networks are constantly evolving. The traffic, which is the essence of the communication, is presently voluminous data generated on a minute-by-minute basis within a multi-layered structure by different applications and according to different protocols. As a consequence, there are two general approaches to the analysis of the traffic and to the modeling of its healthy behavior. In the first approach, the traffic analysis considers the protocols, applications, traffic matrix and routing matrix estimates, independence of ingress and egress points and much more. The second approach treats the infrastructure between the points from which the traffic is obtained as a "black box" [33], [34].
Measuring interactions between logically and architecturally equivalent substructures of the system is a natural extension of the "black box" approach. A certain amount of work in this direction has already been done. Studies of statistical traffic flow properties revealed the "congested", "fluid" and "transitional" regimes of the flow at a large scale [1], [2]. The observed collective behavior suggests the existence of large-scale network-wide correlations between the network subparts. Indeed, the work in [3] showed large-scale cross-correlations between different connections of the Renater scientific network. Moreover, the analysis of correlations across all simultaneous network-wide traffic has been used in detecting distributed network attacks [4].
The distributions and time stability of the established interaction statistics represent characteristic features of the system and may be exploited in creating a healthy network traffic profile, which is an essential part of network anomaly detection. As successfully demonstrated in [5], all tested traffic anomalies change the distribution of the traffic features.
Among the numerous types of traffic monitoring variables, time series of traffic counts are free of application "semantics" and thus preferable for "black box" analysis. To extract the meaningful information about underlying interactions contained in the time series, the empirical correlation matrix is the usual tool at hand. In addition, there are various classes of statistical tools, such as principal component analysis, singular value decomposition, and factor analysis, which in turn strongly rely on the validity of the correlation matrix to extract the meaningful part of the time series. Thus, it is important to understand quantitatively the effect of noise, i.e. to separate the noisy, random interactions from the meaningful ones. It is also crucial to consider the finiteness of the time series in the determination of the empirical correlation, since the finite length of the time series available to estimate cross-correlations introduces "measurement noise" [19]. Statistically, it is further advisable to develop null-hypothesis tests in order to check the degree of statistical validity of the results obtained against cases of purely random interactions.
The methodology of random matrix theory (RMT) was developed for studying the complex energy levels of heavy nuclei and is given a detailed account in [6], [7], [8], [9], [10], [11]. For our purposes this methodology comes in as a series of statistical tests run on the eigenvalues and eigenvectors of a "system matrix", which in our case is the traffic time series cross-correlation matrix C (and is the Hamiltonian matrix in the case of nuclei and other RMT systems [6], [7], [8], [9], [10], [11]).
In our study, we propose to investigate the network traffic as a complex system with a certain degree of mutual interactions of its constituents, i.e. single-link traffic time series, using the RMT approach. We concentrate on the large-scale correlations between the time series generated by Simple Network Management Protocol (SNMP) traffic counters at every router-router and router-VLAN subnet connection of the University of Louisville backbone routers system.
The contributions of this study are as follows:
• We propose an application-constraint-free methodology for the analysis of network-wide traffic time series interactions. Even though in this particular study we know in advance that VLANs represent separate broadcast domains, that VLAN-router incoming traffic is traffic intended for other VLANs, and that VLAN-router outgoing traffic is routed traffic from other VLANs, this information is irrelevant for our analysis and is used only in the interpretation of its results.
• Using the RMT, we are able to separate the random interactions from the system-specific ones. The vast majority of traffic time series interact in a random fashion. The time-stable random interactions signify healthy, congestion-free traffic. The proposed analysis of the eigenvector distribution allows us to verify the time series content of uncongested traffic.
• The time-stable non-random interactions provide us with information about large-scale system-specific interactions.
• Finally, the temporal changes in random and non-random interactions can be detected and located with the eigenvalue and eigenvector statistics of the interactions.
The organization of this paper is as follows. Section II presents a survey of related work. We describe the RMT methodology in Section III. Section IV contains the explanation of the data analyzed. In Section V we test the eigenvalue distribution of the inter-VLAN traffic time series cross-correlation matrix C against the RMT predictions. In Section VI we analyze the content of inter-VLAN traffic interactions by means of the eigenvalues and eigenvectors deviating from the RMT. Section VII discusses the characteristic traffic interaction parameters of the system, such as the time stability of the deviating eigenvalues and eigenvectors, the inverse participation ratio (IPR) of the eigenvalue spectra, localization points in the IPR plot, and overlap matrices of the deviating eigenvectors. Through a series of different experiments, we demonstrate in Section VIII how anomalies of traffic interactions can be detected and located in time and space using various visualization techniques on eigenvalue and eigenvector statistics. We present our conclusions and prospective research steps in Section IX.
III. RMT METHODOLOGY
RMT has been employed in financial studies of stock correlations [18], [19], in the communication theory of wireless systems [20], in array signal processing [21], and in bioinformatics studies of protein folding [22]. We are not aware of any work, except for [3], where RMT techniques were applied to the Internet traffic system.
We adopt the methodology used in works on financial time series correlations (see [18], [19] and references therein) and later in [3], which discusses cross-correlations in Internet traffic. In particular, we quantify correlations between N traffic-count time series of L time points by calculating the traffic rate change of every time series T_i, i = 1, . . . , N, over a time scale ∆t,
G_i(t) \equiv \ln T_i(t + \Delta t) - \ln T_i(t) \qquad (1)
where T_i(t) denotes the traffic rate of time series i. This measure is independent of the volume of the traffic exchange and allows capturing subtle changes in the traffic rate [3]. The normalized traffic rate change is
g_i(t) \equiv \frac{G_i(t) - \langle G_i \rangle}{\sigma_i} \qquad (2)

where \sigma_i \equiv \sqrt{\langle G_i^2 \rangle - \langle G_i \rangle^2} is the standard deviation of G_i.
The equal-time cross-correlation matrix C can be computed as follows
C_{ij} \equiv \langle g_i(t)\, g_j(t) \rangle \qquad (3)
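To make Eqs. (1)-(3) concrete, the following Python sketch (our illustration; the paper itself used MATLAB) computes the normalized rate changes and the equal-time cross-correlation matrix from an array of positive traffic counts. The variable names and the toy data are our own assumptions.

import numpy as np

def cross_correlation_matrix(T):
    """T: (N, L+1) array of strictly positive traffic counts sampled at
    equal intervals Delta t; returns the N x N matrix C of Eq. (3)."""
    G = np.log(T[:, 1:]) - np.log(T[:, :-1])                                # Eq. (1)
    g = (G - G.mean(axis=1, keepdims=True)) / G.std(axis=1, keepdims=True)  # Eq. (2)
    return g @ g.T / g.shape[1]                                             # Eq. (3): <g_i g_j>

# toy usage mimicking the paper's dimensions: N = 497, L = 2015
rng = np.random.default_rng(0)
T = np.exp(0.01 * rng.normal(size=(497, 2016)).cumsum(axis=1)) + 1.0
C = cross_correlation_matrix(T)
print(C.shape, np.allclose(np.diag(C), 1.0))                                # (497, 497) True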
The properties of the traffic interactions matrix C have to be compared with those of a random cross-correlation matrix [23]. In matrix notation, the interaction matrix C can be expressed as
C = \frac{1}{L}\, G G^T, \qquad (4)
where G is an N × L matrix with elements \{ g_i^m \equiv g_i(m \Delta t);\ i = 1, \ldots, N;\ m = 0, \ldots, L-1 \}, and G^T denotes the transpose of G. Just as was done in [19], we consider a random correlation matrix
R = \frac{1}{L}\, A A^T, \qquad (5)
where A is an N × L matrix containing N time series of L random elements a_i^m with zero mean and unit variance, which are mutually uncorrelated as a null hypothesis. Statistical properties of the random matrices R have been known for years in the physics literature [6], [10], [7], [8], [9], [11]. In particular, it was shown analytically [24] that, under the restriction N → ∞, L → ∞ with Q ≡ L/N (> 1) fixed, the probability density function P_{\rm rm}(λ) of eigenvalues λ of the random matrix R is given by
P_{\rm rm}(\lambda) = \frac{Q}{2\pi} \frac{\sqrt{(\lambda_+ - \lambda)(\lambda - \lambda_-)}}{\lambda} \qquad (6)
where λ_+ and λ_- are the maximum and minimum eigenvalues of R, respectively, and λ_- ≤ λ_i ≤ λ_+. λ_+ and λ_- are given analytically by
\lambda_\pm = 1 + \frac{1}{Q} \pm 2\sqrt{\frac{1}{Q}}. \qquad (7)
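As a quick numerical check of Eqs. (6)-(7), a small Python sketch (ours, not the authors') evaluates the RMT bounds and counts how many empirical eigenvalues fall outside them; with N = 497 and L = 2015 the bounds come out close to the values quoted in Sec. V.

import numpy as np

def rmt_bounds(N, L):
    """Eq. (7): lambda_+/- for Q = L/N > 1."""
    Q = L / N
    return 1 + 1/Q - 2*np.sqrt(1/Q), 1 + 1/Q + 2*np.sqrt(1/Q)

def rmt_density(lam, N, L):
    """Eq. (6); zero outside [lambda_-, lambda_+]."""
    Q = L / N
    lo, hi = rmt_bounds(N, L)
    rho = np.zeros_like(lam, dtype=float)
    m = (lam > lo) & (lam < hi)
    rho[m] = Q / (2*np.pi) * np.sqrt((hi - lam[m]) * (lam[m] - lo)) / lam[m]
    return rho

def n_deviating(eigs, N, L):
    """Number of eigenvalues outside the RMT bounds."""
    lo, hi = rmt_bounds(N, L)
    return int(((eigs < lo) | (eigs > hi)).sum())

print(rmt_bounds(497, 2015))   # approximately (0.254, 2.240)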
Random matrices display universal functional forms of eigenvalue correlations which depend only on the general symmetries of the matrix. The first step in testing the data for such universal properties is to find a transformation called "unfolding", which maps the eigenvalues λ_i to new variables, the "unfolded eigenvalues" ξ_i, whose distribution is uniform [9], [10], [11]. Unfolding ensures that the distances between eigenvalues are expressed in units of the local mean eigenvalue spacing [9], and thus facilitates the comparison with analytical results. We define the cumulative distribution function of eigenvalues, which counts the number of eigenvalues in the interval λ_i ≤ λ,

F(\lambda) = N \int_{-\infty}^{\lambda} P(x)\, dx, \qquad (8)
where P(x) denotes the probability density of eigenvalues and N is the total number of eigenvalues. The function F(λ) can be decomposed into an average and a fluctuating part,
F(\lambda) = F_{\rm av}(\lambda) + F_{\rm fluc}(\lambda), \qquad (9)
Since P_{\rm fluc} \equiv dF_{\rm fluc}(\lambda)/d\lambda = 0 on average,
P_{\rm rm}(\lambda) \equiv \frac{dF_{\rm av}(\lambda)}{d\lambda}, \qquad (10)
is the averaged eigenvalue density. The dimensionless, unfolded eigenvalues are then given by
\xi_i \equiv F_{\rm av}(\lambda_i). \qquad (11)
Three known universal properties of GOE matrices (matrices whose elements are distributed according to a Gaussian probability measure) are: (i) the distribution of nearest-neighbor eigenvalue spacings P_{\rm GOE}(s),
P_{\rm GOE}(s) = \frac{\pi s}{2} \exp\left(-\frac{\pi}{4} s^2\right), \qquad (12)
(ii) the distribution of next-nearest-neighbor eigenvalue spacings, which, according to the theorem due to [8], is identical to the distribution of nearest-neighbor spacings of the Gaussian symplectic ensemble (GSE),
P_{\rm GSE}(s) = \frac{2^{18}}{3^6 \pi^3}\, s^4 \exp\left(-\frac{64}{9\pi} s^2\right) \qquad (13)
and finally (iii) the "number variance" statistic Σ², defined as the variance of the number of unfolded eigenvalues in intervals of length l around each ξ_i [9], [11], [10],
\Sigma^2(l) = \left\langle\, [\, n(\xi, l) - l \,]^2 \,\right\rangle_\xi, \qquad (14)
where n(ξ, l) is the number of unfolded eigenvalues in the interval [ξ − l/2, ξ + l/2]. The number variance is expressed as follows:
\Sigma^2(l) = l - 2 \int_0^l (l - x)\, Y(x)\, dx, \qquad (15)
where Y(x) for the GOE case is given by [9]

Y(x) = s^2(x) + \frac{ds}{dx} \int_x^\infty s(x')\, dx', \qquad (16)
and
s(x) = \frac{\sin(\pi x)}{\pi x}. \qquad (17)
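A sketch of the unfolding and spacing test of Eqs. (8)-(12) is given below. The paper unfolds with a Gaussian broadening parameter a = 8; here we approximate F_av by a Gaussian-kernel-smoothed cumulative count, which is an assumption of ours rather than the authors' exact procedure, and we verify level repulsion on a synthetic GOE matrix.

import numpy as np
from scipy.stats import norm

def unfold(eigs, a):
    """xi_i = F_av(lambda_i) (Eq. 11), with F_av the Gaussian-broadened
    cumulative eigenvalue count (Eqs. 8-10); a is the broadening width."""
    eigs = np.sort(eigs)
    return norm.cdf((eigs[:, None] - eigs[None, :]) / a).sum(axis=1)

def nn_spacings(xi):
    """Nearest-neighbor spacings s = xi_{k+1} - xi_k, rescaled to unit mean."""
    s = np.diff(np.sort(xi))
    return s / s.mean()

def p_goe(s):
    """Eq. (12), the GOE Wigner surmise."""
    return (np.pi * s / 2) * np.exp(-np.pi * s**2 / 4)

# sanity check on a synthetic GOE matrix
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 500))
eigs = np.linalg.eigvalsh((A + A.T) / 2)
s = nn_spacings(unfold(eigs, a=2.0))
print((s < 0.1).mean())   # small fraction, reflecting GOE level repulsion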
Just as was stressed in [19], [18], [25], the overall time of observation is crucial for explaining the empirical cross-correlation coefficients. On one hand, the longer we observe the traffic, the more information about the correlations we obtain and the less "noise" we introduce. On the other hand, the correlations are not stationary, i.e. they can change with time. To differentiate the "random" contribution to the empirical correlation coefficients from the "genuine" contribution, the eigenvalue statistics of C are contrasted with the eigenvalue statistics of a correlation matrix taken from the so-called "chiral" Gaussian Orthogonal Ensemble [19]. Such an ensemble is one of the ensembles of RMT [25], [26], briefly discussed in Appendix A. A random cross-correlation matrix, which is a matrix filled with uncorrelated Gaussian random numbers, is supposed to represent transient network activity uncorrelated in time, that is, a completely noisy environment. If the cross-correlation matrix C obeys the same eigenstatistical properties as the RMT matrix, the network traffic is equilibrated and deemed universal in the sense that every single connection interacts with the rest in a completely chaotic manner. It also means a complete absence of congestions and anomalies. Meanwhile, any time-stable deviations from the universal predictions of RMT signify system-specific, non-random properties of the system, providing clues about the nature of the underlying interactions. That allows us to establish the profile of system-specific correlations.
IV. DATA

In this paper, we study the averaged traffic count data collected from all router-router and router-VLAN subnet connections of the University of Louisville backbone routers system. The system consists of nine interconnected multi-gigabit backbone routers, over 200 Ethernet segments and over 300 VLAN subnets. We collected the traffic count data for 3 months, for the period from September 21, 2006 to December 20, 2006, from 7 routers, since two routers are reserved for server farms. The overall data amounted to approximately 18 GB.
The traffic count data is provided by the Multi Router Traffic Grapher (MRTG) tool, which reads the SNMP traffic counters. An MRTG log file never grows in size due to its data consolidation algorithm: it contains records of average incoming, outgoing, max and min transfer rates in bytes per second at time intervals of 300 seconds, 30 minutes, 1 day and 1 month. We extracted the 300-second interval data for seven days. Then, we separated the incoming and outgoing traffic count time series and considered them as independent. For 352 connections we formed L = 2015 records of N = 704 time series with a 300-second interval.
We pursued the changes in the traffic rate; thus, we excluded from consideration the connections where the channel is open but no traffic is established, or where there is only constant-rate, uniformly low-volume test traffic. Another reason for excluding the "empty" traffic time series is that they make the time series cross-correlation matrix unnecessarily sparse. The exclusion does not influence the analysis and results. After the exclusions the number of traffic time series became N = 497.
To calculate the traffic rate change G_i(t) we used the logarithm of the ratio of two successive counts. As stated earlier, the log-transformation makes the ratio independent of the traffic volume and allows capturing subtle changes in the traffic rate. We added 1 byte to all data points to avoid manipulations with log(0) in cases where the traffic count is equal to zero bytes. This measure did not affect the changes in the traffic rate.
V. EIGENVALUE DISTRIBUTION OF CROSS-CORRELATION MATRIX, COMPARISON WITH RMT
We constructed the inter-VLAN traffic cross-correlation matrix C with the number of time series N = 497 and the number of observations per series L = 2015 (Q = 4.0625), so that λ_+ = 2.23843 and λ_- = 0.253876. Our first goal is to compare the eigenvalue distribution P(λ) of C with P_{\rm rm}(λ) [23]. To compute the eigenvalues of C we used a standard MATLAB function. The empirical probability distribution P(λ) is then given by the corresponding histogram. We display the resulting distribution P(λ) in Figure 1 and compare it to the probability distribution P_{\rm rm}(λ) taken from Eq. (6), calculated for the same value of the traffic time series parameters (Q = 4.0625). The solid curve demonstrates P_{\rm rm}(λ) of Eq. (6). The largest eigenvalue, shown in the inset, has the value λ_{497} = 8.99. We zoom in on the deviations from the RMT predictions in the inset to Figure 1. We note the presence of "bulk" (RMT-like) eigenvalues which fall within the bounds [λ_-, λ_+] of P_{\rm rm}(λ), and the presence of eigenvalues which lie outside of the "bulk", representing deviations from the RMT predictions. In particular, the largest eigenvalue λ_{497} = 8.99 for the seven-day period is approximately four times larger than the RMT upper bound λ_+.
The histogram of the well-defined bulk agrees with P_{\rm rm}(λ), suggesting that the cross-correlations of matrix C are mostly random. We observe that the inter-VLAN traffic time series interact mostly in a random fashion.
Nevertheless, the agreement of the empirical probability distribution P(λ) of the bulk with P_{\rm rm}(λ) is not sufficient to claim that the bulk of the eigenvalue spectrum is random. Therefore, further RMT tests are needed [19].
To do that, we obtained the unfolded eigenvalues ξ_i by following the phenomenological procedure referred to as Gaussian broadening [27] (see [27], [35], [19], [18]). The empirical cumulative distribution function of eigenvalues F(λ) agrees well with F_{\rm av}(λ) (see Figure 2), where the ξ_i are obtained with the Gaussian broadening procedure with broadening parameter a = 8. The first independent RMT test is the comparison of the distribution of nearest-neighbor unfolded eigenvalue spacings P_{\rm nn}(s), where s ≡ ξ_{k+1} − ξ_k, with P_{\rm GOE}(s) [9], [10], [11]. The empirical probability distribution of nearest-neighbor unfolded eigenvalue spacings P_{\rm nn}(s) and P_{\rm GOE}(s) are presented in Figure 3. The Gaussian decay of P_{\rm GOE}(s) for large s suggests that P_{\rm GOE}(s) "probes" scales of only one eigenvalue spacing. The agreement between the empirical probability distribution P_{\rm nn}(s) and the distribution of nearest-neighbor eigenvalue spacings of the GOE matrices P_{\rm GOE}(s) testifies that the positions of two adjacent empirical unfolded eigenvalues at the distance s are correlated just as the eigenvalues of the GOE matrices.
Next, we took on the distribution P_{\rm nnn}(s′) of next-nearest-neighbor spacings s′ ≡ ξ_{k+2} − ξ_k between the unfolded eigenvalues. According to [8], this distribution should fit the distribution of nearest-neighbor spacings of the GSE. We demonstrate this correspondence in Figure 4. The solid line shows P_{\rm GSE}(s). Finally, the long-range two-point eigenvalue correlations were tested. It is known [9], [10], [11] that if eigenvalues are uncorrelated, we expect the number variance to scale with l, Σ² ∼ l. Meanwhile, when the unfolded eigenvalues of C are correlated, Σ² approaches a constant value, revealing "spectral rigidity" [9], [10], [11]. In Figure 5, we contrasted the Poissonian number variance with the one we observed, and came to the conclusion that the eigenvalues belonging to the "bulk" clearly exhibit universal RMT properties. The broadening parameter a = 8 was used in the Gaussian broadening procedure to unfold the eigenvalues λ_i [27], [35], [19], [18]. The dashed line corresponds to the case of uncorrelated eigenvalues. These findings show that the system of inter-VLAN traffic has a universal part of eigenvalue spectral correlations, shared by a broad class of systems, including chaotic and disordered systems, nuclei, atoms and molecules. Thus it can be concluded that the bulk eigenvalue statistics of the inter-VLAN traffic cross-correlation matrix C are consistent with those of the real symmetric random matrix R given by Eq. (5) [24]. Meanwhile, the deviations from the RMT contain the information about the system-specific correlations. The next section is entirely devoted to the analysis of the eigenvalues and eigenvectors deviating from the RMT, which signify the meaningful inter-VLAN traffic interactions.
VI. INTER-VLAN TRAFFIC INTERACTIONS ANALYSIS
We overview the points of interest in the eigenvectors of the inter-VLAN traffic cross-correlation matrix C, which are determined according to C u^k = λ_k u^k, where λ_k is the k-th eigenvalue. A particularly important characteristic of eigenvectors, proven useful in the physics of disordered conductors, is the inverse participation ratio (IPR) (see, for example, Ref. [11]). In such systems, the IPR, being a function of an eigenstate (eigenvector), allows one to judge whether the corresponding eigenstate, and therefore the electron, is extended or localized.
A. Inverse participation ratio of eigenvector components
For our purposes, it is sufficient to know that the IPR quantifies the reciprocal of the number of significant components of the eigenvector. For the eigenvector u^k it is defined as
I^k \equiv \sum_{l=1}^{N} \left[ u_l^k \right]^4, \qquad (18)
where u_l^k, l = 1, . . . , 497 are the components of the eigenvector u^k. In particular, a vector with one significant component has I^k = 1, while a vector with identical components u_l^k = 1/√N has I^k = 1/N. Consequently, the inverse of the IPR gives us the number of significant participants of the eigenvector. In Figure 6 we plot the IPR of the cross-correlation matrix C as a function of eigenvalue λ. The control plot is the IPR of the eigenvectors of the random cross-correlation matrix R of Eq. (5). As we can see, eigenvectors corresponding to eigenvalues from 0.25 to 3.5, which is within the RMT boundaries, have IPR close to 0. This means that almost all components of the eigenvectors in the bulk interact in a random fashion. The number of significant components of the eigenvectors deviating from the RMT, around twenty, is typically twenty times smaller than that of the eigenvectors within the RMT boundaries. For instance, the IPR of eigenvector u^{492}, which corresponds to the eigenvalue 5.9 in Figure 6, is 0.05, i.e. twenty time series contribute significantly to u^{492}. Another observation we derive from Figure 6 is that the number of significant eigenvector participants is considerably smaller at both edges of the eigenvalue spectrum. These findings resemble the results of [19], where eigenvectors with a few participating components were referred to as localized vectors. The theory of localization is explained in the context of random band matrices, where elements are independently drawn from different probability distributions [19]. These matrices, despite their randomness, still contain probabilistic information. The localization in inter-VLAN traffic is explained as follows. The separated broadcast domains, i.e. VLANs, forward traffic from one to another only through the router, reducing the routing for broadcast containment. Although the optimal VLAN deployment is to keep as much traffic as possible from traversing the router, a bottleneck at a large number of VLANs is unavoidable.
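The IPR of Eq. (18) and its two limiting cases can be verified with a few lines of Python (our illustration); the number of significant participants then follows as the reciprocal 1/I^k:

import numpy as np

def ipr(U):
    """Eq. (18): IPR of each eigenvector stored as a column of U;
    assumes unit-norm columns."""
    return (U**4).sum(axis=0)

N = 497
e = np.zeros(N); e[0] = 1.0            # one significant component
u = np.ones(N) / np.sqrt(N)            # identical components
print(ipr(np.column_stack([e, u])))    # [1.0, 1/N]
print(1 / 0.05)                        # IPR 0.05 -> ~20 significant series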
B. Distribution of eigenvector components
Another target of interest is the distribution of the components u_l^k, l = 1, . . . , N, of the eigenvector u^k of the interaction matrix C. To calculate the vectors u we again used a MATLAB routine and obtained the component distribution p(u) of the eigenvector components. Then, we contrasted it with the RMT prediction for the eigenvector distribution p_{\rm rm}(u) of the random correlation matrix R. According to [11], p_{\rm rm}(u) has a Gaussian distribution with mean zero and unit variance, i.e.
p_{\rm rm}(u) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right). \qquad (19)
The weights of randomly interacting traffic count time series, which are represented by the eigenvector components, have to be distributed normally. The results are presented in Figure 7.
One can see (from Figures 7a and 7b) that p(u) for two u^k taken from the bulk is in accord with p_{\rm rm}(u). The distribution p(u) corresponding to an eigenvalue λ_i which exceeds the RMT upper bound (λ_i > λ_+) is shown in Figure 7c.
C. Deviating eigenvalues and significant inter-VLAN traffic series contributing to the deviating eigenvectors.
The distribution of u^{497}, the eigenvector corresponding to the largest eigenvalue λ_{497}, deviates significantly from the Gaussian (as follows from Figure 7d). While the Gaussian kurtosis has the value 3, the kurtosis of p(u^{497}) comes out to 23.22. The smaller number of significant components of the eigenvector also influences the difference between the Gaussian distribution and the empirical distribution of eigenvector components. More than half of the u^{497} components have the same sign, thus slightly shifting p(u) to one side. This result suggests the existence of common VLAN traffic intended for inter-VLAN communication that affects all of the significant participants of the eigenvector u^{497} with the same bias. We know that the number of significant components of u^{497} is twenty-two, since the IPR of u^{497} is 0.045. Hence, the content of the largest eigenvector reveals 22 traffic time series which are affected by the same event. We obtain the time series which affects these 22 traffic time series by the following procedure. First of all, we calculate the projection G^{497}(t) of the time series G_i(t) on the eigenvector u^{497},
G^{497}(t) \equiv \sum_{i=1}^{497} u_i^{497} G_i(t) \qquad (20)
Next, we compare G^{497}(t) with G_i(t) by finding the correlation coefficient \langle \frac{G^{497}(t)}{\sigma_{497}} \frac{G_i(t)}{\sigma_i} \rangle. The Fiber Distributed Data Interface (FDDI)-VLAN internet switch at one of the routers demonstrates the largest correlation coefficient of 0.89 (see Figure 8). The eigenvector u^{497} has the following content: the seven most significant participants are the seven FDDI-VLAN switches at the seven routers. The presence of the FDDI-VLAN switch provides us with information about the VLAN membership definition. FDDI is a layer-2 protocol, which means that at least one of the two layer-2 membership definitions is used: port group and/or MAC address membership. The next group of significant participants comprises VLAN traffic intended for routing and already-routed traffic from different VLANs. The final group of significant participants consists of open switches, which pick up any "leaking" traffic on the router. Usually, the "leaking" traffic is the network management traffic, a very low-level traffic which spikes when queried by the management systems.
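The procedure around Eq. (20), projecting the series onto the largest eigenvector and ranking connections by their correlation with that projection, can be sketched as follows (our Python reconstruction; the paper reports the FDDI-VLAN switch reaching a coefficient of 0.89):

import numpy as np

def mode_correlations(G, u):
    """G: (N, L) rate changes; u: (N,) eigenvector. Returns the correlation
    of each series with the projection G^k(t) = sum_i u_i G_i(t) (Eq. 20)."""
    Gk = u @ G                                          # Eq. (20)
    gk = (Gk - Gk.mean()) / Gk.std()
    g = (G - G.mean(axis=1, keepdims=True)) / G.std(axis=1, keepdims=True)
    return g @ gk / G.shape[1]

# usage: rank connections by coupling to the largest mode u_top
# c = mode_correlations(G, u_top); print(np.argsort(c)[::-1][:7])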
If every deviating eigenvalue signifies a particular sub-model of non-random interactions of the network, then every corresponding eigenvector presents the number of significant dimensions of the sub-model. Thus, we can think of every deviating eigenvector as a representative network-wide "snapshot" of interactions within certain dimensions.
The analysis of the significant participants of the deviating eigenvectors revealed three types of inter-VLAN traffic time series groupings. One group contains time series which are interlinked on the router. We recognize them as router1-VLAN_1000 traffic, router1-firewall traffic and VLAN_1000-router1 traffic. The time series which are listed as router1-vlan_2000, router2-VLAN_2000, router3-VLAN_2000, etc., are reserved for the same service VLAN on every router and comprise another group. The content of these groups suggests the VLAN implementation: it is a mixture of the infrastructural approach, where functional groups (departments, schools, etc.) are considered, and the service approach, where a VLAN provides a particular service (network management, firewall, etc.).
VII. STABILITY OF INTER-VLAN TRAFFIC INTERACTIONS IN TIME
We expect to observe the stability of inter-VLAN traffic interactions over the period of time used to compute the traffic cross-correlation matrix C. The eigenvalue distribution at different time periods provides information about the system stabilization, i.e. about the time after which the fluctuations of the eigenvalues are not significant. Time periods of 1 hour, 3 hours and 6 hours are not sufficient to gain knowledge about the system, as demonstrated in Figure 9a. In Figure 9b the system stabilizes after a 1-day period. To observe the time stability of the meaningful inter-VLAN interactions, we computed the "overlap matrix" of the deviating eigenvectors for the time period t and the deviating eigenvectors for the time period t + τ, where t = 60h, τ = {0h, 3h, 12h, 24h, 36h, 48h}.
First, we obtained the matrix D from the p = 57 eigenvectors which correspond to the p eigenvalues outside of the RMT upper bound λ_+. Then we computed the "overlap matrix" O(t, τ) from D_A D_B^T, where O_{ij} is the scalar product of the eigenvector u^i of period A (starting at time t = t) with u^j of period B at the time t = t + τ,
O_{ij}(t, \tau) \equiv \sum_{k=1}^{N} D_{ik}(t)\, D_{jk}(t + \tau) \qquad (21)
The values of the O_{ij}(t, τ) elements at i = j, i.e. the diagonal elements of matrix O, will be 1 if the matrix D(t + τ) is identical to the matrix D(t). Clearly, the diagonal of the "overlap matrix" O can serve as an indicator of the time stability of the p eigenvectors outside of the RMT upper bound λ_+. The gray-scale colormap of the "overlap matrices" O(t = 60h, τ = {0h, 3h, 12h, 24h, 36h, 48h}) is presented in Figure 10.
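Eq. (21) amounts to one matrix product; the sketch below (ours) checks that identical sets of deviating eigenvectors give a unit diagonal, the signature of perfect time stability:

import numpy as np

def overlap(D_t, D_tau):
    """Eq. (21): O(t, tau) = D(t) D(t + tau)^T, with the p deviating
    eigenvectors stored as rows of the (p, N) matrices."""
    return D_t @ D_tau.T

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(497, 57)))   # 57 orthonormal "eigenvectors"
D = Q.T
O = overlap(D, D)                                 # tau = 0: identical sets
print(np.allclose(np.diag(O), 1.0))               # True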
VIII. DETECTING ANOMALIES OF TRAFFIC INTERACTIONS
We assume that the health of inter-VLAN traffic is expressed by the stability of its interactions in time. Meanwhile, temporary critical events or anomalies will cause temporal instabilities. The "deviating" eigenvalues and eigenvectors provide us with time-stable snapshots of interactions representative of the entire network. Therefore, these eigenvectors, judged on the basis of their IPR, can serve as monitoring parameters of system stability.
Among the essential anomalous events of a VLAN infrastructure we can list violations in VLAN membership assignment, in the address resolution protocol, in the VLAN trunking protocol, and router misconfiguration. A violation of membership assignment or a router misconfiguration will cause changes in the picture of random and non-random interactions of inter-VLAN traffic. To shed more light on the possibilities of anomaly detection, we conducted experiments to establish the spatial-temporal traces of instabilities caused by an artificial and temporary increase of correlation in normal, non-congested inter-VLAN traffic. We explored the possibility of distinguishing different types of increased temporal correlations. Finally, we observed the consequences of breaking the interactions between time series by injecting traffic counts obtained from a sample of a random distribution.
Experiment 1
We selected the traffic count time series representing the components of an eigenvector which lies within the RMT bounds and temporarily increased the correlation between these series for a three-hour period. The proposed monitoring parameters show the dependence of system stability on the number of temporarily correlated time series (see Figure 11). Presented in Figure 11, left to right, are (a) the eigenvalue distribution of interactions with two temporarily correlated time series, (b) the IPR of eigenvectors of interactions with two temporarily correlated time series, and (c) the overlap matrix of deviating eigenvectors with two temporarily correlated time series. Top to bottom, the layout shows these monitoring parameters when the correlation is temporarily increased between 10 connections (d, e and f) and between 20 connections (g, h and i). One can conclude that an increased temporal correlation between two time series and between ten time series does not affect system stability. Meanwhile, when the number of temporarily correlated time series reaches twenty, the monitoring parameters reflect the disturbance (Figure 11g-i).
Experiment 2
In this experiment, we induced one type of correlation among ten time series and another type of correlation among another ten time series. Three different types of three-hour correlations are induced among twenty traffic time series in Figure 15b; a sketch of the perturbation is given below. The content of the significant components, sorted in decreasing order, shows that the time series tend to group according to the type of correlation they are involved in.
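A hedged reconstruction of the perturbation used in Experiments 1 and 2: mixing a common random component into k chosen series over a three-hour window (36 samples at 300 s). The mixing weight and window indices are our assumptions, not the authors' exact setup.

import numpy as np

def inject_correlation(G, idx, t0, t1, strength=0.8, rng=None):
    """Temporarily correlate the series in `idx` over [t0, t1) by mixing in
    a shared random component; returns a perturbed copy of G."""
    rng = rng or np.random.default_rng()
    G = G.copy()
    common = rng.normal(size=t1 - t0)
    for i in idx:
        G[i, t0:t1] = (1 - strength) * G[i, t0:t1] + strength * common
    return G

# e.g. correlate 20 series for three hours starting at hour 60:
# G2 = inject_correlation(G, idx=range(20), t0=720, t1=756)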
Experiment 3
Next we turn our attention to the disruption of the normal picture of inter-VLAN traffic interactions. This can be done by injecting traffic from a random distribution into non-randomly interacting time series for three hours. We demonstrate it by examining the eigenvalue distribution, the IPR and the deviating-eigenvector overlap matrix plotted in Figure 16. After 60 hours of uninterrupted traffic, we injected elements from a random distribution into the significant participants of u^{497} for three hours. The largest eigenvalue increases from 10 to 12. The extended IPR tail shows a larger number of localized eigenvectors, and we observe a dramatic break in the stability of the deviating eigenvectors.
IX. CONCLUSION AND FUTURE WORK
The RMT methodology we used in this paper enables us to analyze complex system behavior without consideration of the system's constraints, type and structure. Our goal was to investigate the characteristics of the day-to-day temporal dynamics of the system of interconnected routers with VLAN subnets of the University of Louisville. The type and structure of the system at hand suggest a natural interpretation of the RMT-like behavior and of the results deviating from the RMT. The time-stable random interactions signify healthy, congestion-free traffic. The time-stable non-random interactions provide us with information about large-scale network-wide traffic interactions. Changes in the stable picture of random and non-random interactions signify temporal traffic anomalies.
In general, the fact that the bulk of the eigenvalue spectrum of inter-VLAN traffic interactions shares universal properties with random matrices opens a new avenue in network-wide traffic modeling. As stated in [19], in physical systems it is common to start with a model of the dynamics of the system. This way, one would model the traffic time series interactions with a family of stochastic differential equations [28], [29], which describe the "instantaneous" traffic counts
g_i(t) = \frac{d}{dt} \ln T_i(t), \qquad (22)
as a random walk with couplings. Then one would relate the revealed interactions to the correlated "modes" of the system. An additional question that the RMT findings raise in network-wide traffic analysis is whether the found eigenvalue spectrum correlations and the localized eigenvectors outside of the RMT bulk can add to the explanation of fundamental properties of network traffic, such as self-similarity [30].
To summarize, we have tested the eigenvalue statistics of the inter-VLAN traffic cross-correlation matrix C against the null hypothesis of a random correlation matrix. By separating out the eigenvalue spectrum correlations of random matrices that are present in this system, the uncongested state of the network traffic is verified. We analyzed the time-stable system-specific correlations. The analyzed eigenvalues and eigenvectors deviating from the RMT revealed the principal groups of VLAN-router switches, groups of traffic time series interlinked through the firewalls, and groups of same-service VLANs at every router. With straightforward experiments on the traffic time series, we demonstrated that the eigenvalue distribution, the IPR of eigenvectors, the overlap matrix and the spatial-temporal patterns of deviating eigenvectors can monitor the stability of inter-VLAN traffic interactions, and detect and locate in time and space any network-wide changes in normal traffic time series interactions.
As a direction for future work, we would like to investigate the behavior of the delayed traffic time series cross-correlation matrix C_d in RMT terms. The importance of delay in measurement-based analysis of the Internet is emphasized in [31]. To understand and quantify the effect of one time series on another at a later time, one can calculate the delay correlation matrix, where the entries are the cross-correlations of one time series and another at a time delay τ [32]. In addition, we are interested in testing the fruitfulness of the RMT approach on a larger system of inter-domain interactions, for instance, on the 5-minute averaged traffic count time series of the underlying backbone circuits of the Abilene backbone network.
APPENDIX A
The methods of RMT have recently penetrated into econophysics, finance [26] and network traffic analysis [3].
For the statistical description of complex physical systems, such as, for example, the atomic nucleus or an acoustical reverberant structure, RMT serves as a guiding light when one is interested in the degree of mutual interaction of the constituents. As it turns out, uncorrelated energy levels or acoustic eigenfrequencies would produce qualitatively different results from those obeying RMT-like correlations [25]. Therefore, real (experimentally measured) spectra can help to decide on the nature of interactions in the underlying system. To be specific, an ideally symmetric system is expected to exhibit spectral properties drastically different from the properties of a generic one, and if the spectral properties are those of RMT systems, other ideas of RMT can be brought to the researcher's aid.
To describe "awareness" of the structural constituents about each other, scientists in different fields use similar constructs. Physicists use Hamiltonian matrix, engineers stiffness matrix, finance and network analysts the equal-time cross-correlation matrix. Although the physical meaning of mentioned operators can be different, the eigenvalues/eigenvectors analysis seems to be a universally accepted tool. The eigenvalues have direct connection to spectrum of physical systems, while eigenvectors can be used for the description of excitation/signal/information propagation inside the system. In physics, the RMT approaches come about whenever the system of interest demonstrates certain qualitative features in their spectral behavior. For example, if one looks at nearest neighbor spacing distribution of eigenvalues and instead of Poisson law P (s) = exp (−s) , discovers "Wigner surmise" P (s) = π 2 s exp − π 2 s 2 , one concludes (upon running several additional statistical tests) that apparatus of RMT can be used for the system at hand, and system matrix can be replaced by a matrix with random entries. For mathematical convenience, these entries are given Gaussian weight. The only other ingredient of this rather succinct phenomenological model is recognizing the physical situation. For example, systems with and without magnetic field and/or central symmetry are described by different matrix ensembles (that is the set of matrices) with elements distributed within distribution corresponding to the same β
P^{(\beta)}(H) \propto \exp\left(-\frac{\beta}{4 v^2}\, \mathrm{tr}\, H^2\right),
where the constant v sets the length of the resulting eigenvalue spectrum.
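For β = 1 this weight is realized by the familiar GOE construction, sketched below in Python (our illustration): symmetrize a matrix of independent Gaussian entries so that the diagonal variance is twice the off-diagonal one.

import numpy as np

def sample_goe(N, v=1.0, rng=None):
    """Draw H with P(H) ~ exp(-tr(H^2) / (4 v^2)) (beta = 1): off-diagonal
    variance v^2, diagonal variance 2 v^2."""
    rng = rng or np.random.default_rng()
    A = rng.normal(scale=v, size=(N, N))
    return (A + A.T) / np.sqrt(2)

H = sample_goe(400)
print(np.allclose(H, H.T))   # True: real symmetric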
The very fact that RMT can be helpful in the statistical description of a broad range of systems suggests that these systems are analyzed in a certain special universal regime, in which physical or other laws are undermined by equilibrated and ergodic evolution. In most physical applications, a Hamiltonian matrix is rather sparse, indicating a lack of interaction between different subparts of the corresponding object. However, if the universal regime is inferred from the above-mentioned statistical tests, it is very beneficial to replace this single matrix with an ensemble of random matrices. Then, one can proceed with statistical analysis, using the matrix ensemble to calculate statistical averages more relevant to the physical problem at hand than the statistics of eigenvalues. Such averages can be the mean or variance of the response to an external or internal excitation. | 6,197
1502.00712 | 1987841188 | Constructing effective representations is a critical but challenging problem in multimedia understanding. The traditional handcraft features often rely on domain knowledge, limiting the performances of existing methods. This paper discusses a novel computational architecture for general image feature mining, which assembles the primitive filters (i.e. Gabor wavelets) into compositional features in a layer-wise manner. In each layer, we produce a number of base classifiers (i.e. regression stumps) associated with the generated features, and discover informative compositions by using the boosting algorithm. The output compositional features of each layer are treated as the base components to build up the next layer. Our framework is able to generate expressive image representations while inducing very discriminative functions for image classification. The experiments are conducted on several public datasets, and we demonstrate superior performances over state-of-the-art approaches. | In the past few decades, many works have focused on designing different types of features to capture the characteristics of images, such as color, SIFT and HoG @cite_2 . Based on these feature descriptors, the Bag-of-Features (BoF) model seems to be the most classical image representation method in computer vision and related multimedia applications. Several promising studies @cite_9 @cite_5 @cite_10 were published to improve this traditional approach in different aspects. Among these extensions, a class of sparse coding based methods @cite_5 @cite_10 , which employ the spatial pyramid matching kernel (SPM) proposed by Lazebnik et al., has achieved great success in the image classification problem. Although we are developing more and more effective representation methods, the lack of high-level image expression still prevents us from building the ideal vision system. | {
"abstract": [
"Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.",
"This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralbas \"gist\" and Lowes SIFT descriptors.",
"The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC utilizes the locality constraints to project each descriptor into its local-coordinate system, and the projected coordinates are integrated by max pooling to generate the final representation. With linear classifier, the proposed approach performs remarkably better than the traditional nonlinear SPM, achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a constrained least square fitting problem, bearing computational complexity of O(M + K2). Hence even with very large codebooks, our system can still process multiple frames per second. This efficiency significantly adds to the practical values of LLC for real applications.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds."
],
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_10",
"@cite_2"
],
"mid": [
"2097018403",
"2162915993",
"2027922120",
"2161969291"
]
} | DEEP BOOSTING: LAYERED FEATURE MINING FOR GENERAL IMAGE CLASSIFICATION | Feature engineering (i.e. constructing effective image representation) has been actively studied in machine learning and computer vision [1,2,3] . In literature, the terms feature selection or feature mining often refer to selecting a subset of relevant feature from a special feature space [1,4,2]. One of the typical feature selection method is Adaboost algorithm, which merge the feature selection together with the learning procedure. According to previous work in [5], Adaboost constructs a pool of features (i.e. weak classifier) and selects the discriminative ones to form the final strong classifier. These boosting-based approaches provide an effective way for image classification task and achieve outstanding results in the past decade.
Despite the admitted success, such boosting methods are suffered from two essential problems. First, the weak classifier selected at each boosting step is limited by their own discriminative ability when faces with complex classification problems. In order to decrease the training error, the final classifier is linearly combined by a large numbers of weak classifiers through boosting [6]. On the other hand, amounts of effective learning procedure always lead the training error approaching to zero. However, under the unknown decision boundary, how to decrease the test error when training error is approaching zero is still an open issue [7].
In recent decades, the hierarchical models, also known as deep models [8,9] have played an irreplaceable role in multimedia and computer vision literature. Generally, such hierarchical architecture represents different layer of vision primitives such as pixels, edges, object parts and so on. The basic principles of hierarchical models are concentrated on two folds: (1) layerwise learning philosophy, whose goal is to learn single layer of the model individually and stack them to form the final architecture; (2) feature combination rules, which aim at utilizing the combination of low layer detected features to construct the high layer impressive features by introducing the activation function. In this paper, the related exciting researches inspire us to employ such compositional representation to construct the impressive features with more discriminative power. Different from previous works [8,9,10] applying the hierarchical generative model, we address the problem on general image classification directly and design the final classifier leveraging the generalization and discrimination abilities.
This paper proposes a novel feature mining framework, namely deep boosting, which aims to construct the effective discriminative features for image classification task. Compared with the concept 'mining' proposed in [2], whose goal is picking a subset of features as well as modeling the entire feature space, we utilize the word to describe the processing of feature selection and combination, which is more related to [6]. For each layer, following the famous boosting method [7], our deep model sequentially selects visual features to learn the classifier to reduce the training error. In order to construct high-level discriminative representations, we composite selected features in the same layer and feed into higher layer to build a multilayer architecture. Another key to our approach is introducing the spatial information when combining the individual features, that inspires upper layer representation more structured on the local scale. The experiment shows that our method achieves excellent performance on image classification task.
RELATED WORK
In the past few decades, many works focus on designing different types of features to capture the characteristics of images such as color, SIFT and HoG [11]. Based on these feature descriptors, Bag-of-Feature (BoF) model seems to be the most classical image representation method in computer vision and related multimedia applications. Several promising studies [12,13,14] were published to improve this traditional approach in different aspects. Among these extension, a class of sparse coding based methods [13,14], which employ spatial pyramid matching kernel (SPM) proposed by Lazebnik et al, has achieved great success in image classification problem. Despite we are developing more and more effective representation methods, the lack of high-level image expression still plagues us to build up the ideal vision system.
On the other hand, learning hierarchical models to simultaneously construct multiple levels of visual representation has received much attention recently [15]. Our deep boosting method is partially motivated by recent developed deep learning techniques [8,9,16]. Different from previous handcraft feature design method, deep model learns the feature representation from raw data and validly generates the highlevel semantic representation. However, as shown in recent study [16], these network-based hierarchical models always contain thousands of nodes in a single layer, and is too complex to control in real multimedia application. In contrast, an obvious characteristic of our study is that we build up the deep architecture to generate expressive image representation simply and obtains the near optimal classification rate in each layer.
DEEP BOOSTING FOR IMAGE RECOGNITION
Preprocessing
The basic units in the Gentle Adaboost algorithm are individual features, also known as weak classifiers. Unlike the rectangle features in [5] for face detection, we employ Gabor wavelet responses as the image feature representation. Let I be an image defined on the image lattice domain and G be the Gabor wavelet elements with parameters (w, h, α, s), where (w, h) is the central position belonging to the lattice domain, and α and s denote the orientation and scale parameters. Following [18], we utilize a normalization term to make the Gabor responses comparable between different training images:
\xi^2(s) = \frac{1}{|P|\, A} \sum_{\alpha} \sum_{w,h} \left| \langle I, G_{w,h,\alpha,s} \rangle \right|^2, \qquad (3)
where |P| is the total number of pixels in image I, and A is the number of orientations. ⟨·, ·⟩ denotes the convolution process. For each image I, we normalize the local energy as |⟨I, G_{w,h,α,s}⟩|² / ξ²(s) and define the positive square root of this normalized result as the feature response. In practice, we resize each image to 120 × 120 pixels and apply one scale and eight orientations in our implementation, so there are in total 120 × 120 × 1 × 8 filter responses for each grayscale image.
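A simplified Python sketch of the normalization in Eq. (3); we stand in for the Gabor wavelets with a real cosine Gabor kernel and scipy's FFT convolution, so the kernel parameters and helper names are our assumptions rather than the paper's exact filters.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(sigma=4.0, theta=0.0, lam=8.0, size=17):
    # real cosine Gabor; a simplified stand-in for G_{w,h,alpha,s}
    r = (size - 1) // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def normalized_responses(img, n_orient=8):
    # squared responses divided by xi^2(s) of Eq. (3); the feature responses
    # are the positive square roots of the normalized local energies
    maps = np.stack([np.abs(fftconvolve(img, gabor_kernel(theta=a * np.pi / n_orient),
                                        mode='same'))**2 for a in range(n_orient)])
    xi2 = maps.sum() / (img.size * n_orient)      # Eq. (3): average over alpha, w, h
    return np.sqrt(maps / xi2)

img = np.random.rand(120, 120)
print(normalized_responses(img).shape)            # (8, 120, 120)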
Discriminative Feature Selection
In this subsection, we set up the relationship between the weak classifiers and the Gabor wavelet representation. After the Gabor responses are calculated, we learn the classification function utilizing the given feature set and the training set including both positive and negative images. Suppose the size of the training set is N. In our deep boosting system, the weak learning method is to select the single feature (i.e. weak classifier) which best divides the positive and negative samples.
To fix the notation, let x_i ∈ R^D be the feature representation of image I_i, where D is the dimension of the feature space. It is obvious that D = 120 × 120 × 1 × 8 in the first layer, corresponding to the Gabor wavelets in Sec. (3.2). Specifically, each element of x_i is a specific Gabor response of image I_i (in the first layer) or a composition of such responses (in the other layers). Note that in the rest of the paper, we apply x_i^d to denote the value of x_i in the d-th dimension. In each round of the feature selection procedure, instead of using the indicator function in Eq. (2), we introduce the sigmoid function defined by the formula:
\phi(x) = 1/(1 + e^{-x}) \qquad (4)
In this way, we consider a collection of regression functions {f_1, f_2, ..., f_D}, where each f_d is a candidate weak classifier whose definition is given in Definition 1.
Definition 1 (Discriminative Feature Selection)
In each round, the algorithm retrieves all of the candidate regression functions, each of which is formulated as:
f_d(x_i) = a\, \phi(x_i^d - \delta) + b, \qquad (5)
where φ(·) is the sigmoid function defined in Eq. (4). The candidate function with the current minimum training error is selected as the current weak classifier f, such that
\min_d \sum_{i=1}^{N} w_i \left\| f_d(x_i) - y_i \right\|^2, \qquad (6)
where f_d(x_i) is associated with the d-th element of x_i and the function parameters (δ, a, b).
According to the above discussion, we build a bridge between the weak classifiers and specific Gabor wavelets (or their compositions); thus, the weak classifier learning can be viewed as the feature selection procedure in our deep boosting model.
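Definition 1 can be implemented as a scan over regression stumps; the sketch below (ours) fits a, b in closed form by weighted least squares for each candidate threshold δ and returns the minimizer of Eq. (6). The quantile threshold grid is an assumption for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_stump(x, y, w, delta):
    # weighted least-squares fit of f(x) = a * sigmoid(x - delta) + b (Eq. 5);
    # closed form for (a, b) at a fixed threshold delta
    p = sigmoid(x - delta)
    W = w / w.sum()
    pm, ym = W @ p, W @ y
    var = W @ (p - pm)**2
    a = (W @ ((p - pm) * (y - ym))) / (var + 1e-12)
    b = ym - a * pm
    return a, b, W @ (a * p + b - y)**2

def select_feature(X, y, w, n_thresh=10):
    # Eq. (6): scan all dimensions and a quantile grid of thresholds,
    # return the weak classifier with minimum weighted squared error
    best_err, best = np.inf, None
    for d in range(X.shape[1]):
        for delta in np.quantile(X[:, d], np.linspace(0.1, 0.9, n_thresh)):
            a, b, err = fit_stump(X[:, d], y, w, delta)
            if err < best_err:
                best_err, best = err, (d, delta, a, b)
    return best, best_err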
Composite Feature Construction
The classification accuracy based on an individual feature or a single weak classifier is usually low, and the strong classifier, which is a weighted linear combination of weak classifiers, can hardly decrease the test error when the training error is approaching zero. It is therefore of interest to improve the discriminative ability of the features and to learn high-level representations as well.
In order to achieve the goal above, we introduce the feature combination strategy in Definition 2. All features selected in the feature selection stage are combined in a pairwise manner with spatial constraints, and the output composition features of each layer are treated as base components to construct the next layer.
Definition 2 (Feature Combination Rule)
For each image I, whose feature representation is denoted by x, we combine two selected features in a local area as
[x_j]^{l+1} = \beta_s [x_s]^l + \beta_t [x_t]^l, \quad \exists\, s, t \in \Omega(j) \qquad (7)
where [x_s]^l and [x_t]^l indicate the s-th and t-th feature responses corresponding to the image I in layer l.
As illustrated in Fig. (1), x_s and x_t are the response values of selected features, which are indicated by the red circles in each layer. β_s and β_t are the combination weights, proportional to the training error rates of the s-th and t-th weak classifiers calculated over the training set. Ω(j) is the local area determined by the projection coordinate of composition feature j on the normalized image (i.e. the image with the size of 120 × 120 pixels in practice). In the higher layer, the feature selection process is the same as in the lower layer, and can be formulated as Eq. (6). Please refer to Fig. (2) for more details about feature combination.
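The combination rule of Eq. (7) is a weighted pairwise sum; a minimal sketch (ours), where the pair list and the weights are assumed to come from the selection stage:

import numpy as np

def combine_features(x, pairs, betas):
    """Eq. (7): build layer-(l+1) responses from pairwise combinations of
    layer-l responses. `pairs` is a list of (s, t) index pairs chosen inside
    local 3x3 neighborhoods Omega(j); `betas` holds (beta_s, beta_t) weights
    proportional to the weak classifiers' training error rates."""
    return np.array([bs * x[s] + bt * x[t] for (s, t), (bs, bt) in zip(pairs, betas)])

# usage with hypothetical selected features of layer l:
# x_next = combine_features(x, pairs=[(0, 3), (1, 7)], betas=[(0.6, 0.4), (0.5, 0.5)])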
Integrating the two stages above, we stack the feature selection and combination layers to build up the final deep architecture.
Multi-class Decision
We employ the naive one-against-all strategy to handle the multi-class classification task in this paper. Given the training data {(x_i, y_i)}_{i=1}^N, y_i ∈ {1, 2, ..., K}, we train K binary strong classifiers, each of which returns a classification score for a given test image. In the testing phase, we predict the label of the image by referring to the classifier with the maximum score.
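The one-against-all decision then reduces to an arg-max over the K classifier scores, e.g.:

import numpy as np

def predict(scores):
    """One-against-all decision: `scores` is (K, M) with the K binary
    strong-classifier scores for M test images; pick the arg-max class."""
    return np.argmax(scores, axis=0) + 1      # labels in {1, ..., K}

print(predict(np.array([[0.2, 1.3], [0.9, -0.4], [0.1, 0.8]])))  # [2 1]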
EXPERIMENT
Dataset and Experiment Setting
We apply the proposed method to the general classification task, using the Caltech 256 Dataset [19] and the 15 Scenes Dataset [12] for validation. For both datasets, we split the data into training and test sets, utilize the training set to discover the discriminative features and learn the strong classifiers, and apply the test set to evaluate classification performance. As mentioned in Sec. (3.2), for both datasets we resize each image to 120 × 120 pixels and simply set the Gabor wavelets with one scale and eight orientations. In each layer, the strong classifier training is performed in a supervised manner and the numbers of selected features are set to 1000, 800 and 500, respectively. We combine the selected features densely in 3 × 3 blocks and capture 3000 ∼ 8000 composite features in every layer. According to the experiments, the number of composite features in each layer depends heavily on the complexity of the image content. The visualization of the feature map in each layer is shown in Fig. (3).
We carry out the experiments on a PC with a Core i7-3960X 3.30 GHz CPU and 24 GB memory. On average, it takes 5 ∼ 9 hours to train the model for a specific category, depending on the number of training examples and the complexity of the image content. The time cost for recognizing an image is around 25 ∼ 40 seconds.
Experiment I: Caltech 256 Dataset
We evaluate the performance of our deep boosting algorithm on the Caltech 256 Dataset [19], which is widely used as a benchmark for the general image classification task [13,14]. The Caltech 256 Dataset contains 30607 images in 256 categories. We consider the image classification problem on the Easy10 and Var10 image sets according to [20]. We evaluate classification results from 10 random splits of the training and testing data (i.e. 60 training images and the rest as testing images) and report the performance using the mean classification rate over the classes. Besides our own implementations, we refer to released MATLAB code from previously published literature [13,14] in our experiments as well. As Tab. (1) and Tab. (2) report, our method reaches classification rates of 94.9% and 85.4% on the Easy10 and Var10 datasets, outperforming the other approaches [11,14,13].
Experiment II: 15 Scenes Dataset
We also test our method on the 15 Scenes Dataset [12]. This dataset includes 4485 images in total, collected from 15 representative scene categories. Each category contains at least 200 images. The categories vary from mountain and forest to office and living room. Following the standard benchmark procedure in [13,12], we select 100 images per class for training and the others for testing. The performance is evaluated by randomly drawing the training and testing images 10 times. The mean and standard deviation of the recognition rates are shown in Table (3). In this experiment, our deep boosting method achieves better performance than previous works [21,13] as well. Note that, instead of HoG+SVM, we compare our approach with the GIST+SVM method in this experiment, due to the effectiveness of GIST [21] in the scene classification task. Owing to the subtle engineering details, we could hardly achieve the desired results when applying the methods of [14] and [13] in our own implementations, so we quote the reported result directly from [13] and drop [14] from the comparison. We also compare the recognition rates obtained by using each layer's strong classifier; the results of the top five categories on the 15 Scenes Dataset are reported in Fig. (4). It is obvious that our proposed feature combination strategy improves the performance effectively. Fig. 4: Classification accuracy of our proposed deep boosting method applying each layer's strong classifier. We select results from the top five categories in the 15 Scenes Dataset to report.
CONCLUSION
This paper studies a novel layered feature mining framework named deep boosting. Following the well-known boosting algorithm, the model sequentially selects visual features in each layer and composites the selected features of the same layer as the input of the upper layer to construct the hierarchical architecture. Our approach achieves excellent results on several image classification tasks. Moreover, the philosophy of such a deep model is very general and can be applied to other multimedia applications. | 2,433 |
1502.00712 | 1987841188 | Constructing effective representations is a critical but challenging problem in multimedia understanding. The traditional handcrafted features often rely on domain knowledge, limiting the performances of existing methods. This paper discusses a novel computational architecture for general image feature mining, which assembles the primitive filters (i.e. Gabor wavelets) into compositional features in a layer-wise manner. In each layer, we produce a number of base classifiers (i.e. regression stumps) associated with the generated features, and discover informative compositions by using the boosting algorithm. The output compositional features of each layer are treated as the base components to build up the next layer. Our framework is able to generate expressive image representations while inducing very discriminative functions for image classification. The experiments are conducted on several public datasets, and we demonstrate superior performances over state-of-the-art approaches. | On the other hand, learning hierarchical models to simultaneously construct multiple levels of visual representation has received much attention recently @cite_13. Our deep boosting method is partially motivated by recently developed deep learning techniques @cite_3 @cite_4 @cite_14. Different from previous handcrafted feature design methods, deep models learn the feature representation from raw data and effectively generate high-level semantic representations. However, as shown in a recent study @cite_14, these network-based hierarchical models often contain thousands of nodes in a single layer and are too complex to control in real multimedia applications. In contrast, an obvious characteristic of our study is that we build up the deep architecture to generate expressive image representations in a simple manner and obtain near-optimal classification rates in each layer. | {
"abstract": [
"Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach.",
"We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.",
"This paper illustrates a hierarchical generative model for representing and recognizing compositional object categories with large intra-category variance. In this model, objects are broken into their constituent parts and the variability of configurations and relationships between these parts are modeled by stochastic attribute graph grammars, which are embedded in an And-Or graph for each compositional object category. It combines the power of a stochastic context free grammar (SCFG) to express the variability of part configurations, and a Markov random field (MRF) to represent the pictorial spatial relationships between these parts. As a generative model, different object instances of a category can be realized as a traversal through the And-Or graph to arrive at a valid configuration (like a valid sentence in language, by analogy). The inference recognition procedure is intimately tied to the structure of the model and follows a probabilistic formulation consisting of bottom-up detection steps for the parts, which in turn recursively activate the grammar rules for top-down verification and searches for missing parts. We present experiments comparing our results to state of art methods and demonstrate the potential of our proposed framework on compositional objects with cluttered backgrounds using training and testing data from the public Lotus Hill and Caltech datasets.",
""
],
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_13",
"@cite_3"
],
"mid": [
"2143352446",
"2136922672",
"1994205671",
"2310919327"
]
} | DEEP BOOSTING: LAYERED FEATURE MINING FOR GENERAL IMAGE CLASSIFICATION | Feature engineering (i.e., constructing effective image representations) has been actively studied in machine learning and computer vision [1,2,3]. In the literature, the terms feature selection and feature mining often refer to selecting a subset of relevant features from a given feature space [1,4,2]. One typical feature selection method is the Adaboost algorithm, which merges feature selection with the learning procedure. According to previous work in [5], Adaboost constructs a pool of features (i.e., weak classifiers) and selects the discriminative ones to form the final strong classifier. These boosting-based approaches provide an effective way to handle image classification tasks and have achieved outstanding results in the past decade.
Despite their admitted success, such boosting methods suffer from two essential problems. First, the weak classifier selected at each boosting step is limited by its own discriminative ability when faced with complex classification problems; in order to decrease the training error, the final classifier must be a linear combination of a large number of weak classifiers obtained through boosting [6]. Second, an effective learning procedure usually drives the training error toward zero; however, with an unknown decision boundary, how to decrease the test error once the training error approaches zero is still an open issue [7].
In recent decades, hierarchical models, also known as deep models [8,9], have played an irreplaceable role in the multimedia and computer vision literature. Generally, such a hierarchical architecture represents different layers of visual primitives such as pixels, edges, object parts and so on. The basic principles of hierarchical models concentrate on two aspects: (1) the layer-wise learning philosophy, whose goal is to learn each layer of the model individually and stack the layers to form the final architecture; (2) the feature combination rules, which aim at combining features detected in lower layers to construct expressive higher-layer features by introducing an activation function. These exciting lines of research inspire us to employ such compositional representations to construct expressive features with more discriminative power. Different from previous works [8,9,10] that apply hierarchical generative models, we address the problem of general image classification directly and design the final classifier to leverage both generalization and discrimination abilities.
This paper proposes a novel feature mining framework, namely deep boosting, which aims to construct effective discriminative features for image classification. Compared with the concept of 'mining' proposed in [2], whose goal is to pick a subset of features as well as to model the entire feature space, we use the word to describe the process of feature selection and combination, which is more related to [6]. In each layer, following the well-known boosting method [7], our deep model sequentially selects visual features to learn a classifier that reduces the training error. In order to construct high-level discriminative representations, we composite the selected features of the same layer and feed them into the higher layer to build a multilayer architecture. Another key aspect of our approach is the use of spatial information when combining individual features, which makes the upper-layer representation more structured at the local scale. The experiments show that our method achieves excellent performance on image classification tasks.
RELATED WORK
In the past few decades, many works have focused on designing different types of features to capture the characteristics of images, such as color, SIFT and HoG [11]. Based on these feature descriptors, the Bag-of-Features (BoF) model is probably the most classical image representation method in computer vision and related multimedia applications. Several promising studies [12,13,14] have improved this traditional approach in different aspects. Among these extensions, a class of sparse coding based methods [13,14], which employ the spatial pyramid matching kernel (SPM) proposed by Lazebnik et al., has achieved great success in image classification. Although more and more effective representation methods are being developed, the lack of high-level image expression still hinders the construction of an ideal vision system.
On the other hand, learning hierarchical models to simultaneously construct multiple levels of visual representation has received much attention recently [15]. Our deep boosting method is partially motivated by recently developed deep learning techniques [8,9,16]. Different from previous handcrafted feature design methods, deep models learn the feature representation from raw data and effectively generate high-level semantic representations. However, as shown in a recent study [16], these network-based hierarchical models often contain thousands of nodes in a single layer and are too complex to control in real multimedia applications. In contrast, an obvious characteristic of our study is that we build up the deep architecture to generate expressive image representations in a simple manner and obtain near-optimal classification rates in each layer.
DEEP BOOSTING FOR IMAGE RECOGNITION
Preprocessing
The basic units in the Gentle Adaboost algorithm are individual features, also known as weak classifiers. Unlike the rectangle features in [5] for face detection, we employ Gabor wavelet responses as the image feature representation. Let $I$ be an image defined on the image lattice domain and $G$ be a Gabor wavelet element with parameters $(w, h, \alpha, s)$, where $(w, h)$ is the central position in the lattice domain, and $\alpha$ and $s$ denote the orientation and scale parameters. Following [18], we use a normalization term to make the Gabor responses comparable across different training images:
\[
\xi^2(s) = \frac{1}{|P|\,A} \sum_{\alpha}\sum_{w,h} \left|\langle I, G_{w,h,\alpha,s}\rangle\right|^2, \tag{3}
\]
where $|P|$ is the total number of pixels in image $I$, and $A$ is the number of orientations; $\langle \cdot, \cdot \rangle$ denotes the convolution operation. For each image $I$, we normalize the local energy as $|\langle I, G_{w,h,\alpha,s}\rangle|^2 / \xi^2(s)$ and define the positive square root of this normalized result as the feature response. In practice, we resize each image to 120 × 120 pixels and apply one scale and eight orientations in our implementation, so there are in total 120 × 120 × 1 × 8 filter responses for each grayscale image.
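To make the preprocessing concrete, the following Python sketch computes the normalized Gabor responses of Eq. (3) for one scale and eight orientations. The OpenCV kernel parameters (kernel size, sigma, wavelength, aspect ratio) are illustrative assumptions; the paper fixes only the image size, the single scale and the eight orientations.

```python
# A minimal sketch of the normalized Gabor feature responses (Eq. 3).
# Kernel size, sigma, wavelength and aspect ratio are assumed values.
import cv2
import numpy as np

def gabor_responses(image, num_orientations=8, ksize=17, sigma=4.0, lambd=8.0):
    """Return one normalized response map per orientation for a grayscale image."""
    img = cv2.resize(image, (120, 120)).astype(np.float32)
    energies = []
    for k in range(num_orientations):
        theta = k * np.pi / num_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5)
        resp = cv2.filter2D(img, cv2.CV_32F, kernel)
        energies.append(resp ** 2)                  # local energy |<I, G>|^2
    energies = np.stack(energies)                   # shape (A, 120, 120)
    # Eq. (3): mean energy over all positions |P| and orientations A at this scale.
    xi2 = energies.sum() / (img.size * num_orientations)
    # Feature response: positive square root of the normalized local energy.
    return np.sqrt(energies / xi2)
```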
Discriminative Feature Selection
In this subsection, we set up the relationship between weak classifiers and the Gabor wavelet representation. After the Gabor responses are calculated, we learn the classification function using the given feature set and a training set containing both positive and negative images. Suppose the size of the training set is N. In our deep boosting system, the weak learning method selects the single feature (i.e., weak classifier) that best separates the positive and negative samples.
To fix the notation, let $x_i \in \mathbb{R}^D$ be the feature representation of image $I_i$, where $D$ is the dimension of the feature space. Obviously, $D$ = 120 × 120 × 1 × 8 in the first layer, corresponding to the Gabor wavelets in Sec. (3.2). Specifically, each element of $x_i$ is a particular Gabor response of image $I_i$ (in the first layer) or a composition of such responses (in the other layers). Note that in the rest of the paper, we use $x_i^d$ to denote the value of $x_i$ in the $d$-th dimension. In each round of the feature selection procedure, instead of using the indicator function in Eq.(2), we introduce the sigmoid function defined by the formula:
\[
\phi(x) = 1/(1 + e^{-x}) \tag{4}
\]
In this way, we consider a collection of regression functions $\{f_1, f_2, \ldots, f_D\}$, where each $f_d$ is a candidate weak classifier whose definition is given in Definition 1.
Definition 1 (Discriminative Feature Selection)
In each round, the algorithm retrieves all of the candidate regression functions, each of which is formulated as:
\[
f_d(x_i) = a\,\phi(x_i^d - \delta) + b, \tag{5}
\]
where $\phi(\cdot)$ is the sigmoid function defined in Eq.(4). The candidate function with the current minimum training error is selected as the current weak classifier $f$, such that
\[
\min_{d} \sum_{i=1}^{N} w_i \left( f_d(x_i) - y_i \right)^2, \tag{6}
\]
where $f_d(x_i)$ is associated with the $d$-th element of $x_i$ and the function parameters $(\delta, a, b)$.
According to the above discussion, we build a bridge between each weak classifier and a specific Gabor wavelet (or a composition of wavelets); thus, weak classifier learning can be viewed as the feature selection procedure in our deep boosting model.
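As an illustration, one round of this selection could be implemented as below. The closed-form weighted least-squares fit of $(a, b)$ and the grid search over $\delta$ are our assumptions; the paper only specifies the stump form of Eq. (5) and the weighted error of Eq. (6).

```python
# A hedged sketch of Definition 1: pick the feature dimension whose regression
# stump a*phi(x^d - delta) + b has minimal weighted squared error (Eq. 6).
import numpy as np

def phi(x):
    return 1.0 / (1.0 + np.exp(-x))                 # sigmoid, Eq. (4)

def fit_stump(z, y, w):
    """Closed-form weighted least squares for f = a*z + b."""
    W, Sz, Sy = w.sum(), (w * z).sum(), (w * y).sum()
    Szz, Szy = (w * z * z).sum(), (w * z * y).sum()
    a = (W * Szy - Sz * Sy) / (W * Szz - Sz ** 2 + 1e-12)
    b = (Sy - a * Sz) / W
    err = (w * (a * z + b - y) ** 2).sum()          # weighted error, Eq. (6)
    return a, b, err

def select_weak_classifier(X, y, w, n_thresholds=16):
    """X: (N, D) feature responses; y: (N,) labels; w: (N,) boosting weights."""
    best_params, best_err = None, np.inf
    for d in range(X.shape[1]):
        xd = X[:, d]
        for delta in np.linspace(xd.min(), xd.max(), n_thresholds):
            a, b, err = fit_stump(phi(xd - delta), y, w)
            if err < best_err:
                best_params, best_err = (d, delta, a, b), err
    return best_params, best_err
```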
Composite Feature Construction
Since the classification accuracy based on an individual feature or a single weak classifier is usually low, and the strong classifier, i.e., the weighted linear combination of weak classifiers, can hardly decrease the test error once the training error approaches zero, it is of interest to improve the discriminative ability of the features and to learn high-level representations as well.
In order to achieve the goal above, we introduce the feature combination strategy in Definition 2. All features selected in the feature selection stage are combined in a pairwise manner under spatial constraints, and the output composite features of each layer are treated as the base components to construct the next layer.
Definition 2 (Feature Combination Rule)
For each image $I$, whose feature representation is denoted by $x$, we combine two selected features in a local area as:
\[
[x_j]^{l+1} = \beta_s [x_s]^{l} + \beta_t [x_t]^{l}, \quad \exists\, s, t \in \Omega(j) \tag{7}
\]
where $[x_s]^l$ and $[x_t]^l$ indicate the $s$-th and $t$-th feature responses of image $I$ in layer $l$.
As illustrated in Fig.(1), $x_s$ and $x_t$ are the response values of selected features, indicated by the red circles in each layer. $\beta_s$ and $\beta_t$ are combination weights proportional to the training error rates of the $s$-th and $t$-th weak classifiers calculated over the training set. $\Omega(j)$ is the local area determined by the projected coordinate of composite feature $j$ on the normalized image (i.e., the 120 × 120 pixel image in practice). In higher layers, the feature selection process is the same as in the lower layer and can be formulated as in Eq. (6). Please refer to Fig.(2) for more details about the feature combination.
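The combination rule can be sketched as follows. The pairwise enumeration within a block, the midpoint location of the composite feature, and the weight handling are illustrative assumptions, since the paper does not fully specify these details.

```python
# A hedged sketch of the feature combination rule (Eq. 7): pairs of selected
# features lying in the same local block are merged into composite features.
import itertools
import numpy as np

def combine_features(responses, coords, betas, block=3):
    """responses: (M,) selected responses; coords: (M, 2) projected pixel
    coordinates on the 120x120 image; betas: (M,) combination weights."""
    new_responses, new_coords = [], []
    for s, t in itertools.combinations(range(len(responses)), 2):
        # spatial constraint: both features must fall within the same local block
        if np.all(np.abs(coords[s] - coords[t]) <= block // 2):
            new_responses.append(betas[s] * responses[s] + betas[t] * responses[t])
            new_coords.append((coords[s] + coords[t]) / 2.0)  # assumed location
    return np.asarray(new_responses), np.asarray(new_coords)
```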
Integrating the two stages in Sec.
Multi-class Decision
We employ the naive one-against-all strategy to handle the multi-class classification task in this paper. Given the training data $\{(x_i, y_i)\}_{i=1}^{N}$, $y_i \in \{1, 2, \ldots, K\}$, we train $K$ binary strong classifiers, each of which returns a classification score for a given test image. In the testing phase, we predict the label of an image according to the classifier with the maximum score.
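A minimal sketch of this decision rule follows; `classifiers` is assumed to be the list of the K learned strong classifiers, each mapping a feature vector to a real-valued score.

```python
# One-against-all prediction: the label of the highest-scoring binary classifier.
import numpy as np

def predict_label(x, classifiers):
    scores = [clf(x) for clf in classifiers]   # one score per category
    return int(np.argmax(scores)) + 1          # category labels are 1..K
```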
EXPERIMENT
Dataset and Experiment Setting
We apply the proposed method to general classification tasks, using the Caltech 256 Dataset [19] and the 15 Scenes Dataset [12] for validation. For both datasets, we split the data into training and test sets, use the training set to discover the discriminative features and learn the strong classifiers, and use the test set to evaluate classification performance. As mentioned in Sec. (3.2), for both datasets we resize each image to 120 × 120 pixels and simply use Gabor wavelets with one scale and eight orientations. In each layer, the strong classifier is trained in a supervised manner, and the numbers of selected features are set to 1000, 800 and 500, respectively. We combine the selected features densely within 3 × 3 blocks and obtain 3000 ∼ 8000 composite features per layer. According to the experiments, the number of composite features in each layer depends heavily on the complexity of the image content. The visualization of the feature map in each layer is shown in Fig.(3).
We carry out the experiments on a PC with a Core i7-3960X 3.30 GHz CPU and 24 GB of memory. On average, training a model for one category takes 5 ∼ 9 hours, depending on the number of training examples and the complexity of the image content. The time cost for recognizing an image is around 25 ∼ 40 seconds.
Experiment I: Caltech 256 Dataset
We evaluate the performance of our deep boosting algorithm on the Caltech 256 Dataset [19], which is widely used as a benchmark for general image classification [13,14]. The Caltech 256 Dataset contains 30607 images in 256 categories. We consider the image classification problem on the Easy10 and Var10 image sets according to [20]. We evaluate classification results over 10 random splits of the training and testing data (i.e., 60 training images and the rest as testing images) and report the performance as the mean per-class classification rate. Besides our own implementations, we also use released Matlab code from previously published literature [13,14] in our experiments. As Tab.(1) and Tab.(2) report, our method reaches classification rates of 94.9% and 85.4% on the Easy10 and Var10 sets, respectively, outperforming other approaches [11,14,13].
Experiment II: 15 Scenes Dataset
We also test our method on the 15 Scenes Dataset [12]. This dataset includes a total of 4485 images collected from 15 representative scene categories. Each category contains at least 200 images, and the categories range from mountain and forest to office and living room. Following the standard benchmark procedure in [13,12], we select 100 images per class for training and use the others for testing. The performance is evaluated by randomly sampling the training and testing images 10 times. The mean and standard deviation of the recognition rates are shown in Table(3). In this experiment, our deep boosting method again achieves better performance than previous works [21,13]. Note that, instead of HoG+SVM, we compare our approach with the GIST+SVM method in this experiment, due to the effectiveness of GIST [21] in scene classification. Because of subtle engineering details, we could hardly reproduce the desired results of [14] and [13] in our own implementations, so we quote the reported result directly from [13] and omit [14] from the comparison. We also compare the recognition rates obtained with each layer's strong classifier; the results for the top five categories of the 15 Scenes Dataset are reported in Fig.(4). It is evident that our proposed feature combination strategy improves the performance effectively. Fig. 4: Classification accuracy of our proposed deep boosting method using each layer's strong classifier. We report results for the top five categories in the 15 Scenes Dataset.
CONCLUSION
This paper studies a novel layered feature mining framework named deep boosting. Following the well-known boosting algorithm, the model sequentially selects visual features in each layer and composites the selected features of the same layer as the input of the upper layer to construct the hierarchical architecture. Our approach achieves excellent results on several image classification tasks. Moreover, the philosophy of such a deep model is very general and can be applied to other multimedia applications. | 2,433 |
1502.00500 | 338621103 | In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimation of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure and allows for global localization in real time (30Hz). | Finding the transformation given by a set of point correspondences is a common problem in computer vision, e.g., for ego-motion estimation. A method for obtaining a closed-form solution by means of a Least-Squares Estimation (LSE) is given in @cite_19. However, when dealing with sets of point correspondences containing wrong associations, the result given by an LSE is distorted by the outliers. This problem is commonly addressed by using a sample consensus method such as RANSAC @cite_26. | {
"abstract": [
"In many applications of computer vision, the following problem is encountered. Two point patterns (sets of points) (x sub i ) and (x sub i ); i=1, 2, . . ., n are given in m-dimensional space, and the similarity transformation parameters (rotation, translation, and scaling) that give the least mean squared error between these point patterns are needed. Recently, K.S. (1987) and B.K.P. (1987) presented a solution of this problem. Their solution, however, sometimes fails to give a correct rotation matrix and gives a reflection instead when the data is severely corrupted. The proposed theorem is a strict solution of the problem, and it always gives the correct transformation parameters even when the data is corrupted. >",
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing"
],
"cite_N": [
"@cite_19",
"@cite_26"
],
"mid": [
"2128019145",
"2085261163"
]
} | Fast and Robust Feature Matching for RGB-D Based Localization | Localization is a fundamental requirement for most tasks in mobile robotics. The correct operation of mobile robots, such as UAVs, transportation robots or tour guides, is based on their ability to estimate their position and orientation in the world. SLAM approaches face this problem when no map is given; the map creation and the localization with respect to it then have to be performed simultaneously. Given a map, fast and robust methods become necessary to perform localization efficiently within large environments.
The recent emergence of low-cost RGB-D sensors [1], [2] has made them attractive as an alternative to the widely used laser scanners, in particular for inexpensive robotic applications. Since such sensors gather dense 3D information of the scene at high frame rates (30 Hz), it is possible to use them for six degrees of freedom (DoF) localization, which becomes necessary for robots whose 3D movement does not obey clear constraints that can be modeled beforehand. Recent approaches that perform RGB-D SLAM [3] can create a database of visual features (Figure 1) as a sparse representation of the environment. This sparse map of features can be used to perform six-DoF global localization using only an RGB-D sensor. State-of-the-art localization methods found in the literature are probabilistic filter algorithms, e.g., the extended Kalman filter (EKF) [4] and particle filters [5]. Both use a Bayesian model to integrate sensor measurements (e.g., odometry, laser scans) into the current state representation. Particle filters overcome the severe scale limitations of Kalman filtering and can deal with highly non-linear models and non-Gaussian posteriors [6]. Particle filters are typically used in 2D and, therefore, particles need to represent a state with only three DoF. In 3D localization, however, particles would have to be generated in a six-DoF space, which dramatically increases the number of required particles. Most approaches to visual localization use descriptor-based feature matching to perform data association via nearest neighbor searches in descriptor space. While this technique has been demonstrated to be useful and robust for matching two images to each other, it is prone to failure when matching against a huge database of features such as a large-scale map.
The contribution of this paper is a novel approach for vision-based global localization that performs independently of the previous state of the system, using exclusively an RGB-D sensor and a sparse map of visual features. Our system overcomes issues of traditional feature matching techniques on large feature maps, such as the existence of different points of interest with similar descriptor values or the occurrence of the same keypoint on different objects of the same class (e.g., several chairs of the same model). In order to cope with this, we develop a two-step algorithm. In the first step we compute a fast guess of the sensor pose. We use it in the second step to allow for a more efficient spatial search for matching features. Using a spatial search method is possible due to the availability of depth information for the features. To ensure robust matching, we exploit information about the descriptor distances of matched features encountered during map creation. The results we obtain in terms of accuracy, robustness and execution time show that our approach is suitable for online navigation of a mobile robot.
III. METHODOLOGY
Our goal is to localize an RGB-D sensor in a given map of the world using only the data provided by the device. Our approach (schematically depicted in Figure 3) performs the following high-level tasks: First, visual features are detected in the monochromatic image. Then, a descriptor is computed for each feature. These descriptors are used to perform data association between the features detected in the image and those contained in a feature map of the world. This matching process is based on proximity between features in the high-dimensional feature space. The correspondences are used to compute a coarse estimate of the sensor pose. This estimate is used to guide nearest neighbor searches in 3D space, looking for each feature found in the image at the location where it is expected to be in the map. Matches established this way are used to compute a fine pose estimate.
Nearest neighbor searches are an important task in our approach. Searches are performed both in the high-dimensional feature space and in 3D space. During the development of our system, they proved to be the most time-consuming task. The OpenCV [27] implementation of the FLANN library provided by Muja and Lowe [28] is used to perform approximate nearest neighbor (ANN) searches. The search performance is highly dependent on the data structure and algorithm used. A series of experiments was performed to optimize the search for use with large maps of up to 150,000 features. This optimization process successfully reduced the runtime for nearest neighbor search by up to 25% for the 3D index and up to 9% for the high-dimensional descriptor space. However, the ANN search remains the computational bottleneck, and reducing it is the main focus of the presented approach.
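As an illustration, descriptor-space ANN matching with OpenCV's FLANN wrapper might look as follows; the KD-tree parameters and the number of checks are illustrative, not the tuned configuration reported above.

```python
# A hedged sketch of ANN matching of SURF descriptors against the map database.
import cv2
import numpy as np

FLANN_INDEX_KDTREE = 1
matcher = cv2.FlannBasedMatcher(
    dict(algorithm=FLANN_INDEX_KDTREE, trees=4),  # index parameters (assumed)
    dict(checks=32),                              # search accuracy/speed tradeoff
)

def match_descriptors(frame_desc, map_desc, k=1):
    """frame_desc: (N, 64) SURF descriptors of the frame; map_desc: (K, 64)."""
    return matcher.knnMatch(frame_desc.astype(np.float32),
                            map_desc.astype(np.float32), k=k)
```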
The initial steps of our approach are similar to an image matching process. FLANN indices are created offline in order to minimize search time during operation. The main differences lie in the process of pose estimation from the sensor data, which is the major contribution of this work (Figure 3). Figure 2 shows some pairs of typical Kinect data gathered during the experiments. The first step is visual feature detection on the monochromatic image using SURF [29]. Then, SURF descriptors are computed for each of the detected features. The next step is the coarse pose estimation. To obtain a coarse estimate of the camera pose with respect to the world, it is necessary to establish several initial matches. Therefore, the process comprises two steps that are repeated until a good coarse estimate is found. First, some matches between features detected in the camera image and map features are established. The corresponding feature in the map is the nearest neighbor in the 64-dimensional feature space (the SURF features used in our implementation have 64 dimensions). Then, this small group of matches is used to compute the coarse pose estimate. This process involves finding the largest subset of mutually consistent matches, in a similar way to [16]. If the matches are not good enough to produce a satisfactory coarse estimate, more features from the camera image are matched against the feature database and the estimate is computed again. The process ends when an acceptable estimate is reached or when all the features detected in the image have already been matched. Once the coarse pose of the sensor is estimated, it is used to predict where the features observed by the sensor are with respect to the map frame. For each feature, all the map features located within a certain neighborhood of the predicted spatial location are taken as match candidates. This matching process guided by spatial information is faster than descriptor-based matching, because the dimensionality is much lower.
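The mutual-consistency check at the core of the coarse estimation can be sketched as below. The greedy subset construction and the distance tolerance are our assumptions; the paper states only that the largest subset of mutually consistent matches is sought, in a similar way to its reference [16].

```python
# A hedged sketch of selecting mutually consistent matches: two image-to-map
# matches are consistent if the 3D distance between the image points equals the
# distance between their map counterparts, up to a tolerance.
import numpy as np

def consistent_subset(img_pts, map_pts, tol=0.05):
    """img_pts, map_pts: (M, 3) arrays of matched 3D points (in meters)."""
    d_img = np.linalg.norm(img_pts[:, None] - img_pts[None], axis=-1)
    d_map = np.linalg.norm(map_pts[:, None] - map_pts[None], axis=-1)
    ok = np.abs(d_img - d_map) < tol          # pairwise consistency matrix
    # greedily grow the subset, seeded by the match with most consistent partners
    subset = [int(ok.sum(axis=1).argmax())]
    for j in range(len(img_pts)):
        if j not in subset and all(ok[j, k] for k in subset):
            subset.append(j)
    return subset
```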
Given a sensor reading with $N$ features, a standard matching approach would take $N \times t_{64D}$, where $t_{64D}$ is the search time in the descriptor space. In contrast, the overall runtime for matching features in our approach is
\[
M \times t_{64D} + N \times t_{3D},
\]
where $M$ is the average number of nearest neighbors required to establish the coarse pose estimate and $t_{3D}$ is the spatial nearest neighbor search time per feature. In an analysis of selecting the optimal approximate nearest neighbor algorithm for matching SIFT features to a large database, Muja and Lowe [28] achieved a speedup factor of 30 with respect to a linear search, at 90% search precision. Since $M \ll N$ and $t_{3D} < t_{64D}$, our approach outperforms a standard matching approach (timings for $t_{3D}$ and $t_{64D}$ are given in Table I). For each candidate feature found, a match between the camera feature and this candidate is established only if the Euclidean distance between them in descriptor space is lower than a threshold. This threshold is set to the standard deviation observed when matching this feature during the mapping process. Finally, a RANSAC method is used to compute the final estimate of the camera pose with respect to the map.
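A sketch of the final estimation step: a closed-form least-squares rigid transform (in the spirit of the LSE solution cited in the related work) wrapped in a basic RANSAC loop. The iteration count and the inlier threshold are illustrative choices.

```python
# A hedged sketch of RANSAC pose estimation from 3D-3D correspondences.
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t; P, Q are (n, 3) arrays."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_pose(P, Q, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_transform(P[best_inliers], Q[best_inliers])  # refit on inliers
```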
IV. EXPERIMENTS
In this section we evaluate our approach in terms of the accuracy of the estimate with respect to the ground truth, the robustness of the estimation method, and the process runtimes. We present experimental results comparing the proposed approach to a baseline method that performs feature matching based exclusively on proximity in feature space.
This baseline approach represents a standard way to address the visual localization problem, as in an image matching process. As illustrated in Figure 3, it consists of the following basic phases: keypoint detection and descriptor extraction, for which we use the SURF implementation from OpenCV. Then, the detected features are matched against the nearest map features in the 64D space using the OpenCV FLANN library. Finally, the camera pose with respect to the map is computed using RANSAC for robustness.
The goal of our approach is to estimate the Kinect sensor pose. In order to evaluate the quality of the estimation, the result is compared to a ground truth sensor pose for each estimate. In order to get the ground truth trajectory, we use a PR2 robot with a Kinect sensor mounted on its head. The PR2 base laser scanner is used to obtain a highly accurate 2D position estimate for the robot. To record the data for our experiments, the PR2 was navigated through the laboratory of the AIS department along the corridor of the ground floor. A top view of the environment is shown in Figure 4.
To evaluate our approach, we gathered two datasets of the same area, which we use for training and evaluation, i.e., one to create the map and the other one to evaluate our algorithm. The robot pose estimate obtained from the laser range measurements is used to create the feature map from the training data and as ground truth for the evaluation dataset. Nevertheless, the feature matching process is still performed during mapping to obtain standard deviations for matching features. The resulting map is stored as a database of feature locations, feature descriptors and the descriptor matching deviation.
The number of nearest neighbors considered for correspondences determines the tradeoff between success rate and runtime. Therefore, three different versions of the baseline approach are evaluated, performing single, ten, and 20 nearest neighbor matching, respectively.
Robustness is given by the ratio of successful final estimates of the camera pose to the total number of attempts. An estimate is considered failed if the translational error along any direction is higher than 0.5 m or if no pose can be computed. Failures are not taken into account when calculating the RMSE values. Figure 5 shows the robustness values for the three versions of the baseline approach and the proposed approach as percentages of success. Notice how the proposed approach clearly outperforms the baseline approaches.
A. Accuracy
Accuracy is evaluated by computing the root mean square errors (RMSE) of the pose estimate (blue crosses in Figure 4) with respect to the ground truth pose (red pluses in Figure 4). Let XYZ be the fixed coordinate frame of the map, where the X axis is parallel to the main direction of the motion (i.e., the long corridor described above) and the Y axis is perpendicular to X, both in the horizontal plane. Let UVW be the mobile frame that represents the robot pose estimated by means of the laser range scanner (ground truth), and U'V'W' the same frame estimated using the considered approach. The translational errors RMSE X and RMSE Y refer to the RMS value of the distance between the origins of UVW and U'V'W' along the respective axis of the map frame. The rotational errors RMSE α, RMSE β and RMSE γ are the RMS values of the angles between the axes U and U', V and V', and W and W', respectively. Figure 6 shows the accuracy results obtained for each approach. Table I shows the runtimes of all relevant components of our approach and compares them to the equivalent runtimes recorded for the baseline approaches. It is easy to observe that our dual nearest neighbor search is faster than a single search in descriptor space, due to the low number of correspondences needed to compute a coarse pose estimate.
B. Runtime
The approach we introduce in this work outperforms all versions of the baseline in terms of accuracy and robustness. The translational RMS error is 6.2 cm along the X axis and only 2.9 cm along the Y axis. Note that the translational RMSE Y is lower for the third baseline experiment, but the rest of the measurements show higher errors than our approach; moreover, the poor robustness of this baseline version makes our approach the better alternative. All baseline approaches exhibit success ratios below 50%. In contrast, our approach produces a successful estimate in 77% of the cases. Furthermore, the remaining cases do not lead to a wrong estimate: in most of them, no coarse pose estimate can be computed due to a lack of mutual coherence between matches, so the rest of the pose estimation process can be skipped, leading to fast runtimes in failure cases. In terms of runtime, our algorithm shows an average runtime of 32.3 ms. This means that a rate of 30 Hz (the sensor rate) is reachable if the feature detection and the descriptor extraction run in parallel (e.g., on a GPU) with the estimation process.
V. CONCLUSIONS
In this paper we presented an approach to estimate the pose of a robot using a single RGB-D sensor. Visual features are extracted from the monochromatic images. Nearest neighbor searches in the feature descriptor space are carried out to generate sets of correspondences between the image and the feature map. A restrictive coherency check based on the preservation of mutual distances between features in 3D space is carried out to obtain a small set of high-quality correspondences. These correspondences are used to compute a coarse sensor pose estimate, which guides a spatial matching process. The final pose estimate is computed from the correspondences obtained from the spatially guided matching.
We carried out experiments for a global evaluation of the approach using real data gathered in an office environment large enough to generate a high number of features in the feature map. Our approach was shown to deal well with large feature maps, even when several features are present in the map to represent the same point in the world. It also works properly in environments with repeated objects (chairs, tables, screens, etc.) and repetitive structure. It outperforms methods that perform feature matching only in feature space, taken as baseline approaches. In our experiments, the number of successful pose estimates is 54% higher than the number of correct estimates provided by the best baseline approach for the same dataset. The maximum translational RMS error is 6.2 cm and the maximum rotational RMS error is 2.1°. In the current implementation the runtime was 0.5 seconds, but this value is largely influenced by the feature detection and descriptor extraction processes. If these processes are performed fast enough, for example on a GPU, in parallel with the estimation process, our approach is able to operate in real time (30 Hz). In conclusion, the approach we introduced here is better in terms of accuracy, robustness and runtime than approaches that perform feature matching based only on proximity between descriptors.
VI. ACKNOWLEDGMENT
We would like to thank the people of the University of Freiburg (Germany) for their assistance during Miguel Heredia's visit to the Autonomous Intelligent Systems Laboratory. | 2,664 |
1502.00500 | 338621103 | In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimation of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure and allows for global localization in real time (30Hz). | Zhang @cite_24 provides an uncertainty estimation of a 3D stereo-based localization approach using a correspondence-based method to estimate the robot pose. As in our work, visual features are extracted from the image, and RANSAC is used to remove outliers from the initial matches between features in two consecutive images. In contrast, our approach establishes correspondences by image-to-map matching. Thus, additional sources of false correspondences arise, such as repeated objects in the world, the presence of several map features extracted from the same point in the world, or the much larger number of features in the map, which increases the chance of random matches. | {
"abstract": [
"Stereo camera is a very important sensor for mobile robot localization and mapping. Its consecutive images can be used to estimate the location of the robot with respect to its environment. This estimation will be fused with location estimates from other sensors for a globally optimal location estimate. In the data fusion context, it is important to compute the uncertainty of the stereo-based localization. In this paper, we propose an approach to obtain the uncertainty of localization when a correspondence-based method is used to estimate the robot pose. The computational complexity of this approach is O(n). Where n is the number of corresponding image points. Experimental results show that this approach is promising."
],
"cite_N": [
"@cite_24"
],
"mid": [
"1541573393"
]
} | Fast and Robust Feature Matching for RGB-D Based Localization | Localization is a fundamental requirement for most tasks in mobile robotics. The correct operation of mobile robots, such as UAVs, transportation robots or tour guides, is based on their ability to estimate their position and orientation in the world. SLAM approaches face this problem when no map is given; the map creation and the localization with respect to it then have to be performed simultaneously. Given a map, fast and robust methods become necessary to perform localization efficiently within large environments.
The recent emergence of low-cost RGB-D sensors [1], [2] has made them attractive as an alternative to the widely used laser scanners, in particular for inexpensive robotic applications. Since such sensors gather dense 3D information of the scene at high frame rates (30 Hz), it is possible to use them for six degrees of freedom (DoF) localization, which becomes necessary for robots whose 3D movement does not obey clear constraints that can be modeled beforehand. Recent approaches that perform RGB-D SLAM [3] can create a database of visual features (Figure 1) as a sparse representation of the environment. This sparse map of features can be used to perform six-DoF global localization using only an RGB-D sensor. State-of-the-art localization methods found in the literature are probabilistic filter algorithms, e.g., the extended Kalman filter (EKF) [4] and particle filters [5]. Both use a Bayesian model to integrate sensor measurements (e.g., odometry, laser scans) into the current state representation. Particle filters overcome the severe scale limitations of Kalman filtering and can deal with highly non-linear models and non-Gaussian posteriors [6]. Particle filters are typically used in 2D and, therefore, particles need to represent a state with only three DoF. In 3D localization, however, particles would have to be generated in a six-DoF space, which dramatically increases the number of required particles. Most approaches to visual localization use descriptor-based feature matching to perform data association via nearest neighbor searches in descriptor space. While this technique has been demonstrated to be useful and robust for matching two images to each other, it is prone to failure when matching against a huge database of features such as a large-scale map.
The contribution of this paper is a novel approach for vision-based global localization that performs independently of the previous state of the system, using exclusively an RGB-D sensor and a sparse map of visual features. Our system overcomes issues of traditional feature matching techniques on large feature maps, such as the existence of different points of interest with similar descriptor values or the occurrence of the same keypoint on different objects of the same class (e.g., several chairs of the same model). In order to cope with this, we develop a two-step algorithm. In the first step we compute a fast guess of the sensor pose. We use it in the second step to allow for a more efficient spatial search for matching features. Using a spatial search method is possible due to the availability of depth information for the features. To ensure robust matching, we exploit information about the descriptor distances of matched features encountered during map creation. The results we obtain in terms of accuracy, robustness and execution time show that our approach is suitable for online navigation of a mobile robot.
III. METHODOLOGY
Our goal is to localize an RGB-D sensor in a given map of the world using only the data provided by the device. Our approach (schematically depicted in Figure 3) performs the following high-level tasks: First, visual features are detected in the monochromatic image. Then, a descriptor is computed for each feature. These descriptors are used to perform data association between the features detected in the image and those contained in a feature map of the world. This matching process is based on proximity between features in the high-dimensional feature space. The correspondences are used to compute a coarse estimate of the sensor pose. This estimate is used to guide nearest neighbor searches in 3D space, looking for each feature found in the image at the location where it is expected to be in the map. Matches established this way are used to compute a fine pose estimate.
Nearest neighbor searches are an important task in our approach. Searches are performed both in the high-dimensional feature space and in 3D space. During the development of our system, they proved to be the most time-consuming task. The OpenCV [27] implementation of the FLANN library provided by Muja and Lowe [28] is used to perform approximate nearest neighbor (ANN) searches. The search performance is highly dependent on the data structure and algorithm used. A series of experiments was performed to optimize the search for use with large maps of up to 150,000 features. This optimization process successfully reduced the runtime for nearest neighbor search by up to 25% for the 3D index and up to 9% for the high-dimensional descriptor space. However, the ANN search remains the computational bottleneck, and reducing it is the main focus of the presented approach.
The initial steps of our approach are similar to an image matching process. FLANN indices are created offline in order to minimize search time during operation. The main differences lie in the process of pose estimation from the sensor data, which is the major contribution of this work (Figure 3). Figure 2 shows some pairs of typical Kinect data gathered during the experiments. The first step is visual feature detection on the monochromatic image using SURF [29]. Then, SURF descriptors are computed for each of the detected features. The next step is the coarse pose estimation. To obtain a coarse estimate of the camera pose with respect to the world, it is necessary to establish several initial matches. Therefore, the process comprises two steps that are repeated until a good coarse estimate is found. First, some matches between features detected in the camera image and map features are established. The corresponding feature in the map is the nearest neighbor in the 64-dimensional feature space (the SURF features used in our implementation have 64 dimensions). Then, this small group of matches is used to compute the coarse pose estimate. This process involves finding the largest subset of mutually consistent matches, in a similar way to [16]. If the matches are not good enough to produce a satisfactory coarse estimate, more features from the camera image are matched against the feature database and the estimate is computed again. The process ends when an acceptable estimate is reached or when all the features detected in the image have already been matched. Once the coarse pose of the sensor is estimated, it is used to predict where the features observed by the sensor are with respect to the map frame. For each feature, all the map features located within a certain neighborhood of the predicted spatial location are taken as match candidates. This matching process guided by spatial information is faster than descriptor-based matching, because the dimensionality is much lower.
Given a sensor reading with $N$ features, a standard matching approach would take $N \times t_{64D}$, where $t_{64D}$ is the search time in the descriptor space. In contrast, the overall runtime for matching features in our approach is
\[
M \times t_{64D} + N \times t_{3D},
\]
where $M$ is the average number of nearest neighbors required to establish the coarse pose estimate and $t_{3D}$ is the spatial nearest neighbor search time per feature. In an analysis of selecting the optimal approximate nearest neighbor algorithm for matching SIFT features to a large database, Muja and Lowe [28] achieved a speedup factor of 30 with respect to a linear search, at 90% search precision. Since $M \ll N$ and $t_{3D} < t_{64D}$, our approach outperforms a standard matching approach (timings for $t_{3D}$ and $t_{64D}$ are given in Table I). For each candidate feature found, a match between the camera feature and this candidate is established only if the Euclidean distance between them in descriptor space is lower than a threshold. This threshold is set to the standard deviation observed when matching this feature during the mapping process. Finally, a RANSAC method is used to compute the final estimate of the camera pose with respect to the map.
IV. EXPERIMENTS
In this section we evaluate our approach in terms of the accuracy of the estimate with respect to the ground truth, the robustness of the estimation method, and the process runtimes. We present experimental results comparing the proposed approach to a baseline method that performs feature matching based exclusively on proximity in feature space.
This baseline approach represents a standard way to address the visual localization problem, as in an image matching process. As illustrated in Figure 3, it consists of the following basic phases: keypoint detection and descriptor extraction, for which we use the SURF implementation from OpenCV. Then, the detected features are matched against the nearest map features in the 64D space using the OpenCV FLANN library. Finally, the camera pose with respect to the map is computed using RANSAC for robustness.
The goal of our approach is to estimate the Kinect sensor pose. In order to evaluate the quality of the estimation, the result is compared to a ground truth sensor pose for each estimate. In order to get the ground truth trajectory, we use a PR2 robot with a Kinect sensor mounted on its head. The PR2 base laser scanner is used to obtain a highly accurate 2D position estimate for the robot. To record the data for our experiments, the PR2 was navigated through the laboratory of the AIS department along the corridor of the ground floor. A top view of the environment is shown in Figure 4.
To evaluate our approach, we gathered two datasets of the same area, which we use for training and evaluation, i.e., one to create the map and the other one to evaluate our algorithm. The robot pose estimate obtained from the laser range measurements is used to create the feature map from the training data and as ground truth for the evaluation dataset. Nevertheless, the feature matching process is still performed during mapping to obtain standard deviations for matching features. The resulting map is stored as a database of feature locations, feature descriptors and the descriptor matching deviation.
The number of nearest neighbors considered for correspondences determines the tradeoff between success rate and runtime. Therefore, three different versions of the baseline approach are evaluated, performing single, ten, and 20 nearest neighbor matching, respectively.
Robustness is given by the ratio of successful final estimates of the camera pose to the total number of attempts. An estimate is considered failed if the translational error along any direction is higher than 0.5 m or if no pose can be computed. Failures are not taken into account when calculating the RMSE values. Figure 5 shows the robustness values for the three versions of the baseline approach and the proposed approach as percentages of success. Notice how the proposed approach clearly outperforms the baseline approaches.
A. Accuracy
Accuracy is evaluated by computing the root mean square errors (RMSE) of the pose estimate (blue crosses in Figure 4) with respect to the ground truth pose (red pluses in Figure 4). Let XYZ be the fixed coordinate frame of the map, where the X axis is parallel to the main direction of the motion (i.e., the long corridor described above) and the Y axis is perpendicular to X, both in the horizontal plane. Let UVW be the mobile frame that represents the robot pose estimated by means of the laser range scanner (ground truth), and U'V'W' the same frame estimated using the considered approach. The translational errors RMSE X and RMSE Y refer to the RMS value of the distance between the origins of UVW and U'V'W' along the respective axis of the map frame. The rotational errors RMSE α, RMSE β and RMSE γ are the RMS values of the angles between the axes U and U', V and V', and W and W', respectively. Figure 6 shows the accuracy results obtained for each approach. Table I shows the runtimes of all relevant components of our approach and compares them to the equivalent runtimes recorded for the baseline approaches. It is easy to observe that our dual nearest neighbor search is faster than a single search in descriptor space, due to the low number of correspondences needed to compute a coarse pose estimate.
B. Runtime
The approach we introduce in this work outperforms all versions of the baseline in terms of accuracy and robustness. The translational RMS error is 6.2 cm along the X axis and only 2.9 cm along the Y axis. Note that the translational RMSE Y is lower for the third baseline experiment, but the rest of the measurements show higher errors than our approach; moreover, the poor robustness of this baseline version makes our approach the better alternative. All baseline approaches exhibit success ratios below 50%. In contrast, our approach produces a successful estimate in 77% of the cases. Furthermore, the remaining cases do not lead to a wrong estimate: in most of them, no coarse pose estimate can be computed due to a lack of mutual coherence between matches, so the rest of the pose estimation process can be skipped, leading to fast runtimes in failure cases. In terms of runtime, our algorithm shows an average runtime of 32.3 ms. This means that a rate of 30 Hz (the sensor rate) is reachable if the feature detection and the descriptor extraction run in parallel (e.g., on a GPU) with the estimation process.
V. CONCLUSIONS
In this paper we presented an approach to estimate the pose of a robot using a single RGB-D sensor. Visual features are extracted from the monochromatic images. Nearest neighbor searches in the feature descriptor space are carried out to generate sets of correspondences between the image and the feature map. A restrictive coherency check, based on the preservation of mutual distances between features in 3D space, is carried out to obtain a small set of high-quality correspondences. These correspondences are used to compute a coarse sensor pose estimate, which guides a spatial matching process. The final pose estimate is computed from the correspondences obtained by this spatially guided matching.
We carried out experiments for the global evaluation of the approach using real data gathered from an office environment, large enough to generate a high number of features in the feature map. Our approach proved to deal well with large feature maps, even when several map features represent the same point in the world. It also works properly in environments with repeated objects (chairs, tables, screens, etc.) and repetitive structure. It outperforms methods based on feature matching in feature space only, which we take as baseline approaches. In our experiments, the number of successful pose estimates is 54% higher than the number of correct estimates provided by the best baseline approach on the same dataset. The maximum translational RMS error is 6.2 cm and the maximum rotational RMS error is 2.1°. In the current implementation the overall runtime was 0.5 seconds, but this value is largely determined by the feature detection and descriptor extraction steps. If these processes are performed fast enough, in parallel (for example, on a GPU) with the estimation process, our approach is able to operate in real time (30 Hz). In conclusion, the approach we introduced here is better in terms of accuracy, robustness and runtime than approaches that perform feature matching based only on proximity between descriptors.
VI. ACKNOWLEDGMENT
We would like to thank the people of the University of Freiburg (Germany) for their assistance during Miguel Heredia's visit to the Autonomous Intelligent Systems Laboratory. | 2,664
1502.00500 | 338621103 | In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using a few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimate of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure, and allows for global localization in real time (30 Hz). | MCL overcomes the limitations of EKFs as mentioned earlier. It was successfully used in @cite_21 for vision-based localization of a tour-guide robot in a museum using a map of the ceiling and a camera pointing at it. In contrast to this approach, we do not rely on odometry measurements to predict the pose, and are not restricted to planar motion. Additionally, MCL would report incorrect locations after unexpected robot motions or sensor outages. Sensor Resetting Localization @cite_15 partially replaces particles with new ones directly generated from the sensor measurements when the position estimate is uncertain. Mixture-MCL @cite_16 combines standard MCL with dual-MCL to drastically reduce computational cost and localization error. Dual-MCL also generates particles from the current sensor measurements and was shown to deal well with the kidnapped robot problem when properly combined with standard MCL. We do not need a reset process, since our estimate is independent of the prior state. Our approach could be used in combination with MCL for efficient particle initialization and weighting. However, the results of our experiments show that our estimate is accurate enough to be used as the final result, without any filtering. | {
"abstract": [
"",
"This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation \"where needed.\" The number of samples is adapted on-line, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement.",
"Mobile robot localization is the problem of determining a robot’s pose from sensor data. This article presents a family of probabilistic localization algorithms known as Monte Carlo Localization (MCL). MCL algorithms represent a robot’s belief by a set of weighted hypotheses (samples), which approximate the posterior under a common Bayesian formulation of the localization problem. Building on the basic MCL algorithm, this article develops a more robust algorithm called MixtureMCL, which integrates two complimentary ways of generating samples in the estimation. To apply this algorithm to mobile robots equipped with range finders, a kernel density tree is learned that permits fast sampling. Systematic empirical results illustrate the robustness and computational efficiency of the approach. 2001 Published by Elsevier Science B.V."
],
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_16"
],
"mid": [
"",
"2131865378",
"2160584648"
]
} | Fast and Robust Feature Matching for RGB-D Based Localization | Localization is a fundamental requirement for most tasks in mobile robotics. The correct operation of mobile robots, such as UAVs, transportation robots or tour guides, relies on their ability to estimate their position and orientation in the world. SLAM approaches address the case where no map is given and, therefore, the creation of the map and localization with respect to it have to be performed simultaneously. Given a map, fast and robust methods become necessary to perform localization efficiently within large environments.
The recent emergence of low-cost RGB-D sensors [1], [2] has made them attractive as an alternative to the widely used laser scanners, in particular for inexpensive robotic applications. Since such sensors gather dense 3D information of the scene at high frame rates (30 Hz), it is possible to use them for six degrees of freedom (DoF) localization, which becomes necessary for robots whose 3D movement does not obey clear constraints that can be modeled beforehand. Recent approaches that perform RGB-D SLAM [3] can create a database of visual features (Figure 1) as a sparse representation of the environment. This sparse map of features can be used to perform global six-DoF localization using only an RGB-D sensor. State-of-the-art localization methods found in the literature are probabilistic filter algorithms, e.g., the extended Kalman filter (EKF) [4] and particle filters [5]. Both use a Bayesian model to integrate sensor measurements (e.g., odometry, laser scans) into the current state representation. Particle filters overcome the severe scale limitations of Kalman filtering and can deal with highly non-linear models and non-Gaussian posteriors [6]. Particle filters are typically used in 2D and, therefore, particles need to represent a state with three DoF. In 3D localization, however, particles would have to be generated in a six-DoF space, which dramatically increases the number of required particles. Most approaches to visual localization use descriptor-based feature matching to perform data association via nearest neighbor searches in descriptor space. While this technique has proven useful and robust for matching two images to each other, it is prone to failure when matching against a huge database of features such as a large-scale map.
The contribution of this paper is a novel approach for vision-based global localization that operates independently of the previous state of the system, using exclusively an RGB-D sensor and a sparse map of visual features. Our system overcomes issues of traditional feature matching techniques on large feature maps, such as the existence of different points of interest with similar descriptor values or the occurrence of the same keypoint on different objects of the same class (e.g., several chairs of the same model). To cope with this, we develop a two-step algorithm. In the first step we compute a fast guess of the sensor pose. We use it in the second step to allow for a more efficient spatial search for matching features. Using a spatial search method is possible due to the availability of depth information for the features. To ensure robust matching, we exploit information about the descriptor distances of matched features gathered during map creation. The results we obtain in terms of accuracy, robustness and execution time show that our approach is suitable for online navigation of a mobile robot.
III. METHODOLOGY
Our goal is to localize an RGB-D sensor in a given map of the world using only the data provided by the device. Our approach (schematically depicted in Figure 3) performs the following high-level tasks: First, visual features are detected in the monochromatic image. Then, a descriptor is computed for each feature. These descriptors are used to perform data association between the features detected in the image and those contained in a feature map of the world. This matching process is based on proximity between features in the high-dimensional descriptor space. The correspondences are used to compute a coarse estimate of the sensor pose. This estimate is then used to guide nearest neighbor searches in 3D space, looking for each image feature where it is expected to lie in the map. Matches established this way are used to compute a fine pose estimate.
Nearest neighbor searches are an important task in our approach. Searches are performed both in the high-dimensional descriptor space and in 3D space. During the development of our system they proved to be the most time-consuming task. The OpenCV [27] implementation of the FLANN library provided by Muja and Lowe [28] is used to perform approximate nearest neighbor (ANN) searches. The search performance is highly dependent on the data structure and algorithm used. A series of experiments was performed to optimize the search for use with large maps of up to 150,000 features. This optimization process reduced the runtime for nearest neighbor search by up to 25% in the 3D index and by up to 9% in the high-dimensional descriptor space. However, the ANN search remains the computational bottleneck, and reducing it is the main focus of the presented approach.
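For illustration, the two ANN indices could be built with OpenCV's FLANN bindings as sketched below; the kd-tree parameters and the random stand-in data are ours, not the tuned settings mentioned above.

```python
import numpy as np
import cv2

FLANN_INDEX_KDTREE = 1  # FLANN algorithm id for randomized kd-trees

# Stand-in map data: 150,000 SURF descriptors (64-D) and 3D feature positions.
map_desc = np.random.rand(150_000, 64).astype(np.float32)
map_pos = np.random.rand(150_000, 3).astype(np.float32)

# One index per search space; both can be built offline and reused online.
desc_index = cv2.flann_Index(map_desc, dict(algorithm=FLANN_INDEX_KDTREE, trees=4))
pos_index = cv2.flann_Index(map_pos, dict(algorithm=FLANN_INDEX_KDTREE, trees=1))

# Batched k-nearest-neighbor query for the descriptors of one camera frame.
query = np.random.rand(300, 64).astype(np.float32)
idx, dist = desc_index.knnSearch(query, 1, params=dict(checks=64))
```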
The initial steps of our approach are similar to an image matching process. FLANN indices are created offline in order to minimize search time during operation. The main differences lie in the process of pose estimation from the sensor data, which is the major contribution of this work (Figure 3). Figure 2 shows some pairs of typical Kinect data gathered during the experiments. The first step is visual feature detection on the monochromatic image using SURF [29]. Then, SURF descriptors are computed for each of the detected features. The next step is a coarse pose estimation process. To obtain a coarse estimate of the camera pose with respect to the world, it is necessary to establish several initial matches. The process therefore consists of two steps that are repeated until a good coarse estimate is found. First, some matches between features detected in the camera image and map features are established. The corresponding feature in the map is the nearest neighbor in the 64-dimensional descriptor space (the SURF features used in our implementation have 64 dimensions). Then, this small group of matches is used to compute the coarse pose estimate. This involves finding the largest subset of mutually consistent matches, in a similar way to [16]. If the matches are not good enough to produce a satisfactory coarse estimate, more features from the camera image are matched against the feature database and the estimate is computed again. The process ends when an acceptable estimate is reached or when all features detected in the image have already been matched. Once the coarse pose of the sensor is estimated, it is used to predict where the features observed by the sensor are located with respect to the map frame. For each feature, all map features located within a certain neighborhood of the predicted spatial location are taken as match candidates. This matching process guided by spatial information is faster than descriptor-based matching, because the dimensionality is much lower.
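The mutual-consistency test mentioned above exploits the fact that a rigid motion preserves pairwise distances: two matches can only be simultaneously correct if the distance between their camera-frame points equals the distance between their map points. The greedy subset growth and the 5 cm tolerance in the sketch below are our assumptions, not the exact method of [16].

```python
import numpy as np

def largest_consistent_subset(pts_cam, pts_map, tol=0.05):
    """Greedy search for a large subset of matches whose pairwise 3D
    distances agree between the camera frame and the map frame."""
    n = len(pts_cam)
    d_cam = np.linalg.norm(pts_cam[:, None] - pts_cam[None, :], axis=-1)
    d_map = np.linalg.norm(pts_map[:, None] - pts_map[None, :], axis=-1)
    ok = np.abs(d_cam - d_map) < tol                # (n, n) pairwise consistency
    best = []
    for seed in np.argsort(-ok.sum(axis=1)):        # try most-connected seeds first
        subset = [int(seed)]
        for j in range(n):
            if j != seed and all(ok[j, k] for k in subset):
                subset.append(j)
        if len(subset) > len(best):
            best = subset
    return best  # indices of the matches kept for the coarse pose estimate
```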
Given a sensor reading with N features, a standard matching approach would take N × t_64D, where t_64D is the per-feature search time in descriptor space. In contrast, the overall runtime for matching features in our approach is
M × t_64D + N × t_3D ,
where M is the average number of nearest neighbor queries required to establish the coarse pose estimate and t_3D is the spatial nearest neighbor search time per feature. In an analysis of selecting the optimal approximate nearest neighbor algorithm for matching SIFT features against a large database, Muja and Lowe [28] achieved a speedup factor of 30 with respect to a linear search at 90% search precision. Since M ≪ N and t_3D < t_64D, our approach outperforms a standard matching approach (timings for t_3D and t_64D are given in Table I). For each candidate feature found, a match between the camera feature and this candidate is established only if the Euclidean distance between them in descriptor space is lower than a threshold. This threshold is set to the standard deviation observed when matching this feature during the mapping process. Finally, a RANSAC method is used to compute the final estimate of the camera pose with respect to the map.
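Since both the image features (through the depth image) and the map features carry 3D coordinates, one plausible reading of this final step is a RANSAC loop around a 3D-3D rigid alignment (Kabsch algorithm); the 3-point samples, iteration count and inlier threshold below are our assumptions, not the authors' settings.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid motion (R, t) mapping point set A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def ransac_pose(pts_cam, pts_map, iters=200, tol=0.05):
    """RANSAC sketch: sample 3 matches, fit a rigid motion, count inliers,
    and refit on the inliers of the best hypothesis."""
    rng = np.random.default_rng(0)
    best = np.zeros(len(pts_cam), dtype=bool)
    for _ in range(iters):
        s = rng.choice(len(pts_cam), size=3, replace=False)
        R, t = rigid_transform(pts_cam[s], pts_map[s])
        inliers = np.linalg.norm(pts_cam @ R.T + t - pts_map, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(pts_cam[best], pts_map[best])
```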
IV. EXPERIMENTS
In this section we evaluate our approach in terms of accuracy of the estimate with respect to the ground truth, robustness of the estimation method, and process runtimes. We present experimental results comparing the proposed approach to a baseline method that performs feature matching based exclusively on proximity in feature space.
This baseline approach represents a standard way to approach visual localization, as in an image matching process. As illustrated in Figure 3, it consists of the following basic phases: keypoint detection and descriptor extraction, for which we use the SURF implementation from OpenCV; matching of the detected features against the nearest map features in the 64-D space using the OpenCV FLANN library; and finally, computation of the camera pose with respect to the map using RANSAC for robustness.
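A minimal sketch of the baseline's matching phase using OpenCV's FlannBasedMatcher is given below; the index parameters and the random stand-in descriptors are illustrative.

```python
import numpy as np
import cv2

# Stand-ins for the descriptors of one camera frame and of the map database.
img_desc = np.random.rand(300, 64).astype(np.float32)
map_desc = np.random.rand(150_000, 64).astype(np.float32)

# Baseline: k nearest neighbors purely in the 64-D descriptor space
# (k = 1, 10 or 20 in the three baseline versions evaluated here).
matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4), dict(checks=64))
matches = matcher.knnMatch(img_desc, map_desc, k=10)

# Each image feature now has k map candidates; the baseline feeds these
# correspondences directly into the RANSAC pose estimation.
pairs = [(m.queryIdx, m.trainIdx) for ms in matches for m in ms]
```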
The goal of our approach is to estimate the Kinect sensor pose. To evaluate the quality of the estimation, each result is compared to a ground truth sensor pose. To obtain the ground truth trajectory, we use a PR2 robot with a Kinect sensor mounted on its head. The PR2 base laser scanner is used to obtain a highly accurate 2D position estimate for the robot. To record the data for our experiments, the PR2 was navigated through the laboratory of the AIS department along the corridor of the ground floor. A top view of the environment is shown in Figure 4.
To evaluate our approach, we gathered two datasets of the same area, which we use for training and evaluation: one to create the map and the other to evaluate our algorithm. The robot pose estimate obtained from the laser range measurements is used to create the feature map from the training data and as ground truth for the evaluation dataset. Nevertheless, the feature matching process is still performed during mapping, to obtain the standard deviations of the descriptor distances of matched features. The resulting map is stored as a database of feature locations, feature descriptors, and descriptor matching deviations.
The number of nearest neighbors considered for correspondences determines the tradeoff between success rate and runtime. Therefore, three different versions of the baseline approach are evaluated, matching against the 1, 10, and 20 nearest neighbors, respectively.
Robustness is given by the number of successful final estimates of the camera pose with respect to the total number of tries. An estimate is considered failed if the translational error along any direction exceeds 0.5 m or if no pose can be computed. Failures are not taken into account when calculating the RMSE values. Figure 5 shows the robustness of the three baseline versions and of the proposed approach as a percentage of successful estimates. Notice how the proposed approach clearly outperforms the baseline approaches.
A. Accuracy
Accuracy is evaluated by computing the root mean square errors (RMSE) of the pose estimate (blue crosses in Figure 4) with respect to the ground truth pose (red pluses in Figure 4). Let XYZ be the fixed coordinate frame of the map, where the X axis is parallel to the main direction of motion (i.e., the long corridor described above) and the Y axis is perpendicular to X, both in the horizontal plane. Let UVW be the mobile frame that represents the robot pose estimated by means of the laser range scanner (ground truth), and U'V'W' the same frame estimated using the considered approach. The translational errors RMSE X and RMSE Y refer to the RMS value of the distance between the origins of UVW and U'V'W' along the respective axis of the map frame. The rotational errors RMSE α, RMSE β and RMSE γ are the RMS values of the angles between the axes U and U', V and V', and W and W', respectively. Figure 6 shows the accuracy results obtained for each approach.
B. Runtime
Table I shows the runtimes of all relevant components of our approach and compares them to the equivalent runtimes recorded for the baseline approaches. It is easy to observe that our dual nearest neighbor search is faster than a single search in descriptor space, due to the low number of correspondences needed to compute a coarse pose estimate.
The approach we introduce in this work outperforms all versions of the baseline in terms of accuracy and robustness. The translational RMS error along the X axis is 6.2 cm, and only 2.9 cm along the Y axis. Note that the translational RMSE Y is lower for the third baseline experiment, but the rest of its measurements show higher errors than our approach, and its poor robustness makes our approach the better alternative. All baseline approaches exhibit success rates below 50%. In contrast, our approach produces a successful estimate in 77% of the cases. Moreover, the remaining cases do not lead to a wrong estimate: in most of them, no coarse pose estimate can be computed due to a lack of mutual coherence between matches, so the rest of the pose estimation process can be skipped, leading to fast runtimes for failure cases. In terms of runtime, our algorithm shows an average runtime of 32.3 ms. This means that the sensor rate of 30 Hz is reachable if feature detection and descriptor extraction run in parallel (e.g., on a GPU) with the estimation process.
V. CONCLUSIONS
In this paper we presented an approach to estimate the pose of a robot using a single RGB-D sensor. Visual features are extracted from the monochromatic images. Nearest neighbor searches in the feature descriptor space are carried out to generate sets of correspondences between the image and the feature map. A restrictive coherency check, based on the preservation of mutual distances between features in 3D space, is carried out to obtain a small set of high-quality correspondences. These correspondences are used to compute a coarse sensor pose estimate, which guides a spatial matching process. The final pose estimate is computed from the correspondences obtained by this spatially guided matching.
We carried out experiments for the global evaluation of the approach using real data gathered from an office environment, large enough to generate a high number of features in the feature map. Our approach proved to deal well with large feature maps, even when several map features represent the same point in the world. It also works properly in environments with repeated objects (chairs, tables, screens, etc.) and repetitive structure. It outperforms methods based on feature matching in feature space only, which we take as baseline approaches. In our experiments, the number of successful pose estimates is 54% higher than the number of correct estimates provided by the best baseline approach on the same dataset. The maximum translational RMS error is 6.2 cm and the maximum rotational RMS error is 2.1°. In the current implementation the overall runtime was 0.5 seconds, but this value is largely determined by the feature detection and descriptor extraction steps. If these processes are performed fast enough, in parallel (for example, on a GPU) with the estimation process, our approach is able to operate in real time (30 Hz). In conclusion, the approach we introduced here is better in terms of accuracy, robustness and runtime than approaches that perform feature matching based only on proximity between descriptors.
VI. ACKNOWLEDGMENT
We would like to thank the people of the University of Freiburg (Germany) for their assistance during Miguel Heredia's visit to the Autonomous Intelligent Systems Laboratory. | 2,664
1502.00500 | 338621103 | In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using a few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimate of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure, and allows for global localization in real time (30 Hz). | To the best of our knowledge, at the time of writing there is no other dedicated global localization approach for the recently introduced RGB-D sensors. However, a number of novel approaches for visual odometry have been proposed, which exploit the available combination of color, dense depth and high frame rate to improve alignment performance as compared, e.g., to the iterative closest point algorithm @cite_4. In @cite_18 and @cite_2 adaptations of ICP are proposed to process the large amounts of data more efficiently. Steinbruecker et al. @cite_28 present a transformation estimation based on the minimization of an energy function. For frames close to each other, they achieve enhanced runtime performance and accuracy compared to Generalized ICP @cite_22. Using the distribution of normals, Osteen et al. @cite_27 improve the initialization of ICP by efficiently computing the difference in orientation between two frames, which allows for a substantial reduction in drift. These approaches work well for computing the transformation for small incremental changes between consecutive frames, but they are of limited applicability for global localization in a map. | {
"abstract": [
"An RGB-D camera is a sensor which outputs range and color information about objects. Recent technological advances in this area have introduced affordable RGB-D devices in the robotics community. In this paper, we present a real-time technique for 6-DoF camera pose estimation through the incremental registration of RGB-D images. First, a set of edge features are computed from the depth and color images. An initial motion estimation is calculated through aligning the features. This initial guess is refined by applying the Iterative Closest Point algorithm on the dense point cloud data. A rigorous error analysis assesses several sets of RGB-D ground truth data via an error accumulation metric. We show that the proposed two-stage approach significantly reduces error in the pose estimation, compared to a state-of-the-art ICP registration technique.",
"The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces. >",
"",
"We present an energy-based approach to visual odometry from RGB-D images of a Microsoft Kinect camera. To this end we propose an energy function which aims at finding the best rigid body motion to map one RGB-D image into another one, assuming a static scene filmed by a moving camera. We then propose a linearization of the energy function which leads to a 6×6 normal equation for the twist coordinates representing the rigid body motion. To allow for larger motions, we solve this equation in a coarse-to-fine scheme. Extensive quantitative analysis on recently proposed benchmark datasets shows that the proposed solution is faster than a state-of-the-art implementation of the iterative closest point (ICP) algorithm by two orders of magnitude. While ICP is more robust to large camera motion, the proposed method gives better results in the regime of small displacements which are often the case in camera tracking applications.",
"We present a technique to estimate the egomotion of an RGB-D sensor based on rotations of functions defined on the unit sphere. In contrast to traditional approaches, our technique is not based on image features and does not require correspondences to be generated between frames of data. Instead, consecutive functions are correlated using spherical harmonic analysis. An Extended Gaussian Image (EGI), created from the local normal estimates of a point cloud, defines each function. Correlations are efficiently computed using Fourier transformations, resulting in a 3 Degree of Freedom (3-DoF) rotation estimate. An Iterative Closest Point (ICP) process then refines the initial rotation estimate and adds a translational component, yielding a full 6-DoF egomotion estimate. The focus of this work is to investigate the merits of using spherical harmonic analysis for egomotion estimation by comparison with alternative 6-DoF methods. We compare the performance of the proposed technique with that of stand-alone ICP and image feature based methods. As with other egomotion techniques, estimation errors accumulate and degrade results, necessitating correction mechanisms for robust localization. For this report, however, we use the raw estimates; no filtering or smoothing processes are applied. In-house and external benchmark data sets are analyzed for both runtime and accuracy. Results show that the algorithm is competitive in terms of both accuracy and runtime, and future work will aim to combine the various techniques into a more robust egomotion estimation framework.",
"The increasing number of ICP variants leads to an explosion of algorithms and parameters. This renders difficult the selection of the appropriate combination for a given application. In this paper, we propose a state-of-the-art, modular, and efficient implementation of an ICP library. We took advantage of the recent availability of fast depth cameras to demonstrate one application example: a 3D pose tracker running at 30 Hz. For this application, we show the modularity of our ICP library by optimizing the use of lean and simple descriptors in order to ease the matching of 3D point clouds. This tracker is then evaluated using datasets recorded along a ground truth of millimeter accuracy. We provide both source code and datasets to the community in order to accelerate further comparisons in this field."
],
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_27",
"@cite_2"
],
"mid": [
"2111610923",
"2049981393",
"",
"2091226544",
"2077022309",
"2127045032"
]
} | Fast and Robust Feature Matching for RGB-D Based Localization | Localization is a fundamental requirement for most tasks in mobile robotics. The correct operation of mobile robots, such as UAVs, transportation robots or tour guides, relies on their ability to estimate their position and orientation in the world. SLAM approaches address the case where no map is given and, therefore, the creation of the map and localization with respect to it have to be performed simultaneously. Given a map, fast and robust methods become necessary to perform localization efficiently within large environments.
The recent emergence of low-cost RGB-D sensors [1], [2] has made them attractive as an alternative to the widely used laser scanners, in particular for inexpensive robotic applications. Since such sensors gather dense 3D information of the scene at high frame rates (30 Hz), it is possible to use them for six degrees of freedom (DoF) localization, which becomes necessary for robots whose 3D movement does not obey clear constraints that can be modeled beforehand. Recent approaches that perform RGB-D SLAM [3] can create a database of visual features (Figure 1) as a sparse representation of the environment. This sparse map of features can be used to perform global six-DoF localization using only an RGB-D sensor. State-of-the-art localization methods found in the literature are probabilistic filter algorithms, e.g., the extended Kalman filter (EKF) [4] and particle filters [5]. Both use a Bayesian model to integrate sensor measurements (e.g., odometry, laser scans) into the current state representation. Particle filters overcome the severe scale limitations of Kalman filtering and can deal with highly non-linear models and non-Gaussian posteriors [6]. Particle filters are typically used in 2D and, therefore, particles need to represent a state with three DoF. In 3D localization, however, particles would have to be generated in a six-DoF space, which dramatically increases the number of required particles. Most approaches to visual localization use descriptor-based feature matching to perform data association via nearest neighbor searches in descriptor space. While this technique has proven useful and robust for matching two images to each other, it is prone to failure when matching against a huge database of features such as a large-scale map.
The contribution of this paper is a novel approach for vision-based global localization that operates independently of the previous state of the system, using exclusively an RGB-D sensor and a sparse map of visual features. Our system overcomes issues of traditional feature matching techniques on large feature maps, such as the existence of different points of interest with similar descriptor values or the occurrence of the same keypoint on different objects of the same class (e.g., several chairs of the same model). To cope with this, we develop a two-step algorithm. In the first step we compute a fast guess of the sensor pose. We use it in the second step to allow for a more efficient spatial search for matching features. Using a spatial search method is possible due to the availability of depth information for the features. To ensure robust matching, we exploit information about the descriptor distances of matched features gathered during map creation. The results we obtain in terms of accuracy, robustness and execution time show that our approach is suitable for online navigation of a mobile robot.
III. METHODOLOGY
Our goal is to localize an RGB-D sensor in a given map of the world using only the data provided by the device. Our approach (schematically depicted in Figure 3) performs the following high-level tasks: First, visual features are detected in the monochromatic image. Then, a descriptor is computed for each feature. These descriptors are used to perform data association between the features detected in the image and those contained in a feature map of the world. This matching process is based on proximity between features in the high-dimensional descriptor space. The correspondences are used to compute a coarse estimate of the sensor pose. This estimate is then used to guide nearest neighbor searches in 3D space, looking for each image feature where it is expected to lie in the map. Matches established this way are used to compute a fine pose estimate.
Nearest neighbor searches are an important task in our approach. Searches are performed both in the high-dimensional descriptor space and in 3D space. During the development of our system they proved to be the most time-consuming task. The OpenCV [27] implementation of the FLANN library provided by Muja and Lowe [28] is used to perform approximate nearest neighbor (ANN) searches. The search performance is highly dependent on the data structure and algorithm used. A series of experiments was performed to optimize the search for use with large maps of up to 150,000 features. This optimization process reduced the runtime for nearest neighbor search by up to 25% in the 3D index and by up to 9% in the high-dimensional descriptor space. However, the ANN search remains the computational bottleneck, and reducing it is the main focus of the presented approach.
The initial steps of our approach are similar to an image matching process. FLANN indices are created offline in order to minimize search time during operation. The main differences lie in the process of pose estimation from the sensor data, which is the major contribution of this work (Figure 3). Figure 2 shows some pairs of typical Kinect data gathered during the experiments. The first step is visual feature detection on the monochromatic image using SURF [29]. Then, SURF descriptors are computed for each of the detected features. The next step is a coarse pose estimation process. To obtain a coarse estimate of the camera pose with respect to the world, it is necessary to establish several initial matches. The process therefore consists of two steps that are repeated until a good coarse estimate is found. First, some matches between features detected in the camera image and map features are established. The corresponding feature in the map is the nearest neighbor in the 64-dimensional descriptor space (the SURF features used in our implementation have 64 dimensions). Then, this small group of matches is used to compute the coarse pose estimate. This involves finding the largest subset of mutually consistent matches, in a similar way to [16]. If the matches are not good enough to produce a satisfactory coarse estimate, more features from the camera image are matched against the feature database and the estimate is computed again. The process ends when an acceptable estimate is reached or when all features detected in the image have already been matched. Once the coarse pose of the sensor is estimated, it is used to predict where the features observed by the sensor are located with respect to the map frame. For each feature, all map features located within a certain neighborhood of the predicted spatial location are taken as match candidates. This matching process guided by spatial information is faster than descriptor-based matching, because the dimensionality is much lower.
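To make the spatially guided matching step concrete, here is a brute-force sketch; a real implementation would query the prebuilt 3D FLANN index instead, and the 30 cm search radius is our assumption. The per-feature descriptor threshold map_sigma is the matching deviation stored in the map, as described in the text.

```python
import numpy as np

def spatial_matching(R, t, pts_cam, desc_cam, map_pos, map_desc, map_sigma,
                     radius=0.3):
    """For each camera feature, predict its map-frame position with the
    coarse pose (R, t), gather map features within the search radius, and
    accept a candidate only if its descriptor distance stays below the
    per-feature threshold learned during mapping."""
    matches = []
    pred = pts_cam @ R.T + t                     # predicted map-frame positions
    for i, p in enumerate(pred):
        near = np.where(np.linalg.norm(map_pos - p, axis=1) < radius)[0]
        for j in near:
            if np.linalg.norm(desc_cam[i] - map_desc[j]) < map_sigma[j]:
                matches.append((i, j))
    return matches
```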
Given a sensor reading with N features, a standard matching approach would take N × t_64D, where t_64D is the per-feature search time in descriptor space. In contrast, the overall runtime for matching features in our approach is
M × t_64D + N × t_3D ,
where M is the average number of nearest neighbor queries required to establish the coarse pose estimate and t_3D is the spatial nearest neighbor search time per feature. In an analysis of selecting the optimal approximate nearest neighbor algorithm for matching SIFT features against a large database, Muja and Lowe [28] achieved a speedup factor of 30 with respect to a linear search at 90% search precision. Since M ≪ N and t_3D < t_64D, our approach outperforms a standard matching approach (timings for t_3D and t_64D are given in Table I). For each candidate feature found, a match between the camera feature and this candidate is established only if the Euclidean distance between them in descriptor space is lower than a threshold. This threshold is set to the standard deviation observed when matching this feature during the mapping process. Finally, a RANSAC method is used to compute the final estimate of the camera pose with respect to the map.
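For a rough feel of this formula, here is a worked instance with hypothetical per-query timings (the measured values are in Table I, which is not reproduced here):

```python
# Hypothetical per-feature search times; the measured values are in Table I.
t_64d = 0.10e-3   # s: one ANN query in the 64-D descriptor space
t_3d = 0.02e-3    # s: one ANN query in 3D space
N, M = 300, 20    # features per frame vs. matches needed for the coarse pose

baseline = N * t_64d            # 300 * 0.1 ms = 30.0 ms: all features in 64-D
ours = M * t_64d + N * t_3d     # 20 * 0.1 ms + 300 * 0.02 ms = 8.0 ms
print(f"baseline {baseline * 1e3:.1f} ms vs ours {ours * 1e3:.1f} ms")
```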
IV. EXPERIMENTS
In this section we evaluate our approach in terms of accuracy of the estimate with respect to the ground truth, robustness of the estimation method, and process runtimes. We present experimental results comparing the proposed approach to a baseline method that performs feature matching based exclusively on proximity in feature space.
This baseline approach represents a standard way to approach visual localization, as in an image matching process. As illustrated in Figure 3, it consists of the following basic phases: keypoint detection and descriptor extraction, for which we use the SURF implementation from OpenCV; matching of the detected features against the nearest map features in the 64-D space using the OpenCV FLANN library; and finally, computation of the camera pose with respect to the map using RANSAC for robustness.
The goal of our approach is to estimate the Kinect sensor pose. To evaluate the quality of the estimation, each result is compared to a ground truth sensor pose. To obtain the ground truth trajectory, we use a PR2 robot with a Kinect sensor mounted on its head. The PR2 base laser scanner is used to obtain a highly accurate 2D position estimate for the robot. To record the data for our experiments, the PR2 was navigated through the laboratory of the AIS department along the corridor of the ground floor. A top view of the environment is shown in Figure 4.
To evaluate our approach, we gathered two datasets of the same area, which we use for training and evaluation: one to create the map and the other to evaluate our algorithm. The robot pose estimate obtained from the laser range measurements is used to create the feature map from the training data and as ground truth for the evaluation dataset. Nevertheless, the feature matching process is still performed during mapping, to obtain the standard deviations of the descriptor distances of matched features. The resulting map is stored as a database of feature locations, feature descriptors, and descriptor matching deviations.
The number of nearest neighbors considered for correspondences determines the tradeoff between success rate and runtime. Therefore, three different versions of the baseline approach are evaluated, matching against the 1, 10, and 20 nearest neighbors, respectively.
Robustness is given by the number of successful final estimates of the camera pose with respect to the total number of tries. An estimate is considered failed if the translational error along any direction exceeds 0.5 m or if no pose can be computed. Failures are not taken into account when calculating the RMSE values. Figure 5 shows the robustness of the three baseline versions and of the proposed approach as a percentage of successful estimates. Notice how the proposed approach clearly outperforms the baseline approaches.
A. Accuracy
Accuracy is evaluated by computing the root mean square errors (RMSE) of the pose estimate (blue crosses in Figure 4) with respect to the ground truth pose (red pluses in Figure 4). Let XYZ be the fixed coordinate frame of the map, where the X axis is parallel to the main direction of motion (i.e., the long corridor described above) and the Y axis is perpendicular to X, both in the horizontal plane. Let UVW be the mobile frame that represents the robot pose estimated by means of the laser range scanner (ground truth), and U'V'W' the same frame estimated using the considered approach. The translational errors RMSE X and RMSE Y refer to the RMS value of the distance between the origins of UVW and U'V'W' along the respective axis of the map frame. The rotational errors RMSE α, RMSE β and RMSE γ are the RMS values of the angles between the axes U and U', V and V', and W and W', respectively. Figure 6 shows the accuracy results obtained for each approach.
B. Runtime
Table I shows the runtimes of all relevant components of our approach and compares them to the equivalent runtimes recorded for the baseline approaches. It is easy to observe that our dual nearest neighbor search is faster than a single search in descriptor space, due to the low number of correspondences needed to compute a coarse pose estimate.
The approach we introduce in this work outperforms all versions of the baseline in terms of accuracy and robustness. The translational RMS error along the X axis is 6.2 cm, and only 2.9 cm along the Y axis. Note that the translational RMSE Y is lower for the third baseline experiment, but the rest of its measurements show higher errors than our approach, and its poor robustness makes our approach the better alternative. All baseline approaches exhibit success rates below 50%. In contrast, our approach produces a successful estimate in 77% of the cases. Moreover, the remaining cases do not lead to a wrong estimate: in most of them, no coarse pose estimate can be computed due to a lack of mutual coherence between matches, so the rest of the pose estimation process can be skipped, leading to fast runtimes for failure cases. In terms of runtime, our algorithm shows an average runtime of 32.3 ms. This means that the sensor rate of 30 Hz is reachable if feature detection and descriptor extraction run in parallel (e.g., on a GPU) with the estimation process.
V. CONCLUSIONS
In this paper we presented an approach to estimate the pose of a robot using a single RGB-D sensor. Visual features are extracted from the monochromatic images. Nearest neighbor searches in the feature descriptor space are carried out to generate sets of correspondences between the image and the feature map. A restrictive coherency check, based on the preservation of mutual distances between features in 3D space, is carried out to obtain a small set of high-quality correspondences. These correspondences are used to compute a coarse sensor pose estimate, which guides a spatial matching process. The final pose estimate is computed from the correspondences obtained by this spatially guided matching.
We carried out experiments for the global evaluation of the approach using real data gathered from an office environment, large enough to generate a high number of features in the feature map. Our approach proved to deal well with large feature maps, even when several map features represent the same point in the world. It also works properly in environments with repeated objects (chairs, tables, screens, etc.) and repetitive structure. It outperforms methods based on feature matching in feature space only, which we take as baseline approaches. In our experiments, the number of successful pose estimates is 54% higher than the number of correct estimates provided by the best baseline approach on the same dataset. The maximum translational RMS error is 6.2 cm and the maximum rotational RMS error is 2.1°. In the current implementation the overall runtime was 0.5 seconds, but this value is largely determined by the feature detection and descriptor extraction steps. If these processes are performed fast enough, in parallel (for example, on a GPU) with the estimation process, our approach is able to operate in real time (30 Hz). In conclusion, the approach we introduced here is better in terms of accuracy, robustness and runtime than approaches that perform feature matching based only on proximity between descriptors.
VI. ACKNOWLEDGMENT
We would like to thank the people of the University of Freiburg (Germany) for their assistance during Miguel Heredia's visit to the Autonomous Intelligent Systems Laboratory. | 2,664