
Charles University in Prague
Faculty of Mathematics and Physics

MASTER THESIS

Martin Dobiaš

Structural Recognition of Facades

Department of Software and Computer Science Education

Supervisor: Doc. Dr. Techn. Ing. Radim Šára, Department of Cybernetics, FEL ČVUT

Study Program: Computer science, Software systems

2010


Acknowledgements

I would like to thank my supervisor, Radim Šára, for his invaluable help during the preparation of this thesis: for the numerous brainstorming meetings, the patient guidance and all the feedback.

I also thank my girlfriend Barbara and my parents for the great support and encouragement.

I hereby declare that I have written this master thesis on my own, using exclusively the cited sources. I permit the lending of the thesis.

Prague, April 15, 2010
Martin Dobiaš


Contents

1 Introduction
1.1 Facades
1.2 Motivation
1.3 Related Work
1.4 Our Approach
1.5 Structure of the Thesis

2 Data Model
2.1 Bayesian Inference
2.1.1 Hierarchical Models
2.2 Camera Model
2.3 The Prior Model
2.3.1 Single-Row Model
2.3.2 Prior Distributions
2.3.3 Hyperparameters for Prior Distributions
2.3.4 Two-dimensional Facade Model
2.4 The Likelihood Term
2.4.1 Classifier Evaluation

3 Sampling
3.1 Overview
3.1.1 Introduction to Monte Carlo Methods
3.1.2 Rejection Sampling
3.1.3 Markov Chain Monte Carlo Methods
3.2 Computation with MCMC
3.3 Metropolis-Hastings Algorithm
3.4 Estimation of Window Positions
3.5 Reversible Jump MCMC
3.6 Determining Number of Windows
3.7 Estimation of Window Size

4 Model Improvements
4.1 Sampling with Dependent Proposals
4.2 Hybrid Monte Carlo
4.3 Further Model Improvements

5 Evaluation of Performance
5.1 Solution Quality
5.2 Effectivity

6 Towards the Grammars
6.1 Facades and Grammars
6.2 Substitution Rules
6.3 Substitutions for Facades
6.4 Outlook

7 Conclusion

A Solutions


Title: Structural Recognition of Facades
Author: Martin Dobiaš

Department: Department of Software and Computer Science Education
Supervisor: Doc. Dr. Techn. Ing. Radim Šára, Department of Cybernetics, FEL ČVUT

Supervisor's e-mail address: sara@cmp.felk.cvut.cz

Abstract: We investigate a method for the interpretation of facades from single images. The emphasis is on the separation of knowledge about the facade structure from the detection of facade elements. The interpretation task is formulated as a Bayesian inference problem of finding the maximum a posteriori estimate. A stochastic model that encompasses the structural knowledge about facade elements is presented, and it is used together with an integrated classifier to determine the correct positions of facade elements. We construct a Markov chain Monte Carlo sampler that solves the problem. Various improvements of the model and the sampling algorithm are discussed. Finally, we propose a more general approach for structural recognition using context-free grammars that could be used for other computer vision tasks.

Keywords: facade, vision, MCMC, grammar

Title (in Slovak): Štrukturálne rozpoznávanie fasád
Author: Martin Dobiaš

Department: Department of Software and Computer Science Education

Supervisor: Doc. Dr. Techn. Ing. Radim Šára, Department of Cybernetics, FEL ČVUT

Supervisor's e-mail address: sara@cmp.felk.cvut.cz

Abstract (translated from Slovak): We investigate a method for the interpretation of facades from single images. The emphasis is on separating knowledge about the facade structure from the detection of facade elements. The interpretation is formulated as a Bayesian maximum a posteriori problem. A stochastic model that incorporates structural knowledge about facade elements is presented and used together with an integrated classifier to determine the correct positions of the facade elements. We construct a sampling algorithm using Markov chain Monte Carlo methods. Various improvements of the model and of the sampling algorithm are discussed. Finally, we propose a more general method for structural recognition using context-free grammars, suitable also for other computer vision tasks.

Keywords: facade, vision, MCMC, grammar


Chapter 1 Introduction

This work aims to introduce a computer vision method for the recognition of building facades. The method divides the task of facade recognition into two distinct parts: 'high-level' recognition of the structure of the scene and 'low-level' classification of image regions into object classes. We aim to make the method general enough to be useful in various other contexts as well. This division is important because it allows us to separate knowledge about the structure of the scene from knowledge about the appearance of objects.

We have been successful with this approach within a subset of facade styles and we propose a generalization based on context-free grammars.

1.1 Facades

A facade can be defined as one side of the exterior of a building, especially the front side. Facades are typically the only part of a building we can see from the outside. A facade's appearance allows us to infer a lot about the building, since facades vary greatly depending on the architectural style, the purpose of the building, the geographical area, the settlement size and other factors. Typical facades are composed of various elements, most importantly windows and doors, and optionally balconies, cornices, rain gutters, balusters, pilasters and other types of decoration.

This thesis aims to introduce a computational framework that would enable us, given a single input image, to determine whether there is a facade (or several facades), what elements are on it and where they are located. This is a problem from the domain of computer vision, since we have to segment the input image into several classes: facade, facade elements and background.

The huge variability of facades makes recognition of a facade and its structure a hard problem. Even facades in the same architectural style in one city on one street show quite some variance. Moreover, architectural styles that have emerged within the past several decades change the look and structure of facades completely. The modernist style forms monolithic facades of glass and concrete, avoiding any decoration; postmodern architecture, on the other hand, evinces very complex and irregularly shaped buildings with plenty of ornamentality.

Figure 1.1: Examples of facades from Prague. Left: a typical facade from the beginning of the 20th century. Middle: a modern office building (user Kachle, "City Tower, Prague" via Flickr). Right: postmodern style (user C|ick, "Dancing House" via Flickr).

This variability is illustrated in Figure 1.1, which gives a few examples of architectural styles. In the wider Prague city center, the most common architectural style is the one depicted on the left side of the figure. This style involves a good amount of regularity in structure; on the other hand, the facade is quite complex and involves various ornaments.

To limit the scope of the thesis, we have decided to concentrate on this 'classical' architectural style. The world of strongly structured facades is a rich playground for developing general methods for structural pattern recognition; virtually all problems of structural recognition and learning are present.

1.2 Motivation

In recent years there has been increased interest in processing imagery to create 3D models of cities. Traditionally, aerial imagery was used to determine the models of buildings using photogrammetric methods. These methods are however limited by the high cost of acquiring such imagery and by the lack of detail: in vertical aerial photography the facades are not visible, and in oblique photography the facades are often occluded.


Various companies have started to gather ground-level imagery of cities as an addition to aerial imagery. One such successful project is the Google Street View service, which allows users to explore many places online within a web browser.¹ The data are typically gathered using vehicles equipped with cameras and a range finder, then processed into 360° panoramic images distributed across the road network.

To make such virtual tours more realistic, it is necessary to create a 3D model of the environment. This is typically solved by introducing 3D models of buildings. They are modeled from basic geometric primitives (e.g. box, pyramid, cone, cylinder) and a texture is applied. Although such an approach has clear advantages for human perception of the environment, the facades of the buildings still look very 'flat'. The plane of a facade's elements is usually slightly shifted from the plane of the facade: windows and doors are usually inset, while other elements such as cornices and ornaments are set out. Similarly, different materials have different visual properties: windows have a glossy look and reflect the environment, while the facade wall has no reflection and a typically rough surface. To enhance the models of buildings with such details for a more realistic appearance, it is necessary to know precisely where the elements of the facade are located.

This is where the task of recognition of facade structure comes into play.

1.3 Related Work

Some research has already been done in this area, and in this section we give an overview of related work on the topic of modeling and recognition of facades. The proposed methods vary greatly both in their assumptions about the input data and in the expected results.

A geometrical approach is employed in [Werner and Zisserman, 2002]. Given a series of images, a coarse polyhedral model is created by performing projective reconstruction, detection of vanishing points and line matching. The model is refined in a second step and facade elements are matched.

A different approach is taken in [Čech and Šára, 2009]. They aim to assign labels to the pixels of the input image to discriminate between windows and background, using various pixel-based languages that constrain the labeling. The disadvantage of such methods is that they insist on labeling all image pixels, even non-facade ones, which requires the entire model to be encoded by label compatibilities. Such a model representation may be difficult (if not impossible) to obtain. On facade images, however, the results are very good.

¹ Available at http://maps.google.com/


In [Dick et al., 2004], a 3D model of a building is constructed together with identified facade elements, given a series of images of the building. First, the images are self-calibrated and the planes of the walls are detected. The next stage involves a search for primitives (facade elements) on each wall, and the most likely ones are identified. Finally, maximum a posteriori estimation using MCMC simulation is used to determine the most probable set of primitives.

Context-free grammars are used in various methods. In [Alegre and Dellaert, 2004], a grammar is proposed that partitions the input image into regions of identical color. Two operators (non-terminal symbols) enable the partitioning: split (into regions with different properties) and division (into regions of similar appearance). The partitioning is done along either the x or y axis, and the computation is performed with an MCMC simulation in which the tree of partitions gets altered to form new possible partitions and the proposed 'ideal' image is compared to the input image.

Grammars that hierarchically partition the image were also used in [Ripperda and Brenner, 2007], but on a higher level. Terminal symbols represent facade elements such as doors, windows and walls; non-terminal symbols determine the intermediate divisions (array of facades, symmetric facade etc.). Using an image and range data, the likelihood of symbols is determined by various features: depth, color, correlation (for similarity of elements), entropy and variance (for homogeneity of arrays of elements). Computation is again done with MCMC, performing splits, changes in the structure of a split and replacements of symbols. Their work uses heuristic, hand-designed quasi-classifiers that are not learnable and do not generalize beyond doors and windows.

The method cannot decide whether the input image is indeed a facade or not: a non-facade image would still be interpreted as a facade. The methods developed in this thesis will be much more general; they will not be specific to building facades, they will use generic learnable classifiers, a more general structure model, and more complex, generic attribute constraints.

The method introduced in [Reznik and Mayer, 2007] works with sequences of images and uses the implicit shape model technique, which detects windows from the knowledge of important window regions (e.g. corner areas). MCMC is employed to find the most probable set of windows.

Other methods that do not use a Bayesian framework include [Ali et al., 2007], who take advantage of Haar-like features and the AdaBoost algorithm to detect windows on facades.

A single input image is used in [Müller et al., 2007] to recognize the structure of the facade. The image is first partitioned into a two-dimensional array of tiles, which are later refined by segmentation into smaller rectangles (using split grammars). Finally, the facade elements are recognized from the rectangles by matching them against a database of elements.


[Pauly et al., 2008] proposed a method for discovering regular structures in 3D scans. The work is based on regular discrete groups and relies on a sufficient number of observed repetitions to discover a regular structure.

A problem somewhat related to facade recognition is the generation of facades and, more generally, the modeling of artificial buildings and cities. Very interesting results are shown in [Müller et al., 2006], where buildings are modeled using context-free grammars that assemble simple solids. Changing the grammar rules allows modification of the appearance of the buildings; samples are given for family houses and office buildings. There is a software product called CityEngine which is based on this work.

1.4 Our Approach

We aim to create a framework for the recognition of facade elements from a single ground-level image. Facades typically exhibit a lot of regular structure. The rules of the structure depend greatly on the architectural style, but some rules seem to be valid for the majority of facades: for example, floors have the same height, windows are regularly spaced across the whole facade, there are various symmetries etc.

The structure describes relationships between the objects and determines in what contexts the objects are likely to be found. When applied to facades, the roof is expected to be above the facade, and cars can be seen on the ground level, but not along the other floors. Similarly, the structure can tell in what modalities an object can appear in a given context: the appearance of a window differs from the default look if it is covered by a window shutter or if flowers are present in the window. Finally, we perceive that complex scenes or objects can be described in terms of a hierarchical structure: a scene in a city consists of buildings, roads, cars, sidewalks and pedestrians; the exterior of a building consists of walls (facades) and a roof; the facade consists of facade elements etc.

As illustrated above, the structure can give us hints about typical situations. Therefore, in our approach we will try to take advantage of knowledge about the structure. Various methods have been proposed (for example [Liu and Gagalowicz, 2010]) that hard-code the structural knowledge in the algorithm. Such algorithms usually have a narrow domain where they are useful; moreover, it is hard to extend them beyond their original scope. Our intent is to design the facade recognition framework in such a way that structural information is not an inherent part of the algorithm.

Ideally, the algorithm should receive structural knowledge and classifiers along with an image as its input data. The algorithm will apply the structural knowledge to infer hypotheses about what objects are likely to be found in the image (and where), and classifiers will be used to accept or reject these hypotheses.

Figure 1.2: The left picture shows a facade with strong structure: many elements, aligned. In contrast, the right picture shows a weakly structured facade. Pictures from [Korč and Förstner, 2009].

Such an algorithm would be very general and could be used in a wide variety of applications, because it would make a clear distinction between low-level vision recognition methods and high-level modeling of the problem.

In the context of facade recognition, we are not trying to solve the general problem of facade recognition. We will concentrate on facades with typical city architecture where the facade elements such as windows and doors can be clearly identified. Additionally, we consider facades with strong structure. That means there are meaningful global constraints among many elements of the same type (e.g. alignment). In weak structures, there are few elements of a given type and the constraints are weak (above, left/right). Figure 1.2 illustrates facades with strong and weak structure.

1.5 Structure of the Thesis

In Chapter 2 we present a stochastic data model that records knowledge about facades and the classifier. Chapter 3 presents methods for approximate computation of most probable facade interpretation. Various improvements to the model and sampling process are described in Chapter 4. The results in terms of solution quality and effectiveness are given in Chapter 5. Chapter 6 proposes a generalization of the models using context-free grammars and finally Chapter 7 concludes the thesis.


Chapter 2 Data Model

In this chapter we are going to describe the computational model we have employed for the process of facade parsing. Before getting into any formal details, let us summarize our task in one statement:

Given an image of a facade, find out its most probable interpretation.

We will interpret the facades by means of models. A model of a facade determines its appearance: how many floors the building has, how many windows are present on each floor, whether there is an entrance door or not, whether there are balconies etc. In other words, a model represents the composition of facade elements.

The task of finding out what elements are present on the facade is integrally connected with the task of finding the positions of those elements. Therefore we introduce attributes for models. Attributes capture information about a concrete instance of a facade. Most importantly, attributes store the elements' positions and sizes. Other properties can be stored as well, such as an element's appearance or color.

Figure 2.1 gives an example of how a facade could be interpreted. Red rectangles mark windows, blue rectangles denote window decorations and the green rectangle marks the entrance door.

The interpretation of a facade from an image is typically not unique. This ambiguity is inherent to most recognition tasks. The bottom-left red rectangle marking a window in Figure 2.1 is a good example of such ambiguity: the window apparently used to be in that position, however it has been bricked up. Should it be interpreted as a window or not? With increasing complexity of the models, the ambiguity naturally becomes more apparent.


Figure 2.1: An example of an interpretation of a facade.

In the following sections we construct a set of models that try to interpret images of facades. Then, given an image, we choose the most probable model from the available ones and the attributes of the most probable interpretation.

2.1 Bayesian Inference

Formally, let us suppose we have a collection of candidate models $\{M_j, j \in \mathcal{J}\}$ that provide hypotheses of facade interpretation. Each model $M_j$ has a vector $\theta_j$ of unknown parameters (the attributes of our models), $\theta_j \in \mathbb{R}^{n_j}$, where the number of dimensions $n_j$ varies among models. We can represent the joint probability of a model $M_j$, its attributes $\theta_j$ and the observed data (image) $I$ as $p(j, \theta_j, I)$. To determine the most probable interpretation, we need to solve the following equation:
\[
(j, \theta_j) = \operatorname*{argmax}_{j, \theta_j} \; p(j, \theta_j \mid I). \tag{2.1}
\]

For that, we use the joint posterior probability $p(j, \theta_j \mid I)$, which expresses the probability of a model and its attributes given an instance of the data. We use Bayesian inference about $j$ and $\theta_j$ based on this joint posterior:
\[
p(j, \theta_j \mid I) = \frac{p(I \mid j, \theta_j)\; p(j, \theta_j)}{p(I)}
\]
where $p(I \mid j, \theta_j)$ is the likelihood of the data $I$ given model $M_j$ with attributes $\theta_j$, $p(j, \theta_j)$ is the prior distribution and $p(I)$ is the probability of producing the data $I$.

On its own, p(I) is hard to calculate, because we are usually unable to determine which one of two different instances of data is more probable.

But that does not pose a problem for us: $p(I)$ is not a function of $j$ and $\theta_j$. The relationship between the likelihood, prior and posterior distributions is often expressed as
\[
p(j, \theta_j \mid I) \propto p(I \mid j, \theta_j)\, p(j, \theta_j)
\]


where the relation $A \propto B$ stands for $A$ being proportional to $B$, leaving out the probability of the data $p(I)$. Because we are looking for the most probable interpretation, we can safely ignore $p(I)$, as it has no influence on the modes of the probability distribution. The problem (2.1) can be rewritten as
\[
(j, \theta_j) = \operatorname*{argmax}_{j, \theta_j} \; p(I \mid j, \theta_j)\; p(j, \theta_j). \tag{2.2}
\]
In the following sections we are going to define the prior distribution $p(j, \theta_j)$ and the data likelihood term $p(I \mid j, \theta_j)$.

2.1.1 Hierarchical Models

When working with more complex problems in Bayesian statistics, it is necessary to go beyond the simple structure of prior distribution, likelihood and posterior distribution. In many statistical tasks the model parameters are related to each other by the nature of the problem, and the joint probability model should reflect these dependencies. Applied to our problem, it can be seen that e.g. the size of the facade elements depends on the size of the facade, and the position of the elements depends on their number.

Hierarchical models are often a natural way to describe such a model. The observable outcomes can be modeled conditionally on certain parameters. These parameters are stochastic too, having their own distribution that depends on another set of parameters, called hyperparameters [Gelman et al., 2003].

Compared to non-hierarchical models, those with a low number of parameters often fail to fit larger datasets well, while models with a higher number of parameters tend to overfit the existing data. Hierarchical models may have enough parameters, with specified dependence among them, to avoid overfitting. The hierarchical structure can also help to understand the problem better and to develop strategies for computation.

The exact values of the hyperparameters $\phi$ are not known and they have their own distribution $p(\phi)$, so the joint prior distribution can be expressed as
\[
p(\theta, \phi) = p(\theta \mid \phi)\, p(\phi).
\]
In most real-world problems, there is some knowledge about $\phi$ to constrain the hyperparameters. The distribution $p(\phi)$ can be inferred from the observed data.

We use hierarchical models in the following sections to simplify the creation of the models of facades.


2.2 Camera Model

A camera model is a part of every vision process. It introduces further parameters which do not depend on the model $M_j$. These parameters are the intrinsic camera calibration (focal length, principal point and radial distortion) and the external camera calibration (camera position and orientation) [Hartley and Zisserman, 2000].

When the target object is a plane, the external camera calibration reduces to determining the homography between the facade plane and the image projection plane. When the intrinsic camera calibration is known, the homography is induced by the camera rotation with respect to the facade plane and is given by three parameters.

To limit the scope of this work, we employ images that have been preprocessed in order to eliminate the need to estimate the camera model. Additionally, there is an independent research project ongoing at CMP which aims to automatically determine the parameters and perform the correction as a consistent part of the facade interpretation process.

The images have been preprocessed as follows to make them suitable for interpretation:

• Correction of radial distortion. Introduced by the camera's lens, this distortion makes lines appear curved instead of straight. It can be removed by calibrating the lens, calculating the amount of distortion and applying the appropriate inverse transformation to the pictures [Fitzgibbon, 2001].

• Correction of perspective projection, that is, determining the above-mentioned homography. This distortion occurs if the plane of the camera's sensor is not parallel to the lines that are expected to be parallel in the picture. This usually happens when taking pictures of tall buildings from ground level and tilting the camera to fit the building into the frame. For the correction, lines that are vertical (resp. horizontal) in reality should be vertical (resp. horizontal) in the picture as well [Hartley and Zisserman, 2000].

The former distortion is best eliminated by using a properly calibrated camera. The latter correction currently has to be done manually in graphics software.
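As an aside, once four point correspondences are known, the perspective correction amounts to warping the image with a homography. The following is a minimal sketch in Python, assuming OpenCV and numpy are available; the corner coordinates, image sizes and file names are hypothetical, and this is not the preprocessing pipeline actually used for the thesis data:

```python
import cv2
import numpy as np

# Four corners of the facade marked in the source image (hypothetical values,
# e.g. clicked manually), listed clockwise from the top-left corner.
src = np.float32([[102, 40], [980, 95], [1010, 860], [88, 900]])

# Target rectangle: facade lines become axis-aligned after the warp.
width, height = 900, 800
dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

image = cv2.imread("facade.jpg")            # input photograph
H = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography from 4 point pairs
rectified = cv2.warpPerspective(image, H, (width, height))
cv2.imwrite("facade_rectified.jpg", rectified)
```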


Figure 2.2: Model for one row of windows. The scope spans from $s = c_0$ to $t = c_4$; $c_1, c_2, c_3$ are the window centers and $w$ is the window half-width.

2.3 The Prior Model

In this section, we will present our hierarchical probabilistic model for facade representation.

In our first model we are going to represent the facade as arrays of windows. Windows are essential parts of all buildings; they are usually very well structured and exhibit a relatively common appearance (although the appearance still varies greatly). In most cases windows are distributed throughout the whole facade, so being able to detect all of them correctly is of great help when parsing the whole scene.

2.3.1 Single-Row Model

Let us start with a simple row of several windows. The model for one row of windows is illustrated in Figure 2.2. The row is delimited by the start $s$ and the end $t$, which are x coordinates in the image. The tuple $(s, t)$ represents the model's scope, i.e. the area of the image we are going to inspect.

By default the scope of the facade is set to the whole extent of the image.

The scope can be set to a smaller subset of the image if we are parsing only a part of it: the rest of the image will stay uninterpreted. In some sense, the scope can be determined as a part of the interpretation; such a hierarchical model is described in Chapter 6.

All windows have a common y coordinate, half-width $w$ and half-height $h$. This basically means that we expect the windows to be roughly of the same type within the row. That is not always true for real facades, but it is a sufficient model for our case. The position of a window along the row is given by its center position $c_i$, so the $i$-th window spans from $c_i - w$ to $c_i + w$. Windows naturally must not overlap each other, meaning that the distance between the centers of two windows is no less than $2w$. There is always at least one window; $k$ denotes the actual number of windows. There is an upper limit $N$ on the number of windows in one row, given by the scope and the window width:
\[
N(w) = \left\lfloor \frac{t-s}{2w} \right\rfloor - 1 \tag{2.3}
\]
The centers of all windows are represented by a vector $\mathbf{c} = (c_1, c_2, \ldots, c_k)$ such that $c_1 < c_2 < \cdots < c_k$. For convenience, we define $c_0 = s$ and $c_{k+1} = t$.

The probability of our model is denoted $p(k, w, \mathbf{c})$. This distribution is in fact the prior distribution $p(j, \theta_j)$ from (2.2). A set of single-row models $\mathcal{M}_1 = \{M_k, k \in \mathbb{N}\}$ consists of models with a varying number of windows $k$. The attributes of each model are $\theta_k = (w, c_1, \ldots, c_k)$.

Instead of defining the distribution $p(w, k, \mathbf{c})$ directly, it is very convenient to use a hierarchical model:
\[
p(w, k, \mathbf{c}) = p(w)\, p(k \mid w)\, p(\mathbf{c} \mid w, k). \tag{2.4}
\]
This allows us to define the probability distribution for each variable while keeping the variables it depends on fixed. We will now describe the individual terms in (2.4).

2.3.2 Prior Distributions

Having made the stochastic model clear, it is necessary to choose the prior distributions for the window size, the number of windows and the window center positions. We have tried to keep the prior distributions as simple as possible, since fewer free parameters account for less tuning in later stages. Another point to consider is to choose distributions that are easy to draw samples from (i.e. not computationally intensive). Finally, it should be fairly easy to learn the parameters from existing data.

For the window size prior we have chosen the beta distribution. Its probability density function is
\[
f(x) = \frac{1}{B(\alpha, \beta)}\, x^{\alpha-1} (1-x)^{\beta-1}
\]
where $B(\alpha, \beta)$ stands for the beta function
\[
B(x, y) = \int_0^1 t^{x-1} (1-t)^{y-1}\, dt
\]
defined for $x, y > 0$. The beta distribution has support on the interval $(0, 1)$ and has two positive real parameters $\alpha$ and $\beta$. The parameters allow tweaking the shape of the density function: for $\alpha < 1, \beta < 1$ it is U-shaped, for $\alpha = 1, \beta = 1$ it becomes the uniform distribution, and for $\alpha > 1, \beta > 1$ it ends up unimodal. Figure 2.3 illustrates the density function for a few possible values of the parameters.

Figure 2.3: Beta distribution for various values of $\alpha, \beta$ ($\alpha = 2, \beta = 2$; $\alpha = 2, \beta = 5$; $\alpha = 4, \beta = 4$).

We expect the window size $w$ to be somewhere between zero and $\frac{t-s}{2}$, so we transform the beta distribution to work on this interval:
\[
p(w) = \frac{1}{B(\alpha, \beta)} \cdot \frac{2}{t-s} \left(\frac{2w}{t-s}\right)^{\alpha-1} \left(1 - \frac{2w}{t-s}\right)^{\beta-1} \tag{2.5}
\]

The number of windows $k$ is modeled by the binomial distribution
\[
p(k \mid w) = \binom{N(w)}{k} \lambda^k (1-\lambda)^{N(w)-k}. \tag{2.6}
\]
This distribution reflects well the fact that there is at least one window and at most $N(w)$ windows, as defined in (2.3). A sample from this distribution is literally the number of successes in a sequence of $N(w)$ trials to select a window or not. The parameter $\lambda$ determines the probability of a success; the mean is $\lambda N(w)$.

The effective length of the scope (i.e. the scope interval reduced by the widths of the windows) is denoted
\[
L(k, w) = t - s - 2(k+1)w. \tag{2.7}
\]


The positions of the window centers can be modeled with the help of the Dirichlet distribution, in which the $x_i$ will correspond to inter-window distances. Its probability density function is defined as
\[
f(x_1, \ldots, x_K) = \frac{1}{B(\boldsymbol{\alpha})} \prod_{i=1}^{K} x_i^{\alpha_i - 1}
\]
where $x_i > 0$ for $i \in [1, K]$ and $x_K = 1 - \sum_{i=1}^{K-1} x_i$; otherwise the density is zero. The vector $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_K)$ is a set of parameters. The term $B(\boldsymbol{\alpha})$ is the multinomial beta function; it can be expressed using gamma functions $\Gamma(n)$ as follows:
\[
B(\boldsymbol{\alpha}) = \frac{\prod_{i=1}^{K} \Gamma(\alpha_i)}{\Gamma\!\left(\sum_{i=1}^{K} \alpha_i\right)}.
\]
The gamma function for positive integers is the factorial with the argument shifted down by one, $\Gamma(n) = (n-1)!$; for real and complex numbers (with positive real part) it is defined by the integral $\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\, dt$.

The Dirichlet distribution is unimodal if $\alpha_i > 1$ for all $i = 1, \ldots, K$; the mode is at $x_i = \frac{\alpha_i - 1}{\alpha_0 - K}$ where $\alpha_0 = \sum_{i=1}^{K} \alpha_i$. The mean is $E[x_i] = \frac{\alpha_i}{\alpha_0}$. The marginal distributions are $x_i \sim \mathrm{Beta}(\alpha_i, \alpha_0 - \alpha_i)$.

The density function is parametrized by $K \geq 2$ positive parameters $\alpha_1, \ldots, \alpha_K$. These parameters can be viewed as desired proportions among the variables $x_i$. The variance of $x_i$ decreases with higher values of $\alpha_0$, so using equal $\alpha_i$ parameters will prefer roughly equal values of $x_i$. In our context, the $x_i$ are meant to be the distances between window centers $c_{i-1}$ and $c_i$. Considering that we want to model equispaced windows as the most probable case, we choose the $\alpha_i$ to be equal ($\alpha_0 = \alpha_1 = \cdots = \alpha_K$). To simplify the calculation, we constrain the possible values of the parameter $\alpha_0$ to positive integers, resulting in
\[
B(\alpha_0) = \frac{\left((\alpha_0 - 1)!\right)^K}{(K\alpha_0 - 1)!}.
\]

Figure 2.4 illustrates the Dirichlet distribution for $K = 3$. The values of $x_1$ and $x_2$ vary over the two axes; the last variable $x_3 = 1 - x_1 - x_2$ is implicit. Figure 2.5 shows the marginal distributions of the partial sums $s_l = \sum_{i=1}^{l} x_i$ for $K = 4$, $\alpha_0 = 3$. It can be seen that the distributions $p(s_l)$ have evenly distributed means.

Applied to the distribution of windows in a row, we need to transform the distances $x_i$ to the actual window positions $c_k$. For that, we set $x_i = \frac{c_{i+1} - c_i - 2w}{L(k, w)}$ and $K = k + 1$. The transformed density is
\[
p(c_1, \ldots, c_k) = \frac{1}{L(k, w)^k} \prod_{i=0}^{k} \left(\frac{c_{i+1} - c_i - 2w}{L(k, w)}\right)^{\alpha_0 - 1}. \tag{2.8}
\]


Figure 2.4: Dirichlet distribution for $K = 3$, $\alpha_0 = 3$.

Figure 2.5: Marginal distributions of $s_l = \sum_{i=1}^{l} x_i$ for $K = 4$, $\alpha_0 = 3$.


The factor $L(k, w)^{-k}$ has appeared due to the transformation of the density.

The generation of random samples $\mathbf{x}$ from the Dirichlet distribution is done in two steps [MacKay, 2003, chap. 23]:

1. draw $K$ samples from the gamma distribution: $y_i \sim \mathrm{Gamma}(\alpha_i, 1)$,

2. calculate the components $x_i$ of the resulting vector $\mathbf{x}$ using the formula
\[
x_i = \frac{y_i}{\sum_{j=1}^{K} y_j}.
\]

It could be argued that a multinomial distribution would be a better choice than the Dirichlet distribution: the input image $I$ is represented as a grid, so it seems more natural to use a discrete distribution. The reason for using a continuous distribution is that we would like to treat the input data $I$ as a signal rather than a grid and thus allow transformations. Another concern against the multinomial distribution is that its mode is much narrower and therefore gives too much preference to equispaced windows.

2.3.3 Hyperparameters for Prior Distributions

The probability distributions of the single-row model set the parameters as $\theta = (w, k, c_1, \ldots, c_k)$. These parameters uniquely define the interpretation of a single row of windows. All the distributions we use in the above model are parametric:

Term                        Parameters
$p(w)$                      $\alpha, \beta$
$p(k \mid w)$               $\lambda$
$p(\mathbf{c} \mid k, w)$   $\alpha_0$

Additionally, the scope is parametrized by its start $s$ and the end of the interval $t$. We consider $s$ and $t$ fixed. This gives us a vector $\phi$ of hyperparameters that influence the probability distributions:
\[
\phi = (s, t, \alpha, \beta, \lambda, \alpha_0).
\]
The components of the vector $\phi$ can either be set to fixed values or determined from the input data $I$. We set the scope interval $(s, t)$ from the data; the rest of the parameters are kept fixed. The hyperparameter for the positions of windows $\alpha_0$ has been set to 3 so that the preference for the mean position is not too strong. The values of the hyperparameters $\alpha$, $\beta$ and $\lambda$ were first roughly estimated, then learned from training data taken from the eTRIMS image database [Korč and Förstner, 2009].


Figure 2.6: Two-dimensional array of windows. The horizontal scope spans from $s_x = x_0$ to $t_x = x_4$ with window centers $x_1, x_2, x_3$; the vertical scope spans from $s_y = y_0$ to $t_y = y_3$ with window centers $y_1, y_2$.

2.3.4 Two-dimensional Facade Model

To work with real facades, we need to adapt our single-row model to facades that consist of multiple floors. In fact, it is a natural extension of the former model. Figure 2.6 illustrates the new model.

The facade is expected to have the windows aligned both horizontally and vertically; this can be seen on most facades. The window size stays the same for all windows and is given by half-width $w$ and half-height $h$. We denote the vector of window positions in the vertical direction as $\mathbf{y} = (y_1, \ldots, y_{k_y})$ and in the horizontal direction (previously $\mathbf{c}$) as $\mathbf{x} = (x_1, \ldots, x_{k_x})$, where $k_x$ and $k_y$ determine the number of windows in the horizontal resp. vertical direction. Other variables are marked by subscripts x and y in a similar manner to distinguish their meaning, i.e. the horizontal scope is $(s_x, t_x)$ and the vertical scope is $(s_y, t_y)$.

The horizontal and vertical parameters are independent, thus the joint prior distribution is
\[
p(w, h, k_x, k_y, \mathbf{x}, \mathbf{y}) = p(w, k_x, \mathbf{x})\; p(h, k_y, \mathbf{y})
\]
where the right-side terms are prior distributions from the single-row model. In terms of the $p(j, \theta_j)$ prior from (2.2), we create a set of models $\mathcal{M}_2 = \{M_{\mathbf{k}}, \mathbf{k} \in \mathbb{N} \times \mathbb{N}\}$ where $\mathbf{k} = (k_x, k_y)$. Each of the models $M_{\mathbf{k}}$ represents facades with a given horizontal and vertical number of windows. The attributes of each model $M_{\mathbf{k}}$ are $\theta_{\mathbf{k}} = (w, h, \mathbf{x}, \mathbf{y})$.


2.4 The Likelihood Term

The data likelihood expresses how well the given data model fits the real data. Our model determines the positions of the windows and their size, so we need to evaluate whether windows really do exist in the selected regions of the image.

For that, we use an external window classifier which has been developed within the eTRIMS project [eTRIMS Consortium, 2009]. We give a brief overview of the inner structure of the classifier.

The classifier is based on the object detection framework introduced in [Viola and Jones, 2002]. For the detection of windows it uses rectangular Haar-like features. Essentially, these can be defined as differences of sums of pixels over rectangular areas. The rectangles can be used at any position and scale within the input image and indicate certain characteristics of the given area. Haar-like features can be computed in constant time using the integral image technique, with a few lookups per feature.
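To illustrate the constant-time evaluation, here is a minimal sketch in Python with numpy (not the eTRIMS implementation) that builds an integral image and evaluates a simple two-rectangle Haar-like feature with a handful of lookups:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum table: ii[r, c] = sum of img[:r, :c] (zero-padded edge)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w rectangle with top-left (r, c): 4 lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle feature: left half minus right half of an h x w window."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

img = np.random.rand(96, 58)      # stand-in for a grayscale image patch
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 96, 58))
```

Each rectangle sum costs four lookups regardless of its size, which is what makes dense multi-scale scanning feasible.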

The framework uses the AdaBoost algorithm. The algorithm can give good results using a set of base classifiers that have poor performance by themselves (called weak classifiers). Considering a two-class classification problem, weak classifiers are functions $h_k(x)$ that return a label $t \in \{-1, +1\}$. In the learning process, AdaBoost uses training data $x_1, \ldots, x_N$ and their corresponding labels $t_1, \ldots, t_N$. At each stage of training, a new classifier is learned and a distribution of weights over the training set is updated. The weights indicate the importance of the data; they are increased for data misclassified by the most recently created classifier and are used for learning the classifier in the next stage [Bishop, 2006].

The resulting classifier $g(x)$ is given by
\[
g(x) = \sum_{k=1}^{K} \alpha_k h_k(x) \tag{2.9}
\]
where $K$ is the total number of base classifiers and $\alpha_k$ are weighting coefficients, $\alpha_k > 0$ and $\sum_{k=1}^{K} \alpha_k = 1$. The coefficients are determined when training the classifier (more accurate base classifiers get a greater weight $\alpha_k$). The classifier $g(x)$ returns values from the interval $[-1, 1]$; the label is assigned as $\operatorname{sign}(g(x))$. The AdaBoost learning process is expected to converge asymptotically to
\[
\lim_{K \to \infty} g(x) = \frac{1}{2} \log \frac{P(t = +1 \mid x)}{P(t = -1 \mid x)} \tag{2.10}
\]
where $\frac{P(t = +1 \mid x)}{P(t = -1 \mid x)}$ is the ratio of probabilities between the positive and negative class given an instance of data [Šochman, 2009].
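A small sketch of how the output of (2.9) can be evaluated and read through (2.10); the weak classifiers here are hypothetical thresholded feature responses, not those of the actual window detector:

```python
import numpy as np

def strong_classifier(x, weak_classifiers, alphas):
    """Weighted vote g(x) of (2.9); the alphas are positive and sum to one."""
    g = sum(a * h(x) for h, a in zip(weak_classifiers, alphas))
    label = np.sign(g)          # final class decision, sign(g(x))
    # Per (2.10), g approximates 0.5 * log P(t=+1|x)/P(t=-1|x), so the
    # class probability ratio can be recovered as exp(2*g).
    odds = np.exp(2 * g)
    return g, label, odds

# Hypothetical weak classifiers: thresholded feature responses in {-1, +1}.
weaks = [lambda x: 1 if x[0] > 0.5 else -1,
         lambda x: 1 if x[1] > 0.2 else -1]
alphas = [0.7, 0.3]
print(strong_classifier(np.array([0.8, 0.1]), weaks, alphas))
```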


Given the model $(j, \theta_j)$, we segment the input data $I$ into two regions, 'windows' $I_w$ and 'background' $I_b$, and we interpret them separately. $I_w$ and $I_b$ form a disjoint partition of the image $I$. For simplicity we consider them to be independent:
\[
p(I \mid j, \theta_j) = p(I_w \mid j, \theta_j)\; p(I_b \mid j, \theta_j).
\]
The terms on the right side form joint likelihoods over all pixels of $I_w$ resp. $I_b$:
\[
p(I_w \mid j, \theta_j) = \prod_{i=1}^{j} p_w(I_w \mid x_i, y_i, w, h) \tag{2.11}
\]
\[
p(I_b \mid j, \theta_j) = \prod_{x, y \in I_b} p_b(I_b \mid x, y) \tag{2.12}
\]
where $p_w$ is the joint probability of all pixels of a window given by its center $[x_i, y_i]$ and size $w \times h$, and $p_b$ is the probability that the pixel at $[x, y]$ is a part of the background.

We assume the background pixels to have a uniform distribution:
\[
p_b(I_b \mid x, y) = p_B.
\]

The likelihood of window pixels is set equal for all pixels within one window and the value is given by the window classifier. The outcome of the classifier $G_w$ is supposed to have the form in (2.10): it represents a ratio of the probability of the region being a window compared to the probability of being background. The possible outcomes are from the interval $[-1, 1]$; negative values classify the input as a window, positive values classify the data as background, and the absolute value indicates the certainty of the classification. A probability distribution $g_w$ from $G_w$ could be constructed as
\[
g_w(I_w \mid x_i, y_i, w, h) = \frac{1}{Z} \exp\left(-G_w(I, x_i, y_i, w, h)\right).
\]
Unfortunately the normalizing constant $Z$ cannot be determined by other means than by enumerating all possible outcomes, which would be intractable. The likelihood of the pixels of a window can then be defined to have equal probability, therefore
\[
p_w(I_w \mid x_i, y_i, w, h) = \left(g_w(I_w \mid x_i, y_i, w, h)\right)^{2w \cdot 2h}.
\]

It is convenient to use logarithms to express the terms; we call the negative logarithm $E(x) = -\log p(x)$ the energy. The energy for the likelihood $p(I \mid j, \theta_j)$ becomes
\[
E(I \mid j, \theta_j) = 4wh\,\omega \sum_{i=1}^{j} G_w(I_w, x_i, y_i, w, h) + (|I| - 4whj)\,E_B \tag{2.13}
\]


where $\omega$ stands for the unknown normalizing constant $Z$ and $E_B$ stands for the energy of a background pixel. The equation can be further simplified to
\[
E(I \mid j, \theta_j) = \gamma(w, h) \sum_{i=1}^{j} G_w(I_w, x_i, y_i, w, h) + \delta(|I|) \tag{2.14}
\]
where $\gamma(w, h)$ accounts for both the normalizing constant $Z$ and the window size, and $\delta(|I|)$ expresses the background energy. The functions $\gamma$ and $\delta$ have to be chosen appropriately to reflect the typical response from the classifier.

Figure 2.7: An instance of a T-style window used in the training set.

Figure 2.8: Red frames indicate windows detected by the window classifier.

We found it reasonable to use $\delta(|I|) = 0$ for any input data $I$; however, $\gamma(w, h)$ needs some tuning, otherwise either the prior or the likelihood can have too much impact on the result. When a new window from the correct class is added, the change in the data likelihood should be comparable to the change in the prior model.
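With the precomputed classifier response maps described in Section 2.4.1, evaluating this energy reduces to table lookups. A minimal sketch, taking $\gamma$ as a constant and $\delta = 0$ as suggested above; the names and the random map are hypothetical:

```python
import numpy as np

def likelihood_energy(response_map, centers, gamma=1.0):
    """Energy (2.14): gamma times the sum of classifier responses G_w at the
    window centers, with delta(|I|) = 0. response_map[y, x] holds the response
    for a window of the fixed size centered at pixel (x, y)."""
    return gamma * sum(response_map[y, x] for x, y in centers)

def total_energy(response_map, centers, prior_energy, gamma=1.0):
    """Posterior energy of (2.15): data term plus prior term E(j, theta_j)."""
    return likelihood_energy(response_map, centers, gamma) + prior_energy

# Hypothetical response map and two window centers (x, y).
rmap = np.random.randn(480, 640)
print(likelihood_energy(rmap, [(100, 200), (300, 200)]))
```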

The general problem (2.2) can be rewritten in terms of energies as
\[
(j, \theta_j) = \operatorname*{argmin}_{j, \theta_j} \; E(I \mid j, \theta_j) + E(j, \theta_j) \tag{2.15}
\]
where $E(j, \theta_j) = -\log p(j, \theta_j)$. As we will see in the next chapter, working with unnormalized distributions does not pose a great problem for the computation.

2.4.1 Classifier Evaluation

The window classifier has been learned on manually labeled instances of windows. For instances of the negative (background) class, random regions of facade images were used. The training set consisted of several hundred 'T-style' windows; an instance is shown in Figure 2.7.

Figure 2.9: Image of a facade (left) and the calculated map of responses from the classifier (right) for the correct window size (58×96). Black color means high likelihood of a window.

Figure 2.8 shows detected windows on a facade, allowing various scales of window size with a fixed aspect ratio. It can be seen that the classifier detects T-style windows fairly well, though there are many misclassifications at various window sizes. Also, the wide windows (on the left/right sides of the facade) are not detected correctly. The false positives do not pose a big problem: they get filtered out in the high-level structural interpretation.

Windows missed by the classifier are of greater concern, because there is no other way to identify them. The classifier is not overfit to the T-style windows from the training set, so it generalizes well and is able to recognize previously unseen window styles as well. Although not a perfect classifier, it gives responses good enough for our purpose.

For the purposes of this work, classifier response maps have been generated for images from the image databases. Each response map corresponds to a specific window size, has the same size as the input image, and the value of each pixel is the energy returned by the classifier for a window centered at that pixel. The classifier was run to generate response maps for various fixed window sizes for each image. An example of a response from the window classifier is shown in Figure 2.9. The correct window positions stand out as black bullets.


Chapter 3 Sampling

In the previous chapter we presented a model for the interpretation of facades and defined the likelihood term that determines our posterior target distribution $p(j, \theta_j \mid I)$ up to a multiplicative constant. This chapter presents a method for evaluating the target distribution to find the most probable interpretation $(j, \theta_j)$.

3.1 Overview

First we need to choose a suitable Bayesian inference method for the computation. The methods can be categorized as follows [MacKay, 2003]:

• Exact methods compute directly the required values, e.g. by complete enumeration.

• Approximate methods: either deterministic approximations (among others, maximum likelihood and steepest descent methods belong here) or Monte Carlo methods which use random sampling to compute the results.

Computation with our target distribution is a complex task because the models $M_j$ vary in the dimensionality of their attributes $\theta_j$. Most methods are unable to deal with a probability space of varying dimensionality, which is why we start with the evaluation of the simpler distribution $p(\theta_j \mid I, j)$. This distribution has the model $M_j$ fixed, and we are only trying to determine the components of $\theta_j$, whose number of dimensions is fixed.

This subtask is not trivial either: the likelihood term cannot be expressed analytically, because it is evaluated by an external classifier. Additionally, the likelihood returned by the classifier can in general be unboundedly high. These constraints limit the variety of methods we could use.


Exact methods can be used only for a very limited set of tasks due to their computational demands. Deterministic approximations are not available for our problem because the data likelihood does not have a simple analytical definition. Monte Carlo methods are therefore the only available tools, and they provide the most general approach to the calculation. We are interested in evaluating the effectivity and robustness of the Monte Carlo methods.

3.1.1 Introduction to Monte Carlo Methods

As already stated above, Monte Carlo methods are computational techniques that make use of random numbers. Their basic task is to generate random samples from a given probability distribution $p(x)$. With the generated samples it is possible to do a deterministic calculation and aggregate the outcomes into a final result. Monte Carlo methods are used with success in optimization problems, numerical integration and other problems where $p(x)$ is sufficiently complex and deterministic methods would need too much time to obtain the result.

The term $p(x)$ does not necessarily have to be a proper probability distribution (i.e. one that integrates to one). Then
\[
p'(x) = \frac{p(x)}{Z}
\]
is a proper distribution if $Z$ is the normalizing constant
\[
Z = \int_{\Omega} p(x)\, dx
\]
where $\Omega$ is the domain of $x$. Often $Z$ is not known, and it is not required to be known for the problem solution. If needed, $Z$ can be estimated from the acquired samples.

Applied to our problem, having a set of random samples from the target distribution $p(\theta_j \mid I, j)$ allows us to evaluate which sample is the most probable one. A set of samples large enough should include samples from all modes of the distribution, allowing us to estimate the optimal values of $\theta_j$.

It can be shown that sampling from $p(x)$ is a hard task. Although we are typically able to easily calculate the value $p(x)$ for any particular $x$, we are unable to say where $p(x)$ has its typical set, that is, a set of $x$ where the total probability is close to one. At the same time, we want to avoid evaluating $p(x)$ everywhere: if $x$ were a vector with $d$ dimensions, naive sampling on a uniform grid using $t$ points for each dimension $x_i$ would require as many as $t^d$ evaluations. This is clearly unusable even with only a small number of dimensions.


That is why we need some more sophisticated methods. The following sections give a brief overview of selected Monte Carlo techniques.

3.1.2 Rejection Sampling

Rejection sampling is a simple method for sampling from $p(x)$. Considering a one-dimensional problem of sampling from $p(x)$, we assume there is another distribution $q(x)$ from which we can draw samples directly. Furthermore, we expect that a constant $c$ exists such that $cq(x) \geq p(x)$ for all $x$. The distribution $q(x)$ is called the proposal distribution. The process of sampling works as follows:

1. Draw a random sample $x$ from $q(x)$.

2. Evaluate $cq(x)$ and draw a random sample $u$ from the uniform distribution on the interval $[0, cq(x)]$.

3. If $u \leq p(x)$ then accept the sample, otherwise reject it.

This process can be viewed as choosing a point $(x, u)$ under the graph of the function $cq(x)$ and evaluating whether the point also lies below the function $p(x)$. This implies that the accepted points are distributed proportionally to the density $p(x)$.

To be effective, the rejection sampling method requires the proposal distribution $q(x)$ to have a shape similar to $p(x)$. If $p(x)$ and $q(x)$ are very different, there are many rejections and the generation of samples is slow. Another drawback of this method is that it does not work well in higher dimensions, because the constant $c$ grows significantly with an increasing number of dimensions, resulting in very low acceptance ratios [MacKay, 2003].
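A minimal one-dimensional sketch of the procedure in Python with numpy; the bimodal target density and the uniform proposal are illustrative choices, not distributions from this thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def p(x):
    """Unnormalized target: a bimodal density on [0, 1]."""
    return np.exp(-80 * (x - 0.25) ** 2) + np.exp(-80 * (x - 0.75) ** 2)

def rejection_sample(n, c=1.1):
    """Uniform proposal q(x) = 1 on [0, 1]; requires c*q(x) >= p(x) everywhere."""
    samples = []
    while len(samples) < n:
        x = rng.uniform(0, 1)          # 1. draw x from q
        u = rng.uniform(0, c * 1.0)    # 2. draw u uniformly on [0, c*q(x)]
        if u <= p(x):                  # 3. accept if the point is below p(x)
            samples.append(x)
    return np.array(samples)

print(rejection_sample(5))
```

Here $c = 1.1$ satisfies $cq(x) \geq p(x)$ since this target never exceeds roughly one; a poorly matched proposal would raise the rejection rate accordingly.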

3.1.3 Markov Chain Monte Carlo Methods

More general and powerful than rejection sampling are methods based on Markov chain simulations, called Markov chain Monte Carlo (MCMC) methods. As with rejection sampling, we sample from a proposal distribution, but the condition that it approximate the target distribution is relaxed. We generate a sequence of states $\{z^{(1)}, z^{(2)}, z^{(3)}, \ldots\}$ that form a Markov chain.

Before diving into MCMC algorithms, it is necessary to introduce the concepts of Markov chains and sampling with them. A Markov chain is a discrete stochastic process where future states depend only on the current state and not on the previous states. This condition is called the Markov property and can be expressed as
\[
p(z^{(t+1)} \mid z^{(t)}, \ldots, z^{(1)}) = p(z^{(t+1)} \mid z^{(t)}).
\]


A Markov chain can be specified by means of its initial state distribution $p(z^{(0)})$ and a transition kernel $T_t(z^{(t+1)} \mid z^{(t)})$ that determines the probability of the transition from state $z^{(t)}$ to $z^{(t+1)}$ at time $t$. Markov chains in which the transition kernel is the same for all $t$ are called homogeneous. The marginal probability of a state is given recursively as
\[
p(z^{(t+1)}) = \sum_{z^{(t)}} p(z^{(t)})\, T_t(z^{(t+1)} \mid z^{(t)}).
\]

An invariant distribution is a distribution that stays unchanged after any step of the Markov chain. Considering a homogeneous Markov chain with transition kernel $T$, a distribution $p(z')$ is invariant if
\[
p(z') = \int_{\Omega} p(z)\, T(z' \mid z)\, dz.
\]
If we want to use a Markov chain to draw samples from our target distribution $\pi(z)$, it has to be an invariant distribution of the Markov chain. Additionally, we need to be sure that the chain is ergodic: $p(z^{(t)})$ converges to the target distribution $\pi(z)$, that is $p(z^{(t)}) \to \pi(z)$ for $t \to \infty$, for any initial state $p(z^{(0)})$.

If a Markov chain with transition kernel $T$ satisfies the detailed balance condition for the target distribution $\pi(z)$, then $\pi(z)$ is an invariant distribution of the chain and the chain is ergodic. Detailed balance requires that a state $z$ drawn from $\pi(z)$ has the same probability of moving to a state $z'$ as a state $z'$ drawn from $\pi(z)$ has of moving to $z$:
\[
\pi(z')\, T(z \mid z') = \pi(z)\, T(z' \mid z).
\]
Markov chains that satisfy detailed balance are also called reversible.

In contrast to rejection sampling, random samples generated by Markov chains are not independent, because the state $z^{(t)}$ depends on the previous state $z^{(t-1)}$. To obtain independent samples, it is necessary to run the chain for a significantly longer time and use only every $n$-th sample (a lower bound for $n$ can be estimated [MacKay, 2003]). Fortunately, we do not require the samples to be independent: since we are going to search for the most probable state, dependent states along the way do not pose a problem.

Another problem is to determine how long the chain should run before it can be said to have converged to the target distribution [Winkler, 2003]. There are some theoretical estimates of an upper bound, but they are too large to be useful in practice.

When constructing Markov chains in practice, it is useful to start with base transitions $\{B_k\}$ and create mixtures and concatenations based on them.


A mixture consists of mixing coefficients $\alpha_k$ and base transitions $B_k$:
\[
T(z' \mid z) = \sum_k \alpha_k B_k(z' \mid z).
\]
When the transition $T$ is applied, a base transition $B_k$ is chosen randomly with respect to the coefficients $\alpha_k$ and then applied. If all $B_k$ satisfy detailed balance individually, $T$ will satisfy it too. Concatenations are created by successive application of base transitions $B_k$. In the case of a concatenation of two transitions
\[
T(z' \mid z) = \int_{\Omega} B_2(z' \mid z'')\, B_1(z'' \mid z)\, dz''
\]
one intermediate state $z''$ is used, $\Omega$ being its domain. Detailed balance does not hold for concatenations [Bishop, 2006].

When doing MCMC simulations, the initial samples are usually discarded. This phase is commonly referred to as the 'burn-in' period. The rationale behind burn-in is that the initial state is often chosen randomly and the chain needs some time to 'converge' to the target distribution. It is however generally unclear how long the burn-in period should take. We do not explicitly perform burn-in; instead we start the simulation from a state with one window at the position with the best classifier response.

3.2 Computation with MCMC

In the following sections we construct a Markov chain which will be used to draw random samples from the target distribution $p(j, \theta_j \mid I)$. There will be two types of transitions within the chain.

• Exploratory moves are used to search for the most probable positions of the windows. Section 3.3 describes the Metropolis-Hastings algorithm on which they are based, and Section 3.4 describes them in detail.

• Structural moves serve for finding out how many windows should be used. They involve reversible jumps, which will be described in Section 3.5; the details of the structural moves are given in Section 3.6.

When performing an update of the Markov chain, a structural move is done with probability $p_s$; otherwise an exploratory move is done, with probability $1 - p_s$. We use $p_s = 0.05$.

Note that for now we work with a fixed window size parameter. Section 3.7 discusses the estimation of the window size.


3.3 Metropolis-Hastings Algorithm

This algorithm proves to be a very general and powerful MCMC method.

One step of the algorithm can be described as follows:

1. Draw a random sample $z'$ from the proposal distribution $q(z' \mid z^{(t)})$.

2. Evaluate the acceptance ratio of the proposed new sample:
\[
\alpha = \min\left(1, \; \frac{p(z')}{p(z^{(t)})} \cdot \frac{q(z^{(t)} \mid z')}{q(z' \mid z^{(t)})}\right) \tag{3.1}
\]

3. Draw a random sample $u$ from the uniform distribution on $(0, 1)$.

4. If $\alpha > u$, the proposed sample $z'$ is accepted and $z^{(t+1)} = z'$; otherwise $z'$ is discarded and $z^{(t+1)} = z^{(t)}$.
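The following sketch implements these four steps for a generic target density. It is a minimal illustration in Python with numpy: a symmetric Gaussian random-walk proposal is chosen, which makes the proposal ratio in (3.1) equal to one, and the example target is a standard Gaussian rather than any distribution from this thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis_hastings(log_p, z0, n_steps, step=0.1):
    """Random-walk Metropolis: with a symmetric Gaussian proposal the q-ratio
    in (3.1) cancels, so alpha = min(1, p(z')/p(z))."""
    z = np.asarray(z0, dtype=float)
    chain = [z.copy()]
    for _ in range(n_steps):
        z_new = z + rng.normal(0.0, step, size=z.shape)   # 1. propose z'
        log_alpha = log_p(z_new) - log_p(z)               # 2. acceptance ratio
        if np.log(rng.uniform()) < log_alpha:             # 3.-4. accept/reject
            z = z_new
        chain.append(z.copy())
    return np.array(chain)

# Example target: standard 2D Gaussian (log-density up to a constant).
chain = metropolis_hastings(lambda z: -0.5 * np.dot(z, z), [0.0, 0.0], 1000)
print(chain.mean(axis=0))
```

Working with log-densities, as here, also sidesteps numerical overflow when the unnormalized probabilities span many orders of magnitude.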

The target distribution $p(z)$ is an invariant distribution of the Markov chain generated by the Metropolis-Hastings algorithm, and the chain satisfies the detailed balance condition, so the produced samples converge to the target distribution [Bishop, 2006].

An important factor in the effectivity of this algorithm is the choice of the proposal distribution. For continuous spaces, usually a Gaussian centered on the current state is used. If the variance of the proposal distribution is too small, the chain moves slowly through the space and takes a long time to converge. On the other hand, if the variance is too big, the proposed steps are often very large and result in states with low probability, so they are mostly rejected and the chain again converges slowly. That is why this method often needs some tuning in order to be efficient.

The algorithm works on multidimensional spaces: although running the chain takes much longer, higher dimensionality does not pose such a problem as it does in rejection sampling.

There is a related method called Gibbs sampling, which can be viewed as a special case of the Metropolis-Hastings algorithm. Gibbs sampling is useful for multidimensional probability spaces where it is not possible to sample directly from $p(z)$, but it is possible to use the conditional probabilities $p(z_i \mid \{z_j\}_{j \neq i})$ for all components of $z$. New samples are drawn by sampling one component $z_i$ at a time, based on the rest of the components. Gibbs sampling has the property that all proposals are accepted, since $\alpha = 1$.

3.4 Estimation of Window Positions

First we are going to construct a sampler for the search of window positions with a known number and size of windows. Applied to the single-row model, the target distribution is
\[
p(x_1, \ldots, x_k \mid I, k, w) \propto p(I \mid k, w, x_1, \ldots, x_k)\; p(x_1, \ldots, x_k \mid k, w).
\]

Since the two-dimensional model is actually composed of two independent single-row models, we introduce the exploratory move for the single-row model. In the transition from the current state $z^{(t)}$ (after $t$ iterations) to a new state $z'$ we want to modify the vector of window positions $\mathbf{x} = (x_1, \ldots, x_k)$.

We employ the Dirichlet distribution from (2.8) as the proposal distribution for the positions of windows and we draw independent samples, so the proposal distribution ratio becomes
\[
\frac{q(z^{(t)} \mid z')}{q(z' \mid z^{(t)})} := \frac{p(x_1^{(t)}, \ldots, x_k^{(t)} \mid k, w)}{p(x'_1, \ldots, x'_k \mid k, w)} \tag{3.2}
\]
and the ratio of target distributions is
\[
\frac{p(z')}{p(z^{(t)})} := \frac{p(I \mid k, w, x'_1, \ldots, x'_k)\; p(x'_1, \ldots, x'_k \mid k, w)}{p(I \mid k, w, x_1^{(t)}, \ldots, x_k^{(t)})\; p(x_1^{(t)}, \ldots, x_k^{(t)} \mid k, w)}. \tag{3.3}
\]
The formula for the acceptance ratio $\alpha$ thus simplifies to a ratio of data likelihoods.
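In code, one exploratory move might look like the sketch below: the positions are proposed independently from the Dirichlet prior (2.8), so by (3.2) and (3.3) the prior terms cancel and the move is accepted on the difference of the likelihood energies (2.14). Python with numpy; the function names are hypothetical and `energy_fn` stands for a lookup into the classifier response maps:

```python
import numpy as np

rng = np.random.default_rng(3)

def propose_positions(s, t, k, w, alpha0=3):
    """Independent proposal from the Dirichlet prior (2.8) on window centers."""
    y = rng.gamma(alpha0, 1.0, size=k + 1)
    gaps = (y / y.sum()) * (t - s - 2 * (k + 1) * w)   # x_i * L(k, w)
    return s + np.cumsum(gaps[:-1] + 2 * w)

def exploratory_move(centers, energy_fn, s, t, w):
    """One Metropolis step. With the prior as the proposal, the prior terms in
    (3.1) cancel and alpha = min(1, exp(E_old - E_new)) on likelihood energies."""
    proposal = propose_positions(s, t, len(centers), w)
    d_energy = energy_fn(proposal) - energy_fn(centers)
    if np.log(rng.uniform()) < -d_energy:
        return proposal       # move accepted
    return centers            # move rejected, the chain stays in place
```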

When working with the two-dimensional facade model, we also need to explore the vector $\mathbf{y}$ containing the vertical positions of windows. Instead of drawing random samples for both $\mathbf{x}$ and $\mathbf{y}$, we create a mixture of transitions: first we randomly choose which side (horizontal or vertical) will be modified, then the above-defined exploratory transition is applied to the chosen side.

We do not claim this transition is particularly good in terms of effectiveness; the exploratory transition will be improved in Chapter 4.

3.5 Reversible Jump MCMC

Reversible jump MCMC (RjMCMC) is a method introduced in [Green, 1995]

that allows sampling in probability spaces with varying dimensions. Given a set of models{Mj, j ∈ J }, we want to construct ergodic Markov chain hav- ing p(j, θj) as the invariant distribution where θj is model-dependent vector of parameters.

Metropolis-Hastings algorithm compares densities of the target distribu- tion, but it is not possible to compare densities from spaces of different dimensionality. To enable the comparison, we need to compare them under the same measure of volume. A solution would be to map the parameters
