
3.3 Discussion

In document Text práce (5.831Mb) (Pages 58-64)

All the previously described MRF models are estimated on the levels of a Gaussian down-sampled pyramid, because this enables them to capture larger spatial relations in a texture.

An alternative approach is texture analysis by means of models with larger contextual neighbourhoods. Unfortunately, the parameters of such larger models tend to be more sensitive and can fluctuate with insignificant changes in a texture. On the contrary, models on the Gaussian pyramid are more robust, because Gaussian smoothing and down-sampling suppress insignificant details. Moreover, models on the Gaussian pyramid are more efficient, since the computational complexity is polynomial with respect to the radius of the contextual neighbourhood.
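The pyramid construction can be sketched as follows; the 5-tap binomial kernel (a common Gaussian approximation) and the choice of three levels are illustrative assumptions, not the exact filter used in the thesis:

```python
import numpy as np

def gaussian_pyramid(image, levels=3):
    """Build a Gaussian down-sampled pyramid.

    Each level is smoothed with a separable 5-tap binomial kernel
    (approximating a Gaussian) and down-sampled by a factor of 2,
    so a fixed contextual neighbourhood I_r covers twice the spatial
    extent on every coarser level.
    """
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        level = pyramid[-1]
        # separable convolution: columns first, then rows
        smoothed = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), 0, level)
        smoothed = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), 1, smoothed)
        pyramid.append(smoothed[::2, ::2])   # down-sample by 2
    return pyramid

texture = np.random.rand(64, 64)
pyr = gaussian_pyramid(texture, levels=3)
shapes = [p.shape for p in pyr]   # (64, 64), (32, 32), (16, 16)
```

A model with a small fixed neighbourhood estimated on every level of `pyr` then sees progressively larger spatial relations at constant cost per level.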

In our texture recognition and retrieval applications, we use models with a fixed contextual neighbourhood I_r for all processed textures. Although a different optimal neighbourhood could be found for each texture (3.12), it would be difficult to compare the model parameters for different neighbourhoods. It is also possible to estimate the model parameters for several neighbourhood sizes and combine them in one feature vector. The parameters of a model with a smaller neighbourhood I_r′ can be efficiently estimated during the estimation of the model with neighbourhood I_r, if I_r′ ⊂ I_r. See Haindl and Šimberová (1995) for more details.

The final remark concerns monospectral (grey-scale) textures. They can be either modelled as single-spectral textures, or the models can be estimated on the gradient image ∇Y_r = [∂Y_r/∂r_1, ∂Y_r/∂r_2]^T, which enlarges the feature vector and simplifies the modelling.

Additionally, the gradient image is more robust to illumination changes. Moreover, it will be derived (Section 4.3) that the part of the feature vector which includes only the features A_s, ∀s ∈ I_r, is invariant to simple brightness changes. However, full invariants to illumination colour, brightness, and other conditions will be derived in the following chapter.
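The gradient-image construction can be sketched as below; finite differences stand in for the partial derivatives, since the thesis does not prescribe a particular derivative estimator:

```python
import numpy as np

# A grey-scale texture Y; the gradient image has two spectral planes,
# one per partial derivative, so a monospectral texture can be fed to
# the multispectral models.
Y = np.random.rand(32, 32)
dY_dr1, dY_dr2 = np.gradient(Y)              # ∂Y/∂r_1, ∂Y/∂r_2
grad_image = np.stack([dY_dr1, dY_dr2], axis=-1)

# a global additive brightness shift leaves the gradient (essentially)
# unchanged, which illustrates the robustness to brightness changes
shifted = np.gradient(Y + 0.3)
```

The stacked array plays the role of a two-spectral texture, and the shift check illustrates why gradient features are insensitive to additive brightness changes.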

Chapter 4

Illumination Invariance

Illumination conditions of an image acquisition can change for various reasons. In our approach, we allow changes of brightness and spectrum of the illumination sources, and we derive illumination invariants based on the textural features from the previous chapter. This enables us to create a textural representation which is invariant to illumination colour and brightness (Vacha and Haindl, 2007a, 2010a).

We assume that a textured surface is illuminated with several illumination sources and that the positions of the viewpoint and the illumination sources remain unchanged. We start with the assumption of a single illumination source, which is far enough to produce uniform illumination, and of planar Lambertian surfaces with varying albedo and surface texture normal. However, these restrictive assumptions will later be relaxed to incorporate more illumination sources, nonuniform illumination, and surfaces with a natural reflectance model. Still, the assumption of fixed illumination positions might sound limiting. Nevertheless, our experiments with natural and artificial surface materials (Sections 6.1.3 and 6.1.4) show that the derived features are very robust even if the illumination positions change dramatically.

4.1 Illumination models

Let us assume that a textured Lambertian (ideally diffuse) surface is illuminated with one uniform illumination. The value acquired by the j-th sensor at the pixel location r can be expressed as

Y_{r,j} = ∫_Ω E(ω) S(r, ω) R_j(ω) dω ,   (4.1)

where ω is wavelength, E(ω) is the spectral power distribution of a single illumination, S(r, ω) is the Lambertian reflectance coefficient at the position r, R_j(ω) is the j-th sensor response function, and the integral is taken over the visible spectrum Ω. The Lambertian reflectance term S(r, ω) depends on the surface normal, illumination direction, and surface albedo.
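In practice the integral in (4.1) is evaluated over sampled wavelengths; the sketch below uses random stand-ins for the spectral power distribution, reflectance, and sensor responses (the sampling grid and all data are illustrative assumptions):

```python
import numpy as np

# Discretised version of (4.1): the integral over the visible spectrum
# becomes a sum over sampled wavelengths.
rng = np.random.default_rng(0)
n_wavelengths, n_sensors = 31, 3      # e.g. 400-700 nm in 10 nm steps
d_omega = 10.0                        # integration step (nm)

E = rng.random(n_wavelengths)              # E(omega), illumination
S_r = rng.random(n_wavelengths)            # S(r, omega) at one pixel r
R = rng.random((n_sensors, n_wavelengths)) # R_j(omega), sensor responses

# Y_{r,j} = sum_omega E(omega) S(r, omega) R_j(omega) d_omega
Y_r = R @ (E * S_r) * d_omega              # one value per sensor j
```

The matrix-vector product computes all C sensor responses of the pixel at once, which is the form used throughout the following derivations.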


Following the works of Finlayson (1995) and Healey and Wang (1995), we approximate the surface reflectance S(r, ω) by a linear combination of a fixed basis of functions s_c(ω):

S(r, ω) = Σ_{c=1}^{C} d_{r,c} s_c(ω) .   (4.2)

The functions s_c(ω) are optimal basis functions that represent the data. A method for finding a suitable basis was introduced by Marimont and Wandell (1992). They also concluded that, given the human receptive cones, a 3-dimensional basis set is sufficient to model colour observations. However, finding such a basis set is not needed in our method, because the key assumption is only its existence. Provided that j = 1, …, C sensor measurements are available, the acquired values can be approximated by

Y_{r,j} ≈ Σ_{c=1}^{C} d_{r,c} ∫_Ω E(ω) s_c(ω) R_j(ω) dω ,

which can be written compactly as Y_r = B_0 d_r, where (B_0)_{j,c} = ∫_Ω E(ω) s_c(ω) R_j(ω) dω and d_r = [d_{r,1}, …, d_{r,C}]^T. An image of the same scene illuminated with a different spectrum Ẽ(ω) is composed of Ỹ_{r,j} and is approximated analogously as Ỹ_r = B̃_0 d_r, where B̃_0 is a C×C matrix. Consequently, the two images Ỹ, Y acquired with different illumination brightness or spectrum can be transformed to each other by the linear transformation

Ỹ_r = B Y_r   ∀r ,   (4.3)

where B = B̃_0 B_0^{-1} is the same for all pixels. If we change the sensor response functions R_j(ω) instead of the illumination spectrum E(ω), the derivation is almost the same and formula (4.3) holds again.
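The derivation can be checked numerically; the basis functions, sensor responses, and both illumination spectra below are random stand-ins chosen only to illustrate that B = B̃_0 B_0^{-1} maps Y to Ỹ at every pixel:

```python
import numpy as np

# Numerical check of (4.3): two images of the same reflectances under
# different illumination spectra are related by Y~_r = B Y_r with
# B = B~_0 B_0^{-1}.  C = 3 sensors/basis functions, 31 wavelength
# samples, all data random stand-ins.
rng = np.random.default_rng(1)
C, n_w = 3, 31
s = rng.random((C, n_w))          # basis functions s_c(omega)
R = rng.random((C, n_w))          # sensor responses R_j(omega)
E = rng.random(n_w)               # original illumination E(omega)
E_tilde = rng.random(n_w)         # changed illumination E~(omega)

# (B_0)_{j,c} = sum_omega E(omega) s_c(omega) R_j(omega)
B0 = (R * E) @ s.T
B0_tilde = (R * E_tilde) @ s.T
B = B0_tilde @ np.linalg.inv(B0)  # pixel-independent transform

d = rng.random((C, 100))          # coefficients d_r of 100 pixels
Y = B0 @ d                        # image under E
Y_tilde = B0_tilde @ d            # image under E~
# Y~_r = B Y_r holds for every pixel r
```

Since B Y_r = B̃_0 B_0^{-1} B_0 d_r = B̃_0 d_r = Ỹ_r, the transform is exact for every pixel, independent of the per-pixel coefficients d_r.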

Multiple illumination sources

The linear model (4.3) is valid even for several illumination sources with variable spectra, provided that the spectra of all sources are the same and the positions of the illumination sources remain fixed. Let S^(p)(r, ω) denote the Lambertian reflectance coefficient corresponding to the p-th illumination and let P be the number of illumination sources. The acquired values Y_{r,j} can be expressed and approximated as

Y_{r,j} = ∫_Ω E(ω) Σ_{p=1}^{P} S^(p)(r, ω) R_j(ω) dω ≈ Σ_{c=1}^{C} ( Σ_{p=1}^{P} d^(p)_{r,c} ) ∫_Ω E(ω) s_c(ω) R_j(ω) dω ,

where d^(p)_{r,c} are the respective coefficients from approximation (4.2). Consequently, the image acquired with a different illumination spectrum is expressed and related as

Ỹ_r = B̃_0 Σ_{p=1}^{P} d^(p)_r = B Y_r   ∀r ,

where the second equality complies with formula (4.3). However, illumination sources with mutually different spectra would break this relation (see Appendix A.1 for more details).
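The multiple-source case reduces to the single-source one because the per-source coefficients simply add up; the sketch below (random stand-in data, illustrative sizes) verifies that (4.3) still holds for the summed coefficients:

```python
import numpy as np

# P sources with the same spectrum E: the coefficients d^(p)_r add up,
# so the single transform B of (4.3) still maps Y to Y~.
rng = np.random.default_rng(2)
C, n_w, P, n_pix = 3, 31, 4, 50
s = rng.random((C, n_w))                  # basis functions s_c(omega)
R = rng.random((C, n_w))                  # sensor responses R_j(omega)
E, E_tilde = rng.random(n_w), rng.random(n_w)
B0, B0_tilde = (R * E) @ s.T, (R * E_tilde) @ s.T

d_p = rng.random((P, C, n_pix))           # d^(p)_r for each source p
d_sum = d_p.sum(axis=0)                   # sum_p d^(p)_r
Y = B0 @ d_sum                            # image under E
Y_tilde = B0_tilde @ d_sum                # image under E~
B = B0_tilde @ np.linalg.inv(B0)
```

If each source had its own spectrum E^(p), the per-source matrices B_0^(p) would differ and no single B would relate the two images, which is the failure case discussed in Appendix A.1.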

Natural illumination model

The surface reflectance can be further generalized to the natural model of the Bidirectional Texture Function (BTF) (Dana et al., 1999), where the surface reflectance is a function of the surface position, wavelength, and incoming and outgoing light directions. Let L(r, ω, v_i, v_o) be the surface reflectance, where v_i is the illumination direction and v_o the viewing direction; then equation (4.1) becomes

Y_{r,j} = ∫_Ω E(ω) L(r, ω, v_i, v_o) R_j(ω) dω .   (4.4)

On the condition that Q is an arbitrary number of reflectance components in the reflectance model (e.g. a Lambertian component, different isotropic or anisotropic specular components) and each component is separable in ω, the reflectance can be decomposed and approximated (Vacha et al., submitted) as

L(r, ω, v_i, v_o) = Σ_{q=1}^{Q} Λ^(q)(r, v_i, v_o) S^(q)(r, ω) ≈ Σ_{q=1}^{Q} Λ^(q)(r, v_i, v_o) Σ_{c=1}^{C} d^(q)_{r,c} s_c(ω) ,   (4.5)

where Λ^(q)(r, v_i, v_o) is the q-th reflectance component at position r dependent on the angles, while S^(q)(r, ω) is the reflectance dependent on ω. The second row is again an approximation with the optimal basis functions s_c(ω) (4.2). Substitution into (4.4) provides equations for the images with a different illumination spectrum:

Y_r = B_0 Σ_{q=1}^{Q} d^(q)_r Λ^(q)(r, v_i, v_o) = B_0 d′_r   ∀r ,
Ỹ_r = B̃_0 Σ_{q=1}^{Q} d^(q)_r Λ^(q)(r, v_i, v_o) = B̃_0 d′_r   ∀r ,

which is in accordance with the linear model (4.3). For a fixed position r, the function Σ_{q=1}^{Q} Λ^(q)(r, v_i, v_o) becomes the well-known Bidirectional Reflectance Distribution Function (BRDF) (Nicodemus et al., 1977). Obviously, the previous illumination model includes simpler models such as the dichromatic reflection model (Shafer, 1985) or the well-known Phong reflection model.
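The separable BTF case can also be checked numerically: the angular factors Λ^(q) fold into the combined coefficients d′_r, so (4.3) survives. All sizes and data below are illustrative stand-ins, with Q = 2 components standing in for, e.g., a diffuse and a specular term:

```python
import numpy as np

# Q separable reflectance components: the angular terms
# Lambda^(q)(r, v_i, v_o) are absorbed into d'_r, so the linear
# transform B of (4.3) still relates the two images.
rng = np.random.default_rng(3)
C, n_w, Q, n_pix = 3, 31, 2, 50
s = rng.random((C, n_w))                  # basis functions s_c(omega)
R = rng.random((C, n_w))                  # sensor responses R_j(omega)
E, E_tilde = rng.random(n_w), rng.random(n_w)
B0, B0_tilde = (R * E) @ s.T, (R * E_tilde) @ s.T

Lam = rng.random((Q, n_pix))              # Lambda^(q) at fixed v_i, v_o
d_q = rng.random((Q, C, n_pix))           # coefficients d^(q)_r
d_prime = np.einsum('qp,qcp->cp', Lam, d_q)   # d'_r = sum_q Lam d^(q)_r
Y, Y_tilde = B0 @ d_prime, B0_tilde @ d_prime
B = B0_tilde @ np.linalg.inv(B0)
```

Because the viewing and illumination directions enter only through d′_r, which cancels in B = B̃_0 B_0^{-1}, the transform stays pixel-independent even for this richer reflectance model.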

The assumption of wavelength separability in (4.5) neglects effects where the colour of a surface depends on the viewing or illumination angle. An example of a material with such an effect is a furry textile, where the colour of the fur differs from the colour of the base textile. Consequently, we see either the hairs or the base textile, depending on the viewing angle and the position of the hairs.

Naturally, the linear model (4.3) includes all other colour models which can be transformed linearly from RGB, i.e. CIE XYZ, opponent colours, the Gaussian colour model (Geusebroek et al., 2003) when computed from RGB images, and YCbCr used in video coding.
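The reason is that a fixed linear colour transform M only conjugates the illumination matrix: if Ỹ = B Y in RGB, then Ỹ′ = (M B M^{-1}) Y′ in the transformed space. The opponent-colour matrix below is one common variant, used here purely as an example of a linear colour transform:

```python
import numpy as np

# A linear colour transform M (one common opponent-colour variant)
# preserves the linear illumination model: the transform in the new
# space is simply M B M^{-1}.
M = np.array([[ 1/3,  1/3,  1/3],    # intensity channel
              [ 1/2,  0.0, -1/2],    # red-green opponent channel
              [-1/4,  1/2, -1/4]])   # yellow-blue opponent channel

rng = np.random.default_rng(4)
B = rng.random((3, 3))               # illumination transform in RGB
Y = rng.random((3, 20))              # 20 RGB pixels
Y_tilde = B @ Y                      # re-illuminated RGB pixels

Yp, Yp_tilde = M @ Y, M @ Y_tilde    # same pixels in opponent space
B_prime = M @ B @ np.linalg.inv(M)   # illumination transform there
```

Any invertible 3×3 matrix M works the same way, which is why CIE XYZ, YCbCr, and the Gaussian colour model all inherit the invariance properties derived for RGB.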

Other illumination effects

We briefly review illumination-related effects which are not considered in the previous models and which are therefore either approximated or completely neglected if they cannot fit into the linear model (4.3).

We considered opaque surfaces and their reflectance, which is the process in which the incident light is immediately radiated without a change of frequency (although different frequencies are reflected or absorbed in different amounts). The BTF model includes inter-reflections and sub-surface light scattering, but these cannot be separated and examined individually. The previous illumination models do not account for polarisation effects.

Unlike reflectance, fluorescence or phosphorescence is a process in which the energy of the incident light is absorbed and subsequently emitted at a different wavelength. According to the Kasha-Vavilov rule, the emitted wavelength does not depend on the excitation wavelength for most fluorescent substances (Turro, 1978). However, different incident wavelengths carry different amounts of energy, which results in different intensities of the emitted light. The appearance of purely fluorescent surfaces under different illumination spectra can therefore be represented by the linear model (4.3). Unfortunately, the transformation matrix B would be different for fluorescence and for reflectance. Therefore, a common matrix B cannot model the appearance change of both fluorescent and simply reflective surfaces in one image, and it cannot model the appearance of surfaces which exhibit a combination of fluorescence and reflectance.
