

4.4 The role of IT Governance in digital operating models

4.4.5 Contribution to the dissertation project

IT (Fröhlich & Glasner, 2007, p. 227). The requirements, based on the capabilities of the company, form the basis of the IT operating model. These requirements build a common understanding of, and view on, the goals of a corporate strategy and can always be defined from a performance or a compliance point of view. These views can contain requirements for measurability and steering of IT, or regulatory requirements such as data protection laws (Fröhlich & Glasner, 2007, p. 231). Fröhlich and Glasner (2007, p. 228) mention six elements of an IT operating model: organisation, humans, technologies, processes, controls and measures.

From an operational point of view, the frameworks explained above, ITIL (for processes) and CobiT (for controls, metrics and alignment), are well-defined standards for implementing integrated ICT operations that ensure «availability, trust, integrity of data but also efficiency and effectiveness of the IT organisation or provider» (Fröhlich & Glasner, 2007, p. 229). All findings of this section converge in the next subsection, Business-IT-Alignment.

The simple model makes clear that communication differs from dimension to dimension. Different subject matter experts and knowledge domains (competency) speak different languages. Subject matter experts speak another language among their peers. Stakeholders (authority) have other interests than members of the supply chain (competency, enabling).

Defining new digital business operating models requires the understanding that markets (capacity) will be addressed differently. For future management, this implies that future digital business models will tightly link a business operating model and an IT operating model. The models may easily become congruent. We also saw in all recent studies that business-IT alignment can be achieved through high levels of communication. Mutual understanding is a prerequisite for this. It also implies that knowledge (competency) in information management will become a mandatory skill set of any future management. The general operating model introduced here is generic and illustrates the basic differences between knowledge domains of communication. But communication between business and IT is just one of many communication styles. Mastering different communication styles will become a key asset for future management competing with digital business models. As the Digital Switzerland study by Ruoss (2015) showed, the established structures of big companies are not as agile as those of small enterprises. Enterprise Governance of IT is not widely used, although its benefits are evident and scientifically proven. Many authors therefore see the digital transformation as a chance for SMEs to reinvent their business strategy and business operating models (Kagermann, Österle, & Jordan, 2011).

seem to have a high level of trust in processes. A high level of trust is also placed in process management. This contradicted the findings from the preliminary study in the context of a bank, where processes had not led to the desired effect of governance.

5 | Model of design principles for the definition of data

This chapter focuses on designing the principles of data definition. It is based on the previous studies, which reflect the relevance cycle from the figure. It concentrates on the middle part of figure 4.3, called IS Research, as shown in figure 5.1.

Figure 5.1 IS research (ad. Hevner, 2007)

This final discussion completes the dissertation project. It consists of building the artefact that fulfils the design goal as derived in subsection 4.1.6. In this case, the artefact comprises the principles that are ultimately to answer the research question. As shown in figure 5.1, the artefacts found should be justified and evaluated. Thus an iteration between design and verification is started, as in the second-to-last step in figure 4.2. This iteration has two steps, Assess and Refine. The result of both steps shall be called the Designing in this thesis.

5.1 Design building process of the model

The build part of this IS research is primarily based on existing theory, as proposed by Hevner et al. (2004). Various methods were used to assess the design. For Hevner et al. (2004) it is important that, on the one hand, the design blocks are derived from a theory and, on the other hand, that a possible evaluation of the artefact also takes place by applying scientific methods. This cycle is called the rigour cycle. The theory that underlies this design was introduced in chapters 2 and 3. The preliminary studies provide the identification of the business requirement to combine these theoretical foundations with practice and thus to evaluate them right at the beginning.

Principles and fundamental ontological models are already discussed in the field of Knowledge Management (Gruber, 1995). They concern common ground rules that allow knowledge to be shared. The advantage of systematic information refinement, as is usual with Artificial Intelligence (AI) approaches, lies in the plannability of the data preparation. Even so, data preparation can take considerable time (Davenport, Barth, & Bean, 2012). Even though computation times have become faster, this preparation step takes the longest time. This makes the question all the more difficult if one has no influence on the data generation. Due to the developments in digitization, it must be assumed that sources of external data will increase strongly in the age of Big Data (Lohr, 2012). Galanis, Wang, Jeffery, and DeWitt (2003) already described the problems of data processing with many systems involved. It is therefore all the more important that the properties of the data are already taken into account during data definition. The hypothesis is justified by the fact that data defined according to the same principles can be better analyzed and processed.

For the first version, the study of Wang and Strong (1996) offered a good starting point. The attributes of data quality shown in table 3.5 were used to derive the first version of the principles. The extensive list of data attributes could be assigned to three categories.

Many of them are of the same type and were in this case reduced or grouped into a subgroup.

Others again were not further refined. Three categories were identified in the development of the principles, and the principles were to be derived from the data attributes along these categories. The most obvious category is the Contents itself. However, many attributes of data are based on traceability and meaning. Thus comprehensibility can be recognized from sentences and/or from the vector distances of the words within them. The contents are thereby placed in a context, which can also result from the syntax (Foltz, Kintsch, & Landauer, 1998; Louis & Nenkova, 2012). Principles that serve such coherence were summarized under the category Coherence. Finally, there are also properties of data that have nothing to do with content or meaning: those which are necessary for the processing of the data. In the Designing, these were summarized under the category Function. This also refers to the understanding of Shanks and Corbitt (1999), according to which the principles also concern the social perception of data quality. The first version can be seen in figure 5.2.
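The idea that comprehensibility can be assessed via the vector distances of words can be illustrated with a small sketch. The two-dimensional "embeddings" below are invented toy values (real systems derive such vectors from corpora, for example by latent semantic analysis); the function and variable names are purely illustrative.

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two word vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 2-dimensional "embeddings" -- invented values for illustration,
# not taken from any real model.
vectors = {
    "bank":    (0.9, 0.1),
    "finance": (0.8, 0.2),
    "tree":    (0.1, 0.9),
}

# Words used in similar contexts end up close together in vector space ...
print(cosine_similarity(vectors["bank"], vectors["finance"]))  # close to 1
# ... while unrelated words are further apart.
print(cosine_similarity(vectors["bank"], vectors["tree"]))     # much smaller
```

Coherent sentences tend to contain words that lie close together in such a space, which is what the cited coherence measures exploit.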

As discussed in chapter 3, perceived data quality is an important component of data quality in general. The principles of data definition must reflect this circumstance. For this reason, many time-based design principles, or rather the principles associated with the management of time in data, play an important role. Time dependencies make data seem obsolete at a specific point in time, no matter how well it is technically defined. Data is therefore always in a particular state at the current time; seen over a time series, this state changes. This degree of maturity of data is the main reason why information is perceived as inaccurate.


Figure 5.2 First model of design principles for the definition of data, derived from theory

This circumstance is largely ignored in the companies observed in this dissertation project.

In terms of time, the majority of data is interpreted as here and now. From this one can conclude that if this degree of maturity cannot be established in a context, the data is useless from the point of view of data quality. In the context of data analysis, historisation is a process that should not be underestimated, since it provides information about the temporal change of the data and thus also provides a broader foundation for its interpretation. In addition to the chronology of the data, the origin of the data should not be underestimated in an analysis.

The principles thus serve only the purposes of historisation, determination of origin and so-called assertion. The word assertion here means the Single Version of Truth of a datum. Because of the importance of the data definition principles that depend on it, the historisation of data by means of temporal resp. bitemporal data storage is explained in the following subsections, both theoretically and with examples.
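As a first sketch of the bitemporal idea elaborated in the following subsections, the example below separates the two timelines: valid time (when a fact was true in the real world) and transaction time (when the system knew about it). The record structure, field names and dates are illustrative assumptions, not the notation used in this thesis.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BitemporalRow:
    """Minimal bitemporal record: valid time plus transaction time."""
    value: str
    valid_from: date      # real-world validity of the fact
    valid_to: date
    recorded_from: date   # period during which the system held this belief
    recorded_to: date

MAX = date(9999, 12, 31)  # conventional "open end" marker

rows = [
    # Address recorded on 2020-01-10 as valid from 2020-01-01 ...
    BitemporalRow("Old Street 1", date(2020, 1, 1), MAX,
                  date(2020, 1, 10), date(2021, 3, 5)),
    # ... superseded on 2021-03-05: the old belief is closed, never deleted.
    BitemporalRow("Old Street 1", date(2020, 1, 1), date(2021, 3, 1),
                  date(2021, 3, 5), MAX),
    BitemporalRow("New Avenue 9", date(2021, 3, 1), MAX,
                  date(2021, 3, 5), MAX),
]

def as_of(rows, valid: date, known: date):
    """What did the system, at time 'known', believe was true at time 'valid'?"""
    return [r.value for r in rows
            if r.valid_from <= valid < r.valid_to
            and r.recorded_from <= known < r.recorded_to]

print(as_of(rows, date(2021, 6, 1), date(2021, 6, 1)))  # today's view of today
print(as_of(rows, date(2021, 6, 1), date(2021, 2, 1)))  # what was believed in Feb 2021
```

Because corrections close old records instead of overwriting them, every earlier state of knowledge remains reconstructible, which is precisely the historisation property discussed above.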