In addition, the process may be performed with or without the help of an intermediate conceptual representation. The input source schema is enriched semantically and translated into a target schema.

Data stored in the source database are converted into the target database based on the target schema. Generally, relations and attributes are translated into equivalent target objects.

Foreign keys may be replaced by another domain or relationship attributes. Other relationships, such as associations and inheritance, can also be extracted by analysing data dependencies or database instances. In data conversion, attributes that are not foreign keys become literal attribute values of objects, elements or sets of elements. This is because of the heterogeneity of concepts and structures in the source and target data models.
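As an illustration of this kind of data conversion, the sketch below (function and column names are invented for the example, not taken from any surveyed system) replaces a foreign-key value in each tuple with a direct reference to an already converted target object:

```python
def resolve_references(rows, fk_col, target_objects):
    """Replace each foreign-key value with a reference to the
    corresponding, already converted, target object."""
    converted = []
    for row in rows:
        obj = dict(row)                             # copy the source tuple
        obj[fk_col] = target_objects[row[fk_col]]   # key value -> object reference
        converted.append(obj)
    return converted

# A converted Department object, keyed by its old primary-key value:
depts = {10: {"dno": 10, "dname": "HR"}}
emps = resolve_references([{"eid": 1, "dept": 10}], "dept", depts)
# emps[0]["dept"] now points at the department object itself.
```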

In some of these techniques, data might be converted based on the resulting target schema. Source-to-target (S2T) technique: This type of technique translates a physical schema source code directly into an equivalent target.

However, as the target schema is generated using one-step mapping with no intermediate stage for enrichment, this technique usually results in an ill-designed database because some of the data semantics may be lost. Foreign keys are mapped into references to connect objects. However, due to the one-to-one mapping, the flattened form of RDBs is preserved in the generated database, so that object-based model features and the hierarchical form of the XML model are not exploited. This means that the target database is semantically weaker and of a poorer quality than the source.

Moreover, creating too many references causes degraded performance during data retrieval. Clustering technique: This technique is performed recursively by grouping entities and relationships together, starting from atomic entities, to construct more complex entities until the desired level of abstraction is reached. A strong entity is wrapped with all of its direct weak entities, forming a complex cluster labelled with the strong entity name. This technique works well when the aim is to produce hierarchical forms with one root.
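A minimal sketch of the clustering idea follows; the entity names and the dictionary layout are illustrative assumptions, not part of any surveyed algorithm:

```python
def cluster(strong_entities, weak_of):
    """Wrap each strong entity with its directly dependent weak entities,
    yielding one cluster labelled with the strong entity's name."""
    return {name: {"attrs": attrs, "weak": weak_of.get(name, [])}
            for name, attrs in strong_entities.items()}

# Order is a strong entity; OrderLine is weak (it depends on Order):
strong = {"Order": ["order_no", "order_date"]}
weak = {"Order": [{"name": "OrderLine", "attrs": ["line_no", "qty"]}]}
clusters = cluster(strong, weak)
# clusters["Order"] now carries OrderLine nested inside it, giving the
# one-root hierarchical form the technique aims at.
```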

Nesting technique: This technique uses the iterated mechanism of a nest operator to generate a nested target structure from tuples of an input relation [Lee et al., ]. The target type is extracted from the best possible nesting outcome. However, the technique has various limitations. Moreover, the process is quite expensive, since all tuples of a table need to be scanned repeatedly in order to achieve the best possible nesting. Source-to-conceptual-to-target (SCT) technique: This type of technique enriches a source schema with data semantics that might not have been clearly expressed.
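The nest operator at the heart of this technique can be sketched as follows. This is a simplified, single-attribute version: tuples that agree on all other attributes are grouped, and the values of the chosen attribute are collected into a set. The full algorithm iterates this over attributes to find the best outcome, which is what makes it expensive.

```python
from collections import defaultdict

def nest(tuples, attr_index):
    """Nest a relation on one attribute: group tuples that agree on all
    other attributes and collect the values at attr_index into a set."""
    groups = defaultdict(set)
    for t in tuples:
        key = t[:attr_index] + t[attr_index + 1:]   # the remaining attributes
        groups[key].add(t[attr_index])
    # Each result tuple carries a frozenset in the nested position.
    return {key[:attr_index] + (frozenset(vals),) + key[attr_index:]
            for key, vals in groups.items()}

# Nesting Emp(dept, emp) on emp groups employees under each department:
r = {("sales", "ann"), ("sales", "bob"), ("hr", "joe")}
nested = nest(r, 1)
```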

The schema is translated from a logical into a conceptual schema by recovering the domain semantics. The results are represented as a conceptual schema using database reverse engineering (DBRE) [Chiang et al., ]. The resulting conceptual schema can be translated into the target logical schema effectively using database forward engineering (DBFE). In this way, the technique results in a well-designed target database. Database reverse engineering (DBRE): DBRE is a process for enriching a source schema with semantics that might not have been clearly expressed, by acquiring as much information as possible about objects and the relationships that exist among them [Castellanos et al., ].

This process is also known as semantic enrichment. Such conversions are usually specified by rules, which describe how to derive RDB constructs. Data and query statements have also been used in some studies to extract data semantics. Some proposals consult expert users or use data dictionaries to provide metadata, whereas other proposals employ database design techniques. However, some of these proposals could be combined to form a more comprehensive solution.

Table 3. Three algorithms have been proposed to extract a conceptual ER model from an existing RDB based on the classification of relations and attributes [Navathe and Awong, ; Davis and Arora, ; Johannesson and Kalman, ]. However, none of these algorithms considers inheritance relationships. Fonkam and Gray [] presented a more general algorithm based on them, whose original contribution was to establish generalisation hierarchies.

Chiang et al. [] proposed a method of this type, which uses a variety of heuristics to recover domain semantics through the classification of relations, attributes and key-based inclusion dependencies using the schema.

However, expert involvement is required to distinguish between similar EER constructs. In addition, the consistency of key naming and a well-formed schema are assumed. Soutou [] proposed a process for extracting the cardinalities of n-ary relations representing relationships by generating a set of SQL queries.

Data instances are used for relation classification with respect to their keys [Chiang et al., ]. Alhajj [] developed algorithms that utilise data to derive all possible candidate keys for identifying the foreign keys of each given relation in a legacy RDB.

This information is then used to derive a graph called RID, which includes all possible relationships among RDB relations. The RID graph works as a conceptual schema [Alhajj, ]. In common with Andersson [], Petit et al. [] proposed a related method.

The method uses a join condition and the distinct keyword for attribute elimination during key identification. The process starts with an RDB physical schema containing de-normalised relations; a set of appropriate rules is then applied to de-optimise the schema through the analysis of application source code (DDL, DML) and data mining techniques. Relational operators such as join, project and restrict in a physical schema are detected and used for the de-normalisation of relations.

Since an RDB does not provide a natural way of representing inheritance, several heuristic and algorithmic methods, such as that of Akoka et al. [], have been proposed to elicit inheritance relationships hidden in RDBs [Fonkam and Gray, ; Akoka et al., ]. Data instances, schemas, DDL and DML specifications, along with an understanding of null value semantics, are used to detect inheritance.

A method based on a generic schema specification model and DBRE techniques has been proposed to deal with the design and re-engineering of database applications [Hainaut et al., ]. Marcos et al. [] presented related work. The problems of semantic enrichment arise from processing badly-designed and poorly documented applications [Hainaut, ]. Many RDBs might have been specified without the definition of constraints, such as keys and integrity constraints [Behm et al., ].

For example, foreign keys are not possible in Oracle 5. Moreover, many RDBs do not contain semantic constraints for optimisation reasons, and not all databases are built by experienced developers, who may produce poor or inadequate structures [Hainaut, ].

A conceptual schema generated from the DBRE process can be translated into a high-level data model through the application of a set of rules, called schema mapping rules. Several proposals have been made for transforming conceptual schemas. These proposals and many others have been used as a basis for middleware, gateways and CASE tools.

A review of database design transformations based on the ER model may be found in [Fahrner and Vossen, a]. Those properties are discussed in Section 3. Indeed, each proposal has its own properties. These properties lead to different mapping rules for the migration process, which in turn affect the results and quality of the process.

In addition, Table 3. The properties surveyed in both tables are explained below. These include the consistency of attribute naming, the availability of all keys and the schema, inclusion and functional dependencies, and database instances. Most existing proposals are limited by the assumptions that they make.

For instance, a source schema is required to be available for further normalisation to third normal form (3NF) [Chiang et al., ].

However, this is not a practical choice for existing RDBs. Data dependency, which is most often represented by key constraints, plays the most important role in this process. The availability of functional, inclusion and key-based dependencies is assumed in many proposals.

Other kinds of data dependency may also be required. Premerlani and Blaha [] assume that the problem of synonyms and homonyms has been resolved prior to database migration. The classification of relations with respect to their keys is also often assumed. Other frequent assumptions are that the initial schema is well-designed and that all basic relevant constraints are given in the descriptions of the schema or provided by the user [Behm et al., ].

Input and output models: In existing work, the RDB migration process usually takes one RDB as input and aims to generate one target database. A source schema is translated into another equivalent schema and data are converted in accordance with schema translation.

However, most work to date has focused on translating RDB schemas directly into schemas of other non-standardised data models, in the context of database integration [Castellanos et al., ]. Few attempts have been made to generate target data models based on their conceptual schemas or other representations, as an intermediate stage for enrichment.

A large body of literature exists on DBFE or database design aiming to transform such conceptual models to logical data models. In addition, only a few works consider current standards. This graph, similar to an ER diagram, is used for identifying relationships and cardinalities. Its main goal is to upgrade the semantic level of the local schemas of different databases and to facilitate their integration.

Behm et al. [] proposed the SOT model. Another model, called ORA-SS, has been proposed to support the design of non-redundant storage of semi-structured data models [Dobbie et al., ]. The model has its own diagrammatic notations for expressing class attributes and relationships, similar to those of ER and OO data models. However, it uses the technique of nesting and referencing in representing relationships among objects. Semantic preservation: RDBs typically contain implicit and explicit data semantics, concerning integrity constraints and relationships among relations.

Target databases should hold equivalents to these semantics. Several previous proposals have failed to explicitly maintain all of the data semantics. Constraints are instead mapped into class methods [Fahrner and Vossen, b] or into separate constraint classes [Narasimhan et al., ]. Relationships are translated in most of the work; however, inheritance relationships have not been fully addressed.

Few studies address database optimisation issues. Object-based data models consist of static properties (attributes and relationships) and dynamic properties (methods or functions), which make them richer than relational data models. Most existing methods focus on constructing a static rather than a dynamic target schema. User involvement: A common observation in the different proposals is that user interaction is necessary at some point to provide additional information to achieve the desired results.

User intervention might be required for the classification and understanding of keys in an RDB [Castellanos et al., ]. User involvement is also required for resolving optimisation issues such as naming conflicts and vertically or horizontally partitioned relations [Chiang et al., ]. Monk et al. [] presented further work in this area. However, various semantic constraints, schema-mapping constructs and data migration techniques were not addressed adequately in this work.

Jahnke et al. [] developed the Varlet approach. The conversion process is provided by an adapted set of schema mapping rules to produce an initial OO conceptual schema. Once the OO schema is produced, it can be refined to exploit OO concepts. However, Varlet focuses on migrating legacy databases, which are enriched with semantic information inferred using other techniques.

Data conversion is performed in more than one transaction according to three criteria: (a) a certain number of objects are mapped in one transaction, (b) each relation is fragmented into several partitions during its mapping into an object class, and (c) a period of time is assigned within which each transaction must be finished.

However, the tool does not exploit all of the features provided by the OODB paradigm, such as inverse references, inheritance and aggregation, and the fragmentation of tables during conversion might cause unnecessary complexity. In this section, existing proposals for the migration of RDBs into OODBs are discussed in further detail, including semantic enrichment, schema translation and data migration. The work suggests creating a separate constraint class with methods as a sub-class for each of the OODB classes.

Weak entities and aggregations are mapped into component and composite object-classes, respectively. However, all these proposals, except Fong [], concern only schema translation.

An OMT schema is produced by representing each RDB relation with its attributes as an OMT class, and primary keys and foreign keys are determined by resolving synonyms and homonyms. Then, horizontally partitioned classes are refined into single classes, and associations and generalisations are identified through the evaluation of keys. Finally, OO classes are refined by eliminating redundant associations.

This method makes extensive use of inclusion and exclusion dependencies. Moreover, the resulting schema is then restructured by the user with respect to OO paradigm options. Castellanos [] and Castellanos et al. [] presented a method consisting of two phases. An RDB schema is improved semantically based on a knowledge acquisition process to discover implicit semantics by analysing the schema and data instances.

The knowledge acquisition phase involves the determination of keys and their types, of data dependencies such as functional, inclusion and exclusion dependencies, and of the normalisation of the schema to 3NF.

However, unlike in the method of Premerlani and Blaha [], optimisation structures are not considered. Relation tuples are converted, downloaded into sequential files, and then reloaded into the OODB. However, weak entities and multi-valued and composite attributes are not clearly tackled in this work. Ramanathan and Hodges [, ] presented a method for mapping an RDB schema that is at least in 2NF into an OODB schema without the explicit use of inclusion dependencies, and without changing the existing schema.

All of the information required during the process comes from information on primary keys and foreign keys.

However, the method also addresses database optimisation issues such as BLOBs and horizontal and vertical partitioning, which cannot be mapped into an object schema without using data dependencies. Zhang et al. [] proposed a composition process to reduce the input RDB schema. Then, the simplified relations are mapped into equivalent OO classes.

A strong entity is wrapped with all of its direct weak entities, forming a complex cluster which holds the strong entity name. The method proposes generating OIDs for identified objects by concatenating the key values of each tuple with the relation name.
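The OID scheme described here can be sketched in a few lines; the separator character and function name are assumptions for illustration:

```python
def make_oid(relation_name, key_values):
    """Concatenate the relation name with a tuple's key values to
    obtain a unique object identifier for the converted object."""
    return relation_name + ":" + ":".join(str(v) for v in key_values)

# Composite keys simply contribute more segments:
oid = make_oid("WorksOn", (101, "p7"))
```

Prefixing with the relation name keeps OIDs unique across relations even when two relations happen to share key values.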

Missaoui et al. [] presented a method in which related entities are identified and defined as one unit. The diagram produced is then translated into an OO schema. The references represent the relationships between SOTs. Every SOT and attribute is identified by a unique identifier to avoid naming conflicts. Transformation rules consist of five parts, namely definitions, patterns, preconditions, schema and data operations [Behm et al., ].

The data migration process is accomplished automatically. However, neither constraints nor the resolution of synonym and homonym issues are considered. Transforming conceptual models, e.g., UML, into ORDBs has also been studied. A common finding from these studies is that the logical structure of an ORDB schema is achieved by creating object-types from UML diagrams.

An association relationship is mapped using a ref or a collection of refs, depending on the multiplicity of the association. A method of mapping and preserving collection semantics into an ORDB has recently been proposed [Pardede et al., ]. Urban et al. [] and Grant et al. [] analysed such mapping techniques; their analysis might aid in the standardisation of these techniques and the development of a tool that could support ORDB design.

However, if a migration process uses a conceptual model as an intermediate stage, then these proposals could be useful in schema translation. Some work uses data dictionaries and assumes a well-designed RDB [Du et al., ]. The structure of the generated XML document is based on the user's specification of a flat or nested structure. Each attribute is mapped into a sub-element within a related complex type.

Relationships among entities are mapped using key and keyref elements. However, inheritance and aggregation relationships are not considered properly in this study. Fong et al. [] proposed a related approach, although some relationship types are not handled. One table is determined to be the main root element, and then the columns of that table which are neither primary keys nor foreign keys are mapped as its sub-elements. The primary key is added to its root element as an attribute.

For each foreign key included in the primary key, a new sub-element with PCDATA type is generated, holding the same name as its referenced table. Foreign keys that are not included in the primary key are converted into sub-elements in the root.
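The basic column-mapping rules just described can be sketched with Python's standard `xml.etree` library; the table layout and function name are made up for the example, and foreign-key handling is omitted for brevity:

```python
import xml.etree.ElementTree as ET

def table_to_xml(table, pk, rows):
    """Map each tuple to an element named after the table; the primary
    key becomes an XML attribute, the other columns sub-elements."""
    root = ET.Element(table + "s")
    for row in rows:
        el = ET.SubElement(root, table, {pk: str(row[pk])})
        for col, val in row.items():
            if col != pk:
                ET.SubElement(el, col).text = str(val)
    return root

root = table_to_xml("project", "pid", [{"pid": 1, "title": "alpha"}])
xml_str = ET.tostring(root, encoding="unicode")
```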

However, some data semantics cannot be represented, and some relationships are not preserved. Moreover, the algorithm tries to create a hierarchical structure that is deeper rather than broader, which may cause redundancy or disconnected elements in the resulting XML document.

Although UML can model data semantics such as aggregation and inheritance, it is still weak and unsuitable for handling the hierarchical structure of the XML data model [Fong and Cheung, ]. However, they adopted an exceptionally deep clustering technique, which is prone to errors such as data redundancy, loss of semantics and the breaking of relationships among objects. Based on data dependency constraints, this work de-normalises an RDB into joined tables, which are then translated to document object models (DOMs).

Based on the DTD schema generated and data dependencies, each tuple of the joined tables is loaded into an object instance in DOM and then transformed into a DTD document.

However, the algorithm neither utilises features provided by the XML model nor considers integrity constraints. Another algorithm, known as nesting-based translation (NeT), has been proposed to remedy the drawbacks of FT by using an iterated mechanism of the nest operator to generate nested DTD schema structures from relational inputs [Lee et al., ].

However, this algorithm has some limitations. Together with NeT, Lee et al. [] also proposed a constraint-based translation (CoT). Each proposal has made certain assumptions to facilitate the migration process, which might be a source of limitations or drawbacks.

While existing works for migrating into OODBs focus on schema translation using source-to-target techniques, we have noted that most works for migrating to XML have used source-to-conceptual-to-target techniques, focusing on generating a DTD schema and data.

Moreover, all research on the generation of ORDBs has focused on design rather than migration. Due to their focus on schema rather than data, the proposals reviewed above either ignore data conversion or assume working on virtual target databases using mapping and gateway middleware, while data remain stored in RDBs.

Moreover, there are still shortcomings in implementing RDB data conversion in a more effective manner into more than one environment.

Using middleware may lead to slow performance, making the process expensive at run-time because of the dynamic mapping of tuples to complex objects [Behm et al., ]. However, using object-based DBMSs and native XML databases, objects can be stored and retrieved directly without any need for translation layers, hence saving development time and increasing performance. This is mainly due to their lack of support for such semantics either in source or target data models.

Despite the ability of UML to model data semantics such as aggregations and inheritances, UML is still weak and unsuitable for handling the hierarchical structure of the XML data model [Fong and Cheung, ].

Although inheritance relationships can be indirectly realised in an RDB, they have been either ignored or only briefly considered. Different types of inheritance have not been tackled, such as unions, mutual exclusion, partition and multiple inheritance; and neither have their constraints.

There has been less effort to use standards such as ODMG 3.0 and XML Schema. The adoption of standards is essential for better semantic preservation, portability and flexibility. Compared to DTD, XML Schema offers a much more extensive set of data types, and provides powerful referencing, nesting and inheritance mechanisms for attributes and elements.

It would be desirable to avoid the flattened form and to reduce the levels of clustering of object structures as much as possible, in order to increase the utilisation of the target models and to avoid undesirable redundancy. This requires preserving the semantics of the source database in a conceptual model which takes into account the relatively richer data model of the target database environment.

The success of the migration process depends on the extent to which data semantics are retained in the conceptual model and how they are translated into a target database.

Known conceptual models, e.g., ER and UML, have limitations in this context. In addition, several dependent models have been developed for specific applications, but these are inappropriate for generating three different data models. The SOT model [Behm et al., ] is one such example. The evaluation of the different techniques and proposals has shown that very few of the existing studies provide solutions to the problems mentioned above or to the general problem raised in Chapter 1.

Viewing objects on top of existing RDBs and establishing gateways to access existing data for only data retrieval purposes cannot solve the problem of mismatch between different paradigms or preserve RDB data semantics. In addition, the existing work on database migration does not provide a complete solution for more than one target database for either schema or data conversion. Three aspects of migration have been discussed: semantics enrichment, schema translation and data conversion.

On the other hand, translation techniques are divided into two categories: (i) source-to-target translation, in which a source database is translated directly into a target database, and (ii) source-to-conceptual-to-target translation, in which a source schema is enriched with semantics or recovered to a conceptual schema before being translated into a target schema.

The proposals for RDB migration in the literature have been discussed in separate categories according to the different target databases. Within each category, existing proposals have been compared in terms of translation techniques, prerequisites, and specific features. The aims have been to provide a comprehensive view of the problem of RDB migration, to review various techniques and proposals, to identify their commonalities and differences, to assess the impact of previous research, and to show how it has shaped current and future research in this area.

As mentioned in Section 1. The method has three phases: semantic enrichment, schema translation and data conversion. The problem is how to effectively migrate an existing RDB as a source into the newer databases as targets, and what is the best way to enrich the semantics and constraints of the RDB in order to capture the characteristics of these targets appropriately. This section outlines the main principles of the solution.

The remaining sections briefly describe the different phases of the solution, whereas detailed descriptions are provided in subsequent chapters. Based on the RSR, a canonical data model (CDM) is generated, which captures the essential characteristics of the target data models for the purpose of migration.

Due to the heterogeneity of the three target data models, we believe that it is necessary to develop a CDM to bridge the semantic gap among them and to facilitate the migration process.

The CDM is designed to preserve the integrity constraints and data semantics of the RDB so as to fit in with the target database characteristics. The method is more beneficial than existing solutions as it produces three different output databases based on the user's choice, as shown in Figure 4.

In addition, the method exploits the range of powerful features provided by the target data models. The CDM so obtained is mapped into target schemas in the second phase. We have designed sets of rules, integrated into algorithms, to translate the CDM into each of the three target schemas. The third phase converts RDB data into its equivalents in the new database environment. We have developed algorithms for converting source data into targets based on the CDM.

Chapter 8 explains how the prototype has been developed, and Chapter 9 shows how the experimental study was conducted to evaluate the prototype. An RDB schema is in 3NF if every relation is in 2NF and non-primary-key attributes are dependent only on the primary key, so that no relation has a transitive dependency [Elmasri and Navathe, ].
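A transitive dependency can be hinted at by testing candidate functional dependencies against instance data. The check below is a sketch (the relation and its columns are invented); a dependency holding in one instance is necessary but not sufficient evidence for the design-level constraint:

```python
def holds_fd(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in a
    relation instance: no two tuples agree on lhs but differ on rhs."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

# emp(emp, dept, dname): dept -> dname holding alongside emp -> dept
# hints at the transitive dependency emp -> dept -> dname, a 3NF violation.
emp = [{"emp": 1, "dept": 10, "dname": "HR"},
       {"emp": 2, "dept": 10, "dname": "HR"}]
```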

A relation that is not in 3NF may have redundant data, update anomalies or no clear semantics of whether it represents one real world entity or relationship type. Besides, the advantages provided by such models, which have motivated migrating RDBs into them, are not exploited.

As a result, the target database may be flatter than expected. Unlike base relations, views may combine attributes from various entity and relationship types, although RDBMSs may materialise repeatedly used views in order to reduce JOIN operations over base relations for performance reasons [Elmasri and Navathe, ]. Other representations may lead to different target database constructs.

For example, a relation L is a strong relation if its primary key is not fully or partially composed of any foreign keys. Similarly, L is a sub-class if its primary key is entirely composed of the primary key of a super-class relation. However, such types of specialisation can be represented indirectly in relational data models in many alternative ways [Elmasri and Navathe, ]. The most common alternative represents inheritance as one relation for the super-class and one relation for every sub-class.

This alternative works for a total, partial, disjoint or overlapping specialisation. Another alternative is to have one relation for each sub-class, containing the attributes and key of the super-class. This alternative works only when each entity in the super-class belongs to at least one of the sub-classes.
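The first alternative — one relation for the super-class plus one per sub-class, with the sub-class key referencing the super-class key — looks like this in SQL, sketched here with SQLite; the table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- one relation for the super-class ...
    CREATE TABLE person  (pid INTEGER PRIMARY KEY, name TEXT);
    -- ... and one per sub-class, whose primary key is also a foreign key
    CREATE TABLE student (pid INTEGER PRIMARY KEY REFERENCES person(pid),
                          major TEXT);
""")
con.execute("INSERT INTO person VALUES (1, 'Ann')")
con.execute("INSERT INTO student VALUES (1, 'CS')")
# Reassembling a full sub-class entity requires joining on the shared key:
row = con.execute(
    "SELECT p.name, s.major FROM person p JOIN student s USING (pid)"
).fetchone()
```

It is exactly this "primary key equals a foreign key" pattern that DBRE methods look for when eliciting hidden inheritance.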

In addition, inheritance can be represented using null values in tuples, by examining inclusion dependencies in DDL and queries in DML specifications [Akoka et al., ]. However, metadata only indicates that a primary key is referenced by a foreign key; it does not indicate how many instances of a foreign key point to how many instances of a primary key.

In addition, it may not be possible to obtain information about attribute nullability from the data dictionary, so the data content may need to be examined, analysed or queried. Therefore, we assume that the RDB being migrated is complete. Different database states give different information about cardinality. For example, assume that the Employee and Project relations participate in an M:N relationship, in which each employee works on several projects and each project is staffed by many employees.

However, if data instances show that every project is staffed by many employees but each employee works on only one project, then the cardinality inferred from the data would indicate a 1:M instead of an M:N relationship. The success of the process depends on the amount of information that can be extracted from the existing schema, the method that is followed to extract that information, and the technique by which an enriched semantic representation is constructed.
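Cardinality inference from instance data can be sketched as follows; the function is illustrative, and note that the label is direction-sensitive (it reads left-to-right over the pairs):

```python
def infer_cardinality(pairs):
    """Infer a relationship's cardinality from instance data.
    The label reads left-to-right over the (left, right) pairs."""
    left, right = {}, {}
    for a, b in pairs:
        left.setdefault(a, set()).add(b)
        right.setdefault(b, set()).add(a)
    many_l = any(len(v) > 1 for v in left.values())   # a left value with several partners
    many_r = any(len(v) > 1 for v in right.values())  # a right value with several partners
    return {(False, False): "1:1", (True, False): "1:M",
            (False, True): "M:1", (True, True): "M:N"}[(many_l, many_r)]

# Every project staffed by many employees, each employee on one project:
works_on = [("e1", "p1"), ("e2", "p1"), ("e3", "p2")]
# infer_cardinality(works_on) reports M:1 (employee-to-project), i.e. 1:M
# viewed from the project side, rather than the intended M:N.
```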

Consequently, additional domain semantics need to be investigated, such as the classification of relations and attributes, and the determination of relationships and cardinalities. In our approach, the semantic enrichment phase involves extracting the data semantics of an RDB by obtaining a copy of its metadata, enriching it with the required semantics, and constructing an enhanced RSR.

In this section, the thinking behind the RSR and CDM is presented, including why they are needed, what purpose they serve, and their definitions. The process, shown in Figure 4., starts by extracting the basic metadata information about a given RDB. The next step is to identify CDM constructs based on a classification of the RSR constructs, including relationships and cardinalities, which is performed through data access.

Finally, the CDM structure is generated. Data semantics can be extracted in a variety of ways, such as from catalogues (i.e., data dictionaries). However, conceptual schemas may not be recovered precisely by DBRE from the final logical or physical schemas, and a full understanding of a database is easily lost when experienced users are unavailable or design documents are missing [Alhajj, ].

In RDBMSs, metadata is usually stored in data dictionaries, which can be accessed to derive information about database structures. Relations in RDBs are conventionally designed from entities and relationships in a conceptual schema. Therefore, relations represent entities or relationships among entities. Relations include attributes, primary keys, and foreign keys.

Each attribute has a type and an arbitrary domain. Binary relationships can be 1:1, 1:M or M:N. The basic information needed to proceed with the semantic enrichment phase includes relation names and the properties of attributes, including attribute names, data types, length, default values, and whether or not the attribute is nullable. Relations and their attributes need to be classified for systematic mapping into the corresponding structure of the target data model.
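In SQLite, for instance, this catalogue information can be read as sketched below; other RDBMSs expose comparable data-dictionary views (e.g. `information_schema`), and the example table is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (dno INTEGER PRIMARY KEY,"
            " dname TEXT NOT NULL, loc TEXT)")

def describe(con, table):
    """Read attribute name, type, nullability and key membership from
    the catalogue, as the enrichment phase requires."""
    return [{"name": name, "type": ctype,
             "nullable": not notnull and not pk, "pk": bool(pk)}
            for _, name, ctype, notnull, _, pk
            in con.execute(f"PRAGMA table_info({table})")]

cols = describe(con, "dept")  # one entry per attribute of dept
```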

The most important information needed for relationship identification concerns data dependency. Functional and inclusion dependencies, which are basically used to enforce data integrity, can be recovered from primary keys, foreign keys and unique keys. A primary key of a relation L may be referenced by another relation L1 (exported from L to L1) for relationship participation, where the primary key is then called a foreign key of L1.

At the same time, the foreign key is called an exported key of L; thus the inverse of a foreign key is an exported key. Extracting and matching keys is sufficient for the classification of relations. Besides, information concerning cardinality constraints, and whether a relationship is optional or mandatory, is also needed in order to generate the CDM correctly. Much of the semantics needed for the enrichment process may not be found in an RDB schema due to poor design or limited RDB expressiveness [Chiang et al., ].
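Key matching for relation classification can be sketched as follows. The rule set is a simplified assumption covering four common cases only; the real classification also weighs cardinalities and user input:

```python
def classify(pk, fks):
    """Classify a relation by comparing its primary key (a set of
    attribute names) with its foreign keys (a list of such sets)."""
    covered = set().union(*fks) if fks else set()
    if not (pk & covered):
        return "strong"          # no foreign key inside the primary key
    if any(pk == set(fk) for fk in fks):
        return "subclass"        # PK equals one FK: inheritance candidate
    if len(fks) >= 2 and pk <= covered:
        return "relationship"    # PK built from several FKs: M:N link
    return "weak"                # PK only partly borrowed from an FK
```

For example, a relation whose key is the union of two foreign keys is classified as a relationship relation, while one whose key equals a single foreign key is a sub-class candidate.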

Generally speaking, a relational model has less structural and behavioural expressiveness compared to other data models [Saltor et al. In contrast to RDBs that model a static part of entities, object-based models capture more semantics by specifying dynamic object behaviour.

Compared to richer data models, the relational data model provides less structural expressiveness. Moreover, it provides relatively less behavioural expressiveness because, for example, it does not support the definition of new operations beyond generic operations [Saltor et al., ].

Therefore, a portion of the intended semantics may be lost, whether due to this limited expressiveness or to poor design [Chiang et al.]. User interaction might be necessary to provide the basic missing semantics. Corresponding to each attribute name ai is an arbitrary domain of values. Each ri is a state of Ri and must satisfy the specified integrity constraints. Definition 4. The efficient construction of the RSR overcomes the complications that occur when matching keys in order to classify relations.

The relation Rrsr is constructed, together with its semantics, as a 6-tuple, which is easily identifiable and on which set-theoretic operations can be applied for matching keys.

Each part of Rrsr describes a specific aspect of it; e.g. the set EK holds keys that are exported from Rrsr to other relations as foreign keys. For a given RDB, the algorithm finds the names, attributes and integrity constraints of all relations, and constructs the RSR. Each element of Arsr contains one attribute and its properties. Each primary key attribute pa is inferred, given a sequence number s, and added to the set PK as a pair hpa, si. The sequence number s distinguishes between single and composite keys.

Each foreign key attribute fa is assigned s based on matching the key constraint name con. Attributes that share the same con are given ascending values of s and together form one key, which is added to the set FK. Finally, the algorithm returns the constructed rsr, which is later used, together with the data stored in the RDB, to generate the CDM. Example 4. Consider the RDB shown in Figure 4.
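The construction of an RSR entry can be sketched as follows. The text defines Rrsr as a 6-tuple but this excerpt does not name all six components, so the layout (name, A, PK, FK, EK, UK) is an assumption covering the parts it does mention (attributes, primary, foreign, exported and unique keys):

```python
from collections import namedtuple

# Assumed 6-tuple layout for an RSR entry; component names are illustrative.
RSR = namedtuple('RSR', ['name', 'A', 'PK', 'FK', 'EK', 'UK'])

def build_rsr(name, attrs, pk_attrs, fk_constraints, exported, unique):
    # Each primary-key attribute pa gets a sequence number s, stored as a
    # pair (pa, s); s distinguishes single from composite keys.
    PK = {(pa, s) for s, pa in enumerate(pk_attrs, start=1)}
    # Foreign-key attributes sharing a constraint name con are numbered in
    # ascending order of s and together form one key in FK.
    FK = set()
    for con, fas in fk_constraints.items():
        FK |= {(con, fa, s) for s, fa in enumerate(fas, start=1)}
    return RSR(name, list(attrs), PK, FK, set(exported), set(unique))

works_on = build_rsr(
    'WorksOn',
    ['eno', 'pno', 'hours'],
    ['eno', 'pno'],                            # composite primary key
    {'fk_emp': ['eno'], 'fk_proj': ['pno']},   # two single-attribute FKs
    exported=[], unique=[],
)
print(works_on.PK)  # {('eno', 1), ('pno', 2)}
```

Representing each part as a set is what makes the key matching of the previous step a matter of simple set-theoretic operations.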

Table 4. The information includes all attributes and keys for each relation. The CDM is a source of valuable semantics, giving an enriched and well-organised data model that can be converted flexibly into any of the target databases.

Besides taking into account the characteristics of the target models, the CDM retains all data semantics that could be extracted from an RDB and the integrity constraints imposed on it.

Moreover, it acts as a key mediator for converting existing RDB data into target databases based on the structure and concepts of the target models. Based on the CDM definition, target attributes that represent relationships among classes are materialised into references or changed into other domains.

Simple lists, by contrast, are not databases, because they cannot represent relationships between data, much less use such relationships to retrieve data.

The problem is that list-management software has been marketed for years as database software, and many purchasers do not understand exactly what they are purchasing. Making the problem worse, a rectangular area of a spreadsheet is also called a database. As you will see later in this book, a group of cells in a spreadsheet is even less of a database than a stand-alone list.

Because this problem of terminology persists, confusion about exactly what a database is persists as well. Here is a quick overview of the primary elements of a relational database; these elements are described in more detail as you continue with the course. A relational database is a collection of related data tables. Columns describe the specific pieces of information in the table, and each row stores the corresponding data.

A primary key is one column, or a combination of several columns, whose unique value makes each row in the table distinct.

Information processing drives the growth of computers, as it has from the earliest days of commercial computers; in fact, automation of data-processing tasks predates computers. A primary goal of a database system is to retrieve information from, and store new information into, the database.
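The table, column, row and primary-key vocabulary above can be made concrete with a small example using Python's built-in SQLite driver; the table and column names are invented for illustration:

```python
import sqlite3

# An in-memory database with one table; each column describes a piece
# of information, each row stores the corresponding data.
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,  -- unique value: makes each row unique
        name   TEXT NOT NULL,
        dept   TEXT
    )
""")
conn.execute("INSERT INTO employee VALUES (1, 'Ada', 'R&D')")
try:
    # A duplicate primary-key value is rejected by the DBMS.
    conn.execute("INSERT INTO employee VALUES (1, 'Bob', 'HR')")
except sqlite3.IntegrityError as err:
    print('rejected:', err)
```

The DBMS, not the application, enforces the uniqueness of the primary key, which is exactly the integrity-constraint role discussed earlier.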

People who work with a database can be categorised as database users or database administrators. Researchers have developed several data models to deal with these application domains, including object-based data models and semi-structured data models.

A relational database consists of a collection of tables, each of which is assigned a unique name. The database schema is the logical design of the database. A super-key is a set of one or more attributes that, taken collectively, allow us to identify a tuple in the relation uniquely. A DBMS typically includes a database security and authorization subsystem that is responsible for ensuring the security of portions of a database against unauthorized access.
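The super-key definition can be checked mechanically for a given relation state: a set of attributes fails to be a super-key as soon as two tuples agree on all of those attributes. A minimal sketch, with invented sample data (note this tests one state; a real super-key must hold for every legal state of the schema):

```python
def is_superkey(attrs, rows):
    """attrs: tuple of column names; rows: list of dicts (one relation state)."""
    seen = set()
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if key in seen:  # two tuples agree on all attrs -> not a super-key
            return False
        seen.add(key)
    return True

students = [
    {'sid': 1, 'name': 'Ada', 'dept': 'CS'},
    {'sid': 2, 'name': 'Ada', 'dept': 'EE'},
]
print(is_superkey(('sid',), students))          # True
print(is_superkey(('name',), students))         # False: two Adas
print(is_superkey(('name', 'dept'), students))  # True
```

Note that any attribute set containing a super-key is itself a super-key, which is why primary keys are usually chosen as minimal super-keys.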

The typical method of enforcing discretionary access control in a database system is based on the granting and revoking of privileges, considered here in the context of a relational DBMS. This chapter discusses techniques for securing databases against a variety of threats and presents schemes for providing access privileges to authorized users. The power of object databases lies in letting the designer specify both the structure of complex objects and the operations that can be applied to those objects.

XML can be used to provide information about the structure and meaning of the data in Web pages, rather than just specifying how the pages are formatted for display on the screen. A database schema, along with primary-key and foreign-key dependencies, can be depicted by schema diagrams.

A query language is a language in which a user requests information from the database. All procedural relational query languages provide a set of operations that can be applied to either a single relation or a pair of relations. An object database is a database management system in which information is represented in the form of objects, as used in object-oriented programming. Object databases differ from relational databases, which are table-oriented; object-relational databases are a hybrid of the two approaches.
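The generic relational operations mentioned above can be sketched over relations represented as lists of dicts; the relation and attribute names are invented for illustration:

```python
# Unary operation: keep only the rows satisfying a predicate (selection).
def select(rel, pred):
    return [r for r in rel if pred(r)]

# Unary operation: keep only the chosen columns, dropping duplicates (projection).
def project(rel, attrs):
    seen, out = set(), []
    for r in rel:
        t = tuple((a, r[a]) for a in attrs)
        if t not in seen:
            seen.add(t)
            out.append(dict(t))
    return out

# Binary operation: combine two relations on their shared attributes (natural join).
def natural_join(r1, r2):
    common = set(r1[0]) & set(r2[0]) if r1 and r2 else set()
    return [{**a, **b} for a in r1 for b in r2
            if all(a[c] == b[c] for c in common)]

emp = [{'eno': 1, 'dno': 10}, {'eno': 2, 'dno': 20}]
dept = [{'dno': 10, 'dname': 'R&D'}, {'dno': 20, 'dname': 'HR'}]
print(natural_join(select(emp, lambda r: r['dno'] == 10), dept))
# [{'eno': 1, 'dno': 10, 'dname': 'R&D'}]
```

Selection and projection operate on a single relation, while the join operates on a pair, mirroring the unary/binary split the text describes.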


