
Tools for building a physical database model

The physical model of the database, in its data-level and physical-modeling representation, can be built either separately from the logical model or in parallel with it. This makes it possible to separate developers by level, or to model in parallel with simultaneous synchronization of the models at different levels. There is no single answer as to which approach is correct, separate or parallel, but the practice of implementing complex projects shows that parallel modeling significantly complicates development: it requires regular returns to earlier stages of analysis and modeling, forcing the developer to independently monitor the correctness of the changes made to the logical and physical models.

Separate modeling, in which the logical model is built first and the physical representation is then derived from it, is the preferable option, especially for large projects: it allows all the mechanisms of data representation and interaction to be worked out in detail first, and the elements of the physical layer to be determined afterwards, taking into account the peculiarities of working with data in the chosen DBMS. Since the logical model must later be transformed into the physical one, with object names converted from the national alphabet to the Latin one, developers often apply uniform requirements (naming conventions) and a common alphabet (Latin) to object names at all levels of model representation. This noticeably reduces the possibility of discussing the model with the customer, but simplifies switching between levels of representation for the developers.

CA ERWin Data Modeler tool

Physical modeling based on a previously formed logical model of the database is performed on the model obtained after the corresponding transformation. Like any other modeling tool, ERwin Data Modeler keeps all names as they appear in the logical model, but replaces spaces and special characters with the symbol defined in the transformation rules, usually "_" (underscore).

In addition to this standard renaming, the tool changes the attribute data types declared in the logical model, creating the equivalent data type defined in the DBMS chosen for the transformation and preserving the dimensions and descriptions of the corresponding types.

As a result, after the transformation (Figure 5.4), the developer must bring the names of all objects of the new database model, including tables and fields (columns), into accordance with the naming convention. Renaming is performed through the table and field (column) description dialog boxes, just as when creating the logical model of the database.

Fig. 5.4. Resulting model after transformation into a physical model


Suppose the developer has defined the naming convention from the previous section. When performing the renaming, a contradiction arises in the model under consideration: the child table would contain two fields with the same name "id", which the rules for naming database objects do not allow (Figure 5.5).

Fig. 5.5. Database model in accordance with the naming convention


To resolve this contradiction, and taking into account the abstract nature of the model in question, we assign ordinal numbers to the tables and reflect them in the primary key fields as well. This change in the naming rules must be recorded in the naming convention table (Table 5.3).

Table 5.3

Correcting the naming convention table

No. | Object | Prefix | Name | Postfix | Description
1 | Table | — | <functional name> | <#> | Tables are given a functional name based on the logical model of the database, using lowercase Latin characters, with the ordinal number of the table added as the postfix
2 | Field (column) | — | id | <#> | The identification field of a surrogate key is given a fixed name, with the ordinal number of the table in which it is located added as the postfix

Thus, all surrogate primary keys now have unique names corresponding to the table numbers in the model, and bringing object names to the correct form no longer causes the contradiction in question, allowing the correct foreign key to be created in the child table.

When organizing the physical database model, it is necessary to check the correctness of the generated data types describing specific attributes. Initially this can be seen on the model itself by defining the appropriate display rules (the operation is identical to that for the logical model). A DBMS may offer more data types of a given kind than are represented in the list of standard types at the logical model level, so when the developer detects a data type mismatch, it is important to determine the correct data type for each field (Figure 5.6).

When determining the dimensions of the primary keys, it was previously established that over five years the field "id1" will contain a maximum value of 18,750, and the field "id2" a maximum of 3,100. The type "Integer" was defined for these fields, whose maximum value of 32,767 is well above the calculated values for both primary keys; since no data type of smaller dimension capable of storing the calculated values exists, this type can be kept for the physical model and its implementation in the physical database.

Fig. 5.6. Setting the basic properties of the table field


If the maximum possible value of a primary key exceeds the range defined for its data type, the type must be corrected in the field properties. The data type must also be corrected for the field in the section of the selected DBMS, so that the database structure update commands are formed correctly during synchronization.
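For instance, in IBM DB2 syntax (the table name below is an illustrative assumption, not an object of the model), such a correction could be sketched as:

```sql
-- Sketch in DB2 syntax; the table name is hypothetical.
-- If the projected maximum of a key later outgrew its type's range,
-- the column would be widened, e.g. from SMALLINT to INTEGER:
ALTER TABLE table1
    ALTER COLUMN id1 SET DATA TYPE INTEGER;
```

After such a change in the field properties, the synchronization step can emit the matching structure-update command for the target database.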

By configuring the fields according to the selected database (Figure 5.7), the developer can specify not only the data type of the field, but also a number of additional characteristics that are applied in the database:

• Generated - determines whether the field can be populated with values generated according to mathematical or linguistic rules;

• Generation Expression - a mathematical or linguistic expression whose evaluation produces the generated value; the expression may use fields only from the same row of the same table;

• Generated Identity - specifies that a unique value must be formed, which is usually provided by arithmetic-progression calculation and is usually applied to surrogate keys;

• Starting Value - the value from which the DBMS will start when calculating values according to the rules of the arithmetic progression;

• Increment By - the value to be added at each subsequent calculation according to the rules of the arithmetic progression.
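Taken together, these properties correspond to an identity column definition; a minimal sketch in DB2 syntax (the table and field names are assumptions, not objects of the model):

```sql
-- Sketch in DB2 syntax; names are illustrative.
CREATE TABLE table1 (
    id1   SMALLINT NOT NULL
          GENERATED ALWAYS AS IDENTITY  -- Generated / Generated Identity
          (START WITH 1                 -- Starting Value
           INCREMENT BY 1),             -- Increment By
    name1 VARCHAR(50),
    PRIMARY KEY (id1)
);
```

Each record inserted into such a table receives the next value of the arithmetic progression in id1 without the application supplying it.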

To define more precise characteristics of a field (column), the developer can also specify other properties. Keep in mind, however, that the list of properties that can be defined for a field (column) differs depending on the selected DBMS.

Fig. 5.7. Setting field properties for the selected DBMS


After completing all operations to bring the physical model of the database to a form ready for implementation, you can begin to refine and expand the capabilities of the future database, taking into account the features of data storage and processing. One such action is the definition of table spaces and the distribution of the model's tables among them.

The tablespace object (Figure 5.8), being purely a database object, may be absent from the descriptions of the physical model if the chosen DBMS does not support it. Nevertheless, if the distribution of tables across table spaces is possible, it is desirable to do so, especially when a complex information system is being developed. A table space is created through the menu item "Model/Target.../Tablespaces..."; depending on the selected DBMS, the Target... item takes the name of that DBMS. A table space can also be created from the context menu of the model tree in the left pane of the workspace.

Fig. 5.8. The scope of the description of the main properties of the table space


When creating a new table space, you need to define its main characteristics:

• Type - defines the rules by which the DBMS works with the table space and identifies five main types:

- Regular - all tables in the database are placed in this table space, except for large data described by the Text, CLOB, BLOB, Binary and similar types;

- Large - a table space oriented toward placing large data from the corresponding attributes;

- System Temporary - used to store temporary information about the database and its objects, and therefore not available to the user;

- User Temporary - used to store temporary tables generated as a result of the user's work with views, stored procedures, etc.;

- Index - a table space designed to store index tables used to enforce uniqueness of values, search for data, and sort;

• Managed By Type - specifies the rules by which the DBMS will manage the resources of the computer and the operating system, with the following options:

- System - resource management is delegated to the operating-system level;

- Database - resources are managed by the DBMS;

- Automatic Storage - resources are managed by whatever method the DBMS considers most preferable.

These basic properties form the basic parameters of the table space and, when synchronized, indicate the mandatory characteristics of the command to create a table space in the database. In addition, it is necessary to determine the dimension of tablespaces, or more precisely, the rules for dividing the area of ​​the table space into separate blocks.

The implementation of table space management is based on distributing tables across extents and pages, which should be defined in addition to the basic settings (Figure 5.9). In the General tab of the additional table space settings, the developer is asked to specify the page size (Page Size) by selecting one of four size options, the extent size (Extent Size), measured in pages, and other characteristics that determine the physical layout of tables and data in the table space.

The developer also defines the rules for managing the size of the table space, if the amount of data exceeds the parameters previously set for it. Here it is suggested to specify a number of properties:

• Use Auto Resize - determines whether the size of the table space and database file should grow;

• Initial Size - specifies the size of the table space to be allocated when the database is created;

• Increase Size By - specifies how the size change is calculated: as an exact amount or as a percentage;

• Increase Size - specifies the amount by which the size of the table space changes each time it grows;

• Max Size - determines the maximum size beyond which the table space must not grow.


Fig. 5.9. Defining tablespace management rules
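Taken together, the sizing and management properties map onto a single table space creation command; a hedged sketch in DB2 syntax (the name and sizes are illustrative):

```sql
-- Sketch in DB2 syntax; the name and sizes are illustrative.
-- Note: a PAGESIZE other than 4K also requires a buffer pool
-- created with the same page size.
CREATE TABLESPACE ts_data
    PAGESIZE 8 K                  -- Page Size
    MANAGED BY AUTOMATIC STORAGE  -- Managed By Type
    EXTENTSIZE 32                 -- Extent Size, in pages
    AUTORESIZE YES                -- Use Auto Resize
    INITIALSIZE 100 M             -- Initial Size
    INCREASESIZE 10 M             -- Increase Size (By)
    MAXSIZE 1 G;                  -- Max Size
```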

As a result of specifying these parameters, and based on the chosen management principle, the table space will host the tables assigned to it and their data (see Figure 5.10).

In the table properties, in the separate "Tablespace" tab, you can select the table spaces where the table's data should be placed, and in the "Index In Tablespace" property specify the table space, as a rule a separate one of the usual type (Regular), where information about indexed keys and individual sets of attributes should be stored, forming the corresponding index tables. By specifying these locations, the physical model of the database determines the rules for the physical distribution of tables, indexes and large data, organizing efficient data management at the physical (technical) level, since these settings are used specifically to organize work with the database file(s) on magnetic media.

Fig. 5.10. Attaching a table to a table space

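In DB2 syntax, the placement chosen in these tabs surfaces in the generated table definition roughly as follows (all names here are illustrative assumptions):

```sql
-- Sketch in DB2 syntax; all names are illustrative.
CREATE TABLE orders (
    id_order INTEGER NOT NULL,
    PRIMARY KEY (id_order)
)
IN ts_data          -- table space for the table's rows
INDEX IN ts_index   -- "Index In Tablespace": a Regular table space for indexes
LONG IN ts_large;   -- Large table space for LOB/long data, if any
```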

Having determined the parameters of the physical organization of the database, you can proceed to its logical organization, which consists of configuring, for the selected DBMS, field (column) parameters, constraints, procedures, etc. One of the objects to define is the default value. When a specific DBMS is selected, the tool can load the defaults predefined in that DBMS (Figure 5.11), which can then be used in other objects, for example when establishing check constraints.

Fig. 5.11. List of predefined defaults


These defaults are usually the current date and time values, empty values in the context of individual data types, the name of the session in which the user is connected to the database, the name of the connected user, etc.

Creating a new default is similar to the process for the logical database model: call the "Default Values" item from the context menu of the project tree, then in the dialog box specify the default's name, which will be used in the database as the unique name of the object, and the value that should be assigned to fields and used in constraints.

As for the defaults of the physical database model, taking the chosen DBMS into account, they are applied when determining the value automatically set for fields and, being described as database variables, can also be used in computational and check (logical) expressions of constraints. This allows the developer to define a large number of standard values that the database can use for efficient and standardized data processing in various applications: filling fields, calculating values, validating values, performing processing and sampling procedures.
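As a hedged illustration in DB2 syntax (table and column names are assumptions), defaults appear in the generated DDL both as column DEFAULT clauses and alongside check constraints:

```sql
-- Sketch in DB2 syntax; table and column names are illustrative.
CREATE TABLE orders (
    id_order   INTEGER      NOT NULL PRIMARY KEY,
    date_order DATE         DEFAULT CURRENT DATE,  -- predefined default
    created_by VARCHAR(128) DEFAULT USER,          -- connected user's name
    qty        INTEGER      DEFAULT 1,             -- developer-defined default
    CHECK (qty >= 1)                               -- check constraint on the value
);
```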

The next element of the physical database model that needs to be specified precisely is the referential integrity constraint, defined by the actions to be taken when data in the parent or child table is added, changed, or deleted. Referential integrity constraints are configured in the settings of the relationship between tables (see Figure 5.12) using the "RI Actions" tab.

For each action on the data, the tool offers the complete set of operations that were considered when describing the logical model of the database. Of course, different DBMSs support different sets of referential integrity constraints, but the tool treats these operations not so much as standard mechanisms as a potential rule for generating the program code of a trigger, which is likewise administered in the DBMS.


For a physical database, most DBMSs have definitions for standard referential integrity constraints for the following actions on data:

- adding a new record to the child table, since it has at least one foreign key whose value must be checked for the presence of the same value in the primary key of the parent table;

- changing the value of the primary key of the parent table, because the child table holds an exactly matching value and an uncontrolled change can violate data integrity;

- deleting a record in the parent table, given that child tables can have associated records, and the absence of a referential integrity constraint can make the associated data of the child table inaccessible.

Fig. 5.12. Configuring Referential Integrity Constraints

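For a child table, these settings translate into foreign key clauses; a sketch in DB2 syntax (the table and column names are illustrative):

```sql
-- Sketch in DB2 syntax; names are illustrative.
CREATE TABLE order_items (
    id_item  INTEGER NOT NULL PRIMARY KEY,
    id_order INTEGER NOT NULL,
    FOREIGN KEY (id_order) REFERENCES orders (id_order)
        ON DELETE CASCADE    -- deleting a parent row removes its children
        ON UPDATE RESTRICT   -- the parent key cannot change while children exist
);
-- Inserting a child row whose id_order has no match in orders
-- is rejected by the DBMS automatically.
```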

All other actions are usually handled in the DBMS at the level of a programmed trigger action, whose code is usually generated automatically by the tool but can also be program code defined by the developer (see Figure 5.13). To create your own trigger action, use the context menu of the "Triggers" folder of the project tree to create it and switch to editing it.


Fig. 5.13. Selection for editing the trigger action

Although a trigger action affects the relationship between keys, it is defined for a table, because it is invoked by an action performed on the data in that table (see Figure 5.14). It must therefore be created in the project tree folder located under the specific table.

Fig. 5.14. Definition of the basic parameters of the trigger


Since a trigger action is performed when an event caused by data modification occurs, its main characteristic is the data action that should invoke it, indicated in the main parameters area by "Insert", "Update" or "Delete". You also need to specify when the trigger fires: "After" (after the action on the data is performed) or "Before" (before it). In addition to these basic characteristics, developers specify the rules of access to the data affected by the action that caused the trigger to execute.

The basic settings are configured in the General tab, where you can determine the accessibility of the data being modified (Figure 5.15):

• Scope - the accessibility rule for the modified data, specifying the level of a single data row (Row) or of a table with multiple rows (Table);

• New - the name of the temporary data area, represented by a single row, formed from the results of the operation on the data;

• Old - the name of the temporary data area, represented by a single row, containing the information as it was before the operation on the data;

• New Table - the name of the temporary table with the set of data rows obtained after the operation on the data;

• Old Table - the name of the temporary table with the set of data rows as they were before the operation on the data;

• When Clause - a logical expression, written according to the rules of the DBMS programming language, that specifies additional conditions for performing the trigger action.
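These settings correspond directly to the clauses of a trigger definition; a sketch in DB2 syntax (the audit table and columns are hypothetical):

```sql
-- Sketch in DB2 syntax; the tables and columns are illustrative.
CREATE TRIGGER trg_orders_upd
    AFTER UPDATE ON orders            -- timing and triggering action
    REFERENCING NEW AS n OLD AS o     -- New / Old row names
    FOR EACH ROW                      -- Scope: row level
    WHEN (n.qty <> o.qty)             -- When Clause: extra condition
    UPDATE orders_audit
       SET changed_at = CURRENT TIMESTAMP
     WHERE id_order = n.id_order;
```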

Additional conditions on the execution of a trigger action make it possible to substantially limit how often the trigger actually fires. This matters because a trigger action runs automatically and, without special access modes, cannot be stopped or controlled. Such uncontrolled firing of trigger actions can significantly reduce the speed of the database, and often render it completely inoperable.

Fig. 5.15. Specifying the basic settings of the trigger


When a trigger action is defined for updating the table's data, you can also specify the fields (columns) whose modification will fire the trigger; this is done in the "Update Columns" tab of the trigger settings (Figure 5.16).


Fig. 5.16. Defining the columns that fire the trigger

All the fields (columns) of the table appear in the list of columns available for selection, and here the columns of most interest are those belonging to primary keys: it is sometimes necessary to define an action for the case when the value of one of the attributes of a composite primary key is corrected. A trigger action may be needed for any of the other columns as well. Most often, such trigger actions are defined for non-key fields (columns) whose modification requires data to be changed in other tables.

For example, if information about a product's price and its quantity is divided between different tables, then calculating the cost of the ordered goods requires referring to two tables, which is impossible when a field's (column's) value is calculated automatically by a mathematical expression. Here a trigger action can help: when, say, the price of the goods changes, it starts the calculation of the new cost and writes this information to the corresponding field (column) of the other table.

Using a trigger action this way spares the developer from writing an additional recalculation step in a procedure or application, shifting this function to automatically executed operations.
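Assuming hypothetical goods and order_items tables, such a recalculation trigger might be sketched in DB2 syntax as:

```sql
-- Sketch in DB2 syntax; tables and columns are illustrative.
CREATE TRIGGER trg_price_upd
    AFTER UPDATE OF price ON goods   -- fires only when the price column changes
    REFERENCING NEW AS n
    FOR EACH ROW
    UPDATE order_items
       SET cost = n.price * order_items.qty  -- write the new cost to the other table
     WHERE order_items.id_goods = n.id_goods;
```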

As a result of defining these parameters, trigger code will be generated, but it will not yet contain the operations to be performed. To add them, the developer uses the tool's special macro language (Figure 5.17) in the program code section of the trigger settings to form the necessary commands and their sequence. Program code written in the macro language makes it possible to switch from one DBMS to another with minimal expense, since the program code settings are formed independently of the DBMS.

Fig. 5.17. Setting the program code of the trigger on the macro language


The result of writing this program code will be, for the chosen DBMS, program code in the language of that DBMS (see Figure 5.18), which can be viewed and adjusted using its programming features in the "Expanded" tab.

Fig. 5.18. The program code in the DBMS language


This code follows the syntax and programming features of the language used in the DBMS, taking into account the constructs available there and, with appropriate adjustments, the special procedures and functions present only in the selected DBMS.

Trigger actions, although they are program procedures, are nevertheless not used very often. Stored procedures (see Figure 5.19), whose program code can be managed and accessed, including being run for execution, are used much more commonly. A stored procedure is created through the context menu of the "Stored Procedures" folder of the project tree.

Like any program procedure or function, a stored procedure has the basic mandatory characteristics: a name, input parameters and output. The developer can define the list of input parameters of the stored procedure in the Parameter settings, where each parameter is given:

• Parameter - the name by which the parameter will be used in the program code of the procedure;

• Type - the parameter type, selected from two options (In, Out);

• Physical Data Type - the data type describing the allowable values of the parameter;

• Default - the default value to be assigned to the parameter if the user does not supply one when calling the procedure.


Fig. 5.19. Basic settings of a stored procedure


Specifying these characteristics for all parameters of the stored procedure will generate a procedure header that specifies the key rules for using it in the DBMS.

Within the General tab, the basic parameters of the procedure itself are determined, which allow the tool to correctly generate the base program code that forms the procedure's result and the principles for constructing that code (Figure 5.20). This is necessary because stored procedures can act in two capacities:

- as a software module for modifying data, performing many actions to add, change and delete data and applying various programming constructs, such as conditions, loops and cursors, as well as specific functions implemented only in the programming language of the selected DBMS;

- as a parameterized view, which executes a single selection command and returns the result as a set of table-organized data but, unlike a classical view, is driven by input parameters.

Fig. 5.20. Stored procedure settings


Among the basic settings of the stored procedure, two are the most important:


• Specific Name - the name of the procedure by which it will be invoked to perform data processing;

• SQL Data Access - the type of the procedure being performed with data modification options (Modifies SQL Data) and data retrieval (Reads SQL Data).
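A sketch of how these settings surface in generated DB2 SQL PL (the names and the query itself are illustrative assumptions):

```sql
-- Sketch in DB2 SQL PL; names and logic are illustrative.
CREATE PROCEDURE get_order_total (
    IN  p_id_order INTEGER,        -- parameter Type: In
    OUT p_total    DECIMAL(10,2)   -- parameter Type: Out
)
SPECIFIC get_order_total           -- Specific Name
LANGUAGE SQL
READS SQL DATA                     -- SQL Data Access: data retrieval only
BEGIN
    SELECT SUM(cost) INTO p_total
      FROM order_items
     WHERE id_order = p_id_order;
END
```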

In addition to the basic settings, which assume the use of standard SQL, the tool provides the ability to define some additional settings (see Figure 5.21), indicated in the "Other Options" tab. The important settings are:

- Language - the programming language in which the program code of the stored procedure will be generated;

- Coded Character Set Identifier - the character encoding to use for data written to character variables and parameters, which is especially important when the database is used by an Internet application, where the encoding of the data is determined by the web page;

- Commit Transaction On Return - the stored procedure is, in essence, a single transaction, i.e. it either executes completely or not at all, which requires committing the result of the commands specified in the procedure.

Fig. 5.21. Other settings of the stored procedure


As a result of specifying all the initial parameters of the stored procedure, as in the case of the trigger, program code will be generated in the macro language of the tool (Figure 5.22).

To keep the program code universal, the macro language is a very useful component of the tool for stored procedures, enabling you to switch quickly between DBMSs and create databases that function identically under different DBMSs. Of course, program code written in the macro language may include standard operations of SQL or of the other language specified for the stored procedure, as well as specific functions and procedures defined only for the selected DBMS's language. In that case, when such explicit DBMS-specific instructions are used, you must be prepared for the converted code to stop being functional when the DBMS is changed; it will have to be corrected for the peculiarities of the new DBMS for which the database model is being reworked. This significantly reduces the ability to change the DBMS quickly and inexpensively and makes the database model less flexible.

Fig. 5.22. Presentation of the program code in the macro language


After the full description of the program code, the "Expanded" tab displays the program code generated in the DBMS programming language specified for the procedure (Figure 5.23). This code can be adjusted to exploit individual DBMS functions, although it is assumed that, to keep the model universal, the developer expressed all the necessary instructions in the program code written in the tool's macro language.


Fig. 5.23. Result of the stored procedure in the DBMS language

To change the DBMS for which the database model is implemented, simply call the corresponding dialog box via "Actions/Target Database...", where you can specify the required DBMS and its version (see Figure 5.24).

Fig. 5.24. Changing the choice of the DBMS


IBM InfoSphere Data Architect tool

When using tools for modeling the physical model of a database, the developer aims not only to build a model and transform it into a physical database, but also to manage that database using the existing model. IBM, treating information system development as a single, indivisible process supporting all stages of the life cycle, implements many functions within a single tool: IBM InfoSphere Data Architect covers the logical and physical modeling stages as well as database management, in the parts concerning programming and structuring database objects, directly from the tool itself, based on the descriptions of the physical model of the database.

Given these capabilities, developing the physical model is one of the most important tasks and must be performed as completely and carefully as possible, affecting not only the descriptions of tables and fields (columns) but all the other components: stored procedures, functions, and triggers. Following the principles of transition to the physical model, developers transform the logical model into the physical one, form the naming convention table, and then build the physical model, defining all the necessary properties of tables and fields (columns), including defaults, value constraints, and the generation of identifying values for surrogate keys.

In developing the models for the electronic store example, IBM DB2 was selected as the target for the physical database model (see Figure 5.25). Of particular interest is the transformation of the surrogate primary keys, whose data type was changed. For example, in the Orders entity the primary key order ID was defined with the Serial type and declared a surrogate key. After transformation to the orders table, the generated field id_order acquired a different data type: Varchar For Bit Data. For the selected DBMS this is a special data type: it is represented as a character type but is processed as a numeric attribute, storing a numeric value generated according to the rules of the arithmetic progression.

Fig. 5.25. Result of transition to the physical model of the database


It is important to note that, despite the previously established rule not to use character attributes as primary keys, this option is quite acceptable, since the DBMS implements special mechanisms that accelerate the processing of this character type. In this case, special parameters are set for this attribute (Figure 5.26).

Fig. 5.26. Setting up a surrogate primary key


The settings of the surrogate primary key shown here indicate that values must be generated according to the rules of arithmetic progression, as indicated by the "Generated" flag. Additionally, so that the DBMS processes the generation rule correctly, other characteristics are specified:

• Start - the initial value from which the DBMS will start when creating the first record of the table;

• Increment - the step by which the key value changes when another record is added, calculated as a rule on the basis of a special "Sequence" (counter) object that stores the current value to be changed;

• Minimum - the minimum value that the field (column) of the primary key can take; if the check box is cleared, the condition is not enforced;

• Maximum - the maximum value that the field (column) of the primary key can take, considering the previously formed conditions for filling primary key values.

If the developer decides that the value of a field (column) should follow not the rules of the arithmetic progression but some other computational expression, that expression is specified in the "Expression" property, keeping in mind that it can include computational functions, constants, and fields of the table in which the expression is defined. Using these field properties, you can thus configure the rules for filling the fields.
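Both generation mechanisms can be sketched in DB2 syntax as follows (the table, bounds and expression are illustrative assumptions):

```sql
-- Sketch in DB2 syntax; names, bounds and the expression are illustrative.
CREATE TABLE order_items (
    id_item INTEGER NOT NULL
        GENERATED ALWAYS AS IDENTITY
        (START WITH 1 INCREMENT BY 1   -- Start / Increment
         MINVALUE 1 MAXVALUE 32767),   -- Minimum / Maximum
    price   DECIMAL(10,2),
    qty     INTEGER,
    cost    DECIMAL(10,2)
        GENERATED ALWAYS AS (price * qty),  -- Expression over same-row fields
    PRIMARY KEY (id_item)
);
```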

Having brought the database model into line with the rules for naming objects and with the conditions and restrictions on field values stipulated in the development procedure, we turn to the formation of tablespaces and the placement of tables in them.

To create a table space, you need to refer to the project tree, which reflects all the objects of the model, divided into groups (Figure 5.27):

- Diagrams - diagrams of the database model with the display of tables and views;

- SQL Statement - software modules for processing the database, including functions and stored procedures;

- Storage Diagrams - storage and data placement diagrams for tablespaces and containers that determine the physical location of data on the magnetic medium;

- Schema - a database schema (role) that groups a variety of data representation objects, including tables, views, and triggers.

Fig. 5.27. Project tree of the physical database model


A tablespace is created through the context menu of the "Storage Diagrams" group (Figure 5.28) and comes in two types: Regular - the standard tablespace for operations with tables and indexes; Large - a large tablespace for storing large data types.

Fig. 5.28. Transferring tables to a table space


After creating a diagram, you can create an empty tablespace on the workspace and, in the "Tables" field of the tablespace properties, specify the tables, indexes and tables with large data that must be placed in it (Figure 5.29).

Fig. 5.29. The result of creating a table space with tables


As a result, a tablespace with a list of tables assigned to it will be presented in the diagram and in the project tree. However, it is not enough to put tables into a tablespace. You must also configure the tablespace correctly, which is done in the "General" tab of the tablespace properties area.

In addition to the name of the tablespace, the developer must define the rules for managing the storage of the database file on the magnetic medium and the possibility of restoring tables when they are deleted (Figure 5.30). As with other tools, the possible storage management settings are defined by the DBMS, which for IBM DB2 means the options System, Database and Automatic Storage, corresponding to management at the level of the operating system, the DBMS, or automatic detection, respectively.

Fig. 5.30. Setting the main properties of the table space


After configuring the type of storage management, it is important to specify the quantitative characteristics of the tablespace size and how its size changes when it is completely filled with data. These characteristics are defined in the "Size" tab of the tablespace properties (see Figure 5.31).


Fig. 5.31. Adjusting the size of a table space

As in other tools, for a tablespace you need to determine the page size and the initial size of the tablespace, specifying the unit of measurement (KB, MB, GB), and also the extent size (Extent size), taking the page size into account.

Having specified that the tablespace should automatically resize (Autoresize) when it is completely filled with data, you must additionally define the following parameters:

• Increase size - the increment by which the size grows, specified either as a percentage or as an absolute amount;

• Maximum size - the maximum size of the tablespace, specified as an absolute amount (KB, MB, GB).
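Taken together, the storage-management type and the size settings correspond to a DB2 CREATE TABLESPACE statement. A hedged sketch, assuming automatic storage management (the tablespace name and all sizes are illustrative):

```sql
-- Hypothetical tablespace definition matching the settings described above
CREATE TABLESPACE ts_orders
    PAGESIZE 4 K                   -- page size
    MANAGED BY AUTOMATIC STORAGE   -- the Automatic Storage management option
    EXTENTSIZE 32                  -- extent size, in pages (32 * 4 KB = 128 KB)
    INITIALSIZE 100 M              -- initial size
    AUTORESIZE YES                 -- grow automatically when full
    INCREASESIZE 10 PERCENT        -- "Increase size" as a percentage
    MAXSIZE 2 G;                   -- "Maximum size"
```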

Thus, after describing all the tablespaces, you can check that the tables are correctly assigned to them, especially if the tables were not placed in tablespaces earlier. Besides defining the properties of the tablespace itself, a table can be placed into a tablespace through the table properties, where the "Tablespace" tab specifies how the table data is to be placed.

Because tables, indexes and large data are supposed to use different tablespaces, the table properties require the developer to choose where to place each corresponding element. Since the standard tablespace is used for tables and indexes, the same tablespace can be specified in the corresponding fields (Regular, Index) (see Figure 5.32), though it should be understood that the processing speed for large amounts of data may then be significantly lower than when tables and indexes are spread across different tablespaces.

Fig. 5.32. Attaching a table to a table space


When it comes to large data represented by large data types (Text, CLOB, BLOB, Binary, etc.), a special tablespace (Large tablespace) must be used to store this data; by default, it is configured to work effectively with such data.
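The placement choices made in the "Tablespace" tab of the table properties map onto the IN / INDEX IN / LONG IN clauses of DB2's CREATE TABLE. A minimal sketch (all table and tablespace names are illustrative assumptions):

```sql
-- Hypothetical example: row data, indexes and large objects
-- assigned to different tablespaces
CREATE TABLE documents (
    id_document INTEGER NOT NULL PRIMARY KEY,
    title       VARCHAR(200),
    body        CLOB(10 M)
)
IN ts_regular          -- Regular tablespace for row data
INDEX IN ts_index      -- tablespace for indexes
LONG IN ts_large;      -- Large tablespace for CLOB/BLOB columns
```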

Finally, there is the last set of table description properties, which defines the rules for managing data storage. This set of properties is placed in the "Volumetrics" tab of the table properties area (Figure 5.33). First of all, it is important to indicate information about the number of records the table can hold:

- Initial number of rows - the number of records expected to be placed in the table when it is created;

- Row growth per month - the number of rows (records) that will be added to the table within a month, determining the degree of variability and growth of the table size;

- Maximum number of rows - limits the number of records that can be entered into the table, which is especially important for standard classifiers whose variability tends to zero.


The second area of volumetrics defines the properties of changes and sizes of records and index information. Since an index is represented by the same kind of table as ordinary data, the set of properties for indexes is identical to the properties of the table. The tool immediately shows the average size of one record and suggests using the "Projected in month" property for the record size predicted a month ahead, considering that some fields (columns) of the table have a dynamic-length string type (varchar).

Fig. 5.33. Setting up table volumetrics

Similar properties are specified for the table as a whole and indices:

• Initial Size - the base size of the table entry or the corresponding index;

• Projected Size - the predicted size;

• Maximum Size - the maximum size.

Having defined the tables, fields (columns) and tablespaces, the developer needs to establish the correct referential integrity rules for the relationships between the tables (Figure 5.34), which is done through the "Referential Integrity" tab of the relationship properties area.

Fig. 5.34. Referential integrity settings


This tool, unlike many others, offers only those referential integrity rules that are defined in the selected DBMS, allowing the developer to specify the control behaviour of a relationship between tables for only two actions: updating and deleting data. Moreover, the available actions reproduce the limitations of the DBMS. For example, for the Update action the tool offers only three options for controlling referential integrity: No Action, Restrict and Cascade. This is explained by the fact that this modeling tool implements trigger operations only at the developer's instruction and does not generate them automatically, as is provided, for example, in ERWin.
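The referential integrity settings of a relationship correspond to the ON UPDATE / ON DELETE clauses of a foreign-key constraint. A hedged sketch of what the tool would generate (table, column and constraint names are illustrative assumptions):

```sql
-- Hypothetical referential-integrity rules for an orders-clients relationship
ALTER TABLE orders
    ADD CONSTRAINT fk_order_client
        FOREIGN KEY (id_client)
        REFERENCES clients (id_client)
        ON UPDATE RESTRICT     -- reject updates that would orphan orders
        ON DELETE CASCADE;     -- deleting a client removes its orders
```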

Thus, having defined the rules of referential integrity, the developer should think about creating trigger actions for behaviour that does not obey the basic constraints. Triggers are created through the context menu of the table in the project tree.

In the basic properties of the trigger (Figure 5.35), as in other tools, the characteristics of the trigger's execution time and of the data action that should fire it are determined. The developer is asked to indicate:

- Name - the name of the trigger object so that it can be accessed through database structure management commands;

- Schema - the schema (role) to which the trigger is assigned, where, by default, the table's schema is used;

- Action time - the moment when the trigger should execute relative to the data action that fires it, with three options to choose from: after, before, instead of;

- Granularity - the scope of data the trigger operates on: statement - the trigger fires once for the entire statement; row - the trigger fires for each affected record (row);

- Trigger event - the events that fire the trigger, such as insert, update and delete, where, if a data-change action is selected as the initiating action, you can select the fields (columns) whose change will cause the trigger to run.

Fig. 5.35. Defining the basic properties of the trigger


Moving to the next stage of the trigger description, the developer must define the conditions of the program logic and the program code itself. The program code is configured in the "Details" tab of the trigger properties area (Figure 5.36), where the level of work with the data must be determined:

• Temporary table names for transition tables - the names of temporary tables where data will be stored before (Old) and after (New) the data-modification operation;

• Correlation names for transition rows - the names used to access a single modified data record, characterizing the data before (Old) and after (New) the data-modification operation.

Fig. 5.36. Trigger programming area


It is important to understand that row-level access allows the program code to refer directly to the fields of a record by their names. Where the temporary-table level is used, data is accessed through commands that work with a set of table records, including the Select command.

Next, in the "When" field the developer specifies a logical expression that is checked before the trigger body executes to determine whether the operations recorded in it should be performed. In the "Action" field the developer specifies the SQL code itself, which is executed when the trigger fires.
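The properties above (action time, trigger event, granularity, correlation names, the "When" condition and the "Action" code) assemble into a single DB2 CREATE TRIGGER statement. A minimal sketch (table, column and trigger names, and the business rule itself, are illustrative assumptions):

```sql
-- Hypothetical trigger: reject updates that halve a product's price
CREATE TRIGGER trg_check_price
    NO CASCADE BEFORE UPDATE OF price ON products  -- action time + event
    REFERENCING OLD AS o NEW AS n                  -- correlation names for rows
    FOR EACH ROW                                   -- granularity: row
    WHEN (n.price < o.price * 0.5)                 -- the "When" condition
BEGIN ATOMIC
    -- the "Action" code
    SIGNAL SQLSTATE '75001'
        SET MESSAGE_TEXT = 'Price cannot be reduced by more than half';
END
```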

Another object that should be defined in the physical database model is the view, which is a standard query without user-supplied parameters (or with parameters represented by constants). In the main "General" tab, the name of the view is specified so that other views, stored procedures, triggers and applications can access it (Figure 5.37).


Fig. 5.37. Result of creating a view

The view is implemented through the context menu of the project tree, where in the properties the developer specifies the selection command in SQL using the "SQL" tab; this command determines the set of fields (columns) formed by the view (Figure 5.38).

Fig. 5.38. Expression for the view


The data-selection command that generates a view can be quite complex, using other views, multiple joined tables, and subqueries.
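A minimal sketch of such a view, joining two tables (all object names are illustrative assumptions):

```sql
-- Hypothetical view combining clients with their orders
CREATE VIEW v_client_orders AS
    SELECT c.id_client, c.name, o.id_order, o.amount
    FROM clients c
    JOIN orders  o ON o.id_client = c.id_client
    WHERE o.amount > 0;   -- a constant acting as a fixed "parameter"
```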

The last object formed in the physical database model is the stored procedure (Figure 5.39), represented by program code in a programming language supported by the DBMS. Focusing on the DBMS, the tool initially suggests creating the stored procedure in a language available in that DBMS.

Fig. 5.39. Basic settings of a stored procedure


When creating a stored procedure through the context menu in the project tree of the physical database model, the properties specify the name of the procedure, so that it can later be called for execution, and the language in which it will be programmed. The next stage of creating a stored procedure is to define the set of parameters (see Figure 5.40) that must be passed to the procedure and used in its implementation, indicating not only the name of each parameter but also the type of data to be transferred and the usage type, distinguishing input and output parameters.

Fig. 5.40. Defining the parameters of a stored procedure


The specified parameters of the stored procedure will be included in the generated program code and used inside the procedure when performing operations on data, whether modifying or retrieving information.

The choice of the usage variant (Figure 5.41), namely the type of result to be returned, is made in the "Options" tab of the stored procedure properties, where:

• Modifies SQL Data means creating a procedure that modifies data in database tables, with access to all the features of the DBMS programming language;

• Reads SQL Data means fetching data from tables and database views, using parameters as logical constraints on the selected data.

Fig. 5.41. Configuring options for using a stored procedure


If the Reads SQL Data variant of the stored procedure is used, the program code is implemented as a single selection command. The code itself is written by the developer in the "Source" tab of the stored procedure properties area.
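A hedged sketch of a DB2 SQL procedure of the Reads SQL Data kind, with one input and one output parameter (the procedure name, parameters and query are illustrative assumptions):

```sql
-- Hypothetical procedure: count a client's orders
CREATE PROCEDURE get_client_orders (IN  p_id_client   INTEGER,
                                    OUT p_order_count INTEGER)
    LANGUAGE SQL        -- the language chosen for the procedure
    READS SQL DATA      -- the "Reads SQL Data" option
BEGIN
    -- a single selection command, constrained by the input parameter
    SELECT COUNT(*) INTO p_order_count
    FROM orders
    WHERE id_client = p_id_client;
END
```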

As a result of carrying out all these actions to create the physical database model, taking into account the features of representing referential integrity rules and the techniques for modeling software processing of data, a model will be obtained whose synchronization with the DBMS will create a database that requires no additional configuration or structuring operations.
