Stages of the evolution of information technology
Fig. 3.1. Data management development phases over time
In the zeroth generation (4000 BC - 1900), data recording evolved over six thousand years from clay tablets to papyrus, then to parchment and, finally, to paper. There were many innovations in the representation of data: phonetic alphabets, written works, books, libraries, paper and printing. These were great achievements, but in this era information was still processed manually.
The first generation (1900-1955) is associated with punched-card technology, in which data records were represented as binary patterns of holes. The prosperity of IBM in the period 1915-1960 was built on the production of electromechanical equipment for recording data on cards, sorting them and compiling tables. The bulkiness of the equipment and the need to store huge numbers of punched cards set the stage for a new technology that would supplant electromechanical computers.
The second generation (programmable record-processing equipment, 1955-1980) is associated with the advent of magnetic tape, a single reel of which could store the information of ten thousand punched cards. To process this information, stored-program electronic computers were developed that could process hundreds of records per second. The key element of this new technology was software, which made it relatively easy to program and use computers.
The software of this period supported a record-oriented file model. A typical program sequentially read several input files and produced new files as output. COBOL and several other programming languages were created to make it easier to define these record-oriented sequential tasks. Operating systems provided the file abstraction for storing the records, a job-control language, and a job scheduler for managing workflow.
Batch transaction-processing systems captured transactions on cards or tape and collected them into batches for later processing. Once a day these batches were sorted, and the sorted transactions were merged with a much larger database (the master file) stored on tape to produce a new master file. From the same run a report was also produced, which served as a ledger for the next business day. Batch processing used computers very efficiently, but it had two serious limitations: errors could not be detected until the master file was processed, and there was no up-to-date knowledge of current information.
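The daily sort-merge update described above can be sketched as follows (a minimal illustration, assuming master and transaction records keyed by an account id; all names and the record layout are hypothetical):

```python
# Sketch of a batch "old master / new master" update.
# The master file and the day's transaction batch are both sorted by
# account id; one sequential merge pass produces the new master file.

def merge_update(master, transactions):
    """master: sorted (account_id, balance) pairs;
    transactions: sorted (account_id, amount) pairs."""
    new_master = []
    t = iter(transactions)
    txn = next(t, None)
    for account_id, balance in master:
        # Apply every transaction for this account, in order.
        while txn is not None and txn[0] == account_id:
            balance += txn[1]
            txn = next(t, None)
        new_master.append((account_id, balance))
    return new_master

old_master = [(1, 100.0), (2, 250.0), (3, 40.0)]
batch = sorted([(2, -50.0), (1, 25.0), (2, 10.0)])  # the daily sort step
new_master = merge_update(old_master, batch)
print(new_master)  # [(1, 125.0), (2, 210.0), (3, 40.0)]
```

Both limitations named above are visible here: a bad transaction surfaces only during the merge, and balances are correct only as of the last nightly run.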
The third generation (online databases, 1965-1980) is associated with the introduction of online, interactive access to data, based on database systems supporting online transactions.
The hardware for connecting interactive terminals to computers evolved from teletypes to simple alphanumeric displays and, finally, to today's intelligent terminals based on personal-computer technology.
The online databases were stored on magnetic disks or drums, which provided access to any data element in a fraction of a second. These devices, together with data-management software, enabled programs to read a few records, modify them, and then return the new values to the online user. Early systems provided only simple data lookup: either direct lookup by record number, or associative lookup by key.
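The two early access paths can be sketched as follows (a toy illustration, assuming fixed-position records addressed by number plus a hash index on one key field; all names are hypothetical):

```python
# Records addressable by record number (direct access), plus an
# associative index mapping a key field to record numbers.

records = [
    {"id": 101, "name": "Smith"},
    {"id": 102, "name": "Jones"},
    {"id": 103, "name": "Smith"},
]

# Direct lookup: one positional access by record number.
def by_record_number(n):
    return records[n]

# Build an associative index: key value -> list of record numbers.
index = {}
for n, rec in enumerate(records):
    index.setdefault(rec["name"], []).append(n)

# Associative lookup: find all records matching a key value.
def by_key(name):
    return [records[n] for n in index.get(name, [])]

print(by_record_number(1))                  # {'id': 102, 'name': 'Jones'}
print([r["id"] for r in by_key("Smith")])   # [101, 103]
```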
Simple indexed-sequential record organizations quickly evolved into a more powerful, set-oriented model. Data models evolved from hierarchical and network to relational.
These early databases supported three kinds of data schemas:
• a logical schema, defining the global logical design of the database records and the relationships among them;
• a physical schema, describing the physical placement of database records on storage devices and in files, as well as the indexes needed to support the logical relationships;
• a subschema provided to each application, exposing only the part of the logical schema that the program uses.
The mechanism of logical and physical schemas and subschemas ensured data independence. In fact, many programs written in that era still run today against the same subschema with which they started, even though the logical and physical schemas have changed completely.
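The three schema levels can be illustrated in SQL (a sketch using Python's built-in sqlite3; the table, index and view names are hypothetical): the table definition plays the role of the logical schema, the index belongs to the physical schema, and a view acts as a subschema, so an application reading only the view is insulated from changes elsewhere.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Logical schema: the global design of records and relationships.
cur.execute("CREATE TABLE employee ("
            "emp_id INTEGER PRIMARY KEY, name TEXT, dept TEXT, salary REAL)")

# Physical schema detail: an index that speeds lookups but is
# invisible to application logic.
cur.execute("CREATE INDEX emp_dept_idx ON employee(dept)")

# Subschema: only the part of the logical schema this application uses.
cur.execute("CREATE VIEW emp_directory AS SELECT name, dept FROM employee")

cur.execute("INSERT INTO employee VALUES (1, 'Ada', 'ENG', 900.0)")
conn.commit()

# The application sees its subschema; the salary column stays hidden.
rows = cur.execute("SELECT * FROM emp_directory").fetchall()
print(rows)  # [('Ada', 'ENG')]
```

Dropping or rebuilding the index, or adding columns to the table, leaves the view's result unchanged, which is the data independence the text describes.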
By 1980, set-oriented network (and hierarchical) data models had become very popular. However, their navigational programming interface was low-level, and this served as an impetus for further improvement of information technologies.
The fourth generation (relational databases and the client-server architecture, 1980-1995) arose as an alternative to that low-level interface. The idea of the relational model is a uniform representation of entities and relationships. The relational data model has a unified language for defining data, navigating over data, and manipulating data. Work in this direction gave birth to a language called SQL, adopted as a standard.
Today, almost all database systems provide an SQL interface. In addition, all systems support native extensions that go beyond this standard.
In addition to improved productivity and ease of use, the relational model brought some unexpected advantages. It turned out to be well suited to the client-server architecture, to parallel processing and to graphical user interfaces. A client-server application is divided into two parts. The client part is responsible for accepting input from, and presenting output to, the user or client device. The server is responsible for storing the database, processing client requests against it, and returning responses to the client. The relational interface is especially convenient in the client-server architecture because it leads to an exchange of high-level queries and responses: a high-level SQL interface minimizes communication between client and server. Today many client-server tools are built on the Open Database Connectivity (ODBC) protocol, which provides the client with a standard high-level mechanism for querying the server. The client-server architecture continues to develop. As explained in the next section, there is a growing trend toward integrating procedures into database servers. In particular, procedural languages such as BASIC and Java have been added to servers so that clients can invoke application routines that execute on them.
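The communication saving can be sketched as follows (a toy in-process model, not a real network protocol; all names are hypothetical): the client ships one declarative query and receives the complete answer in one round trip, instead of navigating the data record by record.

```python
import sqlite3

# "Server" side: owns the database and answers high-level queries.
class Server:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        self.conn.executemany("INSERT INTO orders VALUES (?, ?)",
                              [(1, 10.0), (2, 99.5), (3, 7.25)])

    def query(self, sql, params=()):
        # One request in, one complete result set out.
        return self.conn.execute(sql, params).fetchall()

# "Client" side: a single high-level request replaces many
# low-level record fetches.
server = Server()
answer = server.query("SELECT id FROM orders WHERE total > ?", (9.0,))
print(answer)  # [(1,), (2,)]
```

A navigational interface would instead require one client-server exchange per record examined; the declarative query moves that whole loop to the server.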
Parallel database processing was the second unexpected advantage of the relational model. Relations are homogeneous sets of records, and the relational model comprises a set of operations that are closed under composition: each operation takes relations as input and produces a relation as its result. Relational operations therefore naturally support pipeline parallelism, with the output of one operation feeding the input of the next.
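Closure under composition and the resulting pipelining can be sketched with Python generators (a toy model of relational operators; all names are hypothetical): each operator consumes a stream of tuples and yields a stream of tuples, so operators chain together without materializing intermediate relations.

```python
# Toy relational operators as generators: each consumes and produces
# a stream of rows (dicts), so operators compose into a pipeline.

def scan(table):                    # table scan: the pipeline source
    for row in table:
        yield row

def select(rows, predicate):        # relational selection
    for row in rows:
        if predicate(row):
            yield row

def project(rows, columns):         # relational projection
    for row in rows:
        yield {c: row[c] for c in columns}

employees = [
    {"name": "Ada",  "dept": "ENG", "salary": 900},
    {"name": "Bob",  "dept": "OPS", "salary": 500},
    {"name": "Cleo", "dept": "ENG", "salary": 700},
]

# The output of each operation is the input of the next: a pipeline.
pipeline = project(select(scan(employees),
                          lambda r: r["dept"] == "ENG"), ["name"])
print(list(pipeline))  # [{'name': 'Ada'}, {'name': 'Cleo'}]
```

In a parallel database the stages of such a pipeline can run concurrently on different processors, which is the pipeline parallelism the text describes.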
Relational data is also well suited to graphical user interfaces (GUIs). Users can easily render relations as spreadsheets and manipulate them visually.
Meanwhile, file systems and set-oriented systems remained the "workhorses" of many corporations. Over the years these corporations had built huge applications and could not easily switch to relational systems. Instead, relational systems became the key tool for new client-server applications.
The fifth generation (multimedia databases, since 1995) is connected with the transition from the traditional storage of numbers and characters to object-relational databases containing data with complex behavior. For example, geographers should be able to implement maps, text specialists should be able to implement text indexing and retrieval, and graphics specialists should be able to implement type libraries for working with images. A concrete example is the widespread object type of time series. Instead of embedding this object in the database system, it is recommended to implement the type as a class library with methods for creating, updating and deleting time series.
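That recommendation can be sketched as a class library (a minimal hypothetical TimeSeries type with create/update/delete methods; not any particular product's API):

```python
# A minimal time-series class library: the type lives in a library
# outside the database engine (all names hypothetical).

class TimeSeries:
    def __init__(self):
        self._points = {}            # timestamp -> observed value

    def create(self, timestamp, value):
        self._points[timestamp] = value

    def update(self, timestamp, value):
        if timestamp not in self._points:
            raise KeyError(timestamp)
        self._points[timestamp] = value

    def delete(self, timestamp):
        del self._points[timestamp]

    def window(self, start, end):
        """Values observed in [start, end), in time order."""
        return [v for t, v in sorted(self._points.items())
                if start <= t < end]

ts = TimeSeries()
ts.create(1, 20.5)
ts.create(2, 21.0)
ts.create(3, 19.8)
ts.update(2, 21.5)
ts.delete(3)
print(ts.window(0, 10))  # [20.5, 21.5]
```

An object-relational system would let such a class be registered with the database so that its methods run next to the stored data rather than in the application.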
The rapid development of the Internet intensifies this debate. Internet clients and servers are built using applets and "helpers" that store, process, and display data of one type or another. Users plug these applets into a browser or server. Common applets manage sound, graphics, video, spreadsheets and charts, and for each of the data types they handle there is a class library. Desktop computers and Web browsers are the common sources and sinks of most data, so the types and object models used on desktop computers will dictate which class libraries must be supported on database servers.
To summarize, databases are designed to store not only numbers and text. They are used to store the many kinds of objects, and the connections between those objects, that we see on the World Wide Web. The distinction between the database and the rest of the Web is becoming blurred.
To approach the current state of data-management technology, it makes sense to describe two large projects that push the limits of today's technology. The Earth Observation System/Data Information System (EOS/DIS) is being developed by NASA and its contractors to store all the data that began arriving from the "Mission to Planet Earth" series of satellites in 1997. The volume of the database, including remote-sensing data, will grow by about 5 terabytes per day, and by 2007 the database will reach some 15 petabytes (1 PB = 10^6 GB). This is 1000 times larger than the largest operational databases of today. NASA wants this database to be accessible to everyone, anywhere, at any time, so that anyone can search, analyze and visualize its data. Building EOS/DIS will require the most advanced methods of storing, searching and visualizing data. Most of the data will have spatial and temporal characteristics, so the system will require significant advances in storage technology for these data types, as well as class libraries for the various scientific data sets. For example, this application requires a library for determining snow cover, a catalog of plant forms, cloud analysis, and other physical properties of the images.
Another impressive example is the world digital library now being created. Many departmental libraries are opening online access to their holdings, and new scientific literature is published online. This kind of publication raises difficult social questions about copyright and intellectual property, and poses deep technical problems. The size and diversity of the information are daunting: it appears in many languages, in many data formats and in huge volumes. Traditional approaches to organizing such information (by author, subject, title) do not exploit the power of computers to search information by content, to link documents, and to cluster similar documents. Finding the required information in a sea of documents, maps, photographs, audio and video is an exciting and difficult problem.
The rapid development of storage, communication and processing technologies makes it possible to move all information into cyberspace. Software for defining, searching and visualizing information is the key to creating such information and making it accessible. The main tasks that need to be solved are:
• defining data models for new data types (for example, spatial, temporal, graphic) and integrating them with traditional database systems;
• scaling databases in size (up to petabytes), in spatial distribution (distributed databases), and in diversity (heterogeneous databases);
• automatic discovery of trends, patterns and anomalies in data (data mining, data analysis);
• integration (combination) of data from several sources;
• scripting and managing the flow of work (processes) and data in organizations;
• automating database design and administration.