Distributed DBMS [GTU]
Free
- Introduction to Distributed Database
- Design Issues in Distributed database
- Architectures in Distributed database
- Fragmentation and correctness rules
- Horizontal fragmentation
- Vertical fragmentation
- Derived horizontal fragmentation
- Query processing cost
- Concurrency Control Protocol
- Locking-based concurrency control
- Timestamp concurrency protocol
- Deadlock and its prevention techniques
- Deadlock detection methods
- Two-phase commit protocol
- Three-phase commit protocol
Distributed DBMS
The prerequisites for studying this subject are Database Management Systems and Networking. A relational database is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A software system used to maintain relational databases is a relational database management system (RDBMS). Many relational database systems offer the option of using SQL (Structured Query Language) for querying and maintaining the database.
The course comprises the following chapters and subtopics:

- Introduction: Distributed Data Processing; Distributed Database Systems; Promises of DDBSs; Complicating Factors; Problem Areas
- Overview of RDBMS: Concepts; Integrity; Normalization
- Distributed DBMS Architecture: Models (Autonomy, Distribution, Heterogeneity); DDBMS Architectures (Client/Server, Peer-to-Peer, MDBS)
- Data Distribution Alternatives: Design Alternatives (localized data, distributed data); Fragmentation (vertical, horizontal (primary and derived), hybrid, general guidelines, correctness rules); Distribution Transparency (location, fragmentation, replication); Impact of distribution on user queries (no Global Data Dictionary (GDD), GDD containing location information, example on fragmentation)
- Semantic Data Control: View Management; Authentication (database authentication, OS authentication); Access Rights; Semantic Integrity Control (centralized and distributed); Cost of enforcing semantic integrity
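The fragmentation correctness rules mentioned above (completeness, reconstruction, disjointness) can be illustrated with a small sketch. The EMP relation, its attributes, and the site predicates below are hypothetical, chosen only for illustration:

```python
# Hypothetical global relation EMP(eno, ename, loc).
emp = [
    (1, "Asha",  "Mumbai"),
    (2, "Bilal", "Delhi"),
    (3, "Chen",  "Mumbai"),
]

# Primary horizontal fragmentation: one fragment per site,
# selected by a predicate on the location attribute.
frag_mumbai = [t for t in emp if t[2] == "Mumbai"]
frag_delhi  = [t for t in emp if t[2] == "Delhi"]

# Reconstruction: the union of the fragments recovers the global relation.
assert sorted(frag_mumbai + frag_delhi) == sorted(emp)
# Disjointness: no tuple appears in more than one fragment.
assert not set(frag_mumbai) & set(frag_delhi)

# Vertical fragmentation: project attribute subsets, keeping the key
# (eno) in every fragment so a join can reconstruct the relation.
v1 = [(eno, name) for (eno, name, _) in emp]
v2 = [(eno, loc) for (eno, _, loc) in emp]
reconstructed = [(eno, name, dict(v2)[eno]) for (eno, name) in v1]
assert reconstructed == emp
```

The assertions are exactly the correctness rules: if any tuple were lost, duplicated across horizontal fragments, or irrecoverable by the join, an assertion would fail.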
- Query Processing: Query Processing Problem; Layers of Query Processing; Query Processing in Centralized Systems (parsing and translation, optimization, code generation, example); Query Processing in Distributed Systems (mapping a global query to local queries, optimization)
- Optimization of Distributed Queries: Query Optimization; Centralized Query Optimization; Join Ordering; Distributed Query Optimization Algorithms
- Distributed Transaction Management & Concurrency Control: Transaction concept; ACID properties; Objectives of transaction management; Types of transactions; Objectives of distributed concurrency control; Concurrency control anomalies; Methods of concurrency control; Serializability and recoverability; Distributed serializability; Enhanced lock-based and timestamp-based protocols; Multiple granularity; Multiversion schemes; Optimistic concurrency control techniques
- Distributed Deadlock & Recovery: Deadlock concept; Deadlock in centralized systems; Deadlock in distributed systems (detection, prevention, avoidance, Wait-Die algorithm, Wound-Wait algorithm); Recovery in DBMS (types of failure, methods to control failure, different techniques of recoverability, write-ahead logging protocol); Advanced recovery techniques (shadow paging, fuzzy checkpoint, ARIES, RAID levels); Two-phase and three-phase commit protocols
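The two-phase commit protocol covered in the recovery chapter can be sketched in miniature. This is a simplified, single-process illustration (the class and function names are our own, and real concerns such as logging, timeouts, and site failures are omitted):

```python
# Minimal sketch of two-phase commit (2PC). Participants are plain
# in-process objects standing in for remote sites.
class Participant:
    def __init__(self, name, can_commit):
        self.name, self.can_commit, self.state = name, can_commit, "INIT"

    def prepare(self):
        # Phase 1: vote on the coordinator's "prepare" request.
        self.state = "READY" if self.can_commit else "ABORTED"
        return self.can_commit

    def finish(self, decision):
        # Phase 2: apply the coordinator's global decision.
        if self.state != "ABORTED":
            self.state = decision

def two_phase_commit(participants):
    # Phase 1: the coordinator collects a vote from every participant.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit only if *all* voted yes; otherwise abort everywhere.
    decision = "COMMITTED" if all(votes) else "ABORTED"
    for p in participants:
        p.finish(decision)
    return decision

print(two_phase_commit([Participant("A", True), Participant("B", True)]))   # COMMITTED
print(two_phase_commit([Participant("A", True), Participant("B", False)]))  # ABORTED
```

A single "no" vote aborts the whole transaction, which is what makes the protocol atomic across sites; the three-phase variant adds a pre-commit round to reduce blocking when the coordinator fails.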
Database normalization is the process of structuring a database, usually a relational database, in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by Edgar F. Codd as part of his relational model. Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying formal rules, either through synthesis (creating a new database design) or decomposition (improving an existing database design).
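As a small illustration of decomposition, the sketch below (with hypothetical data) splits a redundant relation on the dependency dept → dept_head and checks that the decomposition is lossless:

```python
# Hypothetical denormalized relation (emp_id, emp_name, dept, dept_head).
# dept -> dept_head, so the head's name repeats on every row of a
# department — the redundancy normalization removes.
denormalized = [
    (1, "Asha",  "CS", "Dr. Rao"),
    (2, "Bilal", "CS", "Dr. Rao"),   # dept_head repeated -> update anomaly
    (3, "Chen",  "EE", "Dr. Iyer"),
]

# Decompose on the dependency dept -> dept_head:
employee   = [(e, n, d) for (e, n, d, _) in denormalized]
department = sorted({(d, h) for (_, _, d, h) in denormalized})

# Lossless-join check: rejoining the two relations on dept must
# recover the original relation exactly.
rejoined = [(e, n, d, dict(department)[d]) for (e, n, d) in employee]
assert rejoined == denormalized
```

After the split, a department head's name is stored once, so updating it touches a single row instead of one row per employee.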
Query optimization is a feature of many relational database management systems, as well as other databases such as graph databases. The query optimizer attempts to determine the most efficient way to execute a given query by considering the possible query plans. Generally, the query optimizer cannot be accessed directly by users: once queries are submitted to the database server and parsed by the parser, they are passed to the query optimizer, where optimization occurs. However, some database engines allow guiding the query optimizer with hints.

A query is a request for information from a database. It can be as simple as “find the address of the person with Social Security number 123-45-6789,” or as complex as “find the average salary of all employed married men in California between the ages of 30 and 39 who earn less than their spouses.” Query results are generated by accessing the relevant database data and manipulating it in a way that yields the requested information. Because database structures are complex, the data needed for a query can usually be collected in different ways, through different data structures, and in different orders, especially for non-trivial queries. Each way typically requires a different processing time, and processing times for the same query can vary widely, from a fraction of a second to hours, depending on the plan selected. The purpose of query optimization, an automated process, is to find a way to process a given query in minimum time. The large possible variance in execution time justifies performing query optimization, even though finding the exact optimal plan among all possibilities is typically very complex and time-consuming, may itself be too costly, and is often practically impossible.
Thus query optimization typically tries to approximate the optimum by comparing several common-sense alternatives, so as to produce, in a reasonable time, a “good enough” plan that usually does not deviate much from the best possible result.
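The idea of comparing alternative plans can be illustrated with a toy example (hypothetical tables; intermediate-result size stands in for the cost estimate a real optimizer would compute):

```python
# Two equivalent plans for "CS employees joined with their projects".
# Hypothetical data: 1000 employees (100 in CS), 1000 projects keyed
# by the same ids, so each employee matches exactly one project.
employees = [(i, "CS" if i % 10 == 0 else "EE") for i in range(1000)]
projects  = [(i, f"proj{i % 5}") for i in range(1000)]

def plan_join_then_filter():
    # Plan A: join everything first, filter afterwards.
    joined = [(e, d, p) for (e, d) in employees for (e2, p) in projects if e == e2]
    return len(joined), [row for row in joined if row[1] == "CS"]

def plan_filter_then_join():
    # Plan B: push the selection below the join (a classic rewrite).
    cs = [(e, d) for (e, d) in employees if d == "CS"]
    joined = [(e, d, p) for (e, d) in cs for (e2, p) in projects if e == e2]
    return len(cs), joined

inter_a, result_a = plan_join_then_filter()
inter_b, result_b = plan_filter_then_join()
assert sorted(result_a) == sorted(result_b)  # same answer either way
print(inter_a, inter_b)                      # prints: 1000 100
```

Both plans return identical results, but plan B builds a 10× smaller intermediate result; choosing between such equivalent plans by estimated cost is precisely the optimizer's job.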
Course Features
- Lectures 15
- Quizzes 0
- Students 50
- Certificate No
- Assessments Yes