Database design
Database design is the process of producing a detailed data model of a database. This data model contains all the logical and physical design choices and physical storage parameters needed to generate a design in a data definition language (DDL), which can then be used to create the database. A fully attributed data model contains detailed attributes for each entity.
Overview
Database design involves classifying data and identifying interrelationships. This theoretical representation of the data is called an entity-relationship model (ER model). The process of database design includes the following steps:
Requirement Analysis
Requirement analysis is the first step in the database design process. It involves gathering the requirements of the database from stakeholders and understanding the data needs of the organization. This step is crucial for ensuring that the database will meet the needs of its users.
Conceptual Design
In the conceptual design phase, the data requirements are translated into a conceptual model using an ER model. This model includes entities, attributes, and relationships. The ER model is a high-level representation of the data and does not include any details about how the data will be stored physically.
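The entities, attributes, and relationships of a conceptual ER model can be captured directly in code. The following is a minimal sketch using plain Python data structures; the Customer/Order domain and the `validate` helper are hypothetical examples, not part of any standard modeling tool.

```python
# A hypothetical ER model for a small order-tracking domain:
# entities with their attributes, plus named relationships
# between entity pairs with a cardinality label.
er_model = {
    "entities": {
        "Customer": ["customer_id", "name", "email"],
        "Order": ["order_id", "order_date", "total"],
    },
    "relationships": [
        # (name, from_entity, to_entity, cardinality)
        ("places", "Customer", "Order", "1:N"),
    ],
}

def validate(model):
    """Check that every relationship refers to a declared entity."""
    entities = model["entities"]
    for name, src, dst, _card in model["relationships"]:
        if src not in entities or dst not in entities:
            raise ValueError(f"relationship {name!r} references unknown entity")
    return True

print(validate(er_model))  # → True
```

Note that the model says nothing about tables, keys, or storage; those decisions belong to the later logical and physical phases.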
Logical Design
The logical design phase involves converting the conceptual model into a logical model. This model is more detailed and includes the structure of the database, such as tables, columns, and relationships. The logical model is independent of any specific database management system (DBMS).
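As a sketch of what a logical model adds over the conceptual one, the hypothetical Customer/Order example above can be expressed as tables, columns, and a foreign-key relationship. SQLite is used here only as a convenient executable target; the `CREATE TABLE` statements themselves are DBMS-independent.

```python
import sqlite3

# A hypothetical logical schema: each entity becomes a table, each
# attribute a column, and the 1:N relationship a foreign key.
schema = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT UNIQUE
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    order_date  TEXT NOT NULL,
    total       REAL NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['customer', 'orders']
```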
Physical Design
The physical design phase involves translating the logical model into a physical model. This model includes the actual implementation details, such as the specific DBMS to be used, indexing strategies, and storage parameters. The physical design ensures that the database will perform efficiently and meet the required performance criteria.
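Indexing is one concrete physical-design choice. The sketch below, using SQLite as an illustrative DBMS, adds a hypothetical index on a lookup column and asks the query planner whether it would be used.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, email TEXT)")

# Physical design decision: an index to speed up equality lookups
# on email. The logical model is unchanged.
conn.execute("CREATE INDEX idx_customer_email ON customer(email)")

# Ask SQLite's planner how it would execute the lookup; the plan
# should mention the index rather than a full-table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM customer WHERE email = ?",
    ("alice@example.com",)).fetchall()
print(plan)
```

Other physical-design parameters, such as page size, tablespaces, or partitioning, are set with DBMS-specific commands and vary by product.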
Normalization
Normalization is a process used in database design to minimize redundancy and dependency by organizing the fields and tables of a database. The main goal of normalization is to separate data into different tables to reduce data redundancy and improve data integrity. The process involves dividing large tables into smaller tables and defining relationships between them.
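The splitting described above can be sketched concretely. Below, a hypothetical flat order list that repeats each customer's name and email on every row is divided into a `customer` table and an `orders` table linked by a key, so each email is stored exactly once.

```python
import sqlite3

# A hypothetical unnormalized order list: customer name and email
# are repeated on every row, so updating one email would require
# touching many rows.
flat_rows = [
    (1, "Alice", "alice@example.com", "2024-01-02", 30.0),
    (2, "Alice", "alice@example.com", "2024-01-05", 12.5),
    (3, "Bob",   "bob@example.com",   "2024-01-07",  8.0),
]

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    order_date  TEXT NOT NULL,
    total       REAL NOT NULL
);
""")

# Split each flat row: customers are deduplicated by email into one
# table; orders go into another that references the customer by key.
for order_id, name, email, date, total in flat_rows:
    conn.execute(
        "INSERT OR IGNORE INTO customer (name, email) VALUES (?, ?)",
        (name, email))
    cust_id = conn.execute(
        "SELECT customer_id FROM customer WHERE email = ?",
        (email,)).fetchone()[0]
    conn.execute("INSERT INTO orders VALUES (?, ?, ?, ?)",
                 (order_id, cust_id, date, total))

# Three orders, but only two customer rows: each email now appears once.
customer_count = conn.execute(
    "SELECT COUNT(*) FROM customer").fetchone()[0]
print(customer_count)  # → 2
```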
Denormalization
Denormalization is the process of combining normalized tables to improve database performance. This process is used when read performance is more critical than write performance. Denormalization can lead to data redundancy, but it can also reduce the number of joins needed to retrieve data, thus improving query performance.
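A minimal sketch of the read/write trade-off: the hypothetical example below copies the customer name into a denormalized order table so that a read query needs no join. The redundancy this introduces must then be kept in sync whenever a customer row changes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customer(customer_id),
                     total REAL);
INSERT INTO customer VALUES (1, 'Alice');
INSERT INTO orders VALUES (10, 1, 30.0);

-- Denormalized read model: the customer name is copied into each
-- order row, trading redundancy for join-free reads.
CREATE TABLE orders_denorm (order_id INTEGER PRIMARY KEY,
                            customer_name TEXT,
                            total REAL);
INSERT INTO orders_denorm
    SELECT o.order_id, c.name, o.total
    FROM orders o JOIN customer c ON c.customer_id = o.customer_id;
""")

# Reading order details now requires no join at all.
row = conn.execute(
    "SELECT customer_name, total FROM orders_denorm WHERE order_id = 10"
).fetchone()
print(row)  # → ('Alice', 30.0)
```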
Related Pages
- Database management system
- Entity-relationship model
- Normalization (database)
- Denormalization
- Data modeling
- SQL
- NoSQL
Credits:Most images are courtesy of Wikimedia commons, and templates Wikipedia, licensed under CC BY SA or similar.
Contributors: Prab R. Tumpati, MD