
DBMS - Database Management Systems

Definition: A database management system (DBMS) is a collection of programs that enables you to store, modify, and extract information from a database. A DBMS allows organizations to easily develop databases for various applications. Examples of DBMS applications include computerized library systems, automated teller machines, and flight reservation systems.

Description
Retailers have benefited from developments in data warehousing, recording customer transactions. Online transactions have become tremendously popular for e-business, and consumers and businesses are able to make payments securely through company websites.

Case study: Analysis of sales and other data
The new database system needs to be able to produce detailed reports, in text and graphical format, with statistical analysis. This may require the purchase of a business intelligence add-on package for the database system.

An efficient data management system for performances and customer relations
Each theatre company needs to be able to manage its part of the consortium web site; specifically, inputting details of performances and setting prices at least a year in advance for all performances through a content management system (CMS), possibly web-based. Also, details of customers who purchase tickets need to be accessed and used to send them specific information on special deals and newsletters, using a customer relationship management (CRM) system.

DBMS vs Database
A system intended for easily organizing, storing and retrieving large amounts of data is called a database. In other words, a database holds a bundle of organized data (typically in digital form) for one or more users. Databases, often abbreviated DB, are classified according to their content, such as document-text, bibliographic and statistical. A DBMS (Database Management System), by contrast, is the whole system used for managing digital databases, which allows storage of database content, creation and maintenance of data, search and other functionalities. In today's world a database itself is of little use if there is no DBMS associated with it for accessing its data. Increasingly, however, the term database is used as shorthand for database management system.

Database

A database may contain different levels of abstraction in its architecture. Typically, three levels make up the database architecture: external, conceptual and internal. The external level defines how users view the data; a single database can have multiple views. The internal level defines how the data is physically stored. The conceptual level is the communication medium between the internal and external levels: it provides a unified view of the database regardless of how it is stored or viewed. There are several types of databases, such as analytical databases, data warehouses and distributed databases. Databases (more correctly, relational databases) are made up of tables, which contain rows and columns, much like spreadsheets in Excel. Each column corresponds to an attribute, while each row represents a single record. For example, in a database that stores employee information for a company, the columns could contain employee name, employee ID and salary, while a single row represents a single employee.

DBMS
A DBMS, sometimes just called a database manager, is a collection of computer programs dedicated to the management (i.e. organization, storage and retrieval) of all databases installed on a system (i.e. a hard drive or network). There are different types of database management systems, and some of them are designed for the proper management of databases configured for specific purposes. The most popular commercial database management systems are Oracle, DB2 and Microsoft Access. All these products provide means of allocating different levels of privileges to different users, making it possible for a DBMS to be controlled centrally by a single administrator or to be administered by several different people. There are four important elements in any database management system: the modeling language, data structures, the query language and the mechanism for transactions. The modeling language defines the language of each database hosted in the DBMS; several popular approaches, such as hierarchical, network, relational and object models, are currently in practice. Data structures organize the data, such as individual records, files, fields and their definitions, and objects such as visual media. The query language allows users to retrieve and manipulate data, and helps maintain the security of the database through login data, access rights for different users, and protocols for adding data to the system. SQL is a popular query language used in relational database management systems. Finally, the transaction mechanism supports concurrency: it ensures that the same record will not be modified by multiple users at the same time, keeping the data intact. Additionally, DBMSs provide backup and other facilities as well.

Difference between DBMS and Database
A database is a collection of organized data, and the system that manages a collection of databases is called a database management system. The database holds the records, fields and cells of data; the DBMS is the tool used to manipulate the data inside the database. However, the term database is increasingly used as shorthand for database management system. To make the distinction simple, consider an operating system and the individual files stored in the system: just as you need an operating system to access and modify files in the system, you need a DBMS to manipulate the databases stored in the database system.
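To make the table, row, column and query-language ideas concrete, here is a minimal sketch using Python's built-in sqlite3 module. The employee table and its columns are simply the illustrative example from the paragraph above, not part of any particular DBMS product.

```python
import sqlite3

# Minimal sketch of the employee table described above, using SQLite.
# Table and column names (employee, id, name, salary) are illustrative only.
conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# Each column is an attribute; each row is one record (one employee).
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# The transaction mechanism keeps changes consistent: either every
# statement in the block is committed, or none of them is.
with conn:
    cur.executemany(
        "INSERT INTO employee (id, name, salary) VALUES (?, ?, ?)",
        [(1, "Alice", 52000.0), (2, "Bob", 48000.0)],
    )

# SQL, the query language, retrieves the records that match a condition.
for row in cur.execute("SELECT name, salary FROM employee WHERE salary > 50000"):
    print(row)  # ('Alice', 52000.0)

conn.close()
```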

A Data Warehouse Case Study

Abstract
Maximizing decision-making through communications, command and control of data from capture to presentation of results. The essential concept of a data warehouse is to provide the ability to gather data into optimized databases without regard for the generating applications or platforms. Data warehousing can be formally defined as the coordinated, architected, and periodic copying of data from various sources into an environment optimized for analytical and informational processing.[1]

The Challenge
Meaningful analysis of data requires us to unite information from many sources in many forms, including images, text, audio/video recordings, databases, forms, etc. The information sources may never have been intended to be used for data analysis purposes. These sources may have different formats, contain inaccurate or outdated information, be of low transcription quality, be mislabeled or be incompatible. New sources of information may be needed periodically, and some elements of information may be one-time-only artifacts. A data warehouse system designed for analysis must be capable of assimilating these data elements from many disparate sources into a common form. Correctly labeling and describing search keys and transcribing data in a form suitable for analysis is critical. Qualifying the accuracy of the data against its original source of authority is imperative. Any such system must also be able to: apply policy and procedure for comparing information from multiple sources to select the most accurate source for a data element; correct data elements as needed; and check for inconsistencies among the data. It must accomplish this while maintaining a complete data history of every element, before and after every change, with attribution of the change to person, time and place. It must be possible to apply policy or procedure within specific periods of time, by processing date or event date, to assure comparability of data within a calendar or a processing time horizon. When data originates from a source where different policies and procedures were applied, it must be possible to reapply new policies and procedures. Where quality of transcription is low, qualifying the data through verification or sampling against original source documents and media is required.
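The "complete data history with attribution" requirement can be illustrated with a small sketch: an append-only change log that records who changed which element, when, and from which source, so that any earlier state can be reconstructed. The class and function names (DataElementChange, value_as_of) are hypothetical and not part of the case-study system.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical sketch: an append-only history of data-element changes.
# Field and class names are illustrative, not the case-study system's API.
@dataclass(frozen=True)
class DataElementChange:
    element_id: str       # which data element was changed
    new_value: str        # value after the change
    changed_by: str       # person responsible for the change
    changed_at: datetime  # time of the change
    source: str           # source of authority the value was taken from

history: list[DataElementChange] = []

def record_change(change: DataElementChange) -> None:
    """Changes are only ever appended, never overwritten."""
    history.append(change)

def value_as_of(element_id: str, as_of: datetime) -> Optional[str]:
    """Reconstruct the value an element had at any past point in time."""
    candidates = [c for c in history
                  if c.element_id == element_id and c.changed_at <= as_of]
    return max(candidates, key=lambda c: c.changed_at).new_value if candidates else None
```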

Finally, it must be possible to recreate the exact state of all data at any date, by processing time horizon or by event horizon. The analytical system applied to a data warehouse must be applicable to all data and combinations of data. It must take into account whether sufficient data exists, at the necessary quality level, to draw conclusions at the desired significance level. Where possible it must facilitate remediation of data from the original primary source(s) of authority. When new data is acquired from new sources, it must be possible to input and register the data automatically. Processing must be flexible enough to handle these new sources according to their own unique requirements and yet consistently apply policy and procedure so that data from new sources is comparable to existing data. When decisions are made to change the way data is processed or edited, or how policy and procedure are applied, it must be possible to determine exactly the point in time at which the change was made. It must be possible to apply old policies and procedures for comparison to old analyses, and new policies and procedures for new analyses.

Defining Data Warehouse Issues
The Lolopop partners served as principals in a data warehouse effort with objectives that are shared by most users of data warehouses. During the business analysis and requirements-gathering phase, we found that high quality was cited as the number one objective; many other objectives were actually quality objectives as well. Based on our experiences, Lolopop defines the generalized objectives, in order of importance, as follows.

Quality information to create data and/or combine with other data sources
In this case, only about one in eight events could be used for analysis across databases. Stakeholders said that reporting of the same data from the same incoming information varied wildly when re-reported at a later date or when it came from another organization's analysis of the same data. Frequently the data in computer databases was demonstrably not contained in the original documents from which it was transcribed. Policy and procedure were applied inconsistently by departments with different objectives, prejudices and perspectives, without recording the changes or their sources, leaving the data for any given event a slave to whoever last interpreted it.

Timely response to requests for data
Here, the data was processed in time-period batches. In some instances, it could take up to four years to finalize a data period. Organizations requiring data for analysis simply went to the reporting source and got their own copies for analysis, entirely bypassing the official data warehouse and analytical sources.

Consistent relating of information

An issue as simple as a name (the information that could be used to connect data events to histories for individuals or other uniting objects) had no consistent method to standardize or simplify naming conventions. As another example, Geographical Information System (GIS) location information had an extravagant infrastructure that was constantly changing, which made comparisons of data from two different time periods extremely difficult.

Easy access to information
Data warehouse technologies often assume or demand a sophisticated understanding of relational databases and statistical analysis. This prevents ordinary stakeholders from using data effectively and with confidence. In some instances, the personnel responsible for analysis lack the professional and technical skills to develop effective solutions. This issue can reduce reporting to the few kinds of reports and variants that have been programmed over time, and reduce data selection for analyses to a kind of magic applied by the clerical personnel responsible for generating reports.

Unleash management to formulate and uniformly apply policy and procedure
We found that management decisions and mandates could be hindered by an inability to effectively capture, store, retrieve and analyze data. In this particular instance, no management controls existed to analyze sources of low quality, work rates, work effort to remediate (or even a concept of remediation), effectiveness of procedures, effectiveness of work effort, etc. Remediation is a good case in point. Management experienced difficulty with the concept of remedying data transcription from past paper forms, even though the forms existed as images that could be automatically routed. The perception was that quantity of data, not quality, was the objective, and that no one would ever attempt to fix data by verifying it or comparing it to original documents.

Manage incoming data from non-integrated sources
Data from multiple, unrelated sources requires a plan to convert electronic data, manage imaging and document inputs, manage workflow and manage the analysis of data. In this case, every interface required manual intervention. Since there was no system awareness at the beginning of the capture process as to what was needed for analysis at the end, it was very difficult to make rapid and time-effective changes to accommodate changing stakeholder needs.

Reproducible reporting results
We found that reporting of data was not reproducible and the reasons for differences in reporting were not retrievable, undermining confidence in the data, analysis and reporting. One may essentially summarize these objectives as quality challenges that require a basic systems engineering approach for resolution.

Concepts
Foremost among these is the concept of Source of Authority (SOA) as a starting point for tracking and measuring quality. A data warehouse must accurately and precisely reflect the truth if it is to provide usable analysis and accurate decision making. To the extent it is truthful, analyses and decisions are accurate and usable.

If data element values cannot be accountably traced to their sources, and the truth of those sources assessed, one never knows whether information is reliable or not. Lolopop defines SOA as an information source designated by the organization as unquestionably correct. SOA enforces the rule for attribution of the accepted value or any change, and offers the ability to apply new policy to prefer another source as more authoritative. The SOA concept originated from our discovery that data entry clerks were transcribing what they felt should have been reported rather than what was reported, with no consistent or documented basis for making their corrections.

Example of SOA use: Suppose an organization assigns (or accepts) the authority for a data element to be an on-site reporter's written verification. If there is an error in the data, the organization applies a data correction policy (or rule) that accepts higher authority from an alternate source (SOA #2), which now becomes the SOA. Next year, effective January 1st, the organization equips all its on-site reporters with scanning devices to record the data (SOA #3). Now the organization applies a new policy essentially saying that prior to 12/31 the highest SOA for this data element is source two, and beginning 1/1 it is source three. A sketch of this date-based precedence appears at the end of this section.

Resolution of indistinct or conflicting naming conventions is required to prevent imprecise or erroneous comparisons and to ensure the ability to relate data from different sources. Lolopop naming conventions establish the ability to standardize object identification and comparison fields to distinguish what is being compared, regardless of source name, object instantiation, or variable. Lolopop quality components ensure the accuracy and precision of data acquisition, transcription, processing and remediation as measured against the authoritative origin (SOA) of the data element information. Remediation is required to correct erroneous capture or transcription of data elements, and to allow transcription of additional data elements not originally captured. Lolopop remediation components include quality verification, quality assurance and quality measurement. Quality and confidence become part of the provenance for a dataset, indicating what quality levels it represents and what confidence can be expected in statistical analyses. The Lolopop Data Warehouse Communications, Command and Control Center (DWC3) solution comprises many components. Future papers will discuss Lolopop's architecture; rules engine;[2] workbench utilities; data acquisition and coordination; and compliance with the Federal Data Quality Act of 2002.
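The SOA example above can be read as a date-dependent precedence rule: which source is authoritative depends on when the data element was recorded. The sketch below is a loose, hypothetical illustration; the cut-over date, source labels, and function name are invented and do not come from Lolopop.

```python
from datetime import date

# Hypothetical sketch of the SOA example: the authoritative source for a
# data element depends on the effective date of the organization's policy.
# The cut-over date and source labels are illustrative only.
SOA_POLICY = [
    # (effective_from, authoritative_source), kept in ascending date order
    (date.min,         "SOA #2: higher-authority alternate source"),
    (date(2003, 1, 1), "SOA #3: on-site reporter's scanning device"),
]

def authoritative_source(event_date: date) -> str:
    """Return the SOA in force on the date the data element was recorded."""
    source = SOA_POLICY[0][1]
    for effective_from, name in SOA_POLICY:
        if event_date >= effective_from:
            source = name
    return source

print(authoritative_source(date(2002, 6, 15)))  # before the cut-over: SOA #2
print(authoritative_source(date(2003, 3, 1)))   # after the cut-over: SOA #3
```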

Data Acquisition
Data provenance and reproducible results require unique data storage. The Lolopop DataStack™[3] ensures that subsequent analyses either 1) exactly reproduce the content of the previous dataset analysis, or 2) report the changes, recorded by date of occurrence, that explain absolutely the difference from the earlier analysis. Datasets have fixed content, with computed or measured quality and confidence levels, and change logging included in the dataset.

Routing and Scheduling
Lolopop includes a proprietary state-seeking routing and scheduling component. State-seeking means that routing initiates with a set of unpopulated data elements. User management specifies rules for data quality and confidence objectives, which are converted into a set of unsatisfied data elements: the Dataset Definition. Lolopop's routing and scheduling system processes incoming data according to the rules until the objectives are met, the objectives cannot be met, or a remediation plan is presented; a sketch of this loop appears at the end of this section.

Analytics
One Lolopop component populates the Dataset Instantiation with data elements meeting the specified quality, traceable to SOA, while supporting the required result confidence level. Another builds the analysis plan, selects the appropriate analysis tool and defines all calculations that will be performed against the Dataset Instantiation, including data selection and interpretation, aggregate calculation, and testing. One may choose among: comparison by different times, locations or conditions; trend by time period; significant counts, central values and range values; statistical display; topographical display; and alarm or trigger. A non-parametric approach is selected when the data is not normally distributed or the sample is too small.
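The state-seeking routing idea can be read as a simple control loop. The sketch below is a loose interpretation under stated assumptions: rules is a mapping from each required data element to a quality predicate over an incoming record, and remediation_possible is a caller-supplied check. None of these names are Lolopop's; they are invented for illustration.

```python
# Hypothetical sketch of state-seeking routing: start from the unsatisfied
# data elements (the Dataset Definition) and process incoming records until
# objectives are met, cannot be met, or a remediation plan is warranted.
def state_seeking_routing(dataset_definition, incoming_data, rules, remediation_possible):
    """Process records until every required element is satisfied by its rule."""
    unsatisfied = set(dataset_definition)   # data elements still needed
    satisfied = {}

    for record in incoming_data:
        for element in list(unsatisfied):
            if rules[element](record):       # quality/confidence objective met
                satisfied[element] = record[element]
                unsatisfied.discard(element)
        if not unsatisfied:
            return ("objectives met", satisfied)

    # No more incoming data: either propose remediation or report failure.
    if remediation_possible(unsatisfied):
        return ("remediation plan presented", unsatisfied)
    return ("objectives cannot be met", unsatisfied)
```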

Case Summary
A waiter takes an order at a table, and then enters it online via one of the six terminals located in the restaurant dining room. The order is routed to a printer in the appropriate preparation area: the cold-item printer if it is a salad, the hot-item printer if it is a hot sandwich, or the bar printer if it is a drink. A customer's meal check (bill), listing the items ordered and their respective prices, is automatically generated. This ordering system eliminates the old three-carbon-copy guest check system as well as any problems caused by a waiter's handwriting. When the kitchen runs out of a food item, the cooks send out an "out of stock" message, which will be displayed on the dining room terminals when waiters try to order that item.
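As an illustration of the order-routing rule just described, here is a minimal sketch. Only the three preparation areas (cold, hot, bar) come from the case summary; the function and variable names are hypothetical.

```python
# Minimal sketch of the order-routing rule in the case summary.
# Function and variable names are invented for illustration.
ROUTING = {
    "salad": "cold-item printer",
    "hot sandwich": "hot-item printer",
    "drink": "bar printer",
}

out_of_stock = {"hot sandwich"}  # items the kitchen has flagged as unavailable

def route_order_item(item_category: str) -> str:
    """Return where an ordered item should be printed, or an out-of-stock notice."""
    if item_category in out_of_stock:
        return "out of stock: show message on dining room terminals"
    return ROUTING.get(item_category, "unknown category: needs manual handling")

print(route_order_item("salad"))         # cold-item printer
print(route_order_item("hot sandwich"))  # out-of-stock message
```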

Such immediate feedback on unavailable items enables the waiters to give better service to the customers. Other system features aid management in the planning and control of their restaurant business. The system provides up-to-the-minute information on the food items ordered and breaks out percentages showing sales of each item versus total sales. This helps management plan menus according to customers' tastes. The system also compares the weekly sales totals against food costs, allowing planning for tighter cost controls. In addition, whenever an order is voided, the reasons for the void are keyed in. This may help later in management decisions, especially if the voids consistently relate to food or service. Acceptance of the system by the users is exceptionally high, since the waiters and waitresses were involved in the selection and design process. All potential users were asked to give their impressions and ideas about the various systems available before one was chosen.

Questions:
1. In the light of the system, describe the decisions to be made in the areas of strategic planning, managerial control and operational control. What information would you require to make such decisions?
2. What would make the system a more complete MIS rather than just doing transaction processing?
3. Explain the probable effects that making the system more formal would have on the customers and the management.

A Case Study: Knowledge Management Systems to Enhance a Nursing Curriculum


INTRODUCTION
As the use of computer and information technology in health care continues to increase, so will the applications of such technology in nursing practice and patient education. Innovative teaching strategies incorporating technology-based teaching and learning assignments have increased student achievement, including retention, motivation, and class participation; improved learning and critical thinking; provided instructional consistency; and enhanced clinical education.[2] Furthermore, creating a nursing curriculum that links people and information resources into a web of learners fosters professional community, communication, and group collaboration as the nursing student engages in the journey from student to novice practitioner. A primary role of the nurse as a health care provider has been one of patient educator. As a result, nurse professionals and nurse educators need to examine how the nursing profession can use the potential of the Internet to redesign patient education and transform nursing practice.

MATERIALS AND METHODS
In this case study the knowledge management system was integrated into a Health Sciences and Interprofessional Education and Research course. Students convened in small groups to create online patient education materials addressing either diabetes or Alzheimer's disease, using dynamic templates adapted from the health information web site. Research staff observed students' use of the knowledge management system and administered a survey at the conclusion of the course to investigate their perceptions of the technology.

THE CONCEPTUAL FRAMEWORK: KNOWLEDGE MANAGEMENT SYSTEMS
The conceptual framework of this educational technology is based on an oxymoron: knowledge management. Given that knowledge is largely cognitive and highly personal, while management involves organizational processes, it must be understood how these two contrasting concepts are intertwined.

WHAT IS KNOWLEDGE?
Throughout the literature knowledge is similarly defined. According to Webster's Dictionary, knowledge is the fact or condition of knowing something with familiarity gained through experience or association. It is a fluid mix of framed experience, values, contextual information, and expert insight that provides a framework for evaluating and incorporating new experiences and information. Key concepts of knowledge include experience, truth, judgment, and rules of thumb. Knowledge is embodied in people and acquired on an individual basis; therefore the entirety of knowledge on any one subject is usually impossible to obtain, and no one person can take responsibility for collective knowledge.[7] It goes beyond any single discipline: knowledge is a function of collaboration.

WHAT IS KNOWLEDGE MANAGEMENT?
Knowledge management is the explicit and systematic organization of vital knowledge and its associated processes of finding, selecting, organizing, distilling, and presenting information in a way that improves an individual's comprehension of a specific area of interest. It requires turning personal knowledge into knowledge for learners-at-large through the organization of information across disciplines. Knowledge management initiatives generally focus on two fundamental objectives: enabling knowledge sharing and using knowledge to generate community. Specific knowledge management activities implemented within the business model have enabled organizations to focus on acquiring, storing, and utilizing knowledge for such things as problem solving, dynamic learning, strategic planning and decision making. This knowledge management conceptual framework is highly adaptable to the higher education environment.

THE KNOWLEDGE MANAGEMENT SYSTEM
With the aim of developing a technology with the appropriate infrastructure to support learning, the KMS was adapted from the informational web site developed by the Department of Orthopaedics and Sports Medicine. This site aims to serve as a knowledge-building community, a knowledge management tool, and a source of information and education regarding orthopedic conditions, arthritis, and sports injury. The website is a successful system for creating and providing high-quality, easy-to-find information for learners-at-large: this knowledge source is visited once every 20 seconds by individuals from over 40 countries each day. Development of the templates was an iterative process. Multiple data collection mechanisms were used to determine what questions patients have when visiting a health information website. These methods included:
- web site visitors' feedback
- log file analysis
- empirical research observations
- consultations with experts in the conditions addressed in the templates
- literature research
The resulting templates help students create articles about health conditions that are extraordinarily complete and functional for multiple health conditions.

Properties of the templates:
- Provide the author with an outline and an example of the appropriate language and knowledge sought by patients.
- Store the content in a searchable web-based database.
- Automatically convert the content into a web-based article.
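To make these three properties concrete, here is a small, hypothetical sketch: a template record with outline sections, stored in an in-memory stand-in for a database and rendered to a minimal HTML article. The class, field, and section names are invented for illustration and are not the system described in the study.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three template properties listed above.
# Class, field, and section names are invented for illustration only.
@dataclass
class PatientEducationArticle:
    condition: str
    # Property 1: an outline with example prompts guiding the author.
    sections: dict[str, str] = field(default_factory=lambda: {
        "What is it?": "",
        "What are the symptoms?": "",
        "How is it treated?": "",
    })

# Property 2: store the content in a searchable collection (a stand-in for a database).
articles: list[PatientEducationArticle] = []

def search(keyword: str) -> list[PatientEducationArticle]:
    """Find stored articles whose condition or section text mentions the keyword."""
    return [a for a in articles
            if keyword.lower() in a.condition.lower()
            or any(keyword.lower() in text.lower() for text in a.sections.values())]

# Property 3: automatically convert the stored content into a web-based article.
def render_html(article: PatientEducationArticle) -> str:
    body = "".join(f"<h2>{q}</h2><p>{t}</p>" for q, t in article.sections.items())
    return f"<html><body><h1>{article.condition}</h1>{body}</body></html>"
```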
