NCCSTS Case Collection


Enrich your students’ educational experience with case-based teaching

The NCCSTS Case Collection, created and curated by the National Center for Case Study Teaching in Science, on behalf of the University at Buffalo, contains over a thousand peer-reviewed case studies on a variety of topics in all areas of science.

Cases themselves are freely accessible; a subscription is required for access to teaching notes and answer keys.


Development of the NCCSTS Case Collection was originally funded by major grants to the University at Buffalo from the National Science Foundation, The Pew Charitable Trusts, and the U.S. Department of Education.

T4Tutorials.com

Case Study Examples and Scenarios for Database Systems (DBMS)

Case studies and scenario-based questions appear frequently in Database Systems (DBMS) exams. With that in mind, here are some case-study-based questions from the database course.

Examples of case study and scenario questions from DBMS

  • Examples of case studies and scenarios from the Database Systems course.
  • How to design a database from the scenario mentioned below.
  • How to normalize the database tables in the case studies mentioned below.
  • How to draw an entity-relationship diagram for the given case study.
  • How to draw a data flow diagram for the case studies mentioned below.
  • Which database model is suitable for the case studies mentioned below.
  • Which kinds of database users are appropriate for the given case study.
  • Which database redundancies and inconsistencies are possible in the given scenario.
  • How to write SQL queries against the tables of the mentioned case study.
  • How to find the possible database keys in the tables of these case studies.
  • How to identify the relationships among the tables of the given scenarios.

If you want to read more case studies, you can browse our collection of 100+ case studies.



Case studies & examples

Agencies mobilize to improve emergency response in Puerto Rico through better data

Federal agencies' response efforts to Hurricanes Irma and Maria in Puerto Rico were hampered by imperfect address data for the island. In the aftermath, emergency responders gathered to enhance the utility of Puerto Rico address data and share best practices for using the information currently available.

Federal Data Strategy

BUILDER: A Science-Based Approach to Infrastructure Management

The Department of Energy’s National Nuclear Security Administration (NNSA) adopted a data-driven, risk-informed strategy to better assess risks, prioritize investments, and cost effectively modernize its aging nuclear infrastructure. NNSA’s new strategy, and lessons learned during its implementation, will help inform other federal data practitioners’ efforts to maintain facility-level information while enabling accurate and timely enterprise-wide infrastructure analysis.

Department of Energy

data management , data analysis , process redesign , Federal Data Strategy

Business case for open data

Six reasons why making your agency's data open and accessible is a good business decision.

CDO Council Federal HR Dashboarding Report - 2021

The CDO Council worked with the US Department of Agriculture, the Department of the Treasury, the United States Agency for International Development, and the Department of Transportation to develop a Diversity Profile Dashboard and to explore the value of shared HR decision support across agencies. The pilot was a success: it identified the potential impact of a standardized suite of HR dashboards and demonstrated the value of collaborative analytics between agencies.

Federal Chief Data Officer's Council

data practices , data sharing , data access

CDOC Data Inventory Report

The Chief Data Officers Council Data Inventory Working Group developed this paper to highlight the value proposition for data inventories and describe challenges agencies may face when implementing and managing comprehensive data inventories. It identifies opportunities agencies can take to overcome some of these challenges and includes a set of recommendations directed at Agencies, OMB, and the CDO Council (CDOC).

data practices , metadata , data inventory

DSWG Recommendations and Findings

The Chief Data Officer Council (CDOC) established a Data Sharing Working Group (DSWG) to help the council understand the varied data-sharing needs and challenges of all agencies across the Federal Government. The DSWG reviewed data-sharing across federal agencies and developed a set of recommendations for improving the methods to access and share data within and between agencies. This report presents the findings of the DSWG’s review and provides recommendations to the CDOC Executive Committee.

data practices , data agreements , data sharing , data access

Data Skills Training Program Implementation Toolkit

The Data Skills Training Program Implementation Toolkit is designed to provide both small and large agencies with information to develop their own data skills training programs. The information provided will serve as a roadmap to the design, implementation, and administration of federal data skills training programs as agencies address their Federal Data Strategy’s Agency Action 4 gap-closing strategy training component.

data sharing , Federal Data Strategy

Data Standdown: Interrupting process to fix information

Although not a true pause in operations, ONR’s data standdown made data quality and data consolidation the top priority for the entire organization. It aimed to establish an automated and repeatable solution to enable a more holistic view of ONR investments and activities, and to increase transparency and effectiveness throughout its mission support functions. In addition, it demonstrated that getting top-level buy-in from management to prioritize data can truly advance a more data-driven culture.

Office of Naval Research

data governance , data cleaning , process redesign , Federal Data Strategy

Data.gov Metadata Management Services Product-Preliminary Plan

Status summary and preliminary business plan for a potential metadata management product under development by the Data.gov Program Management Office.

data management , Federal Data Strategy , metadata , open data

PDF (7 pages)

Department of Transportation Case Study: Enterprise Data Inventory

In response to the Open Government Directive, DOT developed a strategic action plan to inventory and release high-value information through the Data.gov portal. The Department sustained efforts in building its data inventory, responding to the President’s memorandum on regulatory compliance with a comprehensive plan that was recognized as a model for other agencies to follow.

Department of Transportation

data inventory , open data

Department of Transportation Model Data Inventory Approach

This document from the Department of Transportation provides a model plan for conducting data inventory efforts required under OMB Memorandum M-13-13.

data inventory

PDF (5 pages)

FEMA Case Study: Disaster Assistance Program Coordination

In 2008, the Disaster Assistance Improvement Program (DAIP), an E-Government initiative led by FEMA with support from 16 U.S. Government partners, launched DisasterAssistance.gov to simplify the process for disaster survivors to identify and apply for disaster assistance. DAIP utilized existing partner technologies and implemented a service-oriented architecture (SOA) that integrated the content management system and rules engine supporting the Department of Labor's Benefits.gov applications with FEMA's Individual Assistance Center application. The FEMA SOA serves as the backbone for data-sharing interfaces with three of DAIP's federal partners and transfers application data to reduce duplicate data entry by disaster survivors.

Federal Emergency Management Agency

data sharing

Federal CDO Data Skills Training Program Case Studies

This series was developed by the Chief Data Officer Council’s Data Skills & Workforce Development Working Group to provide support to agencies in implementing the Federal Data Strategy’s Agency Action 4 gap-closing strategy training component in FY21.

FederalRegister.gov API Case Study

This case study describes the tenets behind an API that provides access to all data found on FederalRegister.gov, including all Federal Register documents from 1994 to the present.

National Archives and Records Administration

PDF (3 pages)

Fuels Knowledge Graph Project

The Fuels Knowledge Graph Project (FKGP), funded through the Federal Chief Data Officers (CDO) Council, explored the use of knowledge graphs to achieve more consistent and reliable fuel management performance measures. The team hypothesized that better performance measures and an interoperable semantic framework could enhance the ability to understand wildfires and, ultimately, improve outcomes. To develop a more systematic and robust characterization of program outcomes, the FKGP team compiled, reviewed, and analyzed multiple agency glossaries and data sources. The team examined the relationships between them, while documenting the data management necessary for a successful fuels management program.

metadata , data sharing , data access

Government Data Hubs

A list of Federal agency open data hubs, including USDA, HHS, NASA, and many others.

Helping Baltimore Volunteers Find Where to Help

Bloomberg Government analysts put together a prototype through the Census Bureau’s Opportunity Project to better assess where volunteers should direct litter-clearing efforts. Using Census Bureau and Forest Service information, the team brought a data-driven approach to their work. Their experience reveals how individuals with data expertise can identify a real-world problem that data can help solve, navigate across agencies to find and obtain the most useful data, and work within resource constraints to provide a tool to help address the problem.

Census Bureau

geospatial , data sharing , Federal Data Strategy

How USDA Linked Federal and Commercial Data to Shed Light on the Nutritional Value of Retail Food Sales

The Purchase-to-Plate Crosswalk (PPC) links the more than 359,000 food products in a commercial company database to several thousand foods in a series of USDA nutrition databases. By linking existing data resources, USDA was able to enrich and expand the analysis capabilities of both datasets. Since there were no common identifiers between the two data structures, the team used probabilistic and semantic methods to reduce the manual effort required to link the data.

Department of Agriculture

data sharing , process redesign , Federal Data Strategy

How to Blend Your Data: BEA and BLS Harness Big Data to Gain New Insights about Foreign Direct Investment in the U.S.

A recent collaboration between the Bureau of Economic Analysis (BEA) and the Bureau of Labor Statistics (BLS) helps shed light on the segment of the American workforce employed by foreign multinational companies. This case study shows the opportunities of cross-agency data collaboration, as well as some of the challenges of using big data and administrative data in the federal government.

Bureau of Economic Analysis / Bureau of Labor Statistics

data sharing , workforce development , process redesign , Federal Data Strategy

Implementing Federal-Wide Comment Analysis Tools

The CDO Council Comment Analysis pilot has shown that recent advances in Natural Language Processing (NLP) can effectively aid the regulatory comment analysis process. The proof-of-concept is a standardized toolset intended to support agencies and staff in reviewing and responding to the millions of public comments received each year across government.

Improving Data Access and Data Management: Artificial Intelligence-Generated Metadata Tags at NASA

NASA’s data scientists and research content managers recently built an automated tagging system using machine learning and natural language processing. This system serves as an example of how other agencies can use their own unstructured data to improve information accessibility and promote data reuse.

National Aeronautics and Space Administration

metadata , data management , data sharing , process redesign , Federal Data Strategy
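As a rough sketch of the underlying idea behind automated tagging (a toy keyword ranker, not NASA's actual system, which relies on trained machine-learning and NLP models), candidate metadata tags can be suggested by ranking the most frequent content words in a document:

```python
import re
from collections import Counter

# Toy illustration: rank content words after stripping common stopwords.
# A production tagger would map text to a controlled vocabulary instead.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "is",
             "this", "that", "with", "from", "by", "are", "was", "were"}

def suggest_tags(text, n=3):
    words = re.findall(r"[a-z]+", text.lower())
    content = [w for w in words if w not in STOPWORDS and len(w) > 3]
    return [w for w, _ in Counter(content).most_common(n)]

abstract = ("Satellite observations of aerosol optical depth were combined "
            "with aerosol transport models to study aerosol effects on "
            "regional climate.")
tags = suggest_tags(abstract)
print(tags)  # 'aerosol' ranks first
```

Even this crude frequency heuristic shows how unstructured text can yield searchable metadata without manual curation, which is the accessibility gain the NASA case study describes.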

Investing in Learning with the Data Stewardship Tactical Working Group at DHS

The Department of Homeland Security (DHS) experience forming the Data Stewardship Tactical Working Group (DSTWG) provides meaningful insights for those who want to address data-related challenges collaboratively and successfully in their own agencies.

Department of Homeland Security

data governance , data management , Federal Data Strategy

Leveraging AI for Business Process Automation at NIH

The National Institute of General Medical Sciences (NIGMS), one of the twenty-seven institutes and centers at the NIH, recently deployed Natural Language Processing (NLP) and Machine Learning (ML) to automate the process by which it receives and internally refers grant applications. This new approach ensures efficient and consistent grant application referral, and liberates Program Managers from the labor-intensive and monotonous referral process.

National Institutes of Health

standards , data cleaning , process redesign , AI

FDS Proof Point

National Broadband Map: A Case Study on Open Innovation for National Policy

The National Broadband Map is a tool that provides consumers nationwide with reliable information on broadband internet connections. This case study describes how crowd-sourcing, open-source software, and public engagement inform the development of a tool that promotes government transparency.

Federal Communications Commission

National Renewable Energy Laboratory API Case Study

This case study describes the launch of the National Renewable Energy Laboratory (NREL) Developer Network in October 2011. The main goal was to build an overarching platform to make it easier for the public to use NREL APIs and for NREL to produce APIs.

National Renewable Energy Laboratory

Open Energy Data at DOE

This case study details the development of the renewable energy applications built on the Open Energy Information (OpenEI) platform, sponsored by the Department of Energy (DOE) and implemented by the National Renewable Energy Laboratory (NREL).

open data , data sharing , Federal Data Strategy

Pairing Government Data with Private-Sector Ingenuity to Take on Unwanted Calls

The Federal Trade Commission (FTC) releases data from millions of consumer complaints about unwanted calls to help fuel a myriad of private-sector solutions to tackle the problem. The FTC’s work serves as an example of how agencies can work with the private sector to encourage the innovative use of government data toward solutions that benefit the public.

Federal Trade Commission

data cleaning , Federal Data Strategy , open data , data sharing

Profile in Data Sharing - National Electronic Interstate Compact Enterprise

The Federal CDO Council's Data Sharing Working Group highlights successful data sharing activities to recognize mature data sharing practices as well as to incentivize and inspire others to take part in similar collaborations. This Profile in Data Sharing focuses on how the federal government and states use the National Electronic Interstate Compact Enterprise (NEICE) to support children who are being placed for adoption or foster care across state lines. NEICE greatly reduces the work and time required for states to exchange the paperwork and information needed to process placements, and it allows child welfare workers to communicate and provide timely updates to courts, relevant private service providers, and families.

Profile in Data Sharing - National Health Service Corps Loan Repayment Programs

The Federal CDO Council's Data Sharing Working Group highlights successful data sharing activities to recognize mature data sharing practices as well as to incentivize and inspire others to take part in similar collaborations. This Profile in Data Sharing focuses on how the Health Resources and Services Administration collaborates with the Department of Education to make it easier to apply to serve medically underserved communities, reducing applicant burden and improving processing efficiency.

Profile in Data Sharing - Roadside Inspection Data

The Federal CDO Council's Data Sharing Working Group highlights successful data sharing activities to recognize mature data sharing practices as well as to incentivize and inspire others to take part in similar collaborations. This Profile in Data Sharing focuses on how the Department of Transportation collaborates with Customs and Border Protection and state partners to prescreen commercial motor vehicles entering the US and to focus inspections on unsafe carriers and drivers.

Profiles in Data Sharing - U.S. Citizenship and Immigration Service

The Federal CDO Council’s Data Sharing Working Group highlights successful data sharing activities to recognize mature data sharing practices as well as to incentivize and inspire others to take part in similar collaborations. This Profile in Data Sharing focuses on how the U.S. Citizenship and Immigration Service (USCIS) collaborated with the Centers for Disease Control to notify state, local, tribal, and territorial public health authorities so they can connect with individuals in their communities about their potential exposure.

SBA’s Approach to Identifying Data, Using a Learning Agenda, and Leveraging Partnerships to Build its Evidence Base

Through its Enterprise Learning Agenda, the Small Business Administration's (SBA) staff identify essential research questions, a plan to answer them, and ways that data held outside the agency can provide further insights. Other agencies can learn from the innovative ways SBA identifies data to answer agency strategic questions and adopt those aspects that work for their own needs.

Small Business Administration

process redesign , Federal Data Strategy

Supercharging Data through Validation as a Service

USDA's Food and Nutrition Service restructured its approach to data validation at the state level using an open-source, API-based validation service managed at the federal level.

data cleaning , data validation , API , data sharing , process redesign , Federal Data Strategy

The Census Bureau Uses Its Own Data to Increase Response Rates, Helps Communities and Other Stakeholders Do the Same

The Census Bureau team produced a new interactive mapping tool in early 2018 called the Response Outreach Area Mapper (ROAM), an application that resulted in wider use of authoritative Census Bureau data, not only to improve the Census Bureau’s own operational efficiency, but also for use by tribal, state, and local governments, national and local partners, and other community groups. Other agency data practitioners can learn from the Census Bureau team’s experience communicating technical needs to non-technical executives, building analysis tools with widely-used software, and integrating efforts with stakeholders and users.

open data , data sharing , data management , data analysis , Federal Data Strategy

The Mapping Medicare Disparities Tool

The Centers for Medicare & Medicaid Services’ Office of Minority Health (CMS OMH) Mapping Medicare Disparities Tool harnessed the power of millions of data records while protecting the privacy of individuals, creating an easy-to-use tool to better understand health disparities.

Centers for Medicare & Medicaid Services

geospatial , Federal Data Strategy , open data

The Veterans Legacy Memorial

The Veterans Legacy Memorial (VLM) is a digital platform that helps families, survivors, and fellow veterans take a leading role in honoring their beloved veteran. Built on millions of existing National Cemetery Administration (NCA) records in a 25-year-old database, VLM is a powerful example of an agency harnessing the potential of a legacy system to provide a modernized service that better serves the public.

Veterans Administration

data sharing , data visualization , Federal Data Strategy

Transitioning to a Data Driven Culture at CMS

This case study describes how CMS announced the creation of the Office of Information Products and Data Analytics (OIPDA) to take the lead in making data use and dissemination a core function of the agency.

data management , data sharing , data analysis , data analytics

PDF (10 pages)

U.S. Department of Labor Case Study: Software Development Kits

The U.S. Department of Labor sought to go beyond merely making data available to developers and to make using that data as easy as possible. DOL created software development kits (SDKs): downloadable code packages that developers can drop into their apps, making access to DOL's data easy for even the most novice developer. These SDKs have been published as open-source projects, with the aim of speeding up their evolution into SDKs that will eventually support all federal APIs.

Department of Labor

open data , API
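The SDK pattern itself is easy to sketch. The class, endpoint, and parameter names below are hypothetical (this is not DOL's actual SDK); the point is that a kit hides URL construction, authentication, and JSON parsing behind a single method call:

```python
import json
from urllib.request import urlopen

class DatasetClient:
    """Minimal sketch of an agency-data SDK (hypothetical API)."""
    BASE_URL = "https://api.example.gov"   # placeholder endpoint

    def __init__(self, api_key, fetch=None):
        self.api_key = api_key
        # The HTTP layer can be injected, which also makes testing easy.
        self._fetch = fetch or (lambda url: urlopen(url).read())

    def get(self, dataset, **filters):
        # Build the query string, attach the key, and parse the JSON reply.
        query = "&".join(f"{k}={v}" for k, v in sorted(filters.items()))
        url = f"{self.BASE_URL}/{dataset}?key={self.api_key}&{query}"
        return json.loads(self._fetch(url))

# A developer "drops in" the kit and reads data in two lines; here the
# HTTP call is stubbed out so the sketch runs offline.
stub = lambda url: json.dumps({"url": url, "rows": []}).encode()
client = DatasetClient(api_key="demo", fetch=stub)
resp = client.get("unemployment", state="OH", year=2020)
print(resp["url"])
```

Everything a novice would otherwise have to learn (endpoints, query encoding, authentication) lives inside the kit, which is the ease-of-use gain the DOL case study describes.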

U.S. Geological Survey and U.S. Census Bureau collaborate on national roads and boundaries data

It is a well-kept secret that the U.S. Geological Survey and the U.S. Census Bureau were the first two federal agencies to build a national digital database of roads and boundaries in the United States. The agencies joined forces to develop homegrown computer software and state-of-the-art technologies to convert existing USGS topographic maps of the nation into the points, lines, and polygons that fueled early GIS. Today, the USGS and Census Bureau share a longstanding goal of leveraging authoritative roads and boundary datasets.

U.S. Geological Survey and U.S. Census Bureau

data management , data sharing , data standards , data validation , data visualization , Federal Data Strategy , geospatial , open data , quality

USA.gov Uses Human-Centered Design to Roll Out AI Chatbot

To improve customer service and give better answers to users of the USA.gov website, the Technology Transformation and Services team at General Services Administration (GSA) created a chatbot using artificial intelligence (AI) and automation.

General Services Administration

AI , Federal Data Strategy

resources.data.gov

An official website of the Office of Management and Budget, the General Services Administration, and the Office of Government Information Services.


Real-world Database Integration Case Studies: Success Stories, Benefits, and Outcomes


In today's fast-paced business world, staying ahead of the competition requires efficient and accurate data management. Real-world database integration case studies offer valuable insights into the benefits and outcomes of integrating databases and systems. These success stories showcase how businesses have transformed their operations, achieving increased efficiency, improved data accuracy, enhanced decision-making, better customer experiences, and significant cost savings. In this blog post, we delve into various case studies that highlight the key benefits and outcomes of successful database integration projects. Whether you're a business owner or a data enthusiast, these real-world examples will inspire you to harness the power of database integration for your own organization's success.


Increased efficiency and productivity

Case study: Integration of customer data across multiple platforms

In today's digital age, businesses are collecting vast amounts of customer data from various sources such as sales systems, marketing platforms, and customer support software. However, without proper integration of this data, it can be challenging to gain a comprehensive view of customers and their interactions with the company. This is where database integration plays a crucial role.

One success story in database integration involves the integration of customer data across multiple platforms. By integrating data from sales, marketing, and customer support systems into a centralized database, companies can eliminate manual data entry and reduce errors that often occur when transferring information between different systems. This automation not only saves time but also improves operational efficiency.

For example, imagine a scenario where a customer places an order through the company's website. With integrated databases, this order information can be automatically synced with the sales system, updating inventory levels and triggering fulfillment processes. This eliminates the need for manual intervention and reduces the risk of errors or delays in processing orders.
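The flow just described can be sketched in a few lines (the names are illustrative, not any specific product's API): a single order event both decrements inventory and queues fulfillment, with no manual re-entry between systems:

```python
# Shared state that integrated sales and fulfillment systems both see.
inventory = {"SKU-1": 10, "SKU-2": 4}
fulfillment_queue = []

def place_order(order_id, sku, qty):
    """One order event updates stock and triggers fulfillment atomically."""
    if inventory.get(sku, 0) < qty:
        raise ValueError(f"insufficient stock for {sku}")
    inventory[sku] -= qty                           # stock level updated
    fulfillment_queue.append((order_id, sku, qty))  # fulfillment triggered

place_order("ord-1001", "SKU-1", 3)
print(inventory["SKU-1"], fulfillment_queue)
```

With separate, unintegrated systems, the inventory decrement and the fulfillment entry would be two manual steps, which is exactly where transcription errors and delays creep in.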

Case study: Streamlining inventory management through database integration

Another real-world case study showcasing the benefits of database integration revolves around streamlining inventory management. Inventory management is a critical aspect of any business that deals with physical products. Without accurate and up-to-date information about inventory levels, companies may face stockouts or overstock situations that can lead to lost sales or increased carrying costs.

By integrating inventory data with sales and procurement systems, businesses can achieve real-time visibility into their inventory levels. This means that whenever a sale is made or new stock is received, the inventory database is automatically updated across all relevant systems. As a result, companies can make more informed decisions about purchasing and fulfillment based on accurate and timely information.

Furthermore, database integration enables improved demand forecasting by providing access to historical sales data alongside current inventory levels. This allows businesses to identify trends and patterns in consumer behavior and adjust their production or procurement strategies accordingly. By avoiding stockouts and overstock situations, companies can optimize their inventory levels, reduce costs, and improve customer satisfaction.
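The forecasting idea above can be sketched as a simple days-of-cover check; this deliberately minimal moving-average model (with invented SKUs and numbers) stands in for the much richer forecasts real systems use:

```python
def days_of_cover(current_stock, daily_sales_history):
    """How many days current stock lasts at the average historical rate."""
    avg_daily = sum(daily_sales_history) / len(daily_sales_history)
    return float("inf") if avg_daily == 0 else current_stock / avg_daily

# Integrated view: current stock alongside historical daily sales.
stock = {"SKU-1": 40, "SKU-2": 5}
history = {"SKU-1": [4, 5, 3, 4], "SKU-2": [2, 3, 2, 3]}
LEAD_TIME_DAYS = 7  # supplier lead time (illustrative)

# Flag items that will run out before a replenishment order could arrive.
reorder = [sku for sku in stock
           if days_of_cover(stock[sku], history[sku]) < LEAD_TIME_DAYS]
print(reorder)  # SKU-2 has about 2 days of cover and needs reordering
```

The check only works because both inputs live in one place: without integration, stock levels and sales history sit in different systems and the comparison never happens automatically.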

Tapdata: Real-time data capture & sync

When it comes to database integration solutions, Tapdata is an industry leader that offers a comprehensive set of features to enhance efficiency and productivity. With Tapdata's real-time data capture and sync capabilities, businesses can ensure that their databases are always up-to-date with the latest information from various sources.

One key advantage of using Tapdata is its guaranteed data freshness. By capturing data in real-time and syncing it across systems instantaneously, companies can rely on accurate and timely information for decision-making processes. This eliminates the need for manual data updates or batch processing, saving time and reducing the risk of outdated or inconsistent data.

Tapdata also provides a flexible and adaptive schema that allows businesses to easily consolidate data from multiple sources. Whether it's sales data from an e-commerce platform, marketing analytics from a CRM system, or customer feedback from social media channels, Tapdata can seamlessly integrate all these datasets into a unified database structure. This ensures that businesses have a holistic view of their operations without the need for complex data transformations or manual mapping.

In addition to its powerful integration capabilities, Tapdata offers a low code/no code pipeline development and transformation environment. This means that even non-technical users can design and deploy data pipelines without relying on IT resources. With its intuitive user interface and drag-and-drop functionality, Tapdata empowers business users to take control of their data integration processes.

Furthermore, Tapdata provides comprehensive data validation and monitoring features to ensure the accuracy and quality of integrated datasets. Businesses can set up automated checks and alerts to identify any anomalies or inconsistencies in the data flow. This proactive approach helps maintain data integrity throughout the integration process.
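The validation-and-alerting pattern is generic and easy to illustrate (the rules and field names below are invented; this is not Tapdata's actual API): declare checks once, run them against each incoming batch, and collect anomalies for alerting instead of letting bad rows flow downstream:

```python
# Each rule is a (name, predicate) pair; a row passes if the predicate
# returns True. Rules and fields here are illustrative.
RULES = [
    ("missing email",  lambda r: bool(r.get("email"))),
    ("negative total", lambda r: r.get("total", 0) >= 0),
]

def validate(rows):
    """Return (row_index, rule_name) for every failed check in a batch."""
    anomalies = []
    for i, row in enumerate(rows):
        for name, check in RULES:
            if not check(row):
                anomalies.append((i, name))
    return anomalies

batch = [{"email": "a@x.com", "total": 30},
         {"email": "", "total": -5}]
anomalies = validate(batch)
print(anomalies)  # row 1 fails both rules
```

In a pipeline, the returned anomaly list would feed an alerting channel, so data problems surface proactively rather than after they have corrupted downstream reports.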

Enhanced data accuracy and quality

Case study: Enhancing marketing campaigns with integrated customer insights

In today's digital age, businesses have access to vast amounts of customer data from various sources such as CRM systems and website analytics. However, the challenge lies in effectively utilizing this data to enhance marketing campaigns and drive better results. This is where database integration plays a crucial role in improving data accuracy and quality.

By integrating customer data from different sources, businesses can create comprehensive customer profiles that provide valuable insights into their preferences, behaviors, and purchasing patterns. For example, by combining CRM data with website analytics, businesses can gain a deeper understanding of how customers interact with their brand across different touchpoints.

With this integrated customer insight, businesses can then create targeted marketing campaigns that are tailored to specific segments or individual customers. By personalizing the messaging and offers based on comprehensive customer profiles, businesses can significantly improve campaign effectiveness and achieve higher return on investment (ROI).

For instance, let's consider a case study of an e-commerce company that successfully enhanced its marketing campaigns through integrated customer insights. By integrating data from their CRM system, which contained information about past purchases and customer demographics, with website analytics data that tracked user behavior on their online store, they were able to gain a holistic view of each customer.

Using this integrated data, the company was able to identify patterns and trends in customer behavior. They discovered that certain products were more popular among specific demographic groups and that customers who made repeat purchases had distinct preferences compared to first-time buyers.

Armed with these insights, the company created targeted marketing campaigns for different segments of their customer base. They personalized email newsletters with product recommendations based on past purchases and sent exclusive offers to loyal customers. As a result of these efforts, they saw a significant increase in click-through rates and conversion rates for their email campaigns.

This case study highlights how integrating customer insights from multiple sources can lead to improved data accuracy and quality. By breaking down silos and bringing together data from different systems, businesses can gain a more comprehensive understanding of their customers and make informed marketing decisions.

Case study: Improving decision-making through real-time data integration

In today's fast-paced business environment, timely access to accurate and up-to-date information is crucial for making informed decisions. However, many businesses struggle with data scattered across different systems, leading to delays in decision-making and missed opportunities. Database integration can address this challenge by enabling real-time data integration from various sources.

Consider the case of a manufacturing company that successfully improved its decision-making process through real-time data integration. The company had multiple systems in place for sales, finance, and operations, each generating valuable data. However, these systems operated independently, resulting in delays in accessing critical information.

By implementing a database integration solution, the company was able to bring together real-time data from all these systems into a centralized platform. This allowed them to have a holistic view of their business operations at any given time. For example, they could monitor sales performance in real-time, track inventory levels accurately, and analyze financial metrics without manual intervention.

With this timely access to accurate information, the company's decision-makers were empowered to make informed choices quickly. They could identify trends and patterns as they emerged and respond promptly to market changes. This improved agility enabled them to adjust production schedules based on demand fluctuations and optimize inventory levels to avoid stockouts or excess inventory.

Furthermore, the integrated data provided valuable insights into the company's overall performance. Decision-makers could easily identify areas of improvement or inefficiencies by analyzing key performance indicators (KPIs) across different departments. This led to targeted initiatives for process optimization and cost reduction.

Improved data accessibility and insights

Case study: Optimizing supply chain management through database integration

In today's fast-paced business environment, supply chain management plays a crucial role in the success of any organization. The ability to seamlessly integrate data from suppliers, warehouses, and transportation systems is essential for achieving end-to-end visibility and improving coordination. This case study explores how a company optimized its supply chain management through effective database integration, resulting in improved data accessibility and valuable insights.

Integration of supply chain data from suppliers, warehouses, and transportation systems

The first step in optimizing the supply chain management process was to integrate data from various sources such as suppliers, warehouses, and transportation systems. By leveraging advanced database integration techniques, the company was able to establish seamless connectivity between these disparate systems. This integration allowed for real-time data synchronization and eliminated the need for manual data entry or reconciliation.

End-to-end visibility of the supply chain and improved coordination

With the successful integration of supply chain data, the company gained complete visibility into its entire supply chain network. This end-to-end visibility enabled them to track inventory levels, monitor order status, and identify potential bottlenecks or delays in the system. By having access to accurate and up-to-date information at all times, they were able to make informed decisions and take proactive measures to address any issues that arose.

Furthermore, this enhanced visibility also facilitated better coordination among different stakeholders involved in the supply chain. Suppliers could now align their production schedules with demand forecasts more effectively. Warehouses could optimize their inventory levels based on real-time sales data. Transportation systems could plan routes more efficiently by considering factors like traffic conditions or weather forecasts. As a result, the entire supply chain became more streamlined and responsive.

Reduced lead times and optimized inventory levels

One of the significant benefits of integrating supply chain data was the reduction in lead times throughout the process. With real-time access to information about product availability, order status updates, and delivery schedules, the company could expedite order processing and minimize delays. This not only improved customer satisfaction but also allowed them to fulfill orders more quickly, gaining a competitive edge in the market.

Moreover, by having accurate and timely data on inventory levels, the company was able to optimize its stock management practices. They could identify slow-moving or obsolete items and take appropriate actions such as liquidation or promotions to clear excess inventory. On the other hand, they could also identify high-demand products and ensure sufficient stock levels to meet customer demands. This optimization of inventory levels resulted in cost savings and reduced carrying costs.

Actionable insights for informed decision-making

The integration of supply chain data provided the company with valuable insights that were previously inaccessible. By analyzing this integrated data, they could identify trends, patterns, and correlations that helped them make informed decisions. For example, they could identify which suppliers consistently delivered high-quality products on time and establish stronger partnerships with them. They could also analyze customer buying behavior to anticipate future demand and adjust production or procurement accordingly.

Furthermore, these actionable insights enabled the company to implement continuous improvement initiatives throughout their supply chain network. By identifying areas of inefficiency or bottlenecks through data analysis, they could implement process improvements or invest in technology solutions that addressed these issues directly. This iterative approach to supply chain management led to increased efficiency, reduced costs, and enhanced overall performance.

Enhanced customer experience

In today's digital age, providing an enhanced customer experience has become a top priority for businesses. By integrating customer data across multiple platforms, companies can gain a 360-degree view of their customers and deliver personalized interactions that cater to their unique needs and preferences. Let's explore a case study that highlights the benefits of such integration.

360-degree view of customers and personalized interactions

One company that successfully integrated its customer data across various platforms is XYZ Corporation, a leading e-commerce retailer. Prior to implementing this integration, XYZ Corporation faced challenges in understanding their customers' behavior and preferences due to fragmented data sources. They had separate databases for online purchases, in-store transactions, and customer support interactions.

By integrating these disparate data sources into a centralized database, XYZ Corporation was able to create a comprehensive profile for each customer. This 360-degree view allowed them to gain valuable insights into their customers' purchase history, browsing patterns, and communication preferences.

With this holistic understanding of their customers, XYZ Corporation was able to deliver personalized interactions at every touchpoint. For example, when a customer visited their website, they would be greeted with tailored product recommendations based on their previous purchases or browsing history. This level of personalization not only increased the likelihood of conversion but also improved overall customer satisfaction.

Targeted marketing campaigns based on comprehensive customer profiles

Another significant benefit of integrating customer data across multiple platforms is the ability to run targeted marketing campaigns. With access to comprehensive customer profiles, businesses can segment their audience based on various criteria such as demographics, purchase behavior, or engagement levels.

XYZ Corporation leveraged this capability by creating highly targeted email marketing campaigns. They sent personalized offers and promotions to specific segments of their customer base who were most likely to be interested in those products or services. As a result, they experienced higher open rates, click-through rates, and ultimately increased sales.

Faster issue resolution and improved customer satisfaction

Integrating customer data across multiple platforms also enables faster issue resolution and improved customer satisfaction. When a customer contacts XYZ Corporation's customer support team, the representative can quickly access the customer's complete history, including previous purchases, interactions, and any ongoing issues.

This comprehensive view allows the representative to provide a more personalized and efficient resolution to the customer's problem. They can address the issue promptly without requiring the customer to repeat information or go through unnecessary steps. This streamlined process not only saves time for both parties but also enhances overall customer satisfaction.

Cost savings and ROI

One of the key benefits of integrating databases in real-world scenarios is the potential for cost savings and a high return on investment (ROI). By streamlining inventory management processes, businesses can significantly reduce manual tasks, associated labor costs, and improve overall operational efficiency.

In a recent case study, Company XYZ implemented a database integration solution to automate their inventory management system. Prior to the integration, the company relied heavily on manual processes, which were time-consuming and prone to errors. With the new integrated system in place, they were able to eliminate data inconsistencies and reduce errors caused by human intervention.

The reduction in manual processes not only saved valuable time but also resulted in substantial cost savings for Company XYZ. By automating various inventory-related tasks such as stock tracking, order processing, and replenishment, they were able to optimize their inventory levels and reduce carrying costs. The integrated system provided real-time visibility into stock levels, enabling them to make informed decisions about when to reorder products and avoid overstocking or stockouts.

Furthermore, the improved accuracy of data facilitated better demand forecasting and planning. This allowed Company XYZ to optimize their production schedules and minimize wastage due to excess inventory or stock shortages. As a result, they experienced significant cost savings in terms of reduced storage expenses and improved cash flow.

Another area where businesses can achieve substantial cost savings and ROI is supply chain management. By integrating databases across different stages of the supply chain process, companies can improve coordination, reduce disruptions, optimize transportation routes, and ultimately lower logistics costs.

Company ABC implemented a database integration solution that connected their suppliers, manufacturers, distributors, and retailers into a single unified system. This enabled seamless communication and collaboration among all stakeholders involved in the supply chain.

With real-time access to accurate data on inventory levels, production schedules, customer orders, and delivery status, Company ABC was able to improve coordination and reduce supply chain disruptions. This resulted in faster order fulfillment, reduced lead times, and improved customer satisfaction.

Moreover, the integrated system allowed Company ABC to optimize transportation routes based on real-time data. By analyzing factors such as distance, traffic conditions, and delivery schedules, they were able to minimize transportation costs and improve overall efficiency. The ability to track shipments in real-time also helped them identify potential bottlenecks or delays and take proactive measures to mitigate any issues.

The optimization of supply chain management through database integration not only led to cost savings but also increased revenue for Company ABC. With improved order fulfillment processes and enhanced customer satisfaction, they experienced higher repeat business and customer loyalty. This translated into increased sales and ultimately higher profitability.

In conclusion, real-world database integration case studies provide compelling evidence of the numerous benefits and outcomes that businesses can achieve through successful integration projects. These success stories highlight the potential for increased efficiency, enhanced data accuracy, improved decision-making, better customer experiences, and substantial cost savings.

By learning from these case studies, organizations can gain valuable insights into how database integration can drive growth and success. The ability to streamline operations, improve data accuracy, and deliver exceptional customer experiences is within reach for businesses willing to embrace the power of integration.

If you're ready to unlock the potential of database integration for your business, we encourage you to take action today. Contact us to explore how our expertise can help you harness the benefits of integration. Our team is dedicated to helping you streamline your operations, improve data accuracy, and ultimately drive success.

Don't miss out on the opportunity to transform your business through database integration. Take the first step towards achieving increased efficiency, improved decision-making, and better customer experiences by reaching out to us today. Together, we can unlock the full potential of your data and propel your business towards growth and success.


© Copyright 2024 Tapdata - All Rights Reserved



Helpful Databases for Finding Case Studies

The following databases contain filters specifically for finding case studies. Keep in mind that in many databases and search tools, adding the phrase "case study" to your search terms can also help you find case studies.


  • Computer Science Database: This collection provides unmatched discipline-specific coverage spanning thousands of publications, many in full text. Subject coverage: Computer Science; Information Systems; Computer Security; Database Design; Software Development; Web Commerce; LANs; WANs; Intranets; Internet. To find case studies, go to the Advanced Search page, find the box labeled "Document Type," and select "Case Study."
  • Emerald Emerging Markets Case Studies Collection: Emerald Emerging Markets Case Studies (EEMCS) is an online collection of peer-reviewed case studies focusing on business decision making and management development throughout key global emerging markets. Cases are written by case writers working in or closely with developing economies, offering local perspectives with global appeal.
  • ProQuest One Business: An intuitive and comprehensive business library containing millions of full-text items across scholarly and popular periodicals, newspapers, market research reports, dissertations, books, videos, and more. To find case studies, go to the Advanced Search page, find the box labeled "Document Type," and select "Case Study."
  • SAGE Journals: Access more than 650 journals spanning the Humanities, Social Sciences, and Science, Technology, and Medicine. To find case studies, enter your search terms; on the search results page, open the Article Type filter on the right and select Case Report (you may need to click "More" under Article Type to see it).
  • Science Database: Subject areas include physics, engineering, astronomy, biology, earth science, chemistry, and more. To find case studies, go to the Advanced Search page, find the box labeled "Document Type," and select "Case Study."
  • SciTech Premium Collection: Subject areas include advanced technologies, aerospace, agricultural science, aquatic science, atmospheric science, biological science, computer science, earth science, environmental science, engineering, materials science, and polymer science. To find case studies, go to the Advanced Search page, find the box labeled "Document Type," and select "Case Study."
  • Technology Collection: Broad indexing coverage of the scholarly literature in advanced technology, computer science, engineering, materials science, and related areas. To find case studies, go to the Advanced Search page, find the box labeled "Document Type," and select "Case Study."
  • Last Updated: Jan 10, 2024 10:39 AM
  • URL: https://libguides.wpi.edu/casestudies


Case Studies: Databases


Library Databases

Find case studies on many topics, such as business, psychology, and public administration, using the library databases. It's easy when you use Journal Finder.

Option 1: Search by a journal title for case studies.

Start at the Business Library homepage and look for the SEARCH E-JOURNALS search box.

In the SEARCH E-JOURNALS box, enter the title of the journal. Select the database from the dropdown. Click Search within this publication.

Consider these journals:

  • Journal of Information Technology Teaching Cases
  • International Journal of Management Cases : IJMC
  • Journal of the International Academy for Case Studies
  • Business Case Journal
  • Journal of Critical Incidents

Option 2: Find journal titles and then search for case studies.

In the SEARCH E-JOURNALS search box, type case studies or case research. Browse the journal title list and select a title, then click the database (under the journal's name). Click Search within this publication.


Click Search within this publication. The database opens.

Now, click search and browse the full list of available case studies. Filter by publication year.

Add a simple keyword to focus your search on your area of interest. Keep the concept broad: management, strategic, discrimination, information management.


Search tips for the library catalog & databases:

  • Keyword search: Add quotation marks to group significant words together. For example: "case studies" "supply chain".
  • Filter your results: If the database has the menu option, select case studies. For example, Business Source Complete. Advanced Search screen - Publication Type - Case Studies.
  • The term "case study" often appears in the document title, so try adding "case study" to a title search.
  • Try searching with various terms, case study or case studies; or use truncation to find all forms by typing case stud*

Business Library Databases with case studies:

  • Academic Search Complete (via Ebsco): Multidisciplinary and peer-reviewed journals, periodicals, reports, and eBooks with comprehensive coverage in key areas of academic study.
  • Business Source Complete (via Ebsco): All disciplines of business are included: marketing, management, MIS, POM, accounting, finance, and economics. Additional full-text, non-journal content includes financial data, eBooks, monographs, major reference works, book digests, conference proceedings, case studies, investment research reports, industry reports, market research reports, country reports, company profiles, SWOT analyses, and more.
  • CaseBase (via Gale eBooks): Covers business case studies focused on issues in emerging markets and emerging industries across the globe. (2012)
  • Encyclopedia of Global Brands (via Gale eBooks): Contains 270 entries, written in case-study style, that highlight interesting details, including how a product originated and was first marketed, how it developed commercially, and how it fares today compared with its competitors and its own past history. Included are anecdotes pertaining to the famous, or infamous, marketing strategies and advertising campaigns that managed to capture the sometimes jaded viewer's attention.
  • Encyclopedia of Major Marketing Strategies (via Gale eBooks): Over 100 essays covering some of the top global and emerging brands that appeared from 2014 to 2018. Essays are aligned to the strategic marketing framework, ensuring that the marketing strategies covered can easily be used in an academic environment as case studies, or as illustrative examples often described as “war stories” by professors.
  • O'Reilly eBooks and Video: More than 40,000 books, videos, case studies, and interactive tutorials. Especially strong for IT and business-related resources.
  • ProQuest (Multi-Database Search): Access to multiple ProQuest databases in one click. Provides full-text journal, magazine, and newspaper articles. Scholarly and general-interest literature in business, social sciences, science and technology, and humanities.
  • SAGE Journals: Peer-reviewed articles covering a wide range of subject areas, including business, humanities, social sciences, and science, technology, and medicine. Access all journal volumes and issues from January 1, 1999 to the present.
  • Science Direct: Search for peer-reviewed journals, articles, and open access content. The library has access to the Social and Behavioral Sciences subject collection of journals.
  • Social Science Research Network (SSRN) eLibrary: Provides over 900,000 research papers in the social sciences (including business) from over 470,000 researchers in more than 50 disciplines.
  • Last Updated: Jun 7, 2024 1:08 PM
  • URL: https://ggu.libguides.com/casestudies

How to Find Case Studies



Case Studies in Databases

Library Search Box

To find a case study:

  • Start at the Advanced Search page .
  • Type your topic in the first box.
  • Type "case studies" in the second box.


ProQuest Database

  • Start at the Advanced Search page.
  • Type your topic into the first search box.
  • Under Document Type, click the box next to Case Study to limit your search to case studies.


EBSCO Database

  • Start at the Advanced Search page.
  • Type your topic into the first search box.
  • Scroll down the page to Document Type and click on Case Study to limit your search to case studies.


Gale Database

  • Use the advanced search.
  • Scroll down the page and, under Document Type, use the drop-down menu to select Case Study.


  • Last Updated: Sep 15, 2023 10:42 AM
  • URL: https://davenport.libguides.com/case-study


A comprehensive collection of SQL case studies, queries, and solutions for real-world scenarios. This repository provides a hands-on approach to mastering SQL skills through a series of case studies, including table structures, sample data, and SQL queries.

tituHere/SQL-Case-Study


SQL DBA School

Case Studies and Real-World Scenarios

Case Study 1: Query Optimization

A financial institution noticed a significant performance slowdown in their central database application, affecting their ability to serve customers promptly. After monitoring and analyzing SQL Server performance metrics, the IT team found that a specific query, part of a core banking operation, took much longer than expected.

Using SQL Server’s Query Execution Plan feature, they found that the query was doing a full table scan on a large transaction table. The team realized the query could be optimized by adding an index on the columns involved in the WHERE clause. After adding the index and testing, the query’s execution time dropped significantly, resolving the application slowdown.
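A minimal T-SQL sketch of this kind of fix follows; the table and column names are hypothetical stand-ins for the bank's transaction table:

```sql
-- The slow query filtered a large transaction table roughly like this:
--   SELECT AccountId, Amount, TransactionDate
--   FROM dbo.Transactions
--   WHERE AccountId = @AccountId AND TransactionDate >= @StartDate;
-- A nonclustered index on the WHERE-clause columns lets the optimizer
-- replace the full table scan with an index seek:
CREATE NONCLUSTERED INDEX IX_Transactions_Account_Date
    ON dbo.Transactions (AccountId, TransactionDate)
    INCLUDE (Amount);  -- covering column avoids key lookups for this query
```

Comparing the actual execution plan before and after (scan versus seek, logical reads) confirms whether the index helped.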

Case Study 2: TempDB Contention

An online retail company was experiencing sporadic slowdowns during peak times, which affected its website’s responsiveness. SQL Server Performance Monitoring revealed that the tempDB database was experiencing latch contention issues, a common performance problem.

The company’s DBA team divided the tempDB into multiple data files equal to the number of logical cores, up to eight, as recommended by Microsoft. This reduced contention and improved the performance of operations using the tempDB.
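The file-per-core change is plain T-SQL; the file paths and sizes below are placeholders:

```sql
-- Add tempdb data files until the count matches the logical core count
-- (capped at eight). Keep every file the same size so SQL Server's
-- round-robin allocation spreads the load evenly:
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf',
              SIZE = 8GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb\tempdev3.ndf',
              SIZE = 8GB, FILEGROWTH = 512MB);
-- ...repeat for tempdev4 through tempdev8 as needed.
```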

Case Study 3: Inefficient Use of Hardware Resources

A software development company was experiencing poor performance on their SQL Server, despite running on a high-end server with ample resources. Performance metrics showed that SQL Server was not utilizing all the available CPU cores and memory.

Upon investigation, the team found that SQL Server was running on default settings, which did not allow it to utilize all available resources. By adjusting SQL Server configuration settings, such as max degree of parallelism (MAXDOP) and cost threshold for parallelism, they were able to allow SQL Server to better use the available hardware, significantly improving server performance.
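Both settings mentioned are instance-level options changed with sp_configure; the values below are illustrative only, since the right numbers depend on the core count and workload:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap how many cores a single query may use, e.g. 8 on a 16-core server:
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

-- Raise the bar for going parallel; the default of 5 lets many cheap
-- queries fan out across cores unnecessarily:
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```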

Case Study 4: Database Locking Issues

A large manufacturing company’s ERP system started to experience slowdowns that were affecting their production lines. The IT department, upon investigation, found that there were blocking sessions in their SQL Server database, causing delays.

Using SQL Server’s built-in reports for “All Blocking Transactions” and “Top Sessions,” they found a poorly designed stored procedure holding locks for extended periods, causing other transactions to wait. After refactoring the stored procedure to hold locks for as short a time as possible, the blocking issue was resolved and the system’s performance returned to normal.
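Besides the built-in reports, the same blocking chain can be surfaced directly from the standard dynamic management views:

```sql
-- List every request currently blocked, who is blocking it, and the
-- statement being run (all objects here are standard system views):
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```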

These case studies represent common scenarios in SQL Server performance tuning. The specifics vary, but the process of identifying the problem, isolating the cause, and resolving the issue remains the same.

Case Study 5: Poor Indexing Strategy

A hospital’s patient records system began to experience performance issues over time. The system was built on a SQL Server database, and pulling up patient records took longer and longer. The IT team noticed that the database had grown significantly larger over the years due to increased patient volume.

They used SQL Server’s Dynamic Management Views (DMVs) to identify the most expensive queries in terms of I/O. The team found that the most frequently used queries lacked appropriate indexing, causing SQL Server to perform costly table scans.

They worked on a comprehensive indexing strategy, including creating new indexes and removing unused or duplicate ones. They also set up periodic index maintenance tasks (rebuilding or reorganizing) to keep the indexes optimized. After these changes, the time to retrieve patient records improved dramatically.
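A sketch of such a maintenance pass follows; the index and table names are hypothetical, and the 5%/30% thresholds follow common (not universal) guidance:

```sql
-- Find fragmented indexes in the current database:
SELECT i.name, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 5;

-- Then, per index: reorganize between roughly 5% and 30% fragmentation,
-- rebuild above 30% (names below are placeholders):
ALTER INDEX IX_Patients_LastName ON dbo.Patients REORGANIZE;
ALTER INDEX IX_Visits_PatientId  ON dbo.Visits   REBUILD;
```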

Case Study 6: Outdated Statistics

An e-commerce platform was dealing with sluggish performance during peak shopping hours. Their SQL Server-based backend was experiencing slow query execution times. The DBA team found that several execution plans were inefficient even though there were appropriate indexes.

After further investigation, they discovered that the statistics for several large tables in the database were outdated. SQL Server uses statistics to create the most efficient query execution plans. With outdated statistics, it was generating poor execution plans, leading to performance degradation.

The team updated the statistics for these tables and set up an automatic statistics update job to run during off-peak hours. This change brought a noticeable improvement in the overall system responsiveness during peak hours.
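The statistics refresh itself is a one-liner per table; the table names here are placeholders:

```sql
-- Full-scan refresh for the large tables with stale statistics:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
UPDATE STATISTICS dbo.OrderItems WITH FULLSCAN;

-- A scheduled off-peak job can then sweep the whole database
-- with sampled updates:
EXEC sp_updatestats;
```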

Case Study 7: Memory Pressure

A cloud-based service provider was experiencing erratic performance issues on their SQL Server databases. The database performance would degrade severely at certain times, affecting all their customers.

Performance monitoring revealed that SQL Server was experiencing memory pressure during these periods. It turned out that the SQL Server instance was hosted on a shared virtual machine, and other applications used more memory during specific times, leaving SQL Server starved for resources.

The team decided to move SQL Server to a dedicated VM where it could have all the memory it needed. They also tweaked the ‘min server memory’ and ‘max server memory’ configurations to allocate memory to SQL Server optimally. This reduced memory pressure, and the erratic performance issues were solved.
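The memory settings are instance options; the numbers below assume, purely for illustration, a dedicated 64 GB VM that leaves several GB for the operating system:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 16384;  -- floor: 16 GB
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 57344;  -- ceiling: 56 GB
RECONFIGURE;
```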

Case Study 8: Network Issues

A multinational company with several branches worldwide had a centralized SQL Server-based application. Branch offices complained about slow performance, while the head office had no issues.

Upon investigation, it turned out to be a network latency issue: the branches that were geographically far from the server had higher latency, which resulted in slow performance. The company deployed a Content Delivery Network (CDN) to cache static content closer to remote locations and implemented database replication to create read replicas in each geographical region. This reduced network latency and improved application performance for all branches.

These examples demonstrate the wide range of potential SQL Server performance tuning issues. The key to effective problem resolution is a thorough understanding of the system, systematic troubleshooting, and the application of appropriate performance-tuning techniques.

Case Study 9: Bad Parameter Sniffing

An insurance company’s SQL Server database was experiencing fluctuating performance. Some queries ran fast at times, then slowed down unexpectedly. This inconsistent behavior impacted the company’s ability to process insurance claims efficiently.

After studying the execution plans and the SQL Server’s plan cache, the DBA team discovered that the issue was due to bad parameter sniffing. SQL Server uses parameter sniffing to create optimized plans based on the parameters passed the first time a stored procedure is compiled. However, if later executions pass parameter values with very different data distributions, the initial execution plan might be suboptimal.

To resolve this, they used the OPTIMIZE FOR UNKNOWN query hint for the stored procedure parameters, instructing SQL Server to use statistical data instead of the initial parameter values to build an optimized plan. After implementing this, the fluctuating query performance issue was resolved.
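A minimal sketch of the hint in context might look like this; the procedure, table, and column names are hypothetical:

```sql
-- Plan is built from average column density rather than the first sniffed value.
CREATE PROCEDURE dbo.GetClaimsByState
    @State char(2)
AS
BEGIN
    SELECT ClaimId, ClaimDate, Amount
    FROM dbo.Claims
    WHERE State = @State
    OPTION (OPTIMIZE FOR UNKNOWN);  -- avoids a plan skewed to one parameter value
END;
```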

Case Study 10: Inadequate Disk I/O

An online gaming company started receiving complaints about slow game loading times. The issue was traced back to their SQL Server databases. Performance metrics showed that the disk I/O subsystem was a bottleneck, with high disk queue length and disk latency.

Upon investigation, they found that all their databases were hosted on a single, slower disk. To distribute the I/O load, they moved their TempDB and log files to separate, faster SSD drives. They also enabled Instant File Initialization (IFI) for data files to speed up the creation and growth of data files. These changes significantly improved disk I/O performance and reduced game loading times.
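Relocating the tempdb files could be sketched as below; the logical file names are the defaults, but the drive paths are examples, and the move takes effect only after the next SQL Server restart:

```sql
-- Hypothetical paths: move tempdb data and log files to faster drives.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDB\templog.ldf');
```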

Case Study 11: SQL Server Fragmentation

A logistics company’s SQL Server database began to experience slower data retrieval times. Their system heavily relied on GPS tracking data, and they found that fetching this data was becoming increasingly slower.

The DBA team discovered high fragmentation on the GPS tracking data table, which had frequent inserts and deletes. High fragmentation can lead to increased disk I/O and degrade performance. They implemented a routine maintenance plan that reorganized or rebuilt indexes depending on the fragmentation level, and tuned fill factor settings to reduce future fragmentation. This greatly improved data retrieval times.
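Such a maintenance routine might be sketched as follows, using the common rough thresholds of reorganizing at moderate fragmentation and rebuilding above ~30%; the table and index names are hypothetical:

```sql
-- Inspect fragmentation for one table's indexes.
SELECT i.name, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.GpsTracking'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Light fragmentation: reorganize in place.
ALTER INDEX IX_GpsTracking_VehicleId ON dbo.GpsTracking REORGANIZE;

-- Heavy fragmentation: rebuild, leaving free space to slow refragmentation.
ALTER INDEX IX_GpsTracking_VehicleId ON dbo.GpsTracking
    REBUILD WITH (FILLFACTOR = 80);
```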

Case Study 12: Excessive Compilation and Recompilation

A web hosting provider had a SQL Server database with high CPU usage, even though no heavy queries were running and the server was not low on resources.

The DBA team found that the issue was due to excessive compilations and recompilations of queries. SQL Server compiles queries into execution plans, which can be CPU intensive. When queries are frequently compiled and recompiled, it can lead to high CPU usage.

They discovered that the application used non-parameterized queries, which led SQL Server to compile a new plan for each query. They worked with the development team to modify the application to use parameterized queries or stored procedures, allowing SQL Server to reuse existing execution plans and thus reducing CPU usage.
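The difference can be illustrated with a sketch like the one below (table and column names are hypothetical); the parameterized form lets SQL Server cache and reuse a single plan:

```sql
-- Non-parameterized: every distinct literal compiles a new plan.
-- SELECT AccountName, Balance FROM dbo.Accounts WHERE AccountId = 42;

-- Parameterized: one cached plan is reused for every AccountId value.
EXEC sp_executesql
    N'SELECT AccountName, Balance FROM dbo.Accounts WHERE AccountId = @Id',
    N'@Id int',
    @Id = 42;
```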

These cases emphasize the importance of deep knowledge of SQL Server internals, observant monitoring, and a systematic approach to identifying and resolving performance issues.

Case Study 13: Database Auto-growth Misconfiguration

A social media company faced performance issues on its SQL Server database during peak usage times. Their IT team noticed that the performance drops coincided with auto-growth events on the database.

SQL Server databases are configured by default to grow automatically when they run out of space. However, this operation is I/O intensive and can cause performance degradation if it happens during peak times.

The team decided to manually grow the database during off-peak hours to a size that could accommodate several months of data growth. They also configured auto-growth to a fixed amount rather than a percentage to avoid more extensive growth operations as the database size increased. This prevented auto-growth operations from occurring during peak times, improving overall performance.
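The change could be sketched as follows; the database name, sizes, and growth increment are illustrative:

```sql
-- Pre-size the data file during a maintenance window...
ALTER DATABASE SocialDb
MODIFY FILE (NAME = SocialDb_Data, SIZE = 200GB);

-- ...then switch auto-growth from a percentage to a fixed increment.
ALTER DATABASE SocialDb
MODIFY FILE (NAME = SocialDb_Data, FILEGROWTH = 512MB);
```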

Case Study 14: Unoptimized Cursors

A travel booking company’s SQL Server application was suffering from poor performance. The application frequently timed out during heavy load times, frustrating their users.

Upon analysis, the DBA team found that the application made heavy use of SQL Server cursors. Cursors perform poorly compared to set-based operations because they process one row at a time.

The team worked with the developers to refactor the application code to use set-based operations wherever possible. They also ensured that the remaining cursors were correctly optimized. The change resulted in a significant improvement in application performance.
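As a hypothetical illustration of this kind of refactoring, row-by-row cursor logic such as "loop over pending bookings and recompute each total" can often collapse into a single set-based statement (all table and column names are invented):

```sql
-- One set-based UPDATE replaces a cursor that touched each row in turn.
UPDATE b
SET    b.TotalPrice = b.BasePrice + p.TaxAmount
FROM   dbo.Bookings AS b
JOIN   dbo.PriceDetails AS p ON p.BookingId = b.BookingId
WHERE  b.Status = 'PENDING';
```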

Case Study 15: Poorly Configured SQL Server Instance

An IT service company deployed a new SQL Server instance for one of their clients, but the client reported sluggish performance. The company’s DBA team checked the server and found it was not correctly configured.

The server was running on the default SQL Server settings, which weren’t optimized for the client’s workload. The team performed a series of optimizations, including:

  • Configuring the ‘max server memory’ option to leave enough memory for the OS.
  • Setting ‘max degree of parallelism’ to limit the number of processors used for parallel plan execution.
  • Enabling ‘optimize for ad hoc workloads’ to improve the efficiency of the plan cache.

After these changes, the SQL Server instance ran much more efficiently, and the client reported a noticeable performance improvement.

Case Study 16: Lack of Partitioning in Large Tables

A telecommunications company stored call records in a SQL Server database. The call records table was huge, with billions of rows, which caused queries to take a long time to run.

The DBA team decided to implement table partitioning. They partitioned the call records table by date, a common filter condition in their queries. This allowed SQL Server to eliminate irrelevant partitions and scan only the necessary data when running queries. As a result, query performance improved dramatically.
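A compressed sketch of date partitioning follows; the boundary values, filegroup mapping, and table definition are illustrative (a real scheme would define many more ranges and likely dedicated filegroups):

```sql
-- Partition function: monthly boundaries on the date column.
CREATE PARTITION FUNCTION pfCallDate (datetime2)
AS RANGE RIGHT FOR VALUES ('2023-01-01', '2023-02-01', '2023-03-01');

-- Map all partitions to a filegroup (PRIMARY here for brevity).
CREATE PARTITION SCHEME psCallDate
AS PARTITION pfCallDate ALL TO ([PRIMARY]);

-- Partitioned table: the partitioning column must be part of the clustered key.
CREATE TABLE dbo.CallRecords (
    CallId    bigint    NOT NULL,
    CallDate  datetime2 NOT NULL,
    DurationS int       NOT NULL,
    CONSTRAINT PK_CallRecords PRIMARY KEY (CallDate, CallId)
) ON psCallDate (CallDate);
```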

In all these cases, thorough investigation and an in-depth understanding of SQL Server’s features and best practices led to performance improvement. Regular monitoring and proactive optimization are crucial to preventing performance problems and ensuring the smooth operation of SQL Server databases.

Case Study 17: Inappropriate Data Types

An educational institution’s student management system, built on a SQL Server database, suffered from slow performance when dealing with student records. The IT department discovered that the database design included many columns with data types that were larger than necessary.

For instance, student ID numbers were stored as NVARCHAR(100) even though they were always 10-digit numbers. This wasted space and slowed down queries due to the increased data size. The IT team redesigned the database schema to use more appropriate data types and migrated the existing data. The database size was significantly reduced, and query performance improved.

Case Study 18: Lack of Database Maintenance

A software firm’s application was facing intermittent slow performance issues. The application was built on a SQL Server database which had not been maintained properly for a long time.

The DBA team discovered that several maintenance tasks, including index maintenance and statistics updates, had been neglected. High index fragmentation and outdated statistics were causing inefficient query execution. They implemented a regular maintenance plan, including index defragmentation and statistics updates, which helped improve the query performance.

Case Study 19: Deadlocks

A stock trading company faced frequent deadlock issues in their SQL Server database, affecting their trading operations. Deadlocks occur when two or more tasks permanently block each other, each holding a lock on a resource that the other task is trying to lock.

Upon reviewing the deadlock graph (a tool provided by SQL Server to analyze deadlocks), the DBA team found that certain stored procedures accessed tables in different orders. They revised the stored procedures to access tables in the same order and introduced error-handling logic to retry the operation in case of a deadlock. This reduced the occurrence of deadlocks and improved the application’s stability.
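Retry-on-deadlock logic along these lines is a common pattern; the sketch below (with invented table names) catches error 1205, the number SQL Server raises for the chosen deadlock victim:

```sql
-- Hypothetical unit of work retried up to three times on deadlock.
DECLARE @retry int = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Positions SET Quantity = Quantity - 100 WHERE Symbol = 'ABC';
        UPDATE dbo.Cash      SET Balance  = Balance + 5000 WHERE AccountId = 7;
        COMMIT TRANSACTION;
        SET @retry = 0;                      -- success: exit the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retry > 1
            SET @retry -= 1;                 -- deadlock victim: try again
        ELSE
            THROW;                           -- any other error: re-raise
    END CATCH;
END;
```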

Case Study 20: Improper Use of SQL Server Functions

A retail company’s inventory management system was suffering from poor performance. The DBA team, upon investigation, discovered that a critical query was using a scalar-valued function that contained a subquery.

Scalar functions can cause performance issues by forcing SQL Server to perform row-by-row operations instead of set-based ones. They refactored the query to eliminate the scalar function and replaced the subquery with a join operation. This change significantly improved the performance of the critical query.
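The shape of that refactoring might look like this sketch (the function, tables, and columns are all hypothetical): the per-row scalar call becomes a single set-based aggregate join.

```sql
-- Before (illustrative): the scalar function's hidden subquery runs once per row.
-- SELECT ItemId, dbo.fnGetWarehouseStock(ItemId) AS Stock FROM dbo.Items;

-- After: the same logic expressed as a join with aggregation.
SELECT i.ItemId, SUM(s.Quantity) AS Stock
FROM   dbo.Items AS i
JOIN   dbo.WarehouseStock AS s ON s.ItemId = i.ItemId
GROUP BY i.ItemId;
```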

In all these situations, the DBA teams had first to understand the problem, investigate the cause, and apply appropriate techniques to resolve the issues. Understanding SQL Server internals and keeping up with its best practices is vital for the smooth functioning of any application built on SQL Server.

Case Study 21: Excessive Use of Temp Tables

A media company faced a slow response time in its content management system (CMS). A SQL Server database powered this CMS. The system became particularly sluggish during peak hours when content-related activities surged.

Upon investigating, the DBA team found that several stored procedures excessively used temporary tables for intermediate calculations. While temporary tables can be handy, their excessive use can increase I/O on tempDB, leading to slower performance.

The team revised these stored procedures to minimize the use of temporary tables. Wherever possible, they used table variables or derived tables, which often have lower overhead. After the optimization, the CMS’s performance improved significantly, especially during peak hours.

Case Study 22: Frequent Table Scans

An e-commerce company experienced a gradual decrease in its application performance. The application was backed by a SQL Server database, which was found to be frequently performing table scans on several large tables upon investigation.

Table scans can be resource-intensive, especially for large tables, as they involve reading the entire table to find relevant records. Upon closer examination, the DBA team realized that many of the queries issued by the application did not have appropriate indexes.

The team introduced well-thought-out indexes on the tables and made sure the application queries were written to utilize them. After these adjustments, the application performance improved significantly, with most queries executing much faster due to the reduced number of table scans.

Case Study 23: Unoptimized Views

A financial institution noticed slow performance in their loan processing application. This application relied on several complex views in a SQL Server database.

On review, the DBA team found that these views were not optimized. Some views were nested within other views, creating multiple layers of complexity, and some were returning more data than needed, including columns not used by the application.

They flattened the views to remove the unnecessary nesting and adjusted them to return only the required data. They also created indexed views for the ones most frequently used. These optimizations significantly improved the performance of the loan processing application.
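An indexed view might be sketched as below; the names are hypothetical, and the comments note the main requirements (SCHEMABINDING, two-part names, and COUNT_BIG(\*) in grouped views) before the clustered index can be created:

```sql
-- Hypothetical indexed view materializing loan totals per branch.
-- Assumes Amount is NOT NULL, as indexed views require for SUM.
CREATE VIEW dbo.vLoanTotalsByBranch
WITH SCHEMABINDING
AS
SELECT BranchId,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS LoanCount      -- required in grouped indexed views
FROM dbo.Loans
GROUP BY BranchId;
GO
-- The unique clustered index is what materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vLoanTotalsByBranch
    ON dbo.vLoanTotalsByBranch (BranchId);
```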

Case Study 24: Log File Management Issues

A data analytics firm was facing a slowdown in their SQL Server-based data processing tasks. On investigation, the DBA team discovered that the log file for their central database was becoming extremely large, causing slow write operations.

The team found that the recovery model for the database was set to Full, yet no transaction log backups were being taken. In the Full recovery model, the transaction log continues to grow until a log backup is taken. They set up regular transaction log backups to control the log file size and moved the log file to a faster disk to improve write speed. These changes helped speed up the data processing tasks.
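A single log backup is one statement; scheduled frequently (say, every 15 minutes via SQL Server Agent) it keeps the log from growing without bound under the Full recovery model. Database name and path below are examples:

```sql
-- Illustrative transaction log backup; truncates inactive log after completion.
BACKUP LOG AnalyticsDb
TO DISK = 'B:\Backups\AnalyticsDb_log.trn'
WITH COMPRESSION;
```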

In all these situations, systematic problem identification, root cause analysis, and applying the appropriate solutions were vital to improving SQL Server performance. Regular monitoring, preventive maintenance, and understanding SQL Server’s working principles are crucial in maintaining optimal database performance.

Case Study 25: Locking and Blocking Issues

A healthcare institution’s patient management system, running on a SQL Server database, was encountering slow performance. This was especially noticeable when multiple users were updating patient records simultaneously.

Upon investigation, the DBA team identified locking and blocking as the root cause. In SQL Server, when a transaction modifies data, locks are placed on the data until the transaction is completed to maintain data integrity. However, excessive locking can lead to blocking, where other transactions must wait until the lock is released.

To reduce the blocking issues, the team implemented row versioning-based isolation levels (like Snapshot or Read Committed Snapshot Isolation). They also optimized the application code to keep transactions as short as possible, thus reducing the time locks were held. These steps significantly improved the system’s performance.
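Enabling Read Committed Snapshot Isolation is a single database-level change; the database name here is an example, and the option needs exclusive access to switch on:

```sql
-- Readers see a versioned snapshot instead of waiting on writers' locks.
ALTER DATABASE PatientDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- kicks out open transactions so the switch can complete
```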

Case Study 26: Outdated Statistics

An online marketplace experienced slow performance with its product recommendation feature. The feature relied on a SQL Server database that contained historical sales data.

The DBA team identified that the statistics on the sales data table were outdated. SQL Server uses statistics to create efficient query plans. However, if the statistics are not up-to-date, SQL Server may choose sub-optimal query plans.

The team implemented a routine job to update statistics more frequently. They also enabled the ‘Auto Update Statistics’ option on the database to ensure statistics were updated automatically when necessary. This led to an immediate improvement in the recommendation feature’s performance.

Case Study 27: Non-Sargable Queries

A sports statistics website saw a decrease in its website performance, especially when visitors were querying historical game statistics. A SQL Server database backed their site.

Upon reviewing the SQL queries, the DBA team found several non-sargable queries. These queries cannot take full advantage of indexes due to how they are written (e.g., using functions on the column in the WHERE clause).

The team worked with the developers to rewrite these queries in a sargable manner, ensuring they could fully use the indexes. This led to a substantial increase in query performance and improved the website’s speed.
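A typical sargable rewrite, with hypothetical table and column names, replaces a function on the column with a range predicate the index can seek:

```sql
-- Non-sargable: the function on the column defeats an index on GameDate.
-- SELECT GameId FROM dbo.Games WHERE YEAR(GameDate) = 2022;

-- Sargable rewrite: a half-open date range that can use an index seek.
SELECT GameId, HomeScore, AwayScore
FROM   dbo.Games
WHERE  GameDate >= '2022-01-01'
  AND  GameDate <  '2023-01-01';
```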

Case Study 28: Over-Normalization

An HR application backed by a SQL Server database ran slowly, particularly when generating reports. The database schema was highly normalized, following the principles of reducing data redundancy.

However, the DBA team found that over-normalization led to excessive JOIN operations, resulting in slow query performance. They implemented denormalization in certain areas, introducing calculated and redundant fields where it made sense. This reduced the need for some JOIN operations and improved the application’s overall performance.

These cases show that performance troubleshooting in SQL Server involves understanding various components and how they interact. Addressing performance problems often requires a comprehensive approach, combining database configuration, query tuning, hardware adjustments, and, occasionally, changes to application design.

Case Study 29: Poor Query Design

A manufacturing company’s inventory management system was experiencing slow performance, especially when generating specific reports. The system was built on a SQL Server database.

The DBA team found that some queries used in report generation were poorly designed. They used SELECT * statements, which return all columns from the table, even though only a few columns were needed. This caused unnecessary data transfer and slowed down the performance.

The team revised these queries to fetch only the necessary columns. They also made other optimizations, such as avoiding unnecessary nested queries and replacing correlated subqueries with more efficient JOINs. These changes significantly improved the performance of the report generation process.

Case Study 30: Inefficient Indexing

A logistics company’s tracking system, running on a SQL Server database, was experiencing slow performance. Users were complaining about long loading times when tracking shipments.

Upon investigation, the DBA team discovered that the main shipment table in the database was not optimally indexed. Some critical queries didn’t have corresponding indexes, leading to table scans, while some existing indexes were barely used.

The DBA team created new indexes based on the query patterns and removed the unused ones. They also kept the indexing balanced, since excessive indexing can hurt performance by slowing down data modifications. After these indexing changes, the tracking system’s performance noticeably improved.

Case Study 31: Network Latency

A multinational corporation used a SQL Server database hosted in a different geographical location from the main user base. Users were experiencing slow response times when interacting with the company’s internal applications.

The IT team identified network latency as a critical issue. The physical distance between the server and the users was causing a delay in data transfer.

To solve this, they used SQL Server’s Always On Availability Groups feature to create a secondary replica of the database closer to the users. The read-only traffic was then directed to this local replica, reducing the impact of network latency and improving application response times.

Case Study 32: Resource-Intensive Reports

A fintech company ran daily reports on their SQL Server database during business hours. These reports were resource-intensive and caused the application performance to degrade when they were running.

The DBA team offloaded the reporting workload to a separate reporting server using SQL Server’s transaction replication feature. This ensured that the resource-intensive reports didn’t impact the performance of the primary server. They also scheduled the reports during off-peak hours to minimize user impact. This significantly improved the overall application performance during business hours.

These case studies underline the necessity of a proactive and comprehensive approach to managing SQL Server performance. Regular monitoring, appropriate database design, optimized queries, and a good understanding of how the database interacts with hardware and network can go a long way in maintaining optimal performance.

Case Study 33: Application with Heavy Write Operations

A social media application powered by a SQL Server database was facing slow performance due to a high volume of write operations from user posts, likes, and comments.

The DBA team found that the frequent write operations were causing high disk I/O, slowing down the application performance. They decided to use In-Memory OLTP, a feature in SQL Server designed for high-performance transactional workloads, by migrating the most frequently accessed tables to memory-optimized tables.

The team also introduced natively compiled stored procedures for the most common operations. In-memory OLTP significantly improved the write operation speed and overall application performance.

Case Study 34: Large Transactional Tables with No Archiving

A telecom company’s billing system was experiencing performance degradation over time. The system was built on a SQL Server database and retained years of historical data in the main transactional tables.

The DBA team found that the large size of the transactional tables was leading to slow performance, especially for queries involving range or full table scans. They introduced a data archiving strategy, moving older data to separate archive tables and keeping only recent data in the main transactional tables.
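One hedged sketch of such an archiving job moves rows in small batches to keep transactions short; the tables are hypothetical, and the archive table is assumed to have an identical schema:

```sql
-- Move billing rows older than two years into an archive table, 10k at a time.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.BillingRecords
    OUTPUT deleted.* INTO dbo.BillingRecordsArchive  -- same column layout assumed
    WHERE BillDate < DATEADD(year, -2, SYSDATETIME());

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to archive
END;
```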

This reduced the transactional tables’ size, leading to faster queries and improved performance. In addition, it made maintenance tasks such as backups and index rebuilds quicker and less resource-intensive.

Case Study 35: Suboptimal Storage Configuration

A gaming company’s game-state tracking application was experiencing slow response times. A SQL Server database backed the application.

Upon investigation, the DBA team discovered that the database files were spread across multiple disks in a way that was not optimizing I/O performance. Some of the heavily used database files were located on slower disks.

The team reconfigured the storage, placing the most frequently accessed database files on SSDs (Solid State Drives) to benefit from their higher speed. They also ensured that data files and log files were separated onto different disks to balance the I/O load. After these adjustments, the application’s performance improved noticeably.

Case Study 36: Inefficient Use of Cursors

A government department’s record-keeping system, built on a SQL Server database, ran slowly. The system was particularly sluggish when executing operations that looped over large data sets.

The DBA team identified that the system used SQL Server cursors to perform these operations. Cursors are database objects used to manipulate rows a query returns on a row-by-row basis. However, they can be inefficient compared to set-based operations.

The team rewrote these operations to use set-based operations, replacing cursors with joins, subqueries, or temporary tables. These changes significantly improved the efficiency and performance of the data looping operations.

Each case study presents a unique scenario and solution, highlighting that SQL Server performance tuning can involve many factors. From the application design to the database schema, from the hardware configuration to the SQL Server settings – each aspect can significantly impact performance. By taking a methodical approach to identifying and addressing performance bottlenecks, it is possible to achieve substantial improvements.

Case Study 37: Use of Entity Framework without Optimization

A logistics company’s web application, backed by a SQL Server database, was experiencing slow load times. The application was built using .NET’s Entity Framework (EF), which allows developers to interact with the database using .NET objects.

Upon review, the DBA team found that the Entity Framework was not optimally configured. For instance, “lazy loading” was enabled, which can lead to performance problems due to excessive and unexpected queries.

The team worked with developers to make necessary optimizations, like turning off lazy loading and using eager loading where appropriate, filtering data at the database level instead of the application level, and utilizing stored procedures for complex queries. After these optimizations, the web application’s performance significantly improved.

Case Study 38: Poorly Defined Data Types

An e-commerce platform was noticing slow performance when processing transactions. The platform’s backend was a SQL Server database.

The DBA team discovered that some of the columns in the transaction table were using data types larger than necessary. For instance, a column storing a small range of values used an INT data type when a TINYINT would suffice.

They adjusted the data types to match the data being stored more closely. This reduced the storage space and memory used by these tables, resulted in faster queries, and improved overall performance.

Case Study 39: Fragmented Indexes

A banking application was experiencing slow response times during peak usage hours. The application’s data was stored in a SQL Server database.

Upon reviewing the database, the DBA team found that the indexes on several critical tables were heavily fragmented. Index fragmentation can happen over time as data is added, updated, or deleted, leading to decreased query performance.

The DBA team implemented a regular maintenance plan to rebuild or reorganize fragmented indexes. They also adjusted some indexes’ fill factors to leave more free space and reduce future fragmentation. These steps led to improved query performance and faster response times for the banking application.

Case Study 40: Misconfigured Memory Settings

A CRM system was running slowly, especially during data-heavy operations. The system was built on a SQL Server database.

Upon checking the SQL Server settings, the DBA team found that the maximum server memory was not correctly configured. The server was not utilizing the available memory to its full potential, which can impact SQL Server’s performance.

The team adjusted the memory settings to allow SQL Server to use more of the available memory, leaving enough memory for the operating system and other applications. This allowed more data to be kept in memory, reducing disk I/O and improving SQL Server performance.

These case studies further illustrate that performance tuning in SQL Server requires a multifaceted approach involving the database system and the related applications. Regular monitoring and maintenance and a good understanding of SQL Server’s working principles are essential in ensuring optimal database performance.

Case Study 41: Underutilized Parallelism

An analytics company was struggling with slow data processing times. Their SQL Server database ran on a machine with multi-core processors, but performance fell short of expectations.

The DBA team found that the server’s parallelism settings were not optimally configured. The “max degree of parallelism” (MAXDOP) setting, which controls how many processors SQL Server can use for single query execution, was set to 1, which meant SQL Server was not fully utilizing the available cores.

The team adjusted the MAXDOP setting to a more appropriate value considering the number of available cores and the workload characteristics. This allowed SQL Server to execute large queries more efficiently by spreading the work across multiple cores, improving data processing times.

Case Study 42: Bad Parameter Sniffing

An insurance company’s application was experiencing sporadic slow performance. The application was built on a SQL Server database and used stored procedures extensively.

Upon investigation, the DBA team discovered that the performance issues were due to “bad parameter sniffing.” SQL Server can create sub-optimal execution plans for stored procedures based on the parameters of the first execution, which may not work for subsequent executions with different parameters.

The team implemented the OPTION (RECOMPILE) query hint for the problematic stored procedures, forcing SQL Server to generate a fresh execution plan on each execution. They also masked parameters with local variables in some procedures. This helped avoid bad parameter sniffing and improved the application’s performance consistency.

Case Study 43: Auto-Shrink Enabled

A retail company’s inventory management system, backed by a SQL Server database, was experiencing performance problems, slowing down irregularly.

The DBA team found that the “auto-shrink” option was enabled on the database. Auto-shrink can cause performance issues because it is resource-intensive and can lead to index fragmentation.

The team disabled auto-shrink and implemented a proper database size management strategy, manually shrinking the database only when necessary and immediately reorganizing indexes afterward. This resolved the irregular performance slowdowns and stabilized the system’s performance.

Case Study 44: Tempdb Contention

A travel booking website was noticing performance degradation during peak hours. Their system was built on a SQL Server database.

Upon review, the DBA team found signs of contention in tempdb, a system database used for temporary storage. Tempdb contention can slow down the system as queries wait for resources.

The team implemented several measures to reduce tempdb contention, including configuring multiple equally sized tempdb data files and using trace flag 1118 (needed on versions prior to SQL Server 2016, where this allocation behavior became the default) to change how SQL Server allocates extents. These steps helped alleviate the tempdb contention and improved the system’s peak performance.
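Adding equally sized tempdb files (a common rule of thumb is one per core, up to about eight) can be sketched as below; paths, sizes, and growth values are examples:

```sql
-- Add two more tempdb data files, matching the size of the existing one.
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'S:\TempDB\tempdb2.ndf',
     SIZE = 8GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'S:\TempDB\tempdb3.ndf',
     SIZE = 8GB, FILEGROWTH = 512MB);
```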

These case studies showcase that SQL Server performance tuning is dynamic, requiring ongoing adjustments and a deep understanding of SQL Server’s various features and settings. By monitoring the system closely and being ready to investigate and address issues promptly, you can ensure your SQL Server databases run efficiently and reliably.

Case Study 45: Locking and Blocking

A healthcare company’s patient record system, powered by a SQL Server database, was experiencing slow performance during high user activity periods.

Upon investigation, the DBA team found high locking and blocking. This was due to a few long-running transactions that were locking critical tables for a significant amount of time, preventing other transactions from accessing these tables.

The DBA team optimized the problematic transactions to make them more efficient and faster. They also implemented row versioning by enabling Read Committed Snapshot Isolation (RCSI) on the database to allow readers not to block writers and vice versa. This alleviated the locking and blocking issue and led to a significant improvement in performance.

Case Study 46: Over-normalization

An e-commerce website was experiencing slow load times, particularly in product categories and search pages. The company’s product catalog was stored in a SQL Server database.

Upon review, the DBA team found that the database schema was overly normalized. While normalization is generally a good practice as it reduces data redundancy, in this case, it led to an excessive number of query joins, causing slower performance.

The DBA team worked with the developers to denormalize the database schema slightly. They created computed columns for frequently calculated fields and materialized views for commonly executed queries with multiple joins. These changes reduced the number of joins required in the queries and improved the website’s performance.

Case Study 47: Suboptimal Statistics

A software company’s project management application was running slowly. The application was built on a SQL Server database.

Upon checking the database, the DBA team found that the statistics were not up-to-date on several large tables. Statistics in SQL Server provide critical information about the data distribution in a table, which the query optimizer uses to create efficient query execution plans.

The team set up a maintenance job to regularly update statistics on the database tables. They also adjusted the “auto update statistics” option to ensure that statistics are updated more frequently. These steps helped the query optimizer generate more efficient execution plans, improving query performance.

Case Study 48: Improper Use of Functions in Queries

A media company’s content management system was experiencing slow response times. The system was built on a SQL Server database.

The DBA team identified several frequently executed queries using scalar functions on columns in the WHERE clause. This practice prevents SQL Server from effectively using indexes on those columns, leading to table scans and slower performance.

The team avoided using functions on indexed columns in the WHERE clause, allowing SQL Server to use the indexes efficiently. This significantly improved the performance of these queries and the overall response time of the system.
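The rewrite pattern looks like this (table and column names are hypothetical): a function applied to the column hides it from index seeks, while an equivalent range predicate is sargable.

```sql
-- Non-sargable: the function on the column forces a scan.
SELECT ArticleID, Title
FROM dbo.Articles
WHERE YEAR(PublishedDate) = 2023;

-- Sargable rewrite: a range predicate lets SQL Server seek an index
-- on PublishedDate instead of scanning the table.
SELECT ArticleID, Title
FROM dbo.Articles
WHERE PublishedDate >= '20230101'
  AND PublishedDate <  '20240101';
```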

As these case studies illustrate, various issues can affect SQL Server performance. Addressing them requires a good understanding of SQL Server, a methodical approach to identifying problems, and collaboration with other teams, such as developers, to implement optimal solutions.

Case Study 49: Excessive Use of Temp Tables

A finance firm’s risk assessment software, built on SQL Server, was experiencing slower performance. The software was executing numerous calculations and transformations, using temp tables extensively.

Upon reviewing the operations, the DBA team found that the excessive use of temp tables led to high I/O operations and caused contention in tempdb. They also found that some temp tables were unnecessary as the same operations could be achieved using more straightforward queries or table variables, which have a lower overhead than temp tables.

The DBA team and developers collaborated to refactor the procedures to reduce the use of temp tables. They replaced temp tables with table variables where possible and sometimes rewrote queries to avoid needing temporary storage. This reduced the load on tempdb and improved the software’s performance.
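A sketch of the substitution (names and logic are hypothetical; @AsOfDate is assumed to be a procedure parameter):

```sql
-- Before: a temp table used for a small intermediate result.
CREATE TABLE #Rates (InstrumentID int PRIMARY KEY, Rate decimal(9, 4));
INSERT INTO #Rates (InstrumentID, Rate)
SELECT InstrumentID, Rate FROM dbo.RiskRates WHERE AsOfDate = @AsOfDate;

-- After: a table variable, which avoids some tempdb overhead.
DECLARE @Rates TABLE (InstrumentID int PRIMARY KEY, Rate decimal(9, 4));
INSERT INTO @Rates (InstrumentID, Rate)
SELECT InstrumentID, Rate FROM dbo.RiskRates WHERE AsOfDate = @AsOfDate;
```

Table variables carry minimal statistics, so this trade-off suits small intermediate result sets rather than large ones.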

Case Study 50: High Network Latency

An international company was experiencing slow performance with its distributed applications. These applications interacted with a centralized SQL Server database in their headquarters.

Upon investigation, the DBA team found that network latency was a significant factor causing the slow performance. The network latency was exceptionally high for the company’s overseas offices.

To address this, they implemented SQL Server’s data compression feature to reduce the amount of data sent over the network. They also combined caching data at the application level and local read-only replicas for overseas offices. This resulted in reduced network latency and improved application performance.
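A sketch of applying page compression (object names are hypothetical). Note that row/page compression acts on stored pages, so it primarily cuts storage and I/O; the network-latency gains in this case also depended on the caching and read-replica strategy.

```sql
-- Estimate the savings before committing to a rebuild.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Orders',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply page compression to the table and all of its indexes.
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
```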

Case Study 51: Large Data Loads During Business Hours

A manufacturing company’s ERP system was experiencing slow performance during specific periods of the day. A SQL Server database backed the system.

The DBA team found that large data loads were being run during business hours, impacting the system’s performance. These data loads were locking tables and consuming significant server resources.

The team rescheduled the data loads to off-peak hours, ensuring minimal impact on business users. They also optimized the data load processes using techniques such as bulk insert and minimally logged operations to make them run faster and consume fewer resources.
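A sketch of a bulk load tuned for minimal logging (the file path and table name are hypothetical; minimal logging also requires the SIMPLE or BULK_LOGGED recovery model):

```sql
BULK INSERT dbo.ProductionReadings
FROM 'D:\loads\readings.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK,            -- table lock is required for minimal logging
    BATCHSIZE = 100000  -- commit in batches to limit log growth
);
```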

Case Study 52: Inefficient Code

A software company’s internal tool was running slow. The tool was built on a SQL Server database and used stored procedures extensively.

The DBA team found that some of the stored procedures were written inefficiently. There were instances of cursor use where set-based operations would be more appropriate, and some procedures called other procedures in a loop, causing many executions.

The team worked with developers to optimize the stored procedures. They replaced cursors with set-based operations and unrolled loops where possible, reducing the number of procedure executions. They also added appropriate indexes to support the queries in the stored procedures. These changes improved the code’s efficiency and the tool’s overall performance.
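The cursor-to-set-based pattern can be sketched like this (hypothetical table and business rule):

```sql
-- Cursor approach (simplified): fetch and update one row at a time,
-- issuing thousands of single-row statements.
-- DECLARE task_cursor CURSOR FOR
--     SELECT TaskID FROM dbo.Tasks WHERE Status = 'Open'; ...

-- Set-based equivalent: one statement updates every qualifying row.
UPDATE dbo.Tasks
SET    Status  = 'Overdue'
WHERE  Status  = 'Open'
  AND  DueDate < GETDATE();
```

The set-based form lets the optimizer choose a single efficient plan instead of repeating per-row overhead.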

These case studies underscore that SQL Server performance issues can arise from different areas – from inefficient code to infrastructure factors like network latency. Keeping a keen eye on performance metrics, proactively managing server resources, and maintaining efficient database code are all part of the toolkit for managing SQL Server performance.


8 Week SQL Challenge

Start your SQL learning journey today!

  • Case Study #1 - Danny's Diner

Danny Ma · May 1, 2021


Introduction

Danny seriously loves Japanese food so in the beginning of 2021, he decides to embark upon a risky venture and opens up a cute little restaurant that sells his 3 favourite foods: sushi, curry and ramen.

Danny’s Diner is in need of your assistance to help the restaurant stay afloat - the restaurant has captured some very basic data from its few months of operation but has no idea how to use that data to help run the business.

Problem Statement

Danny wants to use the data to answer a few simple questions about his customers, especially about their visiting patterns, how much money they’ve spent and also which menu items are their favourite. Having this deeper connection with his customers will help him deliver a better and more personalised experience for his loyal customers.

He plans on using these insights to help him decide whether he should expand the existing customer loyalty program - additionally he needs help to generate some basic datasets so his team can easily inspect the data without needing to use SQL.

Danny has provided you with a sample of his overall customer data due to privacy issues - but he hopes that these examples are enough for you to write fully functioning SQL queries to help him answer his questions!

Danny has shared with you 3 key datasets for this case study:

  • sales
  • menu
  • members

You can inspect the entity relationship diagram and example data below.

Entity Relationship Diagram

Example Datasets

All datasets exist within the dannys_diner database schema - be sure to include this reference within your SQL scripts as you start exploring the data and answering the case study questions.

Table 1: sales

The sales table captures all customer_id level purchases, with a corresponding order_date and product_id indicating when and which menu items were ordered.

customer_id order_date product_id
A 2021-01-01 1
A 2021-01-01 2
A 2021-01-07 2
A 2021-01-10 3
A 2021-01-11 3
A 2021-01-11 3
B 2021-01-01 2
B 2021-01-02 2
B 2021-01-04 1
B 2021-01-11 1
B 2021-01-16 3
B 2021-02-01 3
C 2021-01-01 3
C 2021-01-01 3
C 2021-01-07 3

Table 2: menu

The menu table maps the product_id to the actual product_name and price of each menu item.

product_id product_name price
1 sushi 10
2 curry 15
3 ramen 12

Table 3: members

The final members table captures the join_date when a customer_id joined the beta version of the Danny’s Diner loyalty program.

customer_id join_date
A 2021-01-07
B 2021-01-09

Interactive SQL Session

You can use the embedded DB Fiddle below to easily access these example datasets - this interactive session has everything you need to start solving these questions using SQL.

You can click on the Edit on DB Fiddle link on the top right hand corner of the embedded session below and it will take you to a fully functional SQL editor where you can write your own queries to analyse the data.

Feel free to choose any SQL dialect you’d like to use; the existing Fiddle uses PostgreSQL 13 by default.

Serious SQL students have access to a dedicated SQL script in the 8 Week SQL Challenge section of the course which they can use to generate relevant temporary tables like we’ve done throughout the entire course!

Case Study Questions

Each of the following case study questions can be answered using a single SQL statement:

  • What is the total amount each customer spent at the restaurant?
  • How many days has each customer visited the restaurant?
  • What was the first item from the menu purchased by each customer?
  • What is the most purchased item on the menu and how many times was it purchased by all customers?
  • Which item was the most popular for each customer?
  • Which item was purchased first by the customer after they became a member?
  • Which item was purchased just before the customer became a member?
  • What is the total items and amount spent for each member before they became a member?
  • If each $1 spent equates to 10 points and sushi has a 2x points multiplier - how many points would each customer have?
  • In the first week after a customer joins the program (including their join date) they earn 2x points on all items, not just sushi - how many points do customer A and B have at the end of January?
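As a warm-up, the first question needs only a join and an aggregate. One possible answer against the sample data above (PostgreSQL dialect, as used in the Fiddle):

```sql
SELECT s.customer_id,
       SUM(m.price) AS total_spent
FROM dannys_diner.sales AS s
JOIN dannys_diner.menu AS m
  ON m.product_id = s.product_id
GROUP BY s.customer_id
ORDER BY s.customer_id;
-- customer_id | total_spent
-- A           | 76
-- B           | 74
-- C           | 36
```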

Bonus Questions

Join all the things.

The following questions are related to creating basic data tables that Danny and his team can use to quickly derive insights without needing to join the underlying tables using SQL.

Recreate the following table output using the available data:

customer_id order_date product_name price member
A 2021-01-01 curry 15 N
A 2021-01-01 sushi 10 N
A 2021-01-07 curry 15 Y
A 2021-01-10 ramen 12 Y
A 2021-01-11 ramen 12 Y
A 2021-01-11 ramen 12 Y
B 2021-01-01 curry 15 N
B 2021-01-02 curry 15 N
B 2021-01-04 sushi 10 N
B 2021-01-11 sushi 10 Y
B 2021-01-16 ramen 12 Y
B 2021-02-01 ramen 12 Y
C 2021-01-01 ramen 12 N
C 2021-01-01 ramen 12 N
C 2021-01-07 ramen 12 N
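One possible single-statement approach to producing this table (PostgreSQL dialect): left-join to members so non-members like customer C are kept, and derive the member flag by comparing each order date to the join date.

```sql
SELECT s.customer_id,
       s.order_date,
       m.product_name,
       m.price,
       CASE WHEN mem.join_date IS NOT NULL
             AND s.order_date >= mem.join_date
            THEN 'Y' ELSE 'N'
       END AS member
FROM dannys_diner.sales AS s
JOIN dannys_diner.menu AS m
  ON m.product_id = s.product_id
LEFT JOIN dannys_diner.members AS mem
  ON mem.customer_id = s.customer_id
ORDER BY s.customer_id, s.order_date, m.product_name;
```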

Rank All The Things

Danny also requires further information about the ranking of customer products. He purposely does not need the ranking for non-member purchases, so he expects null ranking values for the records created before customers joined the loyalty program.

customer_id order_date product_name price member ranking
A 2021-01-01 curry 15 N null
A 2021-01-01 sushi 10 N null
A 2021-01-07 curry 15 Y 1
A 2021-01-10 ramen 12 Y 2
A 2021-01-11 ramen 12 Y 3
A 2021-01-11 ramen 12 Y 3
B 2021-01-01 curry 15 N null
B 2021-01-02 curry 15 N null
B 2021-01-04 sushi 10 N null
B 2021-01-11 sushi 10 Y 1
B 2021-01-16 ramen 12 Y 2
B 2021-02-01 ramen 12 Y 3
C 2021-01-01 ramen 12 N null
C 2021-01-01 ramen 12 N null
C 2021-01-07 ramen 12 N null
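One way to produce this output (PostgreSQL dialect) is to reuse the member-flag derivation in a CTE, then wrap a windowed DENSE_RANK in a CASE so non-member rows get null:

```sql
WITH joined AS (
    SELECT s.customer_id,
           s.order_date,
           m.product_name,
           m.price,
           CASE WHEN mem.join_date IS NOT NULL
                 AND s.order_date >= mem.join_date
                THEN 'Y' ELSE 'N'
           END AS member
    FROM dannys_diner.sales AS s
    JOIN dannys_diner.menu AS m
      ON m.product_id = s.product_id
    LEFT JOIN dannys_diner.members AS mem
      ON mem.customer_id = s.customer_id
)
SELECT *,
       CASE WHEN member = 'N' THEN NULL
            ELSE DENSE_RANK() OVER (
                     PARTITION BY customer_id, member
                     ORDER BY order_date)
       END AS ranking
FROM joined
ORDER BY customer_id, order_date, product_name;
```

DENSE_RANK (rather than ROW_NUMBER) gives both of customer A's 2021-01-11 ramen orders the same rank of 3, matching the expected output.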

It’s highly recommended to save all of your code in a separate IDE or text editor as you are trying to solve the problems in the provided SQL Fiddle instance above!

If you’d like to use this case study for one of your portfolio projects or in a personal blog post - please remember to link back to this URL and also don’t forget to share some LinkedIn updates using the #8WeekSQLChallenge hashtag and remember to tag me!

Ready for the next 8 Week SQL Challenge case study? Head over to Case Study #2 - Pizza Runner to get started!

I really hope you enjoyed this fun little case study - it definitely was fun for me to create!

Official Solutions

If you’d like to see the official code solutions and explanations for this case study and a whole lot more, please consider joining me for the Serious SQL course - you’ll get access to all course materials and I’m on hand to answer all of your additional SQL questions directly!

Serious SQL is priced at $49 USD ($29 for students) and includes access to all written course content and community events, as well as live and recorded SQL training videos!

Please send an email to [email protected] from your educational email or include your enrolment details or student identification for a speedy response!

The following topics relevant to the Danny’s Diner case study are covered in depth in the Serious SQL course:

  • Common Table Expressions
  • Group By Aggregates
  • Window Functions for ranking
  • Table Joins

Don’t forget to review the comprehensive list of SQL resources I’ve put together for the 8 Week SQL Challenge on the Resources page!

Community Solutions

This section will be updated in the future with any community member solutions with a link to their respective GitHub repos!

Final Thoughts

The 8 Week SQL Challenge is proudly brought to you by me - Danny Ma and the Data With Danny virtual data apprenticeship program.

Students or anyone undertaking further studies are eligible for a $20 USD student discount off the price of Serious SQL. Please send an email to [email protected] from your education email, or include information about your enrolment, for a fast response!

We have a large student community active on the official DWD Discord server with regular live events, trainings and workshops available to all Data With Danny students, plus early discounted access to all future paid courses.

There are also opportunities for 1:1 mentoring, resume reviews, interview training and more from myself or others in the DWD Mentor Team.

From your friendly data mentor, Danny :)

All 8 Week SQL Challenge Case Studies

All of the 8 Week SQL Challenge case studies can be found below:

  • Case Study #2 - Pizza Runner
  • Case Study #3 - Foodie-Fi
  • Case Study #4 - Data Bank
  • Case Study #5 - Data Mart
  • Case Study #6 - Clique Bait
  • Case Study #7 - Balanced Tree Clothing Co.
  • Case Study #8 - Fresh Segments

Share: Twitter , Facebook

Case Studies

  • Sources for Case Studies

Where Can I Find Harvard Business School Case Studies?

Harvard Business Publishing makes a great deal of money selling these for business school course packs and will not make them available to libraries. You can, however, order them directly from HBS for around $8.95 each. How to find them:

  • Harvard Business Review publishes one case study per issue. These generally deal with fictitious companies but are very good studies of current problems faced by companies.
  • Harvard Business School Publishing: search by company name or topic. Abstracts are usually included. Harvard also sells cases from Babson College and Northwestern's Kellogg School of Management, among others.

How Do I Find Articles with Case Studies?

Use keyword searches in article databases. For example: "case studies and airlines" or "case studies and management". Full-text articles and abstracts are available, depending on the journal.

Tip: Use the subject heading "case studies" in ABI/INFORM and Business Source Complete.

  • ABI/INFORM: Article database that indexes academic journals, trade publications, newspapers and magazines in business and economics. Full text is often available. Use the FindIt links to locate full text of articles that are not included in the database.
  • Business Source Complete: Article database that includes trade publications, academic journals, industry profiles, country information and company profiles, which include SWOT analyses. Full text is often available. Use the FindIt links to locate full text of articles that are not included in the database.
  • EconLit with Full Text: EconLit indexes articles from economics journals, books, book chapters, dissertations and working papers. It is a very good source for empirical studies on economics and finance. Use the FindIt links to locate full text of articles that are not included in the database.

Where Can I Find Free Case Studies?

Most cases published for teaching in business schools are not free to use. These are a few resources that do offer free cases, but only LearningEdge offers its entire catalog for free.

  • LearningEdge: cases developed at the MIT Sloan School of Management.
  • Stanford Graduate School of Business: a selection of free cases; more are available for purchase through Harvard Business School Publishing.
  • The Case Centre: a selection of free cases; many more are available for purchase.
  • Subjects: Business
  • Tags: harvard
  • Updated: Sep 6, 2023 3:16 PM
  • URL: https://guides.lib.uchicago.edu/case_studies


  • Boston University Libraries

Business Case Studies

Databases with Cases

  • Getting Started
  • Harvard Business School Cases
  • Diverse Business Cases
  • Journals with Cases
  • Books with Cases
  • Open Access Cases
  • Case Analysis
  • Case Interviews
  • Case Method (Teaching)
  • Writing Case Studies
  • Citing Business Sources


One strategy for finding business cases is to search databases known to have case studies. All of the databases listed below include case studies.


  • Last Updated: Jun 25, 2024 1:35 PM
  • URL: https://library.bu.edu/business-case-studies

Research Excellence Framework 2021

Impact case study database


J Med Libr Assoc, 102(2), April 2014

Cases Database

236 Gray's Inn Road, London, WC1X 8HB, United Kingdom;   [email protected] ; www.casesdatabase.com ; continuously updated; open access; no cost. BioMed Central.

Cases in clinical medicine can be defined as reports on patients where symptoms, diagnosis, treatment, and outcome are described, and some new side effect, disease association, symptom, or unusual presentation is observed. Usually in case reports, the patient is also described in detail: age, weight, gender, ethnicity, or some other clinically relevant attribute. Cases Database, launched by BioMed Central (BMC) on December 10, 2012, aims to pull together similar cases from published journals to assist researchers in finding trends in this subset of the medical literature. According to the Cases Database website, the purpose of this resource is to provide clinicians, researchers, regulators, and patients a starting place to explore content and identify emerging trends in medicine. Matt Cockerill, BMC's managing director explains, “Our vision, when we launched Journal of Medical Case Reports , was that case reports would have much more value if they could be assembled together in large numbers and made easy to find and compare” [1].

Content in Cases Database is pulled from BMC's Journal of Medical Case Reports, cases published in other BMC journals, and cases from other published medical journals using a text-mining method designed and built just for the database. New content is added on an ongoing basis as new articles are published. All cases indexed in Cases Database are peer reviewed, and because the resource is open access, everyone can use it at no cost. This makes the database highly accessible. Because all the cases are pulled from previously published material, there is also no charge to publish in Cases Database; however, authors cannot submit directly to Cases Database. They must publish in either the Journal of Medical Case Reports or in a journal already indexed by Cases Database. The inclusion of papers is completely at the discretion of the editors.

One of the most noticeable features of the database is the Google-like search box on the home page. It stands out on the page because it is quite large and really invites a user to start a keyword search right away. There is an option to do an advanced search right below the main box. The advanced options include limits by patient information (sex, age, ethnicity); clinical characteristics (condition, symptom, medication, etc.); and a particular publication. Author country is, interestingly enough, the first limitation offered in that section.

The home page also offers other interesting features, such as a tag cloud of popular searches, the current number of reports in the database, and the number of journals indexed (as of November 22, 2013, those numbers are 30,081 and 257, respectively). That is about as complicated as a search gets in this resource. Once a search is performed, more limits can be applied. The author of this review did a keyword search for “diabetic ketoacidosis” and noticed that for the 112 cases returned, the same limits mentioned previously all appeared in a left-hand navigation pane. The clinical characteristics were also pre-scoped to those associated with diabetic ketoacidosis.

Another option is to create an account with Cases Database in order to save cases and/or searches, download results to the computer, and receive alerts when new cases match a saved search. Overall, the usability of this database is good, the interface is clean and simple to navigate, and the usual functions and limits are all present. Librarians who are interested in usage statistics do not have such luck, however. This reviewer asked the vendor if there is an administrative utility that could capture use data based on Internet protocol (IP), but no statistics are available.

The most difficult part of this review is determining whether Cases Database is worth recommending to clinical providers as a worthwhile source of information. All the cases in the database can be found elsewhere, either by searching BioMed Central's site or by searching MEDLINE and limiting to cases. In fact, nearly any instance of MEDLINE would provide a more comprehensive set of cases on a particular topic. With all the medical and health databases vying for a researcher's time, is Cases Database worth recommending and using? Yes, but only in particular situations and with some explanation.

Academic health sciences librarians may frequently be asked by students for assistance in locating particular kinds of studies, usually by assignment from their instructors. Cases Database would be a really good source for students to find case reports. Because the resource is limited to cases and everything in it is peer reviewed, this database could be very helpful in this scenario. Cases Database would also be a suitable recommendation for a clinician who is looking to compare information about a patient to other similar patients mentioned in quality case reports. The caveat is that the clinician would need to know that the database does not have the depth of coverage that a source like MEDLINE would provide. The benefit for this kind of user would be Cases Database's quick and easy-to-use interface.

Cases Database is limited by the fact that only 252 titles are currently indexed, but the developers have a standing call for other publishers to allow indexing of more journals. Until then, it would be difficult to recommend this database to other user groups, as a limit to case studies in MEDLINE sources would provide a more authoritative search. Cases Database would not be recommended, therefore, to librarians or researchers conducting intensive literature reviews.

University of Illinois Chicago, University Library
Business Case Studies: Databases That Contain Case Studies

  • Harvard Case Studies
  • Journals that contain case studies
  • Open Access Cases

In most of these databases you can limit to Case Study in the Document Type.

ABI/INFORM is an extensive business research database with full-text articles from business journals, business and trade magazines and newspapers.

  • Business Source Complete Business Source Complete provides full-text to a large number of business sources, including business journals, trade publications and business magazines.
  • Entrepreneurial Studies Source Entrepreneurial Studies Source provides the latest insight into topics relevant to entrepreneurship and small business. This database offers users full text for more than 125 key periodicals, 135 reference books, case studies, thousands of company profiles and over 600 videos with transcripts and related articles from the Harvard Faculty Series and Vator.TV.

Gartner Research contains research and analysis done by the Gartner Group that can be used for planning, decision making, and measurement purposes for making continual process improvements towards industry best practices. It does not include Dataquest documents nor special reports. Other information is only available by request to the University's Gartner representative. Individuals needing help accessing the content should contact the AITS Service Desk .

  • O'Reilly Safari Technical Books and Videos: Access to case studies as well as technical books, videos, tutorials, and more. A UIC email address is required for access. Note: at the O'Reilly welcome page, go to SELECT YOUR INSTITUTION, select NOT LISTED? CLICK HERE from the drop-down menu, and enter your UIC email address. If you already have an account with O'Reilly, you can skip this step; instead, go to ALREADY A USER? CLICK HERE, which takes you to a page where you log in with your username and password.
  • ProQuest Dissertations & Theses Global Dissertations and theses often include case studies. Try using search terms such as "case studies" or "business cases."
  • Last Updated: Jan 23, 2024 3:34 PM
  • URL: https://researchguides.uic.edu/c.php?g=1177135


COMMENTS

  1. Case Study

    How Heroku reduced their operational overhead by migrating their 30 TB self-managed database from Amazon EC2 to Amazon DynamoDB. Heroku is a fully managed platform as a service (PaaS) solution that makes it straightforward for developers to deploy, operate, and scale applications on AWS. Founded in 2007 and a part of Salesforce since 2010 ...

  2. NCCSTS Case Studies

    Enrich your students' educational experience with case-based teaching The NCCSTS Case Collection, created and curated by the National Center for Case Study Teaching in Science, on behalf of the University at Buffalo, contains over a thousand peer-reviewed case studies on a variety of topics in all areas of science.

  3. Case Studies Examples Scenarios Database System DBMS

    Case Studies Examples Scenarios Database System DBMS Most of the time you see the case studies and scenario-based questions in the Database System (DBMS) paper. Keeping in view, I am sharing with you some of the case study base questions of the database course.

  4. Case studies & examples

    Case studies & examples Articles, use cases, and proof points describing projects undertaken by data managers and data practitioners across the federal government

  5. 10 Database Examples in Real Life

    In this case, databases help organize products, pricing, customer information, and purchasing history. The eCommerce store owner can then leverage their database to recommend other potential products to customers.

  6. Real-world Database Integration Case Studies: Success Stories, Benefits

    Real-world database integration case studies offer valuable insights into the benefits and outcomes of integrating databases and systems. These success stories showcase how businesses have transformed their operations, achieving increased efficiency, improved data accuracy, enhanced decision-making, better customer experiences, and significant ...

  7. PDF A Suite of Case Studies in Relational Database Design

    The objective of this thesis is to design and develop a collection of ten projects that would be usable as term projects in relational database system design for a typical undergraduate database course. To this end a suite of ten case studies are presented.

  8. Case Studies in SQL: Real-World Data Analysis with SQL Queries

    SQL is a versatile language for data analysis, and these real-world case studies demonstrate its practical application in various domains. By using SQL queries, you can extract valuable insights ...

  9. Databases by Type: Case Studies

    An intuitive and comprehensive business library containing millions of full-text items across scholarly and popular periodicals, newspapers, market research reports, dissertations, books, videos and more. To find case studies, go to the Advanced Search page, go to the box labeled "Document Type," and select "Case Study." SAGE Journals. Access ...

  10. Databases

    This guide lists websites and databases containing case studies including where to purchase Harvard Business Review Case Studies. Also provided are search tips for locating case studies.

  11. Case Studies in Databases

    Ebsco Database To find a case study: Start at the Advanced Search page Type your topic into the first search box Scroll down the page to Document Type and click on Case Study, so it limits your search to Case Studies.

  12. tituHere/SQL-Case-Study

    A comprehensive collection of SQL case studies, queries, and solutions for real-world scenarios. This repository provides a hands-on approach to mastering SQL skills through a series of case studies, including table structures, sample data, and SQL queries. - GitHub - tituHere/SQL-Case-Study: A comprehensive collection of SQL case studies, queries, and solutions for real-world scenarios. This ...

  13. PDF Lecture 10: Case Studies

    Case Studies Case Study: Microsoft Azure SQL. MSSQL CTR: Recovery Protocol. •Phase 1: Analysis. Identify the sate of every txn in the log. •Phase 2: Redo. Recover the main table and version store to their state at the time of the crash. The database is available and online after this phase. •Phase 3: Undo.

  14. PDF Microsoft Word

    The aim of this case study is to design and develop a database for the hospital to maintain the records of various departments, rooms, and doctors in the hospital. It also maintains records of the regular patients, patients admitted in the hospital, the check up of patients done by the doctors, the patients that have been operated, and patients ...

  15. Case Studies and Real-World Scenarios

    Case Studies and Real-World Scenarios. Case Study 1: Query Optimization. A financial institution noticed a significant performance slowdown in their central database application, affecting their ability to serve customers promptly. After monitoring and analyzing SQL Server performance metrics, the IT team found that a specific query, part of a ...

  16. Research: Business Case Studies: Open Access Cases

    Find open access business case studies from Boston University and other sources. Learn from real-world examples of business challenges and solutions.

  17. Case Study #1

    Example Datasets All datasets exist within the dannys_diner database schema - be sure to include this reference within your SQL scripts as you start exploring the data and answering the case study questions. Table 1: sales The sales table captures all customer_id level purchases with an corresponding order_date and product_id information for when and what menu items were ordered.

  18. Library Guides: Case Studies: Sources for Case Studies

    How Do I Find Articles with Case Studies? Use keyword searches in article databases. For example: "case studies and airlines" or "case studies and management". Full-text articles and abstracts are available, depending on the journal.

  19. How to develop and manage a case study database as suggested by Yin

    The case study database as depicted in Figure 1.3 comprises secondary and primary data sourced from a combination of qualitative and quantitative research methods within the ...

  20. Research: Business Case Studies: Databases with Cases

    Databases with Cases One strategy for finding business cases is to search databases known to have case studies. All of the databases listed below include case studies. Business (Gale OneFile) Business Abstracts with Full Text Business Source Complete Gartner Research Hospitality & Tourism Complete International Bibliography of the Social ...

  21. Impact database : Results and submissions : REF 2021

    The impact case study database allows you to browse and search for impact case studies submitted to the REF 2021. Use the search and filters below to find the impact case studies you are looking for.

  22. Cases Database

    Academic health sciences librarians may frequently be asked by students for assistance in locating particular kinds of studies, usually by assignment from their instructors. Cases Database would be a really good source for students to find case reports.

  23. Reducing the Size of Access Databases Linked to SharePoint: A 2GB Case

    In this article, we explore a real-world scenario where a 2GB Access database, with around 750 entries, linked to a SharePoint backend, is causing performance issues. We discuss strategies to optimize and reduce the database size.

  24. Business Case Studies: Databases That Contain Case Studies

    Entrepreneurial Studies Source provides the latest insight into topics relevant to entrepreneurship and small business. This database offers users full text for more than 125 key periodicals, 135 reference books, case studies, thousands of company profiles and over 600 videos with transcripts and related articles from the Harvard Faculty Series ...

  25. Case Studies

    By discussing actual and historic case studies, the development of the tools and methods of forensic engineering as well as of the findings from these for the design, erection, supervision, inspection and maintenance of civil engineering structures will be addressed.

  26. Week 4 Discussion Forum 1 Current Assets and Current Liabilities

    Week 4 Discussion Forum 1: Current Assets and Current Liabilities Analysis Case Study. Calculate the current ratio and quick ratio for the latest two years, obtain the industry average ratios from the Mergent Online database (available through the UAGC Library) or from another outside resource of your choice, and then analyze the results. Be sure to show your calculations.
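
    The two ratios named in that assignment follow standard definitions: current ratio = current assets / current liabilities, and quick ratio excludes inventory from current assets. A quick sketch with illustrative figures (not data from Mergent Online):

    ```python
    def current_ratio(current_assets, current_liabilities):
        # Current ratio: ability to cover short-term obligations.
        return current_assets / current_liabilities

    def quick_ratio(current_assets, inventory, current_liabilities):
        # Quick ratio excludes inventory, the least liquid current asset.
        return (current_assets - inventory) / current_liabilities

    # Hypothetical balance-sheet figures for illustration only.
    print(current_ratio(500_000, 250_000))         # 2.0
    print(quick_ratio(500_000, 100_000, 250_000))  # 1.6
    ```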