Note. N = 150 ( n = 50 for each condition). Participants were on average 39.5 years old ( SD = 10.1), and participant age did not differ by condition.
a Reflects the number and percentage of participants answering “yes” to this question.
Results of Curve-Fitting Analysis Examining the Time Course of Fixations to the Target
Logistic parameter | 9-year-olds M | 9-year-olds SD | 16-year-olds M | 16-year-olds SD | t(40) | p | Cohen's d
---|---|---|---|---|---|---|---
Maximum asymptote, proportion | .843 | .135 | .877 | .082 | 0.951 | .347 | 0.302
Crossover, in ms | 759 | 87 | 694 | 42 | 2.877 | .006 | 0.840
Slope, as change in proportion per ms | .001 | .0002 | .002 | .0002 | 2.635 | .012 | 2.078
Note. For each subject, the logistic function was fit to target fixations separately. The maximum asymptote is the asymptotic degree of looking at the end of the time course of fixations. The crossover point is the point in time the function crosses the midway point between peak and baseline. The slope represents the rate of change in the function measured at the crossover. Mean parameter values for each of the analyses are shown for the 9-year-olds ( n = 24) and 16-year-olds ( n = 18), as well as the results of t tests (assuming unequal variance) comparing the parameter estimates between the two ages.
Descriptive Statistics and Correlations for Study Variables
Variable | N | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7
---|---|---|---|---|---|---|---|---|---|---
1. Internal– external status | 3,697 | 0.43 | 0.49 | — | ||||||
2. Manager job performance | 2,134 | 3.14 | 0.62 | −.08 | — | |||||
3. Starting salary | 3,697 | 1.01 | 0.27 | .45 | −.01 | — | ||||
4. Subsequent promotion | 3,697 | 0.33 | 0.47 | .08 | .07 | .04 | — | |||
5. Organizational tenure | 3,697 | 6.45 | 6.62 | −.29 | .09 | .01 | .09 | — | ||
6. Unit service performance | 3,505 | 85.00 | 6.98 | −.25 | −.39 | .24 | .08 | .01 | — | |
7. Unit financial performance | 694 | 42.61 | 5.86 | .00 | −.03 | .12 | −.07 | −.02 | .16 | — |
Means, Standard Deviations, and One-Way Analyses of Variance in Psychological and Social Resources and Cognitive Appraisals
Measure | Urban M | Urban SD | Rural M | Rural SD | F(1, 294) | η²
---|---|---|---|---|---|---
Self-esteem | 2.91 | 0.49 | 3.35 | 0.35 | 68.87 | .19 |
Social support | 4.22 | 1.50 | 5.56 | 1.20 | 62.60 | .17 |
Cognitive appraisals | ||||||
Threat | 2.78 | 0.87 | 1.99 | 0.88 | 56.35 | .20 |
Challenge | 2.48 | 0.88 | 2.83 | 1.20 | 7.87 | .03 |
Self-efficacy | 2.65 | 0.79 | 3.53 | 0.92 | 56.35 | .16 |
*** p < .001.
Results From a Factor Analysis of the Parental Care and Tenderness (PCAT) Questionnaire
PCAT item | Factor 1 | Factor 2 | Factor 3
---|---|---|---
Factor 1: Tenderness—Positive | |||
20. You make a baby laugh over and over again by making silly faces. | .04 | .01 | |
22. A child blows you kisses to say goodbye. | −.02 | −.01 | |
16. A newborn baby curls its hand around your finger. | −.06 | .00 | |
19. You watch as a toddler takes their first step and tumbles gently back down. | .05 | −.07 | |
25. You see a father tossing his giggling baby up into the air as a game. | .10 | −.03 | |
Factor 2: Liking | |||
5. I think that kids are annoying (R) | −.01 | .06 | |
8. I can’t stand how children whine all the time (R) | −.12 | −.03 | |
2. When I hear a child crying, my first thought is “shut up!” (R) | .04 | .01 | |
11. I don’t like to be around babies. (R) | .11 | −.01 | |
14. If I could, I would hire a nanny to take care of my children. (R) | .08 | −.02 | |
Factor 3: Protection | |||
7. I would hurt anyone who was a threat to a child. | −.13 | −.02 | |
12. I would show no mercy to someone who was a danger to a child. | .00 | −.05 | |
15. I would use any means necessary to protect a child, even if I had to hurt others. | .06 | .08 | |
4. I would feel compelled to punish anyone who tried to harm a child. | .07 | .03 | |
9. I would sooner go to bed hungry than let a child go without food. | .46 | −.03 |
Note. N = 307. The extraction method was principal axis factoring with an oblique (Promax with Kaiser Normalization) rotation. Factor loadings above .30 are in bold. Reverse-scored items are denoted with an (R). Adapted from “Individual Differences in Activation of the Parental Care Motivational System: Assessment, Prediction, and Implications,” by E. E. Buckels, A. T. Beall, M. K. Hofer, E. Y. Lin, Z. Zhou, and M. Schaller, 2015, Journal of Personality and Social Psychology , 108 (3), p. 501 ( https://doi.org/10.1037/pspp0000023 ). Copyright 2015 by the American Psychological Association.
Moderator Analysis: Types of Measurement and Study Year
Effect | Estimate | SE | 95% CI LL | 95% CI UL | p
---|---|---|---|---|---
Fixed effects | |||||
Intercept | .119 | .040 | .041 | .198 | .003 |
Creativity measurement | .097 | .028 | .042 | .153 | .001 |
Academic achievement measurement | −.039 | .018 | −.074 | −.004 | .03 |
Study year | .0002 | .001 | −.001 | .002 | .76 |
Goal | −.003 | .029 | −.060 | .054 | .91 |
Published | .054 | .030 | −.005 | .114 | .07 |
Random effects | |||||
Within-study variance | .009 | .001 | .008 | .011 | <.001 |
Between-study variance | .018 | .003 | .012 | .023 | <.001 |
Note . Number of studies = 120, number of effects = 782, total N = 52,578. CI = confidence interval; LL = lower limit; UL = upper limit.
Master Narrative Voices: Struggle and Success and Emancipation
Discourse and dimension | Example quote |
Struggle and success | |
Self-actualization as member of a larger gay community is the end goal of healthy sexual identity development, or “coming out” | “My path of gayness ... going from denial to saying, well this is it, and then the process of coming out, and the process of just sort of, looking around and seeing, well where do I stand in the world, and sort of having, uh, political feelings.” (Carl, age 50) |
Maintaining healthy sexual identity entails vigilance against internalization of societal discrimination | “When I'm like thinking of criticisms of more mainstream gay culture, I try to ... make sure it's coming from an appropriate place and not like a place of self-loathing.” (Patrick, age 20) |
Emancipation | |
Open exploration of an individually fluid sexual self is the goal of healthy sexual identity development | “[For heterosexuals] the man penetrates the female, whereas with gay people, I feel like there is this potential for really playing around with that model a lot, you know, and just experimenting and exploring.” (Orion, age 31) |
Questioning discrete, monolithic categories of sexual identity | “LGBTQI, you know, and added on so many letters. Um, and it does start to raise the question about what the terms mean and whether ... any term can adequately be descriptive.” (Bill, age 50) |
Integrated Results Matrix for the Effect of Topic Familiarity on Reliance on Author Expertise
Quantitative results | Qualitative results | Example quote |
When the topic was more familiar (climate change) and cards were more relevant, participants placed less value on author expertise. | When an assertion was considered to be more familiar and considered to be general knowledge, participants perceived less need to rely on author expertise. | Participant 144: “I feel that I know more about climate and there are several things on the climate cards that are obvious, and that if I sort of know it already, then the source is not so critical ... whereas with nuclear energy, I don't know so much so then I'm maybe more interested in who says what.” |
When the topic was less familiar (nuclear power) and cards were more relevant, participants placed more value on authors with higher expertise. | When an assertion was considered to be less familiar and not general knowledge, participants perceived more need to rely on author expertise. | Participant 3: “[Nuclear power], which I know much, much less about, I would back up my arguments more with what I trust from the professors.” |
Note . We integrated quantitative data (whether students selected a card about nuclear power or about climate change) and qualitative data (interviews with students) to provide a more comprehensive description of students’ card selections between the two topics.
Methodology
Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.
First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :
Second, decide how you will analyze the data .
Data is the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.
Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.
For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .
If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .
You can also take a mixed methods approach , where you use both qualitative and quantitative research methods.
Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).
If you are exploring a novel research question, you’ll probably need to collect primary data . But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.
In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .
In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .
To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
Research method | Primary or secondary? | Qualitative or quantitative? | When to use |
---|---|---|---|
Experiment | Primary | Quantitative | To test cause-and-effect relationships. |
Survey | Primary | Quantitative | To understand general characteristics of a population. |
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic. |
Observation | Primary | Either | To understand how something occurs in its natural setting. |
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic. |
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study. |
Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.
Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:
Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias .
Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).
You can use quantitative analysis to interpret data that was collected either:
Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.
Research method | Qualitative or quantitative? | When to use |
---|---|---|
Statistical analysis | Quantitative | To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations). |
Meta-analysis | Quantitative | To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner. |
Thematic analysis | Qualitative | To analyze data collected from interviews, focus groups, or textual sources. To understand general themes in the data and how they are communicated. |
Content analysis | Either | To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words). |
If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .
A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
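Drawing a simple random sample like the one described above is a one-liner in most languages. A minimal sketch in Python, where the enrollment list and seed are hypothetical:

```python
import random

# Hypothetical enrollment list of 20,000 students
population = [f"student_{i}" for i in range(20000)]

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, 100)  # simple random sample, no duplicates

print(len(sample))  # 100
```

Because `random.sample` draws without replacement, every student appears at most once in the sample.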
The research methods you use depend on the type of data you need to answer your research question .
Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.
Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).
In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .
In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.
The tables in this section present the research findings that drive many recommendations and standards of practice related to breast cancer.
Research tables are useful for presenting data. They show a lot of information in a simple format, but they can be hard to understand if you don’t work with them every day.
Here, we describe some basic concepts that may help you read and understand research tables. The sample table below gives examples.
The numbered table items are described below. You will see many of these items in all of the tables.
Studies vary in how well they help answer scientific questions. When reviewing the research on a topic, it’s important to recognize “good” studies. Good studies are well-designed.
Most scientific reviews set standards for the studies they include. These standards are called “selection criteria” and are listed for each table in this section. These selection criteria help make sure well-designed studies are included in the table.
The types of studies (for example, randomized controlled trial, prospective cohort, case-control) included in each table are listed in the selection criteria.
Learn about the strengths and weaknesses of different types of research studies .
Selection criteria for most tables include the minimum number of cases of breast cancer or participants for the studies in the table.
Large studies have more statistical power than small studies. This means the results from large studies are less likely to be due to chance than results from small studies.
You can see the power of large numbers if you think about flipping a coin. Say you are trying to figure out whether a coin is fixed so that it lands on “heads” more than “tails.” A fair coin would land on heads half the time. So, you want to test whether the coin lands on heads more than half of the time.
If you flip the coin twice and get 2 heads, you don’t have a lot of evidence. It wouldn’t be surprising to flip a fair coin and get 2 heads in a row. With 2 coin flips, you can’t be sure whether you have a fair coin or not. Even 3 or 4 heads in a row wouldn’t be surprising for a fair coin.
If, however, you flipped the coin 20 times and got mostly heads, you would start to think the coin might be fixed.
With an increasing number of observations, you have more evidence on which to base your conclusions. So, you have more confidence in your conclusions. It’s a similar idea in research.
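The coin-flip intuition can be checked with a quick binomial calculation. This is a sketch; the flip counts are illustrative:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of getting at least k heads in n flips of a coin with heads-probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(2, 2))              # 0.25 -- 2 heads in 2 flips is unremarkable for a fair coin
print(round(p_at_least(16, 20), 4))  # 0.0059 -- 16+ heads in 20 flips would be strong evidence
```

A result that a fair coin would produce a quarter of the time proves nothing; a result it would produce less than 1% of the time makes "the coin is fixed" much more plausible. Larger studies work the same way.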
Say you’re interested in finding out whether or not alcohol use increases the risk of breast cancer.
If there are only a few cases of breast cancer among the alcohol drinkers and the non-drinkers, you won’t have much confidence drawing conclusions.
If, however, there are hundreds of breast cancer cases, it’s easier to draw firm conclusions about a link between alcohol and breast cancer. With more evidence, you have more confidence in your findings.
Study design (the type of research study) and study quality are also important. For example, a small, well-designed study may be better than a large, poorly-designed study. However, when all else is equal, a larger number of people in a study means the study is better able to answer research questions.
Learn about different types of research studies .
The first column (from the left) lists either the name of the study or the name of the first author of the published article.
Below each table, there’s a reference list so you can find the original published articles.
Sometimes, a table will report the results of only one analysis. This can occur for a few reasons. Either there’s only one study that meets the selection criteria or there’s a report that combines data from many studies into one large analysis.
The second column describes the people in each study.
In some tables, more details on the people in the study are included.
Randomized controlled trials and prospective cohort studies follow people forward in time to see who will have the outcome of interest (such as breast cancer).
For these studies, one column shows the length of follow-up time. This is the number of months or years people in the study were followed.
Because case-control studies don’t follow people forward in time, there are no data on follow-up time for these studies.
Tables that focus on cumulative risk may also show the length of follow-up. These tables give the length of time, or age range, used to compute cumulative risk (for example, the cumulative risk of breast cancer up to age 70).
Learn more about cumulative risk .
Some tables have columns with other information on the study population or the topic being studied. For example, the table Exercise and Breast Cancer Risk has a column with the comparisons of exercise used in the studies.
This extra information gives more details about the studies and shows how the studies are similar to (and different from) each other.
Studies on the same topic can differ in important ways. They may define “high” and “low” levels of a risk factor differently. Studies may look at outcomes among women of different ages or menopausal status.
These differences are important to keep in mind when you review the findings in a table. They may help explain differences in study findings.
All of the information in the tables is important, but the main purpose of the tables is to present the numbers that show the risk, survival or other measures for each topic. These numbers are shown in the remaining columns of the tables.
The headings of the columns tell you what the numbers represent. For example:
Most often, findings are reported as relative risks. A relative risk shows how much higher, how much lower or whether there’s no difference in risk for people with a certain risk factor compared to the risk in people without the factor.
A relative risk compares 2 absolute risks.
The absolute risk of those with the factor divided by the absolute risk of those without the factor gives the relative risk.
Relative risk | What it means
---|---
Greater than 1 | People with the risk factor have a higher risk than people without the risk factor. A relative risk of 1.5 means someone with the risk factor has a 50 percent higher risk of breast cancer than someone without the factor. A relative risk of 2.0 means someone with the risk factor has twice the risk (or 2-fold the risk) of someone without the factor. |
Less than 1 | People with the risk factor have a lower risk than people without the risk factor. A relative risk of 0.8 means someone with the risk factor has a 20 percent lower risk of breast cancer than someone without the factor. |
1 | A relative risk of 1 means there’s no difference in risk between people with and without the risk factor. |
The confidence interval around a relative risk helps show whether or not the relative risk is statistically significant (whether or not the finding is likely due to chance).
Learn more about confidence intervals .
Say a study shows women who don’t exercise (inactive women) have a 25 percent increase in breast cancer risk compared to women who do exercise (active women).
This statistic is a relative risk (the relative risk is 1.25). It means the inactive women were 25 percent more likely to develop breast cancer than women who exercised.
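The arithmetic behind that relative risk is just a ratio of two absolute risks. A sketch with made-up counts chosen to produce 1.25:

```python
def relative_risk(cases_exposed, total_exposed, cases_unexposed, total_unexposed):
    """Relative risk = absolute risk with the factor / absolute risk without it."""
    risk_exposed = cases_exposed / total_exposed        # e.g. 30/1000 = 0.030
    risk_unexposed = cases_unexposed / total_unexposed  # e.g. 24/1000 = 0.024
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 30 breast cancer cases among 1,000 inactive women,
# 24 cases among 1,000 active women
print(round(relative_risk(30, 1000, 24, 1000), 2))  # 1.25 -> 25 percent higher risk
```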
Learn more about relative risk .
A 95 percent confidence interval (95% CI) around a risk measure means there’s a 95 percent chance the “true” measure falls within the interval.
Because there’s random error in studies, and study populations are only samples of much larger populations, a single study doesn’t give the “one” correct answer. There’s always a range of likely answers. A single study gives a “best estimate” along with a 95% CI of a likely range.
Most scientific studies report risk measures, such as relative risks, odds ratios and averages, with 95% CI.
For relative risks and odds ratios, a 95% CI that includes the number 1.0 means there’s no link between an exposure (such as a risk factor or a treatment) and an outcome (such as breast cancer or survival).
When this happens, the results are not statistically significant. This means any link between the exposure and outcome is likely due to chance.
If a 95% CI does not include 1.0, the results are statistically significant. This means there’s likely a true link between an exposure and an outcome.
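The significance check can be sketched numerically. The following uses the standard Wald confidence interval for a relative risk, computed on the log scale from a 2x2 table of cohort counts; the counts themselves are hypothetical:

```python
import math

def rr_with_ci(a, n1, c, n2, z=1.96):
    """Relative risk and 95% Wald CI (log scale) from cohort counts: a/n1 exposed, c/n2 unexposed."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical: 50 cases per 1,000 exposed vs. 40 per 1,000 unexposed
rr, lo, hi = rr_with_ci(50, 1000, 40, 1000)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
# The interval includes 1.0, so this (hypothetical) result is not statistically significant
```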
A few examples from the sample table above may help explain statistical significance.
The EPIC study found a relative risk of breast cancer of 1.07, with a 95% CI of 0.96 to 1.19. In the table, you will see 1.07 (0.96-1.19).
Women in the EPIC study who drank 1-2 drinks per day had a 7 percent higher risk of breast cancer than women who did not drink alcohol. The 95% CI of 0.96 to 1.19 includes 1.0. This means these results are not statistically significant and the increased risk of breast cancer is likely due to chance.
The Million Women’s Study found a relative risk of breast cancer of 1.13 with a 95% CI of 1.10 to 1.16. This is shown as 1.13 (1.10-1.16) in the table.
Women in the Million Women’s Study who drank 1-2 drinks per day had a 13 percent higher risk of breast cancer than women who did not drink alcohol. In this case, the 95% CI of 1.10 to 1.16 does not include 1.0. So, these results are statistically significant and suggest there’s likely a true link between alcohol and breast cancer.
For any topic, it’s important to look at the findings as a whole. In the sample table above, most studies show a statistically significant increase in risk among women who drink alcohol compared to women who don’t drink alcohol. Thus, the findings as a whole suggest alcohol increases the risk of breast cancer.
Summary relative risks from meta-analyses.
A meta-analysis takes relative risks reported in different studies and “averages” them to come up with a single, summary measure. Findings from a meta-analysis can give stronger conclusions than findings from a single study.
A pooled analysis uses data from multiple studies to give a summary measure. It combines the data from each person in each of the studies into one large data set and analyses the data as if it were one big study. A pooled analysis is almost always better than a meta-analysis.
In a meta-analysis, researchers combine the results from different studies. In a pooled analysis, researchers combine the individual data from the different studies. This usually gives more statistical power than a meta-analysis. More statistical power means it’s more likely the results are not simply due to chance.
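The “averaging” in a meta-analysis is typically inverse-variance weighting on the log scale, so that more precise studies count for more. A minimal fixed-effect sketch; the study relative risks and standard errors below are invented:

```python
import math

def fixed_effect_summary(rrs, se_logs):
    """Inverse-variance summary of study relative risks, combined on the log scale."""
    weights = [1 / se**2 for se in se_logs]  # precise studies (small SE) weigh more
    pooled_log = sum(w * math.log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))  # summary is more precise than any one study
    return math.exp(pooled_log), pooled_se

# Hypothetical study results: relative risks with standard errors of the log-RR
summary_rr, summary_se = fixed_effect_summary([1.07, 1.13, 1.25], [0.05, 0.015, 0.12])
print(round(summary_rr, 2))
```

The summary always lands between the smallest and largest study estimates, pulled toward the most precise study.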
Sometimes, study findings are presented as a cumulative risk (risk up to a certain age). This risk is often shown as a percentage.
A cumulative risk may show the risk of breast cancer for a certain group of people up to a certain age. Say the cumulative risk up to age 70 for women with a risk factor is 20 percent. This means by age 70, 20 percent of the women (or 1 in 5) with the risk factor will get breast cancer.
Lifetime risk is a cumulative risk. It shows the risk of getting breast cancer during your lifetime (or up to a certain age). Women in the U.S. have a 13 percent lifetime risk of getting breast cancer. This means 1 in 8 women in the U.S. will get breast cancer during their lifetime.
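The “13 percent” and “1 in 8” figures are the same number stated two ways; the “1 in N” form is just the reciprocal:

```python
lifetime_risk = 0.13  # 13 percent lifetime risk of breast cancer

# "1 in N" is the reciprocal of the risk: 1 / 0.13 = 7.7, rounded to 8
n = round(1 / lifetime_risk)
print(f"about 1 in {n} women")  # about 1 in 8 women
```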
Learn more about lifetime risk .
Some tables show study findings on the sensitivity and specificity of screening tests. These measures describe the quality of a breast cancer screening test.
The goals of any screening test are:
A perfect test would correctly identify everyone with no mistakes. There would be no:
No screening test has perfect (100 percent) sensitivity and perfect (100 percent) specificity. There’s always a trade-off between the two. When a test gains sensitivity, it loses some specificity.
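Sensitivity and specificity fall straight out of the four possible screening outcomes. A sketch with hypothetical counts:

```python
def screening_quality(tp, fn, fp, tn):
    """Sensitivity and specificity from screening outcomes (true/false positives/negatives)."""
    sensitivity = tp / (tp + fn)  # share of people with cancer the test catches
    specificity = tn / (tn + fp)  # share of people without cancer correctly cleared
    return sensitivity, specificity

# Hypothetical screen of 1,000 people: 100 with cancer, 900 without
sens, spec = screening_quality(tp=85, fn=15, fp=54, tn=846)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 85%, specificity 94%
```

Lowering the threshold for calling a result “positive” raises `tp` (better sensitivity) but also raises `fp` (worse specificity), which is the trade-off described above.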
Learn more about sensitivity and specificity .
You may want more detail about a study than is given in the summary table. To help you find this information, the references for all the studies in a table are listed below the table.
Each reference includes the:
PubMed , the National Library of Medicine’s search engine, is a good source for finding summaries of science and medical journal articles (called abstracts).
For some abstracts, PubMed also has links to the full text articles. Most medical journals have websites and offer their articles either for free or for a fee.
If you live near a university with a medical school or public health school, you may be able to go to the school’s medical library to get a copy of an article. Local public libraries may not carry medical journals, but they may be able to find a copy of an article from another source.
If you’re interested in learning more about health research, a basic epidemiology textbook may be a good place to start. The National Cancer Institute also has information on epidemiology studies and randomized controlled trials.
Updated 07/25/22
What is data analysis in research?
Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.
Three essential things occur during the data analysis process. The first is data organization. The second is data reduction through summarization and categorization, which together help find patterns and themes in the data for easy identification and linking. The third is the analysis itself, which researchers do in both top-down and bottom-up fashion.
On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.
We can say that data analysis and data interpretation together represent the application of deductive and inductive logic to the research data.
Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But, what if there is no question to ask? Well! It is possible to explore data even without a problem – we call it ‘Data Mining’, which often reveals some interesting patterns within the data that are worth exploring.
Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Sometimes, data analysis tells the most unforeseen yet exciting stories that were not expected when the analysis began. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.
Every kind of data describes something by assigning a specific value to it. For analysis, you need to organize these values and process and present them in a given context to make them useful. Data can come in different forms; here are the primary data types.
Data analysis in qualitative research works a little differently from the analysis of numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Extracting insight from such complex information is a challenging process; hence, qualitative data is typically used for exploratory research and data analysis.
Although there are several ways to find patterns in textual information, a word-based method is the most widely relied-on technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and look for repetitive or commonly used words.
For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find “food” and “hunger” are the most commonly used words and will highlight them for further analysis.
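That word-frequency pass can be sketched in a few lines; the interview responses below are invented:

```python
from collections import Counter

# Hypothetical interview responses
responses = [
    "We worry about food and hunger every season.",
    "Hunger is the biggest problem, then clean water.",
    "Food prices keep rising and hunger follows.",
]

# Normalize case and strip punctuation before counting
words = [w.strip(".,!?").lower() for text in responses for w in text.split()]
print(Counter(words).most_common(2))  # [('hunger', 3), ('food', 2)]
```

In practice a researcher would also drop filler words ("the", "and") before counting, but the idea is the same: frequent words flag themes worth a closer read.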
The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.
For example, researchers conducting research and data analysis for studying the concept of 'diabetes' amongst respondents might analyze the context of when and how the respondent has used or referred to the word 'diabetes.'
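A keyword-in-context pass can be sketched in a few lines of Python; the interview excerpt and the window size below are hypothetical choices for illustration:

```python
def keyword_in_context(text, keyword, window=3):
    """Return the `window` words before and after each occurrence of `keyword`."""
    words = text.lower().split()
    hits = []
    for i, w in enumerate(words):
        if w.strip(".,;'\"") == keyword:
            hits.append(" ".join(words[max(0, i - window): i + window + 1]))
    return hits

# Hypothetical interview excerpt
excerpt = ("I check my blood sugar because diabetes runs in my family, "
           "and my doctor said diabetes risk rises with age.")
for line in keyword_in_context(excerpt, "diabetes"):
    print(line)
```

Each printed line shows the keyword surrounded by its nearby words, which is the raw material the researcher then interprets.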
The scrutiny-based technique is another highly recommended text analysis method for identifying patterns in qualitative data. Compare and contrast is the most widely used method under this technique; it examines how specific texts are similar to or different from one another.
For example, to study the importance of having a resident doctor in a company, the collected data could be divided into responses from people who think hiring a resident doctor is necessary and those who think it is unnecessary. Compare and contrast works best for analyzing polls with single-answer question types.
Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.
Variable partitioning is another technique used to split variables so that researchers can draw more coherent descriptions and explanations from enormous datasets.
There are several techniques for analyzing data in qualitative research; here are some commonly used methods.
The first stage in research and data analysis is preparing the data so that raw data can be converted into something meaningful. Data preparation consists of the phases below.
Data validation is done to check whether the collected data sample meets the pre-set standards or is a biased sample; it is further divided into four different stages.
More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in fields incorrectly or skip them accidentally. Data editing is the process by which researchers confirm that the collected data is free of such errors; they run the necessary consistency and outlier checks to edit the raw data and make it ready for analysis.
Of the three phases, this is the most critical, as it involves grouping survey responses and assigning values to them. If a survey is completed by a sample of 1,000 respondents, the researcher might create age brackets to group respondents by age. It then becomes easier to analyze small data buckets than to deal with a massive data pile.
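The age-bracket coding described above can be sketched as follows; the bracket boundaries and ages are invented for illustration:

```python
def code_age_bracket(age):
    """Assign a respondent's age to a labeled bracket for grouped analysis."""
    if age <= 25:
        return "18-25"
    elif age <= 40:
        return "26-40"
    elif age <= 60:
        return "41-60"
    return "61+"

# Hypothetical respondent ages from a survey
ages = [22, 35, 47, 63, 29]
coded = [code_age_bracket(a) for a in ages]
print(coded)
```

Once every respondent carries a bracket label, frequencies per bracket can be tabulated and compared rather than working with raw ages.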
After the data is prepared for analysis, researchers can use different research and data analysis methods to derive meaningful insights. Statistical analysis is by far the most favored approach for numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods fall into two groups: descriptive statistics, used to describe data, and inferential statistics, which help compare and generalize from data.
Descriptive analysis is used to describe the basic features of the various types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. However, descriptive analysis does not support conclusions beyond the data at hand; any conclusions still rest on the hypotheses researchers have formulated. Here are a few major types of descriptive analysis methods.
In quantitative research, descriptive analysis often yields absolute numbers, but those numbers alone are rarely sufficient to explain the rationale behind them. It is therefore worth choosing the research and data analysis method best suited to your survey questionnaire and to the story you want to tell. For example, the mean is the best way to report students' average scores in schools. Descriptive statistics are the right choice when researchers intend to keep the findings limited to the sample at hand without generalizing: for example, when comparing the average turnout in two different cities, descriptive statistics are enough.
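As a quick illustration of descriptive statistics with Python's standard library (the exam scores are made up for the example):

```python
import statistics

# Hypothetical exam scores for one school's sample
scores = [72, 85, 90, 66, 78, 88, 95, 70]

mean = statistics.mean(scores)       # central tendency
median = statistics.median(scores)   # middle value, more robust to outliers
stdev = statistics.stdev(scores)     # sample standard deviation (spread)

print(f"mean={mean:.1f} median={median:.1f} sd={stdev:.1f}")
```

Reporting the spread alongside the mean is what keeps a descriptive summary honest: two schools can share a mean score while having very different distributions.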
Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.
Inferential statistics are used to make predictions about a larger population after research and data analysis of a representative sample drawn from it. For example, you could ask 100 or so audience members at a movie theater whether they like the film they are watching. Researchers then use inferential statistics on the collected sample to infer that roughly 80% to 90% of the population would like the movie.
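One way to make the movie-theater example concrete is a normal-approximation confidence interval for the sample proportion. This is a sketch, not the only valid method; the counts are hypothetical, and the approximation assumes a reasonably large random sample:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a sample proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical: 85 of 100 sampled moviegoers said they liked the film
p, lo, hi = proportion_ci(85, 100)
print(f"{p:.0%} liked it; 95% CI roughly {lo:.0%} to {hi:.0%}")
```

The interval, not the single sample percentage, is what justifies a claim like "about 80-90% of people like the movie."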
Here are two significant areas of inferential statistics.
These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.
Here are some of the commonly used methods for data analysis in research.
The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. It is clear that enterprises hoping to survive in a hypercompetitive world must be able to analyze complex research data, derive actionable insights, and adapt to new market needs.
QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them with a medium to collect data by creating appealing surveys.
Presenting Your Qualitative Analysis Findings: Tables to Include in Chapter 4
The earliest stages of developing a doctoral dissertation—most specifically the topic development and literature review stages—require that you immerse yourself in a ton of existing research related to your potential topic. If you have begun writing your dissertation proposal, you have undoubtedly reviewed countless results and findings sections of studies in order to help gain an understanding of what is currently known about your topic.
In this process, we’re guessing that you observed a distinct pattern: Results sections are full of tables. Indeed, the results chapter for your own dissertation will need to be similarly packed with tables. So, if you’re preparing to write up the results of your statistical analysis or qualitative analysis, it will probably help to review your APA editing manual to brush up on your table formatting skills. But, aside from formatting, how should you develop the tables in your results chapter?
In quantitative studies, tables are a handy way of presenting the variety of statistical analysis results in a form that readers can easily process. You've probably noticed that quantitative studies present descriptive results like mean, mode, range, standard deviation, etc., as well as the inferential results that indicate whether significant relationships or differences were found through the statistical analysis. These are pretty standard tables that you probably learned about in your pre-dissertation statistics courses.
But what if you are conducting qualitative analysis? What tables are appropriate for this type of study? This is a question we hear often from our dissertation assistance clients, and with good reason. University guidelines for results chapters often contain vague instructions that guide you to include "appropriate tables" without specifying what exactly those are. To clarify this point, we asked our qualitative analysis experts to share their recommendations for tables to include in your Chapter 4.
Demographics Tables
As with studies using quantitative methods , presenting an overview of your sample demographics is useful in studies that use qualitative research methods. The standard demographics table in a quantitative study provides aggregate information for what are often large samples. In other words, such tables present totals and percentages for demographic categories within the sample that are relevant to the study (e.g., age, gender, job title).
If conducting qualitative research for your dissertation, however, you will use a smaller sample and obtain richer data from each participant than in quantitative studies. To enhance thick description—a dimension of trustworthiness—it will help to present sample demographics in a table that includes information on each participant. Remember that ethical standards of research require that all participant information be deidentified, so use participant identification numbers or pseudonyms for each participant, and do not present any personal information that would allow others to identify the participant (Blignault & Ritchie, 2009). Table 1 provides participant demographics for a hypothetical qualitative research study exploring the perspectives of persons who were formerly homeless regarding their experiences of transitioning into stable housing and obtaining employment.
Participant Demographics
Participant ID | Gender | Age | Current Living Situation |
---|---|---|---|
P1 | Female | 34 | Alone |
P2 | Male | 27 | With Family |
P3 | Male | 44 | Alone |
P4 | Female | 46 | With Roommates |
P5 | Female | 25 | With Family |
P6 | Male | 30 | With Roommates |
P7 | Male | 38 | With Roommates |
P8 | Male | 51 | Alone |
Tables to Illustrate Initial Codes
Most of our dissertation consulting clients who are conducting qualitative research choose a form of thematic analysis . Qualitative analysis to identify themes in the data typically involves a progression from (a) identifying surface-level codes to (b) developing themes by combining codes based on shared similarities. As this process is inherently subjective, it is important that readers be able to evaluate the correspondence between the data and your findings (Anfara et al., 2002). This supports confirmability, another dimension of trustworthiness .
A great way to illustrate the trustworthiness of your qualitative analysis is to create a table that displays quotes from the data that exemplify each of your initial codes. Providing a sample quote for each of your codes can help the reader to assess whether your coding was faithful to the meanings in the data, and it can also help to create clarity about each code’s meaning and bring the voices of your participants into your work (Blignault & Ritchie, 2009).
Table 2 is an example of how you might present information regarding initial codes. Depending on your preference or your dissertation committee’s preference, you might also present percentages of the sample that expressed each code. Another common piece of information to include is which actual participants expressed each code. Note that if your qualitative analysis yields a high volume of codes, it may be appropriate to present the table as an appendix.
Initial Codes
Initial code | No. of participants contributing (n = 8) | No. of transcript excerpts assigned | Sample quote |
---|---|---|---|
Daily routine of going to work enhanced sense of identity | 7 | 12 | “It’s just that good feeling of getting up every day like everyone else and going to work, of having that pattern that’s responsible. It makes you feel good about yourself again.” (P3) |
Experienced discrimination due to previous homelessness | 2 | 3 | “At my last job, I told a couple other people on my shift I used to be homeless, and then, just like that, I get put into a worse job with less pay. The boss made some excuse why they did that, but they didn’t want me handling the money is why. They put me in a lower level job two days after I talk to people about being homeless in my past. That’s no coincidence if you ask me.” (P6) |
Friends offered shared housing | 3 | 3 | “My friend from way back had a spare room after her kid moved out. She let me stay there until I got back on my feet.” (P4) |
Mental health services essential in getting into housing | 5 | 7 | “Getting my addiction treated was key. That was a must. My family wasn’t gonna let me stay around their place without it. So that was a big help for getting back into a place.” (P2) |
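Counts like those in Table 2 (participants contributing and excerpts assigned per code) can be tallied mechanically once excerpts are coded. A minimal Python sketch, using invented participant and code pairs rather than real study data:

```python
from collections import defaultdict

# Hypothetical coded excerpts: (participant_id, code) pairs produced during coding
coded_excerpts = [
    ("P1", "daily_routine"), ("P3", "daily_routine"), ("P3", "daily_routine"),
    ("P2", "mental_health"), ("P2", "mental_health"), ("P4", "shared_housing"),
]

excerpt_counts = defaultdict(int)   # excerpts assigned per code
contributors = defaultdict(set)     # distinct participants per code

for pid, code in coded_excerpts:
    excerpt_counts[code] += 1
    contributors[code].add(pid)

for code in sorted(excerpt_counts):
    print(f"{code}: {len(contributors[code])} participants, {excerpt_counts[code]} excerpts")
```

Qualitative software such as NVivo or MAXQDA produces these tallies for you, but the underlying bookkeeping is no more than this kind of aggregation.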
Tables to Present the Groups of Codes That Form Each Theme
As noted previously, most of our dissertation assistance clients use a thematic analysis approach, which involves multiple phases of qualitative analysis that eventually result in themes that answer the dissertation’s research questions. After initial coding is completed, the analysis process involves (a) examining what different codes have in common and then (b) grouping similar codes together in ways that are meaningful given your research questions. In other words, the common threads that you identify across multiple codes become the theme that holds them all together—and that theme answers one of your research questions.
As with initial coding, grouping codes together into themes involves your own subjective interpretations, even when aided by qualitative analysis software such as NVivo or MAXQDA. In fact, our dissertation assistance clients are often surprised to learn that qualitative analysis software does not complete the analysis in the same ways that statistical analysis software such as SPSS does. While statistical analysis software completes the computations for you, qualitative analysis software does not have such analysis capabilities. Software such as NVivo provides a set of organizational tools that make the qualitative analysis far more convenient, but the analysis itself is still a very human process (Burnard et al., 2008).
Because of the subjective nature of qualitative analysis, it is important to show the underlying logic behind your thematic analysis in tables—such tables help readers to assess the trustworthiness of your analysis. Table 3 provides an example of how to present the codes that were grouped together to create themes, and you can modify the specifics of the table based on your preferences or your dissertation committee’s requirements. For example, this type of table might be presented to illustrate the codes associated with themes that answer each research question.
Grouping of Initial Codes to Form Themes
Theme / Initial codes grouped to form theme | No. of participants contributing (n = 8) | No. of transcript excerpts assigned |
---|---|---|
Assistance from friends, family, or strangers was instrumental in getting back into stable housing | 6 | 10 |
Family member assisted them to get into housing | ||
Friends offered shared housing | ||
Stranger offered shared housing | ||
Obtaining professional support was essential for overcoming the cascading effects of poverty and homelessness | 7 | 19 |
Financial benefits made obtaining housing possible | ||
Mental health services essential in getting into housing | ||
Social services helped navigate housing process | ||
Stigma and concerns about discrimination caused them to feel uncomfortable socializing with coworkers | 6 | 9 |
Experienced discrimination due to previous homelessness | ||
Feared negative judgment if others learned of their pasts | ||
Routine productivity and sense of making a contribution helped to restore self-concept and positive social identity | 8 | 21 |
Daily routine of going to work enhanced sense of identity | ||
Feels good to contribute to society/organization | ||
Seeing products of their efforts was rewarding |
Tables to Illustrate the Themes That Answer Each Research Question
Creating alignment throughout your dissertation is an important objective, and to maintain alignment in your results chapter, the themes you present must clearly answer your research questions. Conducting qualitative analysis is an in-depth process of immersion in the data, and many of our dissertation consulting clients have shared that it’s easy to lose your direction during the process. So, it is important to stay focused on your research questions during the qualitative analysis and also to show the reader exactly which themes—and subthemes, as applicable—answered each of the research questions.
Below, Table 4 provides an example of how to display the thematic findings of your study in table form. Depending on your dissertation committee’s preference or your own, you might present all research questions and all themes and subthemes in a single table. Or, you might provide separate tables to introduce the themes for each research question as you progress through your presentation of the findings in the chapter.
Emergent Themes and Research Questions
Research question | Themes that address question |
---|---|
RQ1. How do adults who have previously experienced homelessness describe their transitions to stable housing? | Theme 1: Assistance from friends, family, or strangers was instrumental in getting back into stable housing. Theme 2: Obtaining professional support was essential for overcoming the cascading effects of poverty and homelessness. |
RQ2. How do adults who have previously experienced homelessness describe returning to paid employment? | Theme 3: Self-perceived stigma caused them to feel uncomfortable socializing with coworkers. Theme 4: Routine productivity and sense of making a contribution helped to restore self-concept and positive social identity. |
Bonus Tip! Figures to Spice Up Your Results
Although dissertation committees most often wish to see tables such as the above in qualitative results chapters, some also like to see figures that illustrate the data. Qualitative software packages such as NVivo offer many options for visualizing your data, such as mind maps, concept maps, charts, and cluster diagrams. A common choice for this type of figure among our dissertation assistance clients is a tree diagram, which shows the connections between specified words and the words or phrases that participants shared most often in the same context. Another common choice of figure is the word cloud, as depicted in Figure 1. The word cloud simply reflects frequencies of words in the data, which may provide an indication of the importance of related concepts for the participants.
As you move forward with your qualitative analysis and development of your results chapter, we hope that this brief overview of useful tables and figures helps you to decide on an ideal presentation to showcase the trustworthiness of your findings. Completing a rigorous qualitative analysis for your dissertation requires many hours of careful interpretation of your data, and your end product should be a rich and detailed results presentation that you can be proud of. Reach out if we can help in any way, as our dissertation coaches would be thrilled to assist as you move through this exciting stage of your dissertation journey!
Anfara Jr., V. A., Brown, K. M., & Mangione, T. L. (2002). Qualitative analysis on stage: Making the research process more public. Educational Researcher , 31 (7), 28-38. https://doi.org/10.3102/0013189X031007028
Blignault, I., & Ritchie, J. (2009). Revealing the wood and the trees: Reporting qualitative research. Health Promotion Journal of Australia , 20 (2), 140-145. https://doi.org/10.1071/HE09140
Burnard, P., Gill, P., Stewart, K., Treasure, E., & Chadwick, B. (2008). Analysing and presenting qualitative data. British Dental Journal , 204 (8), 429-432. https://doi.org/10.1038/sj.bdj.2008.292
This report is a collaborative effort based on the input and analysis of the following individuals.
Laura Silver, Associate Director, Global Attitudes Research Christine Huang, Research Associate Laura Clancy, Research Analyst Andrew Prozorovsky, Research Assistant
Dorene Asare-Marfo, Panel Manager Sarah Austin, Research Assistant Peter Bell, Associate Director, Design and Production Janakee Chavda, Assistant Digital Producer Manolo Corichi, Research Analyst Jonathan Evans, Senior Researcher Moira Fagan, Research Associate Janell Fetterolf, Senior Researcher Shannon Greenwood, Digital Production Manager Sneha Gubbala, Research Assistant Anna Jackson, Editorial Assistant Hannah Klein, Senior Communications Manager Gar Meng Leong, Communications Manager Kirsten Lesage, Research Associate Jordan Lippert, Research Assistant Carolyn Lau, International Research Methodologist John Carlo Mandapat, Information Graphics Designer William Miner, Research Assistant Patrick Moynihan, Associate Director, International Research Methods Georgina Pizzolitto, Research Methodologist Jacob Poushter, Associate Director, Global Attitudes Research Dana Popky, Associate Panel Manager Sofia Hernandez Ramones, Research Assistant Sofi Sinozich, International Research Methodologist Maria Smerkovich, Research Associate Kelsey Jo Starr, Research Analyst Brianna Vetter, Administrative Associate Richard Wike, Director, Global Attitudes Research
We appreciate the following individuals for advising us on strategic outreach: Eugenia Mitchelstein, Associate Professor of Communication at Universidad de San Andrés (Argentina); Naziru Mikail Abubakar, Executive Director and Editor-in-Chief at the Daily Trust (Nigeria); Sebastián Lacunza, Columnist at elDiarioAR.com (Argentina); Anton Harber, Executive Director at the Campaign for Free Expression and Founder of the Mail & Guardian (South Africa); Admire Mare, Associate Professor and Head of Department of Communication and Media Studies, University of Johannesburg (South Africa); and Monicah Waceke Ndungu, Chief Operating Officer, Nation Media Group (Kenya).
Published on 9.7.2024 in Vol 26 (2024)
Authors of this article:
RTI International, Research Triangle Park, NC, United States
Claudia M Squire, MS
RTI International
3040 East Cornwallis Road
Research Triangle Park, NC, 27709-2194
United States
Phone: 1 9195416613
Email: [email protected]
Background: In-depth interviews are a common method of qualitative data collection, providing rich data on individuals' perceptions and behaviors that would be challenging to collect with quantitative methods. Researchers typically need to decide on sample size a priori. Although studies have assessed when saturation has been achieved, there is no agreement on the minimum number of interviews needed to achieve saturation. To date, most research on saturation has been based on in-person data collection. During the COVID-19 pandemic, web-based data collection became increasingly common, as traditional in-person data collection was not possible. Researchers continue to use web-based data collection methods post the COVID-19 emergency, making it important to assess whether findings around saturation differ for in-person versus web-based interviews.
Objective: We aimed to identify the number of web-based interviews needed to achieve true code saturation or near code saturation.
Methods: The analyses for this study were based on data from 5 Food and Drug Administration–funded studies conducted through web-based platforms with patients with underlying medical conditions or with health care providers who provide primary or specialty care to patients. We extracted code- and interview-specific data and examined the data summaries to determine when true saturation or near saturation was reached.
Results: The sample size used in the 5 studies ranged from 30 to 70 interviews. True saturation was reached after 91% to 100% (n=30-67) of planned interviews, whereas near saturation was reached after 33% to 60% (n=15-23) of planned interviews. Studies that relied heavily on deductive coding and studies that had a more structured interview guide reached both true saturation and near saturation sooner. We also examined the types of codes applied after near saturation had been reached. In 4 of the 5 studies, most of these codes represented previously established core concepts or themes. Codes representing newly identified concepts, other or miscellaneous responses (eg, “in general”), uncertainty or confusion (eg, “don’t know”), or categorization for analysis (eg, correct as compared with incorrect) were less commonly applied after near saturation had been reached.
Conclusions: This study provides support that near saturation may be a sufficient measure to target and that conducting additional interviews after that point may result in diminishing returns. Factors to consider in determining how many interviews to conduct include the structure and type of questions included in the interview guide, the coding structure, and the population under study. Studies with less structured interview guides, studies that rely heavily on inductive coding and analytic techniques, and studies that include populations that may be less knowledgeable about the topics discussed may require a larger sample size to reach an acceptable level of saturation. Our findings also build on previous studies looking at saturation for in-person data collection conducted at a small number of sites.
In-depth interviews are commonly used to collect qualitative data for a wide variety of research purposes across many subject matter areas. These types of interviews are an ideal approach for examining individuals’ perceptions and behaviors at a level of depth, complexity, and richness that would be challenging to achieve with quantitative data collection methods. Typically, trained interviewers conduct interviews using a guide designed to address the study’s key research aims by asking a series of questions and probes ordered by topic. These interview guides can range from highly structured to completely unstructured (eg, loosely organized conversations). Following the completion of data collection, interview notes and transcripts generated from audio recordings of the interviews are analyzed to assess for patterns in responses among the interviewees or subsets of the participants [ 1 , 2 ].
During the COVID-19 pandemic, web-based data collection became increasingly common, as traditional in-person data collection was not possible, and researchers continue to use web-based data collection methods post the COVID-19 emergency, citing advantages such as accessing marginalized populations, achieving greater geographic diversity, being able to offer a more flexible schedule, and saving on travel expenses [ 3 ]. Potential concerns about web-based data collection, such as the inability to build rapport and data richness, have been largely unfounded [ 3 , 4 ].
While we do not expect web-based data collection to supplant in-person research, it continues to show signs of growth. To date, much of the research on qualitative methods has focused on in-person data collection. Consequently, it will be important to conduct research to determine if previous widely accepted findings hold true for web-based data collection.
Researchers typically make a priori decisions about the number of interviews to conduct with the aim of balancing the need for sufficient data with resource limitations and respondent burden. The concept of saturation is frequently used to justify the study’s rigor with respect to the selected sample size. To provide empirically based recommendations on adequate minimum sample sizes, researchers have conducted studies to assess when saturation occurs. However, multiple types of saturation exist—such as theoretical, thematic, code, and meaning—and within each type of saturation, the definitions and measurement approaches used by investigators vary substantially, as does the level of detail researchers report in publications about their methods for achieving and assessing saturation [ 5 ].
This study aimed to examine the number of interviews needed to obtain code saturation for 5 recently conducted studies funded by the Food and Drug Administration [ 6 ] involving web-based interviews. Specifically, how many web-based interviews are needed to obtain true code saturation (ie, the use of 100% of all codes applied in the study) and how many web-based interviews are needed to achieve near code saturation (ie, the use of 90% of all codes applied in the study)?
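Under these definitions, saturation points can be checked post hoc by scanning interviews in order and noting when the cumulative set of unique codes reaches 90% and then 100% of all codes applied. The sketch below uses hypothetical per-interview code sets, not the study's data:

```python
import math

def saturation_points(interview_codes, near=0.90):
    """Return (near_sat, true_sat): the 1-based interview numbers at which the
    cumulative unique codes reach `near` * total codes and 100% of total codes."""
    all_codes = set().union(*interview_codes)
    near_target = math.ceil(near * len(all_codes))
    seen, near_at, true_at = set(), None, None
    for i, codes in enumerate(interview_codes, start=1):
        seen |= set(codes)
        if near_at is None and len(seen) >= near_target:
            near_at = i
        if true_at is None and seen == all_codes:
            true_at = i
    return near_at, true_at

# Hypothetical codes applied in each of six interviews
interviews = [
    {"c1", "c2", "c3"}, {"c2", "c4"}, {"c1", "c5", "c6"},
    {"c3", "c7", "c8", "c9"}, {"c2", "c9"}, {"c10"},
]
print(saturation_points(interviews))
```

In this toy run, 90% of the codes appear by the fourth interview while the final code does not surface until the sixth, mirroring the diminishing returns the study describes.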
Multiple authors have defined saturation as the point during data collection and analysis, at which no new additional data are found that reveal a new conceptual category [ 7 - 13 ] or theme related to the research question—an indicator that further data collection is redundant [ 11 ]. Additionally, Coenen et al [ 14 ] specified that no new second-level themes are revealed in 2 consecutive focus groups or interviews.
Other authors have distinguished between various types of saturation. One of the most common types of saturation mentioned in the literature is theoretical saturation, which emerges from grounded theory and occurs when the concepts of a theory are fully reflected in the data and no new insights, themes, or issues are identified from the data [ 5 , 11 , 12 , 15 - 18 ]. Hennink et al [ 17 ] expanded this definition, adding that all relevant conceptual categories should have been identified, thus emphasizing the importance of sample adequacy over sample size. Guest et al [ 15 ] operationalized the concept of theoretical saturation as the point in data collection and analysis when new information produces little or no change to the codebook, and van Rijnsoever [ 19 ] operationalized it as being when all the codes have been observed once in the sample.
Some authors have defined theoretical saturation, thematic saturation, and data saturation as the same concept [ 16 , 18 ], whereas others have defined these terms differently [ 12 , 20 ]. For example, some authors have defined thematic saturation as the point where no new codes or themes are emerging from the data [ 12 , 21 ]. For thematic saturation to be achieved, data should be collected until nothing new is generated [ 20 , 22 ]. Data saturation has been defined as the level to which new data are repetitive of the data that have been collected [ 12 , 23 , 24 ].
Furthermore, Hennink et al [ 17 ] distinguished between code saturation and meaning saturation. Code saturation is based on primary or parent codes and relates to the quantity of the data (“hearing it all”). Meaning saturation is based on sub or child codes and relates to the quality or richness of the data (“understanding it all”). Constantinou et al [ 7 ] made the point that it is the categorization of the raw data, rather than the data, that are saturated.
The literature reflects multiple methods that have been used to determine saturation [ 7 - 10 , 13 - 18 , 21 , 25 ]. Sim et al [ 26 ] discussed the four general approaches that have been used to determine sample size for qualitative research: (1) rules of thumb, based on a combination of methodological considerations and past experience; (2) conceptual models, based on specific characteristics of the proposed study; (3) numerical guidelines derived from the empirical investigation; and (4) statistical approaches, based on the probability of obtaining a sufficient sample size.
For example, Galvin [ 9 ] used a statistical approach based on binomial logic to establish the relationship between identifying a theme in a particular sample and its prevalence in the larger population: that is, the probability of detecting a theme in a sample of a given size if that theme is held by a given proportion of the population. Using the probability equation, the researcher can determine the number of interviews needed for a stated level of confidence that all relevant themes held by a certain proportion of the population will occur within the interview sample. This method assumes the researcher knows in advance the themes that will emerge from the study and the rate at which they may occur.
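The binomial logic described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the cited study, and the function name and parameters are our own: solving 1 − (1 − p)^n ≥ c for n gives the number of interviews needed to observe, with confidence c, at least one instance of a theme held by a proportion p of the population.

```python
import math

def interviews_needed(theme_prevalence: float, confidence: float) -> int:
    """Smallest n such that a theme held by a proportion `theme_prevalence`
    of the population appears at least once in n interviews with
    probability >= `confidence`, ie, 1 - (1 - p)^n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - theme_prevalence))

# A theme held by 10% of the population, detected with 95% confidence:
print(interviews_needed(0.10, 0.95))  # 29 interviews
```

As the text notes, this approach assumes the researcher can specify in advance the prevalence of the rarest theme of interest.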
Constantinou et al [ 7 ] used the comparative method for theme saturation, which relies on both a deductive and an inductive approach to generate codes (keywords extracted from the participants’ words) and themes (codes that fall into similar categories). Themes are compared across interviews, and theme saturation is reached when the next interview does not produce any new themes. The sequence of interviews is reordered multiple times to check for order-induced error. When exploring the various methods for determining saturation, researchers reached different conclusions on when saturation was achieved (findings on saturation by other authors are presented in Multimedia Appendix 1 ) [ 7 - 10 , 13 - 17 , 21 , 25 , 27 , 28 ].
Most studies assessing saturation focused on in-person data collection or did not specify the data collection method. Given recent increases in web-based data collection, studies assessing saturation for web-based interviews are critical to ensure that recommendations regarding sample size are tailored to the mode of data collection [ 4 ]. While there is evidence to suggest that the content of data coded from in-person as compared with web-based interviews is conceptually similar [ 29 ], this is a relatively new area of exploration. Rapport may be higher with in-person as compared with web-based interviews [ 30 ], which may impact the amount and type of content generated. Additionally, participants in web-based data collection studies are more geographically diverse and may be more likely to be non-White, less educated, and less healthy than participants in in-person data collection studies [ 31 ].
This study was based on analyses from data collected for 5 Food and Drug Administration–funded studies conducted using web-based platforms, such as Zoom (Zoom Video Communications) and Adobe Connect (Adobe Systems), and focused on patients with underlying medical conditions or on health care providers who provide primary or specialty care to patients. All platforms used for these interviews offered audio and video components and allowed for the sharing of stimuli on screen. A brief description of each study is provided in Table 1 . Each study’s data had been coded and stored using NVivo software (version 11; QSR International).
| Study name | Sample size, n | General eligibility criteria | Primary objectives | Summary of topics | Length of interview (minutes) | Number of interview questions | Regions and states covered |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Study A | 30 | Patients diagnosed with a condition treated by biologic medications (eg, cancer, inflammatory bowel disease, and diabetes) | Obtain feedback on multimedia educational materials about biosimilar biologic medications | | 90 | | |
| Study B | 48 | Patients diagnosed with vulvovaginal atrophy or type 2 diabetes | Explore how patients use boxed warnings when making decisions about prescription drugs and how well the warnings meet patients’ information needs | | 30 | | |
| Study C | 70 | Primary care physicians or specialists who write at least 50 prescriptions per week | Assess how primary care physicians and specialists access, understand, and use prescription drug labeling information, including information on labels for drugs that have multiple indications | | 60 | | |
| Study D | 35 | Patients diagnosed with type 2 diabetes | Understand how patients weigh the potential benefits against possible risks and side effects, dosage and administration characteristics, and costs when selecting treatments for chronic health conditions | | 60 | | |
| Study E | 35 | Patients diagnosed with psoriasis | Understand how patients weigh the potential benefits against possible risks and side effects, dosage and administration characteristics, and costs when selecting treatments for chronic health conditions | | 60 | | |
This project was determined not to be research with human participants by Research Triangle Institute’s institutional review board (STUDY00021985). The original 5 studies that this project is based on were reviewed by Research Triangle Institute’s institutional review board and determined to be exempt under category 2ii. Participants in these studies were provided information about the measures used to protect their privacy and the confidentiality of their data in the studies’ consent forms. All participants were compensated for their time (the amount and type varied by study).
We established and applied a systematic approach to analyze all 5 study data sets. Our analytic approach was organized into 2 stages—data preparation and data analysis.
First, because previous interviews sometimes influence moderator probes—for example, the moderator asks a follow-up question based on something they heard in a previous interview—we sorted interviews from each study by interview order. We then extracted code- and interview-specific data from the NVivo databases—including transcript name, code name, number of files coded, number of associated parent and child codes, and number of coding references—and compiled these data in an Excel (Microsoft Corp) file. We then updated the Excel file with important code and interview characteristics, including the order in which interviews were conducted, whether each code was directly (ie, child codes) or indirectly (ie, parent codes) applied to transcripts (in a tiered coding scheme, direct codes are those that have no child codes, whereas indirect codes function as “parents” that have additional codes nested beneath them), and the point at which each code was first applied to an interview. Finally, we created pivot tables within each Excel file to compile the data.
Once the data were compiled, the data summaries were examined to determine when true saturation and near saturation occurred during data collection. True saturation was defined as 100% of all applied codes being used; near saturation was defined as 90% of all applied codes being used. We calculated saturation separately for each study’s data set, and we calculated saturation separately for all codes (ie, parent and child codes) as compared with direct codes (ie, child codes only). True saturation and near saturation points were identified by calculating the cumulative percentage of new codes for each interview, flagging when 100% and 90% of applied codes had been used.
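The cumulative-percentage calculation described above is straightforward to automate. The sketch below is our own illustration, assuming a simple list-of-code-lists input rather than the authors’ Excel pivot tables:

```python
def saturation_points(interview_codes):
    """interview_codes: the codes applied to each interview, in interview
    order. Returns the 1-based interview numbers at which 90% (near
    saturation) and 100% (true saturation) of all applied codes have
    each appeared at least once."""
    all_codes = set().union(*map(set, interview_codes))
    seen = set()
    near = true = None
    for i, codes in enumerate(interview_codes, start=1):
        seen |= set(codes)
        frac = len(seen) / len(all_codes)
        if near is None and frac >= 0.90:
            near = i
        if true is None and frac >= 1.0:
            true = i
    return near, true

# Toy example: 10 distinct codes across 4 interviews; 90% of codes have
# appeared by interview 3, and all codes by interview 4.
print(saturation_points([["a", "b", "c", "d", "e"],
                         ["f", "g", "h"], ["i"], ["j"]]))  # (3, 4)
```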
The number of web-based interviews used across the 5 studies ranged from 30 to 70 ( Table 2 ). True saturation (100% use of all applied codes) was reached in the final or near final interview ( Figure 1 ), suggesting that, even with a large sample size, additional interviews are likely to continue uncovering a small number of new codes or findings.
| Study | Total interviews, n | Coding: total codes in codebook, n | True saturation: interviews needed, n (%) | Near saturation: interviews needed, n (%) |
| --- | --- | --- | --- | --- |
| Study A | 30 | 657 | 30 (100) | 18 (60) |
| Study B | 48 | 313 | 47 (98) | 21 (44) |
| Study C | 70 | 362 | 67 (96) | 23 (33) |
| Study D | 35 | 205 | 33 (94) | 15 (43) |
| Study E | 35 | 200 | 32 (91) | 15 (43) |
Across all studies, near saturation (90% use of all applied codes) was reached near, and often before, the midpoint of data collection. In other words, only a small number of new codes or findings were uncovered once the first half of the sample had been interviewed. In absolute terms, near saturation was reached after 15 to 23 interviews, corresponding to 33% to 60% of planned interviews ( Table 2 ). Despite the participants being more geographically, and possibly demographically, diverse compared with typical in-person participants, our findings were similar to previous studies on saturation [ 10 , 15 , 17 ].
We examined the types of codes applied after near saturation had been reached. In 4 of the 5 studies, most of these codes (n=8-33, 57%-62%) represented previously established core concepts or themes, such as a trusted source of information, a behavioral intention, or a recommended change to educational material. Codes representing newly identified concepts (n=2-8, 10%-15%), other miscellaneous responses (eg, “in general”; n=6-9, 13%-41%), uncertainty or confusion (eg, “don’t know”; n=0-6, 0%-11%), or categorization for analysis (eg, “correct as compared with incorrect”; n=0-3, 0%-4%) were less commonly applied after near saturation had been reached.
The overwhelming majority of codes applied after near saturation (n=9-41, 73%-82%) had already been established in study codebooks before analysis. Only a small number of codes applied after this point (n=4-20, 18%-27%) were conceptually distinct enough to merit updating the study codebooks by including them. Likewise, most of the codes used after near saturation (n=11-35, 44%-64%) were applied to only a single interview. Far fewer codes were applied to 2 interviews (n=0-13, 0%-27%), 3 interviews (n=0-6, 0%-21%), or 4 or more interviews (n=0-12, 0%-21%).
Study B was an outlier in terms of codes applied after near saturation. This study had fewer codes representing core established concepts (n=8, 28%) and more codes representing newly identified concepts (n=7, 24%) or providing categorization for analysis (n=3, 10%) than other studies. The study also had a much higher proportion of new codes (n=20, 69%) that were added to the study codebook during analysis. These differences may be because the study sampled 2 populations with very different medical conditions (ie, type 2 diabetes as compared with vulvovaginal atrophy), leading to a broader range of applied codes.
In examining the relationship between the number of codes in each study’s codebook and the number of interviews needed, the study with the most codes (study A: 657 codes) required the largest number of interviews to reach both true saturation and near saturation. However, this pattern did not hold for the remaining studies. The study with the next highest number of codes (study C: 362 codes) was third to reach true saturation and last to reach near saturation.
All 5 study codebooks included both parent (ie, top-level codes) and child codes (ie, subcodes). We examined saturation using two analytic lenses—(1) all codes (parent and child) and (2) parent codes only—to determine if there were differences in when saturation was reached. We found no differences in when true saturation was reached. However, near saturation was reached slightly later (ie, after an additional 3 to 4 interviews) when examining only parent codes ( Figure 2 ).
In total, 3 of the studies had codebooks that consisted almost entirely of deductive (ie, concept-driven) codes, whereas the codebooks in the remaining 2 studies contained a mix of both deductive and inductive (ie, data-driven) codes. Although the results were largely consistent across the 5 studies, as expected, the studies that relied heavily on deductive coding reached both true saturation and near saturation sooner. This finding suggests that studies using more inductive coding and analytic techniques may require slightly larger sample sizes to reach saturation.
Although all the studies used a semistructured interview guide, the level of structure varied across studies. The 3 studies (ie, studies C, D, and E) that had a more structured interview guide (eg, questions for which participants were asked their preference among discrete choices or the range of likely answers was limited) reached both true saturation and near saturation sooner. In fact, the study with the most structured guide reached near saturation the soonest, although it fell in the middle for true saturation. This finding suggests that studies using a less structured interview guide may need to conduct more interviews to reach an acceptable level of saturation.
Although true saturation was not reached until the final interview or close to the final interview, near saturation was reached much sooner, ranging from just below to just above the midpoint of data collection, with most of the studies falling just below the midpoint. Although additional interviews conducted after near saturation may result in new information, our findings suggest there may be diminishing returns relative to the resources expended. We have identified several study characteristics that researchers can consider when making decisions on sample size for web-based interviews.
Although our findings were mostly consistent across the 5 studies we examined, near saturation was reached sooner on the studies that consisted of largely deductive codes compared with those that had a greater number of inductive codes. Consequently, researchers should consider their analytic approach when determining sample size. Studies that intend for the coding scheme to be iterative throughout the coding process may want to err on the side of having a slightly higher sample size than if the codebook is expected to consist largely of deductive codes tied to the interview guide.
These studies ranged in length from 30 to 90 minutes, and a majority (n=3) lasted 60 minutes. Although the 90-minute study reached both true saturation and near saturation at the latest point, the shortest interview (at 30 minutes) required the second-highest number of interviews to reach both saturation points. Although the length of the interview may be a minor consideration, the level of structure of the interview guide and the types of codes used seem to be larger drivers.
Our findings point to the need for a slightly higher number of interviews to reach an acceptable level of saturation—categorized by us as near code saturation—than what has been found in other studies. For example, Guest et al [ 15 ] found that 6 interviews were enough to get high-level themes, reaching a plateau at 10 to 12 interviews. Similarly, Young and Casey [ 27 ] found that near code saturation was reached at 6 to 9 interviews.
Our findings also build on previous studies looking at saturation for in-person data collection conducted at a small number of sites. Data from our studies included participants from all US Census Bureau regions, which provides support that these findings may be more generalizable than previous studies.
Our study had several limitations. First, our analysis was conducted on a sample of 5 studies that had similarities. All the studies were related to the medical field, and our study populations (patients with an identified medical condition and health care providers) were knowledgeable about the topics discussed. Second, all the studies were conducted using semistructured interview guides that leaned toward being more structured (ie, interviewers largely stuck to scripted probes as compared with guides that allow for unscripted follow-up probes and unstructured conversations). Additionally, all the studies used a similar approach to coding by using a mix of both deductive and inductive codes (though to varying extents). Consequently, studies with a less structured approach to both the interview and coding process may yield different results. Finally, all our studies are broadly classified as social science research. The findings for other fields of inquiry, such as economic or medical studies, may differ.
Saturation is an important consideration in planning and conducting qualitative research, yet, there is no definitive guidance on how to define and measure saturation, particularly for web-based data collection, which allows for data to be collected from a more geographically diverse sample. Our study provides support that near saturation may be a sufficient measure to target and that conducting additional interviews after that point may result in diminishing returns. Factors to consider in determining how many interviews to conduct include the structure and type of questions included in the interview guide, the coding structure, and the population being studied. Studies with less structured interview guides, studies that rely heavily on inductive coding and analytic techniques, and studies that include populations that may be less knowledgeable about the topics discussed may require a larger sample size to reach an acceptable level of saturation. Rather than trying to reach a consensus on the number of interviews needed to achieve saturation in qualitative research overall, we recommend that future research should explore saturation within different types of studies, such as different fields of inquiry, subject matter, and populations being studied. Creating a robust body of knowledge in this area will allow researchers to identify the guidance that best meets the needs of their work.
Research Triangle Institute–affiliated authors received support for the development of this manuscript from the RTI Fellow’s program under RTI Fellow, Leila Kahwati, MPH, MD. All studies included in the analyses were funded by the Food and Drug Administration. The authors would like to thank the following Food and Drug Administration staff for their contribution to this research: Kit Aikin, Kevin Betts, Amie O’Donoghue, and Helen Sullivan.
The data sets analyzed during this study are available from the corresponding author on reasonable request.
None declared.
Achieving saturation in interviews: saturation type, methods for achieving saturation, and findings by other authors.
Edited by A Mavragani; submitted 22.09.23; peer-reviewed by K Kelly, G Guest; comments to author 24.10.23; revised version received 30.01.24; accepted 09.05.24; published 09.07.24.
©Claudia M Squire, Kristen C Giombi, Douglas J Rupert, Jacqueline Amoozegar, Peyton Williams. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 09.07.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
Respiratory Research, volume 25, Article number: 268 (2024)
Lung ultrasound (LUS) is an emerging technique used in the intensive care unit (ICU). The derived LUS aeration score has been shown to be associated with mortality in invasively ventilated patients. This study assessed the predictive value of baseline and early changes in LUS aeration scores for 30- and 90-day mortality in critically ill invasively ventilated patients with and without ARDS (acute respiratory distress syndrome).
This is a post hoc analysis of a multicenter prospective observational cohort study, which included patients admitted to the ICU with an expected duration of ventilation of at least 24 h. We restricted the analysis to patients who underwent a 12-region LUS exam at baseline and for whom the primary endpoint (30-day mortality) was available. Logistic regression was used to analyze the primary and secondary endpoints. The analysis was performed for the complete patient cohort and for predefined subgroups (ARDS and no ARDS).
A total of 442 patients were included, of whom 245 had a second LUS exam. The baseline LUS aeration score was not associated with mortality (1.02 (95% CI: 0.99 – 1.06), p = 0.143). This finding was not different in patients with and in patients without ARDS. Early deterioration of the LUS score was associated with mortality (2.09 (95% CI: 1.01 – 4.3), p = 0.046) in patients without ARDS, but not in patients with ARDS or in the complete patient cohort.
In this cohort of critically ill invasively ventilated patients, the baseline LUS aeration score was not associated with 30- and 90-day mortality. An early change in the LUS aeration score was associated with mortality, but only in patients without ARDS.
ClinicalTrials.gov, ID NCT04482621.
Acute respiratory distress syndrome (ARDS) is characterized by bilateral pulmonary opacities on imaging, accompanied by hypoxemia within one week of a known clinical insult [ 1 ]. The presence of ARDS in invasively ventilated patients is associated with high mortality and morbidity [ 2 ]. The pulmonary edema present in ARDS can be quantified at the bedside using the chest X-ray-based Radiographic Assessment of Lung Edema (RALE) score or by estimating extravascular lung water with a transpulmonary thermodilution method [ 3 , 4 ]. These techniques have been shown to have predictive value for mortality in ARDS patients [ 5 , 6 , 7 , 8 ]. However, they are invasive or require radiation.
Lung ultrasound (LUS) is a non-invasive, easy to learn, bedside technique that does not require radiation. It can accurately quantify the extent of pulmonary edema through the LUS aeration score [ 9 , 10 , 11 ]. The LUS aeration score was identified as a predictor for mortality by several studies in adult patients with COVID-19 [ 12 , 13 , 14 ]. However, the predictive value of the LUS aeration score remains unknown in ARDS patients without COVID-19 or in invasively ventilated patients without ARDS on mortality. Furthermore, the previous studies only assessed the predictive value of LUS aeration scores on admission, while early changes in the extent of pulmonary edema could be of additional predictive value [ 15 ].
In this study, we assessed the association of the baseline LUS aeration score and of early changes in LUS aeration scores with mortality in critically ill invasively ventilated patients with and without ARDS. We hypothesized that both a higher baseline LUS aeration score and an early increase in the LUS aeration score are associated with higher 30- and 90-day mortality in patients with and without ARDS.
This is a post hoc analysis of patients included in the ‘Diagnosis of Acute Respiratory disTress Syndrome’ (DARTS) project. This multicentre prospective observational cohort study recruited patients from March 27, 2019 until February 27, 2021 in two hospitals in the Netherlands; (1) Amsterdam University Medical Center (Amsterdam UMC), location Academic Medical Center (AMC) and (2) Maastricht University Medical Center + (MUMC +). The protocol was approved by the institutional ethics committees of both centers (ref: W18_311 #18.358 and 2019–1137) and patients or legal representatives provided deferred consent for the use of data. The protocol of the DARTS project was previously published [ 16 ].
Adult patients were included in the study if they were admitted to a participating ICU and were expected to be invasively ventilated for at least 24 h. Patients were excluded if they had received invasive ventilation more than 48 h in the last 7 days or were receiving invasive ventilation by a tracheostomy. This post hoc analysis was restricted to patients who received a 12-region LUS exam at inclusion and had data on 30-day mortality available. ARDS was diagnosed by an expert panel according to the Berlin criteria using chest imaging, clinical parameters, and blood gas analysis [ 17 ].
Patients received a 12-region LUS exam at inclusion and 24 h after inclusion by three dedicated investigators [ 16 , 18 ]. During the LUS exam, patients were positioned in the supine position. LUS exams were performed with a linear probe using the clinically available ultrasound device. The use of other probes was allowed when the linear probe did not generate a sufficient image. Patients were scanned at two anterior, two lateral and two posterior locations per hemithorax [ 16 ]. Each LUS image was scored as ‘0’ when A-lines were present, as ‘1’ when more than two B-lines covered < 50% of the pleura, as ‘2’ when B-lines covered > 50% of the pleura, and as ‘3’ when a consolidation of the lung was present (Fig. 1 ). If a lung region could not be scored or scanned (e.g., subcutaneous emphysema, chest drains, or wounds), the mean LUS aeration score of the same lung region group (anterior, lateral, or posterior) was used as a substitute. Patients with more than four missing regions were excluded from this analysis. The LUS aeration score was calculated as the sum of the LUS aeration scores in the 12 regions and could range from 0 to 36.
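The scoring and imputation rules above can be summarized in a short sketch. The region naming and data layout here are our assumptions; the substitution rule (mean of the same anatomical group) and the exclusion threshold follow the description in the text.

```python
from statistics import mean

# 12 regions: 2 sides x 3 areas x 2 positions (our naming convention).
REGIONS = [f"{side}_{area}_{pos}"
           for side in ("left", "right")
           for area in ("anterior", "lateral", "posterior")
           for pos in ("upper", "lower")]

def aeration_score(scores):
    """scores: dict mapping region name -> 0..3, or None when the region
    could not be scanned. Missing regions are imputed with the mean score
    of the same anatomical group (anterior, lateral, or posterior); exams
    with more than four missing regions are excluded. Range: 0-36."""
    missing = [r for r in REGIONS if scores.get(r) is None]
    if len(missing) > 4:
        raise ValueError("more than four missing regions: exam excluded")
    total = 0.0
    for r in REGIONS:
        s = scores.get(r)
        if s is None:
            area = r.split("_")[1]  # anterior / lateral / posterior
            s = mean(scores[p] for p in REGIONS
                     if area in p and scores.get(p) is not None)
        total += s
    return total
```

For example, an unscorable left anterior region would be replaced by the mean score of the three scorable anterior regions.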
A-pattern: repeating horizontal A-lines parallel to the pleural line (score 0). B1-pattern: three or more vertical B-lines starting from the pleural line and reaching the bottom of the screen, covering < 50% of the pleural line (score 1). B2-pattern: B-lines covering ≥ 50% of the pleural line (score 2). C-pattern: consolidated lung (score 3) [ 19 ]
A sensitivity analysis was conducted on the LUS aeration score using only the anterolateral regions, as the posterior regions may carry less signal (they commonly present loss of aeration) and the anterolateral regions are easy to reach (LUS darts). The LUS aeration score for the anterolateral fields can range from 0 to 24. Patients with more than three of these regions missing were excluded from this sensitivity analysis.
The early change in the LUS aeration score was calculated by subtracting the LUS aeration score at inclusion from the LUS aeration score 24 h after inclusion. A negative value therefore indicates an improvement in the LUS aeration score, whereas a positive value indicates a deterioration.
The primary endpoint of the study was the association between the LUS aeration score at baseline and 30- and 90-day mortality. Additional endpoints were (1) the association between early changes and deterioration of the LUS aeration score and 30-day mortality, (2) differences in LUS aeration scores between the predefined subgroups (ARDS and no ARDS), (3) the association between the baseline LUS aeration score and ARDS severity, and (4) the association between the baseline and early changes of the anterolateral LUS aeration score and 30-day mortality. Endpoints were adjusted for age, gender, and the Acute Physiology and Chronic Health Evaluation II (APACHE II) score, as these are prognostic variables for outcomes in the general ICU population [ 3 , 20 ].
The DARTS project sample size was based on an expected sensitivity of 80% for the exhaled breath analyses, with a minimal acceptable lower confidence limit of 65%, requiring at least 52 ARDS patients. Given a predicted ARDS incidence of 10.4%, a total sample size of at least 500 patients was needed to meet the primary endpoint. We did not calculate a sample size or perform a power analysis for this post hoc analysis.
Continuous data were reported as mean with standard deviation (SD) or median with interquartile range (IQR), depending on the distribution of the data. Categorical data were reported as counts with percentages. Group comparisons used the t test for normally distributed data and the Kruskal–Wallis or Mann–Whitney U test otherwise. The distribution of the data was assessed visually with histograms and Q-Q plots. Logistic regression was used to analyze the primary and secondary endpoints. Independent variables were assessed for multicollinearity using the variance inflation factor. A locally estimated scatterplot smoothing (LOESS) regression was employed to visualize the association between LUS aeration scores and mortality and to assess the feasibility of categorization without relying on arbitrary cut-off values. Tests were two-sided; a type I error below 5% was considered statistically significant. The analyses were performed using R (version 4.2.1; R Foundation for Statistical Computing, Vienna, Austria) in RStudio.
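Logistic-regression results such as those in the abstract are reported as odds ratios per point of the LUS aeration score (the exponentiated coefficients). As a small illustrative sketch of how such a per-point odds ratio compounds over larger score differences (our own example, not code from the study):

```python
def scaled_odds_ratio(or_per_unit: float, delta: float) -> float:
    """Odds ratio implied by a `delta`-unit difference in a predictor,
    given its per-unit odds ratio from a logistic model: OR ** delta
    (because the log-odds are linear in the predictor)."""
    return or_per_unit ** delta

# A per-point odds ratio of 1.02 implies, for a 10-point higher score:
print(round(scaled_odds_ratio(1.02, 10), 2))  # 1.22
```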
A total of 442 (85%) of the 519 patients within the DARTS project had a LUS exam at inclusion and the primary endpoint available (Fig. 2 , Table 1 ). ARDS was present in 152 (34%) of the patients and 171 (39%) patients were deceased by day 30. Patients who were deceased at day 30 were significantly older, had higher lactate levels, and had a higher APACHE II and Sequential Organ Failure Assessment (SOFA) score. Two hundred forty-five patients (55%) had a second LUS exam 24 h after inclusion and could be included in the analyses for the early changes in the LUS aeration score (Additional file 1 ).
CONSORT figure of the patient enrolment in the DARTS consortium with additional exclusion criteria for the secondary analysis of this study. MV = Mechanically ventilated; DARTS = ‘Diagnosis of Acute Respiratory Distress Syndrome’ project [ 16 ]; LUS = Lung Ultrasound
The median baseline LUS aeration score was significantly higher in patients with ARDS in comparison to patients without ARDS (13 [IQR 8, 16] vs. 5 [IQR 2, 9], p < 0.001, Fig. 3 , Additional file 2). Patients with severe ARDS had a significantly higher median baseline LUS aeration scores than patients with mild ARDS (15 [IQR 8, 20] vs. 11 [IQR 5, 13], p = 0.007). The distribution of LUS scoring in the six regions of the lungs are presented in Fig. 4 , stratified for patients with and without ARDS.
Differences in distributions of the baseline LUS aeration scores in the predefined groups. Individual patients are displayed as single-coloured dots. When a significant difference was found, the p-value was displayed above the figure. ARDS = Acute Respiratory Distress Syndrome; LUS = Lung Ultrasound
Distribution of the LUS patterns in patients with and without ARDS at baseline. The scores of the left and right lung are combined resulting in six regions per group. ARDS = acute respiratory distress syndrome; LUS = lung ultrasound, UTS = unable to score
The baseline LUS aeration scores in patients with and without ARDS were not associated with mortality at day 30 or day 90 in invasively ventilated patients in the ICU (Tables 2 and 3 , Fig. 5 ). The results remained consistent across both univariable and multivariable analyses. Visualization of the individual data points did not reveal a cut-off value at which to dichotomize the baseline LUS aeration score that would improve these results (Additional files 3-5). The results remained consistent when only anterolateral regions were analysed (Additional file 6).
Differences in the baseline and early changes (Δ) of the LUS aeration scores in survivors and deceased patients with and without ARDS. ARDS = Acute Respiratory Distress Syndrome; LUS = Lung Ultrasound
In patients without ARDS ( n = 75), deterioration of the LUS aeration score was associated with mortality (Table 4 ). This relation remained in the multivariable analysis. However, there was no association between mortality and deterioration of the LUS aeration score in patients with ARDS, or in all patients in the multivariable analysis. Furthermore, the early changes in the LUS aeration scores and the analysis of anterolateral fields did not have any additional predictive value across all patients or in the predefined subgroups (Fig. 5 , Additional file 7).
In this post hoc analysis of the DARTS project, we did not find an association between the baseline LUS aeration scores and 30- and 90-day mortality in invasively ventilated ICU patients and in the predefined ARDS subgroups. For early changes of the LUS aeration score, we did find that deterioration of the LUS aeration score in patients without ARDS was associated with 30-day mortality. However, this association was not found in ARDS patients nor in the whole cohort.
In the context of patients with ARDS, several studies have assessed the predictive value of the LUS aeration score on mortality, but predominantly in COVID-19 patients. While some of these studies showed an association between mortality and the LUS aeration score at baseline [ 12 , 14 , 21 ], other studies did not find this association [ 13 , 22 ]. In addition to these contradictory findings, there is considerable variation in the timing of the LUS exam across these studies. Some studies conducted the exam upon admission, while another performed the LUS exam seven days after admission. The studies using a longer timeframe from admission seem to find a stronger association between the LUS aeration score and mortality, potentially explaining why we did not find predictive value of the baseline LUS aeration score or early changes in the LUS aeration score on mortality in ARDS patients.
It is noteworthy that within the DARTS project, a similar study assessed the predictive value of the radiography-based RALE score for mortality in patients with and without ARDS [ 5 ]. That study showed that early changes in the RALE score have predictive value for 30-day mortality in patients with ARDS, but not in patients without ARDS. Discrepancies between its findings and ours may arise from differences in how the two imaging modalities assess lung edema. LUS has a tomographic approach, is sensitive to changes in lung aeration, and typically scans a subpleural layer of the lung. Chest X-ray (CXR), on the other hand, acquires a two-dimensional image of the entire lung, is less sensitive to changes in aeration than LUS, and therefore probably requires more edema for the RALE score to increase [ 23 ]. Furthermore, our study cohort represents a different patient group, because the LUS exams were performed per protocol in the DARTS project, while the CXRs were performed on clinical indication. Studies on the RALE score as a predictor of mortality in ventilated ICU patients with ARDS show conflicting results, similar to the LUS aeration score [ 3 , 5 , 6 , 24 , 25 , 26 , 27 ].
A strength of this prospective study is the large sample size, with multiple LUS exams per patient. Furthermore, unlike previous studies that mainly concentrated on the predictive value of the LUS aeration score for mortality in COVID-19 patients, only 11% of the patients in this study tested positive for SARS-CoV-2, making the findings more generalizable to the ICU population. Additionally, LUS has high interobserver agreement [ 28 ]. Finally, in the current study, the ARDS diagnosis was made by a panel of experts, mitigating the typical challenges associated with substantial interobserver variability in diagnosing ARDS [ 17 ]. A potential limitation of this study is the relatively short follow-up period of 24 h between the first and second LUS exam. This could have contributed to the absence of differences in the early changes of the LUS aeration score among ARDS patients, as severe pulmonary distress might not resolve or decrease within 24 h. Lastly, the study did not incorporate ventilator-free days as an endpoint, and therefore the predictive value of LUS for the duration of ventilation remains unknown.
This is the first study to highlight the predictive potential of LUS in determining mortality at day 30 in invasively ventilated patients without ARDS. While baseline LUS aeration scores did not demonstrate an association with mortality, such an association was found in the early-changes analysis with a repeated LUS exam after 24 h. After further validation of these findings, early changes in LUS aeration scores might serve as a potential indicator for predictive enrichment or as an early sign of treatment response in invasively ventilated patients without ARDS. Moving forward, the present findings should be externally validated, and additional research on the timing of the LUS exam in invasively ventilated patients is warranted. Furthermore, incorporating subpleural consolidations and pleural abnormalities into the LUS aeration score could potentially improve its predictive value for mortality in ARDS patients.
In conclusion, this study showed that early changes in the LUS aeration score have predictive value for 30-day mortality in invasively ventilated ICU patients without ARDS. No association was found between the baseline LUS aeration score and 30- and 90-day mortality in patients with or without ARDS.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
ICU: Intensive Care Unit
ARDS: Acute Respiratory Distress Syndrome
RALE: Radiographic Assessment of Lung Edema
COVID-19: Coronavirus disease 2019
DARTS: Diagnosis of Acute Respiratory disTress Syndrome
Amsterdam UMC: Amsterdam University Medical Center
AMC: Academic Medical Center
MUMC+: Maastricht University Medical Center
APACHE II: Acute Physiology and Chronic Health Evaluation II
SD: Standard Deviation
IQR: Interquartile range
LOESS: Locally estimated scatterplot smoothing
BMI: Body Mass Index
SOFA: Sequential Organ Failure Assessment
PEEP: Positive End-Expiratory Pressure
Bellani G, Laffey JG, Pham T, Fan E, Brochard L, Esteban A, et al. Epidemiology, Patterns of Care, and Mortality for Patients With Acute Respiratory Distress Syndrome in Intensive Care Units in 50 Countries. JAMA. 2016;315(8):788–800.
Matthay MA, Zemans RL, Zimmerman GA, Arabi YM, Beitler JR, Mercat A, et al. Acute respiratory distress syndrome. Nat Rev Dis Primers. 2019;5(1):18.
Warren MA, Zhao Z, Koyama T, Bastarache JA, Shaver CM, Semler MW, et al. Severity scoring of lung oedema on the chest radiograph is associated with clinical outcomes in ARDS. Thorax. 2018;73(9):840–6.
Patroniti N, Bellani G, Maggioni E, Manfio A, Marcora B, Pesenti A. Measurement of pulmonary edema in patients with acute respiratory distress syndrome. Crit Care Med. 2005;33(11):2547–54.
Filippini DFL, Hagens LA, Heijnen NFL, Zimatore C, Atmowihardjo LN, Schnabel RM, et al. Prognostic Value of the Radiographic Assessment of Lung Edema Score in Mechanically Ventilated ICU Patients. J Clin Med. 2023;12(4).
Jabaudon M, Audard J, Pereira B, Jaber S, Lefrant JY, Blondonnet R, et al. Early changes over time in the radiographic assessment of lung edema score are associated with survival in ARDS. Chest. 2020;158(6):2394–403.
Tagami T, Nakamura T, Kushimoto S, Tosa R, Watanabe A, Kaneko T, et al. Early-phase changes of extravascular lung water index as a prognostic indicator in acute respiratory distress syndrome patients. Ann Intensive Care. 2014;4:27.
Jozwiak M, Silva S, Persichini R, Anguel N, Osman D, Richard C, et al. Extravascular lung water is an independent prognostic factor in patients with acute respiratory distress syndrome. Crit Care Med. 2013;41(2):472–80.
See KC, Ong V, Wong SH, Leanda R, Santos J, Taculod J, et al. Lung ultrasound training: curriculum implementation and learning trajectory among respiratory therapists. Intensive Care Med. 2016;42(1):63–71.
Mojoli F, Bouhemad B, Mongodi S, Lichtenstein D. Lung Ultrasound for Critically Ill Patients. Am J Respir Crit Care Med. 2019;199(6):701–14.
Chiumello D, Mongodi S, Algieri I, Vergani GL, Orlando A, Via G, et al. Assessment of Lung Aeration and Recruitment by CT Scan and Ultrasound in Acute Respiratory Distress Syndrome Patients. Crit Care Med. 2018;46(11):1761–8.
Lichter Y, Topilsky Y, Taieb P, Banai A, Hochstadt A, Merdler I, et al. Lung ultrasound predicts clinical course and outcomes in COVID-19 patients. Intensive Care Med. 2020;46(10):1873–83.
Pierrakos C, Lieveld A, Pisani L, Smit MR, Heldeweg M, Hagens LA, et al. A Lower Global Lung Ultrasound Score Is Associated with Higher Likelihood of Successful Extubation in Invasively Ventilated COVID-19 Patients. Am J Trop Med Hyg. 2021;105(6):1490–7.
Ji L, Cao C, Gao Y, Zhang W, Xie Y, Duan Y, et al. Prognostic value of bedside lung ultrasound score in patients with COVID-19. Crit Care. 2020;24(1):700.
van Vught LA, Bos LDJ. COVID-19 Pathophysiology: An Opportunity to Start Appreciating Time-Dependent Variation. Am J Respir Crit Care Med. 2022;205(5):483–5.
Hagens LA, Heijnen NFL, Smit MR, Verschueren ARM, Nijsen TME, Geven I, et al. Diagnosis of acute respiratory distress syndrome (DARTS) by bedside exhaled breath octane measurements in invasively ventilated patients: protocol of a multicentre observational cohort study. Ann Transl Med. 2021;9(15):1262.
Hagens LA, Van der Ven F, Heijnen NFL, Smit MR, Gietema HA, Gerretsen SC, et al. Improvement of an interobserver agreement of ARDS diagnosis by adding additional imaging and a confidence scale. Front Med (Lausanne). 2022;9: 950827.
Smit MR, Pisani L, de Bock EJE, van der Heijden F, Paulus F, Beenen LFM, et al. Ultrasound versus Computed Tomography Assessment of Focal Lung Aeration in Invasively Ventilated ICU Patients. Ultrasound Med Biol. 2021;47(9):2589–97.
Pierrakos C, Smit MR, Pisani L, Paulus F, Schultz MJ, Constantin JM, et al. Lung Ultrasound Assessment of Focal and Non-focal Lung Morphology in Patients With Acute Respiratory Distress Syndrome. Front Physiol. 2021;12.
Ciceri F, Castagna A, Rovere-Querini P, De Cobelli F, Ruggeri A, Galli L, et al. Early predictors of clinical outcomes of COVID-19 outbreak in Milan. Italy Clin Immunol. 2020;217: 108509.
Zhao Z, Jiang L, Xi X, Jiang Q, Zhu B, Wang M, et al. Prognostic value of extravascular lung water assessed with lung ultrasound score by chest sonography in patients with acute respiratory distress syndrome. BMC Pulm Med. 2015;15:98.
Yasukawa K, Minami T, Boulware DR, Shimada A, Fischer EA. Point-of-Care Lung Ultrasound for COVID-19: Findings and Prognostic Implications From 105 Consecutive Patients. J Intensive Care Med. 2021;36(3):334–42.
Winkler MH, Touw HR, van de Ven PM, Twisk J, Tuinman PR. Diagnostic Accuracy of Chest Radiograph, and When Concomitantly Studied Lung Ultrasound, in Critically Ill Patients With Respiratory Symptoms: A Systematic Review and Meta-Analysis. Crit Care Med. 2018;46(7):e707–14.
Kotok D, Yang L, Evankovich JW, Bain W, Dunlap DG, Shah F, et al. The evolution of radiographic edema in ARDS and its association with clinical outcomes: A prospective cohort study in adult patients. J Crit Care. 2020;56:222–8.
Valk CMA, Zimatore C, Mazzinari G, Pierrakos C, Sivakorn C, Dechsanga J, et al. The Prognostic Capacity of the Radiographic Assessment for Lung Edema Score in Patients With COVID-19 Acute Respiratory Distress Syndrome-An International Multicenter Observational Study. Front Med (Lausanne). 2021;8: 772056.
Herrmann J, Adam EH, Notz Q, Helmer P, Sonntagbauer M, Ungemach-Papenberg P, et al. COVID-19 induced acute respiratory distress syndrome-a multicenter observational study. Front Med (Lausanne). 2020;7: 599533.
Al-Yousif N, Komanduri S, Qurashi H, Korzhuk A, Lawal HO, Abourizk N, et al. Inter-rater reliability and prognostic value of baseline Radiographic Assessment of Lung Edema (RALE) scores in observational cohort studies of inpatients with COVID-19. BMJ Open. 2023;13(1): e066626.
Smit MR, de Vos J, Pisani L, Hagens LA, Almondo C, Heijnen NFL, et al. Comparison of Linear and Sector Array Probe for Handheld Lung Ultrasound in Invasively Ventilated ICU Patients. Ultrasound Med Biol. 2020;46(12):3249–56.
Not applicable.
The DARTS study received funding by Health Holland (10.2.17.181PPS) via the Dutch Lung Foundation. The funders played no part in the DARTS study design, data collection, data analysis and data interpretation. Furthermore, no specific funding was allocated for this secondary analysis; resources were sourced from institutional and/or departmental channels.
Authors and affiliations.
Department of Intensive Care, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, Amsterdam, 1105 AZ, the Netherlands
Jante S. Sinnige, Daan F. L. Filippini, Laura A. Hagens, Marcus J. Schultz, Lieuwe D. J. Bos & Marry R. Smit
Department of Intensive Care, Maastricht UMC+, Maastricht University, Maastricht, 6229 HX, The Netherlands
Nanon F. L. Heijnen, Ronny M. Schnabel & Dennis C. J. J. Bergmans
School of Nutrition and Translational Research in Metabolism (NUTRIM), Maastricht University, Maastricht, 6229 ER, The Netherlands
Nanon F. L. Heijnen & Dennis C. J. J. Bergmans
Mahidol Oxford Tropical Medicine Research Unit (MORU), Mahidol University, Bangkok, 10400, Thailand
Marcus J. Schultz
Nuffield Department of Medicine, University of Oxford, Oxford, OX3 7BN, UK
Department of Anesthesia, General Intensive Care and Pain Management, Division of Cardiothoracic and Vascular Anesthesia & Critical Care Medicine, Medical University of Vienna, Vienna, Austria
Department of Pulmonology, Amsterdam UMC, University of Amsterdam, Amsterdam, 1105 AZ, The Netherlands
Lieuwe D. J. Bos
Laboratory of Experimental Intensive Care and Anaesthesiology (L.E.I.C.A.), University of Amsterdam, Amsterdam, 1105 AZ, The Netherlands
Marcus J. Schultz & Lieuwe D. J. Bos
Conceptualization and Methodology (present analysis): JS, DF, LB, and MRS; Conceptualization and Methodology (DARTS): LH, NH, RS, MJS, DB, LB, and MRS; Data Collection: LH, NH, and MRS; Writing of Original Draft Preparation, JS, LB, and MRS; Writing, Critical Review and Editing: JS, DF, LH, NH, RS, MJS, DB, LB and MRS. All authors have read and agreed to the published version of the manuscript.
Correspondence to Jante S. Sinnige .
Ethics approval and consent to participate.
Ethical approval for the protocol was obtained from the ethics committee of the Amsterdam UMC (ref: W18_311 #18.358) and from the MUMC + (ref: 2019–1137). All included patients or legal representatives provided deferred consent for the use of data.
Competing interests.
LB declares that he has received a grant from Health Holland (10.2.17.181PPS) via the Dutch Lung Foundation for the submitted work. The grant provider had no role in the study design, data collection, analysis, or interpretation of the results. Outside of the submitted work, LB declares receiving grants from the Longfonds, Innovative Medicine Initiative, Amsterdam UMC, Health Holland, ZonMW, Volition, and Santhera. Furthermore, LB has contributed to advisory boards for Sobi NL, Impentri, Novartis, AstraZeneca, and CSL Behring, and has received consulting fees from Scailyte.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary materials 1–7.

Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Sinnige, J.S., Filippini, D.F.L., Hagens, L.A. et al. Associations of early changes in lung ultrasound aeration scores and mortality in invasively ventilated patients: a post hoc analysis. Respir Res 25 , 268 (2024). https://doi.org/10.1186/s12931-024-02893-0
Received : 19 April 2024
Accepted : 26 June 2024
Published : 08 July 2024