The Value of Old-School Literature Reviews for Modern UX Research
Chances are, if you have spent any time in academia, you have encountered a literature review or been asked to conduct one yourself. Many people are tempted to roll their eyes at the idea of writing a “large book report,” but that attitude sells short a powerful research methodology. Having spent six years in academia before moving into UX research, I have seen tremendous value in, and a critical need for, applying literature reviews to modern UX research.
The work, practices, and thinking of academia often sit in a lofty “ivory tower,” deemed usable only by other “worthy” academics and rarely reaching the broader public who could also benefit from it. As a result, methods like the literature review are rarely applied to non-academic research, since they are seen as belonging solely to academia. Academic and empirical articles likewise go unused by non-academic researchers, who assume the work is not relevant to them (and the heavy academic jargon certainly does not make these texts more accessible). Once the notion of the ivory tower is broken down, however, and “academic research methods” are recognized as simply “research methods,” you can look past the jargon to the substance of an article, and the real value of the literature review comes through.
A literature review is conducted by surveying published academic papers and other information in a particular subject area (and sometimes a particular time period) to understand the work that came before and where the current research questions fit. It can piece together old information in a new way or trace how a particular research field has progressed. A literature review may also evaluate the information it presents and help the reader identify which pieces are most relevant. Its goal is not to add new contributions to the body of research but to summarize and synthesize the work that has already been done. The methodology is critical because it helps you, as the researcher, determine whether the problem you want to solve is one that other researchers and academics agree is worth solving, which is arguably one of the most important aspects of conducting UX research as well.
Determining whether the problem you want to solve is worth solving is just one of the critical insights literature reviews can provide to UX research. Conducting a literature review helps researchers cover gaps in their research, save time by identifying which of their questions have already been answered, and validate that the work they are doing will add something new and valuable. A literature review is essentially a guide to a particular topic or research question. It also gives researchers the chance to draw inspiration and insight from the literature and to ensure the research they conduct is grounded in theory and evidence rather than assumptions. Furthermore, academic articles are not just theoretical pieces: they can offer insights into new and innovative research methods and concrete findings, and even tell the reader what further research the author believes is needed to solve the problem.
Breaking down the idea that literature reviews belong solely to academia helps researchers see the real-world value and application of this methodology in modern research efforts. I think we have only scratched the surface of the value of literature reviews for UX research!
Key Lime Interactive is a user experience research and service design agency, with a sweet spot for emerging technology. As UX experts, our goal is to make your life easier, optimize user experiences, and make the world a better place.
Quick Lit Reviews Reduce UX Research Time and Supercharge Your Design
A quick and dirty literature review (Lit Review) is a way to capture and synthesize information about a topic (a design problem, a new technology, an unfamiliar business area, etc.). It is a simple structure that lets you document relevant information in an organized, intentional format. Creating a Lit Review takes relatively little time compared with formal UX research, yet it leaves you with a lasting resource that can organize your thoughts, inform your strategy, educate others, and positively influence team behavior and design.
What is a Literature Review?
You may have been exposed to a Lit Review in school as a part of undergraduate or graduate work. Lit Reviews are often performed in preparation for a master’s thesis, doctoral dissertation, or when writing journal articles (“Literature review,” 2019). A Lit Review is a survey of the available published information on a particular topic. A simple review can be composed of just a summary of sources but often includes an overview of the information available and a synthesis of the major findings (The Writing Center, n.d.).
When most people think of a Lit Review, they associate it with the highly rigorous, complex, and time-consuming Systematic Review and Meta-Analysis. This type is familiar because it is often referenced in journal articles and is performed by graduate students and academic researchers. It includes an exhaustive review of scholarly papers and recent research, plus an assessment of the search results to offset bias and ensure all relevant research is included. It then uses qualitative and quantitative methods to synthesize findings and has strict rules for structuring results (Paré & Kitsiou, 2017; Uman, 2011; Venebio, 2017). The average time to conduct a Systematic Review is 1,139 hours (J Med Libr Assoc, 2018)—hardly practical for UX!
What people often don’t realize is that the format of the Lit Review can be adapted to different fields of study and purposes. The simple Narrative Review provides a broad perspective on a topic and can be produced quickly and cheaply. It can be performed in mere hours, allows authors to select the material that interests them, does not attempt to control for selection bias, and permits simple thematic or content analysis (Paré & Kitsiou, 2017; The Writing Center, n.d.).
What is a Quick & Dirty Lit Review?
A Quick and Dirty Lit Review (Q&D Lit Review) is a Narrative Review that does not concern itself with formatting for final presentation, liberally uses copy and paste to capture useful information, and — most importantly — leverages qualitative coding techniques to analyze information as it is collected. In business we don’t have the time or budget for deep rigor, long analysis, or well-written prose, but we can still benefit from capturing information from multiple sources for analysis, reuse, and dissemination.
The Q&D Lit Review is also broadened to include non-peer-reviewed and otherwise unpublished work. In business, our specific problem often isn’t supported by an existing body of research, so information must be acquired from other sources: informal online articles, development forums, social media, conversations with colleagues, user interviews, etc. Capturing these less reputable sources lets us consider and incorporate the newest information and trends, while qualitative coding techniques let us easily compare themes across sources and quickly weigh the value of new ideas against older, tested ones.
When to do a Q&D Lit Review?
A Lit Review can be performed any time you want to get up to speed on a topic quickly. However, it is not a replacement for deeper, more rigorous research. Think of it as the first step in your UX research strategy: the Lit Review should bring your UX research needs into focus. It is ideal when you don’t yet know which questions to ask, or when you want to know what you don’t know. Expect more focused questions to arise out of your initial Lit Review.
How to perform a Q&D Lit Review
A Q&D Lit Review follows the six basic steps of all Lit Reviews (Paré & Kitsiou, 2017), but to save time and increase efficiency, steps 3, 4, 5, and 6 are done concurrently:
- Formulate your research question
- Search the literature
- Screen for the material you want to include
- Assess the quality of what you are including
- Extract the data
- Analyze the data
Figure 1 The Quick and Dirty Lit Review is structured for speed and efficiency. The six basic steps of the Narrative Review are condensed to shorten data collection and coding time.
Formulate Your Research Question & Set-Up (15-20 min)
The first step in performing a Q&D Lit Review is to consider what you are researching and formulate a clear research question. This may seem like a trivial step, but clearly formulating a research question will keep you focused and guide the rest of your actions (McCombes, 2020). At this stage your research question may be very broad. Some example questions from my own experience include:
- What should I consider when designing a Log On screen?
- How will the transition to WCAG 2.1 affect accessibility testing and accessible design?
- How can I make Tableau as accessible as possible?
- What is the best way to collect user feedback on a Drupal site page?
Often I find that the process of articulating the question yields keywords or additional sub-questions that I will use later. It also gives me a start on my inductive code set.
Note: To get an introduction to developing codes and coding qualitative data read Themes Don’t Just Emerge — Coding the Qualitative Data (Yi, 2018).
At this stage you must also set up your code book (the document where you ‘code’ your data). I like to use a table in Word because it’s easy to copy and paste into, it lets me add formatting (bold and bullets) to my text, and it retains a tabular format that makes it easy to sort and filter codes or sources and reorganize data rows. At a minimum, your code book should have three columns: Codes, Data, and Source URLs. You may add columns if you want primary and secondary codes, or if you want to easily track source type (e.g., journal, news, social media, interview) or the keywords you used to find the content.
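Tooling aside, the three-column structure is easy to mimic anywhere. As a hypothetical sketch (the codes, excerpts, and URLs below are invented for illustration, not taken from any real code book), here is a minimal code book in Python that supports the same sorting and filtering as the Word table:

```python
# Minimal code book: one row per extracted idea, mirroring the
# three-column table (code, data, source URL).
# All codes, excerpts, and URLs here are invented examples.
codebook = [
    {"code": "Error messages", "data": "Say which field failed? Sources disagree.",
     "source": "https://example.com/auth-errors"},
    {"code": "Password rules", "data": "Show requirements inline before submit.",
     "source": "https://example.com/login-ux"},
    {"code": "Password rules", "data": "Allow paste from password managers.",
     "source": "https://example.com/pw-managers"},
]

# Sort rows so entries with the same code sit together,
# like reordering rows in the table.
codebook.sort(key=lambda row: row["code"])

# Filter to a single code to review everything collected under it.
password_rows = [r for r in codebook if r["code"] == "Password rules"]
for row in password_rows:
    print(f'{row["data"]} ({row["source"]})')
```

A spreadsheet, Word table, or plain dict all work equally well; the point is simply that each extracted idea carries its code and source with it, so the collection can be regrouped at any time.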
Search the Literature (30-60 seconds per source)
Information can be acquired from any source: online magazines and journals, informal online posts, online training, development forums, social media, prior usability testing transcripts, impromptu interviews with colleagues or clients, office memos, competitor websites, etc. Printed material is also useful, but you may want to scan it to reduce keyboarding time, or be prepared to summarize the text. I have a shelf with a number of UX and software development books that I like to thumb through and extract ideas from before I begin my online search.
The broader your search, the more comprehensive your review will be, and more comprehensive means more time. Don’t lose sight of the fact that this is supposed to be quick! If you’re short on time, limit yourself to 30 or 60 minutes. If you have more time, continue searching and reviewing sources until you see the core ideas and guidance repeating.
Screen, Assess, Extract, & Analyze (5-10 min per source)
For each article (or post, interview transcript, etc.) you find, skim for content relevant to your research question. As you see relevant ideas or concepts, copy and paste them into your code book. Your codes can be words or phrases, whatever helps you organize the information.
You can also add your own commentary to the cell. I notate the data with my thoughts and questions as they occur. I’ll italicize that text so I can quickly review it later. My notes may lead me to search for additional information, or simply help me interpret the text and recall more valuable information.
Figure 2 Illustration of a code book used to answer the question “What should I consider when designing a Log On screen?” Other codes appearing in the book are also displayed.
Visuals are a major part of UX. If you see a great design pattern or illustration of ideas, take a screenshot and add it to an appendix below the table. Use image captions to briefly summarize its importance and capture the source URL.
As you cut, paste, and organize content you’ll start to see similarities between articles. You may see the same phrase or guidance repeated (sometimes often enough to suspect plagiarism). Occasionally you’ll see content that directly contradicts other guidance. This may cause you to review previous articles and re-examine their statements. You’ll find that you’re reading articles from a more analytical perspective than you would be if you were not coding the data.
As you add sources, continue to organize and re-order the code book so that similar ideas are grouped together. Create theme statements as they occur to you. Merge cells that contain very similar ideas, so that one theme represents ideas repeated by different sources. Combining screening, assessment and extraction with analysis as you read allows you to quickly synthesize and internalize the information.
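The grouping and merging described above can also be sketched programmatically. As a hypothetical illustration (the codes, excerpts, and URLs are invented), grouping rows by code collapses repeated ideas so that one theme represents what several sources say, with every supporting source preserved:

```python
from collections import defaultdict

# Invented (code, excerpt, source URL) rows for illustration.
rows = [
    ("Inline validation", "Validate before the user submits.", "https://example.com/a"),
    ("Inline validation", "Flag errors as the user types.", "https://example.com/b"),
    ("Minimal fields", "Ask only for email and password.", "https://example.com/c"),
]

# Group by code: one theme per code, all excerpts and sources kept.
themes = defaultdict(list)
for code, excerpt, source in rows:
    themes[code].append((excerpt, source))

# A code repeated across multiple sources is a candidate theme statement.
for code, items in sorted(themes.items()):
    sources = {src for _, src in items}
    print(f"{code}: {len(items)} excerpt(s) from {len(sources)} source(s)")
```

This mirrors merging cells in the table: the more sources that land under one code, the stronger the evidence that the theme is worth writing up.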
If a source lacks valuable information, copy its URL to the bottom of your table and add a short sentence summarizing the article and why you did not extract information from it. Give it a code like “No Info” so you can sort these entries out. This captures the full breadth of your research effort. It may also prove useful if, as your research develops, you realize you overlooked something valuable and want to reread a source, or if a source has basic information that later turns out to be valuable to junior team members. It is also a useful way to keep yourself on task: if you’re not copying valuable information into your code book, you may not be reading the articles you should be reading, and you may be falling victim to distraction and click-bait. Keeping yourself honest is a good way to conserve and manage your time.
Final Analysis & Report Out (5-60 min)
Once you’ve used the time you have, or once information starts to repeat, it’s time to stop searching and start reviewing what you’ve collected. At this point themes and high-level conclusions will be evident. Skim your entire code book to see if anything new jumps out when you look at the full data set. Occasionally, key guidance is not exciting enough to draw your attention at first, but when you see it repeated several times you realize its importance. Incorporate these late-stage thoughts into your theme statements and conclusions.
Figure 3. This sample code book for a Login page redesign resulted in a list of best practices, design heuristics, and common issues which helped drive requirements and design. It also facilitated a deep partnership with the security team to balance ease of access with data-security concerns. Total sources: 8. Research time: 2 hours.
Review all your themes, conclusions, and notes to ensure they are written in a manner that is meaningful to others. Create full, complete thoughts that summarize what you’ve learned and relate it to actions, behaviors, or processes that can solve your research problem. This is important for several reasons. First, it forces you to think reflectively. Reflective thinking is critical to complex problem-solving; it forces you to step back and consider how to solve a problem and how a set of problem-solving strategies can be leveraged to achieve a goal (University of Hawaii, n.d.). Second, much of the value of the Lit Review lies in its ability to quickly transfer information to others; if your thoughts are not clear and instructive, you cannot transfer knowledge. Finally, projects may be delayed or compete with other priorities. If you must revisit a project in six months, or juggle multiple projects, you want your research to remain meaningful to you.
When you do share your review, you may need to reorganize it so it tells a cohesive story for new readers. Depending on your audience, you can simply add a table of images to display the screenshots you’ve assembled. Or, if you plan to share your report with a client, you may want to convert your findings into a more narrative format and enter full citations for your sources.
As a beginner, expect to spend anywhere from four hours to a full day on your first Lit Review. Your reading speed will affect your time. (I took a speed-reading course years ago, which lets me skim many articles and quickly make value judgments; I then slowly re-read the material I believe has value for my research question.) It takes time to integrate valuable information from various sources, and you may need additional time to revisit and compare articles. If you are new to qualitative coding, expect a learning curve: it can be difficult to discern the right code set for your research problem if you are not a seasoned coder. Consider learning more about qualitative coding before you begin.
Top 10 Reasons & Tips for a Quick and Dirty Lit Review
You’re likely doing the research already.
To stay abreast of current design trends, technology innovations, and accessibility guidelines it’s likely you already read a great many UX articles, attend conferences or trainings, and network with other UX professionals. In other words, you’re already reviewing the “literature”; you’re just not documenting it in a way that makes it useful to you. If you’ve ever found yourself thinking “where did I see that?” or “what are the best practices?” in response to a design problem or question, then the structure of the Lit Review will help you.
Keep focused when researching online
We’ve all had the experience of reading an article online and then getting distracted by click-bait. Suddenly you’ve wasted an hour and have nothing to show for it. The Lit Review keeps you focused on drilling into a very specific topic: if you’re not cutting and pasting into the document, you’re not reading relevant content and need to move on.
Quickly identify patterns and contradictions
As you cut, paste, and organize content you’ll start to see similarities and contradictions between articles. This will cause you to review previous articles and re-examine their statements. You’ll find that you’re reading articles from a more analytical perspective.
Citations matter
When engaging with a client or a design or development team, disagreements are bound to arise. Your research will support your ideas and provide persuasive justification for design or process decisions. It’s not just you saying how it should be done; it’s coming from numerous well-respected professionals. Citing reputable sources adds to your own trust and credibility.
Stand on the shoulders of giants
Merriam-Webster defines an expert as “one with the special skill or knowledge representing mastery of a particular subject.” The Lit Review provides a broad understanding of the topic area and equips you with the relevant facts as well as access to the authoritative sources of those facts. That equates to mastery. Congratulations, you are now an expert.
Establish a custom heuristics set to evaluate your design
As you collect and organize your information you will begin to see patterns that define the attributes of good design. You and your team can use these as heuristics to inform your design process and to evaluate and usability test your prototypes.
Avoid the mistakes of others
People are eager to share what works and what doesn’t. With a handful of articles or informal interviews, you can assemble a quick list of potential pitfalls and then establish strategies to avoid them.
Save time in the long run
Uninspired design cycles, falling victim to common mistakes, and late stage rework are all costly and time consuming. Knowledge can be the competitive edge that distinguishes your product’s user experience from that of the competition and shortens overall development time.
Your colleagues will love you
By performing the research and distilling it down to the core themes and issues, you shorten the learning curve of your colleagues. You also increase their confidence in you.
Someone is paying you to learn new things!
The Lit Review is a great excuse to get inspired, expand your knowledge, and create a useful deliverable at the same time.
J Med Libr Assoc. (2018). It takes longer than you think: librarian time spent on systematic review tasks. Journal of the Medical Library Association (JMLA), 198–207. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/29632442
Literature review. (2019). Retrieved January 2, 2019, from https://en.wikipedia.org/wiki/Literature_review
McCombes, S. (2020). Retrieved from https://www.scribbr.com/research-process/research-questions/
Paré, G., & Kitsiou, S. (2017). Handbook of eHealth Evaluation: An Evidence-based Approach [Internet Ed.]. Victoria (BC): University of Victoria. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK481583/
The Writing Center. (n.d.). Literature Reviews. Retrieved from https://writingcenter.unc.edu/tips-and-tools/literature-reviews/
Uman, L. S. (2011). Systematic Reviews and Meta-Analyses. J Can Acad Child Adolesc Psychiatry, 20(1), 57–59.
University of Hawaii. (n.d.). Reflective Thinking: RT. Retrieved from http://www.hawaii.edu/intlrel/pols382/Reflective Thinking – UH/reflection.html
Venebio. (2017). 5 differences between a systematic review and other types of literature review. Retrieved January 2, 2019, from https://venebio.com/news/2017/09/5-differences-between-a-systematic-review-and-other-types-of-literature-review/
Yi, E. (2018). Themes Don’t Just Emerge — Coding the Qualitative Data. Medium, Project UX. Retrieved from https://medium.com/@projectux/themes-dont-just-emerge-coding-the-qualitative-data-95aff874fdce
UX Research on Conversational Human-AI Interaction: A Literature Review of the ACM Digital Library
The Complete Guide to UX Research Methods
UX research provides invaluable insight into product users and what they need and value. Not only will research reduce the risk of a miscalculated guess, it will uncover new opportunities for innovation.
By Miklos Philips
Miklos is a UX designer, product design strategist, author, and speaker with more than 18 years of experience in the design field.
“Empathy is at the heart of design. Without the understanding of what others see, feel, and experience, design is a pointless task.” —Tim Brown, CEO of the innovation and design firm IDEO
User experience (UX) design is the process of designing products that are useful, easy to use, and a pleasure to engage with. It’s about enhancing the entire experience people have while interacting with a product and making sure they find value, satisfaction, and delight in it. If a mountain peak represents that goal, the various types of UX research are the path UX designers take to the top of the mountain.
User experience research is one of the most misunderstood yet critical steps in UX design. Sometimes treated as an afterthought or an unaffordable luxury, UX research and user testing should instead inform every design decision.
Every product, service, or user interface designers create in the safety and comfort of their workplaces has to survive and prosper in the real world. Countless people will engage with our creations in an unpredictable environment over which designers have no control. UX research is the key to grounding ideas in reality and improving the odds of success, but research can be a scary word. It may sound like money we don’t have, time we can’t spare, and expertise we have to seek.
To do UX research effectively—to get a clear picture of what users think and why they do what they do, to “walk a mile in the user’s shoes,” as a favorite UX maxim goes—user experience designers and product teams must conduct user research often and regularly. Contingent upon time, resources, and budget, the deeper they can dive, the better.
What Is UX Research?
There is a long, comprehensive list of UX design research methods employed by user researchers, but at the center of them all is the user: how they think and behave, their needs and motivations. Typically, UX research gets at this through observation techniques, task analysis, and other feedback methodologies.
There are two main types of user research: quantitative (statistics: numerical data that can be counted and computed) and qualitative (insights: descriptions that can be observed but not computed).
Quantitative research is used to quantify the problem by generating numerical data or data that can be transformed into usable statistics. Common data collection methods include various forms of surveys (online, paper, mobile, and kiosk surveys), longitudinal studies, website interceptors, online polls, and systematic observations.
This user research method may also include analytics, such as Google Analytics.
Google Analytics is part of a suite of interconnected tools that help interpret data on your site’s visitors, including Data Studio, a powerful data-visualization tool, and Google Optimize, for running and analyzing dynamic A/B tests.
Quantitative data from analytics platforms should ideally be balanced with qualitative insights gathered from other UX testing methods , such as focus groups or usability testing. The analytical data will show patterns that may be useful for deciding what assumptions to test further.
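As an illustration of the quantitative analysis behind an A/B comparison like the ones run in Google Optimize, here is a minimal Python sketch of a two-proportion z-test, the standard significance check for comparing conversion rates between two design variants. The visit and conversion counts are made up for the example.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of two variants with a two-proportion
    z-test. Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant B converts 5.2% vs. A's 4.0% on 5,000 visits each.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(round(z, 2), p < 0.05)  # a gap this large on this sample is unlikely to be noise
```

In practice a library routine (e.g., one from statsmodels or SciPy) would be preferable, but the arithmetic above is what such a routine computes.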
Qualitative user research is a direct assessment of behavior based on observation. It’s about understanding people’s beliefs and practices on their terms. It can involve several different methods including contextual observation, ethnographic studies, interviews, field studies, and moderated usability tests.
Jakob Nielsen of the Nielsen Norman Group argues that UX research should emphasize insights (qualitative research). Although quantitative research has some advantages, qualitative research breaks down complicated information so that it is easy to understand, and overall it delivers better results more cost-effectively; in other words, it is much cheaper to find and fix problems during the design phase, before you start to build. Often the most important information is not quantifiable, and he goes on to suggest that “quantitative studies are often too narrow to be useful and are sometimes directly misleading.”
“Not everything that can be counted counts, and not everything that counts can be counted.” —William Bruce Cameron
Design research is not typical of traditional science; ethnography is its closest equivalent. Effective usability is contextual and depends on a broad understanding of human behavior.
Nevertheless, the types of user research you can or should perform will depend on the type of site, system, or app you are developing, your timeline, and your environment.
Top UX Research Methods and When to Use Them
Here are some examples of the types of user research performed at each phase of a project.
Card Sorting: Allows users to group and sort a site’s information into a logical structure that will typically drive navigation and the site’s information architecture. This helps ensure that the site structure matches the way users think.
Contextual Interviews: Enables the observation of users in their natural environment, giving you a better understanding of the way users work.
First Click Testing: A testing method focused on navigation, which can be performed on a functioning website, a prototype, or a wireframe.
Focus Groups: Moderated discussion with a group of users, allowing insight into user attitudes, ideas, and desires.
Heuristic Evaluation/Expert Review: A group of usability experts evaluating a website against a list of established guidelines.
Interviews: One-on-one discussions with users that show how a particular user works. They enable you to get detailed information about a user’s attitudes, desires, and experiences.
Parallel Design: A design methodology in which several designers pursue the same effort simultaneously but independently, with the intention of combining the best aspects of each in the ultimate solution.
Personas: The creation of a representative user based on available data and user interviews. Though the personal details of the persona may be fictional, the information used to create the user type is not.
Prototyping: Allows the design team to explore ideas before implementing them by creating a mock-up of the site. A prototype can range from a paper mock-up to interactive HTML pages.
Surveys: A series of questions asked of multiple users of your website that help you learn about the people who visit your site.
System Usability Scale (SUS): A technology-independent, ten-item scale for the subjective evaluation of usability.
Task Analysis: Involves learning about user goals, including what users want to do on your website, and helps you understand the tasks that users will perform on your site.
Usability Testing: Identifies user frustrations and problems with a site through one-on-one sessions in which a “real-life” user performs tasks on the site being studied.
Use Cases: Provide a description of how users use a particular feature of your website. They offer a detailed look at how users interact with the site, including the steps users take to accomplish each task.
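Of the methods above, the System Usability Scale is the only one with a fixed, published scoring rule, so it is easy to sketch in code. A minimal Python example (the respondent data is made up; the 1–5 Likert scoring is the standard SUS procedure):

```python
def sus_score(responses):
    """Score one completed SUS questionnaire (ten responses on a 1-5 scale).
    Odd-numbered (positively worded) items contribute (response - 1);
    even-numbered (negatively worded) items contribute (5 - response).
    The sum of contributions (0-40) is multiplied by 2.5 for a 0-100 score."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses, each between 1 and 5")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

# A fully neutral respondent (all 3s) scores exactly 50.
print(sus_score([3] * 10))  # 50.0
```

Individual scores are usually averaged across respondents; note that a SUS score is a percentile-style index, not a percentage of tasks completed.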
You can do user research at all stages or whatever stage you are in currently. However, the Nielsen Norman Group advises that most of it be done during the earlier phases when it will have the biggest impact. They also suggest it’s a good idea to save some of your budget for additional research that may become necessary (or helpful) later in the project.
Here is a diagram listing recommended options that can be done as a project moves through the design stages. The process will vary, and may only include a few things on the list during each phase. The most frequently used methods are shown in bold.
Reasons for Doing UX Research
Here are three great reasons for doing user research:
To create a product that is truly relevant to users
- If you don’t have a clear understanding of your users and their mental models, you have no way of knowing whether your design will be relevant. A design that is not relevant to its target audience will never be a success.
To create a product that is easy and pleasurable to use
- A favorite quote from Steve Jobs: “If the user is having a problem, it’s our problem.” If your user experience is not optimal, chances are that people will move on to another product.
To have the return on investment (ROI) of user experience design validated and be able to show:
- An improvement in performance and credibility
- Increased exposure and sales—growth in customer base
- A reduced burden on resources—more efficient work processes
Aside from the reasons mentioned above, doing user research gives insight into which features to prioritize, and in general, helps develop clarity around a project.
What Results Can I Expect from UX Research?
In the words of Mike Kuniavsky, user research is “the process of understanding the impact of design on an audience.”
User research has been essential to the success of behemoths like USAA and Amazon; Joe Gebbia, cofounder of Airbnb, is an enthusiastic proponent, testifying that its implementation helped turn things around for the company when it was floundering as an early startup.
Some of the results generated through UX research confirm that improving the usability of a site or app will:
- Increase conversion rates
- Increase sign-ups
- Increase NPS (net promoter score)
- Increase customer satisfaction
- Increase purchase rates
- Boost loyalty to the brand
- Reduce customer service calls
Additionally, and aside from benefiting the overall user experience, the integration of UX research into the development process can:
- Minimize development time
- Reduce production costs
- Uncover valuable insights about your audience
- Give an in-depth view into users’ mental models, pain points, and goals
User research is at the core of every exceptional user experience. As the name suggests, UX is subjective: the experience a person goes through while using a product. Therefore, it is necessary to understand the needs and goals of potential users, the context, and their tasks, all of which are unique to each product. By selecting appropriate UX research methods and applying them rigorously, designers can shape a product’s design and come up with products that serve both customers and businesses more effectively.
Further Reading on the Toptal Blog:
- How to Conduct Effective UX Research: A Guide
- The Value of User Research
- UX Research Methods and the Path to User Empathy
- Design Talks: Research in Action with UX Researcher Caitria O'Neill
- Swipe Right: 3 Ways to Boost Safety in Dating App Design
- How to Avoid 5 Types of Cognitive Bias in User Research
Understanding the basics
How do you do user research in UX?
UX research includes two main types: quantitative (statistical data) and qualitative (insights that can be observed but not computed), done through observation techniques, task analysis, and other feedback methodologies. The UX research methods used depend on the type of site, system, or app being developed.
What are UX methods?
There is a long list of methods employed by user researchers, but at the center of them all is the user: how they think and behave, their needs and motivations. Typically, UX research does this through observation techniques, task analysis, and other feedback methodologies.
What is the best research methodology for user experience design?
The choice of UX methodology depends on the type of site, system, or app being developed, its timeline, and its environment. There are two main types: quantitative (statistics) and qualitative (insights).
What does a UX researcher do?
A user researcher removes the need for false assumptions and guesswork by using observation techniques, task analysis, and other feedback methodologies to understand a user’s motivation, behavior, and needs.
Why is UX research important?
UX research will help create a product that is relevant to users and is easy and pleasurable to use while boosting a product’s ROI. Aside from these reasons, user research gives insight into which features to prioritize, and in general, helps develop clarity around a project.
World Leaders in Research-Based User Experience
Secondary Research in UX
February 20, 2022
You don’t have to do all the user-research work yourself. If somebody else already ran a study (and published it), grab it!
Have you ever completed a project only to find out that something very similar had already been done in your organization a couple of years ago? That situation is common, especially with rising employee-churn rates, and it has fueled the popularity of research repositories (e.g., Microsoft Human Insights System) and the growth of the research-operations community. It should also inspire practitioners to do more secondary research.
Secondary research, also known as desk research or, in academic contexts, literature review, refers to the act of gathering prior research findings and other relevant information related to a new project. It is a foundational part of any emerging research project and provides the project with background and context. Secondary research allows us to stand on the shoulders of giants and not to reinvent the wheel every time we initiate a new program or plan a study.
This article provides a step-by-step guide to conducting secondary research in UX. The key takeaway is that this type of research is not solely an intellectual exercise, but a way to minimize research costs, win over internal stakeholders, and provide scaffolding for your own projects.
Academic publications include a literature review at the beginning to showcase context or known gaps and to justify the motivation for the research questions. However, the task of incorporating previous results is becoming more and more challenging as the number of publications grows in all fields. Therefore, practitioners across disciplines (for instance, in eHealth, business, education, and technology) have developed method guidelines for secondary research.
Secondary research should be a standard first step in any rigorous research practice, but it’s also often cost-effective in more casual settings. Whether you are just starting a new project, joining an existing one, or planning a primary research effort for your team, it is always good to start with a broad overview of the field and existent resources. That would allow you to synthesize findings and uncover areas where more research is needed.
Secondary research shows which topics are particularly popular or important for your organization and what problems other researchers are trying to solve. This research method is widely discussed in library and information sciences but is often neglected in UX. Nonetheless, secondary research can be useful to uncover industry trends and to inspire further studies. For example, Jessica Pater and her colleagues looked at the foundational question of participant compensation in user studies. They could have opted for user interviews or a costly large-scale survey, yet through secondary research, they were able to review 2250 unique user studies across 1662 manuscripts published in 2018-2019. They found inconsistencies in participant compensation and suggested changes to the current practices and further research opportunities.
Secondary research can be divided into two main types: internal and external research.
Internal secondary research involves gathering all relevant research findings already available in your organization. These might include artifacts from past primary research projects, maps (e.g., customer-journey maps, service blueprints), deliverables from external consultants, or results from different kinds of workshops (e.g., discovery, design thinking, etc.). Hopefully, these will be available in a research repository.
External secondary research is focused on sources outside of your organization, such as academic journals, public libraries, open data repositories, internet searches, and white papers published by reputable organizations. For example, external resources for the field of human-computer interaction (HCI) can be found at the Association for Computing Machinery (ACM) digital library, the Journal of Usability Studies (JUS), or research websites like ours. University libraries and labs like UCSD Geisel Library, Carnegie Mellon University Libraries, MIT D-Lab, and Stanford d.school, as well as specialized portals like Google Scholar, offer another avenue for directed search.
The goal is to achieve the necessary depth, rigor, and usefulness for practitioners. Here are the four steps for conducting secondary research:
- Choose the topic of research and write a problem statement.
Write a concise description of the problem to be solved. For example, if you are doing a website redesign, you might want to both learn the current standards and look at all the previous design iterations to avoid issues that your team already identified.
- Identify external and internal resources.
Peer-reviewed publications (such as those published in academic journals and conferences) are a fairly reliable source. They always include a section describing methods, data-collection techniques, and study limitations. If a study you plan to use does not include such information, that might be a red flag and a reason to further scrutinize that source. Public datasets also often present some challenges because of errors and inclusion criteria, especially if they were collected for another purpose.
One should be cautious of seemingly reputable “research” findings published across different websites in the form of blog posts, which may be opinion pieces not backed by primary research. If you encounter such a piece, ask yourself: is the conclusion of the writeup based on a real study? If the study was quantitative, was it properly analyzed (e.g., at the very least, are confidence intervals reported, and was statistical significance evaluated)? For all studies, was the method sound and unbiased (e.g., did the study have internal and external validity)?
A more nuanced challenge involves evaluating findings based on a different audience, which might not be always generalizable to your situation, but may form hypotheses worthy of investigating. For example, if a design pattern is found okay to use by young adults, you may still want to know if this finding will also be valid for older generations.
- Collect and analyze data from external and internal resources.
Remember that secondary research involves both the existing data and existing research. Both of those categories become helpful resources when they are critically evaluated for any inherent biases, omissions, and limitations. If you already have some secondary data in your organization, such as customer service logs or search logs, you should include them in secondary research alongside any existent analysis of such logs and previous reports. It is helpful to revisit previous findings, compare how they have or have not been implemented to refresh institutional memory and support future research initiatives.
- Refine your problem statement and determine what still needs to be investigated.
Once you have collected the relevant information, write a summary of the findings and discuss them with your team. You might need to refine your problem statement to determine what information you still need to answer your research questions. The next time your team is planning to adopt a trendy new design pattern, it may be a good idea to go back and search the web or an academic database for any evaluations of that pattern.
It is important to note that secondary research is not a substitute for primary research. It is always better to do both. Although secondary research is often cost-effective and quick, its quality depends to a large extent on the quality of your sources. Therefore, before using any secondary sources, you need to identify their validity and limitations.
Secondary (or desk) research involves gathering existing data from inside and outside of your organization. A literature review should be done more frequently in UX because it is a viable option even for researchers with limited time and budget. The most challenging part is persuading yourself and your team that the existing data is worth summarizing, comparing, and collating to increase the overall effectiveness of your primary research.
Jessica Pater, Amanda Coupe, Rachel Pfafman, Chanda Phelan, Tammy Toscos, and Maia Jacobs. 2021. Standardizing Reporting of Participant Compensation in HCI: A Systematic Literature Review and Recommendations for the Field. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Article 141, 1–16. https://doi.org/10.1145/3411764.3445734
Hannah Snyder. 2019. Literature review as a research methodology: An overview and guidelines. Journal of Business Research 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039
Literature Review
A literature review is a summary and evaluation of the existing research on a particular topic. In UX, a literature review can help UX researchers and designers understand the current state of knowledge on a topic and to identify gaps or areas for further research.
A literature review typically involves searching for research materials on a specific topic, such as user behavior or design principles. The search can be conducted using databases, search engines, or other sources of research materials. Once the research materials have been identified, they are reviewed and summarized, and their quality and relevance are evaluated.
A literature review can provide several benefits for UX. First, it can help UX teams gain a better understanding of the existing research on a topic and identify key themes, trends, and gaps in the literature. This can be useful for identifying areas where further research is needed or for informing the design of a product or service.
Second, a literature review can help identify the most relevant and reliable research materials on a topic. This can be useful for UX researchers and designers looking for evidence or guidance on a specific design problem, or who want to avoid repeating research that has already been done.
Third, a literature review can help to contextualize a UX project within the broader field of UX research . It can provide a basis for comparing and contrasting a UX project with other research, and it can help to establish the contribution of the project to the existing body of knowledge.
Measurement Practices in UX Research: A Systematic Quantitative Literature Review
- Sebastian A. C. Perrig
- Lena Fanya Aeschbach
- Nicolas Scharowski
- Nick von Felten
- Florian Brühlmann
Description: User experience research relies heavily on survey scales to measure users' subjective experiences with technology. However, repeatedly raised concerns regarding the improper use of survey scales in UX research and adjacent fields call for a systematic review of current measurement practices. Therefore, we conducted a systematic literature review, screening 153 papers from four years of the ACM Conference on Human Factors in Computing Systems proceedings, of which 60 were eligible empirical studies using survey scales to study users' experiences. We identified 85 different scales and 172 distinct constructs measured. Most scales were used once (70.59%), and most constructs were measured only once (66.28%). Furthermore, results show that papers rarely contained complete rationales for scale selection (20.00%) and seldom provided all scale items used (30.00%). More than a third of all scales were adapted (34.19%), while only one-third of papers reported any scale quality investigation (36.67%). On the basis of our results, we highlight questionable measurement practices in UX research and suggest opportunities to improve scale use for UX-related constructs. Additionally, we provide recommendations to promote improved rigor in following best practices for scale-based UX research.
A Complete Guide to Primary and Secondary Research in UX Design
To succeed in UX design, you must know what UX research methods to use for your projects.
This impacts how you:
- Understand and meet user needs
- Execute strategic and business-driven solutions
- Differentiate yourself from other designers
- Be more efficient in your resources
- Innovate within your market
Primary and secondary research methods are crucial to uncovering this. The former is when you gather firsthand data directly from sources, while the latter synthesizes existing data and translates them into insights and recommendations.
Let's dive deep into each type of research method and its role in UX research.
If you are still hungry to learn more, specifically how to apply these methods in the real world, check out Michael Wong's UX research course. He teaches the exact process and tactics he used to build a UX agency that generated over $10 million in revenue.
What is primary research in UX design
Primary UX research gathers data directly from the users to understand their needs, behaviors, and preferences.
It's done through interviews, surveys, and observing users as they interact with a product.
Primary research in UX: When and why to use it
Primary research typically happens at the start of a UX project, so that the design process is grounded in a deep understanding of user needs and behaviors.
By collecting firsthand information early on, teams can tailor their designs to address real user problems.
Here are the reasons why primary research is important in UX design:
1. It fast-tracks your industry understanding
Your knowledge about the industry may be limited at the start of the project. Primary research helps you get up to speed because you interact directly with real customers. As a result, this allows you to work more effectively.
Example: Imagine you're designing an app for coffee lovers, but you're not a coffee drinker yourself. By talking directly to coffee drinkers in user interviews, you learn how they prefer to order their favorite drinks, what they love or hate about existing coffee apps, and which features are on their wishlists.
This crucial information will guide you on what to focus on in later stages when you do the actual designing.
2. You'll gain clarity and fill knowledge gaps
There are always areas we know less about than we'd like. Primary research helps fill these gaps by observing user preferences and needs directly.
Example: Let's say you're working on a website for online learning. You might assume that users prefer video lessons over written content, but your survey results show that many users prefer written material because they can learn at their own pace.
With that in mind, you'll prioritize creating user-friendly design layouts for written lessons.
3. You get to test and validate any uncertainties
When unsure about a feature, design direction, or user preference, primary research allows you to test these elements with real users.
This validation process helps you confidently move forward since you have data backing your decisions.
Example: You're designing a fitness app and can't decide between a gamified experience (with points and levels) or a more straightforward tracking system.
By prototyping both options and testing them with a group of users, you discover that the gamified experience concept resonates more.
Users are more motivated when they earn points and progress through levels. As a result, you pivot to designing the gamified experience.
Types of primary research methods in UX design
Here's a detailed look at common primary research methods in UX:
1. User interviews
- What is it: User interviews involve one-on-one conversations with users to gather detailed insights, opinions, and feedback about their experiences with a product or service.
- Best used for: Gathering qualitative insights on user needs, motivations, and pain points.
- Tools: Zoom and Google Meet for remote interviews; Calendly for scheduling; Otter.ai for transcription.
2. Surveys
- What is it: Surveys are structured questionnaires designed to collect quantitative data on user preferences, behaviors, and demographics.
- Best used for: Collecting data from many users to identify patterns and trends.
- Tools: Google Forms, SurveyMonkey, and Typeform for survey creation; Google Sheets and Notion for note-taking.
3. Usability testing
- What is it: Usability testing involves observing users interact with a prototype or the actual product to identify usability issues and areas for improvement.
- Best used for: Identifying and addressing usability problems.
- Tools: FigJam, Lookback.io, UserTesting, and Hotjar for conducting and recording sessions; InVision and Figma for prototype testing; Google Sheets to log usability issues and track task completion rates.
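Task completion rates like those logged above are simple to compute outside a spreadsheet as well. Here's a minimal sketch; the session records and field names are invented for illustration:

```python
# Hypothetical usability-test log: one record per participant per task.
# "completed" marks whether the participant finished the task unassisted.
sessions = [
    {"participant": "P1", "task": "checkout", "completed": True},
    {"participant": "P2", "task": "checkout", "completed": False},
    {"participant": "P3", "task": "checkout", "completed": True},
    {"participant": "P1", "task": "search", "completed": True},
    {"participant": "P2", "task": "search", "completed": True},
    {"participant": "P3", "task": "search", "completed": True},
]

def completion_rates(records):
    """Return {task: fraction of participants who completed it}."""
    totals, done = {}, {}
    for r in records:
        totals[r["task"]] = totals.get(r["task"], 0) + 1
        done[r["task"]] = done.get(r["task"], 0) + int(r["completed"])
    return {task: done[task] / totals[task] for task in totals}

print(completion_rates(sessions))
```

A rate well below your benchmark for a given task flags it as a candidate for redesign.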
4. Contextual inquiry
- What is it: This method involves observing and interviewing users in their natural environment to understand how they use a product in real-life situations.
- Best used for: Gaining deep insights into user behavior and the context in which a product is used.
- Tools: GoPro or other wearable cameras for in-field recording; Evernote for note-taking; Miro for organizing insights.
5. Card sorting
- What is it: Card sorting is a method in which users organize and categorize content or information into groups that make sense to them.
- Best used for: Designing or evaluating the information architecture of a website or application.
- Tools: FigJam, Optimal Workshop, UXPin, and Trello for digital card sorting; Mural for collaborative sorting sessions.
6. Focus groups
- What is it: Group discussions with users that explore their perceptions, attitudes, and opinions about a product.
- Best used for: Gathering various user opinions and ideas in an interactive setting.
- Tools: Zoom, Microsoft Teams for remote focus groups; Menti or Slido for real-time polling and feedback.
7. Diary studies
- What is it: A method where users record their experiences, thoughts, and frustrations while interacting with a product over a certain period of time.
- Best used for: Understanding long-term user behavior, habits, and needs.
- Tools: Dscout, ExperienceFellow for mobile diary entries; Google Docs for simple text entries.
8. Prototype testing
- What is it: Prototype testing asks users to evaluate the usability and design of early product prototypes.
- Best used for: Identifying usability issues and gathering feedback on design concepts
- Tools: Figma for creating and sharing prototypes; Maze for unmoderated testing and analytics.
9. Eye-tracking
- What is it: A method that analyzes where and how long users look at different areas on a screen.
- Best used for: Understanding user attention, readability, and visual hierarchy effectiveness.
- Tools: Tobii, iMotions for hardware; Crazy Egg for website heatmaps as a simpler alternative.
10. A/B testing
- What is it: A/B testing compares two or more versions of a webpage or app feature to determine which performs better in achieving specific goals.
- Best used for: Making data-driven decisions on design elements that impact user behavior.
- Tools: Optimizely, Google Optimize for web-based A/B testing; VWO for more in-depth analysis and segmentation.
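Tools like these handle the statistics for you, but the underlying check is often a standard two-proportion z-test on the conversion counts. A minimal sketch with made-up numbers (the formula is the textbook pooled-proportion test; the sample data is hypothetical):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic comparing two conversion rates, using a pooled proportion."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_b / n_b - conv_a / n_a) / se

# Hypothetical results: variant A converts 120 of 2400 visitors,
# variant B converts 156 of 2400.
z = two_proportion_z(120, 2400, 156, 2400)
print(round(z, 2))  # |z| > 1.96 suggests significance at the 5% level
```

In practice you would also decide the sample size and significance threshold before the test starts, to avoid peeking at interim results.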
11. Field studies
- What is it: Research done in real-world settings to observe and analyze user behavior and interactions in their natural environment.
- Best used for: Gaining insights into how products are used in real-world contexts and identifying unmet user needs.
- Tools: Notability, OneNote for note-taking; Voice Memos for audio recording; Trello for organizing observations.
12. Think-aloud protocols
- What is it: A method in which users verbalize their thought process while interacting with a product. It helps uncover their decision-making process and pain points.
- Best used for: Understanding user reasoning, expectations, and experiences when using the product.
- Tools: UsabilityHub, Morae for recording think-aloud sessions; Zoom for remote testing with screen sharing.
Challenges of primary research in UX
Here are the obstacles that UX professionals may face with primary research:
- Time-consuming: Primary research requires significant time to plan, conduct, and analyze, particularly for methods that involve a lot of user interaction.
- Resource intensive: A considerable amount of resources is needed, including specialized tools or skills for data collection and analysis.
- Recruitment difficulties: Finding and recruiting suitable participants willing to put in the effort can be challenging and costly.
- Bias and validity: The risk of bias in collecting and interpreting data highlights the importance of carefully designing the research strategy so that the findings are accurate and reliable.
What is secondary research in UX design?
Once primary research is conducted, secondary research analyzes this data and converts it into insights. Researchers may also find common themes and ideas and turn them into meaningful recommendations.
Using journey maps, personas, and affinity diagrams can help researchers better understand the problem.
Secondary research also involves reviewing existing research, published books, articles, studies, and online information. This includes competitor websites and online analytics to support design ideas and concepts.
Secondary research in UX: Knowing when and why to use it
Secondary research is a flexible method in the design process. It fits in both before and after primary research.
At the project's start, looking at existing research and what's already known can help shape your design strategy. This groundwork helps you understand the design project in a broader context.
After completing your primary research, secondary research comes into play again. This time, it's about synthesizing your findings and forming insights or recommendations for your stakeholders.
Here's why it's important in your design projects:
1. It gives you a deeper understanding of your existing research
Secondary research gathers your primary research findings to identify common themes and patterns. This allows for a more informed approach and uncovers opportunities in your design process.
Example: When creating personas or proto-personas for a fitness app, you might find common desires for personalized workout plans and motivational features.
This data shapes personas like "Fitness-focused Fiona," a detailed profile that embodies a segment of your audience with her own set of demographics, fitness objectives, challenges, and likes.
2. Learn more about competitors
Secondary research in UX is also about leveraging existing data in the user landscape and competitors.
This may include conducting a competitor or SWOT analysis so that your design decisions are not just based on isolated findings but are guided by a comprehensive overview. This highlights opportunities for differentiation and innovation.
Example: Suppose you're designing a budgeting app for a startup. You can check Crunchbase, an online database of startup information, to learn about your competitors' strengths and weaknesses.
If your competitor analysis reveals that all major budgeting apps lack personalized advice features, this shows an opportunity for yours to stand out by offering customized budgeting tips and financial guidance.
Types of secondary research methods in UX
1. Competitive analysis
- What is it: Competitive analysis involves systematically comparing your product with its competitors in the market. It's a strategic tool that helps identify where your product stands relative to the competition and what unique value proposition it can offer.
- Best used for: Identifying gaps in the market that your product can fill, understanding user expectations by analyzing what works well in existing products, and pinpointing areas for improvement in your own product.
- Tools: Google Sheets to organize and visualize your findings; Crunchbase and SimilarWeb to look into competitor performance and market positioning; and UserVoice to get insights into what users say about your competitors.
2. Affinity mapping
- What is it: A collaborative sorting technique used to organize large sets of information into groups based on their natural relationships.
- Best used for: Grouping insights from user research, brainstorming sessions, or feedback to identify patterns, themes, and priorities. It helps make sense of qualitative data, such as user interview transcripts, survey responses, or usability test observations.
- Tools: Miro and FigJam for remote affinity mapping sessions.
3. Customer journey mapping
- What is it: The process of creating a visual representation of the customer's experience with a product or service over time and across different touchpoints.
- Best used for: Visualizing the user's path from initial engagement through various interactions to the final goal.
- Tools: FigJam and Google Sheets for collaborative journey mapping efforts.
4. Literature and academic review
- What is it: This involves examining existing scholarly articles, books, and other academic publications relevant to your design project. The goal is to deeply understand your project's theoretical foundations, past research findings, and emerging trends.
- Best used for: Establishing a solid theoretical framework for your design decisions. A literature review can uncover insights into user behavior and design principles that inform your design strategy.
- Tools: Academic databases like Google Scholar, JSTOR, and specific UX/UI research databases. Reference management tools like Zotero and Mendeley can help organize your sources and streamline the review process.
Challenges of secondary research in UX design
These are the challenges that UX professionals might encounter when carrying out secondary research:
- Outdated information: In a world where technology changes fast, the information you use must be current, or it might not be helpful.
- Challenges with pre-existing data: Using data you didn't collect yourself can be tricky because you have less control over its quality. Always review how it was gathered to avoid mistakes.
- Data isn't just yours: Since secondary data is available to everyone, you won't be the only one using it. This means your competitors can access similar findings or insights.
- Trustworthiness: Look into where your information comes from so that it's reliable. Watch out for any bias in the data as well.
The mixed-method approach: How primary and secondary research work together
Primary research lays the groundwork, while secondary research weaves a cohesive story and connects the findings to create a concrete design strategy.
Here's how this mixed-method approach works in a sample UX project for a health tech app:
Phase 1: Groundwork and contextualization
- User interviews and surveys (Primary research) : The team started their project by interviewing patients and healthcare providers. The objective was to uncover the main issues with current health apps and what features could enhance patient care.
- Industry and academic literature review (Secondary research) : The team also reviewed existing literature on digital health interventions, industry reports on health app trends, and case studies on successful health apps.
Phase 2: Analysis and strategy formulation
- Affinity mapping (Secondary research) : Insights from the interviews and surveys were organized using affinity mapping. It revealed key pain points like needing more personalized and interactive care plans.
- Competitive benchmarking (Secondary research) : The team also analyzed competitors’ apps through secondary research to identify common functionalities and gaps. They noticed a lack of personalized patient engagement and, therefore, positioned their app to fill this void in the market.
Phase 3: Design and validation
- Prototyping: With a good grasp of what users need and the opportunities in the market, the team created prototypes. These prototypes included AI-powered personalized care plans, medication reminders, and interactive tools to track health.
- Usability testing (Primary research) : The prototypes were tested with a sample of the target user group, including patients and healthcare providers. Feedback was mostly positive, especially for the personalized care plans. This shows that the app has the potential to help patients get more involved in their health.
Phase 4: Refinement and market alignment
- Improving design through iterations: The team continuously refined the app's design based on feedback from ongoing usability testing.
- Ongoing market review (Secondary research) : The team watched for new studies, healthcare reports, and competitors' actions. This helped them make sure their app stayed ahead in digital health innovation.
Amplify your design impact and impress your stakeholders in 10+ hours
Primary and secondary research methods are part of a much larger puzzle in UX research.
However, understanding the theoretical part is not enough to make it as a UX designer nowadays.
The reason?
UX design is highly practical and constantly evolving. To succeed in the field, UX designers must do more than just design.
They must understand the bigger picture and know how to deliver business-driven design solutions rather than designs that merely look pretty.
Sometimes, the best knowledge comes from those who have been there themselves. That's why finding the right mentor, one with real experience who can give practical advice, is crucial.
In just 10+ hours, the Practical UX Research & Strategy Course dives deep into strategic problem-solving. By the end, you'll know exactly how to make data-backed solutions your stakeholders will get on board with.
Master the end-to-end UX research workflow, from formulating the right user questions to executing your research strategy and effectively presenting your findings to stakeholders.
Learn straight from Mizko—a seasoned industry leader with a track record as a successful designer, $10M+ former agency owner, and advisor for tech startups.
This course equips you with the skills to:
- Derive actionable insights through objective-driven questions.
- Conduct unbiased, structured interviews.
- Select ideal participants for quality data.
- Create affinity maps from research insights.
- Execute competitor analysis with expertise.
- Analyze large data sets and user insights systematically.
- Transform research and data into actionable frameworks and customer journey maps.
- Communicate findings effectively and prioritize tasks for your team.
- Present metrics and objectives that resonate with stakeholders.
Designed for flexible, independent learning, this course allows you to progress at your own pace.
With 4000+ designers from top tech companies like Google, Meta, and Squarespace among its alumni, this course empowers UX designers to integrate research skills into their design practices.
Here's what students have to say about the 4.9/5 rated course:
"I'm 100% more confident when talking to stakeholders about User Research & Strategy and the importance of why it needs to be included in the process. I also have gained such a beautiful new understanding of my users that greatly influences my designs. All of the "guesswork" that I was doing is now real, meaningful work that has stats and research behind it." - Booking.com Product Designer Alyssa Durante
"I had no proper clarity of how to conduct a research in a systematically form which actually aligns to the project. Now I have a Step by Step approach from ground 0 to final synthesis." - UX/UI Designer Kaustav Das Biswas
"The most impactful element has been the direct application of the learnings in my recent projects at Amazon. Integrating the insights gained from the course into two significant projects yielded outstanding results, significantly influencing both my career and personal growth. This hands-on experience not only enhanced my proficiency in implementing UX strategies but also bolstered my confidence in guiding, coaching, mentoring, and leading design teams." - Amazon.com UX designer Zohdi Rizvi
Gain expert UX research skills and outshine your competitors.
Mizko, also known as Michael Wong, brings a 14-year track record as a Founder, Educator, Investor, and Designer. His career evolved from lead designer to freelancer, and ultimately to the owner of a successful agency, generating over $10M in revenue from Product (UX/UI) Design, Web Design, and No-code Development. His leadership at the agency contributed to the strategy and design for over 50 high-growth startups, aiding them in raising a combined total of over $400M+ in venture capital.
Notable projects include: Autotrader (acquired by eBay), PhoneWagon (acquired by CallRail), Spaceship ($1B in managed funds), Archistar ($15M+ raised), and many more.
Connecting With Users: Applying Principles Of Communication To UX Research
- Victor Yocco
- Apr 9, 2024
- 30 min read
- UX, User Research, Communication
About The Author
Victor Yocco, PhD, has over a decade of experience as a UX researcher and research director. He is currently affiliated with Allelo Design.
Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.
In this article, I’ll focus on UX research. Communication is a core component of UX research , as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.
Fundamentals of Communication Theory
Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are transmitted, encoded, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms in enhancing the communication process.
In this article, I will focus on the Transactional Model of Communication . There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.
The Transactional Model of Communication (Figure 1) is a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback . Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll use this model and understanding when applying principles from the model to UX research. You’ll find that much of what is covered in the Transactional Model would also fall under general best practices for UX research, suggesting even if we aren’t communications experts, much of what we should be doing is supported by research in this field.
Understanding the Transactional Model
Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:
- Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
- Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
- Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
- Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
- Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
- Feedback: The response the receiver returns to the sender. For example, the answers a user gives during an interview, the data collected from a completed survey, or the physical reactions of a usability-testing participant while completing a task are all types of feedback.
Applying the Transactional Model of Communication to Preparing for UX Research
We can become complacent or feel rushed to create our research protocols. I think this is natural in the pace of many workplaces and our need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Applying the Transactional Model of Communication to your preparation should:
- Improve clarity: The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
- Minimize misunderstanding: By highlighting potential noise sources, user confusion or misunderstandings can be better anticipated and mitigated.
- Enhance participant engagement: With your attentive eye on feedback, participants are likely to feel valued, thus increasing active involvement and quality of input.
You can address the specific elements of the Transactional Model through the following steps while preparing for research:
Defining the Sender and Receiver
In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participant in advance to prepare yourself for building a rapport.
For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.
Message Creation
The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.
Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect. Asking them about the “tools” they use to complete their tasks might yield answers about physical tools, like a pipe and wrench, rather than the digital tools you'd find on a computer or smartphone.
Choosing the Right Channel
The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.
Minimizing Noise
Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.
Facilitating Feedback
You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures their understanding and that they feel heard. This could range from asking them to ‘think aloud’ as they perform tasks or encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for that in your analysis and reporting.
Track Your Alignment to the Framework
You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.
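As a purely illustrative sketch of such a tracking sheet (the column names here are my invention, not the author's), a blank template with one row per model factor could be generated like this:

```python
import csv
import io

# The six factors of the Transactional Model, one tracking row each.
factors = ["Sender", "Receiver", "Message", "Channel", "Noise", "Feedback"]

buf = io.StringIO()
writer = csv.writer(buf)
# Hypothetical columns: the action you planned for each factor,
# and what you actually observed during the session.
writer.writerow(["Factor", "Planned alignment action", "Session notes"])
for factor in factors:
    writer.writerow([factor, "", ""])

print(buf.getvalue())
```

The resulting CSV imports directly into Google Sheets or Excel, so each study can start from the same checklist.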
You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.
Applying the Transactional Model to Common UX Research Methods
UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.
Interviews
Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in-person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can help remind us we need to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.
Applying the Transactional Model to conducting interviews means we should account for a number of facts to allow for high-quality communication. Note how the following overlap with what we typically call best practices.
Asking Open-ended Questions
To truly harness a two-way flow of communication, open-ended questions, rather than close-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?” This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.
Actively Listening
As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques , such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.
Being Responsive
Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.
We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews . This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.
For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.
Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.
Strategizing the Flow
Researchers should build strategic thinking into their interviews to support the Transaction Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.
A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.
While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests we should give deeper consideration to factors we tend to neglect once we become overly comfortable with interviewing: context considerations, power dynamics, and post-interview actions.
Context Considerations
You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, as well as the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel through which you are conducting the interview.
For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.
Power Dynamics
You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and by responding to any concerns about bias a participant raises.
We are responsible for creating a safe and inclusive space for our interviews. You do this by using inclusive language, listening actively without judgment, and being flexible enough to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.
Post-Interview Actions
You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:
- Debriefing: Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
- Short surveys: Send a brief survey via email or an online platform to gather feedback on the interview experience.
- Follow-up calls: Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
- Thank-you emails: Include a “feedback” section in your thank-you email, encouraging participants to share their thoughts on the interview.
You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.
As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.
We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.
By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.
The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.
Like other methods, surveys map onto the model’s components: the researcher(s) who create the instructions and questionnaire (sender); the survey itself, including any instructions, disclaimers, and consent forms (message); how the survey is administered, e.g., online, in person, or pen and paper (channel); the participant (receiver); potential misunderstandings or distractions (noise); and responses (feedback).
Designing the Survey
Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.
Crafting Questions
The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.
For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”
Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense is to make sure you are sampling from the appropriate population. Use a screener that filters out non-viable participants before they enter the survey: correctly identify the characteristics of the population you want to sample from, then exclude anyone falling outside those parameters.
Additionally, you should focus on prioritizing finding participants through random sampling from the population of potential participants versus using a convenience sample, as this helps to ensure you are collecting reliable data.
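As a rough sketch of the screening-then-random-sampling approach described above (the screener criteria, field names, and candidate structure are illustrative assumptions, not from any particular study):

```python
import random

# Hypothetical screener for a mobile-banking study; the field
# names and thresholds here are illustrative assumptions.
def passes_screener(candidate):
    return (
        candidate["uses_mobile_banking"]
        and 18 <= candidate["age"] <= 75
    )

def draw_sample(candidates, n, seed=42):
    """Screen out non-viable participants, then draw a simple
    random sample from the remaining pool (rather than taking a
    convenience sample of whoever responds first)."""
    pool = [c for c in candidates if passes_screener(c)]
    if len(pool) < n:
        raise ValueError("Screened pool smaller than requested sample")
    rng = random.Random(seed)  # record the seed for reproducibility
    return rng.sample(pool, n)
```

Recording the seed lets you document exactly how the sample was drawn if the study is ever audited or repeated.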
When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.
For example, a question like “Do you use and like this feature?” can confuse respondents because it is actually two separate questions: do you use the feature, and do you like it? You should split double-barreled questions like this into separate items.
You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.
The Transactional Model suggests that effective communication requires active participation from both parties. Participants can become distracted or take a survey without intending to provide thoughtful answers. Consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.
This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with “2” might not be paying adequate attention to the responses they are providing, and you’d want to look closer at their answers, eliminating them from your analysis if deemed appropriate.
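A minimal sketch of how you might separate out responses that fail such an attention check before analysis (the response structure and question text are illustrative assumptions, not tied to any survey platform):

```python
def split_by_attention_check(
    responses,
    check_question="What is the answer to 1+1?",
    expected="2",
):
    """Separate survey responses that failed the attention check
    so they can be reviewed (and possibly excluded) before analysis.
    Each response is assumed to be a dict mapping question text to
    the participant's answer."""
    passed, flagged = [], []
    for r in responses:
        answer = str(r.get(check_question, "")).strip()
        (passed if answer == expected else flagged).append(r)
    return passed, flagged
```

Keeping the flagged responses rather than silently deleting them lets you document how many were excluded and why.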
Encouraging Feedback
While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.
Be mindful that too many open-ended questions can cause fatigue, so limit their number; I recommend two to three, depending on the length of your overall survey.
Post-Survey Actions
After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.
You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.
Applying the Transactional Model can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit allows you to make sure you are thoughtfully applying these principles from the field of communication.
Usability Testing
Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, the Transactional Model of Communication’s application opens a pathway for a richer understanding of the user experience by positioning both the user and the researcher as sender and receiver of communication simultaneously.
Here are some ways a researcher can use elements of the Transactional Model during usability testing:
Task Assignment as Message Sending
When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.
You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.
Observing and Listening as Message Receiving
As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask the user to explain why they are giving these cues you note as a way to provide them with feedback on their communication.
Real-time Interaction
The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).
In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the hardware you’ve provided.
The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.
One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.
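A lightweight pre-flight check could catch that kind of inconsistency in prototype data before a session; this is a hypothetical helper, not part of any testing tool, and the tolerance value is an assumption:

```python
def balances_consistent(account_balances, displayed_total, tolerance=0.005):
    """Check that the per-account balances shown in a prototype
    actually add up to the displayed total, so mismatched numbers
    don't distract participants during testing. The tolerance
    absorbs small floating-point rounding differences."""
    return abs(sum(account_balances) - displayed_total) <= tolerance
```

Running checks like this over your design fixtures during a dry run is one way to remove self-inflicted noise before participants ever see the screens.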
Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.
You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.
We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.
Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model
The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.
The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.
Understanding Context
Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation in which they provided their responses.
Deciphering Noise
In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must watch for recurring sources of noise during analysis: patterns of confusion, misunderstandings, or problems users consistently highlight. You need to account for this, as in the earlier example where participants repeatedly pointed out the incorrect math on static wireframes.
Considering Sender-Receiver Dynamics
Remember that, as a UX researcher, your interpretation of user responses will be influenced by your own understanding, biases, and preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize subjective influence and keep the analysis centered on the user’s perspective. You can ask other researchers to double-check your work to help account for bias.
For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.
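One common way to quantify how closely a second researcher's theme coding matches your own is Cohen's kappa; a minimal sketch, assuming each researcher has assigned one theme label per data excerpt (the labels below are illustrative):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Inter-rater agreement between two researchers who have each
    assigned one theme label per excerpt. Values near 1.0 mean the
    second researcher is reaching the same conclusions; low values
    warrant a conversation about differing perspectives."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of excerpts coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    if expected == 1:
        return 1.0  # both coders used a single identical label throughout
    return (observed - expected) / (1 - expected)
```

Libraries such as scikit-learn provide an equivalent `cohen_kappa_score`; the hand-rolled version here just keeps the arithmetic visible.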
Reporting Results
Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:
- Identify Stakeholder Groups: Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
- Prioritize Information: Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
- Adapt Communication Style: Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.
Acknowledging Feedback
Respecting the Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways, such as reconnecting with users to show them updated designs and asking questions to confirm whether the issues you attempted to resolve have been addressed.
Another way to address this without having to reconnect with the users is to create a spreadsheet or other document to track all the recommendations that were made and reconcile the changes with what is then updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future updates. This acknowledges that users were heard and that an attempt to address their pain points will be documented.
Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.
By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.
Tracking Your Application of the Transactional Model to Your Practice
You might find it useful to track how you align your research planning and execution with the framework of the Transactional Model. I’ve created a spreadsheet outlining key factors of the model and have used it in some of my work. Below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet while planning and conducting the interviews; the data has been anonymized to show how you might populate a similar spreadsheet with your own information.
You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.
| Stage | Column | Description | Example |
|---|---|---|---|
| Pre-Interview Planning | Topic/Question (aligned with research goals) | Identify the research question and design questions that encourage open-ended responses and co-construction of meaning. | Testing mobile banking app’s bill payment feature. How do you set up a new payee? How would you make a payment? What are your overall impressions? |
| | Participant Context | Note relevant demographic and personal information to tailor questions and avoid biased assumptions. | 35-year-old working professional, frequent user of the online banking and mobile application but unfamiliar with using the app for bill pay. |
| | Engagement Strategies | Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport. | Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more to me what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details”). |
| | Shared Understanding | List potential challenges to understanding the participant’s perspective and strategies for ensuring shared meaning. | Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page. |
| During Interview | Verbal Cues | Track the participant’s language choices, including metaphors, pauses, and emotional expressions. | Participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault]. |
| | Nonverbal Cues | Note the participant’s nonverbal communication, like body language, facial expressions, and eye contact. | Frowning and crossed arms when discussing specific pain points. |
| | Researcher Reflexivity | Record moments where your own biases or assumptions might influence the interview and potential mitigation strategies. | Recognized my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions. |
| | Power Dynamics | Identify instances where power differentials emerge and actions taken to address them. | Participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback. |
| | Unplanned Questions | List unplanned questions prompted by the participant’s responses that deepen understanding. | What alternative [non-bank app] methods do you use for paying bills? (Prompted by the participant’s frustration with the app’s bill pay.) |
| Post-Interview Reflection | Meaning Co-construction | Analyze how both parties contributed to building shared meaning and insights. | Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well. |
| | Openness and Flexibility | Evaluate how well you adapted to unexpected responses and maintained an open conversation. | Adapted questioning based on the participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised. |
| | Participant Feedback | Record any feedback received from participants regarding the interview process and areas for improvement. | “Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies.” |
| | Ethical Considerations | Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics. | Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol. |
| | Key Themes/Quotes | Use this column to identify emerging themes or save quotes you might refer to later when creating the report. | Frustration with a confusing interface, lack of intuitive navigation, and desire for more customization options. |
| | Analysis Notes | Use as many lines as needed to add notes for consideration during analysis. | Add notes here. |
You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following columns for logistical purposes:
- Date of Interview,
- Participant ID,
- Interview Format (e.g., in person, remote, video, phone).
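If you'd rather generate the tracking template programmatically than build it by hand, a small sketch using Python's standard csv module might look like this (the file name and exact column order are just one possible arrangement):

```python
import csv

# Column headings drawn from the tracking table above, with the
# logistical columns placed first for easy sorting and filtering.
COLUMNS = [
    "Date of Interview", "Participant ID", "Interview Format",
    "Topic/Question", "Participant Context", "Engagement Strategies",
    "Shared Understanding", "Verbal Cues", "Nonverbal Cues",
    "Researcher Reflexivity", "Power Dynamics", "Unplanned Questions",
    "Meaning Co-construction", "Openness and Flexibility",
    "Participant Feedback", "Ethical Considerations",
    "Key Themes/Quotes", "Analysis Notes",
]

def write_template(path):
    """Write an empty tracking template (header row only) that can
    be opened in any spreadsheet tool and filled in per interview."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(COLUMNS)
```

One row per interview (or per analysis note) keeps the file easy to pivot and filter later during synthesis.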
By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.
As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.
As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice . By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.
Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.
References and Further Reading
- The Mathematical Theory of Communication, Shannon, C. E., & Weaver, W.
- From organizational effectiveness to relationship indicators: Antecedents of relationships, public relations strategies, and relationship outcomes, Grunig, J. E., & Huang, Y. H.
- Communication and Persuasion: Psychological Studies of Opinion Change, Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Yale University Press
- Communication research as an autonomous discipline, Chaffee, S. H. (1986). Communication Yearbook, 10, 243–274
- Interpersonal Communication: Everyday Encounters, Wood, J. (2015)
- Theories of Human Communication, Littlejohn, S. W., & Foss, K. A. (2011)
- McQuail’s Mass Communication Theory, McQuail, D. (2010)
- Bridges Not Walls: A Book About Interpersonal Communication, Stewart, J. (2012)
Understanding the challenges affecting food-sharing apps’ usage: insights using a text-mining and interpretable machine learning approach
- Original Research
- Published: 27 June 2024
- Praveen Puram (ORCID: 0000-0003-4871-7409), Soumya Roy & Anand Gurumurthy
Food waste is a serious problem affecting societies and contributing to climate change. About one-third of all food produced globally is wasted, while millions of people remain food insecure. Food-sharing apps attempt to simultaneously address ‘hunger’ and ‘food waste’ at the community level. Though highly beneficial, these apps experience low usage. Existing studies have explored multiple challenges affecting food-sharing usage, but are constrained by limited data and narrow geographical focus. To address this gap, this study analyzes online user reviews from top food-sharing apps operating globally. A unique approach of analyzing text data with interpretable machine learning (IML) tools is utilized. Eight challenges affecting food-sharing app usage are obtained using the topic modeling approach. Further, the review scores representing user experience (UX) are assessed for their dependence on each challenge using the document-topic matrix and machine learning (ML) procedures. Tree-based ML algorithms, namely regression tree, bagging, random forest, boosting, and Bayesian additive regression tree are employed. The best-performing algorithm is then complemented with IML tools such as accumulated local effects and partial dependence plots, to assess the impact of each challenge on UX. Critical improvement areas to increase food-sharing apps’ usage are highlighted, such as service responsiveness, app design, food variety, and unethical behavior. This study contributes to the nascent literature on food-sharing and IML applications. A significant advantage of the methodological approach utilized includes better explainability of ML models involving text data, at both the global and local interpretability levels, in terms of the associated features and feature interactions.
Data availability
The data (online user reviews) used in this study were collected from various food-sharing apps in the Google Play Store and Apple App Store.
Code availability
The freely available software “Orange Data Mining”, “RStudio version 2023.03.0 + 386”, and “R version 4.3.0” were used for data analysis.
Python packages ‘Google-Play-Scraper’ and ‘Beautiful Soup’ were used for web scraping.
Python-based tool ‘Orange Text Mining’ was used for NLP procedures.
Kapelner, A., & Bleich, J. (2015). Prediction with missing data via Bayesian additive regression trees. Canadian Journal of Statistics, 43 (2), 224–239.
Kar, A. K., Tripathi, S. N., Malik, N., Gupta, S., & Sivarajah, U. (2022). How does misinformation and capricious opinions impact the supply chain—A study on the impacts during the pandemic. Annals of Operations Research . https://doi.org/10.1007/s10479-022-04997-6
Kuhn, M., Wing, J., Weston, S., Williams, A., Keefer, C., Engelhardt, A., Cooper, T., Mayer, Z., Kenkel, B., Team, R. C., & others. (2020). Package ‘caret.’ The R Journal , 223 (7).
Kumar, P., Kushwaha, A. K., Kar, A. K., Dwivedi, Y. K., & Rana, N. P. (2022). Managing buyer experience in a buyer–supplier relationship in MSMEs and SMEs. Annals of Operations Research . https://doi.org/10.1007/s10479-022-04954-3
Kushwaha, A. K., Kumar, P., & Kar, A. K. (2021). What impacts customer experience for B2B enterprises on using AI-enabled chatbots? Insights from Big data analytics. Industrial Marketing Management, 98 , 207–221.
Lucas, B., Francu, R. E., Goulding, J., Harvey, J., Nica-Avram, G., & Perrat, B. (2021). A note on data-driven actor-differentiation and SDGs 2 and 12: insights from a food-sharing app. Research Policy, 50 (6), 104266.
Mazzucchelli, A., Gurioli, M., Graziano, D., Quacquarelli, B., & Aouina-Mejri, C. (2021). How to fight against food waste in the digital era: Key factors for a successful food sharing platform. Journal of Business Research, 124 , 47–58.
Michelini, L., Grieco, C., Ciulli, F., & Di Leo, A. (2020). Uncovering the impact of food sharing platform business models: A theory of change approach. British Food Journal, 122 (5), 1437–1462.
Michelini, L., Principato, L., & Iasevoli, G. (2018). Understanding food sharing models to tackle sustainability challenges. Ecological Economics, 145 , 205–217.
Molnar, C. (2020). Interpretable machine learning . Lulu. com. https://christophm.github.io/interpretable-ml-book/
Molnar, C., Casalicchio, G., & Bischl, B. (2018). iml: An R package for interpretable machine learning. Journal of Open Source Software, 3 (26), 786.
Nguyen, J. K., Karg, A., Valadkhani, A., & McDonald, H. (2022). Predicting individual event attendance with machine learning: A ‘step-forward’ approach. Applied Economics, 54 (27), 3138–3153.
Puram, P., & Gurumurthy, A. (2023). Sharing economy in the food sector: A systematic literature review and future research agenda. Journal of Hospitality and Tourism Management, 56 , 229–244.
Puram, P., Roy, S., Srivastav, D., & Gurumurthy, A. (2023). Understanding the effect of contextual factors and decision making on team performance in Twenty20 cricket: An interpretable machine learning approach. Annals of Operations Research, 325 (1), 261–288.
Saura, J. R., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2022a). Adopting digital reservation systems to enable circular economy in entrepreneurship. Management Decision , ahead-of-print (ahead-of-print). https://doi.org/10.1108/MD-02-2022-0190
Saura, J. R., Palacios-Marqués, D., & Ribeiro-Soriano, D. (2023a). Exploring the boundaries of open innovation: Evidence from social media mining. Technovation, 119 , 102447.
Saura, J. R., Palacios-Marqués, D., & Ribeiro-Soriano, D. (2023b). Leveraging SMEs technologies adoption in the Covid-19 pandemic: A case study on Twitter-based user-generated content. The Journal of Technology Transfer, 48 (5), 1696–1722.
Saura, J. R., Ribeiro-Navarrete, S., Palacios-Marqués, D., & Mardani, A. (2023c). Impact of extreme weather in production economics: Extracting evidence from user-generated content. International Journal of Production Economics, 260 , 108861.
Saura, J. R., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2022b). Assessing behavioral data science privacy issues in government artificial intelligence deployment. Government Information Quarterly, 39 (4), 101679.
Saura, J. R., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2024). Data-driven strategies in operation management: Mining user-generated content in Twitter. Annals of Operations Research, 333 (2), 849–869.
Schanes, K., & Stagl, S. (2019). Food waste fighters: What motivates people to engage in food sharing? Journal of Cleaner Production, 211 , 1491–1501.
Srinivas, S., & Ramachandiran, S. (2023). Passenger intelligence as a competitive opportunity: Unsupervised text analytics for discovering airline-specific insights from online reviews. Annals of Operations Research . https://doi.org/10.1007/s10479-022-05162-9
Tibshirani, R., Hastie, T., Witten, D., & James, G. (2021). An introduction to statistical learning: With applications in R . Springer. https://hastie.su.domains/ISLR2/ISLRv2_website.pdf
Topuz, K., Davazdahemami, B., & Delen, D. (2023). A Bayesian belief network-based analytics methodology for early-stage risk detection of novel diseases. Annals of Operations Research . https://doi.org/10.1007/s10479-023-05377-4
Wu, J., Zhao, H., & Chen(Allan), H. (2021). Coupons or free shipping? Effects of price promotion strategies on online review ratings. Information Systems Research, 32 (2), 633–652.
Yang, N., Korfiatis, N., Zissis, D., & Spanaki, K. (2023). Incorporating topic membership in review rating prediction from unstructured data: A gradient boosting approach. Annals of Operations Research . https://doi.org/10.1007/s10479-023-05336-z
Yeomans, M., Minson, J., Collins, H., Chen, F., & Gino, F. (2020). Conversational receptiveness: Improving engagement with opposing views. Organizational Behavior and Human Decision Processes, 160 , 131–148.
Zhu, L., Lin, Y., & Cheng, M. (2020). Sentiment and guest satisfaction with peer-to-peer accommodation: When are online ratings more trustworthy? International Journal of Hospitality Management, 86 , 102369.
Download references
Not Applicable.
Author information
Authors and Affiliations
Institute of Management Technology, Hyderabad, 501218, India
Praveen Puram
Indian Institute of Management Kozhikode, Kozhikode, 673570, India
Soumya Roy & Anand Gurumurthy
Corresponding author
Correspondence to Praveen Puram.
Ethics declarations
Conflicts of Interest
No conflicts of interest were reported.
About this article
Puram, P., Roy, S. & Gurumurthy, A. Understanding the challenges affecting food-sharing apps’ usage: insights using a text-mining and interpretable machine learning approach. Ann Oper Res (2024). https://doi.org/10.1007/s10479-024-06130-1
Received: 04 August 2023
Accepted: 19 June 2024
Published: 27 June 2024
DOI: https://doi.org/10.1007/s10479-024-06130-1
Keywords
- User-generated content
- Natural language processing
- Explainable machine learning
- Sharing economy
- Surplus food redistribution
- Sustainability
Communicative Sciences and Disorders
Choosing a Review Type
For guidance related to choosing a review type, see:
- "What Type of Review is Right for You?" - Decision Tree (PDF) This decision tree, from Cornell University Library, highlights key differences among narrative, systematic, umbrella, scoping, and rapid reviews.
- Reviewing the literature: choosing a review design Noble, H., & Smith, J. (2018). Reviewing the literature: Choosing a review design. Evidence Based Nursing, 21(2), 39–41. https://doi.org/10.1136/eb-2018-102895
- What synthesis methodology should I use? A review and analysis of approaches to research synthesis Schick-Makaroff, K., MacDonald, M., Plummer, M., Burgess, J., & Neander, W. (2016). What synthesis methodology should I use? A review and analysis of approaches to research synthesis. AIMS Public Health, 3(1), 172-215. doi:10.3934/publichealth.2016.1.172 ABSTRACT: Our purpose is to present a comprehensive overview and assessment of the main approaches to research synthesis. We use "research synthesis" as a broad overarching term to describe various approaches to combining, integrating, and synthesizing research findings.
- Right Review - Decision Support Tool Not sure of the most suitable review method? Answer a few questions and be guided to suitable knowledge synthesis methods. Updated in 2022 and featured in the Journal of Clinical Epidemiology (doi:10.1016/j.jclinepi.2022.03.004)
Types of Evidence Synthesis / Literature Reviews
Literature reviews are comprehensive summaries and syntheses of the previous research on a given topic. While narrative reviews are common across all academic disciplines, reviews that focus on appraising and synthesizing research evidence are increasingly important in the health and social sciences.
Most evidence synthesis methods use formal and explicit methods to identify, select and combine results from multiple studies, making evidence synthesis a form of meta-research.
The review purpose, methods used and the results produced vary among different kinds of literature reviews; some of the common types of literature review are detailed below.
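To make the "identify, select and combine" steps above concrete, here is a minimal Python sketch of the first two stages of a screening workflow: de-duplicating candidate records, then screening titles against inclusion keywords. The record fields, function names, and keywords are purely illustrative assumptions, not part of any formal review methodology cited here.

```python
# Illustrative sketch of the "identify" and "select" steps in evidence
# synthesis. Record structure and keywords are hypothetical.

def deduplicate(records):
    """Keep one record per DOI (identification step)."""
    seen, unique = set(), []
    for rec in records:
        if rec["doi"] not in seen:
            seen.add(rec["doi"])
            unique.append(rec)
    return unique

def screen(records, keywords):
    """Keep records whose title mentions any inclusion keyword (selection step)."""
    return [r for r in records
            if any(k in r["title"].lower() for k in keywords)]

candidates = [
    {"doi": "10.1/a", "title": "Food sharing apps and waste reduction"},
    {"doi": "10.1/a", "title": "Food sharing apps and waste reduction"},  # duplicate
    {"doi": "10.1/b", "title": "Mobile banking service quality"},
]
included = screen(deduplicate(candidates), ["food sharing"])
```

In a real review these steps would be carried out with dedicated tools and documented in a protocol; the point of the sketch is only that each stage is explicit and reproducible.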
Common Types of Literature Reviews 1
Narrative (Literature) Review
- A broad term referring to reviews with a wide scope and non-standardized methodology
- Search strategies, comprehensiveness of literature search, time range covered and method of synthesis will vary and do not follow an established protocol
Integrative Review
- A type of literature review based on a systematic, structured literature search
- Often has a broadly defined purpose or review question
- Seeks to generate or refine a theory or hypothesis and/or develop a holistic understanding of a topic of interest
- Relies on diverse sources of data (e.g. empirical, theoretical or methodological literature; qualitative or quantitative studies)
Systematic Review
- Systematically and transparently collects and categorizes existing evidence on a question of scientific, policy or management importance
- Follows a research protocol that is established a priori
- Some sub-types of systematic reviews include: SRs of intervention effectiveness, diagnosis, prognosis, etiology, qualitative evidence, economic evidence, and more.
- Time-intensive and often takes months to a year or more to complete
- The most commonly referred-to type of evidence synthesis; sometimes mistakenly used as a blanket term for other types of reviews
Meta-Analysis
- Statistical technique for combining the findings from disparate quantitative studies
- Uses statistical methods to objectively evaluate, synthesize, and summarize results
- Often conducted as part of a systematic review
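The "statistical technique for combining findings" at the heart of many meta-analyses is inverse-variance (fixed-effect) pooling: each study's effect is weighted by the reciprocal of its sampling variance. The following Python sketch illustrates that calculation; the effect sizes and function name are hypothetical, not drawn from any study cited here.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    effects:   per-study effect estimates (e.g., standardized mean differences)
    variances: per-study sampling variances
    Returns (pooled_effect, pooled_standard_error).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Three hypothetical studies: small-variance studies dominate the pooled estimate.
pooled, se = fixed_effect_meta([0.30, 0.50, 0.40], [0.01, 0.04, 0.02])
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # approximate 95% confidence interval
```

Real meta-analyses typically also assess between-study heterogeneity and may use random-effects models instead; this sketch shows only the basic pooling step.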
Scoping Review
- Systematically and transparently collects and categorizes existing evidence on a broad question of scientific, policy or management importance
- Seeks to identify research gaps, identify key concepts and characteristics of the literature and/or examine how research is conducted on a topic of interest
- Useful when the complexity or heterogeneity of the body of literature does not lend itself to a precise systematic review
- Useful if authors do not have a single, precise review question
- May critically evaluate existing evidence, but does not attempt to synthesize the results in the way a systematic review would
- May take longer than a systematic review
Rapid Review
- Applies a systematic review methodology within a time-constrained setting
- Employs methodological "shortcuts" (e.g., limiting search terms and the scope of the literature search), at the risk of introducing bias
- Useful for addressing issues requiring quick decisions, such as developing policy recommendations
Umbrella Review
- Reviews other systematic reviews on a topic
- Often defines a broader question than is typical of a traditional systematic review
- Most useful when there are competing interventions to consider
1. Adapted from:
Eldermire, E. (2021, November 15). A guide to evidence synthesis: Types of evidence synthesis. Cornell University LibGuides. https://guides.library.cornell.edu/evidence-synthesis/types
Nolfi, D. (2021, October 6). Integrative Review: Systematic vs. Scoping vs. Integrative. Duquesne University LibGuides. https://guides.library.duq.edu/c.php?g=1055475&p=7725920
Delaney, L. (2021, November 24). Systematic reviews: Other review types. UniSA LibGuides. https://guides.library.unisa.edu.au/SystematicReviews/OtherReviewTypes
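The type descriptions above can be condensed into a rough decision heuristic, in the spirit of the decision-tree tools listed earlier. The Python sketch below is illustrative only: the function name, parameters, and the 12-week threshold are our own assumptions, not drawn from the cited guides.

```python
def suggest_review_type(question_is_broad, protocol_feasible,
                        weeks_available, synthesizing_reviews=False):
    """Rough heuristic mapping the guide's descriptions to a review type.

    All parameter names and thresholds are illustrative assumptions.
    """
    if synthesizing_reviews:
        return "Umbrella Review"    # reviews other systematic reviews
    if not protocol_feasible:
        return "Narrative Review"   # wide scope, no established protocol
    if question_is_broad:
        return "Scoping Review"     # maps literature on a broad question
    if weeks_available < 12:
        return "Rapid Review"       # systematic methods under time constraints
    return "Systematic Review"      # precise question, a priori protocol

# Example: a precise question with a year available suggests a systematic review.
suggestion = suggest_review_type(question_is_broad=False,
                                 protocol_feasible=True,
                                 weeks_available=52)
```

A real choice of method depends on many more factors (team size, available evidence, reporting requirements), which is why the guide points to dedicated decision-support tools.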
Further Reading: Exploring Different Types of Literature Reviews
- A typology of reviews: An analysis of 14 review types and associated methodologies Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91-108. doi:10.1111/j.1471-1842.2009.00848.x ABSTRACT: The expansion of evidence-based practice across sectors has led to an increasing variety of review types. However, the diversity of terminology used means that the full potential of these review types may be lost amongst a confusion of indistinct and misapplied terms. The objective of this study is to provide descriptive insight into the most common types of reviews, with illustrative examples from health and health information domains.
- Clarifying differences between review designs and methods Gough, D., Thomas, J., & Oliver, S. (2012). Clarifying differences between review designs and methods. Systematic Reviews, 1, 28. doi:10.1186/2046-4053-1-28 ABSTRACT: This paper argues that the current proliferation of types of systematic reviews creates challenges for the terminology for describing such reviews....It is therefore proposed that the most useful strategy for the field is to develop terminology for the main dimensions of variation.
- Are we talking the same paradigm? Considering methodological choices in health education systematic review Gordon, M. (2016). Are we talking the same paradigm? Considering methodological choices in health education systematic review. Medical Teacher, 38(7), 746-750. doi:10.3109/0142159X.2016.1147536 ABSTRACT: Key items discussed are the positivist synthesis methods meta-analysis and content analysis to address questions in the form of "whether and what" education is effective. These can be juxtaposed with the constructivist aligned thematic analysis and meta-ethnography to address questions in the form of "why." The concept of the realist review is also considered. It is proposed that authors of such work should describe their research alignment and the link between question, alignment and evidence synthesis method selected.
- Meeting the review family: Exploring review types and associated information retrieval requirements Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: Exploring review types and associated information retrieval requirements. Health Information & Libraries Journal, 36(3), 202–222. doi: 10.1111/hir.12276
Integrative Reviews
"The integrative review method is an approach that allows for the inclusion of diverse methodologies (i.e. experimental and non-experimental research)." (Whittemore & Knafl, 2005, p. 547).
- The integrative review: Updated methodology Whittemore, R., & Knafl, K. (2005). The integrative review: Updated methodology. Journal of Advanced Nursing, 52(5), 546–553. doi:10.1111/j.1365-2648.2005.03621.x ABSTRACT: The aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process....An integrative review is a specific review method that summarizes past empirical or theoretical literature to provide a more comprehensive understanding of a particular phenomenon or healthcare problem....Well-done integrative reviews present the state of the science, contribute to theory development, and have direct applicability to practice and policy.
- Conducting integrative reviews: A guide for novice nursing researchers Dhollande, S., Taylor, A., Meyer, S., & Scott, M. (2021). Conducting integrative reviews: A guide for novice nursing researchers. Journal of Research in Nursing, 26(5), 427–438. https://doi.org/10.1177/1744987121997907
- Rigour in integrative reviews Whittemore, R. (2007). Rigour in integrative reviews. In C. Webb & B. Roe (Eds.), Reviewing Research Evidence for Nursing Practice (pp. 149–156). John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470692127.ch11
Scoping Reviews
Scoping reviews are evidence syntheses that are conducted systematically, but begin with a broader question than traditional systematic reviews, allowing the researcher to 'map' the relevant literature on a given topic.
- Scoping studies: Towards a methodological framework Arksey, H., & O'Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19-32. doi:10.1080/1364557032000119616 ABSTRACT: We distinguish between different types of scoping studies and indicate where these stand in relation to full systematic reviews. We outline a framework for conducting a scoping study based on our recent experiences of reviewing the literature on services for carers for people with mental health problems.
- Scoping studies: Advancing the methodology Levac, D., Colquhoun, H., & O'Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5(1), 69. doi:10.1186/1748-5908-5-69 ABSTRACT: We build upon our experiences conducting three scoping studies using the Arksey and O'Malley methodology to propose recommendations that clarify and enhance each stage of the framework.
- Methodology for JBI scoping reviews Peters, M. D. J., Godfrey, C. M., McInerney, P., Baldini Soares, C., Khalil, H., & Parker, D. (2015). The Joanna Briggs Institute reviewers' manual: Methodology for JBI scoping reviews [PDF]. Retrieved from The Joanna Briggs Institute website: http://joannabriggs.org/assets/docs/sumari/Reviewers-Manual_Methodology-for-JBI-Scoping-Reviews_2015_v2.pdf ABSTRACT: Unlike other reviews that address relatively precise questions, such as a systematic review of the effectiveness of a particular intervention based on a precise set of outcomes, scoping reviews can be used to map the key concepts underpinning a research area as well as to clarify working definitions, and/or the conceptual boundaries of a topic. A scoping review may focus on one of these aims or all of them as a set.
Systematic vs. Scoping Reviews: What's the Difference?
(YouTube video, 4 minutes 45 seconds)
Rapid Reviews
Rapid reviews are systematic reviews that are undertaken under a tighter timeframe than traditional systematic reviews.
- Evidence summaries: The evolution of a rapid review approach Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: The evolution of a rapid review approach. Systematic Reviews, 1(1), 10. doi:10.1186/2046-4053-1-10 ABSTRACT: Rapid reviews have emerged as a streamlined approach to synthesizing evidence - typically for informing emergent decisions faced by decision makers in health care settings. Although there is growing use of rapid review "methods," and proliferation of rapid review products, there is a dearth of published literature on rapid review methodology. This paper outlines our experience with rapidly producing, publishing and disseminating evidence summaries in the context of our Knowledge to Action (KTA) research program.
- What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments Harker, J., & Kleijnen, J. (2012). What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. International Journal of Evidence‐Based Healthcare, 10(4), 397-410. doi:10.1111/j.1744-1609.2012.00290.x ABSTRACT: In recent years, there has been an emergence of "rapid reviews" within Health Technology Assessments; however, there is no known published guidance or agreed methodology within recognised systematic review or Health Technology Assessment guidelines. In order to answer the research question "What is a rapid review and is methodology consistent in rapid reviews of Health Technology Assessments?", a study was undertaken in a sample of rapid review Health Technology Assessments from the Health Technology Assessment database within the Cochrane Library and other specialised Health Technology Assessment databases to investigate similarities and/or differences in rapid review methodology utilised.
- Rapid Review Guidebook Dobbins, M. (2017). Rapid review guidebook. Hamilton, ON: National Collaborating Centre for Methods and Tools.
- NCCMT Summary and Tool for Dobbins' Rapid Review Guidebook National Collaborating Centre for Methods and Tools. (2017). Rapid review guidebook. Hamilton, ON: McMaster University. Retrieved from http://www.nccmt.ca/knowledge-repositories/search/308
- Last Updated: Jun 26, 2024 3:00 PM
- URL: https://guides.nyu.edu/speech
AI Assistance for UX: A Literature Review Through Human-Centered AI
Recent advancements in HCI and AI research attempt to support user experience (UX) practitioners with AI-enabled tools. Despite the potential of emerging models and new interaction mechanisms, mainstream adoption of such tools remains limited. We took the lens of Human-Centered AI and presented a systematic literature review of 359 papers, aiming to synthesize the current landscape, identify trends, and uncover UX practitioners’ unmet needs in AI support. Guided by the Double Diamond design framework, our analysis uncovered that UX practitioners’ unique focuses on empathy building and experiences across UI screens are often overlooked. Simplistic AI automation can obstruct the valuable empathy-building process. Furthermore, focusing solely on individual UI screens without considering interactions and user flows reduces the system’s practical value for UX designers. Based on these findings, we call for a deeper understanding of UX mindsets and more designer-centric datasets and evaluation metrics, for HCI and AI communities to collaboratively work toward effective AI support for UX.
1. Introduction
Advancements in Artificial Intelligence (AI) have enabled applications in numerous sectors, with the user experience (UX) industry being a notable potential beneficiary. AI models can facilitate processes that involve various data modalities, ranging from text-based affinity diagrams (Goldman et al., 2022; Borlinghaus and Huber, 2021) and user interface (UI) development code (Beltramelli, 2017; Feng et al., 2021a) to image-based UI screenshots (Leiva et al., 2022a; Wang et al., 2021; Zhao et al., 2021). The enhancements of language-based and multi-modal AI models have expanded the possibilities of applications in UX design and research (Dhinakaran, [n. d.]; Di Fede et al., 2022; Kim et al., 2023). Notably, the impressive capabilities of large language models (LLMs) have further promoted AI adoption in real applications (Dhinakaran, [n. d.]). Diffusion-based, text-to-image generative AI such as Stable Diffusion (Rombach et al., 2022) and Midjourney (https://www.midjourney.com/) also opens up new avenues for creative professionals to utilize AI in their work (Verheijden and Funk, 2023; Wei et al., 2023).
However, creating usable, effective, and enjoyable AI-enabled experiences for UX practitioners remains challenging (Yang et al., 2020). A technology-driven mindset, prevalent in AI communities, can lead to applications that are driven by the latest technology but do not necessarily address UX practitioners' unique goals, such as empathy building. Furthermore, the fluid, nonlinear UX methodologies (Gray, 2016) differ from logical, computational thinking and can be hard for AI researchers to grasp. This lack of insight into designer workflows and practices makes it difficult for AI research to provide effective, seamless support for UX professionals.
Not every UX process is one that practitioners want to delegate to AI (Marathe and Toyama, 2018; Lubars and Tan, 2019), and automating valuable research processes raises concerns about diminished designer empathy. Such concerns call into question the real-world efficacy of these AI models in providing meaningful UX support. Early research prototypes of AI-enabled design support systems have received positive feedback in user studies (Cheng et al., 2023b; Hegemann et al., 2023; Rietz and Maedche, 2021; Gebreegziabher, 2023). At the same time, the unique data modalities, user needs, and workflows in UX have created new practical challenges for AI researchers to tackle (Li et al., 2021b; Rietz and Maedche, 2021; Gebreegziabher, 2023; Wang et al., 2021).
The field of human-centered AI (HCAI) provides valuable perspectives for investigating the current gaps and future risks in AI for UX support. HCAI sits at the intersection of AI and Human-Computer Interaction (HCI) and embraces a human-centered philosophy: it aims to ensure that AI systems align with human values and to mitigate potential harms to individuals, communities, and societies (Shneiderman, 2022). As AI models are integrated into more real-world applications, it becomes imperative to prioritize human-centered design and research principles in AI adoption. Researchers in HCAI have investigated useful design metaphors and paradigms for AI systems (Yang et al., 2019b; Shneiderman, 2022).
In this work, we conducted a systematic literature review (SLR) through the lens of HCAI and analyzed the state of technical and system research on AI assistance for UX practitioners. We outline the role of AI in different phases of UX practice using the classic Double Diamond design framework (Council, [n. d.]). Our SLR sought to understand AI's current technical capabilities for UX-related tasks and to map the rapidly expanding design space of AI for UX support. Our general goal is to pinpoint opportunities for both the HCI and AI communities, to identify the critical needs of UX professionals, and to find common ground between UX practice and frontier academic AI research. We therefore define our research questions as follows.
- What capabilities do the latest AI models possess for different UX-related tasks?
- What insights has past research revealed about UX practitioners' needs and preferences for AI assistance?
- What gaps exist between existing empirical studies and opportunities for future AI research and interactive system development?
Through our SLR of 359 papers, we found that past work focuses more on technology-driven approaches than on human-centered investigations. Our analysis underscored the contrast between AI's data-driven nature and the human-centric philosophy of UX. Building on this, our study maps existing research onto the Double Diamond framework (Council, [n. d.]), identifying key technical capabilities of AI in UX (Section 4) and underscoring overlooked areas such as empathy building and enhancing user experiences across multiple UI screens (Section 4.6). The UX industry can also benefit from embracing data-driven strategies to capture feedback from ever-expanding user bases. We emphasize the need for a deeper understanding of UX methodologies and goals, the expansion of quantitative UX metrics, and careful consideration of AI delegability based on existing human-centered AI frameworks (Lubars and Tan, 2019). This work aims to offer valuable insights and direction for future research in the HCI, UX, and AI communities, highlighting the potential of this promising interdisciplinary, translational research domain.
2. Background and Related Work
2.1. UI/UX Design and Support Tools
UI/UX as a profession has established its status in both the tech industry and academia over the past decades. Nielsen estimated that the population of UX professionals worldwide grew from about 1,000 to 1 million between 1983 and 2017, and that by 2050 it will increase another 100-fold to 100 million (Nielsen, 2017). UX practitioners aim to create products and experiences that are user-friendly, enjoyable, and effective. They often seek to understand target users' needs through human-centered methodologies, e.g., contextual interviews, and iteratively prototype design solutions while eliciting feedback from users. This process is well captured in the British Design Council's Double Diamond framework (Council, [n. d.]): through two divergent-convergent processes, UX practitioners brainstorm and select particular aspects of an issue to tackle, then iteratively prototype a few potential solutions and finalize one through user feedback.
Numerous support tools have been developed for UX design. In early HCI research, the SILK system was one of the first no-code designer-support UI prototyping tools (Landay, 1996). Today, Sketch and Figma are among the most popular tools for UX prototyping. More related to the early exploratory phases, platforms such as Miro, Mural, and FigJam were created for UX professionals to organize ideas, conduct brainstorming, or qualitatively analyze user data. Evaluation platforms such as UserTesting (https://www.usertesting.com/) and Maze (https://maze.co) support conducting user evaluations, while researchers have also investigated automated design testing (Deka et al., 2017b) and remote user testing (Martelaro and Ju, 2017). Notably, design systems such as Google Material Design and Apple Human Interface Guidelines also provide tools to help designers create user-friendly, consistent, and accessible UIs.
Recently, we have witnessed increasing AI integration into design support tools in both academia and industry. In academia, many researchers have explored AI-enabled support tools for UX practitioners (Li et al., 2021b; Sermuga Pandian et al., 2021b; Lu et al., 2022; Knearem et al., 2023). In industry, design tools like Uizard (https://uizard.io/) and Framer (https://www.framer.com/ai) have rolled out AI features to generate UI screens from natural language descriptions. Figma also recently acquired Diagram (https://diagram.com/), a startup that previously focused on AI-enabled Figma plugins, and started to roll out AI features in its tool. However, the UX industry embodies a human-centered principle that is inherently different from the technology-first mindset prevalent in AI communities. This has created friction both in designing better AI experiences (Yang et al., 2020) and in creating effective AI support for UX practitioners (Lu et al., 2022). We have yet to observe any of these AI-enabled tools become mainstream and be adopted by a significant portion of the UX industry. This might reflect a “research-practice gap” that is common across HCI research (Norman, 2010); addressing it requires more translational research and resources to fulfill the needs of practitioners (Colusso et al., 2017).
2.2. Human-Centered AI
Human-Centered AI (HCAI) is an emergent interdisciplinary research field that bridges AI and HCI. HCAI embraces the human-centered philosophy and takes a humanistic and ethical view towards the latest AI technology: how to enhance humans rather than replace them (Xu, 2019 ) . Researchers in HCAI have predicted that by embracing a human-centered future, the AI community’s impact will likely grow even greater (Shneiderman, 2022 ) .
The primary research focuses of HCAI include: (1) improving AI-driven technology to better augment human needs, (2) identifying design methodologies for safe and trustworthy AI systems, and (3) understanding and safeguarding the impact of AI on individuals, communities, and societies (Xu, 2019 ; Shneiderman, 2022 ) . In this work, we investigate AI support for UX practitioners through the lens of HCAI, proposing our research questions (see the Introduction section) based on the research focuses above. We refer to past research in HCAI, including Principles of Mixed-Initiative Interfaces (Horvitz, 1999 ) , Guidelines for Human-AI Interaction (Amershi et al . , 2019 ) , and books on Human-Centered AI (Shneiderman, 2022 ) . Particularly, we balance our analysis on both the technical and design aspects, seeking to understand existing AI models’ capabilities in UX tasks, as well as practitioners’ needs for automation in current methodologies and practices.
2.3. Literature Review in AI Support for UI/UX design
Past literature review studies in computing and HCI have successfully identified trends and gaps and proposed new research directions in different specific domains (Dell and Kumar, 2016 ; Dillahunt et al . , 2017 ; Lopez and Guerrero, 2017 ; Pater et al . , 2021 ; Stefanidi et al . , 2023 ) . We consider the call for more literature review studies in HCI, CSCW, and Ubicomp (Lopez and Guerrero, 2017 ) and specifically look at the emerging field of AI for UI/UX design support.
While many researchers have conducted general investigations on this topic (Lu et al., 2022; Knearem et al., 2023; Isgrò et al., 2022; Liao et al., 2020; Grigera et al., 2023), only three papers had used a systematic literature review by the time we conducted this study. Malik et al. reviewed 100 papers and analyzed the deep learning approaches that have been used to support UI/UX design work (Malik et al., 2023). Their analysis revealed potential for cross-platform datasets, more advanced UI generation models, and a centralized deep-learning-based design automation system.
In addition, Abbas et al. (Abbas et al., 2022) reviewed 18 papers in this field and analyzed UX designers’ current challenges in incorporating ML into their design process. Their results showed that most ML-enabled UX design tools fail to be integrated into practical settings. They argued for building support tools around existing design practices, rather than simply around existing ML models’ capabilities. Interestingly, the paper did not distinguish designing with ML support (the focus of our paper) from designing ML-involved systems and experiences (i.e., AI as a design material, outside of our scope). Many of their summaries and discussions centered on the need for designers’ understanding of ML, which is beyond the scope of our analysis.
In 2022, Stige et al. (Stige et al . , 2023 ) conducted a literature review on 46 articles in this field to analyze how AI is currently used in UX design (namely, user requirement specification, solution design, and design evaluation) and potential future research themes. Compared to their analysis sample (N=46), our sample was more comprehensive (N=359) and up-to-date (conducted in 2023), resulting in a more complete analysis of the recent empirical and technical research landscape (Section 4 ). In addition, by mapping previous research into the four phases of the Double Diamond framework, we revealed more details regarding AI’s involvement in UX research and design activities. Our analysis also uncovered more in-depth differences between AI and UX communities’ mindsets and pointed out meaningful gaps to bridge for future research (Section 5 ).
3. Literature Review Method
To address our research questions (see Introduction ), we conducted a systematic literature review (SLR) of papers in relevant research fields. SLRs are designed to help understand and interpret a large volume of information, to explain “what works” (i.e., current landscape) and “what should work” (i.e., potential gaps and future directions) in a given field. The “systematic” aspect of SLR focuses on identifying all research that addresses a specific question to conduct a balanced and unbiased summary (Nightingale, 2009 ) . We followed previous guidelines on conducting SLRs (Xiao and Watson, 2019 ; Nightingale, 2009 ) and referred to previous SLR studies in adjacent fields to form our methods (Kaluarachchi and Wickramasinghe, 2023 ; Dillahunt et al . , 2017 ; Pater et al . , 2021 ; Wohlin, 2014 ) .
We used snowball sampling, a widely adopted literature search strategy, to select our literature sample (more on our rationale for using snowball sampling can be found in Appendix A). It begins with a starter set of a few relevant papers, then iteratively includes related papers that were cited by, or cited, papers in the starter set (i.e., the backward and forward snowballing processes) (Wohlin, 2014). Google Scholar was used as our primary search engine, as it is one of the largest online academic search engines and is commonly used in literature review studies (Wohlin, 2014; Xiao and Watson, 2019; Siddaway et al., 2019; Cheng, 2016). We did not restrict the publication venues, to reduce bias and obtain a diverse sample across disciplines (Nightingale, 2009). We depict our process in Fig. 1, following an adapted version of the PRISMA statement (Moher et al., 2009). Below, we detail our literature selection process, including our inclusion/exclusion criteria, the selection of a starter set, and iterative backward and forward sampling.
3.1. Inclusion/Exclusion Criteria
In line with our research scope, we included papers that satisfy both of the following criteria:
providing support for methodologies or artifacts in UI/UX design and research,
incorporating the use of artificial intelligence for such support.
We referred to articles regarding UX design and research practices (Rosala, 2020; Farrell, 2017; Pernice, 2019; Rosala, 2022) to inform our selection against the first criterion. Specifically, we used the Double Diamond design framework (Council, [n. d.]) to map out opportunities for AI support in the UX workflow, similar to past studies in this domain (Yang et al., 2020). We excluded papers that focused only on UI development without relevance to UI/UX design or research.
As discussed in previous work, coming up with a precise, comprehensive definition of AI is hard, even within AI research communities (Stone et al., 2022). It is even harder in HCI and UX contexts (Yang et al., 2020) and is beyond the scope of our paper. We use the term to refer to a suite of computational techniques generally considered in the domain of AI, from neural-network-based deep learning models to statistical machine learning approaches (Russell and Norvig, 2010). We excluded papers investigating the design of AI systems, often referred to as “AI as a design material” (Yang et al., 2020; Yildirim et al., 2022). These papers often work on the designerly understanding of AI (Liao et al., 2023) and design processes that account for AI safety and accountability (Moore et al., 2023). They focus on the design of AI rather than on supporting design with the help of AI (our focus).
It is also noteworthy that our focus is specifically on AI adoption in UX support. While relevant, we do not aim to conduct a comprehensive literature review on creativity support tools, human-AI co-creation, or human-centered AI, given that these are much broader research topics independent of our scope. However, we did draw inspiration from papers in these domains that do not fit our scope exactly and include them in our Discussion section for better generalizability of our findings.
3.2. Starter Set
In the beginning, four researchers collaboratively searched for and filtered relevant papers using academic search engines, including Google Scholar and the ACM Digital Library, based on the inclusion criteria defined in Section 3.1. When selecting our starter set, we followed previous work (Nightingale, 2009) and aimed for topical diversity to minimize bias. Specifically, we made sure to include a balanced set of papers addressing every phase of the Double Diamond process (Council, [n. d.]). The four researchers communicated frequently and discussed in depth during the selection process to ensure the representativeness and quality of our starter set. In the end, we included 17 papers related to the four Double Diamond phases (four, three, five, and five papers from discover, define, develop, and deliver, respectively). We also included two papers that investigate the same problem domain but do not fit any single phase above, to ensure representativeness and comprehensiveness. In all, our starter set consisted of 19 representative papers.
3.3. Backward and Forward Sampling
After selecting the starter set, we conducted two rounds of iterative sampling. In each iteration, both the papers that our sample cited (backward sampling of past papers) and the papers that cited our sample (forward sampling of later papers) were examined by four researchers. Researchers examined the full text of identified papers to determine their relevance, eligibility, and quality. A minimum of two researchers independently evaluated each paper and settled disagreements through discussion. Details of the iterations are depicted in Fig. 1, following an adapted version of the PRISMA statement (Moher et al., 2009).
We stopped after the second snowballing iteration because we had already obtained a large sample (N=359) representative of the existing work in our domain. Also, in the second iteration, we observed that papers from the first iteration repeatedly reappeared among the papers of interest. During the analysis process, upon detailed examination, 68 papers were excluded due to their lack of relevance to our research questions. Our final sample contained a total of 359 papers, sourced from March to July 2023 (Fig. 1). To the best of our knowledge, it is to date the largest repository of existing literature on the topic of AI for UX support, compared to past literature reviews in this field (Abbas et al., 2022; Malik et al., 2023; Stige et al., 2023).
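The iterative backward/forward snowballing procedure described above can be sketched as follows. This is a simplified illustration: `get_references` and `get_citations` are hypothetical stand-ins for search-engine queries (e.g., against Google Scholar), and `is_relevant` stands in for manual screening against the inclusion/exclusion criteria.

```python
# Minimal sketch of iterative backward/forward snowball sampling.
# All callables are hypothetical stand-ins for manual search and screening.

def snowball(starter_set, get_references, get_citations, is_relevant, iterations=2):
    """Return the accumulated sample after a fixed number of snowballing iterations."""
    sample = set(starter_set)
    frontier = set(starter_set)
    for _ in range(iterations):
        candidates = set()
        for paper in frontier:
            candidates |= set(get_references(paper))  # backward: papers this one cites
            candidates |= set(get_citations(paper))   # forward: papers citing this one
        # Screen newly found papers against the inclusion/exclusion criteria.
        frontier = {p for p in candidates - sample if is_relevant(p)}
        sample |= frontier
    return sample
```

In each iteration, only the newly admitted papers form the next frontier, so already-screened papers are not re-examined.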
4. Analysis
After all papers were selected and screened, the research team mapped each paper’s main topic onto one of the four phases in the Double Diamond design framework (Council, [n. d.]), a classic framework that comprehensively covers the various activities in a design process and has guided much previous academic research on UX design (Gustafsson, 2019; Yang et al., 2020; Ammarullah et al., 2021). It encapsulates the two divergent-convergent processes in design, where designers explore potential problems in the domain, then converge on the main target issues; they then prototype a few potential solutions and decide on the most effective one through testing and evaluation (Council, [n. d.]). It should be noted that modern design processes are mostly iterative, so designers can go back and forth between phases.
Given that we also focus on the technical feasibility of AI models in UX, we included two additional categories: “Datasets”, on UX-related datasets, and “General AI Models”, on AI models that work with UX-related data and can be applied to more than one phase of the Double Diamond framework. When a paper fits more than one phase, we include it in the primary phase it belongs to.
The papers in each phase were analyzed and discussed by at least two researchers. For each paper, based on our research questions and our human-centered AI perspective, we define the following seven aspects to focus on:
Research contribution type (according to Wobbrock and Kientz (2016))
Target problem/task
Study/discussion of user needs
Supporting empirical evidence from previous work (if any)
AI model architecture and data modality
Other important model aspects (e.g. user control, explainability)
UX artifacts involved
Researchers also took notes on meaningful information outside of these aspects. In a shared spreadsheet, researchers filled in information about the paper for the above aspects and discussed them for our analysis.
Fig. 3 depicts the trend of paper counts for each year in our sample, and Fig. 4 provides a more detailed view of the six phases. They show that research in this field has significantly increased since 2020. Note that the literature review was conducted from March to July 2023, so we only included papers published before this. Through further analysis of the general trends, we identified two imbalances in the current research landscape:
Imbalance between technology-centric and human-centered approaches
We visualized the proportion of papers that studied or analyzed the needs of their target users using human-centered methodologies defined in previous literature (Olson and Kellogg, 2014; Rosala, 2020; Farrell, 2017; Rosala, 2022; Moran, 2018), such as ethnographic interviews and usability studies. The result is shown in Figure 5: in total, only 24.3% of the papers in these four phases (N=76 of 309) used human-centered methodologies and discussed user needs in their scenarios. This reflects the current technology-centric tendency of research in AI assistance for UX. Although this phenomenon is not uncommon given the nascent nature of the field, it calls for a more balanced approach that incorporates human-centered investigations. Emphasizing human-centered research not only addresses the preferences of users but also enhances the overall value and impact of AI solutions (Shneiderman, 2022).
Imbalance between studies in Double Diamond phases
As depicted in Fig. 5 (b), the papers in our sample display a noticeable inclination towards the develop and deliver phases, while seemingly underrepresenting the define phase. Determining the exact cause of this observed trend is challenging and beyond the scope of our review. Nevertheless, we hypothesize that this bias stems from the wealth of data available for the latter two phases (as discussed in Section Datasets ), coupled with the inherently subjective and task-dependent nature of evaluating design concepts during the define phase (Council, [n. d.] ; Gray, 2016 ) .
In the following sections, we dive deeper into our analysis of previous work in each of these six categories. At the end, we compare our findings from all six phases and summarize the results of our general analysis.
4.1. Discover
Discover is the divergent phase in the first diamond. It is the beginning phase where most exploratory user research is conducted. Designers need to understand the design problems and build user empathy in this phase. Common methodologies and artifacts involved in this phase include personas, user interviews, and brainstorming (Council, [n. d.] ) . Our analysis summarized related research themes from past works as follows: Review Mining (N=27), Data-driven Persona (N=21), and AI-supported Brainstorming (N=18).
4.1.1. Review Mining
For UX researchers, analyzing user reviews helps identify current design problems, potential user requirements, and other user-experience-relevant information (Hedegaard and Simonsen, 2013; Baj-Rogowska and Sikorski, 2023; Yang et al., 2019a; Mendes and Furtado, 2017). The traditional practice is to manually code the data, or to use rule-based algorithms to classify user reviews into topics and conduct statistical analysis (Mendes et al., 2015; Maalej et al., 2016). The introduction of machine learning to this task dates back to the 2010s (Dąbrowski et al., 2022). It automated the processing of vast amounts of textual data and advanced traditional algorithms with a better understanding of natural language. For example, it facilitates extracting structured information from narratives, such as product features and user attitudes (Tuch et al., 2013).
One concern is how these works quantify the goal of mining user reviews. Most of them reduce design practitioners’ needs to classifying user narratives based on empirically defined computational models (Hedegaard and Simonsen, 2013; Yang et al., 2019a) or quantified metrics such as user sentiment (Guzman and Maalej, 2014; Li et al., 2020c) and satisfaction levels (Jang and Park, 2022; Jang and Yi, 2017). Only a limited number of these works validated whether this equivalence actually meets designers’ needs.
Another concern is the generalizability of these formulations for identifying design problems in different scenarios. Some recent works indicated that review analysis could be made more fine-grained with respect to user needs by integrating more advanced language models. For example, Wang et al. (Wang et al., 2022) increased the granularity of the extracted information and indicated specific problematic features for further improvement.
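To make the classification step that these review-mining systems automate concrete, the toy sketch below tags a review with the product features it mentions and a crude lexicon-based sentiment score. All keyword lists here are illustrative assumptions, not taken from any cited system; the learned models surveyed above replace exactly this kind of hand-written rule.

```python
# Toy rule-based review mining: tag each review with mentioned product
# features and a crude lexicon-based sentiment score. Keyword lists are
# purely illustrative assumptions for this sketch.

FEATURE_KEYWORDS = {
    "login": {"login", "sign-in", "password"},
    "performance": {"slow", "fast", "lag", "crash"},
}
POSITIVE = {"great", "love", "fast", "easy"}
NEGATIVE = {"slow", "crash", "hate", "confusing"}

def mine_review(text):
    """Return the features a review mentions and a net sentiment score."""
    tokens = set(text.lower().split())
    features = [f for f, kws in FEATURE_KEYWORDS.items() if tokens & kws]
    sentiment = len(tokens & POSITIVE) - len(tokens & NEGATIVE)
    return {"features": features, "sentiment": sentiment}
```

Aggregating such per-review tags over thousands of reviews is what yields the statistical summaries discussed above; the concern raised in this section is precisely whether such quantified outputs capture what designers actually need.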
4.1.2. Data-Driven Persona
Data-driven persona refers to the adoption of algorithmic methods to develop personas from numerical data (Salminen et al., 2021). Machine learning pushes this further with its capacity for clustering and segmenting a variety of user data, such as feedback posts (Tan et al., 2022; Zhang et al., 2016; Jisun et al., 2017) and survey responses (Hou et al., 2020). It also makes large-scale user data with time-varying behaviors feasible for persona development: for instance, user profiles and interaction histories (Jansen et al., 2019; Salminen et al., 2021; An et al., 2016) have been introduced to make persona construction more comprehensive.
A common criticism of this data-driven approach is that its automation hinders design practitioners from building user empathy as deep as they could with a qualitative approach (Salminen et al., 2020). Efforts have been made in recent years to integrate mixed methods. For instance, quantitative results are treated as archetypes that inform subsequent qualitative analysis (Tan et al., 2022; Zhang et al., 2016; Jansen et al., 2019). Other approaches verified qualitative insights via quantitative results (Jung et al., 2022). However, evaluations of these mixed-method approaches are far from standardized and overlook their effectiveness as user-empathizing processes.
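The clustering step at the core of these data-driven persona pipelines can be illustrated with a minimal k-means sketch. Everything here is an illustrative assumption: the feature vectors, the naive initialization, and the cluster count; production systems use richer segmentation methods, but the principle of treating cluster centroids as persona archetypes is the same.

```python
# Minimal k-means sketch: users as numeric feature vectors (e.g. session
# counts, feature usage); cluster centroids serve as persona archetypes.

def kmeans(points, k, iterations=10):
    """Cluster `points` (tuples of numbers) into k groups."""
    centroids = points[:k]  # naive initialization, sufficient for a sketch
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters
```

Each resulting centroid is then typically enriched qualitatively (names, goals, quotes) to become a persona, which is where the empathy-building criticism above applies.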
4.1.3. AI-supported Brainstorming
Ideation is another divergent thinking scenario with which many studies have tried to integrate AI. Research on AI for brainstorming includes individual support and human-human collaboration support.
For individual ideation support, early systems adopted machine learning to retrieve inspirational ideas and search associative knowledge from a defined collection (Gilon et al., 2018; Feng et al., 2022; Andolina et al., 2015; Kita and Rekimoto, 2018), among which only a limited number of works considered learning from specific design contexts (Koch et al., 2019). Recently, the advancement of Large Language Models (LLMs) has enhanced the capacity for divergent thinking in these ideation systems, but it also confines them primarily to textual modalities (Memmert and Tavanapour, 2023; López, [n. d.]; Di Fede et al., 2022). Moreover, these systems mostly followed a series of linear, structured stages in order to make AI integration more feasible, such as a sequence of warming up, generating ideas, and discussing ideas with groups (López, [n. d.]; Memmert and Tavanapour, 2023; Tavanapour et al., 2020).
For collaborative brainstorming, researchers investigated how machine learning could support various interactions for team communication, such as face-to-face ideation (Andolina et al., 2015) and table-top interfaces (Hunter and Maes, [n. d.]). Machine learning can also act as a facilitator for human group ideation (Bittner and Shoury, 2019; Tavanapour et al., 2020). The rise of generative AI provides more engaging roles for machine learning (Shin et al., 2023), such as experts (Memmert and Tavanapour, 2023; Bittner and Shoury, 2019) and mediators (Löbbers et al., 2023), and opens further research opportunities.
Another research focus is how social effects in human-human teaming transfer to human-AI collaborative ideation (Hwang and Won, 2021; Memmert and Tavanapour, 2023). This line of work sheds light on the negative impacts AI can introduce in the process, such as distraction (Kita and Rekimoto, 2018), cognitive load (Zhang et al., 2022), and free-riding (Memmert and Tavanapour, 2023), which are not limited to UX ideation.
4.1.4. Additional Topics
In addition to the aforementioned topics, some other emerging works merit mention. Some studies addressed challenges in traditional qualitative research, such as communication fatigue and evaluation apprehension, by introducing AI-powered conversational agents (Xiao et al., 2020b; Bulygin, 2022). Researchers have explored their adoption for conducting user interviews, facilitating engaging communication with users, and eliciting information (Han et al., 2021; Xiao et al., 2020a). Such agents could also make conducting user interviews at scale more accessible. How this interview mode affects interviewers, interviewees, and in-depth understanding leaves opportunities for future study.
4.1.5. Summary
Current ML integrations in UX research mostly provide automation support for laborious work and enhance traditional processes for working with large-scale, high-variety user data, especially in review mining and data-driven personas. Studies on ideation are more diverse and consider different collaborative settings and potential roles of AI beyond automation. From a human-centered perspective, an apparent question is how the integration of machine learning aligns with the needs of the UX discover phase, namely understanding design problems and building user empathy.
What we found in our analysis is an oversight of the empathy-building process and a limited interpretation of design practitioners’ needs. For example, constructing personas is regarded as producing deliverables that can be automated by machines, while it is primarily a process in which designers synthesize materials and build user understanding; quantitative metrics are adopted without validating their effectiveness for design practitioners or their generalizability across design contexts. Future studies would be enriched by delving deeper into specific design contexts and designers’ cognitive processes, especially in enhancing the empathetic comprehension of users, as highlighted by (Zhu and Luo, 2023). This should complement the focus on the informational necessities that bolster designers’ empathetic processes.
4.2. Define
Define is the convergent phase in the first diamond, where designers define the problem statement and pinpoint the product’s desired impact based on previous research findings. The main themes we identified in the define phase are Qualitative Analysis (N=22) and AI for Design Idea Evaluation (N=2; we specifically looked for other papers involving design idea evaluation and AI but did not find any beyond our snowball sampling results). Methodologies and artifacts involved in this phase include affinity diagramming and focus groups (Council, [n. d.]). The primary objective of this phase is to sort through the extensive research data, discerning the most promising directions that align with user requirements, business objectives, and technical viability (Rosala, 2022).
4.2.1. Qualitative Analysis
AI support for qualitative analysis has been an active research area and is prevalent in our sample (N=22). UX professionals and HCI researchers use this methodology to organize, label, and analyze data, identifying patterns and extracting insights (Rosala, 2022; Olson and Kellogg, 2014). Generally, researchers discovered that simplistic automation of qualitative analysis can break established workflows, increase discussion overhead, and lead to unexpected reductions in efficiency and quality (Borlinghaus and Huber, 2021). In contrast, papers that closely examined different steps in qualitative analysis and intentionally preserved human agency, control, and goals often demonstrated better psychological and performative results (Marathe and Toyama, 2018; Rietz and Maedche, 2021; Wakatsuki and Yamamoto, 2021; Feuston and Brubaker, 2021; Gebreegziabher, 2023; Gao et al., 2023).
On the surface, qualitative analysis involves labeling data and extracting insights. Some studies aimed at speeding up the labeling process and using AI to produce labeled results (Li, 2021). However, research has shown that such full-automation approaches can easily break existing workflows and lead to increased discussion overhead and reduced efficiency and quality (Borlinghaus and Huber, 2021). In contrast, some papers broke qualitative analysis into detailed steps to analyze their distinctions and differing potential for automation. Marathe et al. (Marathe and Toyama, 2018) divided qualitative analysis into two phases: building a codebook by analyzing data, and applying the codes to the remaining data.
Codebook building
Building a codebook with a data subset is a key learning and reasoning process in qualitative analysis, where researchers build “emotional connection — the intimacy, pride, and ownership — with the data” (Jiang et al . , 2021 ) and “think with their hands” (Borlinghaus and Huber, 2021 ) . Researchers generally oppose the introduction of “low-level, suggestion-based automation” in this process, to avoid taking away the invaluable cognitive process of human researchers (Marathe and Toyama, 2018 ; Jiang et al . , 2021 ) . Feuston et al. (Feuston and Brubaker, 2021 ) emphasized that qualitative research is a process that utilizes researchers’ unique perspectives in data analysis, whilst AI might take away this opportunity and reinforce past coding patterns in new data.
Codebook application
Once a codebook is developed, applying it to the remaining data can be relatively more mechanical. Previous studies have shown that automation is more welcome in this phase (Marathe and Toyama, 2018). As a result, many systems have been built to automate the tedious aspects of labeling while preserving the researcher’s agency in learning (Marathe and Toyama, 2018; Rietz and Maedche, 2021; Gebreegziabher, 2023; Jiang et al., 2021; Feuston and Brubaker, 2021). But there is more to the labeling process than simply applying the codebook: since qualitative analysis is often collaborative, Drouhard et al. emphasized the value of disagreement between researchers in surfacing ambiguities in data (Drouhard et al., 2017). Reflecting on and resolving these conflicts can help improve researchers’ learning (Chen et al., 2018; Rietz and Maedche, 2021; Gebreegziabher, 2023).
Advantages of interactive ML in qualitative analysis
Interactive ML provides great potential to automate the tedious aspects of qualitative coding while leaving final decisions to users, preserving their agency. It has been employed in existing AI systems to support qualitative coding (Rietz and Maedche, 2020, 2021; Gebreegziabher, 2023). In the context of qualitative analysis, interactive ML engages users in a collaborative process in which they actively offer feedback on AI-generated outputs, thereby enhancing the precision and relevance of qualitative coding (Rietz and Maedche, 2020). Interactive ML also does not require large labeled datasets and learns as users annotate more data, which naturally fits the qualitative analysis process. By building on human-interpretable rules, patterns, and relatively simple AI models, these systems achieved a certain level of explainability and interpretability. Cody (Rietz and Maedche, 2021) also provided counterfactual explanations to help users further understand algorithmic predictions.
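The suggest-confirm-update loop common to these interactive-ML coding tools can be sketched as follows. The toy word-overlap matcher below is a stand-in for the learned models in systems such as Cody or PaTAT: it illustrates the interaction pattern (the model suggests, the researcher decides, every decision updates the model), not either system's actual algorithm.

```python
# Sketch of an interactive-ML loop for codebook application. The matcher is
# deliberately trivial; what matters is that the researcher's decision is
# always final and immediately becomes new training signal.

from collections import defaultdict

class InteractiveCoder:
    def __init__(self):
        self.examples = defaultdict(list)  # code -> list of confirmed excerpts

    def suggest(self, excerpt):
        """Suggest the code whose confirmed examples best overlap the excerpt."""
        tokens = set(excerpt.lower().split())
        best, best_overlap = None, 0
        for code, excerpts in self.examples.items():
            overlap = max(len(tokens & set(e.lower().split())) for e in excerpts)
            if overlap > best_overlap:
                best, best_overlap = code, overlap
        return best  # None: no confident suggestion, the human decides alone

    def confirm(self, excerpt, code):
        """Record the researcher's final decision as ground truth."""
        self.examples[code].append(excerpt)
```

Because the model learns only from confirmed decisions and never applies a code on its own, it automates the mechanical matching while keeping interpretation, and agency, with the researcher, in the spirit of the systems discussed above.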
User control in qualitative analysis
User control has been a common theme in discussions of AI support in qualitative analysis (Rietz and Maedche, 2020; Jiang et al., 2021; Feuston and Brubaker, 2021; Rietz and Maedche, 2021; Gebreegziabher, 2023; Gao et al., 2023). Earlier papers discussed how the lack of control might prevent AI from providing valuable support (Jiang et al., 2021). However, Feuston and Brubaker discovered that the picture is more nuanced: AI support can benefit certain steps in qualitative analysis, or even shift some analytic practices, as long as it assists rather than automates existing analytic work practices (Feuston and Brubaker, 2021). The careful design of systems including Cody (Rietz and Maedche, 2021) and PaTAT (Gebreegziabher, 2023) also confirmed the value of AI support that maintains user control and agency. The “delegability” of human tasks to AI (Lubars and Tan, 2019) in qualitative coding depends on human motivation, task difficulty, associated risk, and human trust (Jiang et al., 2021).
4.2.2. Design Idea Evaluation
Two papers in our sample investigated the use of AI in evaluating design ideas. Siemon conducted a comparative study with a simulated AI system to investigate AI’s utility in helping reduce apprehension in design idea evaluation (Siemon, 2023 ) . In addition, Mesbah et al. combined AI with crowdsourcing to effectively measure the desirability, feasibility, viability, and overall feeling of design ideas (Mesbah et al . , 2023 ) . Given that current methodologies around design idea evaluation are subjective and task-dependent (Council, [n. d.] ; Gray, 2016 ) , AI models that are trained against general metrics such as in (Mesbah et al . , 2023 ) are likely not sufficient for real-world scenarios. It remains largely unclear how AI support might fit into existing manual evaluation processes. We believe a deeper empirical understanding of UX evaluation processes and practices is required to bridge this current gap.
4.2.3. Summary
Overall, in the define phase, previous research that emphasized researchers’ agency in understanding, learning, and interpreting data with their unique perspectives generally showed better results than simplistic automation and acceleration (Marathe and Toyama, 2018; Rietz and Maedche, 2021; Wakatsuki and Yamamoto, 2021; Feuston and Brubaker, 2021; Gebreegziabher, 2023; Gao et al., 2023). The use of interactive ML techniques in qualitative analysis support has demonstrated potential in balancing researchers’ agency in learning and interpreting the data with algorithmic support (Rietz and Maedche, 2021; Gebreegziabher, 2023). For evaluating design ideas with AI, the subjective and task-dependent nature of current evaluation practices (Council, [n. d.]; Gray, 2016) requires closer coupling between designers’ workflows, goals, and AI support to provide meaningful, holistic support.
4.3. Develop
Develop refers to the divergent phase where designers come up with solutions for the defined problem domain, informed by insights from the previous two phases (Council, [n. d.]). Our analysis identified the following themes for papers in this phase: UI Generation (N=51), Interface Design Inspiration (N=25), and UI Optimization (N=21).
4.3.1. UI Generation
Large-scale UI datasets like RICO have enabled AI research in automatic UI generation (more discussion of datasets appears in the Datasets section). We divide past UI generation research roughly into three categories: full-screen UIs, UI components, and fidelity conversion.
Full-screen UIs
Many previous AI models focused on generating entire UI screens. As a fundamental step toward effectively automating the structuring of UI elements, layout generation became the predominant focus of much previous work. Earlier on, Li et al. proposed applying Generative Adversarial Networks (GANs) to synthesize and model geometric relations of graphical elements for accurate layout alignment (Li et al., 2021c). Furthermore, transformer-based architectures (Gupta et al., 2021; Jiang et al., 2023; Sobolevsky et al., 2023) provided solutions that handle the hierarchical and sequential relationships of graphical elements, adding particular value for the UI generation task. Along the same line, Inoue et al. (Inoue et al., 2023) and Zhang et al. (Zhang et al., 2023) leveraged diffusion models for conditional layout generation. While these efforts mark considerable progress, the generation of high-fidelity UI screens remains at an early stage, with notable attempts such as GUIGAN by Zhao et al. (Zhao et al., 2021), which approaches high-fidelity generation by integrating GUI component subtree sequences in the generation process. Overall, we found only a few existing AI models that offer high-fidelity UI generation ready for use in practice. The trajectory of UI layout and high-fidelity UI generation research reveals the critical need for solutions that are directly applicable in design workflows. Despite the trend toward more sophisticated AI capabilities, there remain unresolved challenges and gaps in seamlessly blending model-generated results with user-centered design practices.
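To make the geometric objectives behind layout generation concrete, the toy sketch below scores how badly a set of UI bounding boxes misses left-edge alignment. This scoring rule is our own deliberately simplified stand-in for the kind of alignment signal such models optimize, not the actual loss of any cited system.

```python
def alignment_penalty(boxes, tolerance=2):
    """Count pairs of boxes whose left edges nearly-but-not-exactly align.

    boxes: list of (x, y, w, h) tuples in pixels. A generator trained
    with an alignment objective is pushed toward layouts where such
    near-misses are snapped onto shared guide lines (penalty of zero).
    """
    penalty = 0
    lefts = [x for x, y, w, h in boxes]
    for i in range(len(lefts)):
        for j in range(i + 1, len(lefts)):
            gap = abs(lefts[i] - lefts[j])
            if 0 < gap <= tolerance:  # misaligned by a hair
                penalty += 1
    return penalty

messy = [(10, 0, 100, 40), (11, 60, 100, 40), (12, 120, 100, 40)]
clean = [(10, 0, 100, 40), (10, 60, 100, 40), (10, 120, 100, 40)]
print(alignment_penalty(messy), alignment_penalty(clean))  # → 3 0
```

Real models encode many such relations (alignment, spacing, containment, reading order) jointly over element sequences or graphs; the point here is only that "good layout" can be reduced to differentiable or searchable objectives.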
UI Components
A few papers were dedicated to the generation of UI components, such as icons (Zhao et al., 2020a) and buttons. For example, ButtonTips (Liu et al., 2019) delved deeply into automatic web button design under user input constraints, including button layout generation with text labels, color selection, spatial relationships, and presence prediction. These research efforts can help generate need-based design resources for novice designers. Additionally, designers in industry nowadays commonly work with company-specific design systems to ensure branding and visual consistency (Frost, 2016). Generation within the constraints of design systems might increase the adoption of AI tools in design practitioners’ workflows.
Fidelity Conversion
Beyond AI models that adopt an end-to-end approach to UI generation, past research also investigated AI models’ capabilities in converting UI prototypes between different fidelities (Buschek et al., 2020). For example, Paper2Wire turns UI sketches into editable, mid-fidelity UI wireframes (Buschek et al., 2020), which can be helpful in early prototyping stages. MetaMorph, for another instance, assists in transforming constituent components from lo-fi sketches to higher fidelities (Sermuga Pandian et al., 2021c). Rather than directly delivering the final result, such AI models aim to facilitate designers’ existing workflows and have a higher potential for adoption.
4.3.2. Interface Design Inspiration
Designers usually refer to external resources for inspiration. Currently, prevalent applications of example search fall into two categories: (1) design galleries, such as Gallery D.C. (Feng et al., 2022), where designers browse a wide range of examples as a serendipitous inspirational process; and (2) algorithmic recommendation tools (Swearngin et al., 2018) based on similarities to the user’s design input, where designers look for suggestions focusing on more concrete ideas (Mozaffari et al., 2022). Previous studies showed two challenges of existing exploratory strategies: design fixation (e.g., excessive focus on a present concern) (Marsh et al., 1996; Youmans and Arciszewski, 2014) and focus drift (e.g., deviation from the original goal). Intelligent tools such as GANSpiration (Mozaffari et al., 2022) generate diverse but relevant design examples, seeking a balance by providing both targeted and serendipitous inspiration. Scout, for another example, focused on overcoming design fixation, providing more spatially diverse design examples and “breaking out the linear design process” (Swearngin et al., 2020). Meanwhile, AI might shed light on scaling up earlier solutions that help avoid design fixation, such as parallel prototyping, by supporting the exploration of relevant alternatives during iteration (Dow et al., 2011).
Example exploration usually takes place in the early stages of design and continues to be a crucial component throughout the iterative process, expanding the potential solution space. Existing AI-infused tools for inspiration search have expanded the diversity of search mediums, enabling inputs such as natural language descriptions (Wang et al., 2021), screenshots (Swearngin et al., 2018), hand-drawn sketches and doodles (Mohian and Csallner, 2022), low-fidelity design artifacts such as wireframes (Chen et al., 2020a), and hybrid inputs (e.g., text and doodle (Mohian and Csallner, 2023)), supporting more flexible search processes (Lu et al., 2022). In later stages of design, external references also allow for the reinterpretation of ideas and are used as validation tools (Herring et al., 2009). Given the iterative nature of design tasks, more research is needed on dynamically supporting and inspiring UI design as the artifact evolves in complexity and fidelity.
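Regardless of input medium, algorithmic example recommendation of this kind typically reduces to nearest-neighbor search over feature embeddings. A minimal stand-in, with hypothetical hand-written feature vectors in place of a learned encoder, might look like:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(query_vec, gallery, top_k=2):
    """Return the gallery items most similar to the query embedding.

    gallery: list of (name, feature_vector) pairs. In real tools the
    vectors come from a model encoding screenshots, sketches, or text,
    so heterogeneous inputs land in one shared search space.
    """
    ranked = sorted(gallery, key=lambda item: -cosine(query_vec, item[1]))
    return [name for name, vec in ranked[:top_k]]

gallery = [
    ("login_screen",  [0.9, 0.1, 0.0]),
    ("checkout_flow", [0.1, 0.8, 0.3]),
    ("signup_screen", [0.8, 0.2, 0.1]),
]
print(recommend([1.0, 0.1, 0.0], gallery))  # → ['login_screen', 'signup_screen']
```

The design-fixation concern discussed above maps onto this sketch directly: returning only the top-k nearest neighbors maximizes relevance, whereas tools like GANSpiration deliberately trade some similarity for diversity in the ranked list.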
4.3.3. UI Optimization
UI optimization encompasses two main aspects: at the interface level, it involves enhancing layout positioning and aesthetic style (Rahman et al., 2021); at the user experience level, it focuses on improving the perceived affordances of components (Swearngin and Li, 2019; Pang et al., 2016). The process mainly aims at optimizing visual appeal, functional clarity, and the overall interaction with the user interface. First, applying appropriate visual aesthetics plays an important role in generating and optimizing high-fidelity UIs. The underlying difficulties in automatically suggesting and applying design styles include data-driven aesthetic assessment (Kong et al., 2023; Kumar et al., 2023) and transforming high-level design principles into explicit constraints. Accordingly, researchers proposed solutions that 1) translate natural language requirements into predictions of design properties (Kim et al., 2022) and 2) extract applicable design constraints from design principles (Kong et al., 2023). There are also a few papers dedicated to specific aspects of aesthetics, such as color (Feng et al., 2021b; Hegemann et al., 2023; O’Donovan et al., 2011) and font design (Zhao et al., 2018; O’Donovan et al., 2014). Meanwhile, due to the subjectivity of aesthetic styling, existing systems tend to keep designers actively engaged in the production process, including making decisions about which recommended suggestions to adopt, iterating on their choices, and making further revisions afterwards (Kong et al., 2023; Kim et al., 2022; Hegemann et al., 2023). For optimization at the user experience level, past work drew insights from the correlation between components’ spatial relationships and user task performance (i.e., speed and accuracy), leveraging classic principles such as Fitts’s Law and neural network learning (Duan et al., 2020) to reach an ideal layout.
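Fitts's Law gives this kind of performance-driven optimization a concrete, computable objective: expected pointing time grows with the distance to a target and shrinks with its size. A minimal sketch follows, using the Shannon formulation; the coefficients a and b are illustrative placeholders, not empirically fitted values.

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def layout_cost(start, targets):
    """Total expected pointing time from `start` to each target.

    targets: list of (x, y, width) tuples. A layout optimizer would
    search over candidate placements and keep the layout minimizing
    this cost, subject to non-overlap and alignment constraints.
    """
    sx, sy = start
    total = 0.0
    for x, y, width in targets:
        d = math.hypot(x - sx, y - sy)
        total += fitts_time(d, width)
    return total

near_big = [(100, 0, 80), (0, 100, 80)]     # close, large targets
far_small = [(400, 0, 20), (0, 400, 20)]    # distant, small targets
print(layout_cost((0, 0), near_big) < layout_cost((0, 0), far_small))  # → True
```

Neural approaches like the one cited above effectively learn richer versions of this cost function from interaction data instead of a closed-form law.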
Different from the previous categories, optimization contributes to finishing the design cycle. Given the standardized and consistent requirements across UI design practices, optimization tasks can further explore topics including visual alignment and consistency checking, usability issue mitigation, and improved adherence to design guidelines.
4.3.4. Summary
Machine learning, by enhancing design processes with its search and generative capabilities, offers innovative pathways for design inspiration (Feng et al., 2022). AI-enabled search and generation might enable more rapid and parallel prototyping, previously limited by human capacity, thereby increasing the potential to elevate design outcomes. While the quest for end-to-end solutions for complete UI design remains prevalent, there is a shift towards automating select intermediary steps in the design workflow, promising more effective support for design objectives (Lu et al., 2022). Additionally, for design aspects steeped in subjectivity, like aesthetic choices, machine learning-assisted tools are emerging to bolster designers’ creative freedom through detailed interactions, ensuring technology complements rather than overrides human expertise.
4.4. Deliver
Deliver is the convergent phase in the second diamond, where, through different evaluation methods, designers elicit feedback from users on their design prototypes, iteratively improve them, and come up with a final solution (Council, [n. d.]). There are several major themes in this phase: Visual Saliency Prediction (N=24), Aesthetic Analysis (N=12), and Visual Error Detection (N=9).
4.4.1. Visual Saliency Prediction
Visual saliency is a proxy for the perceived importance of screen components, indicating a UI’s visual hierarchy. Such information can help UX practitioners better grasp users’ attention distribution, thus improving information architecture design (Novák et al., 2023). Many model architectures have been developed for predicting visual saliency (Xu et al., 2016; Georges et al., 2016; Li et al., 2016; Bylinskii et al., 2017; Shen et al., 2015). Visual attention prediction for different user groups (Leiva et al., 2022b; Chen et al., 2023) and UI categories (Fosco et al., 2020) allows more granularity and versatility for UX practitioners. Techniques to collect user gaze data with easy-to-access gadgets instead of expensive eye-tracking devices, such as webcams (Xu et al., 2015) and mobile phones (Li et al., 2017b), have also been investigated. Methods deploying crowdsourcing for data collection have also been presented, using eye-tracking techniques (Xu et al., 2015) and having users self-report where they gazed (Cheng et al., 2023a).
4.4.2. Aesthetic Analysis
Automatic visual aesthetic analysis of UI screens can help UX professionals grasp perceptions of their design. While judging the visual appearance of UIs can be subjective, automatic evaluations afford quick predictions as initial feedback to designers. Past work has focused on AI applications in the evaluation of UIs’ perceived aesthetics (Lima and Gresse von Wangenheim, 2022; Miniukovich and De Angeli, 2015; de Souza Lima et al., 2022; Xing et al., 2021; Dou et al., 2019) and visual complexity (Akça and Tanriöver, 2021), which is a key aspect of design aesthetics. In addition, aesthetic predictions tailored to different user groups (Leiva et al., 2022b) and real usage contexts (Samele and Burny, 2023) address more nuanced prediction needs. The majority of existing visual analyses of UIs relied on objective metrics and feature extraction (Akça and Tanriöver, 2021), or on AI models trained on user ratings (Dou et al., 2019; Leiva et al., 2022a). Both empirical analyses and experimental results have demonstrated the improved flexibility and quality of AI models’ evaluations (Akça and Tanriöver, 2021; Dou et al., 2019).
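Objective-metric approaches of this kind typically extract hand-crafted visual features and combine them into a score. The sketch below is a deliberately simplified stand-in: the two complexity features and all weights are invented for illustration and are not taken from any cited model, which would instead learn such weights from user ratings.

```python
def visual_complexity(elements, distinct_colors, screen_area):
    """Crude complexity features: element density and color variety."""
    density = len(elements) / screen_area
    return density, distinct_colors

def aesthetic_score(elements, distinct_colors, screen_area,
                    w_density=-2000.0, w_colors=-0.05, bias=1.0):
    """Linear model over complexity features, clamped to [0, 1].

    Higher density and more colors both push the predicted aesthetic
    rating down, reflecting the complexity/aesthetics link noted above.
    """
    density, colors = visual_complexity(elements, distinct_colors, screen_area)
    score = bias + w_density * density + w_colors * colors
    return max(0.0, min(1.0, score))

sparse = aesthetic_score(elements=[1] * 8, distinct_colors=4,
                         screen_area=100_000)
cluttered = aesthetic_score(elements=[1] * 60, distinct_colors=12,
                            screen_area=100_000)
print(sparse > cluttered)  # → True
```

Models trained on user ratings replace this hand-tuned linear combination with learned functions over far richer features, but the input/output shape of the prediction task is the same.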
A study conducted by Rosenholtz et al. revealed that in practice, perceived visual quality is not the only factor contributing to the evaluation of a design (Rosenholtz et al., 2011). Designers often have to make trade-offs between visual quality and design goals, which, they concluded, “would likely interfere with acceptance of a perceptual tool by professional designers”. In addition, they observed that the overall “goodness” values were not useful beyond A/B comparisons between design options. A deeper empirical understanding of how UX practitioners utilize UI evaluation tools in real-world contexts would greatly benefit practical research in this direction.
4.4.3. Visual Error Detection
Automated visual error detection for UI screens is another key theme. These systems can emulate human interactions with UI screens and save time and human effort after app development (Peng et al., 2022). While such systems are often used after app development to check implementation quality, they are also capable of identifying design issues that propagate into code. Unlike system-specific tests such as those developed especially for Android (Collins et al., 2021; Llàcer Giner, 2020), image-based testing techniques can take UI screenshots from different systems, increasing cross-platform versatility (Eskonen et al., 2020; Eskonen, 2019). These automated testing techniques help detect display issues (Su et al., 2021), generate testing reports, and detect discrepancies between a UI’s design and its implementation (Chen et al., 2017). Some specific techniques, such as interaction and tappability prediction (Swearngin and Li, 2019; Schoop et al., 2022), can also serve more granular error detection goals. Design guideline violation checkers (Zhao et al., 2020b; Yang et al., 2021a, b) also have great practical potential in UX workflows. Overall, AI has great potential for flexible and universal visual error detection.
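At its simplest, image-based design-versus-implementation checking reduces to comparing pixel grids. The hypothetical sketch below flags a rendered screen whose pixels diverge from the mockup beyond a threshold; real tools in this literature operate on detected components and layout structure rather than raw pixels, so this is only a lower bound on what such systems do.

```python
def diff_ratio(mockup, rendered):
    """Fraction of pixels that differ between two equal-size grids.

    mockup, rendered: 2-D lists of pixel values (e.g. packed RGB ints).
    """
    total = diff = 0
    for row_a, row_b in zip(mockup, rendered):
        for a, b in zip(row_a, row_b):
            total += 1
            diff += (a != b)
    return diff / total

def flag_discrepancy(mockup, rendered, threshold=0.05):
    """Report a UI discrepancy when more than `threshold` of pixels differ."""
    return diff_ratio(mockup, rendered) > threshold

design = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
build = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]  # the highlight pixel was lost
print(flag_discrepancy(design, build))  # → True
```

Component-level checkers refine this idea by aligning detected widgets first, so that benign rendering differences (fonts, anti-aliasing) do not trip the threshold the way raw pixel diffs would.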
4.4.4. Additional Topics
Systems for sentiment prediction, usability testing, and automatic feedback generation are also included in our sample. Sentiment prediction centers on the user’s perception of the product (Desolda et al., 2021; Petersen et al., 2020). Related work includes user satisfaction prediction (Koonsanit and Nishiuchi, 2021; Koonsanit et al., 2022) and brand personality prediction (Wu et al., 2019). These models help guide designers in analyzing the design target and the predicted user perception.
Usability testing is another process that has gathered researchers’ attention. To suit more nuanced device-specific usability testing needs, researchers have presented usability testing for mobile UIs (Schoop et al., 2022), e-learning systems (Oztekin et al., 2013), and thermostats (Ponce et al., 2018). Researchers use live emotion logs (Filho et al., 2015), think-aloud sessions (Fan et al., 2020, 2022), and online reviews (Hedegaard and Simonsen, 2014) to extract usability-related data and assess interfaces. In addition, automatic feedback generation empowers designers to improve on the current design with the help of the ML system (Krause et al., 2017; Ruiz and Snoeck, 2022).
Other less-explored themes include dark pattern detection (Hasan Mansur et al., 2023) and A/B testing (Kaukanen, 2020; Kharitonov et al., 2017). As accessibility becomes more essential in UX design, researchers have also developed tools for automated accessibility testing (Vontell, 2019).
Previous research in the deliver phase has explored various ways to provide UI evaluation feedback to designers. We observed that in our sample, these explorations are often based on the visual analysis of UIs. However, with the growing prevalence of design systems in practice (Frost, 2016; Churchill, 2019), UX designers are shifting their focus from pixel-level aesthetics to interaction flows and the holistic user experience across UI screens. The evaluation of interaction flows and user experiences goes beyond saliency prediction (Section 4.4.1) and visual aesthetics (Section 4.4.2), yet is still overlooked in research. Moreover, current visual analysis metrics often do not align with distinctive UI design aesthetics such as flat design and skeuomorphism, restricting their practical adoption. We believe more consideration of these unique aspects of UX design is important in creating translational research value (Colusso et al., 2017, 2019; Norman, 2010).