The Value of Old-School Literature Reviews for Modern UX Research


Chances are that if you have spent any amount of time in academia, you have either encountered a literature review or been asked to conduct your own. Many folks roll their eyes at the idea of doing a “large book report,” but that reaction sells short a powerful research methodology. Having spent six years in academia before moving into UX research, I have seen tremendous value in (and a critical need for) literature reviews in modern UX research.

Often the work, practices, and thinking of academia sit in a lofty “ivory tower,” deemed usable only by other “worthy” academics and rarely reaching the larger public who could also benefit from it. As a result, methods like the literature review are rarely applied to non-academic research, since they are seen as belonging only to the world of academia. Academic and empirical articles likewise go unused by non-academic researchers, who feel the work is not relevant to them (and the loads of academic jargon certainly don’t help make these texts accessible). Once the notion of the “ivory tower” is broken down, however, and “academic research methods” are recognized as simply research methods, you can skip past the jargon to the root of an article, and the real value of the literature review comes through.

A literature review is conducted by referencing published academic papers and other information in a particular subject area (and sometimes a particular time period) to understand the work that was conducted prior, as well as where the current research questions fit into that body of work. It can help piece together old information in a new way or trace how a particular research field has progressed. A literature review might also evaluate the information presented and help the reader identify which pieces are the most relevant. The goal of the literature review is not to add new contributions to the body of research, but to summarize and synthesize the work that has already been done. This methodology is critical because it helps you, as the researcher, determine whether the problem you want to solve is one that other researchers and academics agree is worth solving, which is arguably one of the most important aspects of conducting UX research as well.

Determining whether the problem you want to solve is worth solving is just one of the critical insights that literature reviews can provide to UX research. Conducting a literature review in UX helps researchers cover gaps in their research, save time by identifying which of their questions have already been answered, and validate that the work they are doing will add something new and valuable. A literature review is essentially a guide to a particular topic or research question. It also gives researchers the chance to draw inspiration and insight from the literature and to ensure the research they conduct is grounded in theory and thought rather than in assumptions. Furthermore, academic articles are not just theoretical pieces: they can provide insights into new and innovative research methods and concrete findings, and even tell the reader what further research the author thinks is needed to help solve the problem.

Breaking down the idea that literature reviews belong solely to the world of academia helps researchers see the real-world value and application of this methodology in modern research efforts. I think we have just scratched the surface of the value of literature reviews for UX research!





Quick Lit Reviews Reduce UX Research Time and Supercharge Your Design

Jolie Dobre

A quick and dirty literature review (Lit Review) is a way to capture and synthesize information about a topic (a design problem, a new technology, an unfamiliar business area, etc.). It’s a simple structure that allows you to document relevant information in an organized, intentional format. Creating the Lit Review takes relatively little time compared with formal UX research but leaves you with a lasting resource that can organize your thoughts, inform your strategy, educate others, and positively influence team behavior and design.

What is a Literature Review?

You may have been exposed to a Lit Review in school as a part of undergraduate or graduate work. Lit Reviews are often performed in preparation for a master’s thesis, doctoral dissertation, or when writing journal articles (“Literature review,” 2019). A Lit Review is a survey of the available published information on a particular topic. A simple review can be composed of just a summary of sources but often includes an overview of the information available and a synthesis of the major findings (The Writing Center, n.d.).

When most people think of a Lit Review, they associate it with the highly rigorous, complex, and time-consuming Systematic Review and Meta-Analysis. This type is familiar because it is often referenced in journal articles and is performed by graduate students and academic researchers. It includes an exhaustive review of scholarly papers and recent research, and an assessment of the search results to offset bias and ensure all relevant research is included. It then uses qualitative and quantitative methods to synthesize findings and has strict rules for structuring results (Pare & Kisiou, 2017; Uman, 2011; Venebio, 2017). The average time to conduct a Systematic Review is 1,139 hours (J Med Libr Assoc, 2018)—hardly practical for UX!

What people don’t realize is that the format of the Lit Review can be modified for different fields of study and purposes. The simple Narrative Review provides a broad perspective on a topic and can be produced quickly and cheaply. It can be performed in mere hours, allows authors to select the material that interests them, ignores selection bias, and permits simple thematic or content analysis (Pare & Kisiou, 2017; The Writing Center, n.d.).

What is a Quick & Dirty Lit Review?

A Quick and Dirty Lit Review (Q&D Lit Review) is a Narrative Review that does not concern itself with formatting for final presentation, liberally uses copy and paste to capture useful information, and, most importantly, leverages qualitative coding techniques to analyze information as it is collected. In business we don’t have the time or budget for deep rigor, long analysis, or well-written prose, but we can still benefit from capturing information from multiple sources for analysis, reuse, and dissemination.

The Q&D Lit Review is also broadened to include non-peer-reviewed and otherwise unpublished work. Often in business, our specific problem may not be supported by an existing body of research, so information must be acquired from other sources: informal online articles, development forums, social media, conversations with colleagues, user interviews, etc. Capturing these less reputable sources allows us to consider and incorporate the newest information and trends, while qualitative coding techniques let us easily compare themes across sources and quickly weigh the value of new ideas against older, tested ones.

When to do a Q&D Lit Review?

A Lit Review can be performed any time you want to get up to speed on a topic quickly. However, it is not a replacement for deeper, more rigorous research. Think of it as the first step in your UX research strategy: the Lit Review should bring your UX research needs into focus. It is ideal when you don’t yet know the questions to ask, or when you want to know what you don’t know. Expect more focused questions to arise out of your initial Lit Review.

How to perform a Q&D Lit Review

A Q&D Lit Review follows the six basic steps of all Lit Reviews (Pare & Kisiou, 2017), but to save time and increase efficiency, steps 3, 4, 5, and 6 are done concurrently:

  1. Formulate your research question
  2. Search the literature
  3. Screen for the material you want to include
  4. Assess the quality of what you are including
  5. Extract the data
  6. Analyze the data


Figure 1. The Quick and Dirty Lit Review is structured for speed and efficiency. The six basic steps of the Narrative Review are condensed to shorten data-collection and coding time.

Formulate Your Research Question & Set-Up (15-20 min)

The first step in performing a Q&D Lit Review is to consider what you are researching and formulate a clear research question. This may seem trivial, but clearly formulating a research question will keep you focused and guide the rest of your actions (McCombes, 2020). At this stage your research question may be very broad. Some example questions from my own experience include:

  • What should I consider when designing a Log On screen?
  • How will the transition to WCAG 2.1 affect accessibility testing and accessible design?
  • How can I make Tableau as accessible as possible?
  • What is the best way to collect user feedback on a Drupal site page?

Often I find that the process of articulating the question yields keywords or additional sub-questions that I will use later. It also gives me a start on my inductive code set.

Note: To get an introduction to developing codes and coding qualitative data read Themes Don’t Just Emerge — Coding the Qualitative Data (Yi, 2018).

At this stage you must also set up your code book (the document where you ‘code’ your data). I like to use a table in Word because it’s easy to copy and paste into, it lets me add formatting (bold and bullets) to my text, and it still retains a tabular format that makes it easy to sort and filter codes or sources and reorganize data rows. At a minimum, your code book should have three columns: Codes, Data, and Source URL. You may add columns if you want primary and secondary codes, or if you want to easily track source type (i.e., journals, news, social media, interview, etc.) or the keywords you used to find the content.
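If you prefer a plain file to a Word table, the same three-column structure is easy to keep in a script. Below is a minimal sketch in Python; the field names, example codes, and URLs are purely illustrative, not part of the author’s method:

```python
import csv

# Minimal code book: one row per extracted passage, mirroring the
# three columns described above (Codes, Data, Source URL).
# The example codes and URLs are hypothetical.
code_book = [
    {"code": "password rules",
     "data": "Show password requirements inline, before the user submits the form.",
     "source_url": "https://example.com/login-design"},
    {"code": "error messaging",
     "data": "Prefer specific error messages where security policy allows.",
     "source_url": "https://example.com/auth-errors"},
]

# Write to CSV so rows can be sorted and filtered in any spreadsheet tool.
with open("code_book.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["code", "data", "source_url"])
    writer.writeheader()
    writer.writerows(code_book)
```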

Search the Literature (30-60 seconds per source)

Information can be acquired from any source: online magazines and journals, informal online posts, online training, development forums, social media, prior usability testing transcripts, impromptu interviews with colleagues or clients, office memos, competitor websites, etc. Printed material is also useful, but you may want to scan it to reduce keyboarding time, or be prepared to summarize the text. I have a shelf with a number of UX and software development books that I like to thumb through and extract ideas from before I begin my online search.

The broader your search, the more comprehensive your review will be, and more comprehensive equals more time. Don’t lose sight of the fact that this is supposed to be quick! If you’re short on time, limit yourself to 30 or 60 minutes. If you have more time, continue searching and reviewing sources until you see the core ideas and guidance repeating.

Screen, Assess, Extract, & Analyze (5-10 min per source)

For each article (or post, interview transcript, etc.) you find, skim for content relevant to your research question. As you see relevant ideas or concepts, copy and paste them into your code book. Your codes can be words or phrases, whatever helps you organize the information.

You can also add your own commentary to the cell. I annotate the data with my thoughts and questions as they occur, and I italicize that text so I can quickly review it later. My notes may lead me to search for additional information, or simply help me interpret the text and recall more valuable information.


Figure 2. Illustration of a code book used to answer the question “What should I consider when designing a Log On screen?” Other codes appearing in the book are also displayed.

Visuals are a major part of UX. If you see a great design pattern or illustration of ideas, take a screenshot and add it to an appendix below the table. Use image captions to briefly summarize its importance and capture the source URL.

As you cut, paste, and organize content you’ll start to see similarities between articles. You may see the same phrase or guidance repeated (sometimes often enough to suspect plagiarism). Occasionally you’ll see content that directly contradicts other guidance. This may cause you to review previous articles and re-examine their statements. You’ll find that you’re reading articles from a more analytical perspective than you would be if you were not coding the data.

As you add sources, continue to organize and re-order the code book so that similar ideas are grouped together. Create theme statements as they occur to you. Merge cells that contain very similar ideas, so that one theme represents ideas repeated by different sources. Combining screening, assessment and extraction with analysis as you read allows you to quickly synthesize and internalize the information.

If a source lacks valuable information, copy the URL to the bottom of your table and write a short sentence summarizing the article and why you did not extract information from it. Give it a code like “No Info” so you can sort these entries out. This captures the full breadth of your research effort. It may also prove useful if, as your research develops, you realize you overlooked something valuable and want to reread a source, or if a source has very basic information that later proves valuable to junior team members. It is also a useful way to keep yourself on task: if you’re not copying valuable information into your code book, you may not be reading the articles you should be reading and may be falling victim to distraction and clickbait. Keeping yourself honest is a good way to conserve and manage your time.
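Continuing the illustrative CSV from the earlier sketch, grouping rows by code makes the reorganization step mechanical, and a “No Info” filter keeps screened-but-empty sources out of the analysis while preserving the audit trail:

```python
from collections import defaultdict
import csv

# Load the code book written earlier and group passages by code,
# setting aside sources tagged "No Info" (kept for audit, not analysis).
with open("code_book.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

by_code = defaultdict(list)
no_info = []
for row in rows:
    target = no_info if row["code"] == "No Info" else by_code[row["code"]]
    target.append(row)

# Print similar ideas together so emerging themes are easy to spot.
for code in sorted(by_code):
    print(f"\n## {code} ({len(by_code[code])} excerpts)")
    for row in by_code[code]:
        print(f"- {row['data']}  [{row['source_url']}]")

print(f"\nScreened but not extracted: {len(no_info)} sources")
```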

Final Analysis & Report Out (5-60 min)

Once you’ve used the time you have, or once you start to see information repeating, it’s time to stop searching and start reviewing what you’ve collected. At this point, themes and high-level conclusions will be evident. Skim your entire code book to see if anything new jumps out when you look at the full data set. Occasionally, key guidance is not exciting enough to draw your attention, but when you see it repeated several times you realize its importance. Incorporate these late-stage thoughts into your theme statements and conclusions.


Figure 3. This sample code book for a Login page redesign resulted in a list of best practices, design heuristics, and common issues which helped drive requirements and design. It also facilitated a deep partnership with the security team to balance ease of access with data-security concerns. Total sources: 8. Research time: 2 hours.

Review all your themes, conclusions, and notes to ensure that they are written in a manner that is meaningful to others. Create full and complete thoughts that summarize what you’ve learned and relate it to actions, behaviors, or processes that can be performed to solve your research problem. This is important for several reasons. First, it forces you to think reflectively. Reflective thinking is critical to complex problem solving; it forces you to step back and consider how to solve a problem and how a set of problem-solving strategies can be leveraged to achieve a goal (University of Hawaii, n.d.). Second, much of the value of the Lit Review is in its ability to quickly transfer information to others; if your thoughts are not clear and instructive, you cannot transfer knowledge. Finally, projects may be delayed or compete with other priorities. If you must revisit a project in six months, or if you have to balance multiple projects, you want your research to remain meaningful to you.

When you do share your review, you may need to reorganize it so it tells a cohesive story for new readers. Depending on your audience, you can simply add a table of images to display the screenshots you’ve assembled. Or, if you plan to share your report with a client, you may want to convert your findings into a more narrative format and enter full citations for your sources.

As a beginner, expect to spend at least four hours to a day on your first Lit Review. Your reading speed will affect your time. (I took a course in speed reading years ago, which allows me to skim many articles and quickly make value judgments; I then slowly reread the material I believe has value for my research question.) It takes time to integrate valuable information from various sources, and you may need additional time to revisit and compare articles. If you are new to qualitative coding, expect a learning curve: it can be difficult to discern the right code set for your research problem if you are not a seasoned coder. Consider learning more about qualitative coding before you begin.

Top 10 Reasons & Tips for a Quick and Dirty Lit Review

You’re likely doing the research already.

To stay abreast of current design trends, technology innovations, and accessibility guidelines, it’s likely you already read a great many UX articles, attend conferences or trainings, and network with other UX professionals. In other words, you’re already reviewing the “literature”; you’re just not documenting it in a way that makes it useful to you. If you’ve ever found yourself thinking “Where did I see that?” or “What are the best practices?” in response to a design problem or question, then the structure of the Lit Review will help you.

Keep focused when researching online

We’ve all had the experience of reading an article online, then getting distracted by clickbait. Suddenly you’ve wasted an hour and have nothing to show for it. The Lit Review keeps you focused on drilling into a very specific topic: if you’re not cutting and pasting into the document, then you’re not reading relevant content and you have to move on.

Quickly identify patterns and contradictions

As you cut, paste, and organize content you’ll start to see similarities and contradictions between articles. This will cause you to review previous articles and re-examine their statements. You’ll find that you’re reading articles from a more analytical perspective.

Citations matter

When engaging with a client or a design or development team, disagreements are bound to arise. Your research will support your ideas and provide persuasive justification for design or process decisions. It’s not just you saying how it should be done; it’s coming from numerous well-respected professionals. Citing reputable sources will add to your own trust and credibility.

Stand on the shoulders of giants

Merriam-Webster defines an expert as “one with the special skill or knowledge representing mastery of a particular subject.” The Lit Review provides a broad understanding of the topic area and equips you with the relevant facts as well as access to the authoritative sources of those facts. That equates to mastery. Congratulations, you are now an expert.

Establish a custom heuristics set to evaluate your design

As you collect and organize your information you will begin to see patterns that define the attributes of good design. You and your team can use these as heuristics to inform your design process and to evaluate and usability test your prototypes.

Avoid the mistakes of others

People are eager to share what works and what doesn’t. With a handful of articles or informal interviews, you can assemble a quick list of potential pitfalls and then establish strategies to avoid them.

Save time in the long run

Uninspired design cycles, falling victim to common mistakes, and late stage rework are all costly and time consuming. Knowledge can be the competitive edge that distinguishes your product’s user experience from that of the competition and shortens overall development time.

Your colleagues will love you

By performing the research and distilling it down to the core themes and issues, you shorten the learning curve of your colleagues. You also increase their confidence in you.

Someone is paying you to learn new things!

The Lit Review is a great excuse to get inspired, expand your knowledge, and create a useful deliverable at the same time.

J Med Libr Assoc. (2018). It takes longer than you think: librarian time spent on systematic review tasks. Journal of the Medical Library Association (JMLA), 106(2), 198–207. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/29632442

Literature review. (2019). Retrieved January 2, 2019, from https://en.wikipedia.org/wiki/Literature_review

McCombes, S. (2020). Retrieved from https://www.scribbr.com/research-process/research-questions/

Pare, G., & Kisiou, S. (2017). Handbook of eHealth Evaluation: An Evidence-based Approach [Internet Ed.]. Victoria (BC): University of Victoria. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK481583/

The Writing Center. (n.d.). Literature Reviews. Retrieved from https://writingcenter.unc.edu/tips-and-tools/literature-reviews/

Uman, L. S. (2011). Systematic Reviews and Meta-Analyses. J Can Acad Child Adolesc Psychiatry, 20(1), 57–59.

University of Hawaii. (n.d.). Reflective Thinking: RT. Retrieved from http://www.hawaii.edu/intlrel/pols382/Reflective Thinking – UH/reflection.html

Venebio. (2017). 5 differences between a systematic review and other types of literature review. Retrieved January 2, 2019, from https://venebio.com/news/2017/09/5-differences-between-a-systematic-review-and-other-types-of-literature-review/

Yi, E. (2018). Themes Don’t Just Emerge — Coding the Qualitative Data. Medium, Project UX. Retrieved from https://medium.com/@projectux/themes-dont-just-emerge-coding-the-qualitative-data-95aff874fdce



UX Research on Conversational Human-AI Interaction: A Literature Review of the ACM Digital Library



The Complete Guide to UX Research Methods

UX research provides invaluable insight into product users and what they need and value. Not only will research reduce the risk of a miscalculated guess, it will uncover new opportunities for innovation.


By Miklos Philips

Miklos is a UX designer, product design strategist, author, and speaker with more than 18 years of experience in the design field.


“Empathy is at the heart of design. Without the understanding of what others see, feel, and experience, design is a pointless task.” —Tim Brown, CEO of the innovation and design firm IDEO

User experience (UX) design is the process of designing products that are useful, easy to use, and a pleasure to engage with. It’s about enhancing the entire experience people have while interacting with a product and making sure they find value, satisfaction, and delight. If a mountain peak represents that goal, employing the various types of UX research is the path UX designers use to reach the summit.

User experience research is one of the most misunderstood yet critical steps in UX design. Sometimes treated as an afterthought or an unaffordable luxury, UX research and user testing should inform every design decision.

Every product, service, or user interface designers create in the safety and comfort of their workplaces has to survive and prosper in the real world. Countless people will engage with our creations in an unpredictable environment over which designers have no control. UX research is the key to grounding ideas in reality and improving the odds of success, but research can be a scary word. It may sound like money we don’t have, time we can’t spare, and expertise we have to seek.

To do UX research effectively—to get a clear picture of what users think and why they do what they do, to “walk a mile in the user’s shoes,” as a favorite UX maxim goes—it is essential that user experience designers and product teams conduct user research often and regularly. Contingent on time, resources, and budget, the deeper they can dive the better.


What Is UX Research?

There is a long, comprehensive list of UX design research methods employed by user researchers, but at its center is the user and how they think and behave—their needs and motivations. Typically, UX research does this through observation techniques, task analysis, and other feedback methodologies.

There are two main types of user research: quantitative (statistics: can be calculated and computed; focuses on numbers and mathematical calculations) and qualitative (insights: concerned with descriptions, which can be observed but cannot be computed).

Quantitative research is used to quantify the problem by generating numerical data or data that can be transformed into usable statistics. Some common data-collection methods include various forms of surveys (online, paper, mobile, and kiosk surveys), longitudinal studies, website interceptors, online polls, and systematic observations.

This user research method may also include analytics, such as Google Analytics.

Google Analytics is part of a suite of interconnected tools that help interpret data on your site’s visitors, including Data Studio, a powerful data-visualization tool, and Google Optimize, for running and analyzing dynamic A/B testing.

Quantitative data from analytics platforms should ideally be balanced with qualitative insights gathered from other UX testing methods, such as focus groups or usability testing. The analytical data will show patterns that may be useful for deciding what assumptions to test further.

Qualitative user research is a direct assessment of behavior based on observation. It’s about understanding people’s beliefs and practices on their terms. It can involve several different methods including contextual observation, ethnographic studies, interviews, field studies, and moderated usability tests.


Jakob Nielsen of the Nielsen Norman Group argues that UX research should emphasize insights (qualitative research): although quantitative research has some advantages, qualitative research breaks down complicated information so it’s easy to understand and overall delivers better results more cost-effectively; in other words, it is much cheaper to find and fix problems during the design phase, before you start to build. Often the most important information is not quantifiable, and he goes on to suggest that “quantitative studies are often too narrow to be useful and are sometimes directly misleading.”

“Not everything that can be counted counts, and not everything that counts can be counted.” —William Bruce Cameron

Design research is not typical of traditional science; ethnography is its closest equivalent. Effective usability is contextual and depends on a broad understanding of human behavior if it is going to work.

Nevertheless, the types of user research you can or should perform will depend on the type of site, system or app you are developing, your timeline, and your environment.


Top UX Research Methods and When to Use Them

Here are some examples of the types of user research performed at each phase of a project.

Card Sorting: Allows users to group and sort a site’s information into a logical structure that will typically drive navigation and the site’s information architecture. This helps ensure that the site structure matches the way users think.

Contextual Interviews: Enables the observation of users in their natural environment, giving you a better understanding of the way users work.

First Click Testing: A testing method focused on navigation, which can be performed on a functioning website, a prototype, or a wireframe.

Focus Groups: Moderated discussion with a group of users, allowing insight into user attitudes, ideas, and desires.

Heuristic Evaluation/Expert Review: A group of usability experts evaluating a website against a list of established guidelines.

Interviews: One-on-one discussions with users show how a particular user works. They enable you to get detailed information about a user’s attitudes, desires, and experiences.

Parallel Design: A design methodology that involves several designers pursuing the same effort simultaneously but independently, with the intention to combine the best aspects of each for the ultimate solution.

Personas: The creation of a representative user based on available data and user interviews. Though the personal details of the persona may be fictional, the information used to create the user type is not.

Prototyping: Allows the design team to explore ideas before implementing them by creating a mock-up of the site. A prototype can range from a paper mock-up to interactive HTML pages.

Surveys: A series of questions asked to multiple users of your website that help you learn about the people who visit your site.

System Usability Scale (SUS): A technology-independent, ten-item scale for the subjective evaluation of a product’s usability (a scoring sketch follows this list).

Task Analysis: Involves learning about user goals, including what users want to do on your website, and helps you understand the tasks that users will perform on your site.

Usability Testing: Identifies user frustrations and problems with a site through one-on-one sessions where a “real-life” user performs tasks on the site being studied.

Use Cases: Provide a description of how users use a particular feature of your website. They provide a detailed look at how users interact with the site, including the steps users take to accomplish each task.
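The SUS scoring procedure itself is standard: odd-numbered items contribute their response minus 1, even-numbered items contribute 5 minus their response, and the sum is multiplied by 2.5 to yield a 0-100 score. Here is a minimal sketch of that arithmetic in Python (the function name and example responses are ours, for illustration only):

```python
def sus_score(responses):
    """Score one completed SUS questionnaire.

    responses: ten answers in question order, each on the 1-5
    agreement scale. Odd-numbered items are positively worded and
    contribute (response - 1); even-numbered items are negatively
    worded and contribute (5 - response). The sum (at most 40) is
    scaled by 2.5 to give a 0-100 usability score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Example: a fairly positive respondent scores 75.0.
print(sus_score([4, 2, 4, 2, 3, 2, 4, 1, 4, 2]))
```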


You can do user research at all stages, or at whatever stage you are in currently. However, the Nielsen Norman Group advises that most of it be done during the earlier phases, when it will have the biggest impact. They also suggest it’s a good idea to save some of your budget for additional research that may become necessary (or helpful) later in the project.

Here is a diagram listing recommended options that can be done as a project moves through the design stages. The process will vary, and may only include a few things on the list during each phase. The most frequently used methods are shown in bold.

UX research methodologies in the product and service design lifecycle.

Reasons for Doing UX Research

Here are three great reasons for doing user research:

To create a product that is truly relevant to users

  • If you don’t have a clear understanding of your users and their mental models, you have no way of knowing whether your design will be relevant. A design that is not relevant to its target audience will never be a success.

To create a product that is easy and pleasurable to use

  • A favorite quote from Steve Jobs: “If the user is having a problem, it’s our problem.” If your user experience is not optimal, chances are that people will move on to another product.

To have the return on investment (ROI) of user experience design validated and be able to show:

  • An improvement in performance and credibility
  • Increased exposure and sales—growth in customer base
  • A reduced burden on resources—more efficient work processes

Aside from the reasons mentioned above, doing user research gives insight into which features to prioritize, and in general, helps develop clarity around a project.


What Results Can I Expect from UX Research?

In the words of Mike Kuniavsky, user research is “the process of understanding the impact of design on an audience.”

User research has been essential to the success of behemoths like USAA and Amazon; Joe Gebbia, cofounder of Airbnb, is an enthusiastic proponent, testifying that its implementation helped turn things around for the company when it was floundering as an early startup.

Some of the results generated through UX research confirm that improving the usability of a site or app will:

  • Increase conversion rates
  • Increase sign-ups
  • Increase NPS (net promoter score)
  • Increase customer satisfaction
  • Increase purchase rates
  • Boost loyalty to the brand
  • Reduce customer service calls

Additionally, and aside from benefiting the overall user experience, the integration of UX research into the development process can:

  • Minimize development time
  • Reduce production costs
  • Uncover valuable insights about your audience
  • Give an in-depth view into users’ mental models, pain points, and goals

User research is at the core of every exceptional user experience. As the name suggests, UX is subjective—the experience that a person goes through while using a product. Therefore, it is necessary to understand the needs and goals of potential users, the context, and their tasks, which are unique to each product. By selecting appropriate UX research methods and applying them rigorously, designers can shape a product’s design and come up with products that serve both customers and businesses more effectively.


Understanding the basics

How do you do user research in UX?

UX research includes two main types: quantitative (statistical data) and qualitative (insights that can be observed but not computed), done through observation techniques, task analysis, and other feedback methodologies. The UX research methods used depend on the type of site, system, or app being developed.

What are UX methods?

There is a long list of methods employed by user researchers, but at the center is the user: how they think and behave, their needs and motivations. Typically, UX research does this through observation techniques, task analysis, and other feedback methodologies.

What is the best research methodology for user experience design?

The type of UX methodology depends on the type of site, system, or app being developed, its timeline, and its environment. There are two main types: quantitative (statistics) and qualitative (insights).

What does a UX researcher do?

A user researcher removes the need for false assumptions and guesswork by using observation techniques, task analysis, and other feedback methodologies to understand a user’s motivation, behavior, and needs.

Why is UX research important?

UX research will help create a product that is relevant to users and is easy and pleasurable to use while boosting a product’s ROI. Aside from these reasons, user research gives insight into which features to prioritize, and in general, helps develop clarity around a project.



Secondary Research in UX


February 20, 2022


You don’t have to do all the user-research work yourself. If somebody else already ran a study (and published it), grab it!

Have you ever completed a project only to find out that something very similar was already done in your organization a couple of years ago? That situation is common, especially with rising employee-churn rates, and it has fueled the popularity of research repositories (e.g., Microsoft Human Insights System) and the growth of the research-operations community. It should also inspire practitioners to do more secondary research.

Secondary research,  also known as desk research or, in academic contexts, literature review, refers to the act of gathering prior research findings and other relevant information related to a new project. It is a foundational part of any emerging research project and provides the project with background and context. Secondary research allows us to stand on the shoulders of giants and not to reinvent the wheel every time we initiate a new program or plan a study.

This article provides a step-by-step guide on how to conduct secondary research in UX. The key takeaway is that this type of research is not solely an intellectual exercise, but a way to minimize research costs, win internal stakeholders and get scaffolding for your own projects.

Academic publications include a literature review at the beginning to showcase context or known gaps and to justify the motivation for the research questions. However, the task of incorporating previous results is becoming more and more challenging with a growing number of publications in all fields. Therefore, practitioners across disciplines (for instance in eHealth, business, education, and technology) develop method guidelines for secondary research.  

When to Conduct Secondary Research

Secondary research should be a standard first step in any rigorous research practice, but it’s also often cost-effective in more casual settings. Whether you are just starting a new project, joining an existing one, or planning a primary research effort for your team, it is always good to start with a broad overview of the field and existent resources. That would allow you to synthesize findings and uncover areas where more research is needed. 

Secondary research shows which topics are particularly popular or important for your organization and what problems other researchers are trying to solve. This research method is widely discussed in library and information sciences but is often neglected in UX. Nonetheless, secondary research can be useful to uncover industry trends and to inspire further studies. For example, Jessica Pater and her colleagues looked at the foundational question of participant compensation in user studies. They could have opted for user interviews or a costly large-scale survey, yet through secondary research they were able to review 2,250 unique user studies across 1,662 manuscripts published in 2018–2019. They found inconsistencies in participant compensation and suggested changes to current practices as well as further research opportunities.

Types of Secondary Research

Secondary research can be divided into two main types: internal and external research.

Internal secondary research involves gathering all relevant research findings already available in your organization. These might include artifacts from past primary-research projects, maps (e.g., customer-journey maps, service blueprints), deliverables from external consultants, or results from different kinds of workshops (e.g., discovery, design thinking, etc.). Hopefully, these will be available in a research repository.

External secondary research is focused on sources outside of your organization, such as academic journals, public libraries, open-data repositories, internet searches, and white papers published by reputable organizations. For example, external resources for the field of human-computer interaction (HCI) can be found at the Association for Computing Machinery (ACM) Digital Library, the Journal of Usability Studies (JUS), or research websites like ours. University libraries and labs like UCSD Geisel Library, Carnegie Mellon University Libraries, MIT D-Lab, and Stanford d.school, as well as specialized portals like Google Scholar, offer another avenue for directed search.

How to Conduct Secondary Research

Our goal is to have the necessary depth, rigor, and usefulness for practitioners. Here are the four steps for conducting secondary research:

  1. Choose the topic of research and write a problem statement.

Write a concise description of the problem to be solved. For example, if you are doing a website redesign, you might want to both learn the current standards and look at all the previous design iterations to avoid issues that your team already identified.

  2. Identify external and internal resources.

Peer-reviewed publications (such as those published in academic journals and conference proceedings) are a fairly reliable source. They always include a section describing methods, data-collection techniques, and study limitations. If a study you plan to use does not include such information, that might be a red flag and a reason to scrutinize the source further. Public datasets also often present challenges because of errors and inclusion criteria, especially if they were collected for another purpose.

One should be cautious of seemingly reputable “research” findings published across different websites in the form of blog posts, which could be opinion pieces not backed up by primary research. If you encounter such a piece, ask yourself: is the conclusion of the writeup based on a real study? If the study was quantitative, was it properly analyzed (e.g., at the very least, are confidence intervals reported, and was statistical significance evaluated)? For all studies, was the method sound and nonbiased (e.g., did the study have internal and external validity)? A quick way to sanity-check a reported percentage appears in the sketch after these steps.

A more nuanced challenge involves evaluating findings based on a different audience; these might not always generalize to your situation but may form hypotheses worth investigating. For example, if a design pattern is found to work well for young adults, you may still want to know whether the finding also holds for older generations.

  3. Collect and analyze data from external and internal resources.

Remember that secondary research involves both existing data and existing research. Both categories become helpful resources when they are critically evaluated for inherent biases, omissions, and limitations. If you already have some secondary data in your organization, such as customer-service logs or search logs, include them in your secondary research alongside any existing analyses of those logs and previous reports. It is helpful to revisit previous findings and compare how they have or have not been implemented, to refresh institutional memory and support future research initiatives.

  4. Refine your problem statement and determine what still needs to be investigated.

Once you have collected the relevant information, write a summary of findings and discuss them with your team. You might need to refine your problem statement to determine what information you still need to answer your research questions. Next time your team plans to adopt a trendy new design pattern, it may be a good idea to go back and search the web or an academic database for any evaluations of that pattern.
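As a concrete companion to the quantitative-quality check in step 2, here is a minimal sketch that computes an approximate 95% confidence interval for a reported proportion (say, a task-success rate). It uses the adjusted-Wald (Agresti-Coull) method, which is often recommended for the small samples typical of usability studies; the function name and numbers are ours, for illustration only:

```python
import math

def adjusted_wald_ci(successes, trials, z=1.96):
    """Approximate 95% confidence interval for a proportion.

    Uses the adjusted-Wald (Agresti-Coull) method: add z^2/2
    successes and z^2 trials before applying the normal
    approximation. This behaves better than the plain Wald
    interval at small sample sizes.
    """
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Illustrative example: 7 of 10 participants completed the task.
low, high = adjusted_wald_ci(7, 10)
print(f"Observed 70%; 95% CI roughly {low:.0%} to {high:.0%}")
# An interval this wide is a reason to treat a blog post's
# "70% success" claim cautiously.
```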

It is important to note that secondary research is not a substitute for primary research. It is always better to do both. Although secondary research is often cost-effective and quick, its quality depends to a large extent on the quality of your sources. Therefore, before using any secondary sources, you need to identify their validity and limitations. 

Secondary (or desk) research involves gathering existing data from inside and outside of your organization. A literature review should be done more frequently in UX because it is a viable option even for researchers with limited time and budget. The most challenging part is persuading yourself and your team that the existing data is worth summarizing, comparing, and collating to increase the overall effectiveness of your primary research.

Jessica Pater, Amanda Coupe, Rachel Pfafman, Chanda Phelan, Tammy Toscos, and Maia Jacobs. 2021. Standardizing Reporting of Participant Compensation in HCI: A Systematic Literature Review and Recommendations for the Field. In  Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.  Association for Computing Machinery, New York, NY, USA, Article 141, 1–16. https://doi.org/10.1145/3411764.3445734

Hannah Snyder. 2019. Literature review as a research methodology: An overview and guidelines.  Journal of business research  104, 333-339. DOI: https://doi.org/10.1016/j.jbusres.2019.07.039. 


Anthropology to UX Logo

  • Career Coaching

Literature Review


A literature review is a summary and evaluation of the existing research on a particular topic. In UX, a literature review can help UX researchers and designers understand the current state of knowledge on a topic and to identify gaps or areas for further research.

A literature review typically involves searching for research materials on a specific topic, such as user behavior or design principles. The search can be conducted using databases, search engines, or other sources of research materials. Once the research materials have been identified, they are reviewed and summarized, and their quality and relevance are evaluated.

A literature review can provide several benefits for UX. First, it can help UX teams gain a better understanding of the existing research on a topic and identify key themes, trends, and gaps in the literature. This can be useful for identifying areas where further research is needed or for informing the design of a product or service.

Second, a literature review can help to identify the most relevant and reliable research materials on a topic. This can be useful for UX researchers and designers looking for evidence or guidance on a specific design problem or who want to avoid repeating research that has already been done.

Third, a literature review can help to contextualize a UX project within the broader field of UX research. It can provide a basis for comparing and contrasting a UX project with other research, and it can help to establish the contribution of the project to the existing body of knowledge.


Measurement Practices in UX Research: A Systematic Quantitative Literature Review

Authors: Sebastian A. C. Perrig, Lena Fanya Aeschbach, Nicolas Scharowski, Nick von Felten, and Florian Brühlmann


Description: User experience research relies heavily on survey scales to measure users' subjective experiences with technology. However, repeatedly raised concerns regarding the improper use of survey scales in UX research and adjacent fields call for a systematic review of current measurement practices. Therefore, we conducted a systematic literature review, screening 153 papers from four years of the ACM Conference on Human Factors in Computing Systems proceedings, of which 60 were eligible empirical studies using survey scales to study users' experiences. We identified 85 different scales and 172 distinct constructs measured. Most scales were used once (70.59%), and most constructs were measured only once (66.28%). Furthermore, results show that papers rarely contained complete rationales for scale selection (20.00%) and seldom provided all scale items used (30.00%). More than a third of all scales were adapted (34.19%), while only one-third of papers reported any scale quality investigation (36.67%). On the basis of our results, we highlight questionable measurement practices in UX research and suggest opportunities to improve scale use for UX-related constructs. Additionally, we provide recommendations to promote improved rigor in following best practices for scale-based UX research.


A Complete Guide to Primary and Secondary Research in UX Design


To succeed in UX design, you must know what UX research methods to use for your projects.

This impacts how you:

  • Understand and meet user needs
  • Execute strategic and business-driven solutions
  • Differentiate yourself from other designers
  • Be more efficient in your resources
  • Innovate within your market

Primary and secondary research methods are crucial to uncovering this. The former gathers firsthand data directly from sources, while the latter synthesizes existing data and translates it into insights and recommendations.

Let's dive deep into each type of research method and its role in UX research.

If you are still hungry to learn more, specifically how to apply this practically in the real world, you should check out Michael Wong's UX research course. He teaches the exact process and tactics he used to build a UX agency that generated over $10M in revenue.

What is primary research in UX design

Primary UX research gathers data directly from the users to understand their needs, behaviors, and preferences.

It's done through interviews, surveys, and observing users as they interact with a product.

Primary research in UX: When and why to use it

Primary research typically happens at the start of a UX project so that the design process is grounded in a deep understanding of user needs and behaviors.

By collecting firsthand information early on, teams can tailor their designs to address real user problems.

Here are the reasons why primary research is important in UX design: ‍

1. It fast-tracks your industry understanding

Your knowledge about the industry may be limited at the start of the project. Primary research helps you get up to speed because you interact directly with real customers, which allows you to work more effectively.

Example: Imagine you're designing an app for coffee lovers, but you're not a coffee drinker yourself. Through user interviews, you learn how users prefer to order their favorite drink, what they love or hate about existing coffee apps, and which "wishlist" features they want.

This crucial information will guide you on what to focus on in later stages when you do the actual designing. ‍

2. You'll gain clarity and fill knowledge gaps

There are always areas we know less about than we'd like. Primary research helps fill these gaps by observing user preferences and needs directly.

Example: Let's say you're working on a website for online learning. You might assume that users prefer video lessons over written content, but your survey results show that many users prefer written material because they can learn at their own pace.

With that in mind, you'll prioritize creating user-friendly design layouts for written lessons. ‍

3. You get to test and validate any uncertainties

When unsure about a feature, design direction, or user preference, primary research allows you to test these elements with real users.

This validation process helps you confidently move forward since you have data backing your decisions.

Example: You're designing a fitness app and can't decide between a gamified experience (with points and levels) or a more straightforward tracking system.

By prototyping both options and testing them with a group of users, you discover that the gamified experience concept resonates more.

Users are more motivated when they gain points and progress through levels. As a result, you pivot to designing the gamified experience.

Types of primary research methods in UX design

Here's a detailed look at common primary research methods in UX:

1. User interviews

  • What is it: User interviews involve one-on-one conversations with users to gather detailed insights, opinions, and feedback about their experiences with a product or service.
  • Best used for: Gathering qualitative insights on user needs, motivations, and pain points.
  • Tools: Zoom and Google Meet for remote interviews; Calendly for scheduling; Otter.ai for transcription. ‍
2. Surveys

  • What is it: Surveys are structured questionnaires designed to collect quantitative data on user preferences, behaviors, and demographics.
  • Best used for: Collecting data from many users to identify patterns and trends.
  • Tools: Google Forms, SurveyMonkey, and Typeform for survey creation; Google Sheets and Notion for note taking. ‍

3. Usability testing

  • What is it: Usability testing involves observing users interact with a prototype or the actual product to identify usability issues and areas for improvement.
  • Best used for: Identifying and addressing usability problems.
  • Tools: FigJam, Lookback.io , UserTesting, Hotjar for conducting and recording sessions; InVision, Figma for prototype testing; Google Sheets to log usability issues and track task completion rates. ‍

4. Contextual inquiry

  • What is it: This method involves observing and interviewing users in their natural environment to understand how they use a product in real-life situations.
  • Best used for: Gaining deep insights into user behavior and the context in which a product is used.
  • Tools: GoPro or other wearable cameras for in-field recording; Evernote for note-taking; Miro for organizing insights. ‍

5. Card sorting

  • What is it: Card sorting is a method in which users organize and categorize content or information.
  • Best used for: Designing or evaluating the information architecture of a website or application.
  • Tools: FigJam, Optimal Workshop, UXPin, and Trello for digital card sorting; Mural for collaborative sorting sessions. ‍

6. Focus groups

  • What is it: Group discussions with users that explore their perceptions, attitudes, and opinions about a product.
  • Best used for: Gathering various user opinions and ideas in an interactive setting.
  • Tools: Zoom, Microsoft Teams for remote focus groups; Menti or Slido for real-time polling and feedback. ‍

7. Diary studies

  • What is it: A method where users record their experiences, thoughts, and frustrations while interacting with a product over a certain period of time.
  • Best used for: Understanding long-term user behavior, habits, and needs.
  • Tools: Dscout, ExperienceFellow for mobile diary entries; Google Docs for simple text entries. ‍

8. Prototype testing

  • What is it: Prototype testing has users evaluate the usability and design of early product prototypes.
  • Best used for: Identifying usability issues and gathering feedback on design concepts
  • Tools: Figma for creating and sharing prototypes; Maze for unmoderated testing and analytics. ‍

9. Eye-tracking

  • What is it: A method that analyzes where and how long users look at different areas on a screen.
  • Best used for: Understanding user attention, readability, and visual hierarchy effectiveness.
  • Tools: Tobii, iMotions for hardware; Crazy Egg for website heatmaps as a simpler alternative. ‍

10. A/B testing

  • What is it: A/B testing compares two or more versions of a webpage or app feature to determine which performs better in achieving specific goals.
  • Best used for: Making data-driven decisions on design elements that impact user behavior.
  • Tools: Optimizely, Google Optimize for web-based A/B testing; VWO for more in-depth analysis and segmentation. A minimal significance-test sketch follows this list.

11. Field studies

  • What is it: Research done in real-world settings to observe and analyze user behavior and interactions in their natural environment.
  • Best used for: Gaining insights into how products are used in real-world contexts and identifying unmet user needs.
  • Tools: Notability, OneNote for note-taking; Voice Memos for audio recording; Trello for organizing observations. ‍

12. Think-aloud protocols

  • What is it: A method in which users verbalize their thought process while interacting with a product. It helps uncover their decision-making process and pain points.
  • Best used for: Understanding user reasoning, expectations, and experiences when using the product.
  • Tools: UsabilityHub, Morae for recording think-aloud sessions; Zoom for remote testing with screen sharing.
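As promised in the A/B testing item above, here is a minimal sketch (the conversion counts are hypothetical) of checking whether the difference between two variants is statistically meaningful before acting on it.

```python
from scipy.stats import chi2_contingency

# Hypothetical results: variant A converted 120 of 2,400 visitors,
# variant B converted 165 of 2,380.
table = [
    [120, 2400 - 120],  # variant A: [converted, not converted]
    [165, 2380 - 165],  # variant B
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between variants is unlikely to be chance alone.")
else:
    print("No reliable difference; keep testing or rethink the change.")
```

Most A/B testing tools run a test like this for you, but knowing what happens under the hood helps you sanity-check a dashboard's "winner" banner.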

Challenges of primary research in UX

Here are the obstacles that UX professionals may face with primary research:

  • Time-consuming: Primary research requires significant time for planning, conducting, and analysis, particularly for methods that involve a lot of user interaction.
  • Resource intensive: A considerable amount of resources is needed, including specialized tools or skills for data collection and analysis.
  • Recruitment difficulties: Finding and recruiting suitable participants willing to put in the effort can be challenging and costly.
  • Bias and validity: The risk of bias in collecting and interpreting data highlights the importance of carefully designing the research strategy so that findings are accurate and reliable.

What is secondary research in UX design

Once primary research is conducted, secondary research analyzes and converts that data into insights. Researchers may also identify common themes and ideas and turn them into meaningful recommendations.

Using journey maps, personas, and affinity diagrams can help the team better understand the problem.

Secondary research also involves reviewing existing research, published books, articles, studies, and online information. This includes competitor websites and online analytics to support design ideas and concepts. ‍

Secondary research in UX: Knowing when and why to use it

Secondary research is a flexible method in the design process. It fits in both before and after primary research.

At the project's start, looking at existing research and what's already known can help shape your design strategy. This groundwork helps you understand the design project in a broader context.

After completing your primary research, secondary research comes into play again. This time, it's about synthesizing your findings and forming insights or recommendations for your stakeholders.

Here's why it's important in your design projects:

1. It gives you a deeper understanding of your existing research

Secondary research synthesizes your primary research findings to identify common themes and patterns. This allows for a more informed approach and uncovers opportunities in your design process.

Example: When creating personas or proto-personas for a fitness app, you might find common desires for personalized workout plans and motivational features.

This data shapes personas like "Fitness-focused Fiona," a detailed profile that embodies a segment of your audience with her own set of demographics, fitness objectives, challenges, and likes. ‍

2. Learn more about competitors

Secondary research in UX is also about leveraging existing data about the user landscape and competitors.

This may include conducting a competitor or SWOT analysis so that your design decisions are not just based on isolated findings but are guided by a comprehensive overview. This highlights opportunities for differentiation and innovation.

Example: Suppose you're designing a budgeting app for a startup. You can check Crunchbase, an online database of startup information, to learn about your competitors' strengths and weaknesses.

If your competitor analysis reveals that all major budgeting apps lack personalized advice features, this shows an opportunity for yours to stand out by offering customized budgeting tips and financial guidance. ‍

Types of secondary research methods in UX

1. Competitive analysis

  • What is it: Competitive analysis involves systematically comparing your product with its competitors in the market. It's a strategic tool that helps identify where your product stands relative to the competition and what unique value proposition it can offer.
  • Best used for: Identifying gaps in the market that your product can fill, understanding user expectations by analyzing what works well in existing products, and pinpointing areas for improvement in your own product.
  • Tools: Google Sheets to organize and visualize your findings; Crunchbase and SimilarWeb to look into competitor performance and market positioning; and UserVoice to get insights into what users say about your competitors.

2. Affinity mapping

  • What is it: A collaborative sorting technique used to organize large sets of information into groups based on their natural relationships.
  • Best used for: Grouping insights from user research, brainstorming sessions, or feedback to identify patterns, themes, and priorities. It helps make sense of qualitative data, such as user interview transcripts, survey responses, or usability test observations.
  • Tools: Miro and FigJam for remote affinity mapping sessions.

3. Customer journey mapping

  • What is it: The process of creating a visual representation of the customer's experience with a product or service over time and across different touchpoints.
  • Best used for: Visualizing the user's path from initial engagement through various interactions to the final goal.
  • Tools: FigJam and Google Sheets for collaborative journey mapping efforts.

4. Literature and academic review

  • What is it: This involves examining existing scholarly articles, books, and other academic publications relevant to your design project. The goal is to deeply understand your project's theoretical foundations, past research findings, and emerging trends.
  • Best used for: Establishing a solid theoretical framework for your design decisions. A literature review can uncover insights into user behavior and design principles that inform your design strategy.
  • Tools: Academic databases like Google Scholar, JSTOR, and specific UX/UI research databases. Reference management tools like Zotero and Mendeley can help organize your sources and streamline the review process.
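Beyond the databases listed above, open scholarly APIs can speed up the search step of a literature review. As a hedged illustration, the sketch below queries Crossref's public works endpoint; the query string and the fields printed are assumptions to adapt, not a fixed recipe.

```python
import requests

# Crossref's public works endpoint; no API key is required.
resp = requests.get(
    "https://api.crossref.org/works",
    params={"query": "user experience design pattern evaluation", "rows": 5},
    timeout=10,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    title = item.get("title", ["(untitled)"])[0]
    doi = item.get("DOI", "n/a")
    issued = item.get("issued", {}).get("date-parts", [[None]])
    print(f"{issued[0][0]} | {title} | https://doi.org/{doi}")
```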

Challenges of secondary research in UX design

These are the challenges that UX professionals might encounter when carrying out secondary research:

  • Outdated information: In a world where technology changes fast, the information you use must be current, or it might not be helpful.
  • Challenges with pre-existing data: Using data you didn't collect yourself can be tricky because you have less control over its quality. Always review how it was gathered to avoid mistakes.
  • Data isn't just yours: Since secondary data is available to everyone, you won't be the only one using it. This means your competitors can access similar findings or insights.
  • Trustworthiness: Look into where your information comes from so that it's reliable. Watch out for any bias in the data as well.

The mixed-method approach: How primary and secondary research work together

Primary research lays the groundwork, while secondary research weaves a cohesive story and connects the findings to create a concrete design strategy.

Here's how this mixed-method approach works in a sample UX project for a health tech app:

Phase 1: Groundwork and contextualization

  • User interviews and surveys (Primary research) : The team started their project by interviewing patients and healthcare providers. The objective was to uncover the main issues with current health apps and what features could enhance patient care.
  • Industry and academic literature review (Secondary research) : The team also reviewed existing literature on digital health interventions, industry reports on health app trends, and case studies on successful health apps. ‍

Phase 2: Analysis and strategy formulation

  • Affinity mapping (Secondary research) : Insights from the interviews and surveys were organized using affinity mapping. It revealed key pain points like needing more personalized and interactive care plans.
  • Competitive benchmarking (Secondary research) : The team also analyzed competitors’ apps through secondary research to identify common functionalities and gaps. They noticed a lack of personalized patient engagement and, therefore, positioned their app to fill this void in the market. ‍

Phase 3: Design and validation

  • Prototyping : With a good grasp of what users needed and the opportunities in the market, the startup created prototypes. These prototypes included AI-powered personalized care plans, reminders for medications, and interactive tools to track health.
  • Usability testing (Primary research) : The prototypes were tested with a sample of the target user group, including patients and healthcare providers. Feedback was mostly positive, especially for the personalized care plans. This shows that the app has the potential to help patients get more involved in their health. ‍

Phase 4: Refinement and market alignment

  • Improving design through iterations: The team continuously refined the app's design based on feedback from ongoing usability testing.
  • Ongoing market review (Secondary research) : The team watched for new studies, healthcare reports, and competitors' actions. This helped them make sure their app stayed ahead in digital health innovation. ‍

Amplify your design impact and impress your stakeholders in 10+ hours

Primary and secondary research methods are part of a much larger puzzle in UX research.

However, understanding the theoretical part is not enough to make it as a UX designer nowadays.

The reason?

UX design is highly practical and constantly evolving. To succeed in the field, UX designers must do more than just design.

They must understand the bigger picture and know how to deliver business-driven design solutions rather than designs that merely look pretty.

Sometimes, the best knowledge comes from those who have been there themselves. That's why finding the right mentor, one with real experience who can give practical advice, is crucial.

In just 10+ hours, the Practical UX Research & Strategy Course dives deep into strategic problem-solving. By the end, you'll know exactly how to make data-backed solutions your stakeholders will get on board with.

Master the end-to-end UX research workflow, from formulating the right user questions to executing your research strategy and effectively presenting your findings to stakeholders.

Learn straight from Mizko—a seasoned industry leader with a track record as a successful designer, $10M+ former agency owner, and advisor for tech startups.

This course equips you with the skills to:

  • Derive actionable insights through objective-driven questions.
  • Conduct unbiased, structured interviews.
  • Select ideal participants for quality data.
  • Create affinity maps from research insights.
  • Execute competitor analysis with expertise.
  • Analyze large data sets and user insights systematically.
  • Transform research and data into actionable frameworks and customer journey maps.
  • Communicate findings effectively and prioritize tasks for your team.
  • Present metrics and objectives that resonate with stakeholders.

Designed for flexible, independent learning, this course allows you to progress at your own pace.

With 4000+ designers from top tech companies like Google, Meta, and Squarespace among its alumni, this course empowers UX designers to integrate research skills into their design practices.

Here's what students have to say about the 4.9/5 rated course:

"I'm 100% more confident when talking to stakeholders about User Research & Strategy and the importance of why it needs to be included in the process. I also have gained such a beautiful new understanding of my users that greatly influences my designs. All of the "guesswork" that I was doing is now real, meaningful work that has stats and research behind it." - Booking.com Product Designer Alyssa Durante

"I had no proper clarity of how to conduct a research in a systematically form which actually aligns to the project. Now I have a Step by Step approach from ground 0 to final synthesis." - UX/UI Designer Kaustav Das Biswas

"The most impactful element has been the direct application of the learnings in my recent projects at Amazon. Integrating the insights gained from the course into two significant projects yielded outstanding results, significantly influencing both my career and personal growth. This hands-on experience not only enhanced my proficiency in implementing UX strategies but also bolstered my confidence in guiding, coaching, mentoring, and leading design teams." - Amazon.com UX designer Zohdi Rizvi

Gain expert UX research skills and outshine your competitors.


Mizko, also known as Michael Wong, brings a 14-year track record as a Founder, Educator, Investor, and Designer. His career evolved from lead designer to freelancer, and ultimately to the owner of a successful agency, generating over $10M in revenue from Product (UX/UI) Design, Web Design, and No-code Development. His leadership at the agency contributed to the strategy and design for over 50 high-growth startups, aiding them in raising a combined total of over $400M in venture capital.

Notable projects include: Autotrader (acquired by eBay), PhoneWagon (acquired by CallRail), Spaceship ($1B in managed funds), Archistar ($15M+ raised), and many more.


Connecting With Users: Applying Principles Of Communication To UX Research

By Victor Yocco

Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.

In this article, I’ll focus on UX research. Communication is a core component of UX research, as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.

Fundamentals of Communication Theory

Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are transmitted, encoded, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms in enhancing the communication process.

In this article, I will focus on the Transactional Model of Communication . There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.

The Transactional Model of Communication is a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll use this model and understanding when applying principles from it to UX research. You’ll find that much of what is covered in the Transactional Model also falls under general best practices for UX research, suggesting that even if we aren’t communications experts, much of what we should be doing is supported by research in this field.

Understanding the Transactional Model

Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:

  • Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
  • Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
  • Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
  • Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
  • Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
  • Feedback: The receiver’s response to the message is called feedback. For example, the answers given by a user during an interview, the data collected from a completed survey, or the physical reactions of a usability-testing participant while completing a task are all types of feedback.

Applying the Transactional Model of Communication to Preparing for UX Research

We can become complacent or feel rushed when creating our research protocols. I think this is natural given the pace of many workplaces and our need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Applying the Transactional Model of Communication to your preparation should:

  • Improve Clarity: The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
  • Minimize Misunderstanding: By highlighting potential noise sources, user confusion or misunderstandings can be better anticipated and mitigated.
  • Enhance Participant Engagement: With your attentive eye on feedback, participants are likely to feel valued, increasing active involvement and the quality of input.

You can address the specific elements of the Transactional Model through the following steps while preparing for research:

Defining the Sender and Receiver

In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participant in advance to prepare yourself for building a rapport.

For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.

Message Creation

The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.

Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect; asking them what “tools” they use to complete their tasks may yield an answer that reflects not digital tools found on a computer or smartphone, but physical tools like pipes and wrenches.

Choosing the Right Channel

The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.

Minimizing Noise

Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.

Facilitating Feedback

You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures they understand and feel heard. This could range from asking them to ‘think aloud’ as they perform tasks to encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for it in your analysis and reporting.

Track Your Alignment to the Framework

You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.

You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.

Applying the Transactional Model to Common UX Research Methods

UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.

Interviews

Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can remind us to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.

Applying the Transactional Model to conducting interviews means we should account for a number of factors to allow for high-quality communication. Note how the following overlap with what we typically call best practices.

Asking Open-ended Questions

To truly harness a two-way flow of communication, open-ended questions, rather than close-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?” This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.

Actively Listening

As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques , such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.

Being Responsive

Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.

We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews. This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.

For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.

Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.

Strategizing the Flow

Researchers should build strategic thinking into their interviews to support the Transactional Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.

A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.

While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests we should give deeper consideration to factors we tend to neglect when we become overly comfortable with interviewing: context, power dynamics, and post-interview actions.

Context Considerations

You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, as well as the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel through which you are conducting the interview.

For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.

Power Dynamics

You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and addressing any potential concerns about bias a participant shares.

We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.

Post-Interview Actions

You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:

  • Debriefing: Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
  • Short surveys: Send a brief survey via email or an online platform to gather feedback on the interview experience.
  • Follow-up calls: Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
  • Thank-you emails: Include a “feedback” section in your thank-you email, encouraging participants to share their thoughts on the interview.

You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.

As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.

We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.

By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.

Surveys

The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.

Like other methods, surveys involve a sender (the researcher who creates the instructions and questionnaire), a message (the survey itself, including any instructions, disclaimers, and consent forms), a channel (how the survey is administered, e.g., online, in person, or pen and paper), a receiver (the participant), noise (potential misunderstandings or distractions), and feedback (the responses).

Designing the Survey

Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.

Crafting Questions

The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.

For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question, such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”

Reducing Noise

Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense in reducing noise is to make sure you are sampling from the appropriate population you want to conduct the research with. You need to use a screener that will filter out non-viable participants prior to including them in the survey. You do this when you correctly identify the characteristics of the population you want to sample from and then exclude those falling outside of those parameters.

Additionally, you should focus on prioritizing finding participants through random sampling from the population of potential participants versus using a convenience sample, as this helps to ensure you are collecting reliable data.

When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.

For example, question statements like “Do you use and like this feature?” can confuse respondents because they are actually two separate questions: do you use the feature, and do you like the feature? You should separate out questions like this into more than one question.

You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.

The Transactional Model suggests that active participation is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. You should consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.

This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with “2” might not be paying adequate attention to the responses they are providing, and you’d want to look closer at their responses, eliminating them from your analysis if deemed appropriate.
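Here is a hedged sketch of how a screener and an attention check might be applied at analysis time; the file and column names (survey_responses.csv, uses_mobile_banking, attention_check) are hypothetical and would mirror your own questionnaire.

```python
import pandas as pd

# Hypothetical survey export; column names are illustrative.
responses = pd.read_csv("survey_responses.csv")

# Screener: keep only participants from the population we intended to sample,
# e.g., people who actually use mobile banking.
viable = responses[responses["uses_mobile_banking"] == "yes"]

# Attention check: "What is the answer to 1+1?" Anyone who answered anything
# other than 2 gets flagged for closer review before analysis.
flagged = viable[viable["attention_check"] != 2]
clean = viable[viable["attention_check"] == 2]

print(f"{len(responses)} responses collected")
print(f"{len(viable)} passed the screener")
print(f"{len(flagged)} flagged by the attention check")
```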

Encouraging Feedback

While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.

You should be mindful that too many open-ended questions can cause fatigue, so you should limit their number. I recommend two to three open-ended questions, depending on the length of your overall survey.

Post-Survey Actions

After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.

You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.

Applying the Transactional Model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit allows you to make sure you are thoughtfully applying these principles from the field of communication.

Usability Testing

Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, the Transactional Model of Communication’s application opens a pathway for a richer understanding of the user experience by positioning both the user and the researcher as sender and receiver of communication simultaneously.

Here are some ways a researcher can use elements of the Transactional Model during usability testing:

Task Assignment as Message Sending

When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.

You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.

Observing and Listening as Message Receiving

As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask the user to explain the cues you notice, which in turn gives them feedback that their communication is being received.

Real-time Interaction

The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).

In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.

The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.

One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.

Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.

You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.

We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.

Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model

The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.

The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.

Understanding Context

Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.

Deciphering Noise

In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must be aware of recurring themes or frequently highlighted issues during analysis. Noise, in this context, could involve patterns of confusion, misunderstandings, or problems consistently highlighted by users. You need to account for this, as in the earlier example where participants repeatedly flagged the incorrect totals on static wireframes.

Considering Sender-Receiver Dynamics

Remember that as a UX researcher, your interpretation of user responses will be influenced by your understandings, biases, or preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize any subjective influence and ensure the analysis remains centered on the user’s perspective. You can ask other researchers to double-check your work to attempt to account for bias.

For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.
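One lightweight way to make this cross-checking concrete is to quantify how often two researchers assign the same theme to the same data with an inter-rater agreement statistic such as Cohen’s kappa. Below is a minimal sketch, assuming two researchers have independently coded one theme per participant quote; the theme labels and data are hypothetical:

```python
# Minimal sketch: quantifying agreement between two researchers who have
# independently coded the same participant quotes. Labels are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Each element is the theme one researcher assigned to a given quote.
researcher_a = ["needs_guidance", "trust", "needs_guidance", "navigation", "trust"]
researcher_b = ["needs_guidance", "trust", "navigation", "navigation", "trust"]

kappa = cohen_kappa_score(researcher_a, researcher_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above ~0.6 are often read as substantial agreement
```

A low score isn’t a verdict; it’s exactly the prompt for the conversation about differing perspectives described above.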

Reporting Results

Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:

  • Identify Stakeholder Groups: Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
  • Prioritize Information: Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
  • Adapt Communication Style: Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.

Acknowledging Feedback

Respecting the Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways. You can reconnect with users to show them updated designs and ask questions to check whether the issues you attempted to resolve have in fact been resolved.

Another way to address this without having to reconnect with the users is to create a spreadsheet or other document to track all the recommendations that were made and reconcile the changes with what is then updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future updates. This acknowledges that users were heard and that an attempt to address their pain points will be documented.
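The tracking document itself can be very simple. Here is a minimal sketch in Python; the column names and the sample row are hypothetical placeholders, so adapt them to your own study:

```python
# Minimal sketch of a recommendation-reconciliation tracker.
# Column names and the sample row are hypothetical placeholders.
import csv

columns = ["Recommendation", "Source (participant/session)",
           "Design change made", "Roadmap item", "Status"]
rows = [
    ["Fix balance totals on the overview screen", "P3, usability session 2",
     "Totals now computed from live account data", "Release 2.1", "Resolved"],
]

with open("recommendation_tracker.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)
```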

Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.

By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.

Tracking Your Application of the Transactional Model to Your Practice

You might find it useful to track how you align your research planning and execution to the framework of the Transactional Model. I’ve created a spreadsheet outlining key factors of the model and have used it for some of my work. Below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed the spreadsheet while planning and conducting the interviews, and the anonymized data shows how you might populate a similar spreadsheet with your own information.

You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.

The tracker is organized by stage; each entry lists a spreadsheet column, what to record in it, and an example from the banking study.

Stage: Pre-Interview Planning

  • Topic/Question (aligned with research goals): Identify the research question and design questions that encourage open-ended responses and co-construction of meaning. Example: Testing the mobile banking app’s bill payment feature. How do you set up a new payee? How would you make a payment? What are your overall impressions?
  • Participant Context: Note relevant demographic and personal information to tailor questions and avoid biased assumptions. Example: 35-year-old working professional, frequent user of the online banking and mobile application but unfamiliar with using the app for bill pay.
  • Engagement Strategies: Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport. Example: Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more to me what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details”).
  • Shared Understanding: List potential challenges to understanding the participant’s perspective and strategies for ensuring shared meaning. Example: Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page.

Stage: During Interview

  • Verbal Cues: Track the participant’s language choices, including metaphors, pauses, and emotional expressions. Example: Participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault].
  • Nonverbal Cues: Note the participant’s nonverbal communication, like body language, facial expressions, and eye contact. Example: Frowning and crossed arms when discussing specific pain points.
  • Researcher Reflexivity: Record moments where your own biases or assumptions might influence the interview, and potential mitigation strategies. Example: Recognized my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions.
  • Power Dynamics: Identify instances where power differentials emerge and actions taken to address them. Example: Participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback.
  • Unplanned Questions: List unplanned questions prompted by the participant’s responses that deepen understanding. Example: What alternative [non-bank app] methods do you use for paying bills? (Prompted by the participant’s frustration with the app’s bill pay.)

Stage: Post-Interview Reflection

  • Meaning Co-construction: Analyze how both parties contributed to building shared meaning and insights. Example: Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well.
  • Openness and Flexibility: Evaluate how well you adapted to unexpected responses and maintained an open conversation. Example: Adapted questioning based on the participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised.
  • Participant Feedback: Record any feedback received from participants regarding the interview process and areas for improvement. Example: “Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies.”
  • Ethical Considerations: Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics. Example: Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol.
  • Key Themes/Quotes: Use this column to identify emerging themes or save quotes you might refer to later when creating the report. Example: Frustration with a confusing interface, lack of intuitive navigation, and desire for more customization options.
  • Analysis Notes: Use as many lines as needed to add notes for consideration during analysis.

You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following additional columns for logistical purposes (a minimal code sketch of the full tracker follows the list):

  • Date of Interview,
  • Participant ID,
  • Interview Format (e.g., in person, remote, video, phone).
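As promised above, here is a minimal sketch of how you might set up such a tracker programmatically, using pandas. The column names mirror the table; the single row of sample values is a placeholder:

```python
# Minimal sketch of the Transactional Model tracking spreadsheet.
# Column names mirror the table above; the sample values are placeholders.
import pandas as pd

columns = [
    # Logistics
    "Date of Interview", "Participant ID", "Interview Format",
    # Pre-interview planning
    "Topic/Question", "Participant Context", "Engagement Strategies", "Shared Understanding",
    # During interview
    "Verbal Cues", "Nonverbal Cues", "Researcher Reflexivity",
    "Power Dynamics", "Unplanned Questions",
    # Post-interview reflection
    "Meaning Co-construction", "Openness and Flexibility", "Participant Feedback",
    "Ethical Considerations", "Key Themes/Quotes", "Analysis Notes",
]

# Start every interview with an empty row and fill cells in as you go.
row = {col: "" for col in columns}
row.update({
    "Date of Interview": "2024-01-15",
    "Participant ID": "P01",
    "Interview Format": "remote video",
})

tracker = pd.DataFrame([row], columns=columns)
tracker.to_csv("transactional_model_tracker.csv", index=False)
```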

By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.

As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.

As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.

Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.

References and Further Reading

  • The Mathematical Theory of Communication (PDF), Shannon, C. E., & Weaver, W.
  • From organizational effectiveness to relationship indicators: Antecedents of relationships, public relations strategies, and relationship outcomes, Grunig, J. E., & Huang, Y. H.
  • Communication and Persuasion: Psychological Studies of Opinion Change, Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Yale University Press
  • Communication research as an autonomous discipline, Chaffee, S. H. (1986). Communication Yearbook, 10, 243-274
  • Interpersonal Communication: Everyday Encounters (PDF), Wood, J. (2015)
  • Theories of Human Communication, Littlejohn, S. W., & Foss, K. A. (2011)
  • McQuail’s Mass Communication Theory (PDF), McQuail, D. (2010)
  • Bridges Not Walls: A Book About Interpersonal Communication, Stewart, J. (2012)

Understanding the challenges affecting food-sharing apps’ usage: insights using a text-mining and interpretable machine learning approach

Praveen Puram, Soumya Roy & Anand Gurumurthy. Annals of Operations Research (2024). Published: 27 June 2024. DOI: https://doi.org/10.1007/s10479-024-06130-1

Food waste is a serious problem affecting societies and contributing to climate change. About one-third of all food produced globally is wasted, while millions of people remain food insecure. Food-sharing apps attempt to simultaneously address ‘hunger’ and ‘food waste’ at the community level. Though highly beneficial, these apps experience low usage. Existing studies have explored multiple challenges affecting food-sharing usage, but are constrained by limited data and narrow geographical focus. To address this gap, this study analyzes online user reviews from top food-sharing apps operating globally. A unique approach of analyzing text data with interpretable machine learning (IML) tools is utilized. Eight challenges affecting food-sharing app usage are obtained using the topic modeling approach. Further, the review scores representing user experience (UX) are assessed for their dependence on each challenge using the document-topic matrix and machine learning (ML) procedures. Tree-based ML algorithms, namely regression tree, bagging, random forest, boosting, and Bayesian additive regression tree are employed. The best-performing algorithm is then complemented with IML tools such as accumulated local effects and partial dependence plots, to assess the impact of each challenge on UX. Critical improvement areas to increase food-sharing apps’ usage are highlighted, such as service responsiveness, app design, food variety, and unethical behavior. This study contributes to the nascent literature on food-sharing and IML applications. A significant advantage of the methodological approach utilized includes better explainability of ML models involving text data, at both the global and local interpretability levels, in terms of the associated features and feature interactions.
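For readers curious what the modeling pipeline described here looks like in practice, the following is a minimal sketch on synthetic data: fit a tree ensemble on a document-topic matrix and inspect one topic’s marginal effect on review scores with a partial dependence computation. This illustrates the general approach, not the authors’ actual code; the shapes, topic labels, and parameters are made up:

```python
# Minimal sketch of the abstract's pipeline on synthetic data: regress
# review scores on document-topic proportions, then inspect one topic's
# marginal effect. Not the authors' code; all values are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
n_reviews, n_topics = 500, 8                          # e.g., eight challenge topics
X = rng.dirichlet(np.ones(n_topics), size=n_reviews)  # document-topic matrix
y = 5 - 3 * X[:, 0] + rng.normal(0, 0.3, n_reviews)   # scores hurt by topic 0

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Partial dependence of the predicted score on topic 0
# (say, a hypothetical "service responsiveness" challenge).
pdp = partial_dependence(model, X, features=[0])
print(pdp["average"][0][:5])  # predicted score along the topic-0 grid
```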


Data availability

The data (online user reviews) used in this study were collected from various food-sharing apps in the Google Play Store and Apple App Store.

Code availability

The freely available software “Orange Data Mining”, “RStudio version 2023.03.0 + 386”, and “R version 4.3.0” were used for data analysis.


Keywords: user-generated content; natural language processing; explainable machine learning; sharing economy; surplus food redistribution; sustainability


Choosing a Review Type

For guidance related to choosing a review type, see:

  • "What Type of Review is Right for You?" - Decision Tree (PDF) This decision tree, from Cornell University Library, highlights key difference between narrative, systematic, umbrella, scoping and rapid reviews.
  • Reviewing the literature: choosing a review design Noble, H., & Smith, J. (2018). Reviewing the literature: Choosing a review design. Evidence Based Nursing, 21(2), 39–41. https://doi.org/10.1136/eb-2018-102895
  • What synthesis methodology should I use? A review and analysis of approaches to research synthesis Schick-Makaroff, K., MacDonald, M., Plummer, M., Burgess, J., & Neander, W. (2016). What synthesis methodology should I use? A review and analysis of approaches to research synthesis. AIMS Public Health, 3 (1), 172-215. doi:10.3934/publichealth.2016.1.172 More information less... ABSTRACT: Our purpose is to present a comprehensive overview and assessment of the main approaches to research synthesis. We use "research synthesis" as a broad overarching term to describe various approaches to combining, integrating, and synthesizing research findings.
  • Right Review - Decision Support Tool Not sure of the most suitable review method? Answer a few questions and be guided to suitable knowledge synthesis methods. Updated in 2022 and featured in the Journal of Clinical Epidemiology 10.1016/j.jclinepi.2022.03.004

Types of Evidence Synthesis / Literature Reviews

Literature reviews are comprehensive summaries and syntheses of the previous research on a given topic.  While narrative reviews are common across all academic disciplines, reviews that focus on appraising and synthesizing research evidence are increasingly important in the health and social sciences.  

Most evidence synthesis methods use formal and explicit methods to identify, select and combine results from multiple studies, making evidence synthesis a form of meta-research.  

The review purpose, methods used and the results produced vary among different kinds of literature reviews; some of the common types of literature review are detailed below.

Common Types of Literature Reviews 1

Narrative (Literature) Review

  • A broad term referring to reviews with a wide scope and non-standardized methodology
  • Search strategies, comprehensiveness of literature search, time range covered and method of synthesis will vary and do not follow an established protocol

Integrative Review

  • A type of literature review based on a systematic, structured literature search
  • Often has a broadly defined purpose or review question
  • Seeks to generate or refine a theory or hypothesis and/or develop a holistic understanding of a topic of interest
  • Relies on diverse sources of data (e.g. empirical, theoretical or methodological literature; qualitative or quantitative studies)

Systematic Review

  • Systematically and transparently collects and categorizes existing evidence on a question of scientific, policy or management importance
  • Follows a research protocol that is established a priori
  • Some sub-types of systematic reviews include: SRs of intervention effectiveness, diagnosis, prognosis, etiology, qualitative evidence, economic evidence, and more.
  • Time-intensive and often takes months to a year or more to complete 
  • The most commonly referred to type of evidence synthesis; sometimes confused as a blanket term for other types of reviews

Meta-Analysis

  • Statistical technique for combining the findings from disparate quantitative studies
  • Uses statistical methods to objectively evaluate, synthesize, and summarize results (a standard weighting scheme is sketched after this list)
  • Often conducted as part of a systematic review
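To make “combining the findings” concrete: in the common fixed-effect approach, each study’s effect estimate is pooled by inverse-variance weighting, so more precise studies contribute more. A standard formulation (not tied to any specific review cited here):

```latex
% Fixed-effect (inverse-variance) meta-analysis over k studies:
% each study's estimate is weighted by the inverse of its variance.
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \, \hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\operatorname{Var}(\hat{\theta}_i)},
\qquad
\operatorname{SE}(\hat{\theta}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}
```

Random-effects variants follow the same pattern but add a between-study variance term to each study’s variance before weighting.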

Scoping Review

  • Systematically and transparently collects and categorizes existing evidence on a broad question of scientific, policy or management importance
  • Seeks to identify research gaps, identify key concepts and characteristics of the literature and/or examine how research is conducted on a topic of interest
  • Useful when the complexity or heterogeneity of the body of literature does not lend itself to a precise systematic review
  • Useful if authors do not have a single, precise review question
  • May critically evaluate existing evidence, but does not attempt to synthesize the results in the way a systematic review would 
  • May take longer than a systematic review

Rapid Review

  • Applies a systematic review methodology within a time-constrained setting
  • Employs methodological "shortcuts" (e.g., limiting search terms and the scope of the literature search), at the risk of introducing bias
  • Useful for addressing issues requiring quick decisions, such as developing policy recommendations

Umbrella Review

  • Reviews other systematic reviews on a topic
  • Often defines a broader question than is typical of a traditional systematic review
  • Most useful when there are competing interventions to consider

1. Adapted from:

Eldermire, E. (2021, November 15). A guide to evidence synthesis: Types of evidence synthesis. Cornell University LibGuides. https://guides.library.cornell.edu/evidence-synthesis/types

Nolfi, D. (2021, October 6). Integrative Review: Systematic vs. Scoping vs. Integrative. Duquesne University LibGuides. https://guides.library.duq.edu/c.php?g=1055475&p=7725920

Delaney, L. (2021, November 24). Systematic reviews: Other review types. UniSA LibGuides. https://guides.library.unisa.edu.au/SystematicReviews/OtherReviewTypes

Further Reading: Exploring Different Types of Literature Reviews

  • A typology of reviews: An analysis of 14 review types and associated methodologies. Grant, M. J., & Booth, A. (2009). Health Information and Libraries Journal, 26(2), 91-108. doi:10.1111/j.1471-1842.2009.00848.x ABSTRACT: The expansion of evidence-based practice across sectors has led to an increasing variety of review types. However, the diversity of terminology used means that the full potential of these review types may be lost amongst a confusion of indistinct and misapplied terms. The objective of this study is to provide descriptive insight into the most common types of reviews, with illustrative examples from health and health information domains.
  • Clarifying differences between review designs and methods. Gough, D., Thomas, J., & Oliver, S. (2012). Systematic Reviews, 1, 28. doi:10.1186/2046-4053-1-28 ABSTRACT: This paper argues that the current proliferation of types of systematic reviews creates challenges for the terminology for describing such reviews....It is therefore proposed that the most useful strategy for the field is to develop terminology for the main dimensions of variation.
  • Are we talking the same paradigm? Considering methodological choices in health education systematic review. Gordon, M. (2016). Medical Teacher, 38(7), 746-750. doi:10.3109/0142159X.2016.1147536 ABSTRACT: Key items discussed are the positivist synthesis methods meta-analysis and content analysis to address questions in the form of "whether and what" education is effective. These can be juxtaposed with the constructivist aligned thematic analysis and meta-ethnography to address questions in the form of "why." The concept of the realist review is also considered. It is proposed that authors of such work should describe their research alignment and the link between question, alignment and evidence synthesis method selected.
  • Meeting the review family: Exploring review types and associated information retrieval requirements. Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Health Information & Libraries Journal, 36(3), 202–222. doi:10.1111/hir.12276

""

Integrative Reviews

"The integrative review method is an approach that allows for the inclusion of diverse methodologies (i.e. experimental and non-experimental research)." (Whittemore & Knafl, 2005, p. 547).

  • The integrative review: Updated methodology. Whittemore, R., & Knafl, K. (2005). Journal of Advanced Nursing, 52(5), 546–553. doi:10.1111/j.1365-2648.2005.03621.x ABSTRACT: The aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process....An integrative review is a specific review method that summarizes past empirical or theoretical literature to provide a more comprehensive understanding of a particular phenomenon or healthcare problem....Well-done integrative reviews present the state of the science, contribute to theory development, and have direct applicability to practice and policy.

""

  • Conducting integrative reviews: A guide for novice nursing researchers Dhollande, S., Taylor, A., Meyer, S., & Scott, M. (2021). Conducting integrative reviews: A guide for novice nursing researchers. Journal of Research in Nursing, 26(5), 427–438. https://doi.org/10.1177/1744987121997907
  • Rigour in integrative reviews Whittemore, R. (2007). Rigour in integrative reviews. In C. Webb & B. Roe (Eds.), Reviewing Research Evidence for Nursing Practice (pp. 149–156). John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470692127.ch11

Scoping Reviews

Scoping reviews are evidence syntheses that are conducted systematically, but begin with a broader question than traditional systematic reviews, allowing the researcher to 'map' the relevant literature on a given topic.

  • Scoping studies: Towards a methodological framework. Arksey, H., & O'Malley, L. (2005). International Journal of Social Research Methodology, 8(1), 19-32. doi:10.1080/1364557032000119616 ABSTRACT: We distinguish between different types of scoping studies and indicate where these stand in relation to full systematic reviews. We outline a framework for conducting a scoping study based on our recent experiences of reviewing the literature on services for carers for people with mental health problems.
  • Scoping studies: Advancing the methodology. Levac, D., Colquhoun, H., & O'Brien, K. K. (2010). Implementation Science, 5(1), 69. doi:10.1186/1748-5908-5-69 ABSTRACT: We build upon our experiences conducting three scoping studies using the Arksey and O'Malley methodology to propose recommendations that clarify and enhance each stage of the framework.
  • Methodology for JBI scoping reviews. Peters, M. D. J., Godfrey, C. M., McInerney, P., Baldini Soares, C., Khalil, H., & Parker, D. (2015). The Joanna Briggs Institute reviewers' manual: Methodology for JBI scoping reviews [PDF]. http://joannabriggs.org/assets/docs/sumari/Reviewers-Manual_Methodology-for-JBI-Scoping-Reviews_2015_v2.pdf ABSTRACT: Unlike other reviews that address relatively precise questions, such as a systematic review of the effectiveness of a particular intervention based on a precise set of outcomes, scoping reviews can be used to map the key concepts underpinning a research area as well as to clarify working definitions, and/or the conceptual boundaries of a topic. A scoping review may focus on one of these aims or all of them as a set.


Rapid Reviews

Rapid reviews are systematic reviews that are undertaken under a tighter timeframe than traditional systematic reviews. 

  • Evidence summaries: The evolution of a rapid review approach. Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Systematic Reviews, 1(1), 10. doi:10.1186/2046-4053-1-10 ABSTRACT: Rapid reviews have emerged as a streamlined approach to synthesizing evidence - typically for informing emergent decisions faced by decision makers in health care settings. Although there is growing use of rapid review "methods," and proliferation of rapid review products, there is a dearth of published literature on rapid review methodology. This paper outlines our experience with rapidly producing, publishing and disseminating evidence summaries in the context of our Knowledge to Action (KTA) research program.
  • What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Harker, J., & Kleijnen, J. (2012). International Journal of Evidence‐Based Healthcare, 10(4), 397-410. doi:10.1111/j.1744-1609.2012.00290.x ABSTRACT: In recent years, there has been an emergence of "rapid reviews" within Health Technology Assessments; however, there is no known published guidance or agreed methodology within recognised systematic review or Health Technology Assessment guidelines. In order to answer the research question "What is a rapid review and is methodology consistent in rapid reviews of Health Technology Assessments?", a study was undertaken in a sample of rapid review Health Technology Assessments from the Health Technology Assessment database within the Cochrane Library and other specialised Health Technology Assessment databases to investigate similarities and/or differences in rapid review methodology utilised.
  • Rapid Review Guidebook. Dobbins, M. (2017). Hamilton, ON: National Collaborating Centre for Methods and Tools.
  • NCCMT Summary and Tool for Dobbins' Rapid Review Guidebook. National Collaborating Centre for Methods and Tools. (2017). Hamilton, ON: McMaster University. http://www.nccmt.ca/knowledge-repositories/search/308

AI Assistance for UX: A Literature Review Through Human-Centered AI

Recent advancements in HCI and AI research attempt to support user experience (UX) practitioners with AI-enabled tools. Despite the potential of emerging models and new interaction mechanisms, mainstream adoption of such tools remains limited. We took the lens of Human-Centered AI and presented a systematic literature review of 359 papers, aiming to synthesize the current landscape, identify trends, and uncover UX practitioners’ unmet needs in AI support. Guided by the Double Diamond design framework, our analysis uncovered that UX practitioners’ unique focuses on empathy building and experiences across UI screens are often overlooked. Simplistic AI automation can obstruct the valuable empathy-building process. Furthermore, focusing solely on individual UI screens without considering interactions and user flows reduces the system’s practical value for UX designers. Based on these findings, we call for a deeper understanding of UX mindsets and more designer-centric datasets and evaluation metrics, for HCI and AI communities to collaboratively work toward effective AI support for UX.

1. Introduction

Advancements in Artificial Intelligence (AI) have enabled applications in numerous sectors, with the user experience (UX) industry being a notable potential beneficiary. AI models can facilitate processes that involve various data modalities, ranging from text-based affinity diagrams (Goldman et al., 2022; Borlinghaus and Huber, 2021) and user interface (UI) development code (Beltramelli, 2017; Feng et al., 2021a) to image-based UI screenshots (Leiva et al., 2022a; Wang et al., 2021; Zhao et al., 2021). The enhancements of language-based and multi-modal AI models have expanded the possibilities of applications in UX design and research (Dhinakaran, [n. d.]; Di Fede et al., 2022; Kim et al., 2023). Notably, the impressive capabilities of large language models (LLMs) have further promoted AI adoption in real applications (Dhinakaran, [n. d.]). Diffusion-based, text-to-image generative AI such as Stable Diffusion (Rombach et al., 2022) and Midjourney (https://www.midjourney.com/) also opens up new avenues for creative professionals to utilize AI in their work (Verheijden and Funk, 2023; Wei et al., 2023).

However, creating usable, effective, and enjoyable AI-enabled experiences for UX practitioners remains challenging (Yang et al., 2020). A technology-driven mindset, prevalent in AI communities, can lead to applications that are driven by the latest technology but do not necessarily address UX practitioners’ unique goals, such as empathy-building. Furthermore, the fluid, nonlinear UX methodologies (Gray, 2016) are not the same as logical, computational thinking and can be hard for AI researchers to grasp. The lack of insight into designer workflows and practices makes it difficult for AI research to deliver effective and seamless support for UX professionals.

Not all UX processes are ones practitioners want to delegate to AI (Marathe and Toyama, 2018; Lubars and Tan, 2019), leading to concerns about diminished designer empathy when valuable research processes become automated. Such concerns call into question the real-world efficacy of these AI models in providing meaningful UX support. Early research prototypes of AI-enabled design support systems have received positive feedback in user studies (Cheng et al., 2023b; Hegemann et al., 2023; Rietz and Maedche, 2021; Gebreegziabher, 2023). At the same time, unique data modalities, user needs, and workflows in UX have also created new practical challenges for AI researchers to tackle (Li et al., 2021b; Rietz and Maedche, 2021; Gebreegziabher, 2023; Wang et al., 2021).

The field of human-centered AI (HCAI) provides valuable perspectives for investigating the current gap and future risks in AI for UX support. HCAI sits at the intersection of AI and Human-Computer Interaction (HCI) and embraces the human-centered philosophy. It aims to ensure that AI systems align with human values and mitigate potential harms to individuals, communities, and societies  (Shneiderman, 2022 ) . As AI models integrate into more real-world applications, it becomes imperative to prioritize human-centered design and research principles in AI adoption. Researchers in HCAI have investigated useful design metaphors and paradigms for AI systems  (Yang et al . , 2019b ; Shneiderman, 2022 ) .

In this work, we conducted a systematic literature review (SLR) through the lens of HCAI and analyzed the state of technical and system research in AI assistance for UX practitioners. We outline the role of AI in different phases of UX practices using the classic Double Diamond design framework  (Council, [n. d.] ) . Our SLR sought to understand AI’s current technical capabilities with UX-related tasks and map out the rapidly expanding design space of AI for UX support. Our general goal is to pinpoint opportunities for both HCI and AI communities, to identify the critical needs of UX professionals, and to find common ground between UX practices and frontier academic AI research. Thus, we define our research questions as follows.

(1) What capabilities do the latest AI models possess for different UX-related tasks?

(2) Regarding UX practitioners’ needs and preferences for AI assistance, what insights have been revealed from past research?

(3) What are the gaps between existing empirical studies and opportunities for future AI research and interactive system development?

Through our SLR with 359 papers, we found that past work has a higher focus on technology-driven approaches than human-centered investigations. Our analysis underscored the contrast between AI’s data-driven nature and the human-centric philosophy of UX. Building on this, our study maps existing research onto the Double Diamond framework  (Council, [n. d.] ) , identifying key technical capabilities of AI in UX (Section  4 ) and underscoring overlooked areas such as empathy-building and enhancing user experiences across multiple UI screens (Section  4.6 ). The UX industry can also benefit from embracing data-driven strategies to capture feedback from ever-expanding user bases. We emphasize the need for a deeper understanding of UX methodologies and goals, the expansion of quantitative UX metrics, and careful consideration of AI delegability based on existing Human-Centered AI frameworks  (Lubars and Tan, 2019 ) . This work aims to offer valuable insights and direction for future research to the HCI, UX, and AI communities, highlighting the potential of this promising interdisciplinary, translational research domain.

2. Background and Related Work

2.1. UI/UX Design and Support Tools

UI/UX as a profession has established its status in both the tech industry and academia over the past decades. Nielsen estimated that the population of UX professionals worldwide grew from about 1,000 to 1 million between 1983 and 2017, and that by 2050 the number will increase another 100-fold to 100 million (Nielsen, 2017). UX practitioners aim to create products and experiences that are user-friendly, enjoyable, and effective. They often try to understand target users’ needs through human-centered methodologies, e.g. contextual interviews, and iteratively prototype their design solutions and elicit feedback from users. Such a process is well captured in the British Design Council’s Double Diamond framework (Council, [n. d.]). Through two divergent-convergent processes, UX practitioners brainstorm and select particular aspects of an issue to tackle, then iteratively prototype a few potential solutions and finalize one through user feedback.

Numerous support tools have been developed for UX design. In early HCI research, the SILK system was one of the first no-code designer-support UI prototyping tools (Landay, 1996). Later, Sketch and Figma became among the most popular tools for UX prototyping. More related to the early exploratory phases, platforms such as Miro, Mural, and FigJam were created for UX professionals to organize ideas, conduct brainstorming, or qualitatively analyze user data. Evaluation platforms such as UserTesting (https://www.usertesting.com/) and Maze (https://maze.co) provide support for conducting user evaluations, while researchers have also investigated automated design testing (Deka et al., 2017b) and remote user testing (Martelaro and Ju, 2017). Notably, design systems such as Google Material Design and Apple Human Interface Guidelines also provide tools to help designers create user-friendly, consistent, and accessible UIs.

Recently, we have witnessed an increase in AI integration into design support tools in both academia and industry. In academia, many researchers have been exploring AI-enabled support tools for UX practitioners (Li et al., 2021b; Sermuga Pandian et al., 2021b; Lu et al., 2022; Knearem et al., 2023). In industry, design tools like Uizard (https://uizard.io/) and Framer (https://www.framer.com/ai) have rolled out AI features to generate UI screens from natural language descriptions. Figma also recently acquired Diagram (https://diagram.com/), a startup that previously focused on AI-enabled Figma plugins, and started to roll out AI features in its tool. However, the UX industry embodies a human-centered principle, which is inherently different from the technology-first mindset prevalent in AI communities. This has created friction in designing better AI experiences (Yang et al., 2020) as well as in creating effective AI support for UX practitioners (Lu et al., 2022). We have yet to observe any of these AI-enabled tools become mainstream and adopted by a significant portion of the UX industry. This might reflect a “research-practice gap” that is common across HCI research (Norman, 2010), bridging which requires more translational research and resources to fulfill the needs of practitioners (Colusso et al., 2017).

2.2. Human-Centered AI

Human-Centered AI (HCAI) is an emergent interdisciplinary research field that bridges AI and HCI. HCAI embraces the human-centered philosophy and takes a humanistic and ethical view towards the latest AI technology: how to enhance humans rather than replace them  (Xu, 2019 ) . Researchers in HCAI have predicted that by embracing a human-centered future, the AI community’s impact will likely grow even greater  (Shneiderman, 2022 ) .

The primary research focuses of HCAI include: (1) improving AI-driven technology to better augment human needs, (2) identifying design methodologies for safe and trustworthy AI systems, and (3) understanding and safeguarding the impact of AI on individuals, communities, and societies  (Xu, 2019 ; Shneiderman, 2022 ) . In this work, we investigate AI support for UX practitioners through the lens of HCAI, proposing our research questions (see the Introduction section) based on the research focuses above. We refer to past research in HCAI, including Principles of Mixed-Initiative Interfaces  (Horvitz, 1999 ) , Guidelines for Human-AI Interaction  (Amershi et al . , 2019 ) , and books on Human-Centered AI  (Shneiderman, 2022 ) . Particularly, we balance our analysis on both the technical and design aspects, seeking to understand existing AI models’ capabilities in UX tasks, as well as practitioners’ needs for automation in current methodologies and practices.

2.3. Literature Review in AI Support for UI/UX design

Past literature review studies in computing and HCI have successfully identified trends and gaps and proposed new research directions in different specific domains  (Dell and Kumar, 2016 ; Dillahunt et al . , 2017 ; Lopez and Guerrero, 2017 ; Pater et al . , 2021 ; Stefanidi et al . , 2023 ) . We consider the call for more literature review studies in HCI, CSCW, and Ubicomp  (Lopez and Guerrero, 2017 ) and specifically look at the emerging field of AI for UI/UX design support.

While many researchers have conducted general investigations on this topic (Lu et al., 2022; Knearem et al., 2023; Isgrò et al., 2022; Liao et al., 2020; Grigera et al., 2023), only three papers had used a systematic literature review by the time we conducted this study. Malik et al. reviewed 100 papers and analyzed the deep learning approaches that have been utilized to support UI/UX design work (Malik et al., 2023). Their analysis revealed potential for cross-platform datasets, more advanced UI generation models, and a centralized deep-learning-based design automation system.

In addition, Abbas et al. (Abbas et al., 2022) reviewed 18 papers in this field and analyzed UX designers’ current challenges in incorporating ML in their design process. Their results showed that most ML-enabled UX design tools fail to be integrated in practical settings. They argued the need to build support tools by considering existing design practices, rather than simply building on existing ML models’ capabilities. Interestingly, the paper did not distinguish designing with ML support (the focus of our paper) from designing ML-involved systems and experiences (i.e., AI as a design material, outside of our scope). Many of their summaries and discussions were centered around designers’ need to understand ML, which is beyond the scope of our analysis.

In 2022, Stige et al. conducted a literature review of 46 articles in this field, analyzing how AI is currently used in UX design (namely, user requirement specification, solution design, and design evaluation) and potential future research themes (Stige et al., 2023). Compared to their sample (N=46), ours was more comprehensive (N=359) and up-to-date (collected in 2023), resulting in a more complete analysis of the recent empirical and technical research landscape (Section 4). In addition, by mapping previous research onto the four phases of the Double Diamond framework, we revealed more detail about AI’s involvement in UX research and design activities. Our analysis also uncovered more in-depth differences between the AI and UX communities’ mindsets and pointed out meaningful gaps to bridge in future research (Section 5).

3. Literature Review Method

To address our research questions (see Introduction ), we conducted a systematic literature review (SLR) of papers in relevant research fields. SLRs are designed to help understand and interpret a large volume of information, to explain “what works” (i.e., current landscape) and “what should work” (i.e., potential gaps and future directions) in a given field. The “systematic” aspect of SLR focuses on identifying all research that addresses a specific question to conduct a balanced and unbiased summary  (Nightingale, 2009 ) . We followed previous guidelines on conducting SLRs  (Xiao and Watson, 2019 ; Nightingale, 2009 ) and referred to previous SLR studies in adjacent fields to form our methods  (Kaluarachchi and Wickramasinghe, 2023 ; Dillahunt et al . , 2017 ; Pater et al . , 2021 ; Wohlin, 2014 ) .

We used snowball sampling, a widely adopted literature search strategy, to select our literature sample (more explanation of our rationale for using snowball sampling can be found in Appendix A). It begins with a starter set of a few relevant papers, then iteratively includes related papers that were cited by, or cited, papers in the starter set (i.e., the backward and forward snowballing processes) (Wohlin, 2014). Google Scholar was used as our primary search engine, as it is one of the largest online academic search engines and is commonly used in literature review studies (Wohlin, 2014; Xiao and Watson, 2019; Siddaway et al., 2019; Cheng, 2016). We did not restrict the publication venues, to reduce bias and obtain a diverse sample across disciplines (Nightingale, 2009). We depict our process in Fig. 1, following an adapted version of the PRISMA statement (Moher et al., 2009). Below, we detail our literature selection process, including our inclusion/exclusion criteria, the selection of a starter set, and the iterative backward and forward sampling.
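
To make the procedure concrete, the following minimal Python sketch illustrates the backward/forward snowballing loop described above. The functions fetch_references, fetch_citations, and is_relevant are hypothetical stand-ins for search-engine queries and our manual screening against the inclusion/exclusion criteria; our actual process was performed by human researchers, not code.

```python
# A minimal sketch of iterative backward/forward snowballing.
# fetch_references() and fetch_citations() are hypothetical stand-ins for
# queries against an academic search engine such as Google Scholar.

def snowball(starter_set, is_relevant, fetch_references, fetch_citations,
             max_iterations=2):
    """Iteratively expand a paper sample via backward and forward sampling."""
    sample = set(starter_set)
    frontier = set(starter_set)
    for _ in range(max_iterations):
        candidates = set()
        for paper in frontier:
            candidates |= set(fetch_references(paper))  # backward: papers it cites
            candidates |= set(fetch_citations(paper))   # forward: papers citing it
        # Screen new candidates against the inclusion/exclusion criteria.
        new_papers = {p for p in candidates - sample if is_relevant(p)}
        if not new_papers:
            break
        sample |= new_papers
        frontier = new_papers  # only newly added papers seed the next round
    return sample
```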

[Fig. 1: Overview of our literature selection process, following an adapted version of the PRISMA statement.]

3.1. Inclusion/Exclusion Criteria

In line with our research scope, we included papers that satisfy both of the following criteria:

providing support for methodologies or artifacts in UI/UX design and research,

incorporating the use of artificial intelligence for such support.

We referred to articles regarding UX design and research practices (Rosala, 2020; Farrell, 2017; Pernice, 2019; Rosala, 2022) to inform our selection process against the first criterion. Specifically, we used the Double Diamond design framework (Council, [n. d.]) to map out opportunities for AI support in the UX workflow, similar to past studies in this domain (Yang et al., 2020). We excluded papers that focused only on UI development without relevance to UI/UX design or research.

As discussed in previous work, coming up with a precise, comprehensive definition of AI is hard, even within AI research communities (Stone et al., 2022). It is even harder in HCI and UX contexts (Yang et al., 2020) and is beyond the scope of our paper. We use the term to refer to a suite of computational techniques generally considered within the domain of AI, from neural-network-based deep learning models to statistical machine learning approaches (Russell and Norvig, 2010). We excluded papers investigating the design of AI systems, often referred to as "AI as a design material" (Yang et al., 2020; Yildirim et al., 2022). These papers often address the designerly understanding of AI (Liao et al., 2023) and design processes that account for AI safety and accountability (Moore et al., 2023). They focus on the design of AI, rather than on supporting design with the help of AI (our focus).

It is also noteworthy that our focus is specifically on AI adoption in UX support. While relevant, we do not aim to conduct a comprehensive literature review on creativity support tools, human-AI co-creation, or human-centered AI, given these are much broader research topics independent of our scope. However, we did draw inspiration from papers from these domains that do not fit our scope exactly and include them in our Discussion section for better generalizability of our findings.

3.2. Starter Set

In the beginning, four researchers collaboratively searched for and filtered relevant papers using academic search engines, including Google Scholar and the ACM Digital Library, based on our inclusion criteria defined in Section 3.1. When selecting our starter set, we followed previous work (Nightingale, 2009) and aimed for a diversity of topics to minimize bias. Specifically, we ensured a balanced set of papers addressing every phase of the Double Diamond process (Council, [n. d.]). The four researchers communicated frequently and discussed in depth during the selection process to ensure the representativeness and quality of the starter set. In the end, we included 17 papers related to the four Double Diamond phases (four, three, five, and five papers from discover, define, develop, and deliver, respectively), plus two papers that investigate the same problem domain but do not specifically fit into any single phase. In all, our starter set consisted of 19 representative papers.

3.3. Backward and Forward Sampling

After selecting the starter set, we conducted two rounds of iterative sampling. In each iteration, both the papers that our sample cited (backward sampling of earlier papers) and the papers that cited our sample (forward sampling of later papers) were examined by four researchers. Researchers examined the full text of identified papers to determine their relevance, eligibility, and quality. A minimum of two researchers independently evaluated each paper and settled disagreements through discussion. Details of the iterations are depicted in Fig. 1, following an adapted version of the PRISMA statement (Moher et al., 2009).

We stopped after the second snowballing iteration because we had already obtained a large sample (N=359) representative of the existing work in our domain. Moreover, in the second iteration, we observed that papers from the first iteration repeatedly reappeared among the papers of interest. During the analysis process, upon detailed examination, 68 papers were excluded due to their relative lack of relevance to our research questions. Our final sample contained a total of 359 papers, sourced from March to July 2023 (Fig. 1). To the best of our knowledge, it is to date the largest repository of existing literature on the topic of AI for UX support compared to past literature reviews in this field (Abbas et al., 2022; Malik et al., 2023; Stige et al., 2023).

4. Analysis

After all papers were selected and screened, the research team mapped each paper's main topic into one of the four phases of the Double Diamond design framework (Council, [n. d.]), a classic framework that comprehensively covers the activities in a design process and has guided much previous academic research on UX design (Gustafsson, 2019; Yang et al., 2020; Ammarullah et al., 2021). It encapsulates the two divergent–convergent processes in design, where designers explore potential problems in the domain, converge on the main target issues, prototype a few potential solutions, and decide on the most effective one through testing and evaluation (Council, [n. d.]). It should be noted that modern design processes are mostly iterative, so designers can go back and forth between phases.

[Fig. 2: Distribution of the sampled papers across the four Double Diamond phases and the two additional categories, Datasets and General AI Models.]

Given that we also focus on the technical feasibility of AI models in UX, two additional categories were included: "Datasets", covering UX-related datasets, and "General AI Models", covering AI models that work with UX-related data and apply to more than one phase of the Double Diamond framework. When a paper fits more than one phase, we include it in the primary phase it belongs to.

The papers in each phase were analyzed and discussed by at least two researchers. For each paper, based on our research questions and our human-centered AI perspective, we define the following seven aspects to focus on:

Research contribution type (according to  (Wobbrock and Kientz, 2016 ) )

Target problem/task

Study/discussion of user needs

Supporting empirical evidence from previous work (if any)

AI model architecture and data modality

Other important model aspects (e.g. user control, explainability)

UX artifacts involved

Researchers also took notes on meaningful information outside of these aspects. In a shared spreadsheet, researchers filled in information about the paper for the above aspects and discussed them for our analysis.

[Fig. 3 and Fig. 4: Paper counts per year in our sample, overall and broken down by the six categories.]

Fig. 3 depicts the trend of paper counts per year in our sample, and Fig. 4 provides a more detailed view of the six categories. They show that research in this field has increased significantly since 2020. Note that the literature review was conducted from March to July 2023, so we only included papers published before then. Through further analysis of the general trends, we identified two imbalances in the current research landscape:

Imbalance between technology-centric and human-centered approaches

We visualized the proportion of papers that studied or analyzed the needs of their target users using human-centered methodologies defined in previous literature (Olson and Kellogg, 2014; Rosala, 2020; Farrell, 2017; Rosala, 2022; Moran, 2018), such as ethnographic interviews and usability studies. The result is shown in Fig. 5: in total, only 24.3% of the papers (N=76) across these four phases (N=309) used human-centered methodologies and discussed user needs in their scenarios. This reflects the current technology-centric tendency of research on AI assistance for UX. Although this phenomenon is not uncommon given the nascent nature of this field, it calls for a more balanced approach that incorporates human-centered investigations. Emphasizing human-centered research not only addresses the preferences of users but also enhances the overall value and impact of AI solutions (Shneiderman, 2022).

Imbalance between studies in Double Diamond phases

As depicted in Fig. 5 (b), the papers in our sample display a noticeable inclination towards the develop and deliver phases, while seemingly underrepresenting the define phase. Determining the exact cause of this observed trend is challenging and beyond the scope of our review. Nevertheless, we hypothesize that this bias stems from the wealth of data available for the latter two phases (as discussed in Section 4.5), coupled with the inherently subjective and task-dependent nature of evaluating design concepts during the define phase (Council, [n. d.]; Gray, 2016).

[Fig. 5: (a) Proportion of papers applying human-centered methodologies; (b) distribution of papers across the Double Diamond phases.]

In the following sections, we dive deeper into our analysis of previous work in each of these six categories. At the end, we compare our findings from all six phases and summarize the results of our general analysis.

4.1. Discover

Discover is the divergent phase in the first diamond. It is the beginning phase where most exploratory user research is conducted. Designers need to understand the design problems and build user empathy in this phase. Common methodologies and artifacts involved in this phase include personas, user interviews, and brainstorming  (Council, [n. d.] ) . Our analysis summarized related research themes from past works as follows: Review Mining (N=27), Data-driven Persona (N=21), and AI-supported Brainstorming (N=18).

4.1.1. Review Mining

For UX researchers, analyzing user reviews helps identify current design problems, potential user requirements, and other experience-relevant information (Hedegaard and Simonsen, 2013; Baj-Rogowska and Sikorski, 2023; Yang et al., 2019a; Mendes and Furtado, 2017). The old-fashioned practice is to manually code data or use rule-based algorithms to classify user reviews into topics and conduct statistical analysis (Mendes et al., 2015; Maalej et al., 2016). The introduction of machine learning to this task dates back to the 2010s (Dąbrowski et al., 2022). It automated the processing of vast amounts of textual data and improved on traditional algorithms with a better understanding of natural language, for example by extracting structured information from narratives, such as product features and user attitudes (Tuch et al., 2013).
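
As a toy illustration of this line of work (not a reproduction of any cited system), a review classifier can be sketched with a standard TF-IDF plus logistic regression pipeline; the labels and example reviews below are invented for illustration.

```python
# A toy sketch of ML-based review classification; real systems train on
# thousands of manually coded reviews rather than four examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The app crashes every time I open the camera",
    "Love the new dark mode, looks great",
    "Please add an option to export my data",
    "The checkout button is impossible to find",
]
labels = ["bug", "praise", "feature_request", "usability"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(reviews, labels)

# Classify an unseen review into one of the predefined topics.
print(classifier.predict(["The settings screen freezes on my phone"]))
```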

One concern is how these works quantify the goal of mining user reviews. Most of them reduce design practitioners' needs to classifying user narratives based on empirically defined computational models (Hedegaard and Simonsen, 2013; Yang et al., 2019a) or quantified metrics such as user sentiment (Guzman and Maalej, 2014; Li et al., 2020c) and satisfaction levels (Jang and Park, 2022; Jang and Yi, 2017). Only a limited number of these works validated the effectiveness of this equivalence in meeting designers' needs.

Another concern is the generalizability of these formulations in identifying design problems across scenarios. Some recent works indicated that review analysis could be made more fine-grained with respect to user needs by integrating more advanced language models. For example, Wang et al. (Wang et al., 2022) increased the granularity of the extracted information and indicated specific problematic features for further improvement.

4.1.2. Data-Driven Persona

Data-driven persona refers to the adoption of algorithmic methods to develop personas from numerical data (Salminen et al., 2021). Machine learning pushes this further with its capacity for clustering and segmenting a variety of user data, such as feedback posts (Tan et al., 2022; Zhang et al., 2016; Jisun et al., 2017) and survey responses (Hou et al., 2020). It also makes persona development feasible from large-scale user data with time-varying behaviors: user profiles and interaction histories (Jansen et al., 2019; Salminen et al., 2021; An et al., 2016) have been introduced to make persona construction more comprehensive.
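
To illustrate the basic mechanism, a minimal clustering-based persona sketch might look as follows, assuming purely numerical survey features; the features, values, and cluster count are illustrative, not drawn from any cited study.

```python
# A minimal sketch of persona segmentation by clustering survey data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: users; columns (illustrative): sessions/week, avg. session minutes,
# number of distinct features used.
survey = np.array([
    [14, 3, 2], [12, 4, 3], [2, 25, 9], [1, 30, 8], [7, 10, 5], [6, 12, 4],
])
X = StandardScaler().fit_transform(survey)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for cluster in range(3):
    members = survey[kmeans.labels_ == cluster]
    # Each cluster's mean profile becomes the quantitative skeleton of a persona.
    print(f"Persona {cluster}: mean profile = {members.mean(axis=0)}")
```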

A common criticism of this data-driven approach is that its automation hinders design practitioners from building user empathy as deep as they could with a qualitative approach (Salminen et al., 2020). Efforts have been made in recent years to integrate mixed methods. For instance, quantitative results can serve as archetypes that inform a subsequent qualitative analysis (Tan et al., 2022; Zhang et al., 2016; Jansen et al., 2019), while other approaches verified qualitative insights via quantitative results (Jung et al., 2022). However, evaluations of these mixed-method approaches are far from standardized and overlook examining their effectiveness as user-empathizing processes.

4.1.3. AI-supported Brainstorming

Ideation is another divergent-thinking scenario into which many studies have tried to integrate AI. Research on AI for brainstorming includes support for individuals and for human-human collaboration.

For individual ideation support, early systems adopted machine learning for retrieving inspirational ideas and searching associative knowledge from a defined collection (Gilon et al., 2018; Feng et al., 2022; Andolina et al., 2015; Kita and Rekimoto, 2018), among which only a limited number of works considered learning from specific design contexts (Koch et al., 2019). Recently, the advancement of Large Language Models (LLMs) has enhanced the divergent-thinking capacity of these ideation systems, but it also confines them primarily to textual modalities (Memmert and Tavanapour, 2023; López, [n. d.]; Di Fede et al., 2022). Besides, these systems mostly followed a series of linear, structured stages to make AI integration more feasible, such as a sequence consisting of warming up, generating ideas, and discussing ideas with groups (López, [n. d.]; Memmert and Tavanapour, 2023; Tavanapour et al., 2020).
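
A schematic sketch of such a linear, staged pipeline is shown below; complete() is a hypothetical stand-in for any text-generation model call, and the prompts are invented for illustration.

```python
# A schematic sketch of a linear, staged ideation pipeline.
# complete() is a hypothetical stand-in for an LLM client call.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def staged_brainstorm(design_brief: str, n_ideas: int = 5) -> str:
    # Stage 1: warm-up -- reframe the brief to prime divergent thinking.
    reframed = complete(
        f"Restate this design brief from three user perspectives: {design_brief}")
    # Stage 2: idea generation -- diverge on the reframed brief.
    ideas = complete(f"Propose {n_ideas} distinct design ideas for: {reframed}")
    # Stage 3: discussion -- have the model critique its own ideas,
    # mimicking the group-discussion stage of a human session.
    return complete(f"For each idea, note one strength and one risk:\n{ideas}")
```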

For collaborative brainstorming, researchers investigated how machine learning could support various interactions for team communication, such as face-to-face ideation (Andolina et al., 2015) and table-top interfaces (Hunter and Maes, [n. d.]). Machine learning can also act as a facilitator for human group ideation (Bittner and Shoury, 2019; Tavanapour et al., 2020). The rise of generative AI opens up more engaging roles for machine learning (Shin et al., 2023), such as expert (Memmert and Tavanapour, 2023; Bittner and Shoury, 2019) and mediator (Löbbers et al., 2023), and leads to further research opportunities.

Another research focus is how social effects in human-human teaming transfer to human-AI collaborative ideation (Hwang and Won, 2021; Memmert and Tavanapour, 2023). This work sheds light on the negative impacts AI introduces in this process, such as distraction (Kita and Rekimoto, 2018), cognitive load (Zhang et al., 2022), and free-riding (Memmert and Tavanapour, 2023), which are not limited to UX ideation.

4.1.4. Additional Topics

In addition to the aforementioned topics, some other emerging works merit mention. Some studies addressed challenges in traditional qualitative research, such as communication fatigue and evaluation apprehension, by introducing AI-powered conversational agents (Xiao et al., 2020b; Bulygin, 2022). Researchers have explored their adoption for conducting user interviews, facilitating engaging communication with users, and eliciting information (Han et al., 2021; Xiao et al., 2020a). Such agents could also make conducting user interviews at scale more accessible. How this interview mode affects interviewers, interviewees, and the depth of understanding leaves opportunities for future studies.

4.1.5. Summary

Current ML integrations in UX research mostly provide automation support for laborious work and enhance traditional processes for working with large-scale, heterogeneous user data, especially in review mining and data-driven personas. Studies on ideation are more diverse, considering different collaborative settings and potential roles of AI beyond automation. From a human-centered perspective, an apparent question is how the integration of machine learning aligns with the needs of the UX discover phase, namely understanding design problems and building user empathy.

What we found in our analysis is an oversight of the empathy-building process and a limited interpretation of design practitioners' needs. For example, constructing personas is treated as producing deliverables that machines could automate, while it is primarily a process in which designers synthesize materials and build user understanding; quantitative metrics are adopted without validating their effectiveness for design practitioners or their generalizability across design contexts. Future studies would be enriched by delving deeper into specific design contexts and designers' cognitive processes, especially in enhancing the empathetic comprehension of users, as highlighted by (Zhu and Luo, 2023). This should complement the focus on the informational necessities that bolster designers' empathetic processes.

4.2. Define

Define is the convergent phase in the first diamond, where designers define the problem statement and pinpoint the product's desired impact based on previous research findings. The main themes we identified in the define phase are Qualitative Analysis (N=22) and AI for Design Idea Evaluation (N=2; we specifically searched for other papers involving design idea evaluation and AI, but found none beyond our snowball sampling results). Methodologies and artifacts involved in this phase include affinity diagramming and focus groups (Council, [n. d.]). The primary objective of this phase is to sort through the extensive research data, discerning the most promising directions that align with user requirements, business objectives, and technical viability (Rosala, 2022).

4.2.1. Qualitative Analysis

AI support for qualitative analysis has been an active research area and is prevalent in our sample (N=22). UX professionals and HCI researchers use this methodology to organize, label, and analyze data, to identify patterns and extract insights (Rosala, 2022; Olson and Kellogg, 2014). Generally, researchers discovered that simplistic automation of qualitative analysis can break established workflows, increase discussion overhead, and lead to unexpected reductions in efficiency and quality (Borlinghaus and Huber, 2021). In contrast, papers that closely examined different steps in qualitative analysis and intentionally preserved human agency, control, and goals often demonstrated better psychological and performative results (Marathe and Toyama, 2018; Rietz and Maedche, 2021; Wakatsuki and Yamamoto, 2021; Feuston and Brubaker, 2021; Gebreegziabher, 2023; Gao et al., 2023).

On the surface, qualitative analysis involves labeling data and extracting insights. Some studies aimed at speeding up the labeling process and using AI to produce labeled results (Li, 2021). However, research has shown that such full-automation approaches can easily break existing workflows and lead to increased discussion overhead and reduced efficiency and quality (Borlinghaus and Huber, 2021). In contrast, some papers broke qualitative analysis down into detailed steps to analyze their distinct potentials for automation. Marathe et al. (Marathe and Toyama, 2018) divided qualitative analysis into two phases: building a codebook by analyzing data, and applying the codes to the remaining data.

Codebook building

Building a codebook with a data subset is a key learning and reasoning process in qualitative analysis, where researchers build “emotional connection — the intimacy, pride, and ownership — with the data”   (Jiang et al . , 2021 ) and “think with their hands”   (Borlinghaus and Huber, 2021 ) . Researchers generally oppose the introduction of “low-level, suggestion-based automation” in this process, to avoid taking away the invaluable cognitive process of human researchers  (Marathe and Toyama, 2018 ; Jiang et al . , 2021 ) . Feuston et al.  (Feuston and Brubaker, 2021 ) emphasized that qualitative research is a process that utilizes researchers’ unique perspectives in data analysis, whilst AI might take away this opportunity and reinforce past coding patterns in new data.

Codebook application

Once a codebook is developed, applying it to the remaining data can be relatively more mechanical. Previous studies have shown that automation is more welcome in this phase (Marathe and Toyama, 2018). As a result, many systems were built to automate the tedious aspects of labeling while preserving the researcher's agency in learning (Marathe and Toyama, 2018; Rietz and Maedche, 2021; Gebreegziabher, 2023; Jiang et al., 2021; Feuston and Brubaker, 2021). But there is more to the labeling process than simply applying the codebook: since qualitative analysis is often collaborative, Drouhard et al. emphasized the value of disagreement between researchers in surfacing ambiguities in the data (Drouhard et al., 2017). Reflecting on and resolving these conflicts can help improve researchers' learning (Chen et al., 2018; Rietz and Maedche, 2021; Gebreegziabher, 2023).

Advantages of interactive ML in qualitative analysis

Interactive ML offers great potential to automate the tedious aspects of qualitative coding while leaving final decisions to users, preserving their agency. It has been employed in existing AI systems to support qualitative coding (Rietz and Maedche, 2020, 2021; Gebreegziabher, 2023). In the context of qualitative analysis, interactive ML engages users in a collaborative process, where they actively offer feedback on AI-generated outputs, thereby enhancing the precision and relevance of qualitative coding (Rietz and Maedche, 2020). Interactive ML also does not require large labeled datasets and learns as users annotate more data, which naturally fits the qualitative analysis process. Building on human-interpretable rules, patterns, and relatively simple AI models, such systems achieved a certain level of explainability and interpretability; Cody (Rietz and Maedche, 2021), for instance, also provided counterfactual explanations to help users further understand algorithmic predictions.
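
The following minimal sketch illustrates the general interactive-ML loop described above (not the specific architecture of Cody or PaTAT): the researcher seeds the model with a few coded excerpts, the model suggests codes for new data, and each confirmed or corrected label becomes incremental training signal. The codebook, excerpts, and researcher_feedback function are all illustrative stand-ins.

```python
# A minimal sketch of interactive ML for codebook application.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

codes = np.array(["trust", "frustration"])      # illustrative two-code codebook
vectorizer = HashingVectorizer(n_features=2**12)
model = SGDClassifier()                          # supports incremental updates

def researcher_feedback(excerpt, suggestion):
    # Hypothetical stand-in: in a real tool, the researcher confirms or
    # corrects the suggested code in the UI; here we simply accept it.
    return suggestion

# Seed round: the researcher codes a few excerpts by hand.
seed = ["I rely on this app daily", "this form makes me want to scream"]
model.partial_fit(vectorizer.transform(seed), codes, classes=codes)

# Interactive loop: suggest, confirm/correct, learn incrementally.
for excerpt in ["the flow is so confusing", "I feel safe entering my card"]:
    X = vectorizer.transform([excerpt])
    suggestion = model.predict(X)[0]
    label = researcher_feedback(excerpt, suggestion)  # human keeps final say
    model.partial_fit(X, [label])
```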

User control in qualitative analysis

User control has been a common theme in discussions of AI support in qualitative analysis (Rietz and Maedche, 2020; Jiang et al., 2021; Feuston and Brubaker, 2021; Rietz and Maedche, 2021; Gebreegziabher, 2023; Gao et al., 2023). Earlier papers discussed how the lack of control might prevent AI from providing valuable support (Jiang et al., 2021). However, Feuston and Brubaker discovered that the picture is more nuanced: AI support can benefit certain steps in qualitative analysis, or even shift some analytic practices, as long as it assists rather than automates existing analytic work practices (Feuston and Brubaker, 2021). The careful design of systems including Cody (Rietz and Maedche, 2021) and PaTAT (Gebreegziabher, 2023) also confirmed the value of AI support that maintains user control and agency. The "delegability" of human tasks to AI (Lubars and Tan, 2019) in qualitative coding depends on human motivation, task difficulty, associated risk, and human trust (Jiang et al., 2021).

4.2.2. Design Idea Evaluation

Two papers in our sample investigated the use of AI in evaluating design ideas. Siemon conducted a comparative study with a simulated AI system to investigate AI’s utility in helping reduce apprehension in design idea evaluation  (Siemon, 2023 ) . In addition, Mesbah et al. combined AI with crowdsourcing to effectively measure the desirability, feasibility, viability, and overall feeling of design ideas  (Mesbah et al . , 2023 ) . Given that current methodologies around design idea evaluation are subjective and task-dependent  (Council, [n. d.] ; Gray, 2016 ) , AI models that are trained against general metrics such as in  (Mesbah et al . , 2023 ) are likely not sufficient for real-world scenarios. It remains largely unclear how AI support might fit into existing manual evaluation processes. We believe a deeper empirical understanding of UX evaluation processes and practices is required to bridge this current gap.

4.2.3. Summary

In all, in the define phase, previous research that emphasized researchers’ agency in understanding, learning, and interpreting data with their unique perspectives generally showed better results than simplistic automation and acceleration  (Marathe and Toyama, 2018 ; Rietz and Maedche, 2021 ; Wakatsuki and Yamamoto, 2021 ; Feuston and Brubaker, 2021 ; Gebreegziabher, 2023 ; Gao et al . , 2023 ) . The use of interactive ML techniques in qualitative analysis support has demonstrated potential in balancing researchers’ agency in learning and interpreting the data with algorithmic support  (Rietz and Maedche, 2021 ; Gebreegziabher, 2023 ) . For evaluating design ideas with AI, the subjective and task-dependent nature of current evaluation practices  (Council, [n. d.] ; Gray, 2016 ) requires closer coupling between designers’ workflows, goals, and AI support to provide meaningful, holistic support.

4.3. Develop

Develop refers to the divergent phase where designers come up with solutions for the defined problem domain, informed by insights from the previous two phases  (Council, [n. d.] ) . Our analysis identified the following themes for papers in this phase: UI Generation (N=51), Interface Design Inspiration (N=25), UI Optimization (N=21).

4.3.1. UI Generation

Large-scale UI datasets like RICO enabled AI research in automatic UI generation (more discussion of datasets appears in Section 4.5). We divide past UI generation research roughly into three categories: full-screen UIs, UI components, and fidelity conversion.

Full-screen UIs

Many previous AI models focused on generating entire UI screens. As a fundamental step toward automatically structuring UI elements, layout generation became the predominant focus of much previous work. Earlier on, Li et al. proposed applying Generative Adversarial Networks (GANs) to synthesize and model the geometric relations of graphical elements for accurate layout alignment (Li et al., 2021c). Transformer-based architectures (Gupta et al., 2021; Jiang et al., 2023; Sobolevsky et al., 2023) later provided solutions that handle the hierarchical and sequential relationships of graphical elements, adding particular value for the UI generation task. Along the same line, Inoue et al. (Inoue et al., 2023) and Zhang et al. (Zhang et al., 2023) leveraged diffusion models for conditional layout generation. While these efforts mark considerable progress, the generation of high-fidelity UI screens remains at an early stage, with notable attempts such as GUIGAN by Zhao et al. (Zhao et al., 2021) approaching high-fidelity generation by integrating GUI component subtree sequences in the generation process. Overall, we found only a few existing AI models that offer high-fidelity UI generation ready for use in practice. The trajectory of UI layout and high-fidelity UI generation research reveals a critical need for solutions that are directly applicable in design workflows. Despite the trend toward more sophisticated AI capabilities, unresolved challenges and gaps remain in seamlessly blending model-generated results with user-centered design practices.
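
As an illustration of the common serialization step behind transformer-based layout generation, the sketch below flattens a layout into a discrete token sequence of element types and quantized geometry; the vocabulary and grid size are illustrative, not those of any cited model.

```python
# A sketch of serializing a UI layout into a flat token sequence, in the
# general spirit of transformer-based layout generation work.

ELEMENT_TYPES = ["toolbar", "image", "text", "button"]
GRID = 32  # coordinates quantized to a 32x32 grid

def serialize(layout):
    """layout: list of (element_type, x, y, w, h) with coords in [0, 1]."""
    tokens = ["<bos>"]
    for etype, x, y, w, h in layout:
        tokens.append(etype)
        tokens += [str(min(GRID - 1, int(v * GRID))) for v in (x, y, w, h)]
    tokens.append("<eos>")
    return tokens

# A transformer is then trained to predict this sequence token by token,
# so sampling from the trained model yields new layouts.
print(serialize([("toolbar", 0.0, 0.0, 1.0, 0.08),
                 ("button", 0.3, 0.85, 0.4, 0.07)]))
```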

UI Components

A few papers were dedicated to the generation of UI components, such as icons  (Zhao et al . , 2020a ) and buttons. For example, ButtonTips   (Liu et al . , 2019 ) dived deeply into automatic web button design with user input constraints, including button layout generation with text labels, color selection, spatial relationships, and presence prediction. These research efforts can help generate need-based design resources for novice designers. Additionally, designers in the industry nowadays commonly work with company-specific design systems to ensure branding and visual consistency  (Frost, 2016 ) . Generation within the constraints of design systems might increase the adoption of AI tools in design practitioners’ workflow.

Fidelity Conversion

Beyond AI models that adopt an end-to-end approach to UI generation, past research also investigated AI models' capabilities in converting UI prototypes between fidelities (Buschek et al., 2020). For example, Paper2Wire turns UI sketches into editable, mid-fidelity UI wireframes (Buschek et al., 2020), which can be helpful in early prototyping stages. MetaMorph, as another instance, assists in transforming constituent components from low-fi sketches to higher fidelities (Sermuga Pandian et al., 2021c). Rather than directly delivering a final result, such AI models take an approach that facilitates designers' existing workflows and thus have higher potential for adoption.

4.3.2. Interface Design Inspiration

Designers usually refer to external resources for inspiration. Prevalent applications of example search fall into two categories: (1) design galleries, such as Gallery D.C. (Feng et al., 2022), where designers browse a wide range of examples as a serendipitous inspirational process; and (2) algorithmic recommendation tools (Swearngin et al., 2018) based on similarity to the user's design input, where designers look for suggestions on more concrete ideas (Mozaffari et al., 2022). Previous studies identified two challenges of existing exploratory strategies: design fixation (e.g., excessive focus on a present concern) (Marsh et al., 1996; Youmans and Arciszewski, 2014) and focus drift (e.g., deviation from the original goal). Intelligent tools such as GANSpiration (Mozaffari et al., 2022) generate diverse yet relevant design examples, seeking a balance between targeted and serendipitous inspiration. Scout, as another example, focused on overcoming design fixation by providing more spatially diverse design examples and "breaking out of the linear design process" (Swearngin et al., 2020). Meanwhile, AI might shed light on scaling up earlier anti-fixation solutions, such as parallel prototyping, by supporting the exploration of relevant alternatives during iteration (Dow et al., 2011).

Example exploration usually takes place in the early stages of design and continues to be a crucial component throughout the iterative process, expanding the potential solution space. Existing AI-infused tools for inspiration search have expanded the diversity of search inputs, enabling natural language descriptions (Wang et al., 2021), screenshots (Swearngin et al., 2018), hand-drawn sketches and doodles (Mohian and Csallner, 2022), low-fidelity design artifacts such as wireframes (Chen et al., 2020a), and hybrid inputs (e.g., text and doodle (Mohian and Csallner, 2023)), supporting more flexible search processes (Lu et al., 2022). In later stages of design, external references also allow for the reinterpretation of ideas and serve as validation tools (Herring et al., 2009). Given the iterative nature of design tasks, more research is needed on dynamically supporting and inspiring UI design as the artifact evolves in complexity and fidelity.

4.3.3. UI Optimization

UI optimization encompasses two main aspects: at the interface level, it involves enhancing layout positioning and aesthetic style (Rahman et al., 2021); at the user experience level, it focuses on improving the perceived affordances of components (Swearngin and Li, 2019; Pang et al., 2016). The process mainly aims at optimizing visual appeal, functional clarity, and the overall interaction with the user interface. First, applying appropriate visual aesthetics plays an important role in generating and optimizing high-fidelity UIs. The underlying difficulties in automatically suggesting and applying design styles include data-driven aesthetic assessment (Kong et al., 2023; Kumar et al., 2023) and transforming high-level design principles into explicit constraints. Accordingly, researchers proposed solutions that 1) translate natural language requirements into predictions of design properties (Kim et al., 2022) and 2) extract applicable design constraints from design principles (Kong et al., 2023). A few papers are dedicated to specific aspects of aesthetics, such as color (Feng et al., 2021b; Hegemann et al., 2023; O'Donovan et al., 2011) and font design (Zhao et al., 2018; O'Donovan et al., 2014). Meanwhile, due to the subjectivity of aesthetic styling, existing systems tend to keep designers actively engaged in the production process, including deciding which recommended suggestions to adopt, iterating on their choices, and making further revisions afterwards (Kong et al., 2023; Kim et al., 2022; Hegemann et al., 2023).

For optimization at the user experience level, past work drew insights from the correlation between components' spatial relationships and user task performance (i.e., speed and accuracy), leveraging classic principles such as Fitts's Law together with neural network learning (Duan et al., 2020) to reach ideal layouts. Different from the previous categories, optimization contributes to finishing the design cycle. Given the standardized and consistent requirements across UI design practices, optimization tasks can further explore topics including visual alignment and consistency checking, usability issue mitigation, and improving adherence to design guidelines.
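
As a worked example of the classic principle mentioned above, Fitts's Law predicts pointing time as T = a + b * log2(D/W + 1), which lets a layout optimizer score candidate component placements by predicted acquisition speed; the a and b constants below are illustrative, not empirically calibrated.

```python
# A worked example of Fitts's Law for scoring candidate layouts.
import math

def fitts_time(distance_px: float, width_px: float,
               a: float = 0.2, b: float = 0.1) -> float:
    """Predicted movement time (seconds) to acquire a target."""
    index_of_difficulty = math.log2(distance_px / width_px + 1)
    return a + b * index_of_difficulty

# Moving a key button closer and making it larger lowers predicted time:
print(fitts_time(distance_px=800, width_px=40))   # far, small target
print(fitts_time(distance_px=300, width_px=120))  # near, large target
```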

4.3.4. Summary

Machine learning, by enhancing design processes with its search and generative capabilities, offers innovative pathways for design inspiration (Feng et al., 2022). AI-enabled search and generation might enable more rapid and parallel prototyping, previously limited by human capacity, thereby increasing the potential to elevate design outcomes. While the quest for end-to-end solutions for complete UI design remains prevalent, there is a shift towards automating select intermediary steps in the design workflow, promising more effective support for design objectives (Lu et al., 2022). Additionally, for design aspects steeped in subjectivity, like aesthetic choices, machine-learning-assisted tools are emerging to bolster designers' creative freedom through detailed interactions, ensuring technology complements rather than overrides human expertise.

4.4. Deliver

Deliver is the convergent phase in the second diamond, where, through different evaluation methods, designers elicit feedback from users on their design prototypes, iteratively improve them, and arrive at a final solution (Council, [n. d.]). We identified several major themes in this phase: Visual Saliency Prediction (N=24), Aesthetic Analysis (N=12), and Visual Error Detection (N=9).

4.4.1. Visual Saliency Prediction

Visual saliency is a proxy for the perceived importance of screen components, indicating a UI's visual hierarchy. Such information can help UX practitioners better grasp users' attention distribution and thus improve information architecture design (Novák et al., 2023). Many model architectures have been developed for predicting visual saliency (Xu et al., 2016; Georges et al., 2016; Li et al., 2016; Bylinskii et al., 2017; Shen et al., 2015). Visual attention prediction for different user groups (Leiva et al., 2022b; Chen et al., 2023) and UI categories (Fosco et al., 2020) allows more granularity and versatility for UX practitioners. Techniques to collect user gaze data with easy-to-access gadgets instead of expensive eye-tracking devices, such as webcams (Xu et al., 2015) and mobile phones (Li et al., 2017b), have also been investigated, as have crowd-sourced data collection methods using eye tracking (Xu et al., 2015) and self-reported gaze locations (Cheng et al., 2023a).

4.4.2. Aesthetic Analysis

Automatic visual aesthetic analysis of UI screens can help UX professionals grasp perceptions of their design. While judging the visual appearance of UIs can be subjective, automatic evaluations afford quick predictions as initial feedback to designers. Past work has focused on AI applications in evaluating UIs' perceived aesthetics (Lima and Gresse von Wangenheim, 2022; Miniukovich and De Angeli, 2015; de Souza Lima et al., 2022; Xing et al., 2021; Dou et al., 2019) and visual complexity (Akça and Tanriöver, 2021), a key aspect of design aesthetics. In addition, aesthetic predictions for different user groups (Leiva et al., 2022b) and in real usage contexts (Samele and Burny, 2023) address more nuanced prediction needs. The majority of existing visual analyses of UIs relied on objective metrics and feature extraction (Akça and Tanriöver, 2021) or AI models trained on user ratings (Dou et al., 2019; Leiva et al., 2022a). Both empirical analyses and experimental results have demonstrated the improved flexibility and quality of AI models' evaluations (Akça and Tanriöver, 2021; Dou et al., 2019).

A study conducted by Rosenholtz et al. revealed that in practice, perceived visual quality is not the only factor contributing to the evaluation of a design (Rosenholtz et al., 2011). Designers often have to make trade-offs between visual quality and design goals, which, the authors concluded, "would likely interfere with acceptance of a perceptual tool by professional designers". In addition, they observed that the overall "goodness" values were not useful beyond A/B comparisons between design options. A deeper empirical understanding of how UX practitioners utilize UI evaluation tools in real-world contexts would greatly benefit practical research in this direction.

4.4.3. Visual Error Detection

Automated visual error detection for UI screens is another key theme. These systems can emulate human interactions with UI screens and save time and human effort after app development (Peng et al., 2022). While these systems are often used after development to check implementation quality, they are also capable of identifying design issues that propagate into code. Unlike system-specific tests, such as those developed especially for Android (Collins et al., 2021; Llàcer Giner, 2020), image-based testing techniques can take UI screenshots from different systems, increasing cross-platform versatility (Eskonen et al., 2020; Eskonen, 2019). These automated testing techniques help detect display issues (Su et al., 2021), generate testing reports, and detect discrepancies between a UI's design and its implementation (Chen et al., 2017). Specific techniques, such as interaction and tappability prediction (Swearngin and Li, 2019; Schoop et al., 2022), can also serve more granular error detection goals. Design guideline violation checkers (Zhao et al., 2020b; Yang et al., 2021a, b) also have great practical potential in UX workflows. Overall, AI has great potential for flexible, cross-platform visual error detection.
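
To illustrate the basic idea of image-based discrepancy checking (the cited systems use learned models rather than raw pixel comparison), a simplified sketch might diff an implemented screenshot against its design mockup; the file paths and tolerance value are illustrative.

```python
# A simplified sketch of image-based design-vs-implementation checking.
from PIL import Image, ImageChops

def design_drift(mockup_path: str, screenshot_path: str, tolerance: int = 16):
    mockup = Image.open(mockup_path).convert("RGB")
    shot = Image.open(screenshot_path).convert("RGB").resize(mockup.size)
    diff = ImageChops.difference(mockup, shot)
    # Keep only pixels differing beyond the tolerance, then return the
    # bounding box of the discrepant region (None means no notable drift).
    mask = diff.convert("L").point(lambda p: 255 if p > tolerance else 0)
    return mask.getbbox()

# Example usage (hypothetical files):
# box = design_drift("design_mockup.png", "implemented_build.png")
```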

4.4.4. Additional Topics

Systems for sentiment prediction, usability testing, and automatic feedback generation are also included in our sample. Sentiment prediction centers on the user's perception of the product (Desolda et al., 2021; Petersen et al., 2020). Related work includes user satisfaction prediction (Koonsanit and Nishiuchi, 2021; Koonsanit et al., 2022) and brand personality prediction (Wu et al., 2019). These models help guide designers in analyzing the design target against the predicted user perception.

Usability testing is another process that has gathered researchers' attention. To suit more nuanced device-specific usability testing needs, researchers presented usability testing for mobile UIs (Schoop et al., 2022), e-learning systems (Oztekin et al., 2013), and thermostats (Ponce et al., 2018). Researchers use live emotion logs (Filho et al., 2015), think-aloud sessions (Fan et al., 2020, 2022), and online reviews (Hedegaard and Simonsen, 2014) to extract usability-related data and assess interfaces. In addition, automatic feedback generation empowers designers to improve the current design with the help of an ML system (Krause et al., 2017; Ruiz and Snoeck, 2022).

Other less-explored themes include dark pattern detection  (Hasan Mansur et al . , 2023 ) and A/B testing  (Kaukanen, 2020 ; Kharitonov et al . , 2017 ) . As accessibility design becomes more essential in UX design, researchers developed tools around automated accessibility testing   (Vontell, 2019 ) .

Previous research in the deliver phase has explored various ways to provide UI evaluation feedback to designers. We observed that in our sample, these explorations are often based on the visual analysis of UIs. However, with the growing prevalence of design systems in practice (Frost, 2016; Churchill, 2019), UX designers are shifting their focus from pixel-level aesthetics to interaction flows and the holistic user experience across UI screens. The evaluation of interaction flows and user experiences goes beyond saliency prediction (Section 4.4.1) and visual aesthetics (Section 4.4.2), yet is still overlooked in research. Moreover, current visual analysis metrics often do not align with distinctive UI design aesthetics such as flat design and skeuomorphism, restricting their practical adoption. We believe more consideration of these unique aspects of UX design is important in creating translational research value (Colusso et al., 2017, 2019; Norman, 2010).

4.5. Datasets

Table 1. Open-sourced UI datasets identified in our sample (years taken from the corresponding publications).

| Category | Dataset | Year | Description | Size |
| --- | --- | --- | --- | --- |
| Mobile Interfaces | RICO (Deka et al., 2017a) | 2017 | a large repository of Android app designs | 72k screens from 9.7k apps, 3M components |
| | ReDraw (Moran et al., 2020) | 2020 | UI screens with GUI metadata | 14k screens, 191k components |
| | Enrico (Leiva et al., 2020) | 2020 | human-annotated topic modeling of a RICO subset | 1.5k screens, 20 topics |
| | VINS (Bunian et al., 2021) | 2021 | wireframes and annotations for sketches and high-fidelity UIs for Android and iOS | 11 components, 257 wireframes, 4.5k high-fi screens |
| | Screen2Words (Wang et al., 2021) | 2021 | screen summarizations based on RICO | 112k summaries for 22k screens |
| | Clay (Li et al., 2022) | 2022 | human-made annotations to denoise RICO | 60k screen layouts |
| | Android in the Wild (Rawles et al., 2023) | 2023 | human demonstrations of mobile device interactions | 715k interactions, 30k instructions |
| | Swire (Huang et al., 2019) | 2019 | crowd-sourced, hand-drawn sketches based on RICO | 3.8k screens |
| | UISketch (Sermuga Pandian et al., 2021b) | 2021 | crowd-sourced, hand-drawn low-fi UI element sketches | 18k sketches of 21 UI elements |
| | Synz (Sermuga Pandian et al., 2021a) | 2021 | synthetic smartphone low-fi screen sketches, generated based on RICO and UISketch | 175k screens |
| | Lofi Sketch (Sermuga Pandian et al., 2022) | 2022 | crowd-sourced, hand-drawn smartphone low-fi screen sketches, generated by random allocation | 4.5k screen sketches, annotated with 21 UI element categories |
| Web Interfaces | Webzeitgeist (Kumar et al., 2013) | 2013 | a large repository of web interfaces | 100k screens, 100M components |
| | WebUI (Wu et al., 2023b) | 2023 | low-cost, large-scale repository of web interfaces | 400k screens |
| | Webshop (Yao et al., 2023) | 2023 | human demonstrations of e-commerce website interactions | 12k instructions, 1.6k demonstrations |

Open-sourced datasets on user interfaces of different devices and modalities have significantly contributed to AI support for UI/UX design. In Table 1, we summarize the open-sourced datasets collected in our sample. We discovered that current datasets often overlook the user experiences underlying the interfaces, limiting their applications in UX design. As a result, technical work based on these datasets often leans toward the latter two phases of develop and deliver in the Double Diamond (Fig. 2). In addition, we still lack universal benchmarks to evaluate design quality, as reflected in UIs and the underlying user experiences, due to the often diverse goals of design across digital products. Overall, existing datasets have enabled more technical solutions for user task automation (Li et al., 2017a; Rawles et al., 2023) and design task automation (Moran et al., 2020; Arroyo et al., 2021; Huang et al., 2021), rather than designer-centric augmentation tools.

4.5.1. Mobile UI Datasets

Most of the publicly available datasets for UX-related tasks focus on mobile user interfaces. RICO  (Deka et al . , 2017a ) is arguably the most utilized UI dataset, containing 72k mobile screens and 3M UI components. ReDraw   (Moran et al . , 2020 ) is a similar mobile UI dataset with 14k screens and 191k annotated components. Later on, many papers aimed at augmenting RICO in different directions, including topic modeling   (Leiva et al . , 2020 ) , semantic summarization   (Wang et al . , 2021 ) , element mismatch denoising   (Li et al . , 2022 ) , and adding new screens and wireframes   (Bunian et al . , 2021 ) . In addition, Android in the Wild ( AitW ) is a mobile dataset containing 715k interaction episodes, spanning 30k unique instructions on different Android devices  (Rawles et al . , 2023 ) .
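
For readers unfamiliar with these datasets, the sketch below shows how a RICO-style view hierarchy can be traversed to count component types. The field names ("children", "componentLabel", "bounds") follow our recollection of the dataset's semantic-annotation JSON and should be treated as assumptions to verify against the actual files; the inline hierarchy stands in for one screen's annotation file.

```python
# A sketch of walking a RICO-style view hierarchy (field names assumed).
from collections import Counter

def iter_components(node):
    """Depth-first traversal over a UI view-hierarchy tree."""
    yield node
    for child in node.get("children") or []:
        yield from iter_components(child)

# Inline stand-in for one screen's annotation, e.g. json.load(open(path)).
screen = {
    "componentLabel": "Root", "bounds": [0, 0, 1440, 2560],
    "children": [
        {"componentLabel": "Toolbar", "bounds": [0, 0, 1440, 200], "children": []},
        {"componentLabel": "Text Button", "bounds": [500, 2200, 940, 2380],
         "children": []},
    ],
}
print(Counter(n["componentLabel"] for n in iter_components(screen)))
```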

Four sketch datasets have also been released for mobile UI screens and components. Swire was created by recruiting designers to hand-draw sketches of 3.8k interfaces taken from RICO  (Huang et al . , 2019 ) . Similarly, UISketch crowdsourced 18k hand-drawn low-fidelity UI elements  (Sermuga Pandian et al . , 2021b ) and Lofi Sketch crowdsourced 4.5k screen sketches  (Sermuga Pandian et al . , 2022 ) . The Synz dataset took a purely synthetic approach and generated UI screen sketches with UI elements in UISketch and UI layouts from screens in RICO  (Sermuga Pandian et al . , 2021a ) .

4.5.2. Web UI Datasets

Given the significant resources and restrictions involved in collecting mobile UI data, researchers also collected website UI datasets: Webzeitgeist (Kumar et al., 2013) with 100k pages and 100M elements, and WebUI (Wu et al., 2023b) with 400k pages. The advantage of collecting web UIs is the ability to scale up with responsive layouts in different viewport sizes. Many websites do not require logins to view content, avoiding the potential login wall in mobile app UI collection (Wu et al., 2023b). Recent studies also demonstrated the potential to augment AI models' understanding of mobile UIs with web UI data (Wu et al., 2023b). WebShop, a UI navigation and automation dataset for e-commerce websites with 12k crowd-sourced text instructions and over 1.6k human demonstrations, has also been created (Yao et al., 2023).

4.5.3. Discussion

Missing connections across UI screens

Most datasets in our sample consist of individually separated UI screenshots, their hierarchy information, and metadata. They miss the connections across multiple UI screens, which encapsulate the underlying user tasks, experiences, and goals. (A notable exception is the RICO dataset (Deka et al., 2017a), which includes interaction traces and animations between screens, but these remain much underexplored compared to other parts of the dataset such as UI screenshots, hierarchies, and layouts.) The absence of these inter-screen connections highlights a fundamental distinction between UI and UX design, limiting existing research's practical applications in the UX industry (Norman and Nielsen, 1998). As a result, research based on these datasets tends to be biased towards viewing UIs predominantly as static, multimodal entities comprising textual and visual information (Moran et al., 2020; Li et al., 2021b; Wang et al., 2021; Huang et al., 2019; Wang et al., 2023). This has also led to a research focus skewed towards the develop and deliver stages of the Double Diamond model (Fig. 2), where static UIs appear more often than in the first two exploratory phases (Fig. 4). We believe that a deeper understanding of UX practices and mindsets is essential to align datasets and AI models with the complexities of real-world UX design.

The recent release of datasets for UI task automation, such as Android in the Wild (Rawles et al., 2023), provides valuable data on user flows across multiple UI screens. While their primary focus is on supporting task automation for end users, they also have the potential to benefit UX practitioners. For example, assessments of these user flows and their design contexts can help UX designers find relevant and high-quality inspirations in the early stages of design. In addition, commercially available, designer-centric datasets such as Mobbin (https://mobbin.com) can inform academic creation of open-source datasets that more directly afford applications in realistic UX domains.

Lack of meaningful evaluation metrics

We still lack objective metrics that effectively reflect the quality of UIs and the underlying user experiences. Most AI generation models trained on existing datasets are evaluated against metrics including overlap, alignment, and intersection-over-union (IoU) that do not necessarily align with the perceived quality of UIs (Jing et al., 2023; Li et al., 2021c; Kikuchi et al., 2021). Other common metrics include visual complexity (Alemerien and Magel, 2014; Riegler and Holzmann, 2015; Reinecke et al., 2013; Ines et al., 2017), visual saliency (Zhao et al., 2021; Leiva et al., 2022a; Bylinskii et al., 2017; Kumar et al., 2023; Li et al., 2016; Shen et al., 2015; Kruthiventi et al., 2017; Judd et al., 2009), and visual similarity (Li et al., 2021b; Huang et al., 2019; Karimi et al., 2020).
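
For concreteness, IoU, one of the standard layout metrics named above, can be computed for two boxes in (x1, y1, x2, y2) form as follows:

```python
# Intersection-over-union (IoU) for two axis-aligned boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle, clamped to zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 100, 100), (50, 50, 150, 150)))  # ~0.143
```

As the section argues, a high IoU against a reference layout says nothing about whether the generated UI is actually pleasant or usable, which is precisely the gap in current evaluation practice.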

This reflects the contrast between AI's data-driven nature and UX's user-centered philosophy, which presents opportunities for both disciplines and remains to be further explored (Chromik et al., 2020). Most UX tasks can hardly be holistically evaluated using only objective metrics. In practice, they are embedded in individual projects' contexts of user needs, business objectives, and technical feasibility. Evaluating UI screens and their user experiences differs significantly from traditional image-based assessment in domains like computer vision, requiring the development of novel, UX-focused objective metrics tailored for AI's application in this field.

4.6. General AI Models

In our sample, we also identified 37 AI-focused papers that do not specifically fit into any of the four Double Diamond phases, but still work with UX- and UI-related tasks. We analyze these papers here to further understand the current technical landscape. Generally, we identified three themes: (1) UI annotation & component detection; (2) UI semantic understanding; (3) UI interaction automation. These foundational AI explorations from the 37 papers contain significant potential to assist UX designers through downstream tasks, providing practical applications that can enhance their design processes and outcomes.

4.6.1. UI Annotation & Component Detection

Detecting and annotating visual elements on UIs can provide value in downstream tasks like UI semantic understanding and interaction automation, as well as in many other use cases, including accessibility and UI testing (Chen et al., 2022). In our sample, researchers utilized many AI model architectures for this task, but ResNet (Chen et al., 2022; Li et al., 2020a; Chen et al., 2020b) and Faster-RCNN (Zhang et al., 2021; Manandhar et al., 2021) remain dominant given their impressive capabilities in general object detection. For component detection specifically on UIs, precision in component locations and sizes is paramount, which slightly differs from general object detection. To address this challenge, some papers also included UI view hierarchies in addition to screenshot images for more accurate location information (Zang et al., 2021; Li et al., 2020a). Annotating UI components on screens can augment UI datasets with detailed meta-level information, supporting modular design paradigms such as Atomic Design (Frost, 2016).
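
A minimal sketch of this detection setup, using torchvision's off-the-shelf Faster R-CNN, is shown below; a practical UI detector would be fine-tuned on annotated screenshots, and the class count here is illustrative rather than taken from any cited paper.

```python
# A sketch of adapting torchvision's Faster R-CNN for UI component detection.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_UI_CLASSES = 22  # e.g., 21 component types + background (illustrative)

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
# Swap the COCO classification head for a UI-component head; the new head
# is untrained, so fine-tuning on labeled screenshots is required.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_UI_CLASSES)

model.eval()
screenshot = torch.rand(3, 1280, 720)    # stand-in for a real screenshot tensor
with torch.no_grad():
    detections = model([screenshot])[0]  # dict with boxes, labels, scores
print(detections["boxes"].shape)
```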

4.6.2. UI Semantic Understanding

Many AI models were developed to tackle the fundamental task of UI understanding. They mostly focused on the semantic meaning, i.e., the functionalities and purposes, of UI components and screens. Many AI models take in screenshots and/or corresponding view hierarchies (Li et al., 2021b; Ang and Lim, 2022; Li et al., 2021a; Bai et al., 2021; Wu et al., 2023a) and output an embedding of the interface screen or component. ActionBert used user actions with the UI to learn a UI embedding (He et al., 2021). Fu et al. made the analogy between words–sentences in NLP and pixels–screens for UI understanding. Recently, with the trend toward larger models, Spotlight, a relatively large vision-only UI model based on a pre-trained large ViT and T5, was trained on 2.5M mobile screens and 80M web pages and achieved state-of-the-art results on several representative UI tasks (Li and Li, 2023). These general-purpose models lay the groundwork for more sophisticated downstream tasks that can support various UX workflows.

Another approach to achieving holistic UI understanding is UI screen summarization. These summarizations present concise textual information regarding a UI screen's appearance and functionality, which can be useful for many language-based application scenarios. Researchers have attempted to use multimodal AI models (Wang et al., 2021) and vision-based approaches (Leiva et al., 2022a) to generate such summaries. Such summaries can be helpful for text-based retrieval of similar screens, screen reader enhancement, and screen indexing for conversational applications (Wang et al., 2021).

4.6.3. UI Interaction Automation

Many researchers also investigated the potential of AI to automatically interact with UIs, which reduces the user effort required to create task automations compared with, e.g., programming-by-demonstration and interactive task learning methods (Li et al., 2019; Li et al., 2017a, 2020b). Over the years, researchers have moved from single-turn, UI-element-based simple interactions (Degott et al., 2019; Todi et al., 2021; Wang et al., 2023) to multi-turn (Iki and Aizawa, 2022; Yao et al., 2023; Furuta et al., 2023; Wen et al., 2023), more complex and precise actions, such as horizontal scrolls (Rawles et al., 2023). Language models (Todi et al., 2021; Iki and Aizawa, 2022; Wang et al., 2023; Furuta et al., 2023; Rawles et al., 2023; Wen et al., 2023) and reinforcement learning (Degott et al., 2019; Yao et al., 2023) are the most utilized approaches for predicting action sequences in UIs. The understanding and prediction of user actions on UI screens can support diverse downstream designer-centric tasks, such as facilitating the prototyping of user flows, simplifying existing user experiences, and understanding user goals and intents.
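
A schematic sketch of the language-model approach is shown below: the screen is serialized to text and the model is asked to choose the next action. llm_complete() is a hypothetical stand-in for any LLM call, and the element tuple and action formats are invented for illustration, not taken from any cited system.

```python
# A schematic sketch of language-model-driven UI interaction.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def next_action(ui_elements, goal):
    """ui_elements: list of (element_id, role, text) tuples from the screen."""
    screen = "\n".join(f"[{i}] {role}: {text}" for i, role, text in ui_elements)
    prompt = (f"Goal: {goal}\nScreen:\n{screen}\n"
              "Reply with one action, e.g. TAP 3 or TYPE 5 'hello'.")
    return llm_complete(prompt)

# Example usage (hypothetical):
# next_action([(0, "button", "Log in"), (1, "textfield", "Email")],
#             goal="sign in with email")
```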

4.6.4. Summary

In all, papers in this section focus on exploring and extending AI models' general abilities on tasks related to UI and UX. The advancement of these AI models' capabilities can benefit from ongoing innovations in the AI community. Multimodal model pipelines remain the mainstream for UI/UX datasets and have continuously demonstrated impressive performance (Zhang et al., 2021; Li et al., 2021b; Wang et al., 2021; Ang and Lim, 2022). Unlike in traditional computer vision tasks, considering pixel information alone is far from sufficient: other modalities, such as the structural information in vector graphics, are common in human design practices and thus deserve more attention in these model architectures. Recently, the significant increase in AI model sizes, reflected in pipelines based on BERT (Bai et al., 2021) and large vision-language models (Li and Li, 2023), points to potential future directions. Meanwhile, deeper engagement with the UX communities and human-centered approaches can help uncover more direct translational opportunities to support UX practitioners with AI, as discussed in Section 4.5.3. We believe these two complementary approaches are both indispensable in pushing forward the boundary of AI-driven UX design support tools.

5. Discussion

In the previous section, we mapped the existing literature on AI's role in UX support, applying the Double Diamond framework to structure our exploration. Here, from a meta-level perspective, we draw inspiration from existing Human-Centered AI research (Horvitz, 1999; Amershi et al., 2019; Lubars and Tan, 2019; Shneiderman, 2022) and distill the patterns observed across all phases, aiming to summarize and discuss generalizable insights. These insights pinpoint details of the gap between technical AI research and the human-centered UX mindset, emphasizing the need for the collaborative adaptation and evolution of both domains to better complement each other.

5.1. AI Assistance for UX: A Promising Field for Interdisciplinary, Translational Research

Our systematic literature review has demonstrated that the area of AI assistance for UX has witnessed significant growth. Research across HCI and AI has pushed the boundaries of AI datasets and models for UI/UX, deepened the understanding of UX practices, and applied technical innovations to various design activities.

Various AI techniques have been effectively utilized to process, understand, and generate user interface data, an inherently rich, multimodal data format (Deka et al., 2017a; Rawles et al., 2023). Techniques and methodologies from subfields of AI, such as natural language processing, computer vision, graph learning, and reinforcement learning, have all been utilized, often in combination, to experiment with UI datasets (Liu et al., 2018; Zhang et al., 2021; Wang et al., 2021; Li et al., 2021b; Schoop et al., 2022; Wang et al., 2020; Eskonen et al., 2020; Brückner et al., 2022; Hotti et al., 2022). Research in the field has consistently reflected AI breakthroughs, with the latest adoptions being Large Language Models (Wang et al., 2023) and Large Vision-Language Models (Li and Li, 2023). In this sense, UX research and design have provided AI researchers with unique challenges to tackle, effectively benefiting the AI community.

UX research and design are also fertile ground for translational research (Colusso et al., 2017; Norman, 2010; Colusso, 2020) to impact the UX industry, with generalizable value for adjacent domains. For example, the non-linear nature of UX processes makes findings from this area transferable to AI-supported creativity research. Future research will continue to provide immense opportunities for AI and UX to evolve collaboratively.

5.2. Unique Characteristics of UX: Empathy Building and Emphasis on Experiences

5.2.1. The Essence of UX Methodologies: Empathy Building

A central goal of UX methodologies and processes is empathy building. Previous research has found that UX practitioners view methodologies more as “mindsets” than as actual rigorous methods, used to scaffold listening to users and considering diverse user inputs (Gray, 2016). Practitioners emphasized the need to prioritize this mindset when adopting and adapting UX methodologies for each project's unique scenarios:

“…methods themselves are quite rudimentary… you probably can describe in a page. But when it comes to actually getting the right value out of them, it’s having that right mindset –– what are the right questions we need to ask? How can we answer them? And then using that as the basis for what methods you need.” (Gray, 2016)

This unique characteristic of UX is often overlooked in current AI support tools: the point is often not to automate the processes or methodologies, but to support UX designers' and researchers' empathy-building with their users. Simplistic automation and acceleration can bypass the cognitive process of UX professionals, thus obstructing researchers' learning and empathy-building (Marathe and Toyama, 2018). As a result, existing research that uses AI for simplistic automation, as discussed in Sections 4.1 and 4.2, is generally not desired by UX practitioners and is hard to integrate into existing workflows. For example, AI systems that aim to directly provide synthetic user information (Zhang et al., 2016; An et al., 2018; Tan et al., 2022) can obstruct the empathy-building goal of UX and reinforce stereotypes by providing “statistically most likely” information about users (Salehi, 2023). Addressing this gap calls for wider adoption of a human-centered AI perspective (Shneiderman, 2022), orienting future research toward supporting the mindsets and goals of UX designers, instead of simply automating UX processes and generating relevant UI/UX artifacts.

5.2.2. From Individual UI Screens to Underlying User Experiences

Most past research has focused on individual UI screens, often overlooking the user flows and user experiences that span multiple interfaces. As we discovered in Section 4.5, most existing datasets focus on static UI screens and components. AI models and their applications built on top of these datasets also mostly work with static UIs (see Sections 4.3, 4.4, 4.6). Only a limited number of exceptions appeared in our sample, focusing on topics such as user engagement (Wu et al., 2020) and creating UI animations (Natarajan and Csallner, 2018).

This is a key factor limiting existing research's practical application in the real-world UX industry. Over the past years, there has been a notable shift of design's focus from user interfaces to holistic user experiences (Norman and Nielsen, 1998). This shift has been further amplified by the wide adoption of design systems (Churchill, 2019), i.e., libraries of UI components and styles defined within companies to ensure consistent visual styles and branding across products (e.g., Google Material Design, IBM Carbon, Microsoft Fluent). The high-fidelity design components in design systems are defined with great detail and precision. Designers are thus constrained in changing the visual aspects of a design, but freed to focus on crafting friendly, seamless user experiences with pre-defined UI elements (Frost, 2016). Consequently, the gap between academic explorations and industry practices widens, limiting the practical, real-world adoption of AI-enabled design support tools created in academic settings.

However, great potential for AI support still exists, if coupled with a deep understanding of existing UX practices and workflows. Recent research in UI task automation, from datasets like Android in the Wild (Rawles et al., 2023) to models such as UIBert (Bai et al., 2021), reflects a gradual shift from individual UI screens to the underlying user flows and experiences. While users' perspective on interacting with UIs can still differ from designers' considerations, great opportunities for design support tools lie in designer-centric applications of these datasets and AI models. In addition, as UI animation and motion design increasingly become integral parts of modern user experiences (Google, [n. d.]), video-based AI models present promising avenues for enhancing relevant design processes and tools (Wu et al., 2020; Natarajan and Csallner, 2018).

5.3. Analyzing Task Delegability in UX Workflows

Given the intricacies of UX processes and methodologies, it is necessary to consider UX practitioners' goals when determining the delegability of tasks to AI. Delegability is a concept in human-centered AI that describes the extent to which AI should be involved in a given task (Jiang et al., 2021; Feuston and Brubaker, 2021). Lubars and Tan proposed a framework of task delegability for AI that weighs motivation, difficulty, risk, and trust when deciding AI's involvement (Lubars and Tan, 2019). UX processes are often fluid and non-linear yet tied to practical business and design goals (e.g., higher conversion rates, increased user engagement) (Li et al., 2024). Such processes blend creative and analytical tasks, complicating AI task delegability analysis.

In the context of UX, our analysis has highlighted empathy-building with users as UX practitioners' main motivation. We encourage future researchers to carefully analyze the different UX methodologies, as well as the detailed steps within them, against the task delegability framework. The level of AI automation and the granularity of task breakdown are not binary choices, but balances to be struck when designing UX support tools (Shneiderman, 2022). For example, for qualitative analysis (Section 4.2), prior research suggests lower AI delegability for initial codebook creation than for the later codebook labeling process (Marathe and Toyama, 2018). Systems like Cody (Rietz and Maedche, 2021) and PaTAT (Gebreegziabher, 2023) serve as positive examples, where AI models automate manual work but still leave room for reflection, learning about users, and empathy-building. The appropriate amount of AI automation in suitable tasks will improve the quality of UX outcomes, helping practitioners get the right design and get the design right.
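To make the framework concrete, here is a toy sketch that scores UX tasks along Lubars and Tan's four factors. The 0–1 scores, the threshold rule, and the example values are our illustrative assumptions; the original framework does not prescribe a numeric operationalization.

```python
from dataclasses import dataclass

@dataclass
class TaskDelegability:
    """Lubars and Tan's (2019) four factors, crudely mapped to 0-1 scores."""
    motivation: float  # how much practitioners value doing the task themselves
    difficulty: float  # how hard the task is for a human
    risk: float        # cost of an AI mistake
    trust: float       # trust in AI's ability on this task

    def suggests_full_automation(self) -> bool:
        # Toy heuristic: delegate fully only when practitioners gain little
        # from doing it themselves, mistakes are cheap, and trust is high.
        return self.motivation < 0.3 and self.risk < 0.3 and self.trust > 0.7

# Hypothetical scores echoing the qualitative-coding example above.
codebook_creation = TaskDelegability(motivation=0.9, difficulty=0.7, risk=0.6, trust=0.4)
codebook_labeling = TaskDelegability(motivation=0.2, difficulty=0.3, risk=0.2, trust=0.8)
print(codebook_creation.suggests_full_automation())  # False: keep the human in the loop
print(codebook_labeling.suggests_full_automation())  # True: a candidate for automation
```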

5.4. Designer-Centric Datasets and Evaluation Metrics as Solid Technical Foundations

Most current open-source UI/UX datasets (Deka et al., 2017a; Moran et al., 2020; Leiva et al., 2020) and user task automation datasets (Rawles et al., 2023) are not directly associated with designers' considerations and priorities when designing for user experiences. Even the datasets in our sample that were created for design purposes are often limited to individual UI screens and widgets (Huang et al., 2019; Sermuga Pandian et al., 2021b, a), ignoring practitioners' emphasis on user experiences across screens. For example, UX designers' key considerations in the design process can include: how to identify appropriate design patterns for a given design scenario (Silva-Rodríguez et al., 2019), how to implement user flows and product features with existing UI components from a design system (Churchill, 2019), and how to fit UI components into existing screens to support additional product features (Lu et al., 2022).

This calls for future research contributions in two main areas: first, we need better evaluation metrics for UI/UX data that align more closely with current UX design goals, such as usability heuristics (Ponce et al., 2018) (also see Section 4.5.3); second, more datasets containing the results of such metrics are needed for large-scale benchmarking efforts. Efficient metrics that reliably reflect the quality of UI/UX design are still lacking; today, quality assessment is mostly achieved through subjective scores from potential user groups (Swearngin et al., 2018; Ang and Lim, 2022). UX designers and researchers should work closely with AI researchers and engineers in defining these metrics, as well as in contributing to data collection and labeling efforts (Yildirim et al., 2022).
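As a flavor of what a computable, designer-aligned metric could look like, the toy function below scores how consistently components share a left edge, one tiny ingredient of alignment-related heuristics. It is our illustrative sketch, not one of the published UI quality metrics, and the box coordinates are hypothetical.

```python
def misalignment_score(boxes):
    """Toy layout metric: distinct left-edge positions relative to the
    number of components. 1/len(boxes) means perfectly left-aligned;
    values near 1.0 mean every component starts at a different x.
    Illustrative only, not a published UI quality metric."""
    lefts = {round(x) for x, y, w, h in boxes}
    return len(lefts) / len(boxes)

# Three stacked components; the third breaks the shared left edge.
screen = [(16, 40, 328, 48), (16, 100, 328, 48), (24, 160, 312, 48)]
print(misalignment_score(screen))  # ~0.67
```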

The lowering barrier to utilizing AI, through pre-trained large language models (Wang et al., 2023) and large vision-language models (Li and Li, 2023), can also enable more UX teams to easily integrate AI into their toolkits (Brie et al., 2023; Xiao et al., 2023; Feng et al., 2023). Interacting with these models, whether through fine-tuning or prompt-based mechanisms, reduces the reliance on domain-specific datasets. This is advantageous for UX professionals who lean toward qualitative methods or lack the means to collect large-scale datasets. However, employing such extensive models also raises ethical concerns over potentially leaking user data, emphasizing the need for more research on the responsible deployment of these models (Shen et al., 2023).

6. Conclusion

This study underscores the expanding potential of integrating AI into the UX domain through a systematic literature review (SLR) from a Human-Centered AI (HCAI) perspective. By mapping research onto the Double Diamond framework, we identified key technical capabilities of AI in UX and highlighted overlooked aspects such as empathy-building and multi-screen user experiences. We highlight the need for a deep understanding of UX practices, mindsets, and goals to design effective AI support, calling for careful analysis of AI delegability for UX tasks based on existing HCAI frameworks (Lubars and Tan, 2019). Designer-centric datasets and evaluation metrics can greatly improve the technical foundations for direct real-world impact. This review summarizes the current landscape and lays out future opportunities for this promising interdisciplinary, translational research domain.

7. Limitations

Given the rapid advancements in our focus areas, keeping up-to-date with the latest developments is challenging, particularly for a systematic literature review. Our snowball sampling method enabled us to gather a substantial set of relevant papers (N=369), but this approach limits our ability to incorporate new studies as they are published. Notably, recent papers on UI task automation using LLMs (Li et al., 2023; Yan et al., 2023) appeared after our review period and were not included in our analysis. Despite this, we believe our findings remain relevant and insightful in light of these new publications. Nonetheless, we acknowledge this as a limitation of our study.

  • Abbas et al. (2022) Abdallah M. H. Abbas, Khairil Imran Ghauth, and Choo-Yee Ting. 2022. User Experience Design Using Machine Learning: A Systematic Review. IEEE Access 10 (2022), 51501–51514. https://doi.org/10.1109/ACCESS.2022.3173289
  • Akça and Tanriöver (2021) Eren Akça and Ömer Özgür Tanriöver. 2021. A comprehensive appraisal of perceptual visual complexity analysis methods in GUI design. Displays 69 (Sept. 2021), 102031. https://doi.org/10.1016/j.displa.2021.102031
  • Alemerien and Magel (2014) Khalid Alemerien and Kenneth Magel. 2014. GUIEvaluator: A Metric-tool for Evaluating the Complexity of Graphical User Interfaces. In SEKE. 13–18.
  • Amershi et al. (2019) Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300233
  • Ammarullah et al. (2021) Ali Naufal Ammarullah, Mutia Marcha Fatika, Muhammad Hafizhan, and Auzi Asfarian. 2021. Design Concept: Get Comfortable Sleep Using Ambient Experience with Smart Pillow. In Asian CHI Symposium 2021. Association for Computing Machinery, New York, NY, USA, 174–176. https://doi.org/10.1145/3429360.3468205
  • An et al. (2016) Jisun An, Hoyoun Cho, Haewoon Kwak, Mohammed Ziyaad Hassen, and Bernard J. Jansen. 2016. Towards Automatic Persona Generation Using Social Media. In 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW). IEEE, Vienna, Austria, 206–211. https://doi.org/10.1109/W-FiCloud.2016.51
  • An et al. (2018) J. An, H. Kwak, S. Jung, J. Salminen, M. Admad, and B. Jansen. 2018. Imaginary People Representing Real Numbers: Generating Personas from Online Social Media Data. ACM Transactions on the Web 12, 4 (Nov. 2018), 1–26. https://doi.org/10.1145/3265986
  • Andolina et al. (2015) Salvatore Andolina, Khalil Klouche, Diogo Cabral, Tuukka Ruotsalo, and Giulio Jacucci. 2015. InspirationWall: Supporting Idea Generation Through Automatic Information Exploration. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition. ACM, Glasgow, United Kingdom, 103–106. https://doi.org/10.1145/2757226.2757252
  • Ang and Lim (2022) Gary Ang and Ee-Peng Lim. 2022. Learning Semantically Rich Network-based Multi-modal Mobile User Interface Embeddings. ACM Transactions on Interactive Intelligent Systems 12, 4 (Dec. 2022), 1–29. https://doi.org/10.1145/3533856
  • Arroyo et al. (2021) Diego Martin Arroyo, Janis Postels, and Federico Tombari. 2021. Variational Transformer Networks for Layout Generation. https://doi.org/10.48550/arXiv.2104.02416 arXiv:2104.02416 [cs].
  • Bai et al. (2021) Chongyang Bai, Xiaoxue Zang, Ying Xu, Srinivas Sunkara, Abhinav Rastogi, Jindong Chen, and Blaise Aguera y Arcas. 2021. UIBert: Learning Generic Multimodal Representations for UI Understanding. http://arxiv.org/abs/2107.13731 arXiv:2107.13731 [cs].
  • Baj-Rogowska and Sikorski (2023) Anna Baj-Rogowska and Marcin Sikorski. 2023. Exploring the usability and user experience of social media apps through a text mining approach. Engineering Management in Production and Services 15, 1 (March 2023), 86–105. https://doi.org/10.2478/emj-2023-0007
  • Beltramelli (2017) Tony Beltramelli. 2017. pix2code: Generating Code from a Graphical User Interface Screenshot. https://doi.org/10.48550/arXiv.1705.07962 arXiv:1705.07962 [cs].
  • Bittner and Shoury (2019) Eva Bittner and Omid Shoury. 2019. Designing Automated Facilitation for Design Thinking: A Chatbot for Supporting Teams in the Empathy Map Method. https://doi.org/10.24251/HICSS.2019.029
  • Borlinghaus and Huber (2021) Parzival Borlinghaus and Stephan Huber. 2021. Comparing Apples and Oranges: Human and Computer Clustered Affinity Diagrams Under the Microscope. In 26th International Conference on Intelligent User Interfaces. ACM, College Station, TX, USA, 413–422. https://doi.org/10.1145/3397481.3450674
  • Brie et al. (2023) Paul Brie, Nicolas Burny, Arthur Sluÿters, and Jean Vanderdonckt. 2023. Evaluating a Large Language Model on Searching for GUI Layouts. Proceedings of the ACM on Human-Computer Interaction 7, EICS (June 2023), 178:1–178:37. https://doi.org/10.1145/3593230
  • Brückner et al. (2022) Lukas Brückner, Luis A. Leiva, and Antti Oulasvirta. 2022. Learning GUI Completions with User-defined Constraints. ACM Transactions on Interactive Intelligent Systems 12, 1 (March 2022), 6:1–6:40. https://doi.org/10.1145/3490034
  • Bulygin (2022) Denis Bulygin. 2022. How do Conversational Agents Transform Qualitative Interviews? Exploration and Support of Researchers’ Needs in Interviews at Scale. In 27th International Conference on Intelligent User Interfaces. ACM, Helsinki, Finland, 124–128. https://doi.org/10.1145/3490100.3516478
  • Bunian et al. (2021) Sara Bunian, Kai Li, Chaima Jemmali, Casper Harteveld, Yun Fu, and Magy Seif El-Nasr. 2021. VINS: Visual Search for Mobile User Interface Design. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–14. https://doi.org/10.1145/3411764.3445762
  • Buschek et al. (2020) Daniel Buschek, Charlotte Anlauff, and Florian Lachner. 2020. Paper2Wire: a case study of user-centred development of machine learning tools for UX designers. In Proceedings of Mensch und Computer 2020. ACM, Magdeburg, Germany, 33–41. https://doi.org/10.1145/3404983.3405506
  • Bylinskii et al. (2017) Zoya Bylinskii, Nam Wook Kim, Peter O’Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, and Aaron Hertzmann. 2017. Learning Visual Importance for Graphic Designs and Data Visualizations. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. 57–69. https://doi.org/10.1145/3126594.3126653 arXiv:1708.02660 [cs].
  • Chen et al. (2017) Chun-Fu (Richard) Chen, Marco Pistoia, Conglei Shi, Paolo Girolami, Joseph W. Ligman, and Yong Wang. 2017. UI X-Ray: Interactive Mobile UI Testing Based on Computer Vision. In Proceedings of the 22nd International Conference on Intelligent User Interfaces (IUI ’17). Association for Computing Machinery, New York, NY, USA, 245–255. https://doi.org/10.1145/3025171.3025190
  • Chen et al. (2020a) Jieshan Chen, Chunyang Chen, Zhenchang Xing, Xin Xia, Liming Zhu, John Grundy, and Jinshui Wang. 2020a. Wireframe-based UI Design Search through Image Autoencoder. ACM Transactions on Software Engineering and Methodology 29, 3 (July 2020), 1–31. https://doi.org/10.1145/3391613
  • Chen et al. (2020b) Jieshan Chen, Chunyang Chen, Zhenchang Xing, Xiwei Xu, Liming Zhu, Guoqiang Li, and Jinshui Wang. 2020b. Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning. https://doi.org/10.1145/3377811.3380327 arXiv:2003.00380 [cs].
  • Chen et al. (2022) Jieshan Chen, Amanda Swearngin, Jason Wu, Titus Barik, Jeffrey Nichols, and Xiaoyi Zhang. 2022. Towards Complete Icon Labeling in Mobile Applications. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans, LA, USA, 1–14. https://doi.org/10.1145/3491102.3502073
  • Chen et al. (2018) Nan-Chen Chen, Margaret Drouhard, Rafal Kocielnik, Jina Suh, and Cecilia R. Aragon. 2018. Using Machine Learning to Support Qualitative Coding in Social Science: Shifting the Focus to Ambiguity. ACM Transactions on Interactive Intelligent Systems 8, 2 (June 2018), 1–20. https://doi.org/10.1145/3185515
  • Chen et al. (2023) Shi Chen, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai Kohlhoff, and Junfeng He. 2023. Learning From Unique Perspectives: User-Aware Saliency Modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2701–2710. https://openaccess.thecvf.com/content/CVPR2023/html/Chen_Learning_From_Unique_Perspectives_User-Aware_Saliency_Modeling_CVPR_2023_paper.html
  • Cheng et al. (2023b) Chin-Yi Cheng, Forrest Huang, Gang Li, and Yang Li. 2023b. PLay: Parametrically Conditioned Layout Generation using Latent Diffusion. https://doi.org/10.48550/arXiv.2301.11529 arXiv:2301.11529 [cs].
  • Cheng (2016) Mingming Cheng. 2016. Sharing economy: A review and agenda for future research. International Journal of Hospitality Management 57 (Aug. 2016), 60–70. https://doi.org/10.1016/j.ijhm.2016.06.003
  • Cheng et al. (2023a) Shiwei Cheng, Jing Fan, and Yilin Hu. 2023a. Visual saliency model based on crowdsourcing eye tracking data and its application in visual design. Personal and Ubiquitous Computing 27, 3 (June 2023), 613–630. https://doi.org/10.1007/s00779-020-01463-7
  • Chromik et al. (2020) Michael Chromik, Florian Lachner, and Andreas Butz. 2020. ML for UX? - An Inventory and Predictions on the Use of Machine Learning Techniques for UX Research. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (NordiCHI ’20). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3419249.3420163
  • Churchill (2019) Elizabeth F Churchill. 2019. Scaling UX with design systems. Interactions 26, 5 (2019), 22–23.
  • Collins et al. (2021) Eliane Collins, Arilo Neto, Auri Vincenzi, and José Maldonado. 2021. Deep Reinforcement Learning based Android Application GUI Testing. In Brazilian Symposium on Software Engineering. ACM, Joinville, Brazil, 186–194. https://doi.org/10.1145/3474624.3474634
  • Colusso et al. (2017) Lucas Colusso, Cynthia L. Bennett, Gary Hsieh, and Sean A. Munson. 2017. Translational Resources: Reducing the Gap Between Academic Research and HCI Practice. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ’17). Association for Computing Machinery, New York, NY, USA, 957–968. https://doi.org/10.1145/3064663.3064667
  • Colusso et al. (2019) Lucas Colusso, Ridley Jones, Sean A Munson, and Gary Hsieh. 2019. A translational science model for HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
  • Colusso (2020) Lucas Franco Colusso. 2020. Understanding and Tooling Translational Research in Human-Computer Interaction. Ph. D. Dissertation. University of Washington.
  • Council ([n. d.]) British Design Council. [n. d.]. The Double Diamond - Design Council. https://www.designcouncil.org.uk/our-resources/the-double-diamond/
  • de Souza Lima et al. (2022) Adriano Luiz de Souza Lima, Osvaldo P. Heiderscheidt Roberge Martins, Christiane Gresse von Wangenheim, Aldo von Wangenheim, Adriano Ferreti Borgatto, and Jean C. R. Hauck. 2022. Automated assessment of visual aesthetics of Android user interfaces with deep learning. In Proceedings of the 21st Brazilian Symposium on Human Factors in Computing Systems (IHC ’22). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3554364.3559113
  • Degott et al. (2019) Christian Degott, Nataniel P. Borges Jr., and Andreas Zeller. 2019. Learning user interface element interactions. In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis. ACM, Beijing, China, 296–306. https://doi.org/10.1145/3293882.3330569
  • Deka et al. (2017a) Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar. 2017a. Rico: A Mobile App Dataset for Building Data-Driven Design Applications. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, Québec City, QC, Canada, 845–854. https://doi.org/10.1145/3126594.3126651
  • Deka et al. (2017b) Biplab Deka, Zifeng Huang, Chad Franzen, Jeffrey Nichols, Yang Li, and Ranjitha Kumar. 2017b. ZIPT: Zero-Integration Performance Testing of Mobile App Designs. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, Québec City, QC, Canada, 727–736. https://doi.org/10.1145/3126594.3126647
  • Dell and Kumar (2016) Nicola Dell and Neha Kumar. 2016. The Ins and Outs of HCI for Development. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, New York, NY, USA, 2220–2232. https://doi.org/10.1145/2858036.2858081
  • Desolda et al. (2021) Giuseppe Desolda, Andrea Esposito, Rosa Lanzilotti, and Maria F. Costabile. 2021. Detecting Emotions Through Machine Learning for Automatic UX Evaluation. In Human-Computer Interaction – INTERACT 2021 (Lecture Notes in Computer Science), Carmelo Ardito, Rosa Lanzilotti, Alessio Malizia, Helen Petrie, Antonio Piccinno, Giuseppe Desolda, and Kori Inkpen (Eds.). Springer International Publishing, Cham, 270–279. https://doi.org/10.1007/978-3-030-85613-7_19
  • Dhinakaran ([n. d.]) Aparna Dhinakaran. [n. d.]. Survey: Massive Retooling Around Large Language Models Underway. https://www.forbes.com/sites/aparnadhinakaran/2023/04/26/survey-massive-retooling-around-large-language-models-underway/
  • Di Fede et al. (2022) Giulia Di Fede, Davide Rocchesso, Steven P. Dow, and Salvatore Andolina. 2022. The Idea Machine: LLM-based Expansion, Rewriting, Combination, and Suggestion of Ideas. In Creativity and Cognition. ACM, Venice, Italy, 623–627. https://doi.org/10.1145/3527927.3535197
  • Dillahunt et al. (2017) Tawanna R. Dillahunt, Xinyi Wang, Earnest Wheeler, Hao Fei Cheng, Brent Hecht, and Haiyi Zhu. 2017. The Sharing Economy in Computing: A Systematic Literature Review. Proceedings of the ACM on Human-Computer Interaction 1, CSCW (Dec. 2017), 1–26. https://doi.org/10.1145/3134673
  • Dou et al. (2019) Qi Dou, Xianjun Sam Zheng, Tongfang Sun, and Pheng-Ann Heng. 2019. Webthetics: Quantifying webpage aesthetics with deep learning. International Journal of Human-Computer Studies 124 (April 2019), 56–66. https://doi.org/10.1016/j.ijhcs.2018.11.006
  • Dow et al. (2011) Steven P. Dow, Alana Glassco, Jonathan Kass, Melissa Schwarz, Daniel L. Schwartz, and Scott R. Klemmer. 2011. Parallel prototyping leads to better design results, more divergence, and increased self-efficacy. ACM Transactions on Computer-Human Interaction 17, 4 (Dec. 2011), 18:1–18:24. https://doi.org/10.1145/1879831.1879836
  • Drouhard et al. (2017) Margaret Drouhard, Nan-Chen Chen, Jina Suh, Rafal Kocielnik, Vanessa Pena-Araya, Keting Cen, Xiangyi Zheng, and Cecilia R. Aragon. 2017. Aeonium: Visual analytics to support collaborative qualitative coding. In 2017 IEEE Pacific Visualization Symposium (PacificVis). IEEE, Seoul, South Korea, 220–229. https://doi.org/10.1109/PACIFICVIS.2017.8031598
  • Duan et al. (2020) Peitong Duan, Casimir Wierzynski, and Lama Nachman. 2020. Optimizing User Interface Layouts via Gradient Descent. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376589
  • Dąbrowski et al. (2022) Jacek Dąbrowski, Emmanuel Letier, Anna Perini, and Angelo Susi. 2022. Analysing app reviews for software engineering: a systematic literature review. Empirical Software Engineering 27, 2 (March 2022), 43. https://doi.org/10.1007/s10664-021-10065-7
  • Eskonen (2019) Juha Eskonen. 2019. Deep Reinforcement Learning in Automated User Interface Testing. (May 2019). https://aaltodoc.aalto.fi:443/handle/123456789/37895
  • Eskonen et al. (2020) Juha Eskonen, Julen Kahles, and Joel Reijonen. 2020. Automating GUI Testing with Image-Based Deep Reinforcement Learning. In 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). 160–167. https://doi.org/10.1109/ACSOS49614.2020.00038
  • Fan et al. (2020) Mingming Fan, Yue Li, and Khai N. Truong. 2020. Automatic Detection of Usability Problem Encounters in Think-aloud Sessions. ACM Transactions on Interactive Intelligent Systems 10, 2 (June 2020), 1–24. https://doi.org/10.1145/3385732
  • Fan et al. (2022) Mingming Fan, Xianyou Yang, TszTung Yu, Q Vera Liao, and Jian Zhao. 2022. Human-AI collaboration for UX evaluation: Effects of explanation and synchronization. Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (2022), 1–32.
  • Farrell (2017) Susan Farrell. 2017. UX Research Cheat Sheet. https://www.nngroup.com/articles/ux-research-cheat-sheet/
  • Feng et al. (2022) Sidong Feng, Chunyang Chen, and Zhenchang Xing. 2022. Gallery D.C.: Auto-created GUI Component Gallery for Design Search and Knowledge Discovery. http://arxiv.org/abs/2204.06700 arXiv:2204.06700 [cs].
  • Feng et al. (2023) Weixi Feng, Wanrong Zhu, Tsu-jui Fu, Varun Jampani, Arjun Akula, Xuehai He, Sugato Basu, Xin Eric Wang, and William Yang Wang. 2023. LayoutGPT: Compositional Visual Planning and Generation with Large Language Models. https://doi.org/10.48550/arXiv.2305.15393 arXiv:2305.15393 [cs].
  • Feng et al. (2021a) Zhen Feng, Jiaqi Fang, Bo Cai, and Yingtao Zhang. 2021a. GUIS2Code: A Computer Vision Tool to Generate Code Automatically from Graphical User Interface Sketches. In Artificial Neural Networks and Machine Learning – ICANN 2021 (Lecture Notes in Computer Science), Igor Farkaš, Paolo Masulli, Sebastian Otte, and Stefan Wermter (Eds.). Springer International Publishing, Cham, 53–65. https://doi.org/10.1007/978-3-030-86365-4_5
  • Feng et al. (2021b) Zhitao Feng, Mingliang Hou, Huiyang Liu, Mujie Liu, Achhardeep Kaur, Falih Gozi Febrinanto, and Wenhong Zhao. 2021b. SmartColor: Automatic Web Color Scheme Generation Based on Deep Learning. In 2021 12th International Conference on Information and Communication Systems (ICICS). 285–290. https://doi.org/10.1109/ICICS52457.2021.9464536
  • Feuston and Brubaker (2021) Jessica L. Feuston and Jed R. Brubaker. 2021. Putting Tools in Their Place: The Role of Time and Perspective in Human-AI Collaboration for Qualitative Analysis. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (Oct. 2021), 1–25. https://doi.org/10.1145/3479856
  • Filho et al. (2015) Jackson Feijó Filho, Thiago Valle, and Wilson Prata. 2015. Automated Usability Tests for Mobile Devices through Live Emotions Logging. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI ’15). Association for Computing Machinery, New York, NY, USA, 636–643. https://doi.org/10.1145/2786567.2792902
  • Fosco et al. (2020) Camilo Fosco, Vincent Casser, Amish Kumar Bedi, Peter O’Donovan, Aaron Hertzmann, and Zoya Bylinskii. 2020. Predicting Visual Importance Across Graphic Design Types. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST ’20). Association for Computing Machinery, New York, NY, USA, 249–260. https://doi.org/10.1145/3379337.3415825
  • Frost (2016) Brad Frost. 2016. Atomic Design. http://atomicdesign.bradfrost.com/
  • Furuta et al. (2023) Hiroki Furuta, Ofir Nachum, Kuang-Huei Lee, Yutaka Matsuo, Shixiang Shane Gu, and Izzeddin Gur. 2023. Instruction-Finetuned Foundation Models for Multimodal Web Navigation. (2023).
  • Gao et al. (2023) Jie Gao, Yuchen Guo, Gionnieve Lim, Tianqin Zhang, Zheng Zhang, Toby Jia-Jun Li, and Simon Tangi Perrault. 2023. CollabCoder: A GPT-Powered Workflow for Collaborative Qualitative Analysis. https://doi.org/10.48550/arXiv.2304.07366 arXiv:2304.07366 [cs].
  • Gebreegziabher (2023) Simret Araya Gebreegziabher. 2023. PaTAT: Human-AI Collaborative Qualitative Coding with Explainable Interactive Rule Synthesis. (2023).
  • Georges et al. (2016) Vanessa Georges, François Courtemanche, Sylvain Senecal, Thierry Baccino, Marc Fredette, and Pierre-Majorique Leger. 2016. UX Heatmaps: Mapping User Experience on Visual Interfaces. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, San Jose, California, USA, 4850–4860. https://doi.org/10.1145/2858036.2858271
  • Gilon et al. (2018) Karni Gilon, Joel Chan, Felicia Y. Ng, Hila Lifshitz-Assaf, Aniket Kittur, and Dafna Shahaf. 2018. Analogy Mining for Specific Design Needs. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, QC, Canada, 1–11. https://doi.org/10.1145/3173574.3173695
  • Goldman et al. (2022) Ariel Goldman, Cindy Espinosa, Shivani Patel, Francesca Cavuoti, Jade Chen, Alexandra Cheng, Sabrina Meng, Aditi Patil, Lydia B Chilton, and Sarah Morrison-Smith. 2022. QuAD: Deep-Learning Assisted Qualitative Data Analysis with Affinity Diagrams. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New Orleans, LA, USA, 1–7. https://doi.org/10.1145/3491101.3519863
  • Google ([n. d.]) Google. [n. d.]. Motion – Material Design 3. https://m3.material.io/styles/motion/overview
  • Gray (2016) Colin M. Gray. 2016. “It’s More of a Mindset Than a Method”: UX Practitioners’ Conception of Design Methods. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, New York, NY, USA, 4044–4055. https://doi.org/10.1145/2858036.2858410
  • Grigera et al. (2023) Julián Grigera, Jordán Pascual Espada, and Gustavo Rossi. 2023. AI in User Interface Design and Evaluation. IT Professional 25, 2 (March 2023), 20–22. https://doi.org/10.1109/MITP.2023.3267139
  • Gupta et al. (2021) Kamal Gupta, Justin Lazarow, Alessandro Achille, Larry Davis, Vijay Mahadevan, and Abhinav Shrivastava. 2021. LayoutTransformer: Layout Generation and Completion with Self-attention. https://doi.org/10.48550/arXiv.2006.14615 arXiv:2006.14615 [cs].
  • Gustafsson (2019) Daniel Gustafsson. 2019. Analysing the Double diamond design process through research & implementation. (2019). https://aaltodoc.aalto.fi:443/handle/123456789/39285
  • Guzman and Maalej (2014) Emitza Guzman and Walid Maalej. 2014. How Do Users Like This Feature? A Fine Grained Sentiment Analysis of App Reviews. In 2014 IEEE 22nd International Requirements Engineering Conference (RE). IEEE, Karlskrona, Sweden, 153–162. https://doi.org/10.1109/RE.2014.6912257
  • Han et al. (2021) Xu Han, Michelle Zhou, Matthew J. Turner, and Tom Yeh. 2021. Designing Effective Interview Chatbots: Automatic Chatbot Profiling and Design Suggestion Generation for Chatbot Debugging. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–15. https://doi.org/10.1145/3411764.3445569
  • Hasan Mansur et al. (2023) S M Hasan Mansur, Sabiha Salma, Damilola Awofisayo, and Kevin Moran. 2023. AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). 1958–1970. https://doi.org/10.1109/ICSE48619.2023.00166
  • He et al. (2021) Zecheng He, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby Lee, Jindong Chen, and Blaise Agüera y Arcas. 2021. ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces. http://arxiv.org/abs/2012.12350 arXiv:2012.12350 [cs].
  • Hedegaard and Simonsen (2013) Steffen Hedegaard and Jakob Grue Simonsen. 2013. Extracting usability and user experience information from online user reviews. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Paris, France, 2089–2098. https://doi.org/10.1145/2470654.2481286
  • Hedegaard and Simonsen (2014) Steffen Hedegaard and Jakob Grue Simonsen. 2014. Mining until it hurts: automatic extraction of usability issues from online reviews compared to traditional usability evaluation. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (NordiCHI ’14). Association for Computing Machinery, New York, NY, USA, 157–166. https://doi.org/10.1145/2639189.2639211
  • Hegemann et al. (2023) Lena Hegemann, Niraj Ramesh Dayama, Abhishek Iyer, Erfan Farhadi, Ekaterina Marchenko, and Antti Oulasvirta. 2023. CoColor: Interactive Exploration of Color Designs. In Proceedings of the 28th International Conference on Intelligent User Interfaces. ACM, Sydney, NSW, Australia, 106–127. https://doi.org/10.1145/3581641.3584089
  • Herring et al. (2009) Scarlett R. Herring, Chia-Chen Chang, Jesse Krantzler, and Brian P. Bailey. 2009. Getting inspired! understanding how and why examples are used in creative design practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). Association for Computing Machinery, New York, NY, USA, 87–96. https://doi.org/10.1145/1518701.1518717
  • Horvitz (1999) Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’99). Association for Computing Machinery, New York, NY, USA, 159–166. https://doi.org/10.1145/302979.303030
  • Hotti et al. (2022) Alexandra Hotti, Riccardo Sven Risuleo, Stefan Magureanu, Aref Moradi, and Jens Lagergren. 2022. Graph Neural Networks for Nomination and Representation Learning of Web Elements. http://arxiv.org/abs/2111.02168 arXiv:2111.02168 [cs].
  • Hou et al. (2020) Wen-jun Hou, Xiang-yuan Yan, and Jia-xin Liu. 2020. A Method for Quickly Establishing Personas. In Artificial Intelligence in HCI (Lecture Notes in Computer Science, Vol. 12217), Helmut Degen and Lauren Reinerman-Jones (Eds.). Springer International Publishing, Cham, 16–32. https://doi.org/10.1007/978-3-030-50334-5_2
  • Huang et al. (2019) Forrest Huang, John F. Canny, and Jeffrey Nichols. 2019. Swire: Sketch-based User Interface Retrieval. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3290605.3300334
  • Huang et al. (2021) Forrest Huang, Gang Li, Xin Zhou, John F. Canny, and Yang Li. 2021. Creating User Interface Mock-ups from High-Level Text Descriptions with Deep-Learning Models. https://doi.org/10.48550/arXiv.2110.07775 arXiv:2110.07775 [cs].
  • Hunter and Maes ([n. d.]) Seth Hunter and Pattie Maes. [n. d.]. WordPlay: A Table-Top Interface for Collaborative Brainstorming and Decision Making. ([n. d.]).
  • Hwang and Won (2021) Angel Hsing-Chi Hwang and Andrea Stevenson Won. 2021. IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–16. https://doi.org/10.1145/3411764.3445270
  • Iki and Aizawa (2022) Taichi Iki and Akiko Aizawa. 2022. Do BERTs Learn to Use Browser User Interface? Exploring Multi-Step Tasks with Unified Vision-and-Language BERTs. https://doi.org/10.48550/arXiv.2203.07828 arXiv:2203.07828 [cs].
  • Ines et al. (2017) Gasmi Ines, Soui Makram, Chouchane Mabrouka, and Abed Mourad. 2017. Evaluation of mobile interfaces as an optimization problem. Procedia Computer Science 112 (2017), 235–248.
  • Inoue et al. (2023) Naoto Inoue, Kotaro Kikuchi, Edgar Simo-Serra, Mayu Otani, and Kota Yamaguchi. 2023. LayoutDM: Discrete Diffusion Model for Controllable Layout Generation. https://doi.org/10.48550/arXiv.2303.08137 arXiv:2303.08137 [cs].
  • Isgrò et al. (2022) Francesco Isgrò, Silvia D. Ferraris, and Sara Colombo. 2022. AI-Enabled Design Tools: Current Trends and Future Possibilities. In [ ] With Design: Reinventing Design Modes, Gerhard Bruyns and Huaxin Wei (Eds.). Springer Nature, Singapore, 2836–2847. https://doi.org/10.1007/978-981-19-4472-7_183
  • Jang and Yi (2017) Jincheul Jang and Mun Yong Yi. 2017. Modeling User Satisfaction from the Extraction of User Experience Elements in Online Product Reviews. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, Denver, Colorado, USA, 1718–1725. https://doi.org/10.1145/3027063.3053097
  • Jang and Park (2022) Yeonju Jang and Eunil Park. 2022. Satisfied or not: user experience of mobile augmented reality in using natural language processing techniques on review comments. Virtual Reality 26, 3 (Sept. 2022), 839–848. https://doi.org/10.1007/s10055-021-00599-y
  • Jansen et al. (2019) Bernard J. Jansen, Soon-gyo Jung, and Joni Salminen. 2019. Creating Manageable Persona Sets from Large User Populations. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Glasgow, Scotland, UK, 1–6. https://doi.org/10.1145/3290607.3313006
  • Jiang et al. (2021) Jialun Aaron Jiang, Kandrea Wade, Casey Fiesler, and Jed R. Brubaker. 2021. Supporting Serendipity: Opportunities and Challenges for Human-AI Collaboration in Qualitative Analysis. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (April 2021), 1–23. https://doi.org/10.1145/3449168 arXiv:2102.03702 [cs].
  • Jiang et al. (2023) Zhaoyun Jiang, Jiaqi Guo, Shizhao Sun, Huayu Deng, Zhongkai Wu, Vuksan Mijovic, Zijiang James Yang, Jian-Guang Lou, and Dongmei Zhang. 2023. LayoutFormer++: Conditional Graphic Layout Generation via Constraint Serialization and Decoding Space Restriction. https://doi.org/10.48550/arXiv.2208.08037 arXiv:2208.08037 [cs].
  • Jing et al. (2023) Qianzhi Jing, Tingting Zhou, Yixin Tsang, Liuqing Chen, Lingyun Sun, Yankun Zhen, and Yichun Du. 2023. Layout Generation for Various Scenarios in Mobile Shopping Applications. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). Association for Computing Machinery, New York, NY, USA, 1–18. https://doi.org/10.1145/3544548.3581446
  • Jisun et al. (2017) An Jisun, Kwak Haewoon, and Jansen Bernard J. 2017. Automatic Generation of Personas Using YouTube Social Media Data. In Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS) (2017).
  • Judd et al. (2009) Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Torralba. 2009. Learning to predict where humans look. In 2009 IEEE 12th International Conference on Computer Vision. 2106–2113. https://doi.org/10.1109/ICCV.2009.5459462
  • Jung et al. (2022) Summer Da Hyang Jung, Chandrayee Basu, Donghyeon Park, Julie Fukunaga, Maycon Cesar Santos, and Sohyeong Kim. 2022. Two-handed Design: Development of Food Personality Framework Using Mixed Method Needfinding. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New Orleans, LA, USA, 1–9. https://doi.org/10.1145/3491101.3503554
  • Kaluarachchi and Wickramasinghe (2023) Thisaranie Kaluarachchi and Manjusri Wickramasinghe. 2023. A systematic literature review on automatic website generation. Journal of Computer Languages 75 (June 2023), 101202. https://doi.org/10.1016/j.cola.2023.101202
  • Karimi et al. (2020) Pegah Karimi, Jeba Rezwana, Safat Siddiqui, Mary Lou Maher, and Nasrin Dehbozorgi. 2020. Creative sketching partner: an analysis of human-AI co-creativity. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI ’20). Association for Computing Machinery, New York, NY, USA, 221–230. https://doi.org/10.1145/3377325.3377522
  • Kaukanen (2020) Miki Kaukanen. 2020. Evaluating the impacts of machine learning to the future of A/B testing. (2020). https://lutpub.lut.fi/handle/10024/161780
  • Kharitonov et al. (2017) Eugene Kharitonov, Alexey Drutsa, and Pavel Serdyukov. 2017. Learning Sensitive Combinations of A/B Test Metrics. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (WSDM ’17). Association for Computing Machinery, New York, NY, USA, 651–659. https://doi.org/10.1145/3018661.3018708
  • Kikuchi et al. (2021) Kotaro Kikuchi, Edgar Simo-Serra, Mayu Otani, and Kota Yamaguchi. 2021. Constrained Graphic Layout Generation via Latent Optimization. In Proceedings of the 29th ACM International Conference on Multimedia. 88–96. https://doi.org/10.1145/3474085.3475497 arXiv:2108.00871 [cs].
  • Kim et al. (2023) Tae Soo Kim, Minsuk Chang, Yoonjoo Lee, and Juho Kim. 2023. Cells, Generators, and Lenses: Design Framework for Object-Oriented Interaction with Large Language Models. (2023).
  • Kim et al. (2022) Tae Soo Kim, DaEun Choi, Yoonseo Choi, and Juho Kim. 2022. Stylette: Styling the Web with Natural Language. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). Association for Computing Machinery, New York, NY, USA, 1–17. https://doi.org/10.1145/3491102.3501931
  • Kita and Rekimoto (2018) Yui Kita and Jun Rekimoto. 2018. V8 Storming: How Far Should Two Ideas Be?. In Proceedings of the 9th Augmented Human International Conference. ACM, Seoul, Republic of Korea, 1–8. https://doi.org/10.1145/3174910.3174937
  • Knearem et al. (2023) Tiffany Knearem, Mohammed Khwaja, Yuling Gao, Frank Bentley, and Clara E Kliman-Silver. 2023. Exploring the future of design tooling: The role of artificial intelligence in tools for user experience professionals. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA ’23). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3544549.3573874
  • Koch et al. (2019) Janin Koch, Andrés Lucero, Lena Hegemann, and Antti Oulasvirta. 2019. May AI? Design Ideation with Cooperative Contextual Bandits. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300863
  • Kong et al. (2023) Wenyuan Kong, Zhaoyun Jiang, Shizhao Sun, Zhuoning Guo, Weiwei Cui, Ting Liu, Jianguang Lou, and Dongmei Zhang. 2023. Aesthetics++: Refining Graphic Designs by Exploring Design Principles and Human Preference. IEEE Transactions on Visualization and Computer Graphics 29, 6 (June 2023), 3093–3104. https://doi.org/10.1109/TVCG.2022.3151617
  • Koonsanit et al. (2022) Kitti Koonsanit, Daiki Hiruma, Vibol Yem, and Nobuyuki Nishiuchi. 2022. Using Random Ordering in User Experience Testing to Predict Final User Satisfaction. Informatics 9, 4 (Dec. 2022), 85. https://doi.org/10.3390/informatics9040085
  • Koonsanit and Nishiuchi (2021) Kitti Koonsanit and Nobuyuki Nishiuchi. 2021. Predicting Final User Satisfaction Using Momentary UX Data and Machine Learning Techniques. Journal of Theoretical and Applied Electronic Commerce Research 16, 7 (Dec. 2021), 3136–3156. https://doi.org/10.3390/jtaer16070171
  • Krause et al. (2017) Markus Krause, Tom Garncarz, JiaoJiao Song, Elizabeth M. Gerber, Brian P. Bailey, and Steven P. Dow. 2017. Critique Style Guide: Improving Crowdsourced Design Feedback with a Natural Language Model. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). Association for Computing Machinery, New York, NY, USA, 4627–4639. https://doi.org/10.1145/3025453.3025883
  • Kruthiventi et al. (2017) Srinivas S. S. Kruthiventi, Kumar Ayush, and R. Venkatesh Babu. 2017. DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations. IEEE Transactions on Image Processing 26, 9 (Sept. 2017), 4446–4456. https://doi.org/10.1109/TIP.2017.2710620
  • Kumar et al. (2023) Rahul Kumar, Shankar Natarajan, Mohamed Akram Ulla Shariff, and Parameswaranath Vaduckupurath Mani. 2023. Dynamic User Interface Composition. SN Computer Science 4, 3 (March 2023), 259. https://doi.org/10.1007/s42979-023-01672-w
  • Kumar et al. (2013) Ranjitha Kumar, Arvind Satyanarayan, Cesar Torres, Maxine Lim, Salman Ahmad, Scott R. Klemmer, and Jerry O. Talton. 2013. Webzeitgeist: design mining the web. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). Association for Computing Machinery, New York, NY, USA, 3083–3092. https://doi.org/10.1145/2470654.2466420
  • Landay (1996) James A. Landay. 1996. SILK: sketching interfaces like krazy. In Conference Companion on Human Factors in Computing Systems: Common Ground (CHI ’96). ACM Press, Vancouver, British Columbia, Canada, 398–399. https://doi.org/10.1145/257089.257396
  • Leiva et al. (2020) Luis A. Leiva, Asutosh Hota, and Antti Oulasvirta. 2020. Enrico: A Dataset for Topic Modeling of Mobile UI Designs. In 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, Oldenburg, Germany, 1–4. https://doi.org/10.1145/3406324.3410710
  • Leiva et al. (2022a) Luis A. Leiva, Asutosh Hota, and Antti Oulasvirta. 2022a. Describing UI Screenshots in Natural Language. ACM Transactions on Intelligent Systems and Technology (Sept. 2022). https://doi.org/10.1145/3564702
  • Leiva et al. (2022b) Luis A. Leiva, Morteza Shiripour, and Antti Oulasvirta. 2022b. Modeling how different user groups perceive webpage aesthetics. Universal Access in the Information Society (Aug. 2022). https://doi.org/10.1007/s10209-022-00910-x
  • Li et al. (2022) Gang Li, Gilles Baechler, Manuel Tragut, and Yang Li. 2022. Learning to Denoise Raw Mobile UI Layouts for Improving Datasets at Scale. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans, LA, USA, 1–13. https://doi.org/10.1145/3491102.3502042
  • Li and Li (2023) Gang Li and Yang Li. 2023. Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus. https://doi.org/10.48550/arXiv.2209.14927 arXiv:2209.14927 [cs].
  • Li et al. (2016) Jian Li, Li Su, Bo Wu, Junbiao Pang, Chunfeng Wang, Zhe Wu, and Qingming Huang. 2016. Webpage saliency prediction with multi-features fusion. In 2016 IEEE International Conference on Image Processing (ICIP). 674–678. https://doi.org/10.1109/ICIP.2016.7532442
  • Li et al. (2024) JiayiZhou Li, Junxiu Tang, Tan Tang, Haotian Li, Weiwei Cui, Yingcai Wu, et al. 2024. Understanding Nonlinear Collaboration between Human and AI Agents: A Co-design Framework for Creative Design. arXiv preprint arXiv:2401.07312 (2024).
  • Li et al. (2021c) Jianan Li, Jimei Yang, Aaron Hertzmann, Jianming Zhang, and Tingfa Xu. 2021c. LayoutGAN: Synthesizing Graphic Layouts With Vector-Wireframe Adversarial Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 43, 7 (July 2021), 2388–2399. https://doi.org/10.1109/TPAMI.2019.2963663
  • Li et al. (2023) Tao Li, Gang Li, Zhiwei Deng, Bryan Wang, and Yang Li. 2023. A Zero-Shot Language Agent for Computer Control with Structured Reflection. arXiv preprint arXiv:2310.08740 (2023).
  • Li et al. (2017a) Toby Jia-Jun Li, Amos Azaria, and Brad A. Myers. 2017a. SUGILITE: Creating Multimodal Smartphone Automation by Demonstration. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, Denver, Colorado, USA, 6038–6049. https://doi.org/10.1145/3025453.3025483
  • Li et al. (2020b) Toby Jia-Jun Li, Tom Mitchell, and Brad Myers. 2020b. Interactive Task Learning from GUI-Grounded Natural Language Instructions and Demonstrations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Online, 215–223. https://doi.org/10.18653/v1/2020.acl-demos.25
  • Li et al. (2021b) Toby Jia-Jun Li, Lindsay Popowski, Tom Mitchell, and Brad A Myers. 2021b. Screen2Vec: Semantic Embedding of GUI Screens and GUI Components. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–15. https://doi.org/10.1145/3411764.3445049
  • Li et al. (2019) Toby Jia-Jun Li, Marissa Radensky, Justin Jia, Kirielle Singarajah, Tom M. Mitchell, and Brad A. Myers. 2019. PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. ACM, New Orleans, LA, USA, 577–589. https://doi.org/10.1145/3332165.3347899
  • Li et al. (2020a) Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. 2020a. Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements. http://arxiv.org/abs/2010.04295 arXiv:2010.04295 [cs].
  • Li et al. (2021a) Yang Li, Gang Li, Xin Zhou, Mostafa Dehghani, and Alexey Gritsenko. 2021a. VUT: Versatile UI Transformer for Multi-Modal Multi-Task User Interface Modeling. http://arxiv.org/abs/2112.05692 arXiv:2112.05692 [cs].
  • Li et al. (2017b) Yixuan Li, Pingmei Xu, Dmitry Lagun, and Vidhya Navalpakkam. 2017b. Towards Measuring and Inferring User Interest from Gaze. In Proceedings of the 26th International Conference on World Wide Web Companion (WWW ’17 Companion). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 525–533. https://doi.org/10.1145/3041021.3054182
  • Li (2021) Zhoufan Li. 2021. Qualitative Coding in the Computational Era: A Hybrid Approach to Improve Reliability and Reduce Effort for Coding Ethnographic Interviews.
  • Li et al. (2020c) Z. Li, Z. G. Tian, J. W. Wang, and W. M. Wang. 2020c. Extraction of affective responses from customer reviews: an opinion mining and machine learning approach. International Journal of Computer Integrated Manufacturing 33, 7 (July 2020), 670–685. https://doi.org/10.1080/0951192X.2019.1571240
  • Liao et al. (2020) Jing Liao, Preben Hansen, and Chunlei Chai. 2020. A framework of artificial intelligence augmented design support. Human–Computer Interaction 35, 5-6 (Nov. 2020), 511–544. https://doi.org/10.1080/07370024.2020.1733576
  • Liao et al . (2023) Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, and Jennifer Wortman Vaughan. 2023. Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience. http://arxiv.org/abs/2302.10395 arXiv:2302.10395 [cs].
  • Lima and Gresse von Wangenheim (2022) Adriano Luiz de Souza Lima and Christiane Gresse von Wangenheim. 2022. Assessing the Visual Esthetics of User Interfaces: A Ten-Year Systematic Mapping. International Journal of Human–Computer Interaction 38, 2 (Jan. 2022), 144–164. https://doi.org/10.1080/10447318.2021.1926118
  • Liu et al. (2019) Dawei Liu, Ying Cao, Rynson W.H. Lau, and Antoni B. Chan. 2019. ButtonTips: Design Web Buttons with Suggestions. In 2019 IEEE International Conference on Multimedia and Expo (ICME). 466–471. https://doi.org/10.1109/ICME.2019.00087
  • Liu et al . (2018) Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. 2018. Reinforcement Learning on Web Interfaces Using Workflow-Guided Exploration. http://arxiv.org/abs/1802.08802 arXiv:1802.08802 [cs].
  • Llàcer Giner (2020) David Llàcer Giner. 2020. A Deep Learning Based Approach to Automated App Testing. Master's thesis. Universitat Politècnica de Catalunya. https://upcommons.upc.edu/handle/2117/335561
  • Lopez and Guerrero (2017) Gustavo Lopez and Luis A. Guerrero. 2017. Awareness Supporting Technologies used in Collaborative Systems: A Systematic Literature Review. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17) . Association for Computing Machinery, New York, NY, USA, 808–820. https://doi.org/10.1145/2998181.2998281
  • Lu et al . (2022) Yuwen Lu, Chengzhi Zhang, Iris Zhang, and Toby Jia-Jun Li. 2022. Bridging the Gap Between UX Practitioners’ Work Practices and AI-Enabled Design Support Tools. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22) . Association for Computing Machinery, New York, NY, USA, 1–7. https://doi.org/10.1145/3491101.3519809
  • Lubars and Tan (2019) Brian Lubars and Chenhao Tan. 2019. Ask not what AI can do, but what AI should do: Towards a framework of task delegability. In Advances in Neural Information Processing Systems , Vol. 32. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2019/hash/d67d8ab4f4c10bf22aa353e27879133c-Abstract.html
  • López ([n. d.]) Daniel Peña López. [n. d.]. Triggering ideas with Generative AI. ([n. d.]).
  • Löbbers et al . (2023) Sebastian Löbbers, Mathieu Barthet, and György Fazekas. 2023. AI as mediator between composers, sound designers, and creative media producers. http://arxiv.org/abs/2303.01457 arXiv:2303.01457 [cs].
  • Maalej et al . (2016) Walid Maalej, Zijad Kurtanović, Hadeer Nabil, and Christoph Stanik. 2016. On the automatic classification of app reviews. Requirements Engineering 21, 3 (Sept. 2016), 311–331. https://doi.org/10.1007/s00766-016-0251-9
  • Malik et al . (2023) Subtain Malik, Muhammad Tariq Saeed, Marya Jabeen Zia, Shahzad Rasool, Liaquat Ali Khan, and Mian Ilyas Ahmed. 2023. Reimagining Application User Interface (UI) Design using Deep Learning Methods: Challenges and Opportunities. http://arxiv.org/abs/2303.13055 arXiv:2303.13055 [cs].
  • Manandhar et al . (2021) Dipu Manandhar, Hailin Jin, and John Collomosse. 2021. Magic Layouts: Structural Prior for Component Detection in User Interface Designs. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) . IEEE, Nashville, TN, USA, 15804–15813. https://doi.org/10.1109/CVPR46437.2021.01555
  • Marathe and Toyama (2018) Megh Marathe and Kentaro Toyama. 2018. Semi-Automated Coding for Qualitative Research: A User-Centered Inquiry and Initial Prototypes. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems . ACM, Montreal QC Canada, 1–12. https://doi.org/10.1145/3173574.3173922
  • Marsh et al . (1996) Richard L. Marsh, Joshua D. Landau, and Jason L. Hicks. 1996. How examples may (and may not) constrain creativity. Memory & Cognition 24, 5 (Sept. 1996), 669–680. https://doi.org/10.3758/BF03201091
  • Martelaro and Ju (2017) Nikolas Martelaro and Wendy Ju. 2017. The Needfinding Machine. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’17) . Association for Computing Machinery, New York, NY, USA, 355–356. https://doi.org/10.1145/3029798.3034811
  • Memmert and Tavanapour (2023) Lucas Memmert and Navid Tavanapour. 2023. Towards Human-AI-Collaboration in Brainstorming: Empirical Insights into the Perception of Working with a Generative AI. (2023).
  • Mendes et al. (2015) Marília S. Mendes, Elizabeth Furtado, Vasco Furtado, and Miguel F. De Castro. 2015. Investigating Usability and User Experience from the User Postings in Social Systems. In Social Computing and Social Media (Lecture Notes in Computer Science), Gabriele Meiselwitz (Ed.). Vol. 9182. Springer International Publishing, Cham, 216–228. https://doi.org/10.1007/978-3-319-20367-6_22
  • Mendes and Furtado (2017) Marília Soares Mendes and Elizabeth Sucupira Furtado. 2017. UUX-Posts: a tool for extracting and classifying postings related to the use of a system. In Proceedings of the 8th Latin American Conference on Human-Computer Interaction . ACM, Antigua Guatemala Guatemala, 1–8. https://doi.org/10.1145/3151470.3151471
  • Mesbah et al . (2023) Sepideh Mesbah, Ines Arous, Jie Yang, and Alessandro Bozzon. 2023. HybridEval: A Human-AI Collaborative Approach for Evaluating Design Ideas at Scale. In Proceedings of the ACM Web Conference 2023 . ACM, Austin TX USA, 3837–3848. https://doi.org/10.1145/3543507.3583496
  • Miniukovich and De Angeli (2015) Aliaksei Miniukovich and Antonella De Angeli. 2015. Computation of Interface Aesthetics. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15) . Association for Computing Machinery, New York, NY, USA, 1163–1172. https://doi.org/10.1145/2702123.2702575
  • Moher et al . (2009) D. Moher, A. Liberati, J. Tetzlaff, D. G Altman, and for the PRISMA Group. 2009. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339, jul21 1 (July 2009), b2535–b2535. https://doi.org/10.1136/bmj.b2535
  • Mohian and Csallner (2022) Soumik Mohian and Christoph Csallner. 2022. PSDoodle: fast app screen search via partial screen doodle. In Proceedings of the 9th IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft ’22) . Association for Computing Machinery, New York, NY, USA, 89–99. https://doi.org/10.1145/3524613.3527816
  • Mohian and Csallner (2023) Soumik Mohian and Christoph Csallner. 2023. Searching Mobile App Screens via Text + Doodle. https://doi.org/10.48550/arXiv.2305.06165 arXiv:2305.06165 [cs].
  • Moore et al . (2023) Steven Moore, Q. Vera Liao, and Hariharan Subramonyam. 2023. fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems . ACM, Hamburg Germany, 1–19. https://doi.org/10.1145/3544548.3581242
  • Moran (2018) Kate Moran. 2018. Quantitative User-Research Methodologies: An Overview. https://www.nngroup.com/articles/quantitative-user-research-methods/
  • Moran et al. (2020) Kevin Moran, Carlos Bernal-Cárdenas, Michael Curcio, Richard Bonett, and Denys Poshyvanyk. 2020. Machine Learning-Based Prototyping of Graphical User Interfaces for Mobile Apps. IEEE Transactions on Software Engineering 46, 2 (Feb. 2020), 196–221. https://doi.org/10.1109/TSE.2018.2844788
  • Mozaffari et al . (2022) Mohammad Amin Mozaffari, Xinyuan Zhang, Jinghui Cheng, and Jin L. C. Guo. 2022. GANSpiration: Balancing Targeted and Serendipitous Inspiration in User Interface Design with Style-Based Generative Adversarial Network. In CHI Conference on Human Factors in Computing Systems . 1–15. https://doi.org/10.1145/3491102.3517511 arXiv:2203.03827 [cs].
  • Natarajan and Csallner (2018) Siva Natarajan and Christoph Csallner. 2018. P2A: a tool for converting pixels to animated mobile application user interfaces. In Proceedings of the 5th International Conference on Mobile Software Engineering and Systems (MOBILESoft ’18) . Association for Computing Machinery, New York, NY, USA, 224–235. https://doi.org/10.1145/3197231.3197249
  • Nielsen (2017) Jakob Nielsen. 2017. A 100-Year View of User Experience (by Jakob Nielsen). https://www.nngroup.com/articles/100-years-ux/
  • Nightingale (2009) Alison Nightingale. 2009. A guide to systematic literature reviews. Surgery (Oxford) 27, 9 (Sept. 2009), 381–384. https://doi.org/10.1016/j.mpsur.2009.07.005
  • Norman and Nielsen (1998) Don Norman and Jakob Nielsen. 1998. The Definition of User Experience. Nielsen Norman Group (Aug 1998). https://www.nngroup.com/articles/definition-user-experience/
  • Norman (2010) Donald A Norman. 2010. The research-Practice Gap: The need for translational developers. interactions 17, 4 (2010), 9–12.
  • Novák et al. (2023) Jakub Štěpán Novák, Jan Masner, Petr Benda, Pavel Šimek, and Vojtěch Merunka. 2023. Eye Tracking, Usability, and User Experience: A Systematic Review. International Journal of Human–Computer Interaction (June 2023), 1–17. https://doi.org/10.1080/10447318.2023.2221600
  • O’Donovan et al . (2011) Peter O’Donovan, Aseem Agarwala, and Aaron Hertzmann. 2011. Color compatibility from large datasets. ACM Transactions on Graphics 30, 4 (July 2011), 1–12. https://doi.org/10.1145/2010324.1964958
  • O’Donovan et al . (2014) Peter O’Donovan, Jānis Lībeks, Aseem Agarwala, and Aaron Hertzmann. 2014. Exploratory font selection using crowdsourced attributes. ACM Transactions on Graphics 33, 4 (July 2014), 92:1–92:9. https://doi.org/10.1145/2601097.2601110
  • Olson and Kellogg (2014) Judith S Olson and Wendy A Kellogg. 2014. Ways of Knowing in HCI . Vol. 2. Springer.
  • Oztekin et al . (2013) Asil Oztekin, Dursun Delen, Ali Turkyilmaz, and Selim Zaim. 2013. A machine learning-based usability evaluation method for eLearning systems. Decision Support Systems 56 (Dec. 2013), 63–73. https://doi.org/10.1016/j.dss.2013.05.003
  • Pang et al . (2016) Xufang Pang, Ying Cao, Rynson W. H. Lau, and Antoni B. Chan. 2016. Directing user attention via visual flow on web designs. ACM Transactions on Graphics 35, 6 (Dec. 2016), 240:1–240:11. https://doi.org/10.1145/2980179.2982422
  • Pater et al . (2021) Jessica Pater, Amanda Coupe, Rachel Pfafman, Chanda Phelan, Tammy Toscos, and Maia Jacobs. 2021. Standardizing Reporting of Participant Compensation in HCI: A Systematic Literature Review and Recommendations for the Field. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems . ACM, Yokohama Japan, 1–16. https://doi.org/10.1145/3411764.3445734
  • Peng et al. (2022) Chao Peng, Zhao Zhang, Zhengwei Lv, and Ping Yang. 2022. MUBot: Learning to Test Large-Scale Commercial Android Apps like a Human. In 2022 IEEE International Conference on Software Maintenance and Evolution (ICSME). 543–552. https://doi.org/10.1109/ICSME55016.2022.00074
  • Pernice (2019) Kara Pernice. 2019. Affinity Diagramming: Collaborate, Sort and Prioritize UX Ideas (Video). https://www.nngroup.com/videos/affinity-diagramming/
  • Petersen et al. (2020) Curtis Lee Petersen, Ryan Halter, David Kotz, Lorie Loeb, Summer Cook, Dawna Pidgeon, Brock C. Christensen, and John A. Batsis. 2020. Using Natural Language Processing and Sentiment Analysis to Augment Traditional User-Centered Design: Development and Usability Study. JMIR mHealth and uHealth 8, 8 (Aug. 2020), e16862. https://doi.org/10.2196/16862
  • Ponce et al . (2018) Pedro Ponce, David Balderas, Therese Peffer, and Arturo Molina. 2018. Deep learning for automatic usability evaluations based on images: A case study of the usability heuristics of thermostats. Energy and Buildings 163 (March 2018), 111–120. https://doi.org/10.1016/j.enbuild.2017.12.043
  • Rahman et al . (2021) Soliha Rahman, Vinoth Pandian Sermuga Pandian, and Matthias Jarke. 2021. RUITE: Refining UI Layout Aesthetics Using Transformer Encoder. In 26th International Conference on Intelligent User Interfaces - Companion (IUI ’21 Companion) . Association for Computing Machinery, New York, NY, USA, 81–83. https://doi.org/10.1145/3397482.3450716
  • Rawles et al . (2023) Christopher Rawles, Alice Li, Daniel Rodriguez, Oriana Riva, and Timothy Lillicrap. 2023. Android in the Wild: A Large-Scale Dataset for Android Device Control. https://doi.org/10.48550/arXiv.2307.10088 arXiv:2307.10088 [cs].
  • Reinecke et al . (2013) Katharina Reinecke, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z Gajos. 2013. Predicting users’ first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness. In Proceedings of the SIGCHI conference on human factors in computing systems . 2049–2058.
  • Riegler and Holzmann (2015) Andreas Riegler and Clemens Holzmann. 2015. UI-CAT: Calculating user interface complexity metrics for mobile applications. In Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia . 390–394.
  • Rietz and Maedche (2020) Tim Rietz and Alexander Maedche. 2020. Towards the Design of an Interactive Machine Learning System for Qualitative Coding. In International Conference on Information Systems, ICIS 2020 - Making Digital Inclusive: Blending the Local and the Global, India, December 13-16, 2020. Ed.: Joey George . 1830. https://publikationen.bibliothek.kit.edu/1000124563
  • Rietz and Maedche (2021) Tim Rietz and Alexander Maedche. 2021. Cody: An AI-Based System to Semi-Automate Coding for Qualitative Research. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems . ACM, Yokohama Japan, 1–14. https://doi.org/10.1145/3411764.3445591
  • Rombach et al . (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. 2022. High-Resolution Image Synthesis with Latent Diffusion Models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) . IEEE, New Orleans, LA, USA, 10674–10685. https://doi.org/10.1109/CVPR52688.2022.01042
  • Rosala (2020) Maria Rosala. 2020. The Discovery Phase in UX Projects. https://www.nngroup.com/articles/discovery-phase/
  • Rosala (2022) Maria Rosala. 2022. How to Analyze Qualitative Data from UX Research: Thematic Analysis. https://www.nngroup.com/articles/thematic-analysis/
  • Rosenholtz et al . (2011) Ruth Rosenholtz, Amal Dorai, and Rosalind Freeman. 2011. Do predictions of visual perception aid design? ACM Transactions on Applied Perception 8, 2 (Feb. 2011), 12:1–12:20. https://doi.org/10.1145/1870076.1870080
  • Ruiz and Snoeck (2022) Jenny Ruiz and Monique Snoeck. 2022. Feedback Generation for Automatic User Interface Design Evaluation. In Software Technologies (Communications in Computer and Information Science) , Hans-Georg Fill, Marten van Sinderen, and Leszek A. Maciaszek (Eds.). Springer International Publishing, Cham, 67–93. https://doi.org/10.1007/978-3-031-11513-4_4
  • Russell and Norvig (2010) Stuart J. Russell and Peter Norvig. 2010. Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.
  • Salehi (2023) Niloufar Salehi. 2023. I tried out SyntheticUsers, so you don’t have to. https://niloufars.substack.com/p/i-tried-out-syntheticusers-so-you
  • Salminen et al . (2020) Joni Salminen, Kathleen Guan, Soon-Gyo Jung, Shammur A. Chowdhury, and Bernard J. Jansen. 2020. A Literature Review of Quantitative Persona Creation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems . ACM, Honolulu HI USA, 1–14. https://doi.org/10.1145/3313831.3376502
  • Salminen et al . (2021) Joni Salminen, Kathleen Guan, Soon-Gyo Jung, and Bernard J. Jansen. 2021. A Survey of 15 Years of Data-Driven Persona Development. International Journal of Human–Computer Interaction 37, 18 (Nov. 2021), 1685–1708. https://doi.org/10.1080/10447318.2021.1908670
  • Samele and Burny (2023) Alberto Samele and Nicolas Burny. 2023. Bootstrapped Evaluation with OctoDollop: A Mobile Application for Evaluating Mobile GUI Aesthetics in Context. In Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS ’23 Companion) . Association for Computing Machinery, New York, NY, USA, 73–75. https://doi.org/10.1145/3596454.3597186
  • Schoop et al . (2022) Eldon Schoop, Xin Zhou, Gang Li, Zhourong Chen, Bjoern Hartmann, and Yang Li. 2022. Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis. In CHI Conference on Human Factors in Computing Systems . ACM, New Orleans LA USA, 1–21. https://doi.org/10.1145/3491102.3517497
  • Sermuga Pandian et al. (2022) Vinoth Pandian Sermuga Pandian, Abdullah Shams, Sarah Suleri, and Matthias Jarke. 2022. LoFi Sketch: A Large Scale Dataset of Smartphone Low Fidelity Sketches. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New Orleans LA USA, 1–5. https://doi.org/10.1145/3491101.3519624
  • Sermuga Pandian et al . (2021c) Vinoth Pandian Sermuga Pandian, Sarah Suleri, Christian Beecks, and Matthias Jarke. 2021c. MetaMorph: AI Assistance to Transform Lo-Fi Sketches to Higher Fidelities. In Proceedings of the 32nd Australian Conference on Human-Computer Interaction (OzCHI ’20) . Association for Computing Machinery, New York, NY, USA, 403–412. https://doi.org/10.1145/3441000.3441030
  • Sermuga Pandian et al . (2021a) Vinoth Pandian Sermuga Pandian, Sarah Suleri, and Matthias Jarke. 2021a. SynZ: Enhanced Synthetic Dataset for Training UI Element Detectors. In 26th International Conference on Intelligent User Interfaces . ACM, College Station TX USA, 67–69. https://doi.org/10.1145/3397482.3450725
  • Sermuga Pandian et al. (2021b) Vinoth Pandian Sermuga Pandian, Sarah Suleri, and Matthias Jarke. 2021b. UISketch: A Large-Scale Dataset of UI Element Sketches. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–14. https://doi.org/10.1145/3411764.3445784
  • Shen et al. (2015) Chengyao Shen, Xun Huang, and Qi Zhao. 2015. Predicting Eye Fixations on Webpage With an Ensemble of Early Features and High-Level Representations from Deep Network. IEEE Transactions on Multimedia 17, 11 (Nov. 2015), 2084–2093. https://doi.org/10.1109/TMM.2015.2483370
  • Shen et al . (2023) Hong Shen, Tianshi Li, Toby Jia-Jun Li, Joon Sung Park, and Diyi Yang. 2023. Shaping the Emerging Norms of Using Large Language Models in Social Computing Research. http://arxiv.org/abs/2307.04280 arXiv:2307.04280 [cs].
  • Shin et al . (2023) Joon Gi Shin, Janin Koch, Andrés Lucero, Peter Dalsgaard, and Wendy E. Mackay. 2023. Integrating AI in Human-Human Collaborative Ideation. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems . ACM, Hamburg Germany, 1–5. https://doi.org/10.1145/3544549.3573802
  • Shneiderman (2022) Ben Shneiderman. 2022. Human-Centered AI. Oxford University Press.
  • Siddaway et al. (2019) Andy P. Siddaway, Alex M. Wood, and Larry V. Hedges. 2019. How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses. Annual Review of Psychology 70, 1 (2019), 747–770. https://doi.org/10.1146/annurev-psych-010418-102803
  • Siemon (2023) Dominik Siemon. 2023. Let the computer evaluate your idea: evaluation apprehension in human-computer collaboration. Behaviour & Information Technology 42, 5 (April 2023), 459–477. https://doi.org/10.1080/0144929X.2021.2023638
  • Silva-Rodríguez et al. (2019) Viridiana Silva-Rodríguez, Sandra Edith Nava-Muñoz, Luis A. Castro, Francisco E. Martínez-Pérez, Héctor G. Pérez-González, and Francisco Torres-Reyes. 2019. Machine Learning Methods for Inferring Interaction Design Patterns from Textual Requirements. Proceedings 31, 1 (2019), 26. https://doi.org/10.3390/proceedings2019031026
  • Sobolevsky et al . (2023) Andrey Sobolevsky, Guillaume-Alexandre Bilodeau, Jinghui Cheng, and Jin L. C. Guo. 2023. GUILGET: GUI Layout GEneration with Transformer. https://doi.org/10.48550/arXiv.2304.09012 arXiv:2304.09012 [cs].
  • Stefanidi et al . (2023) Evropi Stefanidi, Marit Bentvelzen, Paweł W. Woźniak, Thomas Kosch, Mikołaj P. Woźniak, Thomas Mildner, Stefan Schneegass, Heiko Müller, and Jasmin Niess. 2023. Literature Reviews in HCI: A Review of Reviews. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems . ACM, Hamburg Germany, 1–24. https://doi.org/10.1145/3544548.3581332
  • Stige et al . (2023) Åsne Stige, Efpraxia D Zamani, Patrick Mikalef, and Yuzhen Zhu. 2023. Artificial intelligence (AI) for user experience (UX) design: a systematic literature review and future research agenda. Information Technology & People (2023).
  • Stone et al . (2022) Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, et al . 2022. Artificial intelligence and life in 2030: the one hundred year study on artificial intelligence. arXiv preprint arXiv:2211.06318 (2022).
  • Su et al . (2021) Yuhui Su, Zhe Liu, Chunyang Chen, Junjie Wang, and Qing Wang. 2021. OwlEyes-online: a fully automated platform for detecting and localizing UI display issues. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2021) . Association for Computing Machinery, New York, NY, USA, 1500–1504. https://doi.org/10.1145/3468264.3473109
  • Swearngin et al. (2018) Amanda Swearngin, Mira Dontcheva, Wilmot Li, Joel Brandt, Morgan Dixon, and Amy J. Ko. 2018. Rewire: Interface Design Assistance from Examples. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3174078
  • Swearngin and Li (2019) Amanda Swearngin and Yang Li. 2019. Modeling Mobile Interface Tappability Using Crowdsourcing and Deep Learning. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19) . Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3290605.3300305
  • Swearngin et al . (2020) Amanda Swearngin, Chenglong Wang, Alannah Oleson, James Fogarty, and Amy J. Ko. 2020. Scout: Rapid Exploration of Interface Layout Alternatives through High-Level Design Constraints. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems . 1–13. https://doi.org/10.1145/3313831.3376593 arXiv:2001.05424 [cs].
  • Tan et al. (2022) Hao Tan, Shenglan Peng, Jia-Xin Liu, Chun-Peng Zhu, and Fan Zhou. 2022. Generating Personas for Products on Social Media: A Mixed Method to Analyze Online Users. International Journal of Human–Computer Interaction 38, 13 (Aug. 2022), 1255–1266. https://doi.org/10.1080/10447318.2021.1990520
  • Tavanapour et al. (2020) Navid Tavanapour, Daphne Theodorakopoulos, and Eva A. C. Bittner. 2020. A Conversational Agent as Facilitator: Guiding Groups Through Collaboration Processes. In Learning and Collaboration Technologies. Human and Technology Ecosystems (Lecture Notes in Computer Science), Panayiotis Zaphiris and Andri Ioannou (Eds.). Vol. 12206. Springer International Publishing, Cham, 108–129. https://doi.org/10.1007/978-3-030-50506-6_9
  • Todi et al . (2021) Kashyap Todi, Luis A. Leiva, Daniel Buschek, Pin Tian, and Antti Oulasvirta. 2021. Conversations with GUIs. In Designing Interactive Systems Conference 2021 . ACM, Virtual Event USA, 1447–1457. https://doi.org/10.1145/3461778.3462124
  • Tuch et al . (2013) Alexandre N. Tuch, Rune Trusell, and Kasper Hornbæk. 2013. Analyzing users’ narratives to understand experience with interactive products. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems . ACM, Paris France, 2079–2088. https://doi.org/10.1145/2470654.2481285
  • Verheijden and Funk (2023) Mathias Peter Verheijden and Mathias Funk. 2023. Collaborative Diffusion: Boosting Designerly Co-Creation with Generative AI. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems . ACM, Hamburg Germany, 1–8. https://doi.org/10.1145/3544549.3585680
  • Vontell (2019) Aaron Richard Vontell. 2019. Bility: Automated Accessibility Testing for Mobile Applications. Thesis. Massachusetts Institute of Technology. https://dspace.mit.edu/handle/1721.1/121685
  • Wakatsuki and Yamamoto (2021) Yuki Wakatsuki and Yusuke Yamamoto. 2021. Clustering to Support Users Finding Unexpected Perspectives in Brainstorming. In 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI). 494–497. https://doi.org/10.1109/IIAI-AAI53430.2021.00086
  • Wang et al . (2023) Bryan Wang, Gang Li, and Yang Li. 2023. Enabling Conversational Interaction with Mobile UI using Large Language Models. http://arxiv.org/abs/2209.08655 arXiv:2209.08655 [cs].
  • Wang et al . (2021) Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, and Yang Li. 2021. Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning. http://arxiv.org/abs/2108.03353 arXiv:2108.03353 [cs].
  • Wang et al . (2020) Guolong Wang, Zheng Qin, Junchi Yan, and Liu Jiang. 2020. Learning to Select Elements for Graphic Design. In Proceedings of the 2020 International Conference on Multimedia Retrieval (ICMR ’20) . Association for Computing Machinery, New York, NY, USA, 91–99. https://doi.org/10.1145/3372278.3390678
  • Wang et al . (2022) Yawen Wang, Junjie Wang, Hongyu Zhang, Xuran Ming, Lin Shi, and Qing Wang. 2022. Where is Your App Frustrating Users?. In Proceedings of the 44th International Conference on Software Engineering . 2427–2439. https://doi.org/10.1145/3510003.3510189 arXiv:2204.09310 [cs].
  • Wei et al . (2023) Jialiang Wei, Anne-Lise Courbis, Thomas Lambolais, Binbin Xu, Pierre Louis Bernard, and Gérard Dray. 2023. Boosting GUI Prototyping with Diffusion Models. http://arxiv.org/abs/2306.06233 arXiv:2306.06233 [cs].
  • Wen et al . (2023) Hao Wen, Yuanchun Li, Guohong Liu, Shanhui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang, Yunhao Liu, Yaqin Zhang, and Yunxin Liu. 2023. Empowering LLM to use Smartphone for Intelligent Task Automation. https://doi.org/10.48550/arXiv.2308.15272 arXiv:2308.15272 [cs].
  • Wobbrock and Kientz (2016) Jacob O. Wobbrock and Julie A. Kientz. 2016. Research contributions in human-computer interaction. Interactions 23, 3 (April 2016), 38–44. https://doi.org/10.1145/2907069
  • Wohlin (2014) Claes Wohlin. 2014. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering . ACM, London England United Kingdom, 1–10. https://doi.org/10.1145/2601248.2601268
  • Wu et al . (2023a) Jason Wu, Amanda Swearngin, Xiaoyi Zhang, Jeffrey Nichols, and Jeffrey P. Bigham. 2023a. Screen Correspondence: Mapping Interchangeable Elements between UIs. http://arxiv.org/abs/2301.08372 arXiv:2301.08372 [cs].
  • Wu et al . (2023b) Jason Wu, Siyan Wang, Siman Shen, Yi-Hao Peng, Jeffrey Nichols, and Jeffrey P. Bigham. 2023b. WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics. https://doi.org/10.48550/arXiv.2301.13280 arXiv:2301.13280 [cs].
  • Wu et al . (2020) Ziming Wu, Yulun Jiang, Yiding Liu, and Xiaojuan Ma. 2020. Predicting and Diagnosing User Engagement with Mobile UI Animation via a Data-Driven Approach. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems . ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376324
  • Wu et al . (2019) Ziming Wu, Taewook Kim, Quan Li, and Xiaojuan Ma. 2019. Understanding and Modeling User-Perceived Brand Personality from Mobile Application UIs. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19) . Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300443
  • Xiao and Watson (2019) Yu Xiao and Maria Watson. 2019. Guidance on Conducting a Systematic Literature Review. Journal of Planning Education and Research 39, 1 (March 2019), 93–112. https://doi.org/10.1177/0739456X17723971 Publisher: SAGE Publications Inc.
  • Xiao et al . (2023) Ziang Xiao, Xingdi Yuan, Q. Vera Liao, Rania Abdelghani, and Pierre-Yves Oudeyer. 2023. Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding. In 28th International Conference on Intelligent User Interfaces . ACM, Sydney NSW Australia, 75–78. https://doi.org/10.1145/3581754.3584136
  • Xiao et al . (2020a) Ziang Xiao, Michelle X. Zhou, Wenxi Chen, Huahai Yang, and Changyan Chi. 2020a. If I Hear You Correctly: Building and Evaluating Interview Chatbots with Active Listening Skills. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems . ACM, Honolulu HI USA, 1–14. https://doi.org/10.1145/3313831.3376131
  • Xiao et al . (2020b) Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Changyan Chi, Wenxi Chen, and Huahai Yang. 2020b. Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions. ACM Transactions on Computer-Human Interaction 27, 3 (June 2020), 1–37. https://doi.org/10.1145/3381804
  • Xing et al . (2021) Baixi Xing, Huahao Si, Junbin Chen, Minchao Ye, and Lei Shi. 2021. Computational model for predicting user aesthetic preference for GUI using DCNNs. CCF Transactions on Pervasive Computing and Interaction 3, 2 (June 2021), 147–169. https://doi.org/10.1007/s42486-021-00064-4
  • Xu et al . (2015) Pingmei Xu, Krista A. Ehinger, Yinda Zhang, Adam Finkelstein, Sanjeev R. Kulkarni, and Jianxiong Xiao. 2015. TurkerGaze: Crowdsourcing Saliency with Webcam based Eye Tracking. https://doi.org/10.48550/arXiv.1504.06755 arXiv:1504.06755 [cs].
  • Xu et al . (2016) Pingmei Xu, Yusuke Sugano, and Andreas Bulling. 2016. Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16) . Association for Computing Machinery, New York, NY, USA, 3299–3310. https://doi.org/10.1145/2858036.2858479
  • Xu (2019) Wei Xu. 2019. Toward human-centered AI: a perspective from human-computer interaction. Interactions 26, 4 (June 2019), 42–46. https://doi.org/10.1145/3328485
  • Yan et al. (2023) An Yan, Zhengyuan Yang, Wanrong Zhu, Kevin Lin, Linjie Li, Jianfeng Wang, Jianwei Yang, Yiwu Zhong, Julian McAuley, Jianfeng Gao, et al. 2023. GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation. arXiv preprint arXiv:2311.07562 (2023).
  • Yang et al . (2019a) Bai Yang, Ying Liu, Yan Liang, and Min Tang. 2019a. Exploiting user experience from online customer reviews for product design. International Journal of Information Management 46 (June 2019), 173–186. https://doi.org/10.1016/j.ijinfomgt.2018.12.006
  • Yang et al. (2021a) Bo Yang, Zhenchang Xing, Xin Xia, Chunyang Chen, Deheng Ye, and Shanping Li. 2021a. Don't Do That! Hunting Down Visual Design Smells in Complex UIs Against Design Guidelines. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). 761–772. https://doi.org/10.1109/ICSE43902.2021.00075
  • Yang et al. (2021b) Bo Yang, Zhenchang Xing, Xin Xia, Chunyang Chen, Deheng Ye, and Shanping Li. 2021b. UIS-Hunter: Detecting UI Design Smells in Android Apps. In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). 89–92. https://doi.org/10.1109/ICSE-Companion52605.2021.00043
  • Yang et al . (2020) Qian Yang, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020. Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems . ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376301
  • Yang et al . (2019b) Qian Yang, Aaron Steinfeld, and John Zimmerman. 2019b. Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19) . Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3290605.3300468
  • Yao et al . (2023) Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2023. WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents. https://doi.org/10.48550/arXiv.2207.01206 arXiv:2207.01206 [cs].
  • Yildirim et al . (2022) Nur Yildirim, Alex Kass, Teresa Tung, Connor Upton, Donnacha Costello, Robert Giusti, Sinem Lacin, Sara Lovic, James M O’Neill, Rudi O’Reilly Meehan, Eoin Ó Loideáin, Azzurra Pini, Medb Corcoran, Jeremiah Hayes, Diarmuid J Cahalane, Gaurav Shivhare, Luigi Castoro, Giovanni Caruso, Changhoon Oh, James McCann, Jodi Forlizzi, and John Zimmerman. 2022. How Experienced Designers of Enterprise Applications Engage AI as a Design Material. In CHI Conference on Human Factors in Computing Systems . ACM, New Orleans LA USA, 1–13. https://doi.org/10.1145/3491102.3517491
  • Youmans and Arciszewski (2014) Robert J. Youmans and Thomaz Arciszewski. 2014. Design fixation: Classifications and modern methods of prevention. AI EDAM 28, 2 (May 2014), 129–137. https://doi.org/10.1017/S0890060414000043 Publisher: Cambridge University Press.
  • Zang et al . (2021) Xiaoxue Zang, Ying Xu, and Jindong Chen. 2021. Multimodal Icon Annotation For Mobile Applications. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction . ACM, Toulouse & Virtual France, 1–11. https://doi.org/10.1145/3447526.3472064
  • Zhang et al . (2023) Junyi Zhang, Jiaqi Guo, Shizhao Sun, Jian-Guang Lou, and Dongmei Zhang. 2023. LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models. https://doi.org/10.48550/arXiv.2303.11589 arXiv:2303.11589 [cs].
  • Zhang et al . (2016) Xiang Zhang, Hans-Frederick Brown, and Anil Shankar. 2016. Data-driven Personas: Constructing Archetypal Users with Clickstreams and User Telemetry. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems . ACM, San Jose California USA, 5350–5359. https://doi.org/10.1145/2858036.2858523
  • Zhang et al . (2021) Xiaoyi Zhang, Lilian De Greef, Amanda Swearngin, Samuel White, Kyle Murray, Lisa Yu, Qi Shan, Jeffrey Nichols, Jason Wu, Chris Fleizach, Aaron Everitt, and Jeffrey P Bigham. 2021. Screen Recognition: Creating Accessibility Metadata for Mobile Applications from Pixels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems . ACM, Yokohama Japan, 1–15. https://doi.org/10.1145/3411764.3445186
  • Zhang et al. (2022) Xiyuan Zhang, Xinyang Mao, Yang Yin, Chunlei Chai, and Ting Zhang. 2022. Melting Your Models: An Integrated AI-based Creativity Support Tool for Inspiration Evolution. In 2022 15th International Symposium on Computational Intelligence and Design (ISCID). 97–101. https://doi.org/10.1109/ISCID56505.2022.00029
  • Zhao et al . (2020b) Dehai Zhao, Zhenchang Xing, Chunyang Chen, Xiwei Xu, Liming Zhu, Guoqiang Li, and Jinshui Wang. 2020b. Seenomaly: vision-based linting of GUI animation effects against design-don’t guidelines. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering (ICSE ’20) . Association for Computing Machinery, New York, NY, USA, 1286–1297. https://doi.org/10.1145/3377811.3380411
  • Zhao et al . (2018) Nanxuan Zhao, Ying Cao, and Rynson W.H. Lau. 2018. Modeling Fonts in Context: Font Prediction on Web Designs. Computer Graphics Forum 37, 7 (Oct. 2018), 385–395. https://doi.org/10.1111/cgf.13576
  • Zhao et al . (2020a) Nanxuan Zhao, Nam Wook Kim, Laura Mariah Herman, Hanspeter Pfister, Rynson W.H. Lau, Jose Echevarria, and Zoya Bylinskii. 2020a. ICONATE: Automatic Compound Icon Generation and Ideation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems . ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376618
  • Zhao et al . (2021) Tianming Zhao, Chunyang Chen, Yuanning Liu, and Xiaodong Zhu. 2021. GUIGAN: Learning to Generate GUI Designs Using Generative Adversarial Networks. https://doi.org/10.48550/arXiv.2101.09978 arXiv:2101.09978 [cs].
  • Zhu and Luo (2023) Qihao Zhu and Jianxi Luo. 2023. Toward Artificial Empathy for Human-Centered Design: A Framework. http://arxiv.org/abs/2303.10583 arXiv:2303.10583 [cs].

Appendix A

A.1. Rationale for Using Snowball Sampling

Query result counts as of August 2023:

  • ("UX" or "user experience") and ("AI" or "Artificial Intelligence" or "ML" or "Machine Learning"): ACM DL 292,808; Google Scholar 2,580
  • ("UI" or "user interface" or "UX" or "user experience" or "HCI") and ("AI" or "Artificial Intelligence" or "ML" or "Machine Learning"): ACM DL 67,164; Google Scholar 5,510,000
  • ("UI design" or "UX design" or "interaction design") and ("AI" or "ML"): ACM DL 3,222; Google Scholar 16,900

As discussed in Section 3, we used snowball sampling for literature selection instead of keyword search in popular search engines or databases. We initially attempted keyword search and tested a few advanced queries in the ACM Digital Library and Google Scholar. Advanced queries combining "UX" with "AI" or "machine learning" returned result sets far too large to screen manually, similar to (Stefanidi et al., 2023), and most of the results were not relevant to our scope. Moreover, these search engines and databases often disagreed significantly with one another in their result counts, raising questions about their reliability for this purpose. We therefore conducted snowball sampling instead. The table above lists several of the queries we attempted and their result counts as of August 2023, when this paper was written.
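To make the procedure concrete, below is a minimal sketch of the backward and forward snowballing loop described above. It is illustrative only: get_references, get_citations, and is_relevant are hypothetical stand-ins for a citation-database lookup and the screening criteria, not calls to any real library, and the fixed round limit is a simplifying assumption.

```python
from collections import deque

def snowball(seed_papers, is_relevant, get_references, get_citations, max_rounds=3):
    """Grow a set of included papers by following citations in both directions."""
    included = set(seed_papers)
    frontier = deque(seed_papers)
    for _ in range(max_rounds):
        next_frontier = deque()
        while frontier:
            paper = frontier.popleft()
            # Backward snowballing: papers this one cites.
            # Forward snowballing: papers that cite this one.
            for candidate in list(get_references(paper)) + list(get_citations(paper)):
                if candidate not in included and is_relevant(candidate):
                    included.add(candidate)
                    next_frontier.append(candidate)
        if not next_frontier:
            break
        frontier = next_frontier
    return included
```

Seeded with a small set of papers already known to be in scope, each round screens the newly discovered candidates against the inclusion criteria and stops when no new relevant papers appear or the round limit is reached.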

The Role of BIM in Integrating Digital Twin in Building Construction: A Literature Review


1. Introduction

2. Background

2.1. Concept of BIM

2.2. Concept of Digital Twin

2.3. Advancement of BIM to Digital Twin

3. Methodology

4. Literature Review

4.1. Discussion on Available Research on Digital Twin with BIM

  • Integration of BIM and DT: Douglas et al. [ 35 ] reviewed the BIM, DT, and cyber-physical systems concepts together, identifying distinct understandings of how the two technologies integrate;
  • Real-time data analysis: Opoku et al. [ 27 ] and Deng et al. [ 11 ] focused on using real-time data from sensors and other sources to enhance the DT, as well as using data analytics and machine learning algorithms to analyze these data and make predictions about building performance (a toy illustration of this theme follows this list);
  • Simulation and visualization: there has been research on using simulation and visualization technologies to enhance the DT and improve decision-making in the construction and engineering industries [ 21 , 27 ];
  • Cost and resource optimization: DT and BIM can potentially reduce costs, improve resource allocation, and increase overall efficiency in the building construction process [ 33 , 40 ];
  • BIM/DT in the context of sustainability: the integration of BIM and DT supports sustainable design and construction practices by incorporating data on energy efficiency [ 21 ], material usage [ 26 ], and environmental impact [ 18 ]; it integrates real-time data from sensors and IoT devices [ 21 ], enabling continuous monitoring [ 5 ], analysis, and proactive maintenance [ 34 ] for sustainable practices.
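To ground the real-time data analysis theme in the list above, here is a deliberately small, hypothetical sketch: a linear regression fitted on a few invented sensor readings to predict a building-performance metric. None of the data, feature names, or modeling choices come from the cited studies; scikit-learn's LinearRegression simply stands in for the "data analytics and machine learning algorithms" those papers describe.

```python
# Illustrative only: predicting a building-performance metric (hourly energy
# use, in kWh) from sensor readings streamed into a digital twin.
# All feature names and values below are invented for the example.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [indoor_temp_C, occupancy_count, solar_irradiance_Wm2]
X = np.array([
    [21.0, 12, 350.0],
    [22.5, 30, 520.0],
    [19.8,  4, 110.0],
    [23.1, 45, 640.0],
])
y = np.array([38.2, 55.9, 24.1, 70.3])  # measured energy use, kWh

model = LinearRegression().fit(X, y)

# A new reading from the sensor network drives the twin's prediction
# of the next hour's energy use.
new_reading = np.array([[22.0, 25, 480.0]])
print(model.predict(new_reading))
```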

4.2. Evolution of Digital Twin from BIM

4.3. Current Study to Compare Digital Twin with BIM

  • Concept origin: a technology's origin encompasses its history, goals, and principles. Understanding it helps researchers evaluate each technology's strengths, weaknesses, and applications, and can indicate which parts are more mature or need more research.
  • Purpose: defines each technology's scope and goals. This criterion helps determine their complementary roles and the best integration strategies to improve building design, construction, and operation.
  • Application focus: highlights each technology's primary focus, along with its pros and cons, to guide future improvements and the choice of the right technology for a project or application.
  • Features: essential to a scientific comparison between BIM and DT, as they clarify each technology's capabilities and limitations and their potential for integration and interoperability.
  • Level of detail: comparing the level of detail each technology supports allows assessing the pros and cons of integrating these technologies into building projects.
  • Scalability: allows evaluating each technology's ability to handle different types of projects and its potential limitations regarding resource requirements and integration with other technologies.
  • Main users: identifies each technology's primary users and how it meets their needs. This information can help stakeholders choose technology based on project needs and team expertise.
  • Interoperability: enables these technologies to be integrated with other systems and software, leading to greater efficiency and improved outcomes in building lifecycle management.
  • Application interface: evaluates the usability and effectiveness of the software for different users and applications.
  • Building life cycle stage: compares where BIM and DT apply across the building life cycle, helping determine which technology is more suitable for a given project.

4.3.1. Concept Origin

4.3.2. Purposes

4.3.3. Application Focus

4.3.4. Features

4.3.5. Level of Details (LoD)

4.3.6. Scalability

4.3.7. Main Users

4.3.8. Interoperability

4.3.9. Application Interface

4.3.10. Characteristics

4.4. Advancement of BIM to Improve Digital Twin in Building Construction

  • Increased interoperability: BIM technology has become more interoperable, allowing seamless data exchange between platforms and systems [ 7 ]. This makes it easier to create and update a DT with real-time data from sensors and other sources (see the toy sketch after this list).
  • Improved data accuracy: BIM technology can offer precise and comprehensive insights into a building's blueprint, construction process, and maintenance, all of which contribute to developing a more precise DT [ 12 ].
  • Increased collaboration: BIM enables collaboration among architects, engineers, and construction professionals, leading to better decision-making and improved overall outcomes [ 25 ]. When this collaboration is applied to creating a DT, it can result in a more comprehensive and effective virtual representation of the building.
  • Better visualization: BIM technology has advanced to include more realistic and interactive visualizations [ 40 ], making it easier to understand and analyze the building's performance through the DT [ 11 ].
  • More advanced simulation: BIM has also advanced to include more sophisticated simulation capabilities, allowing complex systems to be simulated and building performance to be analyzed in real time [ 40 ].
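As a toy illustration of the interoperability point above, the sketch below merges timestamped sensor readings into a dictionary-based stand-in for a digital twin's state, keyed by building-element ID. Real systems would exchange such data through IFC models or vendor platforms; the element IDs, measurement names, and values here are invented.

```python
# Toy sketch: attaching live sensor readings to a digital twin's elements.
# A real pipeline would use BIM/IFC tooling; everything here is hypothetical.
from datetime import datetime, timezone

twin_state = {
    "wall-21A": {"type": "exterior_wall", "last_reading": None},
    "hvac-03": {"type": "air_handler", "last_reading": None},
}

def ingest(element_id, measurement, value):
    """Record the latest timestamped reading on the matching twin element."""
    element = twin_state.get(element_id)
    if element is None:
        return  # unknown element: a real system would log or create it
    element["last_reading"] = {
        "measurement": measurement,
        "value": value,
        "at": datetime.now(timezone.utc).isoformat(),
    }

ingest("hvac-03", "supply_air_temp_C", 14.2)
print(twin_state["hvac-03"])
```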

5. Result and Discussion

5.1. Result and Discussion

5.2. Limitation

5.3. Future Study

6. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

Overview of the 30 reviewed studies (title; authors and year; journal or conference; research methodology; key findings):

1. Digital Twin: Vision, Benefits, Boundaries, and Creation for Buildings. Khajavi et al. (2019) [ ]. IEEE. Methodology: experimentation and testing; a sensor network was used to create a DT of a building. Key findings: proposes a framework to enable a DT of a building facade.

2. Towards a semantic Construction Digital Twin: Directions for future research. Boje et al. (2020) [ ]. Automation in Construction. Methodology: literature review in three steps: reviewing BIM, analyzing DT uses, and identifying research gaps. Key findings: BIM can be used to create a construction DT concept, allowing for more efficient construction.

3. Characterizing the Digital Twin: A systematic literature review. Jones et al. (2020) [ ]. CIRP-JMST. Methodology: literature review providing a characterization of the DT and identifying gaps in knowledge and areas for future research. Key findings: identifies 13 characteristics of the DT and its process of operation, as well as 7 knowledge gaps and topics for future research.

4. Construction with digital twin information systems. Sacks et al. (2020) [ ]. Data-Centric Engineering. Methodology: conceptual analysis of construction project management processes, digital tools, and workflow frameworks. Key findings: four core information and control concepts for DT construction, focusing on concentric control workflow cycles and prioritizing closure.

5. Differentiating Digital Twin from Digital Shadow: Elucidating a Paradigm Shift to Expedite a Smart, Sustainable Built Environment. Sepasgozar (2021) [ ]. MDPI. Methodology: literature review; quantitative scientometric analysis of DT research to identify trends, challenges, and publications across fields. Key findings: DT applications are recommended for real-time decision-making, self-operation, and remote supervision in smart cities and the engineering and construction sectors post-COVID-19.

6. Digital Twin in construction: An Empirical Analysis. El Jazzar et al. (2020) [ ]. Conference paper. Methodology: literature review of DT practice in construction; categorizes integration into Digital Model, Digital Shadow, and DT. Key findings: develops a framework for understanding DT implementation in the construction industry.

7. Digital Twins in Built Environments: An Investigation of the Characteristics, Applications, and Challenges. Shahzad et al. (2022) [ ]. MDPI. Methodology: literature review; semi-structured interviews with ten industry experts. Key findings: explores the relationship between DTs, technologies, and implementation challenges.

8. SPHERE: BIM Digital Twin Platform. Alonso et al. (2019) [ ]. MDPI. Methodology: literature review; collaborative practices facilitated using the IDDS framework and a PaaS platform for data integration and processing. Key findings: the SPHERE platform improves building energy performance, reduces costs, and enhances the indoor environment.

9. From BIM to Digital Twins: A Systematic Review of the Evolution of Intelligent Building Representations in the AEC-FM industry. Deng et al. (2021) [ ]. ITcon. Methodology: literature review of emerging technologies for BIM and DTs. Key findings: develops a five-level ladder categorization system for reviewing studies on DT applications, focusing on the building life cycle, research domains, and technologies.

10. Digital twin application in the construction industry: A literature review. Opoku et al. (2021) [ ]. Building Engineering. Methodology: systematic review of DT concepts, technologies, and applications in construction using systematic review methodology and the science mapping method. Key findings: highlights six DT applications in construction and their development across lifecycle phases, with a focus on design and engineering over demolition and recovery.

11. From BIM towards Digital Twin: Strategy and Future Development for Smart Asset Management. Lu et al. (2020) [ ]. CSIC. Methodology: literature review of the latest research and industry standards impacting BIM and asset management. Key findings: proposes a framework for smart asset management using DT technology and promotes the adoption of smart DT-enabled asset management.

12. Digital Twins for Construction Sites: Concepts, LoD Definition, and Applications. Zhang et al. (2022) [ ]. ASCE. Methodology: questionnaires and interviews used to propose a framework that enhances construction site monitoring, management, quality, efficiency, and safety. Key findings: proposes a framework for utilizing DTs to extend BIM, IoT, data storage, integration, analytics, and physical-environment interaction in construction site management.

13. A Proposed Framework for Construction 4.0 Based on a Review of Literature. Sawhney et al. (2020) [ ]. ASC. Methodology: literature review of Industry 4.0's impact on the construction sector, defining the framework, benefits, and barriers. Key findings: reveals that BIM and a CDE are crucial for Construction 4.0 implementation, transforming the industry to be efficient, quality-centered, and safe.

14. A Review of Digital Twin Applications in Construction. Madubuike et al. (2022) [ ]. ITcon. Methodology: systematic review of the literature, analyzing existing and emerging applications and identifying limitations. Key findings: evaluates DT technology's benefits in construction, compares applications, and identifies limitations.

15. Application of Digital Twin Technologies in Construction: An Overview of Opportunities and Challenges. Feng et al. (2021) [ ]. ISARC. Methodology: literature review of 23 recent publications on DT development in construction. Key findings: DT technologies in the AEC industry face challenges in data integration, security, and funding, requiring skilled professionals and advanced technologies.

16. Design and Construction Integration Technology Based on Digital Twin. Zhou et al. (2021) [ ]. PSGEC. Methodology: literature review of recent papers on the application of DT in substation design and construction integration. Key findings: improves performance, reduces construction difficulties, and simplifies maintenance by addressing low digitization intelligence.

17. Digital Twin-Driven Intelligent Construction: Features and Trends. Zhang et al. (2021) [ ]. Tech Science Press. Methodology: literature review of DT-driven intelligent construction (IC), focusing on information perception, data mining, state assessment, and intelligent optimization. Key findings: sustainable IC and DT enhance construction industry efficiency, real-time structure monitoring, and safety prediction, with four aspects proposed for digital dual-drive sustainable intelligent construction.

18. Towards Next Generation Cyber-Physical Systems and Digital Twins for Construction. Akanmu et al. (2021) [ ]. ITcon. Methodology: literature review of the evolution, applications, and limitations of next-generation CPS/DTs and their enabling technologies in construction. Key findings: explores opportunities for CPS and DT in construction, promoting increased deployment and workforce productivity.

19. Virtually Intelligent Product Systems: Digital and Physical Twins. Grieves (2019) [ ]. Astronautics & Aeronautics. Methodology: literature review exploring the interconnected Physical Twin, product lifecycle, and DT concepts. Key findings: the DT concept requires value-driven use cases, with new ones emerging as technology advances.

20. Digital twins from design to handover of constructed assets. Seaton et al. (2022) [ ]. World Built Environment Forum. Methodology: literature review, case studies, and interviews examining DTs' dimensions, applications, asset life cycle, and use cases from the perspective of built environment professionals. Key findings: DTs in the built environment require accurate definition, efficient data management, and high BIM adoption for success.

21. Digital Twin for Accelerating Sustainability in Positive Energy District: A Review of Simulation Tools and Applications. Zhang et al. (2021) [ ]. Frontiers in Sustainable Cities. Methodology: literature review of DT for positive energy districts (PEDs), discussing concepts, principles, tools, and applications. Key findings: a digital PED twin consists of virtual models, sensor network integration, data analytics, and a stakeholder layer, with limited tools offering full functionality.

22. A Review of the Digital Twin Technology in the AEC-FM Industry. Hosamo et al. (2022) [ ]. Hindawi Civil Engineering. Methodology: literature review of 77 academic publications clustered around DT applications in the AEC-FM industry. Key findings: DT implementation in the AEC-FM industry requires information standardization and a conceptual framework.

23. BIM, Digital Twin and Cyber Physical Systems: Crossing and Blurring Boundaries. Douglas et al. (2021) [ ]. Computing in Construction. Methodology: systematic review of DT, BIM, and CPS concepts, promoting discussion in construction. Key findings: identifies three distinct DT and BIM understandings requiring further investigation.

24. Climate Emergency—Managing, Building, and Delivering the Sustainable Development Goals. Gorse et al. (2020) [ ]. SEEDS. Methodology: literature review, interviews, and case studies of data collection, communication, and rapid-response processes. Key findings: proposes the growth of DT as benefits are realized over time and an approach to DT for BIM-enabled asset management.

25. Developing BIM-Based Linked Data Digital Twin Architecture to Address a Key Missing Factor: Occupants. Sobhkhiz and El-Diraby (2022) [ ]. ASCE. Methodology: case study extending the DT architecture to address occupant-related issues. Key findings: proposes an architecture for designing DTs using semantic web technologies, linked data approaches, machine learning, and BIM integration.

26. Digital Twin in the Architecture, Engineering, and Construction Industry: A Bibliometric Review. Almatared et al. (2022) [ ]. ASCE. Methodology: bibliometric analysis synthesizing DT research in the AEC industry, identifying trends, challenges, and knowledge gaps. Key findings: exposes quantitative research trends and needs for DT in the AEC industry; future research should focus on data interoperability, AIoT, and AI.

27. Digital Twins: Details of Implementation. Quirk et al. (2020) [ ]. ASHRAE. Methodology: literature review discussing DT implementation, result validation, and real-time calibration. Key findings: DTs enable ongoing monitoring of data center environments, supporting rapid decision-making, energy efficiency optimization, fewer surprises, and greater business efficiency.

28. Industry 4.0 for the Built Environment: The Role of Digital Twins and Their Application for the Built Environment. Bolpagni et al. (2021) [ ]. Structural Integrity 20. Methodology: case study with a literature review of DT vision, utilization, BIM specifications, and energy efficiency management in facility management. Key findings: discusses the DT concept, human–building interaction, post-construction use cases, property management, field data, and practical solutions.

29. The Development of a BIM-Based Interoperable Toolkit for Efficient Renovation in Buildings: From BIM to Digital Twin. Daniotti et al. (2022) [ ]. MDPI. Methodology: literature review; a European project validates the BIM4EEB renovation toolset using KPIs in real-world cases. Key findings: the Horizon 2020 project's BIM-based toolkit development, real-world validation, and benefits enhance the building renovation process.

30. Internet of Things (IoT), Building Information Modeling (BIM), and Digital Twin (DT) in Construction Industry: A Review, Bibliometric, and Network Analysis. Baghalzadeh et al. (2022) [ ]. MDPI. Methodology: literature review of 1879 studies in the Web of Science database, covering network visualization, research interactions, and influential authors. Key findings: reveals prolific authors, prominent journals, nations, popular topics, and future trends.
  • Zhang, J.; Cheng, J.C.P.; Chen, W.; Chen, K. Digital Twins for Construction Sites: Concepts, LoD Definition, and Applications. J. Manag. Eng. 2022, 38, 04021094.
  • Lu, Q.; Xie, X.; Heaton, J.; Parlikad, A.K.; Schooling, J. From BIM towards Digital Twin: Strategy and Future Development for Smart Asset Management. In Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future; Borangiu, T., Trentesaux, D., Leitão, P., Giret Boggino, A., Botti, V., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 392–404.
  • Grieves, M. Digital Twin: Manufacturing Excellence through Virtual Factory Replication; Michael W. Grieves, LLC: Cocoa Beach, FL, USA, 2015.
  • O’Gorman, T. How Digital Twins Optimize the Performance of Your Assets in a Sustainable Way. IBM Blog, 2023. Available online: https://www.ibm.com/blog/how-digital-twins-optimize-the-performance-of-your-assets-in-a-sustainable-way (accessed on 22 May 2023).
  • Sacks, R.; Brilakis, I.; Pikas, E.; Xie, H.S.; Girolami, M. Construction with Digital Twin Information Systems. Data Cent. Eng. 2020, 1, e14.
  • Autodesk. Available online: https://www.autodesk.com/solutions/digital-twin/architecture-engineering-construction (accessed on 15 February 2022).
  • Boje, C.; Guerriero, A.; Kubicki, S.; Rezgui, Y. Towards a Semantic Construction Digital Twin: Directions for Future Research. Autom. Constr. 2020, 114, 103179.
  • Chen, Y.; Kamara, J.M. A Framework for Using Mobile Computing for Information Management on Construction Sites. Autom. Constr. 2011, 20, 776–788.
  • Fischer, M.; Ashcraft, H.W.; Reed, D.; Khanzode, A. Integrating Project Delivery; Wiley: Hoboken, NJ, USA, 2017; ISBN 978-0470587355.
  • Eastman, C.M.; Teicholz, P.; Sacks, R.; Liston, K. BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors; John Wiley & Sons: Hoboken, NJ, USA, 2011; ISBN 978-0-470-54137-1.
  • Deng, M.; Menassa, C.C.; Kamat, V.R. From BIM to Digital Twins: A Systematic Review of the Evolution of Intelligent Building Representations in the AEC-FM Industry. ITcon 2021, 26, 58–83.
  • Akanmu, A.A.; Anumba, C.J.; Ogunseiju, O.O. Towards Next Generation Cyber-Physical Systems and Digital Twins for Construction. ITcon 2021, 26, 505–525.
  • Building Digital Twin Association. White Paper, Q4 2019; Antwerp, Belgium, 2019; p. 9. Available online: https://buildingdigitaltwin.org/wp-content/uploads/2022/02/WhitePaper1-en.pdf (accessed on 22 May 2023).
  • Adhikari, S.; Collins, J.; Loreto, G.; Nguyen, T.D. The Use of Parametric Modeling to Enhance the Understanding of Concrete Formwork Structures. In Proceedings of the 2021 ASEE Virtual Annual Conference Content Access, Virtual, 26–29 July 2021.
  • Grieves, M.W. Product Lifecycle Management: The New Paradigm for Enterprises. Int. J. Prod. Dev. 2005, 2, 71–84.
  • Grieves, M. Product Lifecycle Management: Driving the Next Generation of Lean Thinking; McGraw-Hill: New York, NY, USA, 2006; ISBN 978-0-07-145230-4.
  • Grieves, M. Virtually Perfect: Driving Innovative and Lean Products through Product Lifecycle Management; Space Coast Press: Cocoa Beach, FL, USA, 2011; ISBN 978-0-9821380-0-7.
  • Seaton, H.; Savian, C.; Sepasgozar, S.; Sawhney, A. Digital Twins from Design to Handover of Constructed Assets; Royal Institution of Chartered Surveyors: London, UK, 2022.
  • Hardin, B. BIM and Construction Management: Proven Tools, Methods; 2009.
  • Zhou, L.; An, C.; Shi, J.; Lv, Z.; Liang, H. Design and Construction Integration Technology Based on Digital Twin. In Proceedings of the 2021 Power System and Green Energy Conference (PSGEC), Shanghai, China, 20–22 August 2021; pp. 7–11.
  • Khajavi, S.H.; Motlagh, N.H.; Jaribion, A.; Werner, L.C.; Holmström, J. Digital Twin: Vision, Benefits, Boundaries, and Creation for Buildings. IEEE Access 2019, 7, 147406–147419.
  • Jones, D.; Snider, C.; Nassehi, A.; Yon, J.; Hicks, B. Characterising the Digital Twin: A Systematic Literature Review. CIRP J. Manuf. Sci. Technol. 2020, 29, 36–52.
  • Sepasgozar, S.M.E. Differentiating Digital Twin from Digital Shadow: Elucidating a Paradigm Shift to Expedite a Smart, Sustainable Built Environment. Buildings 2021, 11, 151.
  • El Jazzar, M.; Piskernik, M.; Nassereddine, H. Digital Twin in Construction: An Empirical Analysis. In Proceedings of the EG-ICE 2020 Workshop on Intelligent Computing in Engineering, Online, 1–4 July 2020.
  • Shahzad, M.; Shafiq, M.T.; Douglas, D.; Kassem, M. Digital Twins in Built Environments: An Investigation of the Characteristics, Applications, and Challenges. Buildings 2022, 12, 120.
  • Alonso, R.; Borras, M.; Koppelaar, R.H.E.M.; Lodigiani, A.; Loscos, E.; Yöntem, E. SPHERE: BIM Digital Twin Platform. Proceedings 2019, 20, 9.
  • Opoku, D.-G.J.; Perera, S.; Osei-Kyei, R.; Rashidi, M. Digital Twin Application in the Construction Industry: A Literature Review. J. Build. Eng. 2021, 40, 102726.
  • Sawhney, A.; Riley, M.; Irizarry, J.; Pérez, C.T. A Proposed Framework for Construction 4.0 Based on a Review of Literature. In EPiC Series in Built Environment; EasyChair: Liverpool, UK, 2020; Volume 1, pp. 301–309.
  • Madubuike, O.C.; Anumba, C.J.; Khallaf, R. A Review of Digital Twin Applications in Construction. ITcon 2022, 27, 145–172.
  • Feng, H.; Chen, Q.; García de Soto, B. Application of Digital Twin Technologies in Construction: An Overview of Opportunities and Challenges. In Proceedings of the 38th International Symposium on Automation and Robotics in Construction (ISARC 2021), Dubai, United Arab Emirates, 4 August 2021.
  • Zhang, H.; Zhou, Y.; Zhu, H.; Sumarac, D.; Cao, M. Digital Twin-Driven Intelligent Construction: Features and Trends. Struct. Durab. Health Monit. 2021, 15, 183–206.
  • Grieves, M.W. Virtually Intelligent Product Systems: Digital and Physical Twins. In Complex Systems Engineering: Theory and Practice; Flumerfelt, S., Schwartz, K.G., Mavris, D., Briceno, S., Eds.; American Institute of Aeronautics and Astronautics, Inc.: Reston, VA, USA, 2019; pp. 175–200; ISBN 978-1-62410-564-7.
  • Zhang, X.; Shen, J.; Saini, P.K.; Lovati, M.; Han, M.; Huang, P.; Huang, Z. Digital Twin for Accelerating Sustainability in Positive Energy District: A Review of Simulation Tools and Applications. Front. Sustain. Cities 2021, 3, 663269.
  • Hosamo, H.H.; Imran, A.; Cardenas-Cartagena, J.; Svennevig, P.R.; Svidt, K.; Nielsen, H.K. A Review of the Digital Twin Technology in the AEC-FM Industry. Adv. Civ. Eng. 2022, 2022, e2185170.
  • Douglas, D.; Kelly, G.; Kassem, K. BIM, Digital Twin and Cyber-Physical Systems: Crossing and Blurring Boundaries. In Proceedings of the 2021 European Conference on Computing in Construction, Rhodes, Greece, 26 July 2021; pp. 204–211.
  • Gorse, C.; Booth, C.; Scott, L.; Dastbaz, M. Climate Emergency—Managing, Building, and Delivering the Sustainable Development Goals: Selected Proceedings from the International Conference of Sustainable Ecological Engineering Design for Society (SEEDS) 2020; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; ISBN 978-3-030-79450-7.
  • Sobhkhiz, S.; El-Diraby, T. Developing BIM-Based Linked Data Digital Twin Architecture to Address a Key Missing Factor: Occupants; ASCE Library: Reston, VA, USA, 2022; pp. 11–20.
  • Almatared, M.; Liu, H.; Tang, S.; Sulaiman, M.; Lei, Z.; Li, H.X. Digital Twin in the Architecture, Engineering, and Construction Industry: A Bibliometric Review; ASCE Library: Reston, VA, USA, 2022; p. 678.
  • Quirk, D.; Lanni, J.; Chauhan, N. Digital Twins: Details of Implementation. ASHRAE J. 2020, 62, 20–24.
  • Bolpagni, M.; Gavina, R.; Ribeiro, D. Industry 4.0 for the Built Environment: Methodologies, Technologies and Skills; Springer Nature: Berlin/Heidelberg, Germany, 2021; ISBN 978-3-030-82430-3.
  • Daniotti, B.; Masera, G.; Bolognesi, C.M.; Lupica Spagnolo, S.; Pavan, A.; Iannaccone, G.; Signorini, M.; Ciuffreda, S.; Mirarchi, C.; Lucky, M.; et al. The Development of a BIM-Based Interoperable Toolkit for Efficient Renovation in Buildings: From BIM to Digital Twin. Buildings 2022, 12, 231.
  • Baghalzadeh Shishehgarkhaneh, M.; Keivani, A.; Moehler, R.C.; Jelodari, N.; Roshdi Laleh, S. Internet of Things (IoT), Building Information Modeling (BIM), and Digital Twin (DT) in Construction Industry: A Review, Bibliometric, and Network Analysis. Buildings 2022, 12, 1503.
  • Kaur, M.J.; Mishra, V.P.; Maheshwari, P. The Convergence of Digital Twin, IoT, and Machine Learning: Transforming Data into Action; Springer: Cham, Switzerland, 2020. Available online: https://link.springer.com/chapter/10.1007/978-3-030-18732-3_1 (accessed on 14 May 2023).
  • Schleich, B.; Anwer, N.; Mathieu, L.; Wartzack, S. Shaping the Digital Twin for Design and Production Engineering. CIRP Ann. 2017, 66, 141–144.
  • Qi, Q.; Tao, F. Digital Twin and Big Data Towards Smart Manufacturing and Industry 4.0: 360 Degree Comparison. IEEE Access 2018, 6, 3585–3593.
  • Carlsén, A.; Elfstrand, O. Augmented Construction: Developing a Framework for Implementing Building Information Modeling through Augmented Reality at Construction Sites; Semantic Scholar: Stockholm, Sweden, 2018.
  • Gubbi, J.; Buyya, R.; Marusic, S.; Palaniswami, M. Internet of Things (IoT): A Vision, Architectural Elements, and Future Directions. Future Gener. Comput. Syst. 2013, 29, 1645–1660.
  • Tang, S.; Shelden, D.R.; Eastman, C.M.; Pishdad-Bozorgi, P.; Gao, X. A Review of Building Information Modeling (BIM) and the Internet of Things (IoT) Devices Integration: Present Status and Future Trends. Autom. Constr. 2019, 101, 127–139.
  • Lee, D.; Cha, G.; Park, S. A Study on Data Visualization of Embedded Sensors for Building Energy Monitoring Using BIM. Int. J. Precis. Eng. Manuf. 2016, 17, 807–814.
  • IoT for All (ThoughtWire Blog). Digital Twins vs. Building Information Modeling (BIM). 2020. Available online: https://www.iotforall.com/digital-twin-vs-bim?ss360SearchTerm=Digital%20Twin (accessed on 16 August 2022).
  • Rolle, R.; Martucci, V.; Godoy, E. Architecture for Digital Twin Implementation Focusing on Industry 4.0. IEEE Lat. Am. Trans. 2020, 18, 889–898.
  • Kritzinger, W.; Karner, M.; Traar, G.; Henjes, J.; Sihn, W. Digital Twin in Manufacturing: A Categorical Literature Review and Classification. IFAC Pap. 2018, 51, 1016–1022.
  • Lavrentyeva, A.V.; Dzikia, A.A.; Kalinina, A.E.; Frolov, D.P.; Akhverdiev, E.A.; Barakova, A.S. Artificial Intelligence and Digital Transformations in the Society. IOP Conf. Ser. Mater. Sci. Eng. 2019, 483, 012019.


# | Authors (Year) | Journal/Conference | Methods | Broad Area
1 | Khajavi et al. (2019) [ ] | IEEE | Experimentation Testing | Construction
2 | Boje et al. (2020) [ ] | Automation in Construction | Literature Review | Construction
3 | Jones et al. (2020) [ ] | CIRP-JMST | Literature Review | Multidisciplinary
4 | Sacks et al. (2020) [ ] | Data-Centric Engineering | Literature Review | Construction
5 | Sepasgozar (2021) [ ] | MDPI | Literature Review | Construction
6 | El Jazzar et al. (2020) [ ] | Conference Paper | Literature Review | Construction
7 | Shahzad et al. (2022) [ ] | MDPI | Literature Review; Interviews | Multidisciplinary
8 | Alonso et al. (2019) [ ] | MDPI | Literature Review | Construction
9 | Deng et al. (2021) [ ] | ITcon | Literature Review | Civil Engineering
10 | Opoku et al. (2021) [ ] | Building Engineering | Systematic Review | Construction
11 | Lu et al. (2020) [ ] | CSIC | Literature Review | Construction
12 | Zhang et al. (2022) [ ] | ASCE | Questionnaires; Interviews | Construction
13 | Sawhney et al. (2020) [ ] | ASC | Literature Review | Construction
14 | Madubuike et al. (2022) [ ] | ITcon | Systematic Review | Construction
15 | Feng et al. (2021) [ ] | ISARC | Literature Review | Construction
16 | Zhou et al. (2021) [ ] | PSGEC | Literature Review | Construction
17 | Zhang et al. (2021) [ ] | Tech. Science Press | Literature Review | Construction
18 | Akanmu et al. (2021) [ ] | ITcon | Literature Review | Construction
19 | Grieves (2019) [ ] | Astronautics and Aeronautics | Literature Review | Engineering
20 | Seaton et al. (2022) [ ] | World Built Environment Forum | Literature Review; Case Studies | Construction
21 | Zhang et al. (2021) [ ] | Frontiers in Sustainable Cities | Literature Review | Construction
22 | Hosamo et al. (2022) [ ] | Hindawi, Civil Engineering | Literature Review | Construction
23 | Douglas et al. (2021) [ ] | Computing in Construction | Systematic Review | Construction
24 | Gorse et al. (2020) [ ] | SEEDS | Literature Review; Interviews | Construction
25 | Sobhkhiz and El-Diraby (2022) [ ] | ASCE | Case Study | Construction
26 | Almatared et al. (2022) [ ] | ASCE | Literature Review | Construction
27 | Quirk et al. (2020) [ ] | ASHRAE | Literature Review | Construction
28 | Bolpagni et al. (2021) [ ] | Structural Integrity 20 | Case Study; Literature Review | Construction
29 | Daniotti et al. (2022) [ ] | MDPI | Literature Review; Experimentation Testing | Construction
30 | Baghalzadeh et al. (2022) [ ] | MDPI | Literature Review | Construction
# | Item | BIM | Digital Twin in Building
1 | Concept origin | Dr. Charles Eastman (1970s) | NASA Apollo program (1960s); Dr. Michael Grieves (2000s)
2 | Purpose | Used to enhance efficiency during design, construction, and throughout the building lifecycle | Used to enhance operational efficiency through predictive maintenance and asset monitoring
3 | Application focus | Design visualization and consistency; clash detection; time and cost estimation; lean construction; stakeholder interoperability | Predictive maintenance; what-if analysis; occupant satisfaction; resource consumption efficiency; closed-loop design
4 | Features | Real-time data flow is not necessarily required | Real-time data flow is required
5 | Level of detail | A detailed model of the building's design and construction | A performance- and optimization-focused real-time replica of building operation
6 | Scalability | Depends on the underlying technology and the resources available for data processing and storage | More suitable for large-scale projects
7 | Main users | Complex and detailed; geared towards architects, engineers, contractors, and building professionals needing a high level of control and customization | Streamlined and intuitive; geared towards facility managers and operators needing real-time data and monitoring capabilities
8 | Interoperability | 3D model, COBie (Construction Operations Building information exchange), IFC, CDE | 3D model, WSN, data analytics, machine learning
9 | Application interface | Autodesk Revit, ArchiCAD, MicroStation, BIM Server, Grevit, open source | Autodesk Tandem, Predix, Dasher 360, Ecodomus, Siemens Digital Twin, Bentley iTwin
10 | Building life-cycle stage | Design; Construction; Use (Maintenance); Demolition | Use (Operation)
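
To make the contrast in rows 2, 4, and 5 concrete, here is a minimal Python sketch. All class, attribute, and method names (BimElement, DigitalTwinAsset, ingest, command) are hypothetical, invented for illustration only, and are not drawn from any BIM or DT product: the point is simply that a BIM element is a static design-time record, while a digital twin wraps such a record and stays synchronized with live telemetry.

```python
# Hypothetical sketch: a static BIM record vs. a digital twin that mirrors live state.
from dataclasses import dataclass, field

@dataclass
class BimElement:
    """Design-time record: geometry and attributes are fixed at handover."""
    element_id: str
    ifc_type: str            # e.g., "IfcPump" in an IFC export
    design_capacity_kw: float

@dataclass
class DigitalTwinAsset:
    """Operational replica: synchronized with sensor readings over time."""
    source: BimElement                    # a twin is often seeded from the BIM model
    telemetry: dict = field(default_factory=dict)

    def ingest(self, sensor_id: str, value: float) -> None:
        # Inbound flow from the physical asset keeps the twin current.
        self.telemetry[sensor_id] = value

    def command(self, setpoint_kw: float) -> str:
        # Two-way communication: the twin can push changes back to the asset.
        return f"set {self.source.element_id} to {setpoint_kw} kW"

pump_bim = BimElement("P-101", "IfcPump", design_capacity_kw=15.0)
pump_twin = DigitalTwinAsset(pump_bim)
pump_twin.ingest("power_meter", 11.2)   # a live reading, absent from the BIM record
print(pump_twin.command(9.5))
```

The design choice the sketch highlights is the one the table makes: the BIM record never changes after handover, while the twin accumulates state and communicates in both directions.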
# | Item | BIM | Digital Twin | Sources
1 | 3D model visualization | Yes | Yes | [ , ]
2 | Reliance on CDE | Yes | No | [ , ]
3 | Reliance on IFC | Yes | No | [ , ]
4 | Reliance on WSN | No | Yes | [ , ]
5 | Reliance on data analytics | No | Yes | [ , ]
6 | Reliance on machine learning | No | Yes | [ , ]
7 | API interoperability | Yes | Yes | [ , ]
8 | COBie interoperability | Yes | Yes | [ , ]
9 | Data standardization | Yes | Yes | [ , ]
10 | Data exchangeability (two-way communication) | No | Yes | [ ]
11 | Scheduling | Yes | Yes | [ , ]
12 | Architect, engineer, and contractor interface | Yes | No | [ ]
13 | Facility manager/operator interface | No | Yes | [ , ]
14 | Focus on collaboration | Yes | Yes | [ , ]
15 | Focus on real-time data | No | Yes | [ , ]
16 | Focus on design and construction | Yes | No | [ , ]
17 | Focus on building operations | No | Yes | [ , ]
18 | Focus on physical and functional aspects of the building | Yes | No | [ , ]
19 | Inclusion of people, processes, and behaviors | No | Yes | [ , ]
20 | Time management | Yes | Yes | [ , ]
21 | Budget management | Yes | Yes | [ , ]
22 | Project simulation analysis | Yes | Yes | [ ]
23 | Simulation analysis in context | No | Yes | [ ]
24 | Live monitoring of assets | No | Yes | [ , ]
25 | Live and instant updates on equipment status | No | Yes | [ ]
26 | Instant response to equipment failures | No | Yes | [ ]
27 | Insights to increase building use and performance | No | Yes | [ ]
28 | Overall project time and cost reduction | Yes | Yes | [ , ]
29 | Easy application to existing buildings | No | Yes | [ ]
30 | Better value for employers | Yes | Yes | [ , ]
31 | Improved building sustainability | Yes | Yes | [ , ]
32 | Improved dynamic construction risk management | No | Yes | [ , ]
33 | Enhanced site logistics | No | Yes | [ , ]
34 | Use of machine learning and automated processes | No | Yes | [ , ]
35 | Use of self-learning algorithms | No | Yes | [ , ]
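
Rows 24–26 describe behavior rather than data, so a short sketch may help. This is a hedged illustration only, not any paper's method: the sensor, threshold, and function names are invented, and a real twin would read from a WSN/IoT feed rather than a random-number stub. It shows what "live monitoring of assets" and "instant response to equipment failures" look like operationally.

```python
# Hypothetical monitoring loop for a digital twin watching one piece of equipment.
import random
import time

FAILURE_THRESHOLD_C = 80.0  # assumed overheat limit, chosen for the example

def read_sensor() -> float:
    """Stand-in for a real WSN/IoT feed; returns a temperature reading."""
    return random.uniform(60.0, 90.0)

def monitor(cycles: int = 5, interval_s: float = 0.1) -> None:
    for _ in range(cycles):
        reading = read_sensor()
        if reading > FAILURE_THRESHOLD_C:
            # "Instant response to equipment failures": act the moment
            # the live data crosses the threshold.
            print(f"ALERT: {reading:.1f} C exceeds limit; dispatching maintenance")
        else:
            print(f"OK: {reading:.1f} C")
        time.sleep(interval_s)

monitor()
```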

Share and Cite

Nguyen, T.D.; Adhikari, S. The Role of BIM in Integrating Digital Twin in Building Construction: A Literature Review. Sustainability 2023, 15, 10462. https://doi.org/10.3390/su151310462


Why Cross-Functional Collaboration Stalls, and How to Fix It

  • Sharon Cantor Ceurvorst,
  • Kristina LaRocca-Cerrone,
  • Aparajita Mazumdar,
  • Anja Naski


Gartner research shows 78% of organizational leaders report experiencing “collaboration drag” — too many meetings, too much peer feedback, unclear decision-making authority, and too much time spent getting buy-in from stakeholders. This problem is compounded by the fact that companies are running as many as five types of complex initiatives at the same time — each of which could involve five to eight corporate functions and 20 to 35 team members. The sheer breadth of resource commitments across such a range of initiatives creates a basic, pervasive background complexity. To better equip teams to meet the demands of this complexity, Gartner recommends the following strategies: 1) Extend executive alignment practices down to tactical levels; 2) Develop employee strategic and interpersonal skills; and 3) Look for collaboration drag within functions or teams.

Corporate growth is the ultimate team sport, relying on multiple functions’ data, technology, and expertise. This is especially true as technology innovation and AI introduce new revenue streams and business models, which require significant cross-functional collaboration to get off the ground.

  • Sharon Cantor Ceurvorst is vice president of research in the Gartner marketing practice, finding new ways of solving B2B and B2C strategic marketing challenges. She sets annual research agendas and harnesses the collective expertise of marketing analysts and research methodologists to generate actionable insights.
  • Kristina LaRocca-Cerrone is senior director of advisory in the Gartner marketing practice, overseeing Gartner’s coverage of marketing leadership and strategy, cross-functional collaboration, proving the value of marketing, and marketing innovation and transformation.
  • Aparajita Mazumdar is senior research principal in the Gartner marketing practice, co-leading the research agenda for marketing technology. Her research focuses primarily on marketing strategy and technology topics such as cross-functional collaboration and marketing technology utilization.
  • Anja Naski is senior research specialist in the Gartner marketing practice. She edits the Gartner CMO Quarterly journal, highlighting the latest insights on critical challenges facing CMOs. Her research covers topics related to marketing operations, CMO leadership, and cross-functional collaboration.



COMMENTS

  1. Literature Reviews for UX Research

    The output of a literature review is a written report that is structured to include: An overview of the project, including the research questions and goals. A summary of each of the sources included. An evaluation or critique of each source, comparing and contrasting key insights. A discussion of biases or weaknesses.

  2. User Experience Methods in Research and Practice

    Abstract. User experience (UX) researchers in technical communication (TC) and beyond still need a clear picture of the methods used to measure and evaluate UX. This article charts current UX methods through a systematic literature review of recent publications (2016-2018) and a survey of 52 UX practitioners in academia and industry.

  3. UX Research practices related to Long-Term UX: A Systematic Literature

    We conducted a Systematic Literature Review with a search string applied in search engines, along with selection criteria and quality assessment applied to the papers. ... UX Research practices represent recurring attitudes, actions, or activities of user experience research and evaluation work, which satisfy user-centered product development [2], [10] ...

  4. The Value of Old-School Literature Reviews for Modern UX Research

    A literature review is basically like a guide to a particular topic or research question. Moreover, conducting a literature review for UX allows researchers the chance to draw inspiration and insight from the literature and ensure the research they conduct is grounded in theory and thought, rather than based on assumptions.

  5. Measurement practices in user experience (UX) research: a

    Therefore, we conducted a systematic literature review, screening 153 papers from four years of the ACM Conference on Human Factors in Computing Systems proceedings (ACM CHI 2019 to 2022), of ...

  6. User experience framework that combines aspects

    2.1. Conducted systematic literature review process. A systematic literature review to identify various UX terms has been conducted and published in (Zarour & Alharbi, 2017). Figure 4 summarizes the adopted systematic literature review process to select primary studies. A total of 114 primary studies out of 2,331 papers have been collected and analyzed, based on a defined set of ...

  7. UX Research practices related to Long-Term UX: A Systematic Literature

    Context: A Multivocal Literature Review (MLR) is a form of a Systematic Literature Review (SLR) which includes the grey literature (e.g., blog posts, videos and white papers) in addition to the ...

  8. UX Research practices related to Long-Term UX: A Systematic Literature

    But few studies in the literature discuss UX Research practices with Long-Term UX. ... Our review provided an overview of UX Research practices applied in two decades by software startups and established companies. This picture is in line with the state-of-the-art that UX term achieved in the literature [1], [14], [29]. Based on a qualitative ...

  9. Lean UX: A Systematic Literature Review

    Lean UX: A Systematic Literature Review. David Aarlien and Ricardo Colomo-Palacios, Østfold University College, Halden, Norway. Abstract: The software industry often looks for ways to remain competitive in terms of cost and time to market. Lean UX is a methodology aiming to achieve this.

  10. Lean UX: A Systematic Literature Review

    Lean UX is a methodology aiming to achieve this. In this paper, by means of a systematic literature review, the authors outline the evolution of Lean UX since its origins, along with its challenges, benefits, and definition. Results showed similarities in the definition of Lean UX, its challenges and benefits ...

  11. Perceived Value of UX in Organizations: A Systematic Literature Review

    3.1 Conducting the Literature Review. Search Process. We consider UX to form part of human-computer interaction (HCI), a research discipline building on top of (1) behavioral sciences such as psychology, anthropology, sociology, ergonomy and cognitive sciences; (2) design such as graphic design, information design and interaction design; (3) computer science such as computer graphics and ...

  12. Conducting impactful literature reviews for UX research

    These reviews are commonly used in academia but can also serve as a valuable tool for grounding product development in existing insights. Furthermore, literature reviews can be particularly beneficial for entry-level UX researchers to showcase their impact. Outlined below are two perspectives to consider when approaching literature reviews:

  13. Quick Lit Reviews Reduce UX Research Time and Supercharge ...

    A quick and dirty literature review (Lit Review) is a way to capture and synthesize information about a topic (a design problem, a new technology, an unfamiliar business area, etc.). It's a simple structure that will allow you to document relevant information in an organized and intentional format. Creating the Lit Review can take a relatively short time compared with formal UX research; but ...

  14. AI Assistance for UX: A Literature Review Through Human-Centered AI

    2.3 Literature Review in AI Support for UI/UX design. Past literature review studies in computing and HCI have successfully identified trends and gaps and proposed new research directions in different specific domains [42, 46, 146, 180, 214]. We consider the call for more literature review studies in HCI, CSCW,

  15. UX Research on Conversational Human-AI Interaction: A Literature Review

    However, research on polyadic CAs is scattered across different fields, making it challenging to identify, compare, and accumulate existing knowledge. To promote the future design of CA systems, we conducted a literature review of ACM publications and identified a set of works that conducted UX (user experience) research.

  16. The Complete Guide to UX Research Methods

    UX research includes two main types: quantitative (statistical data) and qualitative (insights that can be observed but not computed), done through observation techniques, task analysis, and other feedback methodologies. The UX research methods used depend on the type of site, system, or app being developed.

  17. Secondary Research in UX

    A literature review should be done more frequently in UX because it is a viable option even for researchers with limited time and budget. The most challenging part is to persuade yourself and your team that the existing data is worth being summarized, compared, and collated to increase the overall effectiveness of your primary research.

  18. Reasons why I need literature review to do UX research

    Literature review is an essential part of research, be it academic research or product/user experience research. Some of the benefits are helping the researcher understand the research area better ...

  19. Literature Review

    A literature review is a summary and evaluation of the existing research on a particular topic. In UX, a literature review can help UX researchers and designers understand the current state of knowledge on a topic and identify gaps or areas for further research. A literature review typically involves searching for research materials on a specific topic, such as user behavior or design ...

  20. Measurement Practices in UX Research: A Systematic Quantitative

    User experience research relies heavily on survey scales as an essential method for measuring users' subjective experiences with technology. However, repeatedly raised concerns regarding the improper use of survey scales in UX research and adjacent fields call for a systematic review of current measurement practice. Until now, no such systematic investigation on survey scale use in UX ...

  21. Conceptual UX/UI design based on literature review research

    Due to the pandemic, most research work on UX designs cannot be conducted in person or on-site. In order to practice our remote research abilities, the UCI professors assigned a task of design innovation based on the literature review. The topic I selected is COVID-19 related mental health impact, which focuses on public mental health ...

  22. A Complete Guide to Primary and Secondary Research in UX Design

    A literature review can uncover insights into user behavior and design principles that inform your design strategy. Tools: Academic databases like Google Scholar, JSTOR, and specific UX/UI research databases. Reference management tools like Zotero and Mendeley can help organize your sources and streamline the review process.

  23. Creativity-Fostering Teacher Behaviors in Higher Education: A

    A systematic literature review requires a rigorous and structured qualitative research approach that results in reliable and validated conclusions, giving credence and explanatory power to the findings (Alexander, 2020; Aveyard, 2018; Littell et al., 2008). The transdisciplinary focus of our review study addresses the benefits and opportunities ...

  24. Connecting With Users: Applying Principles Of Communication To UX Research

    Communication is a core component of UX research, as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful ...

  25. Understanding the challenges affecting food-sharing apps ...

    Further, the review scores representing user experience (UX) are assessed for their dependence on each challenge using the document-topic matrix and machine learning (ML) procedures. ... Puram, P., & Gurumurthy, A. (2023). Sharing economy in the food sector: A systematic literature review and future research agenda. Journal of Hospitality and ...

  26. What are Literature Reviews?

    Literature reviews are comprehensive summaries and syntheses of the previous research on a given topic. While narrative reviews are common across all academic disciplines, reviews that focus on appraising and synthesizing research evidence are increasingly important in the health and social sciences. Most evidence synthesis methods use formal and explicit methods to identify, select and ...

  27. AI Assistance for UX: A Literature Review Through Human-Centered AI

    In 2022, Stige et al. (Stige et al., 2023) conducted a literature review on 46 articles in this field to analyze how AI is currently used in UX design (namely, user requirement specification, solution design, and design evaluation) and potential future research themes. Compared to their analysis sample (N=46), our sample was more comprehensive ...

  28. Sustainability

    Literature Review: Review of emerging technologies for BIM and DTs. Developing a five-level ladder categorization system for reviewing studies on DT applications, focusing on the building life cycle, research domains, and technologies. 10: Digital twin application in the construction industry: A literature review: Opoku et al. (2021)

  29. Political community entrepreneurship policy as an effort

    The method used in this research is a systematic literature review (SLR). SLR is a collection of articles on community entrepreneurship policy politics in infrastructure development from leading journals in various relevant online references (Alomoto et al., 2021; Macke & Genari, 2019; Muluk, 2021; Putri et al ...

  30. Why Cross-Functional Collaboration Stalls, and How to Fix It

    Summary. Gartner research shows 78% of organizational leaders report experiencing "collaboration drag" — too many meetings, too much peer feedback, unclear decision-making authority, and too ...