A systematic review of software usability studies

  • Original Research
  • Published: 11 December 2017


  • Kalpna Sagar (ORCID: orcid.org/0000-0002-1034-3328) &
  • Anju Saha


The aim of this review is to summarize and analyze research studies and to identify research gaps regarding usability standards and models, usability evaluation methods, usability metrics, usability at different phases of the software development life cycle, and the application domains of usability. A systematic review of usability studies published between 1990 and 2016 was conducted, and 150 studies were identified. We conclude that researchers have not reached a consensus on software usability models, though efficiency, effectiveness, satisfaction, and learnability are the attributes most commonly addressed in existing usability models and standards. Further, developers lack sufficient knowledge to decide which usability evaluation method is appropriate for a given domain; usability testing, heuristic evaluation, and questionnaires are identified as the most frequently used evaluation methods. Our findings cover the various metrics and measurement approaches used for usability estimation, but current measurement methods in practice do not combine all ISO- and ANSI-defined aspects of usability into a single metric. Although we identify studies that integrate usability and software engineering into a single framework with generalizable results, practical implementations are still missing and significantly needed. This study also highlights that around 71% of studies address usability-related issues during the design phase of the software development life cycle. Finally, although usability issues have been identified in many domains, around 33.82% of studies apply usability evaluation in the web domain, making it the most widely studied.
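To make the single-metric point concrete, the sketch below standardizes per-participant effectiveness, efficiency, and satisfaction observations onto a common scale and averages them, in the spirit of the single usability metric approaches the review surveys. It is a minimal illustration only: the data are invented, and the choice of components and equal weighting are assumptions, not the method of any particular primary study.

```python
# Minimal sketch: combining ISO-style usability components into one score.
# All data and the equal weighting are illustrative assumptions.
from statistics import mean, stdev

def standardize(values, higher_is_better=True):
    """Convert raw observations to z-scores so components share one scale."""
    mu, sigma = mean(values), stdev(values)
    z = [(v - mu) / sigma for v in values]
    return z if higher_is_better else [-x for x in z]

# Hypothetical per-participant observations for a single task.
completion   = [1, 1, 0, 1, 1, 1, 0, 1]                  # effectiveness: task success
task_time    = [42, 55, 90, 38, 47, 60, 120, 50]         # efficiency: seconds (lower is better)
satisfaction = [4.5, 4.0, 2.5, 5.0, 4.0, 3.5, 2.0, 4.5]  # post-task rating on a 1-5 scale

components = [
    standardize(completion),
    standardize(task_time, higher_is_better=False),
    standardize(satisfaction),
]

# One usability score per participant: the unweighted mean of the z-scores.
single_scores = [mean(parts) for parts in zip(*components)]
print([round(s, 2) for s in single_scores])
```

Standardizing before averaging is what lets heterogeneous quantities (success rates, seconds, Likert ratings) be combined at all; the open problem the review points to is agreeing on which components and weights such a score should include.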


These online databases are mentioned in Sect. 2.2.

The associate professors are from the University School of Information and Communication Technology, Guru Gobind Singh Indraprastha University, Delhi, India.

The list of publication sources from international journals and conferences is provided in Table 4.

The two researchers are Kalpna Sagar, a PhD student, and Dr. Anju Saha, an associate professor at the University School of Information and Communication Technology, GGSIPU, Delhi, India.

This list is provided in Table 6.

Final score details are presented in Table 5.

These questions are mentioned in Table 3.

We have included 89 research publications from SCI-indexed journals and 29 from non-SCI journals.

The final score for each selected primary study is computed by adding the values assigned to each quality assessment question.

The five research questions are presented in Table 1.

A complete list of usability metrics and measurement approaches is provided in Table 13.


Acknowledgements

Darshan Lal, Dipta Sakkarwal, Ricky Tharan, Shruti Singh, Surbhi Garg and Upasana Kundra are gratefully acknowledged for their suggestions for improving this SLR. We also thank the associate professors of Guru Gobind Singh Indraprastha University for their help in conceptualizing this work.

Author information

Authors and Affiliations

University School of Information and Communication Technology GGSIPU, Sector 16-c, Delhi, 110078, India

Kalpna Sagar & Anju Saha


Corresponding author

Correspondence to Kalpna Sagar.


About this article

Sagar, K., Saha, A. A systematic review of software usability studies. Int. j. inf. tecnol. (2017). https://doi.org/10.1007/s41870-017-0048-1


Received: 15 March 2017

Accepted: 11 October 2017

Published: 11 December 2017

DOI: https://doi.org/10.1007/s41870-017-0048-1


  • Software usability
  • Systematic literature review
  • Software development life cycle
  • Single usability metric
  • Object oriented metric
  • Master Usability Scaling

The Usability of E-learning Platforms in Higher Education: A Systematic Mapping Study




Tech Innov Patient Support Radiat Oncol, vol. 24, December 2022

Usability: An introduction to and literature review of usability testing for educational resources in radiation oncology

Heather L. Keenan

a School of Clinical Medicine, University of Cambridge, United Kingdom

Simon L. Duke

b Education Centre, University of Nottingham, United Kingdom

Heather J. Wharrad

Gillian A. Doody, Rakesh S. Patel

Usability, or the ease with which something can be used, is a key aspect in ensuring end-users can achieve the best possible outcomes from a given educational resource. Ideally usability testing should take place iteratively throughout the design of the resource, and there are several approaches for undertaking usability testing described in the wider literature. Within radiation oncology education, the extent to which usability testing occurs remains unclear. This literature review aimed to assess current practice and provide a practical introduction to usability testing for educational resource design within radiation oncology.

Two web databases were searched for articles describing planned or completed usability testing during the design of a radiation oncology educational resource, and fifteen studies were identified. Data were gathered describing the type of usability testing performed, the number of cycles of testing, and the number of test subjects. The articles described the design of educational resources for both patients and trainees, with the number of test subjects ranging from 8 to 18. Various testing methods were used, including questionnaires, think-aloud studies and heuristic evaluation, and usability testing ranged from a single cycle through to several rounds.

Through illustrative examples identified in the literature review, we demonstrate that usability testing is feasible and beneficial for educational resources varying in size and context. In doing so we hope to encourage radiation oncologists to incorporate usability testing into future educational resource design.

Introduction

The past two decades have seen a meteoric rise in the use of digital methods and resources within medical education [1], an increase which has been further fuelled by the Covid-19 pandemic and the accompanying restrictions on face-to-face teaching [2], [3], [4]. In parallel, digital educational resources have become an important source of accurate and up-to-date information for patients [5], [6].

Usability testing focusses on ensuring the user of any digital tool or information system can navigate and engage with the resource easily and effectively. It is widely performed in system design within the software industry, and its importance in the design of educational interventions is increasingly recognised [7]. It is not clear to what extent usability testing is applied within radiation oncology.

The aims of this article are to outline the main ways of testing usability and assess how this has been done already within radiation oncology by means of a literature review. In doing so we aim to provide a practical guide for readers of this special issue in education to incorporate usability testing into design of their own radiation oncology educational resources in future.

Background to Usability

What is usability?

In its simplest terms, usability is "the ease with which a person can use a product in a particular set of circumstances" [8]. Usability has been more formally defined by the International Organization for Standardization as "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" [9].

In her introductory book on the subject, Barnum highlights the importance of these specified users, specified goals and specified context when considering usability [10]. To consider these within the realm of medical education:

  • Users – An online oncology educational resource may be highly usable for a young adult, but relatively unusable for an elderly person (statistically more likely to be diagnosed with cancer and therefore receive radiotherapy) who does not regularly access the internet [11].
  • Goals – An educational resource designed to facilitate oncology treatment decisions for patients, which contains detailed information but provides no final summary page, may be highly informative but ultimately unusable for its defined goal as users struggle to remember and assimilate what they have read.
  • Context – A resource designed to be used on a Safari or Chrome web-browser but released in a hospital where all computers operate a legacy version of Internet Explorer, may be useless in its specified context.

Within this review, we will be considering the assessment of usability of educational resources within radiation oncology. Users are therefore usually either patients or healthcare practitioners and goals range from facilitating treatment decisions to improving communication or radiotherapy contouring skills.

Why is Usability Important?

Usability is relevant in the design of any educational resource, but its importance is never clearer than within e-learning, where users may be required to interact with complex systems. These systems can provide new educational opportunities, for example educational contouring software which can provide direct user feedback for trainees [12]. As complexity increases, however, so too does the opportunity to lose users due to issues with system design.

Usability has been shown to be a key factor (alongside perceived usefulness) influencing our acceptance of information technology [13], [14], which in turn predicts actual use.

E-learning has been extensively studied within the corporate field, where e-learning courses increasingly replace traditional instructor-led courses [15]. While e-learning courses are often cheaper and more convenient, they have also been shown to have higher attrition rates than traditional courses; one reason for this could be poor usability of the resources [16]. Sandars agrees with this view when he argues that poor usability could explain the findings of a 2008 meta-analysis demonstrating that, while e-learning in healthcare is superior to no intervention, it is no more effective than traditional learning methods [8], [17].

It is easy to assume that an educational resource we have created is usable; we have, after all, been carefully developing it for weeks or months and its intricacies are second nature. Barnum observes that “From the moment you know enough to talk about a product […] you know too much to be able to tell if the product would be usable for a person who doesn’t know what you know.” [10] It is therefore essential that we not only consider usability during the design of an educational resource, but that we formally test it.

How to Test Usability

Sandars describes four main dimensions that we should consider when assessing the usability of e-resources: the learner, technological aspects, instructional design aspects, and the context [8]. It is essential that, at least in some stages, a usability assessment involves the intended end-user, and that ideally the resource is assessed in the context in which it will finally be used. 'Technological aspects' refers to factors such as the ease of navigation, consistency of layout and clarity of the visual design. Instructional design includes the content itself, the interactivity and judicious use of multimedia.

There are multiple different methods of usability testing described in the literature. Table 1 summarises those most encountered within medical education, briefly describes the benefits and limitations of each and provides an example study which can be consulted for further reference.

[Table 1: Summary of different methods of evaluating usability.]

It is relatively straightforward to survey many individuals with a questionnaire; it is considerably more resource-intensive to perform multiple think-aloud studies, cognitive walkthroughs or heuristic evaluations. Nielsen and Landauer analysed 11 usability studies using either heuristic evaluators (i.e. usability experts) or end-user evaluators (e.g. patients or clinicians), compared the number of evaluators with the number of usability problems identified, and developed a model to determine how many testers were required [18]. They showed that for a small project the optimum cost-benefit point requires only four evaluators. Five evaluators are generally accepted to be able to identify ∼85% of usability issues [19]. Later authors have, however, stressed the importance of context and appropriate sampling in defining the numbers to be studied [20].
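To make the cost-benefit argument concrete, the expected proportion of problems found follows the published curve found(i) = N(1 − (1 − λ)^i), where N is the total number of problems and λ is the probability that a single evaluator detects any given problem. The short Python sketch below assumes the average rate λ ≈ 0.31 reported across the studies Nielsen and Landauer analysed; the true rate for any particular project is unknown in advance.

    # Problem-discovery model of Nielsen and Landauer:
    # proportion found by i evaluators = 1 - (1 - L)**i.

    def proportion_found(evaluators: int, discovery_rate: float = 0.31) -> float:
        """Expected share of usability problems found by `evaluators` testers."""
        return 1 - (1 - discovery_rate) ** evaluators

    for i in range(1, 9):
        print(f"{i} evaluator(s): {proportion_found(i):.0%} of problems")
    # With a rate of 0.31, five evaluators find roughly 85% of problems,
    # and each additional evaluator adds progressively less.

The sharply diminishing returns on each additional evaluator are what make small panels so cost-effective.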

Literature review of usability in radiation oncology education

A literature review was carried out to assess the reported use of usability testing in the radiation oncology education literature. PRISMA guidelines were followed [30]. Inclusion criteria were articles which described completed or planned usability assessment of an educational resource within radiation oncology. The search was initially carried out on 25/6/2022 and all results up to this date were included. Fig. 1 outlines the methodology:

[Fig. 1: Literature review methodology for identification of relevant articles.]

Three articles were identified as relevant from abstract screening and subsequently excluded at review. One was an analysis of pre-existing web resources for patients with prostate cancer [31]. While the tool they used to assess these websites had been previously tested for usability, the websites themselves were not explicitly assessed. A second article addresses usability specifically in the context of patients with lower health literacy levels [32]. They provide recommendations for enhancing usability in a practical sense, e.g. by providing audio material alongside visual, but do not cover how to carry out usability testing. A third study observed browsing patterns of visitors seeking radiology-related information on a hospital website to develop a model which could then be applied elsewhere to improve browsing experience [33]. This was excluded as it is not specifically about the development of an educational resource.

The fifteen items selected for full review were read by two reviewers (HLK, SLD) to identify the educational tool being developed, the intended audience, the usability assessment method used, the number of testers and the point in the design process at which usability was assessed. The results are summarised in Table 2.

[Table 2: Systematic review of current use of usability testing in the radiation oncology education literature.]

Most examples of usability testing in the radiation oncology medical education literature describe the design of a patient information resource (13/15, 87%), although there are also examples of usability assessment use in designing resources for healthcare professionals (4/15, 27%). There are two examples of papers which do both: Juraskova et al. [21] describe the design of educational modules for both clinicians and patients/caregivers, and the study by Raith et al. [42] describes two separate augmented reality protocols for patients and radiographers respectively.

Direct observation was used in 8/15 (60%) studies. 3/15 (20%) describe the use of expert heuristic evaluation; two of these are among the studies which describe more extensive usability testing, including multiple testing modalities and iterative testing throughout the design process. 6/15 (40%) studies describe a think-aloud study. These involve a range of 8 to 18 participants in any single round of testing, and some studies describe more than one round. The formality of analysis of the think-aloud data varies; some papers describe extensive transcription and thematic analysis, whereas others describe drawing general learning points.

10/15 (67%) studies use a survey as part of their usability assessment. Of these, 5/10 (50%) did not use formally validated usability questionnaires, but asked questions explicitly designed to probe usability. Of those which used validated questionnaires, three (60%) used the SUS (one of which was modified) and one (20%) used SUMI. One study mentions a future plan to use UMUX-LITE but has not yet done so.

6/15 (40%) studies described more than one round of usability testing during the design process, and 7/15 (47%) used more than one type of usability assessment. Three studies (Juraskova et al. [21], Ankolekar et al. [29] and Berg et al. [23]) did both.

Although most studies assessed usability on the intended target audience, this was not universal; there was one example of a study where clinicians alone were used to assess the usability of a resource being designed for patients. Raith et al. [42] were unable to assess their augmented reality resource on patients due to the restrictions of the Covid-19 pandemic.

In the introduction to this article, we described five methods of usability testing. These can be divided into ‘expert-led’ testing (heuristic evaluation, cognitive walkthrough) and ‘user-led’ testing (think aloud, semi-structured interview, questionnaire). Examples of all five types of testing exist in the current radiation oncology education literature. There are notably far more cases of user-led testing.

Usability experts may come at a cost. Much of the seminal work on usability comes out of software design, where usability experts to undertake heuristic evaluations are readily available; this is likely to be a luxury unavailable in healthcare. The scarcity may be addressed by collaboration with computer science departments within or across academic institutions.

In one of only three studies employing expert-led testing, Ankolekar et al. describe the development and validation of a patient-led decision aid for prostate cancer [29]. They detail an extensive process of usability testing involving multiple cycles of testing, re-design and re-testing. Over five rounds of testing, they employ questionnaires, heuristic evaluation and think-aloud methods. They explain that "Our development process spanned over two years and involved 58 participants, resulting in >100 h of interview material and feedback that needed to be processed, analyzed and incorporated in successive rounds." While such extensive testing clearly has the potential to produce a high-quality educational resource, the time and financial commitment required may prove a disincentive for others to carry out similar work.

Other studies describe less extensive usability testing from which the authors are nevertheless able to make changes to their resource. Bigelow et al. describe two cycles of questionnaire-based testing and demonstrate an improvement in usability between cycle 1 and cycle 2 following changes to the design, language and graphics of their resource [40]. We would therefore argue that the current literature suggests it is both valuable and feasible to carry out usability studies within radiation oncology education, on a variety of scales suited to the specific aims of the resource.

Most of the studies which carry out only one or two rounds of usability testing employ user-led testing. Many of the advantages of this are intuitive; testing on the group that will ultimately be using the resource makes logical sense. Additionally, most healthcare professionals involved in the design of an educational resource will have easy and free access (subject to the necessary ethical approval) to the end users, be they trainees or patients.

Within user-based usability testing a scale of resource- and time-intensity exists. Formal think-aloud testing is an extensive process involving scriptwriting, taping of users, transcription of tapes, coding by independent coders and thematic analysis. Hopmans et al. provide an example of how this potentially intensive process can be shortened while still providing valuable insights [36]. In an initial round of think-aloud testing they ask 18 participants to navigate through their website and then ask a series of probing questions. They transcribe all the interviews; three are then selected for independent coding and analysis by two separate researchers, while the remainder are analysed by one researcher only. The article does not describe how the three transcriptions are selected; it would be important to ensure this is done at random.

Finally, questionnaires are a relatively easy method of usability testing. Most frequently used in our review is the SUS, a well-validated usability survey that is freely available online. As healthcare practitioners we are generally used to seeking survey-style feedback on our educational resources, so it is fairly straightforward to add in some usability-focussed questions. If longer surveys like UTAUT are felt to be too arduous and add excessively to survey burden, then the SUS or UMUX-LITE are shorter alternatives.
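For teams adopting the SUS, scoring is simple enough to automate. The sketch below is a minimal Python illustration of Brooke's standard scoring rules (the example responses are invented): each odd-numbered item contributes its response minus 1, each even-numbered item contributes 5 minus its response, and the total is multiplied by 2.5 to yield a 0-100 score.

    def sus_score(responses):
        """Score ten 1-5 Likert responses on the System Usability Scale."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS requires ten responses on a 1-5 scale")
        total = 0
        for index, response in enumerate(responses):
            if index % 2 == 0:       # items 1, 3, 5, 7, 9: positively worded
                total += response - 1
            else:                    # items 2, 4, 6, 8, 10: negatively worded
                total += 5 - response
        return total * 2.5

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0

Note that a SUS score is not a percentage; scores are best interpreted against published benchmarks (a commonly cited average is around 68).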

Several studies in our review test usability on 'cancer patients'. We would agree that it is essential in health education to test resources on the target audience; however, it is worth considering that unless these testers are at the beginning of their cancer journey, they may in fact have a higher level of knowledge than intended. In this context, healthy volunteers may provide a reasonable alternative, with the caveat that volunteers are likely to be interested and engaged in healthcare and may therefore have a higher level of health literacy than the general population.

Our systematic review identified only two studies describing usability testing on an educational resource for radiation oncology trainees. This might simply represent the fact that there are fewer such resources being regularly created. The two articles identified in our review [26], [37] both describe the design of resources to help with contouring. This is clearly a field where usability is of crucial importance, as factors like ease of navigation, the ability to concurrently view atlases and contouring software, and similarity to trainees' local contouring software are likely to have a large impact on engagement with the resource. We would encourage anyone designing such a resource to consider undertaking and reporting usability testing.

As a result of the inclusion criteria, all articles in our review describe planned or completed usability testing. However, only a proportion describe whether and how the results of that testing benefited ongoing resource development. An example of this being done well is Nguyen et al.; Table 1 in their paper describes the method, results and insights of serial rounds of usability testing [24]. We would suggest that a description of the changes made (and, if possible, repeat usability testing to demonstrate an improvement) would enhance any paper reporting usability, as it would help identify common areas of difficulty.

Limitations and possible future work

A limitation of this review is that, due to the search criteria, it only picks up studies which specifically mention 'usability'. It is possible that usability was assessed but not formally described, or that usability is being assessed in educational resources that are never formally published (or are published only in abstract form). We would encourage more radiotherapy researchers to publish their usability data and lessons, as these may help prevent others from repeating similar mistakes. The literature search was limited to only two databases, which may also have limited the number of results.

A future review might identify a more specific area of radiation oncology educational material and assess all educational resources published within this field, to determine what proportion of them report usability testing. This would give a better idea of how widespread usability testing is. This is not possible to assess from our review, which does not include educational resources which are not usability tested.

Conclusion

In this article, we have discussed the rationale for carrying out usability testing in the design of educational resources and described the main methods for doing so. We then reported the results of a literature review of the current use of usability testing within radiation oncology.

Current practice demonstrates that there is a balance to be achieved between the resource intensity of usability testing and the potential improvements to an educational resource. We would encourage all educationalists designing resources for either patients or trainees to consider how usability testing might reasonably be incorporated in their own design process. The ideal method(s) depends on the aim of the resource and certainly anyone aiming to design a durable and far-reaching resource should consider multiple rounds and methodologies of usability testing.

We hope we have also provided the necessary tools and information to show that even in simpler more local projects, it is feasible to carry out some basic usability testing to maximise the impact of a resource.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


How to write an effective usability testing script (templates + examples)

To get the most out of a usability study, it's vital to create a usability testing script: it keeps every session consistent and ensures you cover what matters. Here, we'll show you how to craft the perfect usability test script.

Why you need a usability testing script

Usability testing is made far more effective when using a test script. There are numerous benefits of scripting out the test, including:

  • Being able to review the tasks and questions with colleagues beforehand
  • Having a roadmap for the things you’ll say, so you don’t have to think on the fly
  • Keeping the methodology consistent, with the same questions and tasks
  • Determining how long a session takes, so you can properly time the meetings

All in all, having a well-thought-out script for your usability test is a must.

Before you create the test script

Matthieu Dixte, Product Researcher at Maze, suggests aligning and framing your research plan before you start writing your usability test script: "This helps identify the target audience, select the appropriate usability testing method, and the script/questions that are part of that test."

According to Matthieu, there are several elements every usability plan should have:

  • What triggers and motivates the realization of this study
  • The type of decision you want to make
  • Personal assumptions
  • Clear evidence
  • The learning gap you’re looking to fill

Remember, when preparing to start usability testing, you shouldn't go into it with a broad objective of "getting feedback on the product." That's too vague a goal. The results you get will be scattered, possibly inconsistent, and difficult to draw insights from.

Instead, the test should cover realistic actions that users will take with your product to help you detect usability problems and see if your product is easily understandable. A better goal, for a language-learning app, for example, would be “see if the user can start a new conversation in their target language.”

You also want to outline who the testing users will be, as well as how many test participants to include. Finally, remember to get consent from these users beforehand. This is usually done with a consent form. And if you're running a moderated usability test, make sure the participants are aware that you'll be seeing their screen and hearing their voice while they're testing your product.

Now you’re ready to start creating your usability test script.


Creating a usability test script

To make your usability test go well, your script should:

  • Have clear objectives, and create user tasks which test those objectives
  • Be reviewed by your peers, and modified based on their feedback
  • Be well-timed, ideally around 30 minutes or so. If this is the first usability test you're running, err on the side of shorter.
  • Be tested by a colleague before giving to real usability testers

Let's break the actual test script down into five sections: the introduction, background questions, tasks, observations and probing questions, and the wrap-up.

Step 1: Introduction

The first thing to do is to outline how this usability testing session is going to go. Especially if this is the test user’s first time participating in a usability study, it’s good to get on the same page. You can set expectations, let them know how long the process will take, and clarify any issues they might have.

A usability test isn't performed by robots; it's people completing tasks while being monitored by other people. Back-and-forth communication is possibly the most important part of a usability test. So, you want your test users to feel comfortable with you before they start the test itself, and getting off on the right foot as early as the introduction is the best way to build a positive rapport.

In fact, this goes for before and during the test. The user should be well-informed as they enter the test, and having the ability to communicate while testing can also be helpful.

So, introduce yourself and your team. Break the ice as much as possible. Try to find a place you’ve both visited, an activity you both enjoy, or shared career experiences.

You don’t have to invite them for dinner, but if you can build a nice rapport before the test starts, then their feedback will be more honest, and the results will be better for it.

Step 2: Background questions

“Don’t underestimate the importance of background questions,” warns Matthieu, “The order of the questions or steps in a test can influence the sequence of answers. Respondents may be biased by their previous answers.”

Once you've introduced yourself properly to your participants, you'll want to get some background information on them. This information will help you understand why their experience went the way it did. Try to find out:

  • Their job, and the tasks they do in it
  • Demographic questions (careful not to pry into irrelevant or overly personal details, though!)
  • Their experience with products similar to yours
  • Their experience with this usability testing platform
  • Any other data that’s pertinent to your testing

Try to keep questions about their experience as open-ended as you can. The objective here is to gauge their level of understanding in general. E.g. "We have a tool called 'troubleshooting'; what do you think it would do?"

Remind the users that you’re interested in testing the software, not them. They’re not being judged on their abilities. If you can clearly reaffirm this, you may put some anxious testers at ease and help combat cognitive biases like social desirability.

Finally, you want to double-check that they’ve given their consent. You have a legal and moral obligation to ensure they’ve consented to this study, and that they understand fully what the study entails.

Step 3: Scripting the usability tasks

This is the stage where the testing actually begins. There are countless variables that can make the test go better or worse, and a lot of this depends on your usability method. If you're using heuristic evaluation, for instance, your script may look a little different. This section is a general outline for how to write well-scripted usability tasks.

  • Stick to eight tasks at maximum. This might not seem like much, but even an eight-task test is going to give you valuable insights about your product. If you have more hypotheses to test—you definitely should, by the way—then test them in the next round of usability tests.
  • Your tasks should reflect realistic user goals. You’re not aiming to test the most niche use of your product, nor finding ways to break the program. That’s the job of a QA.
  • Don’t interrupt users, or tell them the path to take. Instead, tell them where you want them to arrive, and see if they can achieve it without your guidance.

Let’s use the example of a language app that matches native speakers with people learning their language:

  • Objective: User should start a conversation with a German speaker
  • Poor task description: "Go to the new conversation page, add German as the preferred language, and find an appropriate user to start a conversation with."
  • Better task description: "You want to learn German, so you signed up for a language app that matches you with native speakers. Find someone to speak German with who lives in your timezone."

The language you use when giving instructions is important, and the details are also nuanced. Subtle hints in commands can influence the way users engage with the program. Especially if they feel you’re trying to get them to perform a task in a particular way, they will have a different experience and perspective of the task itself.

Your tasks should also follow a logical and realistic sequence. You don’t want to send the users between pages at random to perform tasks. Instead, consider a sequence of tasks more like:

  • Send the user from the main page to a sub-page
  • Then, review some part of that sub-page
  • Finally, complete a new action within that page

If your tasks can’t be arranged to follow a neat sequence, then at least keep them clustered to an element of the product.

Make these tasks as direct and plain as possible. Some other things to avoid:

  • Using a “salesy” description. Example: "Choose one of our powerful new AI-powered tools to check for grammar mistakes in the text."
  • You’re not selling the product, so you don’t need to convince the user of anything. Be direct and clear with your language.
  • Possibly offending the user. Example: "You’re not smart enough to understand this page. Can you find the FAQ for answers?"
  • Don’t make things personal, stay away from insulting language, and avoid any topics that may be sensitive as much as you can.
  • Unnecessary backstory. Example: "Your wedding is coming up, and you’ve made a promise to get into shape for it. You tried jogging with your friends Jane and Mike, but you didn’t like it. You then heard about a good workout routine from your other friend Milos. You don’t have time to waste in the morning because you have to read the newspaper for your job, as well as making sure your children get to the school bus on time. So, finding a gym close to your office is important to you. Can you find 3 gyms close to this address?"

There is a difference between setting the scene and giving totally superfluous exposition. Tasks that resemble a real-life scenario are preferable; they even help you get more realistic data. But don't give too many details, especially ones that aren't actually relevant to the test.

Finally, make sure you’re able to communicate both ways during the test. Being able to record the voice of the user while they’re testing the product can be a huge boost. Hearing their thought process gives a whole new layer of context, and can better inform you of why things go wrong.
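One practical way to keep wording identical across sessions is to encode the tasks as data and read the scenario verbatim. The sketch below is a minimal Python illustration; the field names and example task are invented for this guide, not a Maze feature:

    from dataclasses import dataclass

    @dataclass
    class UsabilityTask:
        objective: str  # what success looks like, for the moderator only
        scenario: str   # realistic framing read aloud to the participant

    SCRIPT = [
        UsabilityTask(
            objective="User starts a conversation with a German speaker",
            scenario="You want to learn German, so you signed up for a "
                     "language app that matches you with native speakers. "
                     "Find someone to speak German with in your timezone.",
        ),
        # ...stick to eight tasks at most, ordered in a logical sequence
    ]

    for number, task in enumerate(SCRIPT, start=1):
        print(f"Task {number}: {task.scenario}")

Keeping the script in one reviewable artifact also makes it easy for colleagues to comment on task wording before the first session.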

Step 4: Systematic observations and probing questions

At this stage, your participants are using the product or service, performing the tasks you've scripted. Your role now is to don your detective hat, watch users interact with your product, and ask the right questions to gather meaningful insights.

While any data gathered from usability tasks will be helpful, there’s a wealth of proverbial gold-dust you can gather from observation and additional questions. Observation gives you a first-hand view into how users navigate your product; you can notice thought processes, hesitations, changes in mind or perception. Accompanying this with probing questions adds a new layer to understanding the 'why' behind these actions.

If the quantitative data of usability metrics is the outline sketch, the qualitative insights from observation and questions are the color.

Systematic observations:

In this phase, you should keep an eye on:

  • Non-verbal cues: Look out for those telltale signs of emotion or opinion. A sudden frown may mean they’re irritated something didn’t perform as expected. Knitted eyebrows may show surprise or confusion.
  • Task flow: Map out the path your users take—are they breezing through or stumbling at some steps? Are they backtracking, or hovering around options without clicking? Why is this?
  • Time: Keep track of time. Are users spending longer on tasks than you'd anticipated, or in different places than expected? Match this with non-verbal cues and task flow to theorize what's causing the disruption or hold-up (a minimal timing sketch follows this list).
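If you are timing tasks by hand, a few lines of code can turn session notes into comparable numbers. This is a minimal Python sketch; the log format is hypothetical (notes you record yourself), not an export from any tool:

    from statistics import mean, median

    # One entry per participant attempt at a task, logged by the moderator.
    attempts = [
        {"participant": "P1", "seconds": 48, "completed": True},
        {"participant": "P2", "seconds": 95, "completed": False},
        {"participant": "P3", "seconds": 62, "completed": True},
    ]

    times = [a["seconds"] for a in attempts]
    completion_rate = sum(a["completed"] for a in attempts) / len(attempts)

    print(f"time on task: mean {mean(times):.0f}s, median {median(times):.0f}s")
    print(f"completion rate: {completion_rate:.0%}")

Numbers like these won't explain why users struggled, but they highlight which tasks deserve your probing questions.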

Probing questions:

Now, you can complement those observations with intentional questions. Matthieu recommends asking probing questions like:

  • Can you walk me through the steps you took to complete this task?
  • What came to mind when you had to [task]?
  • Tell me more about why you chose to do [action]?
  • What would you expect to happen once you've [task]?
  • What would enable you to accomplish [task] more effectively?
  • Did you notice if there was another way to [complete a specific step/task]?
  • What are your thoughts on the layout of [section]?

Remember, your goal here is to initiate conversation and spark their feedback, not to guide their actions. If you need inspiration for your questions, Maze's Question Bank offers 350+ tried-and-tested questions for multiple research scenarios.

Step 5: Wrap-up questions and feedback

Once the test has finished, the final step to take with your test users is to get their more general thoughts on the process. This is the final set of usability testing questions that you'll be giving to your participants. It gives you the chance to soak up any remaining details about the test user's experience, so make the most of it!

Aim to ask follow-up questions about:

  • Their overall impressions of the product, and of the session
  • What works well/poorly in the product
  • Overall difficulties they had with the tasks
  • Any comments they wanted to add during the test but didn’t

Usability testing script template

A well-structured script helps maintain consistency across multiple sessions and participants, which is critical for reliable results. Here's a user-friendly and adaptable script template that you can use as a starting point for a variety of usability tests.

While this template provides a broad framework, it should be tailored to suit the unique needs and characteristics of your product or service, your test participants, and your specific test objectives.

Introduction

  • Hello, I'm [name]. I'm here today to gather your thoughts on our new [product].
  • We'd like to see how you interact with it and hear your opinions.
  • As you explore, please voice your thoughts out loud. This helps us understand your experience better.
  • Remember, there are no right or wrong answers. We're looking for your honest feedback to improve [product].
  • Feel free to be candid with your opinions. Your insights are key to our improvement.
  • I wasn't involved in the design of anything you'll see today, so I won't be offended if you don't like it or if you criticize the experience. Please don't hesitate to be honest and spontaneous.
  • I saw that you signed our consent form and the non-disclosure agreement. Do you have any questions about this?
  • Before we get started with the interview, we wanted to ask if it's okay to record the call for reference purposes. We just want to make sure we don't miss any important details or insights from our conversation. Would it be okay with you to have the call recorded?
  • Do you have any other questions before we start?
  • Great, let's begin with a few questions.

Background questions

  • Before we start, could you tell us a little about yourself?
  • How long have you worked at [Company]?
  • Could you describe your experience with your current [related product]?
  • Now that we have some context, let's move on to exploring [product].

Asking participants about their experience with a related product will help you understand their preferences and expectations. It gives you context for framing your usability test within their existing knowledge and usage. For example, if you’re working on a food delivery app, you could ask how often the participant uses Uber Eats or DoorDash.

Detailed task description

  • Now, let's walk through [product] together.
  • Your task is to [specific task]. As you complete the task, remember to think aloud, sharing your thoughts and experiences.
  • Don't worry if you encounter any difficulties or confusion, these are important for us to know.
  • Ready? Let's begin!

Systematic observations and probing questions

  • As you navigate through [product], I'll occasionally ask questions.
  • For example, if you seem confused or hesitant, I might ask, 'Could you share what's making you pause?'
  • Or if you avoid a certain feature or function, I may ask, 'Could you tell me why you're avoiding that?'
  • The purpose of these questions is to understand your experience better, not to guide your actions.

Wrap-up and final feedback

  • Now that we've explored [product], let's wrap up with a few questions:
  • What are your overall impressions of [product]?
  • If you could change one thing about [product], what would it be?
  • Are there any other features you wish it had?
  • Thank you for your time and invaluable insights. It's been truly helpful. If you have any questions or additional feedback, feel free to share. Your input is central to our work.

Example of a usability test script

To finish off, we’ll give a brief example of a usability test script.

Let’s say you’ve created a program that helps users to find the perfect movie to watch. The program has a huge range of custom filters, like original language, scenes of violence, the emotional tone of the film, and so on. You can add profiles of your family to your account, and choose which filters to add for any particular search.

Your objective for the usability test in this example could be: the user can find a movie that's suitable to watch with their young children. Your test group is therefore parents of young children.

Once you’ve set up the screen sharing, you’re ready to go.

Let’s look at an example usability test script for this movie-searching program.

Hi Maria, how are you? I really appreciate you taking time out of your day to participate in this test. I am Susan and I’m a researcher at Maze.

(If you are with colleagues, you should introduce them to the test user, too.)

So, let me outline how this will go. I'd like to start by asking you some questions about who you are, your background, and your relevant experience. I will then ask you to perform some tasks on our movie-search program. Once the tasks have been completed, I'd like to get some feedback from you about your experience with our program.

We’re doing this usability test to see how users interact with our software, and to hear their thoughts on it. We’re trying to make this the best it can be, so your honest thoughts are really important to us.

Is there anything you’d like to ask before we get going?

Finally, I’d like to make sure you’re comfortable with us recording today’s session. Is this okay with you?

Fantastic, so I’ll now start recording the audio and dive into some background questions.

So, Maria, could you tell us what your current job title is, and a brief overview of what your job entails?

(Here, you’ll want to get through the list of background questions you have prepared.)

Thank you for your answers. We’re now ready to start the test. Before we begin, I’d like to remind you of a few things.

First off, remember that we aren’t testing you today, we’re testing our program. So if something isn’t working, don’t worry, it’s a problem with our software and not something you’ve done wrong. In fact, there are no wrong answers here.

When using the program, try to act as naturally as possible. I get that it’s hard to do that with us watching your screen. But, please try to act as if you were using the app on your own, without anyone watching.

Please think aloud as you’re using our program. We really want to hear your thoughts, like where you’re navigating on the page, why you’re clicking somewhere, what you expect to happen when you do click, that sort of thing. If you have questions, feel free to ask me, and I’ll answer all of them I can.

Finally, we’d like you to be as honest as possible. If something doesn’t make sense on the page, or it’s not working right, then feel free to tell us. You’re not going to hurt our feelings, so don’t worry about that.

Great, so let’s begin. I’ll now start recording your screen.

  • Please take a look at the main page, and tell me what you’re seeing.
  • Okay, now I’d like you to create a profile for your child. You don’t have to use real information, you can create fictional details.
  • Next, I’d like you to set some filters. You only want to see results that are suitable for kids.
  • Now, please find a movie that your child would enjoy watching.

Observations and open-ended questions

While you interact with the program, I'll be observing your actions. I might ask questions if I notice something interesting or unusual.

For example, I might ask, 'I noticed you hesitated before setting that filter. Can you share what made you pause?' Or if you avoid using a feature, I might ask, 'Is there a reason why you didn't choose to use that feature?'

These questions help me understand your thought process better, and they're not intended to guide your actions. Just carry on as naturally as possible.

Please don’t worry about making mistakes, as any struggles you encounter can provide valuable insights for us. Remember, we’re testing the program, not you.

So, let’s continue with the next task. Please search for a movie that you, as an adult, would enjoy watching after the kids are in bed.

And that’s the final task finished! I’ve stopped recording your screen.

Before we finish, I’d like to ask you a few quick questions.

Firstly, what did you think of the homepage?

(Continue to go through the list of wrap-up questions, as well as any questions specific to this user about actions they took).

Thank you for that. Is there anything you’d like to add before we finish up?

Fantastic. Well, thank you again so much for taking the time out of your day to take part in this study with us. Your input today will be extremely useful for us. Take care, I hope to speak to you soon. Goodbye!


Frequently asked questions about usability testing scripts

What is a usability testing script?

A usability testing script is a plan of all the actions that you'll need to perform to run a successful usability test. A script allows you to plan the usability tasks and questions and review them with your colleagues beforehand. It also acts as a guide during the test, allowing you to keep the methodology consistent and time the sessions properly.

How do you write a usability testing script?

When writing a usability testing script, we recommend following these four steps:

  • Introduce yourself and your team to your test users and tell them how the usability testing session is going to go
  • Prepare some background questions to get to know your participants and their level of knowledge about the product
  • Script the usability tasks. Try to stick to eight tasks at maximum and include tasks that reflect realistic user goals and follow a logical sequence.
  • Prepare a list of follow-up questions to gather insights and details about the test users' experience

How do you document usability tests?

At the end of the usability testing process, make sure you create a final report to share your results with the rest of your team. The report should include a brief overview of the usability test you ran, your research goals, the methods you used to perform the test, information about each test participant, and, most importantly, your findings. You can find detailed information on how to analyze and report usability test results in the final chapter of this guide.


A USABILITY STUDY OF THE INTELLIGENT ASSISTANT FOR SENIOR CITIZENS TO SEEK HEALTH INFORMATION


  • May 15, 2019
  • Affiliation: School of Information and Library Science
  • This research study expands on earlier work on the usability of multimodal intelligent assistants for health information search by senior citizens. Intelligent assistants can take health information search to an advanced level through conversation and touch interaction with the device, rather than a text-based search engine. Usability evaluation of the system and device is important for pushing the design and development of the information search and retrieval experience forward. With the growth of the market and of user needs in the field of multimodal intelligent personal assistants, it is necessary to conduct research to understand the performance of current systems and to identify present weaknesses and directions for future improvement.
  • usability study
  • health information search
  • information retrieval
  • multimodal interaction
  • human computer interaction
  • senior citizens
  • intelligent assistant
  • https://doi.org/10.17615/gtnh-xa45
  • Honors Thesis
  • Mostafa, Javed
  • Bachelor of Science
  • Information & Library Science
  • University of North Carolina at Chapel Hill


What is a thesis | A Complete Guide with Examples

Madalsa


A thesis is a comprehensive academic paper based on your original research that presents new findings, arguments, and ideas of your study. It’s typically submitted at the end of your master’s degree or as a capstone of your bachelor’s degree.

However, writing a thesis can be laborious, especially for beginners. From the initial challenge of pinpointing a compelling research topic to organizing and presenting findings, the process is filled with potential pitfalls.

Therefore, to help you, this guide explains what a thesis is and offers insights and methods to transform writing one from an overwhelming task into a manageable and rewarding academic milestone.

What is a thesis?

A thesis is an in-depth research study that identifies a particular topic of inquiry and presents a clear argument or perspective about that topic using evidence and logic.

Writing a thesis showcases your ability to think critically, gather evidence, and make a compelling argument. Integral to these competencies is thorough research, which not only fortifies your propositions but also confers credibility on your entire study.

Furthermore, there's another concept you might confuse with the thesis: the 'working thesis.' However, the two aren't the same and shouldn't be used interchangeably.

A working thesis, often referred to as a preliminary or tentative thesis, is an initial version of your thesis statement. It serves as a draft or a starting point that guides your research in its early stages.

As you research more and gather more evidence, your initial thesis (aka working thesis) might change. It's like a starting point that can be adjusted as you learn more. It's normal for your main topic to change a few times before you finalize it.

While a thesis identifies and provides an overarching argument, the key to clearly communicating the central point of that argument lies in writing a strong thesis statement.

What is a thesis statement?

A strong thesis statement (aka thesis sentence) is a concise summary of the main argument or claim of the paper. It serves as a critical anchor in any academic work, succinctly encapsulating the primary argument or main idea of the entire paper.

Typically found within the introductory section, a strong thesis statement acts as a roadmap of your thesis, directing readers through your arguments and findings. By delineating the core focus of your investigation, it offers readers an immediate understanding of the context and the gravity of your study.

Furthermore, an effectively crafted thesis statement can set forth the boundaries of your research, helping readers anticipate the specific areas of inquiry you are addressing.

Different types of thesis statements

A good thesis statement is clear, specific, and arguable. Therefore, it is necessary for you to choose the right type of thesis statement for your academic papers.

Thesis statements can be classified based on their purpose and structure. Here are the primary types of thesis statements:

Argumentative (or Persuasive) thesis statement

Purpose : To convince the reader of a particular stance or point of view by presenting evidence and formulating a compelling argument.

Example : Reducing plastic use in daily life is essential for environmental health.

Analytical thesis statement

Purpose : To break down an idea or issue into its components and evaluate it.

Example : By examining the long-term effects, social implications, and economic impact of climate change, it becomes evident that immediate global action is necessary.

Expository (or Descriptive) thesis statement

Purpose : To explain a topic or subject to the reader.

Example : The Great Depression, spanning the 1930s, was a severe worldwide economic downturn triggered by a stock market crash, bank failures, and reduced consumer spending.

Cause and effect thesis statement

Purpose : To demonstrate a cause and its resulting effect.

Example : Overuse of smartphones can lead to impaired sleep patterns, reduced face-to-face social interactions, and increased levels of anxiety.

Compare and contrast thesis statement

Purpose : To highlight similarities and differences between two subjects.

Example : "While both novels '1984' and 'Brave New World' delve into dystopian futures, they differ in their portrayal of individual freedom, societal control, and the role of technology."

When you write a thesis statement, it's important to ensure clarity and precision, so the reader immediately understands the central focus of your work.

What is the difference between a thesis and a thesis statement?

While both terms are frequently used interchangeably, they have distinct meanings.

A thesis refers to the entire research document, encompassing all its chapters and sections. In contrast, a thesis statement is a brief assertion that encapsulates the central argument of the research.

Here’s an in-depth differentiation table of a thesis and a thesis statement.

Now, to craft a compelling thesis, it's crucial to adhere to a specific structure. Let's break down the essential components that make up a thesis structure.

15 components of a thesis structure

Navigating a thesis can be daunting. However, understanding its structure can make the process more manageable.

Here are the key components or different sections of a thesis structure:

Title page

Your thesis begins with the title page. It's not just a formality but the gateway to your research.


Here, you'll prominently display the necessary information about you (the author) and your institutional details.

  • Title of your thesis
  • Your full name
  • Your department
  • Your institution and degree program
  • Your submission date
  • Your Supervisor's name (in some cases)
  • Your Department or faculty (in some cases)
  • Your University's logo (in some cases)
  • Your Student ID (in some cases)

Abstract

In a concise manner, you'll have to summarize the critical aspects of your research in typically no more than 200-300 words.


This includes the problem statement, methodology, key findings, and conclusions. For many, the abstract will determine if they delve deeper into your work, so ensure it's clear and compelling.

Acknowledgments

Research is rarely a solitary endeavor. In the acknowledgments section, you have the chance to express gratitude to those who've supported your journey.


This might include advisors, peers, institutions, or even personal sources of inspiration and support. It's a personal touch, reflecting the humanity behind the academic rigor.

Table of contents

A roadmap for your readers, the table of contents lists the chapters, sections, and subsections of your thesis.


By providing page numbers, you allow readers to navigate your work easily, jumping to sections that pique their interest.

List of figures and tables

Research often involves data, and presenting this data visually can enhance understanding. This section provides an organized listing of all figures and tables in your thesis.


It's a visual index, ensuring that readers can quickly locate and reference your graphical data.

Introduction

Here's where you introduce your research topic, articulate the research question or objective, and outline the significance of your study.


  • Present the research topic : Clearly articulate the central theme or subject of your research.
  • Background information : Ground your research topic, providing any necessary context or background information your readers might need to understand the significance of your study.
  • Define the scope : Clearly delineate the boundaries of your research, indicating what will and won't be covered.
  • Literature review : Introduce any relevant existing research on your topic, situating your work within the broader academic conversation and highlighting where your research fits in.
  • State the research Question(s) or objective(s) : Clearly articulate the primary questions or objectives your research aims to address.
  • Outline the study's structure : Give a brief overview of how the subsequent sections of your work will unfold, guiding your readers through the journey ahead.

The introduction should captivate your readers, making them eager to delve deeper into your research journey.

Literature review section

Your study correlates with existing research. Therefore, in the literature review section, you'll engage in a dialogue with existing knowledge, highlighting relevant studies, theories, and findings.


It's here that you identify gaps in the current knowledge, positioning your research as a bridge to new insights.

To streamline this process, consider leveraging AI tools. For example, the SciSpace literature review tool enables you to efficiently explore and delve into research papers, simplifying your literature review journey.

Methodology

In the research methodology section, you’ll detail the tools, techniques, and processes you employed to gather and analyze data. This section will inform the readers about how you approached your research questions and ensures the reproducibility of your study.


Here's a breakdown of what it should encompass:

  • Research design: Describe the overall structure and approach of your research. Are you conducting a qualitative study with in-depth interviews? Or is it a quantitative study using statistical analysis? Perhaps it's a mixed-methods approach?
  • Data collection: Detail the methods you used to gather data. This could include surveys, experiments, observations, interviews, archival research, etc. Mention where you sourced your data, the duration of data collection, and any tools or instruments used.
  • Sampling: If applicable, explain how you selected participants or data sources for your study. Discuss the size of your sample and the rationale behind choosing it.
  • Data analysis: Describe the techniques and tools you used to process and analyze the data. This could range from statistical tests in quantitative research to thematic analysis in qualitative research.
  • Validity and reliability: Address the steps you took to ensure that your findings are both accurate and consistent.
  • Ethical considerations: Highlight any ethical issues related to your research and the measures you took to address them, including informed consent, confidentiality, and data storage and protection measures.

Moreover, different research questions necessitate different types of methodologies. For instance:

  • Experimental methodology: Often used in the sciences, this involves a controlled experiment to discern causality.
  • Qualitative methodology: Employed when exploring patterns or phenomena without numerical data. Methods can include interviews, focus groups, or content analysis.
  • Quantitative methodology: Concerned with measurable data and often involves statistical analysis. Surveys and structured observations are common tools here.
  • Mixed methods: As the name implies, this combines both qualitative and quantitative methodologies.

The Methodology section isn’t just about detailing the methods but also justifying why they were chosen. The appropriateness of the methods in addressing your research question can significantly impact the credibility of your findings.
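
To make the data-analysis step above concrete, here is a minimal sketch, in Python, of the kind of quantitative comparison a methodology chapter might describe. Everything in it is hypothetical: the group names, the scores, and the 0.05 threshold are placeholders, and a real thesis would justify the choice of test and verify its assumptions.

```python
# Minimal sketch of a quantitative data-analysis step: comparing two
# hypothetical participant groups with an independent-samples t-test.
from scipy import stats

# Hypothetical survey scores from two groups (placeholder data).
control_scores = [72, 65, 70, 68, 74, 66, 71, 69]
treatment_scores = [78, 75, 80, 73, 82, 77, 79, 76]

# Welch's t-test, which does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(control_scores, treatment_scores,
                                  equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# The 0.05 threshold is conventional; a thesis should justify it explicitly.
if p_value < 0.05:
    print("Statistically significant difference at alpha = 0.05.")
else:
    print("No statistically significant difference detected.")
```

The written methodology should then supply the rationale behind such a step: why this test rather than a non-parametric alternative, and why that significance level.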

Results (or Findings)

This section presents the outcomes of your research. It's crucial to note that the nature of your results may vary; they could be quantitative, qualitative, or a mix of both.

Quantitative results often present statistical data, showcasing measurable outcomes, and they benefit from tables, graphs, and figures to depict these data points.

Qualitative results, on the other hand, might delve into patterns, themes, or narratives derived from non-numerical data, such as interviews or observations.

Regardless of the nature of your results, clarity is essential. This section is purely about presenting the data without offering interpretations — that comes later in the discussion.
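
As a small illustration of presenting without interpreting, the sketch below, again with invented data, reduces raw scores to a descriptive table of per-group means, standard deviations, and sample sizes. What the numbers mean belongs in the discussion.

```python
# Sketch: summarizing hypothetical results into a descriptive table.
import pandas as pd

# Hypothetical raw results: one score per participant, tagged by group.
df = pd.DataFrame({
    "group": ["control"] * 4 + ["treatment"] * 4,
    "score": [72, 65, 70, 68, 78, 75, 80, 73],
})

# Per-group descriptive statistics only: no inference, no interpretation.
summary = df.groupby("group")["score"].agg(["mean", "std", "count"]).round(2)
print(summary)
```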

Discussion

In the discussion section, your raw data is transformed into valuable insights.

Start by revisiting your research question and contrasting it with your findings. How do your results expand, narrow, or challenge current academic conversations?

Dive into the intricacies of the data, guiding the reader through its implications. Detail potential limitations transparently, signaling your awareness of the research's boundaries. This is where your academic voice should be resonant and confident.

Practical implications (Recommendations) section

Based on the insights derived from your research, this section provides actionable suggestions or proposed solutions.

Whether aimed at industry professionals or the general public, recommendations translate your academic findings into potential real-world actions. They help readers understand the practical implications of your work and how it can be applied to effect change or improvement in a given field.

When crafting recommendations, it's essential to ensure they're feasible and rooted in the evidence provided by your research. They shouldn't merely be aspirational but should offer a clear path forward, grounded in your findings.

Conclusion

The conclusion provides closure to your research narrative.

It's not merely a recap but a synthesis of your main findings and their broader implications. Reconnect with the research questions or hypotheses posited at the beginning, offering clear answers based on your findings.

Reflect on the broader contributions of your study, considering its impact on the academic community and potential real-world applications.

Lastly, the conclusion should leave your readers with a clear understanding of the value and impact of your study.

References (or Bibliography)

Every theory you've expounded upon, every data point you've cited, and every methodological precedent you've followed finds its acknowledgment here.

In the references, it's crucial to maintain meticulous consistency in formatting, mirroring the specific guidelines of your chosen citation style.

Proper referencing helps to avoid plagiarism , gives credit to original ideas, and allows readers to explore topics of interest. Moreover, it situates your work within the continuum of academic knowledge.

To properly cite the sources used in the study, you can rely on online citation generator tools to generate accurate citations!

Here’s more on how you can cite your sources.

Appendices

Often, the depth of research produces a wealth of material that, while crucial, can make the core content of the thesis cumbersome. The appendices are where you place supplementary information that supports your research but isn't central to the main text.

Whether it's raw datasets, detailed procedural methodologies, extended case studies, or any other ancillary material, the appendices ensure that these elements are archived for reference without breaking the main narrative's flow.

For thorough researchers and readers keen on meticulous details, the appendices provide a treasure trove of insights.

Glossary (optional)

In academia, specialized terminology and jargon are inevitable. However, not every reader is versed in every term.

The glossary, while optional, is a critical tool for accessibility. It's a bridge ensuring that even readers from outside the discipline can access, understand, and appreciate your work.

By defining complex terms and providing context, you're inviting a wider audience to engage with your research, enhancing its reach and impact.

Remember, while these components provide a structured framework, the essence of your thesis lies in the originality of your ideas, the rigor of your research, and the clarity of your presentation.

As you craft each section, keep your readers in mind, ensuring that your passion and dedication shine through every page.

Thesis examples

To further elucidate the concept of a thesis, here are illustrative examples from various fields:

Example 1 (History): "Abolition, Africans, and Abstraction: the Influence of the ‘Noble Savage’ on British and French Antislavery Thought, 1787-1807" by Suchait Kahlon.
Example 2 (Climate Dynamics): "Influence of external forcings on abrupt millennial-scale climate changes: a statistical modelling study" by Takahito Mitsui and Michel Crucifix.

Checklist for your thesis evaluation

Evaluating your thesis ensures that your research meets the standards of academia. Here's an elaborate checklist to guide you through this critical process.

Content and structure

  • Is the thesis statement clear, concise, and debatable?
  • Does the introduction provide sufficient background and context?
  • Is the literature review comprehensive, relevant, and well-organized?
  • Does the methodology section clearly describe and justify the research methods?
  • Are the results/findings presented clearly and logically?
  • Does the discussion interpret the results in light of the research question and existing literature?
  • Does the conclusion summarize the research and suggest future directions or implications?

Clarity and coherence

  • Is the writing clear and free of jargon?
  • Are ideas and sections logically connected and flowing?
  • Is there a clear narrative or argument throughout the thesis?

Research quality

  • Is the research question significant and relevant?
  • Are the research methods appropriate for the question?
  • Is the sample size (if applicable) adequate?
  • Are the data analysis techniques appropriate and correctly applied?
  • Are potential biases or limitations addressed?

Originality and significance

  • Does the thesis contribute new knowledge or insights to the field?
  • Is the research grounded in existing literature while offering fresh perspectives?

Formatting and presentation

  • Is the thesis formatted according to institutional guidelines?
  • Are figures, tables, and charts clear, labeled, and referenced in the text?
  • Is the bibliography or reference list complete and consistently formatted?
  • Are appendices relevant and appropriately referenced in the main text?

Grammar and language

  • Is the thesis free of grammatical and spelling errors?
  • Is the language professional, consistent, and appropriate for an academic audience?
  • Are quotations and paraphrased material correctly cited?

Feedback and revision

  • Have you sought feedback from peers, advisors, or experts in the field?
  • Have you addressed the feedback and made the necessary revisions?

Overall assessment

  • Does the thesis as a whole feel cohesive and comprehensive?
  • Would the thesis be understandable and valuable to someone in your field?

Use this checklist to ensure that your thesis leaves no room for doubt or missing information.

After writing your thesis, the next step is to discuss and defend your findings verbally in front of a knowledgeable panel. You have to be well prepared, as your professors may also grade your presentation abilities.

Preparing your thesis defense

A thesis defense, also known as "defending the thesis," is the culmination of a scholar's research journey. It's the final frontier, where you'll present your findings and face scrutiny from a panel of experts.

Typically, the defense involves a public presentation where you’ll have to outline your study, followed by a question-and-answer session with a committee of experts. This committee assesses the validity, originality, and significance of the research.

The defense serves as a rite of passage for scholars. It's an opportunity to showcase expertise, address criticisms, and refine arguments. A successful defense not only validates the research but also establishes your authority as a researcher in your field.

Here’s how you can effectively prepare for your thesis defense.

Now, having touched upon the process of defending a thesis, it's worth noting that scholarly work can take various forms, depending on academic and regional practices.

One such form, often paralleled with the thesis, is the 'dissertation.' But what differentiates the two?

Dissertation vs. Thesis

Often used interchangeably in casual discourse, 'thesis' and 'dissertation' refer to distinct research projects undertaken at different levels of higher education.

To the uninitiated, the distinction can be elusive. So, let's demystify these terms and delve into their core differences.

Here's how the two typically differ:

  • Academic level: In the US convention, a thesis is usually written for a master's degree, while a dissertation is written for a doctorate; some countries reverse the terms.
  • Purpose: A thesis typically demonstrates command of existing research on a topic, whereas a dissertation is expected to make an original contribution to the field.
  • Scope and length: A dissertation is generally longer and more extensive, reflecting years of original research.

Wrapping up

From understanding the foundational concept of a thesis to navigating its various components, differentiating it from a dissertation, and recognizing the importance of proper citation — this guide covers it all.

As scholars and readers, understanding these nuances not only aids in academic pursuits but also fosters a deeper appreciation for the relentless quest for knowledge that drives academia.

It’s important to remember that every thesis is a testament to curiosity, dedication, and the indomitable spirit of discovery.

Good luck with your thesis writing!

Frequently Asked Questions

How long is a thesis?

A thesis typically ranges from 40 to 80 pages, but its length can vary based on the research topic, institution guidelines, and level of study.

How long is a PhD thesis?

A PhD thesis usually spans 200-300 pages, though this can vary based on the discipline, complexity of the research, and institutional requirements.

How do I identify a thesis topic?

To identify a thesis topic, consider current trends in your field, gaps in existing literature, personal interests, and discussions with advisors or mentors. Additionally, reviewing related journals and conference proceedings can provide insights into potential areas of exploration.

Where does the conceptual framework go in a thesis?

The conceptual framework is often situated in the literature review or theoretical framework section of a thesis. It helps set the stage by providing the context, defining key concepts, and explaining the relationships between variables.

How do I write a thesis statement?

A thesis statement should be concise, clear, and specific. It should state the main argument or point of your research. Start by pinpointing the central question or issue your research addresses, then condense that into a single statement, ensuring it reflects the essence of your paper.
