Human-Computer Interfaces - Lecture 06 - HCI Evaluation
Introduction to Human-Computer Interaction Evaluation
In this section, the instructor introduces the topic of Human-Computer Interaction (HCI) evaluation and discusses the importance of understanding evaluation techniques in HCI.
Understanding the Cycle of Human-Centered Design
- The cycle of human-centered design involves a repeated process of evaluation to ensure that design solutions align with user goals and objectives.
- The evaluation process helps determine if design solutions are effective or need further refinement.
- Design solutions are often prototypes with varying levels of fidelity.
Reasons for Conducting HCI Evaluation
- To understand how users perceive and comprehend a design concept based on their requirements.
- To compare alternative design options and determine which one is better suited for users' needs.
- To assess compliance with usability standards and guidelines.
- To identify integration issues and potential problems before releasing the final product.
Types of Evaluation Techniques
Formative Evaluation
- Formative evaluations are conducted during the early stages of design development when there is still uncertainty about the effectiveness of a concept.
- These evaluations focus on understanding user needs, comparing design alternatives, and refining ideas.
- Prototypes used in formative evaluations are typically low-fidelity.
Summative Evaluation
- Summative evaluations are conducted after the product has been developed to assess its usability against predefined goals and requirements.
- These evaluations aim to measure how well the system meets usability criteria.
- Prototypes used in summative evaluations are typically high-fidelity, navigable, or even functional systems.
Inspection Techniques in HCI Evaluation
Expert-Based Inspections
- Expert-based inspections involve specialists who act as advocates for users during the evaluation process.
- These experts evaluate interactions from a user's perspective based on their knowledge of users, domain expertise, and usability principles.
- Examples include heuristic evaluation, cognitive walkthroughs, inspection by analogy, and more.
Conclusion
- Expert-based inspection techniques are valuable because they are relatively easy to learn and provide insights into usability issues.
- The effectiveness of expert-based inspections depends on the expertise and knowledge of the evaluators.
- These techniques can be used in both formative and summative evaluations.
Usability Testing and Heuristic Evaluation
In this section, the speaker discusses the importance of usability testing and heuristic evaluation in evaluating user interfaces. They explain that usability testing requires careful preparation and resources, while heuristic evaluation relies on the expertise of evaluators.
Usability Testing
- Usability testing is a valuable method for evaluating user interfaces.
- It involves several steps and has real costs, but it delivers excellent results.
- Preparing a usability test is not trivial.
- Usability testing requires real users interacting with the system.
- Testers should have some knowledge of design patterns and of user needs.
- Once prepared, sessions are quick to run and productive at uncovering problems.
Heuristic Evaluation
- Heuristic evaluation depends primarily on evaluators' expertise.
- It is an easy technique to teach: anyone familiar with the heuristics and the system can perform it.
- Evaluators should be users of the system being evaluated.
- Evaluators with different backgrounds can find different types of problems.
- The diversity of evaluators' perspectives enriches the evaluation process.
Process of Heuristic Evaluation
- The process starts with recruiting a diverse team of evaluators.
- Evaluators should have domain knowledge, understanding of user characteristics, and familiarity with usability heuristics.
- The evaluation begins with an instruction phase to train evaluators on the procedure.
- Each evaluator performs individual evaluations, which are then compiled into a consolidated result.
- Findings are classified by severity and decisions are made regarding design changes based on cost-benefit analysis.
Recruiting Evaluators
- The quality of heuristic evaluation depends on the skills and diversity of evaluators.
- Domain experts, users, and designers should be part of the evaluator team.
- The number of evaluators should allow for diverse perspectives without becoming too costly or time-consuming.
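The trade-off in the last point can be made concrete with the cost-benefit model of Nielsen and Landauer: a single evaluator finds only a fraction of the problems, and returns diminish as evaluators are added. A minimal sketch, assuming the commonly cited average of about 0.31 for the proportion of problems one evaluator finds (this varies by project, so treat it as an assumption):

```python
# Sketch of the Nielsen & Landauer model for choosing how many evaluators
# to recruit. lam is the average proportion of problems a single evaluator
# finds; 0.31 is the average Nielsen reports, but it varies by project.

def proportion_found(evaluators: int, lam: float = 0.31) -> float:
    """Expected share of usability problems found by a team of evaluators."""
    return 1 - (1 - lam) ** evaluators

for n in range(1, 7):
    print(f"{n} evaluator(s): ~{proportion_found(n):.0%} of problems")
```

With these assumptions, three to five evaluators already uncover roughly two thirds to 85% of the problems, which is why small, diverse teams are usually recommended.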
Introduction to Heuristic Evaluation Process
In this section, the speaker explains the process of heuristic evaluation and provides guidance on how to train evaluators.
Presenting the Evaluation Process
- Start by presenting the domain and context of use to evaluators.
- Explain the user characteristics and specific scenarios they will evaluate.
- Provide evaluators with a specific scenario to investigate during their evaluation.
- Choose scenarios that are relevant and manageable within the available time frame.
Training Evaluators
- Train evaluators on the evaluation procedure.
- Explain what heuristic evaluation is and its purpose.
- Emphasize the importance of evaluating from different perspectives.
- Ensure evaluators understand the domain and context of use.
- Clarify any questions or concerns evaluators may have.
Selecting Specific Scenarios for Evaluation
In this section, the speaker discusses selecting specific scenarios for heuristic evaluation and considerations for time management.
Choosing Scenarios
- Select scenarios that represent important aspects of system usage.
- Consider scenarios that cover a range of functionalities or critical tasks.
- Avoid an excessive number of scenarios to prevent evaluations from becoming too time-consuming.
Time Management
- Balance the number of scenarios with available evaluator time.
- Consider both scenario complexity and evaluator efficiency when estimating time requirements.
- Strive for a reasonable duration that allows thorough evaluations without being overly burdensome.
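As a rough planning aid, the points above can be turned into a simple session-length estimate from the scenario count and a per-scenario time measured in a pilot run. All numbers below are hypothetical planning inputs, not values from the lecture:

```python
# Rough session-length estimate for one evaluation session.
# Briefing/debrief defaults and the per-scenario time are hypothetical;
# in practice, minutes_per_scenario would come from a pilot run.

def session_minutes(scenarios: int, minutes_per_scenario: float,
                    briefing: float = 15, debrief: float = 10) -> float:
    """Total minutes one evaluator spends: briefing + scenarios + debrief."""
    return briefing + scenarios * minutes_per_scenario + debrief

# e.g. 4 scenarios at ~12 minutes each
print(session_minutes(4, 12))  # -> 73 minutes per evaluator
```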
Usability Inspection Process
This section discusses the steps involved in the usability inspection process, focusing on identifying and addressing usability problems.
Identifying Problems in Interface Design
- Inspectors focus on interface, interaction, and conceptual model.
- They identify problems and violations of heuristics.
- Different types of prototypes can be evaluated: functioning systems, high-fidelity prototypes, or paper prototypes.
Compilation of Evaluation Results
- Moderators compile individual opinions from evaluators.
- Problems that are repeatedly identified are given higher visibility.
- All identified problems that may impact user experience are retained for further analysis.
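The compilation step above can be sketched as a simple tally: each evaluator submits a list of findings, and problems reported by more evaluators gain visibility. The evaluator names and problem labels below are invented for illustration:

```python
from collections import Counter

# Minimal sketch of the moderator's consolidation step: problems reported
# by more evaluators get higher "visibility". All names are hypothetical.

reports = {
    "evaluator_a": ["label missing on save button", "low contrast text"],
    "evaluator_b": ["low contrast text", "no undo for delete"],
    "evaluator_c": ["low contrast text", "label missing on save button"],
}

visibility = Counter(p for findings in reports.values() for p in findings)
for problem, count in visibility.most_common():
    print(f"{count}/{len(reports)} evaluators: {problem}")
```

Note that even problems found by a single evaluator stay on the list, since they may still affect the user experience.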
Severity Classification of Problems
- Severity is classified based on dimensions such as frequency, impact, and persistence.
- Problems that occur frequently or have a significant impact are considered more severe.
- Persistent problems that appear across multiple screens are also taken into account.
Compiling Problem List
- Results are compiled into a single problem list spreadsheet.
- Visibility and severity of each problem are noted.
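One way to sketch the severity classification is to rate each problem on the three dimensions mentioned (frequency, impact, persistence) and combine them into one score for the problem list. Averaging the three ratings is an illustrative choice, not a rule from the lecture, and the example problems are hypothetical:

```python
from dataclasses import dataclass

# Hedged sketch of a severity rating on a 0-4 scale per dimension.

@dataclass
class Problem:
    description: str
    frequency: int    # 0 (rare) .. 4 (happens on every use)
    impact: int       # 0 (cosmetic) .. 4 (blocks the task)
    persistence: int  # 0 (one screen) .. 4 (appears everywhere)

    @property
    def severity(self) -> float:
        # Averaging is one simple way to combine the three dimensions.
        return (self.frequency + self.impact + self.persistence) / 3

problems = [
    Problem("no undo for delete", frequency=2, impact=4, persistence=1),
    Problem("low contrast text", frequency=4, impact=1, persistence=4),
]
for p in sorted(problems, key=lambda p: p.severity, reverse=True):
    print(f"{p.severity:.1f}  {p.description}")
```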
Cost-Benefit Analysis for Problem Solutions
- Designers prioritize which problems to address based on severity and cost-benefit analysis.
- High-severity problems require immediate attention, while low-severity ones may be addressed if resources allow.
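The prioritization decision can be illustrated by ranking problems by a severity-to-cost ratio, so severe but cheap-to-fix problems surface first. The ratio, severities, and costs below are all hypothetical inputs, not a method prescribed by the lecture:

```python
# Illustrative cost-benefit prioritization of the consolidated problem list.

problems = [
    {"name": "no undo for delete", "severity": 4, "fix_cost_days": 5},
    {"name": "label missing on save button", "severity": 2, "fix_cost_days": 0.5},
    {"name": "low contrast text", "severity": 3, "fix_cost_days": 1},
]

# Rank by benefit (severity) per unit of cost (estimated days to fix).
ranked = sorted(problems, key=lambda p: p["severity"] / p["fix_cost_days"],
                reverse=True)
for p in ranked:
    print(f'{p["name"]}: ratio {p["severity"] / p["fix_cost_days"]:.1f}')
```

A pure ratio can deprioritize expensive high-severity problems, so in practice teams usually fix all high-severity issues first and apply cost-benefit reasoning to the rest.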
Usability Testing Techniques
This section introduces usability testing techniques as an alternative approach to inspecting interfaces. It explains the process of conducting usability tests with real users and analyzing the results.
Usability Testing Process
- Usability testing involves experiments with real users interacting with prototypes.
- The process includes defining objectives, planning the experiment, executing tests with real users, and analyzing results.
Objective Definition for Testing
- Establishing clear objectives is crucial before conducting usability tests.
- Objectives help guide the testing process and interpretation of results.
Planning the Experiment
- Careful planning is necessary for a successful usability test.
- Various steps are involved in the planning phase to ensure a well-executed experiment.
Executing Usability Tests
- Real users are involved in performing tasks with the prototypes.
- Observations and interactions are closely monitored by moderators.
Analysis of Test Results
- Designers or usability analysts analyze the observations and data collected during the tests.
- Usability professionals interpret the results and draw insights that may not have been anticipated.
Conclusion
The transcript covers two main topics: the usability inspection process and usability testing techniques. The usability inspection process involves identifying problems in interface design, compiling evaluation results, classifying problem severity, compiling a problem list, and conducting cost-benefit analysis for problem solutions. On the other hand, usability testing techniques involve defining objectives, planning experiments, executing tests with real users, and analyzing test results. Both approaches provide valuable insights into improving user experience in interface design.
Ethical Considerations and Subject Recruitment
In this section, the speaker discusses the ethical concerns related to involving people in usability testing. They emphasize the need to establish relevant scenarios for testing and prepare a suitable testing environment.
Concerns about Ethics and Scenarios
- It is important to address the ethical issues associated with involving people in usability testing.
- Relevant scenarios need to be established for presenting to test subjects who will assess the usability of the system.
- A suitable testing environment with monitoring capabilities should be prepared.
Subject Recruitment and Motivation
- Having studied the users during earlier design activities, the team can choose representative users for testing.
- Recruiting participants is a delicate process that involves inviting them to test in an environment different from their usual setting.
- Motivating participants can be done through rewards or incentives, but care must be taken not to coerce or violate ethical principles.
- Participants should provide informed consent freely and willingly.
Mitigating Risks and Ensuring Ethical Conduct
This section focuses on mitigating risks and ensuring ethical conduct during usability testing. The importance of considering potential risks, obtaining consent, and avoiding unethical practices is emphasized.
Mitigating Risks and Consent
- Usability tests involve human subjects, so it is crucial to consider potential risks and mitigate them.
- Users should be made aware of any risks involved in participating in the test.
- Consent must be obtained from participants before conducting any usability experiments.
Ethical Principles
- Researchers must adhere to ethical guidelines when conducting research involving human subjects.
- It is essential not to put participants in uncomfortable or embarrassing situations during usability tests.
- Collecting images or personal data without explicit consent from participants is prohibited.
Determining Scenarios and Task Design
This section discusses the process of determining relevant scenarios and designing tasks for usability testing. The importance of providing necessary information and decision-making elements to participants is highlighted.
Identifying Relevant Scenarios
- It is crucial to determine which situations or scenarios require testing.
- In the example given, a scenario involving a nurse prescribing medication was considered.
- All necessary information should be provided to participants so they can make informed decisions during the test.
Task Design and Support Material
- Tasks need to be clearly identified and described in support materials for consistent presentation to each participant.
- Care must be taken not to introduce biases in how tasks are presented.
- The duration of the test should be limited, typically not exceeding half an hour.
Preparing the Testing Environment
This section focuses on preparing the testing environment for usability testing. Different options, such as conducting tests in a laboratory or real-world setting, are discussed.
Testing Environment Options
- Usability tests can be conducted in a laboratory or real-world setting depending on the context being observed.
- In a simple setup, cameras can capture both facial expressions and user interactions with prototypes.
- The choice of environment depends on the designer's understanding of how it may impact the experiment.
Importance of a Good Testing Team
- A good testing team is essential for conducting effective usability tests.
- The team includes recruited users, individuals operating tools and recording videos, and most importantly, a moderator who guides participants through the test process.
- Moderators play a crucial role in maintaining pace, presenting scenarios, ensuring user comfort, and resolving any issues that arise during testing.
Conducting Pilot Tests
This section emphasizes the importance of conducting pilot tests before the actual usability testing. The purpose of pilot tests is to ensure that all planning and preparations are effective.
Purpose of Pilot Tests
- Pilot tests serve as a test run for the actual usability testing.
- They help identify any potential issues or shortcomings in the planned procedures.
- Conducting pilot tests with a small group of participants ensures that everything is well-prepared and executed.
Execution of Usability Testing
This section outlines the execution process of usability testing, including pre-testing, main testing, and post-testing stages.
Pre-Testing Stage
- In the pre-testing stage, invited participants are screened to check whether they match the desired user profile.
- Participants fill out a questionnaire about their profile, which the researcher compares against the target profile for the test.
Post-Test Phase and Data Analysis
This section discusses the process of usability testing, including scenarios, data collection, post-test phase, and data analysis.
Usability Testing Process
- Usability testing involves the user performing scenarios to identify usability issues in the system.
- After completing the scenarios, the user enters the post-test phase where their impressions and experiences are collected.
- During this phase, users are asked about their experience and satisfaction using standardized scales.
- Comparing users' expectations with their actual experience helps identify usability improvements.
- Data analysis involves reviewing audio recordings, transcribing them if necessary, analyzing videos, and interpreting user behavior and motivations.
- The analysis aims to understand why users can or cannot use the system effectively by comparing expectations with satisfaction levels.
- Severity of issues is assessed along with proposed design solutions.
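The lecture does not name a specific standardized satisfaction scale, but one widely used example is the System Usability Scale (SUS): ten Likert items (1-5), alternating positive and negative statements, scored 0-100. A minimal scoring sketch:

```python
# Example scoring for the System Usability Scale (SUS), one common
# standardized satisfaction questionnaire. The transcript does not name
# a specific scale; SUS is used here purely as an illustration.

def sus_score(answers: list[int]) -> float:
    """Compute a 0-100 SUS score from ten 1-5 Likert answers."""
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers):
        # Odd-numbered items (index 0, 2, ...) are positive statements
        # and contribute (answer - 1); even-numbered items are negative
        # statements and contribute (5 - answer).
        total += (a - 1) if i % 2 == 0 else (5 - a)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0 (best possible)
```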
Remote Usability Testing
Remote usability testing has become common thanks to technological advances. Its steps are similar to those of in-person testing, but participants can take part from wherever they are.
Remote Usability Testing
- Remote usability testing allows participants to complete tasks from home or work environments using available technology.
- All other steps of usability testing remain necessary such as planning, recruitment, and analysis.
- Remote tests can be supervised or unsupervised depending on the setup.
Advantages of Remote Testing
Remote usability testing offers significant time and cost advantages over lab-based testing.
Advantages of Remote Usability Testing
- Remote usability testing saves time and resources by eliminating the need for participants to visit a physical lab.
- Remote testing has advanced significantly due to the ease of computer and internet access.
Conclusion
This transcript discusses the process of usability testing, including scenarios, data collection, the post-test phase, and data analysis. It also highlights the advantages of remote usability testing.