Video 1

Introduction to AI, Ethics, and Education

Overview of the Session

  • The session is led by Tina Campo Fernández and Fátima García Doval, both from the Secretaría Xeral Técnica.
  • The focus will be on biases in AI, opportunities and risks associated with it, followed by a discussion on regulations affecting education.

Understanding Biases in AI

  • Fátima introduces the concept of biases (or "sesgos") that affect decision-making processes in AI.
  • Biases are defined as cognitive shortcuts that help humans make quick decisions but can lead to incorrect conclusions.

Types of Cognitive Biases

  • Humans often rely on predefined molds of thought which can mislead; for example, assuming causation from correlation.
  • The "blind spot bias" leads individuals to believe they are less biased than others, which itself drives irrational decision-making.

Confirmation Bias Explained

  • Confirmation bias leads people to remember information that supports their beliefs while forgetting contradictory evidence.
  • A further example shows that two correlated events (ice-cream consumption and drowning incidents) do not imply that one causes the other; both simply rise in warm weather.

Implications for Education

  • Correlation does not equal causation; this principle is crucial when analyzing educational data through AI.
  • A classic educational example shows that having more books at home correlates with better educational outcomes but doesn't mean simply providing books guarantees success.
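
The ice-cream/drowning illustration above can be sketched numerically. This is a minimal hypothetical simulation (all numbers invented for illustration): a hidden confounder, temperature, drives both variables, so they correlate strongly even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical data: daily temperature (the confounder) drives both
# ice-cream sales and drowning incidents; neither causes the other.
temps = [random.uniform(10, 35) for _ in range(365)]
ice_cream = [t * 3 + random.gauss(0, 5) for t in temps]
drownings = [t * 0.2 + random.gauss(0, 1) for t in temps]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly positive correlation, yet no causal link between the two series.
print(corr(ice_cream, drownings))
```

Controlling for the confounder (e.g., comparing days with the same temperature) would make the apparent relationship vanish, which is exactly what a causal claim would have to survive.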

Understanding Bias in Artificial Intelligence

The Role of Socioeconomic Factors

  • Higher family socioeconomic status correlates with better educational outcomes; the likely drivers are valuing knowledge and an established reading habit, which matter for academic success more than the books themselves.

Causality vs. Correlation

  • There is a common misconception that statistical correlations imply causation; however, statistics only show joint occurrences without establishing true causal relationships.

Human Bias in AI

  • AI systems are designed to mimic human thought processes, which are inherently biased. This bias can lead to flawed outputs if not addressed through rigorous scientific methods.
  • Most information sources used to train AI are biased, as they reflect human biases. Quality studies that minimize these biases are rare and often inaccessible.

Garbage In, Garbage Out

  • The concept of "garbage in, garbage out" highlights that poor-quality data leads to unreliable AI outputs. If the input data is flawed or biased, the results will also be flawed.
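
The "garbage in, garbage out" idea can be made concrete with a deliberately trivial sketch (the data and the "model" here are hypothetical): a system that merely learns the majority label from a skewed training set will reproduce that skew in every prediction, regardless of the individual being assessed.

```python
from collections import Counter

def train(labels):
    """A trivial 'model': predict whatever label was most common in training."""
    return Counter(labels).most_common(1)[0][0]

# Biased input: historical records that over-represent failure for one group.
biased_labels = ["fail"] * 80 + ["pass"] * 20

# Biased output: the model predicts "fail" for every future student.
print(train(biased_labels))
```

Real AI systems are far more sophisticated, but the principle scales: if the training data encodes a bias, the model's outputs will reflect it.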

Methodological Issues in Research

  • Many research studies fail to adequately address methodological issues related to bias, resulting in limited quality and representativeness of findings used for training AI systems.

Trusting AI Outputs

  • All AIs operate under some level of bias due to their training on skewed data sets and probabilistic reasoning that mimics human thinking patterns.
  • Experts suggest minimizing risks associated with biases rather than attempting complete elimination since total avoidance is impractical.

Overconfidence in Technology

  • People often place undue trust in technology, shaped by past experiences of simply doing what the computer says, without considering that the system may contain errors or be misconfigured.

Perception of Machine Rationality

  • There’s a tendency to view machines as infallible due to their perceived rationality; this belief can lead individuals to accept machine outputs as absolute truth without critical evaluation.

Confirmation Bias Amplified by AI

  • AIs tend to reinforce users' pre-existing beliefs by providing responses that align with those beliefs, leading users to remember information that confirms their views (confirmation bias).

By understanding these key concepts surrounding bias in artificial intelligence, we can better navigate the complexities involved in its application and development.

Understanding the Self-Fulfilling Prophecy in Education

The Concept of Self-Fulfilling Prophecy

  • The discussion introduces a dangerous and powerful bias known as the self-fulfilling prophecy, also referred to as the Pygmalion effect. This concept was explored by psychologists Rosenthal and Jacobson in educational settings.

Research Methodology

  • In their study, they administered a potential learning ability test (not a knowledge test) to students, which aimed to identify those with higher growth potential.
  • After testing, teachers were informed about selected students who supposedly had greater learning potential, influencing how they interacted with these students.

Observations and Results

  • At the end of the course, all students were retested for learning potential. Interestingly, those identified as having higher potential showed significant improvement compared to their peers.
  • The results indicated that even students initially perceived as less capable demonstrated substantial growth when teachers believed in their abilities.

Implications of Belief

  • This phenomenon illustrates that genuine belief in a student's capabilities can lead to improved outcomes; superficial belief is ineffective.
  • The speaker emphasizes that true confidence in student potential is crucial for fostering growth and development.

Connection to Placebo Effect

  • A parallel is drawn between this educational principle and the placebo effect in medicine, where patients' beliefs about treatment efficacy can influence health outcomes.

The Role of AI in Educational Assessment

Trusting AI Assessments

  • When using AI to profile student capabilities or predict future performance, educators must genuinely trust these assessments for them to be effective.

Ethical Considerations with AI Use

  • There are concerns regarding using AI for early detection of learning disabilities. If an AI identifies a problem like dyslexia, it may inadvertently create a self-fulfilling prophecy regarding the student's abilities.

Responsibility for Outcomes

  • It’s essential to recognize that if an error occurs due to reliance on AI assessments, responsibility lies with the educator who utilized the technology rather than the AI itself.

Ethical Principles Governing AI Usage

Importance of Ethical Review

  • Ethical evaluations should occur after reviewing normative standards; ethical principles cannot solely depend on compliance with regulations but must consider broader implications.

Ethical Considerations in AI Use in Education

Overview of Ethical Norms

  • The discussion begins with the importance of adhering to existing regulations before evaluating ethical principles related to AI use in education.
  • Emphasizes that the ethicality of an AI object or service is determined by its usage rather than its inherent qualities, paralleling other educational resources.

Key Ethical Principles

  • Highlights the necessity for a specific, explicit, and demonstrable ethical evaluation when using AI tools in education.
  • Introduces core ethical principles: beneficence (maximizing benefits while minimizing harm), non-maleficence (avoiding harm), and respect for autonomy (acknowledging individual rights).

Fairness and Integrity

  • Stresses the need for justice in AI applications, ensuring equitable treatment across all users without bias.
  • Discusses integrity and transparency as essential components; unethical practices include hidden agendas or undisclosed methodologies.

Responsibility and Accountability

  • Underlines that educators must take responsibility for their decisions regarding AI use, acknowledging consequences rather than deflecting blame onto technology providers.

Future Guidelines on Ethical Use

  • Mentions upcoming changes to EU guidelines on AI ethics set to be released by early 2026, reflecting advancements since the rise of generative AIs like ChatGPT.
  • Assures that while new guidelines will provide more detailed examples relevant to generative AIs, foundational ethical concepts will remain unchanged.

Importance of Human Oversight

  • Advocates for a broader application of ethical evaluations beyond just AI technologies, emphasizing human oversight and fundamental rights protection.

Transparency and Explainability

  • Calls attention to the necessity for explainability in decision-making processes involving AI; results should not emerge from "black box" systems but be understandable by users.

Diversity and Inclusion

  • Highlights criteria such as diversity, non-discrimination, equity, accessibility, and universal design as critical factors in preventing unjust biases within educational contexts.

Sustainability and Data Governance in Education

The Importance of Sustainability

  • Educational activities can and should be engaging, but their environmental impact, including energy and water consumption, must also be considered. This ties into the broader concept of sustainability.
  • The discussion emphasizes the need for awareness regarding data privacy and governance, highlighting its ethical and legal implications as outlined in regulations like GDPR.

Data Ownership and Security

  • Students own their data, not teachers; they must have control over it. Educational institutions must ensure compliance with national security frameworks when handling sensitive information.
  • Educational centers are vulnerable to cyberattacks due to the sensitivity of educational data, necessitating robust security measures to protect against unauthorized access.

Resilience Against Cyber Threats

  • Educational systems face continuous attacks and must be resilient; most attacks are repelled, and some that get through cause no harm, but lax security creates significant vulnerabilities.
  • Accurate data is crucial; both false and imprecise data can lead to detrimental outcomes. Institutions must ensure reliability and reproducibility of their data management practices.

Accountability in Decision-Making

  • Transparency is essential; decisions about AI systems should be auditable, minimizing negative impacts while ensuring accountability for any issues that arise.
  • A systematic approach is necessary when evaluating AI tools: compliance with regulations must be verified first before considering ethical implications or relevance in educational contexts.

Ethical Considerations in AI Usage

  • The evaluation process for using AI should prioritize regulatory compliance followed by ethical considerations. If a tool does not meet these criteria, it should not be used despite its potential benefits.
  • Opportunity cost is a critical factor; educators must weigh the use of AI against other potentially more effective methods or interventions that could yield better results.