Evaluative Intelligence and Artificial Evaluation: Reflections on the Relationship Between AI and Evaluation
Introduction to the Conference
Welcome and Purpose
- Martine Genère-Hemondi introduces herself as a professor at Université Sorbonne-Paris-Nord, welcoming attendees to the Spring Research in Education conference.
- The conference aims to promote educational research within university training for teachers and education staff, both in initial and continuing education.
Theme of the Conference
- The chosen theme for this twelfth edition is "Artificial Intelligence" and its implications for teacher training.
Presentation of Alban Robles
Background of the Speaker
- Alban Robles is introduced as a recognized specialist in evaluation issues and a former student of Martine Genère-Hemondi.
- His academic journey includes significant work on evaluation during his master's theses and doctoral dissertation titled “Living the Experience of Evaluation: A Micro-Phenomenological Contribution.”
Academic Achievements
- Robles successfully defended his thesis at Université Sorbonne-Paris-Nord in June 2022 and became a qualified lecturer that same year.
- He has held positions at various prestigious institutions, including l'École normale supérieure de Lyon and Université catholique de Lille.
Robles' Contributions to Evaluation Methodologies
Recent Publications
- In 2026, he published a chapter reflecting on 40 years of crisis in evaluation within an edited volume by Nathalie Younes and Christophe Grébillon.
- He authored another chapter titled "The Value of Evancipation," exploring connections between evancipation (a new term coined by his team) and evaluation.
Focus on Artificial Intelligence in Evaluation
Introduction to AI Discussion
- Alban Robles expresses gratitude for being invited to discuss artificial intelligence's role in evaluation during this web conference series.
Key Hypotheses Presented
- He outlines two main hypotheses regarding AI:
- First, that AI can be used to evaluate human activity.
- Second, that the technical evaluations carried out by AI must be distinguished from the ethical and pedagogical evaluations performed by humans.
Challenges with AI Evaluations
Quality Concerns
- A critical issue arises concerning who ensures the quality of results produced by AI. Questions about criteria, values, and norms related to these results are raised.
Future Evaluative Practices
- Robles invites reflection on potential evaluative practices that could be imagined or utilized with respect to AI's integration into educational contexts.
Evaluation in Education and Artificial Intelligence
Overview of Evaluation in Education
- The speaker aims to synthesize insights on evaluation, referencing Martin Janer and Mandy's presentation about major Francophone entities involved in educational evaluation.
- Evaluation is framed as an integrated activity within broader professional contexts, often extending beyond mere assessment tasks.
Nature of Evaluation
- Evaluation involves assigning value based on collected information and interpretation, serving learning and professional development purposes.
- It is a process leading to a product shaped by explicit or implicit expectations, often referred to as the evaluation reference.
Tools and Methods of Evaluation
- Various tools such as criteria grids, competency frameworks, and scoring rubrics are commonly recognized in the evaluation process.
- Institutionalized methods can dominate perceptions of effective evaluation, often prioritizing control over alternative evaluative approaches.
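A criteria grid of the kind mentioned above can be sketched as a small data structure: weighted criteria scored on a shared scale and combined into a mark. This is a generic illustration, not any institution's actual framework; the labels, weights, and the 0-to-20 normalisation are invented for the example.

```python
from dataclasses import dataclass

# Generic sketch of a criteria grid (grille critériée). All labels, weights,
# and the 0-20 normalisation are illustrative assumptions.

@dataclass
class Criterion:
    label: str
    weight: float   # relative importance; weights should sum to 1.0
    score: float    # rating on a shared scale, e.g. 0 to 4

def grid_total(criteria: list[Criterion], scale_max: float = 4.0) -> float:
    """Weighted score normalised to a 0-20 mark (a common French scale)."""
    weighted = sum(c.weight * c.score for c in criteria)
    return round(20 * weighted / scale_max, 1)

if __name__ == "__main__":
    grid = [
        Criterion("Relevance of argument", 0.4, 3.0),
        Criterion("Use of sources", 0.3, 4.0),
        Criterion("Clarity of writing", 0.3, 2.0),
    ]
    print(grid_total(grid))  # -> 15.0
```

The point of the sketch is that such grids make the evaluation reference explicit: the weights encode which expectations dominate the final mark.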
Technology and Social Context
- The concept of evaluation is also viewed through the lens of technology; it can be seen as a social technology that generates practical knowledge about actions.
- Daniel Hameline (1989) critiques modern society for transforming evaluation into a generalized technique driven by its own logic rather than genuine inquiry.
Historical Perspective on Evaluation
- Hameline discusses how modern society has made evaluation an obligatory technique that justifies itself through its own processes.
- The historical context highlights how artificial intelligence (AI), embedded in modern technologies, influences contemporary evaluative practices.
Artificial Intelligence's Role
- AI manifests through various technological devices like smartphones and computers, operating via algorithmic programs.
- Terminology associated with AI includes the "neural network", a term that reflects both the biological inspiration behind these systems and their generative capabilities.
Intersection of Evaluation and AI
- Both evaluation methodologies and AI aim to solve problems or answer questions; they share common goals despite differing approaches.
- The fundamental question driving evaluations—what do I want to assess?—mirrors inquiries found within scientific disciplines related to AI.
Why Evaluate?
Purpose and Ethics of Evaluation
- The speaker raises fundamental questions about the purpose and ethical implications of evaluation, emphasizing its practical utility in understanding why evaluations are conducted.
- There is a discussion on the potential rules or ethics surrounding evaluation practices, highlighting that critical literature has long pointed out the perils associated with evaluation.
- Citing Fabrizio Butera and colleagues (2011), the speaker notes historical ties between evaluation and governance in educational and military contexts, linking it to control mechanisms for strategic decision-making.
- The notion of objectivity in evaluation is critiqued: although often perceived as cold and rational, evaluation is never neutral, because humans stand behind the machines used to evaluate.
- The speaker mentions that discussions around evaluation can lead to divergent opinions among individuals, indicating its contentious nature.
Artificial Intelligence in Evaluation
Concept of Artificial Evaluation
- Transitioning to artificial intelligence (AI), the speaker introduces the concept of "artificial evaluation," questioning what AI evaluates and how it functions.
- A quote from Cardon suggests that AI's intelligence relies more on statistical capability than on logical reasoning, drawing on vast datasets to make increasingly accurate predictions.
- The limitations of AI sources are acknowledged; despite their massive size, they can still be restricted by their origins which may affect their outputs.
- Unlike humans who may struggle with information overload, AI can process extensive data without limits, leading to statistically sound predictions based on current available data.
- The political neutrality of AI is also challenged: biases can arise from how sources are selected, as in a scandal involving an AI that correlated sexual orientation with gender.
Examples of Evaluative AI Applications
Practical Implementations
- Aladdin, BlackRock's risk-analysis platform, is presented as an example of an evaluative AI tool that can assess financial risk more effectively than humans through rapid analysis of large datasets.
- In educational settings, anti-plagiarism tools serve as another example where AI evaluates written work against referenced materials to determine originality or plagiarism levels.
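As a rough illustration of how such anti-plagiarism tools work, the sketch below compares word 3-grams ("shingles") of a submission against a reference text and reports the overlapping fraction. Real tools are far more sophisticated; the function names and the texts are invented for the example.

```python
# Hypothetical sketch of an anti-plagiarism overlap check: compare word
# 3-grams ("shingles") of a submission against a reference text. All names
# and thresholds are illustrative assumptions, not a real tool's API.

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Lowercase word n-grams ('shingles') of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, reference: str, n: int = 3) -> float:
    """Fraction of the submission's shingles also found in the reference."""
    sub = shingles(submission, n)
    if not sub:
        return 0.0
    return len(sub & shingles(reference, n)) / len(sub)

if __name__ == "__main__":
    ref = "evaluation involves assigning value based on collected information and interpretation"
    copied = "assigning value based on collected information and interpretation is central"
    original = "students learn best when feedback is timely and specific to their work"
    print(round(overlap_score(copied, ref), 2))    # -> 0.75 (flagged)
    print(round(overlap_score(original, ref), 2))  # -> 0.0
```

The score is a purely technical result in the speaker's sense: it quantifies textual overlap, while judging whether the overlap actually constitutes plagiarism remains a human evaluation.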
The Role of Evaluative Artificial Intelligence
Objective Posture in Evaluation
- Evaluative artificial intelligence is framed as a practical tool that gives quick access to large datasets, aiming not at strict objectivity but at an objective stance that supports fair judgment.
Artificial Evaluative Intelligence: Key Insights
Understanding Artificial Evaluative Intelligence
- Definition and Paradigm: Artificial evaluative intelligence (AEI) operates within a connectionist paradigm, emphasizing the interconnectedness of elements. Anything not represented within this network of connections cannot be taken into account.
- Time and Access Efficiency: AEI can significantly save time and enhance access in digitally constituted fields, particularly when it comes to computing and recording information online, such as scientific bibliographic references.
- Generative AI's Role: Generative AIs can connect with existing data to help ensure accuracy in terminology and assess the alignment of scientific propositions with previously established elements, addressing issues like plagiarism or forced inspiration.
Societal Objectives of AEI
- Purpose Justification: The use of AEI is justified by its societal objectives; it must prove useful and practical through technical results. Its credibility stems from being machine-generated, which reduces the subjectivity often associated with human opinions.
- Economic Model Dependency: AEI primarily serves an economic model built on capturing attention, calibrated by patterns of digital usage.
Limitations and Human Dependency
- Opaque Processes: Concerns remain about individuals and information that have never been digitized. AEI also keeps its reasoning processes opaque, limiting access to how it reaches its conclusions.
- Need for Human Evaluation: Despite its capabilities, AEI relies on human initiation for evaluation. Continuous questioning of its value is essential, especially considering biases inherent in data that may lead to inequities or discrimination against underrepresented groups.
Justifications for Human Evaluation
- Data Source Biases: The justification for human evaluation arises from recognizing that the data feeding into AI originates from humans. Inherent biases can result in unfair outcomes if certain populations are inadequately represented.
- Ecological Costs Consideration: Another justification involves assessing ecological costs related to AI operations—both environmental impacts due to data centers located outside Western territories and pedagogical implications tied to AI's integration into educational contexts.
Practical Implications of AI Usage
- Human-Centric Approach: Utilizing AI for evaluation should focus on real-world applications rather than mere prescriptive uses. It's crucial to understand how AI can assist while acknowledging its limitations.
- Cautionary Measures: There is a need for caution regarding the technical results produced by AI evaluations; these results are confined within their specific usage frameworks.
Understanding the Role of AI in Evaluation
The Nature of AI and Its Values
- The use of AI is influenced by inherent values, which can vary between systems such as Microsoft Copilot and ChatGPT. The technical results are shaped by both the AI itself and the user's capabilities.
Human-AI Relationship
- There exists a fundamental relationship between humans and AI that influences decision-making processes. This relationship raises questions about whether to utilize or evaluate AI.
Impact of Statistical Interpretation
- Statistics serve as information for explaining phenomena and supporting decisions, but the way data is presented can itself be biased, which makes evaluations conducted by AI artificial in this sense.
Defining Artificiality
- "Artificial" refers to something fabricated, contrasting with natural elements. This distinction limits our understanding of what constitutes genuine evaluation.
Conformity in Evaluation Practices
- Evaluations often lean towards conformity due to inherited practices or social pressures, which can lead to uniformity in behaviors and assessments.
Judgment Biases in Evaluation
- Artificial evaluations carry an illusion of objectivity since they are not directly human-made; however, human evaluators also possess inherent biases that affect their judgments.
Errors in Human vs. Machine Evaluations
- Both machines and humans make errors, highlighting the fallibility present in all forms of evaluation. This realization prompts reflection on our own potential for mistakes.
Pedagogical Norms and AI Integration
- Introducing AI into evaluation does not inherently challenge existing pedagogical norms but necessitates new frameworks for its application within educational contexts.
Objectivity Challenges with AI
- The integration of AI raises questions about objectivity, suggesting that perceived neutrality may still be a construct influenced by human perspectives.
Everyday Evaluation Practices
- Incorporating AI into daily evaluative practices requires rethinking traditional assessment methods beyond mere technological adaptation, ensuring core objectives remain intact.
Understanding the Role of Human Evaluation in Artificial Intelligence
Defining AI and Human Evaluation
- The discussion turns to five propositions for delineating what AI can evaluate and what must remain human assessment, with a focus on making better use of AI.
The Nature of Information Evaluation
- While AI assesses the virality of information, humans are responsible for evaluating its intrinsic importance. This highlights the distinction between algorithmic visibility and human interpretation.
Time and Analysis in Information Processing
- The rapid delivery of information by AI contrasts with the necessary time humans require for observation, analysis, and verification. This emphasizes the value of reflective thinking in processing AI-generated data.
Ethical Considerations in Data Usage
- Humans must evaluate not just the results provided by AI but also their ethical implications. This involves questioning what these results reveal about oneself and society.
Criteria for Evaluating Information Quality
- When assessing data from AI, humans need to consider where this information originates from, acknowledging potential biases that may favor certain demographics or viewpoints over others.
Propositions for Responsible Use of AI
- The speaker presents five propositions aimed at guiding responsible engagement with AI outputs while recognizing personal biases and interests involved in this interaction.
Criteria for Meaningful Results
- Humans must assign meaning to results generated by AI. This includes ethical considerations regarding the accessibility and acceptability of information, following André Tricot's framework.
Engagement with Information
- A criterion is introduced that emphasizes personal connection to information; individuals should feel compelled to engage with data that resonates personally or socially.
Rhetorical Criteria for Discourse
- Discusses rhetorical criteria used to assess whether AI-generated information aligns with ongoing discussions or topics being addressed, ensuring relevance in discourse.
Validity Checks on Information Credibility
- Emphasizes the importance of verifying if an assertion made by an algorithm is credible rather than merely opinion-based, underscoring a need for critical evaluation.
Acceptability and Contextualization
- Highlights that validating the relevance of AI-provided information requires contextualizing it within one's own experiences and knowledge base before acceptance.
Ethical Dimensions: Justice vs. Justness
- Explores the dialectic between justice (fairness towards all individuals affected by decisions made based on data) versus justness (accuracy in representation), stressing sensitivity towards marginalized groups when utilizing AI insights.
AI and Data Usage: Ethical Considerations
Authenticity in AI Usage
- The speaker discusses the importance of authenticity in the usage of AI, emphasizing how it should be comfortable and practical for users.
- Questions arise regarding whether AI is being used within compliant frameworks, challenging existing norms and regulations.
Legal and Ethical Dimensions
- There is a lack of documentation to ensure that data collected by AI respects privacy, confidentiality, and anonymity.
- The need for human evaluation in assessing AI's compliance with ethical standards is highlighted; human oversight remains crucial.
Professional Legitimacy
- The discussion touches on the legitimacy of using AI across various professions, suggesting that new roles may emerge as technology evolves.
Emerging Principles for AI Use
Recommendations for Best Practices
- The speaker proposes several emerging principles regarding the use of artificial intelligence in education.
- Viewers are encouraged to explore additional resources such as web conferences and articles that provide further insights into effective practices.
Competency Assessment Challenges
- A critical question posed is whether educators can accurately assess competencies related to potential user interactions with technology.
- Distinguishing between human-generated content and machine-generated output remains a significant challenge; many educators struggle with this differentiation.
Data Quality and Security Measures
Importance of Data Integrity
- Data quality must be optimized to ensure reliable results from AI systems; accurate training data is essential.
Cybersecurity Enhancements
- There’s a call for improved cybersecurity measures to protect confidential information when utilizing AI technologies in educational settings.
Social Interaction Post-COVID
Impact on Communication Skills
- Observations indicate that digital communication tools have altered social interaction dynamics, particularly among younger generations post-COVID.
Addressing Inequalities
- Ensuring universal access to technology is vital to prevent exacerbating existing social inequalities related to access.
Principles of AI in Education
The Role of AI as an Assistant
- AI should not replace colleagues but serve as an assistant, which distinguishes it from a simple tool such as a hammer.
Consultation and Decision-Making
- It may be more beneficial to consult nearby colleagues for decisions rather than relying solely on AI.
Values Guiding AI Usage
- Three guiding values for using AI are:
- Transparency: Understanding the source and verifying information.
- Responsibility: Acknowledging the costs associated with AI usage.
- Equity: Ensuring fair access and application of AI technologies.
Human-Centric Problem Solving
- Certain problems should remain human-centric, focusing on which issues to address with AI while maintaining human understanding as a priority.
Integrating AI into Pedagogy
Citing Sources and Collaboration
- When using tools like Copilot, it's essential to clarify their role in educational settings, emphasizing collaborative work akin to Open Source or Open Science.
Ethical Considerations in Information Processing
- Avoid treating AI as sacred or demonic; instead, recognize its contextual relevance while promoting critical information processing skills among learners.
Critical Thinking and Evaluation
Historical Context of Information Processing
- The need for critical thinking in processing information has been recognized long before modern communication sciences emerged.
Robust Evaluation Practices
- Emphasizing robust evaluation over mere performance metrics is crucial. This involves prioritizing processes and activities rather than superficial gadgets that may quickly become obsolete.
Practical Applications of Ethical Evaluation Tools
Experimentation with Assessment Tools
- Propose practical applications for ethical evaluative intelligence by experimenting with assessment tools collaboratively within educational institutions.
Self-Regulation Initiatives
- Encourage self-regulation within educational frameworks without waiting for artificial intelligence advancements.
Collaborative Practice Analysis Groups
Building Supportive Communities
- Establish groups focused on practice analysis where members can collaborate, share ideas, and support each other in navigating challenges related to education and technology integration.
Discussion on AI and Education
The Role of Digital Tools in Learning
- The speaker reflects on the feeling of loneliness associated with digital tools, suggesting that discomfort with technology may be judged too quickly. They emphasize the need for skills related to computers and digital devices in the context of artificial intelligence.
- There is a recognition that not everyone has equal access or desire to engage with technology, raising the question of whether it is necessary to adopt these tools given current educational environments that encourage their use.
Pedagogical Resistance and Autonomy
- A call for resistance against the notion that pedagogy requires digital tools to exist is made, highlighting the importance of traditional teaching methods alongside technological advancements.
- The discussion opens up about maintaining human control over AI tools, emphasizing that while humans can utilize AI, they must ensure they remain masters of these technologies.
Evaluation and Empowerment in Learning
- The conversation shifts towards exploring concrete perspectives on evaluation as a means of empowerment rather than mere control, linking it to critical thinking and ethical reflection.
- Concerns are raised about AI being perceived as an unavoidable trend in education. The speaker argues that this perception stems from how society chooses to integrate it into learning processes.
Rethinking Educational Practices
- The essence of pedagogical practices is questioned: what do we aim for in learning? It’s suggested that education should focus on values surrounding learning experiences rather than just content delivery.
- There's a distinction made between memorization and meaningful learning. While AI can provide information efficiently, it lacks the ability to impart significance or purpose behind knowledge acquisition.
Living and Learning Together
- Emphasizing Jacques Rancière's ideas from "The Ignorant Schoolmaster," the speaker discusses how living and learning are interconnected; education should ultimately serve life itself.
- Empathy is highlighted as a fundamental aspect of learning—education should facilitate understanding how to live well amidst challenges rather than merely accumulating knowledge.
Navigating AI's Impact on Education
- The rapidity and comprehensiveness of AI can create feelings of overwhelm. However, it's essential for individuals to determine how they will engage with these technologies meaningfully.
- Acknowledging AI's conversational nature raises questions about its role as a tool: while it offers assistance, users must critically assess its outputs and decide their relevance or application in real-life contexts.
Teacher Training for Effective Use of AI
- Questions arise regarding teacher training concerning AI usage. Educators must clarify their objectives when evaluating student work influenced by AI-generated content, prompting deeper reflections on educational goals.
Evaluation and Learning in the Age of AI
The Nature of Evaluation
- The speaker reflects on the habit of evaluating memorized elements and emphasizes instead the importance of understanding how students apply knowledge in personal or production work.
- The discussion highlights that evaluation methods have been questioned long before AI, suggesting a need for more meaningful assessments focused on student progress rather than just results.
Individual Learning Progress
- The speaker points out differences in learning progression among students, using Mamadou and Jean Guillaume as examples to illustrate varying levels of understanding and reasoning abilities.
- Emphasizing formative assessment, they argue for giving more attention to individual learning processes and providing constructive feedback rather than solely focusing on outcomes.
Exploring AI's Role in Education
- A question arises as to whether generative AI was the main focus of the conference. The speaker confirms this while acknowledging that they are neither an expert in nor a practitioner of artificial intelligence.
- They describe their pragmatic use of AI tools like Copilot as assistants for reasoning and formulation, stressing that ultimate responsibility lies with them as educators.
Philosophical Considerations in Education
- The conversation shifts to the pedagogical implications of AI usage, questioning its impact on teaching goals and student learning experiences.
- This raises fundamental questions about the purpose of education and its role in nurturing educational relationships, indicating a lack of clear answers but recognizing ongoing discussions within educational philosophy.
Accessibility Issues with AI
- There is acknowledgment that not everyone has access to artificial intelligence tools or desires to use them, which complicates assumptions about universal adoption in educational practices.
- This highlights significant barriers to integrating AI into education effectively, emphasizing that it cannot be assumed all educators will adopt these technologies uniformly.
The Role of AI in Educational Assessment
Concerns About Control Over Learning
- The question arises whether allowing AI to prepare assessments might lead to a loss of control over learning processes. This is a complex issue that warrants careful consideration.
- Educators often do not see themselves primarily as evaluators; their roles encompass broader pedagogical responsibilities, which include teaching and coordinating educational activities rather than merely assigning values within limited timeframes.
Ethical Responsibility in Evaluation
- There is an ethical responsibility associated with evaluations, where educators must remain actively involved in the assessment process despite delegating some tasks to machines. It is crucial for teachers to pilot evaluations and utilize them effectively in their teaching practices.
- The use of AI should not absolve educators from accountability; they must still engage critically with the evaluation tools and understand their implications on student learning outcomes.
Distinction Between Human and AI Evaluations
- A significant distinction exists between human-led evaluations and those generated by AI, emphasizing that while both can be useful, they serve different purposes within the educational framework. Educators need to maintain clarity about their role as facilitators of learning rather than relying solely on automated systems.
- Understanding why certain elements are prioritized in assessments is essential for effective education, highlighting the importance of reflective practice among educators regarding their evaluation methods.
Hybrid Models of Assessment
- The discussion shifts towards constructing hybrid models that integrate both human judgment and AI capabilities in assessments, suggesting that this approach could enhance educational practices without compromising quality or integrity. Educators are encouraged to experiment with these models while remaining critical of the information provided by AI tools.
- Practical applications may involve using AI for generating technical aspects of assessments while ensuring that educators retain control over key decisions related to content relevance and pedagogical goals. This balance aims at improving assessment design without losing sight of educational objectives.
Leveraging AI Effectively
- To maximize the benefits of AI in education, it’s important for educators to interact meaningfully with these technologies, ensuring they align with established pedagogical frameworks such as Vygotsky's concept of the zone of proximal development—where learners can effectively engage with new material when appropriately supported.
- Ultimately, hybridization involves delegating specific assessment tasks to AI while maintaining oversight on how these contributions fit into broader educational strategies aimed at enhancing student learning experiences through thoughtful integration of technology into pedagogy.
Evaluation and the Role of AI in Education
Reflections on Evaluation Practices
- The speaker reflects on their tendency to be overly ambitious in describing evaluation elements, suggesting that this can lead to an overwhelming amount of information during assessments. They consider focusing on fewer aspects for clarity.
- A question arises about the impact of artificial intelligence (AI) on summative evaluations, prompting a discussion about whether AI necessitates a rethinking of these evaluations.
Technical Aspects of Summative Evaluations
- The speaker acknowledges that some summative evaluations involve technical components, such as multiple-choice questions (MCQs), which require careful construction to be effective.
- After creating MCQs, the results must be processed to calculate scores. The speaker mentions verifying a random sample of responses to check that they align with the AI-generated scores.
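The workflow described here, machine scoring followed by human spot checks, can be sketched as follows. This is an illustrative assumption about the process, not the speaker's actual tooling; all names and data are invented.

```python
import random

# Illustrative sketch (not the speaker's actual workflow): score MCQ answer
# sheets against a key, then draw a reproducible random sample of sheets to
# re-check by hand against what the AI tool reported.

def score_mcq(answers: dict[str, str], key: dict[str, str]) -> int:
    """Number of questions answered identically to the key."""
    return sum(1 for q, correct in key.items() if answers.get(q) == correct)

def sample_for_review(student_ids: list[str], k: int, seed: int = 0) -> list[str]:
    """Reproducible random sample of students whose sheets get a manual re-check."""
    rng = random.Random(seed)
    return rng.sample(student_ids, min(k, len(student_ids)))

if __name__ == "__main__":
    key = {"Q1": "A", "Q2": "C", "Q3": "B"}
    sheets = {
        "s01": {"Q1": "A", "Q2": "C", "Q3": "D"},
        "s02": {"Q1": "A", "Q2": "C", "Q3": "B"},
        "s03": {"Q1": "B", "Q2": "C", "Q3": "B"},
    }
    scores = {sid: score_mcq(a, key) for sid, a in sheets.items()}
    print(scores)  # -> {'s01': 2, 's02': 3, 's03': 2}
    print(sample_for_review(sorted(sheets), k=2))  # sheets to verify by hand
```

Keeping the sampling seed fixed makes the spot check auditable: the same sheets are drawn on every run, so the human verification step is itself reproducible.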
Verification and Trust in AI
- There is a debate about whether educators should trust AI outputs entirely or verify all results themselves. This raises questions about efficiency versus thoroughness in educational assessments.
- The speaker references Pascal Pasquini's work regarding normative and summative values, questioning the purpose behind conducting summative assessments—whether they serve merely as control mechanisms or have deeper educational goals.
Purpose Behind Summative Assessments
- Emphasizing the importance of understanding why summative evaluations are conducted, the speaker argues that they should aim at demonstrating how educational objectives can be achieved rather than simply controlling learners' performance.
- The role of feedback is highlighted; the speaker expresses skepticism about delegating sensitive feedback tasks to AI, emphasizing human interaction's importance in delivering constructive criticism effectively.
Conclusion and Future Discussions
- The session concludes with gratitude towards Alban Robles for his insightful presentation and acknowledgment of audience engagement through questions.
- An announcement is made regarding an upcoming conference focused on climate information and critical thinking in education, indicating ongoing discussions around evolving educational challenges.