IA COBAEZ

Welcome and Introduction

Technical Issues and Opening Remarks

  • The session begins with a welcome message, acknowledging technical difficulties that have been resolved.
  • Participants are asked to turn off their cameras and microphones to facilitate the director's welcoming message.

Purpose of the Training Session

  • The director emphasizes the importance of this training as an opportunity for professional development in artificial intelligence, led by Dr. José Manuel Gámez, who has extensive experience in the field.
  • This initiative aims to enhance teaching tools and improve student learning outcomes through shared experiences among educators.

The Role of Artificial Intelligence in Education

Complementary Tools for Teaching

  • The director discusses how artificial intelligence (AI) should be viewed as a complement rather than a replacement for traditional teaching methods, enhancing various educational processes such as assessment and content creation.
  • There is an emphasis on AI's potential to provide valuable resources that can aid educators in addressing challenges faced by students today, particularly those from vulnerable backgrounds.

Commitment to Educational Improvement

  • A call is made for utilizing AI tools effectively within educational institutions to support teaching efforts across different contexts. The director expresses gratitude towards Dr. Gámez for his contributions to strengthening education at all levels.

Introduction of Dr. José Manuel Gámez

Background Information

  • Dr. Gámez is introduced as a researcher from the Universidad Autónoma de Zacatecas with expertise in advanced technological development and academic leadership, specializing in artificial intelligence since 2020.

Workshop Objectives

  • He outlines the goals of today's workshop, designed for the Colegio de Bachilleres' academic enhancement initiatives for 2026 and focused on the competencies projected to be needed by 2030 according to global economic reports.

Future Competencies and Technological Literacy

Key Competencies Identified

  • Reference is made to a World Economic Forum report highlighting essential skills projected for 2030, including technological literacy, critical thinking, leadership abilities, and analytical reasoning necessary for modern professionals in education fields.

Importance of Technology Integration

  • Emphasis is placed on integrating technology into educational practice through AI tools such as large language models and neural networks, whose generative applications are expected to significantly shape teaching methodologies in higher education.

Understanding the Role of Technology in Education

The Purpose of Educational Technology

  • The director emphasizes that technology is not meant to replace teachers but to enhance their capabilities, serving as a tool similar to computers and smartphones.

Addressing Pedagogical Challenges

  • Discussion on how educational tools can respond to pedagogical needs, manage information overload, and adapt to rapid technological changes affecting daily life.

Strategic Design of Learning Materials

  • Introduction of a roadmap focusing on the strategic appropriation of designing personalized educational activities and materials for effective learning outcomes.

Feedback Mechanisms in Education

  • Emphasis on generating constructive and timely feedback using artificial intelligence tools, enhancing the learning experience through personalized responses.

Ethical Considerations in Technology Use

  • Exploration of ethical criteria regarding technology use, including responsibility for both users and tools, alongside fostering critical thinking about algorithms.

Navigating Information Overload

Understanding Digital Noise

  • Brief overview of information overload; increased notifications lead to mental fatigue and contribute to the spread of misinformation.

Statistics on Notification Overload

  • A study from the University of Michigan reveals an average person receives 56 notifications daily, with adolescents receiving around 240 notifications, leading to processing challenges.

The Concept of Infodemic

Excessive Information During Crises

  • The World Health Organization defines "infodemic" as an excess of true and false information during health crises like COVID-19, complicating reliable information sourcing.

Impact on Public Trust

  • Misinformation during COVID led to anxiety and cognitive fatigue; examples include unfounded conspiracy theories about vaccines that eroded public trust in health communications.

Identifying Real vs. Unverified Information

The Role of Educators as Filters

  • Educators must act as filters in a world saturated with information, helping students discern between verified and unverified data.
  • Critical analysis is essential for students to determine the authenticity of information they encounter.

Adolescent Internet Usage

  • Studies indicate that adolescents spend approximately six hours daily online, engaging with social media and various websites.
  • This extensive internet usage impacts their attention spans and concentration levels during academic tasks.

Impact of Notifications on Attention

Distraction from Academic Tasks

  • Notifications from devices can significantly disrupt students' focus while working on assignments, leading to decreased productivity.
  • Each interruption diminishes concentration, making it challenging for students to return to their original task effectively.

Global Responses to Distraction

  • Some countries have implemented bans on cell phones in classrooms to combat distractions and improve student attention. Examples include regions in Argentina and South Korea.

Cognitive Economics: Managing Attention

Energy Allocation in Information Processing

  • The brain allocates cognitive resources based on perceived importance; less attention means less energy expenditure on a topic.
  • When encountering sensational headlines, individuals face choices: share, investigate, or ignore—each requiring different amounts of cognitive energy.

Consequences of Ignoring Information

  • Ignoring certain pieces of information allows individuals to conserve mental energy but may lead to missing critical insights or developments.

Cognitive Overload and Its Effects

Identifying Limits of Attention

  • Continuous exposure to notifications can lead to cognitive overload, where individuals struggle to process information effectively due to fatigue.

Navigating Misinformation

Evaluating News Credibility

  • Individuals are often faced with sensational news stories (e.g., conspiracy theories) that require quick decisions about engagement (reading, sharing, ignoring).

Strategies for Reducing Cognitive Fatigue

  • Researchers recommend implementing digital controls as a strategy for educators to help filter out misinformation effectively while managing cognitive load among students.

Understanding Digital Noise and Information Overload

The Challenge of Digital Notifications

  • The speaker discusses the overwhelming nature of digital notifications, describing them as "noise" that can be both distracting and useful.
  • A simulation is mentioned that helps identify which notifications are valuable, suggesting that most notifications are indeed useless.
  • Recommendations include removing phones from personal spaces like bedrooms to improve focus and attention on surrounding tasks.

Commitment to Attention Management

  • Participants can create a "pact of attention," committing to actions such as keeping their phone out of the bedroom or organizing notifications for better focus.
  • This pact can be downloaded in PDF format for personal reminders or as evidence for educational activities.

The Era of Non-Things: Media Literacy

Concept Introduction

  • The term "non-things" relates to media literacy, emphasizing how information has replaced physical objects in our lives.
  • Byung-Chul Han's theory suggests that digitalization alters our experiences by prioritizing data over tangible items, leading us to focus more on trending topics rather than substantive content.

Impact of Misinformation

  • An example is given about Nancy Pelosi, where manipulated video footage went viral, misleading many into believing false narratives due to its widespread circulation.
  • Health misinformation regarding chlorine dioxide was propagated by influencers without scientific backing, prompting investigations due to health risks associated with its promotion.

Consequences of Misinformation in Economics and Conflict

Economic Implications

  • A case involving Kim Kardashian illustrates how undisclosed paid promotions led individuals to invest in cryptocurrencies without verifying their legitimacy, resulting in financial losses.

Misrepresentation in Conflict Reporting

  • Instances where outdated images from earlier conflicts were recirculated online highlight the dangers of misinformation during crises such as the Russia-Ukraine war.

Navigating Information Manipulation

Understanding Report Neutrality

  • The discussion emphasizes the importance of neutral reports that contain raw data without subjective adjectives, since adjective-laden accounts are easily manipulated or speculated upon by social media users.

Regulatory Measures

  • In China, regulations require influencers discussing technical subjects to possess university degrees, aiming to ensure informed opinions are shared within public discourse.

Understanding Misinformation and Post-Truth

The Impact of Context on Information Validity

  • Presenting verified information out of context can render it misleading, since the facts themselves may be disregarded. This highlights the importance of returning to the original context for accurate understanding.

Analyzing Information in a Digital Age

  • The discussion focuses on how studies are analyzed and the impact of distorted information. Emphasis is placed on evidence-based teaching practices amidst rampant misinformation.

Importance of Technical Verification

  • Technical verification, scientific evidence, and rigorous analysis are crucial in discerning truth from misinformation. A notable quote emphasizes that while information spreads quickly, truth takes longer to establish.

Exploring the Concept of Post-Truth

  • Post-truth is defined as a situation where objective facts have less influence on public opinion than personal beliefs or emotions, leading to increased disinformation. This phenomenon has been exacerbated in recent times.

Historical Context and Vaccination Example

  • The speaker illustrates post-truth with historical examples such as vaccines, which have profoundly improved human health yet face skepticism because emotional biases outweigh factual evidence.

Group Identity and Fragmentation Effects

  • Individuals often align their beliefs with group identity, disregarding factual accuracy if it contradicts group consensus. This leads to fragmentation where social media does not seek consensus but rather amplifies existing beliefs.

Social Media's Role in Amplifying Beliefs

  • Students spend significant time on social media platforms, which act as echo chambers reinforcing their beliefs even when scientific validation is available elsewhere. Families also need to be made aware of this dynamic.

The Mechanics of Political Discourse

Simulation of Truth Distortion Scenarios

  • A simulation allows participants to choose scenarios (e.g., political speeches) and adjust perceived truth levels, illustrating how emotions are leveraged in political discourse for effective persuasion despite minimal factual basis.

Emotional Manipulation in Politics

  • Current political rhetoric often exploits emotions through patriotic sentiments or rumors, particularly noted in statements from influential leaders like the U.S. President, impacting public belief systems significantly more than factual data does.

Data vs Emotion: The Dichotomy

  • When focusing on hard data versus emotional appeals in discourse:
  • Emotional impacts can reach up to 96%, while factual relevance drops significantly.
  • This disparity shows why emotional manipulation is prevalent in politics over straightforward facts that lack emotional resonance.

Challenges with Misinformation Sources

Influence of Social Media Content

  • Misinformation about vaccines exemplifies challenges faced when referencing social media content as credible sources; quick consumption formats (like TikTok) prioritize engagement over fact-checking or source reliability.

Caution Against Unverified Influencer Content

  • There is a growing trend of unverified influencer content gaining traction due to its ease of consumption; this raises concerns about students citing such sources academically without the scrutiny and verification that citation standards like APA require.


Understanding Post-Truth in Education

The Challenge of Information Review

  • Reading information from videos on mobile devices is difficult, leading to procrastination in reviewing content. This reflects the concept of post-truth in education, where emotional narratives may overshadow factual evidence.

Emotional Narratives vs. Evidence

  • Studies indicate that when students and families prioritize emotional stories over scientific evidence, it negatively impacts learning outcomes. Emotional weight often trumps objective data, even when presented with clear academic results.

Critical Thinking and Media Literacy

  • Teaching critical thinking must go beyond formulas; it should include methodologies for validating reliable sources. Without proper guidance on how to assess information credibility, students may struggle to apply critical thinking effectively.

Panic and Misinformation in Schools

  • Instances of misinformation spread through platforms like WhatsApp can lead to unnecessary school closures based on unfounded threats or viral challenges. This highlights the need for effective evaluation of information sources within educational settings.

The Impact of Simplistic Messaging

  • A historical example illustrates how misleading slogans can significantly influence public opinion, such as the false Brexit-era claim that money the UK sent to the EU could instead fund the National Health Service, underscoring the power of emotionally charged yet simplistic messages over complex truths.

Navigating Complexity in Communication

  • In an era dominated by post-truth dynamics, there is a fundamental distortion where emotional effectiveness competes against factual complexity. Educators must simplify their messages while ensuring they remain informative enough to engage students effectively amidst short attention spans prevalent today.

The Impact of Misinformation on Information Sharing

The Nature of Misinformation

  • The speaker discusses how increasing precision in messaging can often reduce its ability to mobilize the masses, highlighting a paradox in communication effectiveness.
  • A simulation is introduced that illustrates the difference in speed and reach between sharing verified facts versus spreading misinformation, with misinformation traveling much faster.
  • Citing an MIT study, it is noted that false notifications evoke surprise or fear and are shared up to six times more quickly than truthful information.
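The speed difference described above can be sketched as a toy expected-value share cascade; the share probabilities and contact counts below are invented for illustration and are not the MIT study's figures.

```python
def simulate_spread(share_prob, contacts=3, generations=6):
    """Expected-value sketch of a share cascade: each generation, every
    newly exposed person shares with `contacts` others with probability
    `share_prob`."""
    reached = current = 1.0      # start with the original poster
    for _ in range(generations):
        current = current * share_prob * contacts   # expected new exposures
        reached += current
    return round(reached)

# Illustrative share probabilities: surprising or fear-inducing items
# are shared far more readily than sober, verified ones.
true_reach = simulate_spread(share_prob=0.2)
false_reach = simulate_spread(share_prob=0.5)
print(true_reach, false_reach)   # the false item reaches far more people
```

Even this crude model shows how a modest difference in willingness to share compounds into a large difference in reach.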

Challenges Faced by Educators

  • Examples of misinformation are presented, emphasizing the need for critical thinking regarding dubious remedies and get-rich-quick schemes prevalent online.
  • The discussion transitions to algorithmic literacy, questioning how algorithms assist or hinder decision-making processes.

Algorithmic Influence on Information Consumption

  • An exploration into recommendation systems reveals how they filter and structure seemingly autonomous decisions based on user data.
  • Users provide personal information to applications which then recommend content tailored to their interests, raising concerns about echo chambers.

Engagement vs. Diversity

  • Algorithms prioritize content likely to generate high interaction rates, potentially limiting exposure to diverse viewpoints and niche topics.
  • Different user profiles receive tailored news; for instance, one profile may focus on economic trends while another emphasizes lifestyle content.

The Paradox of Personalization

  • While personalization enhances user experience by providing relevant content, it also risks reducing overall diversity in information consumption.
  • There is a documented risk that optimizing for engagement can lead to a "rich get richer" effect where popular content overshadows less mainstream but valuable information.
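A minimal sketch of the "rich get richer" dynamic, assuming a recommender that ranks purely by engagement counts; the catalog and all numbers are invented for illustration.

```python
def recommend(items, k=2):
    """Rank purely by engagement count and return the top-k:
    a caricature of popularity-optimized ranking."""
    return sorted(items, key=lambda it: it["engagement"], reverse=True)[:k]

# Hypothetical catalog with invented engagement counts.
catalog = [
    {"title": "viral challenge", "engagement": 900},
    {"title": "local news report", "engagement": 120},
    {"title": "science explainer", "engagement": 80},
]

# Each recommendation cycle, the recommended items gain more engagement,
# so exposure begets further exposure.
for _ in range(3):
    for item in recommend(catalog):
        item["engagement"] += 100

print([it["title"] for it in recommend(catalog)])
```

The niche item is never surfaced, so it never accumulates the engagement that would let it compete: popularity feeds on itself.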

Optimization by Popularity in Content Recommendations

The Role of Algorithms in Content Delivery

  • The discussion begins with the concept of optimizing content based on popularity, where applications recommend trending content regardless of individual user preferences.
  • It highlights that algorithms dictate what users see, particularly in streaming services and media platforms, often prioritizing viral trends over personal taste.

Automated Decision-Making and Its Implications

  • Reference is made to studies from the European Parliament and ScienceDirect regarding automated decision-making processes that utilize probabilistic calculations.
  • A specific example is given about credit requests, illustrating how automated models can lead to different outcomes based on the same input data across various banks.
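The credit example can be illustrated with a toy scoring model; the weights and thresholds below are invented, and real bank models are far more complex and opaque.

```python
def credit_score(income, debt, years_employed):
    """Toy linear scoring model; the weights are invented for illustration."""
    return 0.5 * income / 1000 - 0.3 * debt / 1000 + 2.0 * years_employed

def decide(score, threshold):
    return "approved" if score >= threshold else "rejected"

applicant = {"income": 30000, "debt": 20000, "years_employed": 3}
score = credit_score(**applicant)   # the same input data...

# ...but each bank applies its own cutoff to the same probability-style score:
print(decide(score, threshold=10))  # hypothetical bank A: approved
print(decide(score, threshold=20))  # hypothetical bank B: rejected
```

Identical inputs, identical score, different outcomes: the decision lives in the threshold and weights, which is exactly what the transparency concerns above are about.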

Transparency and Bias in Algorithmic Decisions

  • There is a concern about the opacity surrounding statistical evaluations used in automated decisions, questioning which variables and biases influence these models.
  • The speaker emphasizes that while autonomy remains intact, it operates within a framework influenced by commercial metrics.

Developing Critical Digital Thinking

  • Understanding the invisible guidelines set by algorithms is crucial for fostering critical digital thinking and agency among users.

Introduction to Generative AI Concepts

Neural Networks Explained

  • The session transitions into generative AI concepts, starting with an explanation of neural networks as layered filtering systems.

Data Processing Mechanism

  • An analogy is drawn between water purification systems and neural networks, where information undergoes processing through multiple internal layers before yielding results.

Output Based on Probability

  • The output from a neural network is described as probabilistic rather than definitive; it assesses inputs (like images or numerical data), assigning weights to determine relevance for further processing.
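The layered-filter analogy can be sketched as a tiny two-layer network with a probabilistic softmax output; the weights below are invented rather than learned.

```python
import math

def softmax(xs):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def layer(inputs, weights):
    """One 'filter' stage: weighted sums passed through a simple nonlinearity."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Invented weights; a real network learns these during training.
hidden_w = [[0.8, -0.2], [0.1, 0.9]]
output_w = [[1.5, 0.2], [0.1, 1.2]]

features = [1.0, 0.3]                 # e.g. simplified image features
hidden = layer(features, hidden_w)    # first filtering stage
logits = layer(hidden, output_w)      # second filtering stage
probs = softmax(logits)               # probabilistic, not definitive, output

print(probs)   # e.g. [p_cat, p_not_cat]
```

The output is a weighting of possibilities, never a yes/no verdict, which is the point of the water-purification analogy.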

Understanding Neural Networks and Language Models

The Basics of Probability in Neural Networks

  • Neural networks operate on probabilities: a classifier may, for example, report 98% confidence that an input image shows a cat. This highlights their function as probability calculators rather than thinkers.
  • Large language models (LLMs), like ChatGPT, predict the next word using historical data and statistical methods, emphasizing their reliance on mathematical computations rather than human-like thought processes.

Statistical Predictions and Temperature Control

  • LLMs assess the likelihood of subsequent words through statistical analysis, determining which word has the highest probability to follow based on previous inputs.
  • The "temperature" setting controls randomness in predictions: lowering it makes outputs more deterministic and predictable, while raising it allows more varied and creative responses. At very low temperatures the model almost always selects the single most probable next token.
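A minimal sketch of how temperature reshapes next-token probabilities, using invented candidate scores; real models apply this same softmax idea over vocabularies of tens of thousands of tokens.

```python
import math

def next_token_probs(logits, temperature):
    """Scale logits by 1/temperature, then softmax.
    Low temperature sharpens the distribution toward the most likely token;
    high temperature flattens it, adding randomness."""
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate next words after "How are":
logits = {"you": 3.0, "they": 1.5, "elephants": 0.2}

cold = next_token_probs(list(logits.values()), temperature=0.3)
hot = next_token_probs(list(logits.values()), temperature=2.0)

print(cold)  # "you" dominates almost completely
print(hot)   # probability mass spreads across all candidates
```

Sampling from the "cold" distribution is nearly deterministic; sampling from the "hot" one occasionally picks an improbable word, which reads as creativity.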

Tokens: The Building Blocks of Language Models

  • Tokens are fundamental units used by language models; they can be whole words, parts of words, or punctuation marks that are converted into unique numerical identifiers for processing. This conversion allows models to detect patterns and calculate probabilities effectively.
  • Each token contributes to forming sentences; for example, "Hola, ¿cómo están?" is broken down into tokens that the model uses for prediction purposes. The model's output is influenced by how these tokens are processed mathematically.
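The "Hola, ¿cómo están?" example can be sketched with a toy word-level tokenizer; real systems use learned subword vocabularies, and the token IDs below are invented for illustration.

```python
import re

# Toy vocabulary: real tokenizers learn subword vocabularies with
# tens of thousands of entries; these IDs are invented.
vocab = {"Hola": 101, ",": 5, "¿": 7, "cómo": 342, "están": 771, "?": 9}

def tokenize(text):
    """Split into word and punctuation pieces, then map each piece
    to its numeric ID (0 stands in for an unknown token)."""
    pieces = re.findall(r"\w+|[^\w\s]", text)
    return [(p, vocab.get(p, 0)) for p in pieces]

print(tokenize("Hola, ¿cómo están?"))
```

Once text is a sequence of numbers like this, the model can do what it actually does: pattern detection and probability calculation over integer IDs.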

Adjusting Randomness with Temperature Settings

  • By manipulating temperature settings within the model, users can control randomness in generated text; lower temperatures yield less variability while higher temperatures allow for more creative outputs. This adjustment directly impacts how predictions are made based on token probabilities.

Distinguishing Between AI Types: Traditional vs Generative

  • Traditional AI focuses primarily on prediction tasks such as email classification (e.g., spam detection) or risk assessment based on historical data without generating new content from scratch. It relies heavily on existing datasets for its operations.
  • In contrast, generative AI creates new content based on learned patterns from data inputs, showcasing a significant difference in functionality between traditional predictive models and those designed for generation tasks like text creation or image synthesis.

Understanding the Risks of Predictive AI

The Illusion of Objectivity in Predictive AI

  • The predictive aspect of AI can create a false sense of objectivity, leading users to believe that system outputs are inherently correct.
  • This illusion contributes to the perpetuation of historical biases, as seen in a case where production line sensors continued to identify outdated product codes despite changes in actual products.

Generative vs. Predictive AI

  • Generative AI utilizes large language models to produce new content by recognizing patterns and making decisions based on existing data.
  • Unlike traditional methods, generative AI modifies existing data rather than copying it directly, creating original outputs through statistical pattern combinations.

Risks Associated with Generative Models

  • A significant risk is "hallucinations," where the model generates false information, which can mislead users about its accuracy.
  • The lack of traceability for sources raises concerns over the validity of generated information, especially when models are trained on unauthorized data.

Importance of Source Verification

  • Users must confirm the accuracy of information provided by generative tools, even when sources are cited; errors can still occur despite source presentation.

Recommendations for Responsible Use in Education

  • Educators have vast opportunities to leverage these technologies responsibly for creating assessments and resources but should not rely solely on them for decision-making.
  • Tools like generative AI should be viewed as assistants rather than replacements for human judgment; critical evaluation is essential.

Caution Against Over-Automation

  • While there are opportunities for automation in education and industry, not all processes should be automated; sensitive decisions require human oversight.
  • Delegating critical evaluations or student assessments to automated systems poses risks; human verification remains crucial.

How to Effectively Use AI Tools in Education

Importance of Human Oversight

  • Emphasizes the necessity for human supervision when using AI tools, highlighting that trained educators must verify all outputs generated by these systems.

Interaction with AI Tools

  • Introduces the basic interaction method with AI through a chat interface, which serves as a gateway to access underlying mathematical models.

Structuring Effective Prompts

  • Discusses the importance of crafting structured prompts for effective interaction with AI, suggesting that users need to know how to ask questions properly.

Key Questions for Prompt Creation

  • Outlines essential questions to define the role and context of the AI model: "Who?", "What?", and "For whom?" This helps in setting clear objectives for the interaction.

Defining Roles and Context

  • Stresses defining the role of the AI (e.g., acting as a biology teacher), which influences its output based on assigned tasks and audience.

Example Prompt Structure

  • Provides an example prompt: “Act as a biology teacher for students aged 16-17,” establishing clarity on who is involved and what is expected from the AI.

Specificity in Tasks

  • Highlights that prompts should include specific tasks, such as designing a 30-minute activity on mitosis, ensuring clarity in expectations from the tool.

Instructional Format Guidance

  • Encourages specifying how results should be presented (e.g., tables or lists), reinforcing that educators maintain control over their teaching materials.

Quality Criteria in Outputs

  • Advises setting quality conditions and pedagogical focus within prompts, such as using simple language and including interactive activities to enhance student engagement.
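The who/what/for-whom elements above (role, audience, task, format, criteria) can be assembled with a small template; the field names and exact wording are this sketch's own convention, not a prescribed syntax.

```python
def build_prompt(role, audience, task, output_format, criteria):
    """Assemble a structured prompt from the elements discussed above."""
    return (
        f"Act as {role} for {audience}. "
        f"Task: {task}. "
        f"Present the result as {output_format}. "
        f"Quality criteria: {criteria}."
    )

# The biology example from the session, filled in:
prompt = build_prompt(
    role="a biology teacher",
    audience="students aged 16-17",
    task="design a 30-minute classroom activity on mitosis",
    output_format="a table with steps, timing, and materials",
    criteria="simple language and at least one interactive activity",
)
print(prompt)
```

Keeping the pieces as named fields makes it easy to reuse the same structure across subjects while the educator stays in control of each element.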

Evaluating Responses from AI Tools

  • Notes that responses from AI often come framed as questions, prompting users to evaluate justifications provided by the tool against their own criteria.

Caution Against Misinformation

  • Warns about potential inaccuracies ("hallucinations") in information provided by AI tools; emphasizes verifying data before use.

Transparency with Students

  • Stresses transparency regarding resources developed using AI tools; highlights their purpose should be administrative support rather than replacing teachers.

Revolutionizing Education with AI

The Shift in Focus from Technology to Humanity

  • A revolution changes how we do things and think, emphasizing a shift from technology-focused approaches (like the Industrial Revolution) to human-centered applications of artificial intelligence.
  • Current AI theories aim to facilitate human activities, providing more time for individuals while focusing on their health and well-being. These tools serve as assistants rather than replacements for expert judgment.

Understanding AI Information Processing

  • When using AI tools, it's crucial to understand where they derive their information; this aspect was not initially discussed.
  • AI systems undergo extensive training processes that are costly and rigorous, utilizing available data (purchased or permitted) formatted into tokens for processing and retention.
  • These tools have access to vast amounts of human knowledge across various topics, including complete libraries and social media content, which they filter and refine during training.

Designing Effective Prompts for AI Tools

  • To create effective prompts, users should define roles, tasks, formats, contexts, and criteria clearly. For example: acting as a high school ethics teacher requires specific instructions about the subject matter.
  • Users can design exercises related to ethical discussions in schools by specifying actions like designing group activities that progress from general concepts to specific tasks.
  • The format of responses is important; presenting answers in tables helps organize information effectively. Criteria such as language simplicity and pedagogical tone must also be considered.
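The ethics-teacher example can be written out using the same role/task/format/criteria structure; the exact wording below is illustrative, not the session's verbatim prompt.

```python
# Each line covers one element of the structure described above.
prompt = "\n".join([
    "Role: act as a high school ethics teacher.",
    "Task: design a group activity on ethical discussion that moves "
    "from general concepts to specific cases.",
    "Format: present the plan as a table.",
    "Criteria: use simple language and a pedagogical tone.",
])
print(prompt)
```

A prompt laid out this way can be pasted directly into any chat-based AI tool and adjusted line by line for other subjects.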

Utilizing Prompts for Educational Planning

  • A guide can be created based on structured prompts that educators can copy into their chosen AI tool for generating educational content efficiently.
  • While some advanced features may require special codes or API keys (e.g., Gemini), basic prompt generation remains accessible without these enhancements.

Evaluating Generated Content

  • Educators are encouraged to evaluate the results produced by AI tools critically. Examples of prompts include creating lesson plans across various subjects like history or biology tailored for high school students.
  • Continuous improvement of prompts is essential; educators should adapt them based on their experiences with different subjects and teaching methods.

Addressing Biases in AI Outputs

  • It's vital to recognize that not all information provided by AI is accurate; verification is necessary due to potential biases inherent in the data sources used by these systems.
  • Contextual understanding of biases—such as those related to gender or geography—is crucial when interpreting outputs generated by artificial intelligence.

Understanding Bias in Language Models

Gender and Geographic Biases

  • According to UNESCO, language models depict women four times more often in domestic roles compared to men, reserving leadership terms for males. This highlights a significant gender bias rooted in the training data.
  • Studies show an automatic association of poverty with specific regions, particularly Latin America and Africa, while wealth is linked to the Global North. This geographic bias skews perceptions of global poverty.
  • AI tutors often describe minority students using negative terms like "behavioral problems" or "low performance," indicating that biases are prevalent in educational contexts due to how these models were trained.

Addressing Biases in Educational Frameworks

  • It is essential to mitigate geographic and gender biases in the prompts used for educational content; recommendations include explicitly guarding against such biases during curriculum development.
  • The World Economic Forum identifies critical skills needed by 2030, emphasizing the importance of adapting education to meet future job market demands.

Key Competencies for Future Employment

  • Skills deemed less relevant include basic literacy and manual dexterity; however, competencies like global citizenship and quality control remain important.
  • Consolidated competencies include resource management and systemic thinking. Emerging skills focus on environmentalism and user experience design alongside key areas like artificial intelligence.

Shifting Focus from Traditional Qualifications

  • In recruitment, traditional qualifications are becoming less significant than practical skills such as project leadership, teamwork, flexibility, and adaptability. Employers prioritize candidates who can learn new things over those with specific degrees.
  • The psychological aspect of receiving feedback is crucial; some professionals struggle with criticism which impacts their performance.

Current State of Educational Competencies

  • UNESCO outlines key competencies for upcoming years: information literacy, critical thinking, creativity, autonomy in learning processes—skills that enhance employability beyond technical knowledge.

Practical Applications of AI in Education

Designing Effective Learning Experiences

  • The session concludes by discussing tailored laboratories designed for educators aimed at improving teaching practices through AI tools.

Example Curriculum Development

  • An example provided involves a didactic planning approach using AI within a research laboratory course structure divided into six phases starting with problem selection.

Understanding Social Research in Education

Overview of Phase 1: Introduction to Social Research

  • The first phase focuses on understanding social research and its utility, including how students engage with research labs and the subtopics involved.
  • Key topics include distinguishing between science and common sense, as well as identifying immediate social realities.

Practical Application in Classrooms

  • Students are tasked with identifying real problems in their environment, emphasizing the importance of addressing issues through research.
  • A structured output format is suggested: a table with three columns detailing social issues, urgency for investigation, and potential impacts if ignored.

Utilizing AI Tools for Research Development

  • The discussion introduces Gemini as a versatile tool integrated into email platforms to assist in developing prompts for research topics.
  • Emphasis is placed on using a "thinking mode" within the tool to process information effectively without selecting specific models initially.

Generating Relevant Content Through AI

  • Interaction with AI tools occurs via chat; users input requirements to receive formatted outputs that address identified social issues.
  • Educators evaluate the appropriateness of content generated by AI, which can then be used to create presentations or further educational materials.

Collaboration and Development of Educational Materials

  • The presentation discussed was primarily developed using artificial intelligence, reviewed by educators who made necessary adjustments based on initial frameworks provided by course coordinators.