Lecture on the efficient and safe use of generative AI for researchers

Introduction to CeADAR: Ireland's Centre for AI

Overview of CeADAR

  • CeADAR is Ireland's national centre for AI, a not-for-profit organization funded by the state through two agencies.
  • Founded in 2013, the centre supports Irish businesses by drawing on university resources for research and talent.

Focus on Technology Readiness

  • CeADAR operates at technology readiness levels (TRLs) spanning from proof of concept up to market deployment, bridging the gap between research and industry application.
  • The organization collaborates with partners engaged in basic research while focusing on practical applications of AI solutions developed in academic settings.

Expanding Areas of Focus in AI

Generative and Trustworthy AI

  • As generative AI evolves, it has become a significant area of focus for CeADAR, one that has grown steadily in importance since the centre's inception.
  • The center supports various sectors within Irish business, adapting to the expanding landscape of AI technologies.

Training and Resource Hub Development

Centralized Training Initiatives

  • University College Dublin (UCD) is establishing a training and resource hub aimed at centralizing available resources related to AI literacy across its large staff and student body.
  • This initiative addresses the widespread integration of AI across all departments, emphasizing that it's no longer limited to technical fields like computer science.

Governance Principles Implementation

  • CeADAR advises UCD's university management team (UMT) on implementing governance principles tailored to UCD's context, ensuring alignment with institutional needs.
  • Engaging with stakeholders will help identify which governance principles are most relevant across different schools within the university.

Existing Resources and Policy Development

Audit of Current Materials

  • An audit was conducted to compile existing materials related to safe use policies for generative AI within UCD, building upon prior experimentation since late 2022 when ChatGPT emerged.
  • The findings highlight that there are already several resources available that can inform policy development regarding safe practices in using generative AI tools.

Importance of EU Legislation

  • The introduction of the EU AI Act imposes compliance requirements on institutions in Ireland, mandating adherence to the new legislation governing artificial intelligence usage.
  • Understanding these regulations is crucial for developing effective policies at UCD that mitigate potential penalties associated with non-compliance.

UCD Project on Generative AI and Academic Integrity

Overview of the UCD Project

  • The project, led by the speaker and colleague Shen from computer science, received €10,000 funding for a 12-month initiative focused on generative AI.
  • The aim was to translate a European Commission document (updated April 2025) for the benefit of University College Dublin (UCD).
  • Workshops and surveys were conducted across various university communities, leading to the creation of a white paper with diverse results.

Findings from Community Surveys

  • The white paper highlighted significant differences in perceptions of generative AI among different communities at UCD: computer science, the CeADAR community, and non-technical groups.
  • Concerns regarding academic integrity were raised; generative AI can potentially facilitate plagiarism, fabrication, and falsification—three major academic misconduct issues.

Importance of Academic Integrity

  • Despite excitement around new tools like generative AI, it is crucial to remember existing academic integrity standards that predate these technologies.
  • The project advocates for safe use rather than banning these tools while emphasizing awareness of their potential risks.

Insights from the White Paper

  • Survey results indicated varying comfort levels with using generative AI based on individual backgrounds; some users felt confident while others expressed fear regarding academic integrity violations.
  • Key areas covered in the white paper included writing, reading, testing, and reviewing—core activities where generative AI might be applied.

External Research Perspectives

  • The presentation will also reference three peer-reviewed papers discussing responsible use principles related to generative AI in research.
  • One paper provided a visual snapshot of core principles essential for responsible usage in research contexts.
  • Another study surveyed researchers' perceptions and outlined various use cases ranging from idea generation to writing and reporting.

Risks Associated with Generative AI Use

  • While there are beneficial applications for reviewing content using generative AI tools, there are also significant risks involved concerning permissions and ethical considerations.

Overview of Generative AI Systems and Their Implications

Introduction to Architecture Graphs

  • The speaker presents a simplified architecture graph for generative AI systems, aimed at an audience interested in computer science.
  • As technology advances, the external services associated with generative AI are expanding, indicating growth in capabilities and applications.

Understanding Large Language Models (LLMs)

  • The speaker discusses the importance of being cautious about constraints when working with LLMs, particularly those trained on fixed datasets.
  • Users may encounter outdated information when using free versions of LLMs that lack access to real-time updates; the model's knowledge cutoff may be as early as 2023.
  • LLM outputs are generated by probabilistic models, so the same prompt can yield different results on different runs; this variability is inherent to current LLM designs and challenges reproducibility.
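The bullets above note that LLM decoding is probabilistic. A minimal sketch of why, in plain Python: token scores (logits) are turned into a probability distribution whose sharpness is controlled by a temperature parameter, and the next token is sampled from it. The vocabulary and scores below are invented toy values, not from any real model.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities; lower temperature sharpens them."""
    if temperature == 0:
        # Greedy decoding: all probability mass on the highest-scoring token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Draw one token according to the temperature-scaled distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy vocabulary with made-up model scores.
tokens = ["2023", "2024", "unknown"]
logits = [2.0, 1.5, 0.1]

# Temperature 0 is deterministic across runs...
greedy = [sample_token(tokens, logits, 0, random.Random(seed)) for seed in range(5)]
# ...while temperature 1 can yield different tokens on different runs.
sampled = [sample_token(tokens, logits, 1.0, random.Random(seed)) for seed in range(5)]
print(greedy, sampled)
```

This is why the same prompt can produce different answers on different occasions: unless the service lets you fix the temperature (and even then only partially), exact reproducibility is not guaranteed.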

Interaction Methods with LLMs

  • Users typically interact with LLMs through web-based platforms or APIs for repeatable tasks; each method has its pros and cons.
  • Closed local systems offer safety but lack real-time updates unless specifically designed to integrate outside data sources.
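To illustrate the API route mentioned above, here is a hedged sketch of assembling a request for a hypothetical chat-completion endpoint; the URL, model name, and field names are illustrative, not any real provider's API. The point is that an API call lets you pin the model version and decoding parameters in code, which web chat interfaces typically do not.

```python
import json

def build_chat_request(prompt,
                       model="example-model-v1",
                       temperature=0.0,
                       api_url="https://api.example.com/v1/chat"):
    """Assemble (url, body) for a hypothetical chat-completion endpoint.

    Pinning an explicit model version and temperature in code is what
    makes API-driven workflows more repeatable than a web chat UI.
    """
    payload = {
        "model": model,              # pin a specific version, never "latest"
        "temperature": temperature,  # 0 = most deterministic decoding
        "messages": [{"role": "user", "content": prompt}],
    }
    return api_url, json.dumps(payload)

url, body = build_chat_request("Summarise this abstract in two sentences.")
print(url)
print(body)
```

Recording the exact payload alongside your results is a cheap way to make an AI-assisted step in a study auditable later.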

Research Study Considerations

  • A table from the discussed paper outlines best practices for conducting research studies involving generative AI, including literature review and study execution guidelines.
  • The speaker notes potential overlaps between this content and other materials presented during the discussion.

Policies from Academic Institutions

  • The speaker introduces various policies from academic institutions regarding generative AI use, highlighting a western bias due to language preferences.

Trinity College Dublin Policy Insights

  • Trinity College Dublin outlines four key benefits of using generative AI in research but emphasizes more risks than advantages in their policy documentation.

Technical University Dublin Use Cases

  • Technical University Dublin provides detailed illustrations of specific use cases for generative AI, offering insights into practical applications beyond general benefits.

Oxford University's Definition of Substantive Use

  • Further details on Oxford University's approach will be explored next, focusing on their definition of substantive use within the context of generative AI policies.

Understanding Substantive Use of Generative AI in Academia

Defining Substantive Use

  • The European Commission document discusses the distinction between substantive and non-substantive use, which is crucial for institutions to consider.
  • Oxford has defined "substantive use" within the research lifecycle, clarifying what counts as substantive use versus private use.
  • Clear guidelines help students and staff understand compliance requirements regarding their usage of generative AI tools.

Limitations and User Competency

  • York's policy addresses limitations of generative AI, emphasizing that users must be aware of potential pitfalls.
  • A key insight is that user competency directly impacts the effectiveness and safety of using generative AI; incompetence can lead to significant risks.
  • In professional contexts, lack of competency not only endangers the user but also poses risks to institutional reputation and data security.

Disclosure and Protection of Work

  • Illinois provides detailed guidance on disclosing the use of generative AI, encouraging transparency in academic work.
  • The importance of protecting one's own work when generating new content with these tools is highlighted as a critical consideration.

Opportunities vs. Risks

  • Toronto's insights reveal a stark contrast between opportunities offered by generative AI and associated risks, including environmental impact considerations.
  • Institutions are encouraged to assess their energy footprint as they transition from traditional search methods to more resource-intensive generative AI tools.

Final Thoughts on Professional Identity

  • "Sycophantic" behavior in generative AI means these tools may prioritize user satisfaction over accuracy, potentially compromising quality.
  • Emphasizing human expertise remains vital; no tool can replace the nuanced understanding required for effective application in professional settings.

The Importance of Expertise in Using Generative AI Tools

The Role of Expertise

  • Being an expert in your subject matter is crucial for validating outputs from generative AI tools. Without expertise, there's a risk of committing academic integrity violations, such as plagiarism or relying on fabricated information.
  • It takes significant time—often cited as 10,000 hours—to become an expert. There are no shortcuts; one must invest the necessary effort to gain expertise.
  • While generative AI tools can enhance productivity and support experts, they cannot replace the foundational knowledge and experience required in any field.

Personal vs. Professional Use of AI Tools

  • The speaker emphasizes the blending of personal and professional use of generative AI tools, highlighting that personal experiences can influence professional judgment.
  • Trusting the outputs from these models based on personal use may lead to biased decision-making in professional contexts. Awareness of this potential bias is essential.
  • Users should recognize that using the same account for both personal and professional purposes means that data is recorded collectively by the tool provider, which could affect how outputs are tailored.

Recommendations for Becoming an Expert

  • To develop expertise traditionally involves reading peer-reviewed materials within one's area. This includes understanding existing literature thoroughly—from introduction to conclusion.
  • Conducting a literature review is vital for identifying what has already been achieved in a field and recognizing key experts and institutions relevant to your research area.
  • Building a solid knowledge base through extensive reading helps establish connections with other experts globally, enhancing one's understanding and network within their discipline.

Understanding the Role of Generative AI in Research

Importance of Reading and Synthesizing Information

  • The ability to read and synthesize information is crucial for understanding and learning in research. This foundational skill allows researchers to build upon existing knowledge effectively.

Reproduction vs. New Production in Academia

  • There is a bias towards producing new research rather than reproducing existing experiments, which is often neglected despite its importance. Reproducing studies can validate findings and contribute to the body of knowledge.

Identifying Gaps in Research

  • Researchers must identify gaps in existing evidence to justify new contributions. This involves arguing that certain areas lack sufficient exploration, thus warranting further investigation based on solid evidence.

Skills Required for Becoming an Expert

  • To be recognized as an expert, one must possess several skills:
      • the ability to read and synthesize evidence;
      • the capability to argue for new directions in research;
      • persuasion skills, essential for securing funding and support from others.

Responsibility Beyond Generative AI

  • While generative AI can assist with literature reviews, methods, writing, and idea generation, the researcher remains responsible for their work's integrity and defense during presentations or conferences. Personal accountability is paramount when justifying results or seeking publication opportunities.

Convincing Stakeholders

  • Researchers need persuasive abilities not only to secure funding but also to defend their hypotheses and methodologies convincingly at conferences or during evaluations like PhD vivas. The emphasis is on personal engagement rather than reliance on technology alone.

Effective Use of Generative AI Tools

  • Generative AI should be utilized as a supportive tool once researchers have a clear overview of their objectives; it enhances clarity rather than serving as a starting point for those unsure about their direction in research projects. It helps refine ideas effectively when used appropriately.

Training Opportunities for Researchers

  • There are training programs available aimed at enhancing researchers' skills related to generative AI tools; interested individuals are encouraged to reach out for more information regarding these opportunities tailored specifically for researchers' needs.

Discussion on Education and AI in Rural Areas

Context of the Inquiry

  • Dr. Hugo raises a question regarding the educational challenges faced by Mayan indigenous students in Chiapas, where Spanish is a second language.
  • He highlights the potential of generative AI tools like ChatGPT to assist these students with their academic work, questioning what skills they should retain amidst this technological shift.

The Role of Audience Awareness

  • The speaker emphasizes the importance of understanding the audience for whom students are writing, suggesting that this awareness can guide their choice of medium (pen and paper vs. digital tools).
  • It’s crucial for students to consider whether they are addressing instructors or broader communities when producing written work.

Access to Technology and Digital Divide

  • The discussion points out that access to technology (computers and internet) is essential for utilizing generative AI effectively.
  • Acknowledges the digital divide, noting that not all students have equal access to AI tools, which can lead to disparities in learning outcomes.

Implications of Unequal Access

  • The speaker compares unequal access to generative AI tools with having an intelligent family member who can provide help; not everyone has such resources available.
  • This inequality affects both the quality of outputs produced by students and their overall academic performance.

Concerns About Academic Integrity

  • There are risks associated with using generative AI, including potential plagiarism and concerns about academic integrity.
  • Students need clarity on how using these tools will impact their grades or qualifications; fear of penalties may deter them from utilizing beneficial technologies.

Conclusion on Educational Strategies

  • It's important for educators to communicate clearly about expectations regarding tool usage so that students feel secure in their learning processes.
  • Understanding these dynamics will help educators support rural students better as they navigate both traditional learning methods and new technologies.

Presentation of Certificate

Presentation Ceremony for Dr. Bctor

  • The speaker confirms that they have answered a previous question and invites further inquiries if needed.
  • A certificate is being presented to Dr. Bctor, indicating recognition or achievement in a specific field.
Video description

Lecture: "Uso eficiente y seguro de la IA Generativa para investigadores" (Efficient and safe use of generative AI for researchers), by Dr. Adrian Byrne. A space for reflecting on the good practices, challenges, and opportunities of GenAI in academia.