Potential Pitfalls of Artificial Intelligence

Overview of AI Challenges

  • The speaker introduces potential pitfalls of artificial intelligence, particularly machine learning, emphasizing their practical and ethical implications for lawyers.
  • The speaker frames these as "challenges" rather than "problems," aiming to keep a balanced perspective on the technology's impact.

Success Example: Contract Review Contest

  • A successful case study is presented involving LawGeex's contest, in which an AI model was trained to identify legal issues in non-disclosure agreements (NDAs).
  • The AI competed against 20 experienced corporate lawyers using five real-life NDAs and a list of 30 possible legal issues.
  • Results showed the AI achieved 94% accuracy compared to 85% for the lawyers, highlighting its effectiveness in identifying legal issues.
  • The speed of the AI was also notable, completing reviews in just 26 seconds versus an average of 92 minutes for human lawyers.

Identifying Key Challenges

  • Despite the success shown by the AI, there are at least five significant challenges that could arise from its use.

Bias Issues

  1. Sample Bias
  • Sample bias occurs when training data does not adequately represent real-world cases, leading to incorrect or biased results.
  2. Human Fallibility
  • Implicit biases from humans can affect data used for training models, perpetuating bias within machine learning systems.
  3. Feedback Loops
  • Bias can be reinforced through societal feedback loops based on how models are utilized.

Technical Challenges

  4. Feature Selection
  • Choosing appropriate features for training is crucial; missing important aspects or including irrelevant ones can skew results.
  5. Inscrutability
  • Deep learning models often lack interpretability, making it difficult to explain outputs in understandable terms.

Data Set Sample Bias Explained

  • Addressing sample bias requires careful attention to dataset quality; it's a common issue affecting many machine learning systems today.
  • An example includes gender bias in speech recognition systems like Google's, which has been shown to perform better with male voices due to imbalanced training data.
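The mechanics of this can be seen in a minimal, purely illustrative Python sketch (all numbers hypothetical, and the "classifier" is a toy nearest-centroid model, not a real speech recognizer): when the training sample is 90% group A, the model fits group A's patterns and performs far worse on group B, even though both groups are equally learnable.

```python
import random

random.seed(0)

# Hypothetical setup: the feature that signals the positive class sits
# near +1.0 for group A but near -1.0 for group B; negatives for both
# groups sit near 0.0.
def make_samples(group, n_pos, n_neg):
    centre = 1.0 if group == "A" else -1.0
    pos = [(random.gauss(centre, 0.2), 1) for _ in range(n_pos)]
    neg = [(random.gauss(0.0, 0.2), 0) for _ in range(n_neg)]
    return pos + neg

# Sample-biased training set: 90% group A, 10% group B.
train = make_samples("A", 90, 90) + make_samples("B", 10, 10)

# Toy "model": one mean (centroid) per class; classify by nearest centroid.
def centroid(label):
    vals = [x for x, y in train if y == label]
    return sum(vals) / len(vals)

c_pos, c_neg = centroid(1), centroid(0)

def predict(x):
    return 1 if abs(x - c_pos) < abs(x - c_neg) else 0

def accuracy(samples):
    return sum(predict(x) == y for x, y in samples) / len(samples)

test_a = make_samples("A", 100, 100)
test_b = make_samples("B", 100, 100)
print(f"group A accuracy: {accuracy(test_a):.2f}")  # high
print(f"group B accuracy: {accuracy(test_b):.2f}")  # roughly coin-flip
```

Because group A dominates the training data, the positive-class centroid lands near group A's +1.0 region, so group B's positives (near -1.0) are systematically misclassified.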

Implicit Human Bias

Understanding Implicit Bias and AI

The Nature of Implicit Bias

  • Implicit bias refers to the unconscious prejudices that influence people's actions, often without their awareness. It highlights how societal stereotypes can affect behavior.

Examples of Bias in Search Algorithms

  • A notable example comes from a HuffPost article: an image search for "nurse" yields predominantly female images, while "doctor" shows mostly male ones, illustrating gender bias in search results.
  • The biased representation stems from human activity and the context surrounding web pages, which influences image search algorithms.

Feedback Loops in Predictive Policing

  • Predictive policing uses data and AI to forecast criminal activity, potentially leading to biased resource allocation based on historical crime data.
  • PredPol's machine learning algorithm predicts crime locations using anonymized historical reports but risks reinforcing existing biases through its predictions.

Consequences of Sampling Bias

  • If black neighborhoods are overrepresented in training data, increased police patrols may lead to more reported crimes in those areas, creating a self-reinforcing cycle of bias.
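This cycle can be sketched in a toy simulation (hypothetical numbers, and a deliberately crude allocation rule, not any real system's actual algorithm): two areas have the same true crime rate, but one is overrepresented in the historical reports, so it attracts more patrols, which generate more reports, which attract more patrols.

```python
# Hypothetical feedback-loop simulation: two areas with the SAME true
# crime rate. Area 0 starts overrepresented in the report data, the model
# patrols the "hotter" area more heavily, and patrol detections are fed
# back in as new report data.
TRUE_RATE = 0.1            # identical underlying crime rate in both areas
reports = [60.0, 40.0]     # biased starting data: area 0 overrepresented

for _ in range(10):
    # "Prediction": patrol the area with more reported crime twice as hard.
    hot = 0 if reports[0] >= reports[1] else 1
    patrols = [100.0, 100.0]
    patrols[hot] = 200.0
    # Patrols detect crime at the (equal) true rate; detections become
    # the next round's training data.
    for i in range(2):
        reports[i] += patrols[i] * TRUE_RATE

share = reports[0] / sum(reports)
print(f"area 0 share of reports: {share:.2f}")  # 0.65, up from 0.60
```

Even though neither area is actually more dangerous, the initial sampling bias is never corrected; the data gap widens every round because the model's own output decides where the next data is collected.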

Types of Harms Caused by AI Bias

  • Two main harms arise from biased AI systems: allocative harm (unfairly distributing resources or opportunities) and representational harm (reinforcing societal subordination based on race or gender).

Addressing AI Bias: Recommendations

Key Recommendations by Kate Crawford

  • Crawford emphasizes the importance of "fairness forensics," advocating for thorough testing and evaluation of AI models to identify biases effectively.

Importance of Interdisciplinary Approaches

  • Addressing bias requires interdisciplinary teams that consider societal implications alongside technical capabilities for effective solutions.

Ethical Considerations in AI Development

  • Developers should critically assess the ethical implications of their AI applications—questioning whether certain technologies should be pursued at all.

Challenges with Feature Selection

Complexity in Feature Engineering

Understanding AI Complexity and Interpretability

The Necessity of Model Complexity

  • Model complexity and accuracy are crucial, but the relevance of the chosen features matters just as much. Features can be human-level attributes identified by designers, or low-level representations derived internally by a deep learning model.

Importance of Feature Selection in AI Decision-Making

  • In AI decision-making tools, understanding the factors influencing decisions is vital, especially in legal contexts. For instance, a company using an AI CV scanner to filter job applications must consider how it evaluates candidates.

Case Study: Discrimination in AI Systems

  • An example illustrates that an AI system may inadvertently discriminate against women due to poor feature selection during pre-processing. Despite avoiding protected characteristics like gender, other selected features could act as proxy variables leading to bias.
  • Length of continuous employment was used as a feature but disproportionately affected women with career breaks due to childcare responsibilities. Machine learning excels at identifying such hidden correlations.
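The proxy-variable effect described above can be sketched as follows (all names, weights, and numbers are hypothetical): gender is never given to the screener, yet ranking applicants by continuous employment still skews the shortlist, because career breaks correlate with gender in the simulated population.

```python
import random

random.seed(1)

# Hypothetical sketch: gender is EXCLUDED from the features, but "years of
# continuous employment" remains, and career breaks (modelled here as more
# common for women, per the example in the text) make it a proxy variable.
def applicant(gender):
    skill = random.gauss(5, 1)          # equal skill distributions by design
    career_break = random.random() < (0.5 if gender == "F" else 0.1)
    continuous_years = skill - (3 if career_break else 0)
    return {"gender": gender, "continuous_years": continuous_years}

pool = [applicant("F") for _ in range(1000)] + \
       [applicant("M") for _ in range(1000)]

# The screener never sees gender -- it scores only the proxy feature.
def score(a):
    return a["continuous_years"]

shortlist = sorted(pool, key=score, reverse=True)[:400]

f_rate = sum(a["gender"] == "F" for a in shortlist) / len(shortlist)
print(f"women on the shortlist: {f_rate:.0%}")  # well below 50%
```

The skill distributions are identical by construction, so the entire shortlist gap is produced by the proxy feature alone.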

Challenges with Deep Learning Models

  • A more advanced company eliminated feature extraction and relied solely on deep learning for applicant ranking from CV text. This approach led to biases that were harder to identify since the model's internal reasoning was opaque.
  • The deep learning model created its own internal criteria for hiring without clear visibility into its decision-making process, complicating efforts to understand biases present in its outputs.

Interpretability vs Explainability in AI

  • The concepts of interpretability and explainability are critical when assessing AI models. Different types of models vary significantly in their transparency regarding inner workings and decision processes.
  • Linear regression offers easier interpretability compared to complex models like deep neural networks, which are often termed "black boxes" due to their lack of transparency about how inputs relate to outputs.
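A minimal sketch of why linear regression counts as interpretable, using made-up numbers: after fitting, the model's entire behaviour can be read off from two parameters, something a deep network's millions of weights do not allow.

```python
# Hypothetical data: years of experience vs. salary offered.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [32.0, 41.0, 49.0, 62.0, 71.0]

# Ordinary least squares in closed form for one feature.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# The "interpretation" is the model itself: each extra year of experience
# adds `slope` to the prediction.
print(f"prediction = {intercept:.1f} + {slope:.1f} * experience")
```

For a deep neural network there is no analogous readout: the mapping from inputs to outputs is spread across many layers of weights, which is why such models are called black boxes.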

Understanding Explainability

  • Explainability involves comprehending an AI model's behavior in relatable terms for those impacted by its decisions. It allows stakeholders to grasp how input features influence outcomes meaningfully.
  • An explainable model can clarify the importance of each feature contributing to a decision outcome while ensuring fairness and relevance in the factors considered by the model.
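One simple way a model of this kind can attribute an individual decision to its input features (a hypothetical sketch, with invented feature names and weights) is to decompose a linear score into per-feature contributions relative to a baseline applicant:

```python
# Hypothetical linear scorer: weights and baseline values are invented
# for illustration only.
weights   = {"years_experience": 2.0, "qualifications": 1.5, "test_score": 0.5}
baseline  = {"years_experience": 4.0, "qualifications": 2.0, "test_score": 60.0}
applicant = {"years_experience": 7.0, "qualifications": 2.0, "test_score": 48.0}

# Contribution of each feature = weight * deviation from the baseline.
contributions = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}

# Present the largest drivers of the decision first.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>18}: {c:+.1f}")
```

An explanation in this form ("your score was raised by experience, lowered by the test result") is expressed in the stakeholder's own terms, which is the goal the bullet points above describe.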

Distinguishing Between Interpretability and Explainability

  • While related, interpretability focuses on understanding how an AI technology functions overall, whereas explainability emphasizes understanding the specific reasoning behind individual results.
  • A report from the ICO and the Alan Turing Institute provides insights into these concepts, along with recommendations addressing legal issues surrounding the use of explainable AI in decision-making contexts.

Ongoing Research on Explainability

  • The challenge of achieving explainable AI is a focal point for scientific research exploring various technical approaches aimed at enhancing transparency within modern artificial intelligence systems.

Conclusion: Essentials for Legal Professionals