AIGSA

Connecting graduate students in artificial intelligence

First AI Safety Discussion


WNGR 285

The Fall 2022 AIGSA reading group follows the AGI (Artificial General Intelligence) Safety Fundamentals curriculum. Join us for the first meeting! Lunch will be provided.

Please RSVP here (if you haven’t already RSVPed by reacting to the announcement on Discord).

This week’s topic is Artificial General Intelligence. Here’s an introduction from Richard Ngo, the developer of the curriculum:

The first two readings this week offer several different perspectives on how we should think about artificial general intelligence. This is the key concept underpinning the course, so it’s important to deeply explore what we mean by it, and reasons for thinking that the field of AI is heading towards it. The third reading focuses on grounding these high-level arguments by reference to the behavior of existing ML systems. Steinhardt argues that, in machine learning, novel behaviors which are difficult to predict using standard approaches tend to emerge at larger scales.

To prepare, please spend ~1 hour reading these short pieces:

  1. Four background claims (Soares, 2015) (15 mins)
  2. AGI safety from first principles (Ngo, 2020) (section 1 through the end of section 2.1) (20 mins)
  3. More is different for AI (Steinhardt, 2022) (introduction, second, and third posts only) (20 mins)
