Thought-Provoking Questions from a Stanford HAI Discussion
I attended a breakfast session hosted by the Stanford Club of Singapore with the Stanford Institute for Human-Centered AI (HAI).
Discussions spanned many topics, from AI safety and incentives to the human side of AI adoption. Here are some of the questions raised across different tables that I found most thought-provoking.
Incentives vs Safety
- $ vs safety: Is there an inherent tradeoff between profitability and AI safety?
- What incentives do companies have to train AI on underrepresented languages? It doesn’t make as much money.
- Who should take charge of training regional AI models for underrepresented regions? NGOs? Governments?
Human vs AI Standards
- What standards do we put on AI vs humans?
- Doctors have roughly 30% diagnostic error rates, yet we expect near-perfect accuracy from AI systems
The Human side of AI
- Protecting human dignity: One of the biggest uses of ChatGPT is as a personal therapist — people can share their deepest worries without judgement
- What data would AI need to become more empathetic?
- How does a human “feel at home” with and “trust” something? We need to understand this so we know how to build trustworthy AI
Evaluations & Benchmarks
- Benchmarks & evaluations lack transparency. We don’t even know if the evals are meaningful for real-world use cases
- We need more benchmarks from non-profits, academia, etc. Many benchmarks come from industry players, which can be suspect: they have incentives to create benchmarks that favor their own models and services
- Evals must be more robust & transparent to make AI safe
Real vs Perceived AI Risk
- Which AI dangers are real and which aren’t? E.g. the US Congress said they got anthrax recipes from Llama… but these can also be found via Google and the internet… Is this an AI problem or an internet problem?
Opportunities
- We don’t know what we don’t know in terms of possibilities. During the industrial revolution, it was easy to see what to automate. But in the software world we live in, it’s not as obvious what we can augment or automate