The Anti-Humanism of AI Research
Computers are not merely automata, and the failure to realise this has been catastrophic for computing.
In his 2019 essay "The Bitter Lesson," Richard S. Sutton writes:
[…] researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.
In fairness, this was written in the context of misapplied focus within AI research. Still, I find it perhaps unintentionally profound, as it speaks to a broader spiritual deficit in AI research as a whole.
The utility of AI is confusing at best. It's ostensibly a fully automated way of solving arbitrary problems with machine learning and some math, but how does that relate to the real world? Sometimes the aim is to aid medical diagnosis. Other times it's the dubious goal of winning at chess. Even when the ends seem noble, the results are often plagued by the most basic of errors. What's really going on here?