Discussion about this post

Niklas Anzinger:

Hi Alexey,

I think you misunderstand the point of the Chinese Room thought experiment. The point is precisely to show that being better at solving complex puzzles is NOT the same thing as understanding.

The point is made well here by Michael Huemer: https://fakenous.substack.com/p/how-much-should-you-freak-out-about?utm_source=publication-search

Another classic that drives the point home is Thomas Nagel's essay "What Is It Like to Be a Bat?". The point is that an understanding subject has qualia; understanding is not the sum total of the functions that describe it.

Chase Hasbrouck:

I remain a skeptic. Until shown otherwise, I still lean toward Goodhart's law and benchmark contamination as explaining the majority of the "progress."

I do agree, though, that it's possible that what we have is enough. Do we need AGI if the proto-AGIs we have can be scaffolded well enough for the economically relevant tasks?
