
“The main mistake people usually make is thinking [people like] Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student,” Wolf wrote. “To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. … If we want scientific breakthroughs, we should probably explore how we’re currently measuring the performance of AI models and move to a measure of knowledge and reasoning able to test if scientific AI models can for instance:

  • Challenge their own training data knowledge
  • Take bold counterfactual approaches
  • Make general proposals based on tiny hints
  • Ask non-obvious questions that lead to new research paths”

Thomas Wolf on X: “I’m afraid AI won’t give us a ‘compressed 21st century.’”

I find it interesting that we think we can control the replication of intelligence when the top people working on it can’t even agree on a basic framework for what intelligence actually is.