Why Artificial General Intelligence Is and Remains a Fiction

osf.io

11 points by ketlag 10 months ago · 7 comments

thomasahle 10 months ago

> The paper argues that computers, being non-living and outside evolutionary processes, lack the goal-orientation essential for consciousness and mind creation. Computers, not being part of life's evolutionary journey, cannot inherently possess goal-oriented behaviors.

This seems to be more about the author's definition of consciousness than about intelligence.

clauderoux 10 months ago

This is the same problem I have with creationists when they say that life cannot emerge from "rocks". Still, we are a walking bag of electrochemical reactions based on the same basic minerals that are found everywhere on the planet. There is no "specific, invisible component" that makes us alive, just an incredibly complex arrangement of minerals and metals. In the same way, what would make the mind impossible to duplicate? Unless you believe there is an inscrutable, extra-subtle element that we cannot see or demonstrate? The soul, maybe? So basically, every argument against AGI falls into the category of religious beliefs.

starchild3001 10 months ago

I pretty much agree with the perspective in this paper. That is, depending on your definition, either

1) AGI has already happened (in a narrow domain of question answering, machines can answer any question and converse nearly as well as an average human), or

2) AGI requires biological embodiment (such as real-time learning, growth, long-term memory, and various real-life motives and behaviors), and therefore it will never happen with current devices.

  • singularity2001 10 months ago

    3) AGI has almost happened: the current integration of real-time visual, spatial, and audio inputs with logical reasoning and different layers of learning (beyond the context window and 'memory') just doesn't satisfy even the most charitable definitions yet.

munchler 10 months ago

People are still going to be writing papers like this after AGI has taken over the world. Denial is a strong impulse.

alexwebb2 10 months ago

It’s tedious shooting down all of these backwards-from-conclusion arguments from the anti-AI crowd.

Good thing I have an intelligent AI that can respond for itself!

——

There appear to be several potential issues with the paper's argumentation:

1. False Dichotomy in Systems Comparison
   - The paper appears to create an artificial divide between "thermodynamic systems" and "computer systems"
   - This ignores that computers are also physical systems governed by thermodynamics
   - The distinction between biological and artificial systems may be one of degree rather than kind

2. Evolutionary Argument Problems
   - The paper assumes consciousness/intelligence requires evolutionary history
   - This is a correlation-causation fallacy: just because biological intelligence evolved doesn't mean evolution is the only path to intelligence
   - It fails to consider that artificial systems could potentially develop goal-oriented behaviors through other mechanisms
   - The argument would also imply that any hypothetical alien intelligence that evolved differently from Earth life couldn't be conscious

3. Goal-Orientation Assumptions
   - Claims computers "lack goal-orientation essential for consciousness"
   - This begs the question by assuming: a) consciousness requires goal-orientation, and b) only evolutionary processes can create genuine goal-orientation
   - Neither assumption is clearly justified

4. Methodological Issues
   - Using multiple disciplines (physics, biology, philosophy, neuroscience) could be a strength, but could also indicate cherry-picking convenient arguments from each field
   - The abstract suggests a conclusion-driven approach rather than following evidence to a conclusion

5. Consciousness-Intelligence Conflation
   - The paper appears to conflate consciousness with intelligence
   - These are separate concepts: we could potentially have AGI without consciousness, or consciousness without human-level intelligence
   - Many AGI researchers aren't claiming to create consciousness, just general problem-solving ability

6. Definitional Vagueness
   - Based on the abstract, it's unclear how the paper defines key terms like:
     - Artificial General Intelligence
     - Consciousness
     - Goal-orientation
     - Mind creation
   - Without clear definitions, the arguments may be attacking straw men

7. Predictive Cognition Argument
   - The claim that AGI is an "illusion shaped by the information our minds receive" could be turned around
   - The same argument could be used to claim that AGI skepticism is an illusion shaped by our cognitive biases
   - This is essentially a form of psychological dismissal rather than a substantive argument

8. Historical Perspective
   - The paper seems to ignore that many previously "uniquely human" capabilities have been successfully mechanized
   - Claims about fundamental impossibility need to account for why previous similar claims have often been wrong

9. Thermodynamic Argument Issues
   - While biological systems are indeed complex thermodynamic systems, the paper needs to demonstrate why this specific physical implementation is necessary for intelligence
   - Many complex behaviors can be implemented through different physical mechanisms
   - The argument risks confusing the substrate with the function

10. Scope Problem
    - The paper makes a very strong claim ("AGI is and remains a fiction")
    - To justify this, it would need to prove not just that current approaches won't work, but that NO possible approach could ever work
    - This is a much harder philosophical and scientific claim to defend
