OK, so maybe we're talking somewhat at cross purposes.

I was talking about the process/mechanism of reasoning - how our brains appear to implement the capability we refer to as "reasoning", and by extension how an AI could do the same by implementing the same mechanisms.

If we accept prediction (i.e. use of past experience) as the mechanistic basis of reasoning, then choice of logic doesn't really come into it - it's more just a matter of your past experience and what you have learnt. What predictive rules/patterns have you learnt, both in terms of a corpus of "knowledge" you can bring to bear, and in terms of experience with the particular problem domain - what have you learnt (i.e. what solution steps can you predict) about trying to reason about any given domain/goal?

In terms of consistent use of logic, and sticking to it, two of the areas where LLMs are lacking are the absence of any working memory other than their own re-consumed output, and the inability to learn beyond pre-training. With both of these capabilities an AI could maintain focus (working memory) on the problem at hand (rather than suffer from "context rot") and learn whatever consistent, or phased, logic has been successful in the past at solving similar problems (i.e. predicting actions that will lead to a solution).
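
To make that concrete, here's a toy sketch in Python (entirely hypothetical names, and a stub standing in for the model call - a sketch of the shape of the loop, not a real implementation): a compact working memory kept separate from the raw transcript, plus a store of previously successful solution steps that persists from one problem to the next.

    # Toy sketch only: a hypothetical agent loop with (a) a compact working
    # memory, separate from the raw transcript, and (b) an experience store
    # that persists successful solution steps beyond a single context window.
    from dataclasses import dataclass, field

    @dataclass
    class WorkingMemory:
        # The agent re-reads only this focused state each step, rather than
        # re-consuming its entire output history ("context rot").
        goal: str
        facts: list[str] = field(default_factory=list)

    @dataclass
    class ExperienceStore:
        # Maps a coarse problem domain to steps that worked before, i.e.
        # learning that outlives any one problem/context.
        successful_steps: dict[str, list[str]] = field(default_factory=dict)

        def recall(self, domain: str) -> list[str]:
            return self.successful_steps.get(domain, [])

        def record(self, domain: str, steps: list[str]) -> None:
            self.successful_steps.setdefault(domain, []).extend(steps)

    def propose_step(memory: WorkingMemory, recalled: list[str]) -> str:
        # Stub standing in for "predict the next solution step from past
        # experience" - a real agent would call a model here.
        return f"reuse: {recalled[0]}" if recalled else "explore from scratch"

    def solve(goal: str, domain: str, store: ExperienceStore, max_steps: int = 3) -> list[str]:
        memory = WorkingMemory(goal=goal)
        taken: list[str] = []
        for _ in range(max_steps):
            step = propose_step(memory, store.recall(domain))
            taken.append(step)
            memory.facts.append(f"did: {step}")   # keep the focus updated
        store.record(domain, taken)               # learning persists to the next problem
        return taken

    store = ExperienceStore()
    print(solve("integrate x^2", "calculus", store))
    print(solve("integrate x^3", "calculus", store))  # now reuses earlier steps

The point is only the shape of the loop: the agent re-reads a focused summary rather than its whole output history, and what worked gets recorded so it can be predicted again next time.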



But prediction as the basis for reasoning (in the epistemological sense) requires the goal to be given from the outside, in the form of the system that is to be predicted. And I would even say that this problem (giving predictions) has been solved by RL.

Yet the consensus seems to be that we don't quite have AGI; so what gives? Clearly just making good predictions is not enough. (I would say current models are empiricist to the extreme; but there is also the rationalist position, which emphasizes logical consistency over prediction accuracy.)

So, in my original comment, I lament that we don't really know what we want (what the objective is). The post doesn't clarify much either. And I claim this issue occurs even with systems much simpler than reality-connected LLMs, such as the lambda calculus.


> But prediction as the basis for reasoning (in the epistemological sense) requires the goal to be given from the outside, in the form of the system that is to be predicted.

Prediction doesn't have goals - it just has inputs (past and present) and outputs (expected future inputs). Something that is on your mind (perhaps a "goal") is just a predictive input that will cause you to predict what happens next.

> And I would even say that this problem (giving predictions) has been solved by RL.

Making predictions is of limited use if you don't have the feedback loop telling you when your predictions are right or wrong (so you can update the prediction for next time), and having that feedback (as our brain does) when a prediction is wrong is the basis of curiosity - it causes us to explore new things and learn about them.
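
As a toy illustration of that loop (just a sketch of the general prediction-error-as-curiosity idea, not anyone's actual system): the agent updates its prediction whenever it turns out to be wrong, and the size of the last error doubles as a curiosity signal steering it toward the states it predicts worst.

    # Toy sketch: prediction error drives both learning and curiosity.
    import random

    random.seed(0)

    # Hidden "world": each state has a true value the agent tries to predict.
    true_values = {s: random.uniform(0.0, 1.0) for s in range(5)}

    predictions = {s: 0.0 for s in true_values}           # learned predictions
    last_error = {s: float("inf") for s in true_values}   # unvisited = maximally curious
    learning_rate = 0.5

    def curiosity(state: int) -> float:
        # How wrong we were last time we looked - the reason to look again.
        return last_error[state]

    for _ in range(30):
        state = max(true_values, key=curiosity)      # explore the worst-predicted state
        observed = true_values[state]                # "visit" it and see what happens
        error = observed - predictions[state]
        predictions[state] += learning_rate * error  # update prediction for next time
        last_error[state] = abs(error)               # surprise fades as predictions improve

    print({s: round(p, 2) for s, p in predictions.items()})

Once a state is well predicted, its curiosity signal decays to nothing - roughly the flip side (boredom) of the same mechanism.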

> Yet, the consensus seems to be we don't quite have AGI; so what gives? Clearly just making good predictions is not enough.

Prediction is important, but there are lots of things missing from LLMs, such as the ability to learn, working memory, innate drives (curiosity, boredom), etc.


Related: Computational models of abduction have a goal and orientation towards an optimal "solution". We need none of that for creative abduction.


