XPost: comp.theory, comp.lang.c++, comp.ai.philosophy
From: 643-408-1753@kylheku.com
On 2025-11-25, olcott wrote:
> On 11/25/2025 9:50 AM, Bonita Montero wrote:
>> On 25.11.2025 at 16:47, olcott wrote:
>>> On 11/25/2025 9:20 AM, Bonita Montero wrote:
>>>> What you do is like thinking in circles before falling asleep.
>>>> It never ends. You're going to die with it, sooner or later.
>>>>
>>>
>>> I now have four different LLM AI models that
>>> prove I am correct, on the basis that they
>>> derive the proof steps establishing it.
>
>> It doesn't matter whether you're correct. There's no benefit in
>> discussing such a theoretical topic for years. You won't even stop
>> if everyone tells you you're right.
>
> My whole purpose in this has been to establish a
> new foundation for correct reasoning that gets rid

Unfortunately, your reasoning was proven wrong (by Turing's 1936
undecidability result) before you were born, and your computer program
does not show what you say it does.
> The timing for such a system is perfect because it
> could solve the LLM AI reliability issues.

You have no idea how LLMs work, what lies at the root of their
reliability issues, or how to take even the first step toward fixing
them. You have zero qualifications for doing anything of the kind, and
no chance of developing them; that window is long gone.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @Kazinator@mstdn.ca