
ChatGPT

Oh, yes, yes. This mass hysteria on HN.

Here is what you would have: the inability, in principle, to trust any written text, because you cannot be sure it isn’t coming from one of these bullshit verbiage synthesizers.

Again, what a model “learns” is the actual shape of the current snapshot of verbiage noise given to it as a training set.
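To make that claim concrete, here is a minimal sketch of what “learning the shape” of a text means, reduced to a toy character-level bigram model (the corpus string is made up for the illustration): the whole “model” is nothing but recorded co-occurrence frequencies, and “generation” is a random walk through them.

#+begin_src python
import random
from collections import Counter, defaultdict

# Toy corpus; the "model" below is nothing but the empirical
# shape of this very string.
corpus = "the cat sat on the mat and the cat ate the rat"

# "Training": count which character follows which.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample(prev: str) -> str:
    """Draw the next character from the recorded frequencies."""
    chars, weights = zip(*counts[prev].items())
    return random.choices(chars, weights=weights)[0]

# "Generation": a weighted random walk. It returns what the
# corpus looks like; it does not create anything.
out = "t"
for _ in range(40):
    out += sample(out[-1])
print(out)
#+end_src

A real model replaces this table with billions of weights and the single character of context with thousands of tokens, but the picture sketched here is the same: interpolation over a snapshot of the training data.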

With the code it is able to generate, the situation is exactly the same: one has to exercise formal methods to make sure that it is correct.

Some people have already said that code reviews by real experts will solve the problem, but they won’t. Dijkstra’s principle, that testing can show the presence of bugs but never their absence, will shine here again.
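A textbook illustration of that principle (a made-up function, not any model’s actual output): every test below passes, a reviewer could plausibly nod it through, and the code is still wrong.

#+begin_src python
def my_max(xs: list[int]) -> int:
    # Subtly wrong: the running maximum starts at 0, not at xs[0].
    m = 0
    for x in xs:
        if x > m:
            m = x
    return m

# The tests the author happened to write all pass:
assert my_max([1, 5, 3]) == 5
assert my_max([2, 2]) == 2
assert my_max([0, 7]) == 7

# Yet my_max([-3, -1]) returns 0 instead of -1.  The green test
# suite showed the presence of no bug, not the absence of bugs.
#+end_src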

One more time: it outputs “what looks like” correct code; it never produces code from first principles, using standard idioms and best practices. It just searches within its trained structure.

So, no. Fundamentally, all the code it returns at a prompt should be formally verified, because it is not produced by a bottom-up process preceded by carefully selecting the most relevant concepts and abstracting them out in an optimal (minimal and just right) way.
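For scale, this is what “formally verified” means in the smallest possible dose: a machine-checked proof over all inputs, rather than a sample of them. A toy sketch in Lean 4 (the function and theorem are invented for the illustration; =omega= is the standard linear-arithmetic tactic):

#+begin_src lean
-- Pretend this is "generated" code: a doubling function.
def double (n : Nat) : Nat := n + n

-- A machine-checked specification: it holds for every Nat,
-- by proof, not by sampling some inputs.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
#+end_src

No test suite, however large, gives this quantifier over all inputs; that is exactly the gap between “what looks like” correct code and correct code.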

When one prompts it to “write a ray-tracer using OpenGL”, it never goes through the vastly complex, tens-of-layers-deep conceptual framework (evolved by trial and error and biased selection by thousands of humans). It just spews out the code with the best “weights”. It is a search through the ready-made snippets it was trained on, after all.

It does not “create” even the simplest code in any sense. It returns it.

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:36

Emacs 29.1.50 (Org mode 9.7-pre)