Elon Musk wants Grok AI to “rewrite” human knowledge

Elon Musk says his artificial intelligence company xAI will retrain its AI model, Grok, on a new knowledge base free of “garbage” and “uncorrected data”, first by using the model to rewrite history.

In a Saturday post on X, Musk said the upcoming Grok 3.5 model would have “advanced reasoning” and that he wanted it to be used “to rewrite the entire corpus of human knowledge, adding missing information and removing mistakes.”

He said the model would then be retrained on that new set of knowledge, claiming that there is too much garbage in any foundation model trained on uncorrected data.

Source: Elon Musk

Musk’s latest fight with “woke”

Musk has long claimed that competing AI models, such as ChatGPT from OpenAI, the company he co-founded, are biased and omit information that is not politically correct.

For years, Musk has tried to shape his products to be free of what he considers harmful political correctness, and he has strived to make Grok what he calls “anti-woke.”

He also rolled back Twitter’s content moderation and disinformation policies after taking over the platform in 2022, which left it flooded with unchecked conspiracy theories, extremist content and false claims, some of which were spread by Musk himself.

Musk intended to fight the wave of disinformation with the Community Notes feature, which lets X users refute or add context to posts, with the notes displayed prominently alongside the offending posts.

Criticism leveled at Grok retraining plan

Musk’s post drew condemnation from his critics, including Gary Marcus, an AI startup founder and emeritus professor at New York University, who compared the billionaire’s plan to a dystopia.

“Straight out of 1984,” Marcus wrote on X. “You couldn’t get Grok to align with your own personal beliefs, so you are going to rewrite history to make it conform to your views.”

Source: Gary Marcus

Bernardino Sassoli de ‘Bianchi, professor at the University of Milan Professor of Logic and Science, wrote On LinkedIn that he “lost his words, commenting on how dangerous the musk plan is.

“When powerful billionaires treat history as malleable simply because the results do not align with their beliefs, we are no longer dealing with innovation; we are facing narrative control,” he added. “Rewriting training data to match ideology is wrong on every possible level.”

Musk’s appeal for “facts” draws conspiracy theories, lies

As part of his effort to overhaul Grok, Musk called on X users to share “divisive facts” for training the bot, stating that they should be “politically incorrect, but factually true.”

The replies were flooded with various conspiracy theories and debunked extremist claims, including Holocaust distortion, discredited vaccine misinformation, racist pseudoscientific claims about intelligence, and climate change denial.
