OpenAI is working on a tool to examine which parts of a large language model (LLM) are responsible for which of its behaviours.

According to the company’s engineers, the tool is in the early stages of development; however, the code to run it is available as open source on GitHub.

“We are trying to anticipate what the problems with an AI system will be. We want to really be able to know that we can trust what the model is doing and the answer that it produces,” an OpenAI spokesperson told TechCrunch.

According to the TechCrunch report, the company is using its GPT-4 model “to produce explanations of what a neuron is looking for and then score how well those explanations match the reality of what it is doing.”

To determine how accurate an explanation is, the tool provides GPT-4 with text sequences and has it predict, or simulate, how the neuron would behave on them.
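In code terms, that scoring step boils down to comparing a neuron’s real activations with the activations GPT-4 predicts from the explanation alone. The sketch below is a minimal illustration of the idea, not OpenAI’s actual pipeline (which lives in its open-source repository on GitHub): the GPT-4 simulator is replaced by a trivial keyword matcher, and the activation values and function names are invented for the example.

```python
# Illustrative sketch only: function names and data are hypothetical,
# not OpenAI's actual interpretability API.

def simulate_activations(explanation: str, tokens: list[str]) -> list[float]:
    """Stand-in for the step where GPT-4, given only a natural-language
    explanation of a neuron, predicts how strongly that neuron would fire
    on each token. Here it is faked with a simple keyword match."""
    return [1.0 if explanation.lower() in tok.lower() else 0.0 for tok in tokens]


def explanation_score(real: list[float], simulated: list[float]) -> float:
    """Score an explanation by how closely the simulated activations track
    the neuron's real activations (plain Pearson correlation)."""
    n = len(real)
    mean_r = sum(real) / n
    mean_s = sum(simulated) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, simulated))
    std_r = sum((r - mean_r) ** 2 for r in real) ** 0.5
    std_s = sum((s - mean_s) ** 2 for s in simulated) ** 0.5
    if std_r == 0 or std_s == 0:
        return 0.0
    return cov / (std_r * std_s)


if __name__ == "__main__":
    tokens = ["Marvel", "comics", "Avengers", "stock", "market", "Marvel"]
    real_activations = [0.9, 0.4, 0.8, 0.0, 0.1, 0.95]  # made-up neuron readings
    simulated = simulate_activations("marvel", tokens)
    print(f"explanation score: {explanation_score(real_activations, simulated):.2f}")
```

A high correlation suggests the explanation captures what the neuron responds to; a low one means the explanation fails to predict the neuron’s behaviour and would be rejected or revised.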

The report noted that the researchers were able to generate explanations for all 3,07,200 neurons in GPT-2.

Meanwhile, researchers at the University of Texas at Austin have recently developed an artificial intelligence (AI) model, similar to ChatGPT, that is capable of reading, interpreting and reconstructing human thoughts.
