Neuro-inspired AI framework uses reverse-order learning to enhance code generation

Large language models (LLMs), such as the model underpinning OpenAI's popular ChatGPT platform, have proved capable of tackling a wide range of language processing and text generation tasks. Some of these models have also shown promise for generating programming code, particularly when several are deployed together as part of so-called multi-agent systems.