‘Indiana Jones’ jailbreak approach highlights the vulnerabilities of existing LLMs

Large language models (LLMs), such as the model underpinning the conversational agent ChatGPT, are becoming increasingly widespread worldwide. As more people turn to LLM-based platforms to source information and write context-specific texts, understanding their limitations and vulnerabilities is becoming ever more vital.

Machine learning accelerates discovery of membranes to filter PFAS from water

Someday, your drinking water could be completely free of toxic “forever chemicals.” These chemicals, called PFAS (per- and polyfluoroalkyl substances), are found in common household items like makeup, nonstick cookware, dental floss, batteries, and food packaging. PFAS permeate the soil, water, food, and air, and they can remain in the environment for millennia. Once inside …

Neuro-inspired AI framework uses reverse-order learning to enhance code generation

Large language models (LLMs), such as the model behind OpenAI’s popular platform ChatGPT, have been found to successfully tackle a wide range of language processing and text generation tasks. Some of these models have also shown promise for generating programming code, particularly when deployed together as part of so-called multi-agent systems.