ChatGPT, the language processing tool developed by OpenAI, has raised concerns that influencers and fake experts could use it to spread misinformation within the nutrition industry. Posts circulating on LinkedIn demonstrated that ChatGPT can generate articles that appear credible and well referenced yet contain false information, highlighting the tool's potential for misuse in the wrong hands.
In response to these concerns, a group of 44 scientific researchers has published a seven-point list of "best practices" for using AI tools such as ChatGPT when writing manuscripts. The list emphasizes the need for human oversight during editing, careful review of generated text, and transparent disclosure whenever the technology is used.
As the technology grows more popular, these recommendations for the ethical use of ChatGPT become increasingly important. The same capability that lets ChatGPT produce credible, well-referenced articles also makes it an effective vehicle for misinformation. Ensuring that tools like ChatGPT are used ethically, with human oversight and transparent disclosure, is essential to mitigating the risks of their misuse.