ChatGPT, developed by the Microsoft-backed research lab OpenAI, has dominated the artificial intelligence (AI) news cycle since its initial launch in November 2022. ChatGPT is a natural language processing (NLP) generative AI tool that, depending on the model a user can access, generates text output from text or image input in a format that mimics human conversation. The first publicly available tool of its kind, ChatGPT has become one of the fastest-growing consumer applications of all time, attracting 100 million monthly active users earlier this year.[1] Of course, like most new technologies, ChatGPT has its risks and limitations.
ChatGPT Is Amazing, but Sometimes It’s Amazingly Wrong
While ChatGPT has extensive potential to advance all kinds of tasks, it also suffers from flaws that undermine its reliability. By now, you’ve likely heard about ChatGPT “hallucinations”: output that is slightly or completely incorrect, or even nonsensical, yet delivered with confidence. In one example, the Guardian reported that ChatGPT invented the title of a Guardian news article, complete with a real reporter’s byline, and offered it to a researcher as a source on the topic being investigated. The article in question had never been written.
ChatGPT’s answers also appear to reflect biases in the dataset on which it was trained, a concern that OpenAI has addressed publicly.[2] And while OpenAI seeks to tackle misinformation, questions remain about what threshold must be met before information is deemed “misinformation” and who is responsible for making those decisions.