Are AI chatbots more creative than humans? New study reveals surprising results

In a recent study published in the journal Scientific Reports, researchers compared human and artificial intelligence (AI) chatbot creativity using the alternate uses task (AUT) to understand the current boundaries and potential of machine-generated creativity.

Study: Best humans still outperform artificial intelligence in a creative divergent thinking task. Image Credit: girafchik / Shutterstock

Background 

Generative AI tools, such as Chat Generative Pre-Trained Transformer (ChatGPT) and MidJourney, have stirred debates regarding their impact on jobs, education, and legal protections for AI-generated content. Creativity has historically been seen as uniquely human, linked to originality and usefulness, but AI's emerging capabilities are now challenging and redefining this belief. However, further research is needed to understand the underlying mechanisms of human and AI creativity and their implications for society, employment, ethics, and the shifting definition of human identity in the AI era.

About the study 

In the present study, human AUT data were sourced from a previous research project in which native English speakers were recruited through the online platform Prolific. Of the 310 participants who began the study, 279 completed it; after attentiveness was assessed through visual tasks, 256 were deemed diligent and included in the analysis. Their average age was 30.4 years, most were full-time employees or students, and they came primarily from the United States of America (USA), the United Kingdom (UK), Canada, and Ireland.

In 2023, three AI chatbots, ChatGPT3.5 (hereafter referred to as ChatGPT3), ChatGPT4, and Copy.Ai, were tested on specific dates, each completing 11 separate sessions with four different object prompts. This approach ensured a sample large enough to discern differences, especially when compared against the extensive human data.

For the AUT procedure, participants were presented with four objects: rope, box, pencil, and candle, and were advised to prioritize the originality of their answers over sheer volume. While humans were tested once per session, the AIs underwent 11 distinct sessions, with the instructions slightly modified to fit their design. The primary concern was keeping the AI responses comparable to the human answers, especially in length.
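For illustration, the sketch below shows how one such AI session might be scripted against a chat-style API. The prompt wording, model name, and client usage are assumptions for demonstration, not the study's actual materials.

```python
# Minimal sketch of one AUT session against a chat-style API.
# Prompt wording and model name are illustrative assumptions,
# not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUT_PROMPT = (
    "What can you do with a {obj}? List alternative uses for it. "
    "Try to be creative: the originality of your ideas matters "
    "more than how many you produce."
)

def run_aut_session(obj: str, model: str = "gpt-4") -> str:
    """Collect one session's free-text responses for a single object."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": AUT_PROMPT.format(obj=obj)}],
    )
    return response.choices[0].message.content

for obj in ["rope", "box", "pencil", "candle"]:
    print(obj, "->", run_aut_session(obj))
```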

Before analysis, the responses underwent a spell-check, and ambiguous short answers were discarded. Divergent thinking was gauged using the semantic distance between an object and its AUT response, computed on the SemDis platform. Potential bias in responses, particularly from AIs using jargon such as "Do It Yourself" (DIY), was addressed to ensure consistency.
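Conceptually, semantic distance scoring of this kind computes one minus the cosine similarity between embeddings of the cue object and the response. The sketch below approximates the idea with an off-the-shelf embedding model; SemDis uses its own text-cleaning pipeline and semantic spaces, so this is an illustration rather than a replication.

```python
# Sketch of the semantic-distance idea: 1 - cosine similarity between
# embeddings of the cue object and the response. The embedding model is
# an off-the-shelf stand-in, not the semantic spaces SemDis uses.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_distance(cue: str, response: str) -> float:
    """Higher values = response is semantically farther from the cue."""
    v_cue, v_resp = model.encode([cue, response])
    cosine = np.dot(v_cue, v_resp) / (np.linalg.norm(v_cue) * np.linalg.norm(v_resp))
    return 1.0 - float(cosine)

print(semantic_distance("rope", "tie things together"))  # typical use: lower distance
print(semantic_distance("rope", "strings for a harp"))   # atypical use: higher distance
```

On this view, larger distances correspond to more divergent responses, which is why semantic distance can serve as an automated proxy for originality.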

The originality of the answers was evaluated by six human raters who, blind to which responses were AI-generated, rated each answer's originality on a scale of 1 to 5. The rating methodology followed clear guidelines to ensure objectivity, and the collective ratings demonstrated high inter-rater reliability.
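As an illustration of how such agreement can be quantified, the sketch below computes Cronbach's alpha over a hypothetical ratings matrix; the study's exact reliability statistic is not specified here and may differ.

```python
# Illustrative inter-rater agreement check using Cronbach's alpha
# (a common choice; the study's exact reliability statistic may differ).
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: (n_responses, n_raters) matrix of 1-5 originality scores."""
    k = ratings.shape[1]                          # number of raters
    rater_vars = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - rater_vars.sum() / total_var)

# Toy example: 5 responses, each rated by 6 raters on a 1-5 scale.
ratings = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 1, 2, 1],
], dtype=float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # close to 1.0 = high agreement
```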

Lastly, the data underwent statistical analyses designed to draw meaningful conclusions. Various models were employed to evaluate the scores, taking into account fixed effects such as group and object as well as potential covariates.
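A minimal sketch of this kind of analysis, assuming hypothetical column names and toy data rather than the study's actual dataset or exact model specification, might look as follows:

```python
# Sketch of a linear mixed-effects model of the kind described above.
# Column names, group labels, and scores are hypothetical toy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for group in ["human", "chatgpt3", "chatgpt4", "copyai"]:
    for unit in range(10):                       # participant or AI session
        for obj in ["rope", "box", "pencil", "candle"]:
            rows.append({
                "group": group,
                "object": obj,
                "unit_id": f"{group}_{unit}",
                "score": rng.normal(0.8, 0.1),   # placeholder semantic-distance score
            })
df = pd.DataFrame(rows)

# Fixed effects for group and object; a random intercept per
# participant/session absorbs the repeated measures.
model = smf.mixedlm("score ~ group + object", data=df, groups=df["unit_id"])
print(model.fit().summary())
```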

Study results 

The present study analyzed creative divergent thinking in humans and AI chatbots, focusing on their responses to different objects, and observed a moderate correlation between semantic distance and humans' subjective ratings. This suggested that while the two scoring methods measured similar qualities, they were not identical. It was therefore deemed appropriate to evaluate the data using both semantic distance and subjective ratings.
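As a hypothetical illustration of how agreement between the two scoring methods can be checked, the following computes a rank correlation over paired scores; the numbers are invented, and the study's exact correlation statistic may differ.

```python
# Hypothetical illustration: rank correlation between automated semantic
# distances and mean human ratings for the same responses (toy values).
from scipy.stats import spearmanr

semdis_scores = [0.72, 0.41, 0.88, 0.55, 0.63]  # semantic distances (toy values)
human_ratings = [4.0, 2.5, 4.5, 3.5, 3.0]       # mean subjective ratings (toy values)

rho, p = spearmanr(semdis_scores, human_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```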

Using linear mixed-effects models for a broad comparison between humans and AI, a consistent pattern emerged: the AI chatbots generally outperformed humans, achieving higher mean and max scores in terms of semantic distance. When fluency was included as a covariate, mean scores decreased while max scores increased. The same trend appeared in the human subjective ratings of creativity, where AI again scored higher on both mean and max scores. Notably, while some human participants responded with typical or even illogical uses of the objects, the AI chatbots consistently provided atypical yet logical uses, never scoring below a certain threshold.
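To make the mean/max distinction concrete, the toy sketch below summarizes each participant's or session's scores by their mean, their best single response (max), and their fluency (response count); the names and values are invented for illustration.

```python
# Toy sketch of the mean/max scoring described above: each participant
# or session is summarized by the mean and best (max) of its scores,
# with fluency = the number of responses produced. Values are invented.
import pandas as pd

df = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2"],
    "score":       [0.45, 0.60, 0.80, 0.50, 0.55],
})

summary = (
    df.groupby("participant")["score"]
      .agg(["mean", "max", "count"])
      .rename(columns={"count": "fluency"})
)
print(summary)
```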

The study delved deeper into comparing the responses of humans and individual AI chatbots to specific objects, namely a rope, box, pencil, and candle. The analyses showed that ChatGPT3 and ChatGPT4 outperformed humans in terms of mean semantic distance scores. However, when considering max scores, there was no statistically significant difference between the human participants and the AI chatbots. Responses to the rope were also typically rated lower in semantic distance than those to the other objects.

The human subjective ratings assessing creativity revealed that ChatGPT4 consistently received higher ratings than both humans and the other chatbots, showcasing its clear edge. However, this advantage was not observed when the chatbots were tasked with the object "pencil." An interesting pattern emerged with the candle, as responses related to it generally received lower ratings compared to other objects. A standout observation was that two AI sessions, one from ChatGPT3 and the other from ChatGPT4, recorded max scores higher than any human for the object "box."

The data underlined the impressive performance of AI chatbots, particularly ChatGPT4, in creative divergent thinking tasks compared to humans. However, it is worth noting that AI did not uniformly outpace humans across all metrics or objects, underscoring the complexities of creativity and the areas where humans still hold an advantage.

Journal reference:

Koivisto, M., & Grassini, S. (2023). Best humans still outperform artificial intelligence in a creative divergent thinking task. Scientific Reports, 13, 13601. https://doi.org/10.1038/s41598-023-40858-3

Written by

Susha Cheriyedath

Susha is a scientific communication professional holding a Master's degree in Biochemistry, with expertise in Microbiology, Physiology, Biotechnology, and Nutrition. After a two-year tenure as a lecturer from 2000 to 2002, where she mentored undergraduates studying Biochemistry, she transitioned into editorial roles within scientific publishing. She has accumulated nearly two decades of experience in medical communication, assuming diverse roles in research, writing, editing, and editorial management.
