
PERSPECTIVE article

How artificial intelligence can help us understand human creativity.

Fernand Gobet and Giovanni Sala

  • 1 Department of Psychological Sciences, University of Liverpool, Liverpool, United Kingdom
  • 2 Graduate School of Human Sciences, Osaka University, Suita, Japan

Recent years have been marked by important developments in artificial intelligence (AI). These developments have highlighted serious limitations in human rationality and shown that computers can be highly creative. There are also important positive outcomes for psychologists studying creativity. It is now possible to design entirely new classes of experiments that are more promising than the simple tasks typically used for studying creativity in psychology. In addition, given the current and future AI algorithms for developing new data structures and programs, novel theories of creativity are on the horizon. Thus, AI opens up entire new avenues for studying human creativity in psychology.

In psychology, research into creativity 1 has tended to follow well-trodden paths: simple tests of creativity (e.g., alternative uses test), correlations with measures of intelligence, and more recently neural correlates of creativity such as EEG and fMRI (e.g., Weisberg, 2006 ; Runco, 2014 ) 2 . One line of research that has been little explored is to use progress in artificial intelligence (AI) to generate tools for studying human creativity.

Developments in AI have been impressive. DeepMind’s AlphaGo has easily beaten the best human grandmasters in Go, a game that for many years had seemed beyond the reach of AI ( Silver et al., 2016 ). IBM’s Watson mastered natural language and knowledge to the point that it outclassed the best human players in Jeopardy! – a game show where contestants have to find the question to an answer ( Ferrucci, 2012 ). No less impressive, we are now on the brink of having self-driving cars and automated assistants able to book appointments by phone ( Smith and Anderson, 2014 ). These developments raise profound issues about human identity; they also pose difficult but exciting questions about the very nature of human creativity and indeed rationality. But they also present novel opportunities for studying human creativity. Entirely new classes of experiments can be devised, going far beyond the simple tasks typically used so far for studying creativity, and new theories of creativity can be developed.

Artificial Intelligence Research and Creativity

Using AI for understanding creativity has a long history and is currently an active domain of research with annual international conferences (for reviews, see Meheus and Nickles, 2009 ; Colton and Wiggins, 2012 ). As early as 1957, Newell, Simon, and Shaw had programmed Logic Theorist to prove theorems in symbolic logic. Not only did this research lead to an influential theory of problem-solving ( Newell et al., 1958 ) but it also shed important light on human creativity, as Logic Theorist was able to prove some theorems in a more elegant way than Russell and Whitehead, two of the leading mathematicians of the twentieth century ( Gobet and Lane, 2015 ). There are numerous examples of AI creativity in science today ( Sozou et al., 2017 ). For example, at Aberystwyth University, a “robot scientist” specialized in functional genomics not only produced hypotheses independently but also designed experiments for testing these hypotheses, physically performed them and then interpreted the results ( King et al., 2004 ).

In the arts, British abstract painter Harold Cohen all but abandoned a successful career as an artist to understand his own creative processes. To do so, he wrote a computer program, AARON, able to make drawings and later color paintings autonomously ( McCorduck, 1990 ). More recently, several programs have displayed high levels of creativity in the arts. For example, a deep-learning algorithm produced a Rembrandt-like portrait ( Flores and Korsten, 2016 ) and the program Aiva, also using deep learning, composes classical music ( Aiva Technologies, 2018 ). An album of Aiva’s music has already been released, and its pieces are used in films and by advertising agencies. In chess, the program CHESTHETICA automatically composes chess problems and puzzles that are considered by humans as esthetically pleasing ( Iqbal et al., 2016 ).

However, AI has had little impact on creativity research in psychology (for an exception, see Olteţeanu and Falomir, 2015 , 2016 , on modelling the Remote Associates Test and the Alternative Uses Test). It receives at most passing mention in textbooks and handbooks of creativity (e.g., Kaufman and Sternberg, 2006 ; Runco, 2014 ), and mainstream research simply ignores it. In our view, this omission is a serious mistake.

The Specter of Bounded Rationality

AI has uncovered clear limits in human creativity, as is well illustrated by Go and chess, two board games requiring creativity when played competitively. After losing 3–0 against the computer program AlphaGo Master in 2017, Chinese Go grandmaster Ke Jie, the world No. 1, declared: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong… I would go as far as to say not a single human has touched the edge of the truth of Go” ( Kahn, 2017 ). Astonishingly, this version of AlphaGo, which won all its games not only against Ke Jie but also against other leading Go grandmasters, was beaten 89–11 a few months later by AlphaGo Zero, a new version of the program that learns from scratch by playing against itself, thus creating all its knowledge except for the rules of the game ( Silver et al., 2016 , 2017 ).

Ke Jie’s remark is echoed by chess grandmasters’ comments ( Gobet, 2018 ). In the second game of Kasparov’s 1997 match against Deep Blue, he and other grandmasters were astonished by the computer’s sophisticated and creative way of first building a positional advantage and then denying Kasparov any counter-play. Kasparov’s surprise was such that he accused IBM and the programming team behind Deep Blue of cheating, a charge he maintained for nearly 20 years. More recently, in the sixth game of the 2006 match between Deep Fritz and world champion Vladimir Kramnik, the computer played a curious rook maneuver that commentators ridiculed as typical of a duffer. As the game unfolded, it became clear that this maneuver was a very creative way of provoking weaknesses on Kramnik’s kingside, which allowed Deep Fritz to unleash a fatal offensive on the other side of the board.

In general, these limits in rationality and creativity are in line with Simon’s theory of bounded rationality ( Simon, 1956 , 1997 ; Gobet and Lane, 2012 ; Gobet, 2016a ), which proposed that limitations in knowledge and computational capacity drastically constrain a decision maker’s ability to make rational choices. These limits are also fully predictable from what we know from research in cognitive psychology. For example, Bilalić et al. (2008) showed that even experts can be blinded by their knowledge, with the consequence that they prefer standard answers to novel and creative answers, even when the latter are objectively better. Thus, when a common solution comes first to mind, it is very hard to find another one (a phenomenon known as the Einstellung effect). In Bilalić et al.’s chess experiment, the effect was powerful: compared to a control group, the strength of the Einstellung group decreased by about one standard deviation.

The power of long-term memory schemas and preconceptions is a common theme in the history of science and art and has often thwarted creativity. For example, in the early 1980s, the unquestioned wisdom was that stomach ulcers were caused by excess acid, spicy food, and stress. The genius of Marshall and Warren (1984) in their Nobel-winning discovery was to jettison all these assumptions before hypothesizing that a bacterium (Helicobacter pylori) was the main culprit. Finding ways to overcome such mind-sets is an important task for fostering human creativity ( Gobet et al., 2014 ), as they are a normal feature of human cognition. In some instances, in order to be creative and explore new conceptual spaces, it is necessary to break these mind-sets, either by inhibiting specific concepts or groups of concepts, or by eschewing concepts altogether. AI systems can use a large variety of methods – some similar to those used by humans, some entirely dissimilar. They are thus less likely to be subject to such mind-sets and could provide humans with useful alternatives for developing creative products.

Artificial Intelligence Offers Novel Methods for Studying Creativity

When considering the literature on creativity in psychology, it is hard to escape the feeling that something is amiss in this field of research. A considerable amount of research has studied simple tasks that are remote from real creativity in the arts and science – for example, the alternative uses task, word generation tasks, and insight problems (see e.g., Runco, 2014 ) – but it is at the very least debatable whether these tasks tell us much about real creativity. Supporting this critique of the lack of ecological validity of many tasks used in the field, numerous experiments have found that these tasks correlate more with general intelligence (g) and verbal intelligence than with real-world creativity ( Wallach, 1970 ; Silvia, 2015 ). In addition, in their review of the literature, Zeng et al. (2011) conclude that divergent-thinking tests suffer from six major weaknesses, including poor predictive, ecological, and discriminant validity. (For a more positive evaluation, see Plucker and Makel, 2010 .) While some researchers have developed tasks that map more directly onto the kinds of tasks carried out in real-world creative work – see in particular the research on scientific discovery ( Klahr and Dunbar, 1988 ; Dunbar, 1993 ) – this approach remains underrepresented in creativity research.

A similar concern can be voiced with respect to experimentation and theory development. Although a fair number of avenues have been explored – including generation and selection (e.g., Simonton, 1999 ), heuristic search (e.g., Newell et al., 1962 ), problem finding (e.g., Getzels and Csikszentmihalyi, 1976 ), systems theories (e.g., Gruber, 1981 ), explanations based on intelligence (e.g., Eysenck, 1995 ), and psychopathological explanations (e.g., Post, 1994 ) – entire experimental and theoretical spaces have been fully ignored or, at best, barely scratched. Clearly, this is due to the limits imposed by human bounded rationality, to which one should add the constraints imposed by limited time resources.

AI can help with both empirical and theoretical research. Empirically, it can simulate complex worlds that challenge human creativity; theoretically, it can help develop new theories by inhibiting some concepts (see above), making unexpected connections between known mechanisms or proposing wholly new explanations. Here we focus on scientific discovery, but similar conclusions can be reached for creativity in the arts.

A New Way of Designing Experiments

AI can be used as a new way to perform experiments on creativity. The central idea is to exploit current technology to design complex environments that can be studied with a creative application of the scientific method. These experiments thus go far beyond the simple tasks typically used in creativity research. Rather than studying creativity by asking people to generate words related to three stimulus words, as in the Remote Associates Test ( Mednick, 1962 ), one studies it by asking participants to find the laws of a simulated world. This is of course what Dunbar, Klahr, and others did in earlier experiments ( Klahr and Dunbar, 1988 ; Dunbar, 1993 ). The key contribution here is to propose using much more complex environments, including environments where the presence of intelligent agents approximates the complexity of studying phenomena affected by humans, as is the case in psychology and sociology. Thus, whereas standard programming techniques are sufficient for simulating physical worlds with no intelligent agents, AI techniques make it possible to simulate much more complex worlds, which incorporate not only physical and biological laws but also psychosocial laws. In both cases, the participants’ task is to reverse-engineer at least some of the laws of these domains – that is, to make scientific discoveries about them. For example, participants must devise experiments for understanding the learning mechanisms of agents inhabiting a specific world. The mechanisms and laws underpinning these worlds can be similar to those currently postulated in science, or wholly different, with new laws of physics, biology, or psychology. In the latter case, the situation is akin to scientists exploring life on a new planet.

These environments can be used with several goals in mind. First, they can test current theories of creativity and scientific discovery. The worlds can be designed in such a way that their understanding is facilitated by the mechanisms proposed by some theories as opposed to others (e.g., heuristic search might be successful, but randomly generating concepts might not, or vice versa ). Additional questions include whether participants adapt their strategy as a function of the results they obtain and whether they develop new experimental designs where necessary. Second, these environments can be used to observe new empirical phenomena related to creativity, such as the generation of as yet unknown strategies. New phenomena are bound to occur, as the complexity of the proposed tasks is larger by several orders of magnitude than the tasks typically studied in psychology.

A third use is to identify creative people in a specific domain, for example in biology or psychology. As creativity is measured in a simulated environment close to the target domain, one is more likely to correctly identify individuals who might display creativity in that domain. If one wishes, one can correlate performance in the task, and other behavioral measures, with standard psychological measures such as IQ, motivation, and psychoticism.

A final use is to train people to be creative in a specific domain. Variables in the environment can be manipulated such that specific skills are taught, for example the efficient use of heuristics or standard research methods in science. The difficulty of finding laws can be manipulated as well: from a clear linear relation between two variables to non-linear relations between several variables with several sources of noise. The reader will have noticed that such environments are not dissimilar from some video games, and this game-like feature can be used to foster enjoyment and motivation, and thus learning.
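As a concrete illustration, such an environment can be sketched in a few lines of code. The world below, its hidden law, and its noise parameter are purely hypothetical examples chosen for this sketch, not an implementation from the literature: a participant “runs experiments” by choosing inputs and observing noisy outputs, and must reverse-engineer the underlying law.

```python
import random

class SimulatedWorld:
    """A toy environment hiding a non-linear law with noise.
    The participant's task is to reverse-engineer the law by
    choosing inputs and observing outputs (hypothetical sketch)."""

    def __init__(self, noise=0.5, seed=42):
        self.rng = random.Random(seed)
        self.noise = noise

    def run_experiment(self, x):
        # Hidden law (unknown to the participant): y = 3x^2 - 2x,
        # observed through Gaussian measurement noise.
        return 3 * x ** 2 - 2 * x + self.rng.gauss(0, self.noise)

# A participant probes the world with inputs of their choosing.
world = SimulatedWorld()
observations = [(x, world.run_experiment(x)) for x in range(5)]
```

Difficulty can then be manipulated exactly as described above: raising the noise level, replacing the law with a linear one, or adding further variables and interacting agents.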

Please note that we make no claim that training creativity in one domain will provide something like general creativity, as is sometimes proposed in the literature (e.g., De Bono, 1970 ). There is now very strong experimental evidence that skills acquired in a domain do not generalize to new domains sharing few commonalities with the original one ( Gobet, 2016b ; Sala and Gobet, 2017a ), and this conclusion almost certainly also applies to creativity. One possible reason for this lack of far transfer is that expertise relies on the ability to recognize patterns that are specific to a domain ( Sala and Gobet, 2017b ). One may speculate that being creative relies, at least in part, on recognizing rare domain-specific patterns in a problem situation. For example, to return to the discovery that stomach ulcers are caused by bacteria, Warren recognized the presence of bacteria in gastric specimens he studied with a microscope, although this was not expected, as the stomach was thought to be a sterile environment inhospitable to bacteria ( Thagard, 1998 ). However, we do recognize that this is a hypothesis that should be tested, and it could turn out that creativity is in fact a general ability. This is an empirical question that can only be settled with new experiments, and the methods proposed in this paper may contribute to its answer.

Automatic Generation of Theories

As noted above, human bounded rationality means that humans explore only a very small number of subspaces within the space of all possible theories, and even these subspaces are explored only sparsely. Mind-sets and other biases mean that poor hypotheses are often maintained while more promising ones are ignored. AI can help break these shackles.

The subfield of AI known as computational scientific discovery has been active for decades, spearheaded by Herbert Simon’s seminal work ( Newell et al., 1962 ; Bradshaw et al., 1983 ). The aim is precisely to develop algorithms that can produce creative behavior in science, either by replicating famous scientific discoveries or by making original contributions (for a review, see Sozou et al., 2017 ). Due to space constraints, we limit ourselves to describing only one approach – Automatic Generation of Theories (AGT) ( Lane et al., 2014 ) – which is particularly relevant to our discussion because it excels at avoiding local minima, unlike human cognition, which is notably prone to mind-sets, Einstellung effects, and other cognitive biases. In a nutshell, the central ideas of AGT are (1) to consider theories as computer programs; (2) to use a probabilistic algorithm (genetic programming) to build those programs; (3) to simulate the protocols of the original experiments; (4) to compare the predictions of the theories with empirical data in order to compute the quality (fitness) of the theories; and (5) to use fitness to evolve better theories, using mechanisms of selection, mutation, and crossover. Simulations have shown that the methodology is able to produce interesting theories from simple experiments. With relentless progress in technology, it is likely that this and other approaches to artificial scientific discovery will provide theoretical explanations for more complex human behaviors, including creativity itself.
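The evolutionary loop behind such an approach can be sketched as follows. This is a deliberately minimal illustration, not the AGT system itself: theories are toy expression trees rather than full process models, the “empirical data” are generated from a known law, and the genetic operators are simplified (naive subtree crossover, mutation by wholesale regeneration). The numbered comments map onto the five ideas listed above.

```python
import random

# Toy "empirical data" from a simulated experiment whose underlying
# law (y = 2x + 1) the evolved theories must rediscover.
DATA = [(x, 2 * x + 1) for x in range(10)]

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def random_theory(rng, depth=2):
    """(1) A theory is a program: here, a small expression tree
    over {x, small integers, add, mul}."""
    if depth == 0 or rng.random() < 0.3:
        return "x" if rng.random() < 0.5 else rng.randint(0, 5)
    op = rng.choice(sorted(OPS))
    return (op, random_theory(rng, depth - 1), random_theory(rng, depth - 1))

def predict(theory, x):
    """(3) 'Run' the theory on one experimental input."""
    if theory == "x":
        return x
    if isinstance(theory, int):
        return theory
    op, left, right = theory
    return OPS[op](predict(left, x), predict(right, x))

def fitness(theory):
    """(4) Quality = negative squared error against the data."""
    return -sum((predict(theory, x) - y) ** 2 for x, y in DATA)

def mutate(theory, rng):
    """(5a) Mutation: occasionally replace a theory wholesale."""
    return random_theory(rng) if rng.random() < 0.5 else theory

def crossover(a, b, rng):
    """(5b) Naive crossover: graft b's right subtree onto a."""
    if isinstance(a, tuple) and isinstance(b, tuple):
        return (a[0], a[1], b[2])
    return a

def evolve(generations=50, pop_size=100, seed=0):
    """(2, 5c) Build theories probabilistically, then select the
    fitter half each generation and breed the rest from it."""
    rng = random.Random(seed)
    pop = [random_theory(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [
            mutate(crossover(rng.choice(survivors),
                             rng.choice(survivors), rng), rng)
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return max(pop, key=fitness)

best_theory = evolve()
```

A perfect theory here is one with fitness 0, such as the tree `("add", ("mul", 2, "x"), 1)`, i.e., y = 2x + 1; the full AGT methodology applies the same loop to process models evaluated against real experimental protocols.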

The two uses of AI proposed in this paper for studying creativity in psychology are not meant to replace current methods, but to add to the arsenal of theoretical concepts and experimental techniques available to researchers. Nor are they proposed as magic bullets that will answer all questions related to creativity. Our point is that these uses of AI present potential benefits that have been overlooked by psychologists studying creativity.

As with any new approach, these uses raise conceptual and methodological challenges. Regarding the proposed method for collecting data, challenges include how participants’ results will be scored and compared, and how they will be used to test theories. A related challenge concerns the kind of theory suitable to account for these data; given their complexity and richness, it is likely that computational models will be necessary – possibly models generated by the second use of AI we proposed.

Similarly, using AI for generating theories raises interesting practical and theoretical questions. Will the generated theories be understandable to humans, or will they be black boxes providing correct outputs (predictions) given a description of the task at hand and other kinds of information, such as the age of the participants? Will their structure satisfy scientific canons of parsimony? How will they link epistemologically to other theories in psychology, for example theories of memory and decision-making? Will they be useful for practical applications such as training experts to be creative in their specialty? In addition, there is of course the question of what kind of AI is best suited for generating theories. We have provided the example of genetic programming, but many other techniques can be advanced as candidates, including adaptive production systems ( Klahr et al., 1987 ) and deep learning ( LeCun et al., 2015 ).

Problems and Prospects

Recent developments in AI signal a new relationship between humans and machines. They pose interesting, albeit perhaps threatening, questions about our human nature and, specifically, the meaning of creativity. These include philosophical and ethical questions. Can a product be creative if it is conceived by a computer? If so, who owns the research? Should computer programs be listed as co-authors of scientific papers? How will the synergy between human and computer creativity evolve? Should some types of creativity – e.g., generating fake news for political aims – be curtailed or even banned?

These developments also raise significant questions about human rationality, as discussed above. In doing so, they highlight the magnificent achievements of some human creators, such as Wolfgang Amadeus Mozart or Pablo Picasso. In addition, they have substantial implications for creativity in science and the arts. Entirely new conceptual spaces might be explored, with computer programs either working independently or co-designing creative products with humans. In science – the focus of this perspective article – this might lead to the development of novel research strategies, methodologies, types of experiments, theories, and theoretical frameworks. Of particular interest is the possibility of mixing concepts and mechanisms between different subfields (e.g., between memory research and decision-making research), between different fields (e.g., psychology and chemistry), and even between science and the arts. As discussed above, there are also some new exciting opportunities for training. It is only with the aid of artificial creativity that we will break our mind-sets and reach a new understanding of human creativity.

Author Contributions

Both authors conceptualized the paper. FG wrote the first draft of the paper and GS contributed to drafting its final version.

GS is a JSPS International Research Fellow (grant number: 17F17313).

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

1. It is notoriously difficult to define “creativity,” and a large number of definitions exist, with little agreement among researchers (see e.g., Hennessey and Amabile, 2010 ). In this article, we focus on what Boden (1990) calls “historical creativity” (concerning products that are considered novel by society at large) rather than “psychological creativity” (concerning products that are novel only for the agent producing them). Thus, if Joe Bloggs realizes for the first time in his life that a brick can be used as a pen holder, this is psychological but not historical creativity. If he is the first ever to claim that a brick can be used as an abstract rendition of Beethoven’s 5th Symphony, this is both psychological and historical creativity according to Boden’s definition.

2. While the aim of this Perspective Article is not to provide a review of the extensive literature on creativity in psychology and neuroscience, a few additional pointers might be helpful to the reader: Cristofori et al. (2018) ; Kaufman and Sternberg (2019) ; and Simonton (2014) .

Aiva Technologies. (2018). Available at: http://www.aiva.ai (Accessed September 08, 2018).

Bilalić, M., McLeod, P., and Gobet, F. (2008). Inflexibility of experts: reality or myth? Quantifying the Einstellung effect in chess masters. Cogn. Psychol. 56, 73–102. doi: 10.1016/j.cogpsych.2007.02.001

Boden, M. (1990). The creative mind . (New York: BasicBooks).

Bradshaw, G., Langley, P. W., and Simon, H. A. (1983). Studying scientific discovery by computer simulation. Science 222, 971–975. doi: 10.1126/science.222.4627.971

Colton, S., and Wiggins, G. A. (2012). “Computational creativity: the final frontier?” in Proceedings of the 20th European conference on artificial intelligence . eds. L. De Raedt, C. Bessiere, D. Dubois, P. Doherty, P. Frasconi, F. Heintz, and P. Lucas (Montpellier, France: IOS Press), 21–26.

Cristofori, I., Salvi, C., Beeman, M., and Grafman, J. (2018). The effects of expected reward on creative problem solving. Cogn. Affect. Behav. Neurosci. 18, 925–931. doi: 10.3758/s13415-018-0613-5

De Bono, E. (1970). Lateral thinking: Creativity step by step . (New York: Harper & Row).

Dunbar, K. (1993). Concept discovery in a scientific domain. Cogn. Sci. 17, 397–434. doi: 10.1207/s15516709cog1703_3

Eysenck, H. J. (1995). Genius: The natural history of creativity . (New York: Cambridge University Press).

Ferrucci, D. A. (2012). Introduction to “This is Watson”. IBM J. Res. Dev. 56, 1:1–1:15. doi: 10.1147/JRD.2012.2184356

Flores, E., and Korsten, B. (2016). The Next Rembrandt. Available at: http://www.nextrembrandt.com/ (Accessed September 08, 2018).

Getzels, J. W., and Csikszentmihalyi, M. (1976). The Creative vision: A longitudinal study of problem finding in art . (New York: John Wiley & Sons).

Gobet, F. (2016a). “From bounded rationality to expertise” in Minds, models and milieux: Commemorating the centenary of Herbert Simon’s birth . eds. R. Frantz and L. Marsh (New York: Palgrave Macmillan), 151–166.

Gobet, F. (2016b). Understanding expertise: A multidisciplinary approach . (London: Palgrave).

Gobet, F. (2018). The psychology of chess . (London: Routledge).

Gobet, F., and Lane, P. C. R. (2012). “Bounded rationality and learning” in Encyclopedia of the sciences of learning . ed. N. M. Seel (New York, NY: Springer).

Gobet, F., and Lane, P. C. R. (2015). “Human problem solving – Beyond Newell et al.’s (1958): elements of a theory of human problem solving” in Cognitive psychology: Revisiting the classic studies . eds. M. W. Eysenck and D. Groome (Thousand Oaks, CA: Sage).

Gobet, F., Snyder, A., Bossomaier, T., and Harre, M. (2014). Designing a “better” brain: insights from experts and savants. Front. Psychol. 5:470. doi: 10.3389/fpsyg.2014.00470

Gruber, H. E. (1981). Darwin on man: A psychological study of scientific creativity . Rev. edn. (Chicago: University of Chicago Press).

Hennessey, B. A., and Amabile, T. M. (2010). Creativity. Annu. Rev. Psychol. 61, 569–598. doi: 10.1146/annurev.psych.093008.100416

Iqbal, A., Guid, M., Colton, S., Krivec, J., Azman, S., and Haghighi, B. (2016). The digital synaptic neural substrate: A new approach to computational creativity . (Switzerland: Springer International Publishing).

Kahn, J. (2017). Robots are going to take our jobs and make us look like fools while doing it. Available at: https://medium.com/bloomberg/robots-are-going-to-take-our-jobs-and-make-us-look-like-fools-while-doing-it-ec25b05a5910 (Accessed May 11, 2018).

J. C. Kaufman and R. J. Sternberg (eds.) (2006). The international handbook of creativity . (Cambridge, UK: Cambridge University Press).

J. C. Kaufman and R. J. Sternberg (eds.) (2019). The Cambridge handbook of creativity . (Cambridge: Cambridge University Press).

King, R. D., Whelan, K. E., Jones, F. M., Reiser, P. G. K., Bryant, C. H., Muggleton, S. H., et al. (2004). Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427, 247–252. doi: 10.1038/nature02236

Klahr, D., and Dunbar, K. (1988). Dual space search during scientific reasoning. Cogn. Sci. 12, 1–48. doi: 10.1207/s15516709cog1201_1

Klahr, D., Langley, P., and Neches, R. (1987). Production system models of learning and development . (Cambridge, MA: MIT Press).

Lane, P., Sozou, P., Addis, M., and Gobet, F. (2014). “Evolving process-based models from psychological data using genetic programming” in Proceedings of the 50th anniversary convention of the AISB: Computational scientific discovery symposium . eds. M. Addis, F. Gobet, P. Lane, and P. Sozou (London: AISB).

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521, 436–444. doi: 10.1038/nature14539

Marshall, B. J., and Warren, J. R. (1984). Unidentified curved bacilli in the stomach of patients with gastritis and peptic ulceration. Lancet 823, 1311–1315.

McCorduck, P. (1990). AARON’S code: Meta-art, artificial intelligence, and the work of Harold Cohen . (New York: W. H. Freeman & Co).

Mednick, S. A. (1962). The associative basis of the creative process. Psychol. Rev. 69, 220–232. doi: 10.1037/h0048850

Meheus, J., and Nickles, T. (2009). Models of discovery and creativity . (New York: Springer).

Newell, A., Shaw, J. C., and Simon, H. A. (1958). Elements of a theory of human problem solving. Psychol. Rev. 65, 151–166. doi: 10.1037/h0048495

Newell, A., Shaw, J. C., and Simon, H. A. (1962). “The process of creative thinking” in Contemporary approaches to creative thinking . Vol. 3, eds. H. E. Gruber, G. Terrell, and M. Wertheimer (New York: Atherton Press), 63–119.

Olteţeanu, A.-M., and Falomir, Z. (2015). comRAT-C: a computational compound remote associates test solver based on language data and its comparison to human performance. Pattern Recogn. Lett. 67, 81–90. doi: 10.1016/j.patrec.2015.05.015

Olteţeanu, A.-M., and Falomir, Z. (2016). Object replacement and object composition in a creative cognitive system. Towards a computational solver of the alternative uses test. Cogn. Syst. Res. 39, 15–32. doi: 10.1016/j.cogsys.2015.12.011

Plucker, J. A., and Makel, M. C. (2010). “Assessment of creativity” in The Cambridge handbook of creativity . eds. J. C. Kaufman and R. J. Sternberg (Cambridge: Cambridge University Press), 48–73.

Post, F. (1994). Creativity and psychopathology: a study of 291 world-famous men. Br. J. Psychiatry 165, 22–34. doi: 10.1192/bjp.165.1.22

Runco, M. A. (2014). Creativity theories and themes: Research, development, and practice . (New York: Academic Press).

Sala, G., and Gobet, F. (2017a). Does far transfer exist? Negative evidence from chess, music, and working memory training. Curr. Dir. Psychol. Sci. 26, 515–520. doi: 10.1177/0963721417712760

Sala, G., and Gobet, F. (2017b). Experts’ memory superiority for domain-specific random material generalizes across fields of expertise: a meta-analysis. Mem. Cogn. 45, 183–193. doi: 10.3758/s13421-016-0663-2

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489. doi: 10.1038/nature16961

Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of Go without human knowledge. Nature 550, 354–359. doi: 10.1038/nature24270

Silvia, P. (2015). Intelligence and creativity are pretty similar after all. Educ. Psychol. Rev. 27, 599–606. doi: 10.1007/s10648-015-9299-1

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychol. Rev. 63, 129–138. doi: 10.1037/h0042769

Simon, H. A. (1997). Models of bounded rationality . Vol. 3, (Cambridge, MA: The MIT Press).

Simonton, D. K. (1999). Origins of genius . (Oxford: Oxford University Press).

Simonton, D. K. (ed.) (2014). The Wiley-Blackwell handbook of genius . (Oxford, UK: Wiley-Blackwell).

Smith, A., and Anderson, J. (2014). Predictions for the state of AI and robotics in 2025 . (Washington, DC: Pew Research Center).

Sozou, P. D., Lane, P. C., Addis, M., and Gobet, F. (2017). “Computational scientific discovery” in Springer handbook of model-based science . eds. L. Magnani and T. Bertolotti (New York: Springer), 719–734.

Thagard, P. (1998). Ulcers and bacteria I: discovery and acceptance. Stud. Hist. Philos. Sci. C 29, 107–136.

Wallach, M. A. (1970). “Creativity” in Carmichael’s manual of child psychology . ed. P. H. Mussen (New York: Wiley), 1273–1365.

Weisberg, R. W. (2006). Creativity . (New York: Wiley).

Zeng, L., Proctor, R. W., and Salvendy, G. (2011). Can traditional divergent thinking tests be trusted in measuring and predicting real-world creativity? Creat. Res. J. 23, 24–37. doi: 10.1080/10400419.2011.545713

Keywords: artificial intelligence, bounded rationality, creativity, evolutionary computation, intelligence, simulation, scientific discovery, theory

Citation: Gobet F and Sala G (2019) How Artificial Intelligence Can Help Us Understand Human Creativity. Front. Psychol . 10:1401. doi: 10.3389/fpsyg.2019.01401

Received: 15 May 2018; Accepted: 29 May 2019; Published: 19 June 2019.

Copyright © 2019 Gobet and Sala. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Fernand Gobet, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

How Generative AI Can Augment Human Creativity

  • Tojin T. Eapen,
  • Daniel J. Finkenstadt,
  • Lokesh Venkataswamy

There is tremendous apprehension about the potential of generative AI—technologies that can create new content such as text, images, and video—to replace people in many jobs. But one of the biggest opportunities generative AI offers is to augment human creativity and overcome the challenges of democratizing innovation.

In the past two decades, companies have used crowdsourcing and idea competitions to involve outsiders in the innovation process. But many businesses have struggled to capitalize on these contributions. They’ve lacked an efficient way to evaluate the ideas, for instance, or to synthesize different ideas.

Generative AI can help overcome those challenges, the authors say. It can supplement the creativity of employees and customers and help them produce and identify novel ideas, as well as improve the quality of raw ideas. Specifically, companies can use generative AI to promote divergent thinking, challenge expertise bias, assist in idea evaluation, support idea refinement, and facilitate collaboration among users.

Use it to promote divergent thinking.

Idea in Brief

The Problem

In the past two decades, companies’ efforts to involve outsiders in the process of coming up with new offerings have taken off. Crowdsourcing and idea competitions are two prime examples. But firms still struggle to make use of the plethora of ideas that are generated.

The Root Causes

A lack of an efficient way to evaluate the ideas, domain experts’ struggles in accepting novel ideas, the inability of contributors to provide details needed to make their ideas feasible, and the challenge of synthesizing different ideas are all factors.

The Solution

Generative AI can help overcome these challenges. It can augment the creativity of employees and customers and help them generate and identify novel ideas as well as improve the quality of raw ideas.

  • Tojin T. Eapen is a principal consultant at Innomantra and a senior fellow at the Conference Board.
  • Daniel J. Finkenstadt is a principal at Wolf Stake Consulting, a military officer, and a former assistant professor at the Naval Postgraduate School. He is the coauthor of the book Supply Chain Immunity (Springer 2022).
  • Josh Folk is a cofounder and the president of enterprise solutions at IdeaScale, a cloud-based innovation-software platform.
  • Lokesh Venkataswamy is the CEO and managing director of Innomantra, an innovation and intellectual-property consulting firm in Bengaluru, India.

  • Perspective
  • Published: 09 March 2020

Human ownership of artificial creativity

  • Jason K. Eshraghian, ORCID: orcid.org/0000-0002-5832-4054

Nature Machine Intelligence volume 2, pages 157–160 (2020)

Advances in generative algorithms have enhanced the quality and accessibility of artificial intelligence (AI) as a tool in building synthetic datasets. By generating photorealistic images and videos, these networks can pose a major technological disruption to a broad range of industries from medical imaging to virtual reality. However, as artwork developed by generative algorithms and cognitive robotics enters the arena, the notion of human-driven creativity has been thoroughly tested. When creativity is automated by the programmer, in a style determined by the trainer, using features from information available in public and private datasets, who is the proprietary owner of the rights in AI-generated artworks and designs? This Perspective seeks to provide an answer by systematically exploring the key issues in copyright law that arise at each phase of artificial creativity, from programming to deployment. Ultimately, four guiding actions are established for artists, programmers and end users that utilize AI as a tool such that they may be appropriately awarded the necessary proprietary rights.



Acknowledgements

Thanks to X. Feng, G. Zarrella and G. Cohen for their invaluable discussions during the development of this manuscript, and to the National Science Foundation, which supported the organization of the Telluride Neuromorphic Cognition Engineering Workshop that inspired this Perspective.

Author information

Authors and affiliations.

Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA

Jason K. Eshraghian

Corresponding author

Correspondence to Jason K. Eshraghian .

Ethics declarations

Competing interests.

The author declares no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article.

Eshraghian, J.K. Human ownership of artificial creativity. Nat Mach Intell 2 , 157–160 (2020). https://doi.org/10.1038/s42256-020-0161-x

Received: 18 September 2019

Accepted: 11 February 2020

Published: 09 March 2020

Issue Date: March 2020

DOI: https://doi.org/10.1038/s42256-020-0161-x

AI Creativity and the Human-AI Co-creation Model

  • Conference paper
  • First Online: 03 July 2021

  • Zhuohao Wu,
  • Danwen Ji,
  • Kaiwen Yu,
  • Xianxu Zeng,
  • Dingming Wu &
  • Mohammad Shidujaman

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 12762))

Included in the following conference series:

  • International Conference on Human-Computer Interaction

Artificial intelligence (AI) is bringing new possibilities to numerous fields. There has been much discussion of the development of AI technologies and of the challenges AI raises, such as job replacement and ethical issues. However, far less attention has been paid to how AI can be used creatively and how it can enhance human creativity. After studying over 1,600 application cases across more than 45 areas and analyzing related academic publications, we believe that focusing on collaboration with AI will benefit us far more than dwelling on competition against it. “AI Creativity” is the concept we introduce here: the ability of humans and AI to co-live and co-create by playing to each other’s strengths to achieve more. AI is a complement to human intelligence; it consolidates wisdom from all the achievements of humankind, making collaboration across time and space possible. AI empowers us throughout the entire creative process, and it makes creativity more accessible and more inclusive than ever. The Human-AI Co-Creation Model we propose explains the creative process in the era of AI, with the new possibilities AI brings to each phase. In addition, this model allows any “meaning-making” action to be enhanced by AI and delivered more efficiently. The emphasis on collaboration not only echoes the importance of teamwork but also pushes for co-creation between humans and AI. Our study of application cases shows that AI Creativity is already making a significant impact in various fields, bringing new possibilities to human society and individuals, as well as new opportunities and challenges in technology, society, and education.

Computational creativity is also known as artificial creativity, mechanical creativity, creative computing, or creative computation.

The painting by Mr. HOW with Deep Dream Generator: https://deepdreamgenerator.com/ .

The poem by Mr. HOW with Tsinghua JiuGe: http://118.190.162.99:8080/ , Microsoft JueJu: http://couplet.msra.cn/jueju/ and SouYun: https://sou-yun.cn/MAnalyzePoem.aspx .

The poem translated by Mr. HOW with Google, Apple and Microsoft Translation.

The music by Mr. HOW with LingDongYin: https://demo.lazycomposer.com/compose/v2/ .

The making of the artwork, The Mind of AI Creativity: http://qr09.cn/Ew06EW .


Acknowledgements

The authors would like to thank Qinwen Chen, Guojie Qi, Peiqi Su, Qing Sheng, Jieqiong Li, Qianqiu Qiu, Linda Li, and all the volunteers for their contributions to this paper.

Author information

Authors and affiliations.

School of Animation and Digital Arts, Communication University of China, Beijing, China

College of Design and Innovation, Tongji University, Shanghai, China

San Jose State University, San Jose, CA, 95112, USA

Department of Mathematics, University of British Columbia, Vancouver, Canada

Xianxu Zeng

College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China

Dingming Wu

Department of Information Art and Design, Academy of Arts and Design, Tsinghua University, Beijing, China

Mohammad Shidujaman

Editor information

Editors and affiliations.

The Open University of Japan, Chiba, Japan

Masaaki Kurosu

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper.

Wu, Z., Ji, D., Yu, K., Zeng, X., Wu, D., Shidujaman, M. (2021). AI Creativity and the Human-AI Co-creation Model. In: Kurosu, M. (eds) Human-Computer Interaction. Theory, Methods and Tools. HCII 2021. Lecture Notes in Computer Science(), vol 12762. Springer, Cham. https://doi.org/10.1007/978-3-030-78462-1_13

DOI: https://doi.org/10.1007/978-3-030-78462-1_13

Published: 03 July 2021

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-78461-4

Online ISBN: 978-3-030-78462-1

eBook Packages: Computer Science, Computer Science (R0)

AI and Its Impact on Creativity

  • By: Anthony
  • December 30, 2022
  • Tags: ai , being human , creativity , innovation

The Potential Impact of Artificial Intelligence (AI) on Human Creativity and Innovation

Artificial intelligence (AI) has the potential to revolutionize various fields, including the arts and creative industries. However, there are also concerns about the impact of AI on human creativity and innovation.

One perspective is that AI can augment human creativity by providing new tools and resources. For example, AI-powered music composition algorithms can generate unique melodies and harmonies that humans may not have thought of on their own. Similarly, AI-powered design software can generate new design concepts and prototypes that humans can then refine and develop further.

However, there are also concerns that AI could potentially displace human workers and diminish the need for creative thinking in certain fields. For instance, if AI algorithms are able to generate high-quality content more efficiently and at a lower cost than human workers, there may be less demand for human creative labor.

Despite these concerns, it is important to note that AI is still in its early stages and its potential impact on human creativity and innovation is not fully understood. As such, it is important to approach the integration of AI with caution and to carefully consider the potential consequences of its use.

As Douglas Rushkoff, a media theorist and author, noted: “AI is not going to replace humans, but it will most certainly replace jobs. The question is whether we will use it to free ourselves from toil, or allow it to enslave us to our own technological creations.” (Source: Forbes, “Why We Need To Be Careful How We Use Artificial Intelligence,” 2018)

Similarly, Yuval Noah Harari, a historian and author, commented: “The real question is not whether machines will one day be able to think like humans, but whether humans will be able to think like machines…To remain competitive, people will have to learn how to think and act like algorithms.” (Source: Forbes, “The Future Of Work: 3 Ways AI Will Change How We Do Business,” 2017)

On the other hand, AI researcher Fei-Fei Li has a more optimistic view, stating: “AI can be a great enabler for human creativity, providing new tools and resources for artists and designers to work with…AI can be a source of inspiration, leading to new ideas and possibilities that humans might not have considered on their own.” (Source: Forbes, “How AI Is Changing The Creative Industries,” 2018)

Overall, the impact of AI on human creativity and innovation is a complex and multifaceted issue that requires further study and consideration. While AI has the potential to augment and enhance human creativity, it is important to ensure that its integration is done in a responsible and ethical manner that takes into account the potential consequences for human workers and society as a whole.

Confession time... AI wrote this whole article for me in 30 seconds. What are your thoughts?




Article Contents

  • I. Definitions of creativity from the standpoint of copyright
  • II. Copyright concept of creativity
  • III. The activity of AI: is it creativity?
  • IV. Conclusion

Creativity and artificial intelligence: a view from the perspective of copyright


Anna Shtefan, Creativity and artificial intelligence: a view from the perspective of copyright, Journal of Intellectual Property Law & Practice , Volume 16, Issue 7, July 2021, Pages 720–728, https://doi.org/10.1093/jiplp/jpab093


One of the fields of application of artificial intelligence (AI) is creating objects that look the same as works protected by copyright. Poems, novels, drawings, music and videos generated by computers without direct human involvement become more and more perfect with the help of AI. Sometimes it is difficult to distinguish between an outcome of human creativity and AI activity. The common feature between human-created works and AI-generated objects is their form of expression, which is the essence of copyright items. However, works protected by copyright have one more feature—they are the result of creativity.

Creativity is not a legal concept but a universal one. It has repeatedly been the subject of study in philosophy, psychology, sociology, pedagogy and cultural studies, and these disciplines have developed many theories of creativity. Creativity has not, however, been analysed in the context of copyright within legal studies: although many publications mention that a work results from creativity, the content of this activity is, as a rule, not disclosed. The essence of creativity from the point of view of copyright is almost unexplored; none of the existing theories examines creativity in connection with copyright. This can be regarded as a gap in scholarly research. Creative works are the focus of copyright; therefore, understanding creativity is vital for copyright.


RTF | Rethinking The Future

Will Artificial Intelligence end Human Creativity?


Artificial intelligence is the simulation of human intelligence by computer systems. Programming for artificial intelligence focuses on three cognitive skills: reasoning, learning, and self-correction. Artificial intelligence and its effects on humans, society, and work have developed rapidly, with numerous applications and tools arising from the benefits of the new technology.


The capability of Artificial Intelligence

In a recent incident, a painting worth 432,000 dollars was sold at Christie's; it was, however, not the product of a human painter but of an algorithm. This raises an interesting question: how will artificial intelligence affect creativity and creative content, and how will it change perspectives in the near future?


The development of algorithms affects not only visual art but also music and words. In 2019, OpenAI released work samples showcasing computer-generated texts about a topic of the user's choice. The generated sentences don't sound perfect yet, but they have promising implications for the future.

Douglas Eck, a principal scientist at Google working in the areas of music, art, and machine learning, states that in less than five years the technology will become so sophisticated that students will no longer need to draft an essay on their own: given just a couple of headlines and the right data, artificial intelligence will draft the whole piece.


The Google Magenta research project pushes the limits of artificial intelligence in the field of music, with a focus on the arts. Its Performance RNN algorithm is trained on classical music and is capable of writing its own piano pieces.


A company named StoryFile, founded by Heather Smith, worked with the USC Shoah Foundation to use artificial intelligence to preserve history. They recreated the experience of a one-on-one conversation with individuals who lived through the Holocaust via a hologram, compiling footage filmed over five days, 25 hours in total, to collect answers to over 2,000 questions. Language processing made it possible to create an interactive, immersive hologram experience.

There are many more cases in which artificial intelligence has played a role in marketing campaigns.


In 2017, Nutella and its agency, Ogilvy & Mather Italia, used an algorithm to create about seven million unique jar designs, which were sold throughout Italy. Initially, the idea looked amazing and fun, as the showcased jars were appealing; but those were the ones chosen to build the case, and a few not-so-successful jars were also found on supermarket shelves when the campaign was adapted for Germany.


Another interesting case is the artificial-intelligence-scripted commercial for Lexus. The creative agency and its technical partners worked with the IBM Watson team to analyse fifteen years of footage, audio, and text from award-winning car and luxury-brand campaigns. Artificial intelligence identified the key elements labelled emotional intelligence and entertainment, which shaped the flow of the script and the outline on which the agency built its storyline.

Artificial Intelligence and Human Creativity

The examples above already highlight the biggest downside of artificial intelligence: it can never truly choose to do anything without a human command.

Heather Smith states that what you put in is what you get out. In other words, artificial intelligence relies heavily on the data provided by humans and can only create new things based on that data.

In creative work, artificial intelligence follows the rules of sampling, with an uncertain outcome; to ensure excellent results, its output always needs to be curated by humans. The Lexus commercial, likewise, contains no genuine surprises.


The role of artificial intelligence in our daily lives will certainly increase in the future, but that is not something to be afraid of. It can be used as a new tool that processes large amounts of data, provides information, and takes care of boring tasks; like the invention of the electric guitar or the camera, it is something to be harnessed. It will change the possibilities of creativity and open up new ones, but in the end it won't work without human assistance and could never create something truly new.

As the title of this article suggests, the pressing concern is that artificial intelligence will take over human creativity; yet, once we understand how artificial intelligence works, that possibility seems extremely remote. Human minds have been the creators of most things brought into existence after nature, and the day a machine-based system takes over that creative role still seems far away.



Disha is an architecture graduate from Nagpur University, 2021. Being an avid traveler, she has always tried to connect the city’s art & culture with architecture. She is a keen learner & an extremely creative individual who always seeks opportunities to enhance knowledge & experience in the field of architecture.




More From Forbes

The Intersection Of AI And Human Creativity: Can Machines Really Be Creative?


The ability to be creative has always been a big part of what separates human beings from machines. But today, a new generation of “generative” artificial intelligence (AI) applications is casting doubt on how wide that divide really is!

Tools like ChatGPT and Dall-E give the appearance of being able to carry out creative tasks – such as writing a poem or painting a picture – in a way that’s often indistinguishable from what we can do ourselves.

Does this mean that computers are truly being creative? Or are they merely giving the impression of creativity while, in reality, following a set of pre-programmed or probabilistic rules provided by us?

What is Creativity?

In order to tackle this seemingly complicated and philosophical quandary, we need to start by defining what creativity actually is.

The Oxford dictionary gives a simple definition: “The use of imagination or original ideas to create something.”


I think “imagination” is a good starting point for this investigation – after all, it’s the faculty that we associate with our ability to have new ideas – and, therefore, the genesis of creativity.

The workings of the human brain – of which imagination is one element – are largely still a mystery. But we can consider the importance of “imagination” to mean that some element of thought has to go into the process of making something in order to consider that we are being creative.

It’s also important that whatever we create has some meaning, significance, or value – if we just randomly put words together into a jumbled word-soup or scribble ink across a page without even thinking about its aesthetic value, let alone the message we are trying to communicate, we don’t usually think that we’re being creative – we’re just making a mess!

There also has to be an element of originality - simply copying someone else's painting or poem can't really be called creativity unless we are making our own interpretation or adding our own unique touch to it.

Human ideas and imagination often come from making connections – we see, hear, feel, or learn something, and this causes us to form an idea or opinion. When we express this by creating something that represents that connection – such as writing a poem about something sad in order to communicate the sadness of an event or situation to others – we are being creative in a very human way.

Generative AI

Generative AI involves machine learning (ML) algorithms that can learn a set of rules by studying a large amount of existing data (known as “training data”) and using the rules it learns to create something new based on an input (known as a “prompt”).

For example, by being trained on 1,000 descriptions of "the sun," it might establish rules that say there is a high probability that “the sun” is hot, massive, yellow, and roughly 100 million miles away.

Therefore, when it’s asked to create a piece of text describing the sun, it has all the information it needs to do it.

The same principle would apply if a graphical generative AI algorithm were asked to draw a picture of the sun or if a sound-based algorithm was asked to compose a piece of music inspired by the sun.

In this very simplified example, the hypothetical AI algorithm is using just four parameters – heat, size, color, and distance from us – to create content about the sun.
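That hypothetical four-parameter model can be sketched in a few lines of Python. Everything below is invented for illustration, including the probability values and the attribute list; no real training data is involved:

```python
import random

# Hypothetical "learned" parameters: each maps an attribute of "the sun"
# to the probability that training descriptions mention it that way.
learned_params = {
    "hot": 0.98,                            # nearly all descriptions say hot
    "massive": 0.90,
    "yellow": 0.85,
    "about 100 million miles away": 0.60,   # distance mentioned less often
}

def describe_sun(rng):
    """Generate a description by sampling each learned attribute."""
    chosen = [attr for attr, p in learned_params.items() if rng.random() < p]
    if not chosen:
        return "The sun is a star."
    return "The sun is " + ", ".join(chosen) + "."

print(describe_sun(random.Random(42)))
# → The sun is hot, massive, yellow, about 100 million miles away.
```

A real generative model learns billions of such weights and samples far more richly, but the principle is the same: parameters learned from data drive what gets generated.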

One of the most advanced generative AI models available today – OpenAI’s GPT-4 – is believed to have around one trillion parameters. The precise details of the training dataset have not been made public, but we can assume it knows far more about the sun than the hypothetical model used for our example.

This means that the content it can generate can be far more detailed, sophisticated, in-depth, and, from a certain point of view, creative.

Let’s test this out – this is GPT-4’s (via ChatGPT Plus) response to my prompt “write a haiku about the Sun”:

“Golden Orb Ascends

Warmth embraces Earth below

Life awakes with light."

Now, a haiku is a very limiting form - which, of course, is why we consider the ability to write them well to be so "creative."

But even so, the algorithm has been able to pack in an impressive amount of knowledge, ideas, and concepts into the poem. These include the color and shape of the sun, the fact it emits heat, shines down on the planet Earth, and, by doing so, enables life.

I am no poet, but I am a human – and I would have a great deal of difficulty expressing all of those distinct concepts in the three-line, five-seven-five syllable structure of a haiku. Does this mean the algorithm is more creative than me? From a purely objective point of view, it seems difficult to argue that it isn’t!
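As a quick sanity check on that 5-7-5 structure, a rough syllable counter can score the generated haiku. The counter below is a crude heuristic, counting vowel groups with a silent-'e' adjustment; it is not a linguistically complete algorithm and will miscount many English words:

```python
import re

def count_syllables(word):
    """Approximate syllable count: count vowel groups, with crude
    handling of silent trailing 'e'/'es' (a heuristic, not a rule)."""
    w = re.sub(r"[^a-z]", "", word.lower())
    if not w:
        return 0
    # 'es' is usually silent unless it follows a sibilant (as in "embraces")
    if w.endswith("es") and len(w) > 2 and w[-3] not in "sczx":
        w = w[:-2]
    elif w.endswith("e") and not w.endswith("le") and len(w) > 2:
        w = w[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", w)))

haiku = [
    "Golden Orb Ascends",
    "Warmth embraces Earth below",
    "Life awakes with light",
]
counts = [sum(count_syllables(w) for w in line.split()) for line in haiku]
print(counts)  # → [5, 7, 5]
```

Even this naive counter confirms the algorithm's haiku fits the five-seven-five pattern.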

Can Computers Have New Ideas?

In order to really get closer to a definitive answer to the question, though, it’s important to remember one thing:

No matter how impressive a piece of computer-created poetry or artwork might be, it’s always built from blocks carved out of the data that’s used to train it. In other words, it isn’t genuinely capable of what we would call “original thought” – having new ideas of its own.

Are humans? Well, as we mentioned before, we have a faculty that we call "imagination," which we think of as an ability to conjure up new ideas and concepts in our heads. But how many of what we consider to be our own original thoughts are derived from things we’ve previously seen, heard, read, experienced, or learned? As we’ve already covered, the workings of the human brain are still largely a mystery, even to psychologists and neuroscientists.

But one point I think that can be made about how we differ from machines in this regard is that the connections we make between things we’ve previously experienced and the new ideas we come up with are something to do with our humanity. It’s things we’ve previously seen, heard, and read (our own “training data”) but filtered through the lens of our own perceptions, feelings, beliefs, and experiences – in other words, our humanity.

After all, it’s these feelings, beliefs, and experiences that make us what we are – human. Generative AI algorithms make connections too, but they do it in an entirely probabilistic, mechanized way – simply by determining what words or concepts are most frequently connected together. "What shape is the Sun?" – "The Sun is an orb."

ChatGPT Plus may know that the sun gives off heat and light and makes life possible. But it doesn’t have its own personal thoughts, feelings, and memories about the sun in the same way that you or I do.
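The "most frequently connected together" mechanism described above can be illustrated with a minimal bigram model: count which word follows which in some training text, then always emit the most frequent successor. The three-sentence corpus below is invented for the sketch; real language models use vastly larger contexts and corpora:

```python
from collections import Counter, defaultdict

# Tiny invented training corpus about the sun
corpus = (
    "the sun is an orb . "
    "the sun is an orb of light . "
    "the sun gives warmth . "
).split()

# Count how often each word follows each other word
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def continue_phrase(word, steps):
    """Greedily follow the most frequent connection at each step."""
    out = [word]
    for _ in range(steps):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return out

print(" ".join(continue_phrase("the", 4)))  # → the sun is an orb
```

Ask this toy model what follows "the sun is" and it answers "an orb", purely because that connection is the most frequent in its data; there is no perception, feeling, or belief anywhere in the process.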

The Illusion of Creativity

Perhaps the best way to sum up the difference between human creativity and machine creativity today is this: human creativity is the original source, while machine creativity is best thought of as an emulation, or illusion, of it, a digital extension of our ability to express ourselves, generate new ideas, and inspire an emotional response from our audience.

After all, without humans to create the data that's been used to train seemingly highly creative machine intelligence like Dall-E and GPT-4, they wouldn't be capable of "creating" anything more impressive than random word-soup or a toddler's scribblings.

However, generative AI is without a doubt the closest we have come to machines that can be thought of as creative, and arguing the case for or against it is, at this point, really just a matter of semantics.

As time goes on and AI becomes even more capable, our definition of “creative” – which we use to determine our answer to the question we’re tackling here - will undoubtedly change. We might find that human creativity itself changes – as we attempt to find ways in which we can express ourselves that continue to go beyond what machines are capable of.

This could give rise to entirely new ideas about what makes us human – a question that many people consider to lie at the heart of much of the art that we’ve created – as well as new forms of art and creative expression.

One thing we can say for certain is that forcing us to reassess our ideas around what constitutes art, expression, and creativity is just one of the many tumultuous effects that AI will have on society in the coming years. And if the result of that is that we understand a little more about what it is that makes us human, then that by itself could have all kinds of implications for how we live our lives in the future.


Bernard Marr


How AI mathematicians might finally deliver human-level reasoning

Artificial intelligence is taking on some of the hardest problems in pure maths, arguably demonstrating sophisticated reasoning and creativity – and a big step forward for AI

By Alex Wilkins

10 April 2024


In pure mathematics, very occasionally, breakthroughs arrive like bolts from the blue – the result of such inspired feats of reasoning and creativity that they seem to push the very bounds of intelligence . In 2016, for instance, mathematician Timothy Gowers marvelled at a solution to the cap set problem , which has to do with finding the largest pattern of points in space where no three points form a straight line. The proof “has a magic quality that leaves one wondering how on Earth anybody thought of it”, he wrote.

You might think that such feats are unique to humans. But you might be wrong. Because last year, artificial intelligence company Google DeepMind announced that its AI had discovered a better solution to the cap set problem than any human had . And that was just the latest demonstration of AI’s growing mathematical prowess. Having long struggled with this kind of sophisticated reasoning, today’s AIs are proving themselves remarkably capable – solving complex geometry problems, assisting with proofs and generating fresh avenues of attack for long-standing problems.

Can AI ever become conscious and how would we know if that happens?

All of which has prompted mathematicians to ask if their field is entering a new era. But it has also emboldened some computer scientists to suggest we are pushing the bounds of machine intelligence, edging ever closer to AI capable of genuinely human-like reasoning – and maybe even artificial general intelligence, AI that can perform as well as or better than humans on a wide range of tasks. “Mathematics is the language of reasoning,” says Alex Davies at DeepMind. “If models can…


Design & Make with Autodesk

ARTIFICIAL INTELLIGENCE (AI)

Artificial intelligence is the ability of machines to complete tasks that normally require human intelligence, and it can serve to enhance and empower human creativity rather than replace it.


What is artificial intelligence?

Artificial intelligence (AI) is the theory and development of computer systems that can perform tasks that would otherwise require human intelligence, such as pattern recognition from data; learning from experience; interpreting visual inputs; and recognizing, translating, and transcribing language.

In practice, AI is a way for humans and computers to work better together. In design and make industries, AI helps creative people do more with less by automating mundane and repetitive tasks and by cutting through the complexity of huge datasets to provide insights that may lead to new solutions.


How does artificial intelligence work?

AI works when computer systems apply logic-based programming to interpret data and automate actions with limited human input.

There are many subsets and types of artificial intelligence. For example, machine learning (ML) uses statistical data analysis for the model to improve its performance over time, in a sense learning from its mistakes and successes. As a subset of ML, deep learning goes further by analyzing data with artificial neural networks, which attempt to mimic human brain functions to find complex patterns in enormous data sets. Deep learning has made possible many AI innovations that are now common, such as computer vision (including facial and gesture recognition), voice assistants (Siri, Alexa), and speech-to-text transcription.
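
The idea of a model "learning from its mistakes" can be sketched in a few lines. The following is a minimal illustration, not any particular library's API: a one-parameter model fits the rule y = 2x by repeatedly measuring its error on toy data and nudging its parameter in the direction that reduces it.

```python
# Minimal sketch of "learning from mistakes": a one-parameter model
# fits y = 2x by repeatedly measuring its error and adjusting.
# The toy data and learning rate are illustrative choices only.

def train(data, steps=100, lr=0.01):
    w = 0.0  # the model starts knowing nothing
    for _ in range(steps):
        # average error gradient over the data set
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # adjust in the direction that reduces error
    return w

data = [(1, 2), (2, 4), (3, 6)]  # samples of the hidden rule y = 2x
w = train(data)
print(round(w, 2))  # → 2.0
```

Each pass over the data shrinks the remaining error by a constant factor, which is why performance improves with more training.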

These innovations have already changed how people work and communicate and have made advanced technology available to more people than ever. As they develop, AI’s capabilities will affect the global population even more broadly. For example, deep learning AI contributes to analyzing medical images for disease detection, analyzing biological data for genome editing, discovering new medicines, and diagnosing diseases. In the global effort to build the infrastructure and housing needed for a growing population, AI can help plan for flooding, wind, noise, and other conditions with predictive analysis; find efficiencies to free up people’s time; make manufacturing and construction more sustainable; and continue to assist workflows in new and more powerful ways.


Types of artificial intelligence

Artificial intelligence, or AI, refers to computer systems that can perform tasks that otherwise would require human intelligence. There are, however, numerous types of AI for different purposes. The following are some of the most important types of AI, all of which can help people and computer systems to work better together.

Machine learning (ML)  is a subset of AI that lets computers “learn” from data sets without being explicitly programmed. Supervised learning ML learns from labeled data, such as images with metatags, whereas unsupervised learning ML works off unlabeled data. Either way, ML systems permeate modern life as a part of GPS maps, search and recommendation engines, spam filters, credit card fraud detection mechanisms, and much more.

Deep learning  uses machine learning algorithms with biologically inspired logical structures called “artificial neural networks.” Deep learning is used for applications such as facial recognition and other types of computer vision, speech recognition for translating languages and transcribing spoken audio, bioinformatics such as analyzing DNA sequences and protein structures, and medical image analysis used to detect and diagnose diseases.
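
The "artificial neural network" at the heart of deep learning is, at its smallest, just layers of weighted sums separated by a nonlinearity. The sketch below uses invented, fixed weights purely for illustration; a real network would learn its weights from data.

```python
# A minimal forward pass through a two-layer neural network:
# each layer computes weighted sums plus a bias, then applies a
# nonlinearity. The weights here are invented, not learned.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    # one neuron per row of weights: weighted sum + bias, then activation
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    hidden = layer(x, weights=[[0.5, -0.6], [0.3, 0.8]], biases=[0.1, -0.2])
    output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
    return output[0]

score = forward([1.0, 0.0])
print(round(score, 3))  # a value between 0 and 1
```

Stacking many such layers, with millions of learned weights, is what lets deep networks find complex patterns in images, audio, and text.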

Generative AI  is based on pre-trained, deep neural networks, which are capable of producing novel text, images, audio, video, and computer code. Especially when it comes to the large language models (LLMs) like ChatGPT, Claude.ai, and many others that produce written outputs, generative AI’s messages are probable but unvalidated.
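
Why "probable but unvalidated"? A toy language model makes the point: it only tracks which word most often follows which, with no notion of truth. The tiny training text below is invented for the example; real LLMs do the same thing at vastly larger scale.

```python
# Toy illustration of generative text: the model counts which word
# follows which, then emits the statistically likeliest continuation.
# Nothing checks whether the output is true.
from collections import Counter, defaultdict

def build_bigrams(text):
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, word, length=4):
    out = [word]
    for _ in range(length - 1):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most probable next word
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = build_bigrams(corpus)
print(generate(model, "the"))  # emits the likeliest continuation, true or not
```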

Generative design , not to be confused with generative AI, is an algorithmic process in which software runs precise real-world simulations to arrive at a set of optimal design solutions for a defined problem. Designers or engineers define their design goals, along with constraints pertaining to cost, materials, manufacturing methods, and performance and spatial requirements. The software can quickly generate numerous options that explore the possible permutations of the design requirements, simulating and learning from each iteration. Generative design solutions often but do not always employ AI.
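
The generate-simulate-select loop described above can be sketched in a few lines. The goal (light weight), the constraint (minimum stiffness), and the toy "simulation" formulas below are all invented for illustration; real generative design software runs far richer physics.

```python
# Minimal sketch of generative design: generate candidate designs,
# simulate each one, discard those that violate constraints, and
# keep the best performer. All formulas here are toy stand-ins.
import random

def simulate(width, height):
    """Toy physics: stiffness grows with w*h^3, weight with w*h."""
    return width * height ** 3, width * height

def generative_design(trials=5000, min_stiffness=100.0, seed=42):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        w = rng.uniform(1.0, 10.0)   # candidate beam width
        h = rng.uniform(1.0, 10.0)   # candidate beam height
        stiffness, weight = simulate(w, h)
        if stiffness < min_stiffness:
            continue                 # fails the engineering constraint
        if best is None or weight < best[0]:
            best = (weight, w, h)    # keep the lightest valid design
    return best

weight, w, h = generative_design()
print(f"best design: {w:.2f} x {h:.2f}, weight {weight:.2f}")
```

The designer's role is to state the goals and constraints; the software's role is to explore the space of permutations far faster than a person could.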

Artificial intelligence by industry


Architecture, engineering, construction, and operations

Architecture and engineering firms are using AI to optimize buildings and sites for sunlight, wind, and noise conditions, while construction companies employ AI-powered robots for fast, accurate site mapping.


Product design and manufacturing

AI-powered robots and deep learning help manufacturers to avoid costly downtime by scheduling preemptive repairs, while generative design empowers designers to find optimal solutions based on parameter requirements and desired outcomes.


Media and entertainment

Artists can manipulate scene data directly in Autodesk Maya using natural language text prompts, thanks to the AI-powered Maya Assist tool.

Benefits of artificial intelligence

Across industries, AI reduces tedious work, handles complex data crunching, and opens up new possibilities.

Augmentation

AI’s contextual assistance can enhance creative professionals’ creativity and exploratory capabilities by improving speed, accuracy, and breadth of thinking.

Automate tedious work

AI can automate workflow steps that traditionally required tedious and/or repetitive manual work, which can significantly reduce overhead and free up time to focus on more creative work.

Actionable insights

When professionals face overwhelming amounts of complex data, AI features can sift through it, identify what is really happening, and deliver actionable insights.

Ideas and innovation

With AI assisting in ideation and visualizing more contributors’ ideas, more voices are heard, and more projects can be launched, accelerating innovation. 

Customer satisfaction

Taking proper advantage of AI’s capabilities can lessen the gap between client expectations and design-and-make reality.

Specific design-and-make tasks

AI helps in specific areas of design and make. Autodesk Forma can maximize light in buildings and optimize sites for wind and noise conditions. Autodesk Fusion can automate 2D documentation while making manufacturing drawings interactive.


How generative artificial intelligence will transform workflows

Ideally, AI will give users more time for creativity by reducing the burden of tedious, repetitive chores. AI should augment and serve human creativity rather than replace it.

Generative AI has the potential to accelerate the visualization of concepts, leading to more ideas from more people. With faster ideation, teams can explore more possibilities and start more projects. Furthermore, AI features can help greatly reduce the gap between client expectations and design-and-make reality, boosting customer satisfaction. 

Generative AI is already showing its transformative power for creative software in the AEC, design and manufacturing, and media and entertainment industries. Architects will be able to generate editable floor plans and easily import information from drawings into CAD projects. Construction IQ’s data analysis in the Autodesk Construction Cloud identifies risks and informs decisions on design, quality, safety, and project control priorities. Product designers can generate part assemblies for 3D CAD models. And in media and entertainment, AI generates 3D character rigging, making skeletal animation much faster.

Artificial intelligence in action

Design and make firms are using AI to extend the reach of their work, whether that’s to the bottom of the ocean, to remote communities, or all the way to Mars.


UNTOLD STUDIOS

Heads in the cloud and minds on AI

In addition to being the world’s first fully cloud-based animation and VFX shop, Untold Studios uses its own data to train machine learning models to accelerate repetitive tasks within disciplines such as character rigging.


AI drives automotive design to a new level

In this interview and blog post, the ex-Ford Chief Designer and AI design expert shares thoughts on the AI design process, how to give generative AI feedback, AI’s limits, and how AI encourages exploration in automotive design.


BRIDGES TO PROSPERITY

Better living through AI geospatial analysis

After creating design “presets” of trail bridges in Autodesk AutoCAD to connect rural communities to basic services, Bridges to Prosperity's AI and machine-learning geospatial data tool, Fika Map, remotely analyzes prospective sites so the nonprofit can easily scale up.



Alien-looking “evolved structures” aid the search for alien life

NASA used generative design to produce several parts for its EXCITE telescope and the Mars Sample Return Mission, taking advantage of the technology’s crucial benefits, such as designing for reduced mass.



“I truly believe that humanity is at the dawn of the age of AI, and using generative design is absolutely essential for any engineering team to remain competitive in the future.”

—Alex Miller, lead mechanical engineer, Newton | Engineering and Product Development


“Our work with Autodesk and its Design and Make Platform have proven integral to our continued evolution, helping us to push the boundaries of technology to achieve outcomes previously thought impossible.”

—Amaan Akram, head of VFX, Untold Studios

Autodesk AI helps you design and make more with less

See how Autodesk AI’s innovations are already helping customers in AEC, design and manufacturing, and media and entertainment do their work faster, better, cheaper, and more sustainably, with more capabilities being added by the day.

AI and automation for design and manufacturing ROI e-book

Learn how new approaches to collaboration and advanced technologies like AI and automation are transforming design and manufacturing work in this white paper from  Harvard Business Review  and Autodesk.

AI and automation for ROI in AECO e-book

Culture changes in architecture, engineering, and construction—as well as the advancement of AI and automation tools—are generating returns for industry firms, according to this report from  Harvard Business Review  and Autodesk.

Autodesk AI Lab’s new design tools

Five recent AI papers apply deep-learning AI to create 3D models from spoken words, predict how to join CAD parts, transfer style between 3D objects, and find different ways of reverse-engineering CAD models.

Introduction to generative design for manufacturing

This Autodesk Fusion course starts designers on their generative design journey by introducing them to the mindset and workflow needed to succeed.

Generative design for architecture, engineering & construction

Quick videos and stories break down the process, benefits, and uses of generative design in the AECO industry, particularly for Autodesk Revit software.

Frequently asked questions (FAQs) about artificial intelligence

What are examples of artificial intelligence?

Some examples of artificial intelligence include:

• Autonomous vehicles : AI interprets sensor data to make driving decisions.

• Chatbots : Customer service chatbots and large language model (LLM) AIs such as ChatGPT use natural language processing to answer questions.

• Virtual assistants : Voice-controlled assistants like Siri and Google Assistant, and their extended “smart home” systems like Apple HomePod and Google Home, use AI to understand spoken commands and to automate tasks.

• Recommendation engines : Amazon, TikTok, and YouTube use AI to suggest what you might want next, based on data including your behavior history and other people’s consumption.

• Health care : AI is used to assist in certain surgeries, detect diseases, formulate drugs, and personalize treatment plans.

Why is artificial intelligence important?

Artificial intelligence is important for doing a number of things that people otherwise would not be able to do. For example, AI can analyze data at a volume and rate that far exceeds human capabilities, which has allowed people to use AI for example to diagnose diseases and to better predict traffic patterns and weather. AI’s data-driven insights can also help people make more informed decisions.

When AI takes over tedious and repetitive tasks, that can free up people and businesses to focus on more creative and nuanced problems. AI-based technologies such as voice recognition, text-to-speech, language translation, and others also make technology more accessible to people, especially those with disabilities.

What is the most used form of artificial intelligence?

While generative AI tools such as ChatGPT have become very popular, the most-used form of artificial intelligence is machine learning, a subset of AI where computer systems get better over time at making predictions based on large amounts of input data.

Examples of machine learning systems include IBM Watson, Nvidia Deep Learning AI, and TensorFlow.

Machine learning contributes to many technologies in use today, including search engines, autonomous vehicles, email spam filters, social media feeds, credit card fraud detection mechanisms, voice-recognition assistants, and recommendation engines.

How is AI used in everyday life?

AI is used in everyday life and has been implemented for years in a variety of ways that many people consider commonplace. For example, virtual assistants like Siri and Alexa use artificial intelligence for voice recognition.

Machine learning also pervades modern life. In transportation, GPS systems use it to suggest the best route according to traffic data, and ride-sharing apps use it to match drivers with passengers and calculate trip times and prices. Recommendation engines for shopping, streaming video, and social media all use machine learning.

AI benefits health care through disease diagnosis, drug discovery, and more. AI contributes to security for example by identifying spam and phishing emails, detecting fraud in banking, and spotting unwanted activity from security cameras.

What are the disadvantages of AI?

The disadvantages of AI can include high cost, job loss, lack of creativity and emotion, unpredictability, and ethical and privacy concerns.

AI’s high financial costs come from the data, software, hardware, and human labor needed to build, maintain, and repair AI systems. Societal costs include the potential for AI to replace human jobs, especially jobs involving repetitive tasks and simple problem-solving. Another danger is that humanity could become too dependent on machines.

Other societal concerns include AI acting with bias, discrimination, unfairness, and lack of transparency. Security and privacy concerns also abound, as AI collects huge amounts of data and is vulnerable to hacking, technical glitches, and human error.



Computer Science > Artificial Intelligence

Title: AI Knowledge and Reasoning: Emulating Expert Creativity in Scientific Research

Abstract: We investigate whether modern AI can emulate expert creativity in complex scientific endeavors. We introduce a novel methodology that utilizes original research articles published after the AI's training cutoff, ensuring no prior exposure and mitigating concerns of rote memorization. The AI is tasked with redacting findings, predicting outcomes from redacted research, and assessing prediction accuracy against reported results. Analysis of 589 published studies in four leading psychology journals over a 28-month period showcases the AI's proficiency in understanding specialized research, deductive reasoning, and evaluating evidentiary alignment--cognitive hallmarks of human subject matter expertise and creativity. These findings suggest the potential of general-purpose AI to transform academia, with roles requiring knowledge-based creativity becoming increasingly susceptible to technological substitution.



Creativity and Artificial Intelligence—A Student Perspective

Rebecca Marrone

1 The Centre for Change and Complexity in Learning, The University of South Australia, Adelaide 5000, Australia

Victoria Taddeo

Gillian Hill

2 Centre for Research in Expertise Acquisition, Training and Excellence, School of Psychology, University of Buckingham, Buckingham MK18 1EG, UK

Associated Data

Restrictions apply to the availability of these data. Data was obtained from students and are available from the authors with the permission of the students.

Abstract

Creativity is a core 21st-century skill taught globally in education systems. As Artificial Intelligence (AI) is being implemented in classrooms worldwide, a key question arises: how do students perceive AI and creativity? Twelve focus groups and eight one-on-one interviews were conducted with secondary-school-aged students after they received training in both creativity and AI over eight weeks. An analysis of the interviews highlights that the students view the relationship between AI and creativity in terms of four key concepts: social, affective, technological, and learning factors. The students with a higher self-reported understanding of AI reported more positive thoughts about integrating AI into their classrooms. The students with a low understanding of AI tended to be fearful of AI. Most of the students indicated a thorough understanding of creativity and reported that AI could never match human creativity. The implications of the results are presented, along with recommendations for the future, to ensure AI can be effectively integrated into classrooms.

1. Introduction

There is a strong consensus that creativity is a crucial 21st-century competency. Education systems report the importance of creativity ( Patston et al. 2021 ). Similarly, Artificial Intelligence (AI) is significantly impacting a growing number of fields, including education ( Gabriel et al. 2022 ). Globally, education systems are developing strategic plans to embed AI in classrooms adequately (see Singapore, Estonia, Australia, New Zealand, and Scotland, to name a few) ( Gabriel et al. 2022 ). Whilst the importance of both creativity and AI are well established, less is known about how students perceive and value the relationship between AI and creativity. This paper will explore how students perceive AI and creativity, and endeavour to ensure that education systems support the development of both competencies.

1.1. Artificial Intelligence in Education

Artificial Intelligence (AI) is a branch of computer science that uses algorithms and machine learning techniques to replicate or simulate human intelligence ( Helm et al. 2020 ). There are three types of AI: narrow AI, general AI, and Artificial Superintelligence. Narrow AI is the most common and realized form of AI to date. It is very goal-orientated and uses machine learning techniques to achieve one specific goal or task (e.g., image and facial recognition, Siri/Alexa). General (or deep) AI is AI that is deemed on par with human capabilities (e.g., AI that can discern the needs and emotions of other intelligent beings). Thirdly, Artificial Superintelligence is AI that is more capable than humans (similar to a sci-fi movie portrayal of AI that supersedes humans in every regard) ( Hassani et al. 2020 ).

Within the education context, artificial intelligence development will likely remain in the form of narrow AI. Current educational technologies include speech semantic recognition, image recognition, Augmented Reality/Virtual Reality, machine learning, brain neuroscience, quantum computing, blockchain, et cetera. These technologies are rapidly being integrated within classrooms. An ever-increasing number of artificial intelligence education products are being applied to K-12 education ( Yufeia et al. 2020 ). Literature studies show that artificial intelligence technology in education has been used in at least 10 aspects: “the (i) automatic grading system, (ii) interval reminder, (iii) teacher’s feedback, (iv) virtual teachers, (v) personalized learning, (vi) adaptive learning, (vii) augmented reality/virtual reality, (viii) accurate reading, (ix) intelligent campus, and (x) distance learning” ( Yufeia et al. 2020, p. 550 ).

The Artificial Intelligence in Education (AIED) community emphasises the creation of systems that are as effective as one-on-one human tutoring ( VanLehn 2011 ). Over the last 25 years, there have been significant advances toward achieving that goal. However, by enforcing the human tutor/teacher as the gold standard, a typical example of AIED practices has often included a student working with a computer to solve step-based problems focused on domain-level knowledge in subjects such as science and mathematics ( Trilling and Fadel 2009 ). However, this example does not consider the recent developments in education practices and theories, including introducing 21st-century competencies. The 21st-century competency approach to education emphasises the value of general skills and competencies such as creativity. Today’s classrooms strive to incorporate authentic practices using real-world problems in collaborative learning settings. To maintain its relevance and increase its impact, the field of AIED has to adapt to these changes.

1.2. What Does Creativity in an AI Classroom Look Like?

Boden ( 1998 ), in her paper, suggests that AI techniques can be used to enhance creativity in three ways: ‘by producing novel combinations of familiar ideas; by exploring the potential of conceptual spaces; and by making transformations that enable the generation of previously impossible ideas’ (p. 1). While there have been attempts to combine the fields of AI and creativity, and to define them through the emerging field of computational creativity, it has often ended in confusion. Computational creativity (CC) (also known as artificial creativity or creative computation) places AI/computers at the centre of creativity ( Colton and Wiggins 2012 ). Computational creativity is underpinned by Rhodes’ 4P’s of creativity theory, which emphasises that creativity is an interaction between four factors: process, person, product, and press (environment) ( Rhodes 1961 ). While all four factors are crucial for human creativity, Cropley et al. ( 2021 ) have suggested that only two factors are important for human and artificial creativity: process (i.e., cognition), and product (i.e., outcome). Creative products are measured by novelty and effectiveness ( Cropley and Cropley 2012 ; Cropley and Kaufman 2012 ), where novelty refers to a new or original idea or concept, and effectiveness is the ability of the product or solution to achieve its desired result. Process is defined as the cognitive mechanisms of creativity and is key to understanding what artificial intelligence can offer to develop novel and effective solutions to problems. Therefore, to encourage the use of creativity and AI, educators should consider the process by which creativity has unfolded and/or the product of the creative endeavour.

There is emerging research on assessing the creative product using AI-based methodologies. Cropley and Marrone ( 2021 ) demonstrate how AI can successfully assess figural creativity using convolutional neural networks. Beaty and Johnson ( 2021 ), and Olson et al. ( 2021 ) also demonstrate the use of latent semantic analysis to assess the creativity of student responses to a traditional alternate uses task. While this is a growing field, this research focuses more on the outcome or product of creativity and less on the process.
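
The semantic-distance idea behind such AI scoring of alternate-uses responses can be sketched simply: a response whose meaning lies farther from the prompt object scores as more original. In the toy example below, the tiny hand-made word vectors stand in for real embeddings (e.g., those produced by latent semantic analysis or a trained language model); the words and numbers are invented for illustration.

```python
# A hedged sketch of semantic-distance creativity scoring: originality
# is measured as 1 minus the cosine similarity between the prompt's
# vector and the response's vector. Vectors here are invented toys.
import math

vectors = {
    "brick":       [0.9, 0.1, 0.0],
    "build":       [0.8, 0.2, 0.1],  # common use: close to "brick"
    "paperweight": [0.3, 0.7, 0.2],
    "sculpture":   [0.1, 0.2, 0.9],  # unusual use: far from "brick"
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def originality(prompt, response):
    # semantic distance = 1 - cosine similarity
    return 1 - cosine(vectors[prompt], vectors[response])

for use in ["build", "paperweight", "sculpture"]:
    print(use, round(originality("brick", use), 3))
# "sculpture" scores highest: its vector is farthest from "brick"
```

This kind of scoring captures the product's novelty, which is exactly why, as noted above, such methods say more about the creative outcome than about the process.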

1.3. The Process of Creativity and AI

Students should be aware of how AI can support their creativity and learning. Modern education favours problem-solving-based pedagogies, which emphasise the importance of fostering children’s ability to think creatively. However, considerable research supports the existence of a creativity slump in younger children across subjects ( Torrance 1968 ; Tubb et al. 2020 ). One proposal for this slump is an overly structured school curriculum and a lack of play-based learning activities in educational practices ( Alves-Oliveira et al. 2017 ). Emerging research shows how AI can support skills often associated with creativity, such as curiosity ( Gordon et al. 2015 ), grit, persistence, and attentiveness ( Belpaeme et al. 2018 ). The ability of AI to support creativity is also being explored. Kafai and Burke ( 2014 ), in their study, report that the purpose of AI in education is to encourage and support skills such as problem-solving and creativity through collaboration with AI, rather than simply acquiring knowledge in the specific domain. The paper suggests that AI can help creativity unfold and is therefore related to the process through which creativity occurs. Furthermore, Ryu and Han ( 2018 ) studied Korean school teachers’ perceptions of AI in education and report that teachers with experience in leadership recognized that AI would help to improve creativity. Therefore, it is proposed that AI in education may address some of the main concerns associated with the creativity slump, particularly an emphasis on the creative process. This may help improve creative thinking in students and comfortability using AI, and to adequately prepare students to enter the modern workforce.

To successfully combine and integrate AI and creativity, we must better understand how students perceive the relationship between the two concepts. To understand this perception, we should also situate AI with other predominant creativity theories, including the 4C model of creativity.

1.4. A 4C Approach to AI

Creativity and AI in an educational context can be viewed through a 4C model ( Kaufman and Beghetto 2009 ). Mini-c or ‘personal creativity’ embodies the personal ( Runco 1996 ; Vygotsky 2004 ) and developmental ( Cohen 1989 ) aspects of creativity. Mini-c relates to subjective self-discoveries that are creative to the individual involved and not necessarily others. An example may be an individual making a slight variation on a well-known recipe. Little-c, also called ‘everyday creativity’, refers to something other people recognise as creative, such as generating a new recipe. Pro-c or ‘professional creativity’ is defined as expert-level creativity in a field or discipline. An example may be the chef Gordon Ramsay. Big-C or ‘legendary creativity’ is eminent creativity that will be remembered for centuries. An example may be Auguste Escoffier, who is credited as the founder of modern cuisine and who dramatically altered the field of cooking ( Beghetto et al. 2016 ).

Most obviously, AI can support creativity at the pro-c and potentially Big-C levels, as it can help extend expert knowledge in specific domains. Less obvious is how AI can support mini-c and little-c contributions. At the mini-c and little-c levels, the creative output is not as crucial as the self-discovery that occurs through the creative process. It is therefore essential to develop both an appreciation and understanding of when and where AI is most valuable, that is, in what narrow domains does AI best suit education, and how can AI be used to encourage mini-c and little-c contributions?

This research will investigate how students perceive AI and creativity, and the relationship between the two. We expect insights to highlight how AI can be designed to support creativity in the classroom.

2. Materials and Methods

2.1. Participants

Eighty secondary school students from four South Australian schools (mean age 15) participated in an eight-week programme. Students were tasked with the challenge of: ‘How do we sustain life on Mars?’ Sixty students completed this task as part of their regular science class. Twenty students completed this task as an extracurricular after-school programme. The programme’s content was identical, irrespective of whether the student participated in their regular science class or as an extracurricular activity. The same staff conducted both versions.

2.2. Method

Grounded theory (GT) is a structured yet flexible methodology that is appropriate when little is known about a phenomenon ( Chun Tie et al. 2019 ). Grounded theory investigates the experience of people and their responses and reactions and generates a theory. A defining characteristic of GT is that it aims to generate a theory that is grounded in the data. Considering there is minimal research on student perceptions of AI and creativity, this methodology was chosen.

2.3. Context

The students explored a variety of sub-problems related to their task; one task involved designing and building a Mars Rover. Those who engaged as part of their science class worked in groups of 4–5 students, and each team spent one week (four × 50-min lessons) engaging solely with artificial intelligence and building their Rover. For the other seven weeks, students engaged with the AI system once a lesson for approximately 10 min each time (40 min per week over seven weeks). The students who engaged in the extracurricular version of this programme were also in groups of 4–5 and engaged with the AI system for six hours over a one-day, in-person event. The other lessons were hosted on Zoom and did not involve AI. The students physically built a Mars Rover using Fischer Technik kits and then engaged with an AI-based vision analytics tool to receive feedback on their build. Whilst the technology behind the vision analytics tool has been created by individuals at the pro-c level, its application in the classroom was created to elicit mini-c or little-c creativity in students. This is because the students use the system to get specific and targeted feedback on every step of their build. The students can then use this information to decide if the AI is helping them achieve their goals of creating the Rover. Once students had built their Rover, the vision analytics system could scan it and upload it into a 3D virtual environment, where students could drive their Rover on Mars. Here they learnt about planetary factors such as gravity and terrain.

This was an open-ended task with no prescribed build instructions; students were simply encouraged to be creative with their choices and designs. They received creativity training, specifically: “What is creativity and what is it not?”.

2.4. Data Analysis

Twelve focus groups were conducted with the students engaged with this project in their regular science lessons. Eight one-on-one interviews were conducted with those students who participated in this programme as an extracurricular programme. The questions asked were the same for all students, regardless of whether they engaged in their class or as an extracurricular activity. The interviews were framed around how students perceive both AI and creativity. See Appendix A for the interview questions. A content analysis methodology was used to analyse the meaning of the participants’ narratives. Fraenkel et al. ( 2006 ) define content analysis as ‘a technique that enables researchers to study human behaviour in an indirect way, through an analysis of their communications’ (p. 483). The purpose of content analysis is to explore participants’ verbal communication and social behaviour without influencing it. Content analysis allows a researcher to interpret what is being communicated, why it is being communicated, and with what effects ( Wagenaar and Babbie 2004 ). An objective codification process characterises content analysis and involves placing coded data into key categories and more abstract concepts.

One conceptualisation of creativity and AI that emerged from the students’ remarks was labelled ‘Social Factors’. Typical categories defining this concept were ‘conversation and lack of awareness’, ‘student interest’ and ‘social intelligence/social skills’. Another conceptualisation identified in the content units was ‘Affective’; typical categories defining this concept were ‘comfortable with AI’ and ‘not comfortable with AI’. A different kind of conceptualisation was observed in the cognitive views expressed by some of the students interviewed, leading to the concept ‘Technological Factors’. The typical categories here were ‘access and use of AI’, ‘technology focused’, ‘robotics’, and ‘computers’. The final concept was labelled ‘Learning Factors’; the typical categories, related to the students’ current school environment, were ‘AI provides a learning aid’ and ‘creativity takes time’. These concepts are shown in Appendix B , along with the content units from which they were derived, and the categories defined by these content units.
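The codification process described above (coded content units grouped into categories, which in turn define more abstract concepts) can be sketched as a simple lookup in Python. The category and concept labels below are taken from this section; the helper function and the example coded units are illustrative only, not part of the study's analysis pipeline.

```python
from collections import Counter

# Concepts and their defining categories, as reported in this section.
CONCEPTS = {
    "Social Factors": [
        "conversation and lack of awareness",
        "student interest",
        "social intelligence/social skills",
    ],
    "Affective Factors": [
        "comfortable with AI",
        "not comfortable with AI",
    ],
    "Technological Factors": [
        "access and use of AI",
        "technology focused",
        "robotics",
        "computers",
    ],
    "Learning Factors": [
        "AI provides a learning aid",
        "creativity takes time",
    ],
}

def concept_for(category: str) -> str:
    """Return the abstract concept that a coded category belongs to."""
    for concept, categories in CONCEPTS.items():
        if category in categories:
            return concept
    raise KeyError(f"uncoded category: {category}")

# Hypothetical coded units, tallied per concept.
coded_units = ["robotics", "comfortable with AI", "robotics",
               "creativity takes time"]
tally = Counter(concept_for(c) for c in coded_units)
# e.g. {'Technological Factors': 2, 'Affective Factors': 1,
#       'Learning Factors': 1}
```

Grouping categories under concepts this way makes the mapping auditable: every coded unit either resolves to exactly one concept or raises an error flagging an uncoded category.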

3. Results and Discussion

This study aimed to understand how the students view the relationship between AI and creativity. This topic was addressed through a content analysis interpretation of the students’ responses to key questions. The results highlight that the students in the study understood the relationship between AI and creativity as four fundamental concepts: social, affective, technological and learning factors.

3.1. Social Factors

The results from the interviews suggest that secondary school students in Australia believe AI can negatively impact their social skills. The AI facilitators/barriers category tended to include negative views and perceptions of AI. Previous research notes that AI will drive us into roles that require more social skills and will typically encourage these social-based roles ( Deming 2017 ; Makridakis 2017 ). However, the students believed that AI would negatively impact their social skills, with comments such as ‘AI can make people lack “social-wise”. AI can make social intelligence weaken a little bit, which can affect them (students)’, and ‘Well, if we’re talking about robots and such for computers and phones and digital media social media, that kind of stuff…it’s taking away from people’s social lives, and they’re just more concerned about having a digital platform to present themselves on, rather than focusing on presenting themselves in the physical world.’. One student reported that getting AI to become ‘a mainstream thing so everyone can speak to everyone on it, so we can ask whole communities and get out with a lot of people’ was essential to changing the conversations about AI. These somewhat negative perceptions may hinder students’ willingness to adopt AI technologies in their classrooms. Chai et al. ( 2021 ) demonstrate that primary school students’ intention to learn AI is influenced by their perception of the use of AI for social good. Furthermore, Chai et al. ( 2020 ) highlight that students’ perception of learning AI for social good is the most powerful predictor of their behavioural intention to continue learning AI. The students also reported that AI will never work in fields where human skills are required for problem-solving. When asked whether AI can match human skills, one focus group reported that the father of one participant was a pilot.
They mentioned that it was crucial AI never entered the cockpit, as humans should be tasked with solving a complex problem like flying a plane. Interestingly, every member of this group agreed and seemed unaware of the level of technology associated with flying. This represents a gap in student understanding of how AI can be used to assist humans. The students in this group failed to see the value of AI as a teammate and viewed this role solely as a human skill. Further emphasis should be placed on educating students on human–AI teaming and on how AI can support humans, even in seemingly social or complex situations. The belief that AI can hinder their social skills also represents an opportunity to demonstrate how AI can benefit social skills and enhance connections across communities.

3.2. Affective Factors

Students reported various affective responses to AI. Students who reported feeling more familiar with AI also reported feeling more comfortable using AI technologies, whereas students who said they were not sure what AI was also felt less comfortable defining AI and integrating it into their classrooms. This finding is supported by both Chiu ( 2017 ) and Teo and Tan ( 2012 ), who highlight that a positive attitude towards technology can explain one’s intention to use the technology. One student reported feeling comfortable because he had ‘all the safety programmes on it (his computer)’, so he reported trusting his AI systems. Another student responded, ‘depends on the type of AI, so, I guess computers and programming and telling a computer instructions’; when prompted, they reported they wouldn’t feel as comfortable using ‘robots and machines’. Transparency in an AI system is associated with increased trust in the AI. This is in line with previous research showing that transparency and the avoidance of ‘black box’ suggestions can foster AI adoption, an approach referred to as explainable AI ( Lundberg et al. 2020 ).

3.3. Technological Factors

Interestingly, the majority of the students’ perceptions of AI related to technological factors. Categories such as advanced technology, automation, coding/programming, futuristic, not human and robots had much in common. Students typically thought of AI as robots or computer-based, as this is how they interact with AI in their daily lives. These comments suggest the students possess quite a limited view of AI applications: they struggled to see AI as anything more than robots and computers. Several students felt that AI was a ‘futuristic’ phenomenon that was not especially impactful in their current lives, and all students reported that AI, to them, included some form of robotics. Chiu et al. ( 2021 ) and Chiu and Chai ( 2020 ) suggest that students should learn about AI through real-life applications that they are likely to encounter in their daily experiences.

When asked if AI can ever match human creativity, students reported that, despite AI being technically superior to humans, creativity will always be a uniquely human trait that should be fostered. One student commented, ‘Basically, most things in artificial intelligence are made by humans so, unless we actually create a robot which can be a human, it probably won’t be able to match the creativity of humans.’. The students who did believe that AI could match human creativity suggested that ‘maybe over time, when technology gets a lot more advanced, I think that it would be eventually possible to be as creative as humans’; that is, they did not think AI could currently match human creativity, but believed it may do so in the future. When asked ‘do you think AI could ever match human creativity?’, one student made a very interesting comment: ‘Yes, kind of. It’s a very interesting question. I think it can spark creativity. I don’t know if AI itself (can be creative). I don’t know if a robot can be creative because, in order for a robot to be creative, someone has had to create the robot and give it its creativity as such, so I don’t know if they can be creative themselves, but I think they can spark creativity.’. These students therefore view AI as a way to facilitate or ‘spark’ creativity. Based on these comments, it is suggested that AI should be used to enhance creativity. Markauskaite et al. ( 2022 ), in their recent paper, demonstrate how AI can be used to support creativity across different age groups. The authors’ polylogue provides concrete suggestions, based on a 4C theory of creativity approach, on how and where AI can be used to enhance creativity, particularly for students.

3.4. Learning Factors

The most frequently mentioned categories related to the concept of learning factors. The students reported a positive view of AI: it can support them in accessing information more efficiently, promote global connections, support their ideas, and aid learning. The students also reported that the benefits of creativity include time management and increasing their novel ideas. However, students also reported that their current school environments sometimes negatively impact their ability to exhibit creativity. Unsurprisingly, students mentioned not having enough time to be creative and that assignments were not designed to allow creativity to develop, as indicated by comments such as ‘sometimes you can’t (be creative); sometimes you do have a set structure of things that you have to follow, and you can’t always be creative, which can sometimes be a bit sad because you want to do something interesting but sometimes you know you have to follow a set structure for an assignment or something’. The students provided suggestions on how their learning environments could support creativity. They felt that AI could help develop their creativity by encouraging independent thinking and creating opportunities to be creative, such as encouraging ‘new ways to approach different situations’. Another student mentioned, ‘Also, if you’re trying to make a robot move down a path or something, sometimes it’s going to bump into things and it’s going to, you know, go a bit wonky, so you’ve got to think out of the box and you, hang on a second, what’s going wrong here and then backtrack kind of thing, thinking in a different mindset, I guess, to how you usually think.’.

When asked to deepen their thinking about their learning, the students reported that AI can assist creativity. It is suggested that schools create opportunities for students to engage with creativity and AI, as the students desire to engage in these activities.

3.5. Theoretical and Practical Contribution (From 4C to 4AI)

The students’ perceptions of AI varied; those more comfortable with AI had a more comprehensive understanding of the concept, in line with research on trust in AI ( Ashoori and Weisz 2019 ). Similarly, those who accurately defined creativity and valued the competency tended to think AI could never match human creativity. Notably, however, when students were asked to define AI, they had a very limited understanding of the concept and tended to view AI as general AI or Artificial Superintelligence. The students had experienced an intensive programme using narrow AI, so it was surprising that they did not acknowledge this. Adopting a 4C approach to these results, we propose that the students do not value what we have termed ‘everyday-AI’ (a combination of mini-c and little-c).

It is proposed that the effective integration of AI into classrooms must address the misconceptions students may have about AI. Extending the 4C theory of creativity, we propose a ‘4AI Model of Artificial Intelligence in Education’. Following the same principles as the 4C model, we suggest mini-AI, little-AI, Big-AI and legendary-AI. The students described an evident appreciation of Big- and legendary-AI but did not appear to appreciate mini- or little-AI, despite the AI tool being created to support mini-c and little-c. Educators should therefore focus on mini- and little-AI, since students are unlikely to experience Big- or legendary-AI as frequently, just as children are more likely to experience mini-c and little-c. This could include explaining the myths and misconceptions of AI and encouraging students to look for and appreciate examples of mini- or little-AI in their everyday lives. There is also the suggestion that, just as there is teaching with creativity, for creativity, and about creativity, there should be teaching for AI, with AI, and about AI; within these three domains, mini- and little-AI can be explored. It is proposed that students would thereby develop more realistic understandings of AI over time, and that some of the issues raised by the students who participated in this programme could be minimised.

3.6. Future Research

This study investigated student perceptions of AI and creativity and has proposed a 4AI model of creativity and AI. Future research could validate this model through both qualitative and quantitative methods. Quantitatively, AI-based tasks could be employed in classrooms, delineating mini-AI (perhaps around personalised feedback in learning) versus little-AI, and this model could be compared against pre- to post-measures of creativity. Further qualitative work could explore broader perceptions of everyday AI in children and adolescents. Finally, future research should focus on broadening students’ limited views of AI to incorporate more of what AI entails and how widely it permeates society and their learning environments ( Yufeia et al. 2020 ).

3.7. Limitations

This study has several limitations. First, this study was limited to secondary school students in South Australia, Australia. Further research should examine and compare K-12 students’ perceptions from other countries and demographics. Secondly, the students reported that the AI system did not work effectively every time they used it. These issues may have contributed to poorer attitudes among students, particularly if this was their first experience working with AI. Thirdly, whilst the interviews provided rich and in-depth insights into student perceptions, more empirical attitude measures could have been used to provide further insights.

4. Conclusions

The interviews highlighted that the students view the relationship between AI and creativity in terms of four key concepts: social, affective, technological and learning factors. Most of the students reported that, although AI could never match human creativity, AI could certainly help them develop their creativity. A 4AI model of Artificial Intelligence has been proposed to help educators support mini-AI and little-AI experiences, which the findings show were overlooked by the students, despite these being the core of the programme they had experienced. Future research could focus on using AI to address the concerns students mentioned and to enhance their creativity.

Acknowledgments

The authors would like to acknowledge the participants and their teachers.

Appendix A. Creativity and Artificial Intelligence—a student perspective

Interview Questions for one-on-one interviews

Creativity:

  • 1. What comes to mind when you hear the word ‘creativity’?
  • 2. In what areas of your school life do you see creativity being beneficial?
  • 3. What are the challenges associated with creativity?
  • 4. Are some people more ‘creative’ than others?

I will now move into some questions on artificial intelligence.

  • 5. Do you know what AI is?
  • 6. How comfortable do you feel using AI?
  • 7. How often do you use AI—have you used it before?

Artificial Intelligence:

  • 8. What comes to mind when I say the words ‘Artificial Intelligence’?
  • 9. In what areas do you see AI being beneficial?
  • 10. What are the challenges associated with AI?
  • 11. Who can help bring AI into your classroom?
  • 12. What do you think needs to happen to see AI in a classroom?
  • 13. Do you want AI in your classroom?

Creativity and AI:

  • 14. What is the relationship between creativity and AI?
  • 15. Can AI be creative?
  • 16. What skills do you think are important for the future of work?
  • 17. How can we support these skills?
  • 18. Can AI ever match human creativity?

Due to the nature of the focus groups, we condensed the above 18 questions into 11 questions.

Interview Questions for Focus Groups

  • 1. What comes to mind when I say the words ‘Artificial Intelligence’?
  • 2. Do you know what AI is?
  • 3. How comfortable do you feel using AI?
  • 4. How often do you use AI—have you used it before?
  • 5. How do you feel about AI in a collaborative learning environment?
  • 6. Do you want AI in your classroom?
  • 7. What was your experience working with Vianna? What did you like and not like?
  • 8. What comes to mind when you hear the word ‘creativity’?
  • 9. Do you think AI can ever match human skills/creativity in the future?
  • 10. What skills do you think are important for the future of work?
  • 11. Bearing your previous discussion in mind, in what ways were you and/or your group creative in this project?

Appendix B. Content units, categories and concepts derived from the qualitative data.

Table A1 illustrates that the students in the study understood the relationship between creativity and AI in terms of four fundamental dimensions (referred to as ‘concepts’ in the table): social, affective, technological and learning factors.

Funding Statement

This research received no external funding.

Author Contributions

Conceptualization, R.M., V.T. and G.H.; methodology, V.T. and R.M.; formal analysis, V.T. and R.M.; writing—original draft preparation, R.M., V.T. and G.H.; writing—review and editing, R.M., V.T. and G.H.; project administration, R.M. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of The University of South Australia (protocol code 203661 and date of approval 13 January 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Alves-Oliveira Patrícia, Arriaga Patrícia, Paiva Ana, Hoffman Guy. Yolo, a robot for creativity: A co-design study with children; Paper presented at the 2017 Conference on Interaction Design and Children; Stanford, CA, USA. June 27–30; 2017. pp. 423–29. [ Google Scholar ]
  • Ashoori Maryam, Weisz Justin D. In AI we trust? Factors that influence trustworthiness of AI-infused decision-making processes. arXiv. 2019 1912.02675 [ Google Scholar ]
  • Beaty Roger E., Johnson Dan R. Automating creativity assessment with SemDis: An open platform for computing semantic distance. Behavior Research Methods. 2021; 53 :757–80. doi: 10.3758/s13428-020-01453-w. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Beghetto Ronald A., Kaufman James C., Hatcher Ryan. Applying creativity research to cooking. The Journal of Creative Behavior. 2016; 50 :171–77. doi: 10.1002/jocb.124. [ CrossRef ] [ Google Scholar ]
  • Belpaeme Tony, Kennedy James, Ramachandran Aditi, Scassellati Brian, Tanaka Fumihide. Social robots for education: A review. Science Robotics. 2018; 3 :eaat5954. doi: 10.1126/scirobotics.aat5954. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Boden Margaret A. Creativity and artificial intelligence. Artificial Intelligence 40 Years Later. 1998; 103 :347–56. doi: 10.1016/S0004-3702(98)00055-1. [ CrossRef ] [ Google Scholar ]
  • Chai Ching Sing, Lin Pei-Yi, Jong Morris Siu-Yung, Dai Yun, Chiu Thomas K. F., Huang Biyun. Factors Influencing Students’ Behavioral Intention to Continue Artificial Intelligence Learning; Paper presented at the 2020 International Symposium on Educational Technology (ISET); Bangkok, Thailand. August 24–27; 2020. pp. 147–50. [ Google Scholar ]
  • Chai Ching Sing, Lin Pei-Yi, Jong Morris Siu-Yung, Dai Yun, Chiu Thomas K. F., Qin Jianjun. Perceptions of and behavioral intentions towards learning artificial intelligence in primary school students. Educational Technology & Society. 2021; 24 :89–101. [ Google Scholar ]
  • Chiu Thomas K. Introducing electronic textbooks as daily-use technology in schools: A top-down adoption process. British Journal of Educational Technology. 2017; 48 :524–37. doi: 10.1111/bjet.12432. [ CrossRef ] [ Google Scholar ]
  • Chiu Thomas K., Chai Ching Sing. Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability. 2020; 12 :5568. doi: 10.3390/su12145568. [ CrossRef ] [ Google Scholar ]
  • Chiu Thomas K. F., Meng Helen, Chai Ching-Sing, King Irwin, Wong Savio, Yam Yeung. Creation and evaluation of a pretertiary artificial intelligence (AI) curriculum. IEEE Transactions on Education. 2021; 65 :30–39. doi: 10.1109/TE.2021.3085878. [ CrossRef ] [ Google Scholar ]
  • Chun Tie Ylona, Birks Melanie, Francis Karen. Grounded theory research: A design framework for novice researchers. SAGE Open Medicine. 2019; 7 :2050312118822927. doi: 10.1177/2050312118822927. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cohen Leonora M. A continuum of adaptive creative behaviors. Creativity Research Journal. 1989; 2 :169–83. doi: 10.1080/10400418909534313. [ CrossRef ] [ Google Scholar ]
  • Colton Simon, Wiggins Geraint A. Computational creativity: The final frontier?; Paper presented at the ECAI 2012: 20th European Conference on Artificial Intelligence; Montpellier, France. August 27–31; 2012. pp. 21–26. [ Google Scholar ]
  • Cropley David H., Cropley Arthur J. A psychological taxonomy of organizational innovation: Resolving the paradoxes. Creativity Research Journal. 2012; 24 :29–40. doi: 10.1080/10400419.2012.649234. [ CrossRef ] [ Google Scholar ]
  • Cropley David H., Kaufman James C. Measuring functional creativity: Non-expert raters and the Creative Solution Diagnosis Scale. The Journal of Creative Behavior. 2012; 46 :119–37. doi: 10.1002/jocb.9. [ CrossRef ] [ Google Scholar ]
  • Cropley David H., Marrone Rebecca L. Automated Scoring of Figural Creativity using a Convolutional Neural Network. Psychology of Aesthetics, Creativity, and the Arts. 2021 doi: 10.1037/aca0000510. [ CrossRef ] [ Google Scholar ]
  • Cropley David H., Medeiros Kelsey E., Damadzic Adam. The Intersection of Human and Artificial Creativity. Springer; Berlin/Heidelberg: 2021. [ CrossRef ] [ Google Scholar ]
  • Deming David J. The growing importance of social skills in the labor market. The Quarterly Journal of Economics. 2017; 132 :1593–640. doi: 10.1093/qje/qjx022. [ CrossRef ] [ Google Scholar ]
  • Fraenkel Jack R., Wallen Norman E., Hyun Heo. How to Design and Evaluate Research in Education. Mac Graw Hill; New York: 2006. [ Google Scholar ]
  • Gabriel Florence, Marrone Rebecca, Van Sebille Ysabella, Kovanovic Vitomir, de Laat Maarten. Digital education strategies around the world: Practices and policies. Irish Educational Studies. 2022; 41 :85–106. doi: 10.1080/03323315.2021.2022513. [ CrossRef ] [ Google Scholar ]
  • Gordon Goren, Breazeal Cynthia, Engel Susan. Can children catch curiosity from a social robot?; Paper presented at the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction; Portland, OR, USA. March 2–5; 2015. pp. 91–98. [ Google Scholar ]
  • Hassani Hossein, Silva Emmanuel S., Unger Stephanie, TajMazinani Maedeh, Mac Feely Stephen. Artificial Intelligence (AI) or Intelligence Augmentation (IA): What Is the Future? AI. 2020; 1 :143–155. doi: 10.3390/ai1020008. [ CrossRef ] [ Google Scholar ]
  • Helm J. Matthew, Swiergosz Andrew M., Haeberle Heather S., Karnuta Jaret M., Schaffer Jonathan L., Krebs Viktor E., Spitzer Andrew I., Ramkumar Prem N. Machine learning and artificial intelligence: Definitions, applications, and future directions. Current Reviews in Musculoskeletal Medicine. 2020; 13 :69–76. doi: 10.1007/s12178-020-09600-8. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kafai Yasmin B., Burke Quinn. Connected Code: Why Children Need to Learn Programming. MIT Press; Cambridge: 2014. [ Google Scholar ]
  • Kaufman James C., Beghetto Ronald A. Beyond big and little: The four c model of creativity. Review of General Psychology. 2009; 13 :1–12. doi: 10.1037/a0013688. [ CrossRef ] [ Google Scholar ]
  • Lundberg Scott M., Erion Gabriel, Chen Hugh, DeGrave Alex, Prutkin Jordan M., Nair Bala, Katz Ronit, Himmelfarb Jonathan, Bansal Nisha, Lee Su-In. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence. 2020; 2 :56–67. doi: 10.1038/s42256-019-0138-9. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Makridakis Spyros. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures. 2017; 90 :46–60. doi: 10.1016/j.futures.2017.03.006. [ CrossRef ] [ Google Scholar ]
  • Markauskaite Lina, Marrone Rebecca, Poquet Oleksandra, Knight Simon, Martinez-Maldonado Roberto, Howard Sarah, Tondeur Jo, De Laat Maarten, Shum Simon Buckingham, Gašević Dragan, et al. Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI? Computers and Education: Artificial Intelligence. 2022; 3 :100056. doi: 10.1016/j.caeai.2022.100056. [ CrossRef ] [ Google Scholar ]
  • Olson Jay A., Nahas Johnny, Chmoulevitch Denis, Cropper Simon J., Webb Margaret E. Naming unrelated words predicts creativity. Proceedings of the National Academy of Sciences of the United States of America. 2021; 118 :e2022340118. doi: 10.1073/pnas.2022340118. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Patston Timothy J., Kaufman James C., Cropley Arthur J., Marrone Rebecca. What is creativity in education? A qualitative study of international curricula. Journal of Advanced Academics. 2021; 32 :207–30. doi: 10.1177/1932202X20978356. [ CrossRef ] [ Google Scholar ]
  • Rhodes Mel. An analysis of creativity. The Phi Delta Kappan. 1961; 42 :305–10. [ Google Scholar ]
  • Runco Mark A. Personal creativity: Definition and developmental issues. New Directions for Child and Adolescent Development. 1996; 1996 :3–30. doi: 10.1002/cd.23219967203. [ CrossRef ] [ Google Scholar ]
  • Ryu Miyoung, Han Seonkwan. The educational perception on artificial intelligence by elementary school teachers. Journal of the Korean Association of Information Education. 2018; 22 :317–24. doi: 10.14352/jkaie.2018.22.3.317. [ CrossRef ] [ Google Scholar ]
  • Teo Timothy, Tan Lynde. The theory of planned behavior (TPB) and pre-service teachers’ technology acceptance: A validation study using structural equation modeling. Journal of Technology and Teacher Education. 2012; 20 :89–104. [ Google Scholar ]
  • Torrance E. Paul. A longitudinal examination of the fourth grade slump in creativity. Gifted Child Quarterly. 1968; 12 :195–99. doi: 10.1177/001698626801200401. [ CrossRef ] [ Google Scholar ]
  • Trilling Bernie, Fadel Charles. 21st Century Skills: Learning for Life in Our Times. John Wiley & Sons; San Francisco: 2009. [ Google Scholar ]
  • Tubb Adeline L., Cropley David H., Marrone Rebecca L., Patston Timothy, Kaufman James C. The development of mathematical creativity across high school: Increasing, decreasing, or both? Thinking Skills and Creativity. 2020; 35 :100634. doi: 10.1016/j.tsc.2020.100634. [ CrossRef ] [ Google Scholar ]
  • VanLehn Kurt. The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist. 2011; 46 :197–221. doi: 10.1080/00461520.2011.611369. [ CrossRef ] [ Google Scholar ]
  • Vygotsky Lev S. Imagination and creativity in childhood. Journal of Russian & East European Psychology. 2004; 42 :7–97. [ Google Scholar ]
  • Wagenaar Theodore C., Babbie Earl R. Guided Activities for the Practice of Social Research. Wadsworth Publishing Company; Belmont: 2004. [ Google Scholar ]
  • Yufeia Liu, Salehb Salmiza, Jiahuic Huang, Syed Syed Mohamad. Review of the Application of Artificial Intelligence in Education. International Journal of Innovation, Creativity and Change. 2020; 12 :548–62. doi: 10.53333/IJICC2013/12850. [ CrossRef ] [ Google Scholar ]

AI Index Report

The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI. The report aims to be the world’s most credible and authoritative source for data and insights about AI.


Steering Committee Co-Directors

Jack Clark

Ray Perrault

Steering Committee Members

Erik Brynjolfsson

John Etchemendy

Katrina Ligett

Terah Lyons

James Manyika

Juan Carlos Niebles

Vanessa Parli

Yoav Shoham

Russell Wald

Staff Members

Loredana Fattorini

Nestor Maslej

Letter from the Co-Directors

AI has moved into its era of deployment; throughout 2022 and the beginning of 2023, new large-scale AI models have been released every month. These models, such as ChatGPT, Stable Diffusion, Whisper, and DALL-E 2, are capable of an increasingly broad range of tasks, from text manipulation and analysis, to image generation, to unprecedentedly good speech recognition. These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new. However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.

Although 2022 was the first year in a decade where private AI investment decreased, AI is still a topic of great interest to policymakers, industry leaders, researchers, and the public. Policymakers are talking about AI more than ever before. Industry leaders that have integrated AI into their businesses are seeing tangible cost and revenue benefits. The number of AI publications and collaborations continues to increase. And the public is forming sharper opinions about AI and which elements they like or dislike.

AI will continue to improve and, as such, become a greater part of all our lives. Given the increased presence of this technology and its potential for massive disruption, we should all begin thinking more critically about how exactly we want AI to be developed and deployed. We should also ask questions about who is deploying it—as our analysis shows, AI is increasingly defined by the actions of a small set of private sector actors, rather than a broader range of societal actors. This year’s AI Index paints a picture of where we are so far with AI, in order to highlight what might await us in the future.

- Jack Clark and Ray Perrault



Illustration by Nick Shepherd/Ikon Images

‘Harvard Thinking’: Is AI friend or foe? Wrong question.

In podcast, a lawyer, computer scientist, and statistician debate ethics of artificial intelligence

Samantha Laine Perfas

Harvard Staff Writer

ChatGPT’s launch in late 2022 heightened the debate about whether recent leaps in artificial intelligence technology will help or hurt humanity — with some experts warning that AI tools pose an existential threat and others predicting a new era of flourishing.

Perhaps we need a bit more nuance in the conversation, argues Sheila Jasanoff, a science and technology expert at Harvard Kennedy School.

“I’ve been struck, as somebody who’s been studying risk for decades and decades, at how inexplicit this idea of threat is,” Jasanoff said in this episode of “Harvard Thinking.” “There’s a disconnect between the kind of talk we hear about threat and the kind of specificity we hear about the promises. And I think that one of the things that troubles me is that imbalance in the imagination.”

Martin Wattenberg, a computer scientist at the School of Engineering and Applied Sciences, said he’s been surprised at some of the ways AI has developed. While Hollywood tends to depict enormous advances in math and science leading to humanity’s demise, what we’ve seen is a rise in creative augmentation through programs like Midjourney and DALL·E.

“In some ways it feels like the cutting edge [of AI] is with astonishing visuals, with humor even, with things that seem almost literary,” he said. “That’s been really surprising for a lot of people.”

Regardless of how AI continues to develop, ethics need to be at the forefront of conversations and integrated into education, said Susan Murphy, a statistician and associate faculty member at the Kempner Institute for the Study of Natural and Artificial Intelligence. One model of how that might be done is Harvard’s Embedded EthiCS initiative to weave philosophy and ethical modules into computer science coursework.

“We all have a responsibility to ensure our research is used ethically,” she said. “Often we go off the trail when someone has an enormous amount of hubris … and then there’s all these unintended consequences.”

In this episode, host Samantha Laine Perfas, Jasanoff, Wattenberg, and Murphy discuss the perils and promise of AI.

Sheila Jasanoff: So with AI, there are going to be consequences, and some of them will be good surprises, and others of them will be bad surprises. What is it that we want to do in the way of achieving a good society and where does the technology help us or hurt us?

Samantha Laine Perfas: Until recently, the capabilities of artificial intelligence have fallen short of human imagination. It’s now catching up, and it raises the question: How do we develop these technologies ethically?

Welcome to “Harvard Thinking,” a podcast where the life of the mind meets everyday life. Today we’re joined by:

Martin Wattenberg: Martin Wattenberg. I’m a computer scientist and a professor here at Harvard.

Laine Perfas: Martin is also part of Embedded EthiCS, a Harvard initiative to bring philosophers into computer science classrooms. Then:

Susan Murphy: Susan Murphy. I’m a professor also here at Harvard. I’m in statistics and the computer science department.

Laine Perfas: She’s also a faculty member at the Kempner Institute for the Study of Natural and Artificial Intelligence. She works at the intersection of AI and health. And finally:

Jasanoff: Sheila Jasanoff. I work at the Kennedy School of Government.

Laine Perfas: A pioneer in the field, she’s done a lot of work in science policy. Lately, a major topic of interest has been the governance of new and emerging technologies.

And I’m your host, Samantha Laine Perfas. I’m also a writer for The Harvard Gazette. And today we’ll be talking about the peril and promise of AI.

Artificial intelligence has been in the news a lot over the last year or so. And a lot of the coverage I see focuses on why we should fear it. What is it that is so scary about AI?

Jasanoff: I’ve been struck, as somebody who’s been studying risk for decades and decades, at how inexplicit this idea of threat is. Whether you look at the media or whether you look at fictive accounts or whatever, there is this coupling of the idea of extinction together with AI, but very little specificity about the pathways by which the extinction is going to happen. I mean, are people imagining that somehow the AIs will take control over nuclear arsenals? Or are they imagining that they will displace the human capacity to think, and therefore build in our own demise into ourselves? I mean, there’s a disconnect between the kind of talk we hear about threat and the kind of specificity we hear about the promises. And I think that one of the things that troubles me is that imbalance in the imagination.

Wattenberg: For me, the primary emotion and thinking about AI is just tremendous uncertainty about what’s going to happen. It feels almost parallel to the situation when people were first effectively using steam engines a couple hundred years ago, and there were immediate threats. In fact, a lot of the early steam engines literally would blow up. And that was a major safety issue. And people really worried about that. But if you think about the industrial revolution over time, there were a whole lot of other things that were very dangerous that happened, ranging from terrible things happening to workers and working conditions, to nuclear weapons, to the ozone layer starting to disappear, that I think would have been very hard to anticipate. One of the things that I feel like is a theme and what has worked well is very close observation. And so my feeling at this point is that, yeah, there is a lot of generalized worry that in fact, when there’s any very large change, there’s all sorts of ways that it can potentially go wrong. We may not be able to anticipate exactly what they are, but that doesn’t mean we should just be nihilistic about it. Instead, I think we should go into very deliberate, active information-gathering mode in a couple of ways.

Jasanoff: Martin, I think that’s an excellent entry point to get serious conversation going between us over at the Kennedy School and people in public health and other places, because it raises the question of whose responsibility is it to do that monitoring? I’m an environmental lawyer by training. I got into the field before Harvard was even teaching the subject. And one of the things that we chronically do not do as a society is invest in the monitoring, in the close supervision that you’re talking about. Time after time, we get seduced by the innovative spirit. I think that on the whole, the promise discourse tends to drown out the fear discourse, at least in America. I mean, it’s often considered part of what makes America great, right? That we are a nation of risk-takers. But it does raise the question, whether we’re willing to invest in the brakes at the same time that we’re investing in the accelerator. And this is where history suggests that we just don’t do it. Brakes are not as exciting as accelerators.

Laine Perfas: As someone who is not a computer scientist and not as well-versed in the nitty-gritty of artificial intelligence, I think a lot of the conversations that I hear or read or see seem so binary. I appreciate hearing some of the more nuanced ways that you all are thinking about this. And I’m curious if there’s other nuance that needs to be in this conversation.

Jasanoff: I think some of the nuance has to be around the whole idea of intelligence, right? People who are dealing with education theory, for instance, have been pointing out for a long time that one of the great faculties of the human mind is that we’re intelligent about very different things. I know people who are fantastic at math and have low, you know, emotional maturity and intelligence. And I know people who have no sense of direction, but still can compose music. And there has been a discussion about how the computer and also the personalities who do computer science may be guiding that idea of intelligence in overly narrowing ways.

Wattenberg: A lot of times you do hear questions about to what degree the people who are working on AI, that composition of this group, “Is that affecting what is happening, and in particular, the type of technologies developed?” And I think in many ways, you can point to aspects where there is an effect. But I would also say that collectively as a field, I think we are very interested in other approaches. I think there would be actually tremendous appetite for collaboration. There is another thing that I would say, which is that it’s not just the people, though. The technology itself has certain affordances of what turns out to be easy, what turns out to be really expensive to do, and that ends up being part of the equation as well. I think it’s important to take into account both the human aspect and how various human biases are coming into play, but also realize that to some extent, there are technical things happening. Some things just have turned out to be much easier than people expected. And some things have turned out to be harder. I would say the classic example of this that people talk about informally is that if you look at Hollywood depictions of AI, say Data from “Star Trek,” where people expected that the first big breakthroughs would be very mathematical, very literal. Instead, when we look at large language models, or, say, generative image models like DALL·E or Midjourney, in some ways it feels like the cutting edge is with astonishing visuals, with humor even, with things that seem almost literary in certain ways. And I think that’s been really surprising for a lot of people.

Murphy: I just wanted to jump in and, it’s a little bit of a different direction, but in terms of all of us who work in AI, we all have a responsibility to ensure our research is used ethically. The CS (Computer Science) department at Harvard is really trying hard to embed ethics in the classes. And I feel that’s a critical point because often we go off the trail when someone has an enormous amount of hubris and they think they don’t need anyone else and they can just do something, and then there’s all these unintended consequences. Whereas this Embedded EthiCS course, Martin, can you speak a little bit? I really feel like this is a bright point.

Wattenberg: Yeah. The general idea is that you want to make sure that students understand that ethics is just part of how you think about things in general, and so as part of many courses in the computer science departments, there’ll be a module that’s embedded, this is done with the philosophy department, to think about the very complicated issues that come up. There really is this sense that it is part of what we need to think about.

Jasanoff: There are two points I’d like to make in this connection. Many years ago, I was in a discussion when nanotechnology was the newest kid on the block, and all of humanity’s problems would be solved by going nano. There is this element of hype around new technologies. But people were talking about nano ethics. And a skeptical voice in the room said, “Well, there’s a lot of imitation going on here, because bioethics is a field and nanoethics is building on that.” This person said, “Does anybody know of a single case where bioethics said, stop, do not continue this line of research”? And there was dead silence in the room. And that fits with a perception that as soon as you turn ethics into a set of modularized principles, you end up standardizing the moral faculties in some ways. With bioethics, where, after all, we have now 30-plus years of experience with packaged or principalist bioethics, as people call it, people have turned away from that and have said that in order to really grapple with the moral dilemmas around such things as how much should we intervene in human reproduction, for instance, the philosophy department is not the right place to start. I don’t want to be a party spoiler in a sense, but there is a whole debate about what we’re trying to accomplish by thinking about ethics as a kind of add-on to the process instead of, let’s say, starting with the moral questions. What is it that we want to do in the way of achieving a good society? And where does the technology help us or hurt us? As opposed to starting with the technology and saying, “This is what the technology can do. Now imagine ethically problematic consequences.”

Murphy: I think I operated at a much lower level. For example, we have these algorithms, they’re running online, and they’re with people that are really struggling with some sort of health condition. And we’re thinking, “How are we going to monitor these algorithms as they learn about the individual and provide different kinds of suggestions or nudges?” And our main red flag is ethics. Are we overburdening these people? Are we causing trouble in their lives? It’s just, it’s all very practical. The algorithm we’re designing is all about the patient comes first, research is second.

Laine Perfas: I’m also curious if pursuing ethics is a challenge because not everyone is on board with using technology ethically. A lot of times technologies are used for malicious purposes or for personal gain. I’m wondering as artificial intelligence continues to develop so quickly, beyond just ethics, how do we also create space for things like regulation and oversight?

Jasanoff: I would take issue with the idea that ethics can be separated from regulation and oversight, because, after all, regulation and oversight express a population’s collective values. Regulation is a profoundly ethical act. It says that there are certain places we should go and certain places that we should not go. You know, I think that we haven’t put the question of money into this discussion. I mean, there’s this idea that the technology is just advancing by itself. It’s not just the brilliant engineer who has got nothing but the welfare of the patient in mind. It’s also, what are the spinoff technologies? Who’s going to come forward with the venture capital? Whose preferences, whose anticipatory ideas get picked up and promoted? Where is that discussion going to be had? Susan, I’m absolutely in sympathy with you. I don’t think it’s low-level at all, and I think in fact calling it low-level, the sort of pragmatic, on-the-ground thing, is disabling what is a noble instinct. I mean, that’s part of the Hippocratic Oath. If we’re going to deliver a medical service, we should do it for the benefit of the patient, right? I don’t think that’s low-level at all, but it is kind of linear. That is, when we say that it shouldn’t nudge the patient in the wrong direction. But supposing our problem is obesity, which is a big problem in this country. Should we be tackling at the level of nudging the patient into more healthful eating, or should we also be discussing how Lunchables get into the lunchbox?

Murphy: Right on with influence of money, Sheila. At least in my world, you put your finger right on a big concern. There’s very strong monetary incentives to go in certain directions, and it’s hard to fight against that.

Wattenberg: Yeah, I think that this idea of what is equitable and what isn’t, this is absolutely critical. When we talk about what are the worries with AI, this is one that you hear people talk about a lot: What if it ends up helping the already powerful become even more powerful? Now, I will say that those to me seem like incredibly legitimate worries. They also do not seem like inevitabilities to me. I think there are many paths to making sure that technology can work for many people. There’s another thing that I think is actually potentially very interesting; some of these technologies, so ChatGPT, for example, may help less-skilled people more than highly skilled people. There’s an interesting study that came out recently where people were thinking of ideas in a business setting. And what they found is that the most skilled people who were tested weren’t improved that much by using ChatGPT, but the performance was very much improved by people who were less skilled. And that’s interesting because it’s sort of flattening a curve in a way. Now, whether that study holds up I don’t know. But it’s an interesting thought, you know, I think that we should not assume that it is going to increase inequality and in fact what we should try to do is work very hard so it does not.

Jasanoff: Martin, if I could throw the question back at you: But supposing what it means is that the less-skilled jobs can be replaced by a ChatGPT, but the higher-skilled jobs cannot? Then is that not a different sort of take on the problem that routinized tasks would be better performed by mechanical instruments whose job it is to do routine? That’s relatively easy to appreciate as just a logical point. Since you referred to the history of technology, we’ve seen when machine looms were first introduced, they displaced the people at the lower end of the scale. So what has happened over hundreds of years? We still appreciate the craft skills, but now it’s the very rich people who can afford the craft. Hand-loomed silk fabrics and hand-embroidered seed pearls still command unbelievable prices. It’s just that, most people can only afford glass pearls or whatever. And we are certainly in a much more technologically interesting world, there’s no doubt about it. But the inequality problems, if anything, are worse. That is a kind of problem that does preoccupy us on our side of the river.

Wattenberg: I think there’s this broad question about technology in general, and then there’s, I think, the specific question about what is different about AI. This I think we don’t know yet. And, to go back to what you began with in terms of will this lead to job displacement, there is this famous saying that the worries about AI are all ultimately worries about capitalism. And I think it’s a fairly deep saying in a lot of ways. But even within the framework that we have, could we reconfigure the economic system is one question, but even if we can’t, I feel like within that, there are lots of things we can do to make the technology work better.

Laine Perfas: Martin, one thing you mentioned earlier was an effort to democratize the technology. When I think about the technology being as widely available as it is, that requires a lot of trust in one another, and we don’t have a lot of that going around these days.

Wattenberg: That’s a very important point. And the idea of like, how do we democratize the technology without making it too easily usable by bad actors. It’s hard. I don’t know the answer to this. I do think this is where this idea of observation comes in. To go back to a metaphor Sheila used before was a car, of brakes and accelerator. And when I think about driving a car, what makes a safe driver? Is it access to brakes and accelerator? Yeah, that’s part of it, but what you really need is clear vision. You need a dashboard, you need a speedometer, you need a check-engine light, you need an airbag. And in a sense, thinking in terms of just brakes or acceleration is a very narrow way to approach the problem. And instead we should think about, OK, what is the equivalent of an airbag? Are there economic cushions that we could create? What is the equivalent of a speedometer, of a fuel gauge? This is why I believe that, and literally this is what my research is largely about at this point, is understanding both what neural networks are doing internally and thinking about their effects on the world. Because I think if you’re going to drive, it’s not just a matter of thinking do I speed up or slow down, but you really have to look around you and look at what’s actually happening in the world.

Jasanoff: If I could double down a bit, the brake and the accelerator, of course, are metaphors. And I was making the point that as a society, we tend to favor certain kinds of developments more than others. The stop, look, be wise, be systemic, do recursive analysis, those are things that we systematically do not invest the same kind of resources in as “move quickly and break things.” One of the things we have to cultivate alongside hubris is humility, and I think you and I are on the same page, and Susan too, that it has to be a much more rounded way of looking at technological systems. Again, I’m an environmental lawyer and on the whole, we started investing in waste management much later than we started investing in production. And we needed some really big disasters, such as the entire nuclear waste problem around the sites where the nuclear weapons plants are built. And now people, of course, recognize that. With AI, there are going to be consequences, and some of them will be, as you said, good surprises, and others of them will be bad surprises. Let me use a different driving metaphor: Are we awake at the wheel, or are we asleep at the wheel? I think the question whether we can be globally sleepwalking is a genuine, real question. It’s not something that AIs have to invent for us.

Wattenberg: Yeah, when I hear you talking about the car, you’re also talking about looking ahead, looking out the window, trying to figure out what’s going on. And I think that’s the key thing. It’s figuring out what are the things we want to worry about. There are things that you’ve alluded to that were genuine disasters that took decades for people to figure out. And one of the things I think about is if we could go back in time, what structures would we put into place to make sure we were worrying about the right things? That we press the brakes when necessary. We accelerate when necessary. We turn the wheels when necessary. What sort of observational capabilities can we build in to gain information to see where we’re going?

Laine Perfas: What is the role of universities and research institutions in this conversation as opposed to someone who might be using the technology for profit?

Wattenberg: I feel like there’s actually tremendously good work happening both in industry and academia. I also do think that these systems are somewhat less opposed than we believe. But I do think there is a big difference, which is that in a university, we can work on basic science. That can happen in industry too, but it really is something that is the core mission of the university, to figure things out, to understand the truth. And I think that attitude of trying to understand, there’s a lot the university can offer.

Jasanoff: We in liberal arts universities have been committed to the idea that what we’re training is future citizens. And so we take advanced adolescents and produce young adults. And during those four years, they undergo a profound transformation. I think that if we put side by side with the acquisition of knowledge, the production of citizens, then I think that there’s actually a huge promissory space that we are not currently filling as we might. What do we need to do to take citizens of the United States in the 21st century and make sure that, whether they go into industry or whether they go into the military or whatever, certain habits of mind will stay with them? A spirit of skepticism, a spirit of modesty, I think that is every bit as important a mission as knowledge acquisition for its own sake.

Laine Perfas: I want to pivot a little bit; we’ve been talking a lot about the threats and the concerns. A lot of you have also done work with very exciting things in AI.

Murphy: One thing that I’m excited about in terms of AI is you’re seeing hospitals use AI to better allocate scarce resources. For example, identify people who are most likely to have to come back into the hospital later, and so then they can allocate more resources to these people to prevent them from having to re-enter the hospital. Many resources, particularly in the healthcare system, are incredibly scarce. And whenever AI can be harnessed to allocate those resources in a way which is more equitable, I think this is great.

Wattenberg: I have to say, there’s a massive disconnect between the very-high-level conversations that we’re having and what I’m seeing anecdotally, which is this sort of lighthearted, mildly positive feeling that this is fun and working out. And I know of several people who are junior coders, for example, who will just talk to ChatGPT to help understand code that maybe colleagues have written. And they get great answers and they feel like this is this significant improvement to their life and they’re becoming much more effective. They’re learning from it. That’s something that I would just point to as something for us in the academic world to look at carefully.

Jasanoff: Any sort of powerful technology, there are these dimensions of whether people can take the technology and make it their own and do things that set creative instincts free. I’ve certainly found among my students who are adept at using AIs that there’s a lot of excitement about what one might call sort of creativity-expanding dimensions of the technology. But then there are creativity-dulling aspects of the technologies as well.

Laine Perfas: What are things that would be helpful to consider as we think about AI and the place it will have in our future?

Wattenberg: I have one, I would say, prime directive for people who want to know more about AI, which is to try it out yourself. Because one of the things I’ve discovered is that learning about it by hearsay is really hard. And it’s very distorting. And you often hear what you want to hear, or if you’re a pessimistic person, what you don’t want to hear. Today, you have any number of free online chatbots that you can use. And my strongest piece of advice is just try them out yourself. Play with them, spend a few hours, try different ways of interacting, try playful things, try yelling at it, try giving it math problems if you want, but try a variety of things. Because this is a case where, like, your own personal unmediated experience is going to be an incredibly important guide. And then that’s going to very much help you in understanding all of the other debates you hear.

Jasanoff: I’m totally in favor of developing an experimental, playful relationship with the AIs, but at the same time keep certain questions in the back of one’s mind. Who designed this? Who owns it? Are there intellectual property rights in it? When I’m playing with it, is somebody recording the data of me playing with it? What’s happening to those data? And what could go wrong? And then the single thing that I would suggest is, along with asking about the promises, ask about the distributive implications. To whom will the promises bring benefits? From whom might they actually take some resources away?

Laine Perfas: Thank you all so much for such a great conversation.

Murphy: Thank you.

Laine Perfas: Thanks for listening. For a transcript of this episode and to see all of our other episodes, visit harvard.edu/thinking. This episode was hosted and produced by me, Samantha Laine Perfas. It was edited by Ryan Mulcahy, Paul Makishima, and Simona Covel, with additional support from Al Powell. Original music and sound design by Noel Flatt. Produced by Harvard University.

Recommended reading

  • Will ChatGPT supplant us as writers, thinkers? by The Harvard Gazette
  • How artificial intelligence learned language by The Harvard Gazette
  • AI is coming fast and it’s going to be a rough ride by The Harvard Gazette
  • The Ethics of Invention by Sheila Jasanoff
  • Martin Wattenberg: ML Visualization and Interpretability on The Gradient podcast



Secular Artists Band Together Against Artificial Intelligence: “This Assault on Human Creativity Must Be Stopped,” Says Jon Bon Jovi, Pearl Jam, and More Than 200 Others


With each passing month, AI technology grows more capable, creating practically anything the mind can conceive, and producing output from a prompt in a matter of seconds. This rapid advance over the last decade has left many singers, performers, and musicians concerned about the integration of the technology into the entertainment industry.

That concern reached the Artist Rights Alliance, which, supported by artists like Jon Bon Jovi, presented a letter warning that AI is an “assault on human creativity.” Signed by more than 200 artists, including Bon Jovi, Pearl Jam, Sheryl Crow, and Stevie Wonder, the letter laid out the concerns many artists have about AI: “Unfortunately, some platforms and developers are employing AI to sabotage creativity and undermine artists, songwriters, musicians and rights holders.”

The letter continued, “When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods. Some of the biggest and most powerful companies are, without permission, using our work to train AI models. These efforts are directly aimed at replacing the work of human artists with massive quantities of AI-created ‘sounds’ and ‘images’ that substantially dilute the royalty pools that are paid out to artists…” (READ MORE)


