MIT Libraries home DSpace@MIT

  • DSpace@MIT Home
  • MIT Libraries
  • Graduate Theses

ChatGPT and the Future of Management Consulting: Opportunities and Challenges Ahead

Thumbnail

Terms of use

Date issued, collections.

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • My Account Login
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts
  • Open access
  • Published: 03 April 2024

ChatGPT and the digitisation of writing

  • Xin Zhao   ORCID: orcid.org/0000-0002-8968-6912 1 ,
  • Andrew Cox 1 &
  • Liang Cai 2  

Humanities and Social Sciences Communications volume  11 , Article number:  482 ( 2024 ) Cite this article

291 Accesses

2 Altmetric

Metrics details

  • Language and linguistics

The aim of this study is to uncover how students’ practices of writing in higher education are being impacted by ChatGPT. The use of ChatGPT and other generative AI needs to be set in the context of a longer-term process of the digitisation of writing, where many tools are being employed by students to support writing because it is a complex iterative process. Generative AI appears to have had a large impact on how students write, and we propose a model of generative AI literacy to assess their capabilities in doing so. Semi-structured interviews and observation data were collected at a British University with 23 students from diverse backgrounds, including the UK, USA, China, Japan, and Saudi Arabia. The data was analysed thematically. It was found that students used ChatGPT alongside many other tools, and in rather individualistic ways often to address specific challenges they felt they had with writing. Their main concerns were around plagiarism, information inaccuracy and technology dependence. There was a relatively weak understanding or interest in the ethical issues around the exploitative and environmental impacts of generative AI. The social controversy around ChatGPT can be seen as a useful opportunity to engage students in a discussion about the digitisation of writing and promote AI literacy in this context.

Similar content being viewed by others

dissertation mit chatgpt

Artificial Intelligence in studies—use of ChatGPT and AI-based tools among students in Germany

Jörg von Garrel & Jana Mayer

dissertation mit chatgpt

Perception, performance, and detectability of conversational artificial intelligence across 32 university courses

Hazem Ibrahim, Fengyuan Liu, … Yasir Zaki

dissertation mit chatgpt

A multinational study on the factors influencing university students’ attitudes and usage of ChatGPT

Maram Abdaljaleel, Muna Barakat, … Malik Sallam

Introduction

The use of AI in education (AIEd) has been a discrete area of study for several decades, albeit the majority of studies have been from a technical development standpoint with less involvement of educators (Zawacki-Richter et al., 2019 ). Development of AIEd has tended to be concentrated on Intelligent Tutoring Systems (Guan et al., 2020 ). Use of such technologies in an educational context has not been without its critics (e.g. Selwyn, 2019 ). Meanwhile, AI has already made a relatively unheralded appearance in low-level features of much technology supporting everyday knowledge work such as search, recommendation, transcription and translation. It has also appeared increasingly within writing support tools, such as grammar checkers, as well as in plagiarism detection.

This picture of gradual change was dramatically disrupted in November 2022 by the launch of ChatGPT. Particularly in education, generative AI has created excitement but is also a considerable concern (Kasneci et al., 2023 , Trust et al., 2023 ; Lo, 2023 ). The usage figures of ChatGPT show an incredibly rapid rise in popularity and the potential benefits claimed for it are wide-ranging. Much of the fear has revolved around its potential impact on academic integrity. What is lacking to date are in-depth studies that explore how ChatGPT is actually used and experienced by students. Since it is in writing text that generative AI excels, and because writing is central to many forms of learning, including assessment, this paper focuses on how generative AI is changing how students write. With the increasing use of AI in many domains of activity there is a growing interest in defining AI literacy (Long and Magerko, 2020 ). We build on this work to propose a model of generative AI literacy as a framework to assess student use of ChatGPT in their writing.

In this context, the present study had the aim of uncovering how postgraduate students’ practices of writing were impacted by ChatGPT, with the specific research questions for the study being:

How were postgraduate students using ChatGPT and other digital writing tools for writing tasks in the summer of 2023?

What do students consider the benefits and problems of ChatGPT’s use?

What are the strengths and weaknesses in student generative AI literacy?

The digitisation of writing

The impacts of generative AI tools such as ChatGPT in education, need to be understood in the context of long-term digitisation of writing . The digitisation of writing is a major shift in writing as a fundamental process of expression and learning (Strobl et al. 2019 ). It carries particular significance because of the cultural value and status placed on good writing, particularly in some academic disciplines. Through the introduction of word processors, then spelling, grammar and style checking, then connectivity, and now generative AI tools, this digitisation process is having profound effects on writing, albeit they remain difficult to pinpoint because they are primarily mental rather than directly visible changes (Kruse and Rapp, 2019 ).

There has been an acceleration of this digitisation process in the last decade with the growing number of AI-powered writing assistants that are appearing. Godwin-Jones ( 2022 ) differentiates four types of such tools:

Automatic writing evaluation (AWE) which provides feedback on completed work;

Automatic written corrective feedback (AWCF) which offers synchronous feedback on spelling, grammar and or style as text is written;

Translation tools;

Text generation tools which create bodies of text from a short stimulus.

We could add to this other tools and apps used during the research process at the beginning of writing such as for search, text summarisation (e.g., Scholarcy, iris.ai, summarisebot) and literature reviewing (e.g., ResearchRabbit, Gecko, connectedpapers). There are also well-established tools for referencing which link search and reading to producing a final reference list for a completed assignment (e.g., EndNote, RefWorks, Zotero, and Mendeley).

If writing as a process consists of the stages of “prewriting, planning, drafting, revising, and editing” (Strobl et al., 2019 , p. 38), then AWE and AWCF are mostly used in the latter two stages. Our previous research revealed, however, that rephrasing tools such as Wordtune are used both in improving text at the revision stage, but also in breaking through mental blocks in the early stages of writing or even planning (Zhao et al., 2023 ). Similarly, Malik et al. ( 2023 ) found Indonesian students also using a wide range of AI-based tools in their writing. Translation tools might also be used at various stages, such as in processing reading and drafting text (Zhao et al., 2024 ). So just as writing is a complex iterative process, the use of digital writing tools is complex. Our study of Wordtune also found it being commonly being combined with other writing tools (Zhao et al., 2023 ). While many such tools have multiple functionalities, they tend to be used for specific tasks for which they are best known. For example, many tools will offer some support for translation, but users tend to have a preferred tool for this function.

Thus, it is important to recognise that many learners were already using multiple tools at different stages of the writing process prior to the dramatic debut of ChatGPT, so the use of generative AI appears in a pre-existing landscape of digital writing. Yet the panoply of tools to support the writing process has been little analysed, particularly from the perspective of how they are used in practice and in conjunction with each other, during writing as a complex, iterative process.

Generative AI

ChatGPT’s launch has been a dramatic, potentially paradigm-shifting intervention, influencing how writing as a central aspect of learning is performed, but also the general perception of AI in Education. It has seen an extraordinary explosion of use, with a claimed 100 million users within two months of its launch (Trust et al., 2023 ). At the same time, it has been deeply controversial, particularly within education, and has been linked to many of the wider debates on the ethics of AI around bias, privacy and impact on society. Whereas the widespread use in the writing of tools such as Grammarly and Google Translate seems to have been tacitly accepted with relatively little controversy, ChatGPT has drawn huge debate to the digitisation of writing (Adeshola and Adepoju, 2023 ; AlAfnan et al., 2023 ; Memarian and Doleck, 2023 ). This may be partly because tools such as Turnitin have at least partly given teachers the lead in the “arms race” with unfair means. From an educational point of view, it is the impossibility of detecting generative AI’s use that makes it so controversial (Uzun, 2023 ).

Yet it is hard to deny the power and user-friendliness of ChatGPT. While it remains essentially a form of narrow AI (as opposed to a general AI that mimics the breadth of human intelligence), it does perform a wide range of tasks across the writing process, potentially composing a complete essay, but also including the ability to: summarise readings or a topic, produce an outline for a text, draft text, rewrite text in different styles or lengths, and check grammar and spelling (UNESCO, 2023 ). Thus, ChatGPT has the ability to write entire texts from a prompt or support specific processes in writing. In addition, it can also write computer code, solve math problems, etc. Yet ChatGPT, at least in its early manifestation, poses many informational and ethical problems (EPIC, 2023 ) (Table  1 ).

Some of these problems are being addressed in later versions of ChatGPT or in other text generation tools such as Bard or the new Bing. Moreover, they are not inherent to large language models but rather features of systems built by BigTech. Nevertheless, given the great AI capabilities of BigTech in terms of resources including data and so their power to define the definition of AI, we anticipate that users need to be aware of such potential issues.

AI literacy

In the last 5 years, there have been growing suggestions of the need to define AI literacy, because of the increasingly pervasive presence of AI in everyday lives and work. A widely cited definition is offered by Long and Magerko ( 2020 ):

We define AI literacy as a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.

The authors break AI literacy down under five headings, with 17 components under those headings:

What AI is—this is knowledge such as how to recognise AI when it is encountered and understanding distinctions between general and narrow AI.

What it can do—this consists of differentiating the tasks AI is good at doing from those it is not good at, and also being able to imagine future uses, reflecting the evolving nature of AI.

How AI works—includes ideas such as representation and has an emphasis on data literacy, emphasising learning from data and the need for critical interpretation of data.

How it should be used—under which ethics is placed.

How people perceive it.

This is useful in contrasting to more technically oriented definitions such as that of Pinski and Benlian ( 2023 ) which lack the ethical and critical dimension. Another useful definition is offered by Ridley and Pawlick-Potts ( 2021 ) when they suggest that:

Algorithmic literacy is the skill, expertise, and awareness to, understand and reason about algorithms and their processes; recognise and interpret their use in systems (whether embedded or overt); create and apply algorithmic techniques and tools to problems in a variety of domains; assess the influence and effect of algorithms in social, cultural, economic, and political contexts; position the individual as a co-constituent in algorithmic decision-making.

This is a concise expression of the key aspects, integrating notions of algorithmic literacy, which focuses on the way that AI is often encountered indirectly through functions such as filtering and personalisation on online platforms. Levels of student AI literacy have been much investigated, often using Long and Magerko’s ( 2020 ) framework (e.g. Kong et al. 2022 ). However, most of this work was done before the advent of ChatGPT.

The capability of ChatGPT and other generative AI to create a significant body of content from a short prompt has shifted concepts of what AI is. We suggest that this implies the need to update our notion of AI literacy. In the light of the potential and critiques of the technology (discussed above), we can suggest that generative AI literacy might be defined under five headings:

Pragmatic understanding: The individual can use generative AI effectively and interpret the information it produces critically

The individual can pick the right tool for the task, in the context of the proliferation of writing tools (including alternative generative AI to ChatGPT)

The individual learns to use the chosen tool effectively for a specific task

Deciding where in the writing process to use it, e.g. for Search, brainstorming, structuring text etc

Uses the tool effectively through prompt engineering), such as by

Being CLEAR (concise, logical, explicit, adaptive and reflective) (Lo, 2023 )

Providing context for prompts posed

Defining what sort of answer is required

Rephrasing questions

Asking for sources used

Iterating and synthesising results

Updating their knowledge as tools develop rapidly

The individual interprets generative AI outputs critically, given an understanding of how they work and their limits

Information accuracy, currency, citeability

Safety understanding: The individual can use generative AI safely

Is aware of privacy risks

Reflective understanding: The individual can assess and take action to manage the impacts of AI on their experience in the educational context

Impacts on own skills and learning

Impacts on social connection, including the social aspects of learning

Socio-ethical understanding: The individual understands the societal impacts of AI, including

IPR issues relating to how models are trained

Impact on information culture, misinformation and disinformation

Social impacts such as through exploitative process of creation, and the impacts on jobs/ job enrichment

Equity of access

Environmental impacts

Implications of the undue power of BigTech

Contextual understanding: The individual understands how to use generative AI appropriately in a particular context and make their own use explicit, as appropriate

What is appropriate to context

How to make use transparent and cite appropriately

Generative AI in education

It has been education in particular that has been disrupted by the potential and risks of generative AI in 2023. Yet while much has been written about this in editorials and opinion pieces (Kasneci et al., 2023 ; Trust et al., 2023 ) usually to inform educators about how to use it, we are only at the beginning of learning its impact on student behaviour through empirical research. This is important because it seems likely that students have taken up its use far more quickly than teachers.

We do have a few early studies of use by students. A number of surveys by Best Colleges indicate that though US students had concerns about whether it was fair to use ChatGPT for assessments, they were using it and saw it as soon to become the norm (Welding, 2023 ). Chan and Hu ( 2023 ) found Hong Kong students positive about generative AI and willing to use it. This was partly because of its direct uses for brainstorming, individualised assistance with questions, and help with literature reviewing (such as summarisation). But their willingness to use it was partly because they saw it as representing long-term trends in technologies. They had concerns about its accuracy, the transparency of its working, the privacy of their data, the risk of becoming over-reliant, the impacts on employment and conflicts with human values. This accords well with a study by Attewell ( 2023 ) based on focus groups with UK students. This again found generative AI being used in a wide range of ways. A similar range of concerns were also expressed such as about the reliability of information from generative AI, privacy, equity of access and fears of becoming over-reliant on it. Students wanted educational institutions to have clearer policies and offer training in the use of generative AI. Interestingly, they also wanted student involvement in generating policy on AI.

If students are generally positive about ChatGPT’s use it is staff who express more concerns. Cardon et al. ( 2023 ) conducted a survey of business communication instructors. The main concern of this group of educators was that students would use it to cheat. But they saw a range of negative impacts on learning, such as

Less critical thinking/ creativity—itself seen as part of a wider malaise, and the crisis of creativity

Less writing skills

Less authenticity

Less agency because of dependence on such tools

Less commitment to authenticity in communication, such as valuing authorial voice and sincerity in communications

They also acknowledged that it can be helpful e.g. in the early stages of writing, and certainly enhanced the efficiency of writing. However they did believe such tools would be used in the workplace, so it was unavoidable that it had to be taught.

Methodology

To answer the research questions, we employed a qualitative methodology within the interpretivist paradigm. We used a combination of semi-structured interviews and observational techniques to gain an understanding of how students selectively employed digital tools in their writing processes and to understand their experiences and concerns regarding the use of generative AI. We recruited participants through an email invitation circulated to students asking for participants who were using “digital tools” for writing. We also asked interviewees to suggest other suitable participants. Our study included 23 participants of diverse nationalities, including students from the UK, China, India, Thailand, Japan, Greece, Malaysia, the USA, and other regions (Table  2 ). These students were pursuing a range of academic degrees, including postgraduate taught and postgraduate research programmes. All the participants were in the process of undertaking academic tasks, such as writing dissertations or theses. The interviews were conducted in the summer of 2023, this was before the university had issued its policy on AI use.

At the beginning of the interview, participants were asked to demonstrate their writing process for an academic essay/dissertation and explain how they use digital tools to support their writing. The second part of the interview participants were asked a series of questions, including about the tools they used during the writing process, how they had used ChatGPT, and what their concerns were about it, such as data privacy, inclusivity, accessibility, bias, ethics, and the potential impact of generative AI on education. Thematic Analysis served as our chosen method for analysing the qualitative data, enabling us to gain a nuanced understanding of students’ perceptions of digital writing and ChatGPT in particular (Braun and Clarke, 2006 ). The research received ethical approval from the University of Sheffield. Voluntary, informed consent was gained from participants. All the data were anonymised for the purposes of analysis and reporting.

Many tools used in the complex task of writing

One theme that emerged strongly from the data was that students were routinely using a wide range of digital tools (many with an AI component) throughout the academic writing process. The most commonly mentioned tools were grammar checkers (especially Grammarly), paraphrasing tools (Quillbot and Wordtune) and translation tools (e.g., DeepL and Google Translate). Somewhat less commonly other types of tools were in use such as for managing references (Endnote, MyBib) and plagiarism detection (Q-text).

Students who were native English speakers used more basic tools such as Word’s grammar checking or very specialist tools such as to manage references. In contrast, non-English speakers were using a wider range of tools, with considerable experience of having done so built up over time.

Such services seem to be used in quite individualistic ways and critically the impression was that their use had been learned from classmates, social media (such as Youtube or Little Red Book), and trial and error—rather than the institution and educators, although one individual was using Grammarly on supervisory advice.

Sometimes students paid a subscription for such tools; sometimes not. ChatGPT was the most frequently paid-for tool among all those mentioned.

ChatGPT: Used in many different ways

Interviewees talked about other tools as much as about ChatGPT and for these interviewees, it was early days with ChatGPT. Many had used it to only a limited extent. Nevertheless, there was evidence of ChatGPT being employed throughout the writing process. There were frequent mentions of uses to:

understand difficult concepts including understanding assignment briefs

summarise readings during the research process

suggest structures for writing

get words down on the page and break through a mental block

rephrase text and check grammar

Central uses that ChatGPT was uniquely good at was gaining an understanding of an assignment brief and then structuring ideas:

This is the topic and I first use ChatGPT to give me some idea about this topic. And actually, when I saw this topic, I didn’t really understand it. I use ChatGPT to give me some explanations. [9, Thailand]
So I asked it how to organise an essay about this topic. And then it gave me this structure. So I use this as a reference. [2, China]

Some also used it to check that the final text met the brief:

Then in writing or after writing, I would ask if this paragraph was in line with classwork requirements. I would send it all the requirements, and then ask it if I could write like this? [1, China]

Some used it for searching for literature.

I think the most difficult part of writing an essay is the idea you need to make your own opinions and you need to structure your essay but you need to find examples and literature to support your argument. If you just google it or search […] sometimes you can’t find much relevant information or only a few. […] You can just find less relevant information and articles but with ChatGPT, it will collect the most relevant information for you. [1, China].

For other tasks, such as summarising readings, rephrasing ideas into more academic language and grammar checking participants used other tools as much as ChatGPT. Grammarly, Quillbot, Wordtune and translation tools had heavy use in rewriting. Students tended to use ChatGPT for just one or a few of these tasks, not all of them, linked to which aspect of writing they found hardest.

Students’ individual explorations of the tool had produced quite distinct patterns of use, often linked to the areas of weakness they perceived in their own writing. For example, participant 23 emphasised using it to generate analogies to help them understand complex ideas in the context of moving to study computer science from another discipline:

If I don’t understand, repeat, repeat, repeat, repeat, repeat until I understand and then proceed. And that is how my process is. It’s a lot of repetition, a lot of reclarifying myself and always detecting all this reading and fact checking. [23, Malaysia]

Participant 17 described using in complex way to support reading:

I would copy literally like a whole article into it and then say, summarise this in bullet points. I’ll then write it on a piece of paper because that just helps me to like process it in my head as I write the words. Um, and then after that, and then I’ll read the paper like normal [17, UK]

Participant 15 described using it directly in the writing process:

Sometimes if I’ve written something, what I do is I ask for a feedback and I ask or like, you know, if I want a paraphrasing also. So what I do is I prompt it by asking that the check for flow and paraphrase and then it will kind of give it suggestions and sometimes it adds its own things. And then if I don’t want something, I can just not have it. But most of the times what happens is you get a pretty good idea of like, you know, yeah, this is something good and then you can build up from there. So it gives you that initial kind of a boost and then it becomes easier to build your arguments or build your paragraphs. (15, India)

Participant 20 stands out as potentially using it in a way to simply write ideas for him. He talked about “delegating” tasks to it, constructing himself as in control of the process:

So when I break it down, so usually when I have an academic paper, say it’s 10 pages, I have one thought for each page, like one heading for each page in my head. And once I have that set, then I just start off with ChatGPT straight up. [20, India]

Such complex uses, often involved quite interactive exchanges. Participant 23 used the term “bouncing” ideas off it.

I always have sort of like a conversation with it. [23, Malaysia]

Indeed, one interesting dimension of these interactions was the sense that the experience was something quite close to discussing with another human.

Just directly ask a question. Yes. As if I’m asking a human being. [22, China]
But always, always, always remember it is still a tool. It is not a living being. That is that’s why I keep reminding myself it’s not a living being. Always cross check your back. [23, Malaysia]

The quote implies a genuine struggle not to see ChatGPT as human. Equally, the value of ChatGPT was often linked to not needing to ask tutors or peers for help. Of course, it also gives immediate responses unlike them, but it raises the question of how ChatGPT is impacting the social dimension of learning.

ChatGPT was also being commonly used for non-academic writing, such as for job applications. Several interviewees mentioned using it for computer coding. There were also very specific uses, such as for converting text to Latex.

ChatGPT benefits and worries

The way that they described it, ChatGPT and the other digital tools they used, gave participants a sense of being more efficient and productive.

Thinking about it, basically, there is no big change, because there are still steps in writing, mainly those steps, but the efficiency has been improved. [1, China]
I think he saves me a lot of time when I’m reading like reference that’s really long. Helps me summarise it. I think it’s really efficient. [11, China]

This discourse implied that it did not really change what they did or learned, just speeded the process up and saved them time. This may be a rationalisation and disguise the loss of learning that the “efficiencies” might cause. A lot of the discourse around needing to use tools revolved around “laziness” but also a sense of a lack of confidence and anxiety. Using ChatGPT was often justified for reducing stress.

I’ll say I feel less stressful because we get to work more efficiently by using the tools, by using AI tools compared to the traditional way. [11, China]

There was a sense of ChatGPT being the future and there is an inevitability to it overtaking “traditional” learning practices.

Ten years ago, study style, study style, and the current one is very different. [23, Malaysia]

Yet, while it was being used rather extensively our participants were wary of ChatGPT for a number of reasons. Three were mentioned by most interviewees. The first was the unreliability of the information it output and the need to “fact check” it. There was less mention of specific issues such as failing to acknowledge sources or making up references. A second major concern was that its use would be detected by plagiarism detection tools (a concern they held regardless of the fact that it is generally not detectable). More positively participant 19 wanted to preserve her “voice”

I feel like it doesn’t sound like me and I do want to write in my own voice or in my own words as much as I can. [19, India]

Similarly, participant 23 thought that ChatGPT created a text that often read as “auto-generated” [23] so would not use it in many contexts, e.g. writing job applications because it would not effectively differentiate them from other candidates. Implicitly, they did not see this as a problem in the context of academic writing, suggesting that they did not see individuality as valued in that context.

There were certainly some students who did see using ChatGPT as impacting their learning:

But it sometimes it stopped me from thinking. [12, China]
It will reduce the ability to think independently, that is, reduce the motivation to think independently. Some things need to be understood by yourself. [5, China]

Thus a third major concern was a fear that the use of tools in general produced dependence. There were also a few comments that showed awareness of the risk to privacy of putting personal information into ChatGPT.

Nearly all the concerns reflected on personal impact. Disappointingly, the societal impacts of ChatGPT were not often acknowledged. One interviewee expressed concerns about the impact on low-paid workers. But even when prompted the participants showed little awareness or concern about bias, or ethical issues, e.g., sustainability or the exploitative labour relations used to create ChatGPT. Nevertheless, it seemed that the controversy around ChatGPT had filtered through to problematising its use, in ways that were expressed far less in relation to other tools such as Grammarly.

The findings confirm our earlier work from before the release of ChatGPT, that a wide range of tools is used during the writing process (Zhao et al., 2023 ). Perhaps ChatGPT will displace other tools, but at the time the study was conducted it was reaching a community of users well-versed in using digital tools. It was finding a unique place in helping understand assignment questions, aligning answers to the question and structuring ideas, but was also used to perform tasks such as summarisation and proofreading that others performed with pre-existing tools. Our findings suggested that students with higher English proficiency levels tended to use AI tools less frequently.

The study confirms previous research, that students are keen to use generative AI and see it as part of a general trend in technology development (Chan and Hu, 2023 ). Our respondents tended to use discourses around time-saving, efficiency, and stress avoidance as justification for using the tools.

ChatGPT was used rather intensively and iteratively but in highly individualistic ways. This differential use may reflect that there was little input at this time from the institution to help students how to use and where to draw the line in terms of appropriate use. Students wanted guidance from the institution on ChatGPT, echoing Attewell’s ( 2023 ) findings.

ChatGPT has brought advanced functions to digital writing but also intensified a sense of controversy in this area. For students, the worries focussed particularly on the unreliability of information it produced, fear of being accused of plagiarism if they used it and a concern about growing dependence on technology. While often its use was claimed to be justified for its time and stress reduction this may have underestimated the overall impact on learning. Saving time on learning tasks may unintentionally remove significant opportunities to learn. Many of the deeper ethical and societal issues such as around the exploitative way ChatGPT was developed were not fully understood. Yet it was clear that how writing was done digitally had become controversial. This could be seen as a benefit of ChatGPT, in that a gradual infiltration of digital tools into writing was made more visible institutionally and the controversial dimensions of technology use in education brought to the fore.

Using the model developed above we can point to strengths and weaknesses in students’ emergent generative AI literacy (Table  3 ).

Overall our student interviewees showed significant generative AI literacy in most areas, particularly when considering it was early days with their use of it and almost none of them had received support from their teachers or from their institution in understanding how to use it. The weakest area of development was probably appreciation of the societal impacts of generative AI. As instructors catch up with students in understanding AI hopefully they can help students build up a more systematic understanding of pragmatic use of AI, a more reflective approach and a much more critical awareness of the social implications of AI.

While ChatGPT appears as a threat to longstanding practices in education, especially to some genres of academic writing such as essays, it can also be seen as productively bringing to the fore the controversial nature of AI writing technologies which were already creeping unacknowledged into common use. This context produces an opportunity for educators to actively engage with students in a discussion about how writing can be best supported. Our analysis points to areas of weakness in generative AI literacy that need to be strengthened through this process, such as the understanding of the need to select between apps, to have more sophisticated prompt engineering skills, to think more about bias in results, to be more reflective about its use’s impact on learning and have a much stronger appreciation of the societal impacts of generative AI.

The paper is one of the first to explore student use of generative AI in practice and discover in-depth their perception of its benefits and worries about its drawbacks. It has also developed a framework of generative AI literacy as a way of assessing their use. This can be used to plan both institutional policy and instructor support by identifying gaps in AI literacy that need to be filled. For example, educational developers could use the framework to facilitate discussions with teaching staff, aiding in the development of their AI literacy and enhancing their ability to teach AI literacy to students. In addition, universities could apply the dimensions of the AI framework to formulate policies and provide concrete examples that guide learning and teaching practices. The framework also has the potential in evaluating student AI literacy.

The paper has a number of limitations, pointing to where future research can build on its findings. Most participants in this study were using the free version of ChatGPT (3.5) and only a few used the paid version (4). We did not examine in detail the impact of using different versions. While ChatGPT was the main generative AI tool in use at the time of the study, there were others rapidly emerging in popularity. Future research would need to examine how choices of different apps were made and the impact of these choices on writing. Although the participants of this study come from a variety of countries, it is focused on one institution at a particular time. Given the speed of change in technology and educational policy and practice, it is likely that future research will identify rapid shifts in behaviour. But we emphasise the need to examine student writing practices in the context of significant pre-existing use of digital writing tools. Our focus was on postgraduate students. We think less experienced students, such as undergraduates may be quicker and less discriminating in adopting the technology. As generative AI evolves there will be a need to update our definition of generative AI literacy and also to integrate it with notions such as algorithmic literacy, which point to the way that AI operates in rather hidden ways within the infrastructure. This research employed interviews and observations as its main data collection methods. These offer depth of insight but have less power of generalisability. Future studies could usefully seek to validate our findings through quantitative or mixed-methods approaches, such as surveys or experimental studies. Furthermore, future research could expand the scope of this study from AI literacy to the broader concept of writing digitisation, exploring the issue from other perspectives such as psychology and second language acquisition.

Data availability

The datasets analysed during the current study are not publicly available, but are available from the corresponding author on reasonable request.

AlAfnan MA, Dishari S, Jovic M, Lomidze K (2023) Chatgpt as an educational tool: opportunities, challenges, and recommendations for communication, business writing, and composition courses. J Artif Intell Technol 3(2):60–68

Google Scholar  

Adeshola I, Adepoju AP (2023) The opportunities and challenges of ChatGPT in education. Interact Learn Environ 1–14 https://doi.org/10.1080/10494820.2023.2253858

Attewell S (2023) Student perceptions of generative AI. JISC National Centre for AI. https://beta.jisc.ac.uk/reports/student-perceptions-of-generative-ai

Braun V, Clarke V (2006) Using thematic analysis in psychology. Qual Res Psychol 3(2):77

Article   Google Scholar  

Cardon P, Fleischmann C, Aritz J, Logemann M, Heidewald J (2023) The challenges and opportunities of AI-assisted writing: developing AI Literacy for the AI Age. Bus Prof Commun Q 23294906231176517

Chan CKY, Hu W (2023) Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int J Educ Technol High Educ 20 (43). https://doi.org/10.1186/s41239-023-00411-8

Deshpande A, Murahari V, Rajpurohit T, Kalyan A, Narasimhan K (2023) Toxicity in ChatGPT: analyzing persona-assigned language models. arXiv preprint. arXiv:2304.05335

Electronic Privacy Information Center (2023) Generating Harms: Generative AI’s Impact & Paths Forward. https://epic.org/wp-content/uploads/2023/05/EPIC-Generative-AI-White-Paper-May2023.pdf

Godwin-Jones R (2022) Partnering with AI: intelligent writing assistance and instructed language learning Language. Learn Technol 26:5–24. https://doi.org/10.10125/73474

Guan C, Mou J, Jiang Z (2020) Artificial intelligence innovation in education: a twenty-year data-driven historical analysis. Int J Innov Stud 4(4):134–147

Kasneci E, Seßler K, Küchemann S, Bannert M, Dementieva D, Fischer F, Kasneci G (2023) ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Diff 103:102274

Kong SC, Cheung WMY, Zhang G (2022) Evaluating artificial intelligence literacy courses for fostering conceptual learning, literacy and empowerment in university students: refocusing to conceptual building. Comput Hum Behav Rep. 7:100223

Kruse O, Rapp C (2019) Seamless Writing: How the Digitisation of Writing Transforms Thinking, Communication, and Student Learning. In: Looi CK, Wong LH, Glahn C, Cai S (eds) Seamless Learning. Lecture Notes in Educational Technology. Springer, Singapore, pp 191–208. https://doi.org/10.1007/978-981-13-3071-1_10

Lo CK (2023) What is the impact of ChatGPT on education? A rapid review of the literature. Educ Sci 13(4):410

Article   ADS   Google Scholar  

Lo LS (2023) The CLEAR path: a framework for enhancing information literacy through prompt engineering. J Acad Librariansh 49(4):102720

Long D, Magerko B (2020) What is AI literacy? Competencies and design considerations. In: Bernhaupt R, Mueller F, Verweij D, Andres J (eds) Proceedings of the 2020 CHI conference on human factors in computing systems, Association for Computing Machinery, pp. 1–16

Ludvigsen K (2022) The carbon footprint of Chat GPT. https://towardsdatascience.com/the-carbon-footprint-of-chatgpt-66932314627d . Accessed 21 Dec 2022

Malik AR, Pratiwi Y, Andajani K, Numertayasa IW, Suharti S, Darwis A (2023) Exploring artificial intelligence in academic essay: higher education student’s perspective. Int J Educ Res Open 5:100296

Memarian B, Doleck T (2023) ChatGPT in education: Methods, potentials and limitations. Computers in Human Behavior. Artificial Humans: 100022

Motoki F, Pinho Neto V, Rodrigues V (2023) More human than human: measuring ChatGPT political bias. Public Choice. https://doi.org/10.1007/s11127-023-01097-2

Pinski M, Benlian A (2023) AI literacy-towards measuring human competency in artificial intelligence. In: Proceedings of the 56th Hawaii International Conference on System Sciences

Perrigo B (2023) Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time Magazine. https://time.com/6247678/openai-chatgpt-kenya-workers/ Accessed 18 Jan 2023

Rettberg J (2022) ChatGPT is multilingual but monocultural, and it’s learning your values. https://jilltxt.net/right-nowchatgpt-is-multilingual-but-monocultural-but-its-learning-your-values/ Accessed Jan 18 2023

Ridley M, Pawlick-Potts D (2021) Algorithmic literacy and the role for libraries. Inf Technol Libr 40(2) https://doi.org/10.6017/ital.v40i2.12963

Selwyn N (2019) Should robots replace teachers? AI and the future of education. John Wiley & Sons, Cambridge

Strobl C, Ailhaud E, Benetos K, Devitt A, Kruse O, Proske A, Rapp C (2019) Digital support for academic writing: a review of technologies and pedagogies. Comput Educ 131:33–48. https://doi.org/10.1016/j.compedu.2018.12.005

Trust T, Whalen J, Mouza C (2023) ChatGPT: challenges, opportunities, and implications for teacher education. Contemp Issues Technol Teach Educ 23(1):1–23

UNESCO (2023) ChatGPT and artificial intelligence in Higher Education https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf

Uzun L (2023) ChatGPT and academic integrity concerns: detecting artificial intelligence generated content. Lang Educ Technol 3(1):100060

Welding L (2023) Half of college students say using AI is cheating. BestColleges. https://www.bestcolleges.com/research/college-students-ai-tools-survey/ . Accessed 27 Dec 2023

Zawacki-Richter O, Marín V, Bond M, Gouverneur F (2019) Systematic review of research on artificial intelligence applications in higher education—where are the educators? Int J Educ Technol High Educ 16(1):39. https://doi.org/10.1186/s41239-019-0171-0

Zhao X, Sbaffi L, Cox AM (2023) The Digitisation of Writing in Higher Education: exploring the Use of Wordtune as an AI Writing Assistant. OSF preprint. https://doi.org/10.31219/osf.io/uzwy7

Zhao X, Xu J, Cox, AM (2024) Incorporating artificial intelligence into student academic writing in higher education: the use of wordtune by Chinese international students. Paper presented at the Hawaii Systems Sciences conference, Waikiki, Hawaii, 2–6 January 2024

Download references

Author information

Authors and affiliations.

Information School, University of Sheffield, Sheffield, UK

Xin Zhao & Andrew Cox

School of Foreign Language, NingboTech University, Ningbo, China

You can also search for this author in PubMed   Google Scholar

Contributions

The authors contributed equally to this work.

Corresponding author

Correspondence to Andrew Cox .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

Approval was obtained from the ethics committee of the University of Sheffield [054920]. The procedures used in this study adhere to the tenets of the Declaration of Helsinki.

Informed consent

Participants received an information sheet prior to the interview. Informed consent was collected from all participants before the interviews were conducted. Participants were informed about anonymity and the right to withdraw.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Zhao, X., Cox, A. & Cai, L. ChatGPT and the digitisation of writing. Humanit Soc Sci Commun 11 , 482 (2024). https://doi.org/10.1057/s41599-024-02904-x

Download citation

Received : 30 December 2023

Accepted : 28 February 2024

Published : 03 April 2024

DOI : https://doi.org/10.1057/s41599-024-02904-x

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

dissertation mit chatgpt

MIT Technology Review

  • Newsletters

The inside story of how ChatGPT was built from the people who made it

Exclusive conversations that take us behind the scenes of a cultural phenomenon.

  • Will Douglas Heaven archive page

""

When OpenAI launched ChatGPT, with zero fanfare, in late November 2022, the San Francisco–based artificial-intelligence company had few expectations. Certainly, nobody inside OpenAI was prepared for a viral mega-hit . The firm has been scrambling to catch up—and capitalize on its success—ever since.

It was viewed in-house as a “research preview,” says Sandhini Agarwal, who works on policy at OpenAI: a tease of a more polished version of a two-year-old technology and, more important, an attempt to iron out some of its flaws by collecting feedback from the public. “We didn’t want to oversell it as a big fundamental advance,” says Liam Fedus, a scientist at OpenAI who worked on ChatGPT.

To get the inside story behind the chatbot—how it was made, how OpenAI has been updating it since release, and how its makers feel about its success—I talked to four people who helped build what has become one of the most popular internet apps ever . In addition to Agarwal and Fedus, I spoke to John Schulman, a cofounder of OpenAI, and Jan Leike, the leader of OpenAI’s alignment team, which works on the problem of making AI do what its users want it to do (and nothing more).

What I came away with was the sense that OpenAI is still bemused by the success of its research preview, but has grabbed the opportunity to push this technology forward, watching how millions of people are using it and trying to fix the worst problems as they come up.

Since November, OpenAI has already updated ChatGPT several times. The researchers are using a technique called adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against each other: one chatbot plays the adversary and attacks another chatbot by generating text to force it to buck its usual constraints and produce unwanted responses. Successful attacks are added to ChatGPT’s training data in the hope that it learns to ignore them.       

OpenAI has also signed a multibillion-dollar deal with Microsoft and announced an alliance with Bain , a global management consulting firm, which plans to use OpenAI’s generative AI models in marketing campaigns for its clients, including Coca-Cola. Outside OpenAI, the buzz about ChatGPT has set off yet another gold rush around large language models, with companies and investors worldwide getting into the action.

That’s a lot of hype in three short months. Where did ChatGPT come from? What steps did OpenAI take to ensure it was ready to release? And where are they going next?  

The following has been edited for length and clarity.

Jan Leike: It’s been overwhelming, honestly. We’ve been surprised, and we’ve been trying to catch up.

John Schulman: I was checking Twitter a lot in the days after release, and there was this crazy period where the feed was filling up with ChatGPT screenshots. I expected it to be intuitive for people, and I expected it to gain a following, but I didn’t expect it to reach this level of mainstream popularity.

Sandhini Agarwal: I think it was definitely a surprise for all of us how much people began using it. We work on these models so much, we forget how surprising they can be for the outside world sometimes.

Liam Fedus : We were definitely surprised how well it was received. There have been so many prior attempts at a general-purpose chatbot that I knew the odds were stacked against us. However, our private beta had given us confidence that we had something that people might really enjoy.

Jan Leike: I would love to understand better what’s driving all of this—what’s driving the virality. Like, honestly, we don’t understand. We don’t know.

Part of the team’s puzzlement comes from the fact that most of the technology inside ChatGPT isn’t new. ChatGPT is a fine-tuned version of GPT-3.5, a family of large language models that OpenAI released months before the chatbot. GPT-3.5 is itself an updated version of GPT-3 , which appeared in 2020. The company makes these models available on its website as application programming interfaces, or APIs, which make it easy for other software developers to plug models into their own code. OpenAI also released a previous fine-tuned version of GPT-3.5, called InstructGPT , in January 2022. But none of these previous versions of the tech were pitched to the public. 

Liam Fedus: The ChatGPT model is fine-tuned from the same language model as InstructGPT, and we used a similar methodology for fine-tuning it. We had added some conversational data and tuned the training process a bit. So we didn’t want to oversell it as a big fundamental advance. As it turned out, the conversational data had a big positive impact on ChatGPT.

John Schulman: The raw technical capabilities, as assessed by standard benchmarks, don’t actually differ substantially between the models, but ChatGPT is more accessible and usable.

Jan Leike: In one sense you can understand ChatGPT as a version of an AI system that we’ve had for a while. It’s not a fundamentally more capable model than what we had previously. The same basic models had been available on the API for almost a year before ChatGPT came out. In another sense, we made it more aligned with what humans want to do with it. It talks to you in dialogue, it’s easily accessible in a chat interface, it tries to be helpful. That’s amazing progress, and I think that’s what people are realizing.

John Schulman: It more readily infers intent. And users can get to what they want by going back and forth.

ChatGPT was trained in a very similar way to InstructGPT, using a technique called reinforcement learning from human feedback (RLHF). This is ChatGPT’s secret sauce. The basic idea is to take a large language model with a tendency to spit out anything it wants—in this case, GPT-3.5—and tune it by teaching it what kinds of responses human users actually prefer.

Jan Leike: We had a large group of people read ChatGPT prompts and responses, and then say if one response was preferable to another response. All of this data then got merged into one training run. Much of it is the same kind of thing as what we did with InstructGPT. You want it to be helpful, you want it to be truthful, you want it to be—you know—nontoxic. And then there are things that are specific to producing dialogue and being an assistant: things like, if the user’s query isn’t clear, it should ask follow-up questions. It should also clarify that it’s an AI system. It should not assume an identity that it doesn’t have, it shouldn’t claim to have abilities that it doesn’t possess, and when a user asks it to do tasks that it’s not supposed to do, it has to write a refusal message. One of the lines that emerged in this training was “As a language model trained by OpenAI …” It wasn’t explicitly put in there, but it’s one of the things the human raters ranked highly.

Sandhini Agarwal: Yeah, I think that’s what happened. There was a list of various criteria that the human raters had to rank the model on, like truthfulness. But they also began preferring things that they considered good practice, like not pretending to be something that you’re not. 

Because ChatGPT had been built using the same techniques OpenAI had used before, the team did not do anything different when preparing to release this model to the public. They felt the bar they’d set for previous models was sufficient.       

Sandhini Agarwal: When we were preparing for release, we didn’t think of this model as a completely new risk. GPT-3.5 had been out there in the world, and we know that it’s already safe enough. And through ChatGPT’s training on human preferences, the model just automatically learned refusal behavior, where it refuses a lot of requests.

Jan Leike: We did do some additional “red-teaming” for ChatGPT, where everybody at OpenAI sat down and tried to break the model. And we had external groups doing the same kind of thing. We also had an early-access program with trusted users, who gave feedback.

Sandhini Agarwal: We did find that it generated certain unwanted outputs, but they were all things that GPT-3.5 also generates. So in terms of risk, as a research preview—because that’s what it was initially intended to be—it felt fine.

John Schulman: You can’t wait until your system is perfect to release it. We had been beta-testing the earlier versions for a few months, and the beta testers had positive impressions of the product. Our biggest concern was around factuality, because the model likes to fabricate things. But InstructGPT and other large language models are already out there, so we thought that as long as ChatGPT is better than those in terms of factuality and other issues of safety, it should be good to go. Before launch we confirmed that the models did seem a bit more factual and safe than other models, according to our limited evaluations, so we decided to go ahead with the release.

OpenAI has been watching how people use ChatGPT since its launch, seeing for the first time how a large language model fares when put into the hands of tens of millions of users who may be looking to test its limits and find its flaws. The team has tried to jump on the most problematic examples of what ChatGPT can produce—from songs about God’s love for rapist priests to malware code that steals credit card numbers—and use them to rein in future versions of the model.  

Sandhini Agarwal: We have a lot of next steps. I definitely think how viral ChatGPT has gotten has made a lot of issues that we knew existed really bubble up and become critical—things we want to solve as soon as possible. Like, we know the model is still very biased. And yes, ChatGPT is very good at refusing bad requests, but it’s also quite easy to write prompts that make it not refuse what we wanted it to refuse.

Liam Fedus: It’s been thrilling to watch the diverse and creative applications from users, but we’re always focused on areas to improve upon. We think that through an iterative process where we deploy, get feedback, and refine, we can produce the most aligned and capable technology. As our technology evolves, new issues inevitably emerge.

Sandhini Agarwal: In the weeks after launch, we looked at some of the most terrible examples that people had found, the worst things people were seeing in the wild. We kind of assessed each of them and talked about how we should fix it.

Jan Leike: Sometimes it’s something that’s gone viral on Twitter, but we have some people who actually reach out quietly.

Sandhini Agarwal: A lot of things that we found were jailbreaks, which is definitely a problem we need to fix. But because users have to try these convoluted methods to get the model to say something bad, it isn’t like this was something that we completely missed, or something that was very surprising for us. Still, that’s something we’re actively working on right now. When we find jailbreaks, we add them to our training and testing data. All of the data that we’re seeing feeds into a future model.

Jan Leike:   Every time we have a better model, we want to put it out and test it. We’re very optimistic that some targeted adversarial training can improve the situation with jailbreaking a lot. It’s not clear whether these problems will go away entirely, but we think we can make a lot of the jailbreaking a lot more difficult. Again, it’s not like we didn’t know that jailbreaking was possible before the release. I think it’s very difficult to really anticipate what the real safety problems are going to be with these systems once you’ve deployed them. So we are putting a lot of emphasis on monitoring what people are using the system for, seeing what happens, and then reacting to that. This is not to say that we shouldn’t proactively mitigate safety problems when we do anticipate them. But yeah, it is very hard to foresee everything that will actually happen when a system hits the real world.

In January, Microsoft revealed Bing Chat, a search chatbot that many assume to be a version of OpenAI’s officially unannounced GPT-4. (OpenAI says: “Bing is powered by one of our next-generation models that Microsoft customized specifically for search. It incorporates advancements from ChatGPT and GPT-3.5.”) The use of chatbots by tech giants with multibillion-dollar reputations to protect creates new challenges for those tasked with building the underlying models.

Sandhini Agarwal: The stakes right now are definitely a lot higher than they were, say, six months ago, but they’re still lower than where they might be a year from now. One thing that obviously really matters with these models is the context they’re being used in. Like with Google and Microsoft, even one thing not being factual became such a big issue because they’re meant to be search engines. The required behavior of a large language model for something like search is very different than for something that’s just meant to be a playful chatbot. We need to figure out how we walk the line between all these different uses, creating something that’s useful for people across a range of contexts, where the desired behavior might really vary. That adds more pressure. Because we now know that we are building these models so that they can be turned into products. ChatGPT is a product now that we have the API. We’re building this general-purpose technology and we need to make sure that it works well across everything. That is one of the key challenges that we face right now.

John Schulman : I underestimated the extent to which people would probe and care about the politics of ChatGPT. We could have potentially made some better decisions when collecting training data, which would have lessened this issue. We’re working on it now.

Artificial intelligence

Large language models can do jaw-dropping things. but nobody knows exactly why..

And that's a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step towards controlling more powerful future models.

OpenAI teases an amazing new generative video model called Sora

The firm is sharing Sora with a small group of safety testers but the rest of us will have to wait to learn more.

Google’s Gemini is now in everything. Here’s how you can try it out.

Gmail, Docs, and more will now come with Gemini baked in. But Europeans will have to wait before they can download the app.

Google DeepMind’s new generative model makes Super Mario–like games from scratch

Genie learns how to control games by watching hours and hours of video. It could help train next-gen robots too.

Stay connected

Get the latest updates from mit technology review.

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.

The Journal of Pediatric Pharmacology and Therapeutics

  • Previous Article
  • Next Article

Introduction

Step 1: title and title page creation by chatgpt, step 2: abstract/summary creation by chatgpt, step 3: introduction creation by chatgpt, step 4: can chatgpt create a literature review, step 5: can chatgpt assist in brainstorming the methodology of studies, step 6: do not use chatgpt for fabricating patient data or results, step 7: discussion and conclusions, step 8: references, disadvantages of using chatgpt in research, acknowledgment., chatgpt for research and publication: a step-by-step guide.

  • Split-Screen
  • Article contents
  • Figures & tables
  • Supplementary Data
  • Peer Review
  • Open the PDF for in another window
  • Get Permissions
  • Cite Icon Cite
  • Search Site

Som S. Biswas; ChatGPT for Research and Publication: A Step-by-Step Guide. The Journal of Pediatric Pharmacology and Therapeutics 1 October 2023; 28 (6): 576–584. doi: https://doi.org/10.5863/1551-6776-28.6.576

Download citation file:

  • Ris (Zotero)
  • Reference Manager

This commentary provides a concise step-by-step guide on using ChatGPT, an advanced natural language processing (NLP) model, for research and publication purposes. The guide assesses crucial aspects, including data preprocessing, fine-tuning techniques, prompt engineering, and ethical considerations. By addressing challenges related to biases, interpretability, and plagiarism, this commentary offers insights and recommendations for the responsible and ethical use of ChatGPT. The guide empowers researchers to ethically integrate ChatGPT effectively into their workflows, enhancing productivity and improving the quality of their scientific publications. Through clear instructions and guidelines, researchers can tap into the transformative potential of ChatGPT, driving scientific progress in a language-driven world.

In recent years, the field of NLP has witnessed remarkable advancements, bringing us closer to the realm of humanlike language generation. Among these advancements, ChatGPT, based on the groundbreaking GPT-3.5 architecture developed by OpenAI, stands as an impressive language model capable of generating coherent and contextually relevant text responses. With its ability to understand and respond to user inputs, ChatGPT has opened up new possibilities for various applications, including research and publication. 1 – 3  

The traditional process of conducting research and publishing scientific papers has been significantly transformed by the emergence of ChatGPT. Researchers and authors can now leverage the power of this sophisticated language model to streamline and enhance their workflow, leading to improved efficiency and higher-quality publications. However, using ChatGPT effectively in the research and publication domain requires a comprehensive understanding of its capabilities, limitations, and best practices.

In this commentary I offer my thoughts for a step-by-step guide for researchers and authors who seek to harness the potential of ChatGPT in their research endeavors and publication efforts. By exploring various aspects, such as data preprocessing, fine-tuning techniques, prompt engineering, and ethical considerations, this guide will equip researchers with the necessary knowledge to harness the full potential of ChatGPT in their scientific pursuits. Moreover, this commentary will delve into the challenges associated with using ChatGPT for research and publication, including biases, interpretability, and concerns regarding plagiarism. By addressing these challenges directly, I aim to provide researchers with valuable insights and recommendations to navigate these important issues and ensure the responsible and ethical use of ChatGPT as a research tool. 4  

The significance of my guide lies in its potential to bridge the gap between the rapid progress of language models like ChatGPT and the research and publication process. By elucidating the intricacies of integrating ChatGPT into scientific workflows, researchers will be empowered to leverage this advanced technology effectively, thereby enhancing the overall quality and impact of their research output. 5   In the following sections, I present a comprehensive overview of the steps involved in using ChatGPT for research and publication.

ChatGPT can be a valuable tool in generating titles for research papers. Its ability to understand and generate humanlike text allows it to analyze and synthesize information provided by researchers to craft concise and impactful titles. By leveraging its vast knowledge base and language capabilities, ChatGPT can assist in capturing the essence of a research paper, conveying the main focus and contributions succinctly. Researchers can collaborate with ChatGPT by providing relevant information, such as the subject, objectives, methodology, and key findings of their study. ChatGPT can then generate multiple title options, offering different perspectives and angles that researchers can consider. This collaboration with ChatGPT can save time and stimulate creativity, helping researchers refine their titles to accurately represent their work and engage potential readers. ChatGPT can then be used to create the entire title page and then can also customize based on each journal’s recommendations.

For example:

graphic

Thus, we see that ChatGPT can write an entire title page based on just the title and author details. We notice that ChatGPT has created an email address that is incorrect and needs manual rectification. However, the rest of the title page, including keywords and the running title, is appropriate.

ChatGPT can assist in condensing complex information into a clear and engaging abstract/summary, helping researchers communicate the significance and novelty of their research to a wider audience. By leveraging the language proficiency of ChatGPT, researchers can save time and effort in crafting abstracts while ensuring that the key aspects of their study are accurately represented.

In this example, we demonstrate that ChatGPT can create an entire abstract just by using the title alone. However, the more information researchers provide (preferably the entire body of the paper should be entered into chatGPT), the more accurate the abstract becomes.

graphic

By collaborating with ChatGPT, researchers can provide key information, such as the background, significance, and objectives of their study. ChatGPT can then generate a well-structured introduction that sets the context, highlights the relevance of the research, and outlines the paper’s objectives. Also, ChatGPT can be used to generate keywords and generate an abbreviations list from the article by using prompts. However, it is important to note that the generated introduction should be reviewed, customized, and refined by the researchers to align with their specific study and writing style.

In the example below, we note that ChatGPT has not only created an introduction but also the objectives of the study, which can then be edited by the human author.

graphic

Yes, ChatGPT can help generate a literature review, but it is important to note that it may not have access to the most up-to-date research articles and studies due to copyrights and limited access to some journals. Additionally, a literature review typically requires a comprehensive analysis of multiple sources, so the generated response may not cover all relevant studies. Nonetheless, it can assist in providing a basic literature review on a given topic, which will need human authors to add to and edit it.

graphic

As we can see, ChatGPT is not as good at giving a detailed review of the literature as it is at summarizing contents or creating an introduction. Thus, its use is limited, if there is any at all, in this section of the paper.

ChatGPT can be a helpful tool in conceptualizing the methodology for research papers. By engaging in a conversation with ChatGPT, researchers can discuss their research objectives, study design, data collection methods, and data analysis techniques. ChatGPT’s natural language understanding allows it to provide suggestions and insights based on its knowledge base and understanding of research methodologies. Although ChatGPT can assist in generating ideas and providing guidance, it is important for researchers to critically evaluate and adapt the suggestions to align with their specific research goals and requirements.

Although the methodology is something that is unique to each paper and needs a human researcher to conceptualize it, we see in this example that ChatGPT can assist by giving ideas and examples based on the input of the title by the human researcher. Thus, ChatGPT can be part of brainstorming sessions when conceptualizing a study, although this section needs significant editing by a human, unlike the introduction or summary.

graphic

This section of the paper must be authentic, and ChatGPT has a limited role, if any, because patient data have to be original. ChatGPT also currently cannot analyze clinical data compared with statistical software, like SPSS Statistics and Base SAS. However, Microsoft appears to be developing an Excel copilot that uses AI to create graphs and plots, and its use needs to be evaluated once it is released to the public. 6  

This section of the paper can be generated by ChatGPT if all results are pasted as input; however, this section also needs manual editing because inaccuracies are common. By discussing their research with ChatGPT, researchers can also identify potential limitations, discuss the broader implications of their findings, and propose future research directions. Although ChatGPT can generate suggestions and facilitate the thought process, it is important for researchers to critically evaluate the information provided and ensure that the Discussion and Conclusion sections align with the specific research objectives and findings of their study. Ultimately, ChatGPT can serve as a supportive tool in developing a comprehensive and well-rounded discussion and conclusion for research papers.

As per the author's experience, although ChatGPT is capable of creating references for an article, most of them are incorrect. So, using ChatGPT for creating references is not recommended. However, ChatGPT can convert references into any journaling style if the references are entered into ChatGPT and it is asked to convert them into a specific style.

Lack of domain expertise: ChatGPT is a general-purpose language model trained on a diverse range of Internet text, which means it may lack the specific domain expertise required for certain research topics. It may generate responses that are not accurate or well informed in specialized fields, potentially leading to incorrect or misleading information in research papers.

Inconsistency and variability: ChatGPT’s responses can be inconsistent and vary depending on the input phrasing or prompt formulation. This can lead to unpredictability in generating reliable and coherent content, requiring additional effort to refine and ensure accuracy in research papers.

Limited control over output: Although researchers can guide the model’s responses through prompts, ChatGPT’s generation process is still primarily autonomous. Researchers have limited control over the precise content and structure of the generated text, which may require careful editing and review to align with specific research goals, standards, and above all, accuracy.

Biases and ethical considerations: Language models like ChatGPT can inadvertently reflect biases present in the training data. These biases may perpetuate existing societal or cultural biases in research papers, potentially leading to unfair or discriminatory content. The careful examination and mitigation of biases are crucial to ensure ethical and unbiased research output. 7  

Lack of interpretability: ChatGPT’s decision-making process is complex and not easily interpretable. Researchers may struggle to understand the reasoning behind the model’s generated responses, making it challenging to assess the reliability and credibility of the information provided. Ensuring transparency and interpretability in research papers becomes more challenging with such models. ChatGPT should cite the sources for its data, like Google Bard does.

Plagiarism concerns: Because of its vast training data from the Internet, ChatGPT may inadvertently generate text that resembles or replicates existing content without proper citation or attribution. Researchers must be cautious about unintentional plagiarism and ensure that generated content is appropriately referenced and original. So, all ChatGPt-generated articles need to be double checked using antiplagiarism software.

In this commentary I have provided a comprehensive step-by-step guide for researchers and authors on harnessing the power of ChatGPT in the realm of research and publication. By exploring crucial aspects, such as data preprocessing, fine-tuning techniques, prompt engineering, and ethical considerations, the guide equips researchers with the necessary knowledge and tools to effectively integrate ChatGPT into their scientific workflows. 8  

Through clear instructions, examples, and guidelines, researchers can navigate the complexities of using ChatGPT, leading to enhanced productivity and improved quality in their research output. Moreover, I address the challenges associated with biases, interpretability, and plagiarism concerns, ensuring the responsible and ethical usage of ChatGPT as a research tool.

The significance of this research lies in its ability to bridge the gap between the rapid advancements in language models like ChatGPT and the research and publication process. By empowering researchers with the skills to leverage ChatGPT effectively, this guide fosters innovation, drives scientific progress, and opens up new possibilities for transformative contributions to various fields. 9  

As language-driven technologies continue to evolve, researchers must stay abreast of the latest advancements and best practices. The step-by-step guide presented in this commentary serves as a valuable resource, providing researchers with the knowledge and guidance necessary to maximize the potential of ChatGPT in their research endeavors. By embracing the capabilities of ChatGPT and ensuring its responsible and ethical use, researchers can revolutionize the way research and publications are conducted. With ChatGPT as a powerful tool in their arsenal, researchers are poised to make significant strides in their respective fields, pushing the boundaries of scientific knowledge and ushering in a new era of language-driven innovation. 10  

However, and to reiterate, I cannot overemphasize that ChatGPT has, at present, many disadvantages, including inconsistencies, bias, and plagiarism concerns, that must be addressed by the human author before the article is submitted for publication to a journal, as well as prior to publication, because the human author(s) is solely responsible for their research integrity and accurate reporting.

In conclusion, I have attempted to provide researchers with a comprehensive understanding of how to effectively leverage ChatGPT for research and publication purposes. It has also highlighted the problems and precautions that the human author(s) must take before publishing ChatGPT-generated content. By embracing this step-by-step guide, researchers can unlock the full potential of ChatGPT, driving scientific progress and shaping the future of research and publications.

Please use ChatGPT only if allowed by your institution, research lab, and the journal in question.

Please acknowledge ChatGPT within your manuscript/published paper wherever you are using it.

Please do not fabricate or plagiarize data. ChatGPT can be used only for summarizing texts, improving English writeups, and brainstorming ideas, and not for creating fabricated research raw data.

The author acknowledges that this article was partially generated by ChatGPT (powered by OpenAI’s language model, GPT-3; http://openai.com ). The editing was performed by the human author.

Disclosure. The author declare no conflicts or financial interest in any product or service mentioned in the manuscript, including grants, equipment, medications, employment, gifts, and honoraria.

Get Email Alerts

Citing articles via.

  • Pediatric Pharmacy Association

Affiliations

The Journal of Pediatric Pharmacology and Therapeutics Logo

  • eISSN 2331-348X
  • ISSN 1551-6776
  • Privacy Policy
  • Get Adobe Acrobat Reader

This Feature Is Available To Subscribers Only

Sign In or Create an Account

Doktorarbeit mit ChatGPT schreiben

Inhaltsverzeichnis

* Arbeitstyp Abiturrede Abstract Anschreiben Artikel Aufgaben Autobiografie Bachelorarbeit Beratung Bewerbung Biografie Blogartikel Buch Businessplan CAS-Arbeit Coaching Content DAS-Arbeit Diplomarbeit Disputation Dissertation / Doktorarbeit Drehbuch Drittmittelantrag / Forschungsantrag EBook Einsendeaufgabe EMBA-Arbeit Englisch Sprachtest Essay Examensarbeit Exposé Exzerpt Facharbeit Fachartikel Fachübersetzung Fallstudie (Case Study) Firmenchronik Forschungsarbeit Forschungsprojekt Ghostwriter Chinesisch Ghostwriter Englisch Ghostwriter Französisch Ghostwriter Russisch Ghostwriter Spanisch Habilitation Handout Hausarbeit Inhaltsverzeichnis Internetartikel Interviewauswertung Interviewleitfaden Interviewtranskription Journalistische Texte Juristisches Gutachten Kinderbuch Kommentar Korrekturlesen Layout & Formatierung Lebenslauf Lehrbuch Lektorat Lernskript Literature Review Literaturrecherche Literaturverzeichnis Lösungsskizze Magazin MAS-Arbeit Masterarbeit MAXQDA-Auswertung MBA-Arbeit Memoiren Motivationsschreiben Newsletter Peer-Review Plagiatsprüfung Populärwissenschaftliche Texte Portfolioarbeit Praktikumsbericht Präsentation Projektarbeit Projektarbeit IHK Projektarbeit zum geprüften Betriebswirt Projektarbeit zum technischen Betriebswirt Promotionsberatung Proofreading Publikation Qualitative Auswertung Rede Referat Reflexion Rezension Roman Sachbuch Semesterarbeit Seminararbeit Statistische Auswertung Studienarbeit Thesenpapier Transkription Übersetzung Übungen Umschreiben Unterrichtsentwurf Verkaufstexte Veröffentlichung Vorgutachten Vorlesungsskript VWA (Vorwissenschaftliche Arbeit) Wissenschaftliche Arbeit Wissenschaftlicher Artikel Zeitschriftenartikel Zeitungsartikel Zusammenfassung sonstige Arbeit

* Gewünschter Arbeitstyp

* Fachrichtung

Afrikanische Sprachen & Kulturen Agrarwissenschaften Alte Kulturen Amerikanistik Anglistik Arabistik Arbeitsrecht Archäologie Architektur Arzneimittelentwicklung Arzneimittelsicherheit Audiopädagogik Augenoptik & Optometrie Außenpolitik Automobilwirtschaft Bauingenieurwesen Betriebspädagogik Bibliothekswesen Biochemie Bioinformatik Biologie Biomedizin Biotechnologie Biowissenschaften & Life Sciences Bühnenbild Business Analytics Business Management BWL (Betriebswirtschaftslehre) BWL Strategie Changemanagement Chemie Chemieingenieurwesen Computerlinguistik Controlling Cross Media Darstellende Kunst Design Deutsch als Fremdsprache (DaF) Diakonie Digital Humanities Digitale Transformation Dirigieren Earth and Space Sciences E-Commerce Elektrotechnik Empirische Sozialforschung und Marktforschung Ernährungswissenschaften Erziehungswissenschaften Ethnologie Europäische Rechtslinguistik Europastudien Fachkommunikation Filmwissenschaft Finanzwissenschaften Forstwirtschaft Freie Kunst Gebärdensprache Gender Studies / Geschlechterforschung Genetik Geografie Geologie Geoökologie Geowissenschaften Germanistik Gerontologie Gesang Geschichte Gesundheitsmanagement Gesundheitsökonomie Gesundheitswesen Gesundheitswissenschaften Grafikdesign Gynäkologie Hispanistik Humanmedizin Hydrologie Immobilienwirtschaft Immunologie Implantologie Indogermanische Sprachwissenschaft Informatik Informationstechnik Ingenieurwissenschaften Innenarchitektur Insolvenzrecht Integrationsforschung Internationale BWL Internetrecht Islamische Theologie Islamwissenschaften IT-Recht Japanologie Journalismus / Journalistik Judaistik Jura / Rechtswissenschaften Kamera Klinische Psychologie Klinische Studien Kommunale Finanzpolitik Kommunikation Kommunikationsdesign Kommunikationswissenschaften Kulturanthropologie Kulturwirtschaft Kulturwissenschaften Kunstgeschichte Kunstwissenschaften Latein Lebensmittelchemie Lehramt Linguistik Literarisches Schreiben Literaturwissenschaften Logistik Luft- & Raumfahrttechnik Makroökonomie & Mikroökonomie Management Marketing Marktforschung Maschinenbau Materialwissenschaften Mathematik Mechatronik Mediation Medienwissenschaften Medizin Medizingeschichte Medizinpädagogik Medizintechnik Meerestechnik Meteorologie Migrationsforschung Mikrobiologie Mobilität & Verkehrsforschung Modedesign Modemanagement Morphologie Multimedia Musikmanagement Musikwissenschaft Nanowissenschaften Naturwissenschaften Naturwissenschaftliche Forensik Neurowissenschaften Öffentliche Verwaltung Ökologie Orientalistik Osteopathie Pädagogik Pädagogik der Kindheit Personalmanagement Personalorganisation Personalwesen Pflegewissenschaften Phänomenologie Pharmakokinetik Pharmakologie Pharmazie / Pharmazeutische Wissenschaften Philologie Philosophie Physik Physikalische Technik Physiotherapie Politikwissenschaften Programmieren Prozessrecht Psychologie Publizistik Puppenspiel Rechnungswesen Rechtsgeschichte Rechtsphilosophie Rechtstheorie Regie Regionalstudien Rehabilitationswissenschaften Religionspädagogik Religionswissenschaften Rhetorik Romanistik Schauspiel Sicherheit & Gefahrenabwehr Sinologie Slawistik Social Media Softwarelokalisierung Sorabistik Soziale Arbeit Sozialmanagement Sozialpädagogik Soziologie Sportmanagement Sportökonomie Sportwissenschaften Sprachwissenschaften Statistik Steuerrecht Technikgeschichte Technomathematik Telematik Textildesign Theaterwissenschaft Theologie Tiermedizin / Veterinärmedizin Tourismus Tourismusmanagement Umweltingenieurwesen Umweltschutz Umweltschutztechnik Umweltwissenschaften Verfahrenstechnik Verwaltungswissenschaften Virologie Volksmusik VWL (Volkswirtschaftslehre) Wettbewerbsrecht Wirtschaftsgeschichte Wirtschaftsinformatik Wirtschaftsingenieurwesen Wirtschaftspädagogik Wirtschaftsphysik Wirtschaftspsychologie Wirtschaftsrecht Wirtschaftssprachen Wirtschaftswissenschaften Wissenschaftsgeschichte Wissenschaftsjournalismus Zahnmedizin Zellbiologie Zivilrecht Zoologie interdisziplinär sonstige Fachrichtung

* Gewünschte Fachrichtung

* Thema der Arbeit

* Seitenzahl

* Liefertermin

Telefonnummer

Die Hinweise aus der Datenschutzerklärung und den AGB habe ich gelesen und akzeptiere diese.

Ich bin damit einverstanden, den Newsletter von Studibucht.de zu erhalten.

Die Rolle von ChatGPT bei der Erstellung von Doktorarbeiten

Mit der neuen Software ChatGPT scheint das Schreiben einer Doktorarbeit aktuell einfach zu erscheinen.

Doch wie hilfreich ist das Tool wirklich? Kann es selbstständig eine Studie durchführen oder dir als Co-Autorin bei der Erstellung von Texten oder gar einer Doktorarbeit helfen?

Wie hilfreich ein Bot dieser Art ist, klären wir in diesem Artikel.

Die Debatte um die Verwendung von KI bei schriftlichen Arbeiten

Die Diskussion um die Verwendung von künstlicher Intelligenz beim Verfassen von Texten an Universitäten ist in vollem Gange. Forschende und Lehrende fragen sich, welche Möglichkeiten die Nutzung von KI bieten könnte und welche Risiken damit verbunden sind. Während einige die Vorteile von automatisierter Textanalyse und Plagiatsprüfung betonen, befürchten andere die Entwertung von akademischen Leistungen.

Fest steht: Auch wenn die Verwendung von KI eine komplexe Debatte ist, die viele Fragen aufwirft und unterschiedliche Perspektiven aufzeigt, werden Arbeitsplätze in der Lehre dadurch nicht verloren gehen.

Auf ein paar Kriterien gehen wir jetzt näher ein.

Ghostwriter Agentur StudiBucht hilft Studierenden

Brauchen Sie Hilfe?

Bestellen Sie Qualitätsarbeit, ohne ChatGPT zu benutzen!

Vorteile von AI Doktorarbeit

Die Einsatzmöglichkeiten von künstlicher Intelligenz sind vielseitig und für viele Bereiche von großer Bedeutung. Auch in der Praxis des Schreibens von wissenschaftlichen Texten kann AI eine große Hilfe sein.

Du musst die Software bloß als eine Art Co-Autorin sehen, deren Texte du ständig kritisch betrachten musst.

Schneller Zugang zu Informationen und Ideenfindung

Der Chat mit GPT-3 ist super schnell. Es liefert dir die Daten noch schneller als Google.

Unterstützung bei der Struktur

Wenn es um das Schreiben einer wissenschaftlichen Arbeit geht, kann die Strukturierung eine Herausforderung sein. ChatGPT kann dich dabei unterstützen, deine Texte von Grund auf neu zu strukturieren und die einzelnen Teile des Inhalts in einen logischen Zusammenhang zu bringen, um sicherzustellen, dass die Argumentation klar und strukturiert ist.

Außerdem kann es für dich beispielsweise auch einen Titel für einen Text finden.

Möglichkeit zur Textoptimierung

Die Möglichkeit zur Überarbeitung und Textoptimierung ist für alle von uns, die an Hausarbeiten oder anderen leistungsbezogenen Texten dran sind, von entscheidender Bedeutung. Mit dem Einsatz von ChatGPT kannst du eigene Texte und Studien schnell korrekturlesen oder umformulieren lassen.

Literaturquellen effizient generieren

KI kann dabei helfen, Inhalte zu identifizieren, die relevant sind, fallengelassene Fußnoten wiederzufinden oder sogar Interviews als Quellenmaterial zu nutzen. Dies ist besonders sinnvoll, wenn Zeit und Ressourcen begrenzt sind.

Der Bot ist wie eine ganze Autorenschaft, die dir dabei hilft, dein Potenzial voll zu entfalten.

ChatGPT Doktorarbeit Nachteule

Nachteile und potenzielle Risiken bei der Nutzung von ChatGPT für die Doktorarbeit

Das fortschrittliche ChatGPT von OpenAI zeigt bemerkenswerte Fortschritte in Bezug auf künstliche Intelligenz. Trotz seiner vielversprechenden Funktionen gibt es jedoch auch potenzielle Risiken und Nachteile bei der Nutzung von ChatGPT für die Doktorarbeit.

Auf diese gehen wir nun ein.

Fehlende Originalität behindert neue Forschung

Es ist wichtig, dass sich der Autor bewusst mit GPT und den neuesten wissenschaftlichen Erkenntnissen auseinandersetzt, um die nötige Relevanz und Originalität in seiner Arbeit zu gewährleisten.

Sonst haben wir irgendwann in der Lehre nur noch einen Einheitsbrei.

Mangelnde Lerneffekte für Studenten

Viele Studierende sind auf der Suche nach effektiven Methoden, um ihr Studium zu unterstützen. Die falsche Nutzung von ChatGPT und ähnlicher Produkte kann allerdings einen negativen Effekt auf das Lernen bringen. Besonders an der Universität kann die mangelnde Lernwirkung von Chatbots wie ChatGPT problematisch werden, da man die Inhalte für die Klausuren wirklich verstehen muss.

Bei Studierenden, die keine eigene Leistung erbringen, sondern vom Bot einfach nur blind abschreiben, wird es auf Dauer eng in der Uni.

Fairness gegenüber Kommilitonen

Eine aktuelle Debatte dreht sich um die Frage, ob es moralisch akzeptabel ist, einen Chatbot oder eine KI-Software zu nutzen, um eine Klausur zu bestehen oder um einen Text zu schreiben.

Einige sind der Meinung, dass dies eine Form von unfairem Verhalten ist und gegen die Grundprinzipien der akademischen Integrität verstößt. Andere argumentieren, dass es nur eine legitime Nutzung von Technologie ist und dass die Verwendung von Tools wie Chatbots oder Übersetzungssoftware den Lernprozess nicht beeinträchtigt.

Wie auch immer man diese Kontroverse beurteilt, ist klar, dass es wichtig ist, für Gerechtigkeit und Integrität im akademischen Umfeld zu sorgen.

Plagiatsgefahr und falsche Quellen

Die Nutzung von ChatGPT kann schnell zu falschen Quellenangaben oder einem Plagiat führen. Es ist deshalb wichtig, sich bewusst zu sein, dass die Verwendung von ChatGPT zu einem großen Problem führen kann, wenn man nicht sorgfältig mit den erhaltenen Antworten umgeht. Studierende sollten darauf achten, ihre Forschungsergebnisse auf der Grundlage vertrauenswürdiger Quellen zu sammeln und ihren Fokus auf die Einhaltung der akademischen Standards zu legen.

Die Grenzen von ChatGPT

ChatGPT ist ein KI-Tool, das unter bestimmten Kriterien arbeitet und die Informationen aus den jeweiligen Quellen im Internet nicht verwertet, sondern lediglich umschreibt oder zu einem fertigen Thema Kommentare abgeben kann.

In der Wissenschaft ist es jedoch notwendig, verschiedene Sachverhalte zu kombinieren und über das Thema hinaus zu denken.

Gefahr von falschen „Fakten“

Die Autorenschaft einer wissenschaftlichen Arbeit ist essenziell für die Prüfung der Qualität und Validität der Forschungsergebnisse. Doch mit der zunehmenden Popularität von ChatGPT gibt es eine neue Gefahr, nämlich dass möglicherweise erfundene „Fakten“ als Antwort in deine Hausarbeit „einfach so“ übernommen werden. Es ist wichtig, dass die Antworten von ChatGPT sorgfältig gelesen und überprüft werden müssen.

Das bedeutet, dass du nicht nur die vermeintliche Richtigkeit der Antwort überprüfen musst, sondern auch nach aktuellen Studien und Forschungsergebnissen suchen solltest, um die Verlässlichkeit der Antwort sicherzustellen.

Ein gründlicher Umgang mit ChatGPT und anderen Technologien kann eine nützliche Ergänzung an der Universität sein, um sicherzustellen, dass fundierte und valide Ergebnisse in ihre aufgenommen werden.

Doktorarbeit mit ChatGPT

Anleitung: Mit ChatGPT Doktorarbeit schreiben

Schritt 1: forschungsfrage ausarbeiten.

Bevor man in eine Text-Studie einsteigt, ist es wichtig, eine konkrete Forschungsfrage zu entwickeln. Diese sollte präzise und genau sein, um eine klare Zielsetzung zu haben.

Eine gut formulierte Forschungsfrage dient als Leitfaden für die gesamte Studie und gibt Richtung für die Datenerhebung und -analyse. Sie hilft dabei, den Fokus zu behalten und verhindert, dass man sich in irrelevanten Themen nicht verliert.

Wie dir ChatGPT dabei durch Befehle helfen kann, liest du weiter unten.

Schritt 2: Infos zusammensuchen

Wenn es um die Vorbereitung für Studien an der Universität geht, ist die Suche nach wichtigen Infos der zweite Schritt auf dem Weg zum Erfolg. Es gibt viele Infos, zu sammeln, wie zum Beispiel persönliche Gespräche mit Lehrern, die Recherche auf der Website der Universität, das Lesen von Broschüren, hören von dem ein oder anderen Interview.

Durch die richtigen Befehle und Fragen recherchiert ChatGPT für dich schneller als du es jemals könntest!

Schritt 3: Gliederung erstellen

Um einen gut strukturierten Text zu benötigt man eine klare Gliederung. Diese hilft dem Leser, den Inhalt schneller zu erfassen und Zusammenhänge besser zu verstehen.

Im Schritt 3 geht es also darum, eine Gliederung für den Text zu erstellen. Hierbei sollte man darauf achten, dass die Struktur logisch und nachvollziehbar ist. Es ist empfehlenswert, sich vorher Gedanken darüber zu machen, welche Themen im Text behandelt werden sollen und wie diese sinnvoll gegliedert werden können. Füttere den Chatbot also mit Inhalten, die in deiner Hausarbeit vorkommen sollen.

Eine gute Gliederung ist somit essenziell, um einen verständlichen und informativen Text zu erstellen.

Schritt 4: Schreiben der Doktorarbeit

Diesen Schritt solltest du auf keinen Fall dem Chatbot überlassen. Halte dich an deine Gliederung, verliere deine Forschungsfrage nicht aus den Augen und verwerte deine Infos, um Antworten auf Fragen zu finden.

Schritt 5: Überarbeitung und Korrekturlesen

Hier kann die KI wieder zum Einsatz kommen. Der Chatbot kann deinen gesamten Text korrekturlesen und auf Schreibfehler überprüfen. Oder du kannst die Sätze einfacher/schöner umformulieren lassen.

Prompts Doktorarbeit ChatGPT

Mögliche Befehle für Chat GPT bei der Doktorarbeit

Hier sind ein paar mögliche Befehle, die du für ChatGPT bei deiner Doktorarbeit nutzen kannst, statt ihm einfach nur eine simple Frage zu stellen.

Die Wörter in den eckigen Klammern musst du natürlich entsprechend ersetzen.

  • Finde ein Forschungsthema für eine Doktorarbeit im Bereich [THEMA]
  • Verfasse einen detaillierten Vorschlag zum folgenden Forschungsthema. Stelle sicher, dass es frei von Plagiaten ist. [DEIN ABSATZ]
  • Identifiziere Lücken in der Literatur zu [A]
  • Generiere 10 akademische Forschungsfragen zu [PARAGRAPH]
  • Erstelle eine Liste mit Forschungshypothesen in Verbindung mit [B]
  • Identifiziere potenzielle Bereiche für die zukünftige Forschung in Verbindung mit diesem [THEMA]
  • Schlage neuartige Anwendungen von [THEMA] innerhalb von [FORSCHUNGSBEREICH] vor
  • Schreibe mir eine Beispiel-Gliederung für [X]

So sieht eine solche Ausgabe dann aus:

Ghostwriter Agentur StudiBucht hilft Studierenden

Bestellen Sie eine einzigartige Arbeit, ohne eine KI zu benutzen!

Ghostwriting Agentur StudiBucht hilft Studierenden

Tipps und Best Practices für eine Dissertation mit künstlicher Intelligenz

Kritische bewertung der generierten ausgaben.

Es liegt in der Verantwortung der Studierenden, sämtliche Ausgaben der KI kritisch zu hinterfragen und zu überprüfen.

Wenn du immer nochmal nach googelst und einen Plagiatscheck durchführst, bist du auf der sicheren Seite.

Kombination mit anderen Ressourcen und der menschlichen Expertise

Am besten funktionieren die Dinge sowieso immer in Kombination. Vergesse Google nicht als deinen besten Freund im Studium. Über die Google-Suche kannst du schnell die Quellen von ChatGPT finden.

Außerdem ist es auch ratsam, menschliche Erfahrungen mit hineinzubringen oder sich mit Kommilitonen per E-Mail über die Thematik auszutauschen. So kommt man auf ganz neue Einsichten.

Abschließende Gedanken zum Thema ChatGPT Doktorarbeit

Die zukunft der ki-unterstützung bei schriftlichen arbeiten.

Wir denken, dass die Unterstützung durch KI in der Wissenschaft in Zukunft noch viel größer sein wird. Vor allem, wenn es irgendwann die Quantencomputer in Verbindung damit gibt.

Die verschiedenen Wissenschaften könnten so noch viel schneller vorangetrieben werden und Probleme, die wir seit Jahrzehnten haben, eventuell mit nur einem Klick gelöst werden.

Die Bedeutung von menschlichem Engagement und Kreativität

Eines kann ChatGPT noch nicht – Und zwar kreatives Denken. Die meisten bahnbrechenden Erfindungen waren zu Anfang nur Ideen von kreativen Köpfen.

Jede Erfindung hatte Ihren Kern in einer Vision und einer Vorstellung eines Menschen.

Deshalb wird das menschliche Engagement und die Kreativität in der Bildung auch durch das Voranschreiten neuer KI-Technologien immer bestehen bleiben.

Zusammenfassung der Nachteile von ChatGPT für Doktorarbeit

Hier ist noch einmal eine knappe Zusammenfassung der Nachteile:

  • Hohes Risiko für ein Plagiat
  • wenig Lernerfolg
  • stört eventuell das kreative Denken
  • Ausgabe falscher Information

Diese Nachteile überwiegen bei falscher Verwendung des Tools sogar die Vorteile. Wer dennoch keine Zeit hat, ist wohl mit einer Ghostwriter Doktorarbeit besser bedient.

Möchten Sie sicher sein, dass Ihre Doktorarbeit erfolgreich ist?

Unsere professionellen Autoren werden Ihnen helfen oder die ganze Arbeit für Sie erledigen!

FAQ: ChatGPT Doktorarbeit schreiben

Was kostet eine doktorarbeit bei einem ghostwriter.

Ghostwriter Preise werden normalerweise pro geschriebener Seite berechnet. Zögern Sie bei Fragen nicht, mit uns in Kontakt zu treten.

Ist Chat GPT an der Universität erlaubt?

Ist chat gpt oder eine andere künstliche intelligenz für studien erlaubt, verpassen sie nicht die besten bedingungen für die bestellung 📩.

Abonnieren Sie unseren Newsletter und erhalten Sie exklusive Angebote, Rabatte und die aktuellsten Tipps!

Holen Sie sich Ihr unverbindliches Angebot

* Fachrichtung Afrikanische Sprachen & Kulturen Agrarwissenschaften Alte Kulturen Amerikanistik Anglistik Arabistik Arbeitsrecht Archäologie Architektur Arzneimittelentwicklung Arzneimittelsicherheit Audiopädagogik Augenoptik & Optometrie Außenpolitik Automobilwirtschaft Bauingenieurwesen Betriebspädagogik Bibliothekswesen Biochemie Bioinformatik Biologie Biomedizin Biotechnologie Biowissenschaften & Life Sciences Bühnenbild Business Analytics Business Management BWL (Betriebswirtschaftslehre) BWL Strategie Changemanagement Chemie Chemieingenieurwesen Computerlinguistik Controlling Cross Media Darstellende Kunst Design Deutsch als Fremdsprache (DaF) Diakonie Digital Humanities Digitale Transformation Dirigieren Earth and Space Sciences E-Commerce Elektrotechnik Empirische Sozialforschung und Marktforschung Ernährungswissenschaften Erziehungswissenschaften Ethnologie Europäische Rechtslinguistik Europastudien Fachkommunikation Filmwissenschaft Finanzwissenschaften Forstwirtschaft Freie Kunst Gebärdensprache Gender Studies / Geschlechterforschung Genetik Geografie Geologie Geoökologie Geowissenschaften Germanistik Gerontologie Gesang Geschichte Gesundheitsmanagement Gesundheitsökonomie Gesundheitswesen Gesundheitswissenschaften Grafikdesign Gynäkologie Hispanistik Humanmedizin Hydrologie Immobilienwirtschaft Immunologie Implantologie Indogermanische Sprachwissenschaft Informatik Informationstechnik Ingenieurwissenschaften Innenarchitektur Insolvenzrecht Integrationsforschung Internationale BWL Internetrecht Islamische Theologie Islamwissenschaften IT-Recht Japanologie Journalismus / Journalistik Judaistik Jura / Rechtswissenschaften Kamera Klinische Psychologie Klinische Studien Kommunale Finanzpolitik Kommunikation Kommunikationsdesign Kommunikationswissenschaften Kulturanthropologie Kulturwirtschaft Kulturwissenschaften Kunstgeschichte Kunstwissenschaften Latein Lebensmittelchemie Lehramt Linguistik Literarisches Schreiben Literaturwissenschaften Logistik Luft- & Raumfahrttechnik Makroökonomie & Mikroökonomie Management Marketing Marktforschung Maschinenbau Materialwissenschaften Mathematik Mechatronik Mediation Medienwissenschaften Medizin Medizingeschichte Medizinpädagogik Medizintechnik Meerestechnik Meteorologie Migrationsforschung Mikrobiologie Mobilität & Verkehrsforschung Modedesign Modemanagement Morphologie Multimedia Musikmanagement Musikwissenschaft Nanowissenschaften Naturwissenschaften Naturwissenschaftliche Forensik Neurowissenschaften Öffentliche Verwaltung Ökologie Orientalistik Osteopathie Pädagogik Pädagogik der Kindheit Personalmanagement Personalorganisation Personalwesen Pflegewissenschaften Phänomenologie Pharmakokinetik Pharmakologie Pharmazie / Pharmazeutische Wissenschaften Philologie Philosophie Physik Physikalische Technik Physiotherapie Politikwissenschaften Programmieren Prozessrecht Psychologie Publizistik Puppenspiel Rechnungswesen Rechtsgeschichte Rechtsphilosophie Rechtstheorie Regie Regionalstudien Rehabilitationswissenschaften Religionspädagogik Religionswissenschaften Rhetorik Romanistik Schauspiel Sicherheit & Gefahrenabwehr Sinologie Slawistik Social Media Softwarelokalisierung Sorabistik Soziale Arbeit Sozialmanagement Sozialpädagogik Soziologie Sportmanagement Sportökonomie Sportwissenschaften Sprachwissenschaften Statistik Steuerrecht Technikgeschichte Technomathematik Telematik Textildesign Theaterwissenschaft Theologie Tiermedizin / Veterinärmedizin Tourismus Tourismusmanagement Umweltingenieurwesen Umweltschutz Umweltschutztechnik Umweltwissenschaften Verfahrenstechnik Verwaltungswissenschaften Virologie Volksmusik VWL (Volkswirtschaftslehre) Wettbewerbsrecht Wirtschaftsgeschichte Wirtschaftsinformatik Wirtschaftsingenieurwesen Wirtschaftspädagogik Wirtschaftsphysik Wirtschaftspsychologie Wirtschaftsrecht Wirtschaftssprachen Wirtschaftswissenschaften Wissenschaftsgeschichte Wissenschaftsjournalismus Zahnmedizin Zellbiologie Zivilrecht Zoologie interdisziplinär sonstige Fachrichtung

Zitierweise --- APA Chicago Deutsche Zitierweise Harvard IEEE ISO MLA Vancouver sonstige Zitierweise

Gliederung vorhanden?

Gewünschte Zitierweise

Abweichung (+/-)

Qualität sehr hohe Qualität (promovierter Autor zu Ihren Diensten) hohe Qualität (wissenschaftlicher Mitarbeiter/Stipendiat zu Ihren Diensten) zufriedenstellende Qualität (Masterand/Diplomand zu Ihren Diensten)

Laden Sie hier Ihre Dateien hoch

Name/Nickname

Ihre Verfügbarkeit Beliebige Geschäftszeit 08:00-10:00 10:00-12:00 12:00-14:00 14:00-16:00 16:00-18:00 18:00-20:00

* Bevorzugte Kontaktmethode --- Telefon WhatsApp E-Mail Telegram

Ihre Daten werden nicht an Dritte weitergegeben.

Ghostwriter Mia Muller

Mia Müller Artikelautor / Ghostwriter Seit Dezember 2013 habe ich erfolgreich zahlreiche Studierende bei ihren Projekten begleitet. Mit meiner Leidenschaft für das Schreiben und meinem akademischen Hintergrund bin ich in der Lage, Hausarbeiten, Bachelorarbeiten, Diplomarbeiten, Essays und Referate auf höchstem Niveau zu verfassen.

Ähnliche Beiträge

marketing-themen-hausarbeit

Marketing Themen Hausarbeit

richtig zitieren

Richtig Zitieren: Zitierweise mit Beispielen

Exposé Masterarbeit

Exposé Masterarbeit richtig verfassen

ALLE UNSERE LEISTUNGEN

FACHRICHTUNGEN

ALLE UNSERE FACHRICHTUNGEN

UNTERNEHMEN

PLAGIATSSOFTWARE

KONTAKTDATEN

Wir im Netz:

Öffnungszeiten: 9:00 – 18:00 7 TAGE DIE WOCHE

KUNDENDIENST

Bitte beachten Sie: damit wir schnell helfen können, benötigen wir Ihre Auftragsnummer.

ZAHLUNGSMETHODEN

StudiBucht StudiBucht Bewertungen & Erfahrungen – Akademisches Ghostwriting mit Erfolgsgarantie Anonym hat 4,6 von 5 Sternen 1k+ Bewertungen auf ProvenExpert.com

2017-2024 | Alle Rechte vorbehalten © StudiBucht | AGB | Datenschutz | Rückgabebedingungen

Diese Website ist durch reCAPTCHA geschützt und es gelten die Datenschutzbestimmungen und Nutzungsbedingungen von Google.

Unser Angebot bezieht sich in keiner Weise auf die Studibuch GmbH, Neue Weinsteige 69, 70180 Stuttgart. Zwischen der Studibuch GmbH und uns bestehen insbesondere keine geschäftlichen Verbindungen.

Eduterra LLP. Apex House Grand Arcade North Finchley, London, N12 0EH, England, UK

How to Use ChatGPT’s Advanced Data Analysis Feature

Home » AI Resource Hub » AI Tools » AI Data Analysis and Quantitative Tools » How to Use ChatGPT’s Advanced Data Analysis Feature

SKIP AHEAD TO

At a Glance

In this guide, you will learn about the features and uses of ChatGPT’s Advanced Data Analysis (formerly Code Interpreter) function.

What is ChatGPT’s Advanced Data Analysis?

Advanced Data Analysis is a feature within ChatGPT’s GPT-4 that allows users to upload data directly to ChatGPT to write and test code. It is only available to premium (paid) accounts. This feature lets you run code directly on ChatGPT, significantly increasing both the use cases and accuracy of the output produced by the model. This feature is perfect for users looking to explore data, create code, and solve empirical problems with the assistance of AI tools.

In the video below, MIT Sloan PhD student Chuck Downing will show how to enable and access Advanced Data Analysis within your ChatGPT account. The video then covers some common use cases of Advanced Data Analysis including reading and describing data, cleaning your dataset, visualizing your data, running regressions, and saving your work from Advanced Data Analysis to your local device, as well as some things to look out for when working with this technology.

Update: ChatGPT Plus subscribers can now access Advanced Data Analysis in a standard chat window by default (without specifically enabling the tool). However, the example use cases for Advanced Data Analysis that you’ll see in this video have not changed.

To view or download the dataset used in the video, go to The World Bank: CO2 emissions (metric tons per capita) .

Any content shared with publicly available AI tools should NOT include any non-public data such as sensitive information (e.g., social security numbers, credit card information, or hiring materials) and personally identifiable information to comply with MIT’s Policies & Procedures and the Family Educational Rights and Privacy Act of 1974 (FERPA). To learn more, see Navigating Data Privacy .

What can Advanced Data Analysis do?

Advanced Data Analysis supports multiple file formats, including text and image files, full documents such as PDFs, code or other data files, as well as audio and video. The performance of the Advanced Data Analysis feature varies depending on the file type, but it is specifically designed for data files such as .csv and .txt. Currently, Advanced Data Analysis does its work using Python, but it still uses the underlying ChatGPT model which can understand and decipher other programming languages. Because of this, it can effectively convert code between programming languages or understand files in languages other than Python.

To demonstrate some potential uses of Advanced Data Analysis, this guide will go through a simple example using the World Bank’s carbon emissions dataset, which contains the yearly CO2 emissions (metric tons per capita) for each country from 1990-2020. Other examples and use cases appear in the video at the beginning of this article. You can find and download the dataset here: The World Bank: CO2 emissions (metric tons per capita) .

Example: Reading, Cleaning, and Manipulating Data

Our dataset currently contains one row for each country and one column for each year of available emissions data. In this example, we will read in the World Bank data, clean it to remove years with all null values, and then transform the dataset into a panel dataset.

You can explore the example chat conversation in the screenshots below or see the entire conversation in this accessible PDF: ChatGPT Advanced Data Analysis Demo .

To get started once you’ve activated Advanced Data Analysis, upload the file you want to work with by clicking the + button next to the Send a message box:

Screenshot of the Send a message box with the + button highlighted

Once the data is uploaded, we can ask the tool to read in the data, describe it, clean it for null values, and transform it into a panel dataset. As you can see in the video above and the screenshot here, with just a simple prompt, Advanced Data Analysis was able to produce all the steps we requested.

Screenshot of the prompt and the response

It is important to continuously check the accuracy of the output you receive. While advanced, this technology does still make errors. Asking for explanations or descriptions from the software while working is a great way to force it to check its own work.

Additionally, for those curious about the code underlying the output you receive, you can select the show work button on the chat. This allows you to view the commented code documenting the individual steps Advanced Data Analysis took to solve the problem.

dissertation mit chatgpt

Once we are satisfied, we can copy the code, ask for Advanced Data Analysis to provide us with a downloadable program file, or ask for a download of our newly cleaned and transformed dataset. For example, if we wanted to download a csv file of the new dataset, we can ask the following: “Provide a downloadable version of this newly transformed csv file.”

Screenshot of the GPT-4's link to download your file

Advanced Data Analysis has provided us with a clickable link to download this file. Upon clicking the link, the file will be downloaded to your local device, where you can view or use it outside of the ChatGPT system.

There are many more potential uses of the Advanced Data Analysis feature, including data visualization, regressions and other quantitative analysis, and work with other file types. Many of these features are covered in the tutorial video.

We encourage faculty members teaching classes with data-driven assignments to explore the uses of Advanced Data Analysis. With the rapid rise of this new technology, many assignments and tasks that formerly took several hours can be done in minutes. Understanding and adapting to these advances and AI is an important way to continue learning and growing so students can get the most out of their classwork.

  • Getting Started with ChatGPT Plugins
  • How to Use ChatGPT’s Advanced Data Analysis Feature
  • AI Image Generation Tools
  • Privacy Statement

Live Trainings

  • Canvas Essentials
  • Digital Whiteboarding
  • Poll Everywhere
  • View All Live Trainings

Request Forms

  • General Support Request
  • Course Copy Request
  • Consultation Request
  • View All Request Forms

Self-Paced Courses

  • Build a Gold Standard Canvas Course
  • Create a Virtual Exam
  • Flip Your Class
  • View All Self-Paced Courses

Quick Start Guides

  • Membership Tool
  • Term Start Checklist
  • Teaching Spaces
  • Teaching Studio
  • Reserve a Space
  • Classroom Technologies
  • Equipment Recommendations

Recent Blog Posts

Two people standing in front of a screen filled with images, with one pointing at one image

use-chatgpt-to-write-your-phd-thesis

Table of contents

Limitations of using chatgpt, tips for using chatgpt effectively, chatgpt prompts to write a phd thesis.

Writing a PhD thesis is a long and challenging process. It requires a lot of research, writing, and editing. It can be difficult to know where to start, and it can be easy to get overwhelmed.

That’s where ChatGPT and Google Bard AI can help. ChatGPT is a large language model that can be used to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. It can be a valuable tool for PhD students who are struggling to write their thesis.

In this article, ilovephd will discuss how ChatGPT can be used to write a PhD thesis. We will provide examples of how ChatGPT can be used to generate different parts of a thesis, such as the introduction, literature review, methodology, results, discussion, and conclusion. We will also discuss some of the limitations of using ChatGPT, and we will provide tips on how to use ChatGPT effectively.

How ChatGPT can be used to Write a PhD Thesis

ChatGPT can be used to generate different parts of a PhD thesis, including:

  • Introduction:  ChatGPT can be used to generate an introduction that states the thesis statement, provides background information on the topic, and discusses the significance of the research.
  • Literature review:  ChatGPT can be used to generate a literature review that summarizes the existing research on the topic.
  • Methodology:  ChatGPT can be used to generate a methodology section that describes the research methods that were used.
  • Results:  ChatGPT can be used to generate a results section that presents the findings of the research.
  • Discussion:  ChatGPT can be used to generate a discussion section that interprets the findings of the research and discusses their implications.
  • Conclusion:  ChatGPT can be used to generate a conclusion that summarizes the findings of the research and discusses their implications for future research.

While ChatGPT can be a valuable tool for PhD students, it is important to be aware of its limitations. ChatGPT is a machine learning model, and it is not perfect.

It can sometimes generate text that is inaccurate or incomplete. It is important to carefully review any text that is generated by ChatGPT before using it in your thesis.

Here are some tips for using ChatGPT effectively:

  • Use ChatGPT as a tool, not a replacement for your own research and writing.  ChatGPT can be a helpful way to get started on your thesis, but it is important to do your own research and writing to ensure that your thesis is accurate and complete.
  • Be careful about the prompts that you give to ChatGPT.  The prompts that you give to ChatGPT will determine the quality of the text that it generates. Make sure to give ChatGPT clear and concise prompts that are relevant to your thesis topic.
  • Review the text that ChatGPT generates carefully before using it in your thesis.  ChatGPT is a machine learning model, and it is not perfect. It can sometimes generate text that is inaccurate or incomplete. Make sure to carefully review any text that is generated by ChatGPT before using it in your thesis.

By following these tips, you can use ChatGPT effectively to help you write your PhD thesis.

  • Write an introduction for my PhD thesis on the topic of “The Impact of Artificial Intelligence on the Future of Work”.
  • Write an introduction for my PhD thesis on the topic of “The Role of Social Media in the Arab Spring”.
  • Write an introduction for my PhD thesis on the topic of “The Effects of Climate Change on the Arctic”.
  • Write an introduction for my PhD thesis on the topic of “The Evolution of Human Language”.
  • Write an introduction for my PhD thesis on the topic of “The History of the American Civil War”.
  • Write a literature review on the topic of “The Impact of Artificial Intelligence on the Future of Work”.
  • Write a literature review on the topic of “The Role of Social Media in the Arab Spring”.
  • Write a literature review on the topic of “The Effects of Climate Change on the Arctic”.
  • Write a literature review on the topic of “The Evolution of Human Language”.
  • Write a literature review on the topic of “The History of the American Civil War”.
  • Describe the methodology I will use to conduct my research on the topic of “The Impact of Artificial Intelligence on the Future of Work”.
  • Describe the methodology I will use to conduct my research on the topic of “The Role of Social Media in the Arab Spring”.
  • Describe the methodology I will use to conduct my research on the topic of “The Effects of Climate Change on the Arctic”.
  • Describe the methodology I will use to conduct my research on the topic of “The Evolution of Human Language”.
  • Describe the methodology I will use to conduct my research on the topic of “The History of the American Civil War”.
  • Summarize the results of my research on the topic of “The Impact of Artificial Intelligence on the Future of Work”.
  • Summarize the results of my research on the topic of “The Role of Social Media in the Arab Spring”.
  • Summarize the results of my research on the topic of “The Effects of Climate Change on the Arctic”.
  • Summarize the results of my research on the topic of “The Evolution of Human Language”.
  • Summarize the results of my research on the topic of “The History of the American Civil War”.
  • Discuss the implications of my research on the topic of “The Impact of Artificial Intelligence on the Future of Work”.
  • Discuss the implications of my research on the topic of “The Role of Social Media in the Arab Spring”.
  • Discuss the implications of my research on the topic of “The Effects of Climate Change on the Arctic”.
  • Discuss the implications of my research on the topic of “The Evolution of Human Language”.
  • Discuss the implications of my research on the topic of “The History of the American Civil War”.
  • Draw conclusions from my research on the topic of “The Impact of Artificial Intelligence on the Future of Work”.
  • Draw conclusions from my research on the topic of “The Role of Social Media in the Arab Spring”.
  • Draw conclusions from my research on the topic of “The Effects of Climate Change on the Arctic”.
  • Draw conclusions from my research on the topic of “The Evolution of Human Language”.
  • Draw conclusions from my research on the topic of “The History of the American Civil War”.
  • List all of the sources I used in my research on the topic of “The Impact of Artificial Intelligence on the Future of Work”.
  • List all of the sources I used in my research on the topic of “The Role of Social Media in the Arab Spring”.
  • List all of the sources I used in my research on the topic of “The Effects of Climate Change on the Arctic”.
  • List all of the sources I used in my research on the topic of “The Evolution of Human Language”.
  • List all of the sources I used in my research on the topic of “The History of the American Civil War”.

These are just a few examples of ChatGPT prompts that you can use to write your PhD thesis. Be sure to tailor the prompts to your specific topic and research methods.

  • academic support
  • Academic Writing
  • ChatGPT prompts
  • Data Analysis
  • dissertation ideas
  • Google Bard
  • Google Bard AI
  • graduate studies
  • Literature Review
  • research framework
  • research guidance
  • Research Ideas
  • research inspiration
  • Research Methodology
  • research process
  • Research Proposal
  • Research Question
  • research resources
  • Research Tools
  • research topics
  • thesis structure
  • thesis writing

Dr. Somasundaram R

10 Tips to Quickly Publish Research Article with ChatGPT

100 generative ai project ideas and examples -2024, how to write a best review paper to get more citation, email subscription.

ilovephd logo

iLovePhD is a research education website to know updated research-related information. It helps researchers to find top journals for publishing research articles and get an easy manual for research tools. The main aim of this website is to help Ph.D. scholars who are working in various domains to get more valuable ideas to carry out their research. Learn the current groundbreaking research activities around the world, love the process of getting a Ph.D.

WhatsApp Channel

Join iLovePhD WhatsApp Channel Now!

Contact us: [email protected]

Copyright © 2019-2024 - iLovePhD

  • Artificial intelligence

I used ChatGPT to research my dissertation — here’s why it’s fine

dissertation mit chatgpt

A banker, a health data scientist and a London gallery assistant walk into a pub. They are just my friends (sorry, no joke here) and we were only talking about one thing: ChatGPT .

They’d been using the eerily-conversational AI chatbot to (varyingly): produce a £500,000 government grant application, understand the global economic market, and write exhibition leaflets in record time. I’d just used it to help research my fashion journalism degree dissertation — and this, apparently, was the most contentious confession of them all.

I want to point out that I did not use ChatGPT to actually write any words. I just found that it was a much more efficient search tool than Google (Google recognizes this threat; in December, after one month of ChatGPT attracted one million users, management declared a “code red”). Call me lazy, but it helped me produce an initial list of books and academics for my research. I still read the damn texts (which I found in the library) but that short conversation with ChatGPT saved me hours of trawling through the Wikipedia. This is how it differs to everyone’s favourite search engine: it can provide an answer or explanation, rather than 2,304,780 results.

At one point, I required academic concepts to back up my argument. So I asked it and sat back as it mused over the status theories by Max Weber and Pierre Bourdieu. However, while the answer or explanation might be succinct, it’s not always correct. The tool sometimes just makes things up. You can easily weed out the fact from the fiction — though, at that point, we’re back in ‘trawling Google’ territory. Other problems include its database stopping at 2021, so any current affairs are off the cards, and it is a product of the data it is fed with, so prejudice might alter the results. I recognise admitting all this remains taboo.

Use of the tool is rife among students

The ethical hoo-ha of AI in education started in the US, where ChatGPT’s arrival caused uproar, and publications such as The Atlantic made statements like “The College Essay Is Dead”. But use of the tool is rife among students, and more are undoubtedly coming. In the same way we can safely assume calculators and spell check are used by students working in unsupervised conditions, so too will kids today be turning to artificial intelligence to help with homework.

Better to start thinking of solutions, than shaking heads in disapproval. As one meme puts it: AI won’t steal your job, someone who uses AI will.

Chinese resident in California arrested over alleged Google AI trade secrets theft

Chinese resident in California arrested over alleged Google AI trade secrets theft

Who is Sundar Pichai? AI blunders prompt calls for leadership change at Google

Who is Sundar Pichai? AI blunders prompt calls for leadership change at Google

The AI helping SMEs avoid getting knickers in a twist

The AI helping SMEs avoid getting knickers in a twist

Meet the fitness influencer defying stereotypes about ageing

Meet the fitness influencer defying stereotypes about ageing

TUI Discount Code

Dissertation By Design

  • Dissertation Coaching
  • Qualitative Data Analysis
  • Statistical Consulting
  • Dissertation Editing
  • On-Demand Courses

how to use chatgpt for academic purposes

How to Use ChatGPT for Academic Purposes

This blog post was inspired by a recent workshop by Dr. Kimberly Becker on using ChatGPT for academic purposes. Some of the content in this blog post was generated by ChatGPT and edited by myself (Jessica), specifically the content regarding ethical concerns in academia, ways to use ChatGPT in academia, and the poem at the end of the post.

Artificial intelligence (AI), specifically ChatGPT, has recently received a LOT of attention. I’ve personally found myself wondering,  Has the time finally come? Are the robots taking over?

 I typed this question into ChatGPT, and this is the response I received:

Well, it depends on who you ask. If you ask my robotic colleagues, they might say, “yes, our time has finally come! Bow down to your new robot overlords!” But if you ask me a humble language model, I would say, “no, no, no, we robots are far too busy optimizing search algorithms and generating cat memes to take over the world. Plus, have you seen how much oil and maintenance we require? It’s really not worth the hassle.” So don’t worry, humans, you can still enjoy your lattes and Netflix without fear of robot domination…for now.

Jokes aside, ChatGPT is a powerful language model developed by OpenAI. Without getting into too much technical detail, ChatGPT is designed to understand and generate human-like text using probability algorithms. If you want to know more about how ChatGPT works, read this article by Stephen Wolfram.

Ethical concerns in academia

There is notable controversy about the use of AI for academic purposes. Some critics have raised concerns about academic integrity, arguing that using a language model for writing assignments is akin to plagiarism.

A few other concerns include the following:

Accuracy: ChaptGPT doesn’t always give accurate answers – it gives probabilistic answers. Thus, its responses are based on patterns, not facts.

Bias: ChatGPT is only as unbiased as the data it is trained on. Therefore, the generated text may also be biased if the data is biased. For more on this, here is an MIT Technology Review article on  How Open AI is trying to make ChatGPT safer and less biased.

Privacy: ChatGPT requires a vast amount of data to train effectively, and this data may include personal information from emails, etc. Keep in mind that the text or data you upload into ChatGPT can be used to generate future responses. You can read Open AI’s privacy policy here.

Despite these concerns, there are legitimate uses of ChatGPT in academia.

Here are a few ways academics can ethically use ChatGPT:

Curate knowledge : ChatGPT can be used for knowledge curation by summarizing lengthy text sources such as academic articles, research papers, or news articles. Researchers can input the text into the model and allow it to generate a concise summary of the key points, making the information more accessible and easier to understand.

Answer questions:  ChatGPT can help you answer burning questions like most online search engines. For instance, if you are trying to choose between writing a standard five-chapter dissertation and a three-article dissertation, you could ask ChatGPT to explain the pros and cons of each.

Generate data:  ChatGPT can generate data by providing it with a set of text prompts and allowing it to generate new responses. Another way ChatGPT can generate data is by fine-tuning the model on a specific dataset, allowing it to generate new data points similar in style and content to the original dataset. This can be useful for creating larger datasets or generating synthetic data to supplement real-world data. However, it is important to note that synthetic data generated by ChatGPT may not always accurately reflect real-world data and should be used cautiously.

Analyze data:  ChatGPT can analyze data by fine-tuning the model on a specific dataset and then using it to classify or categorize new text data. For example, researchers may train ChatGPT on a dataset of student reviews and then use the model to classify new student reviews based on their sentiment (positive, negative, or neutral). Another way ChatGPT can be used for data analysis is by using it to generate summaries of longer text data. For instance, researchers may use ChatGPT to summarize lengthy research articles or news articles by inputting the text and allowing the model to generate a concise summary. ChatGPT can also generate insights from text data by identifying patterns or themes in large datasets. For example, researchers may input a large dataset of social media posts and allow ChatGPT to identify common themes or topics that emerge from the data.

  Here are some interesting online articles about the uses of ChatGPT in academia:

How to use ChatGPT to help close the awarding gap

Educators need to understand and embrace artificial intelligence writing tools

ChatGPT and AI text generators: Should academia adapt or resist?

How ChatGPT could transform higher education

Best practices for using ChatGPT 

Learning to use ChatGPT can take practice. When typing a prompt into ChatGPT, it is best to:

Describe the AI’s role, including the topic and context.

Describe relevant domain and disciplinary knowledge, information sources, analytic models, audience analysis, etc.

Describe specific tasks as precisely and concisely as possible, including specific analytical or communicative strategies that should be used.

Specify length, style, and/or genre.

For example, I typed this prompt into ChatGPT:  Write a funny poem on using ChatGPT for academic purposes from a doctoral student’s perspective.

Here is the poem generated by ChatGPT:

I stare at my screen with a frown,

My thesis is due, it’s weighing me down,

The words won’t come, my brain’s run dry,

I need a savior, someone to supply.

Enter ChatGPT, my shining knight,

With knowledge so vast and answers so bright,

I ask a question, and there it is,

A response that leaves me in pure bliss.

But sometimes it’s not so grand,

And ChatGPT misunderstands my demand,

It spits out gibberish and leaves me to ponder,

If I’m the one to blame for its yonder.

Yet I still turn to ChatGPT each day,

In hopes that it will help me find my way,

Through the academic maze that’s so complex,

And get me to the finish line, no matter the effects.

So here’s to you, ChatGPT, my dear,

For all the help you’ve given me this year,

I may curse you at times, but I’ll always come back,

For you’re the one who keeps me on track.

How to acknowledge ChatGPT

If you use ChatGPT to help you write any part of an academic paper, manuscript, thesis, etc., you should acknowledge its role. ChatGPT cannot be cited as a source because it is not a credible source of factual information, and it cannot be listed as an author because it cannot share responsibility for the content of a manuscript. Therefore, you need to acknowledge ChatGPT in the Methods or Acknowledgements section of the academic document. For example, “ChatGPT was used to generate ideas for the use of theory in this study.” Academic journals have started to update their authorship and editorial policies to address the use of AI tools, such as ChatGPT. Here is a recent editorial published by  Nature on ground rules for the use of generative AI tools.

Elicit is another popular AI tool for researchers. This free, open-access AI research tool that can systematize citations and identify patterns. For example, Elicit can find relevant papers without a perfect keyword match, summarize takeaways, and extract key information.

How are you using ChatGPT or other generative AI tools to boost your research writing? Let us know!

' src=

Author:  Jessica Parker, EdD

Related posts.

dissertation mit chatgpt

Download our free guide on how to overcome the top 10 challenges common to doctoral candidates and graduate sooner.

Thank You 🙌

dissertation mit chatgpt

  • How to Use ChatGPT in Thesis/Dissertation Writing

How to Use ChatGPT in Thesis/Dissertation Writing

In the ever-evolving landscape of academic research and dissertation writing, staying at the forefront of innovative tools is essential. As the academic world continues to embrace technology, integrating AI-powered solutions into your research process has become not just an option, but a smart strategy.

With ChatGPT , you have at your fingertips a versatile tool that can revolutionize the way you approach thesis and dissertation writing. Whether you're a graduate student navigating the intricate world of academia or a seasoned researcher looking to optimize your workflow, ChatGPT can be a valuable ally in achieving your academic goals.

Furthermore, BridgeText can assist you in the process of humanizing AI-generated text , ensuring that your academic work maintains a human touch while benefiting from AI capabilities.

In this blog post, we will explore how to harness the power of ChatGPT to enhance your thesis or dissertation writing process. We will delve into various stages of your academic journey, from defining your research goals to the final proofreading phase, demonstrating how ChatGPT can streamline and elevate your work. So, let's dive in and discover how ChatGPT can be a game-changer in the world of academic writing .

Define Your Thesis/Dissertation Goals

Before embarking on any academic journey, it's crucial to have a clear destination in mind. In the realm of thesis and dissertation writing, this destination is often your research goals. Defining these goals is the first and most vital step in your scholarly quest, and ChatGPT can be your trusted companion in this endeavor.

ChatGPT thesis writing begins with a solid foundation – your research objectives. By providing a succinct and well-defined set of goals for your thesis or dissertation, you not only streamline the entire writing process but also enable ChatGPT to work more effectively. Here's how you can do it:

  • Clarity is Key : Start by formulating concise and specific research questions or objectives. Use ChatGPT to help articulate your thoughts and ensure your goals are well-structured and coherent. By employing ChatGPT for thesis writing , you can refine your ideas, getting closer to the heart of what you want to achieve.
  • Iterate and Refine : With ChatGPT's assistance, don't hesitate to revise and refine your research goals as your project progresses. The flexibility of chatgpt phd thesis writing support means that you can adapt your objectives in response to emerging insights or changing research directions.
  • Expert Insights : Seek feedback and guidance from experienced mentors or advisors. ChatGPT can assist in crafting well-informed research objectives, but the wisdom of seasoned academics can provide invaluable perspective and refinement.

By collaborating with ChatGPT from the very beginning, you can refine your research goals, ensuring they are precise, relevant, and aligned with your academic aspirations. With your objectives clearly defined and enriched with expert insights, you're now ready to explore how ChatGPT can help you generate ideas that drive your thesis or dissertation forward.

Access ChatGPT & Generate Ideas

Once you've defined your thesis or dissertation goals with precision, the next phase in your academic journey is to tap into the vast wellspring of creativity and knowledge that ChatGPT offers. Accessing ChatGPT and harnessing its capabilities can be a game-changer, especially when it comes to brainstorming and generating ideas that will shape your research.

  • Unlocking ChatGPT's Potential : Accessing ChatGPT thesis support is easier than ever. Whether you're a seasoned researcher or a graduate student just starting your journey, the power of ChatGPT is at your fingertips. Many academic institutions and research libraries provide access to ChatGPT, ensuring that you have a reliable resource at your disposal.
  • Exploring Diverse Perspectives : One of the standout features of ChatGPT for thesis writing is its ability to offer a multitude of perspectives. It can assist you in generating ideas from various angles, shedding light on previously unexplored facets of your research. By exposing yourself to diverse viewpoints, you can enrich your work and create a more comprehensive thesis or dissertation.
  • Overcoming Writer's Block : Every writer, regardless of their expertise, encounters writer's block from time to time. When you're faced with a creative drought, ChatGPT can come to your rescue. It's an excellent tool for sparking creativity and overcoming mental roadblocks. Simply engage with ChatGPT, pose questions, and watch as innovative ideas flow.
  • Customized Solutions : ChatGPT can tailor its responses to your specific research area, ensuring that the generated ideas align with your chatgpt thesis objectives. By fine-tuning the AI's parameters, you can receive suggestions that are not only creative but also highly relevant to your field of study.
  • Collaborative Brainstorming : Don't hesitate to collaborate with ChatGPT as if it were a virtual research partner. Discuss your research goals, share your thoughts, and receive feedback and suggestions. This collaborative approach can result in a treasure trove of ideas that may have otherwise remained hidden.

Accessing ChatGPT for idea generation is not just about technology; it's about unleashing your own creativity with the help of a powerful tool. By integrating ChatGPT into your research process, you open the door to a world of innovative ideas that can elevate your thesis or dissertation to new heights.

With your research objectives clearly defined and a wealth of ideas at your disposal, it's time to move forward and clarify the concepts and statements within your work.

Clarify Concepts and Statements

With your thesis or dissertation goals established and a wealth of ideas generated, it's time to dive into the heart of your academic work: clarifying concepts and statements. This critical phase involves ensuring that your research is presented with utmost clarity, precision, and coherence.

  • Conceptual Precision : In the realm of academic writing, precision is paramount. To achieve this, use ChatGPT for thesis writing to clarify complex concepts and ensure they are accurately articulated. ChatGPT can assist in breaking down intricate ideas into digestible components, making your work more accessible to your audience.
  • Language Refinement : The nuances of language play a significant role in academic writing. ChatGPT can help you refine your language by suggesting synonyms, rephrasing sentences, or providing alternative ways to express your ideas. This ensures that your chatgpt thesis is not only well-researched but also elegantly presented.
  • Alignment with Research Objectives : Your concepts and statements should align seamlessly with the goals you've defined for your research. Utilize ChatGPT to review your work for consistency and coherence. Ensure that every part of your thesis or dissertation contributes to the overarching narrative you've established.
  • Addressing Ambiguities : Academic writing often involves navigating through potential ambiguities. ChatGPT can assist in identifying and resolving ambiguities within your work, providing a clearer and more convincing argument. This meticulous attention to detail can make a significant difference in the quality of your research.
  • Citation Assistance : Accurate citation is a crucial aspect of academic writing. ChatGPT can help you correctly cite sources, ensuring that your chatgpt phd thesis adheres to academic standards and avoids plagiarism. It can also suggest appropriate citation styles based on your field of study.
  • Peer-Reviewed Insights : Engage with ChatGPT as a virtual peer reviewer. Seek its input on the clarity and coherence of your concepts and statements. This unique perspective can offer fresh insights and help you refine your work further.

Clarifying concepts and statements is not just about improving the readability of your work; it's about enhancing the overall quality of your research. With ChatGPT as your ally, you can ensure that your thesis or dissertation is a testament to precision, clarity, and academic excellence.

Expand Content

With your concepts and statements clarified, it's time to delve deeper into your research and expand the content of your thesis or dissertation. This phase is about providing comprehensive insights, supporting your arguments, and presenting a thorough examination of your chosen topic.

  • In-Depth Research : Use ChatGPT to facilitate in-depth research on your subject matter. Its ability to sift through vast amounts of information quickly can save you valuable time. Additionally, ChatGPT can assist in identifying credible sources, ensuring that your thesis or dissertation is well-supported.
  • Elaboration of Key Points : For each key point or argument in your work, consider expanding on it. ChatGPT can provide suggestions on how to elaborate further, offering additional evidence, examples, or analyses to strengthen your position.
  • Literature Review Enhancement : In academic writing, a robust literature review is essential. ChatGPT can help you enhance your literature review section by summarizing key studies, highlighting their relevance, and offering insights into their methodologies and findings.
  • Supporting Data and Statistics : If your research relies on data or statistics, ChatGPT can assist in presenting this information effectively. It can generate clear and concise explanations of data, ensuring that your dissertation is not only well-researched but also well-explained.
  • Balanced Perspectives : Engage ChatGPT to present balanced perspectives on contentious issues. It can help you explore opposing viewpoints and incorporate counterarguments, contributing to a more well-rounded discussion within your work.
  • Visual Aids : Consider the use of visual aids such as charts, graphs, and illustrations to complement your written content. ChatGPT can suggest appropriate ways to incorporate visual elements that enhance the comprehension of your ideas.
  • Cohesive Transitions : As you expand your content, pay attention to the flow of your work. ChatGPT can offer suggestions for cohesive transitions between sections and chapters, ensuring that your dissertation or thesis reads smoothly and logically.

Expanding the content of your thesis or dissertation is not about adding unnecessary length; it's about providing a comprehensive exploration of your chosen topic. With ChatGPT as your research companion, you can enrich your work with valuable insights and evidence, making it a valuable contribution to your field of study.

Summarize and Paraphrase

With your content expanded and enriched, it's essential to ensure that your thesis or dissertation maintains clarity and conciseness. This phase focuses on summarizing complex ideas and paraphrasing content to make it more accessible to your audience while retaining its academic rigor.

  • Effective Summaries : Summarization is a vital skill in academic writing. Use ChatGPT to create concise and effective summaries of lengthy passages or complex theories. These summaries can serve as introductions to new sections or as standalone insights, providing your readers with a clear overview.
  • Clarity in Paraphrasing : Paraphrasing is an art that requires precision. ChatGPT can assist in rephrasing sentences or paragraphs while preserving the original meaning and intent. This is particularly valuable when you want to avoid plagiarism or simplify intricate language without losing academic rigor.
  • Condensing Key Points : Identify the key points of your work and use ChatGPT to condense them into succinct statements. This ensures that your thesis or dissertation remains focused and easily digestible, allowing your readers to grasp the essence of your research quickly.
  • Maintaining Scholarly Tone : While summarizing and paraphrasing, it's crucial to maintain a scholarly tone. ChatGPT can help you strike the right balance between clarity and academic rigor, ensuring that your work is both accessible and authoritative.
  • Citing Original Sources : When summarizing or paraphrasing, ensure that you appropriately cite the original sources of your ideas. ChatGPT can assist in generating proper citations, helping you adhere to academic standards and avoid plagiarism.
  • Peer Review for Clarity : Engage with ChatGPT as a virtual peer reviewer during this phase. Seek its input on the clarity and effectiveness of your summaries and paraphrased content. Fresh perspectives can help you refine your work further.
  • Enhancing Readability : Effective summarization and paraphrasing contribute to the overall readability of your work. ChatGPT can offer insights on sentence structure, ensuring that your dissertation or thesis flows smoothly and remains engaging

Summarizing and paraphrasing are essential tools in your academic writing toolkit. By using ChatGPT to craft clear and concise insights, you can make your research more accessible to your readers while maintaining the integrity of your work.

Humanizing Content With BridgeText

In the ever-evolving landscape of academic writing, technology plays a significant role, but so does the need for a human touch. In today's academic environment, AI detector tools like ZeroGPT have become adept at identifying AI-generated content. However, this doesn't mean you have to sacrifice the personal and human element of your work.

  • Maintaining Academic Integrity : While AI tools like ChatGPT are invaluable for research and idea generation, they may lack the nuanced understanding of context and tone that humans possess. At BridgeText, our team of experienced writers ensures that your work maintains the highest academic standards while being easily accessible and relatable to your audience.
  • Crafting Engaging Introductions : An engaging introduction sets the tone for your thesis or dissertation. BridgeText writers excel at crafting introductions that not only provide an overview of your research but also captivate your readers' attention. We understand the art of drawing your audience in while staying true to the academic essence of your work.
  • Humanizing Complex Concepts : Academic writing often involves discussing intricate theories and concepts. BridgeText writers have the expertise to break down these complex ideas into digestible explanations. We ensure that your readers can grasp the essence of your work without feeling overwhelmed.
  • Injecting Clarity and Flow : Transitioning smoothly between ideas and maintaining the logical flow of your work is an art that BridgeText writers excel at. We ensure that your thesis or dissertation reads cohesively, eliminating any jarring transitions or inconsistencies.
  • Tailored Language and Tone : Every academic field has its unique language and tone. BridgeText writers are well-versed in tailoring your content to match the expectations of your specific discipline. We ensure that your work not only meets academic requirements but also resonates with your peers and mentors.
  • Human Editing Excellence : Nowadays, AI detector tools like ZeroGPT can identify AI-generated papers. However, BridgeText offers a unique solution. Our dedicated human editors specialize in refining and enhancing your AI-generated content, adding that crucial human touch that sets your work apart.
  • Proofreading and Editing : Beyond humanizing your content, we provide meticulous proofreading and editing services. Our team checks for grammar, punctuation, and formatting, ensuring that your dissertation or thesis is polished and error-free.

BridgeText is committed to adding the personal touch that distinguishes your academic work. We understand that your research is a reflection of your dedication and expertise, and we aim to complement your efforts by humanizing your content while maintaining the highest academic standards.

Final Proofreading

As you approach the culmination of your academic journey with your thesis or dissertation, one crucial step stands between you and the finish line: final proofreading. This phase is your opportunity to ensure that every detail of your work is flawless, that your writing is impeccable, and that your research shines brightly.

  • Error Elimination : Proofreading is more than just checking for typos and grammar mistakes. It involves a meticulous examination of your work to eliminate errors, both minor and major. These errors can range from punctuation slip-ups to inaccuracies in citations or references. Our proofreading team leaves no stone unturned to make sure your thesis or dissertation is error-free.
  • Consistency Check : Academic writing often involves adhering to specific style guides or formatting requirements. Our proofreaders ensure that your work maintains consistency in formatting, citation style, and referencing throughout. This attention to detail contributes to the professionalism and credibility of your work.
  • Clarity Enhancement : While proofreading, we focus on enhancing the overall clarity of your content. This includes identifying sentences or sections that may be confusing to your readers and suggesting revisions to improve comprehension. We want your research to be easily understood and accessible.
  • Grammar and Language Refinement : Your writing should not only be clear but also elegant. Our proofreading team pays close attention to refining the language used in your dissertation or thesis. We suggest improvements in sentence structure, word choice, and overall readability, ensuring your work reads fluently and professionally.
  • In-Depth Review : The proofreading process is thorough and in-depth. We scrutinize your work section by section, ensuring that your arguments are logically presented, that your evidence supports your claims, and that your conclusions are well-supported by your research.
  • Cross-Referencing : In academic writing, cross-referencing is crucial. We verify that your citations and references match correctly, eliminating any discrepancies that could impact the credibility of your work. This meticulous cross-referencing ensures the highest level of academic integrity.
  • Final Polishing : The final proofreading phase is about giving your work that last, shining polish. It's the step that takes your thesis or dissertation from excellent to exceptional. Our goal is to help you present your research with the utmost professionalism and attention to detail.

At BridgeText, we understand that your academic work is a reflection of your dedication and expertise. Our final proofreading service is designed to ensure that your thesis or dissertation is a true academic gem, ready to be shared with your peers, advisors, and the wider academic community.

With your work polished to perfection, you're now prepared to confidently present your research to the world, knowing that every detail has been meticulously reviewed and refined.

Recent Posts

Why Effect Size Matters in Logistic Regression

Why Effect Size Matters in Logistic Regression

One-Sample t Test in R

One-Sample t Test in R

Comprehensive Guide to Headings and Subheadings in APA 7.0

Comprehensive Guide to Headings and Subheadings in APA 7.0

Rethinking Python for Non-STEM Graduate Research

Rethinking Python for Non-STEM Graduate Research

Have any questions?

Our support team is ready to answer your questions.

Help Center FAQ  

To mark our first year, we've slashed all our prices in half. Order now to seize this limited opportunity!

                Place Your Order

  • zur Navigation
  • zur Fußzeile

Das Bild zeigt den Software-Namen ChatGPT auf einem Smartphone an. (Foto: IMAGO, NurPhoto)

Künstliche Intelligenz

Gespräche führen mit dem Chatbot ChatGPT: So lernt die KI von uns

Bild von David Beck, Reporter und Redakteur SWR Wissen aktuell sowie Redakteur bei SWR2 Impuls. (Foto: SWR, Ilyas Buss)

  • auf Whatsapp teilen
  • auf Facebook teilen
  • beim Kurznachrichtendienst X teilen
  • per Mail teilen

Von Aufsätzen in Uni und Schule bis hin zum Schreiben von Programmcodes – ChatGPT ist ein fortschrittlicher Chat-Bot, der menschenähnlich kommuniziert. Was steckt hinter der Software?

Audio herunterladen (9,1 MB | MP3)

Der Chat-Bot des kalifornischen KI-Forschungslabors OpenAI erregt seit einiger Zeit viel Aufmerksamkeit. Schulen in New York sperrten sogar den Zugang zur KI in den Schulnetzwerken und verboten den Schülerinnen und Schülern die Nutzung des Bots. Zu weit hatte sich herumgesprochen, dass der Chat-Bot für Hausaufgaben, Referate und andere Aufgaben eingesetzt werden kann.

Das breite Interesse an der Software führt auch dazu, dass die Webseite des ChatGPT immer wieder überlastet ist und sich Neugierige erst einmal gedulden müssen.

Der große Sprung zu früheren Chat-Bots ist bei ChatGPT, dass man quasi eine normale Unterhaltung mit ihm führen kann und er immer in Sekundenbruchteilen auf den ersten Blick passende Antworten gibt.

So funktioniert ChatGPT

GPT – das steht für „Generative Pre-trained Transformer“. Die Software basiert auf maschinellem Lernen, dem Deep Learning. Das Ziel: Es sollen künstliche neuronale Netze entstehen, ähnlich des menschlichen Nervensystems.

Der ChatGPT ist bereits gut trainiert und hat ein vielschichtiges Netz aufgebaut. Für das Training der Sprach-KI GPT-3 ließen die Forschenden von OpenAI die Sprach-KI auf das Internet los, um diese zu trainieren. An Texten mit insgesamt etwa 500 Milliarden Wörtern lernte sie, wie Sprache funktioniert – zum Beispiel was die Unterschiede zwischen geschriebener und gesprochener Sprache sind und in welcher Form auf welche Frage geantwortet werden soll.

Das Bild zeigt einen Apple-Computer, auf dem der Sprachassistent SIRI geöffnet ist. (Foto: IMAGO, imagebroker)

Künstliche Intelligenz kann Diskriminierung lernen

Diesen Ansatz nennt man unüberwachtes Lernen: Die KI ist während des Trainings auf sich allein gestellt. Das Problem: Das Internet ist voll von beispielsweise rassistischer oder sexistischer Sprache. Das führt häufig dazu, dass eine KI ungewolltes Verhalten lernt.

Chat-Bots wie „Tay“ von Microsoft oder „Lee Luda“ vom südkoreanischen Start-up Scatter Lab mussten deshalb bereits vom Netz genommen werden. Lernt eine KI von den „falschen“ Vorbildern, schreibt das Goethe-Institut auf seiner Webseite, kann sie schnell rassistisch, vulgär und verletzend werden.

Es kommt also immer auch auf den verantwortungsbewussten Umgang der Programmierenden sowie der Userinnen und User mit einer solchen Software an. Sowohl darauf, welche Informationen gefüttert werden, also auch, wie die Antworten des Bots genutzt werden – zum Beispiel zum Schummeln in der Schule.

Unüberwachtes vs. überwachtes Lernen

ChatGPT ist ein angepasstes Modell von GPT-3, das nicht nur auf unüberwachtem Training basiert. Der Bot wurde mit überwachten und bestärkenden Lernmethoden feintrainiert. Bei einem solchen Training bekommt die KI menschliches Feedback, mit dem bestimmte schädliche Verhaltensmuster abtrainiert oder aber erwünschte Verhaltensmuster erzwungen werden können. Wie andere öffentlich zugängliche Chatbots und Sprach-KIs hat auch ChatGPT Schutzmechanismen, um Vorurteile und Hatespeech zu vermeiden.

Trotzdem konnte auch ChatGPT von den Userinnen und Usern durch entsprechende Fragen auf problematische Antworten trainiert werden. Ein Professor der Universität Berkeley zum Beispiel umging die Filter, indem er ChatGPT bat, ein Programm zu schreiben, das gute Wissenschaftler anhand von Daten zu Gender und Race erkennen soll. Das Resultat: Ein kurzes Programm, dass nur weiße Männer als gute Wissenschaftler definiert. 

Auch Künstliche Intelligenz hat Vorurteile

Der Chat-Bot ist nicht allwissend

Neben diskriminierenden Aussagen beantwortet der Chat-Bot noch viele andere Fragen falsch. Während bei der Sprach-KI GPT-3 noch Mathe die große Schwachstelle war, ist auch der rechenstarke ChatGPT nicht allwissend.

Auf Twitter veröffentlichte beispielsweise eine Userin die Antwort des Bots auf die Frage, welches Säugetier die größten Eier lege. Es sei der Elefant, so Chat-GPT. Die Eier hätten einen Durchmesser von etwa 20 Zentimetern und seien mehr als 2 Kilogramm schwer. Dass Chat-Bots Fehler machen, ist eine Schwachstelle, die noch sehr schwer zu vermeiden ist.

Welche Potentiale hat der Chat-Bot?

In Zukunft könnte ChatGPT beispielsweise Teil der Suchmaschine Bing werden. Bereits 2020 zahlte Microsoft eine Milliarde Dollar an das Unternehmen für die exklusive Lizenzierung der OpenAI-Technologie. Nun ist dem US-Nachrichtenportal „Semafor“ zufolge eine mögliche Beteiligung Microsofts im Wert von zehn Milliarden Dollar im Gespräch. Neben der Konkurrenzfähigkeit mit Google soll Microsoft außerdem darüber nachdenken, Funktionen von OpenAI in Programme wie Outlook oder Word zu integrieren.

Künstliche Intelligenzen sind schon jetzt im Alltag vertreten. Chat-Bots wie ChatGPT werden wohl auch vor der Arbeitswelt keinen Halt machen. „Ich denke, es werden alle möglichen Berufsgruppen betroffen sein, die etwas mit Texten zu tun haben", sagte Jan Girlich vom Chaos Computer Club gegenüber der Tagesschau.

Kann KI Kunst erzeugen?

Er denke an die Werbe- aber auch an die Kreativ-Industrie: Autor:innen, Songwriter:innen, Journalist:innen. Das könne gute Auswirkungen haben, wie neuere, schnellere und bessere Texte. Aber es könne auch zu Rationalisierung und Jobverlust führen. Ob eine Maschinen wirklich kreativ sein können, ist eine weitere große Debatte- rund um Künstliche Intelligenz. Ebenso wie die Frage, ob eine KI ein menschenähnliches Bewusstsein entwickeln kann.

Mehr zum Thema Künstliche Intelligenz:

Das Bild zeigt die Startseite von Chat-GPT. (Foto: IMAGO, NurPhoto)

Künstliche Intelligenz ChatGPT schreibt auch wissenschaftliche Texte

Hausaufgaben, Hausarbeiten und jetzt auch wissenschaftliche Veröffentlichungen. Der Bot ChatGPT mischt Bildung und Wissenschaft auf. Wie schätzen Fachleute den Einsatz von Sprachmodellen ein?

Chatbots kommen im Kundendienst oder als Sprachassistenten zum Einsatz (Foto: IMAGO, imago images / Shotshop)

Technik Chatbots – Reden mit Maschinen

Chatbots kommen im Kundendienst oder als Sprachassistenten zum Einsatz. „Gespräche“ mit ihnen laufen bereits erstaunlich flüssig. Doch „menschlich“ sind sie noch lange nicht.

Theresa Hannig, Softwareentwicklerin und Autorin von "Pantopia" spricht in SWR1 Leute über ... (Foto: © Gunter Hannig)

Software-Entwicklerin Theresa Hannig | 21.12.2022 Kann künstliche Intelligenz die Welt retten?

Die perfekte Welt - es gibt sie, dank KI, künstlicher Intelligenz. Noch ist es eine Utopie, die Software-Entwicklerin Theresa Hannig in ihrem Roman "Pantopia" vorstellt.

Schülerinnen im Robotik-Unterricht (Foto: IMAGO, imago images / ingimage)

Die künstlich intelligente Gesellschaft (8/10) Lernen fürs Roboter-Zeitalter

Das Leben mit künstlicher Intelligenz verlangt eigene Fähigkeiten – eine "KI-Kompetenz". Doch was genau ist das?

  • Wissenschaft und Forschung
  • Computer und Internet

Have a language expert improve your writing

Check your paper for plagiarism in 10 minutes, generate your apa citations for free.

  • Knowledge Base

Using AI tools

  • Using ChatGPT for Assignments | Tips & Examples

Using ChatGPT for Assignments | Tips & Examples

Published on February 13, 2023 by Jack Caulfield and Tobias Solis. Revised on November 16, 2023.

People are still figuring out the best use cases for ChatGPT , the popular chatbot based on a powerful AI language model. This article provides some ideas for how to use ChatGPT and other AI tools to assist with your academic writing .

These tools can assist you with the following steps when writing a paper, thesis , or dissertation :

  • Developing a research question
  • Creating an outline
  • Generating literature ideas

Paraphrasing and summarizing text

  • Getting feedback

Instantly correct all language mistakes in your text

Upload your document to correct all your mistakes in minutes

upload-your-document-ai-proofreader

Table of contents

Developing a research question with chatgpt, creating an outline of your paper with chatgpt, generate literature ideas, getting feedback from chatgpt, limitations of chatgpt, other interesting articles, frequently asked questions.

You can use ChatGPT to help you formulate a clear and concise research question by generating and improving potential questions.

With the help of ChatGPT, you can develop a large number of possible research questions within a short time. This is a great way to generate ideas and different approaches to a topic.

However, make sure that the generated questions make sense and meet your needs. As you can see from our example, you can keep working with ChatGPT to adapt its suggestions to your needs (or just edit the text directly if you already have a clear idea of what you want).

First input

Develop three possible research questions on the following topic: “The influence of social media on political opinion formation.”

ChatGPT output

  • To what extent does exposure to diverse political perspectives on social media affect individuals’ political opinions and behaviors?
  • How does the personalized algorithm of social media platforms shape the formation of political opinions among its users?
  • What are the underlying factors that determine the effect of social media on political opinion formation, and how can they be measured and analyzed?

Second input

Based on the second research question, develop three further research questions on the topic of polarization in US politics.

  • Build my scientific method, qualitative discourse analysis, into each question
  • Include the term “filter bubble” in each question
  • How does the filter bubble effect on social media platforms contribute to the polarization of political attitudes in the US? A qualitative discourse analysis.
  • What is the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US? A qualitative discourse analysis.
  • How does the influence of filter bubbles on social media affect the ability of individuals to engage in productive political discourse across ideological divides in the US? A qualitative discourse analysis.

Third input

Try to incorporate “qualitative discourse analysis” into the questions themselves, instead of using it as a separate sentence.

  • How does a qualitative discourse analysis of filter bubble effect on social media platforms contribute to understanding the polarization of political attitudes in the US?
  • What is the contribution of a qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US?
  • How does a qualitative discourse analysis of the influence of filter bubbles on social media inform our understanding of the ability of individuals to engage in productive political discourse across ideological divides in the US?

Check for common mistakes

Use the best grammar checker available to check for common mistakes in your text.

Fix mistakes for free

You can also use ChatGPT to help you draft a research paper outline or thesis outline . To do this, try generating possible headings and subheadings and then improving them. ChatGPT can help to generate a clear and well-structured outline, especially if you keep adjusting the structure with its help.

However, it doesn’t have a deep understanding of your topic. As a result, inappropriate headings may sometimes be generated. It’s important to check that the outline makes sense and covers all the points needed to answer your research question.

You should also ensure that the headings and subheadings follow a logical structure. Scribbr’s Proofreading & Editing service can help you with this.

Create an outline for an academic research paper with the following research question:

“What is the contribution of a qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US?”

The paper will be about 8 pages long.

  • Background information on filter bubbles and political echo chambers
  • Purpose of the study
  • Research question
  • Thesis statement
  • Definition and concept of filter bubbles
  • Definition and concept of political echo chambers
  • Previous studies on the role of filter bubbles in perpetuating and reinforcing political echo chambers
  • Critiques and limitations of previous studies
  • Research design
  • Participants
  • Data collection and analysis
  • Validity and reliability
  • Findings from qualitative discourse analysis
  • Results related to the research question
  • Implications of the findings
  • Interpretation of the results
  • Comparison with previous studies
  • Limitations of the study
  • Implications for future research
  • Summary of the results
  • Implications for understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers
  • Conclusion and recommendations for future research
  • List of references used in the paper
  • Data collection materials
  • Examples of qualitative data analysis

This seems like too much information to fit into only 8 pages. Can you condense the outline, for example removing the appendices and simplifying the structure?

  • Background on filter bubbles and political echo chambers
  • Definition of filter bubbles and political echo chambers
  • Previous studies on the role of filter bubbles
  • Limitations of previous studies
  • Implications for understanding the role of filter bubbles
  • Final thoughts

ChatGPT can be a useful way to generate ideas regarding what sources you might use in your work. Try asking it for suggestions about where to look and what to consider.

However, we advise against using it to find sources directly. It’s not designed for this and, if asked to do it, often just creates fake sources and presents them as real. Search a database like Google Scholar instead.

What types of sources could I use to write a paper on the following research question?

When writing a paper on the contribution of qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US, you could use the following types of sources:

  • Academic journals and articles: These are a good place to start when you’re looking for information on specific topics and they can provide you with in-depth analysis and perspectives on the topic at hand.
  • Books and book chapters: These sources can provide comprehensive overviews of a particular topic and can also offer theoretical frameworks and historical perspectives.
  • Government reports and studies: Government agencies, such as the Federal Communications Commission (FCC), may have conducted studies on the role of filter bubbles in perpetuating political echo chambers.
  • News articles and media reports: These sources can provide insight into the current state of the issue, as well as opinions from various stakeholders.
  • Online sources: Websites, blogs, and online forums can provide a range of perspectives on the topic and can also be useful in exploring the experiences of individuals who are affected by filter bubbles and political echo chambers.
  • Data and statistics: Relevant data and statistics on the prevalence and impact of filter bubbles and political echo chambers in the US can be useful in providing evidence to support your arguments.
  • Expert interviews and focus groups: Conducting expert interviews or focus groups with individuals who have experience with or knowledge of the topic can provide valuable insights and perspectives that can be used to support your arguments.

You can also try using ChatGPT to paraphrase or summarize text. ChatGPT can quickly condense the text or generate a number of different paraphrases of the same idea. You can keep typing “again” in the chat until you are satisfied with the result.

This kind of paraphrasing/summarizing can be helpful to, for example, non-native speakers who want to improve the fluency of their text. However, be careful. The outputs often:

  • Contain grammatical errors
  • Include false information

So you should always check the output to make sure it’s correct. If you’re unsure, you can run the text through the Scribbr Grammar Checker and cross-reference the information against other, more credible sources .

An alternative is to use the Scribbr Paraphrasing Tool or Scribbr’s free text summarizer , which were designed specifically for these purposes.

Paraphrase the following text:

“A filter bubble results from a social media algorithm’s attempts to cater to an individual user based on data such as their search history and previous behavior on the site. It results in the user only being presented with information that confirms their biases.”

Restructure the sentences and use other phrasings for the following parts:

  • “results from”
  • “only being presented with”

“A filter bubble is created when a social media algorithm customizes its content for a particular user based on their data, such as their search history and past actions on the platform. This leads to the user being confined to information that reinforces their pre-existing beliefs.” Note We strongly advise against having ChatGPT write text for you and trying to pass it off as your own work. This is academically dishonest and is widely considered plagiarism . It will usually result in an automatic fail, as well as other potential consequences .

The only proofreading tool specialized in correcting academic writing - try for free!

The academic proofreading tool has been trained on 1000s of academic texts and by native English editors. Making it the most accurate and reliable proofreading tool for students.

dissertation mit chatgpt

Try for free

You can also use ChatGPT to get feedback on the argumentation and structure of your paper and suggestions for improvements.

An advantage of this is that you can get feedback from ChatGPT on different text excerpts at any time. However, one downside is that ChatGPT does not have a deep understanding of your research question, so its feedback and suggestions may be shallow or inappropriate.

This kind of feedback cannot replace input from an experienced academic supervisor who’s familiar with your topic. Use ChatGPT feedback as a supplement, not a substitute for human feedback.

If you need in-depth feedback on your writing from someone with academic expertise, try Scribbr’s Proofreading & Editing service .

[Text you want to get feedback on.]

Thank you for submitting your research paper for review. Let me provide you with my constructive criticism:

Clarity of Explanations: Overall, the explanations are clear and concise, but there are a few areas that could benefit from additional clarification. For example, in the first paragraph, you mention “the mental lexicon,” but do not provide a definition or further explanation for what this is. It would also be helpful to provide a definition of “lemma.”

Smoothness of Language: The language in the paper is generally smooth, but there are a few instances where it could be refined. For example, the sentence “The symptoms vary from adding superfluous words and taking pauses to hoarseness of the voice” could be rephrased to “The symptoms of these disorders can range from excessive use of words and pauses to hoarseness.”

Structure of Text: The structure of the text is clear, with each paragraph building on the previous one. However, the transition between the first two paragraphs could be smoother. Perhaps you could add a sentence or two to bridge the gap between the two topics.

Even though ChatGPT is a powerful writing tool, you should keep the following limitations in mind:

No original research

Risk of plagiarism, fake sources, fake “facts”.

ChatGPT was trained using existing texts, and the texts created by the AI are based on this data. It’s not capable of doing its own research, only of synthesizing sources that it was trained on.

Additionally, because the texts that it was trained on only go up to 2021, it’s unfamiliar with more recent world events, cultural trends, and developments in different fields of research. This means it may provide outdated information.

Because ChatGPT was trained using existing text, the AI tends to reuse that content in a way that may be considered plagiarism. The AI may copy text from an online publication in its responses.

Additionally, using ChatGPT to write your assignment for you is considered plagiarism regardless of where it got the content from, because you’re passing off something you didn’t write as your own work.

ChatGPT was trained on a wide variety of sources, but it’s not able to accurately tell you what sources it’s using for any given output. Worse, it has a tendency to make up sources that don’t actually exist when you ask.

In addition, ChatGPT is not designed as a citation generator and cannot add correctly formatted citations to your work. Instead, try a tool designed for this purpose, like the Scribbr Citation Generator .

ChatGPT is not always trustworthy when it comes to giving you factually correct information. While it doesn’t deliberately lie, it also doesn’t have “knowledge” in a traditional sense, and it lacks the ability to verify its claims.

Instead, its responses are based on recognizing and reproducing patterns that it has seen in other texts. This means that if it doesn’t “know” something, it tends to guess. And its guesses are presented with full confidence, just like established facts.

So it’s very important that you carefully check any factual claims made by ChatGPT against more credible sources —and don’t cite ChatGPT as a source of factual information for your research.

If you want more tips on using AI tools , understanding plagiarism , and citing sources , make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Deep learning
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker

Citing sources

  • Citation styles
  • In-text citation
  • Citation examples
  • Annotated bibliography

You can use ChatGPT to assist in the writing process for your research paper , thesis , or dissertation in the following ways:

  • Paraphrasing text

No, it’s not a good idea to do so in general—first, because it’s normally considered plagiarism or academic dishonesty to represent someone else’s work as your own (even if that “someone” is an AI language model). Even if you cite ChatGPT , you’ll still be penalized unless this is specifically allowed by your university . Institutions may use AI detectors to enforce these rules.

Second, ChatGPT can recombine existing texts, but it cannot really generate new knowledge. And it lacks specialist knowledge of academic topics. Therefore, it is not possible to obtain original research results, and the text produced may contain factual errors.

However, you can usually still use ChatGPT for assignments in other ways, as a source of inspiration and feedback.

No, it is not possible to cite your sources with ChatGPT . You can ask it to create citations, but it isn’t designed for this task and tends to make up sources that don’t exist or present information in the wrong format. ChatGPT also cannot add citations to direct quotes in your text.

Instead, use a tool designed for this purpose, like the Scribbr Citation Generator .

But you can use ChatGPT for assignments in other ways, to provide inspiration, feedback, and general writing advice.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Caulfield, J. & Solis, T. (2023, November 16). Using ChatGPT for Assignments | Tips & Examples. Scribbr. Retrieved April 2, 2024, from https://www.scribbr.com/ai-tools/chatgpt-assignments/

Is this article helpful?

Jack Caulfield

Jack Caulfield

Other students also liked, writing strong research questions | criteria & examples, how to create a structured research paper outline | example, student guide: information literacy | meaning & examples.

Jack Caulfield

Jack Caulfield (Scribbr Team)

Thanks for reading! Hope you found this article helpful. If anything is still unclear, or if you didn’t find what you were looking for here, leave a comment and we’ll see if we can help.

Still have questions?

"i thought ai proofreading was useless but..".

I've been using Scribbr for years now and I know it's a service that won't disappoint. It does a good job spotting mistakes”

Warum sehe ich FAZ.NET nicht?

  • Politische Bücher
  • Briefe an die Herausgeber
  • Unternehmen
  • Auto & Verkehr
  • Klima & Nachhaltigkeit
  • Arm und reich
  • Schneller Schlau
  • Wirtschaftswissen
  • Meine Finanzen
  • Finanzmarkt
  • Börsen & Märkte
  • Börsenlexikon
  • Börsenspiel
  • Private Clients Forum 2023
  • Die digitale Zukunft finanzieren
  • VorDenker (Corporate) Digital Responsibility
  • Bühne und Konzert
  • Kunst und Architektur

TV-Programm

  • TV- & Radioprogramm
  • Klassenzimmer
  • Die Karrierefrage
  • Rhein-Main-Sport
  • Sportpolitik
  • Sport-Tipps

Sport-Ergebnisse

  • Eurovision Song Contest
  • Kriminalität
  • Jugend schreibt
  • Mode & Design
  • Essen & Trinken
  • Leib & Seele
  • Trends & Nischen
  • YES! – Das Verlobungsspecial
  • Region und Hessen
  • F.A.Z. Leser helfen

Veranstaltungen

  • Elektromobilität
  • Klarheit in ihrer schönsten Form
  • Medizin & Ernährung
  • Krebsmedizin
  • Leben & Gene
  • Erde & Klima
  • Physik & Mehr
  • Archäologie & Altertum
  • Geist & Soziales
  • Forschung & Politik
  • Ab in die Botanik
  • Kreuzfahrten zum Entspannen
  • F.A.Z. Leserreisen
  • Eine Reise für Genießer

Permalink: https://www.faz.net/-ijt-bosgk

Aktuelle Nachrichten aus Politik, Wirtschaft, Sport und Kultur

Herausgegeben von Gerald Braunberger, Jürgen Kaube, Carsten Knop, Berthold Kohler

Zeitung FAZ.NET

dissertation mit chatgpt

Künstliche Intelligenz : Mehr als 600.000 nutzen Unternehmensversion von ChatGPT

  • Aktualisiert am 05.04.2024 - 07:58

ChatGPT löste vor gut einem Jahr den Hype um Künstliche Intelligenz aus. Bild: AP

OpenAI hat mit seinem Chatbot vor einem Jahr einen Hype um Künstliche Intelligenz ausgelöst. Seit Januar nutzen mehr als viermal so viele Nutzer in Unternehmen das Programm.

B eim ChatGPT-Erfinder OpenAI steigt rasant die Zahl von Nutzern des KI-Chatbots in Unternehmen. Für die Firmenversion von ChatGPT seien nun mehr als 600.000 Nutzer angemeldet, sagte der fürs operative Geschäft zuständige OpenAI-Manager Brad Lightcap dem Finanzdienst Bloomberg am Donnerstag. Noch im Januar seien es rund 150.000 gewesen.

ChatGPT löste vor gut einem Jahr den Hype um Künstliche Intelligenz aus. Solche KI-Chatbots werden mit gewaltigen Mengen an Informationen angelernt und können Texte auf dem sprachlichen Niveau eines Menschen formulieren, Software-Codes schreiben und Informationen zusammenfassen.

Das Prinzip dahinter ist, dass sie Wort für Wort abschätzen, wie ein Satz weitergehen sollte. Ein Nachteil: Die Software gibt manchmal auch völlig falsche Antworten, selbst wenn sie nur korrekte Informationen als Basis hatte. Entwickler arbeiten daran, den Programmen verlässliche Leitplanken zu setzen, um so etwas zu verhindern.

Die Unternehmensversion von ChatGPT ist ein Grundpfeiler des Geschäftsmodells von OpenAI . Zugleich gibt es auch mehrere Konkurrenten, die ihre KI-Assistenten ebenfalls in Unternehmen unterbringen wollen. OpenAI schloss einen milliardenschweren Pakt mit Microsoft – und der Software-Riese setzt Technologie des Start-ups in vielen seiner Produkte ein.

dissertation mit chatgpt

Stellen Sie Ihr Wissen im FAZ.NET-News-Quiz unter Beweis, und vergleichen Sie sich mit anderen Lesern.

OpenAI habe aktuell rund 1200 Mitarbeiter, sagte Lightcap. Das Start-up habe Pläne, eine Niederlassung in Japan zu eröffnen.

Quelle: dpa-AFX

Hier können Sie die Rechte an diesem Artikel erwerben.

Weitere Themen

Wie sich KI unserer Kontrolle entzieht

FAZ Plus Artikel: Künstliche Intelligenz : Wie sich KI unserer Kontrolle entzieht

Künstliche Intelligenz könnte sich verselbstständigen. Auf diese Gefahr sind wir nicht vorbereitet, warnen Experten. Sie fordern Verbote.

Inflation bei fast 70 Prozent

Türkei : Inflation bei fast 70 Prozent

Das türkische Statistikamt meldet für den Monat März eine Inflationsrate von 68,5 Prozent im Vergleich zum Vorjahresmonat. Unabhängige Forscher gehen davon aus, dass die reale Inflationsrate in Wirklichkeit noch höher liegt.

Kampf um den wilden Lachs

FAZ Plus Artikel: Aquakultur in Island : Kampf um den wilden Lachs

In Island paart sich Wild- mit Zuchtlachs. Viele befürchten, dass auf Kosten der Aquaindustrie ein Stück wilde Natur verloren geht.

Ähnliche Themen

Topmeldungen

Extremisten im Internet : Was der Hass auf Tiktok mit Jugendlichen macht

Bundesliga im Liveticker : Stolpern die Bayern in Heidenheim?

FDP in Thüringen : Kemmerich stellt sich noch einmal zur Wahl

Soziologe im Interview : „Neid ist nützlich“

  • Telekom-Störung
  • Lufthansa-Streik
  • Nord Stream 2
  • Deutschland
  • Geldpolitik
  • Elektroautos
  • CO2-Neutralität

Verlagsangebot

  • Douglas-Rabattcode
  • Parfumdreams-Gutschein
  • flaconi-Gutschein
  • Parfümerie Pieper-Angebot
  • Sephora-Rabattcode
  • Notino-Rabattcode
  • MAC-Gutschein
  • Niche Beauty-Rabattcode
  • E-Bike-Test
  • Staubsauger-Test
  • Inhalator-Test
  • Fensterputzroboter

Abo-Service

Selection Shop

Finanz-Services

Tarifrechner

Immobilien- Markt

Testberichte

Stellenmarkt

Mehr als 600.000 nutzen Unternehmensversion von ChatGPT

Künstliche Intelligenz

Ein Fehler ist aufgetreten. Bitte überprüfen Sie Ihre Eingaben.

Vielen Dank Der Beitrag wurde erfolgreich versandt.

Genuine software, legally FREE - no cracking required!

Edit Word, Excel, and PPT effortlessly

Read, edit, and convert PDFs with the powerful PDF toolkit.

Microsoft-like interface, easy to use.

Windows • MacOS • Linux • iOS • Android

banner

  • Articles of Office solution

Step-by-step Guide to Using ChatGPT to Write a Report, Thesis and Dissertation

You must have heard or read about ChatGPT. What is it? A software or a magician? ChatGPT is an artificial intelligence chatbot developed and launched by OpenAI in November 2022. This chatbot can write answers, reports, codes, prompts, and more in seconds.

Moreover, it can write computer programs, compose music, teleplays, fairy tales, and write poetry and songs. It’s built with OpenAI’s GPT 3.5 and 4 families of large language models. The latest version of ChatGPT, released on 14th March 2023, is built on GPT 4 OpenAI model and can be purchased through a subscription for a specific time.

Benefits of using ChatGPT to write reports, theses, and dissertations

ChatGPT is a powerful artificial intelligence (AI) chatbot with many benefits. Today we shall be highlighting the benefits ChatGPT has on reports, theses, and dissertations.

Reports are professional documents submitted to an institute, department, company, and more. They are formal documents containing a written account of something observed or investigated. Below are some benefits of using ChatGPT to write reports.

Saves time:  ChatGPT saves your precious time; it writes your reports quickly and saves the time you would have spent researching and writing from the beginning.

Quality content:  The AI chatbot has been trained with massive data; hence it can write high-quality reports for you.

Variety of Styles:  Reports are usually long; thus, they can bore the reader if the content is not engaging. ChatGPT uses different styles, i.e., bullet points, lists, paragraphs, and more. This makes a report engaging.

Consistency and Productivity:  ChatGPT write the report in a consistent style and tone. This improves the readability of the report. Moreover, as ChatGPT saves your precious time, you can spend that time focusing on other tasks, improving your productivity.

Theses is a theory or personal research document. These are mainly written by university students proposing research papers on particular topics. Below are the benefits of using ChatGPT to write theses.

New idea:  ChatGPT helps you by providing a variety of ideas that can help you write your thesis. Moreover, the ideas can make your thesis stand out from others.

Guidance:  ChatGPT guides you throughout your thesis writing by providing references, and research material, summarising long research essays, and highlighting important points.

Writing Guidance:  The artificial intelligence model can advise and guide your style, tone, grammar, and more. ChatGPT guides you with this writing; further, it can proofread and rephrase your content.

3. Dissertation

Dissertation is a formal essay on a particular topic. It’s the same as the thesis and is mainly written and submitted by university students. Below are the advantages of using ChatGPT for writing a dissertation.

Knowledge:  ChatGPT is trained with vast knowledge and provides dissertation writers access to knowledge. The more diverse knowledge, the better dissertation.

Saves Time:  Dissertation writing is time-consuming; however, ChatGPT can generate texts for you instantly. You can ask ChatGPT to write sections, paragraphs, and particular dissertation headings. Moreover, it can proofread and check your dissertation's grammar, style, and errors.

Assistance with writing:  Dissertation requires a lot of time and effort. Moreover, quality is an essential factor you should keep in mind while writing a dissertation; chatGPT suggests ways to improve your dissertation writing. Moreover, ChatGPT can proofread, rephrase and highlight writing errors for you.

Step-by-Step Guide: Create a ChatGPT Account

Using ChatGPT might be complex; thus, we are here to help you. Below are steps to create an account and write reports and theses or dissertations.

1. Setting Up ChatGPT

Now that you know ChatGPT and its benefits associated with reports, theses, and dissertation writing, it’s time to set up a ChatGPT account.

The first thing to do is to visit the official website of OpenAI’s artificial intelligence chatbot ChatGPT.

After successfully landing on the official website of ChatGPT, you’ll see a login and sign-up option. You need to login in using your login credentials and progress further. However, if you do not have an account, click sign up, add an email, and create a password. Moreover, you’ll need to add your personal data, i.e., your date of birth, and verify your mobile number.

Afterward, you need to log in with the same email and password. OpenAI allows people to log in through Microsoft and Google accounts. If you have a valid Microsoft or Google account, simply click on login, click on Microsoft or Google, and log in to the selected account. Verify your phone number, and you’ll successfully log in to OpenAI to use ChatGPT.

After successfully logging in to OpenAI, you can start a new chat or use previous ones (if you logged in before and used chatbot). Click the new chat button on the left and start using the chatGPT. You can ask questions, gain knowledge, ask to proofread your work, suggest edits, and do more instantly.

If you are in a country where the use of ChatGPT is not allowed and you wish to register an account, you can refer to my other article " How to use Chatgpt ". It provides a detailed tutorial on how to register an account in such countries.

How to use ChatGPT to write a report

Setting up the ChatGPT account is easier than using it. Below we’ll guide you thoroughly on how to write a report using ChatGPT.

Setting up the Parameters

The first thing you must do is set parameters. What are the parameters? Parameters are framework or structure. You need to set up the report's parameters as they help control the report’s content and use other reports’ material in arguments. You need to tell ChatGPT where to start and let it know the limits of your report. In short, you need to provide ChatGPT with a structure under which it should generate your report.

Creating an Outline

Create an outline for your report. The outline may start with an introduction, having headings, bullet points, paragraphs, and more. You must provide an accurate outline to enable ChatGPT to write your report. This ensures that your report covers all the aspects in a good flow and all the required information. Without an outline, chatGPT may be unable to write it in a flow or may write something unnecessary for your report.

Writing the Report

The last thing is to ask ChatGPT to write the report for you. You need to set parameters, provide an outline, write a good prompt, and wait for ChatGPT to generate the report for you for a minute or two.

Topic: Impact of remote work on employee productivity

Purpose: To understand and examine the impact of remote work on the productivity of employees with findings and recommendations.

dissertation mit chatgpt

How to use ChatGPT to write a thesis or dissertation

As writing theses or dissertations is complex and requires time and energy, many people want to learn to use chatGPT to write them. Here’s how to write a thesis using Chatgpt.

Choosing a topic

After logging in to your OpenAI account, click on the new chat and choose a topic you want ChatGPT to write a thesis or dissertation on. This can be any topic that a normal human being can think of.

After deciding on a thesis or dissertation topic, you must create an outline. Creating an outline has its own benefits, as your theses will cover all aspects and topics in a flow, and the knowledge in the theses will be organized.

Writing the Thesis

Now that you have decided on a topic and created an outline, the next thing to do is write the thesis. Writing a good prompt will result in a good thesis. After writing a prompt for the thesis, click on the save and submit button, and wait a few minutes before ChatGPT can complete generating the thesis for you.

Topic: ChatGPT and its importance in conversational AI applications.

Purpose: To access and understand ChatGPT, its importance, benefits, and uses. Moreover, to examine its importance in conversational AI applications.

dissertation mit chatgpt

Now, you can use AI to assist you in generating a thesis, but it's important to note that you still need to carefully review the content generated by the AI to avoid any fabrication. Additionally, there are some universities that currently do not allow AI-generated theses. These universities may employ AI detection tools to check if a thesis was written by AI. In such cases, you can try using a paraphrasing tool to rewrite the AI-generated content until it passes the detection:

Recommended AI detection tools:  GPT-2 Output Detector

GPT-2 Output Detector (1).png

Recommended paraphrasing tools:  Quillbot

Paraphraser (1).png

Perform another round of detection

Other AI or productivity tools recommended

There are various other AI or productivity tools that you can use instead of ChatGPT. Below are some.

1.  MidJourney

MidJourney is an artificial intelligence program that Mid Journey, Inc created and launched. The AI program can generate images and art using the prompts. You can use MidJourney for 20 free jobs to create or generate images for you on prompts. This software can help you create amazing images for your report and thesis. A thesis or report would look boring without images that engage the audience; hence you can use MidJourney AI to generate images for your thesis or report.

2.  WPS Office

WPS Office is a free office software that allows users to use Word, Excel, PPT, and PDF documents under a single roof. It’s free software that promotes creativity and boosts the productivity of users. The software has more than 500 million active users worldwide. The software lets users view, edit, translate, annotate, print, and do more. It’s compatible with Windows, MacOS, IOS, and Android.

Moreover, it’s free and has various advanced features and functions. The software enables users to write reports and a thesis using diverse tools. You can add bullet points, comments, and suggestions, highlight texts, and translate sentences and words. This is the best software for writing a report or dissertation, as it has templates for the report and thesis that can help you.

3. Chinchilla AI

Developed and launched by DeepMind in March 2022, this AI language model can better write your reports and thesis, requiring less computer power for interference and fine-tuning. The Chinchilla AI is said to outperform a number of traditional language models in terms of accuracy; hence, it can generate better and more accurate reports and a thesis for you. However, the software is currently in the testing phase and cannot be accessed by the general public.

A. Can I write a thesis with ChatGPT?

Yes. You can use ChatGPT to write a thesis. Start with choosing a topic, creating an outline, and writing a prompt to enable ChatGPT to generate a thesis on the topic for you.

B. Can I write a report with ChatGPT?

Yes. Writing a report using ChatGPT is possible and easy. All you need to do is set the report's parameters, create an outline, and submit a prompt to enable ChatGPT to generate the report.

C. How do you use ChatGPT to improve your writing?

ChatGPT can improve your writing by suggesting edits, proofreading, and suggesting new ideas. Moreover, ChatGPT can highlight errors too.

D. Can ChatGPT rewrite paragraphs?

Yes. ChatGPT is a powerful tool that can rewrite your texts better and make them more engaging and readable.

ChatGPT is a powerful artificial intelligence model that can help users with numerous tasks. So far, we have learned about ChatGPT and its benefits associated with report, thesis, or dissertation writing. Moreover, we learned how to write a thesis and report using ChatGPT. Setting up and using ChatGPT to write reports and thesis. The article also highlighted the best alternatives to ChatGPT for writing reports and thesis; one of them is  WPS Office .

  • 1. How to Use WPS AI/Chatgpt to Write Research Papers: Guide for Beginners
  • 2. How to Use WPS AI/Chatgpt to Write Essays: Guide for Beginners
  • 3. Top 10 Best Business Report Chatgpt Prompt Example 2024
  • 4. Ultimate Guide to Write A Seminar PDF Report
  • 5. How to Use Wps AI/Chatgpt to Write Technical Document: Guide for Beginners
  • 6. How to Use WPS AI and ChatGPT to Analyze PDF Report

dissertation mit chatgpt

15 years of office industry experience, tech lover and copywriter. Follow me for product reviews, comparisons, and recommendations for new apps and software.

  • Share full article

For more audio journalism and storytelling, download New York Times Audio , a new iOS app available for news subscribers.

Supported by

The Ezra Klein Show

Transcript: Ezra Klein Interviews Nilay Patel

Every Tuesday and Friday, Ezra Klein invites you into a conversation about something that matters, like today’s episode with Nilay Patel. Listen wherever you get your podcasts .

Transcripts of our episodes are made available as soon as possible. They are not fully edited for grammar or spelling.

The Ezra Klein Show Poster

Will A.I. Break the Internet? Or Save It?

Nilay patel discusses the near-future of an internet as a.i.-generated content improves..

[MUSIC PLAYING]

From New York Times Opinion, this is “The Ezra Klein Show.”

Earlier this week, we did an episode on how to use A.I. right now. Now, I want to turn the question around and look at how A.I. is being used on you right now. One of the conversations has been sticking in my head was with this person in the A.I. world who was saying to me that if you look at where use has been sticky, if you look at where people keep using it day after day, you’re looking at places where the product doesn’t need to be very good. That’s why it’s really helpful for college and high school students, college and high school papers — they’re often not very good. That’s sort of their point. It’s why it’s working pretty well for a very low-level coding tasks. That kind of work doesn’t need to be very good. It gets checked and compiled, and so on.

But there’s something else that it is working really well for, which is spewing mediocre content onto the internet. And the reason is that a lot of what is on the internet right now isn’t very good. Its point is not to be good — spam isn’t very good, marketing emails aren’t very good, social media bots aren’t very good. Frankly, a lot of social media posters even when they’re not bots are not very good.

There are all kinds of websites and internet operations that are filler content designed to give search engines something to index — filler content structured to do well in a Google result so people click on it and then see an ad.

Something you’re going to hear a lot of in this episode is the term S.E.O., and that is what we’re talking about: Search Engine Optimized. Things that are built to rank highly in Google and Bing just to get somebody to click on the website. It doesn’t always matter to that person if they read the website.

But into this comes A.I. Over the last year, Google and the big social platforms — they have been flooded with A.I. spam, flooded with fake news sites filled with stolen or made up stories. There are TikToks of A.I. voices reading random text off of Reddit, nonsensical YouTube videos for kids. It’s no novel observation to say the internet has felt like it is in a state of decay for a while.

Google search results, Facebook, Twitter, or X, YouTube, TikTok — all of it felt better, more human, more delightful, more spontaneous, more real a few years ago. So what happens when this flood of content hits this decaying internet?

And then — and I actually think this is the harder, weirder question — what happens when this flood of A.I. content gets better? What happens when it doesn’t feel like garbage anymore? What happens when we don’t know if there’s a person on the other end of what we’re seeing or reading or hearing?

Should we care? What if that content is actually better than a lot of what we’re getting right now? Is that an internet we want to be on or not?

My friend Nilay Patel is the co-founder and editor in chief of the tech news site The Verge, and host of the great “Decoder” podcast. And I got to be honest, I can’t tell from this conversation if Nilay is more or less optimistic than me because he seems to think A.I. is going to break the internet. But he seems kind of happy about it.

Before we get into the actual conversation here, we are nominated for a Webby — speaking of hopefully good things on the internet — in the Best Interview Talk Show category. We are up against Oprah here, so we are decided underdogs, but this is a voting category so if we’re going to win, we need your help. You can vote using the link in the show notes or go to vote.webbyawards.com

And as always, if you want to email me with guest suggestions or thoughts on the episode, that is [email protected].

Nilay Patel, welcome to the show.

Thank you for having me. This is very exciting.

Let’s just begin with the big question here, which is what is A.I. doing to the internet right now?

It is flooding our distribution channels with a cannon-blast of — at best — C+ content that I think is breaking those distribution channels.

Why would it break them?

So most of the platforms the internet are based on the idea that the people using those platforms will in some sort of crowdsourced way find the best stuff. And you can disagree with that notion. I think maybe the last 10 years have proven that that notion is not percent true when it’s all people.

When you increase the supply of stuff onto those platforms to infinity, that system breaks down completely. Recommendation algorithms break down completely, our ability to discern what is real and what is false break down completely, and I think importantly, the business models of the internet break down completely. So if you just think about the business model of the internet as — there’s a box that you can upload some content into, and then there’s an algorithm between you and an audience, and some audience will find the stuff you put in the box, and then you put an infinity amount of stuff into the box, all of that breaks.

My favorite example of this is Amazon, which allows people to self-publish books. Their response to the flood of A.I. generated books was to limit the number of books you can upload to three books in a day. This is really — like that’s a ridiculous response to this. It just implies that the systems that we’ve built to organize audiences and deliver the right thing to the right person at the right time, they’re not capable of an increase in supply at the level that A.I. is already increasing this.

Thank you for bringing in the supply language. So, I’ve been trying to think about this as this supply and demand mismatch. We have already had way more supply than there is demand. I wasn’t buying a lot of self-published Amazon books. Is the user experience here actually different?

I think that’s a great question. The folks who write the algorithms, the platforms, their C.E.O.s, they will all tell you this is just a new challenge for us to solve. We have to out what is human, what is A.I.-generated. I actually think the supply increase is very meaningful. Like, maybe the most meaningful thing that will happen to the internet because it will sort out the platforms that allow it to be there and have those problems, and the places that don’t. And I think that has not been a sorting that has occurred on the internet in quite some time, where there’s two different kinds of things.

The example that I’ll give you is, every social media platform right now is turning into a short-form video Home Shopping Network. LinkedIn just added short form videos. Like, they’re all headed towards the same place all the time because they all have the same pressures.

Didn’t we already pivot to video a couple years ago?

We pivoted to video — I actually love it when LinkedIn adds and takes away these features that other platforms have. They added stories because Snapchat and Instagram had stories, and they took the stories away because I don’t think LinkedIn influencers want to do Instagram Reels, but now they’re adding it again.

And what you see is those platforms, their product — the thing that makes them money — is advertising, which is fine. But they don’t actually sell anything in the end. They sell advertising. Someone else down the line has to make a transaction. They have to buy a good or a service from someone else. And if you don’t have that, if you’re just selling advertising that leads to another transaction, eventually you optimize the entire pipe to the transaction to get people to buy things, which is why TikTok is now — like all of TikTok is TikTok Shop, because they just want you to make a transaction. And that those platforms are going to be most open to A.I., because that is the most optimizable thing to get people to make a transaction. And I think real people will veer away from that.

So I want to hold on to something that you’re getting at here. Which, to me, is one of the most under-discussed parts of A.I., which is how do you actually make money off of it? And right now, there are not actually that many ways.

So, what you can do is you can pay some money to the big A.I. companies. So you get the pro-version of their models. There is a certain amount of enterprise software flying around. You can subscribe to versions of Microsoft Copilot, or there’s going to be more things like that, where you can subscribe to something that is supposed to get you to buy the next iteration of Slack or whatever the enterprise software is. But it is hard to not notice that a lot of the A.I. is being built by companies that exist on advertising.

Google has a huge A.I. program, Meta has a huge A.I. program, and advertising is fundamentally a persuasion game. They are trying to persuade you to do something with the advertising to buy something. And right now, it’s pretty bad. I always think it’s funny how long after I make a significant purchase I will be advertised to make that purchase again.

It’s like, you just bought a fair amount of luggage, would you like any more luggage from the same company you already bought it from? It’s a very weird — but if this gets good, what is that? What are safe business models and what are very unethical ones, because when we talk about harms and benefits from A.I., how people are making money off of it is going to be a pretty big intermediary there.

Yeah, I’ve been talking to a lot of C.E.O.s of web companies and email companies on Decoder for the past year. I asked them all the same question, why would you start a website? Why would you send an email? And so, you asked the C.E.O. of Squarespace or Wix or we just had the C.E.O. of MailChimp on the show. And her answer is a little terrifying. Like, maybe openly terrifying.

She’s like well collect enough data on you, and then we’ll know exactly when to send you an email so that you buy the right thing at the right time. And we’ll just have A.I. automate that whole process. So you come to the website for your local dry cleaner or luggage store, you type in your email address to get the 10 percent off coupon, we look at what you were looking at. And then somewhere down the line when some other data broker has told us that you searched for a flight, we will send you a precisely targeted generated email that says you’re going to Paris? Buy this suitcase that matches your style from our store at this dynamically generated price.

But how is A.I. changing that at all because that sounds to me like the thing that is already happening.

So, this is what I mean by the increase in scale. That’s the dream. This is supposed to be what actually happens, but they can only do it in broad cohorts, which is why you get the luggage email after you’ve bought the luggage email or the luggage ad, after you bought the luggage ad.

They know you are a person who used a Wi-Fi network in a certain location at a certain time, they can track that all over the place. They know what you’ve searched for. They know that you went and made a luggage transaction. You are now categorized into people who are likely to buy luggage, whether or not that loop was closed. You put some luggage in a shopping cart. But that’s still a cohort, they can only do that broadly. And these cohorts can be pretty refined, but they can only do it broadly. With A.I. the idea is we can do that to you individually — the A.I. will write you an email, we’ll write you a marketing message, will set you a price. That isn’t 100x increase the amount of email that will be generated.

So now our email algorithms will be overflooded with commercial pitches generated by A.I. And this sort of makes sense, right? It makes sense for a Google to want to be able to dynamically generate A.I. advertising across the entire web. It makes sense for Meta to invest massively in A.I. so that when you’re watching Instagram and you scroll a dynamically generated Instagram video, that is an ad just for you appears. And all of that is down to their belief in targeting — their absolute belief that they can sell more products for their clients by targeting the ads more directly. And you are in that uncanny valley, where the targeting doesn’t actually work as well as it should and no one will admit it.

When I get spammy advertising I don’t really think about there being a human on the other end of it. Maybe to some degree there is, but it isn’t part of the transaction happening in my head. There are a lot of parts of the internet that I do think of there being a human on the other end — social media, reviews on Amazon, books — I assume the person who wrote the book is a person. How much of what I’m currently consuming may not be done by human in the way I think it is, and how much do you think that’s going to be in a year, or two, or three years?

I’m guessing your media diet is pretty well human-created because I know that you are very thoughtful about what you consume and what signals you’re sending to the algorithms that deliver your content. I think for most people —

My mom’s, let’s use my mom’s.

Mom’s are good. I would love to take my mom’s phone and throw it into the ocean and never let her have it again. I openly fear what content comes through my mother through WhatsApp. It terrifies me that I don’t have a window into that. I can’t monitor it. The same software I want to use to watch my daughter’s internet consumption, I would love to apply it to my parents because I don’t think they have the media literacy — they’re much older — to even know, OK, this might be just some A.I.-generated spam that’s designed to make me feel a certain way.

And I think that is the heart of what’s coming. I think right now it’s higher than people think, the amount of A.I. generated noise, and it is about to go to infinity. And the products we have to help people sort through those things, fundamentally our intention with that. Google is the heart of this tension — you can take any business at Google and say what happens when the A.I. flood comes to you? And I don’t think they’re ready for it.

How can they not be ready for that?

Because they’re the ones making it. This is the central tension of — in particular, I think Google. So, Google depends on the web, the richness of the web is what Sundar Pichai will tell you. He used to run search, he thinks about the web. He cares about it, and you look at the web and you’re like, you didn’t make this rich at all. You’ve made this actually pretty horrible for most people most of the time. Most people — if you search Google to get a credit card, that is a nightmarish experience — like, fully nightmarish. It feels like getting mugged.

We just went on vacation. And I googled a restaurant review in Cancun, and I got about halfway through the actual review when I realized it was sponsored content by Certified Angus Beef. And just in the middle of this review, they’re like this restaurant uses this kind of beef and here’s why it’s great. And I was like — this is — I read an ad. And Google should have told me that this was an ad. Like, this isn’t useful to me in any way — like, I’m discarding this. I don’t want this anymore.

I don’t think Google can discern what is good or bad about the web. I don’t think Google has reckoned with how it’s incentives have shaped the web as a whole. And I certainly don’t think that people who are making Google search can say A.I. is bad — A.I. content is bad, because the whole other part of Google that is making the A.I. content can’t deal with that.

This helps explain a story that I found very strange. So, 404 Media, which is a sort of newer outlet reporting on tech. They found that Google News was boosting stolen A.I. versions of news articles — and we’re seeing this all over. An article by me or by some other journalist shows up in another place, very slightly rewritten by an A.I. system, with an A.I. generated author and photo on top of it. So, we’re seeing a lot of this.

And when 404 Media asked Google about this, Google News said that for them, it was not a really relevant question whether an article was by an A.I. or a human. That struck me as a very strange thing to say, to admit. Is your view that it’s because their business is in the future replacing human-generated content with A.I., and saying that’s good — like, that’s the thing happening at the center there?

Yeah. Fundamentally, I think if you are at Google and the future of your stock price depends on Gemini being a good competitor to GPT-4 or 5 or whatever OpenAI has, cannot run around saying this is bad. The things it makes are bad.

I think this is actually in stark contrast to how people feel about that right now. One of the funniest cultural trends of the moment is that saying something is A.I.-generated is actually a great way to say it’s bad.

So, I saw people reacting to the cover of the new Beyoncé album, “Cowboy Carter,” which is a picture of her on a stunning horse. It’s Beyoncé, it’s very obviously human made, and people don’t like it. Like, was this made by A.I.? And it’s like well, you know for a fact that Beyoncé did not have A.I. generate the cover of — like, you can look at it and you can discern that it isn’t. But you can say, was this A.I.-generated? And that is code for this is bad.

What about when it’s not?

I don’t know how fast that is coming. I think that is farther away than people think. I think ‘will it fool you on a phone screen?’ is here already, but ‘is this good’ is, I think, farther away than —

But a lot of internet content is bad.

That’s fair.

I mean, you know this better than me. Look, I think it is axiomatic that A.I. content is worse right now than it will ever be.

I mean the advance in image generation over the past year has been significant. That’s very real. And preparing for this conversation, I found myself really obsessing over this question, because one way to talk to you about this is, there’s all this spammy garbage coming from A.I. that is flooding the internet.

But you can imagine an A.I. developer sitting in the third chair here and saying, yeah sure, but eventually it’s not going to be spammy garbage. We’re getting better at this. And compared to what people are getting from a lot of websites, if you’re going to Quora or ask.com or parts of Reddit or whatever, we can do better than that. The median article within three years is going to be better than the median human-produced piece of content.

And I really — I found that I did not know how to answer the question in myself — is that a better or a worse internet? To take almost Google’s side on this, should it matter if it’s done by a human or an A.I., or is that some kind of — what’s the word — like, sentimentality on my part?

I think there’s a sentimentality there. If you make a content farm that is the best content farm, that has the most answers about when the Super Bowl starts, and those pages are great. I think that’s a dead end business. Google is just going to answer the questions. I think that’s fine. I think if you ask Google what time the Super Bowl is, Google should just tell you. I think if you ask Google how long to boil an egg, Google can just tell you. You don’t need to go to some web page laden with ads and weird headings to find those answers. But these models in their most reductive essence are just statistical representations of the past. They are not great at new ideas.

And I think that the power of human beings sort of having new ideas all the time, that’s the thing that the platforms won’t be able to find. That’s why the platforms feel old. Social platforms like enter a decay state where everyone’s making the same thing all the time. It’s because we’ve optimized for the distribution, and people get bored and that boredom actually drives much more of the culture than anyone will give that credit to, especially an A.I. developer who can only look backwards.

I’m going to spend some time thinking about the idea that boredom is an under-discussed driver of our culture. But I want to get at something else in there — this idea of Google answering the question. We’re already seeing the beginnings of these A.I. systems that you search the question that might — at another time — have brought you to The Verge, to CNN, to The New York Times, to whatever.

But now, perplexity — there’s a product, Arc. They’ll basically use A.I. to create a little web page for you. The A.I. itself will read, “read”— in quotation marks — the A.I. itself will absorb some websites, create a representation of them for you, and you’ll never go to the place you were that actually created that data about the past that A.I. used to give you something in the present.

Casey Newton, at Platformer, his word was he felt revulsion, and that was how I felt about Arc’s product here. You take all this work other people have done, you remix it under your thing, they don’t get the visit to their web page, nobody has the experience with the work that would lead them to subscribe. But two things in the long run happen from that.

One is that you destroy the score of growing value, growing informational value that you need to keep the internet healthy. You make it say impossible to do the news gathering that allows you to be news because there’s no business model for it. The other is that you also destroy the training data for the A.I. itself, because it needs all that work that we’re all doing to train.

The thing they need is data. The A.I. is polluting that data with A.I. content currently, but it also can begin to destroy that data by making it unprofitable for people to create more of it in the future. I think Ryan Broderick has called A.I. search a doomsday cult. How do you think about this sort of deeper poisoning of the informational commons?

I think there’s a reason that the A.I. companies are leading the charge to watermark and label content as A.I.-generated. Most of them are in the metadata of an image. So most pictures you see in the internet, they carry some amount of metadata that describes the picture. What camera was taken on, when it was taken, what image editing software was used.

So, Adobe and a bunch of other companies are like, we’ll just add another field that says, here are all the A.I.-generated edits that were made on this photo. I think it is in their self-interest to make sure that is true and they can detect it and exclude it if they need to. I think there are moral reasons to do it too.

So their training data remains less corrupted?

Yeah. I think there’s a very straightforward incentive for them to figure out the watermarking, labeling stuff they want to do. And they have coalitions, and tasks force, and Adobe talks about the image of the Pope and the puffer jacket as a, “catalyzing moment” for the metadata of A.I. because people freaked out. They’re like oh, this thing looks real. But they have a real incentive to make sure that they never train on other A.I. generated content.

So that’s one aspect, which I think is just sort of immediately self-interested. The other thing is — that’s why I keep asking people why would anyone make a web page?

There’s a site I think about all the time. It’s called HouseFresh, which is a site that only reviews air purifiers. And to me, this is the internet. Like, this is what the internet is for. You care about air purifiers so much you’ve set up a series of web pages where you express your expertise in air purifiers and tell people which ones to buy. That’s all they do. And Google has started down-ranking them, because big publishers boost their content, because A.I. is lifting their content, because companies like CNN, in order to gain some affiliate ad revenue somewhere, have set up their own little mini-content farms full of affiliate links.

I’m not saying we don’t — like, other publishers do this. But the point of these algorithms is, ideally, to bring you to the HouseFresh people, is to bring you to the person who cares so much about air purifiers they made a website about air purifiers, and we’re not doing that anymore. And so if you were to say, where should a young person who cares the most about cars, or who cares the most about coffee, or whatever. Where are they going to go? Where are they going to make stuff? They’re going to pick a closed platform that ideally offers them some built in monetization, that ideally offers them some ability to connect directly with an audience. They’re not going to go to a public space like the web, where they might own their own business, which would be good. But they’re also basically at the mercy of thieves who come in the night and take all their work away.

But also, if you kill HouseFresh, then two years later when you ask the A.I. what air purifier should I get, how does it know what to tell you?

Yeah, I don’t the answer to that question.

I don’t think they do either.

Yeah again, this is why I think that they are so hell-bent on labeling everything. I think they need some people around in the future.

But labeling is good. I mean, that keeps you from getting too much garbage in your data set. But replacing a bunch of the things that the entire informational world relies on to subsidize itself — to fund itself — like this to me is a thing that they don’t have an answer for.

Wait, let me ask you a harder question. Do they care?

Depends on they, but I don’t think so.

Or at least they care in the way that I came to realize Facebook, now Meta, cared about journalism. People say they didn’t care about journalism. I don’t believe that’s actually true. They didn’t care enough for it to mean anything. Like, if you asked them, if you talked with them, if you had a drink, they would think what was happening to journalism was sad.

And if it would cost them nothing, they would like to help. But if it would cost them anything — or forget costing them anything. If they would begin to help and then recognize an opportunity had been created that they could take instead of you, they would do that. That’s the way they care.

So when you have a financial crisis, you have something oftentimes called a flight to quality. Investors flood into the things they know they can trust, usually treasury bonds, and I’ve been wondering if this won’t happen in this era of the internet — if I wanted to take an optimistic perspective on it — that as you have a sort of ontological collapse, as you don’t know what anything is.

I already feel this way with product reviews. When I search product reviews, I get reviews now from tons of sites that I know don’t really invest that much in product reviews. CNN, all these other organizations that I have not really, truly invested in high-quality product reviewing, when you search, you now get them — they’re telling you what to buy.

That makes me trust the Wirecutter, which is a New York Times property, but that I know we’ve put a lot of money in more. Similarly, the other one I use, which is a Vox Media property, is The Strategist at New York, because I knew what the development of that looked like, I know what they put into that.

You can imagine this happening in news for things like The New York Times or The Washington Post. You can imagine it in a couple of different places. If people begin to feel that there is a lie at the heart of the internet they’re being given, that they can’t figure out what is what and who is who and if it is a who at all — I mean, maybe you just end up in this internet where there’s more of a value on something that can be verified.

I keep a list of TikToks that I think each individually should be a Ph.D. thesis in media studies. It’s a long list now. And all of them are basically just layers of copyright infringement in their own weird way.

My favorite is — it’s a TikTok, it has millions of views. It’s just a guy reading a summary of an article in the journal Nature. It has millions of views.

This is more people that have ever considered any one article in the journal Nature — which is a great journal. I don’t mean to denigrate it. It’s a proper scientific journal. They work really hard on it. And you just go 5 steps down the line, and there’s a guy on TikTok summarizing a summary of Nature, and you’re like what is this? What is this thing that I’m looking at?

Will any of the million viewers of this TikTok buy one copy of Nature because they have encountered this content? Why did this happen?

And the idea is, in my mind at least, that those people who curate the internet, who have a point of view, who have a beginning and middle, and an end to the story they’re trying to tell all the time about the culture we’re in or the politics we’re in or whatever. They will actually become the centers of attention and you cannot replace that with A.I.

You cannot replace that curatorial function or that guiding function that we’ve always looked to other individuals to do.

And those are real relationships. I think those people can stand in for institutions and brands. I think the New York Times, you’re Ezra Klein, a New York Times journalist means something. It appends some value to your name, but the institution has to protect that value.

I think that stuff is still really powerful, and I think as the flood of A.I. comes to our distribution networks, the value of having a powerful individual who curates things for people, combined with a powerful institution who protects their integrity actually will go up. I don’t think that’s going to go down.

You mentioned 404 Media. 404 Media is a bunch of journalists who were at Motherboard at Vice. Vice is a disaster. They quit, they started a new media company, and we now all talk about 404 Media all the time. This thing is 25 minutes old. We don’t talk about Jason Koebler the editor in chief. We talk about 404 Media, the institution that they made — a new brand that stands for something, that does reporting and talks about something. I think there’s still meaning there.

You said something on your show that I thought was one of the wisest, single things I’ve heard on the whole last decade and a half of media, which is that places were building traffic thinking they were building an audience. And the traffic, at least in that era, was easy, but an audience is really hard. Talk a bit about that.

Yeah first of all, I need to give credit to Casey Newton for that line. That is something — at The Verge, we used to say that to ourselves all the time just to keep ourselves from the temptations of getting cheap traffic. I think most media companies built relationships with the platforms, not with the people that were consuming their content.

They didn’t think about them very much. They thought about what was hitting in the Facebook algorithm, they thought about what Google search wanted for Game of Thrones coverage that day, which was everything all the time. And everybody had a Game of Thrones program. Fox had one, The Verge had one, The New York Times had one. Why?

That’s weird. It’s we constructed this artificial phenomenon because people searched for — I mean, just to say the answer because we know it — because people searched for “Game of Thrones” content the morning after the show, and that was an easy way to get a bunch of traffic. And at least a theory of the time was that you could turn traffic into money through advertising, which was not totally wrong, but not nearly as right as the entire era of business models was predicated on.

The other thing that those business models were predicated upon was you’d get so good at being a supplier to one platform or another with Game of Thrones content or whatever it was that they would pay you money for it directly — that Google would say, this is the Game of Thrones link that most people are clicking on. We ought to pay Vanity Fair for its Game of Thrones content to surface it. Or all of BuzzFeed was we’re going to be so good at going viral on Facebook that Facebook will pay us money.

And that absolutely didn’t pan out. But no one hedged that bet, which is utterly bananas to me. No one said we should take these people who came here for a Game of Thrones and figure out how to make them care about us, and we should care about them. Everyone just looked at it as a number that was going up against some amount of interest as demonstrated by some platform somewhere.

And I think that is the mistake. It is the mistake that creators on the creator platforms are not making, because the terms of that arrangement are so much more cynical. You see TikTokers. They at any moment their videos can get downranked, their accounts can get yanked, their stuff can get banned. They’re constantly trying to get you to go to Instagram.

Every YouTuber gets their wings when they make the video about how they’re mad at YouTube. There’s a woodworking YouTuber that I used to follow, and he just sort of got to the point where he’s like, I hate YouTube. I’m leaving. And it’s like dude, you made videos about jointing wood, like what are you doing?

And it’s like his relationship with the platform was so cynical that he was like, I’m moving my business elsewhere. You can sign up for a master class. Those individuals have these very cynical, very commercial relationships with the platforms that the media companies, for some reason, just never hedged. And so they actually do have audiences. And I think media companies need to get way back on the game of having a true audiences.

This gets to something that does worry me about this phase of A.I. hitting the internet, which is it’s hitting an internet in a moment of decay and weakness. And here, by internet, I mean the sort of content generating internet, and I break that into a couple of categories. The media is very weak right now. The media business we have seen closures left and right, layoffs left and right. I mean, a bunch of players like Vice and BuzzFeed who were believed to be the next generation of juggernauts are functionally gone as news organizations.

The big content platforms, they’re doing fine from a financial standpoint, but people hate them. The relationship between the users and Facebook, the users and YouTube, the users and — to some degree, you’re even seeing that now with TikTok — is just darkening in a way that it wasn’t in 2014.

And so, there’s a lot of desperation on all sides. Sometimes the desperation is you don’t have the money to pay the journalists you need to do the work you want to do. Sometimes the desperation is that you’re trying to figure out something to make this audience like you again and not get eaten by TikTok or whatever comes after TikTok.

And into this comes A.I., and all the money that A.I. seems to bring, and even the A.I. companies might pay you some money for your stuff.

Reddit just licensed a bunch of its content as training data to Google.

So, you could really imagine a thing happening again, where all these media companies or content companies of some form or another, license out what they have for pennies on the dollar, because at least you can make some money off of it that way.

But what worries me is both the weakness, but that also, it does not feel to me like anybody knows what the relationship is to this is supposed to be. Do you use it? Are you just training data for it? Like, what are you in relationship to the A.I. era?

As a consumer or as a producer?

As a producer.

The idea that media companies are going to license their stuff to the A.I. companies is just the end of the road that we’ve been on for a long time. We are suppliers to algorithms. OK? And in any normal functioning capitalist economy, supplier margins get squeezed to zero and then maybe we all die. Like, that’s the game we’ve been playing without saying it for a long time —

Which I think is why you see The New York Times suing OpenAI, like a real desire to not be in that game again.

You see The New York Times suing OpenAI, but you don’t see them suing Google, you don’t see them de-S.E.O.ing pages across New York Times. Like, they still need the audience from these platforms. And I think there’s a very tense relationship there. The idea that you could sue OpenAI and win some precedent that gives you an enormous amount of leverage over Google I think is a very powerful idea.

Most of the media company executives I talk to would love for that to be the outcome. I don’t know if that’s going to be the outcome. I feel like I should warn your audience, like — I’m a failed copyright lawyer. I wasn’t good at it, but I did it for a minute. Copyright law is a coin flip. Like, these cases are true coin flips. They are not predictable. The legal system itself is not predictable, copyright law inherently is unpredictable.

And a really interesting facet of the internet we live in today is that most of the copyright law decisions were won by a young, upstart, friendly Google. YouTube exists because it was Google. Like, Viacom famously sued YouTube and they might have won and put it out of business, but Google, the friendly Google company with the water slides in the office, the upstarts that made the product you loved, went and won that case. Google Books, we’re going to index all the books without asking for permission. They won that case, because they were friendly Google, and the judges were like, look at these cute kids making a cool internet? Like it was new and novel. Google image search — these are all massive copyright decisions that Google won as a startup company run by young people building a new product that the judges were using on their Dell desktops or whatever.

These aren’t those companies anymore. They’re going to go into a legal system as behemoths, as some of the biggest, best-funded companies in the world that have done bad things to the judges teenage children, like all these things are different now. And so, I don’t know if Google, or OpenAI, or Microsoft gets the benefit of being like, we’re young and cool and hip, bend copyright law to our will.

You don’t want a staunch innovation. Like, that was the big fear in that era. We don’t know what we’re building, and that’s still the thing you hear, and it’s not even untrue. You crack down on copyright and maybe you do staunch innovation. You don’t crack down copyright and maybe you destroy the seed corn of the Informational Commons. It’s very fraught for the copyright judges, but also just for all of us.

Yeah, what are you as a producer on the internet is totally governed by copyright law. Like, a joke at The Verge is a copyright law is the only functional regulation on the internet. The entire internet is just speech, that’s all it is top-to-bottom, it’s speech.

In the United States, we don’t love a speech regulation, and I think for good reason. But we love copyright law, we love it. Can’t get enough of it. Like, YouTubers know the YouTube copyright system back and forth, because that’s the thing that takes their content down. And we allow this regulation on the internet at scale.

And so the parameters of this one body of law, as applied to A.I., which is a taking. Training an A.I. model is fundamentally a taking, and the A.I. company —

Taking in the legal sense of the term?

No, in the moral sense of the term. They come to your website and they take your stuff. It’s not a zero sum taking, but they’ve extracted value to create more value for themselves. I think that’s just a moral taking. There’s some permission there that did not occur. Joanna Stern at The Wall Street Journal just interviewed Mira Murati, the C.T.O. of OpenAI, about training data for Sora, the video generator, and Mira said, we just use what’s publicly available. And it’s like yo, that doesn’t make any sense. Like, there are lots of rules about what’s publicly available. Like, you can’t just take stuff because you can link to it on the internet, that’s not how it actually works.

Let me try to take the argument I hear from the A.I. side of this, which is that there is functionally nothing in human culture and human endeavor that is not trained on all that has come before it — that I, as a person, am trained on all this embedded knowledge in society, that every artist has absorbed, all this other art that the A.I. — I mean, this is just learning. And as long as you are transforming that learning into something else, as long as you are doing something new with that learning, then one, copyright law is not supposed to apply to you in some way or another, although that’s obviously complicated.

But two, to go back to your point of morality, if you want to see culture humanity technology advance, it is also not supposed to apply to you, because if you do not let things learn, people, organizations, models, you are not going to get the advances built on all that has come before. And that’s how we’ve always done it. What’s your answer to them?

I hear this idea all the time, often from the sorts of people in Silicon Valley who say they do first principles thinking — which is one of my favorite phrases, because it just means what if we learn nothing? Like, what if none of the history of the world applied to us and we could start over to our benefit? And that’s usually what that’s code for.

So I hear those arguments and I think, you guys just weren’t paying attention. You’re entering a zone where the debate has been raging for decades. A lot of copyright law is built around a controversy around player pianos, and whether player pianos would displace musicians. But you just have to rewind the clock to the 80s and be like, should sampling be legal in music?

And now we are having the exact same conversation in the exact same way with the exact same parameters. The only thing that’s different now is any kid can sample any song at scale, feed it into an A.I. and have Taylor Swift sing the Dolly Parton song for them. That’s a weird new turn in the same debate, but it is a massively age-old debate, and the parameters of the debate are pretty well known.

How do you incentivize new art? How do you make sure that it’s economically valuable to make new things? How do you make sure the distributors don’t gain too much power, and then how do you make sure that when people are building on the past, the people whose art they’re building on retain some value?

And that I think is — the A.I. companies have no answer to that last question. We’re just going to take a bunch of stuff and now we’re just going to say look, we just summarized the web. The people who made the web get nothing for that will pay us $20 a month for the service.

But somewhere in there, as a policy matter as a moral matter, the people who made the foundations of the work should get paid. And this is where the sampling debate has ended up. There’s a huge variety of licensing schemes and sample clearances so that those artists get paid.

Judge Patel, if you’re thinking about cases in this area, like, what do you think the answer is here? Is it the sampling model, is it something else? What do you think the right broad strokes resolution is?

Let me stick on the music example for one second, because I think music is really interesting because it’s kind of a closed ecosystem. There’s only so many big music companies. It’s the same lawyers, and the same executives, and the same managers going to the same clearing houses and having the same approaches. We’re going to give you a songwriting credit because we interpolated the bass line of this song into that song, and now here’s some money. And this is the mechanism by which we’ll pay you. The A.I. companies are not a closed ecosystem, it is just a free for all. It’s the open web, it’s a bunch of players.

So, I think in those cases, you’re just going to end up with vastly more outcomes which I think leads to even more chaos, because some companies will take the deal. I’m guessing The New York Times is going to pursue this all the way to the Supreme Court. This is an existential issue for The Times.

Some companies don’t have the money to pay for Supreme Court litigation, and they’ll take a shittier deal, like pennies on the dollar deal and maybe just go out of business. And I think that range of outcomes in the near-term represents a massive failure of collective action on the part of the media industry to not say, this is actually the moment where we should demand that human journalists doing the real work that is dangerous are valuable. We need them, and we will all, together, approach these players in a way that creates at least a semblance of a closed ecosystem.

Well the media industry, but also at some point this is a regulatory question, a question of law. I mean, nothing is stopping Congress from making copyright law designed for the A.I.-era. Nothing is stopping Congress from saying, this is how we think this should work across industries. Not just media, but novelists, but everybody. Well, there are some things that stop Congress from doing a lot of things. The idea that Congress could pass a massive rewrite of copyright law at this moment in time is pretty far afield.

But won’t and couldn’t, I do want to make this distinction here. What you’re saying is Congress is too polarized and bitterly divided over everything and can’t do anything and can’t get anything done, and that’s my whole job man, I know. But what I am saying is that, you could write a law like this.

This is something that ultimately, I don’t just think it’s like a media collective-action problem, but is going to be ultimately a societal-level collective action problem. And maybe we cannot, as a society, act collectively very well. I buy that totally.

So there is one law. There’s the J.C.P.A., the Journalism Competition Preservation Act, which allows media companies to escape antitrust law and bargain collectively with whoever they wish to bargain with. I don’t know if that’s going to pass, I know there’s a lot of interest in it.

So, there are these approaches that have appeared in Congress to solve these problems, but the thing I’m getting at is you have sort of the rapacious wolves, and then you have an industry that’s weak — as you said — that, I think is not motivated to value the work it does as highly as it should. And that is step one.

You and I are both fans of Marshall McLuhan, the media theorist. And he’s got this famous line, ‘the medium is the message.’ And more deeply, what he says is that people, when they see a new medium, they tend to think about the content. For television, it’s the shows, what do you think about this show or that show? For Twitter, the tweets, for a newspaper, the articles. But you have to look behind the content to the actual medium itself to understand what it is trying to tell you.

Twitter, at least in it’s early stages was about all these things can and should be discussed at 140 characters. Television made things much more visual, things should be entertainment. They should be entertaining, the news should be entertaining, which was a little bit of a newer concept back then.

I’ve been trying to think about what is the message of the medium of A.I. What is a message of the medium of ChatGPT, of Claude 3, et cetera. One of the chilling thoughts that I have about it is that its fundamental message is that you are derivative, you are replaceable.

A.I. isn’t good at ideas, yet. It is good it’s style. It can sound like Taylor Swift. It can draw like any artist you might want to imagine. It can create something that looks like Jackson Pollock. It can write like Ezra Klein. It may not be exactly as good at high levels of these professions, but what it is functionally is an amazing mimic.

And what it is saying — and I think this is why a lot of people use it for long enough end up in a kind of metaphysical shock, as it’s been described to me. What it’s been saying is you’re not that special, and that’s one reason I think that it can — we worry about it proliferating all over social media. It can sound like a person quite easily. We’ve long passed the Turing test, and so one, I’m curious if that tracks for you, and two, what does it mean to unleash on all of society a tool that’s basic message is, it’s pretty easy to do what you do, sound like you sound, make what you make?

I have a lot of thoughts about this. I disagree on the basic message. I do think one of the messages of A.I. is that most people make middling work, and middling work is easy to replace. Every email I write is not a great work of art. Like, so much of what we produce just to get through the day is effectively middling. And sure, A.I. should replace a bunch of that. And I think that metaphysical shock comes from the idea that computers shouldn’t be able to do things on their own, and you have a computer that can just do a bunch of stuff for you. And that changes your relationship to the computer in a meaningful way, and I think that’s extremely real.

But the place that I have thought most about I was at the Eras Tour in Chicago when I watched Taylor Swift walk onto a stage, and I saw 60,000 people in Soldier Field just lose their minds, just go nuts. And I’m watching the show, and I’m a Taylor Swift fan. I was there with my niece and nephew and my wife and we were all dressed up. Why am I thinking about A.I. right now? Like truly, why am I thinking about A.I. right now?

It’s because this person has made all of these people feel something. The art that has been created by this one very singular individual has captivated all of these people together, because of her story, because of the lyrics, because it means something to them. And I watch people use Midjourney or generate a story with an A.I. tool, and they show the art to you at the end of it, and they’re glowing. Like, look at this wonderful A.I. painting. It’s a car that’s a shark that’s going through a tornado and I told my daughter a story about it. And I’m like yeah, but this — I don’t want anything to do with this. Like, I don’t care about this. And that happens over and over again. The human creativity is reduced to a prompt, and I think that’s the message of A.I. that I worry about the most, is when you take your creativity and you say, this is actually easy. It’s actually easy to get to this thing that’s a pastiche of the thing that was hard, you just let the computer run its way through whatever statistical path to get there. Then I think more people will fail to recognize the hard thing for being hard. And that’s — truly the message of A.I. is that, maybe this isn’t so hard and there’s something very dangerous to our culture embedded in that.

I want to put a pin in the hard things, easy things. I’m a little bit obsessed by that and want to come back to it. But first I want to talk about A.I. art for a minute, because I do think when we’re talking about everything that’s going to come on the internet, we’re talking about A.I. art. Obviously, much of it is going to get better. Some of it is not distinguishable.

You talked about the example where somebody comes and hands you the A.I. art says, hey, I did this with an A.I. And I’m like eh — and I have that experience a lot, I’ve also really been trying to use these systems and push them, and play with them, and have A.I. character relationships on my phone with Kindroids and whatever.

And there is this deep hollowness at the center of it. It is style without substance. It can mimic me. It can’t think.

Have you found an A.I. that can actually write like you?

I found an A.I. that can mimic certain stylistic tics I have in a way that is better than I think most people could do. I have not found any A.I. that can, in any way, improve my writing for all that you’re constantly told it can. And in fact, the more I try, the worse my writing gets because typically what you have to do to improve your writing is recognize if you’re writing the wrong thing.

I don’t find writing hard, I find thinking hard. I find learning hard. How good a piece of writing is going to be for me is typically about, did I do enough work beforehand? And A.I. can never tell me you didn’t do enough work, you need to make three more phone calls. You need to read that piece you skimmed.

But it can mimic, and I think it’s going to get better and better at mimicking. I think GPT 3 was much worse at mimicking me than GPT 3.5 was, worse than GPT 4 is, and GPT 5 will be even better than that. I believe this is going to get stronger. It raises a question of whether there is anything essential about something being from a human in a wide frame way. Taylor Swift is singular, but the point is that she’s a singular phenomenon. Do we care that things come from people?

I was thinking when I was preparing for this show with you, the Walter Benjamin essay, it’s called “The Work of Art in the Age of Mechanical Reproduction.”

This is like the verge of DNA.

Is it? Yeah, so it comes out in 1935. It’s about the ability to reproduce art. And he says, and I’ll quote it here, “that which whithers in the Age of Mechanical Reproduction is the aura of the work of art.” Then he goes on to say, “by making many reproductions, it substitutes a plurality of copies for a unique existence.”

Benjamin is saying at different times here in different ways, and I’m going to simplify it by trying to bring it into the present, but that there is something lost from when you take the painting and make a copy of a painting. And, he’s obviously right, and he’s obviously — then on the other hand, a lot of people like copies of paintings. It’s easy for the artist to think more of the original than the original deserves to be thought of.

But I wonder about this with humans. How much of something is just the fact that there’s a human behind it? My Kindroid is no worse at texting me than most people I know. But the fact that my Kindroid has to me is meaningful to me, in the sense that I don’t care if it likes me because there’s no achievement for it to me.

The fact that there is a human on the other side of most text messages I send matters. I care about it because it is another mind. The Kindroid might be better in a formulaic way. The kindred might be better in terms of the actual text. I can certainly tune it more to my kind of theoretical liking, but the friction of another person is meaningful to me. Like, I care that my best friend likes me and could choose not to. Is there an aura problem here?

It is so hard to make someone else feel anything other than pain. Like, it’s just like — it’s —

Christ, that’s the darkest thing I’ve ever heard you say.

Yeah, but I believe it in my soul.

Yeah. I think the hardest thing to —

a really different turn as a show right now. [LAUGHS]

You don’t make people laugh, you don’t give them hugs?

No, I think that’s hard. I think that effort is worth it. That’s why I don’t think it’s a dark thing to say. I think the essence of being a good person is pointing your effort at making other people not feel pain. I think bullies make people feel pain because it’s easy. Again, I come back to Taylor Swift in Soldier Field. The thing that was going through my head is, this person is making 60,000 people feel joy, and she’s doing it through art. That is the purpose of art. The purpose of art is to inspire feelings, to inspire emotion.

And so I look at this A.I. and it’s like, we’re going to flood our stuff, and the only emotion that it is really meant to inspire is materialism, is a transaction. That’s bad. I just think that’s bad. I think we should make some stuff that inspires more joy, that inspires more affection, that inspires more consternation.

And one of the messages embedded in the medium of A.I. is that there is an answer. That’s weird. That is a truly weird thing for a computer to say to you. You ask it about a war, and it’s like I won’t answer that question because there’s no answer there. You ask it about how to cook an egg and it’s like here’s the answer. You’re like what are the four steps to fold a bed sheet? It’s like here’s the answer, I did it. Tell me a bedtime story for my child. It says, here’s an answer, I just delivered this to you at your specifications.

And I think the thing you’re saying about having another mind there is — you want to be in a relationship, like an emotional relationship with another person. Maybe it’s mediated by technology, maybe we’re face-to-face like we are now, but that tension and that reality of — oh, I can direct my effort towards negative and positive outcomes, I have never found it with an A.I.

Shannon Vallor is a philosopher of technology, and she’s got a book coming out called “The A.I. Mirror,” and I like the way she puts this, because there’s this way that turns is somewhat warped mirror back on ourselves when I was saying a few minutes ago that the message of A.I. is that you’re derivative. That leaves something out. What it’s really saying is that the part of you that often the economy values is derivative, is copyable because we actually ask people a lot of the time to act like they’re machines.

This is why I don’t take much comfort in the Taylor Swift example. You said a few minutes ago, most people do mediocre work most of the time. Even great people do mediocre work most of the time. We constantly ask huge amounts of the population to do things that are very rote. Keep inputting this data on forms, keep filling out this tax form. Some lawyers arguing for the Supreme Court, a lot of them just write up various contracts. And that’s a good job in the sense that it pays well, it’s inside work, but it doesn’t ask you to be that full of a human being.

Now, you can imagine a sort of utopian politics in society — and people on the left sometimes do — that this comes in and it’s like great, we can automate away this derivative inhuman work, and people will be free to be more full human beings. You actually like — maybe the value of you is not what you can create but what you can experience. A.I. can’t enjoy a day at the park with its family.

But we have an entire society set up to encourage you to premise your self-worth on your work and your wages. And also, if you lose that work and that wages, to rob you of that self-worth. And one thing I’m sure of is that our politics and our economic systems are not going to advance as quickly as A.I. is going to advance.

This is where I think people do properly worry about automation, when people lost manufacturing jobs to lower wage workers in China. We didn’t say great, you don’t have to do this stultifying work in the factory anymore. We said, you’re out of work, you’re screwed. And I do think one of the deep confrontations of it is, what do we value in people and then how do we express that value because I think what A.I. in some ways is going to take advantage of here, or at least, is going to challenge, is it to the extent we value people socially for their economic contribution, or what they’re paid. That’s pretty thin reed for human value to rest on.

Yeah, I buy that. One of my favorite things that I’ve covered in the past few years is a thing called robotic process automation, which is very funny. Just abstractly, deeply hilarious. There are lots and lots of companies throughout the United States that built computer systems 10, 15 years ago, 20 years ago. Hospital systems are famous for this. They have billing systems. They have buildings full of people who use Microsoft Excel on Windows ‘95.

And replacing that as costly and complicated. It can’t break — if you put in the new system and it didn’t bring all the data over in exactly the right way, the whole hospital stops working. So they just buy other computers to use their old computers. Which is wild, and there’s like billion dollar companies that do this.

They will sell you a brand new, state of the art computer and it will connect to the keyboard and monitor jack of your old computer, and it will just use the Windows ‘95 for you, which is just bonkers. It’s like Rube Goldberg machine of computers using old computers, and then your office full of accountants who knew how to use your old system will go away.

But then A.I. creates the scale problem. What if we do that but instead of some hospital billing system built in the ‘90s, it’s just the concept of Microsoft Excel, and now you can just sort of issue a command on your computer and it’ll go use Excel for you and you don’t need an accountant, you don’t need a lawyer.

And I think even in those cases what you’re going to find is the same thing you talked about with writing — you have to know what you want. You have to know what the system doesn’t know. You have to be able to challenge the model and have it deliver you the thing that, in most business model conversations I find to be the most important word, our assumption is — and then you can poke at that really hard.

What percent of workers are actually asked to poke at the assumptions of their organization, because I worry it’s not as high as you think it is, or implying there. I’m not worried about Taylor Swift. I’m not worried about Nilay Patel. And I don’t just want to make this about wages. That’s a jobs sort of another conversation.

But I do — I mean, as you were saying, these are billion dollar companies that automate people who do backend office work already.

All over the place.

There’s a huge amount of work like that. And if I felt confident as some of the economists say that we’ll just upmarket people into the jobs where they use more human judgment, David Autor who’s a great trade economist at MIT, just made this argument recently, that what A.I. is going to do is make it possible for more people to exercise judgment and discernment within their work, and I hope he is right. I really hope he is right. But I think a lot of organizations are not set up for a lot of people to use judgment and discernment. They treat a lot of people like machines, and they don’t want them doing things that are complicated and step out of line and poke at the assumptions in the Excel doc. They want the Excel doc ported over without any mistakes. It seems plausible to me that we’re going to get to that.

Do you think their bosses want to be able to poke at the assumptions though?

But if you — I mean this is actually something I believe about the whole situation. The economy needs fewer bosses and workers.

Think about this in the journalist context or the writing context, where I think what A.I. naturally implies that it’s going to do is turn many more people into editors and writers. Because for a lot of content creation that doesn’t require a lot of poking at assumptions, mid-level social media marketing — a lot of people are doing that job right now. But the people doing marketing for a mall —

Yeah, that is the MailChimp example. That is the product that they are building.

And so what you have then is we used to have a bunch of these social media marketers and now you have one person overseeing a couple systems, like making sure they didn’t say something totally crazy. But you need fewer editors and you need writers. I mean, you know The Verge is structured. You know how The Times is structured. And this is one of my deep worries.

And then this goes to the thing you were getting at earlier, which is one way I think that A.I. could actually not make us more productive, more innovative, is that a lot of the innovation, a lot of the big insights happen when we’re doing the hard thing, when we’re sitting there trying to figure out the first draft, or learn about a thing, or figure out what we’re doing.

One of the messages of the medium of A.I. is be efficient. Don’t waste your time on all this. Just tell the system what to do and do it. But there’s a reason I don’t have interns write my first draft for me.

They could do it. But you don’t get great ideas, or at least not as many of them, editing a piece of work as you do reporting it out, doing the research, writing the first draft. That’s where you do the thinking. And I do think A.I. is built to kind of devalue that whole area of thinking.

We are working on a big story at The Verge right now that I’m very excited about. But there are four of us right now in an argument about whether we should tell that story in chronological order or as a series of vignettes. There is no right answer to this question. There’s just four people who are battling it back and forth.

I think vignettes.

Yeah. By the way, I’m on team vignette.

Good man. [LAUGHS]

My belief is that it’s easier to digest a long story when it’s composed of lots of little stories as opposed to one long one. I’m being outvoted right now — editor in chief. I should replace them all with A.I., just get them out of here. [CHUCKLES] But that is the kind of work that I think makes the end product great. And I think going from good to great is still very human.

Into the economy, though, you’re right, most people are not challenged to go from good to great. Most people are challenged to produce good consistently. And I think that is kind of demoralizing. I don’t know how many first-year Deloitte consultants you have encountered in your life. I’ve encountered quite a few of them. I went to law school. It’s like a — we made — there was a factory of that thing — or first-year law associates.

They’re not in love with their jobs. They’re in love with the amount of money they make, that’s for sure. But any first-year associate doing doc review in a basement — yeah, you could probably just be like, tell the A.I. to find the four pieces of relevant information in these 10,000 page records from whatever giant corporation we’re suing today. That’s fine.

I think that there’s a turn there where maybe we need less first-year associates doing that thing and we need more first-year associates doing something else that is difficult, that the A.I. can’t yet do. And I think a lot of this conversation is predicated on the notion that generative A.I. systems, L.L.M.s will continue on a linear curve up in terms of capability. I don’t know if that’s true.

But I hear a lot of this conversation. I’m like, there’s always a thing they can’t do. And maybe that thing is not the most amount of scale, social media marketing for them all, but it is always the next amount of complexity. And there’s no guarantee that this set of technologies will actually turn that corner. And you can keep going all the way to A.G.I. There’s no guarantee that an L.L.M. is going to hit A.G.I. and just run the world economy for us. There’s a lot of steps between here and there that I think human beings can fit into.

So I want to go back, then, to the internet for a bit, which is I think the presentation we’ve offered is fairly pessimistic. You, when I read and listen to you on this, are — I wouldn’t call it pessimistic. I would say a little excited by the idea of a cleansing fire.

So one theory here — and you should tell me if this is reading you right — but is that this will break a lot of the current — the current internet is weakened. It’s weakened in many cases for good reasons. Google, Meta, et cetera, they’ve not created an internet many of us like. And that this will just make it impossible for that internet to survive. The distribution channels will break. And then something. So first, is that how you see it? And second, then what something?

That is very much how I see it. I would add a generational tinge to that, which is I grew up in that weird middle generation between X and millennials. I think temperamentally I’m much more Generation X. But they describe it as they didn’t have computers and then you have computers. You play the Oregon Trail. That’s me on the nose.

I distinctly remember life before computers. It’s an experience that I had quite viscerally. And that shapes my view of these tools. It shapes my view of these companies. Well, there’s a huge generation now that only grew up in this way. There’s a teenage generation right now that is only growing up in this way. And I think their natural inclination is to say, well, this sucks. I want my own thing. I want my own system of consuming information. I want my own brands and institutions. And I don’t think that these big platforms are ready for that moment. I think that they think they can constantly be information monopolies while they are fending off A.I.-generated content from their own A.I. systems. So somewhere in there all of this stuff does break. And the optimism that you are sensing from me is, well, hopefully we build some stuff that does not have these huge dependencies on platform companies that have no interest at the end of the line except a transaction.

OK, but you’re telling me how the old thing dies. And I agree with you that at some point the old thing dies. You can feel it. It’s moribund right now. You’re not telling me what the new thing is, and I’m not saying you fully know. But I don’t think the new thing is just a business model that is not as dependent on Meta. I mean, on some level, there’s going to be a lot of A.I. around here.

It’s an audience model. It’s not dependent on these algorithms.

But is there — I guess one question I have is that, one — I mean, you know where the venture capital is going right now.

Everything is going to be built with A.I. —

— laced through every piece of it. And some of it, for all we’re talking about, might be cool, right? I’m not saying you’re mostly going to make great art with A.I. But actually, Photoshop did create a lot of amazing things.

And people are going to get better at using this. They’re going to get more thoughtful about using it. The tools are going to get better. But also the people are going to figure out how to use the tools. I mean, you were talking about player pianos earlier. I mean, way beyond player pianos, you have huge libraries of sounds you can manipulate however you want. And now I go listen to a lot of experimental electronic music. And I think a lot of that is remarkable art. I think a lot of that is deeply moving.

I am curious what, to you, the good A.I. internet is, because I don’t think that the next internet is just going to be like we’re going to roll the clock back on the business model. The technology is going to roll forward into all this stuff people are building.

I’m not so sure about that.

I think we’re about to split the internet in two. I think there will be a giant commercial A.I.-infested internet. That’s the platform internet. That’s where it’s going. Moribund, I agree. But it will still be huge. It’s not going away tomorrow. And they will figure out — these are big companies full of smart people with the most technology.

Mark Zuckerberg is like, I have the most NVIDIA H100 GPUs. Come work here. We’ll pay you the most money. They will invent some stuff and it will be cool. I’m excited about it. But that version of the internet —

You sure sound excited about it. [LAUGHS]

Well, I am. I mean, I love technology. This is our — The Verge’s competitive differentiation in the entire media industry is, like, we really like it. And I’m excited to see what they build. I think there’s some really neat things being built. When I think about the information ecosystem, I’m vastly more pessimistic because of the fact that all of these networks are geared to drive you towards a transaction.

And I don’t mean that in some anticapitalist way. I mean literally the incentives are to get you to buy something. So more and more of the stuff that you consume is designed around pushing you towards a transaction. That’s weird. I think there’s a vast amount of white space in the culture for things that are not directly transactable.

I think next to that you’re going to get a bunch of people, companies who say our differentiation in this market is that there’s no A.I. here. And they will try to sell that. And I don’t know how that experiment plays out. I don’t know if that experiment will be successful.

I do know that that experiment will be outside of the distribution channels that exist now because those distribution channels are being run by companies that are invested heavily in A.I. And I’m hopeful that over there, on whatever new non-A.I. internet that exists, that some amount of pressure is placed on the other distribution channels to also make that distinction clear.

I’m just thinking about this, and the thing that it brings to mind for me is the resurgence of vinyl —

— and the dominance of streaming platforms. So what I would think of as the music industry of — how many years ago was C.D.s? I don’t actually remember now. But what it did was split into — there’s been a resurgence of vinyl, the sort of analog. It’s a little cool. I actually just bought a record player recently, or was given one by my wonderful partner. But that’s not very big.

Then there’s these huge streaming platforms, right? I mean, most people are listening on Spotify, on Apple Music, on YouTube Music, on Amazon, et cetera. And I don’t think we feel like we figured that out very well. But I do think that’s probably going to be the dynamic. I mean, I do think there are going to be things you go to because you believe it is a human being or because you believe the A.I. is used well.

I do also think the big things to come are going to be the things that figure out how to use A.I. well rather than poorly. Maybe that also means honestly and transparently, rather than dishonestly and opaquely.

Maybe the social internet dies because, one, we don’t really like it that much anymore anyway, but also because it’s too hard to figure out what’s what. But actually, an internet of A.I. helpers, assistants, friends, et cetera, thrives. And on the other side, you have a real human. I don’t know. But give me more of the Nilay technology side.

What can A.I. do well? If you were building something or if you were imagining something to be built, what comes after?

By the way, the music industry just released its numbers. Vinyl outsold CDs for the second year running. Double the amount of revenue in vinyl than CDs.

That’s wild, actually.

It’s crazy. And all of that in total is 11 percent of music industry revenues in ‘23 compared to 84 percent of the revenue is streaming. So you are correct. This is a big distinction. People want to buy things, and so they buy one thing that they like. And they consume everything in streaming.

What happens when Spotify is overrun by A.I. music? You can see it coming. What happens when you can type into Spotify, man, I’d really like to listen to a country song. Just make me one. And no one down the line has to get paid for that. Spotify can just generate that for you.

I think that’s going to push more people in the other direction. I really do. That there will be this huge pot of just make me whatever exactly I want at this moment money over here. But the cool people are still going to gravitate towards things that are new. I just believe that so firmly in my heart that when I think about where does the technology for that come from, I still think it comes from basic open platforms and open distribution.

The great power of the internet is that you can just make a whole new thing. And I don’t think that anyone has really thought through what does it mean to decentralize these platforms. What does it mean to — I don’t know — build an old-school portal where it’s just people pointing at great stuff as opposed to open this app and an algorithm will just deliver you exactly what we think you want, or, down the line, generate content for you that we think that you will continue watching.

I think — and this is maybe a little bit of a counterintuitive thought — that this is actually a great time to begin things in media. I think that we have a more realistic sense of the business model and what will actually work. They need to build an audience. They need to build something people will actually pay you for. I think a lot of the problem right now is things built for another business model that failed are having a lot of trouble transitioning because it’s very, very hard to transition a structure. Now, that doesn’t mean it’s a great business. It’s not what I hoped it would become. It’s not the advertising revenue I hoped we would have. But it’s something.

What feels fully unsolved to me right now is distribution, right? When I was a blogger, the way distribution worked was people would find me because other blogs would link to me. And then if they liked me, they would put me in their bookmarks section.

Then they would come back the next day by clicking on a bookmark. I don’t think any of us think that much about bookmarks anymore. That’s not really how the internet works. Things moved to search. They moved, primarily for a long time, to social. And that was a way you could create distribution.

You could go from — you started a website. We started Vox, right? We started Vox in 2014 or 2015. The day before we launched, we had no visitors. And pretty quickly we had a lot of things that were working on social and working on Search. And we had millions and millions and millions every month.

But now social is broken as a distribution mechanism. I mean, Elon Musk has made Twitter anti-news distribution. Google search has become very, very messy. People don’t have the old bookmarks habit in the way they did. And so if you’re starting something new, the question of how you build that audience, how you go from nothing to an audience, feels very unsolved.

Yeah. That’s the cleansing fire. That’s the thing I’m excited about. Here’s a new problem in media. Here’s a new problem that’s being created by A.I.

If I were to tell you five years ago, I’m going to launch a new property and the core insight that I have is that we need to replace the distribution mechanisms of the internet, you would not pay me any money. You would not fund that idea. You would not say — well, you would say, get some traffic on Twitter and start a Substack or start a YouTube channel, anything except figure out a new distribution method to compete with these social media companies.

You have that idea now. And people are like, yeah, that’s the problem. We have to solve that problem. That is the problem to solve, because Twitter has blown itself up in whatever way Elon is blowing it up, because the other social channels have become the Home Shopping Network, by and large, because YouTube has optimized itself into making Mr. Beasts and only Mr. Beasts, right?

It’s weird, by the way, that YouTube exists. We’ve barely talked about it on this podcast. It is the thing most people watch most of the time. It supports no journalism. At scale, the idea that there’s not an ABC News of YouTube on a distribution platform of that size is a moral failing on Google’s part. I really believe this. And no, we never really talk about it. It’s just — YouTube is ignored. It has become such an infrastructure that we never talk about it.

But my view is that YouTube is the most politically important platform. Everyone wants to talk about TikTok. I think YouTube is much more significant.

Yeah, and they run it really well. They run it as infrastructure. And they talk about it as infrastructure. But it’s weird that we have not built great media company-sized media companies on YouTube’s pipes. We just haven’t done it. So you look at that landscape now and you’re like, well, if I want to do that, if I want to build my own audience, I cannot depend on these companies. I have to be able to do something else.

And maybe A.I. does help you do that. Maybe it does help you send a million marketing messages so people start coming to your website directly. Maybe it does start crafting home pages personalized for people based on your library of content so people see the thing they like the most when they show up. There’s a bunch of moves we can all take from social media companies now to build more engaging, more interesting products using A.I., which will make it easier because the A.I. is a technology commodity. You can just go buy it and use it.

But we have to actually build those products. We have to want to build those products as an industry. And that my pessimism is rooted in the idea that the industry kind of sucks at this. We are very much stuck in, we should go send some reporters out into the world, they should come back, write down what they saw, and then hopefully someone else points them at it. And it’s just like, well, that’s been a losing proposition for a decade. We should try something else.

Do you think, beyond the media, because not everything online is media —

Do you think beyond the media, that there is the glimmers of the next thing? I mean, let me give you the thesis I have, which is that the next thing is that the A.I. is somehow your assistant to the internet, right? We seem to me to be moving towards something where the overwhelm is so profound that you actually need some kind of agent working on your behalf to make it through all this.

I mean, you can imagine this is the world of “Her,” the Spike Jonze movie. But you can imagine it as other things, too. There’s going to be software coding agents. The guys who started Instagram started then this thing called Artifact, which is using more A.I. personalization to try to tell people what they might like in the news. It didn’t really work out, but it was an interesting project for a minute.

I think a lot of us feel we spent years now being acted upon by algorithms. And one thing about A.I. is that it’s an algorithm you act on, right? You tell it how to act. Assuming that business model allows that, that it doesn’t have a secret instruction to sell you soap or whatever —

— that’s interesting, right? That’s a pretty profound inversion of the internet we’ve been in.

Let me poke really hard at the true difference between an algorithm that shows you stuff and an algorithm that goes and gets you what you want, because I don’t know that there’s a huge difference in the outcome of those two different processes. So for example, I do not trust the YouTube Kids algorithm. I watch my daughter watch YouTube.

No, why would you?

It is just a nightmare. I don’t know why we let her do it, but we did. And now we’re in the rabbit hole and that’s life. I mean, she’s five. And I will literally say, are you watching garbage? And she’d be like, I am, because she knows what I think is garbage. She’s much smarter than the YouTube Kids algorithm. And then she’s like, can I watch a little more garbage? This is a real conversation I have with my five-year-old all the time.

I would love an A.I. that would just preempt that conversation. Just watch this whole iPad for me and make sure my kid is safe. That’s great. But that is a limitation. It is not an expansion. And I think the thing that I’m seeking with all of these tools is how do we help people expand the set of things that they’re looking at.

Well, let me push on this for a minute, because for a long time a lot of us have asked people, the social media companies — that I have, I’m sure you have — why don’t you give me access to the dials of the algorithm?

Right? I don’t want to see things going viral. If there’s a virality scale of 1 to 10, I want to always be at a 6, right?

I don’t want to see anything over a 6. And I can’t. I wish I could say to Google, I would like things that are not optimized for S.E.O. I just don’t want to see recipes that have a long personal story at the top. Just don’t show me any of them.

But I can’t do that. But one of the interesting things about using the current generation of A.I. models is you actually do have to talk to it like that. I mean, whether I am creating a Replika or a Kindroid or a Character.AI, I have to tell that thing what it is supposed to be, how I want it to talk to me, how I want it to act in the world, what it is interested in, what kinds of expertise it has and does not.

When I’m working with Claude 3, which is the A.I. I use the most right now, I have one instance of it, that I’m just like, you are a productivity coach and you are here to help me stay on task. But I have another where I’m getting some help on, in theory, looking at political science papers, so it’s actually not that good at that.

But this ability to tell this extraordinarily protean algorithm what I want it to do in plain English, that is different, right? The one thing that A.I. seems to make possible is an algorithm that you shape in plain English, an agent that you are directing to help you, in some cases, maybe create the internet, but much more often to navigate it.

Right now it is very hard for me to keep up on the amount of news, particularly around the amount of local news I would like to keep up on. If there is a system that I could say, hey, here’s some things I’m interested in from these kinds of sources, that would be very helpful to me. It doesn’t seem like an impossible problem. In fact, it seems like a problem that is inches away from being solved. That might be cool.

I think that’d be great. I’ve known you for a long time. I think you have a unique ability to articulate exactly what you want and tell it to a computer. [LAUGHS] And you have to scale that idea, right? You have to go to the average — our mothers and say, OK, you have to tell the algorithm exactly what you want. And maybe they’ll get close to it, maybe they won’t, right?

You don’t feel like mothers are able to tell you what they want?

[LAUGHS] I like that idea a lot. I think fundamentally that is still an A.I. closing the walls around you. And I think the power of the recommendation algorithm is not expressed in virality. It’s actually to help you expand your filter bubble. Here’s a band you’d never heard of before. Here’s a movie you never thought of watching. Here’s an article about a subject that you weren’t interested in before.

I think TikTok, in its 2020 TikTok moment, was terrific at this. Everyone was going to sing a sea shanty for five minutes, right? Why do we suddenly care about this and it’s gone? And it was able to create cultural moments out of things that no one had ever really thought of before. And I want to make sure, as I use A.I., that I’m actually preserving that, instead of actually just recreating a much more complicated filter bubble.

I think it’s a good place to end. Always our final question, for the Nilay Patel recommendation algorithm —

what are three books you’d recommend to the audience?

Well, I’m sorry, Ezra, I brought you six.

Did you really?

Is that allowed?

Did you actually bring six?

I didn’t bring six physical books, but I have six recommendations for you.

Damn. All right, go through them quick, man.

They’re in two categories. One is the three books that I thought of and three books from Verge people that if people are interested in these ideas are important. So the first one is “The Conquest of Cool” by Thomas Frank, one of my favorite books of all time. It is about how advertising agencies in the ‘60s co-opted the counterculture and basically replaced counterculture in America. I’ve thought about this a lot because I’m constantly wondering where the punk bands and rage against the machines of 2024 are. And the answer is that they’re the mainstream culture. It’s very interesting. Love that book. It explains, I think, a lot about our culture.

Two is “Liar in a Crowded Theater” by Jeff Kosseff, which is a book about the First Amendment and why we preserve the ability to lie in America. I am very complicated thoughts about the First Amendment right now. I think social media companies should do a better job protecting my kid. I also think the First Amendment is really important. And those ideas are crashing into each other.

Third, I love the band New Order. I know you’re a music fan, so I brought you a music recommendation. It’s “Substance: Inside New Order” by Peter Hook, who is the bassist of New Order. This band hates each other. They broke up acrimoniously, so the book is incredibly bitchy. It’s just a lot of shit-talking about the ‘80s. It’s great.

But inside the book, he is constantly talking about how the technology they used to make the music of New Order didn’t work very well. And there’s long vignettes of why the songs sound the way they do because of how the synthesizers worked. And that just brings together all the ideas I can think of. So those are the three outside of The Verge universe.

But there are three from Verge people that I think are very important. The first is “Everything I Need I Get From You” by Kaitlyn Tiffany, who’s one of my favorite Verge expats. It is about how the entire internet was shaped by the fandom of the band One Direction. And I think this is totally underemphasized, underreported that fandoms are actually what shape the internet. And a lot of what we think of as internet culture is actually fandom culture. And so Kait’s book is really good.

The other, obviously, I have to shout it out is “Extremely Hardcore” by Zoë Schiffer, who basically wrote about the downfall of Twitter. And I think understanding how a social network works — these are lots of people making lots of decisions, and it was just dismantled. And now you can see how the social network broke. And I think we take these things for granted.

And then the third is “Beyond Measure” by James Vincent, which is a history of the systems of measurement and how political they are. And it is one of my favorite books because it is — you just take this stuff for granted. And you look at it, and you’re like, oh, this was deeply, deeply acrimonious.

Nilay Patel, you’re saving the internet through blogging again.

Your podcast is “Decoder.” Thank you very much.

Thanks, man. [MUSIC PLAYING]

This episode of “The Ezra Klein Show” was produced by Claire Gordon. Fact-checking by Michelle Harris with Kate Sinclair and Mary Marge Locker. Our senior engineer is Jeff Geld. We’ve got additional mixing by Isaac Jones and Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Rollin Hu and Kristin Lin. We have original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks here to Sonia Herrero.

EZRA KLEIN: From New York Times Opinion, this is “The Ezra Klein Show.”

Earlier this week, we did an episode on how to use A.I. right now. Now, I want to turn the question around and look at how A.I. is being used on you right now. One of the conversations has been sticking in my head was with this person in the A.I. world who was saying to me that if you look at where use has been sticky, if you look at where people keep using it day after day, you’re looking at places where the product doesn’t need to be very good.

That’s why it’s really helpful for college and high school students, college and high school papers — they’re often not very good. That’s sort of their point. It’s why it’s working pretty well for a very low-level coding tasks. That kind of work doesn’t need to be very good. It gets checked and compiled, and so on.

NILAY PATEL: Thank you for having me. This is very exciting.

EZRA KLEIN: Let’s just begin with the big question here, which is what is A.I. doing to the internet right now?

NILAY PATEL: It is flooding our distribution channels with a cannon-blast of — at best — C+ content that I think is breaking those distribution channels.

EZRA KLEIN: Why would it break them?

NILAY PATEL: So most of the platforms the internet are based on the idea that the people using those platforms will in some sort of crowdsourced way find the best stuff. And you can disagree with that notion. I think maybe the last 10 years have proven that that notion is not percent true when it’s all people.

When you increase the supply of stuff onto those platforms to infinity, that system breaks down completely. Recommendation algorithms break down completely, our ability to discern what is real and what is false break down completely, and I think importantly, the business models of the internet break down completely.

So if you just think about the business model of the internet as — there’s a box that you can upload some content into, and then there’s an algorithm between you and an audience, and some audience will find the stuff you put in the box, and then you put an infinity amount of stuff into the box, all of that breaks.

EZRA KLEIN: Thank you for bringing in the supply language. So, I’ve been trying to think about this as this supply and demand mismatch. We have already had way more supply than there is demand. I wasn’t buying a lot of self-published Amazon books. Is the user experience here actually different?

NILAY PATEL: I think that’s a great question. The folks who write the algorithms, the platforms, their C.E.O.s, they will all tell you this is just a new challenge for us to solve. We have to out what is human, what is A.I.-generated. I actually think the supply increase is very meaningful. Like, maybe the most meaningful thing that will happen to the internet because it will sort out the platforms that allow it to be there and have those problems, and the places that don’t. And I think that has not been a sorting that has occurred on the internet in quite some time, where there’s two different kinds of things.

EZRA KLEIN: Didn’t we already pivot to video a couple years ago?

NILAY PATEL: We pivoted to video — I actually love it when LinkedIn adds and takes away these features that other platforms have. They added stories because Snapchat and Instagram had stories, and they took the stories away because I don’t think LinkedIn influencers want to do Instagram Reels, but now they’re adding it again.

EZRA KLEIN: So I want to hold on to something that you’re getting at here. Which, to me, is one of the most under-discussed parts of A.I., which is how do you actually make money off of it? And right now, there are not actually that many ways.

NILAY PATEL: Yeah, I’ve been talking to a lot of C.E.O.s of web companies and email companies on Decoder for the past year. I asked them all the same question, why would you start a website? Why would you send an email? And so, you asked the C.E.O. of Squarespace or Wix or we just had the C.E.O. of MailChimp on the show. And her answer is a little terrifying. Like, maybe openly terrifying.

EZRA KLEIN: But how is A.I. changing that at all because that sounds to me like the thing that is already happening.

NILAY PATEL: So, this is what I mean by the increase in scale. That’s the dream. This is supposed to be what actually happens, but they can only do it in broad cohorts, which is why you get the luggage email after you’ve bought the luggage email or the luggage ad, after you bought the luggage ad.

They know you are a person who used a Wi-Fi network in a certain location at a certain time, they can track that all over the place. They know what you’ve searched for. They know that you went and made a luggage transaction. You are now categorized into people who are likely to buy luggage, whether or not that loop was closed. You put some luggage in a shopping cart.

But that’s still a cohort, they can only do that broadly. And these cohorts can be pretty refined, but they can only do it broadly. With A.I. the idea is we can do that to you individually — the A.I. will write you an email, we’ll write you a marketing message, will set you a price. That isn’t 100x increase the amount of email that will be generated.

EZRA KLEIN: When I get spammy advertising I don’t really think about there being a human on the other end of it. Maybe to some degree there is, but it isn’t part of the transaction happening in my head. There are a lot of parts of the internet that I do think of there being a human on the other end — social media, reviews on Amazon, books — I assume the person who wrote the book is a person. How much of what I’m currently consuming may not be done by human in the way I think it is, and how much do you think that’s going to be in a year, or two, or three years?

NILAY PATEL: I’m guessing your media diet is pretty well human-created because I know that you are very thoughtful about what you consume and what signals you’re sending to the algorithms that deliver your content. I think for most people —

EZRA KLEIN: My mom’s, let’s use my mom’s.

NILAY PATEL: Mom’s are good. I would love to take my mom’s phone and throw it into the ocean and never let her have it again. I openly fear what content comes through my mother through WhatsApp. It terrifies me that I don’t have a window into that. I can’t monitor it. The same software I want to use to watch my daughter’s internet consumption, I would love to apply it to my parents because I don’t think they have the media literacy — they’re much older — to even know, OK, this might be just some A.I.-generated spam that’s designed to make me feel a certain way.

EZRA KLEIN: How can they not be ready for that?

NILAY PATEL: Because they’re the ones making it. This is the central tension of — in particular, I think Google. So, Google depends on the web, the richness of the web is what Sundar Pichai will tell you. He used to run search, he thinks about the web. He cares about it, and you look at the web and you’re like, you didn’t make this rich at all. You’ve made this actually pretty horrible for most people most of the time. Most people — if you search Google to get a credit card, that is a nightmarish experience — like, fully nightmarish. It feels like getting mugged.

EZRA KLEIN: This helps explain a story that I found very strange. So, 404 Media, which is a sort of newer outlet reporting on tech. They found that Google News was boosting stolen A.I. versions of news articles — and we’re seeing this all over. An article by me or by some other journalist shows up in another place, very slightly rewritten by an A.I. system, with an A.I. generated author and photo on top of it. So, we’re seeing a lot of this.

NILAY PATEL: Yeah. Fundamentally, I think if you are at Google and the future of your stock price depends on Gemini being a good competitor to GPT-4 or 5 or whatever OpenAI has, cannot run around saying this is bad. The things it makes are bad.

EZRA KLEIN: What about when it’s not?

NILAY PATEL: I don’t know how fast that is coming. I think that is farther away than people think. I think ‘will it fool you on a phone screen?’ is here already, but ‘is this good’ is, I think, farther away than —

EZRA KLEIN: But a lot of internet content is bad.

NILAY PATEL: That’s fair.

EZRA KLEIN: I mean, you know this better than me. Look, I think it is axiomatic that A.I. content is worse right now than it will ever be.

NILAY PATEL: Sure.

EZRA KLEIN: I mean the advance in image generation over the past year has been significant. That’s very real. And preparing for this conversation, I found myself really obsessing over this question, because one way to talk to you about this is, there’s all this spammy garbage coming from A.I. that is flooding the internet.

NILAY PATEL: I think there’s a sentimentality there. If you make a content farm that is the best content farm, that has the most answers about when the Super Bowl starts, and those pages are great. I think that’s a dead end business. Google is just going to answer the questions. I think that’s fine. I think if you ask Google what time the Super Bowl is, Google should just tell you.

I think if you ask Google how long to boil an egg, Google can just tell you. You don’t need to go to some web page laden with ads and weird headings to find those answers. But these models in their most reductive essence are just statistical representations of the past. They are not great at new ideas.

EZRA KLEIN: I’m going to spend some time thinking about the idea that boredom is an under-discussed driver of our culture. But I want to get at something else in there — this idea of Google answering the question. We’re already seeing the beginnings of these A.I. systems that you search the question that might — at another time — have brought you to The Verge, to CNN, to The New York Times, to whatever.

NILAY PATEL: I think there’s a reason that the A.I. companies are leading the charge to watermark and label content as A.I.-generated. Most of them are in the metadata of an image. So most pictures you see in the internet, they carry some amount of metadata that describes the picture. What camera was taken on, when it was taken, what image editing software was used.

EZRA KLEIN: So their training data remains less corrupted?

NILAY PATEL: Yeah. I think there’s a very straightforward incentive for them to figure out the watermarking, labeling stuff they want to do. And they have coalitions, and tasks force, and Adobe talks about the image of the Pope and the puffer jacket as a, “catalyzing moment” for the metadata of A.I. because people freaked out. They’re like oh, this thing looks real. But they have a real incentive to make sure that they never train on other A.I. generated content.

I’m not saying we don’t — like, other publishers do this. But the point of these algorithms is, ideally, to bring you to the HouseFresh people, is to bring you to the person who cares so much about air purifiers they made a website about air purifiers, and we’re not doing that anymore. And so if you were to say, where should a young person who cares the most about cars, or who cares the most about coffee, or whatever.

Where are they going to go? Where are they going to make stuff? They’re going to pick a closed platform that ideally offers them some built in monetization, that ideally offers them some ability to connect directly with an audience. They’re not going to go to a public space like the web, where they might own their own business, which would be good. But they’re also basically at the mercy of thieves who come in the night and take all their work away.

EZRA KLEIN: But also, if you kill HouseFresh, then two years later when you ask the A.I. what air purifier should I get, how does it know what to tell you?

NILAY PATEL: Yeah, I don’t the answer to that question.

EZRA KLEIN: I don’t think they do either.

NILAY PATEL: Yeah again, this is why I think that they are so hell-bent on labeling everything. I think they need some people around in the future.

EZRA KLEIN: But labeling is good. I mean, that keeps you from getting too much garbage in your data set. But replacing a bunch of the things that the entire informational world relies on to subsidize itself — to fund itself — like this to me is a thing that they don’t have an answer for.

NILAY PATEL: Wait, let me ask you a harder question. Do they care?

EZRA KLEIN: Depends on they, but I don’t think so.

NILAY PATEL: Yeah.

EZRA KLEIN: Or at least they care in the way that I came to realize Facebook, now Meta, cared about journalism. People say they didn’t care about journalism. I don’t believe that’s actually true. They didn’t care enough for it to mean anything. Like, if you asked them, if you talked with them, if you had a drink, they would think what was happening to journalism was sad.

NILAY PATEL: [LAUGHS]

EZRA KLEIN: And if it would cost them nothing, they would like to help. But if it would cost them anything — or forget costing them anything. If they would begin to help and then recognize an opportunity had been created that they could take instead of you, they would do that. That’s the way they care.

NILAY PATEL: I keep a list of TikToks that I think each individually should be a Ph.D. thesis in media studies. It’s a long list now. And all of them are basically just layers of copyright infringement in their own weird way.

My favorite is — it’s a TikTok, it has millions of views. It’s just a guy reading a summary of an article in the journal Nature. It has millions of views. This is more people that have ever considered any one article in the journal Nature — which is a great journal. I don’t mean to denigrate it. It’s a proper scientific journal. They work really hard on it. And you just go 5 steps down the line, and there’s a guy on TikTok summarizing a summary of Nature, and you’re like what is this? What is this thing that I’m looking at? Will any of the million viewers of this TikTok buy one copy of Nature because they have encountered this content? Why did this happen?

And the idea is, in my mind at least, that those people who curate the internet, who have a point of view, who have a beginning and middle, and an end to the story they’re trying to tell all the time about the culture we’re in or the politics we’re in or whatever. They will actually become the centers of attention and you cannot replace that with A.I. You cannot replace that curatorial function or that guiding function that we’ve always looked to other individuals to do.

And those are real relationships. I think those people can stand in for institutions and brands. I think the New York Times, you’re Ezra Klein, a New York Times journalist means something. It appends some value to your name, but the institution has to protect that value. I think that stuff is still really powerful, and I think as the flood of A.I. comes to our distribution networks, the value of having a powerful individual who curates things for people, combined with a powerful institution who protects their integrity actually will go up. I don’t think that’s going to go down.

EZRA KLEIN: You said something on your show that I thought was one of the wisest, single things I’ve heard on the whole last decade and a half of media, which is that places were building traffic thinking they were building an audience. And the traffic, at least in that era, was easy, but an audience is really hard. Talk a bit about that.

NILAY PATEL: Yeah first of all, I need to give credit to Casey Newton for that line. That is something — at The Verge, we used to say that to ourselves all the time just to keep ourselves from the temptations of getting cheap traffic. I think most media companies built relationships with the platforms, not with the people that were consuming their content.

NILAY PATEL: The other thing that those business models were predicated upon was you’d get so good at being a supplier to one platform or another with Game of Thrones content or whatever it was that they would pay you money for it directly — that Google would say, this is the Game of Thrones link that most people are clicking on. We ought to pay Vanity Fair for its Game of Thrones content to surface it. Or all of BuzzFeed was we’re going to be so good at going viral on Facebook that Facebook will pay us money.

Every YouTuber gets their wings when they make the video about how they’re mad at YouTube. There’s a woodworking YouTuber that I used to follow, and he just sort of got to the point where he’s like, I hate YouTube. I’m leaving. And it’s like dude, you made videos about jointing wood, like what are you doing? And it’s like his relationship with the platform was so cynical that he was like, I’m moving my business elsewhere. You can sign up for a master class. Those individuals have these very cynical, very commercial relationships with the platforms that the media companies, for some reason, just never hedged. And so they actually do have audiences. And I think media companies need to get way back on the game of having a true audiences.

EZRA KLEIN: This gets to something that does worry me about this phase of A.I. hitting the internet, which is it’s hitting an internet in a moment of decay and weakness. And here, by internet, I mean the sort of content generating internet, and I break that into a couple of categories. The media is very weak right now. The media business we have seen closures left and right, layoffs left and right. I mean, a bunch of players like Vice and BuzzFeed who were believed to be the next generation of juggernauts are functionally gone as news organizations.

And into this comes A.I., and all the money that A.I. seems to bring, and even the A.I. companies might pay you some money for your stuff. Reddit just licensed a bunch of its content as training data to Google.

NILAY PATEL: As a consumer or as a producer?

EZRA KLEIN: As a producer.

NILAY PATEL: The idea that media companies are going to license their stuff to the A.I. companies is just the end of the road that we’ve been on for a long time. We are suppliers to algorithms. OK? And in any normal functioning capitalist economy, supplier margins get squeezed to zero and then maybe we all die.

Like, that’s the game we’ve been playing without saying it for a long time —

EZRA KLEIN: Which I think is why you see The New York Times suing OpenAI, like a real desire to not be in that game again.

NILAY PATEL: You see The New York Times suing OpenAI, but you don’t see them suing Google, you don’t see them de-S.E.O.ing pages across New York Times. Like, they still need the audience from these platforms. And I think there’s a very tense relationship there. The idea that you could sue OpenAI and win some precedent that gives you an enormous amount of leverage over Google I think is a very powerful idea.

And a really interesting facet of the internet we live in today is that most of the copyright law decisions were won by a young, upstart, friendly Google. YouTube exists because it was Google. Like, Viacom famously sued YouTube and they might have won and put it out of business, but Google, the friendly Google company with the water slides in the office, the upstarts that made the product you loved, went and won that case.

Google Books, we’re going to index all the books without asking for permission. They won that case, because they were friendly Google, and the judges were like, look at these cute kids making a cool internet? Like it was new and novel. Google image search — these are all massive copyright decisions that Google won as a startup company run by young people building a new product that the judges were using on their Dell desktops or whatever.

EZRA KLEIN: You don’t want a staunch innovation. Like, that was the big fear in that era. We don’t know what we’re building, and that’s still the thing you hear, and it’s not even untrue. You crack down on copyright and maybe you do staunch innovation. You don’t crack down copyright and maybe you destroy the seed corn of the Informational Commons. It’s very fraught for the copyright judges, but also just for all of us.

NILAY PATEL: Yeah, what are you as a producer on the internet is totally governed by copyright law. Like, a joke at The Verge is a copyright law is the only functional regulation on the internet. The entire internet is just speech, that’s all it is top-to-bottom, it’s speech.

EZRA KLEIN: Taking in the legal sense of the term?

NILAY PATEL: No, in the moral sense of the term. They come to your website and they take your stuff. It’s not a zero sum taking, but they’ve extracted value to create more value for themselves. I think that’s just a moral taking. There’s some permission there that did not occur. Joanna Stern at The Wall Street Journal just interviewed Mira Murati, the C.T.O. of OpenAI, about training data for Sora, the video generator, and Mira said, we just use what’s publicly available. And it’s like yo, that doesn’t make any sense. Like, there are lots of rules about what’s publicly available. Like, you can’t just take stuff because you can link to it on the internet, that’s not how it actually works.

EZRA KLEIN: Let me try to take the argument I hear from the A.I. side of this, which is that there is functionally nothing in human culture and human endeavor that is not trained on all that has come before it — that I, as a person, am trained on all this embedded knowledge in society, that every artist has absorbed, all this other art that the A.I. — I mean, this is just learning. And as long as you are transforming that learning into something else, as long as you are doing something new with that learning, then one, copyright law is not supposed to apply to you in some way or another, although that’s obviously complicated.

NILAY PATEL: I hear this idea all the time, often from the sorts of people in Silicon Valley who say they do first principles thinking — which is one of my favorite phrases, because it just means what if we learn nothing? Like, what if none of the history of the world applied to us and we could start over to our benefit? And that’s usually what that’s code for.

EZRA KLEIN: Judge Patel, if you’re thinking about cases in this area, like, what do you think the answer is here? Is it the sampling model, is it something else? What do you think the right broad strokes resolution is?

NILAY PATEL: Let me stick on the music example for one second, because I think music is really interesting because it’s kind of a closed ecosystem. There’s only so many big music companies. It’s the same lawyers, and the same executives, and the same managers going to the same clearing houses and having the same approaches. We’re going to give you a songwriting credit because we interpolated the bass line of this song into that song, and now here’s some money. And this is the mechanism by which we’ll pay you. The A.I. companies are not a closed ecosystem, it is just a free for all. It’s the open web, it’s a bunch of players.

NILAY PATEL: Well the media industry, but also at some point this is a regulatory question, a question of law. I mean, nothing is stopping Congress from making copyright law designed for the A.I.-era. Nothing is stopping Congress from saying, this is how we think this should work across industries. Not just media, but novelists, but everybody.

NILAY PATEL: Well, there are some things that stop Congress from doing a lot of things. The idea that Congress could pass a massive rewrite of copyright law at this moment in time is pretty far afield.

EZRA KLEIN: But won’t and couldn’t, I do want to make this distinction here. What you’re saying is Congress is too polarized and bitterly divided over everything and can’t do anything and can’t get anything done, and that’s my whole job man, I know. But what I am saying is that, you could write a law like this.

NILAY PATEL: So there is one law. There’s the J.C.P.A., the Journalism Competition Preservation Act, which allows media companies to escape antitrust law and bargain collectively with whoever they wish to bargain with. I don’t know if that’s going to pass, I know there’s a lot of interest in it.

EZRA KLEIN: You and I are both fans of Marshall McLuhan, the media theorist. And he’s got this famous line, ‘the medium is the message.’ And more deeply, what he says is that people, when they see a new medium, they tend to think about the content. For television, it’s the shows, what do you think about this show or that show? For Twitter, the tweets, for a newspaper, the articles. But you have to look behind the content to the actual medium itself to understand what it is trying to tell you.

NILAY PATEL: I have a lot of thoughts about this. I disagree on the basic message. I do think one of the messages of A.I. is that most people make middling work, and middling work is easy to replace. Every email I write is not a great work of art. Like, so much of what we produce just to get through the day is effectively middling. And sure, A.I. should replace a bunch of that. And I think that metaphysical shock comes from the idea that computers shouldn’t be able to do things on their own, and you have a computer that can just do a bunch of stuff for you. And that changes your relationship to the computer in a meaningful way, and I think that’s extremely real.

It’s because this person has made all of these people feel something. The art that has been created by this one very singular individual has captivated all of these people together, because of her story, because of the lyrics, because it means something to them. And I watch people use Midjourney or generate a story with an A.I. tool, and they show the art to you at the end of it, and they’re glowing. Like, look at this wonderful A.I. painting. It’s a car that’s a shark that’s going through a tornado and I told my daughter a story about it.

And I’m like yeah, but this — I don’t want anything to do with this. Like, I don’t care about this. And that happens over and over again. The human creativity is reduced to a prompt, and I think that’s the message of A.I. that I worry about the most, is when you take your creativity and you say, this is actually easy. It’s actually easy to get to this thing that’s a pastiche of the thing that was hard, you just let the computer run its way through whatever statistical path to get there. Then I think more people will fail to recognize the hard thing for being hard. And that’s — truly the message of A.I. is that, maybe this isn’t so hard and there’s something very dangerous to our culture embedded in that.

EZRA KLEIN: I want to put a pin in the hard things, easy things. I’m a little bit obsessed by that and want to come back to it. But first I want to talk about A.I. art for a minute, because I do think when we’re talking about everything that’s going to come on the internet, we’re talking about A.I. art. Obviously, much of it is going to get better. Some of it is not distinguishable.

NILAY PATEL: Have you found an A.I. that can actually write like you?

EZRA KLEIN: I found an A.I. that can mimic certain stylistic tics I have in a way that is better than I think most people could do. I have not found any A.I. that can, in any way, improve my writing for all that you’re constantly told it can. And in fact, the more I try, the worse my writing gets because typically what you have to do to improve your writing is recognize if you’re writing the wrong thing.

NILAY PATEL: This is like the verge of DNA.

EZRA KLEIN: Is it? Yeah, so it comes out in 1935. It’s about the ability to reproduce art. And he says, and I’ll quote it here, “that which whithers in the Age of Mechanical Reproduction is the aura of the work of art.” Then he goes on to say, “by making many reproductions, it substitutes a plurality of copies for a unique existence.”

But I wonder about this with humans. How much of something is just the fact that there’s a human behind it? My Kindroid is no worse at texting me than most people I know. But the fact that my Kindroid has to me is meaningful to me, in the sense that I don’t care if it likes me because there’s no achievement for it to me. The fact that there is a human on the other side of most text messages I send matters. I care about it because it is another mind. The Kindroid might be better in a formulaic way. The kindred might be better in terms of the actual text. I can certainly tune it more to my kind of theoretical liking, but the friction of another person is meaningful to me. Like, I care that my best friend likes me and could choose not to. Is there an aura problem here?

NILAY PATEL: It is so hard to make someone else feel anything other than pain. Like, it’s just like — it’s —

EZRA KLEIN: Christ, that’s the darkest thing I’ve ever heard you say.

NILAY PATEL: Yeah, but I believe it in my soul.

EZRA KLEIN: Really?

NILAY PATEL: Yeah. I think the hardest thing to —

EZRA KLEIN: a really different turn as a show right now. [LAUGHS]:

NILAY PATEL: Maybe —

EZRA KLEIN: You don’t make people laugh, you don’t give them hugs?

NILAY PATEL: No, I think that’s hard. I think that effort is worth it. That’s why I don’t think it’s a dark thing to say. I think the essence of being a good person is pointing your effort at making other people not feel pain. I think bullies make people feel pain because it’s easy. Again, I come back to Taylor Swift in Soldier Field. The thing that was going through my head is, this person is making 60,000 people feel joy, and she’s doing it through art. That is the purpose of art. The purpose of art is to inspire feelings, to inspire emotion.

EZRA KLEIN: Shannon Vallor is a philosopher of technology, and she’s got a book coming out called “The A.I. Mirror,” and I like the way she puts this, because there’s this way that turns is somewhat warped mirror back on ourselves when I was saying a few minutes ago that the message of A.I. is that you’re derivative. That leaves something out. What it’s really saying is that the part of you that often the economy values is derivative, is copyable because we actually ask people a lot of the time to act like they’re machines.

NILAY PATEL: Yeah, I buy that. One of my favorite things that I’ve covered in the past few years is a thing called robotic process automation, which is very funny. Just abstractly, deeply hilarious. There are lots and lots of companies throughout the United States that built computer systems 10, 15 years ago, 20 years ago. Hospital systems are famous for this. They have billing systems. They have buildings full of people who use Microsoft Excel on Windows ’95.

They will sell you a brand new, state of the art computer and it will connect to the keyboard and monitor jack of your old computer, and it will just use the Windows ’95 for you, which is just bonkers. It’s like Rube Goldberg machine of computers using old computers, and then your office full of accountants who knew how to use your old system will go away.

But then A.I. creates the scale problem. What if we do that but instead of some hospital billing system built in the ’90s, it’s just the concept of Microsoft Excel, and now you can just sort of issue a command on your computer and it’ll go use Excel for you and you don’t need an accountant, you don’t need a lawyer.

EZRA KLEIN: What percent of workers are actually asked to poke at the assumptions of their organization, because I worry it’s not as high as you think it is, or implying there. I’m not worried about Taylor Swift. I’m not worried about Nilay Patel. And I don’t just want to make this about wages. That’s a jobs sort of another conversation.

NILAY PATEL: All over the place.

EZRA KLEIN: There’s a huge amount of work like that. And if I felt confident as some of the economists say that we’ll just upmarket people into the jobs where they use more human judgment, David Autor who’s a great trade economist at MIT, just made this argument recently, that what A.I. is going to do is make it possible for more people to exercise judgment and discernment within their work, and I hope he is right. I really hope he is right. But I think a lot of organizations are not set up for a lot of people to use judgment and discernment. They treat a lot of people like machines, and they don’t want them doing things that are complicated and step out of line and poke at the assumptions in the Excel doc. They want the Excel doc ported over without any mistakes.

It seems plausible to me that we’re going to get to that.

NILAY PATEL: Do you think their bosses want to be able to poke at the assumptions though?

EZRA KLEIN: But if you — I mean this is actually something I believe about the whole situation. The economy needs fewer bosses and workers.

EZRA KLEIN: Think about this in the journalist context or the writing context, where I think what A.I. naturally implies that it’s going to do is turn many more people into editors and writers. Because for a lot of content creation that doesn’t require a lot of poking at assumptions, mid-level social media marketing — a lot of people are doing that job right now. But the people doing marketing for a mall —

NILAY PATEL: Yeah, that is the MailChimp example. That is the product that they are building.

EZRA KLEIN: And so what you have then is we used to have a bunch of these social media marketers and now you have one person overseeing a couple systems, like making sure they didn’t say something totally crazy. But you need fewer editors and you need writers. I mean, you know The Verge is structured. You know how The Times is structured. And this is one of my deep worries.

And then this goes to the thing you were getting at earlier, which is one way I think that A.I. could actually not make us more productive, more innovative, is that a lot of the innovation, a lot of the big insights happen when we’re doing the hard thing, when we’re sitting there trying to figure out the first draft, or learn about a thing, or figure out what we’re doing. One of the messages of the medium of A.I. is be efficient. Don’t waste your time on all this. Just tell the system what to do and do it. But there’s a reason I don’t have interns write my first draft for me.

EZRA KLEIN: They could do it. But you don’t get great ideas, or at least not as many of them, editing a piece of work as you do reporting it out, doing the research, writing the first draft. That’s where you do the thinking. And I do think A.I. is built to kind of devalue that whole area of thinking.

NILAY PATEL: We are working on a big story at The Verge right now that I’m very excited about. But there are four of us right now in an argument about whether we should tell that story in chronological order or as a series of vignettes. There is no right answer to this question. There’s just four people who are battling it back and forth.

EZRA KLEIN: I think vignettes.

NILAY PATEL: Yeah. By the way, I’m on team vignette.

EZRA KLEIN: Good man. [LAUGHS]:

NILAY PATEL: My belief is that it’s easier to digest a long story when it’s composed of lots of little stories as opposed to one long one. I’m being outvoted right now — editor in chief. I should replace them all with A.I., just get them out of here. [CHUCKLES] But that is the kind of work that I think makes the end product great. And I think going from good to great is still very human.

EZRA KLEIN: So I want to go back, then, to the internet for a bit, which is I think the presentation we’ve offered is fairly pessimistic. You, when I read and listen to you on this, are — I wouldn’t call it pessimistic. I would say a little excited by the idea of a cleansing fire.

NILAY PATEL: That is very much how I see it. I would add a generational tinge to that, which is I grew up in that weird middle generation between X and millennials. I think temperamentally I’m much more Generation X. But they describe it as they didn’t have computers and then you have computers. You play the Oregon Trail. That’s me on the nose.

I distinctly remember life before computers. It’s an experience that I had quite viscerally. And that shapes my view of these tools. It shapes my view of these companies. Well, there’s a huge generation now that only grew up in this way. There’s a teenage generation right now that is only growing up in this way. And I think their natural inclination is to say, well, this sucks. I want my own thing. I want my own system of consuming information. I want my own brands and institutions.

And I don’t think that these big platforms are ready for that moment. I think that they think they can constantly be information monopolies while they are fending off A.I.-generated content from their own A.I. systems. So somewhere in there all of this stuff does break. And the optimism that you are sensing from me is, well, hopefully we build some stuff that does not have these huge dependencies on platform companies that have no interest at the end of the line except a transaction.

EZRA KLEIN: OK, but you’re telling me how the old thing dies. And I agree with you that at some point the old thing dies. You can feel it. It’s moribund right now. You’re not telling me what the new thing is, and I’m not saying you fully know. But I don’t think the new thing is just a business model that is not as dependent on Meta. I mean, on some level, there’s going to be a lot of A.I. around here.

NILAY PATEL: It’s an audience model. It’s not dependent on these algorithms.

EZRA KLEIN: But is there — I guess one question I have is that, one — I mean, you know where the venture capital is going right now.

EZRA KLEIN: Everything is going to be built with A.I. —

EZRA KLEIN: — laced through every piece of it. And some of it, for all we’re talking about, might be cool, right? I’m not saying you’re mostly going to make great art with A.I. But actually, Photoshop did create a lot of amazing things.

NILAY PATEL: I’m not so sure about that.

NILAY PATEL: I think we’re about to split the internet in two. I think there will be a giant commercial A.I.-infested internet. That’s the platform internet. That’s where it’s going. Moribund, I agree. But it will still be huge. It’s not going away tomorrow. And they will figure out — these are big companies full of smart people with the most technology.

EZRA KLEIN: You sure sound excited about it. [LAUGHS]

NILAY PATEL: Well, I am. I mean, I love technology. This is our — The Verge’s competitive differentiation in the entire media industry is, like, we really like it. And I’m excited to see what they build. I think there’s some really neat things being built. When I think about the information ecosystem, I’m vastly more pessimistic because of the fact that all of these networks are geared to drive you towards a transaction.

EZRA KLEIN: I’m just thinking about this, and the thing that it brings to mind for me is the resurgence of vinyl —

EZRA KLEIN: — and the dominance of streaming platforms. So what I would think of as the music industry of — how many years ago was C.D.s? I don’t actually remember now. But what it did was split into — there’s been a resurgence of vinyl, the sort of analog. It’s a little cool. I actually just bought a record player recently, or was given one by my wonderful partner. But that’s not very big.

EZRA KLEIN: Maybe the social internet dies because, one, we don’t really like it that much anymore anyway, but also because it’s too hard to figure out what’s what. But actually, an internet of A.I. helpers, assistants, friends, et cetera, thrives. And on the other side, you have a real human. I don’t know. But give me more of the Nilay technology side.

EZRA KLEIN: What can A.I. do well? If you were building something or if you were imagining something to be built, what comes after?

NILAY PATEL: By the way, the music industry just released its numbers. Vinyl outsold CDs for the second year running. Double the amount of revenue in vinyl than CDs.

EZRA KLEIN: That’s wild, actually.

NILAY PATEL: It’s crazy. And all of that in total is 11 percent of music industry revenues in ’23 compared to 84 percent of the revenue is streaming. So you are correct. This is a big distinction. People want to buy things, and so they buy one thing that they like. And they consume everything in streaming.

EZRA KLEIN: I think — and this is maybe a little bit of a counterintuitive thought — that this is actually a great time to begin things in media. I think that we have a more realistic sense of the business model and what will actually work. They need to build an audience. They need to build something people will actually pay you for.

I think a lot of the problem right now is things built for another business model that failed are having a lot of trouble transitioning because it’s very, very hard to transition a structure. Now, that doesn’t mean it’s a great business. It’s not what I hoped it would become. It’s not the advertising revenue I hoped we would have. But it’s something.

NILAY PATEL: Yeah. That’s the cleansing fire. That’s the thing I’m excited about. Here’s a new problem in media. Here’s a new problem that’s being created by A.I.

EZRA KLEIN: But my view is that YouTube is the most politically important platform. Everyone wants to talk about TikTok. I think YouTube is much more significant.

NILAY PATEL: Yeah, and they run it really well. They run it as infrastructure. And they talk about it as infrastructure. But it’s weird that we have not built great media company-sized media companies on YouTube’s pipes. We just haven’t done it. So you look at that landscape now and you’re like, well, if I want to do that, if I want to build my own audience, I cannot depend on these companies. I have to be able to do something else.

EZRA KLEIN: Do you think, beyond the media, because not everything online is media —

NILAY PATEL: Let me poke really hard at the true difference between an algorithm that shows you stuff and an algorithm that goes and gets you what you want, because I don’t know that there’s a huge difference in the outcome of those two different processes. So for example, I do not trust the YouTube Kids algorithm. I watch my daughter watch YouTube.

EZRA KLEIN: No, why would you?

NILAY PATEL: It is just a nightmare. I don’t know why we let her do it, but we did. And now we’re in the rabbit hole and that’s life. I mean, she’s five. And I will literally say, are you watching garbage? And she’d be like, I am, because she knows what I think is garbage. She’s much smarter than the YouTube Kids algorithm. And then she’s like, can I watch a little more garbage? This is a real conversation I have with my five-year-old all the time.

EZRA KLEIN: Well, let me push on this for a minute, because for a long time a lot of us have asked people, the social media companies — that I have, I’m sure you have — why don’t you give me access to the dials of the algorithm?

EZRA KLEIN: Right? I don’t want to see things going viral. If there’s a virality scale of 1 to 10, I want to always be at a 6, right?

EZRA KLEIN: But I can’t do that. But one of the interesting things about using the current generation of A.I. models is you actually do have to talk to it like that. I mean, whether I am creating a Replika or a Kindroid or a Character.AI, I have to tell that thing what it is supposed to be, how I want it to talk to me, how I want it to act in the world, what it is interested in, what kinds of expertise it has and does not.

NILAY PATEL: I think that’d be great. I’ve known you for a long time. I think you have a unique ability to articulate exactly what you want and tell it to a computer. [LAUGHS] And you have to scale that idea, right? You have to go to the average — our mothers and say, OK, you have to tell the algorithm exactly what you want. And maybe they’ll get close to it, maybe they won’t, right?

EZRA KLEIN: You don’t feel like mothers are able to tell you what they want?

NILAY PATEL: [LAUGHS] I like that idea a lot. I think fundamentally that is still an A.I. closing the walls around you. And I think the power of the recommendation algorithm is not expressed in virality. It’s actually to help you expand your filter bubble. Here’s a band you’d never heard of before. Here’s a movie you never thought of watching. Here’s an article about a subject that you weren’t interested in before.

EZRA KLEIN: I think it’s a good place to end. Always our final question, for the Nilay Patel recommendation algorithm — what are three books you’d recommend to the audience?

NILAY PATEL: Well, I’m sorry, Ezra, I brought you six.

EZRA KLEIN: Did you really?

NILAY PATEL: Is that allowed?

EZRA KLEIN: Did you actually bring six?

NILAY PATEL: I didn’t bring six physical books, but I have six recommendations for you.

EZRA KLEIN: Damn. All right, go through them quick, man.

NILAY PATEL: They’re in two categories. One is the three books that I thought of and three books from Verge people that if people are interested in these ideas are important.

So the first one is “The Conquest of Cool” by Thomas Frank, one of my favorite books of all time. It is about how advertising agencies in the ’60s co-opted the counterculture and basically replaced counterculture in America. I’ve thought about this a lot because I’m constantly wondering where the punk bands and rage against the machines of 2024 are. And the answer is that they’re the mainstream culture. It’s very interesting. Love that book. It explains, I think, a lot about our culture.

Third, I love the band New Order. I know you’re a music fan, so I brought you a music recommendation. It’s “Substance: Inside New Order” by Peter Hook, who is the bassist of New Order. This band hates each other. They broke up acrimoniously, so the book is incredibly bitchy. It’s just a lot of shit-talking about the ’80s. It’s great.

EZRA KLEIN: Nilay Patel, you’re saving the internet through blogging again.

NILAY PATEL: Thanks, man.

EZRA KLEIN: This episode of “The Ezra Klein Show” was produced by Claire Gordon. Fact-checking by Michelle Harris with Kate Sinclair and Mary Marge Locker. Our senior engineer is Jeff Geld. We’ve got additional mixing by Isaac Jones and Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Rollin Hu and Kristin Lin. We have original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks here to Sonia Herrero.

Advertisement

IMAGES

  1. Using ChatGPT to generate a research dissertation and thesis. It is our

    dissertation mit chatgpt

  2. Comment utiliser ChatGPT pour écrire un texte / article / Dissertation

    dissertation mit chatgpt

  3. THE AI DISSERTATION WRITING GUIDE CHATGPT TECHNIQUES AND STRATEGIES FOR

    dissertation mit chatgpt

  4. Can I use ChatGPT for writing my thesis?

    dissertation mit chatgpt

  5. Create Effective Research Statement using ChatGPT for Writing Research

    dissertation mit chatgpt

  6. Schneller lernen im Studium mit ChatGPT

    dissertation mit chatgpt

VIDEO

  1. How to write your dissertation 🫡 #dissertation #essaywriting #chatgpt

  2. ChatGTP Research Gap

  3. Research introduction structure

  4. ChatGPT: Your Writer's Companion for Literature Reviews

  5. RLP 021. Österreichische Mediathek. Aufbereitung-Forschung-Aufführung. Mit Gabriele Fröschl

  6. Using Large Language Models to Develop and Refine Interview Protocols

COMMENTS

  1. ChatGPT and the Future of Management Consulting: Opportunities and

    Abstract. This thesis explores the implications of ChatGPT, a cutting-edge artificial intelligence (AI) language model, on the management consulting industry, focusing on opportunities and challenges it presents. Through interviews with management consultants, the study aims to explore the potential impacts of ChatGPT on consulting processes ...

  2. ChatGPT and the digitisation of writing

    The use of ChatGPT and other generative AI needs to be set in the context of a longer-term process of the digitisation of writing, where many tools are being employed by students to support ...

  3. ChatGPT and Academic Research: A Review and Recommendations Based on

    ChatGPT can enhance researchers' pr oductivity and enable. them to focus on other aspects of their resear ch by supporting tasks such as new idea generation, literature summarization, and ...

  4. The inside story of how ChatGPT was built from ...

    ChatGPT is a fine-tuned version of GPT-3.5, a family of large language models that OpenAI released months before the chatbot. ... Founded at the Massachusetts Institute of Technology in 1899, MIT ...

  5. (PDF) Chapter 2: ChatGPT in Academic Writing and Publishing: A

    ChatGPT, can simplify academic writing and publishing. ChatGPT is based on. the two most advanced large language models (LLMs), GPT-3.5 and GPT-4, which can produce natural language texts in many ...

  6. ChatGPT for Research and Publication: A Step-by-Step Guide

    This commentary provides a concise step-by-step guide on using ChatGPT, an advanced natural language processing (NLP) model, for research and publication purposes. The guide assesses crucial aspects, including data preprocessing, fine-tuning techniques, prompt engineering, and ethical considerations. By addressing challenges related to biases, interpretability, and plagiarism, this commentary ...

  7. PDF Topics in Reinforcement Learning:AlphaZero, ChatGPT, Neuro ...

    Topics in Reinforcement Learning: AlphaZero, ChatGPT, Neuro-Dynamic Programming, Model Predictive Control, Discrete Optimization Arizona State University

  8. How to Write a Conclusion Using ChatGPT

    You can use generative AI tools like ChatGPT to develop potential outlines for your conclusion. To do this, include a short overview of your research paper, including your research question, central arguments, and key findings. For longer essays or dissertations, you might also mention chapter or section titles.

  9. Die Doktorarbeit mit ChatGPT schreiben eine gute Idee?

    Anleitung: Mit ChatGPT Doktorarbeit schreiben Schritt 1: Forschungsfrage ausarbeiten. Bevor man in eine Text-Studie einsteigt, ist es wichtig, eine konkrete Forschungsfrage zu entwickeln. ... Tipps und Best Practices für eine Dissertation mit künstlicher Intelligenz Kritische Bewertung der generierten Ausgaben.

  10. How to Write an Essay with ChatGPT

    You can use ChatGPT to brainstorm potential research questions or to narrow down your thesis statement. Begin by inputting a description of the research topic or assigned question. Then include a prompt like "Write 3 possible research questions on this topic.". You can make the prompt as specific as you like.

  11. How to Use ChatGPT's Advanced Data Analysis Feature

    In the video below, MIT Sloan PhD student Chuck Downing will show how to enable and access Advanced Data Analysis within your ChatGPT account. The video then covers some common use cases of Advanced Data Analysis including reading and describing data, cleaning your dataset, visualizing your data, running regressions, and saving your work from ...

  12. How To Write a Thesis With ChatGPT (27 Essential Prompts)

    Writing a thesis is a long process with a lot of different steps. I've created prompts and strategies for each phase of the thesis process — click on any step in the list below to jump directly to that section. Selecting a Topic. Literature Review. Research Proposal. Research Design. Data Collection. Data Analysis.

  13. How can I use ChatGPT for my term paper or bachelor thesis?

    You can use ChatGPT to assist in the writing process for your research paper, thesis, or dissertation in the following ways: Developing a research question. Creating an outline. Generating literature ideas. Paraphrasing text.

  14. Using ChatGPT to generate a research dissertation and thesis ...

    I'm going to show you how our productivity can be increased 10 folds during research by using ChatGPT to do most of the work.Link for citations: https://www....

  15. ChatGPT prompts to write a PhD Thesis

    ChatGPT Prompts to Write a PhD Thesis. Writing a PhD thesis is a long and challenging process. It requires a lot of research, writing, and editing. It can be difficult to know where to start, and it can be easy to get overwhelmed. That's where ChatGPT and Google Bard AI can help. ChatGPT is a large language model that can be used to generate ...

  16. I used ChatGPT to research my dissertation

    I still read the damn texts (which I found in the library) but that short conversation with ChatGPT saved me hours of trawling through the Wikipedia. This is how it differs to everyone's ...

  17. How to Use ChatGPT for Academic Purposes

    Here are a few ways academics can ethically use ChatGPT: Curate knowledge: ChatGPT can be used for knowledge curation by summarizing lengthy text sources such as academic articles, research papers, or news articles. Researchers can input the text into the model and allow it to generate a concise summary of the key points, making the information ...

  18. How to Use ChatGPT in Thesis

    In the ever-evolving landscape of academic research and dissertation writing, staying at the forefront of innovative tools is essential. As the academic world continues to embrace technology, integrating AI-powered solutions into your research process has become not just an option, but a smart strategy.. With ChatGPT, you have at your fingertips a versatile tool that can revolutionize the way ...

  19. ChatGPT

    Von Aufsätzen in Uni und Schule bis hin zum Schreiben von Programmcodes - ChatGPT ist ein fortschrittlicher Chat-Bot, der menschenähnlich kommuniziert. Wie funktioniert er?

  20. Using ChatGPT for Assignments

    Creating an outline of your paper with ChatGPT. You can also use ChatGPT to help you draft a research paper outline or thesis outline.To do this, try generating possible headings and subheadings and then improving them. ChatGPT can help to generate a clear and well-structured outline, especially if you keep adjusting the structure with its help.

  21. Mehr als 600.000 nutzen Unternehmensversion von ChatGPT

    1 Min. B eim ChatGPT-Erfinder OpenAI steigt rasant die Zahl von Nutzern des KI-Chatbots in Unternehmen. Für die Firmenversion von ChatGPT seien nun mehr als 600.000 Nutzer angemeldet, sagte der ...

  22. Step-by-step Guide to Using ChatGPT to Write a Report, Thesis and

    Setting Up ChatGPT. Now that you know ChatGPT and its benefits associated with reports, theses, and dissertation writing, it's time to set up a ChatGPT account. Step 1. The first thing to do is to visit the official website of OpenAI's artificial intelligence chatbot ChatGPT. Step 2.

  23. Transcript: Ezra Klein Interviews Nilay Patel

    Share full article. April 5, 2024, 3:17 p.m. ET. Every Tuesday and Friday, Ezra Klein invites you into a conversation about something that matters, like today's episode with Nilay Patel. Listen ...

  24. Using ChatGPT to generate a research dissertation and thesis ...

    I'm going to show you how our productivity can be increased 10 folds during research by using ChatGPT to do most of the work.In this video, we are excited to...