The false promise of AI for PhD research and writing

CORRECTION: In the video I said it was OK to use AI for transcription. Having given it some more thought I’m not sure about this. I would not trust ANY online AI platform with sensitive data, and you may be in breach of ethics or privacy regulations if you upload audio files of participant interviews. If you can auto-transcribe without the data leaving your computer you might be OK. Double check with your ethics committee on the rules, and if in doubt, don’t risk it.

I’ve seen several videos lately on the use of AI for PhD research and writing, with titles like:

  • “Effortless research with these AI tools”
  • “How to use AI to write your literature review in a week”, and
  • “Don’t get left behind! Use these AI tools”

These kinds of headlines create the impression not only that you can use AI to make your work a lot easier, but that you should be using it.

But it’s not true. There is no AI platform that can do the work for you, and if you believe the hype and trust it too much, it can cause you far more problems than it solves.

How AI can be used ethically in your research

There is a very limited set of tasks where AI can be used effectively and ethically:

  • It’s great for proofreading for typos and grammar.
  • It’s also fantastic for audio transcription (though you should still listen, check the transcript and correct any mistakes). I’ve changed my view on using AI for transcription: see footnote 1.
  • If you’re just starting out in a new field and there are concepts you don’t understand, you can ask ChatGPT for an explanation and it will usually do a good job (though again, you should verify through other sources).

And finally, and this is where AI can do things that humans can’t, you can use machine learning techniques to analyse huge amounts of data. But even then, you need to know enough to be able to check what it has done on your behalf.

Never use AI to generate text for you

If you want to use ChatGPT, or any other platform, to generate text for you, or to paraphrase papers to avoid plagiarism, or to summarize an area of the literature so you don’t have to read the papers… just don’t do it.

Aside from any ethical issues, the danger with relying on AI is that the results tend to look good on a superficial reading, but it makes mistakes that you might not spot if you don’t have the expertise. And it will always make mistakes, because it doesn’t understand anything.

Anything you submit, you have to be able to defend

As I said in my last video on AI, anything that you submit as your work, you have to be able to defend. If an examiner questions something that doesn’t make sense because of some AI hallucination, you’re in trouble. If they spot one instance where you didn’t do the work, it calls your whole thesis into question. For sure, some people get away with it, but I think it’s just not worth the risk.

But what about these other platforms that do things like providing research topics or summarizing literature?

Be very skeptical about what others are saying

Something to bear in mind when you listen to YouTubers recommending AI platforms is that some of them are being paid to promote them without declaring it. And I know this because one AI company offered me money to review their product, and they told me who else they’d worked with, which they probably shouldn’t have done.

So if something sounds like a sales pitch for an AI platform, treat it like one, and be very skeptical about any software that promises to solve all your problems.

The problem with using AI to summarize academic literature

When reading the literature, the problem for both humans and AI is that a lot of published research is flawed, and it takes a bit of experience to spot the problems.

For example, in my own research I used a technique called atomic force microscopy. In some ways this technique is very easy (you could learn enough to get images in an afternoon), but the images could contain artifacts, basically giving false results. With a bit of experience you could spot these easily, yet there were many published papers with these kinds of problems (or worse; see footnote 2).

How can you trust AI?

Whether AI could spot problems in papers is almost irrelevant, because how would you know? Without reading the papers yourself, how can you put your trust in what an AI platform is telling you? It seems risky if you don’t have the expertise to check.

If you do have the expertise, then maybe an AI platform can assist you, but that’s quite a major caveat.

You still have to do the work

There is no way around developing your own expertise and doing the reading yourself. There are ways to do this a lot more efficiently, which I talk about elsewhere, but you’ve still got to do the work.

Ultimately, that’s where the fun is. When you solve a problem you’ve been struggling with, or get a flash of insight, or when you find a paper that changes the way you think about your subject, those are the moments that make academic research worthwhile.

And in a world where people are using AI to churn out mediocre work, the people with genuine skill and expertise, far from being left behind, will be the ones to stand out.

See also

How to write your PhD literature review WITHOUT using AI (Part 1)

  1. Under no circumstances should you trust online AI platforms with interview data! If you do, you may be in breach of ethics or data privacy rules. If you can use software to help with transcription without any data leaving your computer you might be OK, but check with your ethics committee. ↩︎
  2. For a great example of bad papers getting through peer review, see this video featuring my PhD supervisor ↩︎

Comments

  1. Hi,
    Thank you very much for the clarification. You gave me an objective for not using AI. All people around me were advising me to use them and I was reluctant towards using them. But taking into account the fact that I have to finish my PhD by June, and that I still have a lot of work to do, made me think of using them

  2. Your video was informative. It is true that you need to know your work as you have to defend your work ultimately. As AI is a tool, AI must have specifications as to what and how and why one needs to use the same.

  3. Thanks for the great advice. I think AI is a tool that guides us in doing research and it is here to stay. We need to check out everything generated by AI

    • It shouldn’t guide you- it should assist you. I really think that you should be the one guiding the direction of your research

  4. Thanks James for yet another clearly written and delivered video on current issues. I am glad to hear AI is useful in some contexts but I wouldn’t trust it for gathering any niche or ground breaking information. I asked chatbox GPT ‘Who is ….’ and put in my stage name as a little known musician with a small music presence online. Chatbox GPT gave me several hundred words saying that I was a company in Southern France that invented and sold highly sophisticated audio equipment. No such company exists. I asked it the same question a few months later and it said that I was a songwriter, which is true. Although it named some of my songs, the long ramble it wrote about my magnificence, could have applied to any acoustic female songwriter from the past fifty years. The style of the writing was clearly derived from publicity materials online of other female songwriters, combined with online reviews of my music. It was very superficial and did not analyse my music, my lyrics, performance, audience etc. Basically it was nonsense. So I wouldn’t rely on it for research into musicians. Wikipedia is better, but is still not reliable as academic information without other checked sources. Research is about finding out new information, AI seems to recycle established information available online, and does it badly. But it has other uses for looking at data as you mentioned in your video.

    • The real danger is that academics are using AI and publishing based on what it produces. This then feeds back into the LLM…

      Wikipedia is pretty good for a lot of academic concepts because it’s often academics who write and edit the pages about obscure technical concepts (but of course everything needs verification)



AI-free zone

All the text on this site (and every word of every video script) is written by me, personally, because I enjoy writing. I enjoy the challenges of thinking deeply and finding the right words to express my ideas. I do not advocate for the use of AI in academic research and writing, except for very limited use cases.


© James Hayton. All rights reserved.
PhD Academy Ltd is a UK registered company #16183073