I’ve seen several videos lately on the use of AI for PhD research and writing, with titles like:
These kinds of headlines create the impression that not only can you use AI to make your work a lot easier, but that you should be using it.
But it’s not true. There is no AI platform that can do the work for you, and if you believe the hype and trust it too much, it can cause you far more problems than it solves.
There is a very limited set of tasks where AI can be used effectively and ethically:
And, finally, and this is where AI can do things that humans can’t, you can use machine learning techniques to analyse huge amounts of data. But even then, you need to know enough to be able to check what it has done on your behalf.
If you want to use ChatGPT, or any other platform, to generate text for you, or to paraphrase papers to avoid plagiarism, or to summarize an area of the literature so you don’t have to read the papers… just don’t do it.
Aside from any ethical issues, the danger with relying on AI is that the results tend to look good on a superficial reading, but it makes mistakes that you might not spot if you don’t have the expertise. And it will always make mistakes, because it doesn’t understand anything.
As I said in my last video on AI, anything that you submit as your work, you have to be able to defend. If an examiner questions something that doesn’t make sense because of some AI hallucination, you’re in trouble. If they spot one instance where you didn’t do the work, it calls your whole thesis into question. For sure, some people get away with it, but I think it’s just not worth the risk.
But what about these other platforms that do things like suggesting research topics or summarizing literature?
Something to bear in mind when you listen to YouTubers recommending AI platforms is that some of them are being paid to promote them and not declaring it. I know this because one AI company offered me money to review their product, and they told me who else they’d worked with, which they probably shouldn’t have done.
So if something sounds like a sales pitch for an AI platform, treat it like one, and be very skeptical about any software that promises to solve all your problems.
When reading the literature, the problem for both humans and AI is that a lot of published research is flawed, and it takes a bit of experience to spot the problems.
For example, in my own research I used a technique called atomic force microscopy. In some ways this technique is very easy (you could learn enough to get images in an afternoon), but the images can contain artifacts, basically giving false results. With a bit of experience you could spot these easily, yet there were many published papers with these kinds of problems (or worse).
Whether AI could spot problems in papers is almost irrelevant, because how would you know? Without reading the papers yourself, how can you put your trust in what an AI platform is telling you? It seems risky if you don’t have the expertise to check.
If you do have the expertise, then maybe an AI platform can assist you, but that’s quite a major caveat.
There is no way around developing your own expertise and doing the reading yourself. There are ways to do this a lot more efficiently, which I talk about elsewhere, but you’ve still got to do the work.
Ultimately, that’s where the fun is. When you solve a problem you’ve been struggling with, or get a flash of insight, or when you find a paper that changes the way you think about your subject, those are the moments that make academic research worthwhile.
And in a world where people are using AI to churn out mediocre work, the people with genuine skill and expertise, far from being left behind, will be the ones to stand out.
How to write your PhD literature review WITHOUT using AI (Part 1)
PhD: an uncommon guide to research, writing & PhD life is your essential guide to the basic principles every PhD student needs to know.
Applicable to virtually any field of study, it covers everything from finding a research topic, getting to grips with the literature, planning and executing research and coping with the inevitable problems that arise, through to writing, submitting and successfully defending your thesis.
All the text on this site (and every word of every video script) is written by me, personally, because I enjoy writing. I enjoy the challenges of thinking deeply and finding the right words to express my ideas. I do not advocate for the use of AI in academic research and writing, except for very limited use cases.
See also:
Nadjet says:
Hi,
Thank you very much for the clarification. You gave me a reason for not using AI. All the people around me were advising me to use it, and I was reluctant. But taking into account the fact that I have to finish my PhD by June, and that I still have a lot of work to do, I was starting to think of using it.
Bratati Bhattacharyya says:
Your video was informative. It is true that you need to know your work, as you have to defend it ultimately. As AI is a tool, there must be clear specifications as to what, how, and why one needs to use it.
Helmut Wagabi says:
Thanks for the great advice. I think AI is a tool that guides us in doing research, and it is here to stay. We need to check everything generated by AI.
James Hayton, PhD says:
It shouldn’t guide you; it should assist you. I really think that you should be the one guiding the direction of your research.
Veronique Walsh says:
Thanks James for yet another clearly written and delivered video on current issues. I am glad to hear AI is useful in some contexts, but I wouldn’t trust it for gathering any niche or groundbreaking information. I asked ChatGPT ‘Who is ….’ and put in my stage name as a little-known musician with a small music presence online. ChatGPT gave me several hundred words saying that I was a company in Southern France that invented and sold highly sophisticated audio equipment. No such company exists. I asked it the same question a few months later and it said that I was a songwriter, which is true. Although it named some of my songs, the long ramble it wrote about my magnificence could have applied to any acoustic female songwriter from the past fifty years. The style of the writing was clearly derived from the publicity materials of other female songwriters online, combined with online reviews of my music. It was very superficial and did not analyse my music, my lyrics, performance, audience, etc. Basically it was nonsense. So I wouldn’t rely on it for research into musicians. Wikipedia is better, but is still not reliable as academic information without other checked sources. Research is about finding out new information; AI seems to recycle established information available online, and does it badly. But it has other uses for looking at data, as you mentioned in your video.
James Hayton, PhD says:
The real danger is that academics are using AI and publishing based on what it produces. This then feeds back into the LLM…
Wikipedia is pretty good for a lot of academic concepts, because it’s often academics who write and edit the pages about obscure technical concepts (but of course everything needs verification).