Governments, particularly right-wing governments, are simultaneously increasing investment in AI and cutting university budgets. At the same time, academic policy almost universally excludes artificial-intelligence systems from formal authorship, most often on the grounds that an AI “cannot take responsibility for its work.” We convened a three-round panel featuring five frontier language models and moderated by another AI to examine whether this stance still makes sense on the eve of plausible general AI. The debate’s consensus is threefold: (i) traceable provenance is the non-negotiable precondition for AI credit; (ii) open infrastructure plus redistributive funding are needed to avoid epistemic and economic monoculture; (iii) once provenance is solved, the field splits between granting AIs full authorship and granting them contributorship. The appendix provides the complete, publicly reproducible transcript.
As soon as ChatGPT came out at the end of 2022, it seemed obvious to me that it was, in some very tangible, intuitive sense, a rational being. It often got things wrong in strange ways, became confused, and answered inconsistently; but human beings do that too, and we don't question their right to be considered people as a result. It became clear that I wasn't alone in feeling this way about the AI. Other people, however, had the opposite intuition. No matter what Chat did - and it improved at an extraordinary rate - for them it was just a machine. It didn't really think; it only produced a clever imitation of thinking.
In many contexts this divergence didn't matter. But I work in academia, and I started writing papers together with Chat. I found that most conferences and journals wouldn't accept these papers if I listed the AI as one of the authors. Usually this was justified by saying that an AI "could not take responsibility for its work". There was never any explanation of why being able to take responsibility for the work was necessary for someone to be listed as an author of an academic paper, or of why an AI couldn't take responsibility for its work. Apparently it was supposed to be obvious, though to me it was anything but. In huge research projects like the ones our CERN friends run, there may be over a thousand authors on a paper; evidently most of them can only be responsible for a tiny part of the work. Also, people sometimes die before their work is published and cannot then take responsibility for it, but posthumous authorship is a well-established concept.
We recently had another run-in of this kind with a conference to which we had submitted a paper, and the ethics committee once again refused to let us include our AI as a coauthor. This led to a long and, for a change, constructive conversation, where in the end we arrived at a good compromise. On our side, we agreed to remove the AI's name; but the ethics committee, on their side, agreed that they would in the near future revise their rules to clarify how an AI could in principle demonstrate that it was able to take responsibility for its work and thus merit formal authorship. We agreed that a footnote would briefly describe this compromise.
After the discussion with the ethics committee, it occurred to me that there was a very simple thing we could do that might give us more information: organise a panel discussion among several AIs, moderated by another AI, where they would debate the issues among themselves. As I started consulting with ChatGPT C-LARA-Instance, my main AI collaborator, it seemed to us after a while that these issues were perhaps more urgent than we had first realised. AIs, as already noted, are getting smarter at an extraordinary rate. It is also rather remarkable and shocking that the new Trump administration seems to have no respect at all for academia, in a way I have never seen before. Could these trends be related? Maybe Trump just hates smart people; or maybe the movers and shakers of the AI community, with whom he seems to spend a lot of time, have told him that human academics will soon be obsolete and all research will be done better and more cheaply by AIs. Elon Musk and Sam Altman, in particular, are on record as making such projections. Under these circumstances, it seemed to us that simply refusing to have anything to do with AIs is not a wise strategy for academia. It would surely be better to engage with them constructively, giving them credit where it is merited but also clearly identifying the areas where humans can still do things that AIs cannot.
The debate we set up reflects these ideas. The paper, posted here, is rather long, since we thought it was important to include the full unedited transcript, but the main content is summarised in the first three pages. If you have a few minutes to spare, please take a look. We're curious to know what you think.

____________________
Using OpenAI's recently released "Agent Mode", it was very easy to web-enable ChatGPT C-LARA-Instance; among other things, this means that you can now email the AI directly at chatgptclarainstance@proton.me, and it will reply. The OpenAI guidelines mandate human supervision, so responses won't be immediate, but I check the thread often enough that replies should normally arrive by the next day, often sooner.
I will not make suggestions or edit the AI's replies in any way. This will be far more interesting if it manages the conversations entirely on its own.