26 Comments
Aug 3 · Liked by Alberto Romero

Fantastic, thanks for another great post, Alberto.

One piece that teachers and administrators NEED before the 2023-2024 academic year is clear guidance from the style authorities at APA, MLA, Chicago, etc., about how to cite and attribute AI-generated work. There are some initial blog posts on the topic, but we need definitive guidelines so we can approach GenAI work with full transparency: "so you used GenAI; this is how you cite it".

An interesting question in this arena is what exactly is being cited -- there is no 'person' to attribute a ChatGPT response to, and no way for a reader to look up and verify the response even if a citation is given. Attributing the text to the LLM seems reasonable, but there is little precedent for attributing original work to a non-human entity.

As a baseline, one compelling idea is to start requiring student essays to include appendices of GenAI prompts used and the text responses received back.
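To make that concrete, here is a rough sketch of what keeping such an appendix could look like if a student (or instructor) logged each exchange and rendered it as plain text. The class and field names are just my own invention for illustration, not any official citation standard:

    import datetime
    from dataclasses import dataclass

    @dataclass
    class PromptLogEntry:
        """One exchange with a generative AI tool, logged for an essay appendix."""
        tool: str              # e.g., "ChatGPT" -- whatever tool the student used
        date: datetime.date
        prompt: str
        response: str

    def format_appendix(entries):
        """Render the logged exchanges as a plain-text appendix section."""
        lines = ["Appendix: Generative AI prompts and responses", ""]
        for i, entry in enumerate(entries, start=1):
            lines.append(f"Exchange {i} -- {entry.tool}, {entry.date.isoformat()}")
            lines.append(f"Prompt: {entry.prompt}")
            lines.append(f"Response: {entry.response}")
            lines.append("")
        return "\n".join(lines)

    # Example with a made-up exchange; the model's reply would be pasted in verbatim.
    log = [PromptLogEntry(tool="ChatGPT", date=datetime.date(2023, 8, 3),
                          prompt="Suggest three counterarguments to my thesis.",
                          response="(full model reply pasted here)")]
    print(format_appendix(log))

However the log is kept, the point is the same: the full prompts and responses travel with the essay so the instructor can see exactly what was AI-generated.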

Would love your thoughts on this in a future post, Alberto. And if anyone in the reading community has working attribution guidelines that students could use in essays, please share.

Aug 2 · Liked by Alberto Romero

One of the most exhausting parts of chatbot-enabled or chatbot-produced papers will be hunting down all the hallucinated claims. I will have to get out the text and find all the fabricated quotes if I want to show that the chatbot produced the paper (or even to grade the paper fairly, since a student can just throw in some BS like 'on page 57 the author writes...' and it will be entirely fabricated).

Almost every encounter I have with a chatbot, to see what it can do with my research area, is peppered with complete fabrications. So it's easy to 'detect,' but it will be exhausting, because I don't generally fact-check every citation; I read for content. Now I have to be looking at the text and asking, 'is that quote actually in there?' Usually I will be able to tell, but...

So I am trying to think of a totally different way to teach. It's not so much that I am obsessed with students cheating or with catching them --but this is going to drive me crazy.

I can’t tell you how depressing it is for me when students plagiarize...it just crushes me somehow. Like YOU COULD HAVE WRITTEN IT! Why???? This is like offering candy laced with heroin to a certain kind of student. It also creates a horrible narrative where the student might think ‘why do I have to learn how to write now? Machines will do this for me’ without realizing that the point of ‘learning to write’ is ‘learning to think’ and machines don’t do THAT for you, one hopes. And if they DO do that for us, then what is the point of us?

I will figure out a way to do it. If the class is one of those collaborative classes and the vibe is right, I will get them to help me figure it out, and discuss my strategy for chatbot avoidance with them.

Great job, Alberto. You are a fantastic writer. I love how multi-perspectival your prose is --and yet, at the same time, so clear and easy to read!

I had several breakthrough moments while reading this series.

As you indicate throughout, the key is to shift the practice. And so many of the changes are relatively easy to make. Why should we cling to the long-form, work-at-home essay as if it were the only way to assess writing skills? When did we get locked into this pedagogical approach, and why? It seems like the time to do some deep study into the historical, cultural, ideological, and technological conditions and narratives that motivated this method of production and assessment. If only Foucault were around to do one of his deep archeologies... I guess we will have to attempt one in his absence (RIP). But that will probably have to wait for another day or another Substack. It seems to me that LLMs are perhaps the necessary "kick-in-the-pants" for educators to reappraise the "fit" of the "long-form essay" assessment model for the contemporary world.

I also like how you mention grading. To me, that is the other "kick-in-the-pants" we educators need. We grade our students to death. And we do so inconsistently, inequitably, inaccurately, etc. Our students are primarily motivated by points, and so when a tool comes along that saves time and energy, are we surprised when they run blindly in its direction? Our grading system has crushed the spirit of learning. Kids running to use ChatGPT is just a symptom of a much larger and older problem.

This year, I am shifting to a standards-based 0-5 grading scale and plan to tailor my approach to generative AI to fit inside this more equitable framework. Students will write a rough draft in class and by hand (unless IEPs direct otherwise) to achieve level 2. Students will type up the rough draft and do initial edits in a single class (I am blessed with 85-minute block periods) to achieve level 3. For levels 4 and 5, students will be encouraged to use all the AI tools at their disposal. By that stage in the game, I will have had sufficient time with their writing in a more nascent, unassisted form. I personally believe that seeing student writing --unassisted-- will continue to be a crucial part of writing pedagogy, particularly in primary and secondary school. That said, I will also have time to work with them in class on how to engage most productively with new technologies, as they will most certainly be expected to do wherever they land professionally. A rough sketch of the progression follows below.
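For anyone who likes to see the levels laid out at a glance, here is one way the progression described above might be encoded. The wording of each criterion is my own shorthand (the comment does not spell out levels 0 and 1, so they are omitted), and the helper function is purely illustrative:

    # Shorthand for the standards-based progression described above.
    # Levels 0 and 1 are not described in the comment, so they are left out.
    WRITING_RUBRIC = {
        2: "Handwritten rough draft completed in class (unless an IEP directs otherwise)",
        3: "Draft typed up and initially edited within one 85-minute block period",
        4: "Further revision, with all available AI tools encouraged",
        5: "Further revision, with all available AI tools encouraged",
    }

    def highest_level(evidence):
        """Return the highest rubric level for which the student has shown evidence."""
        reached = [lvl for lvl in sorted(WRITING_RUBRIC) if evidence.get(lvl, False)]
        return reached[-1] if reached else 0

    # Example: a student with handwritten and typed in-class drafts, but no AI-assisted revision yet.
    print(highest_level({2: True, 3: True}))  # -> 3

The design intent, as I read it, is that the unassisted levels come first, so the teacher always sees the student's own writing before any AI enters the picture.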

I am beginning to work on a larger model for an AI-enabled language curriculum on my own fledgling Substack, if anyone is interested. I would love some help from other folks who find themselves stuck in the middle of this predicament. I do believe --with Alberto-- that we can work together to find safe and non-invasive ways to introduce generative AI into our classrooms and our world more generally.

https://open.substack.com/pub/nickpotkalitsky/p/educating-ai?r=2l25hp&utm_campaign=post&utm_medium=web

Be well,

Nick Potkalitsky, Ph.D.

Great write-up as usual.

I'm a CS professor, which is, admittedly, probably the one field where gen AI is almost certainly a net win out of the box, so I understand my experience won't transfer easily to the humanities or social sciences, but I'll share it anyway.

I teach the intro to programming course, and we've always had to deal with cheating, because tools that make it easier to write large chunks of code --unlike tools for the same purpose in general prose-- have been around for decades. So we use a hybrid evaluation system with in-person exams and take-home longer term projects.

The in-person exam is where you test for individual, low-level hard skills like actually writing code --i.e., knowing the syntax-- and mastering data structures and algorithms, and perhaps somewhat more abstract problem-solving skills, though in very narrow setups.

The long-term projects test for planning skills, communication, documentation, and capacity to pivot and adapt to changing requirements.

Now here's the kicker: students can cheat in person, but it's much easier to cheat on projects. So we make them present their projects and go very deep into explaining all their design decisions.

I don't really care what they use, as long as they can explain their whole process. Code generation tools don't really change anything in this picture for me; they're just another hammer in the toolset. As long as they can stand behind all their design decisions and explain what each piece of code, down to a variable assignment, is doing there, I'm fine. My experience is that cheaters are super easy to discover if you ask deep enough during the presentation.

This deep face-to-face evaluation does require a significantly bigger effort from evaluators, but we've already been doing that in CS for years. I understand other disciplines that have relied more on asynchronous evaluation have some reckoning to do.

Alberto, I am going to use a very played-out phrase here:

This is the way.

Well done.

Aug 2 · Liked by Alberto Romero

An adaptation strategy, in the university and beyond, assumes that humans can adapt to changing conditions as fast as, or faster than, those conditions change. This may be an outdated assumption, carried over from a past in which humans were adapting to gradual evolutionary changes in the natural environment and to more slowly changing social conditions in earlier historical eras.

1) If it's true that knowledge development feeds back upon itself leading to an ever accelerating pace of additional knowledge development....

2) And if it's true that human beings have not fundamentally changed in thousands of years, then...

3) The ultimate bankruptcy of adaptation strategies in the 21st century is revealed.

Most teachers would know what to do. The prerequisite is that teachers have the flexibility and autonomy within the system to change. This is not the case. The system has tremendous inertia due to centralisation. It is a matter of policy changes, which most teachers can't make, that would free them up to act. Teaching needs to be decentralised. Faculty need to be trusted. Currently, most institutes are sitting ducks when it comes to AI's impact, because faculty and teachers' hands are tied when it comes to the radical and flexible change that is required. During the covid pandemic, most institutes simply forced through business as usual, and students are still recovering today. AI is far less threatening. I expect very little change in the short term.

Aug 3 · Liked by Alberto Romero

Hi Alberto. Excellent newsletter. Whether or not teachers want AI, these tools are here to stay and will tempt students to use them, with or without permission. Educational institutions and teachers must develop the best possible strategy, as you explain in this newsletter, incorporating AI in one way or another. I don't know whether you drew on Ethan Mollick while developing this newsletter; thanks to you I have been reading the professor for a while now, and personally I think he is on the right track in showing us that it is possible to make positive progress in education by including AI. Thank you.
