Jump to ratings and reviews
Rate this book

Teaching With AI: A Practical Guide to a New Era of Human Learning

Rate this book
How AI is revolutionizing the future of learning and how educators can adapt to this new era of human thinking. Artificial Intelligence (AI) is revolutionizing the way we learn, work, and think. Its integration into classrooms and workplaces is already underway, impacting and challenging ideas about creativity, authorship, and education. In this groundbreaking and practical guide, teachers will discover how to harness and manage AI as a powerful teaching tool. José Antonio Bowen and C. Edward Watson present emerging and powerful research on the seismic changes AI is already creating in schools and the workplace, providing invaluable insights into what AI can accomplish in the classroom and beyond. By learning how to use new AI tools and resources, educators will gain the confidence to navigate the challenges and seize the opportunities presented by AI. From interactive learning techniques to advanced assignment and assessment strategies, this comprehensive guide offers practical suggestions for integrating AI effectively into teaching and learning environments. Bowen and Watson tackle crucial questions related to academic integrity, cheating, and other emerging issues. In the age of AI, critical thinking skills, information literacy, and a liberal arts education are more important than ever. As AI continues to reshape the nature of work and human thinking, educators can equip students with the skills they need to thrive in a rapidly evolving world. This book serves as a compass, guiding educators through the uncharted territory of AI-powered education and the future of teaching and learning.

Audio CD

Published April 30, 2024

214 people are currently reading
578 people want to read

About the author

José Antonio Bowen

9 books3 followers

Ratings & Reviews

What do you think?
Rate this book

Friends & Following

Create a free account to discover what your friends think of this book!

Community Reviews

5 stars
142 (25%)
4 stars
255 (45%)
3 stars
124 (22%)
2 stars
36 (6%)
1 star
5 (<1%)
Displaying 1 - 30 of 91 reviews
Profile Image for Lizzy Karnaukh.
22 reviews
August 27, 2025
Faculty summer reading. To no one's surprise, I found myself frustrated throughout. I can entertain some of the authors' points about, for example, emphasizing process over outcomes in assignments - that's an important conversation about pedagogy to have on its own. And for those who are already using AI in their teaching, I'm sure this book is helpful.

But to anyone who has strong ethical concerns about the proliferation of AI - not just about cheating and academic integrity, but about the environmental impact of AI technology and its anti-humanism - the authors are unconvincing. For all of the studies they cite and examples they share, they fail to grapple with counterarguments (which perhaps makes for C-level writing). They wave off ethical dilemmas as something that would make the book too long, but frankly the book is littered with suggestions for prompts and assignments that render the reading dull and lifeless. Such is the "practical" aspect, I suppose, but that assumes that we have all agreed to this practice, which I remain unconvinced in.

Another assumption the authors make is that the role of AI in increasing efficiency and productivity in a workplace is beneficial for that workplace. For high-level computing or automatable technological tasks, I can see the benefit. But education is an inherently inefficient process - we are slow to learn how to read, write, think, etc. After all, learning is a lifelong process, even after official schooling! In that sense, it seems impossible to me to speed up the learning process without losing something core to that process, which is finding a way through and make sense of complicated thoughts, emotions, and ideas - the internal struggle to understand something that is larger than oneself. If teachers or students are feeling pressed for time and want to cut corners, the solution is not to use AI - instead we should question what conditions make them feel that way and make the environment more conducive to the gradual, iterative process that is learning and teaching.

And, as I will continue to believe until I can be convinced by sounder arguments or life experience, to be human is to experience thinking, empathy, love, culture, art, nature in relation to the other humans around us. Teachers are engaged in a relational type of work. I fear that incorporating AI technology into the classroom risks undermining the unique connections we cultivate with students and colleagues. Rather than asking AI to help with drafting an email to a concerned parent, why not ask a trusted colleague? Rather than having AI create individualized assignments for students, why not spend time getting to know the students? Thinking is messy - as it should be, because humans are messy. And there is something beautiful in that mess - something that the sanitized, vague sounding writing of ChatGPT cannot - nor should not - reproduce.
Profile Image for Kyle C.
642 reviews88 followers
August 23, 2025
The first chapters in this book annoyed me: lots of hype about the strengths of AI, lots of utopian dream casting about a future with generative AI ("previous AI helped curate your world but GPT AI will allow you to create your world"), and lots of references to dubious research—promotional advertisements, Forbes op-eds, Substack posts, YouTube lectures, Goldman Sachs reports, GitHub blogs, self-published syllabi, all cited in MLA format (often with that scholarly sounding "et al.") in a way that presented them on the page as authoritative, peer-reviewed sources—and in some cases disguised their special interests and unscientific credentials. The content sounded convincing until you double-checked the bibliography and realized that the data and statistics were culled from a radio program or a business school brochure. I balked at the salesmanship and window-dressing, the outlandish claims that AI will teach empathy and improve human communication, exalting AI as a classroom must on the flimsy basis of speculative think-pieces and amped-up marketing.

The book is much stronger when it eventually gets into the practical. I learned a lot about the so-called science of "prompt engineering" (I think this is a dressy name to describe the simple need for clear specifications when using AI interfaces like ChatGPT or Gemini). The authors offer a number of different strategies to generate better AI output:
* asking it to write in the particular style of an author or expert;
* using positive rather than negative language in the prompt (e.g. "use an informal tone" rather than "avoid formal language");
* giving more explicit instructions about how substantial and detailed the response should be (e.g. tell it to be thorough, ask for a complete response; if looking for a creative response, tell it to "transform" or "reimagine" the material provided);
* wrapping prompts in XML tags to get more specific and creative answers (for example, it may elicit a more elaborate response to write the prompt as "&<;thinking&/>;create an unusual hook for an opinion piece about the use of AI in higher education&<;thinking&/>;");
* including meta-instructions to get more accurate solutions with clearer process (e.g. "take a deep breath and work on this problem step-by-step" or "let's think carefully about the problem and solve it together").
All of these techniques can result not just in better answers but more explicit reasoning, depth and creative variety. I am skeptical of the idea of "prompt engineering" as some new specialist skill but they did show some unexpected ways (such as using XML tags) to modify and improve AI behavior.

Although the authors push heavily for the use of AI in education, I came to see them as much more even-keel and balanced than the initial chapters suggested. Their book is not just pedagogical trends and theory divorced from classroom realities; the authors shows real awareness of students' needs and challenges. They acknowledge the risks and trade-offs in using AI; they admit that students will often prefer to shirk the hard work of thinking and writing for themselves; they are attuned to the dangers of plagiarism and the futility of AI detectors; they suggest ideas for AI-resistant assignments but they also emphasize the need to discuss authorship, to explain the goals and benefits of the assignment, and to provide students with the curiosity and agency to do the work themselves; they are sensitive to the classroom pitfalls of introducing more digital apparatuses when human rapport and face-to-face interactions have been proven to be so essential in learning; they also note that students who over-rely on AI will often not be able to distinguish high-quality responses. They do not recommend that students "use AI as a tool" (a lazy mantra that I often hear); instead, they suggest that teachers use AI as the bar to define mediocrity—what AI can do should be a C and students should have to figure out the human-added value to obtain an A. "Rather than banning AI, let's just ban all C work," they argue.

And this gets to the real core of the book: numerous examples of prompts, activities, assignments, all involving AI—roleplaying with AI, using AI as a tutor, asking AI for feedback, or employing AI to present material in more engaging and personalized ways. The book presents different ways in which teachers might foster a more critical AI literacy in students: inviting students to fact-check and critique AI, "stress-testing" controversial claims by asking students to examine the claim and then get them to compare with AI output. Teachers could demonstrate how AI might fall for certain biases. While all of this may sound abstract in this summary form, what Bowen and Watson provide is a voluminous guide to AI lesson-planning: pages of varied sample prompts, often with select AI responses.

It's a useful resource but I still have many reservations about the research in the book. The opening chapters smacked too much of snake-oil charlatanism, and sometimes I felt the authors didn't do the kind of close reading of AI that they argued was so essential (the section in which they ask ChatGPT to write about Hamlet in the style of a Harvard professor was particularly bad—the AI response was repetitive, vague, and clumsy, everything a teacher would grade harshly). So all in all, useful but not authoritative.
Profile Image for Madison Marsh-Keptner.
16 reviews
Read
July 31, 2024
Faculty book club book! A lot of really great information on AI for anyone working in academia.
Profile Image for Mark.
653 reviews16 followers
March 3, 2025
The central problem with AI is that it tends toward a generic lack of specificity, which I term "vague complexity." Thus any book written using primarily AI, such as this one, by definition cannot comment meaningfully on any topic. I know the "authors" of this book used AI not just because roughly half the book is simply lists of prompts, but because it concludes with "Key Themes" in exactly the same fashion as AI output. Other reviewers have pointed out bad misspellings and grammatical errors which would be embarrassing enough for a self-published book, let alone for a supposedly reputable publisher like Johns Hopkins University Press. In my case, I noticed at least three (3+) occasions when the audiobook reader said "IA" (Iowa) instead of "AI" (artificial intelligence). I have never heard anyone say "IA" aloud, and trying it myself, as a human, it feels awkward and difficult. This backs up my theory that the audiobook was not read aloud by a human, but rather by an AI that they gave a pseudonym to. I'm honestly not sure which is worse (a human reader who didn't catch these blatant errors, or Blackstone Audio using an AI), but either way the reader read very unclearly, and even attempting to speed it up to a meager 1.3x made basic pronunciation comprehension difficult.

To return to the start, that's the problem with AI: its quality degrades extremely quickly. The "danger" of AI recursion is to me the best thing that could happen to help undermine this non-technology, which fails on every single level. The biggest way that it fails is that it stupefies those who use it. In the case of this book, the "authors" (if you can even call them that) didn't explore even once the extremely debatable claim that AI "will" be used by everyone in the future, and furthermore that that's a good thing. The main problem with this assertion is that all AIs on the market today are imbued with a vaguely secular humanist neoliberal tech bro worldview. This is a large part of why they are so superficial, dated, and unreliable. For anyone who operates outside of that worldview, and for anyone who wants content not aligning with those assumptions, AI is worthless. Even telling it to roleplay doesn't help, because it's still trained off of such huge amounts of garbage data, and it tends toward the average of that landfill, which is roughly the consistency and odor of noxious sludge.

I genuinely don't know what the authors were thinking when they claimed "The second benefit was that this learning was not bound by human knowledge and bias." What do they think the LLM was trained on? The Platonically Ideal Thoughts of God? AI is biased and unethical, just like us. Furthermore, our bias and strangeness is precisely what makes human writing fun to read, and so removing that is precisely what makes AI writing so soulless. To me, there are no questions of how to best use AI; instead, the only valid questions concern the ethics of its use (and whether it should ever be used in the first place). For example, the "authors" wrote that you could ask AI to "Make this sound more empathetic and personal." Right there I had to freak out a bit, because if you ever get to the point in your life where you have to ask an AI how to be more empathetic, you probably need to seriously consider how you got to this point in your life. The only way to become a more empathetic human... is to be more human: consume and create art; make new friends; experience life in all its diversity; NOT reaching for the crutch of AI whenever you have the slightest dilemma.

Related to the shocking lack of humanity in that prompt, I also found shocking that the authors didn't understand that AI's popularity has absolutely no bearing on how ethical it is. The fact that it has been trained almost exclusively on illegally-sourced, copyrighted material should cause a class-action lawsuit so large that no corporation would ever touch the stuff again. Even though that hasn't happened and probably won't, AI is essentially worthless. An easy way to think about this is that AI use is analogous to steroid use. Though someone on steriods might still have an average physique, you can definitely tell in the extreme cases when it is being used. Therefore, it's at best something mediocre, and at worst it sticks out and causes red flags. This helps illuminate my initially paradoxical thoughts about its use in higher ed:

+ "AI is the new average of work" (totally agree)
- "With a little editing, a student could create a B paper with minimal effort" (totally disagree)

AI has not increased the quality of the average paper. It has lowered it. Superficial competency is always worse than unhinged human creativity. Operating with the assumption of inevitability is the central fallacy of this book. If you re-orient your rubrics to have AI as the new "F," then it forces students to use AI, rather than continuing to encourage them to avoid it, which is necessary lest we lose our souls. As I stated above, AI writing necessarily trends toward a mediocre average, so why build upon sand? Why not build upon the rock of human originality? Why settle for mediocre when you can rise above that? What sort of a hell do these authors live in, if they really are even the authors?

Despite how much I hate grading papers, the part near the end about AI for feedback feels exceptionally unethical. Because of the "authors'" fallacious starting assumption of AI's inevitability, they insanely think it’s fine. This is why thinking without AI is important, otherwise you’d never catch this and my many other complaints. Unless of course, this was entirely written by AI simply to comb negative reviews like mine, then to improve it via our responses. But even if that is the case, my valid points will get lost in a sea of irrelevancy, and the average will always win out. I can't wait for the eternal gray of the future, where there are no quality products, only average, half-functional products.

Snap out of your daydream Mark! You're writing a review! Returning to pedagogy, all I can see in the example assignments and prompts near the end is the many ways students will fall through the cracks who don't already have exceptional critical thinking skills. So many of these prompts totally skip any writing or critical thinking on the part of the student and basically just ask them to sit on a computer tinkering with AI all day, which to me is precisely the opposite of what students should be doing with their time. Rather than seeing how to get even more crutches in your writing and to make it even more average, we should look at how to avoid the crutches that all the lazy, stupid people will subject themselves to. People who write without AI are the future, not those who use it. Those who use it will be forgotten in the trash heap of the average, which curiously is shaped like a bell curve...
Profile Image for Erika.
431 reviews21 followers
May 25, 2025
I teach history at the college level. I keep waiting for the book or the podcast or the video that will prove to me that the pedagogical losses I see everywhere and everyday on account of student use of AI will somehow be replaced by AI-enabled teaching techniques and assignments that will not only fully compensate for these losses, but will actually expand students' critical thinking and historical reasoning.

After reading this book, my wait remains.

It is not that I don't see how AI could possibly be useful to some things I do. For instance, I use AI for rapid translation of sources and to suggest alternative organizational patterns to pieces I've written. I could see some marginal ways I could allow students to use this in a class without feeling it was unethical. But do I think the fact that I could ask my college freshman to learn how to skillfully prompt ChatGPT to "write a 200-word process for removing a peanut butter sandwich from a toaster in the style of the King James Bible" (actual example from this book), makes up for the fact that I cannot assign take-home research papers to history majors without fear that they will wholesale AI them? No, I don't. No number of party tricks this technology can perform make up for the fact that it is killing something central to the humanities and what it means to be human, namely, the struggle of thinking something through that is wider than your own little self. Immersive, deeply researched, other-focused writing remains central to history, even if history CAN be done as a graphic novel (illustrated by AI) or a podcast (with AI voices) or a movie or whatnot. So what are we to do about writing? I noticed that all the writing assignments the authors provide as a means of bypassing AI are "you" centered - what choices "you'd" make, how the information you receive might effect "you," how "you" would talk with Socrates. I see why they propose this, but tt is so sad that we have now effectively hijacked any ways of wrestling with understanding the worlds that are not just about one's self.

Secondly, I also remain unconvinced that teaching AI literacy is a central part of my job. Not only do young people almost inevitably learn technologies faster than people my age - I don't remember any of my professors teaching me how to navigate the internet, and I would have scoffed at them had they tried! - but I did not receive a doctorate in machine learning. Now, I can teach them skills that could usefully be applied to AI, like how to verify information, or how to properly contextualize, or what makes for a strong historical argument. Students will learn this information in the course of study (that is if they don't use AI to bypass the heavy lifting of learning to do so). But should I be weighing in on whether they use Grok or Claude? No. I do not think so.

This leads me to the author's ultimate takeaway on mitigating unethical use of AI. Their suggestion is to just raise the bar on the level of work expected. If AI can produce C student work (let's not kid ourselves - for many of us, it's B or even A work), that work should be an F. That is just not practical. I cannot fail entire classes, and frankly most of my students struggle to think beyond the level that AI can think. If I flunked all students who cannot think beyond generalizations, I'd be meeting with the dean in no time.

At any rate, this book did not help me. I just felt frustrated and even sadder after reading.
Profile Image for Mallory.
194 reviews3 followers
January 28, 2025
*4.5 rounded up

This was incredibly helpful as a tool to navigate my classroom AND my dissertation 🙏🏼

This version is for academia, but there is another in the series for K-12 Ed if anyone is looking!
Profile Image for Emma Ann.
559 reviews847 followers
Read
August 7, 2025
Read for work. Parts of this one made me mad. I wished for more nuance and more specifics.
Profile Image for Justine.
188 reviews4 followers
December 10, 2024
I read this as a part of a faculty read this semester. My biggest issue with this book is it takes very broad strokes, it is more of a primer about AI in general with some potential applications to the classroom. However, it does not address essential issues like ethical concerns with AI or practical usage of AI with actual students. For instance, other books I've seen have shown data from their own research (even if small samples) of implementation. Now for that first portion (re: ethical issues of AI), they do address that it should be its own book but why?! To me that took away from the credibility of their arguments and made me, as someone who is weary of AI, a bit more critical. I understand that no book is going to be fully comprehensive, especially with a newer tech as AI but I did feel it was missing key elements that I would look for as a professor.
22 reviews
April 12, 2025
Every teacher should read this book. I don’t agree with all of the ideas, but there is a great balance of pedagogical considerations and practical classroom applications.

Whether we like it or not, AI literacy will be a crucial demand of the workforce moving forward. The vast majority of our students are already experimenting with AI in our courses. We desperately need to catch up—not only in building AI-resistant assignments, but to also take advantage of this unique opportunity to raise academic standards and offer true differentiation to our students.
Profile Image for Rachel Pollock.
Author 11 books79 followers
June 6, 2025
A must-read for educators seeking to navigate the rapidly-changing complex landscape of the range of technologies lumped under the umbrella term “artificial intelligence”/AI. Watson & Bowen have written a guide that is both comprehensive and practical.

The book is divided into three main sections: "Thinking with AI," "Teaching with AI," and "Learning with AI." Each section explores the various ways AI could transform education, from enhancing critical thinking skills to redefining assessment strategies.

This book’s primary focus is empowering educators. Watson & Bowen provide strategies for integrating AI into teaching practices, enabling educators to not just be passive recipients of technology but active shapers of their teaching and curriculum. The authors address common, valid concerns such as academic integrity, cheating, and the balance between content and process, offering thoughtful solutions to these challenges

On the topic of maintaining educational integrity. Watson & Bowen explore how AI can be used to foster creativity, enhance student engagement, and support personalized learning, all while emphasizing the importance of ethical considerations and critical thinking.

"Teaching with AI" offers a visionary look at the future of education. The authors encourage educators to reflect on how AI can be utilized to benefit students and society. This forward-thinking approach is both inspiring and necessary as we prepare for a future where AI may play an increasing role in our lives.

The book is an invaluable resource for educators. Watson & Bowen have provided a roadmap for navigating AI-driven education with confidence and integrity.

A couple caveats—

—I imagine it is helpful for the reader to have at least used a generative AI model to write or create something at least once before diving into this book.

—I listened to the audiobook, and I found myself frustrated with the need to reference the accompanying PDF. In retrospect, I probably should have gotten the print or e-book version, and will probably still do so as a reference volume.

—Much of the content concerns text-generating LLMs and teachers who will continue to use written essays, papers, and other compositional assignments.

Basically, if you read this review and it sounds like it might be helpful for your fall semester planning, check it out. I’m glad I did.
Profile Image for Kyle Lorenzano.
5 reviews
May 23, 2025
Higher education is in desperate need of resources on this topic but, to borrow the parlance of exhausted Gen Z/Alpha students whose shamelessness re: cheating apparently knows no bounds, this book ain’t it chief. To be fair, there are some useful tips in here. Also, the idea of using AI to “assist” in thinking is nice in theory, but the line between “assist” and “do most of the thinking for you” is too squishy for most instructors to get their heads around, let alone our shortcut-starved students. And what is the AI hype all of in service of anyway? Stronger critical thinking skills? Better, more fulfilling work for the average person? No, according to this book. We need to allegedly succumb to our AI overlords because 1) it’s not going anywhere and 2) it will further enable continued growth, productivity, and profitability for the shareholders. I understand this is a pretty negative take on what these authors are trying to say, but I couldn’t help feeling this way throughout. Also, give me a break on the whole “what will we do with all of this extra leisure time we have once AI makes everything so much more efficient?!” canard. We know exactly what’s going to happen - middle managers will move the goalposts and we’ll all be expected to be even MORE productive, show MORE growth, and ultimately create MORE shareholder wealth. Isn’t technology fun?!??!

The authors are correct that instructors need to put a greater importance on process rather than simply outputs. But I fear that something deeply human and just good for its own sake is being lost when we relegate even part of our thinking to a machine that actually isn’t thinking, but rather is simply synthesizing (*cough* stealing *cough*) actual human writing and spitting out sophisticated predictive text. Why do we need to give in to all of this? Why shouldn’t instructors be the ones to hold the line against the outsourcing of our students’ thinking? And most importantly, why should students accept all of this for expediency’s sake at the expense of being truly educated?
Profile Image for Lee.
1,096 reviews35 followers
January 10, 2025
Much of this is amazing, thoughtful work about AI and how we integrate it into the classroom. Some of this is namby-pamby hand-wringing about AI and inequality. Not that there's anything wrong with thinking about the latter, but the way Bowen does it is the problem. He repeatedly talks about how the questions surrounding AI and inequality are a problem but he is not going to talk about. And then he keeps talking about it, again, never analyzing the problem, only flashing us with his concerns over the problem before moving on to another topic and then coming back to his hand-wringing. Still, overall this is a very useful resource with lots of great material…just could have used a stronger editorial hand.
Profile Image for Brett.
36 reviews
December 1, 2024
A helpful book. My main takeaway: students are going to use GenAI; we need to show them what is unique and important about the human in the loop, building critical thinking and creative skills to enhance those attributes. Since GenAI can do C work, C work is the new F, and we need to recalibrate rubrics accordingly. The book was a bit of a slog: too many lists of different prompts to try.
Profile Image for Richard Weaver.
182 reviews1 follower
April 30, 2025
Why kick against the pricks?
Creativity is independent of the tools used. The sooner students realize that, the more creative and innovative they will become.
As for the SPED kids? I’m not really sure AI will be a winning strategy, I wish it was, but if your parents are only focusing on the grade, you do anything you can to get that grade. Comprehension challenges are only going to lead to greater frustration.
Profile Image for Christopher Kulp.
Author 4 books5 followers
October 28, 2024
Thought provoking. I have already begun using the tips to improve my workflow. I look forward to using what is presented in my classroom.
Profile Image for Grace.
276 reviews9 followers
May 1, 2025
Minus a star for all of the typos and the number of times I made a note that said "is this a joke?" It has some specific ideas and strategies that I think faculty could benefit from, particularly how to shift instructional design and assessment from product towards process; but it glosses over some serious concepts such as, how do we teach writing fundamentals or critical reading skills when we students can cognitively offload them? They often made sweeping statements about serious concerns or pitfalls of AI that they almost immediately moved on from. I'd recommend this for people who want to educate themselves about what's out there and need ideas for updating their curriculum in the age of generative AI and Large Language Models.
Profile Image for Illysa.
293 reviews1 follower
January 9, 2025
Excellent practical guide for incorporating generative AI into the classroom
Profile Image for Julie Tedjeske Crane.
99 reviews45 followers
May 23, 2024
The authors are José Antonio Bowen and C. Edward Watson. Bowden. Bowen is the former president of Goucher College. Watson is the Associate Vice President for Curricular and Pedagogical Innovation at AAC&U. Their book is divided into three sections: Thinking with AI, Teaching with AI, and Learning with AI. I will review each section separately.

Thinking with AI

This section begins with an overview of AI technology and a brief history of generative AI. It then covers work, AI literacy, and creativity. Concerning work, the authors note that jobs involve various components. They suggest that for most jobs, it is likely that AI can perform at least some tasks more effectively than humans. They also cite studies showing that AI is most helpful to those with low performance or limited experience in a field. This is because AI often operates at an average proficiency level, and even average suggestions can be valuable for those with below-average proficiency.

The chapter on AI literacy includes guidance on drafting clear prompts, including lists of useful words associated with each prompt component: task, format, voice, and context. The authors offer tips on writing prompts, such as being consistent with terminology, saying explicitly what you want the AI to do, and framing instructions positively. For example, they advise against using synonyms because this may confuse the AI. They also suggest being direct in your instructions. For example, ask the AI to “rewrite” an assignment considering feedback rather than asking it to “apply” the feedback. Finally, they recommend telling the AI what to do rather than what not to do because AI responds better to positive framing.

The chapter on creativity focuses on how to work with AI as a collaborator or partner in solving problems. Problem-solving generally involves both divergent and convergent thinking. AI is good at idea generation, making it particularly useful for assisting with divergent thinking.

Teaching with AI

This section examines the impact of AI on faculty work, including research, teaching, and grading. It opens with an overview of generative AI applications like Consensus, Elicit, and ResearchRabbit. These tools facilitate research by assisting with tasks such as writing abstracts, summarizing papers, and outlining alternative article structures. The authors also suggest using AI to summarize teaching evaluations or refine emails to students.

Regarding classroom applications, the authors recommend using AI to brainstorm for discussion ideas, providing a curated list of prompts to facilitate this process. They also highlight the potential of using AI to create low-stakes multiple-choice quizzes to enhance student learning. The authors offer practical guidance on developing such assessments, such as instructing the AI on whether to include a "none of the above" option and having it generate feedback for incorrect responses. The "Designing New Assignments" section presents a collection of prompts for creating and refining assignments. I was amused by the authors’ suggestion to ask the AI, "How might students use AI on this assignment?” and “How might I make it harder to cheat using AI on this assignment?"

The chapter on academic integrity is particularly interesting because it includes survey data indicating a growing willingness among students to use AI even when prohibited. In the spring of 2023, 51% of students said they would use AI even if it were not allowed. By fall 2023, that number had risen to 75%. The authors propose several strategies for curbing cheating, such as employing traditional blue books for handwritten exams, administering in-class pop quizzes, and designing multi-step assignments prioritizing process over product.

The chapter on grading suggests that instructors must "reconsider and clarify how we discern quality, motivate higher standards, and grade in this new era of learning." The authors provocatively claim that all students should be expected to surpass the quality of AI-generated content:

"If AI can do it, then it is pointless to give it a C, both because students will be able to dupe us with AI, but more importantly because we will end up passing students (even the ones who actually wrote their essays) with skills that do not distinguish them from a typical AI result. . . . Rather than banning AI, let's just ban all C work."

Learning with AI

Despite its title, the final section of the book focuses on designing assessments and assignments that take advantage of the capabilities of AI. The first chapter explores the topic of feedback, offering a series of general prompts students can use to refine their work. The authors also provide more detailed examples that give the AI additional context and detailed instructions. They suggest giving students one of these more detailed prompts to copy and paste into the chat to start a conversation.

The next chapter focuses on designing assignments that tap into students' intrinsic motivation. The authors suggest that students, like everyone else, are driven by three core desires: "I care," "I can," and "I matter." They argue that assignments should be crafted with these motivations in mind. This chapter also includes several examples of how to structure an assignment in which students prompt AI to complete a task, evaluate the AI-generated response, refine their prompt based on that evaluation, and then demonstrate how they can improve the AI's output.

Finally, the chapter on writing includes an interesting discussion as to whether AI will prove to be a complementary or competitive cognitive artifact. Drawing on David Krakauer's framework, the authors explain that complementary artifacts, such as the abacus, enhance human abilities in a way that persists even when the tool is no longer available. In contrast, competitive cognitive artifacts, like GPS or calculators, can diminish humans’ innate abilities in their absence. As the authors note, it remains to be seen which category AI will fall into when it comes to writing skills. 

Conclusion

The authors compare the current state of AI to the early days of search engines, describing it as the "Altavista/Lycos/Ask-Jeeves/WebCrawler phase of AI," meaning that they anticipate significant advancements that are difficult to imagine now. Just as the Internet transformed our relationship with information, the authors envision AI changing our relationship with thinking, not by replacing human thought but through human-AI collaboration.

The Epilogue highlights key takeaways. I consider these three the most important:

"AI is a new baseline for average or adequate."

"AI is only going to get better and more ubiquitous: specialized AI tools are about to proliferate."

"All students will need AI literacy and will need to be able to use AI as a partner and collaborator."

The numerous sample prompts throughout the book are excellent starting points for generating customized prompts tailored to specific contexts. I intend to use them in this way. I highly recommend this book to anyone looking for ideas on incorporating AI into their teaching practice.
Profile Image for Matthew Warner.
39 reviews
May 26, 2024
Bland. Some informative insights about the development of artificial intelligence. Uninspired examination of possible uses and known current uses of generative artificial intelligence. The book also strikes me as lacking reflection on significant topics. The authors admit from the outset that they will not weigh in on ethics, stating that the topic deserves its own book. Great, maybe write that book first. Instead, I felt like I was reading yet another unexamined text resigned to the "inevitable future." Consequently, I did not learn much about A.I., A.I. situated in education, or heuristics for thinking about what to integrate. I am still waiting for a book to consider the FERPA implications of feeding student and instructor materials to third-party platforms without agreements or consent, but I have been waiting for that book since the 2000s, when Turnitin was ascending as a plagiarism detection tool.
43 reviews
September 23, 2024
This book is everything I loathe in academic literature:
1. Begins by discussing things far removed from its topic for no payoff. There isn't even a tangential link between the predecessors of AI and the opening pages set in 2006 (as compared to, say, the brief mention of DARPA in the next chapter)
2. Tells me to basically hand out As like candy, because as page 6 puts it, “grading especially needs to be rethought: no one is going to hire a student who can only do C work if an AI can do it more cheaply.”
3. The authors cherrypick sensationalist quotes.
4. The authors cite academic sources, who themselves, at points, appear to have factual errors.
5. The authors make some wild leaps in 'logic' that do not support the conclusions they think they do (e.g., LLMs and generative models are not about to create the Singularity, no matter the authors' misguided views).
6. Calls for AI literacy while showing remarkably little AI literacy, despite examples of using AI in the book, making all examples in the book suspect.
7. Bland and poorly edited (e.g. p. 157, “AI is like a free like a puppy”).
8. The section chunk sizing and writing style are actually reminiscent of something a student would turn in with AI. I'm not saying AI was used to write this (aside from examples given and clearly cited by the authors); I don't believe that to be the case. I just find it sadly ironic how substandard writing is indistinguishable from AI-generated content...all while the people engaging in it want to ban substandard work to prevent AI cheating.
9. The authors admit up front they do not touch the ethical components of AI use, which would have been a ripe field of discussion for participants in higher ed.
10. While the AI-proofing rubrics section puts forward an interesting idea on how to catch the most basic of AI cheating, it works off of the thesis statement idea on p. 151, “Rather than banning AI, let’s just ban all C work.”

Honestly, this reads like a quick CV line item; I read it through a book club with my campus's Teaching and Learning Center, which seems to be how the book has spread (and unsurprisingly so, given the author bio blurbs on the back). If you're engaging with this text via a free copy from a similar faculty book club, it might at least generate worthwhile discussion between colleagues (it did with our book club on my campus); if you're thinking of buying this with your own funds, don't; it would be a waste of your money.

I was going to give this book two stars, but listing out ten problems and then giving up on providing further examples, because I'd barely scratched the surface, made me downgrade it to a one-star rating. The end of the final chapter warns that perceptions of our profession continue to fall, even among graduates of our institutions; this book will only add to those unfortunate perceptions with its unique mixture of lazy banality and undeserved hubris.
Profile Image for ñick.
10 reviews
April 19, 2025
The impression I get from Teaching with AI is that JHUP hustled it out the door as quickly as possible to take advantage of a trend that they thought was a passing fad, but which was potentially lucrative enough to reward the fast-tracked publication of a sloppy product. This book is a bundle of baffling typos, confounding section construction/organization, and meaningless platitudes held together with twine and spit. I don't want to pretend major publishers are above cashing in on this sort of thing, but Teaching with AI is a remarkably cynical effort even by those standards.

Here are some choice excerpts:

"Fire, like other human technological achievements, has been a double-edged sword; a source of destruction and change as well as an accelerant to advancements."

"They are different in neural networks, and they have been trained differently: different (metaphorically) in both nature and nurture."

"There are easy ways to get both more useful interesting responses from AI. Consider these features of a clear prompt: task, format, voice, and context and." [sic]

"If it often useful to create a very literal if/then statement (prompting is programming, even if in more natural language): if the example above might be too complicated for my audience, then repeat step 1."

"Other AI programs are starting to add creativity in a variety of scientific domains, including star mapping, climate modeling, automating, and conducting virtual experiments, generating new antibodies, and furthering research into hydrogen fusion."

The colon and the semicolon ought to seek reparations for the abuses they have suffered at the hands of José Antonio Bowen and C. Edward Watson.

It's really difficult to communicate this book's total disorganization on the sentence, paragraph, and section levels in these single-sentence excerpts. The "Outline and Organization" section tells me that chapter 2 "chronicles how the world of work (and job interviews!) has already changed for our graduates," but chapter 2 makes no mention of interviews of any kind (let alone job interviews). My favorite section header, "Moneyball for Morality," opened a section of text that had nothing to do with Moneyball on any conceivable level and was mostly dedicated to chatbot "creativity” (read: ability to produce ad copy for Trader Joe’s).

There's nothing you'll read here that you couldn't get from any of the other warmed-over chunks of breathless, technophilic slop that have invaded store shelves in the past couple years, and most of those probably had better copyeditors. Perhaps the authors' intent was to damage the credibility of major publishers to the extent that readers would abandon the notion of scholarly credibility altogether, thus priming them to resort to chatbots for information retrieval. We’ll see if their mission succeeds.
2 reviews1 follower
May 13, 2025
I don't normally write reviews, but I'll make an exception for this book. I have been reading a lot about the social justice issues of AI and this was my first AI "pedagogy" book.

I came to it specifically with the question of how we encourage/allow our students to use AI in a way that is ethical and, above all, still allows them to learn deeply, particularly in research projects. If you want that nuanced approach, you won't get it here. There are no stories of their implementations in the classroom. There are weird typos. There are lists with very short paragraphs under each that seem underdeveloped and disjointed. Also, underdeveloped metaphors. There are some good ideas that a professor could develop into AI assignments, but the description of them is broad and shallow.

There were several claims that made me say "EWW!" out loud, such as when they insist AI hallucinations equal creativity. There was no discussion about whether that "creativity" has any value. There was very little discussion of evaluation, other than a few assignment suggestions that include undergraduates evaluating AI output. They say we don't have to conduct time-consuming focus groups and feedback sessions anymore, because we can just ask AI what a group WOULD say. And they think it's unclear whether AI is a complementary artifact (like Arabic numerals) or a competitive artifact (like GPS and calculators). It's pretty clear that using AI harms our ability to do those same tasks when we don't have access to it.

Above all, they're unclear about what they mean by "AI-assisted writing." It sounds like it means AI wrote it, but students used prompts to improve the original; in other words, students didn't actually write any of the paper, and all of their writing is in the prompts.

They wait until the very last page to give a list of the environmental, ethical, copyright, and other problems that AI produces.

The pro-AI bias runs strong in this book. I think it has a lot of value for professors, but it also has the potential to do a lot of harm when it's oversimplified like this. Ethical and pedagogical use of generative AI probably exists, but discussion of what it looks like and how to ensure deep learning still happens won't be found in this book.
Profile Image for KM.
181 reviews4 followers
February 5, 2025
In November 2022, the release of ChatGPT became a viral hit. Reaching 100 million active users within two months (within two months, let me repeat here) was a testament not only to the explosion of artificial intelligence, but also to a new era of technology. This year, a powerful model from China, DeepSeek, emerged and stunned the world.

Fifty years ago few of us were familiar with VCRs, let alone computers. Now that we have trouble comprehending the TYVM and ICYMI texts from our children, we will fall further behind if we don't prepare ourselves with mainstream real-world AI skills.

To make myself less outdated, I borrowed this book written by two educators, José Antonio Bowen and C. Edward Watson. They caution that while AI will eliminate some jobs, it is going to change every job: those who can work with AI will replace those who can't. Rather than banning AI, they make clear that teachers need to help students move above and beyond what AI produces for them. A few good examples of writing assignments would be asking students to grade a paper produced by AI, and to write a better paper or improve the AI-generated essay (with tracked changes and comments).

Another way of understanding the new changes is to look at the history of calculators. Think about the days when we were busy learning multiplication tables, doing long division, or adding long columns of numbers. Then came the technology of calculators. The calculator did not really eliminate the need for human math, but it changed which math skills we needed. By the same token, AI won't eliminate the need to write well, with ease, clarity, and voice. Trust me, never ask AI to write your wedding vows or your Valentine's Day love letter.
Profile Image for Carolyn Kost.
Author 3 books135 followers
June 29, 2025
If you teach, read this book. You have no idea how much students are using AI. No, you can't tell. Students can train it to emulate their style.

Any at-home reading or writing assignment will be done by AI. Every issue of The Chronicle of Higher Education for the past year has featured at least one article about the profound impact of AI on higher education. Many university administrators ask how a case can be made for the value of higher education when AI can do all of the assignments. Goldman Sachs predicts AI will eliminate 300M jobs. Google CEO Sundar Pichai says, “AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire.”

It's not an overstatement.

As soon as a student now at Caltech whom I helped with his college application showed me ChatGPT in December 2022, it was clear it would change everything and that educators needed to change immediately. I warned about the imperative of changing assignments to obviate AI in January 2023.

This book is thoughtful and helpful. Part 1: Thinking with AI. Find the right AI for the task. I use ChatGPT, Claude, and Gemini, but friends use different mixes. Learn how AIs work and how one differs from another.

Part 2: Teaching with AI, detection, policies, grading, etc. Practical advice here.

Part 3: Feedback and roleplaying and extremely creative ideas for teaching with AI, designing assignments and rubrics.

This is a practical, helpful guide. If you teach, you need this book.
Profile Image for Joe.
137 reviews4 followers
September 20, 2024
Teaching with AI by Jose Antonio Bowen and C. Edward Watson is a timely read, especially for educators looking to integrate AI into their classrooms. I appreciated the examples provided, along with the sample prompts and activities tailored for both teachers and students. These practical tools make it easier to use AI in the classroom in meaningful and accessible ways. While some sections were a bit dry, the overall content is very valuable for anyone interested in improving their teaching & learning through AI. I highly recommend it to both educators and students. For instructors who ban AI, it's time to reconsider your stance; you and your students will be left behind. There are ways to leverage AI to the mutual benefit of students and teachers, as this book shows. One of the challenges of a book like this is that the world of AI is moving so fast it's hard to keep current. The authors did a good job of citing sources and dates (most are from 2023), so you know how current the references are. One late-breaking innovation not mentioned in the book, since it came out after publication, is Google's NotebookLM. Check that one out. BTW, I'm both an instructor (part-time lecturer at two local colleges) and a student (currently pursuing another Master's degree at Georgia Tech). This book was very relevant to me personally.
Profile Image for Psych_Panda.
44 reviews1 follower
September 17, 2024
This was obviously rushed to publication amid the fervor around generative AI, and it abounds with editing errors. When you look past that, you'll find too many ideas to wade through for much practical application, especially in the latter part of the book. The most disappointing part of the book is the glossing over of two extremely big things: the ethics of this technology and whether or not this tech will ultimately be helpful or harmful for learning. These are each, of course, very large topics, and I understand that digging into them is beyond the scope of the book. But when your claims are that every student is using genAI for every assignment, that every assignment is a genAI assignment, that C is the new F (because genAI produces C-level work), and that educators therefore need to deeply engage with genAI for themselves and their students, then you cannot completely ignore either of these issues. In fact, I would claim that to do so is itself unethical.

I do appreciate the incorporation of research throughout, highlighting some key psychological and educational literature. The most useful thing I personally got from the book is the prompt engineering templates. I would not have read this if it weren't part of my job, and I don't recommend it.
Profile Image for Stephen Stewart.
318 reviews5 followers
October 18, 2024
I read “Teaching with AI: A Practical Guide to a New Era of Human Learning” by Bowen and Watson for my university book club. While reading the book, I was grateful that it came out quite recently (April 2024), and I wonder how long its shelf life will be before technology changes and the content is obsolete. Still, it’s topical and relevant, and I appreciate the book changing a bit of how I think of AI and its uses.

I walked out of the book with the following actionable advice:
- Having ChatGPT summarize my course review comments to help me better digest the feedback from students.
- Having ChatGPT give me suggestions when creating a tax escape room (I don’t know what it will look like, but I will create one).
- Using ChatGPT to create a more personalized study assistant for students as another way to encounter and practice the material outside of class.

I think the book’s argument that we need to redefine what a C letter grade is was fascinating. If a machine can do it, then we have to do better; otherwise our jobs will be outsourced. I generally don’t know how much of the writing I assign can be, and is being, completed by ChatGPT. I don’t know if that’s something I need to struggle against or if there are ways I can avoid having ChatGPT be a successful tool. I’ll revisit this book in a couple of months to flip through the most thought-provoking chapters at least.
Profile Image for Debbie.
62 reviews
April 3, 2024
I gained a lot of knowledge about the variations of AI and how each of them can help design impactful student growth. That is speaking as a tutor and a systems analyst.

As a mom, every time I read “cheating,” or even that example about the student and her professor, where the professor was “inflexible,” I cringe.

I read your section on “cheating and detection.” But the fact remains that most kids are scared of the consequences of doing something wrong, including cheating. Until there are universal AI literacy classes and policies on all campuses and schools, the dialogue will remain confusing. My high school senior won’t go near an AI because she thinks AI will be the end of us (or at least so say the talking heads).

I absorbed so much AI trivia from your book. AI is here, and you are right, it has to be carefully incorporated into our culture. Many great issues were addressed, but teaching with AI is going to take a village; it’s such a huge undertaking. Could this be the revolution for education in general?

I enjoyed your prompts and prompt ideas! Really thorough and clever. Good luck to the book’s success!
Profile Image for Christina Vourcos.
Author 9 books135 followers
June 16, 2025
Highly Skeptical, Eventually Empowered

When I started reading this book, I was highly skeptical about what it would cover. I was concerned it would lean too far in favor of AI and not spend enough time on the ethics of AI.

While I feel there could have been more focus on the ethics of AI (for example, how some AI has been built on stolen work), I still found it helpful to see how AI could be incorporated into the classroom as a way to build AI literacy skills, knowledge, and understanding, just as you would with digital literacy. As a college English instructor, I would have wanted more of chapter 11, but this book gives a good overview along with how AI impacts writing assignments.

As many have mentioned previously, including in this book, AI is here. We must find a way to make sure students understand AI and how it will impact their lives.
Profile Image for Noa.
233 reviews26 followers
June 25, 2025
I annotated the crap out of this book. It is full of such great ideas and there are a ton of example prompts in so many different posed situations.
The only reason I didn't give it 5 stars is that the premise is problematic: that AI should be taught in schools. I'm K-12, so we are working with very impressionable minds, and MIT just came out and said AI dramatically usurps critical thinking, to the point where it doesn't get developed by the human participant in AI-partnered writing. Furthermore, another recent study was reposted by UCH saying that AI isn't creating reasoning but just completing patterns. It is goofier than this book gives it credit for.
BUT the book did bring up how AI produces solid "C" level work and the urgency for students to bring more than the "average AI" brings to the table to ensure their marketability within our capitalistic hellscape. That last part might be my word choice.
