Prompt Engineering for Generative AI: Future-Proof Inputs for Reliable AI Outputs

Large language models (LLMs) and diffusion models such as ChatGPT and Stable Diffusion have unprecedented potential. Because they have been trained on all the public text and images on the internet, they can make useful contributions to a wide variety of tasks. And with the barrier to entry greatly reduced today, practically any developer can harness LLMs and diffusion models to tackle problems previously unsuitable for automation. With this book, you'll gain a solid foundation in generative AI, including how to apply these models in practice. When first integrating LLMs and diffusion models into their workflows, most developers struggle to coax reliable enough results from them to use in automated systems. Authors James Phoenix and Mike Taylor show you how a set of principles called prompt engineering can enable you to work effectively with AI. Learn how to empower AI to work for you.

This book covers:
* The structure of the interaction chain of your program's AI model and the fine-grained steps in between
* How AI model requests arise from transforming the application problem into a document completion problem in the model training domain
* The influence of LLM and diffusion model architecture and how best to interact with it
* How these principles apply in practice in the domains of natural language processing, text and image generation, and code
I was just expecting something different. It's 20% foundational material that is great and 80% code examples that will be outdated tomorrow. I'm not sure what the point is of including pages and pages of code in the book and explaining them line by line.
This book gives you a set of techniques and ideas for getting real value out of generative models, both text and image, and I suppose it also applies to the new multimodal models.
It gave me good ideas for the applications I'm currently building, but I feel these tips have a short shelf life given how fast the technology moves. Some techniques, however, do seem more durable.
Another thing I feel about the book is that it's a bit repetitive and has some filler, which makes it a little hard to read given the value-to-volume ratio.
Good overview of the latest prompt engineering methods and tools from LangChain and LlamaIndex. Good coverage of advanced RAG, advanced prompting, output parsing, chunking methods, etc.
Just finished Multisolving by Elizabeth Sawin (Island Press, 2024) — an essential and timely guide for anyone working at the intersection of climate, equity, and systemic change.
Sawin introduces a powerful framework rooted in systems thinking, offering tools to address multiple problems at once rather than tackling them in isolation. From stocks and flows to feedback loops and leverage points, she makes complex concepts surprisingly accessible.
What sets this book apart is its clarity and optimism: rather than trying to control broken systems, Multisolving teaches us how to work with them — amplifying what’s working and strategically shifting what’s not.
Whether you’re a policymaker, activist, or just a curious thinker trying to make sense of a fractured world, this book gives you language, direction, and hope.
Highly recommended for anyone interested in real, lasting change across disciplines.
The first few chapters of the book were solid and I would give them 4 stars; I learned something from them. Then a large part of the book was code examples of how to use LangChain, and later how to use Stable Diffusion (with lots of pictures), and somehow I finished a 400+ page book in two evenings. I'd certainly prefer a shorter book, more densely packed with guidance on prompt design, without so much code that might be outdated a year from now.
First, the book review. Prompt Engineering for Generative AI by James Phoenix and Mike Taylor is a hefty tome covering many subjects. It’s useful, but not without faults.
Good points:
* Covers a lot of the foundational concepts
* Many code examples from common tools (mainly Python and OpenAI)
Cons:
* Too many code examples that will be dated by next year (though the principles behind them will likely still apply).
* Not the best at clearly separating the foundational concepts from the implementation.
It’s a good book if you’re a programmer (or close enough) and want to skill up in this area. It can help get an understanding of concepts that may be more thorough than just surfing articles. Then again, you can learn most of what’s on offer via reading free articles on the web.
Some bits do feel like they were written by AI (hardly surprising), and as said above I would have loved to see the concepts explained more clearly first, in standout sections, rather than in the somewhat rambling tone the book uses.
While the book offers accessible language and clear explanations, it falls short of its intended timeless approach by heavily focusing on tool-specific tutorials, particularly for frameworks like LangChain, which are likely to become quickly outdated. The explanatory style resembles typical Medium articles - a format that could be seen as either a strength or weakness depending on your perspective. Although the book provides valuable insights in certain areas, its effectiveness is somewhat diminished by an overwhelming emphasis on technical tutorials. The balance between conceptual understanding and practical implementation wasn't quite what I expected, leaning too heavily toward the latter. The fundamental concepts of prompt engineering could have been explored more deeply instead of concentrating on current tools and implementations.
A collection of various commands and queries on how to best use AI chats. I must admit that I expected significantly more. While the presentation of techniques and code examples is extremely helpful, after finishing the book it is unlikely that anyone will return to it, or the material will simply become outdated.
This is a book targeting programmers implementing projects that leverage gen-AI. It is not a book for people wanting to use gen-AI as an end-user, though there is a lot that such a person can learn from this book.
The audiobook is not a great experience. There is a lot of code mentioned in the book, and it is quite hard to go through that in audio vs reading.
I finished the book with a deeper understanding of how to use gen-AI than I had before I started. But my expectations were misaligned. I think there may be better resources out there for people who want to get better at gen-AI usage.
Finally finished. This was the primer I needed for my LLM work. While it may be true that the code examples could easily become outdated in the near future, the ideas here are acrobatic and colorful. From content creation to production-grade LLMs, there's a good amount in these pages to stoke the imagination and expand awareness of the ways LLMs are being used. I'm excited to review the code examples and the points I aggressively highlighted throughout the book. This will supercharge my projects and serve as a quality reference moving forward.
This book offers invaluable insights into crafting effective prompts for text and image generation, making it an excellent resource for those delving into generative AI. However, sections on LangChain feel outdated, despite the book being only six months old—a challenge inherent in rapidly evolving fields. For those comfortable navigating examples that may require adaptation, it’s a worthwhile read. Recommended for pioneers ready to tackle the dynamic nature of AI development.
I only read the first four chapters. While the book offers a wealth of practical advice, much of it feels anecdotal—as if it were cobbled together from Reddit threads rather than rooted in research or science (though there are occasional evidence-based insights, which I found valuable). That said, it could still fill a niche as an introductory guide, given the lack of comparable resources currently on the market.
This book is absolute crap. There are like 37 pages about prompting, and then you get the whole history of how LLMs work, what generative AI is, LangChain, vector databases, and other stuff that is pretty much useless and unrelated to the title. The title of this book is absolutely misleading, and it was written just to make some cash. Absolute shame.
The first chapters are really great: tricks and a lot of practice, how to work with artificial intelligence, how to reduce artifacts, generally good knowledge. But then it moves into a lot of code for building certain applications, which is fine, except that a book with this title should not spend 50% of its pages writing two lines of code and then explaining them.
Pretty good, I bought this as I've been working with agentic AI and I wanted to see if this book can help my prompting. I picked up some good techniques on how to construct better prompts. Glossed over much of the LangChain stuff as I'm working with Bedrock.
Great book. I read it about a year ago and am still actively using the knowledge from it. Its main flaw is that it spends too much time explaining string manipulation, which is really not something I feel like I need to learn from a book about prompt engineering.
There are too many tool-dependent listings from LangChain explained line by line, and too few examples of iteratively improving prompts for real-life applications.