Science and Inquiry discussion

The Alignment Problem: Machine Learning and Human Values
This topic is about The Alignment Problem
124 views
Book Club 2022 > March 2022 - Alignment Problem

Comments Showing 1-17 of 17

message 1: by Betsy, co-mod (new)

Betsy | 2160 comments Mod
For March 2022 we will be reading The Alignment Problem: Machine Learning and Human Values by Brian Christian.

Please use this thread to post questions, comments, and reviews, at any time.


David Rubenstein (davidrubenstein) | 1040 comments Mod
The Alignment Problem: Machine Learning and Human Values is an excellent book. The biggest problem in artificial intelligence (AI) is to devise a reward function that gives you the behavior you want, while avoiding side effects or unforeseen consequences. It was a pleasure to read a well-researched book that plumbs the depths of a complicated subject. Here is my review.
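That point about side effects is easy to see in miniature: reward a proxy, and an optimizer will find the loophole. Here's a toy sketch of the idea - the cleaning scenario, rules, and numbers are all invented for illustration, not taken from the book:

```python
# Reward misspecification in miniature: a "cleaning" agent earns reward
# per unit of mess it removes, and the designer forgot to penalize
# creating new mess. (All numbers invented for illustration.)

def run_episode(agent, steps=10):
    mess, reward = 5, 0
    for _ in range(steps):
        action = agent(mess)
        if action == "clean" and mess > 0:
            mess -= 1
            reward += 1        # intended incentive: clean up mess
        elif action == "make_mess":
            mess += 1          # unintended loophole left unpenalized
    return reward

honest = lambda mess: "clean"
gamer = lambda mess: "clean" if mess > 0 else "make_mess"

print(run_episode(honest))  # reward capped once the mess runs out
print(run_episode(gamer))   # keeps manufacturing mess to re-clean it
```

The gamer agent scores higher than the honest one by doing exactly what the reward function - rather than the designer - asked for.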


aPriL does feral sometimes  (cheshirescratch) | 352 comments Got it from the library! Now, one, time to read it; two, hope it's not over my pay grade.
; )


message 4: by aPriL does feral sometimes (last edited Mar 14, 2022 05:13PM) (new) - rated it 5 stars

aPriL does feral sometimes  (cheshirescratch) | 352 comments word2vec's answer to a question put to it, 2015:

Doctor - man + woman

Nurse

O _ O

: @
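For anyone curious how word2vec arrives at that answer: analogies are solved by plain vector arithmetic over learned word embeddings, followed by a nearest-neighbor search. A toy sketch - the 3-dimensional vectors below are invented purely for illustration, whereas real word2vec embeddings have hundreds of dimensions learned from a text corpus:

```python
import numpy as np

# Invented toy vectors; any bias present in the training text ends up
# baked into the geometry of the real embeddings.
embeddings = {
    "doctor": np.array([0.9, 0.8, 0.1]),
    "man":    np.array([0.1, 0.9, 0.0]),
    "woman":  np.array([0.1, 0.0, 0.9]),
    "nurse":  np.array([0.9, 0.0, 1.0]),
}

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    """Answer 'a - b + c = ?' by nearest neighbor in embedding space,
    excluding the three query words themselves."""
    target = embeddings[a] - embeddings[b] + embeddings[c]
    candidates = [w for w in embeddings if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(analogy("doctor", "man", "woman"))
```

The model isn't reasoning about doctors or nurses at all; it is just returning whichever stored vector lies closest to the arithmetic result.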


aPriL does feral sometimes  (cheshirescratch) | 352 comments You know, I come from a low/medium-income, blue-collar childhood, and I currently live in a senior park full of older people, many of whom still work, some with 4-year or Master's college degrees, math-centric or otherwise, who downsized from stick housing to trailers. Yet I can see how badly Big Data algorithms would screw up most of these hard-working, well-meaning, decent people who've worked all their lives, but maybe had health issues messing up their work history, or other things that 'look' bad when reduced to digitized factors (gender, age, "blind" factors) but in real life were harmless, not applicable to the issue being assessed, or never really happened at all yet got flagged.

This may seem off-point, but while in college, I was 'volunteered' by my accounting teachers to help people with their 1040s, so two of us sat at a table helping members of the public who had made an appointment for free help. It was shocking. People were so very instruction-illiterate as well as math-scared. Later, I was asked by the manager of my apartment building to help them fill out a 1040 because my husband had told them what I was doing in school. They were in their 40s and unbelievably math-illiterate - hell, form- and instruction-illiterate too. I also, at one point, volunteered for "Each One Teach One" literacy instruction for adults. I was shocked by the number of American men, averaging 30 years old, working at construction and other poorly-to-medium-paid jobs, who had a high school diploma but could not read, much less do any math. Some of them supported a wife and kids; all worked full-time. These algorithms would have destroyed these people if in play thirty years ago.

Most of the people I have worked with (I was a secretary and office manager, and I'm really not very strong mathematically) and many working seniors couldn't begin to understand anything in this book. A REALLY strong effort should be made to explain to regular non-science or non-math-oriented managers that these machine-learning programs should not be taken at face value. Managers should be given a 101-simplified version of how these programs go wrong in assessing people's inputted histories. Some successful but now-retired seniors I know would have been financially destroyed if these algorithms had been in play during their working lives, and they would never have been able to understand how or why, so they couldn't have defended themselves, then or now.


message 6: by Ajay (last edited Mar 20, 2022 01:18PM) (new) - rated it 5 stars

Ajay Iyengar (aiyengar82) | 10 comments About 13 years ago I was majoring in ML (machine learning), which was right around the time a major "paradigm shift" was under way: a move away from human-designed ML algorithms that we could reason about, to deep convolutional neural nets that were essentially black boxes. Nobody then understood what the learnt "weights and biases" of these neural networks meant, apart from some minor insights from computer vision. Yet these NNs trounced the human-designed algorithms by any metric. The magic that made this paradigm shift happen was advances in hardware, and I wondered whether future AI advances would simply boil down to faster MAC (multiply-accumulate) hardware. I quit grad school and took up work in the embedded industry writing firmware.

I can honestly say that at least for myself the question of AI ethics was FAR from my mind when I was working on ML. I'm glad the AI community is seriously collaborating with researchers from psychology/neuroscience/etc. to tackle this challenge and that there is actually funding available for research work on this topic.

@april, STEM or not, everyone is susceptible to the growing perception of AI infallibility, and the appeal to authority ("It's in the numbers, stupid!"). Additionally, the greater concern isn't just systemic bias, but that these systems are replacing humans entirely. What odds are the gambling houses offering on AI-based automation of America's largest employment sectors in the coming decade? We need to be having serious discussions around UBI right now.


message 7: by Daniel (new) - added it

Daniel Gessel (danielmgessel) | 1 comments Ajay wrote: "About ~13yrs ago I was majoring in ML (Machine Learning) which was right around the time there was a major "paradigm shift" under way: a move away from human designed ML algorithms that we could re..."

I signed up too late to read the book and my computer science grad school days are significantly more historic than yours, but I can confirm NNs were a small part of AI then. I have always felt their black boxiness is an abandonment of human understanding as progress (which is dear to me). A younger person recently told me that the exciting thing is they are able to solve problems we are incapable of understanding, let alone solving. It all might just be my dinosaur side showing…

I agree that this approach blows up the entire concept of infallible machine, not only because the training set can be biased, but it can also fail to include critical corner cases - and they are always used against untested inputs when in the field. There seems to be no hope of (or interest in) formal verification.

In my on-off work in the GPU industry, I definitely saw a huge drive toward increasing MAC flops, as well as higher-throughput reduced-precision ops, for the sake of NN performance. Some innovations will come from different NN structures, activation functions, and training algorithms, but many will come from big-iron research projects transferred once the performance is available on, say, a phone.

As to tech displacing human work, I see it as a point on a historical arc where, eventually, machines are better at everything that humans might do to earn an income, and agree totally that UBI should be on the table. (Now! btw) Beyond that, whether “the end of work” is dystopian or utopian depends on whether we value human beings for their productivity or for their inherent humanity.

I think there’s room for books that have varying degrees of technical detail - the impact is large and everyone should have the resources to understand this coming change.


message 8: by Matt (new)

Matt Jorgensen (mattdjorgensen) Finished the book today and am so glad this was the selection for this month. While I've come across some of these ideas from time to time in a piecemeal way, The Alignment Problem was an incredibly thorough and in-depth exploration of the many issues involved in machine learning, including how cognition works, how children learn by understanding the (often unstated) intentions of others, the merits of behaviorism vs. intrinsic motivation, and, as the title indicates, how we can align AI to our values (when we may not even be certain what those values are). Relatedly, a few years ago I read MacAskill's "Doing Good Better" on effective altruism and would highly recommend it.


message 9: by Betsy, co-mod (last edited Mar 20, 2022 06:01PM) (new)

Betsy | 2160 comments Mod
Daniel wrote: "I signed up too late to read the book..."

What do you mean?!! It's never too late to read one of our group reads. Our members are all over the world and have very different access to bookstores and libraries. And the books we read are sometimes in very high demand. So people read them whenever they can.

The month associated with the group read is a target only. You are encouraged to read the book whenever it works for you, whether it's six weeks ahead of the target month or two years after. And we encourage discussion at any time. Some of our best discussions have lasted many months.


message 10: by Betsy, co-mod (new)

Betsy | 2160 comments Mod
P.S. Your comment is welcome whether or not you have read the book. Provided it is respectful and on topic.


message 11: by Daniel (last edited Mar 20, 2022 04:46PM) (new) - added it

Daniel Gessel (danielmgessel) | 1 comments Betsy wrote: "P.S. Your comment is welcome whether or not you have read the book. Provided it is respectful and on topic."

Thank you.

I hope I didn’t veer too far astray on either the respectful or on-topic axes - I am frustratingly long winded and it’s easy for me to do. Please alert me if a post needs to be deleted - I don’t think Goodreads will do it automatically.


message 12: by Steve (new)

Steve Van Slyke (steve_van_slyke) | 400 comments Betsy wrote: It's never too late to read one of our group reads.

In fact, I often decide whether or not to read the selection of the month after I've seen the comments of the group. I'm likely to read The Alignment Problem for that reason.


message 13: by aPriL does feral sometimes (last edited Mar 25, 2022 04:31AM) (new) - rated it 5 stars

aPriL does feral sometimes  (cheshirescratch) | 352 comments "Rulers are malignant"

I wonder what machine learning program was used to scratch my application for employment as a secretary years ago in San Francisco when I was told the company couldn't consider me because I didn't know what a saw horse or a Phillips screwdriver was? I couldn't answer those questions on a psych/personality test I was given before my interview. The test was considered extremely scientific, created by college researchers as a test for general intelligence. Answers were fed into a modern Big Iron mainframe.

This was only ten years after I graduated from high school. Girls were required to take home economics classes, and boys were forced to take woodshop/machine shop classes in high school. Girls could not take woodshop/machine shop, and vice versa. The counselor also refused my request to drop home economics so I could take foreign language classes (Spanish). I wanted to save money because high school classes were free - community college Spanish classes cost a lot of money, and I'd lose time. The boys could squeeze high school foreign language classes easily into their four-year schedules alongside the classes required for graduation and college entrance. Girls could not squeeze in the classes required for college at all.

The counselor said, and I quote "You won't need college anyway. You'll get married and have kids. End of story." I got some of my teachers, women, to talk to the counselor, a man. I got out of home economics, took three years of Spanish, and thus was one of the few girls who graduated from high school with the required classes necessary to apply to university.

Major corporations using primitive machine-learning algorithms to decide who would be a good fit for a secretary job was all the rage in San Francisco in the 1980s/1990s. The test had a number of questions about things I had been forbidden to learn because I was female. These "relevant employment factors" were one of the many things that dropkicked me to the curb during those decades. When I finally learned programming, every ad stated "must be able to lift 50-75 pounds," because men could and women couldn't. The surface reason was that carrying boxes of computer printer paper was now a requirement for all programmers in the 1980s. Bias isn't new - unintentional AND intentional.


message 14: by aPriL does feral sometimes (last edited Apr 07, 2022 11:41AM) (new) - rated it 5 stars

aPriL does feral sometimes  (cheshirescratch) | 352 comments It’s interesting to see how machine learning experts are trying to translate the biochemical responses of babies, like serotonin and dopamine releases, into a spray of numbers to be summed, then judged to be either lacking or a win.

Computer scientists are writing mathematical equations - essentially the act of earning points toward a best score, like football - to mimic what are essentially human reward emotions, the ones that give us a feeling of satisfaction or make us happy.

Will a computer ever be happy?
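The standard computational version of that idea is the temporal-difference model, in which the dopamine signal is read as a reward prediction error: surprise at getting more (or less) than expected. A minimal sketch - the learning rate, discount, and reward values below are invented for illustration, not taken from the book's own examples:

```python
# Textbook temporal-difference (TD) value update; the prediction error
# ("surprise") plays the role attributed to the dopamine signal.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD step: shift the prediction toward the observed outcome."""
    surprise = reward + gamma * next_value - value  # prediction error
    return value + alpha * surprise, surprise

value, surprise = 0.0, 0.0
for _ in range(50):                 # repeated exposure to a reward of 1
    value, surprise = td_update(value, reward=1.0, next_value=0.0)

# The prediction creeps toward the true reward while the surprise fades:
# a fully expected reward no longer produces a "happy" spike.
print(round(value, 2), round(surprise, 3))
```

Whether a vanishing prediction error has anything to do with a computer actually *being* happy is, of course, exactly the question.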


message 15: by Daniel (last edited Apr 07, 2022 05:10PM) (new) - added it

Daniel Gessel (danielmgessel) | 1 comments I don't think digital computers will ever be conscious in the way we are. But I think it could be different with an analog or quantum computer (and certainly a biological "computer", which could include our own brains). Then you need a precise definition of "happy" and a way to test for it...


message 16: by Steve (new)

Steve Van Slyke (steve_van_slyke) | 400 comments As I read the final chapter I couldn't help but recall the title of a popular movie: Terminator 3: Rise of the Machines.


message 17: by KG (new) - rated it 4 stars

KG | 11 comments Well, I finally read this book and I'm so glad I did! When I began it in March, I was oversaturated with books on the impacts of computer algorithms and modeling on our lives and I just couldn't get my head into another one. I'm glad I returned to it. This book brought such a unique perspective on the learning process itself - be it animal, human, or machine - and on the interaction of the different fields in improving our understanding of each of them. Just a really interesting book that kept me thinking. I definitely recommend it!

