Science and Inquiry discussion
This topic is about The Alignment Problem
Book Club 2022 > March 2022 - Alignment Problem

The Alignment Problem: Machine Learning and Human Values is an excellent book. The biggest problem in artificial intelligence (AI) is to devise a reward function that gives you the behavior you want while avoiding side effects and unforeseen consequences. It was a pleasure to read a well-researched book that plumbs the depths of a complicated subject. Here is my review.
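
A toy sketch of that reward-design problem (my own illustration in Python, not from the book; the "cleaning robot" scenario and the numbers are invented): a robot scored only on messes cleaned and steps taken will prefer the plan that breaks a vase, because nothing in the written reward mentions vases.

plans = {
    # (messes_cleaned, steps_taken, vases_broken) for two hypothetical plans
    "careful":  (3, 12, 0),
    "reckless": (3, 8, 1),   # faster, but smashes a vase on the way
}

def naive_reward(messes, steps, vases):
    # What the designer wrote: clean a lot, quickly.
    return 10 * messes - steps

def intended_reward(messes, steps, vases):
    # What the designer meant: ...and don't break anything.
    return 10 * messes - steps - 50 * vases

for name, (m, s, v) in plans.items():
    print(f"{name:8s} naive={naive_reward(m, s, v):4d} intended={intended_reward(m, s, v):4d}")

The naive reward ranks "reckless" above "careful" (22 vs. 18); the intended reward does not. The gap between the two functions is the alignment problem in miniature: the behavior you get is whatever the written reward actually favors, not what you meant.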
message 4: by aPriL does feral sometimes (last edited Mar 14, 2022 05:13PM) - rated it 5 stars

This may seem off-point, but while in college I was 'volunteered' by my accounting teachers to help people with their 1040s, so two of us sat at a table assisting members of the public who had made an appointment for free help. It was shocking. People were so very instruction-illiterate as well as math-scared. Later, I was asked by the manager of my apartment building to help them fill out a 1040 because my husband had told them what I was doing in school. They were in their 40s and unbelievably illiterate about math and, hell, about forms and instructions generally. I also, at one point, volunteered for "Each One Teach One" literacy instruction for adults. I was shocked by the number of American men, averaging 30 years old, working at construction and other poorly-to-medium-paid jobs, who had a high school diploma but could not read, much less do any math. Some of them supported a wife and kids; all worked full-time. These algorithms would have destroyed these people if they had been in play thirty years ago.
Most of the people I have worked with (I was a secretary and office manager, and I'm really not very strong mathematically) and many working seniors couldn't begin to understand anything in this book. A REALLY strong effort should be made to explain to regular, non-science- or non-math-oriented managers that these machine-learning programs should not be taken at face value. Managers should be given a simplified 101 version of how these programs go wrong in assessing people's inputted histories. Some successful but now retired seniors I know would have been financially destroyed if these algorithms had been in play during their working lives, and they would never have been able to understand how or why, so they wouldn't have been able to defend themselves, then or now.

I can honestly say that at least for myself the question of AI ethics was FAR from my mind when I was working on ML. I'm glad the AI community is seriously collaborating with researchers from psychology/neuroscience/etc. to tackle this challenge and that there is actually funding available for research work on this topic.
@april, STEM or not, everyone is susceptible to the increasing perception of AI infallibility and the appeal to authority ("It's in the numbers, stupid!"). Additionally, the greater concern isn't just systemic bias, but that these systems are replacing humans entirely. What odds are the gambling houses offering on AI-based automation of America's largest employment sectors in the coming decade? We need to be having serious discussions around UBI right now.

I signed up too late to read the book and my computer science grad school days are significantly more historic than yours, but I can confirm NNs were only a small part of AI then. I have always felt their black-boxiness is an abandonment of human understanding as progress (which is dear to me). A younger person recently told me that the exciting thing is that they are able to solve problems we are incapable of understanding, let alone solving. It all might just be my dinosaur side showing…
I agree that this approach blows up the entire concept of the infallible machine, not only because the training set can be biased, but also because it can fail to include critical corner cases - and these systems are always run against untested inputs once in the field. There seems to be no hope of (or interest in) formal verification.
In my on-and-off work in the GPU industry, I definitely saw a huge drive toward increasing MAC flops, as well as higher-throughput reduced-precision ops, for the sake of NN performance. Some innovations will come from different NN structures, activation functions, and training algorithms, but many will come from big-iron research projects transferred down once the performance is available on, say, a phone.
As to tech displacing human work, I see it as a point on a historical arc where, eventually, machines are better at everything that humans might do to earn an income, and agree totally that UBI should be on the table. (Now! btw) Beyond that, whether “the end of work” is dystopian or utopian depends on whether we value human beings for their productivity or for their inherent humanity.
I think there’s room for books that have varying degrees of technical detail - the impact is large and everyone should have the resources to understand this coming change.

Daniel wrote: "I signed up too late to read the book..."
What do you mean?!! It's never too late to read one of our group reads. Our members are all over the world and have very different access to bookstores and libraries. And the books we read are sometimes in very high demand. So people read them whenever they can.
The month associated with the group read is a target only. You are encouraged to read the book whenever it works for you, whether it's six weeks ahead of the target month or two years after. And we encourage discussion at any time. Some of our best discussions have lasted many months.
P.S. Your comment is welcome whether or not you have read the book, provided it is respectful and on topic.

Thank you.
I hope I didn’t veer too far astray on either the respectful or on-topic axes - I am frustratingly long-winded and it’s easy for me to do. Please alert me if a post needs to be deleted - I don’t think Goodreads will do it automatically.

In fact, I often decide whether or not to read the selection of the month after I've seen the comments of the group. I'm likely to read the Alignment Problem for that reason.
message 13: by aPriL does feral sometimes (last edited Mar 25, 2022 04:31AM) - rated it 5 stars

I wonder what machine learning program was used to scratch my application for employment as a secretary years ago in San Francisco, when I was told the company couldn't consider me because I didn't know what a sawhorse or a Phillips screwdriver was? I couldn't answer those questions on a psych/personality test I was given before my interview. The test was considered extremely scientific, created by college researchers as a test for general intelligence. Answers were fed into a modern Big Iron mainframe.
This was only ten years after I graduated from high school. Girls were required to take home economics classes, and boys were forced to take woodshop/machine shop classes; girls could not take woodshop/machine shop, and vice versa. The counselor also refused my request to drop home economics so that I could take foreign language classes (Spanish). I wanted to save money because high school classes were free - community college Spanish classes cost a lot of money, and I'd lose time. The boys could easily squeeze foreign language classes into their four-year schedules alongside the classes required for high school graduation and college entrance; girls could not fit in the classes required for college at all.
The counselor said, and I quote "You won't need college anyway. You'll get married and have kids. End of story." I got some of my teachers, women, to talk to the counselor, a man. I got out of home economics, took three years of Spanish, and thus was one of the few girls who graduated from high school with the required classes necessary to apply to university.
Using primitive machine-learning algorithms to decide who would be a good fit for a secretary job was all the rage among major corporations in San Francisco in the 1980s/1990s. The test included a number of questions about things I had been forbidden to learn because I was female. Those "relevant employment factors" were one of the many things that dropkicked me to the curb during those decades. When I finally learned programming, every ad stated "must be able to lift 50-75 pounds", because men could and women couldn't. The surface reason was that carrying boxes of computer printer paper was now a requirement for all programmers in the 1980s. Bias isn't new - unintentional AND intentional.
message 14: by aPriL does feral sometimes (last edited Apr 07, 2022 11:41AM) - rated it 5 stars

Computer scientists are writing mathematical equations - essentially the act of earning points toward a best score, like football - to mimic the human reward emotions that give us a feeling of satisfaction or make us happy.
Will a computer ever be happy?
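
To make the "earning points" framing concrete, here is a minimal sketch (my own, not from the book) of an agent chasing a score: it nudges numeric value estimates toward whatever pays off, and that bookkeeping is the whole of its "satisfaction."

import random

actions = ["A", "B"]
true_payoff = {"A": 0.3, "B": 0.7}   # hidden probabilities of earning a point
value = {a: 0.0 for a in actions}    # the agent's running score estimates
alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

random.seed(0)
for step in range(2000):
    # Mostly pick the action with the best estimated score; sometimes explore.
    a = random.choice(actions) if random.random() < epsilon else max(value, key=value.get)
    reward = 1.0 if random.random() < true_payoff[a] else 0.0
    value[a] += alpha * (reward - value[a])   # move the estimate toward the observed reward

print(value)   # roughly {'A': 0.3, 'B': 0.7} - the "satisfaction" is just these numbers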



Books mentioned in this topic
The Alignment Problem: Machine Learning and Human Values (other topics)
Please use this thread to post questions, comments, and reviews at any time.