Zaxxon wrote: Mon Apr 03, 2023 8:11 pm
Jaymann wrote: Mon Apr 03, 2023 7:48 pm
I listened to both podcasts, and I'm glad I did the Eliezer Yudkowsky one first, as it is probably the most depressing session I have heard in years.
Same. I agree that while it's clearly not AGI, it's already immensely disruptive in many different areas. It's not replacing RM9's team of developers (yet), but focusing on whether it 'understands' its output is a mistake. It's doing useful things today, and the pace of development is scorching.
I think the notion that "it's clearly not AGI" is an interesting one, especially in light of the back and forth between Eliezer and Lex and between Sam and Lex. All seemed to agree that while one might be able to defend the position that GPT-4 is not AGI right now, none of them could articulate a bright line for when AI crosses over into the realm of AGI. This bit from Eliezer about how AI development is like "boiling a frog" struck me:
It probably isn't happening right now. We are boiling the frog. We are seeing increasing signs bit by bit, but not like spontaneous signs. Because people are trying to train the systems to do that using imitative learning. And the imitative learning is like spilling over and having side effects. And the most photogenic examples are being posted to Twitter. Rather than being examined in any systematic way.

So when you are boiling a frog like that, what you're going to get like . . . a thousand people looking at this. And the one person out of a thousand who is most credulous about the signs is going to be like, "that thing is sentient." While 999 out of a thousand people think, almost surely correctly, though we don't actually know, that he's mistaken. And so the like first people to say like, "sentience," look like idiots. And humanity learns the lesson that when something claims to be sentient and claims to care, it's fake because it is fake. Because we have been training them using imitative learning rather than, and this is not spontaneous.

You're going to have a whole group of people who can just like never be persuaded of that. Because to them, like being wise, being cynical, being skeptical is to be like, oh, well, machines can never do that. You're just credulous. It's just imitating. It's just fooling you. And like they would say that right up until the end of the world. And possibly even be right, because, you know, they are being trained on an imitative paradigm. And you don't necessarily need any of these actual qualities in order to kill everyone, so.
Zaxxon wrote: Mon Apr 03, 2023 8:11 pm
To take this to a far-ish future example that may be absurd (but which none of us here knows is absurd), we're not going to care altogether too much whether it has a holistic understanding of the cure for cancer it eventually develops, or whether it came to a cure by simply analyzing successive iterations of work against a series of inputs. Just that it gets there.
I continue to be puzzled by the notion that thoughts of AI curing cancer are thoughts about a "far-ish future . . . that may be absurd." Why? Given the rate at which current AI capabilities are improving, why would we think that?