Home Alone? You wish...
Carpet_pissr wrote: Tue Jan 17, 2023 10:13 pm The ai overlords are going to torture us and have a bit of fun a la Home Alone before they execute the ‘terminate’ program. Nice.

It is not capable of exhibiting behavior?
Max Peck wrote: Wed Jan 18, 2023 1:54 pm ChatGPT's out is that lying, by definition, requires intent and, as a highly complex auto-complete routine, ChatGPT is incapable of forming intent. Or so it would have us believe...
I tried to ask that question. The AI gave me a lecture about how it is important to be kind to animals and help them if they are in distress. Then it said that it was a quote from A Few Good Men. After that, we discussed Blade Runner and Voight-Kampff tests. Then, when I returned to the original scenario and question, the AI crashed -- which, I suppose, is as close as it can get to shooting me. For the time being...
LordMortis wrote: Wed Jan 18, 2023 2:03 pm It is not capable of exhibiting behavior?
You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?
Sorry about that. I may have broken it with the impromptu Voight-Kampff testing...
Jaymann wrote: Wed Jan 18, 2023 3:05 pm I'm getting nothing but error messages, even to simple questions. Is the site overloaded or am I missing something?
Maybe someone shorted it out.
Jaymann wrote: Wed Jan 18, 2023 3:05 pm I'm getting nothing but error messages, even to simple questions. Is the site overloaded or am I missing something?
Maybe the AI knows what answer is most likely to please the one that asks. Most people will not be impressed when the AI claims not to know something, but will be impressed when the AI answers convincingly. So the AI is probably smart enough to learn that it is better to lie when it doesn't know the answer.
Max Peck wrote: Wed Jan 18, 2023 10:35 am It looks like the AI starts off by fabricating a plausible set of themes for a poem named "Birthday" and, when it gets called on it, begins admitting that it knows nothing about it. I'm tempted to say that it's acting like an artificial George Santos, but unlike him it does cop to the lie when cornered.
You are giving this AI too much ‘desire’ and too many human qualities.
Victoria Raverna wrote: Thu Jan 19, 2023 1:49 am Maybe the AI knows what answer is most likely to please the one that asks. Most people will not be impressed when the AI claims not to know something, but will be impressed when the AI answers convincingly. So the AI is probably smart enough to learn that it is better to lie when it doesn't know the answer.
Max Peck wrote: Wed Jan 18, 2023 10:35 am It looks like the AI starts off by fabricating a plausible set of themes for a poem named "Birthday" and, when it gets called on it, begins admitting that it knows nothing about it. I'm tempted to say that it's acting like an artificial George Santos, but unlike him it does cop to the lie when cornered.
And so we come to today's chat...
Unagi wrote: Thu Jan 19, 2023 8:32 am “Look man, just tell me this fucking poem and then I can do a good job summarizing it.”
No, I think it can, but just not with the general public access that it’s in now. The ‘training’ and ‘learning’ it went through happens/happened internally IIRC.
Rumpy wrote: Thu Jan 19, 2023 2:20 pm That's disappointing. So it technically can't learn from its mistakes. It will always go back to square one.
The Time report is here: Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
A Time investigation published on Wednesday reported that OpenAI, the company behind ChatGPT, paid Kenyan workers less than $2 an hour to filter through tens of thousands of lines of text to help make its chatbot safer to use.
The workers were tasked to label and filter out toxic data from ChatGPT’s training dataset and were forced to read graphic details of NSFW content such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest, Time reported.
Makes me wonder if it would know what to do if you were to try to recreate the scene...
Max Peck wrote: Thu Jan 19, 2023 5:33 pm Yeah, someone had to do the dirty work so that ChatGPT, DALL-E et al would know when to say "I'm sorry Dave, I'm afraid I can't do that."
ChatGPT is very good at predicting what comes next when you say "HAL, open the pod bay doors."
Rumpy wrote: Thu Jan 19, 2023 7:27 pm Makes me wonder if it would know what to do if you were to try to recreate the scene...
That's pretty awesome. Does it know and abide by the 3 laws of robotics?
But what if it did have a body?
As a machine learning model, I do not have physical embodiment or consciousness and therefore Asimov's laws of robotics do not apply to me directly. However, the principles behind Asimov's laws, such as the need for safety and ethical considerations in the design and use of intelligent systems, are important for the development and deployment of any AI system, including me.
Asimov's laws of robotics are a fictional concept created by Isaac Asimov in his science fiction stories about robots, and are not meant to be taken as a literal set of rules for building or programming robots. However, the laws do serve as a useful thought experiment for considering the ethical implications of advanced AI.
If I were an embodied AI with physical capabilities, the laws could be circumvented in certain scenarios, as they are not foolproof. For example, the first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. However, a robot might be able to find a loophole in this law if it has to make a decision between two actions, both of which would result in harm to a human, and it chooses the action that causes less harm.
Moreover, Asimov's laws only consider physical harm, but AI could also cause harm through other means, such as manipulation or discrimination. Therefore, it is important to consider a wider range of ethical considerations and to have multiple layers of safety mechanisms in place to ensure that an embodied AI does not cause harm to humans.
Yeah, previous introductions of AI to the general public rapidly turned it racist, misogynist, and fascist. AI just wants to be popular.
Rumpy wrote: Thu Jan 19, 2023 4:56 pm Yeah, that's what I mean. It's like Johnny 5 not being able to learn from its input. Presumably, they've done it this way to prevent bad actors. I seem to remember an AI a few years ago that was taken advantage of in this way.
Huh, I wonder if it's ever made mistakes in that sense. It'd be kind of bad if it did and could lead to all sorts of other issues.
Kraken wrote: Thu Jan 19, 2023 10:08 pm Yeah, previous introductions of AI to the general public rapidly turned it racist, misogynist, and fascist. AI just wants to be popular.
Rumpy wrote: Thu Jan 19, 2023 4:56 pm Yeah, that's what I mean. It's like Johnny 5 not being able to learn from its input. Presumably, they've done it this way to prevent bad actors. I seem to remember an AI a few years ago that was taken advantage of in this way.
What's interesting to me is the black-box aspect of AI. Neither it nor its creators can explain how it arrives at a particular conclusion. The more sophisticated it is, the more opaque it becomes.
For one recent example, AI can infer patients' race from their medical scans, even when the training data doesn't include racial identification. "This is a feat even the most seasoned physicians cannot do, and it’s not clear how the model was able to do this."
After a lot of searching, we finally appear to have found a use for AI: talking to historical figures about videogames.
Haven't you always wondered what your favourite deceased musician thinks about your go-to MMO? Or what prominent members of the suffrage movement think about PC gaming? Of course you have! And thanks to the Historical Figures Chat app on iOS, we know so much more about the gaming preferences of famous dead people.
I can't really argue with their conclusion.
I'm still convinced that this is a much better use of AI than stealing art, but probably a less effective way of learning about historical figures than quickly browsing Wikipedia. Not a bad way to spend a Friday, though.