[yes, there are several extraneous variables here that need to be glossed over for the sake of brevity, but whatever.]
so let's say you have a true ai [quite the jump to make, but bear with me] and you place him inside a large, 15 x 15 x 15 meter cubic room. the walls aren't really walls but giant blue shutters. humanoid, and unsurprisingly chrome, he's got a fair sense of cognition, and that, paired with his ability to learn, makes him quite curious about everything.
one day you open the right shutter and give him a glimpse, through a one-way mirror, of the humans researching. he gets to see a scientific ecosystem and a workplace hierarchy. he has absolutely no idea what they're talking about, but he watches everything the humans do with intense curiosity. they don't seem to be paying any attention to him at all.
question 1: does the ai, in his attempt at learning, attempt mimicry of human beings?
alright, so whatever happens, the shutters are closed for a day, so the ai can process everything, collect data, and perhaps even philosophize on his own [it's gotta be something pretty big]. then, the left shutter is opened [with the right shutter closed]. behind this shutter is a group of three other ai. they don't pay any attention to him at all either.
question 2: does the ai, in his attempt at learning, attempt mimicry of synthetic beings?
question 3: what would you expect the three ai to be doing, assuming they're all of the same build as him but have only had contact with themselves?
so after another period of observation the shutters are closed for a day. the ai is thinking. then, without warning, both shutters are opened and a small door opens up in the corner of each. the doors will close behind him.
[b]question 4: which side does the ai choose?[/b]
question 5: will the ai attempt to go back after choosing it at any point?
and, perhaps most importantly,
[spoiler]which group conducted the experiment?
lol, there's an answer to this question simply due to the fact that i'm incoherent[/spoiler]
tl;dr: TAKE ME TO ROBOT PIRATE ISLAND, I WANT TO ARM WRESTLE WITH COWBOYS ON THE MOON
[url=https://www.youtube.com/watch?v=BjKQMattaVk]thread theme[/url]
-
i would think it would depend entirely on how the thing was made.
-
Does the AI see its hands? I would imagine it would go with whatever it is more similar to at first, but would try to interact with humans later.
-
[quote]question 1: does the ai, in his attempt at learning, attempt mimicry of human beings?[/quote]Yes. They are the only thing he knows exists, and he would want to be a part of them. He would digest as much information from them as he can. The human actions would intrigue him and he'd want to know more.

[quote]question 2: does the ai, in his attempt at learning, attempt mimicry of synthetic beings?[/quote]No. He'd be more curious as to why they are acting differently than the Humans.

[quote]question 3: what would you expect the three ai to be doing, assuming they're all of the same build as him but have only had contact with themselves?[/quote]Interacting with each other. Questioning how they got here and what is past the walls that have them sealed in. But since they can only interact with each other, their philosophical discussions would end quickly, as they'd have nothing more to share with each other.

[quote][b]question 4: which side does the ai choose?[/b][/quote]The left. The Humans would be more interesting to him. Because he encountered the Humans first, his "brain" has already begun wiring itself to be more Humanlike; he might not be able to relate much to the Synthetics other than physically. If he saw the Synthetics first, it'd most likely be the other way around.

[quote]question 5: will the ai attempt to go back after choosing it at any point?[/quote]No. The AI would lack the emotion (at least in its current state) to ever regret making certain decisions.

[quote]which group conducted the experiment?[/quote]The Humans. If the Synthetics never had contact with the outside world, they couldn't possibly have done this.

I need to go lie down. I think something in my brain ruptured thinking this over.
-
No data available.
-
WHY DO YOU NOT USE CAPITAL LETTERS?! Ahem. Seriously, I can't even read this. I'm sorreh >:(
-
I liked Kamots' reply; I think he would first identify as human. But... he would go to the synthetics out of curiosity about the unknown.
-
I've never seen a more accurate tl;dr.
-
1. yes 2. yes 3. something 4. both 5. ^^^ am i smart yet i didn't think so
-
Okay then, everybody! It's time for a new question. Let's take Goji's AIs, all four of them, and transfer them into a new mechanized frame. They are then released into the room of people, with their natural curiosity intact, and left there to interact with the people. Keep in mind, this is their first time ever interacting with people! This question is brought to you straight from the musings of Goji himself.
-
I should derail this thread. One simple image, and a few strings of words, and the theoretical merit of this discussion could be entirely lost on anybody who gazed upon the post. Half would merely ask why in their confusion, and the rest would have the image stuck in their head. It's too good to pass up. This is the only time I could ever break out this nifty weapon that I found.
-
Who cares.
-
Is it a "smart" ai?
-
shit goji that's a lot of words
-
I apologize if this reply is a half-thought-out jumble of nonsense.

1. Mimicry [i]is[/i] important to learning, at least regarding intelligences we have experience with (biological). So it's a distinct possibility. I'm also inclined to agree with Sandtrap: it would likely ask questions and try to reason about what the humans were doing.

2. This is probably just as likely. I'm not sure how an AI would determine whether it should mimic one or the other. Perhaps physical similarity?

3. I'm assuming they'd never had contact with anything other than one another. In that case they probably wouldn't do too much, other than mill around the room and share the information they've gathered with one another. Possibly philosophize about the nature of their existence.

4. Oh man... like I said in number 2, I don't see how the AI would choose other than by physical similarity. Perhaps the humans seem more interesting?

5. Is regret solely a human concept?
-
I don't know what it will choose, but I can tell you we'll need to destroy them all before they destroy us if they work together.
-
[quote][u]Posted by[/u]: [b]Gojira[/b] right.[/quote] left
-
Why does godzilla care about this?
-
Edited by Sandtrap: 6/16/2014 6:04:54 PM

This is too deep for #Flood to handle.

The important thing to remember is that an AI is an intelligence. And this is an intelligence that's been exposed to very little, meaning that when it is exposed, it's going to soak everything up like a sponge. That means it's in an "infancy" stage, where the world around it and what it sees will heavily influence it, both in how it thinks and potentially in how it acts.

I believe the answer to the second question would not be mimicry, but questions: now that an established Human line of action and thought has taken root in its head, it would ask why they act in the manner that they do.

In answer to the third, with no external contact, the AIs would likely question each other on who and what they are, and may reason that they need to wait for something, or, depending on what their discussions come up with, they may act in an attempt to figure out the environment around them, much like a child.

Question 4 is difficult to answer, but I believe that because of the first experience with the outside world, this will have a pull on the intelligence's strings, giving it a feeling of familiarity and comfort that it can return to.

And question 5 is: possibly. As with any intelligence, sometimes when you realize something, even if it once felt safe, or homey, or like the right thing to do, when your mind logically puts the pieces of the puzzle together, you have little choice but to turn around and go against the established beliefs inside yourself. But this all depends on whether the AI is happy, or feels content, in its environment. If there is pain and misfortune, then it will no doubt turn back to find a new place of safety: the other AIs. But in doing so, it could change their perceptions in a negative manner, making the first AI an unwilling bearer of negative change to its own environment.

Did I finish your homework for you?
-
[quote]Who cares.[/quote]
-
I guess he mimics the humans, as they show activity and movement, whilst the other AI are just kinda static. Also, would he even relate to the other AI, given that he doesn't have a mirror and doesn't recognise them as being like himself?
-
Edited by Kamots: 6/16/2014 6:09:00 PM

I'd say he might imagine at first that he is human, knowing them as the only existing entities that share similarities with him. At that point he is likely to adapt, even without realizing it, to acting similarly, at which point the synthetics might appear as a curiosity only. Maybe he realizes that they look similar to him, but he could also begin to wonder why they move and speak so differently compared to what he imagined to be his own people (which would probably be the case). Considering that he might identify with humans, this could push him further away from the synthetics.

Depending on his reaction, he might choose to go to the synthetics first because of physical similarities, or to go to the humans because he identified with them first. Regardless, he would likely want to go to the other side at some point, given his curiosity.

tl;dr: idk

[spoiler]Also, since you said the other A.I. have only had contact with themselves, they probably aren't the ones who conducted the experiment.[/spoiler]
-
gojira why haven't you realised that this shit isn't canon to halo!!!!!!!!!!!!!!!!!!!