[Spoiler] Brain scan question.
Abion47 Offline
Senior Member

Posts: 369
Threads: 22
Joined: Oct 2015
Reputation: 46
#11
RE: [Spoiler] Brain scan question.

(11-05-2015, 02:09 PM)Dundle Wrote: As far as we understand, consciousness is a result of a biological process.

Since when? Last I checked, we haven't gotten any closer to "understanding" the nature of consciousness since the time of Ancient Greece. The only actual definition of consciousness is the state of being aware of one's self and surroundings. Whether or not the conscious thing is biological has no bearing.

And yes, the realm of research into artificial consciousness has a well-established and dedicated scientific community. It's not some side-interest that people just toy with when they aren't working on "real" science. One hundred years is plenty enough time for them to achieve a breakthrough.

(11-05-2015, 02:09 PM)Dundle Wrote: Because it still doesn't fit the criteria even now. There's a reason they call it "artificial" ... The computers in question still have a completely programmed protocol for behavior. Can the computer make arguments? If it can't, it becomes glaringly obvious again that it's just a program following what it's only been made capable of "thinking" of.

Here's the thing, though. You say that a big indicator that a computer isn't conscious is the fact that it can't make arguments. But humans can't make arguments either, at least not for a while. When a baby is born, it is essentially a bundle of biological programming by way of genetic "code". It is only over the course of the child's growth and life cycle that it comes to know how to learn, how to think, how to communicate, and ultimately how to reason and to form new ideas from existing concepts in its mind. Just about all computers nowadays are programmed for express purposes, sure. But what if a computer were programmed in a way that enabled it to learn on its own, either autonomously or through interaction with the world and people around it? In what way would that computer's journey of learning differ in any practical way from an infant's?

(11-05-2015, 02:09 PM)Dundle Wrote: I never claimed that it was, just that it's, like consciousness, an emergent property of a biological process. You have to get past just simulating and actually emulating where a computer has brain waves, neurons, and synapses like a brain does.

I'll say it again. There is no reason whatsoever to think that consciousness can only be attained by biological means. It's that way of thinking that I was talking about when I said what I did about how being human is not a requisite to being sentient. Also, why would simulated neurons be so inferior to actual neurons? The difference between simulated and replicated becomes superficial when the result is so indiscernible from reality that for all intents and purposes it can be considered real.

The only argument that can be made in that scenario against the legitimacy of the AI's sentience is the same one people make when they say they don't trust genetically modified foods: because it's not "natural," there must be something wrong with it, or "the only reason we consider it to be real is because we haven't devised a thorough-enough test to expose its farce." I'd be willing to bet money that eventually, people seeking conclusive evidence of an AI's sentience will devise a test so ridiculously "thorough" that even humans would fail it.
11-05-2015, 03:04 PM
Dundle Offline
Junior Member

Posts: 24
Threads: 1
Joined: Oct 2015
Reputation: 0
#12
RE: [Spoiler] Brain scan question.

(11-05-2015, 03:04 PM)Abion47 Wrote:
(11-05-2015, 02:09 PM)Dundle Wrote: As far as we understand, consciousness is a result of a biological process.

Since when? Last I checked, we haven't gotten any closer to "understanding" the nature of consciousness since the time of Ancient Greece. The only actual definition of consciousness is the state of being aware of one's self and surroundings. Whether or not the conscious thing is biological has no bearing.

Yes, we have made strides since then in deepening our understanding of consciousness, however little that understanding still remains, mostly in identifying the physical parts of the brain and the processes that occur in it. By "nature" I'm not really sure what you mean. The core principle of what consciousness is, to be aware and to have the ability to experience, has not changed since that time, but explanations for that phenomenon have.

(11-05-2015, 03:04 PM)Abion47 Wrote: And yes, the realm of research into artificial consciousness has a well-established and dedicated scientific community. It's not some side-interest that people just toy with when they aren't working on "real" science. One hundred years is plenty enough time for them to achieve a breakthrough.

I'm not denying the scientific efforts that have gone into attempting to develop a human-like AI. Certainly the interest is there, and there are a lot of smart people working towards it, but the strides that have been made have been nominal at best. Because of how little we understand about the brain as it is, it's doubtful that in a hundred years there would be a working, designed consciousness. You'd have to recreate every little aspect that makes the brain what it is, and then somehow figure out a way to make it work without using the biological tissue we know it needs to work. It would be like designing a new DNA structure and hoping to make a complex organism without using DNA that already exists. (That's an analogy, not my understanding of it.) I think there will be more luck replicating brains and sticking them into robot bodies before that happens.

(11-05-2015, 03:04 PM)Abion47 Wrote: Here's the thing, though. You say that a big indicator that a computer isn't conscious is the fact that it can't make arguments. But humans can't make arguments either, at least not for a while. When a baby is born, it is essentially a bundle of biological programming by way of genetic "code". It is only over the course of the child's growth and life cycle that it comes to know how to learn, how to think, how to communicate, and ultimately how to reason and to form new ideas from existing concepts in its mind. Just about all computers nowadays are programmed for express purposes, sure. But what if a computer were programmed in a way that enabled it to learn on its own, either autonomously or through interaction with the world and people around it? In what way would that computer's journey of learning differ in any practical way from an infant's?

That's not really a good comparison. The point is the inconsistency in intelligence: babies can't make arguments or hold conversations shortly after they're born. For an AI, the inability to make an argument would reveal its lack of full awareness and capacity, and the facade its "conversations" are made of.

If a machine is programmed to learn, what type of learning is it? The programmer would have to account for all the dynamics of learning in the same way the human brain does. It would probably take a super brain to comprehend all the ways the brain interacts with the environment and itself. If someone spent decades developing an AI that learns, it may still be less aware and capable than a dog. The problem is that it's easy to simulate pieces of the upper, more visible aspects of the brain, because those are the parts easiest to see. Understanding what builds them will probably take decades or centuries of research.
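For what it's worth, "programmed to learn" doesn't have to mean the behavior itself is scripted. A toy sketch (entirely my own illustration, nothing from the game or this thread): tabular reinforcement learning on a two-lever bandit, where the program only specifies *how to update from reward*, and which lever is "good" is discovered rather than coded in.

```python
import random

# Minimal bandit-learning sketch (hypothetical example): the agent is never
# told which of the two levers pays more; it finds out from reward feedback.
def train_bandit(payouts, episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(payouts)  # learned value estimate per action
    for _ in range(episodes):
        # explore occasionally, otherwise exploit the current best estimate
        if rng.random() < epsilon:
            action = rng.randrange(len(payouts))
        else:
            action = max(range(len(payouts)), key=lambda a: q[a])
        reward = payouts[action] + rng.gauss(0, 0.1)  # noisy reward signal
        q[action] += alpha * (reward - q[action])     # incremental update
    return q

q = train_bandit([0.2, 0.8])
best = max(range(len(q)), key=lambda a: q[a])
print(best)  # the agent settles on the higher-paying lever
```

Of course, this only illustrates the distinction between scripted and learned behavior; it says nothing about awareness, which is exactly the gap being argued over.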

(11-05-2015, 03:04 PM)Abion47 Wrote: I'll say it again. There is no reason whatsoever to think that consciousness can only be attained by biological means. It's that way of thinking that I was talking about when I said what I did about how being human is not a requisite to being sentient. Also, why would simulated neurons be so inferior to actual neurons? The difference between simulated and replicated becomes superficial when the result is so indiscernible from reality that for all intents and purposes it can be considered real.

The goal is so far away that there will be very outlandish and technologically advanced things to come before it. Since we're discussing the game, it already fails on there not being enough time to progress, and on everything else having massively lagged behind in technology, which is extremely unrealistic. There would probably be space colonization and ways to deflect massive objects in space first.

Who is to say that we can't slip through dimensions with ease in the far, far future? Maybe there will be something even better than computers that will replace them. The problem is that it's all so far away that you can't really pretend to understand how it's going to work from the here and now. Posing questions of ridiculous magnitude is pointless; remember that we're working within the hundred-year frame.

(11-05-2015, 03:04 PM)Abion47 Wrote: The only argument that can be made in that scenario against the legitimacy of the AI's sentience is the same one people make when they say they don't trust genetically modified foods: because it's not "natural," there must be something wrong with it, or "the only reason we consider it to be real is because we haven't devised a thorough-enough test to expose its farce." I'd be willing to bet money that eventually, people seeking conclusive evidence of an AI's sentience will devise a test so ridiculously "thorough" that even humans would fail it.

Well, "that's not natural" isn't an argument, so I agree that it would not disprove computer intelligence in any way. I can say that it is strange to argue about its legitimacy given how little is known at this time.
11-05-2015, 07:35 PM