2009-03-20, 11:04 | Link #1901 | |
Logician and Romantic
Join Date: Nov 2004
Location: Within my mind
Age: 43
2009-03-20, 11:13 | Link #1902 | ||
Seitenkan no Greeter
Anyway, have you ever read Chobits? If so, then you'll understand what I'm about to say and have no need to read it. If not, read on (unless you don't want to be spoiled about a big plot point from the end of the series, in which case you might not want to read the rest of my post either, due to the potential indirect spoilers). Spoiler:
When something is emulated (say, using an emulator to play a video game you don't have the right system for), you want it to work just as well as, or sometimes even better than, the original. While many video-game emulators do have glitches here and there, the overall effect is to run the game much as it would on its original system; often, the emulator will even have options that make the game look better than it otherwise would (for example, increasing the resolution of the emulated output). The only downside is that emulators often have steep system requirements.
My point here is that, given enough processing power, storage space, and memory (as well as time, I suppose, since one would have to write the required programs first), one could create an AI that emulates emotions quite well. "Under the hood", so to speak, it would function differently from the way a human brain produces emotions, but the end result would be very much the same. There might be small glitches here and there, but those could be minimized by self-editing in response to certain external stimuli, or something similar. To my mind, the emotions of such an AI need not be treated any differently from those of a human; you may think differently, however. (Of course, we'll probably have to wait until quantum computing becomes commonplace before an AI able to emulate the full range and subtlety of human emotion can be created, given the enormous processing power that would be required.)
Also: if the above reads weirdly at all, note that I pulled an all-nighter last night to finish a paper I had to turn in earlier this morning, so I'm kinda tired right now and thus a bit absentminded. This doesn't affect the main idea of the post, though, so... I dunno.
EDIT: And yeah, somewhat repeating what people have mentioned previously, now that I look back, what I described above would be much more easily implemented by simply giving the AI a few simple notes on human emotions to start with, then having it observe and interact with other humans and develop its emotions that way. It's entirely possible that, in the real world, we will never have an artificial intelligence that is mentally identical to a human, but even if that's so, it doesn't mean we won't be able to come very, very close.
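The "same behavior, different machinery under the hood" argument can be made concrete with a toy sketch. Everything below is my own invention, purely illustrative: a trivial "mood" value nudged by stimuli and fading back toward neutral — nothing like how a brain works, but from the outside only the input/output behavior matters, which is the whole emulation point.

```python
# Toy "emulated emotion": a mood value pushed around by external stimuli
# and decaying back toward neutral. The mechanism is nothing like a brain's;
# only the observable behavior imitates one. Purely illustrative.

class EmotionEmulator:
    def __init__(self, decay=0.9):
        self.mood = 0.0      # -1.0 (distressed) .. +1.0 (happy)
        self.decay = decay   # moods fade toward neutral over time

    def feel(self, stimulus):
        """Update mood from a stimulus in [-1, 1], clamped to range."""
        self.mood = max(-1.0, min(1.0, self.mood * self.decay + stimulus))
        return self.mood

    def expression(self):
        if self.mood > 0.3:
            return "smile"
        if self.mood < -0.3:
            return "frown"
        return "neutral"

ai = EmotionEmulator()
ai.feel(0.8)                 # a kind word
print(ai.expression())       # smile
```

The "self-editing in response to external stimuli" idea would amount to letting the model adjust its own `decay` and thresholds from experience.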
2009-03-20, 11:19 | Link #1903 |
Absolute Haruhist!
Artist
Join Date: Mar 2006
Age: 36
On robot AI and human brains.
A human brain has about 100 billion neurons, each one something like a piece of tiny wire. If you could create a quantum computer (actually, a nano-scale computer would be enough) with the same number of connections, you would have an artificial human brain, an AI. Then all you have to do is program a baby's mind into the AI, and it can learn just like a human. If we knew the exact programs to run for a human mind, we could even do it in the present day, except the machine would be supercomputer-sized. With quantum technology, though, you could squeeze everything into an insignificant space, fundamentally more efficient than a human brain. Human brains also deteriorate, but a quantum brain could probably store its data virtually and have near-infinite capacity, with no need to recycle neurons to learn new things and forget old ones.
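The "program a baby mind and let it learn" idea is roughly what machine learning does today, on a vastly smaller scale. A minimal sketch, with all names and numbers invented for illustration: a single Hebbian-style "neuron" that starts with zero knowledge and strengthens whichever connections fire together during experience.

```python
# Toy Hebbian learner: connections active at the same time get stronger.
# Purely illustrative -- a real brain has ~10^11 neurons, this has one.

def hebbian_train(patterns, lr=0.1):
    """Strengthen weights for inputs that co-occur with an active output."""
    n = len(patterns[0][0])
    weights = [0.0] * n
    for inputs, output in patterns:
        for i, x in enumerate(inputs):
            weights[i] += lr * x * output  # Hebb's rule: dw = lr * pre * post
    return weights

# "Baby mind": zero weights at birth, shaped entirely by exposure.
experience = [
    ([1, 0, 1], 1),  # stimuli 0 and 2 occur alongside an active response
    ([1, 0, 0], 1),
    ([0, 1, 0], 0),  # stimulus 1 never does
]
w = hebbian_train(experience)
print(w)  # stimulus 0 ends up with the strongest association
```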
2009-03-20, 11:32 | Link #1904 | |
NYAAAAHAAANNNNN~
Join Date: Nov 2007
Age: 35
That is why I quoted the bit about the human touch. It would be good to be able to live and learn forever, but if you learn almost everything, where is the fun in relearning the things you forgot?
2009-03-20, 11:33 | Link #1905 | |
Seitenkan no Greeter
(On that note, those of you who haven't watched ZegaPain should go do so. It's a damn good show.)
2009-03-20, 11:51 | Link #1907 | |
Absolute Haruhist!
Artist
Join Date: Mar 2006
Age: 36
Zegapain is indeed quite good, yet 'Sunrise experts' don't know anything about it. @Imouto Maidoroid: remember, she is not a loli.
EDIT: How come the forum stopped moving? I was enjoying the relatively serious posts lol
Last edited by C.A.; 2009-03-20 at 12:05.
2009-03-20, 13:20 | Link #1908 |
NYAAAAHAAANNNNN~
Join Date: Nov 2007
Age: 35
Sorry, I decided to take an emotional rollercoaster by watching both seasons of Clannad in one go, up to episode 23. That is one hell of a trip.
I like learning new stuff, but I've been slowing down of late and have started asking the more dangerous questions I'd never asked before, the ones that don't have a fixed answer, or any answer at all. If this is the path to enlightenment, my recent insomnia might be foreboding a long rest in my near future. I have to resist claiming Imouto, otherwise my 2nd waifu might get a training buddy.
Actually, mapping the brain is a tediously difficult task. Assuming we have 100 billion brain cells (10^11), and that we use at most 10% of them, the total number of paths to map would be something like ([10^11 × 10%]! × [(10^11 − 1) × 10%]! × ... × 2! × 1!). I refuse to calculate the answer, because even at 10% it would be somewhere close to infinity. Never underestimate the human brain. It's not just that we are physically too weak to use it at its max constantly; we couldn't find a way to fully maximise the total power of our brain cells, because of our lack of total intelligence and wisdom.
EDIT: Note my less random and more concise English in the 2nd paragraph. There is something seriously wrong with me.
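The factorial bookkeeping above is the poster's own back-of-envelope (and the "10% of the brain" figure is a myth), but the underlying point that these counts explode is easy to check. A rough order-of-magnitude sketch, keeping the post's numbers for the sake of the estimate:

```python
import math

# Sanity-checking the scale of the brain-mapping estimate above.
# 10^11 neurons and the "10% in use" figure are taken from the post
# (the 10% claim is a myth, but we keep it for the estimate).

neurons = 10**11
active = neurons // 10          # 10^10 "active" cells, per the post

# Just counting possible pairwise links among the active cells: C(n, 2).
pairs = active * (active - 1) // 2
print(f"pairwise links alone: about 5 x 10^19 ({pairs})")

# The factorial product in the post is astronomically worse. Even a single
# factor, (10^10)!, has log10 computable via the log-gamma function:
digits = math.lgamma(active + 1) / math.log(10)   # log10((10^10)!)
print(f"(10^10)! is roughly 10^{digits:.3g}")     # about 10^(9.57e10)
```

So even one factorial term has tens of billions of digits, which is "somewhere close to infinity" for any practical purpose.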
2009-03-20, 21:14 | Link #1909 | |
Tsukkomi power!
Join Date: Jan 2009
Location: For great justice!
Uh, that's all I have to add, really. It's been a long day, and I thank all participants for the discussion on physics; it's been quite educational. Now if only my proper physics lectures were this interesting. ^_^
2009-03-21, 02:00 | Link #1910 | |
NYAAAAHAAANNNNN~
Join Date: Nov 2007
Age: 35
Physics lectures are never interesting. The ones I have in high school are so boring that I regularly drop dead in class, but of all the subjects, I find physics the most interesting.
2009-03-21, 08:22 | Link #1911 | |
Tsukkomi power!
Join Date: Jan 2009
Location: For great justice!
I don't find all physics lectures tedious, though I do think that in many cases the mathematics involved detracts from the enjoyment of the concepts being introduced.
2009-03-21, 10:57 | Link #1912 | |
NYAAAAHAAANNNNN~
Join Date: Nov 2007
Age: 35
Yes. I HATED math, but somehow it is the math that gives some plausibility to unthinkable concepts, like chaos theory, the fourth dimension of time, and the Copenhagen interpretation in quantum physics.
Since Honoka looks like a nekomimi, we shall use her as a substitute for the cat in the Copenhagen interpretation. Suppose we put her and Sakura in the same small room: the effects observed and the plausibility of the outcomes (Honoka gets yuri-ed by Sakura, or not) can be determined, but we cannot determine what WILL happen. That is where statistics kick in, to calculate probabilities based on the prior actions of the two.
2009-03-21, 16:10 | Link #1913 |
Tsukkomi power!
Join Date: Jan 2009
Location: For great justice!
Now we will extend the above scenario to Schrödinger's Honoka. We put Honoka in a box, then we place Sakura in a cage inside the box. Depending on whether a radioactive isotope decays or not, the door of the cage will or will not open, and hence Honoka will or will not get yuri-ed.
Before we use Leopard's Soul Shout to blast the box open and rescue Honoka (who will probably be running out of air even if she doesn't get yuri-ed), we cannot know whether Honoka has been yuri-ed. However, since Honoka and Sakura are both conscious beings, they instantaneously collapse the wavefunction of the system, so the above scenario is moot. Hence SaintessHeart's original experiment should be verified. I love coming up with junk science. ^_^
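For what it's worth, the "isotope decays or not" part of the setup is easy to simulate classically; the quantum weirdness lives only in the superposition before observation, which no simulation captures. A toy Monte Carlo of the box, with the decay probability and trial count invented for illustration:

```python
import random

# Toy Monte Carlo of the Schrodinger's-Honoka setup: in each trial an
# isotope either decays (cage opens) or doesn't. Classically we can only
# estimate the probability; which outcome actually occurs in any one box
# is unknown until it is opened. All numbers are invented for illustration.

def run_trials(p_decay, n, seed=42):
    """Return the fraction of boxes in which the cage door opened."""
    rng = random.Random(seed)
    opened = sum(rng.random() < p_decay for _ in range(n))
    return opened / n

rate = run_trials(p_decay=0.5, n=100_000)
print(f"cage opened in {rate:.1%} of trials")   # close to 50%
```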
2009-03-22, 10:15 | Link #1915 |
Wise Otaku Seeker
Join Date: Feb 2008
Location: Philippines
Age: 34
Hello guys, so what did I miss?
Sorry I've been, well, a lurker lately. It's just that today was the final day of my finals week, and it took my whole week to try to get at least decent grades. So anyway, what did I miss?
2009-03-22, 11:04 | Link #1916 | |
Tsukkomi power!
Join Date: Jan 2009
Location: For great justice!
|
X-ref the Copenhonoka Interpretation and Schrodinger's Nekomimi.
2009-03-22, 12:26 | Link #1917 |
Absolute Haruhist!
Artist
Join Date: Mar 2006
Age: 36
I seriously love your Schrodinger's Nekomimi lol, but I can't give you cookies yet.
And I have no idea what SaintessHeart is talking about most of the time, even in the mecha thread. I don't know if he's seriously making a point or not, or if he actually understands what he's talking about lol.
Anyway, watching Gundam 00 24 suddenly makes me wonder if the Brains actually carry the consciousness of someone who once had a human body.
2009-03-22, 12:30 | Link #1918 |
Tsukkomi power!
Join Date: Jan 2009
Location: For great justice!
^ Haha, thanks a lot.
Most of what SH says is quite true, to my less-than-perfect knowledge, though the way he jumps between different fields is a bit disorienting. I've been wanting to give you cookies for a couple of things in other threads, but I can't find enough people to give cookies to in between... :P
2009-03-22, 13:06 | Link #1919 | |
NYAAAAHAAANNNNN~
Join Date: Nov 2007
Age: 35
Eventually you will go mad, or spout "nonsense". I blame Sakura for all this.
Regarding that part of G00 24, I believe it actually is possible to download consciousness from a human mind into an artificial one: map the memory engrams (every possible neural link throughout the head, excluding the cerebellum), turn them into algorithms, then send them into the machine. Who knows, your memory might fill less than 100 pages of a PDF document.
Qmeister's Schroedinger's Nekomimi* is epic. It should count as an anime fan's version of demonstrating the Uncertainty Principle.
* IT IS ONLY A THOUGHT EXPERIMENT, NOT A PRACTICAL ONE. Using my waifu as a test subject is just plain cruel.