HRP-4C "Gynoid" Robot Japanese Girl


Shaorin


[Image: Chobits artwork]

http://en.wikipedia.org/wiki/Chobits

:o Are we there yet? Not quite, but a hell of a lot closer than I ever figured back in 2001 we'd be by now!! :o

[Photos: HRP-4C robot demonstration]

- VIDEOS -

http://www.cnjianqing.com/index.php?key=hrp4c

- ARTICLES -

http://www.cleveland.com/world/index.ssf/2009/03/walking_talking_female_robot_t.html

http://www.xyberlog.com/2009/03/16/high-tech-talking-robot-girl-in-japan/

http://robotionary.com/robotics/womanoid-catwalk-model-robot.php

and here's yet another take on the concept!!

[Photos: robot girlfriend concept, B. Croft]

http://www.metro.co.uk/news/440494-no-time-for-a-real-woman-date-a-robot-instead

So, after humanity perfects the Android Girlfriend for all us Ronery Otaku,

what do you suppose'll be next? Robots to fight our wars? Transformable ones, perhaps?

Or will humanity be so happy with their Robo-Girlfriends/Boyfriends

that no one will care, and warfare will cease to exist?!?!

Edited by Shaorin

Flamethrower, MOVE!!!!

Oh they'll probably rebel against their creators, and drive humanity to near extinction, then we fight back etc, etc, pew, pew, pew.

Meh, I don't mind the idea of being put into stasis and my mind put into a dream state so that my body energy can be fed to the ruling machine empire...

Edited by myk

Flamethrower, MOVE!!!!

Meh, I don't mind the idea of being put into stasis and my mind put into a dream state so that my body energy can be fed to the ruling machine empire...

As long as we get a special Otaku Matrix with tons of crazy anime sh!t. Who wouldn't be up for that?


I think this may be going the way of "Ghost in the Shell". When do we get to transfer to robotic bodies?

As much as I love GITS, there have been some studies into human consciousness showing that our current algorithm-based computing architecture can never imitate the human soul. As proof of this, count how many times seemingly intelligent people make irrational or dumb choices in life that go contrary to what algorithms seek - namely, the decision that offers the best payoff.

So dropping your "Ghost" into a mechanical shell would be akin to dropping your consciousness into the body of a dog or cat.

Edited by Ghost Train

So dropping your "Ghost" into a mechanical shell would be akin to dropping your consciousness into the body of a dog or cat.

Even then, a life as a dog or a cat would be better than what I've got now, lol. Besides, I'm sure I'd wind up in a Disney movie or something...


As much as I love GITS, there have been some studies into human consciousness showing that our current algorithm-based computing architecture can never imitate the human soul. As proof of this, count how many times seemingly intelligent people make irrational or dumb choices in life that go contrary to what algorithms seek - namely, the decision that offers the best payoff.

So dropping your "Ghost" into a mechanical shell would be akin to dropping your consciousness into the body of a dog or cat.

It's a lovely collection of large words, but... it's also a complete load of crap?

1. You can write fairly complex algorithms with a wide variety of input variables and output solutions. And you could always add a random input to churn the waters a bit. Or throw in a few bugs, because no code much more complex than PRINT "HELLO WORLD" is perfect.

And making routines that choose blatantly wrong solutions to a complex situation is actually rather easy, much to the chagrin of video game designers the world over.

Just because it's computerized doesn't mean it's inherently BETTER at making choices.

2. There's also a strong possibility that many of the dumb, irrational choices we make are driven by ancient caveman(or even PRE-caveman) thought patterns that simply aren't applicable to the modern world, but are still a large part of how the brain expects things to work. Effectively analogous to glitchy legacy code in a computer environment. It's old, outdated, and kludgy, but it still works... most of the time.

3. It wouldn't be at all like a dog or cat, because dogs and cats have organic brains that process information in similar manners to our own, albeit with different hard-wired priorities and much less horsepower.

4. Neural networks are designed to create organic-style "thinking" from computers, and they're rather good at it. They're also INCREDIBLY inefficient, so they take LOTS of power and space. But those are cheap these days, and getting cheaper almost by the minute.

5. GitS interfaced organic brains to technology. It didn't replace brains WITH technology.

There's certainly problems with that(and Masamune Shirow actually calls some of them out himself in author's notes), but it completely sidesteps the issues with simulating the human brain(which isn't a system we understand well enough to simulate anyways).

It also means going "total cyborg" isn't a cure for Alzheimer's or Parkinson's in the GitS world.

So... ummm... you're pretty much wrong on all counts?

Bring on the robo-bodies!
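For what it's worth, point 4 can be demonstrated in miniature. Here's a single perceptron "neuron" learning logical AND in plain Python - just a sketch, with an arbitrary learning rate and epoch count, not a serious network:

```python
import random

# A single artificial neuron trained with the classic perceptron rule.
# It learns the logical AND function from examples -- a toy instance of
# the "organic-style thinking" neural networks aim for.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
rng = random.Random(0)
w = [rng.uniform(-1, 1) for _ in range(2)]
b = rng.uniform(-1, 1)

def fire(x):  # step activation: the neuron either fires or it doesn't
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(50):  # repeated presentation of the examples
    for x, target in data:
        err = target - fire(x)
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

print([fire(x) for x, _ in data])  # → [0, 0, 0, 1]
```

AND is linearly separable, so convergence is guaranteed here; the point is just that a few lines of ordinary deterministic code can "learn" like a (very dumb) neuron.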


YES I LIEK BIG WuRDSZ!!1!!

Oh look! My Hatsune Miku android just walked into the neighbor's yard and is standing on her head, peeing, while singing "The World is Mine", all because it's a product of her randomized programming!

It's a lovely collection of large words, but... it's also a complete load of crap?

1. You can write fairly complex algorithms with a wide variety of input variables and output solutions. And you could always add a random input to churn the waters a bit. Or throw in a few bugs, because no code much more complex than PRINT "HELLO WORLD" is perfect.

And making routines that choose blatantly wrong solutions to a complex situation is actually rather easy, much to the chagrin of video game designers the world over.

Just because it's computerized doesn't mean it's inherently BETTER at making choices.

Inserting randomness into your algorithm on purpose to obtain a "blatantly wrong solution", as you say, does not solve the problem of creating hardware + software that processes human consciousness. Otherwise, given the number of crappy outsourced programmers who attended Ranjeev's Technical Institute (specializing in buggy, untested er... I mean highly unpredictable code), we would have created extreme forms of artificial life by now.

Sure, you can add some context to the randomness, but that would make the algorithm incredibly complex - requiring vast amounts of time and/or computational power. Since the latter will be difficult to cram into something shaped like a human skull, we will need time... and I want my ronery anime otaku android waifu to have a decent response time.

More importantly, this defeats the purpose of algorithms, which is to create an efficient set of instructions to find a solution to a problem. What you suggest will not be efficient, as said code would need to have a large amount of unpredictability, nor will it solve problems well, since it will sometimes just have to pick the wrong solution to be human. Perhaps one day we will have data structures capable of replicating this human imperfection, but for now - algorithms are good for telling computers to calculate stuff, and are only modestly effective at simulating human behavior.

Consider the case of Deep Blue vs. Kasparov: despite defeating Kasparov, Deep Blue had to be tweaked in between rounds to prevent Kasparov from tricking it. Of course, IBM denies this was a form of cheating.

2. There's also a strong possibility that many of the dumb, irrational choices we make are driven by ancient caveman(or even PRE-caveman) thought patterns that simply aren't applicable to the modern world, but are still a large part of how the brain expects things to work. Effectively analogous to glitchy legacy code in a computer environment. It's old, outdated, and kludgy, but it still works... most of the time.

I actually like this comparison, though it proves my point - you can rewrite and upgrade legacy code relatively quickly (with the most poorly managed projects taking only a decade or so), but you can't change the human hardware, as our brains are fundamentally the same as when we roamed the world as cavemen. The glitchy legacy code is patched through civilization, language, and education.

You would have thought natural selection would have created near-perfect humans by now, after an ice age, countless wars, plagues, etc., yet a quick glance at the internet reveals that our society seemingly glorifies stupidity. Despite society's attempts to fix and patch the human software, people still do really, really dumb things sometimes - proving that computing is a poor analogy for understanding or simulating humans.

3. It wouldn't be at all like a dog or cat, because dogs and cats have organic brains that process information in similar manners to our own, albeit with different hard-wired priorities and much less horsepower.

My point here (and I concede the dogs & cats example was poor) was that there is a disconnect between human consciousness and the software & hardware we associate with modern computing. Although there have been advances in creating interfaces - prosthetics, vision & hearing aids - to bridge the gap, there is no prosthetic or substitute to carry out human consciousness.

Meaning if I dump the source code for humanity into a different type of hardware - nothing will happen as my code is not optimized or configured for it.

4. Neural networks are designed to create organic-style "thinking" from computers, and they're rather good at it. They're also INCREDIBLY inefficient, so they take LOTS of power and space. But those are cheap these days, and getting cheaper almost by the minute.

One of many technologies, with quantum computing being another that comes to mind. Still, this does not refute the fact that algorithm-based computing simply will not work for this purpose. Despite the incredible processing power available even to desktop PCs, we still do not have true A.I. or cyborg technology. Proponents can simply keep pushing the finish line further and further back, claiming the necessary computing horsepower will arrive soon, while ignoring the real problem - the fundamental incompatibility between computing technology and human biology.

So... ummm... you're pretty much wrong on all counts?

Bring on the robo-bodies!

Happy holidays to you too! :p

Edited by Ghost Train

Inserting randomness into your algorithm on purpose to obtain a "blatantly wrong solution", as you say, does not solve the problem of creating hardware + software that processes human consciousness. Otherwise, given the number of crappy outsourced programmers who attended Ranjeev's Technical Institute (specializing in buggy, untested er... I mean highly unpredictable code), we would have created extreme forms of artificial life by now.

Inserting randomness is actually a highly valid approach in many circumstances. Modeling the real world being one of the more obvious.

It's why microprocessors HAVE random number generators.
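For instance, from Python you can reach that entropy through the OS (a minimal sketch; whether the bytes actually originate in the on-chip generator depends on the CPU and kernel):

```python
import os

# os.urandom reads the OS entropy pool; on many modern x86 machines the
# kernel mixes in output from the CPU's hardware generator (RDRAND).
raw = os.urandom(16)                   # 16 unpredictable bytes
print(len(raw), raw != os.urandom(16))  # two draws essentially never match
```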

Sure, you can add some context to the randomness, but that would make the algorithm incredibly complex - requiring vast amounts of time and/or computational power. Since the latter will be difficult to cram into something shaped like a human skull, we will need time... and I want my ronery anime otaku android waifu to have a decent response time.

Yup. Power and space are major obstacles. But when you look at what a supercomputer twenty years ago could do and what a laptop now can do... you start wondering how fast those barriers will come down.

More importantly, this defeats the purpose of algorithms, which is to create an efficient set of instructions to find a solution to a problem. What you suggest will not be efficient, as said code would need to have a large amount of unpredictability, nor will it solve problems well, since it will sometimes just have to pick the wrong solution to be human. Perhaps one day we will have data structures capable of replicating this human imperfection, but for now - algorithms are good for telling computers to calculate stuff, and are only modestly effective at simulating human behavior.

How would that defeat the purpose? If the problem is "generating human-like behavior" then your algorithms SHOULD do exactly that. It can't be picking the "wrong" solution if it does a dumb thing that a real human with a similar background would have done.
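As a toy sketch of that idea (all names and payoff numbers here are mine, purely illustrative), an epsilon-greedy chooser usually takes the best payoff but sometimes deliberately does something else, the way a distracted human might:

```python
import random

def choose(options, payoff, epsilon=0.2, rng=random.Random(42)):
    """Usually pick the best-payoff option, but with probability
    `epsilon` pick at random -- a deliberately "human-like" deviation
    from the optimal choice."""
    if rng.random() < epsilon:
        return rng.choice(options)
    return max(options, key=lambda o: payoff[o])

payoff = {"attack": 10, "defend": 6, "flee": 1}
picks = [choose(list(payoff), payoff) for _ in range(1000)]
print(picks.count("attack"), sorted(set(picks)))  # mostly "attack", but not always
```

Tune `epsilon` up and the agent gets "dumber"; tune it down and it becomes a cold optimizer.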

Consider a weather forecast program.

There's a lot of data that CAN'T be input into such a program because it can't be measured with the limited sensing equipment and computer capacity available. And some of it just looks like random data with no visible cause. Some of it will have known patterns, but some of it will appear as white noise to us, because we simply don't understand the system.

I'm not a meteorologist, so this is admittedly speculation, but I would assume there's actually a good bit of randomization in the computerized weather models. And as paradoxical as this sounds, this can actually IMPROVE accuracy.
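The ensemble idea can be sketched with a stand-in model (the logistic map below is purely illustrative chaos, not real meteorology): run the same deterministic model from many slightly perturbed starting conditions, and look at the spread of outcomes instead of trusting a single run.

```python
import random

def model(x, steps=20):
    """A made-up chaotic 'forecast' model: tiny input changes diverge."""
    for _ in range(steps):
        x = 3.9 * x * (1 - x)   # logistic map in its chaotic regime
    return x

rng = random.Random(1)
measured = 0.5                  # one imperfect "observation"
# Ensemble: rerun the model with small random perturbations of the input.
runs = [model(measured + rng.gauss(0, 0.001)) for _ in range(200)]
mean = sum(runs) / len(runs)
print(round(min(runs), 3), round(mean, 3), round(max(runs), 3))
```

The single "best" run tells you one possible future; the randomized ensemble tells you how confident to be in it, which is roughly why injected randomness can improve a forecast.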

Arguably, the human brain IS a computer running algorithms. They're just very complex ones with large quantities of input, some of it highly randomized.

And, well, it's clearly NOT a classical binary von Neumann machine.

Consider the case of Deep Blue vs. Kasparov: despite defeating Kasparov, Deep Blue had to be tweaked in between rounds to prevent Kasparov from tricking it. Of course, IBM denies this was a form of cheating.

There's a lot of iffiness in the Deep Blue VS Kasparov rematch. And you grossly oversimplify how much reprogramming was done between rounds.

That's also over a decade out of date.

The best chess computers still have great difficulty beating the best human players. My quick and dirty research shows that most such matches end in draws. But they do it without being reprogrammed between rounds, and without forcing the opponent to come in effectively blind.

I do take your point that computers play a very different style of chess than humans. I maintain that it could be coded to play more like a human if the problem was understood and the computer existed that could run the math in any reasonable amount of time.

I actually like this comparison, though it proves my point - you can rewrite and upgrade legacy code relatively quickly (with the most poorly managed projects taking only a decade or so), but you can't change the human hardware, as our brains are fundamentally the same as when we roamed the world as cavemen. The glitchy legacy code is patched through civilization, language, and education.

Well, my point was that there IS a deterministic process at work in the brain. It's just not one that works very well by computing standards.

Ultimately, it comes down to physics. And physics is math.

If the brain was understood, it could be modeled in a computer(as opposed to the more abstract neural networks that exist currently and only model neuron connections and not the whole assemblage). If it was understood well enough, it could be modeled accurately. And if that accurate model was run... you'd have an artificial brain.

And legacy code can only be rewritten if you have the source code and the hardware necessary to read and write the media it's stored upon. This is a bigger problem than you might think. And you still need someone that can actually make heads or tails of that code.

Obviously, we have none of that for the brain.

You would have thought natural selection would have created near-perfect humans by now, after an ice age, countless wars, plagues, etc., yet a quick glance at the internet reveals that our society seemingly glorifies stupidity. Despite society's attempts to fix and patch the human software, people still do really, really dumb things sometimes - proving that computing is a poor analogy for understanding or simulating humans.

Evolution isn't as rapid or all-powerful as you make it out to be. It's stuck choosing between different sets of humans; it can't just design an arbitrary "man 2.0" model and implant it in a new person.

It's also more concerned with the short-term. Who's making the most babies? Which babies are surviving long enough to have their own babies? That's what evolution cares about.

Besides, modern medicine has largely short-circuited evolution. Survival of the fittest no longer applies in a world of corrective surgeries and antibiotics. If someone has a defective heart, we fix it. If we can't fix it, we replace it. They live and pass those genes on.

Anyways, computers do math. Given enough computer, you can solve any possible math.

And like I said, the brain operates according to the laws of physics, and physics is math. Once you know the initial conditions and system constraints, you can program those in and simulate it accurately.

My point here (and I concede the dogs & cats example was poor) was that there is a disconnect between human consciousness and the software & hardware we associate with modern computing. Although there have been advances in creating interfaces - prosthetics, vision & hearing aids - to bridge the gap, there is no prosthetic or substitute to carry out human consciousness.

Meaning if I dump the source code for humanity into a different type of hardware - nothing will happen as my code is not optimized or configured for it.

And my point is that if you know how it works, you can simulate it.

Everything in the universe is math. Some things are just more math than others.

One of many technologies, with quantum computing being another that comes to mind. Still, this does not refute the fact that algorithm-based computing simply will not work for this purpose.

Actually, you're comparing apples and oranges. Quantum computing is a whole new hardware field. Neural networking is a SOFTWARE field.

NNs run on deterministic binary math computers. Yes, some even run on ye olde IBM PC clones.

Not incredibly advanced networks in comparison to organic life, but... it's PROVEN that modern computers CAN simulate small collections of neurons.

It's even possible to simulate small collections of biologically-accurate neurons, to the extent that we know how they work.
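As a sketch of that last claim, here's a minimal leaky integrate-and-fire neuron - far cruder than anything biologically accurate, but it shows plain deterministic arithmetic producing spiking behavior (all constants here are arbitrary):

```python
# A minimal leaky integrate-and-fire neuron: membrane voltage leaks
# toward rest, is driven up by input current, and "spikes" (then resets)
# when it crosses a threshold.
def simulate(current, steps=1000, dt=0.1, tau=10.0, v_thresh=1.0):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + current)   # leak toward 0, driven by input
        if v >= v_thresh:                # threshold crossed: spike
            spikes += 1
            v = 0.0                      # reset after spiking
    return spikes

# Weak input never reaches threshold; strong input spikes repeatedly.
print(simulate(0.05), simulate(0.3))
```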

Despite the incredible processing power available even to desktop PCs, we still do not have true A.I. or cyborg technology. Proponents can simply keep pushing the finish line further and further back, claiming the necessary computing horsepower will arrive soon, while ignoring the real problem - the fundamental incompatibility between computing technology and human biology.

I'd say the issue is one of raw power and understanding of the problem - more so the latter.

But regarding the former... as a rough apples to oranges comparison, there are over a hundred billion neurons in an adult human brain. An Intel Core 2 Quad has some 500 million transistors, each of which is capable of doing far less than a neuron.

Biology can be simulated once it is understood.

Whether we'll still be running binary transistors once we understand it well enough to simulate is a less clear issue.

...

Though some claim they can do it now.

http://bluebrain.epfl.ch/

"With the present simulation facility, the technical feasibility to model a piece of neural tissue has been demonstrated. "

Also:

" In the cortex, neurons are organized into basic functional units, cylindrical volumes 0.5 mm wide by 2 mm high, each containing about 10,000 neurons that are connected in an intricate but consistent way. These units operate much like microcircuits in a computer. This microcircuit, known as the neocortical column (NCC), is repeated millions of times across the cortex. ... This structure lends itself to a systematic modeling approach."

And the reality check is here:

"Our Blue Gene is only just enough to launch this project. It is enough to simulate about 50'000 fully complex neurons close to real-time. Much more power will be needed to go beyond this. We can also simulate about 100 million simple neurons with the current power. In short, the computing power and not the neurophysiological data is the limiting factor."

On one of the fastest supercomputers in the world, they can simulate 50,000 neurons out of 100,000,000,000.

And here's the largely unrelated, but incredibly fascinating part...

"Will consciousness emerge?

We really do not know. If consciousness arises because of some critical mass of interactions, then it may be possible. But we really do not understand what consciousness actually is, so it is difficult to say."

If they get enough virtual neurons firing, and the computer becomes conscious, it'll be just incredibly amazing(though obviously they need a major upgrade to their Blue Gene to even try).

Either way, it's just incredible.

Happy holidays to you too! :p

That actually wasn't meant to be a RUDE "wrong on all counts."

And may the joy of Hanukwanzmasgiving be visited upon you as well.
