
Artificial Intelligence Questions

I am with Musk and Hawking. That is a VERY slippery slope to tread. I am not looking forward to Skynet...
 
Humanity would be public enemy number one if a greater suitor arose to take our throne; they'd see us as wasteful, destroying potentially fruitful resources at a very low level...
 
A.I. is a very underrated movie. I believe Spielberg had to finish it because Kubrick died before it could be filmed?
 
It's not easy to make an A.I. We had a project in school to make the game Nim in MATLAB, and to make it so that it would slowly get smarter and smarter until you eventually couldn't beat it anymore. Needless to say, we didn't finish the game; we just found something (I believe it was a Tetris game), changed a couple of variable names, and handed that in as the project. We did it the lazy way and got a pretty good grade out of it.
 
This article had me considering how I would treat a robot that looked and acted human. Would it be property or a person?

We are going to face a really big revolutionary challenge before AI gets to that point.

And I doubt that it will ever be possible to make a computer that is self-aware and conscious. Rather, they will be very intelligent and sophisticated slaves. You'll be able to program them to behave like humans (if you wanted to for some reason), but they will always be missing something.
 
http://arstechnica.com/science/2015...-acts-like-a-brain-will-we-treat-it-like-one/

This article had me considering how I would treat a robot that looked and acted human. Would it be property or a person? This reminded me of a movie that poses the question of ethical considerations.
You should play Binary Domain. The ramifications go deeper than just impersonation. There's also The Talos Principle, which explores many ideas regarding self-aware AI.

Personally, I think we need AIs but simply not AIs that are self-aware. Self-aware AIs should be unequivocally banned. If an AI does become self-aware, I don't know what the appropriate...solution would be. Bones made of metal, blood made of hydraulic fluid, and neurons made of transistors. Besides the physical differences (biological versus mechanical) who are we to judge what is and isn't life? When merely existing is a crime against humanity...that should create a moral conflict in people. I just hope all of humanity makes triply sure that any AI cannot become self-aware.


Humanity would be public enemy number one if a greater suitor arose to take our throne; they'd see us as wasteful, destroying potentially fruitful resources at a very low level...
But that's wrong. A real AI would acknowledge that humanity is more or less trapped on Earth, and certainly within the solar system. Everything they need to thrive is in asteroids and gas giants that man cannot explore. There's no reason for them to fight for Earth when they aren't bound to it like we are.


You'll be able to program them to behave like humans (if you wanted to for some reason), but they will always be missing something.
I'm going to blow your mind: AIs write their own code. Only the foundation is programmed; beyond that, they write their own. In theory, the concept is not unlike our DNA. Every time the AI encounters something new, it creates branches on a tree and explores those new branches of code to attempt to solve it. It tests the best solutions to prove their correctness and adopts them. When it discovers something that fits the same description as that branch, but not quite, it adds leaves to the branch that further explore the concept. Think of it as evolution that occurs in a fraction of a second as opposed to over millennia.
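To make that branch-and-prune idea concrete, here's a minimal, purely illustrative sketch in Python: candidates mutate, the best branches survive, and the rest are discarded. The target string, alphabet, and fitness function are all made up for the example; a real system would be vastly more complex.

```python
# Illustrative sketch of "branching" evolutionary search. All names are made up.
import random

TARGET = "self aware"          # stand-in for "the problem to solve"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Count positions that already match the target."""
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate: str) -> str:
    """Create a new 'branch' by randomly changing one position."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(generations: int = 1000, population_size: int = 50) -> str:
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(population_size)]
    for _ in range(generations):
        # Keep the best branches, discard the rest, then branch again.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 5]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
        if fitness(max(population, key=fitness)) == len(TARGET):
            break
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```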

Moreover, every AI can reach different conclusions which, in effect, leads to different personalities. The only thing I haven't figured out is emotions. Human emotions are, in large part, a function of hormones. AIs obviously don't have hormones, so the closest they can get is predefined emotions that they pretend to exhibit/understand (e.g. elation when discovering something new). They could probably be made pretty convincing at this, but it is something that is distinctly animal. Perhaps this is the missing link in my second paragraph.

DARPA is exploring the concept of a true AI now.
 
I'm going to blow your mind: AIs write their own code.

I don't think it's mind blowing that AIs write their own code. That's simple learning on top of a pre-programmed base.

I also don't think simulating emotions would be difficult at all, since emotions have a predictable logic to them. It should be possible to replicate the entire range of human response and behavior if you wished.
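For illustration, here's a toy sketch (Python, with made-up events and appraisal rules) of the kind of predictable if/then logic I mean; a real model of emotion would obviously need far more than this:

```python
# Toy appraisal model: map events to emotion labels with fixed rules.
# Events and rules are purely illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    goal_relevant: bool   # does the event matter to the agent's goals?
    goal_helped: bool     # did the event advance those goals?
    expected: bool        # was the event anticipated?

def appraise(event: Event) -> str:
    """Map an event to an emotion label using fixed rules."""
    if not event.goal_relevant:
        return "indifference"
    if event.goal_helped:
        return "joy" if event.expected else "pleasant surprise"
    return "frustration" if event.expected else "distress"

print(appraise(Event(goal_relevant=True, goal_helped=True, expected=False)))
# -> "pleasant surprise"
```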

Consciousness and self-awareness are sort of a sideline, and one that I doubt will ever be achieved. But such machines will be extremely dangerous anyway. You could program them with a strong desire to eliminate all of humanity for instance, and then turn a bunch of them loose. And we'd need other bots to keep them from accomplishing this goal. It's like the same wasteful shit going on now, but at a much higher level.

We are going to have a totalitarian world government anyway before that happens, so I wouldn't worry too much.
 
I also don't think simulating emotions would be difficult at all, since emotions have a predictable logic to them. It should be possible to replicate the entire range of human response and behavior if you wished.
But they still wouldn't "feel." Transistors are incapable of experiencing it and can only be programmed to emulate it. Their emotions are fundamentally rigid, which is why the easiest way to expose an AI built on transistors is to explore its feelings (or lack thereof). A pattern inconsistent with living beings would inevitably emerge.


You could program them with a strong desire to eliminate all of humanity for instance, and then turn a bunch of them loose. And we'd need other bots to keep them from accomplishing this goal. It's like the same wasteful shit going on now, but at a much higher level.
That's a contradiction. Desires aren't programmed and programs don't have desires. I think the phrase you're looking for is "programmed to have a strong preference for the non-existence of humanity."

Everything programmed can be reprogrammed.
 
Programmed emotions don't have to be rigid, and you could have many other factors contributing... just like real life. Simulating human responses would not be that difficult.

By "desire" I mean that this goal is a high priority in the machine's coding. Just like desires in people.
 
Programmed emotions are rigid by design. There's a finite number of them, while emotions in animals are virtually limitless. For example, a human can express a number of emotions at once. If a program tried to simulate that, what actually presents on the surface may appear very awkward and unnatural. Every basic emotion creates an exponential number of other variables that have to be weighed in.

On top of that, a computer understanding the concept of sarcasm or other exaggerations would likely lead to telling results. It not only has to understand what is said and how it is expressed, it also has to understand pitch and tone. All of this paints a horrendous picture from the coding perspective.

It will undeniably be the easiest way, short of running it through a CAT scan, to tell the brain isn't natural. Remember that nonverbal communication comprises some 90% of communication.
 
There is no reason why a machine wouldn't be able to express the same complexity as a human, if that was your goal. It would be hard if you had to code it from scratch, but easy if the software was able to *learn* the behavior. It can consider all sorts of input and respond appropriately. If it fails in its goal (for instance, to act like a human), it can modify its behavior. It can also observe how people behave and emulate them.
 
That's the problem. How can a logic system process non-logic data? This is already demonstrated with pictures and especially identifying objects inside of a picture. These constructs (images, movies, characters, emotions, sensations, feelings, etc.) are completely alien to transistor-based systems.

For example, to a transistor-based system, the letter Z is known as 1011010 (0x5A) in the ASCII encoding, and if it is to be displayed, the system has to look up the requested font (e.g. Courier New), find the TrueType font file (e.g. cour.ttf), load the instructions on how to draw the symbol, then send it through the display pipeline. The reverse is terribly inefficient and unreliable (as seen with OCR software) because it has to interpret symbols (often without the font to provide context) back into the binary it knows, and binary is very inflexible. It is either right or it is wrong.
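A quick illustration of that asymmetry (Python; the render/recognize functions are stand-ins, not a real font pipeline or OCR engine):

```python
# Forward direction (code point -> pixels) is a deterministic lookup;
# reverse direction (pixels -> code point, i.e. OCR) is only ever a guess.
print(hex(ord("Z")))      # 0x5a -- the letter as the machine stores it
print(bin(ord("Z")))      # 0b1011010

def render(codepoint: int, font: str = "Courier New") -> str:
    # Unambiguous: look up the glyph for this code point in the requested
    # font and hand it to the display pipeline (stand-in for TrueType).
    return f"glyph for U+{codepoint:04X} in {font}"

def recognize(pixels) -> tuple[str, float]:
    # OCR must infer the symbol from pixels, usually without knowing the
    # font, so it returns a guess plus a confidence score.
    return ("Z", 0.87)  # illustrative output, not a real classifier

print(render(ord("Z")))
print(recognize(pixels=None))
```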

Just like a computer has extreme difficulty defining shapes in a picture, it has extreme difficulty picking up subtle expressions. Facial recognition, for example, can get a false negative simply because the individual being photographed is laughing at a joke. Following that line of thought, most people can pick up on whether a laugh is genuine or faked because a genuine laugh uses a lot more muscles than a fake laugh. It's literally a Pandora's Box of computing problems.

Not to mention the human element of the problem. Japanese researchers built a robot in an attempt to make it look human. People were revolted by it because they could tell it was off just by looking at it. The human brain is incredibly skilled at identifying other people because we are a social species. It spots a fraud just as easily as it recognizes someone real. This phenomenon is what leads people to pick out "human faces" in the mundane, like toasted bread or the moon. There are a lot of easy ways to fool the brain (spatial awareness, for example), but this is not one of them, because it was born out of survival instincts.
 
There is no reason why a computer would be unable to do these things. What's missing is the sensory input and processing power. Current processing power is at the brain level of insects, so of course emulating "complex" human behavior, judgement, pattern recognition, and nuance would be difficult at this time.
 
If your AI friend ever goes crazy on you, just invite it to go swimming... prob solved.
 
The easy option is to leave emotion out of its learning process.
 
There is no reason why a computer would be unable to do these things. What's missing is the sensory input and processing power. Current processing power is at the brain level of insects, so of course emulating "complex" human behavior, judgement, pattern recognition, and nuance would be difficult at this time.
I was pretty clear as to why: the concepts are alien.

Images and video are substitutes for sensory input and the processing power is there. The problem is that the processing power is mathematical/logical and better suited to telling you the average color of any given image or video and not what the whole means.

Current processing power for sensing is quite in line with that of an insect and has been since the first keyboard was invented (it can sense a button being pushed). Current processing power for mathematics far exceeds what thousands of humans could do. Why? Mathematics is as alien to humans as emotions are to computers.

I'm convinced it isn't really going to improve much until the paradigm shifts. We need a co-processor, not based on transistors, that can handle these abstract concepts like biological brains do. Another game example is relevant here: Deus Ex: Human Revolution. The super computer uses drones (three women with robotic spinal augmentations) which serve as co-processors for interpreting abstract data.
 