
Computer Passes Turing Test for Artificial Intelligence

I wish the site would work; I've been trying for a few days now.

I really wanna talk shit to the bot and see how it reacts.
 
I've seen some people talk crap to the bot, and it becomes pretty obvious that it's not a thirteen-year-old human being, but I believe that knowing it is a bot (or even suspecting it could be one) invalidates the test.
 
It's easy to beat: just say "yes" or "no" over and over again and it starts falling back on canned responses.
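
For anyone curious what that probe looks like, here's a minimal sketch in Python. The Chatbot class is a toy stand-in I made up for illustration; the real Eugene Goostman demo was just a web form with no public API.

Code:
import collections

class Chatbot:
    """Toy stand-in: a couple of pattern rules, then canned fallback lines."""
    FALLBACKS = ["Interesting. Tell me more.", "Why do you say that?"]

    def __init__(self):
        self._turn = 0

    def reply(self, text):
        if text.lower().startswith("hello"):
            return "Hi! I'm a thirteen-year-old boy from Odessa."
        # No rule matched: cycle through the canned fallbacks.
        self._turn += 1
        return self.FALLBACKS[self._turn % len(self.FALLBACKS)]

def probe(bot, message="yes", rounds=10):
    """Send the same message over and over; count the distinct replies."""
    counts = collections.Counter(bot.reply(message) for _ in range(rounds))
    return counts.most_common()  # a short list means canned responses

print(probe(Chatbot()))
# [('Why do you say that?', 5), ('Interesting. Tell me more.', 5)]

A human asked the same thing ten times would get annoyed or change the subject; a rule-based bot just loops.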
 
I think that's the point for me: how can a program be said to be thinking if it's programmed with the appropriate responses? Nice program, but AI? I think not.
 
I think that's the point for me: how can a program be said to be thinking if it's programmed with the appropriate responses? Nice program, but AI? I think not.

That's not the point of a Turing test.
 
What is the point then? Alan Turing said any computer that passes it could be said to be thinking.
 
30%? That bar is way too low. In a blind test, the minimum should be no less than 80%; at 30%, the majority of judges could still tell it was a robot. A true AI may have some tells, but the average person who hasn't studied it shouldn't be able to figure it out.

Case in point: getting the same response back repeatedly in a conversation would often lead a human to drop the subject for a while. This software doesn't do that.
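
For what it's worth, the 30% figure comes from Turing's own 1950 prediction that an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning. Eugene Goostman reportedly convinced 10 of 30 judges. Comparing that reported result against both bars is just arithmetic:

Code:
judges, fooled = 30, 10   # figures reported for the 2014 Royal Society event
rate = fooled / judges

for bar in (0.30, 0.80):
    verdict = "pass" if rate > bar else "fail"
    print(f"fooled {rate:.0%} of judges -> {verdict} against the {bar:.0%} bar")
# fooled 33% of judges -> pass against the 30% bar
# fooled 33% of judges -> fail against the 80% bar

So the event's "pass" claim scrapes over Turing's bar and falls far short of the one proposed above.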
 
Sorry for passing along a bogus story people. :oops:
 
I suspect DARPA has a prototype AI stashed somewhere for combing through all the information the NSA is collecting, but that's not something that would be discussed publicly.
 
I don't think anyone has linked to the bot itself. It took me two or three lines before it was quite obvious it was a chatbot. Seriously. Go talk to 'him'.
 
I don't think anyone has linked to the bot itself. It took me two or three lines before it was quite obvious it was a chatbot. Seriously. Go talk to 'him'.

Yeah, I was finally able to get on the site this morning, and what a disappointment it turned out to be! I asked it if it believed in creation, thinking that would be an unlikely question for it to be programmed for, and it answered that it questioned the legality of that.
 
Those are the filters that pull data aside, not the backend that actually analyzes it. The Narus systems, for example, would separate out every conversation containing the word "bomb." An AI would sort through the needles that were set aside, look at the context in which the word was used, and, if a conversation was determined to be of high interest, look for trends and try to compile a list of details. If the details were sufficient to be actionable, it would send them to the Department of Justice to investigate. If not, it would hold on to them for future reference. Narus may do some of these things, but certainly not all.

See CALO (a DARPA program that ran from 2003 to 2008 and ultimately led to technologies like Siri in iOS). DARPA's latest effort to build a true AI was publicly announced last year. As I said, they may already have a working prototype somewhere.
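
To make the filter-versus-backend distinction concrete, here's a rough sketch of that two-stage pipeline in Python. Every keyword, name, and score here is invented for illustration; it's not based on any actual Narus or NSA system.

Code:
# Stage 1 is the dumb keyword filter described above; stage 2 is the
# smarter backend that weighs context before anything gets escalated.
KEYWORDS = {"bomb"}                                  # illustrative only
BENIGN_CONTEXT = {"photo", "movie", "game", "da bomb"}

def prefilter(messages):
    """Stage 1: crude keyword match, like the filters that pull data aside."""
    return [m for m in messages if any(k in m.lower() for k in KEYWORDS)]

def score_context(message):
    """Stage 2: look at how the keyword is used; discount benign contexts."""
    hits = sum(phrase in message.lower() for phrase in BENIGN_CONTEXT)
    return 0.0 if hits else 1.0  # toy score: 1.0 = high interest

def triage(messages, threshold=0.5):
    actionable, held = [], []
    for m in prefilter(messages):
        (actionable if score_context(m) >= threshold else held).append(m)
    return actionable, held  # actionable -> investigate; held -> keep for later

msgs = ["that movie was da bomb", "the bomb goes off at noon"]
print(triage(msgs))
# (['the bomb goes off at noon'], ['that movie was da bomb'])

The filter alone flags both messages; only the context pass separates slang from something worth escalating.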
 
Yeah, I was finally able to get on the site this morning, and what a disappointment it turned out to be! I asked it if it believed in creation, thinking that would be an unlikely question for it to be programmed for, and it answered that it questioned the legality of that.
The bot seems to use a lot of canned responses and tends to ignore almost anything you ask it. I was disappointed as well.

Those are the filters that pull data aside, not the backend that actually analyzes it. The Narus systems, for example, would separate out every conversation containing the word "bomb." An AI would sort through the needles that were set aside, look at the context in which the word was used, and, if a conversation was determined to be of high interest, look for trends and try to compile a list of details. If the details were sufficient to be actionable, it would send them to the Department of Justice to investigate. If not, it would hold on to them for future reference. Narus may do some of these things, but certainly not all.

See CALO (a DARPA program that ran from 2003 to 2008 and ultimately led to technologies like Siri in iOS). DARPA's latest effort to build a true AI was publicly announced last year. As I said, they may already have a working prototype somewhere.
I saw something on Imgur about these systems having trouble detecting sarcasm. Imgur responded by wrapping a lot of their comments in sarcasm tags. <sarcasm>I bet the NSA was really happy to hear that.</sarcasm>
 