Offered again: https://www.biatob.com/p/11788533128982732233

2018.03.24 turing bet with anna (email "yearly turing bet report") done! 
either anna owes dreeves $100, or dreeves owes anna $10k
ANSWER: anna paid dreeves $100!
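As a side note, the $100-vs-$10,000 stakes correspond to 100:1 odds, which pin down the probability at which the bet is break-even for either side. A minimal sketch (function name is mine, not from the original thread):

```python
def break_even_probability(stake, payout):
    """Probability of winning at which a bet risking `stake` to win
    `payout` has zero expected value: p * payout == (1 - p) * stake."""
    return stake / (stake + payout)

# Anna risks $100 to win $10,000, so she comes out ahead in expectation
# iff she thinks Kurzweil's chance of winning by 2018 exceeds ~1%.
p = break_even_probability(100, 10_000)
print(f"{p:.4%}")  # prints 0.9901%
```

In other words, by offering 100:1 Dreeves was implicitly claiming the probability of a Turing test pass by 2018 was under about one percent.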

2010 Anna asks: what’s your take on the extent to which IBM’s Watson constitutes evidence that a Turing test could be beaten by brute force (as with Deep Blue) as opposed to human-like insight?

Dreeves replies: IBM’s Watson project (pitting computers against humans in the game show Jeopardy) looks fun and interesting and even impressive but I see it as pretty much imperceptible progress toward the goal of passing the Turing test. So I guess my answer to the brute-force question is no. In fact I would say no for fundamental reasons. Even for other games, like Go, brute-force seems doomed and the game of “have an open-ended conversation” has a mind-bogglingly bigger search tree than anything like chess or Go.

2017 Anna asks: What are your overall impressions re: AI trajectories (surprisingly fast, surprisingly slow, or just as expected 9 years ago)?

Dreeves replies: Good question! I wouldn’t have predicted such fast progress on Go, and the progress on things like learning to play Atari video games from bare pixels is impressive and surprising. Basically all the deep learning neural net stuff. But fundamentally that’s all incremental progress from what machine learning could do 9 years ago. And it feels like there’s been no progress at all on passing the Turing test. We talked about IBM’s Watson before, how it doesn’t seem to have taken even baby steps toward real understanding of natural language. Maybe baby steps toward the first baby step?

As for my take on AI risk in general, I’ve always been on board in theory with the only question being timelines. I summarized my latest thoughts a year ago and continue to view it as a more and more important problem to work on:

Summary: Applied meta-ethics (as I think of it) is awesome regardless so can’t hurt to dive in!

I’m not exactly scared yet but I view the AI alignment problem as very worthy research now that will, when panic-time comes, be vital. Especially since it’s pretty awesome research regardless. It’s basically formalized meta-ethics, i.e., coming up with a machine-readable encoding of humanity’s utility function. An insanely hard but fascinating and fun problem. So I’m sort of in between the singularitarians and the pooh-poohers. The latter say it’s premature or pointless to worry about it this soon. Maybe it’s academic to worry about it this soon, but in a good way. And it might actually be super important, pragmatically, to be getting a head start on it. Since the stakes are so high (the existence of humanity and all) it’s worth being conservative and treating it as more urgent than it seems.

2018-03-26:
It’s interesting to think about whether I’d make this bet again for the next 10 years. Eliezer and others have definitely shaped my thinking on this over the last 10 years. One thing I’ve become convinced of is that the uncertainty around AI timelines is super high and that I should absolutely mistrust my own intuitions. Also Stuart Russell made me realize that AI can be an existential threat even without passing the Turing test.

So I’m guessing we no longer have much in the way of big wager-worthy disagreements about AI. But I love having bets on the books and would jump at the chance to have more with you!

Email thread between Anna Salamon and Daniel Reeves detailing their wager in 2008:

Date: Sat, 29 Mar 2008 14:39:39 -0400 (EDT)
To: Anna Salamon
From: Daniel Reeves dreeves@umich.edu
Subject: Re: turing test bet

Sounds perfect. Yes, I’ll pay up early (maybe the net present value?) if it looks bleak for me.

(Btw, I agree that Deep Blue’s win was legit. Just unsatisfying to those who predicted that Strong AI would be needed to beat the top human chess players. I could of course be similarly naive about the Turing test, but I’m very confident it’s fundamentally different. After all, I’ve basically fallen head over heels in love via nothing but text chat so I better believe it would take Strong AI to fool me!)

I’ll resend this email to us each year on March 24.

Danny

PS: <virtual hand shake> (just to make it official!)

--- \/   FROM Anna Salamon AT 2008.03.29 02:22   \/ ---

That sounds fairly reasonable. I am good with the $100 / $10,000 amount. My main concern with your proposed details is that: (1) if we make AI smart enough to pass the Turing test, I think there’s a fair probability that the world will be destroyed not long thereafter (whereupon I presumably could not benefit from the bet), and (2) the longbets.org bet does not provide for especially prompt testing.

Would you be up for paying me if at some point it seems 90% likely that the longbets.org procedure would yield a win for Kurzweil, even if the procedure has not yet been done? (Without waiting for 2018.03.24?) I would then pay you back if the procedure turned out against Kurzweil.
 
Like you, I will try to abide by the pornography version of the test and not claim a victory if longbets passes the program for spurious reasons. I am less confident in my ability to do so, mainly because I am less clear on what the pornography version of the test should mean and whether there is a crisp distinction. Did Deep Blue win against Kasparov for spurious reasons, since it used computational power more than insight and also was specially programmed to play chess against Kasparov in particular? I would say “no”. Anyhow, if no program is able to deal with most day-to-day language (in English or whatever it knows) in most contexts, with the basic behaviors associated with “understanding language” and “keeping track of what’s been said”, I’ll pay you.
 
So, my proposal: we’ll decide the bet by the “know it when we see it” version if we both agree about whether we’ve seen it, or the longbets.org procedure if we disagree. You’ll pay me as soon as you’re 90% certain that a procedure like the longbets procedure would yield a Kurzweil victory by 2018.03.24 (e.g., because similar bets have already been won) (with me paying you back if the prediction turns out false); I’ll pay you on 2018.03.24 if the longbets procedure has not yielded a Kurzweil victory.
 
Does that sound good to you?
 
On Tue, Mar 25, 2008 at 3:00 PM, Daniel Reeves dreeves@umich.edu wrote:
 

The Turing Test is tricky. Ie, there are loopholes that make it tricky to pin down a bet. Emulating a dumb human may well be almost as hard as emulating a smart human but distinguishing a dumb/uncooperative human from a very crude emulation of such (particularly over text chat) may be quite hard. In other words, it’s tricky to eliminate false positives. Nonetheless, it’s like pornography — I’ll know it when I see it.
 
But how about we piggyback off of this bet:
http://www.longbets.org/1
I’ll give 100:1 odds against Kurzweil winning that bet by 2018.03.24.

 
To make it exciting enough that we won’t forget (I actually have some infrastructure for keeping track of things like this, so I’ll volunteer to do the reminding), how about you pay $100 on 2018.03.24 if Kurzweil hasn’t won by then, and I’ll pay you $10,000 if he has. I really am confident enough that that seems like a good deal to me.
 
Also, I promise not to do any weaseling and just go by the longbets.org verdict. (That’s the biggest source of risk from my perspective, that longbets could declare Kurzweil the winner for spurious reasons.)

Conversely, I think we can count on longbets.org sticking around that long (it has a lot of famous bets and famous bettors) but I’d also pay up if a program passed the pornography version of the test, which I think I could administer in good faith even with such a large incentive to say “fail”.
 
PS: to remind ourselves for later, the original spirit of this bet is “will we have strong AI in 10 years?”. My claim is that the probability is low enough that it should not be a priority just yet when considering human extinction risks. Ie, I agree we’ll get strong AI eventually but we’re far enough away from it now that it’s not going to “sneak up on us”, ie, it won’t happen in the next 10 years.

http://ai.eecs.umich.edu/people/dreeves - - search://"Daniel Reeves"

 
Latest odds from intrade…
81.0% chance Obama wins nomination (last trade 14:40:54 TUE)
45.0% chance Obama becomes president (last trade 05:44:31 TUE)
71.0% chance of US recession in 2008 (last trade -)