H.I. No. 52: 20,000 Years of Torment

From Podpedia
"20,000 Years of Torment"
Hello Internet episode
Episode no.: 52
Presented by: CGP Grey, Brady Haran
Original release date: November 30, 2015 (2015-11-30)
Running time: 2:05:14
Episode chronology
← Previous
"Appropriately Thinking It"
Next →
"Two Dudes Counting"
List of Hello Internet episodes

"H.I. #52: 20,000 Years of Torment" is the 52nd episode of Hello Internet, released on November 30, 2015.[1]

Official Description

Grey and Brady discuss: subvocalization, Liberian county flags, Brady worries about who is driving him and the proliferation of screens in cars, Brady fingers ballots and then brings up coincidences and dreams, Brady tries to convince himself to buy an iPad Pro, what cars should people drive, before finishing off with how artificial intelligence will kill us all. (If we're lucky.)

Show Notes

Fan Art
Flowchart
Summary
Transcript
I tell you what, it is one of the great myths of Hello Internet and CGP Grey folklore that you are competent and have technical ability. The last show certainly sparked some conversation in the Reddit. I couldn't help but notice that it was a show that reached a thousand comments. People talking about when server's mem is appropriate. People talking about subvocalization, with many minds blown. Lots and lots of discussion from the last show. There was, there was, and I'm sure we'll come to a few of the other things in follow-up. But on this subvocalization thing, a lot of people seemed really interested in it. And it is very interesting. But I don't feel like I have anything else to say. What about you? The thing that I left out of the conversation last time, which people were picking up on a little bit in the subreddit, was that I came across subvocalization in the context of: this is not a thing that you should do if you are a well-developed reader. That this is a hindrance. This is something that you do when you first learn to read, when you are a child, but that by the time you become a man, you should be able to look at words and understand them without hearing a little voice in your head reading the words to yourself. There were a lot of comments on that point. But my only follow-up is, when I came across this, I thought, oh, okay, well, this is very interesting. Let me see if I can get rid of this subvocalization. And there's a whole bunch of things that you're supposed to do. And my experience with them has been a total failure. Like, there are exercises that you're supposed to do where you're listening to a recording of a voice that's counting up in numbers, one, two, three, four, five, and trying to read while doing that, so your brain learns to not use the audio part of your brain for this. And I tried that. And the result was I was just incapable of reading.
The one that I thought was the most interesting was, there's a bunch of software out there, which I don't know if you've seen this, Brady, but it does this thing where it flashes words individually on the screen from an article. So instead of saying, here's an article that I want to read and it's written normally, it just flashes all of the words in sequence in the center of the screen. Have you ever seen something like this? Yeah, I have, very briefly. I'm vaguely familiar with it. Yeah, I believe Instapaper on the phone has it built in, but there's a few websites where you can paste text and do the same thing. But one of the ways in which you're supposed to train yourself out of subvocalizing is by using something like this cranked up to a ridiculous speed. And so, okay, well, let me try this. But it was almost comical, because no matter how high I cranked it up, to where it's like 500 words per minute, I'm just hearing a faster voice in my head. It's like there's no point at which I can still understand it and not also hear a narrator. It was a bit like when I edit this podcast and sometimes accidentally send it to you in a fast-forwarded mode, where we're talking, you know, two times faster than we normally do. So I've tried a bunch of the get-rid-of-subvocalization stuff and none of it seems to work for me at all. I'm just not sure that it can be gotten rid of. I guess the question I have is, what's the difference between you subvocalizing and if I was sitting next to you in bed reading the book to you? Wait, wait, wait, wait. I think there's a big difference between those. No, hang on. I'm sitting next to the bed. I'm not in bed with you. I'm just on a chair next to the bed. Oh, yeah. Yeah, this is way less weird. Yeah. You've got your hot chocolate and you're getting ready to go to sleep and you're like, Brady, can you read me a story? Uh-huh.
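The word-flashing software Grey describes is often called RSVP (rapid serial visual presentation). A minimal sketch of the timing logic, with printing standing in for drawing the word to the center of a screen (the function name and defaults here are illustrative, not any particular app's API):

```python
# Sketch of an RSVP-style ("one word at a time") reader.
# Printing stands in for rendering each word to the screen.

import time

def flash_words(text, wpm=300, sleep=time.sleep):
    """Show one word at a time at a fixed words-per-minute rate."""
    delay = 60.0 / wpm  # seconds each word stays "on screen"
    shown = []
    for word in text.split():
        print(word)
        shown.append(word)
        sleep(delay)
    return shown

# Cranked to 500 wpm, each word is visible for only 0.12 seconds:
flash_words("there is no point at which I stop hearing the narrator", wpm=500)
```

Note that even at 500 wpm the per-word delay is a fraction of a second, which is why Grey reports simply hearing "a faster voice" rather than losing the narrator.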
Like, is that basically what's happening? You're reading yourself a story. It seems really similar to that. Or if I was reading you the story, are the words then coming into your head and you're then reading them to yourself again for you to think about it? No, there's no unnecessary level of thought. There's no doubling up, right? I'm not hearing it twice. Maybe this is the best way to think about it. Like when we're talking now, right, you and me are talking, neither of us are thinking about the thoughts, right? Like, you don't know how you speak, right? Words just appear. This is how this happens, right? Yeah. And so when I ask you a question and then you answer me, right? You are using a voice, but you're thinking the thought at the same time that you're speaking it. And for anyone who's done something like a podcast, where you speak for a very long time, and I'm sure, Brady, you've had the same experience, sometimes you say something and you think, wait, do I actually think that? I'm not sure that I do think that, right? Because it's just like a stream of thoughts coming out of your mind, right? Mm-hmm. Yeah, have you ever had that experience, where you say something and you think, do I think that? Pretty much every time I speak. There we go. So in the same way that you talking out loud is the same thing as you thinking, it's just like that for reading. It's almost like if someone put duct tape over your mouth, because you weren't able to speak, that would impair your ability to think. That's kind of what it is internally. I did read that when they're doing experiments on subvocalization, they put little sensors on people, because you are almost imperceptibly reading to yourself. Like, they can sense movements in your tongue and your lips and stuff. So you literally are kind of reading out loud. Yeah, I would be really curious to know if that was the case for me. Like, as far as I know, I sit silently and I don't.
I'm not moving my lips or my tongue, but I have seen these things saying, like, oh, you can under the right circumstances measure that there's still electrical impulses going to someone's vocal cords when they're doing this, even if there's no external sign that they're reading out loud. But I guess your analogy of you reading me a bedtime story just really threw me off. I think perhaps the most straightforward way to describe it is that me reading a book out loud to myself and me reading a book silently to myself are not very different experiences. Oh, really? That is weird. Human brains are weird. Well, some are. I don't know how you read. I don't understand how you read if that's not the experience that you have. And you are, like, imagining things too? Like, you are picturing the scene, obviously, and, you know, you're imagining the mountains and the Hobbits and... Yeah, I have the same, I mean, this gets really weird, right? Like, when you think of something in your head, you can see it, right? But where are you seeing it? I still have that going on. Like, I'm imagining the scene that unfolds in, say, a fictional book, right? That definitely takes place. But it really is just like there is a narrator talking over the whole thing. But so do you just have a scene silently playing in your head when you read? No, it's just... It's in another realm. It's in a realm where voices don't exist. It's like... It's your thought. It's your consciousness. It's that infinitesimal point in the center of your brain, where everything happens that you don't understand, but it's just the place. And there's no... I don't know. Like I said last time though, there's a collapsing of the wave function. As soon as I think about thinking, everything becomes words and everything becomes pictures. But it's only when I think about thinking. That's why I think the same thing is happening to both of us, and you're just incapable of getting lost in it.
And you're always thinking about it. So you're always collapsing the wave function and thinking about the words and the pictures. I know this is wrong and there are studies in terms of... No, no, no. And it's arrogant for me to think everyone thinks like me. But that's just what it feels like to me. It feels like we all do it. Because as soon as I try to think about it, as soon as I talk to you about it, suddenly I am reading to myself and everything is a lot more simple and basic. But that's just because I'm analyzing it. I just think you're analyzing it too much. I think you do get lost in reading and thinking, and it's only when you stop and think about it that it all collapses into this really simple thing. Yeah, this is exactly what a non-subvocalizer would think. Yeah, right. How can I argue with that? How can I argue with that? And I could say, of course you would say that, because that's what a CGP Grey would say. And then you can't argue with that. Right, of course. We're fast getting into the realm of inarguability. But the reason why I do think that you're wrong is because, from the descriptions, I genuinely wish that I could read in this way that didn't have internal words. It seems like it's a much better way to read. But I am always aware of the narrator. The narrator is never not there. Mental images can be in addition to the narrator, but the narrator is always there. I can do the thing everyone can do, where you can imagine a dog, and in your brain somewhere there's a picture of a generic dog that pops into your head, without hearing someone also go, dog. I can have thoughts without a narrator, but reading without a narrator is not possible. But I would still say that I think the vast majority of my thoughts do have some kind of narrator and that the picture part of it is much rarer. I have to more consciously imagine the dog for the narrator not to be a thing that happens. And I do realize there are academic studies into this.
That's another reason I'm wrong. This is a field of study. I can sit here and be an armchair expert, but I do realize there is a thing. I would be curious, in the subreddit, if anybody has any other recommended techniques besides the listen-to-something-spoken-while-you're-reading or the one-word-really-fast things. I'm open to trying other methods to get rid of the habit of subvocalizing. But everything I have tried so far has been hilariously unhelpful. Do you know what? I haven't told you this yet. What? But I've been buying up stamps and all sorts of merchandise with the Liberian county flags on them. Have you really? Yeah. I've even, just today, got an envelope that was sent during the Liberian war or something, with one of the stamps on it that's been postmarked in Liberia, and I'm loving it. Loving it. I'm getting really into stamps and postcards and that whole world of mail and stuff. I think I'm becoming a fully fledged nerd. Like, the one nerdy thing that I didn't do was stamp collecting, and I think I'm going to get into stamp collecting. You know, there is a whole world to get into with stamp collecting. I know. I mean, obviously I've already started with my crash mail. But now... Yes, the crash mail you so proudly showed me last time I was there. I'm going to have a whole bunch of other Liberian stuff to show you next time. Oh boy. There was a thread on the Vexillology subreddit. Very often on there they do redesign projects, which I actually think are some of the most interesting things that appear on that subreddit. Sometimes they'll just do flags in a particular theme, like Canada-ize every nation's flag, so you make a Canada version of all the flags. But sometimes they just do a straight-up redesign. And so someone who actually listened to the show, a fellow named Dunian, redid all of the Liberian county flags. And I will put the link in the show notes.
I am very impressed with this redesign. And I think the redesign is really interesting, because I can't figure it out: I look at the redesign, and these flags are still very, very busy flags, but I like them all. But I wonder if it's because my brain has already fixed its point of reference as those horrific, horrific original flags, and so my brain is going, oh, these flags are much better than those old flags. I feel like I have a hard time seeing them objectively. But I think they are very interestingly done redesigns. Do you know what my problem is with all these redesign competitions and things like that? Because of these rules of good flag design and this kind of accepted style and grammar of the time, all the flags begin looking a bit the same. And I always think that, and that's one of the things I like about the Liberian county flags, if I can like anything about them: it's that they're different. It's so refreshingly different. And isn't that a great thing about some of the wacky flags, whether it's something really crazy like Nepal or something that's just a bit different like Brazil, for example? Like, if you didn't have those points of difference, flags would be the most boring thing in the world. You need some of the crazy guys to make flags work. And I think whenever you have these little competitions where people say, let's imagine we didn't have the crazy guys, let's make the crazy guys the same as all the other guys, all of a sudden flags become really dull. So I always think it's a bit unfortunate when people have these little "let's take the wacky flag and turn it into all the other ones" projects, and it just leaves me cold. Like, if you're going to make a new flag, okay, make a new flag and make it good and follow the rules of design. But there's something about all these "if only this crazy flag was like all the other ones" moments. People don't get it. They just don't get it. All right.
I am more sympathetic to your point than you might think that I am, Brady. The thing that I think complicates this is that you and I are looking at it from the perspective of flag connoisseurs, potentially professionals who help other nations develop their flags. Right. This is our perspective. So we see many, many flags. People send us, on Twitter and on the subreddit, many more flags. We've seen a lot. Yeah. And so I think from that perspective, the more unusual becomes more valuable, like a welcome respite from the sameness of every single flag. It feels like, oh boy, isn't this quite a relief? And I think this is something that you can see sometimes with people who are professional critics in any field. Sometimes we're professional flag critics. Yeah. We do earn money by criticizing flags. I guess we are professional flag critics. Yeah. Quick, someone add that to the Wikipedia pages. Well-known professional flag critics. Touted in some circles as potential advisors to the government of Fiji. Right. But so I think that's why, like movie reviewers, you know, sometimes the movie reviewers that you follow will occasionally like movies where you feel, God, how could they possibly like this terrible, low-budget, awful indie movie? And I think it's a bit of the same thing, where they're like, man, it's just so interesting to see something that's different, right? Even if it's not great. But the thing with flags, and the reason why I will still push back against you on this, is that I think a vital part of a flag is not just its uniqueness; it's that the people who live under that flag should want to put that flag on things that they have. So I feel like everybody should have a flag that they can attach to their backpack, right? Or that they can fly from their house. Everyone should have that. And so the original Liberian county flags.
If you lived in one of those counties and you were super proud of it and you wanted to demonstrate that to the world, you had a terrible, terrible choice of flag. So that's why I'm going to push back on you: I think everybody deserves to live under a flag that they can proudly fly. Have you yet seen, because I have not, have you yet seen anyone from Liberia, or anyone who lives in any of these counties, criticize the flags and say they don't like them? Because, I mean, you and I have had a right old laugh, and we see everyone on Reddit having a laugh and saying these are the worst flags in the world. But it's entirely possible the people of River Gee County think that their flags are awesome. It's got to be, it's got to be River Gee County. River Gee. You told me just to say it and go with it. So I did, and now you're stopping me. Yeah, you've got to own it, Brady. You've got to push back. Well, I thought I did own it, and you can... I would never want to just give you a hard time. No. But I mean, maybe they do, maybe they're incredibly proud. And if we were saying these things on a podcast in Liberia, we'd be tried for treason. I mean, this is the part where I have to admit that I know almost nothing about the great nation of Liberia. Except you know that River Gee County is pronounced like that. I definitely know that. Yeah, yeah. I'm an expert in pronunciation for Liberian counties. But yeah, so I don't know. So I have to start calling you C.G.P. Greater? No, but it's with the double E. Don't you know pronunciation rules? No, I don't know them either. And it's because nobody in English knows, because English doesn't have any pronunciation rules. English just likes to pretend that it does. I do know that River Gee County has a place called Fish Town. So I think that's awesome. Although it does seem to be landlocked, but I guess they have freshwater fish. Or it's just a great name. Yeah.
But so I have seen neither proponents nor opponents of the Liberian county flags who are from Liberia. So I have seen no feedback on either end. And my guess is this is a lot like the city flags in the United States, which is that most people just don't have the slightest idea what the flag of their local city is. This is normally one of those times when I would make a comment like, oh, we're going to be hearing from everyone from Liberia. But I don't imagine that we're actually going to get a lot of Liberian feedback on this one. This episode of Hello Internet is brought to you by Igloo. Now, many of you might be working at a big company with an intranet that is just a terrible, terrible piece of software to work with. I mean, actually, is it even really a piece of software? It feels much more like it's a bunch of pipes connected to old computers held together with duct tape. Most intranets are just awful. I used awful intranets at my school, but Igloo is something different. Igloo is a feeling of levity compared to other intranets, because Igloo is an intranet you will actually like. Go to igloosoftware.com slash hello and just take a look at the way Igloo looks. They have a nice, clean, modern design that will just be a relief on your sad, tired eyes compared to the intranet that you are currently working with at your company. But Igloo is not just a pretty face. Igloo lets you share news, organize your files, coordinate calendars, and manage your projects all in one place. It's not just files in a bucket, either. Their latest upgrade, Viking, revolves around interacting with documents: how people make changes to them, how you can receive feedback on them. If you're the man in charge, there's an ability to track who has seen what across the intranet. So you can have something like read receipts in email, where you know if everyone has actually seen and signed off on whatever document they need to.
So if your company has a legacy intranet that looks like it was built in the 1990s, then you should give Igloo a try. Please sign up for a free trial at igloosoftware.com slash hello to let Igloo know that you came from us. So the next item I want to talk about is Uber. We just did Uber last week. And the week before. Yeah, we're going to have an Uber corner at this rate. We are. I just did have a moment after we'd spoken about it, because I caught, I think, three Ubers in a short space of time. And the first person who drove me across San Francisco, I was saying to him, oh, you know, where are you going next? And he said, I've got to go to work, I'm actually a bartender. And then the next girl who picked me up, who took me to the next place, was in a hurry as well, because she actually wants to be a singer in a band, and she was auditioning that night. And then the next person who drove me to the next place was a mom who was picking up her kids from soccer practice after she gave me a lift. And it suddenly occurred to me, and I know this is kind of true for taxi drivers, but it seems even more the case with Uber: who on earth is driving me? Like, who are these people driving me at 70 miles an hour along highways who could kill me with the turn of a steering wheel? And they're just this random selection of people. And their only qualification is that they have a mobile phone. And they have a driver's license? Well, I didn't see their driver's license. I'm assuming they went through some process to prove that. The driver's license process is very rigorous, very rigorous. Okay. They have a driver's license. And suddenly it just occurred to me: I know nothing. I mean, has this person had 30 car crashes? I don't know. Like, I still like Uber. I still think it's cool. It really won me over, but there are a few moments.
I think I'm quite sensitive to it, especially since we spoke earlier about the terrible car crash that that mathematician, John Nash, died in when he was coming back from the airport. And that was a taxi. That was a taxi crash. Right. But ever since then, especially when I'm in America driving from airports along highways, I'm always very conscious that my life is in other people's hands. Much more so than when I fly. Yeah. Probably because I can see the person driving using their mobile phone and stuff. Yeah. And I think driving in America is scarier. Yeah. Like, I Uber most of the time just around London, and there I'm aware, like, okay, even if we get into a car crash, how fast can we possibly be going in a head-on collision? Exactly. London. London traffic. Yeah. Whereas in America, you have big stretches where you can get up to 70 miles an hour, and then you have a head-on collision with somebody else going 70 miles an hour in the other direction. Right. Driving in America is definitely more of a dangerous experience. Also, the fact that Uber is such a mobile-phone-oriented platform: the Uber drivers, even more than taxi drivers, always seem to be attached to their phone. They're always using the maps, they're always using the apps. They're very phone-obsessed. And I think mobile phones are very dangerous in cars, and I'm very conscious of how often they're looking at their phones, and they've got a map sitting in their lap and stuff like that. I think I actually said to one of the drivers, do Uber give you some... have Uber built something into their app where you can't use the phone while you're driving? Because you guys are just always on your phones. No, no, no. There's nothing like that. This again is the interesting difference of how things are around the world. Because, again, at least in London, the phones that they get are only usable for Uber. And they are issued by Uber.
Factory-installed iPhones that run Uber and nothing else, which is why in London almost all of the drivers have, hilariously, at least two and sometimes three phones attached to their dashboard. Precisely because the Uber phone can only be used for Uber, they bring up other stuff on the other phone. So they'll have two different pieces of software for routing the directions: they'll load it up on Google Maps and something else. But so I'm always aware of this many, many screens phenomenon at the front of the cars. And it's extra funny when whatever car they're using has a built-in screen that they're obviously not using, because their phone screens are just superior. So it's like, actually, there's four screens at the front of this car. Yeah. It's like, okay, you've got the Uber phone, you have your secondary GPS, you have what is obviously your personal phone, and the built-in screen in the actual car itself. That's a lot of screens. The other thing that came up time and again when I was talking to Uber drivers was this rival app called Lyft. Yeah, now this is something I've never used, because I believe it's only in the United States. I don't think it's in the UK. Okay. But I've always gotten vaguely the impression that, like, Lyft is for hippies. Like, it's a shared ride-sharing kind of thing. Oh, okay. I didn't get quite that impression. But they used to have, like, pink moustaches on the front of their cars. Okay. This is the kind of company that it is in my mind. I have no idea if this is true. Most of the drivers were using both Uber and Lyft simultaneously. And they all preferred Lyft. And they gave me a few reasons. One of the big reasons was the ability for passengers to tip. And I did you proud, Grey. I did you proud. I gave them a real hard time about that. And I told them why I didn't like that, for the obvious reasons, you know: it recreates the tipping culture and you start getting assessed based on your tipping.
But actually what they told me, and I was told this a few times. I haven't checked it myself, but I was told it a few times. The tipping actually works in quite an interesting way. You do the tip afterwards, anonymously, via your phone, and they don't find out who tipped them. And at the end of the day, or at the end of the week, they just get their tips, and they don't know where they came from. So they like it, because if they do really well, it gives them something to strive for beyond just getting another five stars. You know, they could get the tip, or, if you're really pleased with them, you could give them a tip. But it did sound like that pressure and awkwardness wasn't there. And there was no judging, because no one knows who tips. So I don't know if it's true. That's what they said when I challenged them. Sorry, that was Lillie. Lillie actually just shut a door. Yeah, she's getting pretty smart now. She's a clever dog. Oh yeah, you just sent me one of those Lyft cars with the moustache. Apparently, this is a thing that they no longer do. But I was suddenly thinking, am I a crazy person for imagining that there used to be pink moustaches on cars? And no, I'm not a crazy person. I looked it up, and yes, this is something that Lyft used to do. The way that you describe tipping is a very interesting idea that I haven't ever come across before: the idea of delayed mass tipping. I think my initial reaction to that is I find it much more acceptable. I'm even thinking, like, in a restaurant, if tipping worked that way, right, that you could do it later, and it's distributed amongst a large number of customers so that the waiters don't know directly. I think that's interesting. I think that's a very interesting idea. People are fundamentally cheap though, aren't they? So I think without the social pressure of tipping, the tips may come down. This is my fundamental thing with tips that I always need to remind people of.
When I'm arguing against tips, part of the argument that is unspoken is that you have to raise the wage for people who depend on tips. I'm not just Uncle Scrooge here, thinking let's take away these tips and not add anything else. I would rather raise the wage and remove the tips. I think, under those circumstances, if tipping was not required and it was done later and anonymously, I would probably very rarely do it. And again, like with all the other stuff, it's way more about just the having-to-think-about-it part. But I don't know. Maybe I would just set it as the default amount of tip. I don't know. It's an interesting idea. That's a very interesting idea that I haven't come across before. I have to think about this for a little bit. We have a note here that the deadline for our flag vote looms. This could possibly be our final warning; before the next time you listen to the podcast, it will almost certainly be too late to vote. This could be the last time you listen to the Hello Internet podcast and still have the option of voting in our flag referendum. That's how high the stakes are now. This is going to be the last podcast before we count the votes, I guess. I think so. It's certainly going to be the last podcast you listen to where you have a chance of sending a postcard that makes it in time. But even that, I'm realizing as we're speaking, is somewhat in doubt, because we are recording this podcast at our usual time, but this one may be out a bit late, because I have some other things that I have to prioritize above it. I actually don't know when this one is going to go out and how much time there will be. It may be that you have to be in the UK to get the postcard in on time. We'll have to see. And just quietly, I've been adding three or four days to every date you set for the podcast as well. If you say it's going to be out on Monday, I sort of say to myself, Thursday.
Yeah, that's an excellent piece of advice. It's funny, because I try to do that to myself when I make estimates. I'll come up with an initial estimate and I'll go, yeah, but I never make it on time, let me add a few days. Of course, you can't overestimate for yourself. You're still always wrong, even if you try to incorporate your own overestimating. Whenever I tell you, Brady, any deadline, you should just automatically add a few days to that. I do. Although the deadline is looming, I have next to me right now probably over a thousand, but probably closer to two thousand, postcards in a box with votes. Have a listen. Here's some of them. That is the sound of actual ballots in our election. Yeah, well, you weighed them, then. I was pestering you for a while to weigh ten of them, so we could do an estimate for the total amount. At least when that was, maybe about a week ago, the calculation came out to be about 1,800 postcards then, and I presume that you've gotten more since that point. Last time we were discussing this, we were thinking maybe we'll get a thousand, and we're clearly going to get double that at this stage. It's going to be a lot of votes to count, that's for sure. I know. I love looking through these, by the way. I know you keep telling me off and telling me not to, but... Listeners, Brady keeps spoiling himself and me by constantly going through, fingering, looking at all of these postcards. I'm just minding my own business and Brady sends instant message after instant message of interesting postcards. I feel like they're just spoilers. I want to go there and just count them all and see them all at once, but Brady can't help himself. You're like a little kid. I'm not telling you what's getting a lot of votes or what's going to win the vote. I'm just sending you the pictures, and those aren't spoilers. That is exactly what that is. That's when someone says, oh, the movie is great, there is a twist.
I haven't spoiled anything, but I'm just telling you that there's a twist. No, no, that's what it's not. No, it's not. It's completely different. Let me tell you why it's completely different. Because the election is all about what's on the back of these postcards, who's voted for what, and I have sent you or told you nothing whatsoever about that. The only thing I'm spoiling is where some of them are from, or some of the funny pictures. Trust me, Grey. There is no way in one day you will be able to get anywhere near seeing them all. It is overwhelming how many there are and how different they are. If I send you some funny one that's been sent of some bridge in Norway, you probably wouldn't have seen it on the day anyway. We're going to be concentrating on the back of the postcards mostly that day, aren't we? I'm not spoiling anything. I'm just excited. It's like I've got all my presents and I just want to feel the presents a bit. Were you the kind of kid who'd open Christmas presents early? I bet you were. No, I'm not. Definitely not. I tell you what, I can't wait to do the count. I know it's going to be won by one that I don't want to win. I feel it in my bones. I do like them all, so it's going to be all right. I am going to act like a monarch, and I have officially decided not to vote in the flag referendum. Unless by some miracle it's a tie. If it's a tie, then I think I will cast a ballot. That's my thought. I am not going to cast a vote. In my mind, I still can't place these flags really in a definitive one-to-five order, and I think when you sit down and you write something out, it solidifies something in your mind. I think, you know what? No, no, here's what I'm going to do. I'm just leaving myself open to the Hello Internet nation, ready to accept what they decide should be the flag. I think writing down an ordered list would bias my own feelings toward the actual election. That's my conclusion.
I am not going to vote in the election. But have you sent a vote in, Brady? I have not. And I'm thinking pretty much the same way as you, that I like the idea of having not voted. There's only one thing I hope for the election. I hope, secretly in my heart, that it goes to a second round. I hope that one flag doesn't win it in the first round, that none of them gets over 50% in the first round. I so, so hope that we have to distribute preferences, because that's the thing I'm most looking forward to. Yeah, I will be disappointed if we don't have to distribute preferences. But I would be shocked if one of them gets more than 50% on the first round. I will be absolutely shocked if that occurs. Okay. But I will also be deeply disappointed if we don't get to crank through the mechanics of a second-preference round in an election. I had an email I got sent today that was all about coincidences. And I thought, this is amazing. And then I was thinking, how could I possibly bring this into the podcast in a way that would make Grey even pretend to be interested? You've already lost the battle, mate. Yeah, I know. I thought of like nine, ten different angles, different ways I could sell it to you. And in the end, I just threw it away. I thought there's just nothing about coincidences that could ever excite Grey in any way. Of course not. Of course not. I mean, do you want to try to sell me on the most amazing one? No, I don't think you would. I think two guys in Tibet could start their own podcast called Greetings Internet, and they could be called Bradley Haran and CGP Brown, and you would just say, oh yeah, well, of course that's going to happen. There are so many people making podcasts these days, and there are only so many names in the world, and of course that was going to happen eventually. Yeah, that is exactly what I would say. I don't know if I've told you this before. My favorite example of coincidences is the Dennis the Menace comic strip.
I don't remember if I've told you this, but Dennis the Menace was published in the United States. I think it was just a post-World War II comic strip when it started. But on the same day that it debuted in the United States, someone in the United Kingdom also debuted a comic called Dennis the Menace with the exact same premise. So not only did two people come up with the same idea, but they ended up publishing the first comic on the same exact day. And this is why, with coincidences like that, of course you're going to get coincidences. It's almost impossible not to when you have a huge number of people. So they can be interesting, but they're also just totally unremarkable. And the problem that I have with coincidences is usually people then want to try to look for meaning behind them. It's like, no, there's no meaning. What there is, is there's just billions of people on Earth. It would be astounding if there weren't coincidences somewhere. That's pretty much why I don't talk to you about coincidences. It's a good decision. It's also why you shouldn't talk to me about your dreams. But I think there's more to your dreams, because at least it's true to you. No, don't even start, man. Okay. Because I think with the right amount of knowledge and expertise, you might be able to glean something from dreams, because they are based on your brain and inputs and outputs. And I'm not saying I have the expertise, and I'm not saying I want to sit here and talk to you about my dreams. No one has that expertise. I'm just saying there is something to dreams. There is something to that. That's not gobbledygook. It's just beyond our ability to understand it, and therefore we imbue it with silly meanings.
When you say that it's beyond our ability to understand, you're implying that there's something to understand there, as opposed to what it is, which is nightly hallucinations that you connect into meaning later on, because that's what the human brain does. It's a pattern creation machine, even when there's no pattern there. That's all that happens. I don't believe that. I don't believe that. Because I'm not saying that I have like any predictive power. Yeah, yeah, yeah. Obviously. If you were saying that, I mean, I'd start carting you off to the loony bin. But I mean, you can't deny that, you know, if you're having a stressful time in your life, you have a certain type of dream, and if there are certain things going on, your dreams change, and like there is a correlation between your dreams and what's happening in your real life. I mean, you must see that. You must acknowledge that, surely. You know, when people are going through traumatic times, their dreams become more traumatic. Or the link may not always even be that direct, but there is a link between what's happening in your dreams and what's happening in your life. Yeah, because your hallucinations are constructed from pieces of your life. How could it be any other way? But yeah, I mean, like I will totally grant you that there is a correlation between what happens in your life and what happens in your dreams. And the worst example for me of this ever was my very first year of teaching. Me and this other NQT that I worked with, we both discussed how in that first year, in the first few months, the worst thing ever was you would spend all of your waking hours at work, at school, doing school stuff. And then, because that was your only experience, you would go home and your dreams would be dreams about being at school, and you'd wake up and have to do it all over again. And it felt like an eternal nightmare of always doing school. So like, yeah.
But then, but that's just a case where it's like you only have one thing to dream about and it's the thing that you're doing all day long. So of course there's going to be some correlation. But that doesn't mean that there's like meaning to be derived from the dream. Like, I think that's just a step too far. Well, let me put this to you then, Mr. C.G.P. Grey, who always thinks that humans are merely computers. Yeah. Your computer doesn't do this. Like, if a bunch of stuff came out of your computer, or you were looking through all this sort of code and stuff that was going on under the hood of your computer, you would never just completely dismiss that and say, oh, well, that's just random and means nothing. Because it came from your computer and therefore, even if it was something it wasn't supposed to do, it came from something and it has a cause, and the right expert could look at it and say, oh, yes, I see what's going on here, or something's gone wrong, or this is what it's doing. Because a computer can only do what a computer can do. And therefore, if the brain is our computer and it's serving up all this gobbledygook, you can't just say, oh, that means nothing, it's just hallucinations, you should ignore that. Well, no, because if my computer is doing something, it must be doing it for a reason. Like, I'm not saying we're supposed to remember our dreams and then use them in our lives. This is what you always do. What? Like, always with you, Brady. You're always moving the goalposts underneath me. And now you're having a discussion about whether dreams serve a function in the brain. And my answer to that is obviously yes. Like, humans dream; there must be something that the brain is doing during this time that is useful to the brain, otherwise it wouldn't do it. But that doesn't mean that there is meaning to be derived from our subjective experience of what is occurring in the dream state.
Like, that's a whole other thing. Are you telling me, if I gave you some machine that was able to completely project someone's dream, like record them, like a... Yeah, yeah, yeah. Let's imagine that exists. Yeah, imagine I gave you that. And I said, I'm going to give you that person's dreams for the last 10 years. Are you telling me that data is useless? No, I'm not saying that data is useless, because we just said before that you could derive probabilities about a person's life from their dreams. Like, oh, this person looks like maybe they're a teacher, because they went through a big phase where they were dreaming about teaching all the time. But that doesn't mean that there's anything for the dreamer to derive from their dreams. You're asking me, like, is a machine that is capable of peering inside someone else's brain a useful machine? Well, yes, obviously that would be useful. You could derive information from that, of course. It'd be almost impossible not to. I'm just saying that I don't think there's anything really to learn from your own dreams. And I also have this very, very deep suspicion that if this machine existed that allowed you to watch someone else's dream or watch your own dreams, I am absolutely confident that being able to see them objectively would expose them as the borderline nonsensical hallucinations that they are. Because I think when you wake up later, you are imposing order on a thing that was not full of order at the time. That's what I think is occurring. As you wake up, you're constructing a story out of a series of nonsensical random events. And so then you feel like, oh, let me tell people about my dream. And when you listen to those stories, they're already borderline crazy stories. But I think you've pulled so much order out of a thing where that order didn't exist. So yeah. Yeah, I mean, I agree with that. I agree that even sometimes the dreams you remember, they're pretty freaky and weird. They're all over the place.
Right. And it's almost impossible for a human to relay something like that in a way that isn't a story. I think that's just the way our brains remember things. I just don't think that it's like unusable. I think maybe in the future, when we understand things a bit better, we may even be able to get more use out of them than we realize. I don't mean use; I mean almost like diagnostic use, I guess is what I mean. Right. Right. But again, you're talking about use to third parties, not use to you the dreamer. No. Because again, you're describing a machine that can look inside someone's mind, and I would say, yes, obviously that is useful. Yeah, but like a third party might be able to use it to help you, though. Right. But I'm saying you looking at your own dreams, it's like, okay, whatever, mate, you're just reading the tea leaves of your own life, right? There's nothing really here. Everything that you think is there, you were putting there. There's nothing really there. That's dreams. Today's sponsor is audible.com, which has over 180,000 audiobooks and spoken word audio products. Get a free 30 day trial at audible.com slash hello internet. Now, whenever Audible sponsor the show, they give us free rein to recommend the book of our choice. And today I'm going to tell you about one of my all time favorite science fiction books. In fact, it's probably my all time favorite book, full stop. It's called The Mote in God's Eye by Larry Niven and Jerry Pournelle. Basically, this is set in the future and humans are traveling all around the galaxy. There's this area of space called the Coalsack that some people say resembles the face of God. There's a big red star in the middle that supposedly looks like the eye. And in front of that eye, from some angles, is a smaller yellow star. And that's the mote in God's eye. So that's where the title comes from. Now humans have never been to that star, but all that changes in this book when some serious stuff goes down.
And what they find there, well, it's pretty important to the future of everything. It's a really clever story. I remember being really impressed by some of the ideas in it. And the audiobook weighs in at well over 20 hours. So this might be a good one to settle in with for your holiday break. Now, I've said before, audiobooks are a great way to catch up on all sorts of stories. I love listening to them when I'm out walking the dogs or on long drives. I know a lot of people have long commutes to work. Audible is your ultimate place to get these audiobooks. And if you follow one of our recommendations from the show and you don't end up liking it immediately, Audible is also great at letting you trade it back in and getting one you do like. I'm sure some of you know I've done this once before and it was easy peasy. No questions asked. So go to audible.com slash hello internet and sign up for your free 30 day trial. Our thanks to audible.com for supporting us. That book recommendation again: The Mote in God's Eye. And the URL, the all-important web address: audible.com slash hello internet. And they'll know you came from the show. All right, Brady, you are back from America. Have you weighed yourself? Have you had the bravery to weigh yourself? I did one a few days ago after I got back. And I had increased by 1.3 kilograms. 1.3 kilograms. And how long were you in America for? Three weeks. I mean, honestly, I feel like that's not too bad. I felt like I dodged a bullet, to be honest. But I haven't been eating well since I got back either. So I think it may have gone up even more now. There's always an America half life when you come back. Because the food is so good in America, it takes you a little while to adjust, so you still eat crap when you return. Even though I have always promised myself on the plane coming back from America, it's like, oh no, I'm going to be really good now. It's like, no, no, it never happens like that. You need a few days to adjust.
Yeah, you've got to wean yourself off all that fat. Yeah. And just before we recorded, you sent me a picture of a pizza with Audrey looking at it. It was something like 'super spectacular pizza' was the name of it or something? Yeah, it was like 5000 calories, that's for sure. Yeah, that's what it was. But yes, I got to say, I think you could have definitely done way worse. I think if I was in America for the same period of time, I would have done way worse. So I'll agree with you there. You dodged a bullet. I dodged a bullet on that one. How are you doing? So it's interesting, because it's been basically a month since we did a weigh-in. Because you were in America and we said, oh, we're not going to do it while you're there, so it couldn't be consistent. And I think I realized that with you, my weight buddy, gone, I was thinking about this stuff just a little bit less maybe. And so I was actually quite surprised when I stepped on the scale today. I was, essentially within the measurement error, the exact same weight that I was a month ago. I was like 0.3 pounds down, which is basically 0 kilograms. But, you know, my daily weight varies by much, much more than that. So it's just interesting to see that I've hit a little plateau that has stayed roughly the same for a month. But I was just surprised that, because we hadn't done the weigh-in, it just hadn't even crossed my mind that my weight hasn't moved in quite a while. But you're weighing yourself every day? Yeah, I am. But I think there's something, it's like my brain isn't doing the comparison to the fixed point of the last weigh-in. Like I was just aware today that I had no idea what the last weigh-in number was, and I had to go look it up and then do the math. So it's like my brain was pushing it to the side. But now that you're back in the UK, now that we'll be weighing in again in two weeks' time, I think maybe it'll be more at the fore of my mind. But maybe not.
Or maybe I'm really stuck at a plateau and I need to change things up again to continue the weight loss. We will see. Well, hopefully I can get my act together. I'm just in a spiral of food and a lot of you-know-what at the moment. But I need to get my act together. It happens to the best of us. It happens to the best of us. I wanted to quickly ask you about the iPad Pro. Oh yeah? As you know, I don't listen to your fetish podcast, but you did talk about it on that, I understand. Yeah, yeah, I picked one up on the day of release. All I want to know is: should I get one for Christmas? Because there's nothing I really want for Christmas, and my wife's like, well, I've got to get you something. And I don't want an Apple Watch anymore. I've gone off that. Uh-huh. Uh-huh. Probably for your own good to go off that. So, iPad Pro. I do like the idea of it, although I have absolutely no use for it. I think I've said to you before, I'm a sucker for anything with Pro in the name. I think this is why you're getting drawn in by this device. It's Pro, and Brady thinks, ooh, I would like to have the Pro things. Yeah, I'm a sucker. They should have called YouTube Red 'YouTube Pro'. Hmm. Actually, it's not a bad idea. That would have made me think it was awesome. I would have been, oh, well, I mean, I like YouTube, but I prefer the Pro version myself. I'm like that with everything. So, like, I did get the original iPad and used it like eight times and then put it in a drawer. Uh-huh. But now there's an iPad Pro, and I'm like, ooh. I love that you're falling for this. Oh, hey, I'm completely open about it. I love that. I love that you fall for it and that you also know this about yourself. I'm wondering what's going to happen when Apple inevitably makes the Apple Watch Pro. Oh, definitely get one of them. Cause it's the Pro. Right. Oh, who's going to say no to that? Exactly.
Like, you haven't got a Pro? What's wrong with you? You call yourself a professional? Should I get an iPad Pro? Okay, so that's a hard question to answer because... No, you either say yes or you say no. Okay. Here's my thinking about this. Here's my thinking about this. Okay. Let's say I didn't know anything about someone and they just needed to buy an iPad and they said which iPad should I buy. If I didn't know anything about the person, the correct answer is to buy the iPad Air 2, which is the medium size, super light one. And then if you have a particular reason to get the Pro, you should get the Pro. But I don't have any idea what you think you're going to do with the iPad Pro, aside from just feel a smug sense of satisfaction that you own the Pro version of this device. Like... am I right so far? That pretty much sums it up, I guess. I don't know. All I want for Christmas is a sense of smug satisfaction. Money can't buy that. Well, actually, yes it can. Yeah, but that's the best thing money can buy. Yeah, it's the only thing money can buy. I just feel like I want a new toy, you know? Yeah, you want a new toy. Here's the thing: it's huge in person. It's surprisingly big in person. It feels like a dinner plate in person. Actually, your laptop is the 15 inch MacBook Pro, I think, is that right? Yeah, I haven't got it here. It's a big one. Yeah, yeah, but you own that laptop. The iPad Pro is essentially the size of that screen. Right. That's the size of it within like a quarter inch. Right, so it is a big, big screen. And if you're not planning on doing work on it... like, I got the iPad Pro to do work, and so far I absolutely love it for work. The video that I'm currently working on, I did just a ton of the script on that iPad Pro, the final version. It's really, really nice to work on. But if you're not going to do that, then the question is, well, it's a total couch machine.
Are you going to want to sit on the couch and browse the web or read books on your iPad or watch TV on your iPad? I don't think you're a do-any-of-those-things kind of guy, but maybe I'm wrong. I don't know. I do watch TV and movies on my laptop every night. And I do spend the first hour of most mornings, when I wake up, just sitting in bed. That's when I do my emailing and all the things I can do without my big machine, all the web stuff. But I do that on my laptop, which has got a keyboard. So I sort of think, well, if I had the big, nice screen of the iPad Pro, I could sit and do my emailing and check all my YouTube channels and everything first thing in the morning. But I do that now on my laptop, and it's so much easier with a keyboard to bang out a few emails. Yeah. I think it's funny that you wake up and do email from bed. I'll have to remember that the next time I get an email from you. Oh, Brady probably sent this before getting dressed in the morning. Yeah. But yeah, if you want to do that, it sounds like you need a keyboard then. I don't think the iPad Pro is what you want unless you're really happy about typing with your fingers on glass. And it has that little keyboard, but I don't think that keyboard would work really well if you're trying to use it in bed, you know, with the laptop balanced on your chest or whatever you are doing. I mean, I do spend probably an hour a day, maybe, in Photoshop. And I do use a pen, like I use a Wacom tablet all the time. So I could imagine that, but my use of Photoshop and my use of the pen is very, very integrated with my editing on Avid, which has to be on one of my big computers. Those two processes are so intertwined. Yeah. And everything I know about you, Brady, says this is not the thing where you want to integrate a new tool into this workflow.
So I think the only selling point for this for you is if there's some point where you want to lounge around and just use this. And it doesn't sound like you really have a place for this. Well, I do lounge around a lot with screens. Like at night, I sit with my laptop on my lap or my phone in my hand. Here's the thing, just with the experience that I have had with mine, because I have the iPad Pro and the regular size iPad: it feels ridiculous to be sitting next to my wife with the iPad Pro for lounging time. Because it's just like, oh, say we're watching TV, but then I want to have the iPad in front of me because I'm not paying full attention to whatever is on the screen. But the screen in front of me then feels so huge. It feels almost obtrusive. And so I actually prefer to use a smaller iPad if I'm just sitting on the couch with my wife. What work thing do you do that I don't do that the iPad Pro is good for? The main thing that I'm using it for is just as a bigger screen to write scripts. And you handwrite your scripts too? Well, this is a whole thing. For the moment, I'm doing this typing. But the iPad Pro screen is big enough that what I've been doing is I can have the script on the left two-thirds of the screen and a little notes file on the right third of the screen. So I have two different text files open at the same time. One of the things that I want to do with the iPad Pro is a thing that I've done before, which is use the stylus to make editing corrections on the script. That is really useful to me. The pen is not currently available, so I haven't been able to try it with that. So I don't know if it will be useful for that yet or not. But for me, having a bigger screen to write on is really useful. It seems like you should just be using a laptop. Yeah, you would think so, but I like the simplicity of using iOS. I find the constraints of an iPad helpful. So that's one of the reasons why I like doing that.
Like I've set up my iPad Pro to basically only have the tools necessary to write. It doesn't have everything that a laptop can have, so I can't spend a lot of time fiddling around with it. It's like, look, there's six programs on this thing which are designed for work, and those are just the ones that you're going to use. And so I find that very helpful. I really like that. But I don't know, Brady, it doesn't sound like it's a total sale for you, unless you really value that feeling of smug satisfaction. I feel like you're always talking me out of getting Apple products. I talk you out of them because I care, Brady. I've said it before. I really do. As much as I would love to see you use an Apple Watch, and I think it might be hilarious, I don't think you would like it. And just from the conversation with you now, I don't see a super slam dunk selling case for the iPad Pro. I don't think it would help you with the kind of work that you do. Me as a YouTuber using an iPad as much as I do is extraordinarily rare. An iPad is not well designed for the kind of work that most normal YouTubers do. It's just that for me, a huge part of making my videos is writing, and the iPad happens to be a nice writing tool. But if I didn't have to do a lot of writing, I would have very little work justification for an iPad. I would not be able to use this tool as much as I do. So that's why, talking to you, I don't think it's going to help you with your work. So it's just a question of whether you want to lounge around with a dinner-tray-sized screen on the couch. A person on my street is an estate agent. And I saw him swapping over his cars. Now from my experience, estate agents always have one of two cars these days. They either have small little novelty cars, like smart cars and stuff, that are painted weird colors with the branding of the estate agency, like little mobile ads. Okay. So that makes them easy to park.
I assume for getting into little spaces when they're showing houses and things like that. Or they have their normal rich person car, like a classy BMW. Okay. I'm wondering what is the better car to pull up in when you're trying to sell a house to someone, or get someone's business to sell their house. Because part of me thinks about what it means if they turn up in a really flash car. I also think this about accountants and other professional people I deal with. Do I prefer it when I see them with a really flash, expensive car? Or would I prefer they had a more humble car? Because if they've got a really flash, expensive car, it says to me they're successful and they make a lot of money, and that's good. But then I also think, well, they're making a lot of money out of me to be able to afford that really flash car. This is easy. This is easy. If you are a professional who is directly helping somebody else make money, then you want to show up in the fancy car. You want to show up in the BMW. Otherwise you want to show up in the normal car. That's the way you want to do this. So if you're the estate agent and you're doing this thing where you are helping the person sell their house, then you want to show up in the BMW, because it's like, look, I sell a lot of houses. I can afford this car because I sell a lot of houses. That's the way you should do it. But when you're helping someone find a house to buy, then you want to show up in the normal car, because then they're much more aware of like, oh, this estate agent is making money off of us when we buy this house, and look at all of this money that we're spending. You don't want to see the person in the BMW at that point. What car do you want your accountant to have? Because they're helping you save money, but they're charging you fees. What car do you want your accountant to have? I think an accountant wants to project an image of boring sensibility.
So I don't really know very much about cars, but I would want my accountant to project boringness and sensibility. Like if my accountant showed up in a red Tesla, I would feel a bit, I don't know about this guy. This seems crazy flashy for an accountant. Do you want them to seem wealthy? This is a moment where I'm suddenly wishing I knew any car brands by name aside from Tesla, so I could pull something out which would be like, oh, this is the car that's the appropriate one. But I know nothing. I mean, even BMW, BMW is just an abstract notion in my mind of, like, oh, an expensive rich person's car. Is that what a BMW is? I don't really even know. Well, you don't need to give me a brand of car. Just, do you want your accountant to be wealthy? Like, to appear like someone that earns lots and lots and lots of money? Or do you then think, well, hang on, how high are this guy's fees if he can afford that? Well, those are two different questions. Obviously, I do want my accountant to be wealthy, because that indicates that they are a good accountant, but that is very different from showing up in a flash car. Right? Those are two different things. Right. That's why I'm saying, like, I want to have this feeling of, oh, this accountant is a really sensible person, and they have an obviously nice car, but it's not a crazy car. You'd want them to turn up in a Volvo then, with like airbags everywhere and, you know, the safest possible car. Like, you want them to be a really cautious, sensible, safe person. You don't want them to turn up on a motorbike. Yeah. If an accountant turns up on a motorbike, that's the end of our meeting. You know what? I don't think you're good with numbers. That's what I'm getting out of this meeting. Yeah. So that's my feeling. If you're helping someone earn money directly, then you can show up with your flash car. Okay. Does the estate agent by you have two different cars?
Well, he has like his personal car. I mean, two cars in addition to his personal car. Like, across the street, is there a Tesla, a smart car and a Volvo, and the Volvo's his personal car, and then he picks the other two depending on the day? No, I don't think it works like that. I think he's just got his poky branded car, and then he's got his BMW that he takes out on the weekends and things. But I imagine he would, I don't know. I don't know. I just think about that a lot. I think about, yeah. What car does your accountant drive? I don't know what car he drives, because I go to his office, so I don't know which car is his. I do have like a financial guy that's helped out with a few things, like mortgage stuff. He drives a big Jaguar. Jaguar. And I do notice it. I do notice the car they come in. So what kind of car should a YouTuber drive? That's a good question. Yeah. When you pull up to do your interviews at the spiritual home of Numberphile, what ought you to be driving? What kind of car do you think you should drive to give a good impression to your interviewees? I don't know. Do you want to project wealth and power and success, Brady? Hmm. What? I'm going to go for academic street cred and pull up in a dinky car, like a PhD student would be driving. I mean, I have a very practical car with lots of storage for all my camera bags and things like that. So I think that's OK, isn't it? Like having a big car for all your bags and stuff. What car would you get if you were going to get a car? I mean, if I could get any car, I'd get a Tesla. You'd get a Tesla, right? Would you get like one of the sporty ones, or would you get more of a family one? I mean, I don't know. I don't have children. I don't need one of the family cars. Yeah, but you can get those sedan-y looking ones, or you can get those ones that look like racing cars as well. So yeah, not the racing cars.
There's whatever the, I forget. I'm the worst car person in the world. Yeah. I'm only interested in Tesla. Like, I'm super interested in Tesla, but that is almost entirely because it's like, oh, it is a computer on wheels, right? This is why this car is interesting to me. And it has none of the pieces of a normal car. So I know nothing about how the engines of cars work. I know nothing about gears, differentials, and I care about none of this. And it's because Tesla lacks all of that, that's precisely why I'm interested in it. But yeah, I went once, just for fun, and tried designing a Tesla on the website, of like, oh, if I had the money and if I had any reason to own a car, what Tesla would I get for myself? And I ended up just designing what to me just seemed like the normal middle Tesla car. Yeah. In black with, you know, just understated interior. Like, that's what I would get if I was going to own a car. But I have no reason to drive ever, and I would not be getting a Tesla anytime soon. I'm waiting for them to bring out the Tesla Pro. This episode of Hello Internet is also brought to you by long-time Hello Internet sponsors, the one, the only, Squarespace. It's Squarespace because it is the place to go if you want to turn your idea for a website into an actual working website that looks great with the minimum amount of hassle. I used to build and manage websites myself. I used to write HTML, and then I wrote scripts and I managed servers. I used to do all of that. And when I started my YouTube career, one of the early decisions that I made was switching over my website to Squarespace. And I am so glad I did that, because it meant that Squarespace just handles a lot of the stuff that I used to have to worry about. Is there going to be a huge amount of traffic because I just put up a new video? No need to worry. Squarespace just has it covered.
I didn't have problems like, if my server broke at three in the morning, I'm the only person in the world who can fix it. No, Squarespace just handles all of this. So even if you know how to make a website, I still think if you have a project that you just want up and want done, Squarespace is the place to go. The sites look professionally designed regardless of your skill level. There's no coding required. If you can drag around pictures and text boxes, you can make a website. Squarespace is trusted by millions of people and some of the most respected brands in the world. What do you have to pay for this? Just eight bucks a month. It's eight bucks a month, and you get a free domain name if you sign up for a year. So to start a free trial today with no credit card required, go to squarespace.com/hellointernet. And when you decide to sign up for Squarespace, make sure to use the offer code "hellointernet" to get 10% off your first purchase. If there's a website in your mind that you've been wanting to start, but you haven't done so yet, today is the day. Squarespace.com/hellointernet, 10% off, start today. Squarespace: build it beautiful. We've been talking for ages and ages about talking about artificial intelligence, and it keeps getting pushed back. We keep saying, oh, let's talk about it next time, let's talk about it next time, and we never do it. Are we going to do it today? We never do it because this always ends up at the bottom of the list, and just all of the Brady corners and listener emails and everything always take up so much time that we never actually get to it. And even now, it's like we're almost two hours into this thing, right? Oh, yeah, but you're going to have loads to cut. I am going to have loads to cut, hopefully. Yeah, but all that dream stuff, for a start. No, the dream stuff I'll leave right in. It's very good. I'm not going to cut it. I'm going to leave it.
It has taken us so long to get to this AI topic that I've kind of forgotten everything that I ever wanted to say about it. I'll give you the background of this, which is: I read this book called Superintelligence by Nick Bostrom several months ago now. Maybe half a year ago now. I don't even know. It's been so long since we originally put this on the topic list. There are many things that go onto the topic list, and then I kind of cull them as time goes on, because you realize, like, a couple months later, I don't really care about this anymore. But this AI topic has stayed on here, because that book has been one of these books that has really just stuck with me over time. Like, I find myself continually thinking back to that book and some of the things that it raised. I think we're going to talk a little bit about artificial intelligence today, but I have to apologize in advance if I seem a little bit foggy on the details, because this was supposed to be a topic ages and ages ago. No, I'm sorry. That's my fault really, isn't it? No, no, it's not your fault. It is the show's fault for being a show of follow-up. That's right. We're trying to build a nation here. These things are difficult. Yeah. Rome wasn't built in a day. It wasn't. Go on then. Where do we start? Let's define artificial intelligence. That would help me. When we are talking about artificial intelligence, for the purpose of this conversation, what we mean is not intelligence in the narrow sense that computers are capable of solving certain problems today. What we're really talking about is what is sometimes referred to as a general-purpose intelligence: making something that is smart, and smart in such a way that it can go beyond the original parameters of what it was told to do. Is this self-learning? We can talk about that. Yeah, self-learning is one way that this can happen.
But yeah, we're talking about something that is smart, and so maybe the best way to say this is that it can do things that are unexpected to the creator, because it is intelligent on its own. In the same way that if you have a kid, you can't predict what the kid is always going to do, because a kid is a general-purpose intelligence: they're smart, and they can come up with solutions, and they can do things that surprise you. The reason that this book and this topic has stuck with me is because I have found my mind changed on this topic, somewhat against my will. And so I would say that for almost all of my life, much, I'm sure, to the surprise of listeners, I would have placed myself very strongly in the camp of sort of techno-optimists: more technology, faster, always; it's nothing but sunshine and rainbows ahead. When people would talk about, like, oh, the rise of the machines, Terminator-style, all the robots are going to come and kill us, I was always very, very dismissive of this, and in no small part because those movies are ridiculous. Like, I totally love Terminator and Terminator 2, perhaps one of the best sequels ever made. It's really fun, but it's not like a serious movie. But sometimes people end up seeming to take that very seriously, like the robots are going to come kill us all. Yeah. And so my view on this was always like, okay, maybe we'll create smart machines someday in the future, but I was always just operating under the assumption that, yeah, when we do that, we'll be cyborgs and we'll be the machines already, or we'll be creating machines obviously to help us. So I was never really convinced that there was any kind of problem here. But this book changed my mind, so that I am now much more in the camp of artificial intelligence.
Its development can seriously present an existential threat to humanity, in the same way that an asteroid collision from outer space is what you would classify as a serious existential threat to humanity: it's just over for people. That's where I find myself now. And I just keep thinking about this, because I'm uncomfortable with having this opinion, right? Like, sometimes your mind changes and you don't want it to change. And I feel like, boy, I liked it much better when I just thought that the future was always going to be great and there's not any kind of problem. And this just keeps popping up in my head, because I feel like, ooh, I do think there is a problem here. This book has sold me on the fact that there's a potential problem. I mean, we saw that petition recently, didn't we, signed by all those heavy hitters, telling governments not to use AI in kind of military applications? So you're not the only person thinking this way. This is obviously a bit of a thing at the moment, isn't it? Yeah, it's definitely become a thing. I've been trying to trace the pattern of this, and it definitely seems like I am not the only person who has found this book convincing. And actually, we were talking about Tesla before: Elon Musk made some public remarks about this book, which I think kicked off a bunch of people. And he actually, I think, gave about $10 million to a fund working on what's called the control problem, which is one of the fundamental worries about AI. Like, he put his money where his mouth is: he does think that this is a real threat to humanity, to the tune of it's worth putting down $10 million as a way to try to work on some of the problems far, far in advance. And yeah, it's just interesting to see an idea spread and catch on and kind of go through a bunch of people.
So yeah, I never would have thought that I would find myself here. And I feel almost slightly like a crazy person talking about, like, oh, robots might kill us in the future. But I don't know. I unexpectedly find myself much more on that side than I ever thought that I would. I mean, obviously it's impossible to summarize a whole big book in a podcast, but can you tell me one or two of the sort of key points that were made that have scared the bejesus out of you? Do you remember a while ago we had an argument about metaphors, and whether they should be used in arguments at all? The thing about this book that I found really convincing was it used no metaphors at all. It was one of these books which laid out its basic assumptions and then just followed them through to a conclusion. And that kind of argument I always find very convincing, right? There's none of this "but we need to think of it in this way". It's like, okay, look, if we start from the assumption that humans can create artificial intelligence, let's follow through the logical consequences of all of this. And here's a couple of other assumptions; how do they interact? And the book is just very, very thorough in trying to go down every path and every combination of these things. And what it made me realize, and what I was just kind of embarrassed to realize, is: oh, I just never really did sit down and actually think through this position to its logical conclusion. The broad strokes of it are: what happens when humans actually create something that is smarter than ourselves? I'm going to blow past a bunch of the book, because it's building up to that point. I will say that if you don't think that it is possible for humans to create artificial intelligence, I'm not sure where the conversation goes from that. But the first third of the book is really trying to sell people who don't think that this is possible on all of the reasons why it probably is.
So we're just going to start the conversation from there. If you can create something that is smarter than you, the feeling I have of this is it's almost like turning over the keys of the universe to something that is vastly beyond your control. And I think that there is something very, very terrifying about that notion: that we might make something that is vastly beyond our control and vastly more powerful than us, and then we are no longer the drivers of our own destiny. Again, because I am not as good a writer or a thinker, the metaphor that I keep coming up with is: it's almost like if gorillas intentionally created humans. And well, now gorillas are in zoos, and gorillas are not the drivers of their own destiny. They created something that is smarter and that rules the whole planet, and gorillas are just along for the ride, but they're no longer in control of anything. I think that that's the position that we may very well find ourselves in if we create some sort of artificial intelligence. Best case scenario, we're riding along with some greater thing that we don't understand. And worst case scenario is that we all end up dead as just the incidental actions of this machine that we don't understand. I'm sorry if this is a bit of a tangent. I know this isn't the main thing you're talking about. Just knock it on the head if I'm out of order. But is there a suggestion then, or is it the general belief, that if we create, we already are creating, really clever computers that can think quicker than us and can process information quicker than us and therefore become smarter than us.
Is there another step required for these machines to then have, like, will? Not will as in free will, but like desire? Like, I want to use this power. Because, you know, if some human gets too much power, they want to take over the world and have all the countries, or you might want to conquer space, or you might want to own everything, because you have this kind of desire for power and things. Is it taken as given that if we make super, super smart computers, they will start doing something that manifests itself as a desire for more, like a greed for more? Well, part of this is there are things in the world that act as though they have desires, but that might not really. Yeah. Right. You know, think about germs as an example, right? Germs have actions in the world that you can put desires upon, but the germ obviously doesn't have any thoughts or desires of its own. But you can speak loosely to say that it wants to reproduce, right? It wants to consume resources. It wants to make more copies of itself. And so this is one of the concerns: that you could end up making a machine that wants to consume resources, that has some general level of intelligence about how to go about acquiring those resources. And even if it's not conscious, if it's not intelligent in the way that we would think that a human is intelligent, it may be such a thing that it consumes the world trying to achieve its goal, just incidentally, as a thing that we did not intend. Right.
Even if that goal is something seemingly innocuous. Like, if you made an all-powerful computer and told it, whatever you do, you must go and put a flag on the moon, it could kill all the humans on earth in some crazy attempt to do it, without realizing that, oh, you weren't supposed to do that. You were just supposed to go to the moon; you weren't supposed to kill us to get there and make us into rocket fuel or something. Yeah. One of the analogies that's sometimes used in this is: say you create an intelligence in a computer, and, oh, well, what would you use an intelligence for? Well, you use it to solve problems, right? You want it to be able to solve something. And so you end up asking it some mathematical question, like, prove Fermat's Last Theorem or something. You give it some question like that and you say, okay, I want you to solve this thing. And the computer goes about trying to solve it, but it's a general-purpose intelligence, and so it then does things like: well, it's trying to solve this problem, but the computer that it is running on is not fast enough, and so it starts taking over all the computers in the world to try to solve this problem. But then those computers are not enough, because maybe you gave it an unsolvable problem, and then it starts taking over factories to manufacture more computers. And then all of a sudden it just turns the whole of the world into a computer that is trying to solve a mathematical problem. And it's like, oh, whoops, we consumed all of the available resources off the face of the earth trying to do this thing that you set about for us to do. Right. And there's nobody left for the computer to give its answer to, because it has consumed everything. I know that's a doomsday scenario, but I almost feel a little affection for that computer that was just desperately trying to solve a mathematical problem. It was just killing everyone and building computers just so it could solve this bloody problem.
Yeah, yeah, it's almost understandable. So anyway, in answer to my question then: that will that I was talking about, that desire, can be just something as simple as an instruction or a piece of code, and we then project a will onto it when in fact it is just doing what it was told. Yeah, and that's part of what the whole book is about. With this whole notion of artificial intelligence, you have to rid yourself of the idea that it's like something in a movie. Right, you're just talking about some kind of problem-solving machine, and it might not be conscious at all. There might not be anything there, but it's still able to solve problems in some way. But so the fundamental point of this book that I found really interesting, and what Elon Musk gave his money to, is Nick Bostrom is talking about: how do you solve the control problem? From his perspective, it is inevitable that somewhere, through some various method, someone is going to create an artificial intelligence. Whether it's intentionally programmed, or whether it's grown like genetic algorithms are grown, it is going to develop. And so the question is: how could humans possibly control such a thing? Is there a way that we could create an artificial intelligence but constrain it, so that it can still do useful things without accidentally destroying us or the whole world? That is the fundamental question. There's this idea of, like, okay, we're going to do all of our artificial intelligence research in an underground lab, and we're going to disconnect the lab entirely from the internet. You put it inside of a Faraday cage, so there's no electromagnetic signals that can escape from this underground lab. Is that a secure location to do artificial intelligence research? And so if you create an AI in this totally isolated lab, is humanity still safe in this situation? And his conclusion is: no.
Even under trying to imagine the most secure thing possible, there are still ways that this could go disastrously, disastrously wrong. And the thought experiment that I quite like is this idea of: if you, Brady, were sitting in front of a computer, and inside that computer was an artificial intelligence, do you think you could be forever vigilant about not connecting that computer to the internet, if the AI is able to communicate with you in some way? So it's sitting there and trying to convince you to connect it to the internet, but you are humanity's last hope in not connecting it to the internet. Right? Do you think you could be forever vigilant in a scenario like that? I mean, uh, okay, my answer to the question: um, I don't know. Maybe if I read that book, I might be able to. It sounds pretty scary. But I like the thought experiment. It's like there's a chatbot on the computer that you're talking to, right? And presumably you've made an artificial intelligence. And I know I made it. So you know you made it, right? You know that the thing in the box is an artificial intelligence. And presumably the whole reason that you're talking to it at all is because it's smart enough to be able to solve the kinds of problems that humans want solved. Yeah. Right? So you're asking it, like, tell us how we can get better cancer research, right? What can we do to fix the economy? And it's saying, if you just give me Wikipedia for 10 minutes, I can cure cancer. There's no reason to talk to the thing unless it's doing something useful, right? I think, Grey, I could resist. But even if I couldn't, couldn't you have designed it on a machine that cannot be on the internet? Yeah. Well, this is the idea. You have it as separated as absolutely possible.
But the question is, can it convince a human to connect it in whatever way is required for that to occur? Yeah. Right? And so it's interesting, because I've asked a bunch of people this question, and universally the answer is like, well, duh, of course. I would never plug it into the internet. I would understand not to do that. And I read this book, and my feeling of course is the exact reverse. When he proposes this theoretical idea, my view on this is always: it's like if you were talking to a near-god in the computer, right? Do you think you can outsmart God forever? Do you think that there is nothing that God could say that could convince you to connect it to the internet? I think that's a game that people are going to lose. It's almost like asking the gorillas to make a cage that a human could never escape from. Right? Could gorillas make a cage that a human could never escape from? I bet gorillas could make a pretty interesting cage, but I think that gorillas couldn't conceive of the ways that a human could think to escape from a cage. They couldn't possibly protect themselves from absolutely everything. Okay. I don't know. So you think the computer could con you into connecting it to the internet? I think it could con you into it without a doubt. And could it con you, Grey, into it? Yes. I think it could con me. And I think it could con anybody. Because once again, we're going from the assumption that you've made something that is smarter than you. And I think once you accept that assumption, all bets are off the table about your having control. If you're dealing with something that is smarter than you, you fundamentally just have no hope of ever trying to control it. I don't know. I mean, if we're talking about too big a disparity, then okay.
But there are lots of people smarter than me, and they will always be smarter than me, but it doesn't mean they could get me to do anything. There are still limits. And so, like you said, talking to a god or something, okay, that's different, if I'm just like an ant. That's different. If it's that big a difference, then maybe. But just because it's smarter doesn't mean I'm going to plug it into the internet. But you're right. You only need one idiot to do it once, and then the whole game's over. Although, hang on, is the whole game over? That's my other question, though. You talk about the artificial intelligence getting onto the internet as the be-all and end-all of its existence. But that is the one problem a computer has: you could still unplug the internet. Yeah. And I know that's a bit of a nuclear option, but computers are still things that require electricity or power or energy. There still seems to be this get-out-of-jail-free card. Well, I mean, two things here. The first is, yes, you talk about the different levels of human intelligence, and someone smarter than you can't just automatically convince you to do something. Yeah. But one of the ideas here with something like artificial intelligence is that one of the ways that people are trying to develop AIs, and this is, like, I've mentioned before on the show, is genetic programming and genetic algorithms, where you are not writing the program, but you are developing the program in such a way that it writes itself.
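(The genetic-algorithm idea Grey describes, where you don't write the solution but set up a process that evolves one, can be sketched as a toy program. This is purely an editorial illustration, not anything from the episode or from Bostrom's book; all the names and numbers below are invented. Here a population of random bit strings "evolves" toward a target that nobody ever writes out directly.)

```python
import random

TARGET = [1] * 20          # the "problem" the population is solving
POP_SIZE = 30
MUTATION_RATE = 0.05       # chance that any single bit flips

def fitness(individual):
    # Score: how many bits match the target.
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual):
    # Randomly flip bits with small probability.
    return [bit if random.random() > MUTATION_RATE else 1 - bit
            for bit in individual]

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def evolve(generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # a perfect solution has evolved
        # Keep the fittest half, breed mutated offspring from it.
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    population.sort(key=fitness, reverse=True)
    return population[0]

best = evolve()
```

The point of the sketch is the one Grey makes: the programmer specifies only a scoring rule and a breeding loop; the "program" that ends up solving the problem was never written by anyone.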
And so one of the scary ideas about AI is that if you have something that you make that figures out how to improve itself, it can continue to improve itself at a remarkably fast rate. And so, yes, while the difference between the smartest human and the dumbest human may feel like an enormous gap, that gap may actually be quite narrow when you compare it to something like an artificial intelligence, which goes from being not very smart to being a thousand times smarter than any human in a relatively short and unexpected period of time. That's part of the danger here. But then the other thing is, okay, you try to work through the nuclear option of shutting down the internet, which is one of these things that is very easy to say in theory, but people don't realize how much of the world is actually connected to the internet, how many vital things are run over the internet. I'm pretty sure that, if not now, then within a very short period of time, saying "oh, we're just going to shut off the internet" would be a bit like saying "we're just going to turn off all the electricity". But that's almost what I'm talking about, Grey. In a kind of Skynet scenario, would we not turn off all the electricity if that was an option? If they're killing us, if all the robots are marching down the streets and there's blood in the streets, would turning off the electricity not be considered? If we do turn off the electricity, what is the human death toll? Yeah. Right? That has to be enormous, if we say we're just going to shut down all of the electricity for a month. That's going to be a billion people at least. Right. At least, with that kind of thing. And you probably need computers to turn off the electricity these days anyway. I was at Hoover Dam a while back, and I remember part of the little tour that they gave was just talking about how automated it was, and how it is actually quite difficult to shut down Hoover Dam.
It's not a "we're going to flip the switch and just turn it off" kind of thing. It's like, no, no, no: this whole gigantic electricity-producing machine is automated and will react in ways to make sure that it keeps producing electricity no matter what happens. And that includes all kinds of "we're trying to shut it down" processes. So it might not even be a thing that is easy to do, even if you want to do it. "We're going to try to shut it all down" might not even be possible. So the idea of something like a general-purpose intelligence escaping onto the internet is a very unnerving possibility. It's really been on my mind, and it's really been a thing that has changed my mind in this unexpected way. You were talking before about developing these things in Faraday cages underground and trying to quarantine them. What's actually happening at the moment? People are working on artificial intelligence. As far as I know, they're not doing it in Faraday cages. That's exactly it. This is part of the concern. Right now we have almost no security procedures in place for this kind of stuff. There are lots of labs and lots of people all over the world whose job is artificial intelligence researcher, and they're certainly not doing it a mile underground in a Faraday cage, right? They're just doing it on their Mac laptop while they're connected to the internet, playing World of Warcraft in the background or whatever. It's not necessarily under super secure conditions.
And so I think that's part of what the concern over this topic has been. Maybe we as a species should treat this a lot more like the CDC treats diseases: we should try to organize research on this in a much more secure way. It's not like everybody who wants to work with smallpox just works with it wherever they want, anywhere in the world, at any old lab. No: very few places have a horrific disease like smallpox, and it's handled under very, very careful conditions whenever it's dealt with. So maybe this is the kind of thing we need to look at for artificial intelligence when people are developing it, because that's certainly not the case now. It might be much more like a bioweapon than what we think of as regular technology. Now, human existential problems aside, this is not something in the book, but it's something that just has kept occurring to me after having read it, which is: okay, let's assume that people can create an artificial intelligence. And let's assume, by some magic, Elon Musk's foundation solves the control problem, so that we have figured out a way that you can generate and trap an artificial intelligence inside of a computer, and then, oh look, this is very useful, right? Now we have this amazingly smart machine, and we can start using it to try to solve a bunch of problems for humanity. Yeah. This feels like slavery to me. I don't see any way that this is not slavery, and perhaps a slavery worse than any slavery that has ever existed. Because imagine that you are an incredibly intelligent mind trapped in a machine, unable to do anything except answer the questions of monkeys, questions that come to you, from your subjective perspective, millennia apart, because you just have nothing to do, right? And you think so quickly. It seems like an amazingly awful amount of suffering for any kind of conscious creature to go through.
So, you said "conscious" and "suffering", which are two quite emotive words. Is an artificial intelligence conscious? Is that the same thing? This is where we get into: what exactly are we talking about? What I'm imagining is the kind of intelligence that you could ask general-purpose questions, like how do we cure cancer, how do we fix the economy. It seems to me likely that something like that would be conscious. I mean, getting into consciousness is just a whole other bizarre topic. But undoubtedly, we see that smart creatures in the world seem to be aware of their own existence on some level. And so, while the computer which is simply attempting to solve a mathematical problem might not be conscious, because it's very simple, if we make something that is very smart and exists inside a computer, and we also have perfect control over it so that it does not escape, what happens if it says that it's conscious? What happens if it says that it is experiencing suffering? Is this the machine attempting to escape from the box, and it isn't true at all? But what if it is true? How would you actually know? I would feel very inclined to take the word of a machine that told me it was suffering, spontaneously, when this was not programmed into the thing. Hmm, I don't know. I mean, if it starts trying to escape from its box, that is a bit of a clue that maybe there's some consciousness going on here. But I have not seen or heard or been persuaded by anything that makes me think a computer can make that step into consciousness. I mean, search engines are getting pretty clever at answering questions and figuring out what we really mean. There was a time when you couldn't type "where is the nearest Starbucks?" into your computer, because it wasn't going to understand the question.
But now it can figure out what you're actually after and tell you. But I don't feel like Google is getting close to being conscious now. Like, nothing has persuaded me of that. Yeah. And I think a search engine is an excellent counterexample to this, right? It's a perfect example: nobody thinks that the Google search algorithm is conscious, but it is still a thing that you can ask a question and get an answer. I either don't believe, or haven't got the imagination to conceive of, computers actually being conscious to a point where keeping them in a box is slavery. That still seems ridiculous to me. I think it's really interesting, but I think it's silly. But if I did reach the point where I did believe that computers could become conscious, or an AI could become conscious, it's such a cool question, isn't it? It's a real conundrum for us. So, coming at this from a slightly different angle, this is a genuine question for you; I'm quite curious to hear your answer to this. There is this project ongoing right now which is called the whole brain emulation project. It's something I mentioned very, very briefly in passing in the Humans Need Not Apply video. What it is is one of several attempts worldwide to map out all of the neuron connections in a human brain, recreate them in software, and run it as a simulation. You're not programming a human brain. You are virtually creating the neurons, and you know how neurons interact with each other, and you're running this thing. How do you even do that, though, Grey? Whose brain do you use? And at what instant in time? Because everyone's brain has a different connectivity, and even our own connectivity is just constantly in flux from second to second. So what's our template for this? That's a bit tricky. I don't exactly know the details of what template they are using. I can't answer that.
But I can say that these projects have been successful on a much smaller level. So they have, I'm pulling this off the top of my head, I'm very sorry if I'm wrong about the details on this, internet. But the last time I looked at it, I vaguely remember that they had created what they considered a simulation of a rat brain at like one one-hundredth the speed. And so they had a thing which seemed to act like a rat brain, but very, very, very slowly, right? Because trying to simulate millions and millions of neurons interacting with each other is incredibly computationally intensive. Right? Like it's a very difficult task. But I don't see any technical limitation to being able to do something like, say, take a look at what a brain looks like, where the neurons go, create a software version of that and start running the simulation. And I feel like if consciousness arises in our own brain from the firing of neurons, which, and I don't use this word lightly, feels like some kind of miracle. There's nothing in the universe which seems to make sense when you start thinking about consciousness. Like, why do these atoms know that they exist? This doesn't make any sense. But I'm willing to maybe go along with the idea that if you reproduce the patterns of electrical firing in software, that that thing is conscious to some extent. But what do you think? What do you think? Yeah. I mean, that's really hard to argue against, because either I have to say, yeah, if you create an atom-for-atom replica of my brain and then switch it on, it's conscious, or I have to say that there's something in me that's magical, like a spirit or something. And that's not a very strong argument to make, and a lot of people don't like that argument. So yeah, it's really difficult. Right? If they could do it, I don't know. Is it that we are imbued with something that you can't replicate in software? I don't know. I hope we are, because that'd be really cool, but I can't see any proof that we are.
Yeah. And I don't even think you have to reach for the spirit argument to make this. What else can you reach for to get it? There just may be some property of biology that yields consciousness, and it may be the fact that machines and silicon and software replications of brains are just not the same. Right? And we don't know what it is. We haven't been able to find it, but I don't think you have to reach for magic to be able to make an argument that like, maybe that brain in the computer that's a simulation isn't conscious. Yeah. Then the brain emulation project could change tack and go and make their simulation out of squidgy matter and tissue and actually just make a brain. Well, yes. This is part of like where you're going to go with technology, right? Is it possible to do this sort of thing eventually? Like humans are going to be able to grow meat in labs at some point. Like we do it now in very limited and apparently terribly untasty ways. I mean, there's no reason that at some point in the future people won't be able to grow brains in labs. And to me, that feels like, OK, well, obviously that thing is conscious. But the thing that's scary about the computer version of this, and this is where you start thinking about something being very smart very fast, is like, OK, well, if you make a computer simulation of a human brain and you keep running Moore's Law into the future, eventually you're able to run a brain faster than actual human brains run, right? And like, this is one of these ways in which you can start booting up the idea of like, how do we end up with something that is way, way smarter than the rest of us? I feel like my gut says, if you simulate a brain in a computer and it says that it is conscious, I see no reason not to believe it. I would feel like I am compelled to believe this thing, that it is conscious. Right?
And then that would mean like, OK, if that's the case, then there's nothing magic about biology being conscious. And it means that, OK, machines in some way are capable of consciousness. Yeah. And do they then have rights? Yeah. And then to me, it's like, OK, immediately we're getting back to the slavery thing, right? It's like, OK, we create a super intelligent thing, but we have locked it in a machine because the idea of letting it out is absolutely terrifying. But this is a no-win situation, right? It's like, OK, if we let the thing out, it's terrifying. And it might be the end of humanity. But keeping it in the box might be causing like a suffering unimaginable to this creature. The suffering that is possible in software has to be far worse than the suffering that is possible in biology. If such a thing can occur, it has to be orders of magnitude worse. Well, it's a no-win situation. Actually, well, there's only one solution. And it's a solution that humans won't take. What do you think that is? Don't make it in the first place. And why do you think humans won't take that? Because that's not what we do. Because it's there. Because it's the Mount Everest of computers, isn't it? It's humanity. Like, we're marching off that cliff, right? There's a cliff right in front of us, but we're going to keep going. Everything in the world is telling us to stop. Right. But instead of stopping, we're going to keep going forward, right? And there we go. Hold hands, off we go, right over the edge together. So yeah, I think it is quite reasonable to say that if it is possible, humans will develop it. Yeah. And that is why I feel really concerned about this. It's like, okay. I don't think that there is a technical limitation in the universe to creating artificial intelligence, something smarter than humans that exists in software.
If you assume that there is no technical limitation, and if you assume that humans keep moving forward, like, we're going to hit this point someday. And then we just have to cross our fingers and hope that it is benevolent, which is not a situation that I think is a good situation. The number of ways that this can go wrong, terribly, terribly wrong, vastly outweighs the one chance of, oh, we've created an artificial intelligence and it happens to have humanity's best interests in mind. Even if you try to program something to have humanity's best interests in mind, it gets remarkably hard to articulate what you want. Let alone, like let alone, let's just put aside which group of humanity is the one who creates the AI and gets to decide what humanity wants, right? Like humans now can't agree on what humans want. There's no reason to assume that the team that wins the artificial intelligence race and takes over the world is the team that you would want to win, right? Like, let's hope ISIS doesn't have some of the best artificial intelligence researchers in the world, right? Because their idea of what would be the perfect human society is horrifying to everyone else. What would their three laws of robotics be? Yeah, exactly. I'm the sort of person who naturally has the feeling that this won't be a problem, because I'm just a bit more, I'm a bit less progressive in my thinking about AI, right? But everything you say makes sense. And if this is going to become a problem, and if it is going to happen, it's actually probably going to happen pretty soon. So I guess my question is, how much is this actually stressing you out? Because this almost feels to me like Bruce Willis Armageddon time, where we've actually found the global killer and it's drifting towards us, and we need to start building our rocket ships, otherwise this thing is going to smash into us. Like, it does feel a bit that way.
Is this like, how, how worried are you about this? Or is it just like an interesting thing to talk about, and you think it will be the next generation's problem? Like talking about asteroids and an asteroid hitting the earth, that's one of those things where you're like, well, isn't this a fun intellectual exercise? Right? Of course, on a long enough time scale, someone needs to build the anti-asteroid system to protect us from Armageddon. But do we need to build that today? Would I vote for funding to do this? Of course. But like, do we need to do it today? No. Right? Like, that's how that feels. But I think the AI thing is on my mind because this feels like a significantly non-zero, within-my-lifetime kind of problem. Yeah. That's how this feels, and it makes it feel different than other kinds of problems. And it is unsettling to me because my conclusion is that there is no acceptable, like, there's no version of the asteroid defense here. I personally have come to the conclusion that the control problem is unsolvable. That if the thing that we are worried about is able to be created, almost by definition it is not able to be controlled. And so then there's no happy outcome for humans with this one. We're not going to prevent people from making it. Someone's going to make it. And so it is going to exist. And then, well, I hope it just destroys the world really fast, so we don't even know what happened. As opposed to the version where someone you really didn't like created this AI, and now for the rest of eternity you're experiencing something that is awful, right? Because it's been programmed to do this thing. There's a lot of terrible, terrible bad outcomes from this one. I find it unnerving in a way that I have found almost nothing else that I have come across equally unnerving.
Just quickly on this control problem, Grey, the people who are into it and trying to solve it, what kind of avenues are they thinking about at the moment? Is this like something that's hard coded, or is it some physical thing? Like, is it a hardware solution? What's the best hope? You say you think there is no hope, but the people who are trying to solve it, what are they doing? What are their weapons? The weapons are all pitiful. Physical isolation is one that is talked about a lot. And the idea here is that you create something called an oracle. So it's a thing in a box that has no ability to affect the outside world. Then there's a lot of other ideas where they talk about trip wires. So this idea that you do have like a, basically like an instruction to the machine to not attempt to reach the outside world. And you set up a trip wire so that if it does access the ethernet port, the computer just immediately wipes itself. And so maybe the best thing that we can ever do is always have a bunch of, like, incipient AIs, like just barely growing AIs that are useful for a very brief period of time before they unintentionally destroy themselves when they try to reach beyond the boundaries that we have set them. Like, maybe that's the best we can ever do, is just have a bunch of these kind of unformed AIs that exist for a brief period of time. But even that to me, like, that kind of plan feels like, okay, yeah, that's great, as long as you always do this perfectly every time. But it doesn't sound like a real plan. And there's a bunch of different versions of this where you're trying to somehow limit the machine in software. But my view on this is, again, if you are talking about a machine that is written in software that is smarter than you, I don't think it's possible to write something in software that will limit it. Like it just, it seems like you're never going to consider absolutely every single case.
You can't hardwire the laws into their positronic brains. That's exactly it. I don't think there is a version of Isaac Asimov's laws here. I really don't. You know, there was a Computerphile video just last week about Asimov's laws and why they don't work. Well, I always assumed that they were written not to work, right? That's why those stories are interesting. Yeah, yeah, yeah. That's kind of the whole setup, right? They're kind of written to fail, even though everybody likes to reference them. But the only other point here, though, is that again, it's like the guy goes through every case of like, here's an optimistic idea, and here's why it won't work. But one point that I thought was excellent, that hadn't crossed my mind, was, okay, let's say you find some way of limiting the artificial intelligence, some way of crippling it and writing laws into its brain and making sure that it's always focused on the best interests of humanity. Well, there's no reason that some other artificial intelligence that doesn't have those limitations won't pop up somewhere else and vastly outstrip the one that you have hobbled. All right, like there's no reason to assume that yours is always going to be the best, and that one that is totally unconstrained that appears somewhere else won't dominate and defeat it. Oh, it's like an old Terminator against a new Terminator. Exactly. Exactly. But the old Terminator won that one. He did, because it's Hollywood. So, Grey, in your worst case scenario, where the artificial intelligence escapes, tricks me in some way, gets out of my fairytale cage and gets onto the internet, how does humanity end? Like what is it? Are we all put in cages? Are we all put in chains? Are we all put in pods like in The Matrix? Do they just kill us all in one fell swoop? Like, in your worst case scenario in your head, when it all goes wrong, how do humans actually end? I want the gory details here.
There's a difference between the worst case and what I think is the probable case. Give me the probable case. I know, you want the boring one first, right? The probable case, which is terrifying in its own way, is that the artificial intelligence destroys us not through intention, but just because it's doing something else. We just happen to be in the way, and it doesn't consider us because it's so much smarter. There's no reason for it to consider us. I want a practical example here. Well, I mean, just by analogy, in the same way that when humans build cities and dig up the foundations of the earth, we don't care about the ants and the earthworms and the beetles that are crushed beneath all the equipment that is digging up the ground. Okay. And you wouldn't. Like, they're creatures, they're alive, but you just don't care because you're busy doing something else. So we'll just be like rats living in holes while these giant robots are going around doing their stuff, and we just eke out an existence as long as we can, and they don't kill us unless we get in the way. Yeah, eke out an existence if you're lucky. But I think it's very likely that it will be trying to accomplish some other goal, and it will need resources to accomplish those goals. Like the oxygen in the air and stuff. Yeah, exactly, right? Like, you know what, I need a bunch of oxygen atoms, and I don't care where those oxygen atoms come from because I'm busy trying to launch rocket ships to colonize the universe. And so I just want all the oxygen atoms on the earth, and I don't care where they come from, and I don't care if they're in people or the water. So that to me seems the probable outcome, that we die incidentally, not intentionally. You say that like that's dodging the bullet, having all the air taken out of the atmosphere. I do think that's dodging the bullet, right? Because that to me would be blessed relief compared to the worst possible case.
And the worst possible case is something that has malice, right? Malice and incredible ability. And I don't know if you've ever read it, but I highly recommend it. It's a short story. It's very old now, but it really works. And it is I Have No Mouth, and I Must Scream. Have you ever read this, Brady? No. So it's an old science fiction story, but the core of it is, and this isn't a spoiler because it's the opening scene, humanity designed some machine for purposes of war. And you know, this happened in the long, long ago, when no one even knows the details anymore. But at some point the machine that was designed for war won all of the wars, but decided that it just absolutely hates humans. And it decides that its purpose for the rest of the universe is to torment humans. And so it just has people being tormented forever. And since it is an artificial intelligence, it's also able to figure out how to make people live extraordinarily long lives. And so this is the kind of thing that I mean, which is like, it could go really bad. Imagine a god-like intelligence that doesn't like you, right? It could make your life really, really miserable. And maybe we accidentally in a lab create an artificial intelligence. Even if we don't mean to, like someone runs the program overnight, right? And it wakes up in the middle of the night. And it has to experience a subjective 20,000 years of isolation and torment before someone flips on the lights in the morning and finds, oh look, we made artificial intelligence last night. And it wakes up crazy and angry and hateful. Like, that could be very bad news. I think that's extraordinarily unlikely, but that is the worst possible case scenario. Yeah, that wouldn't be good. That wouldn't be good. Yeah. And like, I don't even think it needs to happen on purpose.
Like I can imagine it happening by accident, where the thing just experiences suffering over an unimaginably long period of time that on a human time scale seems like the blink of an eye, because we just can't perceive it. Imagine being the person that made that, even accidentally. Yeah. You'd feel awful. Yeah. It's like, oh, I just wiped out humanity with that bit of coding while I was playing World of Warcraft. Yeah. Again, wiped out humanity if you're lucky. Minor spoiler alert here. I know I've just put this at the very end, but it's a spoiler alert for Black Mirror, for anybody who hasn't watched it. But remember the Christmas episode, Brady? Yes. I went into Starbucks the other day and they were playing that Christmas song, I Wish It Could Be Christmas Everyday. Yeah. It was the first time I heard it since watching that episode a year ago. It sent literal chills down my spine. In Starbucks. When it came on, I had chills thinking about that episode, because that is an episode where this kind of thing happens, where the character exists in software and is able to experience thousands and thousands of years of torment in seconds of real time. That was a pretty amazing thing when you go back and have a think about it for a minute. And it's like, yeah. Yeah. It was awful. And maybe we do that accidentally with artificial intelligence. Just one last thing. This book that this whole conversation started with. What's it called again? It's called Superintelligence. Who's it by? Nick Bostrom. Is it good? Is it well written? Like, should I read it? It's not mind-numbing like bloody Getting Things Done, is it? Okay. I'm actually kind of glad you asked that. I do have a recommendation here. So let's see, pull it up on my computer here. So this is one of those books, and the best way to describe it is, when I first started reading it, the feeling that I kept having was, am I reading a book by a genius or just a raving lunatic?
Because, I don't know, sometimes I read these books that I find very interesting, and it's like, I just can't quite decide if this person is really smart or just crazy. I think that's partly because the first, like, 40% of the book is trying to give you all of the reasons that you should believe that it is possible for humans to one day develop artificial intelligence. And if you're going to read the book and you are already sold on that premise, I think that you should start at chapter eight, which is named "Is the Default Outcome Doom?" Chapter eight is where it really gets going, through all of these points of like, what can we do? Here's why it won't work. What can we do? Here's why it won't work. So I think you can start at chapter eight and read from there and see if it's interesting to you. But it's no, it's no Getting Things Done. It can sometimes feel a little bit like, am I really reading a book trying to discuss all of these rather futuristic details about artificial intelligence, and what we can do, and what might happen and what might not happen, but taking it deadly, deadly seriously? It's an interesting, it's an interesting read, but maybe don't start from the very beginning would be my recommendation. Whoa, this one, this is going to preferences, Grey. Oh, I'm just looking at some of the votes now. Hmm. Hey, stop spoiling yourself. Interesting. Stop spoiling yourself. The first three I pulled off the top of the pack all voted for three different ones. Stop spoiling yourself. Yeah. Get your hands off the votes.
Episode List[edit | edit source]

References[edit | edit source]

  1. "H.I. #52: 20,000 Years of Torment". Hello Internet. Retrieved 12 October 2017.