
A Smarter World


Dan Jeffries

This week on The Futurists, Dan Jeffries, Managing Director of the AI Infrastructure Alliance and CIO at Stability AI, talks about the doomsayers attacking ChatGPT and the overblown fear of AI. Jeffries argues there are historical precedents for human adaptation to a disruptive technology like AI, but that learning to live with another intelligence might be a bit more challenging.


Transcript


this week on The futurists Dan Jeff I don't think that there is anything inthe world that does not benefit from more intelligence there is nobody out there saying I wish my supply chain wasdumber and I wish drugs were harder to discover and cancer was more of a[Music] problem hi there and welcome back to another episode of the futurists I'm RobTurk with the world traveler Brett King back on the ground hi bre good to seeyou I'm back home so that's great um super nothing flying back and getting torecord a podcast first thing in the morning but great to see you great that you're back I think I've got I've thinkI've got a podcast recording um every day this week so you'reprolific speaking of prolific here comes a really really bad transition speakingof prolific Our Guest is a prolific writer uh an author who's been talking and writing about the future about thefuture and about AI for quite some time um and um though he can go verytechnical he's also created a tutorial for people called learning AI if you suck at math that has been wildlypopular some of his posts have been read five million times and um today he isthe managing director of the AI infrastructure Alliance and you can find him onsubstack where he writes the future history uh newsletter so give a big welcome to Dan Jeff hey Dan so it'sgreat to see you we were introduced by a mutual friend uh I was ventilating to that friend about the Doom the thedoomsayers all the fear and and and you know anxiety it seems to be getting drummed up about artificial intelligencewhen you know it's a new technology that presents all sorts of uncertainty all sorts of change but that doesn'tnecessarily mean it's the end of the world now can you comment on that because I know you've written prolifically on thattopic well I mean the end of the world we've been we've been doing the end of the world for two million years right uhit's whether it's Boulders from heaven or aliens or the gods steam powered Loum steam powerered looms you 
know that's right yeah that's right yeah they we you know the godswere going to kill us the tides were going to kill us everything's always going to kill us uh and you know we got this kind of Doomsday cult rising up nowuh about artificial intelligence and artificial intelligence it's strange right we like to think ofourselves as as totally unique the the only sort of super intelligent creature on the planet and so the idea thatthere's uh you know another thing that could be intelligent is scary to some folks and it's turned into it's kind ofjumped the shark into let's bomb all the data centers you know kind of thing with you know Hud's time article you knowhumans have always always with 100% uh success rate adopted technology becauseit's not outside of themselves it's it's not outside of us it is it's a part of us it's a reflection of us and we adaptto it and we adjust to it it doesn't mean there's never any any I don't know any bumps in theroad there are but the idea that you know this time is different I don't knowI'm going to go bet with like two million years of of history at this point maybe it is different but maybe aboulder will come from heaven and smash the Earth too I think the probability is pretty low you know I think the thingthat's different this time you've already uh said that the one thing that's different is you know we we'venever had to compete with another Intelligence on the planet um that that's the equivalent of ours you knowum I think we do a very poor job of recognizing the intelligence of other creatures that already exist on theplanet incidentally but um but the the the um amplitude of change this time ifyou like um comes for me from the the effect it's going to have onconventional philosophy around things like capitalism and governance and so forth right particularly I mean if youlook at Adam Smith you look at um you know how we respond to supply and demand economics you know as demand for aproduct goes off it's more workers get thrown 
into the mill right you know that's how we've handled it for thefirst time we have a technology in you know in the last 300 years that is goingto break that patent potential or significantly change it where we will no longer require human workers to producethings and particularly if we start getting closer to AGI although we don't needthat the potential for AI to disrupt a wide range of employment simultaneouslyis fairly significant so I understand why people are freaking out but is itthough like I mean question is is it though is is it what like it is thepotential for massive disruption just is it a fantasy or or is it or is it a realthing right I mean and you know like I look around and whenever I have a problem trying to understand something Itry to ground it in reality right and so I look around now and I've been readingyou know robots take all the jobs I mean there's you just Google that job you Google that story right and there's amillion articles with the same title they could have been written by chat gbt um you know so when I when I think aboutthis I go okay but what's happening in reality I see I don't know a 100 million people using CH gbt in two months I seeVCS pouring a ton of money into it I see a whole bunch of new companies I see a whole bunch of new academic you knoweconomic activity right so the the job thing hasn't hasn't quite materializednow it could still materialize but I would also add we've already destroyed all the jobs in history multiple timesright you know there it used to be 100% of people worked in hunting Gathering or agriculture now it's 3% of the planetright I know that uh you know you flick on the light switch and that unfortunately means that you know allthe Whale hunters are out of business and you know but I don't know that anybody is clamoring for a return towhale hunting so we can kill them and dig the white gunk out of their heads I'm sorry that the lamplighters are gonebut the thing is like this change doesn't happen 
instantaneously right like there'sdisruptions the idea though that the is some sort of exponential you knowexplosion that could take off in a way that we couldn't fully understand I suppose it's possible but I see just asmany trajectories in the future where the current technology is disruptedright like either we have a breakthrough and Alignment or I don't know AG Auto GPT has a 15 to 30% error rate yeah interms of its logic you know there's no guarantee that we can solve that problem tomorrow right so I there's certainly achance for kind of an exp enal explosion of recursive intelligence but I don't know it's like sometimes when I look atthese things and they go oh you know gbt is going to escape I'm like where's it gonna escape to another eight-way h100cluster with 8 terabytes of ram like a kubernetes cluster and a vector database on the back end right like I just don'tunderstand human I don't even think the chat GPT thing is is is an issue I youknow I'm not sure that um that chat gbt is or or that underlying technology iswhat's going to lead us to to AGI there's a bunch of other um you know types of AI Pursuits you know that arethat are you know currently in development that could um you know potentially Le lead down those paths soit's it's I think the the conversation around chat GPT is an issue the hype right now is that the you know Microsoftreleased this art this paper research theoretically a scholarly research paperuh talking about Sparks of AGI and that's really what triggered this concern about art general intelligenceit first of all I don't buy it secondly even the people who wrote it they said look here are some criteria which youcan you can object to like they set forth their criteria that was very fair uh and it say doesn't meet all of themit meets some of them that's why they say it's Sparks of general intelligence and so I guess the fear there Dan isthat um we're going to see some emergent behavior that we previously didn't assume if the if the 
model's largeenough we know that there are emerging behaviors and so some people are saying well could could general intelligenceemerge I think that's a pretty big leap or fairly low probability but just this week there was an article with NickBostrom in the New York Times um where he talked about uh he talked aboutemergence and he said look if you can accept the premise that there might be asmall amount of sentience that that that something could have small percent like a an animal would would you say ananimal has some self-awareness well sure you probably would go with that would you say a plant does well no but youknow it responds to light and dark well maybe it does you know maybe so if you can if you can go if you can follow thatlogic then you might say well then these emerging systems might actually have some level of self-awareness uh you knowsome small degree of sentience what's your perspective on that because I think that that's wherethat's where you get on that slippery slope where it's like you can't just say zero probability well it doesn't scare me Imean look I think there's lots of different kinds of intelligence I mean we don't necessarily think of squirrel is intelligent but you know try to keepthat squirrel out of your garden and you'll realize how intelligent that squirrel is because it's optimized to get into your garden right so there arelots of different kinds of intelligence in the world we're not the only creatures on the planet that haveintelligence we have different kinds of intelligence and so that's one thing the second thing is if you look at theemerging capabilities uh and obviously as the large language models have been scalingup there's a great graphic I've used where it kind of shows a tree bloominguh that came from the Palm uh paper by Google and it like shows different emerging capabilities you know chains ofreasoning understanding a joke Etc generally that's been more beneficial orcomplex emerging capabilities right so yes it's 
possible that we get a kind ofartificial general intelligence out of these just massive scale right and I've actually thought that one of the ways wepossibly get there as some kind of connection to the connectone projects where they're trying to map the entirebrain and maybe we just have enough compute essentially to simulate now people have tried that before but theytried it probably before we had a deep up understanding and at some point you just reverse engineer it you look at ityou go well that's lighting up over there like people worry about that because they go oh my God it's a black box I'm like well you're a black box youdon't know how you made decisions yesterday I mean we used we know how Consciousness works today right but weuse black boxes all the time people made bread for 2,000 years that they couldn't see yeast they didn't know that what washappening below the thing though but they they understood the outcome so I think when when I see these kind ofscary things from Bostrom and everything the paperclip maxifi I go man this is like a this isn't scary to mean this isa complete like breakdown of critical thinking ability like this it's like it's basically an AGI python script gonewrong like if if we have these complex emerging capabilities wouldn't we have complex you know intelligence thatemerged as well more likely than like a python script G crazy wouldn't we have other intelligences that are able tolook at it as a watcher and contend with it other human beings and augmentation and a million other things that likeinterrupt the flow of like the the the stupid thing turning the universe into paperclips it just to me I've looked atthese things and I go man you you start off with some really interesting ideas alignment is hard right like you knowyou don't necessarily have values and alignment and and intelligence linked up but at the same time I look at and goyeah but I mean you're going to tell me that something that's super intelligent is going to go 
worry about maximizing paper clips it's not just going to spawna python script to do itself I mean I'm pretty smart I probably would just spawn a python script to do it myself ratherthan spend all my time obsessing good point all these AI Doom scenarios where where the fear is ageneral intelligence that gets out of control or a super intelligence they all require that super intelligence to bestupid in one unique way like like the paperclip maximizer which for the people are listening that's a premise that wasput out there that if you wrote an artificial intelligence that was optimized to generate paperclips it would start to look at all the resourcesin the world including human beings as raw material for paper clips um and then it would go out of control and just turneverything into a paperclip so that's not really intelligent that's kind of like a massive blind spot that you wouldcall you know artificial stupidity I think uh so so all of these Doom scenarios require that we have that wehave we assume some massive flaw or some massive blind spot that's to the extent that we're monkeying around with blackboxes we don't we do have the blind spot that's a reality right we don't know yeah uh the emerging capabilities thatwe've seen so far with large language models those were unpredictable including writing code that was notsomething that was written nobody designed that into the system that emerged the the one thing that isinteresting with some of the AI that we've seen um you know and uh if if youread Eric Schmidt and Henry Kissinger's book on the age of AI they have some excellent examples of this right whereAI is already doing things outside of the bounds of human logic so there is anelement of you know we are getting some results out of this that we didn'tnecessarily expect and we can't explain exactly how how it happens in thosealgorithms but we do see evidence of that already you know yeah yeah that's I mean that's been well documented withchat GPT and now with 
GPT 4 uh new capabilities seem to arise all the timelike every week which is exciting scaled up I mean we never had a machine thatcan improve it's also cool to have something that competes with humans that makes usrethink our logic you know um and I don't necessarily think when we talk about alignment um Dan you know uh youknow the AI pause letter which everyone's been talking about obviously um you know uh uh from the future ofLife Institute I think it is um that that published that which you know I've got some questions about that as wellbut um when we talk about alignment it um you know the the the view that if wedon't set some regulatory uh guide rails if we don't set ethical standards for AIto operate within that we will never be able to claw it back I don't necessarily agree with that either um but but thisissue of alignment you know and if we are looking at the emergent uh emergenceof something that will compete with us in terms of intelligence that isn't itreasonable to have ai at least us attempt to get AI to work within somesort of ethical constraints that fits human values yeah I mean look there's a lot ofpeople working on the alignment problem we don't need a pause right like darp has poured money into it anthropic isworking on you useful things right now reinforcement learning reinforcement learning with uh human feedbackreinforcement learning with AI feedback where you have another model critiquing it uh and like aligning it to aconstitution we're going to see more and more of of these kinds of things there's there's AI middle wear coming that ofkeeps things on the guard rails there's lots of stuff it's not like suddenly people just started thinking about thisand I think we're probably gonna need AI to regulate AI actually you're gonna watch your AIright you're gonna have Watcher AIS and see and that's that's going back to even an earlier part before we go too far down the alignment thing the other majormistake that I think a lot of these super 
intelligence gone crazy things make is what I call like the classicSci-fi problem right it's like in modern sci-fi multiple people have a computer right it's not very interesting if oneperson has the cell phone but in classic Sci-fi like one guy would have the submarine and there's no other submarineon earth right and that's not how technology develops we're gonna have lots of different a we're gonna have lots of different like minor modelswatching the other ones right you know people go oh my God they're going to use it to like write attack programs andmalware I'm like sure we're also going to have Pro AI programs that are like stopping the malware and reest you knowrecursively fixing the code in real time like faster than any human good like fugging the hole this is how AI is usedalready right so already most people don't realize it because they're not consumer facing applications of AI butwhen you use search um when you're placing ads when you're getting a recommendation on Netflix um when you'reusing your credit Cod when your when your email is being screened for spam you know there there's an AI at work inthe background the only thing that changed here is that with chat GPT they put it in front of people and you could talk to it and that was definitely cooland definitely weird and new and startling I haven't seen a job that it can replace yet I've been studying chatGPT and I'm trying to say like okay here's the job that is going to go at first I thought it'd be copywriters butactually the copy that it write like the advertising copy it writes spam and it's being used for that so it'll generate a lot of spam I guess um you think wellmaybe legal writing can go but at the end of the day um you're still going to need a professional attorney so thismight be something that helps that attorney generate drafts faster maybe it'll make law cheaper but you're notgoing to repl that attorney with an AI anytime soon and even coding um as you said earlier you know it 
generates codethat's incredible and sometimes it works cool but if you don't know what you're doing you can't use that code because you have to double check I think youeven wrote a blog post about that where you know you have to go back and look at a bash script and make sure that it does what it says you can test it but thatrequires a human being who knows what he or she is doing the T you know if you look at the top coders and I'm I'm I'mbasically if if I fire up a company like I'm thinking of doing I'm going to require the coders to use this stuff and do the high level thinking I we're goinginto a centaur era where you do a lot of the logic and and I've read you knowblond post from some amazing you know coders written millions of lines of code and they're doing the high level thinking they and and they and to themthey're getting fiber six things done a day right they couldn't do a tool in the past because they couldn't explain it to their bosses because they didn'tunderstand why you'd write a tool now they can write two tools a day they can do two or three PRS and at the end of the day they're not blown out andexhausted you know one coder wrote it was like chatting with a buddy half the day right and and you know I look at itand I go you know I have a I have a a friend who started writing a Blog and she speaks like five languages right andEnglish isn't her native language so a lot of times she would you know write it she can think very well but she would write it and she you know have chat gbtlook at it and and then or or I would look at it well you know she wrote a blog post that was pretty good andimmediately me as a professional writer I looked at it and knew what was wrong you know it was using like very essaylike word choices like individual instead of people or we right that the language there were too many B verbconstructions all the paragraphs were the same size instead of varying it which makes the eye get tired now that'slike 20 years of experience of 
writing I immediately said go back make it more colloquial get rid of words like individual tell it to use colloquiallanguage and you know she did that and then immediately it came back with a better version and and still like thepunchline I looked at it was like okay wait a minute it doesn't really doesn't really Slam at the end like how can Imake that work so I think actually the most skilled people are going to have alevel up and then people go oh my God you know a 10x programmer it's going to you know make the the the juniorprogrammer you know invalid no that's going to make the Junior programmer 10x and the the super programmer 100x rightI think it's I think it's man plus machine right now rather than Man versus machine you know so people looking forthat competitive commercial Edge they're going to be using AI to you know reducethe cost of output um you know to expedite uh you know performance thingslike this but you know we're not at the stage where AI is taking jobs wholesale but you know we are also not preparingfor for that transition you know I agree with you somewhat Dan that um and this is what I'd like to get into after thebreak but that um you know we we've always adapted um but you know we haveseen the pain of adaptation in the past and and the the con here is that thespeed of which when this does when it is ready for prime time and it is taking jobs the speed with which it will beable to do that is is the question right you know what what uh you knowwe're not doing a lot to prepare that transition if we do have large scale technology unemployment because of AIwhat's the government's policy on that how are we going to deal with that you know do we ban AI do we come up withuniversal basic income you know what what are the what are the strategies H so should we be planning for thisdisruption or do you think it's just you know we just let it happen and and respond in real time I mean I think wewe let it respond in real time and we're talking about 
regulating something that doesn't exist we're talking about a fantasy problem and I think this is a achange in our thinking in general as a society right where we're like well the kitchen knife manufacturer can't put outthe kitchen kniv unless they can guarantee nobody ever gets stabed with it like it doesn't matter that 99% of the people are going to cut vegetableswith it right and we we didn't used to think that way like when they when they did have a pause in genetics they cameup with a policy that said you actually had to you know prove harm today we use this harm very Loosely right it's likeif I stab you in the chest that's harm if I won't rent to you because of your skin color that's you know that's harmright but if I if I make you know off color joke or insult your religion that's not harm you may not like it butbut that doesn't that's not harm right so when I look at these kinds of things I go we have to be able to deal withthese things in real time we have to be able to prove that something actually happened and a bunch of people writing abunch of papers about something that's supposedly going to happen you know the population bomb was pretty sure we weregoing to run out of food right and and like current the current trajectory it couldn't see the Green Revolution aroundthe corner that changed the trajectory in the future right and if we'd all prepared the regulation for the factthat like two million people were inevitably going to starve that regulation would lookthat's a fair point that's a fair point you know now we're concerned about whether the human human race is going toshrink out of existence right because the decline population growth the demographic Trends so um in the secondhalf we'll take this out uh into the future a little bit more we'll start to look at where things are heading beforewe do that though we like to get familiar with you we like to ask you a series of short questions uh that'll help our audience understand whereyou're coming 
from so Brett's going to do that he's going to ask you five short questions [Music]now I normally do this around sci-fi Dam but I'm going to make it a little bit more AI Focus so what was the first timeyou remember being exposed to the concept of artificial intelligence uh via TV booksresearch uh so you know college in the 1990s and I had a torent of all theseold papers the perceptron and lisp and and I was just I just started readingthem because I was really into computers and I I thought gosh there's something magic in here I didn't know how to unlock it I thought it was magic and youknow I read Neuromancer around that time as well and uh I just fell in love with the with the concepts uh of of whatartificial intelligence would look like probably even goes back a little further to the azimov series when I when I was a kid as well right but that was mostly aliterary construct masquerading his artificial intelligence but pretty cool one what technology do you think hasmost changed Humanity oh I mean there's no contest the most important event in history isprinting press right it single-handedly leveled up the entire collective intelligence of humanity it you knowkicked off the Scientific Revolution so that we you know uh we didn't think witches caused storms we could actuallyfigure out what caused storms we didn't think you know daggers would bleed next to the murderer we could actually figureout things like fingerprints to catch murderers so to me it's the most profound by a huge huge uh leap yeah wehad someone else say printing press recently as well I couldn't remember who was but they they said the compucomputers and the internet have just been an extension of that which I think is interesting um name a futurist or anentrepreneur that has influenced you and why H futurist and entrepreneur I don'tknow there's so many uh interesting folks who've kind of been around over time you know I don't knowit's maybe maybe lineus tvols I just I love open source and uh you 
know Red Hatwas a massive part of my career in my life I went to work in Linux when when a recruiter said you know all the jobs arein saris and I said it won't be here in 10 years and they looked at me like I had two heads so I think open source haschanged the game in terms of software and uh almost you know anything else in my lifetime it's been super excitingafter the break I want to ask you about open source and AI but let's bookmark that for just a minute sure and finallywhat science fiction story is most representative of the future you hopefor in respect to AI oh I you know I've been writing a couple short stories now because of the theanswer to this question is none right I generally think most of the stuff that's written about AI was written before AIexists with no no actual thing to tether it to reality so they basically wroteeither the robot buddy or the AI gone crazy or the homicidal maniac so there's two villains uh and it's basically justputting a you know an antagonist in a in a metal body or a buddy in metal bodybut there's no 's no actual relation to to reality so I think we're we're duefor a series of stories that are now grounded in how artificial intelligence is actually developing and I'd love to see an explosion of new science fictiontaking into account like how it's really developing awesome well let's get onto it that's it for the quick fire roundyou're listening to the futurist we'll be right back after these this break andI word from uh sponsors provoked media is proud tosponsor produce and support the futurist podcast provoke FM is a global podcastNetwork and content creation company with the world's leading fintech podcast and radio show Breaking Banks and ofcourse it's spin-off podcast breaking Banks Europe breaking Banks Asia Pacific and the fintech 5 but we also producethe official finovate podcast Tech on regg emerge everywhere the podcast ofthe Financial Health Network and NextGen Banker for information about all our podcasts go to 
provoke FM or check outbreaking Banks the world's number one fintech podcast and radioshow welcome back to the futurists I am your host Breck King with my co-host RobTurk um one of the things we like to do is do a bit of a deep dive so in respect to AIone of the most vocal proponents SL opponents of AI and it's March ofprogress in recent times has been none other than entrepreneur Elon Musk so hejust announced some new moves in the space Rob what can you tell us about it so you're right Brett when it comes toAI Elon musk's fingerprints are all over it there were reports last week and overthe weekend that Elon Musk is relaunching his business X um as x. and he did just purchase 10,000 gpusgraphical processing units for a new generative AI project inside of Twitterand that's to put that in perspective that's about how many uh processors open AI uses to serve GPT 4 so it seemspretty clear he intends to create his own generative AI project um and he has been conferring with uh and actuallyhired one researcher from deepmind uh to build a new AI lab so it seems like he'sembarked on this now what's interesting about that is Elon Musk has also been an outspoken critic of AI as you justpointed out uh for many years he's he's been tweeting and saying and interviews that AI is dangerous he said AI is likesummoning the demon and claim that it's worse than nuclear weapons uh and in 2017 he told State Governors that AI isthe biggest risk we face as a civilization um now what's interesting about that is that Tesla which is one ofthe many companies he's the CEO of uh Tesla uses AI they have a very active AI development program and during the lasttwo years uh Tesla has been celebrating AI day so that's you know sort of like athing they're very very bullish on of course Tesla vehicles are equipped with autopilot which is a kind of AI anonboard AI system that processes data from eight different cameras on the uh on the car to generate a real-time 3Dmodel that can identify vehicles 
and pedestrians and so on and sometimes that system doesn't work um so uh you know wealso had some spectacular crashes with Tesla vehicles that have been using or relying on um on auto I can tell you itit it it's not ready for prime time yet in my Tesla Model Y no some respects you can say for a guy who's been telling usthat AI is dangerous for about 10 years uh his company is responsible for more deaths at the hands of AI than any othercompany on the planet so he's sort of like number one right now in terms of killer AI um now interestingly he's alsoone of the original Founders and the original investor in open AI uh he started that with along with PayPalFounders Peter steel and um and Reed Hoffman um and in in 2018 or around 2018he tried to take over opening AI maybe it was 2017 um and it kind of bungledand the rest of the founders rejected it and so they pushed him out um and umthey had already been upset with him because he had poached some of the people from open AI to work on Tesla's autopilot um so he left and at thatpoint um he had made a pledge to contribute a billion dollars to open AI so that they could scale up the trainingof the large language model for the very first version of gpt1 um but of course when he left hereneged on that promise and that's what caused the new CEO Sam Alman to turn to Microsoft for funding and infrastructuresupport uh so in a sort of weird paradoxical way he founded open Ai and then by leaving caused open AI to gopartner with Microsoft which of course he's been complaining about ever since because he says I named the company opena open Ai and now it's a closed for-profit company that's working with the most evil Monopoly on the planet umso there's a there's a complicated story behind all that and then um in additionMonopoly instead of theirs well so this is the thing you know it's always there's always a couple different angles and they always seem to like conflictwhen it's when it's about you know must so in addition to all 
that, he was also one of the most prominent signers of that famous petition from the Future of Life Institute. But that's not all: he's also one of the people who funded the Future of Life Institute, and he's a longtime supporter of Nick Bostrom, the founder of the Future of Humanity Institute at Oxford. In fact, Elon Musk introduced Nick Bostrom to Sam Altman at OpenAI, so he's kind of the spider in the middle of the web. So a lot of people have speculated, well, what's it mean? They speculated that the reason he signed that letter was basically to buy a six-month pause so that his Twitter startup could catch up to OpenAI. Now, it's not just me saying that. His longtime friend and PayPal co-founder Reid Hoffman said it. He sort of mused out loud in an interview recently, when they asked him about Musk's reason for signing that letter since he's already using AI in many ways, and Hoffman said: I think some of it's a little bit less well-intentioned, like, everyone else slow down so that I can speed up. So that's Reid Hoffman's take on X.AI
and Elon Musk's involvement. Well, he knows Elon pretty well, that's for sure. True. And so with that in mind, I was thinking it might not be a bad idea for us to go back and recap that letter, because, Dan, that's when you and I were introduced, when that letter came out. At that point about 2,000 people had signed this petition asking for a six-month pause; today, as of recording, about 26,000 people have signed it, and of course it's made zero impact in terms of slowing any company down. If anything, the companies are rushing. Just today, this weekend, Google has been announcing that they're doing everything they can to catch up to Microsoft, so it appears we've got an arms race underway. Anything we need to be concerned about there with the arms race? Are these big companies likely to suspend their AI safety regulations or their ethical AI parameters? Are they going to just rush ahead recklessly and throw safety to the wind? Well, there's a lot to unpack there. I would say, you know, I tweeted the same thing, that multiple CEOs, not just him, had signed it in bad conscience, right, that they just wanted to slow down the competitor so they could catch up, because they were training their own LLMs. And I go back to kind of a great person theory as well. When you think about things like, how do you assess someone like Alexander the Great? Is he a mass murderer, or did he unite all of the world and encourage different cultures to interact, encourage his soldiers to marry folks from different sides, and not murder the populace? Whatever the answer is, it's both, right? Great people don't fit into these boxes. It's a bit like Logan Roy in Succession: is he a horrible, pompous son of a [ __ ]? Is he also a person who gets things done in an amazing way, puts together deals, a person who plays
both hands to get what he wants? Yeah, that's what singularly powerful people do. So if you frame Elon from that perspective, it all makes perfect sense: you play all the sides that you want. Now, getting back to the letter: it has zero impact, no one's about to slow down right now. And the other thing is, I don't think people are slowing down their safety regulations. Again, go look: at OpenAI they spent six months getting the model to behave. They had red teams on it, they had people working to get the model more aligned. There's a 146-page paper, right, that I read through, where you look at the early things that it could do, where they tried to get it to say something anti-semitic that doesn't get picked up by Twitter, and it would gladly give you that, or tell you where you could buy illegal guns. They worked through all these things, and they're constantly upgrading the system to deal with it. I mean, arguably that's why they made it open to the public, because we're all beta testing it. You cannot fix things in an ivory tower. I don't know how we've gotten to this concept that you can fix things outside of the real world. Like, we put refrigerators out there, and then we realized sometimes the gas leaks and they blow up. That's a tragedy, it's horrifying, we're not happy about that, but you don't know that's going to happen until it happens. OpenAI spent a bunch of time worrying that it was going to be used for political disinformation, and it's been used for that about zero percent of the time. They didn't see spam, right? They didn't see that coming, and that's what happened when they put it out there. Humans are infinitely creative: we find exploits in political systems, in computer systems, in whatever we make. They also didn't know what people were going to use it for positively.
They didn't know anybody wanted to code with these models until they saw people coding with them, right? So the thing is, you have to put things in the real world, and yet today we go, oh, it's got to be perfect or you can't do it. Again, take this idea of self-driving cars. We talk about, okay, the Teslas have crashed or whatever. Look, I ask people, how many self-driving car deaths are going to be acceptable? Oh, you know, zero. Okay, cool. Humans are absolutely terrible drivers: they kill 1.35 million people on the road every year worldwide. Let's stop humans driving cars, then. Fifty million people are injured. So if it cut that by half, or down to a quarter, that's a million people walking around. We apply a different standard to the AI: the AI has to be perfect or nothing. Right, right, okay. But there is a group that is very concerned about something that we may consider low probability, that the three of us might consider low probability, but they say no matter how low the probability, it's something we all have to take very seriously. In fact, for them it's the number one concern. The group I'm talking about is the long-termists, as they're usually called. Often Nick Bostrom is considered to be kind of the ringleader, or the brains behind it. He's the author of Superintelligence, a professor at Oxford studying the ethics of AI among other things, and this group is really the group behind this petition. Their view is that we can't be messing around with a technology that, if it works, if it escapes, if it develops superintelligence, a lot of ifs, but there's a nonzero probability there, so it could happen, might happen, and they're saying if it happens, then the most likely outcome is that it's going to exterminate humanity. This is probably best expressed in that Time magazine article by Eliezer Yudkowsky, who is part of that group, and he said the letter didn't go far enough, so he
refused to sign it. It's such a human emotion, though, applying that logic to AI. Well, let me direct it at Dan, I want to hear what Dan has to say. Dan, tell us a little bit about this perspective from the Future of Life Institute, Nick Bostrom, and Eliezer Yudkowsky. I think, you know, even the blog LessWrong comes from a critical thinking concept, right? The name comes from the concept of making a less wrong choice. The way you think about it is: if I'm doing a diet, I plan out all the healthy eating and I count calories, but it inevitably fails because it's boring and tedious, it saps the fun. A less wrong choice is going to the grocery store, or going to the restaurant, and just scratching off the burger and the fries and getting yourself a salad with the dressing on the side. That's a less wrong choice, that's how you get closer, and you can maintain it more easily. So they've kind of co-opted that language and that critical thinking, but in many ways, and sometimes I read LessWrong and a lot of the guest posts are good, they've co-opted this language of clear thinking and corrupted it in kind of a Jonestown way. It's like, the more you study monsters, the more you start thinking that monsters exist. You study the dark arts and you're suddenly Mad-Eye Moody, looking around the corner all the time. My father always said, what you focus on expands. So you keep focusing on it, and it becomes this inevitability that we're going to be destroyed. But I think it's just based on a lot of weird, lazy assumptions. They talk a lot about these really logical scenarios, but the more you dig into
them and peel them apart and expose them to the light, they don't make a lot of sense. Like that example of the paperclip maximizer earlier: it sort of assumes that there's nothing else to interrupt the trajectory, or that we don't have any additional development along the way. It's like trying to explain a web developer to an 18th-century farmer: you can't do it, because there are 15 other technologies that develop in between that you can't see. So it's easy to imagine destruction; it's very hard to predict the trajectory of these things. And so I think the probability is immensely low, and I think a lot of the reasoning is wrong too. When you look at the paper that came out that said, oh, selfish species are the nature of evolution, I'm like, really? Humans are the most collaborative species of all time. We feel a kinship with 100 million other people, 200 million, a billion other people who have nothing to do with us. It's called a nation-state, right? People you might not give the time of day to, but we take it very seriously. You work in a company: how many of those people do you actually like? Let's be honest, you probably have two or three friends there, and the others you wouldn't give the time of day to, but you're all working towards a collective thing. And then they go, oh, other species don't collaborate. I go, gee, you know, there's a shrimp that digs out a hole in the ocean, and the goby fish goes into it. The shrimp cleans out the hole every day, and it's kind of blind, and the goby, which has great eyes, acts as a guard, and they both share the burrow. So collaboration, you know, it's based on these weird, faulty assumptions. And for me, I don't know, man, are we going to spend all of our time thinking about this, and then miss out on all the benefits of, like,
you know, better aging? I'm absolutely with all of that, Dan, but there is a need for a new set of regulations around AI, just as there's been a need for data privacy regulations in recent times because of the abuses of organizations like Cambridge Analytica. So we'd get a pop-up in front of all the AIs? Because that's definitely fixed things, right? No. A point along those lines was raised by one group that was very vocally against this letter, even though the letter aligned with a lot of what they had originally recommended, and the group I'm referring to is DAIR, which is run by Timnit Gebru. She was a former AI ethics researcher at Google, and in a story some may recall from a few years back, she was fired in the clumsiest way by Google, and then her boss, Margaret Mitchell, was fired just a few months later for similarly clumsy reasons. The story just kept going on and made Google look really bad. No one really knows exactly what happened; there's more to the story. But at any rate, one of the reasons Timnit Gebru was fired from Google is that she published a paper called Stochastic Parrots, about these large language models, and many of the criticisms she made, looking back now, because that paper was about five years ago, were actually pretty accurate. Her analysis was pretty accurate, so it's a little weird that Google fired her. But at any rate, she came out and was very critical of this Future of Life Institute petition, because she said all the attention and all the focus is going to go to this tiny, minute possibility of a superintelligent AI that's malevolent and destroys humanity and so on, and that's going to redirect resources and distract people from the real issues. What she feels are the real issues are the very real, tangible negative impacts, not just of AI but of
algorithms that are prevalent in society today. She'll point to things like systemic racism or discrimination, or the fact that some people can't get insurance or a car loan based on the ZIP code they live in, because, thanks to some algorithm, something in their credit report is correlated to the area where they live. She points out that this kind of discrimination and unfairness already exists and is algorithmically reinforced, and it should be getting the attention of legislators and regulators, and it's not, because all this doomsaying is worrying us about a sci-fi scenario, a very low probability scenario in the future, so we're not directing enough attention to the very real but mundane problem of integration, and the problem of equality, fairness, and equal justice. There's also a gigantic inequality issue, because who has access to these tools, and who's going to get the benefit from them? There is going to be an economic boost. So I guess the point I'm making is that we talked briefly about this group referred to as the long-termists, the philosophy is called long-termism, but I think there's a different group worth paying attention to, led by Timnit Gebru, and you could call them the short-termists. They're the people focused on the issues with AI right now, today, that are real, primarily in the financial industry or the insurance industry, where algorithms are deciding someone's fate, and unbeknownst to them it could be a kind of racial or economic redlining that's occurring. So there are two different perspectives on that letter, but that letter ended up generating so much controversy and yet achieved nothing, because here we have an arms race, and it's not just Google and Microsoft, who of course are
in a death struggle over the future of search, the largest computing category in the world; in the month of March more than 150 other apps were launched. Now I want to come back to that question I raised right before the break, which is: the combination of open source and artificial intelligence should be a kind of turbo booster, and it might be the thing that prizes this technology away from those giant companies. What's your take on that possibility? I love open source; I'm a big believer in it. I went to work at Red Hat, like I said, when there were 1,400 people, and I was there for 10 years. And a lot of times when we look at artificial intelligence, I agree that long-termism, or shortsighted-ism as I call it, is distracting us from the real issues, whatever we think the real issues are; there are a lot of them. I can think of ones outside of that very US-centric example. Think about it: if we are good with alignment, can a totalitarian regime utilize that, or use it to keep track of populations, those kinds of things? There are a lot of potential negatives. Now, when I look at open source, here's the other thing: you've seen pushback from corporations or governments on open source, these things are too powerful, blah blah blah. And I go, yeah, but when I look at something like Linux, the long-term possibilities, the things it has done, have been wonderful. And now Linux is used to attack things, right? It's used for sneaking in and attacking people, for penetration testing, all that kind of stuff. It's been used for malware, it's been used for all kinds of things. It's also been used in pretty much every supercomputer on the planet. You know, if Microsoft had been successful at destroying it in the old
days, they would have destroyed their new business model, because 90% of their cloud runs on it. Their supercomputers run on it, OpenAI runs on it, it's in every edge router, it's in everything. So should we have open source AI? Because the folks at OpenAI are against it; they're actually going in the opposite direction, going more closed. Yeah, and I completely disagree. I mean, when you look at the research coming out right now, with people finding different ways to align models, or adding adapters to the different large language models, which makes them faster to fine-tune, easier to align, and gives them new capabilities, all of these things are coming out of the open source community, and they're going to trickle back in. You're already seeing that. With Stable Diffusion, all the communities are blending models together, 15 different models, and suddenly they're as good as Midjourney. You see all the research papers taking these things and utilizing them. So look, I always feel that open wins out in the long term. It might lose in the short term, it could be an era of closed, but I think in the long term the benefit of everyone being able to poke and prod at it and inspect it outweighs a few folks having the keys to the kingdom. So do you think some of the doomsaying and the fear and gloom mongering is a kind of coded argument in favor of closed source and big companies, basically saying the only companies we can trust to handle this stuff, because it's like kryptonite, are these very large, established, self-regulating tech companies? Yeah, the so-called self-regulating. It seems like a self-serving argument. Yeah, I mean, look, there are always perspectives embedded in every argument, and one of them is that we can't have open source because people are too stupid
or too dangerous to be able to do these kinds of things, and so we're going to have these kinds of trusted, centralized entities. And it's like, well, you mean the people who were trusted to keep our credit card data private and leaked it to half the United States? The problem with this perspective of focusing on centralized trust is that it's an oxymoron. Once it's embedded into the system, the one entity you put in charge of the trust can fail your trust, because trust is not a fixed thing, it's a moving concept. It's the people there. If I stack the EPA with a bunch of people who think environmental protection is nonsense, I've destroyed the concept of the EPA. If I have incompetent people in that trusted entity, suddenly that entity is no longer useful, and you can't rip them out of the system. So I favor the openness. I favor that ability for lots of people to poke and prod at these things and build the technologies and also build the guardrails, and the guardrails work is happening in the open source community as well, and I think it's going to trickle back. That could be part of the solution, actually. So take us out to the future. Talk about recombinant artificial intelligence, where, let's say, it's more open. There already are models out there that people can use, and let's say we're able to compress the language models down so that you can run the AI on a smaller set of machines: you don't need 10 billion dollars, maybe it runs on a smartphone. Well, that's what's coming, right? So can you take us out to that future, Dan? Tell us what we might expect, or what you imagine might be exciting about a future where this is more democratized and more recombinant. Yeah, I think we're going to have gigantic superintelligent models, I think we're going to have medium-sized models, and
then tiny models at the edge, and they're all going to be communicating. I don't think there is anything in the world that does not benefit from more intelligence. There is nobody out there saying, I wish my supply chain was dumber, and I wish drugs were harder to discover, and cancer was more of a problem. So to me, I see this kind of embedded intelligence in every little aspect. You've got a million researchers. I don't see all the jobs disappearing, I just see an acceleration. I see the WhatsApp effect, where it used to take a thousand engineers and now you can do it with 50, but it doesn't mean you have less software, it means you have more. So I think we're going to have more companies, we're going to have more varied jobs, or some people won't even need to work. And I think you're going to have this kind of ambient intelligence everywhere: small models, huge models, gigantic models, all able to communicate. There'll be protocols, there'll be defenses, there'll be watcher AIs, there'll be regulation in there. But to me, every single aspect of our entire life is going to be upgraded with intelligence. The packages are going to be smart: they're going to know where they're going, they're going to know how to reroute or call out for help. You're going to be augmenting your old software tools with intelligence, and maybe a tool has a crappy interface, but it doesn't matter anymore, because you just talk to it, and the AI takes care of that crappy interface for you. It's going to speed up everything from drug discovery to material science to everything else. And if we're lucky, we're probably going to have some breakthroughs that lead us to things we can teach very quickly through mimicry, the way we teach children. You take them out in the backyard, throw them the baseball, and they're probably going to figure it out in a couple of weeks. They may not go to the pros, but they're going
to be able to understand it. People are working on things like that. You already see it with people designing games, people who don't write code, designing games with ChatGPT. They say, no, not like that, more like that. They're using natural language, and the GPT rewrites the game for them in real time, which is kind of astonishing. Right. I want to come back to your point about supply chain, because supply chain's been a focus of mine for many years, and it's not smart. It's a dumb supply chain; there are still people with clipboards, walking around with paper, checking items off a list. Now, there have been many attempts to apply blockchain to supply chain, and they've mostly failed, hundreds of attempts, and one of the reasons is that supply chains are closed; they're not open by design. Walmart doesn't want an open supply chain; they're quite content to dominate the one they have right now. And so blockchain and an open ledger don't really work in that circumstance. That said, there is a big effort underway, accelerated during the pandemic, to automate the supply chain: to apply robotics and automated systems all the way through, from factories to trucks to ports and even container ships. My thinking has been, for a long time, that blockchain is not for people; blockchain is for the AIs, for one automated system to hand over to another automated system. Same as digital money, or CBDCs, or tokens. Money is just one thing that can be transferred; there could also be pallets of goods, or cars, or ships and containers, and so on. And for the human beings, these AIs are black boxes; we don't actually understand how they work, so we can't audit the thought process of the AI. So we need a verifiable way to look and say, okay, but did the goods actually get transferred, and on what date? And that's where the
blockchain fits in. So blockchain in this scenario is for the AI, and it's a way for the humans to govern the AI. I'm not quite sure that scenario is fully baked. What do you think of that? What's your reaction when I share that idea? I mean, I've always felt that blockchain has been one of the most disappointing technologies for me, because I've been a fan of it for a long time, and I feel we've missed opportunities, like decentralized identification. We don't even have things like, when you call your bank and you need to reset your password, you could automate that protocol, the what's-your-dog's-name questions and all that, to reset your local wallet. We don't even have that kind of stuff, and I was writing about it five, six, seven years ago, so it's been disappointing to see yet another clone of a coin. I think people haven't leveled up to thinking about it as a protocol and as a way to do decentralized trust. That decentralized trust machine, in a hostile environment where people don't agree, is a way to come to a consensus, and I think that's the exciting part of it. I like the idea of machines being able to communicate with machines, and I like them being able to verifiably exchange information. I think we will eventually see good uses of the blockchain, maybe with photos: as soon as a photo comes off your camera, we're going to need a stamp on it that goes to the blockchain instantaneously, because you're going to need to be able to say, this is a verified photo taken by a human, versus a generated one. That's right. So I think we're going to start to see real new uses for it; it's going to be exciting. But going back to the supply chain: we are going to see artificial intelligence baked into it at every step, from the robotics to the tracking of things. It doesn't even have to be open for
that to be the case. There are huge companies that serve the shipping industry, and quite frankly, giant container ships are already basically automated: they're skyscrapers floating on their side, with about 10 people on them who barely touch the controls. So, I mean, you mentioned this earlier, that there's a whole bunch of things out there that are basically artificial intelligence, and there's a joke in artificial intelligence that once it works, we don't call it artificial intelligence anymore. Right, it's software. Yeah, it's just software, it's fine, it's just an algorithm, right? You talk to your phone AI, and it works. Well, great fun talking to you today, Dan. Thank you for joining us. This has been yet another episode of The Futurists, and it's been a great pleasure to have Dan Jeffries on. Dan, what's the best way for people to find you on the web? Twitter: Dan_Jeffries1, with the number one; if you don't put the number one, you're going to find someone who studies the asexual reproduction of tree frogs, so that could be interesting. He's an interesting fellow; Dan Jeffries and I have had a couple of fun exchanges. Dan_Jeffries1, or the Future History substack, is probably the best way to see my writings, and the thing I keep most updated. That's cool. Yeah, I found you on Medium and elsewhere, I think also Hacker Noon, but the most recent stuff, these really deep dives you've been doing into AI, is quite interesting, and you share a lot of technical acumen. You'll find that on Dan's substack, which is Future History. Well, thanks for joining us. This has been a fun episode, and we enjoy this topic very much. We'll be sure to have you back for some more future history in a future episode. And folks, we want to thank the people who have been helping us make this show possible: that's Kevin Hson, our engineer, and Elizabeth Sens, our producer, and the whole team at Provoke Media; they've been very supportive of this show. And I want to thank
our audience for listening. Thank you very much. We have been getting superb feedback from people who are listening to the show; it's growing nicely, which has been great fun to watch, and in fact we're going to hit 50k downloads this month. Yeah, that's awesome. The introduction to Dan actually came from someone I was talking to about the show, who said, you should talk to my friend Dan, and that's how this came about. We love that kind of feedback, and we welcome it, so please reach out to us on social media. Let us know if there's a question we should be asking, or an expert we should be talking to. In the meantime, we will be back next week with another futurist, and until then, Brett, I'll see you in the future. Well, that's it for The Futurists this week. If you liked the show, and we sure hope you did, please subscribe and share it with the people in your community, and don't forget to leave us a five-star review; that really helps other people find the show. You can ping us anytime on Instagram and Twitter at futuristpodcast with the folks that you'd like to see on the show, or the questions that you'd like us to ask. Thanks for joining, and as always, we'll see you in the [Music] future.
