
Holistic Principles and AI


Monica Anderson

AI researcher Monica Anderson guides the Futurists beyond first principles to the zeroth level of science, which is epistemology. Monica argues that science suffers from a kind of cognitive dissonance, relying upon a reductionist approach instead of holistic principles. In this episode, Monica explains why: AI alignment is trivial; malevolent behavior must be intentionally programmed; all problem-solving methodologies break down into two categories; complex problems differ from complicated problems; small syntax models outperform large language models; AI may be more impartial than a jury in court; AI will be required for licensed professions; home-schooled AI will be trained on your home computer; and a lot more.


Transcript


[Music] this week on the futurists Monica Anderson in the world at large I
mean is that science is going to lose its prime position as being the way we tackle the most complicated
[Music] problems hello and welcome back to yet
another edition of the futurists I'm Rob Turk with my co-host Brett King hi Brett
hey hey hey hey hey welcome back from your travels it's good to see you again you say it's like every week we start with
that right welcome back true you're a globetrotter I love it I love hearing about it so this time you were in Europe I was
London and Paris was a great trip um had Miss metaverse with me Katie so uh it
was we were able to hang out in Paris it was wonderful can't beat that Paris in the spring my favorite so while you
were traveling the world of AI continues to evolve and explode and expand and
there's new headlines and new stories and new papers every single day it seems like the topic we can't avoid uh
everybody we've interviewed on this show brings it up even um a couple weeks ago I interviewed an expert on on the future
of Law and of course the conversation veered into AI because it's an unavoidable topic you can't talk about
the future without addressing how AI is going to change it for sure so one of
the people who's been most refreshing in my news feed is an AI researcher who
calls herself an experimental epistemologist and she comes to us by way of Sweden her name is Monica
Anderson and I I wanted to invite her on the show for quite some time and it felt like this was the perfect moment so
Brett give a big welcome to Monica Anderson hey Monica thank you for having me happy to
have you here Monica tell me something um this term an experimental
epistemologist is going to throw people it certainly threw me can you tell me what you mean by that term well if
you're going to work in artificial intelligence today when everything is focused on these large language models and
neural networks and deep learning and the stuff that's been going on since 2012 um you have to realize that that
basically we are talking about something that hasn't happened before something we haven't dealt with before in order to
understand that fully we have to go below science science we talk
about the first principles of science as being very important to not be misled by
various other factors but in AI that's not enough you have to go to the zeroth
level the zeroth level of science would be the level of epistemology because science is built on top of
epistemology for the folks who are listening epistemology refers to our
understanding of knowledge or how the mind perceives reality correct and that's the key issue is that we are
trying to create our own minds and they have to have a perception of reality that's useful and we
have to know how to do that and if I got you right you mentioned the zeroth level so uh we hear people like Elon Musk
frequently talks about first principles right this is considered a a sound way to do engineering and problem solving
you start with first principles and expand from there but you're saying there's a level below first principles
and that's the level the zeroth level yes and in order to understand why that's
the limits of Science and I have uh discussed that at length in several videos but I can give a flavor of that
by a simple example suppose you're back in grammar
school and uh the teacher gives us a story problem she says Holly is four
years old she has three boxes of chocolates and there are six chocolates in each box how many
chocolates does Holly have and somebody will pipe up and say 18 and I typically
say that's almost right it's actually 18 chocolates and if you're an epistemologist you understand
why you have to add those chocolates to the answer um now those are easy questions I mean
the answer 18 chocolates is basically what science would give us given this
example the question that science cannot answer is what is chocolate no more than that suppose you're
trying to solve this problem and you come up with 18 how did you know you had to multiply how do you know that
Holly's age four didn't factor in how do you know that you had to multiply
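The point about the units belonging to the answer can be sketched in a few lines of code (this tiny Quantity class is purely illustrative, not anything from Anderson's work):

```python
from dataclasses import dataclass

@dataclass
class Quantity:
    """A number that remembers what it counts."""
    value: int
    unit: str

    def __str__(self):
        return f"{self.value} {self.unit}"

def chocolates_total(boxes: int, per_box: Quantity) -> Quantity:
    # A bare calculator answers "18"; keeping the unit attached makes the
    # answer "18 chocolates". Holly's age (4) never enters the computation,
    # and knowing to leave it out is the epistemic part of the problem.
    return Quantity(boxes * per_box.value, per_box.unit)

print(chocolates_total(3, Quantity(6, "chocolates")))  # 18 chocolates
```

The model decides what is relevant (boxes and chocolates per box) and what is not (the age), which is exactly the epistemic reduction being described.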
so knowing what's important or what's relevant is key to understanding the domain that you're in and
so so so now help me relate that to AI because one of the things you've described in your writing is the
reductive approach is that the scientist or the person who's doing the inventing decides what's important and they
discard all the rest right but you're suggesting now that the approach with AI is to give it everything and let the AI
determine what's important exactly right that is exactly how it works so we
are basically what we're doing is we're delegating the understanding of the problem itself the understanding of the problem domain
problem at hand Etc to the AI to the point where the researchers themselves
don't have to understand the problem and this is obviously what we want
AI to do we want AI to think for us so that we don't have to that's the whole point of having an artificial
intelligence and this is the domain called uh deep learning right that's what we're referring to it's
not all of it this is everything that basically provides the service that we expect artificial intelligence to do the
my technical definition of artificial intelligence is any system that's capable of autonomous epistemic
reduction A system that can on its own determine what matters and learn that
and not learn all the noise for instance but in your writing you say that there's a kind of a fundamental conflict in the
way we go about science today that leads to kind of cognitive dissonance in fact you you argue that the whole field of
science is conflicted this way that it has a fundamental cognitive dissonance and that's between this reductive
approach and a holistic approach the holistic approach being you know here is the whole mess here's the
whole complex scenario hey AI you go figure it out and you're saying that
that approach is kind of radical right the way scientists typically would approach a problem like that is they'd break it down into constituent parts and then solve each
piece and hopefully the parts fit together breaking it down they have to understand
the problem there's no way around it before you can reason about anything before you can break it down in
a reductionist fashion you have to understand it so the understanding is the more fundamental
principle and the understanding it turns out can't be done
scientifically you mean the understanding of human thinking the thought process understanding of any problem domain so for for AI we talk a
lot about um you know guide rails you know in terms of things like ethical considerations and so forth and we know
that the quality of the data models that go into um the llms and so forth are
critical for establishing these types of guide rails I is that part of the
solution to the problem or are llms effectively you know not able to grasp
some of these fundamentals or there's no structure for that that sort of fundamental um epistemological
understanding uh this is a um this is a question that has more relevance in the
short term uh than in the long term uh we don't know what it's going to look like even five years from now but uh we
can definitely say that okay so in my post
about AI Alignment is Trivial I basically tried to explain that
um skills and behaviors and wants and
needs and all of these things are learnable we have already seen chat GPT 3.5 knows English at the level of a
college uh educated person and has very little knowledge of arithmetic
can't even do simple arithmetic and in humans that kind of a discrepancy is very unusual so people who look at the
earliest llms and their behaviors they are basically saying that uh this is very strange this has got to be
wrong but if we accept that these skills
such as arithmetic and English are very separable we can look
at what else the system has in the way of skills and we notice that all behaviors are just skills you can learn
to snowboard you can learn Finnish language there's all kinds of stuff that we can learn or not and then we look at
Behavior as such and we understand that behaviors such as greed and power
hungriness and deceit and so on all these behaviors they are learned and
we learn them as humans we learn them partly through our evolutionary history because we have a limbic system we have a
lizard brain and the lizard brain and limbic system those are what keep us competing those are what keep us
power hungry those are what keep us trying to climb up and be on
top and this we don't have to put in our AIS and right right so we can have uh
you know because the AIS can be for the common good rather than individual um individual success it's
more about evolutionary theory Brett the idea is that AIs didn't evolve uh the way humans evolved over
Millennia they are not a product of evolution they're a product of intelligent design intelligent design in my brain and
others in the minds of scientists so the intelligent designer can choose not to include behaviors uh you know
competitive behaviors and dominating behaviors and so forth so you're saying that those arose through a
Darwinian uh process of natural selection in humans yep and so when we worry about things like an AI taking
over the world or ai's killing us you hear people speculate about this all the time it's kind of a common science
fiction scenario but in your view that's unlikely to happen unless we intentionally design that behavior or
that motivation into the AI exactly that is basically an
anthropocentric view to believe that our AIs would be like ourselves now
there are many ways in which AIs are like we are because it's inherent
in uh how understanding works and so on but there's also ways in which we can
make them completely different because we have complete control over what behaviors we want specifically I should
like to mention that um the world is a dirty place especially if you feed it the internet as the main corpus to understand
from it's going to see a lot of the nasty behaviors and greed and other stuff on the web and bias and
discrimination and all of that stuff but we can't take that out of the Corpus it
has to be there because if we take it out then we're training the system on a
Pollyanna model of the world and that's not going to work when it actually gets down to trying to solve some real problems
the problems are going to be in the dirty world so we have to give it a complete view of the world everything
holistic Etc we have to give it everything we have and then after the fact we have to give it behaviors in a
separate path and that's already happening open AI is doing the behavior in the reinforcement learning with
human feedback process um and other things like that including supervised learning based on previous RLHF sessions
and um that is the way you get behaviors in there and we have full freedom to put in politeness and helpfulness and other
things and avoid greed and power hungriness and take-over-the-world kind of things so these are the
guardrails that Brett was just referring to a minute ago yeah you know it it it always strikes me when we have this sort
of conversation about the theoretical um uh formation of Consciousness or
intelligence at an AI level we always put human nature and human values upon
AI but we already have AI you know in some areas um you know divergent from
human logic you know so I've always felt that um you know when we talk about
for example AI is taking over the world and and uh you know enslaving mankind that is a very human type of value
system right rather than yeah we're saying a lot about ourselves when we talk about that we're revealing our
worst motivations right you know but this is a different type of intelligence and there's no guarantee that it's
going to follow that um path especially if we can come up with the
right guardrails so um you know I guess philosophically where do you sit on that Monica well for better or worse
they are going to be more humanlike than we're used to and they're going to be less humanlike than we expect or at
least hope for the places where they're going to be humanlike is that they're
going to make humanlike mistakes um we remember expert systems from the 90s and
80s and they were very brittle basically you hard-coded the competence in some domain such as approving house
loans or something like that but if you start talking football with them they will completely fall flat on their face
and the mistakes that these expert systems made were expensive spectacular and hard to fix and uh
today we have basically machines that make uh many mistakes but they're humanlike mistakes they're almost
correct we can often immediately correct them if we find them and so on and they're cheap to fix so this is a
we're getting closer the more we appreciate the human part of AI the more
we have to tolerate the human uh tendency to I mean there there's pros
and cons of making AI humanlike right I mean um you know like people are talking
about these hallucinations like we can't trust AI because they they hallucinate or they lie and it's like well humans do
that all the time you know all you have to do is look at the CNN Town Hall the other day right um but um so you know
that's not necessarily uh something that should preclude AI from from existing
cuz it's humanlike but we can eliminate some of those things we can try to make
uh you know as you say with those guide rails make AI more consistent um you know as a class but again um does this
feed back into the role of you know what we want AI to play in society to some extent oh that's like
five questions and I have to no I know I know I was just riffing so the uh
the places where we cannot avoid them being very similar to us to our frustration is basically exactly in the
level or rather explained by epistemology um the most important
things that I have basically been talking about since 2006 in my outreach has been basically that um we
can look at if you will three laws of AI epistemology and the first one says that omniscience is unavailable you
can't be aware of everything that's happening in the world if you want to predict Apple's stock price a week
from now you don't know what happens in their boardroom or in Taiwan or whatever and uh the second one is that all
corpora are incomplete no matter what you try to teach your artificial intelligence you're not going to have a
complete corpus you're not going to cover all the corner cases Etc so the system will be ignorant of
everything that you didn't tell it and in humans we don't notice that because we have an enormous corpus going
back to birth everything we've seen everything we've heard since birth is part of our corpus and AIs have limited
corpora we think they're gigantic but they're puny compared to humans and the big reason that they lie to us is
because they know much less they're much more ignorant than we are if it's not in the corpus they don't know it um and so we are
not used to that knowledge isn't binary on or off and so a lot of people
think the AI lie to us more than they do because they like to push the boundaries and it's like I like to say that a lot
of this noise we've heard about chat GPT three and 3.5 it's lying to you is
basically like adults tricking underage kids into saying stupid stuff so
they can right right right so yeah that's a good illustration you have to work hard to get Bing or to get chat GPT
to hallucinate it's not something that's going to naturally arise in a normal interaction uh although to Brett's
point you know Microsoft has now put um strict limits on what you can do with the GPT in Bing for that reason
right that is the trend I looked at the user license agreement for Google's Bard and it basically says you can't do
anything nasty with this and that's how they're guarding themselves now they're putting it in the EULA but you think
that's a temporary solution oh yeah that will get better because they learn more and and uh we basically we either have
to get better algorithms or bigger machines or better corpora we don't know which dimension we're going to improve this in
and I'm in the better algorithms category uh but they will get much better very quickly and because they're
learning they they and you can see that too when you use chat GPT if you if it gives you an
erroneous answer you can just say please check your answer for accuracy you don't have to tell it what mistake it made you
just ask it to check its answer and it will and you can see it learning on the spot which I think is kind of
impressive you know I've never seen a machine learn on its own it's not
learning at that point really it's just modifying its history learning is much more than that well we call it
reinforcement learning right you know because we're reinforcing the correct results yeah I'm seeing learning on a
daily basis I'm a little jaded to that after 20 years you know um just before we get into the quickfire round which we do
before the break um you know you started on this in 2001 which was fairly early in the deep learning um cycle absolutely
you know how how has how has the practice of deep learning changed over the last 20 years or so
well it's gotten a lot better recently but that's because there's tens of thousands of researchers working on it I
mean Hinton and so on it's fuzzy who came up with what but from the end of the 90s to
2006 multiple people were working on getting neural networks going including myself and I was the
only one as far as I know that was using discrete neurons um but Hinton and others
they basically had it licked by 2006 and then in 2012 it swept the world and now we have the llms which is
basically what I've been aiming at since the start but I am calling mine ssms because they're small syntax models and
the difference is that they are a million times cheaper to learn and take a million times less energy to
learn but they run equally fast and we don't know what their capabilities are yet they are definitely good enough for
classification which is 90% of what you want to do with natural language in the industry pattern recognition yeah yes
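For readers who want a concrete picture of the kind of classification being described here, a deliberately small bag-of-words model can be sketched as a naive Bayes classifier in plain Python (a generic illustration, not Anderson's SSM implementation):

```python
import math
from collections import Counter, defaultdict

class TinyClassifier:
    """Multinomial naive Bayes over word counts: a deliberately small
    text model that needs no neural network at all."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> training examples

    def train(self, text, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        vocab = {w for counts in self.word_counts.values() for w in counts}

        def log_score(label):
            total = sum(self.word_counts[label].values())
            score = math.log(self.label_counts[label])
            for w in words:
                # Laplace smoothing: unseen words must not zero out a label
                score += math.log((self.word_counts[label][w] + 1) /
                                  (total + len(vocab)))
            return score

        return max(self.label_counts, key=log_score)

clf = TinyClassifier()
clf.train("win free money now", "spam")
clf.train("free prize claim now", "spam")
clf.train("meeting at noon tomorrow", "ham")
clf.train("lunch tomorrow at noon", "ham")
print(clf.classify("claim your free money"))  # spam
```

Spam filtering and topic routing in news feeds are exactly this shape of problem, which is why small models remain good enough for so much industrial natural-language work.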
classification of messages as spam uh by topic in news feeds Etc but uh what chat
GPT and the GPT family are capable of is basically completion and dialogue and those are more complicated API calls
that I have not yet licked so I'm working on it but it might be far away and it may not even be
possible let's pick this conversation up in the second half uh what we like to do now is we like to get to know you like
to help our audience understand how you got started and how you arrived at thinking about the future um so we
typically ask a few questions here for very short answers uh just a quick series of questions and um and Brett why
don't you go ahead you like to do this and what was the first science fiction
you remember being exposed to on TV or in books uh Stranger in a Strange Land by
Heinlein no take that back it was actually Have Space Suit Will Travel very cool turns
out that that's also the first book that we train our AI on interesting well that
you know that was also going to be one of the questions you know is what was your first exposure to
AI oh I was teaching AI
to college students in the 1980s okay um
what technology has most changed humanity today the biggest change ever is
basically the invention of reductionism that's my opinion that's the biggest invention we've ever made it beats
fire but recently basically having the ability to uh perform epistemic
reduction I mean yeah anything since 2006 that has been done with neural networks is
very different and very very important interesting name a futurist a researcher
or entrepreneur that has influenced you personally and
why let's see William Calvin professor emeritus at the University of
Washington uh has written a handful of books about uh neural
Darwinism and uh is there any story that you know of in popular fiction regarding
artificial intelligence that represents the type of future you think that you
hope for absolutely the first half of the movie Her good good that's a good uh good
answer all right great well that's it for the first half you're listening to the futurist we'll be right back after
these words from our sponsors provoke media is proud to sponsor
produce and support the futurist podcast provoke FM is a global podcast Network
and content creation company with the world's leading fintech podcast and radio show Breaking Banks and of course
its spin-off podcasts breaking Banks Europe breaking Banks Asia Pacific and the fintech 5 but we also produce the
official Finovate podcast Tech on Reg Emerge Everywhere the podcast of the
Financial Health Network and NextGen Banker for information about all our podcasts
go to provoke FM or check out breaking Banks the world's number one fintech
podcast and radio [Music] show and we're back in the futurists
with our guest Monica Anderson Monica is an artificial intelligence researcher and
has been researching large language models since 2001 but she describes herself as an experimental
epistemologist and I want to dig into that a little bit so one of the one of
the things that epistemology is concerned with is how do we know things how do we understand how does the mind
understand the world um and you describe a fundamental cognitive dissonance that
arises because our scientific methodology is reductionist and you
posit something that's an alternative to it holism can you define these two terms and help our audience understand what
you mean uh about reductionism and holism well those terms go back a long
ways um and they are tainted by various bad connotations like reductionism
is often blamed for bad science and holism is often blamed for fuzziness
and uh getting things wrong and jumping to conclusions well first of all
the definitions of holism and reductionism that we used for a long time are
weak and in my opinion I looked at that in the 2005
time frame and they are missing something important and the thing that they are missing if you go to the Stanford Encyclopedia of Philosophy and
look at reductionism there's all kinds of stuff that is all very similar and
basically in my view I said reductionism is the use of models and
models are scientific models equations hypotheses theories Etc even superstitions are models they're
simplifications of reality that allow us to compute on them and uh holism is simply the
avoidance of such models how can you get anything done if you're not using models says the reductionist and it turns out
that it's the other way around almost everything we do on a daily basis is not
done scientifically if you're making breakfast you're not making hypotheses about what the breakfast is like you're
just doing it so the world breaks down into basically if you look at all the
problem solving methods in the world they go into two categories they go into categories where we use models and they
are basically the scientifically complicated things and then there are the things where we don't bother or
cannot use models and then we have to solve them directly in the problem domain so for instance in weather
reporting if you want to see the weather if you want to know if it's going to rain you can watch the weather report which
gathers scientific information from multiple satellites and other sensors or you can just open a window and smell if
it smells like rain and that's solving the problem in the problem domain and everything we do on a daily basis is
pretty much done unscientifically even for scientists we're just basically doing what worked last time we are
remembering what works and using that and the only requirement for being able to do that
is to have a good pattern matcher that tells you when two situations are the same so we can apply the same solution to the new
one perhaps with modifications one of the distinctions you make is the distinction between um complicated problems that can
be solved with this modelbased approach yes and complex problems where the model simply will fail because the
model won't replicate the sheer complexity so talk to me a little bit about the distinction between
complicated problems and complex problems yes complicated problems are
basically the kind of stuff we do with science we break down the trip to the moon into smaller and smaller parts
until we have a component we can fabricate but that approach didn't work for instance with protein folding
it doesn't work with language understanding it doesn't work with the game of Go these uh require other kinds of
solutions and uh an example of that is for
instance they may contain things that Professor Kenneth Stanley talks about as things that you cannot find by looking
for them you have to basically look at a whole bunch of stuff and make little tiny um not deductions
but little tiny uh correlations between things basically intelligence is largely a matter of correlation
Discovery and once you find those correlations you tie them together and they form webs and these webs in some
sense have weights in the case of deep learning they have actual weights and in my case they do other things um
but what results from this thing where you basically solve a number of problems you don't even know if you have
to use the solutions at the top end is a solution that actually works in most cases that's the way
the brain works and that's how we have to make our artificial intelligence work also
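The correlation-discovery idea can be illustrated with a toy sketch: score word pairs by pointwise mutual information, i.e. how much more often they co-occur than chance would predict. This is a generic stand-in for the weighted webs described here, not Anderson's actual method:

```python
import math
from collections import Counter
from itertools import combinations

def correlation_web(sentences):
    """Map word pairs to a PMI-style weight: positive when the pair
    co-occurs more often than independence would predict."""
    word_n = Counter()
    pair_n = Counter()
    for s in sentences:
        words = sorted(set(s.lower().split()))
        word_n.update(words)
        pair_n.update(combinations(words, 2))  # pairs in sorted order
    n = len(sentences)
    return {
        (a, b): math.log((count / n) / ((word_n[a] / n) * (word_n[b] / n)))
        for (a, b), count in pair_n.items()
    }

web = correlation_web([
    "rain makes the street wet",
    "rain makes my umbrella wet",
    "sun makes the street dry",
])
print(round(web[("rain", "wet")], 2))  # rain and wet co-occur above chance
```

Even this crude web captures that rain goes with wet while rain and dry never co-occur at all; scaled up by many orders of magnitude, discovered correlations like these are what the weights in a deep network tie together.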
let me see if I've got this um you know as as an illustration and I'll you know
I'm trying to come to grips with this is um you know we we have the James Webb
Space Telescope right now and it's discovering things like very large galaxies near the uh boundary of
um you know the known universe from The Big Bang that seem to challenge The Big Bang Theory itself so um going
back and revisiting Einstein's theory of relativity and whether the Big Bang is
is real um as sort of the basis of our scientific thinking of how the universe
uh formed is tougher than using that as the baseline for assumptions but
where does that get us in terms of sort of path to scientific discovery I have
to disappoint you by saying that I don't think AI is going to be any help in that regard well it will help in analyzing
the data but for instance getting the James Webb Space Telescope up there was almost entirely a reductionist feat it was
engineering through and through for decades and um the results uh we have to
understand the results but I have no comment on that I can't comment it's not my area okay so you're not rejecting uh
this reductive approach you're observing that it does work it solves some problems it solves certain problem categories ah yes but there's
another category of problem where reductionism simply can't solve the problem where you have to tackle the
whole of it it's irreducibly complex exactly and I have a video
about that called bizarre systems uh which is basically it turns out that the remaining hard problems in the world um
which used to include language understanding cannot be solved with reductionist methods um and what's
happening now is basically in the world at large I mean is that science is going to lose its prime
position as being the way we tackle the most complicated problems because we can
now unload them to artificial intelligences and let them solve them for us and that means that we are basically
saying okay this reductionist stuff it worked fine for 400 years but we now are going to fix the problems that we
couldn't solve using these hacks of science so to speak we have to solve them the way we solve all other problems
in our daily environment and and that's going to be holistically and there's two ways to do that we can either write
special programs that work holistically that solve special problems for instance we could create something that tries to
understand the stock market using holistic principles or we can just build AIs that are general enough that
we can basically let them solve the problems as they come along and either way we have abdicated our own
understanding of the problem and I think that's a good thing now you mentioned the stock market as one such problem as
an example but what kinds of problems can science currently not solve are you talking about things like climate change
is it simply too complex and you believe that an AI might be
able to address that right yeah there's there's a lot of domains like that like I said language how the brain works uh
drug interactions in the human body uh cellular biology uh the stock market the
global economy um immigration movements yeah those are yeah those go
into the political landscape that is very complicated and in fact the trick of governing a country or the world
is also one of these holistic problems I wish that our politicians were holists and in some
sense a lot of them are because if you have a legal education background you are smack in the middle of
epistemic reduction because the point of a jury is to do the epistemic reduction from the complicated crime situation to
determining yes or no is the defendant guilty or not and that's basically doing an epistemic reduction of
something very complicated and we wish that we had AIS that could do that because they could be more impartial
than juries right I mean the legal system is is one of those Prime examples
of where AI is going to um be be used obviously in terms of Justice well we
had a really rich debate about that topic just a week back and listen to that yeah with the legal futurist and uh
and his perspective was probably not uh and the reasoning is that we
place primacy on human judgement you know so we really have we still have respect for
judges Monica what's your take on that Brett just articulated a view that I think makes sense that someday
you'll have push-button justice you can simply go to AI and get real justice as opposed to the rough justice we get
in the current system this is uh going to happen the way everything else happens here um as things get better
we're going to have more and more cries for using them compulsorily for instance you go to the doctor today the AMA might say
the doctor cannot use AI and a month later they say okay now you can use it and six months later they say now you
have to use it and then they say if you don't use this we're going to pull your license and so I see this
kind of progression happening in many domains where we're more and more going to insist on AIs because they have a better
bedding average than the humans in the same situations people have been saying that about Vehicles though for years
they've been saying that we'll do this with autonomous vehicles that someday they'll be so good that you won't be allowed to drive that um you know human
drivers will be fired from driving but we're nowhere near that today we're nowhere near that I made that prediction
in my book let me point out that six months ago we were nowhere near that either and we are a lot closer today I mean the stuff is getting better it's a matter of machine size corpus size computation algorithm power and a few other things like that but it's all going in the same direction and we all know roughly what the quality requirements are and we'll get there it'll just take some time and the reason
people are upset over AI making so many mistakes is that they are given the
first opportunity ever to be witnessing the sausage being made for the first time right and they are shocked at how complicated and how error prone it is and come on have you seen what happens when you're building airplanes from scratch yeah and you know that's a good example cuz airplanes have autopilots that do 90% of the flying that didn't used to be the case right well it's been the case for over a decade that if the weather is bad the pilot is not allowed to land himself yeah yeah yeah
it's interesting though I think people would hesitate to get on a plane that was completely autonomous I don't think most would so I mean I think the way it's going to go is you will have the autonomy portion of it you know actually Emirates is talking about this right now they're going to move to single pilot operation during the flight using artificial intelligence so what that means is instead of carrying a secondary
crew on the flight one of the pilots will be able to go into the crew sleep time there'll be a single pilot running
the flight with the AI now this is not a significant increase in Risk because
this is actually what happens mostly today but those models are getting better so it's just the takeoff and
Landing where you know you have the highest potential for an issue where you
need the two human pilots so it definitely will reduce the need for having more flight crew on the aircraft and then once we get comfortable with that then it's a step to single pilot operations for the entire flight and I think your main point there is we're already experiencing that we just don't know you know we used to have five flight crew we used to have a radio operator we used to have a navigator we used to have two pilots and a backup right that's how we used to do it years ago so on that spectrum we've already made
significant progress so understood let me let me redirect to Monica here for a sec so Monica um this topic has come up
several times on our show where we talk about autonomous systems for transportation and there's a little bit
of a debate um some people say that the intelligence will be in the cloud um because we have an unlimited capacity in
the cloud other people say nope that's way too risky because if there's a disconnect in the telecoms network that system has to run on its own so the intelligence has to be on the car or on the airplane what's your
perspective about that are we going to be able to run these gigantic language models locally in a car or in an airplane or will we not be relying on large language models in the future oh first of all we are not
running LLMs in our cars for instance in Tesla etc it's specially made software that has nothing to do with the LLMs that they had to train basically on their own the main reason we started with AI in the cars is because we needed the vision we needed to understand what was around us is that a fire hydrant is that a pedestrian and so that was solved with neural networks
there's no other way to do that and that is a holistic solution if you will a holistic flavored solution in the sense that the cars are learning they're being trained they learned the hard way basically how to see how to understand vision and starting with that they had a second phase which was reductionist which was regular programming that took whatever the vision part had and basically analyzed that and using programs they decided what the rules of the traffic should be how the car should behave and over the years they have removed more and more of that and now almost everything is one single neural network or multiple neural networks communicating with each other
when you say multiple neural networks so the Tesla cars are connected and famously they learn from each other right this is one of the powers of Tesla
that there's still one thing to remember here a lot of times when you think about AI doing
something for you what you're looking at is doing something in the inference phase it is not learning then Teslas don't learn as you're driving they only learn overnight when they get a new version
and so what happens is that when you're driving your Tesla you're encountering a dangerous situation hitting the brake or whatever the car records that and in the evening when you park at home it uploads it to the cloud to Tesla and they can analyze that video and other things like when you pushed the brake pedal and whatever oh so the connectivity is only it's not critical while you're driving most of what you need is on board and then it updates yes I don't know if they use a continuous connection I know that they connect when you get home okay but
it is true that we're moving into a phase uh let's say post large language models we're moving into a phase where
large language models may be a thing of the past that's what the CEO of OpenAI recently said yes I agree with him for very strange reasons which is that I have models that are much much smaller myself like I said they can learn a million times faster than the model that OpenAI is using and there are already models that can run on an iPhone this is kind of amazing like
this all came out in the last month or so again there's two different situations the one situation is that the system is learning and that requires the cloud and hundreds of computers for months if you're going to make an LLM that basically means that they're running it for some amount of time I don't know how long it takes but it might take weeks then what the thing does is it
emits a Model A large language model which is a significantly smaller chunk
and that small chunk you can put in a computer on the cloud and then that will become your GPT server and it is what
you're talking to when you're having a GPT interaction but these things are frozen so are my own systems what I serve in the cloud is also frozen so basically you learn at the factory and then you freeze the intelligence you shrink it you throw away everything that has to do with learning and you just keep the model and that model is what you're serving either in the cloud or in your iPhone or in your car much smaller
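The learn-then-freeze pattern described here can be sketched generically. This is a minimal Python illustration, not anything from Monica's actual systems or OpenAI's pipeline: the tiny corpus and the unigram model are invented for the example, and the GPU arithmetic uses the 120 GB and 24 GB figures that come up in the conversation.

```python
import math

# A toy sketch of "learn at the factory, then freeze the intelligence":
# the training phase builds mutable statistics, and freezing throws away
# everything to do with learning, keeping only a read-only model to serve.
# The corpus and the unigram model here are invented for illustration.

def train(corpus):
    """Learning phase: accumulate mutable word counts."""
    counts = {}
    for word in corpus.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def freeze(counts):
    """Discard the learning machinery; emit an immutable model to serve."""
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

model = freeze(train("the cat sat on the mat"))
print(round(model["the"], 3))  # 0.333 since "the" is 2 of 6 words

# Serving-size arithmetic: a 120 GB model sharded across GPUs with
# 24 GB of memory each needs five of them.
gpus_needed = math.ceil(120 / 24)
print(gpus_needed)  # 5
```

The point of the sketch is the separation of phases: `train` needs mutable state and could be arbitrarily expensive, while the frozen dictionary is all the inference server, phone, or car ever sees.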
and for GPT specifically for GPT 3.5 it had a 120 gigabyte memory model it fit in five large GPU cards of 24 gig each that's still small compared to what it took to
learn you sent me an intriguing message about homeschooled AIs what did you mean by that I mean that our computers are getting
stronger um our laptops our home computers are getting stronger at some point they will reach a point where the
simplest AI models can be learned at home and I myself have that because like I said my AI models can be taught on much smaller computers they still require a lot of memory but I
can get started on a laptop and in the cloud on my main published website experimental-epistemology.ai in chapter eight I demonstrate a system can learn English in five minutes on a laptop and that's very impressive that's very impressive and then chapter nine talks about how you can access that in the cloud it is not as good as the large models these are small syntax models but they are good for
classification but if we're going to be doing agency based AI an agent AI that acts on our behalf it has to learn a set of behaviors about how we would respond in those instances that's a much bigger deal I don't do that at this point I'm basically trying to create a replacement for Transformers and large language models what will the impact for consumers be what would the impact be on a regular user of a computer or a person who's
using their smartphone how will they how will their experience change uh well basically it depends on
who you are what you want to do with your AI so a lot of people will just be happy with whatever AI is available and
I've been sketching a future where there are thousands of AIs that have
various different biases and capabilities and they all have subscription prices with them and you go
to your iPhone settings and you decide which AI to use this week and you basically unsubscribe to the previous
one and subscribe to this one and it will come with new capabilities and features and we're gonna start changing those
on a regular basis because there's going to be so many of them you know I have to point out that right now the business model for generative AI apps seems to be take the money and run because they're charging you $200 up front for a year's access and it's quite clear the logic behind that decision
that pricing model is they know darn well in a year there'll be a much better AI from somebody else and they're like
let's grab the money now sell you a year of service you won't be using it a year
from now you'll be using something else uh this is what I'm observing is there's just been a proliferation of new variant
AIs and now with LLaMA leaking out into the public domain you have this crazy frenzy of open source activity happening it's almost impossible to keep up with
the new announcements that are coming out give us a little forecast like what can we expect by 2024 just a year into
the future what will this landscape look like oh it's hard to tell because it all depends on how popular my stuff gets and
it's not popular yet and it doesn't quite work the way I want it yet but yes if things go according to plan and it doesn't have to involve me by the way there's other people who are working on similar stuff but we
can see AI systems that can learn much more effectively than what uh current
Transformers can and I have a serious theory about how that is possible and it
could be interesting to touch upon that if you have a few minutes this is the part of the show where you should get expansive let's get out there so this is controversial and a lot of it is conjecture on my part so take this with a grain of salt but what happened in AI was that basically in 2006 and
onward we got these systems that understood vision handwritten digit recognition was what we started with and it went all the way up to these AI art systems that can manipulate stuff at will but in the beginning we started with vision or rather Hinton and friends started with vision and it was vision-derived technology that basically won in 2012 then they wanted to get into text
and as it happens they had something already they had word2vec which was this idea that you could create a semantic space of word vectors and you could say basically king minus man
plus woman equals Queen and you could do this kind of arithmetic in this semantic space and everybody thought it was really cool and so what they did was
they used these word vectors to convert text to images
so all the Deep learning that deals with text starting after 2012 or so is
basically doing it by converting the text to an image and then processing that and this conversion from text to
image they felt it was necessary because they wanted the semantics so deep learning and Transformers they get the
majority of their semantics from the word vectors so they start with the semantics now if you ask yourself if you
wanted to analyze language if you wanted to learn language what should you learn first syntax or semantics huh and it
turned out deep learning learned the semantics first they learn it before they even start learning syntax and
because they're doing it that way they have to schlep the information about semantics around throughout the learning of syntax and this is why they need GPUs because it became so expensive they turned a one-dimensional problem into a two-dimensional problem and it became prohibitively expensive so this is why you're focused on syntax right you start with syntax you can parse it at a millionth of the cost a millionth of the energy and the resulting small syntax model that you get is still powerful enough to
do classification and it may be good enough to do dialogue we don't
know what will it take to learn what will it take to prove that I would need a much larger machine at the moment I'm doing all my research on a 128 gigabyte Macintosh from 2013 so I've been working for nine years on a pretty underpowered computer to do this and I was forced to basically learn how to do it with little resources wow but if you wanted to learn a rich enough English to compete with ChatGPT I would have to basically run the experiment and the experiment itself requires a machine that costs $70,000 and I don't have that kind of funding so I can't
do this but I would love to have somebody get me access to a machine or
give me a machine with four terabytes of RAM and 128 cores Monica you need a rich benefactor no I need a mildly rich benefactor because I don't need more than that $70,000 to prove my point a patron but your forecast is that we'll be able to run
the syntax models with one million times less power
consumption uh what kind of microprocessors will you need will you still require a bank of Nvidia gpus or
are you saying that that is currently required because they're dragging along so what's the what's the configuration
of the $70,000 machine it's 128 cores 256 threads four terabytes of RAM it's a rack-mount server from Super Micro
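Monica's small syntax models are not public, but the general idea she describes, classifying text from surface syntax alone with no semantic word vectors, can be sketched with character trigrams. This is a generic illustration of syntax-only classification, not her actual method; the training sentences and labels are invented for the example.

```python
# A minimal sketch of classification using only surface syntax
# (character trigrams), with no semantic embeddings at all. This is a
# generic illustration, not Monica Anderson's unpublished method.
from collections import Counter

def trigrams(text):
    """Character trigrams of the lowercased, space-padded text."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def train(examples):
    """examples: list of (text, label). Accumulate trigram counts per label."""
    profiles = {}
    for text, label in examples:
        profiles.setdefault(label, Counter()).update(trigrams(text))
    return profiles

def classify(profiles, text):
    """Pick the label whose trigram profile overlaps the input most."""
    grams = trigrams(text)
    def score(profile):
        return sum(min(n, profile[g]) for g, n in grams.items())
    return max(profiles, key=lambda label: score(profiles[label]))

profiles = train([
    ("the quick brown fox jumps", "english"),
    ("a lazy dog sleeps all day", "english"),
    ("le renard brun saute vite", "french"),
    ("un chien paresseux dort",   "french"),
])
print(classify(profiles, "the brown dog jumps"))  # english
```

Counting short character sequences is cheap, one pass over the text with no matrix multiplications, which is the kind of cost gap the conversation contrasts with embedding-based deep learning; whether such models can scale to dialogue is, as Monica says, an open question.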
okay right on well any other forecast for our audience that you'd like to share like what's what's AI gonna be
like in 2050 is it going to be Her 2050 oh we have no way of saying what the AI is going to look like even in 2030
sorry I thought I'd ask no I think the model of AI that I like to see going into the future is basically The Young Lady's Illustrated Primer anybody not familiar with Neal Stephenson yeah Neal Stephenson so basically I imagine AI being in your iPhone or your iPad whatever and you put your AirPods in and you talk to it all day and it talks back and makes sense and in April it says hey file your taxes and so on so it
does everything does a lot of the stuff that we do on a daily basis uh uh it
will do it for us and it will basically have knowledge which I call a useful consensus reality corpus which is basically everything a US citizen needs to know to be productive in society and we should have AIs that know all of that stuff and the cheapest implementation to help people at the bottom of the social ladder that have few skills and may not even speak English is to provide a phone number for them to call and they
can talk to their personal AI at any length any time of day for advice education and
friendship okay let me posit an alternative view there though that sounds like you're suggesting we're
going to foster a dangerous dependency on technology where people won't have agency to solve problems on their own
we'll grow so used to relying on this AI on demand or this personal assistant
isn't that the case already today you have a cell phone come on yeah exactly like how would you order a taxi
without an app today why would we go back you could stand on the street and do that but you know what a statement we already have the dependency
we're already there the nightmare scenario is already here yeah no it's going to be that everybody has an AI in their phone and some people pay more for theirs and therefore it has more capabilities and it could be that your boss will require that you have a good AI because otherwise why would they hire you wow well Monica it has been a great
pleasure having you on the show uh We've enjoyed your perspectives it's always a refreshing conversation thanks for
joining us on the futurist this week where can people find your work give us the URLs where they can find your
writing and the Zeroth Principles and so forth certainly my main publishing site is experimental-epistemology.ai and I also have a Substack called Zeroth Principles of AI and a meetup group and a YouTube channel with the same name
great well thank you for joining us this week it's always fun to catch up with you Brett it's good to see you back in
the in the saddle um folks thanks very much for listening to the show uh Brett and I enjoy this tremendously and we're
getting great feedback from people I want to give a thanks to uh Kevin hen who very patiently goes through this and
cleans up our audio and adds incredible music to it makes our show much much better much easier to listen to so
thank you Kevin thank you to Elizabeth Sance and all of the crew at Provoke Media that make the show possible we will be back next week with another futurist another person who's thinking hard about the future and until then we will see you in the
[Music] future well that's it for the futurists this week if you like the show we sure
hope you did please subscribe and share it with people in your community and don't forget to leave us a five-star
review that really helps other people find the show and you can ping us anytime on Instagram and Twitter at
futurist podcast for the folks that you'd like to see on the show or the questions that you'd like us to ask
thanks for joining and as always we'll see you in the future [Music]
