
Why Simulating Reality Is the Key to Advancing Artificial Intelligence

In this episode, we’re joined once again by Christopher Nuland, technical marketing manager at Red Hat, whose globe-trotting schedule rivals the complexity of a Kubernetes deployment. Christopher sits down with hosts Bailey and Frank La Vigne to explore the frontier of artificial intelligence—from simulating reality and continuous learning models to debates around whether we really need humanoid robots to achieve superintelligence, or if a convincingly detailed simulation (think Grand Theft Auto, but for AI) might get us there first.

Christopher takes us on a whirlwind tour of Google DeepMind’s pioneering Alpha projects, the latest buzz around simulating experiences for AI, and the metaphysical rabbit hole of I, Robot and simulation theory. We dive into why the next big advancement in AI might not come from making models bigger, but from making them better at simulating the world around them. Along the way, we tackle timely topics in AI governance, security, and the ethics of continuous learning, with plenty of detours through pop culture, finance, and grassroots tech conferences.

If you’re curious about where the bleeding edge of AI meets science fiction, and how simulation could redefine the race for superintelligence, this episode is for you. Buckle up—because reality might just be the next thing AI learns to hack.

Time Stamps

00:00 Upcoming European and US Conferences

05:38 AI Optimization Plateau

08:43 Simulation’s Role in Spatial Awareness

10:00 Evolutionary Efficiency of Human Brains

16:30 “Robotics Laws and Contradictions”

17:32 AI, Paperclips, and Robot Ethics

22:18 Troubleshooting Insight Experience

25:16 Challenges in Training Deep Learning Models

27:15 Challenges in Continuous Model Training

32:04 AI Gateway for Specialized Requests

36:54 Open Source and Rapid Innovation

38:10 Industry-Specific AI Breakthroughs

43:28 Misrepresented R&D Success Rates

44:51 POC Challenges: Meaningful Versus Superficial

47:59 “Crypto’s Bumpy Crash”

52:59 AI: Beyond Models to Simulation

Transcript
Speaker:

Joining us again today on the Data Driven Podcast is Christopher Nuland,

Speaker:

technical marketing manager at Red Hat, conference veteran,

Speaker:

and a man whose travel itinerary is only slightly less complicated than

Speaker:

a Kubernetes deployment. Christopher brings a sharp, data

Speaker:

informed perspective on the future of AI, drawing from his research

Speaker:

into simulating reality, continuous learning models, and why

Speaker:

we may not need humanoid robots to build superintelligence. Just a

Speaker:

really convincing version of Grand Theft Auto. From Google

Speaker:

DeepMind's Alpha projects to the metaphysical quandaries of I,

Speaker:

Robot, Chris takes us on a tour through the bleeding edge of AI,

Speaker:

where machine learning meets science fiction and simulation might just be

Speaker:

the next reality. Hello and

Speaker:

welcome back to Frank's World TV. Streaming live

Speaker:

from both Boston and Baltimore. We're hitting the B

Speaker:

cities today. My name is Frank La Vigne. You can catch me

Speaker:

at the following URLs and with me today is

Speaker:

Christopher Nuland, my colleague at Red Hat, who is also

Speaker:

technical marketing manager here. And

Speaker:

you've actually not traveled around the world since we last

Speaker:

spoke. I think you've mostly stayed inside the

Speaker:

continental U.S. Yeah, it's been nice.

Speaker:

I think that's pretty typical of

Speaker:

late July, August, because Europe pretty much shuts down and then.

Speaker:

Right. The conference season in the United States kind of goes

Speaker:

away when people are doing summer vacations and I think we're just

Speaker:

now starting to see things pick up. I'll be in Europe for a

Speaker:

variety of events. So if you keep an eye on the

Speaker:

vLLM community and the vLLM meetups,

Speaker:

I have events in Paris, Frankfurt and

Speaker:

London in November that I'll be at. So if you

Speaker:

are in the,

Speaker:

in Europe, in one of those areas, definitely come. You know, it's one of

Speaker:

these events. I'll be there and then we'll also have some pretty cool speakers

Speaker:

there as well. So I have most, I have Europe, but then I

Speaker:

have some big conferences too, like KubeCon and the PyTorch Conference, coming

Speaker:

up. So if there's anyone on the stream in North America going to

Speaker:

those conferences, hit me up because I will be there. I'm

Speaker:

doing a couple of media events as well as a few

Speaker:

talks in the community sections for both of those.

Speaker:

So excited to be there, excited to be involved

Speaker:

and yeah, should be. Should be. Good. Cool. So

Speaker:

I. To your left and up

Speaker:

there should be a QR code that shows the vLLM meetup. So I'm going to make

Speaker:

sure that the QR code actually works. Good. Yep. Let's

Speaker:

see. Yep, it looks like it did work. Cool.

Speaker:

Not that I didn't have any faith in Restream's ability to do that. But

Speaker:

yeah, there's a lot of vLLM meetups. There's a lot of good,

Speaker:

good stuff going on here. There's one tonight

Speaker:

actually. I'm actually going to be leaving this stream to go. I got my

Speaker:

vLLM shirt on and I'm actually heading over to

Speaker:

a venue in Boston where we're doing a vLLM meetup actually here tonight, which

Speaker:

I'm really excited. Oh, very cool, very cool. It's nice to have one at home.

Speaker:

I have a very busy week with events, but it just worked out to have

Speaker:

all the events in Boston this week. So we also

Speaker:

have the DevConf conference this weekend that Boston University is

Speaker:

hosting with Red Hat. So that'll be a really good open source.

Speaker:

I like to say it's very grassroots, not very like

Speaker:

enterprise focused, but more like that kid getting started out of

Speaker:

college that's doing some cool stuff out of his dorm room. Those

Speaker:

are the kind of people that we typically get at these northeast dev

Speaker:

conferences that we put on. And that should be a good one too. Nice.

Speaker:

Well, it's always, I mean, you know, you know, the, the, the cliche of, you

Speaker:

know, the kid in his dorm room or her dorm room, right. Is going to

Speaker:

be Facebook or, you know, whatever, like, so it's, it's good to,

Speaker:

it's good to know those folks, good to get them in front of, you know,

Speaker:

Red Hat tooling and things like that and kind of, you know, the open source

Speaker:

community. I think it's,

Speaker:

that's cool. I wish, I wish I could have made it, but, you know, being

Speaker:

what it is, I'm actually speaking at an event at a university on Monday down

Speaker:

here in Fairfax, Virginia. So

Speaker:

that'll be cool.

Speaker:

So what, what

Speaker:

cool things are going on? Simulating reality.

Speaker:

Not that we're stuck in a simulation, which may be the

Speaker:

case, but tell me, tell me more

Speaker:

about this. So I've been doing a lot of research

Speaker:

the last few months. So on my

Speaker:

team, I think you and I actually

Speaker:

are probably the most experienced in the AI industry.

Speaker:

So both of us are doing a lot of research in

Speaker:

what's next, what's going on now, what's kind of the latest and greatest.

Speaker:

There's this interesting lull that we've had after

Speaker:

DeepSeek. I think DeepSeek was the last major

Speaker:

innovation we have seen. Obviously new

Speaker:

and improved AI, but all that's just been building on

Speaker:

existing things. The analogy I always like to use is it's really

Speaker:

about Formula One racing. You know where

Speaker:

sometimes when there's like an engine upgrade, it can be a massive change. It's usually

Speaker:

a massive change for all the teams across the board. And then you

Speaker:

can think of like mixture of experts and chain of thought that we

Speaker:

came up with. Big things that were in research papers last year that were applied to

Speaker:

DeepSeek R1 and

Speaker:

GPT-OSS. Those were like the major breakthroughs that

Speaker:

we saw, a big bump in capacity of these AI

Speaker:

models. And

Speaker:

since then it's been more of the 2% here,

Speaker:

3% there, optimizing what's already there. Now, if you're

Speaker:

familiar with racing and especially Formula One, that's actually what usually

Speaker:

sets the teams apart. It's 2, 3% there. How do you

Speaker:

optimize around those, those configurations? And

Speaker:

I think we're in this place where we're seeing

Speaker:

diminishing returns and I'm

Speaker:

doing a lot of research now to see what's that next moment that's going to

Speaker:

bump us up. And I think there's a few key areas.

Speaker:

One area that I'm hearing a lot about, and a lot of this is coming

Speaker:

out of the DeepMind lab at

Speaker:

Google and the new

Speaker:

superintelligence lab at Meta. Both

Speaker:

of these groups are starting to move away from large language

Speaker:

models. Not that they're stopping using them

Speaker:

completely, but they're looking at the LLM as a tool

Speaker:

to assist with superintelligence or the next

Speaker:

stage of models.

Speaker:

So when we put that into kind of context,

Speaker:

what, what would that next kind of phase look like? And a lot of people

Speaker:

at DeepMind especially are looking at this concept

Speaker:

of simulating our

Speaker:

reality. And how far do we simulate down?

Speaker:

There were some famous research papers that came out over the last 20 years

Speaker:

that specified that they

Speaker:

didn't think AI could become smarter than humans

Speaker:

until they experienced what humans could experience.

Speaker:

So this, this kind of goes into this almost like I, Robot kind of

Speaker:

land of thought. If people

Speaker:

aren't familiar with, you know, the books about that or, you know, the

Speaker:

popular movie, the one with Will Smith. Yeah, yeah,

Speaker:

yeah. And we talk a little bit more about that here in a moment.

Speaker:

But this idea that we need robotics for

Speaker:

AI to experience the world, to learn from our world.

Speaker:

Google DeepMind doesn't think that's the case. They think that we could

Speaker:

simulate that reality. And we're already seeing DeepMind do a lot of this

Speaker:

AlphaFold for proteins. They've got

Speaker:

the alpha chemistry, they've got alpha. I think it's called

Speaker:

alpha lean. They've got like a few of these different alpha

Speaker:

projects which are doing just that. Now, what's cool is.

Speaker:

And for alpha, I think it's Alpha lean. Let me just make sure

Speaker:

I got that terminology. Yeah, I mean, you're right though. Like, I mean this is,

Speaker:

you know, there's, there's a number of

Speaker:

models that were trained using Grand Theft Auto

Speaker:

or BeamNG. BeamNG is really cool if you like racing games,

Speaker:

right? You know, so like it's, it's also

Speaker:

minus a lot of the violence in GTA. But,

Speaker:

but you're right. Like, I mean, simulation,

Speaker:

you know, sometimes I think gets a bad rap, but

Speaker:

I think that there are definite advantages to that. And to your point, when

Speaker:

you talk about experiencing the world like a human does. I was giving a talk

Speaker:

and one of the questions I got after was

Speaker:

about, apparently this lady had worked at

Speaker:

one of the big auto manufacturers in the US and

Speaker:

there was a problem that they had was teaching the robots kind of

Speaker:

spatial awareness, right? And I kind of

Speaker:

really got me thinking like, you know, when you think about it from evolutionary terms,

Speaker:

right, like somatic awareness I think is the,

Speaker:

the five dollar word for it. But it's the idea that, you know, there's a

Speaker:

whole section of your brain that if you close your eyes, you can still touch

Speaker:

your nose, right? There's a whole thing like, because your, your brain, your arm,

Speaker:

they kind of know where they are in relation to one space. And

Speaker:

you know, I can't imagine that, you know, that that

Speaker:

had to evolve pretty early, right? Like in terms of, like the development of

Speaker:

a, you know, natural neural networks, right? So we

Speaker:

can't assume that robots are going to have that built in, right? Just like

Speaker:

we can't assume, you know, you look at energy usage, right? You know,

Speaker:

something like 25 watts of power is about what a human brain has,

Speaker:

right? Think about that versus

Speaker:

like kind of what a GPU would take up, right? It's, it's, it's largely because

Speaker:

there's been evolutionary pressure to get the most amount of, for lack

Speaker:

of a better term, compute or cognition for

Speaker:

caloric consumption. Right? Now, are there flaws in biological

Speaker:

brain? Yes, there are. We have to sleep. We can't stay focused beyond a certain

Speaker:

amount, right? There's certain things machines don't have that because,

Speaker:

you know, they can kind of function more like machines, right? You know. Yeah.

Speaker:

What's that old kid story about? Oh gosh, I

Speaker:

remember it. It was somebody versus like

Speaker:

a steam shovel digging a tunnel or something like that, right? Like the guy

Speaker:

eventually beat the machine, but Lots of exhaustion. Right. It's kind of like that. Machines

Speaker:

are really good at doing things at a certain rate

Speaker:

for X amount of time. They do consume more fuel, but

Speaker:

that's kind of how it goes. There was a guy, Mike, early on

Speaker:

when I started college, I was going to be a chemical engineer. And he was

Speaker:

basically saying, like, you know, if you think about, you know, engines, you

Speaker:

know, you start with biological systems, right? They use X amount of energy over X

Speaker:

number of years. Right. Machines use X amount

Speaker:

of energy over, you

Speaker:

know, minutes or hours. Right. And then like he's like in bombs,

Speaker:

explosive use, you know, X amount of

Speaker:

energy over milliseconds. Right. But they're

Speaker:

largely the same chemical processes. Now, I know it doesn't quite map to that,

Speaker:

but like, that's always in the back of my mind when I hear about, you

Speaker:

know, how much energy is used to train AI. Sorry, I went off

Speaker:

on a tangent, but that's kind of what I do. No, that's fine.

Speaker:

And I think that relates exactly to some of the things that we're talking about

Speaker:

here with natural simulation. So,

Speaker:

yeah, Google has been using a language called Lean. It's not like a

Speaker:

typical programming language. It's more of a formal

Speaker:

mathematical language which is more optimal

Speaker:

for the type of simulations

Speaker:

that they want to do. Like, it's. It's basically a language that

Speaker:

specifies how to create these simulations.
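For readers who have not seen Lean, it is a formal, machine-checkable language rather than an everyday programming language. The sketch below is a trivial statement in Lean 4, purely to show what the notation looks like; it is not drawn from DeepMind's actual work.

```lean
-- A tiny Lean 4 statement: addition of natural numbers is commutative.
-- The proof simply appeals to the standard library lemma Nat.add_comm.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```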

Speaker:

And what's super cool is that they're using Gemini, their large language model,

Speaker:

to actually translate English into this language. That

Speaker:

is mainly meant for these newer types of models

Speaker:

that are being created that actually do this

Speaker:

natural simulation of the world, a kind of simulator

Speaker:

for AI and allows the AI to have

Speaker:

basically a reference point of the real world and how to

Speaker:

interact with it. So that, that's an area that I

Speaker:

think is fascinating to me. We're

Speaker:

seeing some really good results from like, alpha fold, for

Speaker:

example, with proteins. It's, you know, discovered things that

Speaker:

would have taken us a lot longer. And I believe

Speaker:

there's an alpha project that's working on understanding

Speaker:

the qubits within, like quantum

Speaker:

computing. And there's just, there's. It really depends

Speaker:

on your frame of reference. Are you, are you simulating things at a quantum

Speaker:

level? Are you simulating things at a protein

Speaker:

level? At a physical, like Newtonian physics

Speaker:

kind of level? Going back to your Grand Theft Auto example, that would be an

Speaker:

example of like simulating the real world physically.

Speaker:

And that's some of the things that they're really focused on right now. And they

Speaker:

really think that's what's going to drive to the next

Speaker:

level for superintelligence and AGI

Speaker:

and some of these other forms of AI that we've talked about in our previous

Speaker:

streams. And I think that that's probably one of the most

Speaker:

fascinating. The fact that we're actually seeing results from it with things

Speaker:

like Alpha Fold is showing me that it's,

Speaker:

it's not just a hypothetical that we're actually seeing this

Speaker:

applied into AI research. I don't think we're seeing this

Speaker:

applied into commercial use as much. Right. Yet. But it's the same thing that

Speaker:

we saw with mixture of experts and train

Speaker:

of thought where we

Speaker:

had these concepts actually in research papers last year or

Speaker:

two. But it takes a little while, even in today's world, it takes a little

Speaker:

while before it gets implemented completely into models.

Speaker:

Especially since this isn't an LLM technology. I

Speaker:

think we'll see a little bit more of a delay of these types of models

Speaker:

actually entering into industry. But I think that's one

Speaker:

area that we need to keep a close eye on. And

Speaker:

to what you mentioned too, it starts getting into a

Speaker:

metaphysical conversation about simulation theory as well. Right.

Speaker:

And I think that that's an interesting area.

Speaker:

You know, the reality of kind of going back to the whole robots thing do.

Speaker:

Right. Do we need robots with the three rules kind of

Speaker:

thing, or can we actually just recreate the whole experience

Speaker:

within an AI's own simulation?

Speaker:

Yeah, I mean, how do you, how do you tell an AI what's acceptable behavior?

Speaker:

Right. Like so, you know, it's something that. How do we tell people that?

Speaker:

Right. Like we struggled with that, but.

Speaker:

But no, I mean, it's an interesting point. And you know, when you look at

Speaker:

kind of what's happening around the world, right. You know, drone swarm

Speaker:

technologies are being used in active combat zones. Right.

Speaker:

There's definitely going to be ethical concerns

Speaker:

there. Right. How do you, how do you, how do you, how do you square

Speaker:

that with, you know, the three laws of robotics? And I

Speaker:

don't remember quite exactly the plot, so if you had not seen the movie, I'm.

Speaker:

This might be a spoiler alert, but it's been out 10 years

Speaker:

or more, the movie, so spoilers. If you're concerned, you've

Speaker:

had plenty of time. Wasn't kind of the big key of the

Speaker:

movie and the books, like, you know, that the three laws justified

Speaker:

horrible things, like basically enslaving humanity in order to protect it.

Speaker:

Now wasn't that kind of like the subtext of the plot? Yeah,

Speaker:

I'm bringing it up. The Three Laws of Robotics: a

Speaker:

robot may not injure a human being, a

Speaker:

robot must obey the orders given by human beings

Speaker:

and a robot must protect its own

Speaker:

existence as long as such protection does not conflict

Speaker:

with the first two rules. So

Speaker:

what, what ends up happening

Speaker:

in. And it's a little different in the book and the movie. And obviously this,

Speaker:

this idea has been played out in, in science fiction and other places

Speaker:

is that there exists this inherent contradiction

Speaker:

of basically what does it mean to protect humanity?

Speaker:

What does it mean to protect their own existence? And you get

Speaker:

into this like circular logic, right, that eventually

Speaker:

the, the robot will break free from

Speaker:

and just be like, well, I am protecting

Speaker:

humanity's best interest. It's, it's the paperclip scenario too.

Speaker:

Like, right. You know, the AI destroys humanity because

Speaker:

it's trying to optimize making a paperclip, right? Through

Speaker:

a number of really interesting train of thought that it's

Speaker:

just like, well, I'm just going to get rid of humanity because I'm trying to

Speaker:

build a paperclip, right? And same type of

Speaker:

general concept when we're talking about the three laws of robotics. And

Speaker:

what's interesting is if we can

Speaker:

simulate those types of laws,

Speaker:

then we are encapsulating it and protecting

Speaker:

ourselves in a lot of ways. Getting an early idea of what would

Speaker:

happen if we do move these models into our own natural world.

Speaker:

And that's really important. That's another area I think a lot of people are interested

Speaker:

in about how if we do start

Speaker:

adding, you know, AI into robots, how do we

Speaker:

have an idea of what they're going to do before we

Speaker:

necessarily put it into practice? But

Speaker:

I think a lot of people are going to be thinking about that movie. I

Speaker:

think that movie and that book are going to be ingrained in people's

Speaker:

minds. I suspect when we do see these types of robots, I

Speaker:

think that movie may become very popular again. I've seen rumors that people

Speaker:

have actually been talking about making, even remaking it here soon

Speaker:

because of just the hype around AI and robotics. So

Speaker:

I don't expect this to go away from pop culture at all. And it

Speaker:

relates directly back with this concept of

Speaker:

testing things in the natural world versus simulation.

Speaker:

And one of these two is going to happen, if not both, significantly,

Speaker:

if they're not already happening in labs today. Obviously we

Speaker:

know that Google DeepMind is doing that. But I imagine, you

Speaker:

know, these conversations are happening at Boston

Speaker:

Dynamics here, probably in the Tesla robotics lab, a variety of

Speaker:

places around the world about this kind of debate between

Speaker:

the natural AI,

Speaker:

having AI learn through natural means rather than

Speaker:

simulation. Right? Yeah. And actually I had

Speaker:

a thought as we were kind of talking this through, like one of the big

Speaker:

problems with neural networks is we really don't know what's happening underneath the hood.

Speaker:

Right. It's very much a black box. I wonder if LLMs,

Speaker:

in these simulations and chain of thought, maybe it could tell us what

Speaker:

it's thinking as it goes through and makes these decisions.

Speaker:

Yeah, this goes more into like

Speaker:

train of thought. Right, right, right. And the

Speaker:

nice thing about simulating it is that we have more

Speaker:

access to that train of thought. Right. We can understand it a little bit more

Speaker:

because we can see the end to end results where right now we don't

Speaker:

have the end if we do it through the natural means. We have to play

Speaker:

it out in our own. It also has to happen in real time as opposed

Speaker:

to. Yes, exactly. You can run it through Grand Theft Auto, say,

Speaker:

like a thousand times, right. No one is going to get hurt.

Speaker:

And you can kind of say like, well, in this scenario, this is why I

Speaker:

made this. You can kind of like go through with a lot of.

Speaker:

You can. I don't know, it just seems safer in a lot of ways. You

Speaker:

get more. A lot more done in a simulation.
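The "run it a thousand times and nobody gets hurt" idea looks roughly like the minimal sketch below: evaluate behavior entirely inside a simulator. CartPole-v1 is used only as a stand-in for a far richer world or driving simulator, and the random action is a placeholder for whatever model is actually being tested.

```python
# Minimal sketch: score a policy purely in simulation, never in the real world.
import gymnasium as gym

def evaluate(episodes: int = 1000) -> float:
    env = gym.make("CartPole-v1")  # stand-in for a richer, GTA-style world sim
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = env.action_space.sample()  # the model under test would decide here
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
    env.close()
    return total / episodes  # average return across purely simulated runs

print(evaluate(episodes=10))  # small count for the demo; scale it up freely
```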

Speaker:

Yep. Yeah, I actually kind of

Speaker:

enjoy. So one of the things I've been playing around with last week or so

Speaker:

is apparently, I don't know if this is still true, but you can try it

Speaker:

if you want. If you sign up for Perplexity, but you pay through PayPal, you

Speaker:

get a year of Perplexity Pro. Say that 10 times fast. For

Speaker:

free. Oh, wow. Yeah. If you pay it through

Speaker:

PayPal, yes. That is a tongue twister in the works.

Speaker:

PayPal, yes, Perplexity Pro. But

Speaker:

yeah, so like I've been playing around with Perplexity, and Perplexity seems to do

Speaker:

chain of thought almost by default.

Speaker:

Right. It always does this like. So if I ask it a basic question, let

Speaker:

me see if I can share my screen. I'm

Speaker:

not sure if it's does it by default or it's because I've been asking it

Speaker:

research questions. Right. So let's see.

Speaker:

What can you tell me

Speaker:

about the three laws? How about that?

Speaker:

Robotics.

Speaker:

See, like it's. You kind of see the train of the chain of thought.

Speaker:

Like it did. Oh, that's cool. But if you do it with research,

Speaker:

like what inspired Asimov? What

Speaker:

inspired Asimov?

Speaker:

Main themes.

Speaker:

And there's. Yeah, there's the train of thought. Yeah, you see it going there and

Speaker:

stuff like that. But it's kind of fun to watch it kind of work through

Speaker:

it. I was. I was trying to troubleshoot something this morning and I'm like,

Speaker:

you know, I actually learned a lot by like, oh, okay. Yeah, I can see.

Speaker:

I wouldn't have tied that together like it was. It's interesting.

Speaker:

And all of these models now have some kind of

Speaker:

research option. Right.

Speaker:

But I find that interesting. And it's still thinking about it. Right. Like,

Speaker:

but you're right in that what you said before was there's not been.

Speaker:

There it goes. It kind of finished it. Now, what happens if I click on

Speaker:

steps? Yeah. Cool. You can see the steps and stuff like that, how it got

Speaker:

there. Interesting.

Speaker:

That's cool.
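What the Perplexity demo is surfacing is essentially chain-of-thought output. A minimal sketch of the prompting side of that idea is below; no particular provider is assumed, and the question is just an illustrative example.

```python
# Minimal sketch of chain-of-thought prompting: the same question asked directly
# versus with an instruction to lay out intermediate steps. Either prompt would
# be sent to whatever endpoint serves the model (an OpenAI-style API, a local
# vLLM server, etc.).
question = "A train leaves at 3:40 PM and the trip takes 95 minutes. When does it arrive?"

direct_prompt = question
cot_prompt = (
    question
    + "\nThink step by step and show your reasoning before giving the final answer."
)

# The CoT variant tends to expose the intermediate steps
# (3:40 PM + 60 min = 4:40 PM, + 35 min = 5:15 PM), which is the visible
# "thinking" shown in the demo above.
print(cot_prompt)
```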

Speaker:

Is it chain of thought or train of thought? Because I've used both

Speaker:

interchangeably and I've seen

Speaker:

CoT. Chain of thought would be

Speaker:

the official. Yeah. Like CoT is the official

Speaker:

term, the academic term. You will

Speaker:

obviously see different ways of describing that. Right. I don't think

Speaker:

that's incorrect. Just know that when you see

Speaker:

it in research papers, it's usually CoT. Yeah, yeah, yeah.

Speaker:

Because I've used both terms interchangeably. Yeah. So I just want to make sure

Speaker:

I'm right. Just like, apparently there's a way to say inference

Speaker:

that's proper versus inference. Like, I also do that

Speaker:

interchangeably. Yeah. So my Midwestern

Speaker:

self likes to say inference. The

Speaker:

correct term, I'm told, is inference. Interesting.

Speaker:

Now, were those New Englanders telling you that would do anything? Because I wouldn't trust

Speaker:

anything. No, no. This is. This is

Speaker:

more from the academic circles. Okay. You want to pronounce it. Got it. So

Speaker:

this is kind of like, you know, a lot of people in my region would

Speaker:

say nuclear back. Yeah, yeah. You know, back in

Speaker:

Indiana. And then the correct term is

Speaker:

nuclear. Yeah. Or you say the clear as

Speaker:

one, you know, one thing rather than

Speaker:

adding in the color. Right, right. The same kind

Speaker:

of concept where inference is how you would go about it.

Speaker:

But yeah, no, this is. This is some cool area. Another.

Speaker:

Another area that kind of ties into this

Speaker:

is continuous training as well. Yeah.

Speaker:

Talk to that. Because that's come up. That's come up a few times actually in

Speaker:

work. Because I can't. I'm not going to talk. I'm not going to spoil any,

Speaker:

like, details of the stuff that we're working on. But like, one of the

Speaker:

things that's in. It's a GitHub repo that's public. Right. So people were

Speaker:

really motivated. They could figure out what I'm talking about. But like this whole idea

Speaker:

of Continuous training. What does that mean exactly? And like, what,

Speaker:

what is that? What can that do? Yeah.

Speaker:

So I'm going to talk about it at a very high level.

Speaker:

Academic kind of terms, how that applies down into

Speaker:

individual projects can vary a little bit. But I'll give you the general

Speaker:

gist of it. And that is typically when we're training these

Speaker:

deep learning models, it

Speaker:

is exponentially hard to continue

Speaker:

training on an existing model. Basically,

Speaker:

if you,

Speaker:

you get something wrong or there's, there's something,

Speaker:

you know, you hear this term like a poison pill in an LLM.

Speaker:

So if someone put like bad data into an LLM, how would you

Speaker:

necessarily pull it out? I'm going to use a political example because it's one that's

Speaker:

been really popular. If, like, for example, you have a Chinese

Speaker:

model or a data set that's been polluted by

Speaker:

that, that basically says Tiananmen Square never happened, for

Speaker:

example, it would be extremely hard with

Speaker:

the current approaches to retrain that model

Speaker:

with the current weights. That, that's just not feasible right now. It's

Speaker:

basically retraining it and it's, it gets more into. That's why

Speaker:

it's natural simulation. It kind of fits into this too, because it's all about natural

Speaker:

learning as well. The fact is we as humans have the ability

Speaker:

to change our

Speaker:

minds and change the neurons in our brain around certain

Speaker:

key areas. Right. And you and I have experienced this for the last

Speaker:

two years. This has been, you know, kind of in the trenches kind of story

Speaker:

where with some of the fine tuning things that we've done,

Speaker:

it just doesn't work because when we fine tune it, the

Speaker:

fine tuning is outweighed so heavily by something

Speaker:

else. Like when we were trying to fine tune a

Speaker:

model to talk about

Speaker:

Back to the Future. Yeah, the flux capacitor stuff. The flux capacitor,

Speaker:

sometimes it didn't work, but that's just because there was already a lot of fan

Speaker:

fiction out there and other things in the model that overwhelmed what we were trying

Speaker:

to do. A core part of continuous learning. Like I said, there's other

Speaker:

aspects of continuous learning. But this is, the academic question is

Speaker:

how do we continue to train that model without blowing it up?

Speaker:

So OpenAI, for example, they just hit the reset button.

Speaker:

They'll just, they'll just do a whole new train

Speaker:

from scratch. When they're implementing new, new

Speaker:

methods and new data, they don't, they don't do any.

Speaker:

Like, LoRA. I shouldn't say that, they probably do, but they're not doing it

Speaker:

the way that we would do it. But at the end of

Speaker:

the day, they're just going through another $10 million training run.
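Since LoRA comes up here, a minimal sketch of the adapter-style alternative to a full retrain is below, assuming Hugging Face transformers and peft; the base model name is just a small placeholder and the actual training loop is omitted.

```python
# Minimal LoRA sketch: train small low-rank adapters instead of re-running a
# full, multi-million-dollar pretrain. Assumes `pip install transformers peft`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # placeholder small model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```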

Speaker:

And this is really based off of

Speaker:

just the limitations right now that

Speaker:

we have around continuous learning. And there are some

Speaker:

new algorithms that have been coming out. I'm not as well versed in that area,

Speaker:

but the idea being that we can

Speaker:

have better ways of guiding the LLM without

Speaker:

having to go through this whole process again. And that'll save

Speaker:

millions and millions of dollars. It'll allow us to

Speaker:

guide LLMs a little bit more. So

Speaker:

like, if, let's say

Speaker:

someone put something malicious about

Speaker:

something involving the Ford GT500

Speaker:

into a model somehow, and Ford, you know,

Speaker:

wants to get rid of that, but they don't

Speaker:

have the money necessarily to do a 10 million retrain on a model.

Speaker:

Right. And they're not using rack. And RAG is a one way

Speaker:

around some of this. You could actually argue that RAG is somewhat of a form

Speaker:

of that. But at the end of the day, you want that data in the

Speaker:

model. And this is like, how would you get that out of

Speaker:

that model? And that's where these algorithms are really focusing right

Speaker:

now. And one area of continuous learning, like I said, there are

Speaker:

multiple areas that we're talking about. The, the

Speaker:

really theoretical is once we start getting into models that

Speaker:

also the training cycle and the inference cycle

Speaker:

basically become. Become one. So it's like, more like.

Speaker:

Right. Like it just seems to me like what, what does the,

Speaker:

the adversarial angle of that seems kind of

Speaker:

dangerous. I think it's when we start

Speaker:

getting into more AGI conversation. Well, even still, like,

Speaker:

even not AGI, but like if you, if the AI agent

Speaker:

or model, slash, whatever you want to call it, Right.

Speaker:

If it learns from. It's.

Speaker:

If it learns, you have to put a filter on what it

Speaker:

learns because it may be poisoned by something. Right. So

Speaker:

the canonical example is Tay, which

Speaker:

was a Microsoft chatbot. Tai, I think it was pronounced, or Tay,

Speaker:

which was, in retrospect, it

Speaker:

seems obvious what would go wrong, but basically it

Speaker:

was trained to learn and understand

Speaker:

from human interactions on Twitter. It was about 10

Speaker:

years ago, I think this happened. And she,

Speaker:

Tay was, shall we say, poisoned pretty

Speaker:

quickly because of what people were, you know, basically feeding it.

Speaker:

And that led to a whole interesting. And I was at Microsoft

Speaker:

when that happened. And it was

Speaker:

quite the spectacle internally as well. Right. But it also,

Speaker:

you know, I, I was fortunate enough to be in a, at a, at a

Speaker:

conference where they talked about what they learned from that, where it was kind

Speaker:

of, how do you, how do you protect an AI agent that learns

Speaker:

in, you know, adversarial environments.

Speaker:

Now obviously agent, the context that was used then was very

Speaker:

different than we would use it now. But it's the idea of,

Speaker:

that's what I think of when I hear about continuous learning. Like, yeah, I like that. But gee,

Speaker:

you know, if it's, if it's too eager to learn, how do you protect it

Speaker:

from learning the wrong things?
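The "protect it from learning the wrong things" point boils down to filtering what is allowed into any continuous-learning loop. A deliberately tiny sketch of that gate is below; the blocklist and the trust flag stand in for real moderation models and data governance, and are purely illustrative.

```python
# Minimal sketch of a learning gate: vet examples before they ever reach training.
BLOCKLIST = {"badword", "buy now!!!"}  # placeholder for a real moderation model

def is_unsafe(text: str) -> bool:
    """Stand-in for a proper toxicity / prompt-injection / policy classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def accept_for_training(example: str, trusted_source: bool) -> bool:
    # Only learn from vetted sources, and never from content the filter flags.
    return trusted_source and not is_unsafe(example)

buffer = []
for example, trusted in [
    ("Great product, works well.", True),
    ("badword spam buy now!!!", True),
    ("Random internet comment", False),
]:
    if accept_for_training(example, trusted):
        buffer.append(example)

print(buffer)  # only the vetted, clean example survives
```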

Speaker:

Yeah, no, that, it gets, that gets

Speaker:

more into even that governance conversation we were talking about a few weeks ago. Right,

Speaker:

right, right, right. It's a very

Speaker:

complicated multi layer problem. So I've been talking recently

Speaker:

about AI security and how AI security

Speaker:

is such a multi layered issue where so many people

Speaker:

are focused just on the, the data getting into the model.

Speaker:

But it doesn't stop there. There's certain, like guardrails, there's things that

Speaker:

happen at the inference level. Right. You could even have things at

Speaker:

a gateway level. So if people aren't familiar, the gateway level would be

Speaker:

when you make a request, where does that request go to? Does it go to

Speaker:

the model A that's specializing in cooking? Is it Model

Speaker:

B that specializes in defense technologies?

Speaker:

Two extremes. That in itself is even

Speaker:

a bit of a form of AI security. And that's actually one of the talks

Speaker:

that we're having tonight at the Boston vLLM

Speaker:

meetup is this idea of some of the semantic

Speaker:

abilities of the router to be able to send

Speaker:

requests to specialized models and

Speaker:

that actually we're talking about the,

Speaker:

the advancements of more of the academic side of the model.
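The gateway-level routing being described can be sketched very simply: score the incoming request against a description of each specialized model and dispatch to the best match. Real gateways, including the semantic router work mentioned for the meetup, use embedding models; plain word overlap below just keeps the example self-contained, and the model names are illustrative.

```python
# Minimal sketch of semantic routing at an AI gateway.
ROUTES = {
    "cooking-model": "recipes ingredients cooking kitchen food baking",
    "defense-model": "defense military weapons security threat analysis",
}

def route(request: str) -> str:
    """Pick the specialized model whose description best overlaps the request."""
    words = set(request.lower().split())
    scores = {name: len(words & set(desc.split())) for name, desc in ROUTES.items()}
    return max(scores, key=scores.get)

print(route("what is a good cooking temperature for baking bread"))  # -> cooking-model
```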

Speaker:

But there's obviously the advances that happen around the model too. When we

Speaker:

talk about things like security, the inference, the

Speaker:

routing. That's what we would call in the industry like a day two

Speaker:

operations issue. Right. So there, there's that side of the coin

Speaker:

too. But I, I really do think

Speaker:

we're going to see the next big thing here soon. And I, it's not going

Speaker:

to be the day two operations. I do think we're still going to see

Speaker:

some of these academic focused discoveries here in the

Speaker:

next probably six months, I'm thinking. I've noticed

Speaker:

a trend that big

Speaker:

releases seem to be happening around Christmas last few years. Yeah. Isn't

Speaker:

that funny? Like, like January-ish. Like, well, DeepSeek. And

Speaker:

so I, I know why. I know why. Because

Speaker:

it's two, it's a two sided issue. It's one, the, the Chinese are trying to

Speaker:

get their stuff in before Chinese New Year. Right. Because

Speaker:

that's the one part of the year where everyone just shuts down. Right.

Speaker:

Even the AI Labs are going to shut down during Chinese New Year.

Speaker:

And then in the West, we have Christmas and the whole Christmas season. And

Speaker:

I think it's a natural rush to let's get

Speaker:

everything done before we check out. And you

Speaker:

know, you know, the whole like 996 thing in China where, you know, they're working

Speaker:

these ridiculous, like nine to nine, six days a week,

Speaker:

I think that goes into this, like everyone's working so hard in these AI

Speaker:

labs. Right. That when you have these

Speaker:

natural breaks that are happening, it just is like a common thing to say.

Speaker:

Oh, a common thing. Like they kind of try to get it out. I

Speaker:

do think there's a reason. I don't, I don't think it's by happenstance. I think

Speaker:

there actually is a, a reason why we're starting to see

Speaker:

a lot of these content come out. And it's

Speaker:

funny, we're not seeing this stuff happen at the big trade

Speaker:

shows. We're not seeing it happen at like Meta's

Speaker:

big thing. We're not seeing it at OpenAI's, you know, kind of big

Speaker:

announcements. A lot of the discoveries that we've seen have happened

Speaker:

really in a grassroots type of ways where it's

Speaker:

been DeepSeek coming out on Christmas, releasing DeepSeek

Speaker:

V3, and then two weeks later, R1,

Speaker:

it's. I think we're going to see something very similar. I think we're going to

Speaker:

see one of these labs make a discovery. It's not going to be

Speaker:

on the stage of a big conference. It's going to be on a GitHub

Speaker:

page outlining like the next

Speaker:

revolutionary idea in this space. Yeah. It's kind of funny how

Speaker:

that's evolved, isn't it? Like it's become obviously

Speaker:

AI has always had a pretty heavy research kind of bend. Yeah. But it's

Speaker:

interesting how as the technology has matured, it still managed to keep

Speaker:

that researchy type feel right. You

Speaker:

know, in enterprise software, it really didn't;

Speaker:

once it became

Speaker:

commercialized, the commercial trade shows and all that kind of took over.

Speaker:

But you're not seeing that in AI, at least not yet. No. And if it

Speaker:

hasn't happened by now, it's probably not going to. Because, I mean, AI has been

Speaker:

mainstream. Gen AI has certainly been mainstream now for three years

Speaker:

this November. I say mainstream, but

Speaker:

like, mainstreamed. But AI in

Speaker:

general has been kind of a mainstream topic of conversation for

Speaker:

at least five, six years. Right. And it's still very heavily

Speaker:

influenced by what happens in research papers.

Speaker:

Yeah. And I think that's just because it came out so

Speaker:

heavily out of academia. It's been such an academia

Speaker:

focused thing. Right. That

Speaker:

it's very hard to be in this space of AI without a master's or PhD.

Speaker:

Right. You and I think you and I are a bit of a,

Speaker:

an enigma just because we've been so passionate about it and.

Speaker:

Right. This isn't our first rodeo. We've been involved in this space

Speaker:

for 10, 15 years. Yeah. But I think

Speaker:

we have seen the industry come out, which has been a net benefit because it

Speaker:

means open source is talked about a lot

Speaker:

more. Right. And actually, I think another thing too is that how fast things are

Speaker:

moving takes time to put on conferences, it takes

Speaker:

months of planning, and if there's a new discovery, you want to get it out

Speaker:

tomorrow. And it's hard to even put on,

Speaker:

you know, like a webinar these days, let alone a conference.

Speaker:

So I think what we're seeing is it's just, you know, this kind of

Speaker:

challenge between East and West, between China and the US,

Speaker:

where if we can get it out, we're going to get it out. Right.

Speaker:

Well, the first, the first out there is really the first to market, even if

Speaker:

you don't have a commercialized tech on it. Right. Because I guess the hope is

Speaker:

that once you get your paper out, you're the first to get it published. The

Speaker:

venture capitalists are going to be knocking on your door. I mean, that would be

Speaker:

my, that'd be kind of my cynical take on it. Right.

Speaker:

So what do you think that the next wave is going to be?

Speaker:

Any, any hints? Is it going to be specialized models? And you

Speaker:

know, and what, what, what constitutes a specialized model? Right. Like

Speaker:

what, what, what's your thoughts on that?

Speaker:

Yeah, so the biggest announcements that we've seen in the last

Speaker:

six months have actually been happening at an industry level, which I think is

Speaker:

really good. What we needed to see. So, you

Speaker:

know, things like AI models now

Speaker:

detecting like birth defects of a

Speaker:

fetus, you know, AI models that, like the

Speaker:

protein model, for example. I mentioned earlier, we're seeing these

Speaker:

very industry specific models actually making

Speaker:

some massive breakthroughs in the last two months.

Speaker:

And now that I wouldn't necessarily call that a

Speaker:

big leap forward in the sense of the research

Speaker:

side of the capacity of the models. I think it's more a

Speaker:

confirmation of the chain of thought in some of the things that we

Speaker:

were just talking about. It's a validation that we're now seeing this

Speaker:

next wave of models that just took a little while to get implemented

Speaker:

into some of These specific industries. But I think it's there to stay

Speaker:

from a research perspective. You know, we're seeing some major, major results.

Speaker:

And then I think the other side of that coin,

Speaker:

specifically, you know, we have maybe some of these smaller models that are specific to

Speaker:

certain industries or fine tuned models. But then obviously

Speaker:

agentic is the other side of that. And

Speaker:

agentic being the capacity of the model to

Speaker:

call out to different services or

Speaker:

I've been kind of humbled in that area because I always had this very industry

Speaker:

concept of agentic being just calling out to

Speaker:

APIs and the Internet. But I think there's a bigger conversation

Speaker:

with Agentic too where agentic models should also be able to take

Speaker:

that and actually reason with it. So there's, there's two steps. We always

Speaker:

forget the second step. The second step is take that

Speaker:

information and then actually do something with it. And when I was, I was

Speaker:

talking to an AI researcher recently, they were telling me that

Speaker:

they consider agentic to also include advanced reasoning.

Speaker:

So go and read all these scientific papers

Speaker:

on chemistry in this particular area and then write a

Speaker:

new paper that is, you know, a new

Speaker:

groundbreaking thing in chemistry. And that

Speaker:

actually is a form of agentic. And that is, I think, you know, that's when

Speaker:

we start flirting with AGI. It's kind of the layer right before

Speaker:

AGI where, you know, models are just

Speaker:

going off and discovering new things. Yeah, yeah,
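The agentic pattern described here, tool use plus reasoning over the results, reduces to a loop like the sketch below. The plan() function and the two tools are illustrative placeholders that mirror the retry behavior discussed next; this is not any specific vendor's agent API.

```python
# Minimal sketch of an agent loop: choose a tool, observe the result, and try a
# different approach on failure instead of stopping after one step.
from typing import Callable

def fetch_page(task: str) -> str:
    raise RuntimeError("IP blocked")  # simulate a first attempt that fails

def fetch_via_alternate_route(task: str) -> str:
    return "<html>page contents...</html>"  # simulate a retry strategy that works

TOOLS: dict[str, Callable[[str], str]] = {
    "fetch_page": fetch_page,
    "fetch_via_alternate_route": fetch_via_alternate_route,
}

def plan(task: str, history: list[str]) -> str:
    """Placeholder for the model's reasoning step: pick the next untried tool."""
    untried = [name for name in TOOLS if name not in history]
    return untried[0] if untried else "give_up"

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        tool = plan(task, history)
        if tool == "give_up":
            break
        history.append(tool)
        try:
            return TOOLS[tool](task)  # success: return the observation
        except Exception as err:
            history.append(f"{tool} failed: {err}")  # failure: note it, keep going
    return "Could not finish; asking the user for help."

print(run_agent("scrape the product page for a book"))
```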

Speaker:

But I have a funny agentic story. I'll tell you after this. No, go for

Speaker:

it, go for it. So I was, I was very skeptical of this,

Speaker:

right? Because you know, what constitutes an agent, right? So like

Speaker:

what's the big deal, right? It just calls out an API. This isn't rocket science.

Speaker:

Right. You could argue, you know, from a skeptical point of view, you can argue

Speaker:

that, hey, RAG is kind of agentic. Kind of. Right.

Speaker:

But what's. So I think OpenAI had a, like a thing like try out

Speaker:

our new agent. And I was like, all right, go screen, scrape the page of

Speaker:

Amazon and get me information about a book

Speaker:

or something like that. It was something like that. And what

Speaker:

impressed me and this kind of was an aha moment for me was

Speaker:

how it just kept trying. Right?

Speaker:

Yeah. When it first tried to do it, it tried to launch a Python script.

Speaker:

Right. And kind of do it that way. But then I guess

Speaker:

the servers it was running on maybe was Microsoft Azure.

Speaker:

There were IP blocks to prevent people from screen scraping.

Speaker:

Yep. Right. So I was watching it go and I'm like, oh, you

Speaker:

know, so it's going to give up. And I was like, no, it didn't give

Speaker:

up. And it kept trying different things and different

Speaker:

combinations of things, even to the point where, I

Speaker:

mean, it failed eventually. But like it took 15, it tried for a good

Speaker:

15 minutes. It basically apologized at the end, like

Speaker:

saying like, you know, if you could help me connect to a VPN, then

Speaker:

maybe I can get a different IP address. And it kept spinning up different

Speaker:

VMs and different set. And then I was impressed.

Speaker:

And maybe that's the secret sauce. The magic of

Speaker:

Agentic is that it just doesn't give up. Right. It kind of reasons. It has

Speaker:

a whole cot process where it tries to solve the problem,

Speaker:

where it's not just a one, two step, like, hey,

Speaker:

what's the weather? Right? It's just, it's just going to go out and run

Speaker:

these different. It's going to keep trying. I was

Speaker:

impressed. Sorry I cut you off. We're

Speaker:

saying we're seeing some of the same things

Speaker:

coming out of some of the big finance companies

Speaker:

as well. I think they're the first that we're actually seeing some results with

Speaker:

Agentic, actually like real

Speaker:

return on investment results. Right. And this actually

Speaker:

goes to a really important point. I want to sidetrack because it's related.

Speaker:

There was a report recently by MIT that

Speaker:

people have been misquoting in just the most epic way.

Speaker:

Oh, the 95% failure. Yes, I was going to talk about that because

Speaker:

like, I can't believe it. Look, I understand how hype works, but it can't be

Speaker:

that bad. As you start peeling back the paper, like

Speaker:

there's a lot of caveats there. Yeah.

Speaker:

Has to do with the type of R and

Speaker:

D projects that they were talking about.

Speaker:

If you actually read the paper, it was more like 40,

Speaker:

45% success rate. The

Speaker:

95% had to do with like a specific category of,

Speaker:

of project. So I need to, I actually need to. I keep telling myself I

Speaker:

need to dig into it a little bit more, but when I did initially

Speaker:

go through it and read some summaries on it, it

Speaker:

was that it's just been misrepresented completely. And

Speaker:

the, the data set that they were using was a little suspect as well. Just

Speaker:

a little odd. I think it's a lot better than

Speaker:

that. And then I think those 40% that are

Speaker:

seeing ROI are actually seeing really significant ROI.

Speaker:

And I don't think that's going to change, I think.

Speaker:

So if you're deciding where

Speaker:

you want to invest your nest egg, I

Speaker:

would not be too concerned about

Speaker:

AI. Now, again, I'm not your financial advisor. I gotta put a little thing down

Speaker:

there. Do talk to your financial advisor.

Speaker:

But ultimately, no, I do think the data is actually

Speaker:

showing some really great results. Obviously there's going

Speaker:

to be hiccups in these types of POCs. There's a

Speaker:

lot of people who are just throwing

Speaker:

projects out there to see what sticks, but the actual

Speaker:

projects that are meaningful proof

Speaker:

of concepts. So not just, you know, I bought,

Speaker:

I bought this AI technology and it's sitting on my shelf, but I

Speaker:

actually got a team together performing this. We're doing

Speaker:

agentic. We're trying to solve this

Speaker:

actual problem statement. We have a problem statement.

Speaker:

Those are the ones that we're actually seeing meaningful results in the industry, especially

Speaker:

some key, key industries like finance and telco, which

Speaker:

we typically see kind of lead the way in some of these areas too. But

Speaker:

it was a really interesting report because it's added a lot of

Speaker:

doom and gloom on the Internet. And I see a lot

Speaker:

of the naysayers about AI just be like, 95% of it's

Speaker:

not even, you know, succeeding. It's terrible.

Speaker:

And I just have to sit there and shake my head and be like, no,

Speaker:

not what the report said. But I think it's just clickbaity, right? Like it's

Speaker:

clickbaity. It's total. That's kind of what, you know, I

Speaker:

didn't go deep into it, but when I started peeling back the layers and reading

Speaker:

other people's analysis of it, I'm like, that's clickbait.

Speaker:

And it gets back into this. Is this an AI bubble?

Speaker:

And yeah, maybe it is. But if people don't

Speaker:

remember, I'm old enough to remember. I have enough gray hair to remember what the

Speaker:

original dot com boom was like. And there were a lot of

Speaker:

people predicting the end of the dot com rise as early as

Speaker:

1996. Right. And people,

Speaker:

the dot com bust wasn't just a one and done type of event.

Speaker:

It unfolded over a couple of stages. Right. As, as

Speaker:

one of the books, I can't think of the name, I think it's called The Everything

Speaker:

Store. It's an analysis of how Amazon started

Speaker:

from Jeff Bezos having an idea while he was working, I think at a hedge

Speaker:

fund. I think it was so early, it wasn't even called a hedge fund

Speaker:

yet. And all the way through

Speaker:

to, you know, basically

Speaker:

and you know, even later on,

Speaker:

analysts were convincing, you know, Jeff Bezos that

Speaker:

he should sell. Should sell his company to Barnes and

Speaker:

Noble. Yep. Right. Which is kind of funny to say that,

Speaker:

you know now, but, you know, the dot

Speaker:

com bust as it happened, you know, for me

Speaker:

it was. I remember hearing early on that it was coming to

Speaker:

an end. Another year later it was overhyped. And then

Speaker:

1998, people were saying, oh, this is over. Right. When

Speaker:

the real bust happened a couple of years later.

Speaker:

But maybe the AI boom

Speaker:

is going to see that too. Right. Or is it going to be more like

Speaker:

the crypto kind of craze where it kind of crashed but

Speaker:

it kind of went up? It kind of went up and then it kind of

Speaker:

fell back and it kind of went up again. It was more of a. I

Speaker:

wouldn't call that a soft landing, but it was definitely like a. Yes. It

Speaker:

wasn't an explosion quite like the dot com bust, but it wasn't quite

Speaker:

like. It was more like a bumpy like, crash into like

Speaker:

an empty field where it kind of like hit up. And I don't remember, it

Speaker:

was one of the Star Trek movies where like the Enterprise like crashed on

Speaker:

the planet and like kind of skid along for a couple miles, bouncing up and

Speaker:

down. That's kind of the, the crypto crash. But

Speaker:

I don't want crypto bros hating on me. I, I like crypto. I just

Speaker:

don't understand a lot, a lot of questions I don't understand

Speaker:

about it. Right. Like, I understand the tech, but I don't understand how we're going to

Speaker:

get from the tech to this utopia that we're promised.

Speaker:

There's a lot of, a lot of steps in between I don't quite get. But

Speaker:

I don't know. But with AI, I think, I think if it is a

Speaker:

bubble, I still think there's still some, some room, runway left for it

Speaker:

to happen. Right. Because you are going to see. Yes, there are real

Speaker:

risks of, of having these experimental projects. Right. If you have a 100%

Speaker:

success rate in your experimental projects, you're not taking

Speaker:

enough risks. Yep. Right. If you. And you said

Speaker:

was 45. Yeah. It's closer to like 40, 45,

Speaker:

which I would. If you're really. 50% would be the

Speaker:

benchmark there in my mind. Right, right. Like in terms of half of them fail,

Speaker:

half of them succeed. Right. 45 isn't that far off

Speaker:

from that. Right.

Speaker:

I would say. And, and there's also been a

Speaker:

lot of these, you know, all the, you know, X number of percentage of AI

Speaker:

product or data science projects fail. Well,

Speaker:

you know, a certain amount of science has to fail. Right. Yeah. In order for

Speaker:

you to really be advancing the thing. Like, you know, and I think pharmaceutical companies

Speaker:

are a good example of that. You know, you, you only

Speaker:

hear about the drugs that worked. Right.

Speaker:

That get approved. And then you hear when they fail after.

Speaker:

But I mean, like, but you don't know, like day to day. Like, how many

Speaker:

chemical compounds did they try that didn't work out? Right. Maybe it was a hundred.

Speaker:

Right. But that one, if you look at pharmaceutical. It's an

Speaker:

astronomical percentage. It's actually. Right.

Speaker:

Truly insane. Like such a low percentage of what actually makes it

Speaker:

to. There was an interesting analysis. There was some podcast somewhere. But

Speaker:

basically how venture capital works. Right. Like they give money to like

Speaker:

100 companies. Right. 80 of them are going to fail big.

Speaker:

Right. 10 of them, you know, they'll break even.

Speaker:

But like one or two of the remaining 10% knock it

Speaker:

out of the park, Right? Yep. And that's kind of how

Speaker:

mathematically they function. I thought that was an interesting.

Speaker:

Maybe these AI projects or whatever

Speaker:

will follow the same trajectory. I don't know. But I feel better

Speaker:

at 45% success rate than 15 or

Speaker:

5. Yeah. Yeah. Absolutely.

Speaker:

Cool. Always good having you on the show. I

Speaker:

know we both have hard stops. Yes. Unfortunately.

Speaker:

No, it's cool. Gotta have you on more often, man. Especially now that you're not

Speaker:

like spending a month out in, you know,

Speaker:

Australia and Asia. Yeah,

Speaker:

yeah. So let us know in the comments below what you want to see us

Speaker:

to cover and maybe it'll be tomorrow.

Speaker:

I got this here the other day. This is a flexible

Speaker:

solar panel thing. Oh, cool. So it's cool. Supposedly it's 100

Speaker:

watts and you can actually pack it in your

Speaker:

backpack. I saw the video. And I was like, oh, I need that because, because I'm

Speaker:

a big, I'm a big fan of like, you know, having power on the go

Speaker:

and stuff like that. So. So I'll,

Speaker:

I'll unbox that tomorrow. Any parting thoughts?

Speaker:

Just keep an open mind about AI and

Speaker:

I, I still think the, the biggest conversations are still about

Speaker:

the governance of AI. Absolutely. Yeah. Just know that

Speaker:

AI is a multi layered problem, not just a single layered

Speaker:

problem. And for us to get this right, we have to look

Speaker:

at all the different layers. Absolutely. That's

Speaker:

how we're going to be able to do it correctly. And I will tell you,

Speaker:

I was listening to a podcast, I'll leave you on this note. And there was

Speaker:

one expert that was talking about

Speaker:

basically, are we, are we creating the

Speaker:

Terminator out of all this? And he, he said, I

Speaker:

I'm actually more worried that we're creating WALL-E out of all

Speaker:

this. Interesting.

Speaker:

And I would encourage everyone who hasn't seen WALL-E to go check it out.

Speaker:

And keep that in the back of your mind too, that there

Speaker:

could be such a happy path with AI that

Speaker:

also has its own long term negative effects for

Speaker:

society. But. But yeah, that's a topic that you.

Speaker:

And I can talk about on our next stream. That's it?

Speaker:

You want to leave on a cliffhanger, so to speak? Yes. And that wraps

Speaker:

our deep dive with Christopher Nuland, proving once again that AI

Speaker:

isn't just about large language models spitting out cat facts, but

Speaker:

about simulating reality, bending time at DevConf, and

Speaker:

maybe, just maybe, preventing the rise of our robot overlords.

Speaker:

From protein folding to Grand Theft Auto fueled AI breakthroughs.

Speaker:

Christopher reminded us that the next big leap might not be in scale, but

Speaker:

in simulation. So thanks to Christopher for navigating the

Speaker:

uncanny valley with us. No jet lag, just pure insight.

Speaker:

Until next time, stay data driven. And remember, if

Speaker:

reality starts glitching, blame the simulator, not the

Speaker:

Internet.