WEBVTT
00:00:00.320 --> 00:00:09.359
We as humans, as I say, we do have an unlimited context window, and a memory.
00:00:09.839 --> 00:00:21.519
So those are two things that we have, but agents, no, not agents, language models don't have that.
00:00:22.559 --> 00:00:26.480
They don't have unlimited context window and they don't have a memory.
00:00:27.760 --> 00:00:31.440
Welcome everyone to another episode of Dynamics Corner.
00:00:31.920 --> 00:00:37.759
I learned a lot today, and now I know how it all runs in the background of the AI.
00:00:37.920 --> 00:00:39.359
I'm your co-host, Chris.
00:00:39.600 --> 00:00:40.479
And this is Brad.
00:00:40.560 --> 00:00:44.479
This episode is recorded on March 17th, 2027.
00:00:45.039 --> 00:00:46.719
You know what today is, Chris?
00:00:47.439 --> 00:00:48.560
March 17th.
00:00:48.640 --> 00:00:50.000
Yes, it's it is St.
00:00:50.159 --> 00:00:50.880
Patty's Day.
00:00:51.200 --> 00:00:51.359
St.
00:00:51.600 --> 00:00:52.880
Patrick's Day.
00:00:53.119 --> 00:00:54.640
Hope everyone's celebrating.
00:00:54.719 --> 00:01:10.799
I know there will be a little bit of celebration later on, but as you mentioned, we had the opportunity today to speak about AI and break down the fundamentals of how agents work, how large language models work, and the process they go through to give us the answers we're looking for.
00:01:11.040 --> 00:01:13.519
With us today, we had the opportunity to speak with Dmitry Katson.
00:01:15.920 --> 00:01:16.480
Hello.
00:01:16.879 --> 00:01:18.000
Good morning, sir.
00:01:18.159 --> 00:01:19.439
How are you doing?
00:01:20.079 --> 00:01:21.439
It looks like evening there.
00:01:22.400 --> 00:01:23.519
Good morning.
00:01:25.280 --> 00:01:28.480
It's early morning for you there, I believe.
00:01:30.400 --> 00:01:32.719
No, it's uh it's evening.
00:01:32.879 --> 00:01:34.159
So it's 9 p.m.
00:01:34.959 --> 00:01:35.760
9 p.m.
00:01:36.239 --> 00:01:40.079
You know, with these time zones I can't keep up with it.
00:01:40.640 --> 00:01:43.280
But wow, is all I can say.
00:01:43.519 --> 00:01:57.599
Since the last time we had spoken with you, the advances in AI and technology, I think we spoke to you last time about the uh the workshop hackathon that you had done about a year or so ago.
00:01:58.000 --> 00:02:05.040
And that seems light years ago as far as technology is concerned.
00:02:05.840 --> 00:02:09.599
It is, but it was only two years ago.
00:02:10.960 --> 00:02:12.240
Oh, two years ago.
00:02:12.879 --> 00:02:14.400
That was right two years ago.
00:02:14.639 --> 00:02:16.319
Oh yeah, time flew.
00:02:16.879 --> 00:02:20.960
I can't even I don't know if something was last week, a year ago, two years ago.
00:02:21.120 --> 00:02:29.199
See, I thought it was a year ago, but I can't keep up with time, as they say.
00:02:29.759 --> 00:02:33.759
Right.
00:02:34.159 --> 00:02:36.800
I think we we talked twice.
00:02:37.520 --> 00:02:41.199
Two years ago and a year ago.
00:02:41.439 --> 00:02:45.039
So it's becoming a good tradition to talk every year.
00:02:45.360 --> 00:02:45.919
Yes, yes.
00:02:46.159 --> 00:02:53.039
Well, you're doing some great things, and I see you involved in doing a lot with AI development.
00:02:53.199 --> 00:03:11.840
You know, you started out with CentralQ, which is what we first spoke about. It's great that you've done that for the Business Central community, to allow people, users, developers, partners, everybody that uses Business Central, to have a place to go to use AI to help search for content.
00:03:12.240 --> 00:03:21.840
And now you've made some great contributions to Business Central, and I see you're doing some great things with development as well, and some agentic development in conversation.
00:03:21.919 --> 00:03:27.919
So that's what I hope to talk with you about.
Yeah, trying to not lose my job.
00:03:31.280 --> 00:03:34.800
That's a topic that we can talk about.
00:03:34.960 --> 00:03:38.400
Uh but before we get into all that, you mind telling us a little bit about yourself?
00:03:39.919 --> 00:03:42.800
Hey everyone, uh I'm Dmitry.
00:03:42.960 --> 00:03:49.199
I've been based in Thailand for six years already.
00:03:49.439 --> 00:03:50.960
Uh moved here.
00:03:51.919 --> 00:04:10.240
20 years in Business Central slash NAV, 10 years in AI, starting from machine learning 10 years ago, and it's been a long, long way since.
00:04:11.280 --> 00:04:16.800
Um I'm doing a lot of things that I love.
00:04:17.360 --> 00:04:27.040
Um AL development, one side of my story, AI development, another side of my story.
00:04:27.279 --> 00:04:36.639
Now agent architecture, that's what I put in my email as a job.
00:04:40.319 --> 00:04:40.959
Yes.
00:04:42.240 --> 00:04:58.480
So yeah, I love to help people, teach people, experiment with different things, and overall, I love to make Business Central smarter.
00:04:59.279 --> 00:05:00.319
That's what I do.
00:05:00.959 --> 00:05:01.839
No, that's great.
00:05:01.920 --> 00:05:02.319
It's great.
00:05:02.399 --> 00:05:03.759
You're doing some great things.
00:05:04.000 --> 00:05:09.040
You joked about trying not to lose your job, or trying to keep your job.
00:05:09.120 --> 00:05:14.000
I think a lot of people uh think that way, but I think there is some reality about it.
00:05:14.160 --> 00:05:18.079
I think AI can be efficient and help you with efficiency.
00:05:18.399 --> 00:05:25.279
I think we'll have to come up with new ways to do things, because I know I use it quite frequently as well.
00:05:25.519 --> 00:05:28.959
Um just one thing I want to ask you.
00:05:29.040 --> 00:05:45.120
You mentioned you started working with machine learning 10 years ago, and now you're doing agentic development, or agentic engineering, Business Central development, and you also do a lot of presentations.
00:05:46.480 --> 00:05:57.120
I think AI is one of those things at this point where everybody hears about it, but a lot of people may not understand it.
00:05:57.360 --> 00:06:06.480
And even within your circle, like in our group of individuals that we speak with, everybody's all into AI.
00:06:06.560 --> 00:06:13.920
But if you step outside the group a little bit, some people still have no idea what AI is.
00:06:14.800 --> 00:06:26.399
So in the context of AI, what do you consider AI, or what do you mostly focus on when you hear the word AI?
00:06:26.560 --> 00:06:27.920
If that's a clear question.
00:06:28.000 --> 00:06:34.399
I know it's not too clear, but I think you can see the struggle that I find a lot of people having with, oh, that's AI, right?
00:06:34.560 --> 00:06:35.680
Is it spell checking?
00:06:35.759 --> 00:06:36.959
Is it driving a vehicle?
00:06:37.120 --> 00:06:37.920
Is it coding?
00:06:38.079 --> 00:06:39.439
Is it robotics?
00:06:39.759 --> 00:06:41.920
So it's it's a little challenging.
00:06:42.560 --> 00:06:56.720
Yeah, sometimes I get requests for a 30-minute consultation, and people ask me, can you teach us AI in 30 minutes?
00:06:58.240 --> 00:07:01.360
That's fun.
00:07:02.800 --> 00:07:05.439
And I tried to do that.
00:07:06.480 --> 00:07:21.759
So first, what I try to explain to them is that AI can be very different depending on what you need.
00:07:23.120 --> 00:07:43.680
I started with machine learning, and in those times it was called AI too, but machine learning allowed us to train our models based on our own data.
00:07:44.879 --> 00:07:49.279
And we can use these models to predict things.
00:07:49.839 --> 00:08:05.680
And the prediction could be some number, or an answer, yes or no, you know, like in that toy where you ask a question and it says yes or no.
00:08:06.319 --> 00:08:14.560
Yeah, but that was something trained on some of your experience.
00:08:14.959 --> 00:08:21.759
And that actually was AI, and still is AI, for many cases.
00:08:24.480 --> 00:08:44.480
In 2023, actually late 2022, when large language models appeared, not appeared but became popular because of ChatGPT, it's also AI, and it's also prediction.
00:08:44.960 --> 00:08:47.279
But in this case it's prediction of the next word.
00:08:48.159 --> 00:09:30.399
Based on the training data, but in this case the training data was the whole internet, the whole internet of words, texts, books, also trained by deep learning techniques, with of course some advances of new architectures and new technologies, the transformers. But still it's AI; technically it's a technology that predicts the next word based on what it saw previously.
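The "predict the next word from what came before" idea can be sketched with a toy bigram model: count which word most often follows each word in some training text, then predict that. Real LLMs use transformers over the whole context, but the core task is the same; the tiny corpus here is invented for illustration.

```python
# Toy bigram "language model": for each word, remember which word most
# often followed it in the training text, then predict that word.
from collections import Counter, defaultdict

def train(text):
    """Count next-word frequencies for every word in the text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequent follower of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # -> cat ("the" was followed by "cat" twice, "mat" once)
```

A real model predicts from the entire preceding context, not just one word, but the training objective is this same next-token prediction.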
00:09:31.440 --> 00:09:53.039
So if people ask me, okay, we are in the manufacturing business and we want to predict maybe the capacity of production for the next year.
00:09:53.919 --> 00:09:56.000
What AI should we use?
00:09:58.559 --> 00:10:17.919
It's obvious to me that in this case they would need to use the machine learning approach, because it's their capacity, it's their items, they have a history of what came before; it's not a new business.
00:10:18.639 --> 00:10:25.919
It's obvious to me that they need to train their own model that will predict their capacity.
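As a toy illustration of the "train your own model on your own history" idea, the sketch below fits a least-squares trend line to five invented years of production numbers and extrapolates one year ahead. The data and the linear model are assumptions for illustration only; a real capacity forecast would use a proper ML pipeline with far richer features.

```python
# Toy capacity forecast: fit y = a + b*t to a company's own yearly history
# by ordinary least squares, then extrapolate to the next year.
# The numbers are made up for illustration.

def fit_linear_trend(history):
    """Fit y = a + b*t to (t, y) pairs by ordinary least squares."""
    n = len(history)
    mean_t = sum(t for t, _ in history) / n
    mean_y = sum(y for _, y in history) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in history)
    var = sum((t - mean_t) ** 2 for t, _ in history)
    b = cov / var           # slope: units gained per year
    a = mean_y - b * mean_t # intercept
    return a, b

# Five years of (year, produced units) -- hypothetical data.
history = [(2021, 900), (2022, 1000), (2023, 1100), (2024, 1200), (2025, 1300)]
a, b = fit_linear_trend(history)
forecast_2026 = a + b * 2026
print(round(forecast_2026))  # -> 1400
```

The point is the data source: the model only knows this company's history, which is exactly what a general-purpose chatbot does not have.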
00:10:28.240 --> 00:10:33.919
People think that they can just go directly to the Chat GPT and ask about it.
00:10:35.279 --> 00:10:40.960
And ChatGPT will give you an answer every time.
00:10:43.440 --> 00:10:48.080
How good will that prediction be for your case, for your business?
00:10:48.799 --> 00:10:55.360
Well, you can try and compare in one year, yeah?
00:10:57.679 --> 00:11:12.240
But large language models are not trained to predict something like that, to predict capacity, to optimize manufacturing, to do this complex stuff.
00:11:14.159 --> 00:11:19.759
So AI could be different depending on what task you want to solve.
00:11:22.080 --> 00:11:25.919
Like language models are trained on text, they predict text.
00:11:27.200 --> 00:11:50.799
If there was a very good book about how capacity should be planned, the large language model could be used as a step in the pipeline, to set parameters for the trained machine learning model.
00:11:51.440 --> 00:11:59.679
Yeah, so it could be used as a step, but not as a final solution in the pipeline.
00:12:00.320 --> 00:12:10.720
So choosing the right tool is still what will keep our jobs.
00:12:15.440 --> 00:12:29.919
Someone sent me a meme, believe it, I still get memes occasionally, far fewer than I used to, but it said AI won't replace you if your job is to use the AI or to build the AI, or something like that, so it's true.
00:12:30.159 --> 00:12:32.000
But yeah, yeah, yes.
00:12:32.159 --> 00:12:36.879
I actually wanted to experiment with that.
00:12:37.039 --> 00:12:53.279
And I think last month I asked Claude, knowing everything about me, can you predict whether AI will replace me or not?
00:12:53.519 --> 00:13:15.279
And Claude told me that since part of my job is to create AI agents, I could be safe, because that's what will be required as a job, at least for the next year.
00:13:15.759 --> 00:13:41.039
But then, in the week after that, LangChain, which is one of the agent frameworks, released an agent that can build agents on top of their framework.
00:13:41.360 --> 00:13:53.279
So previously, LangChain had a very good agent framework, which is called LangGraph, and then Deep Agents and so on.
00:13:53.440 --> 00:14:14.559
But it required experience and knowledge of how to do that, and then they released an agent that already has this knowledge, and you can just ask. It's actually very similar to what we had three years ago.
00:14:14.879 --> 00:14:22.000
Three years ago, everyone said that prompt engineering is everything.
00:14:22.080 --> 00:14:26.240
So yes, you need to know how to write prompts.
00:14:26.480 --> 00:14:28.320
Yes, in this case, you're safe.
00:14:28.480 --> 00:14:29.840
Yeah, you remember that.
00:14:30.399 --> 00:14:42.559
But now we have large language models that write prompts on our behalf, and we use these prompts every day, and they do that work much better.
00:14:43.039 --> 00:14:50.159
Yes, you use agents to ask how to write prompts better.
00:14:50.799 --> 00:14:54.879
Here's what we're gonna do: create me a prompt for this agent.
00:14:55.519 --> 00:14:57.360
Exactly, and then build this agent.
00:14:57.440 --> 00:14:57.679
Yes.
00:14:57.759 --> 00:15:12.080
So I would not be so very sure, 100% sure, that even my job would not be replaced.
00:15:12.399 --> 00:15:13.759
That was only one week apart.
00:15:14.399 --> 00:15:23.039
I think there's some jest, some laughing in there, but I think it's a nervous laugh, Brad.
00:15:23.120 --> 00:15:42.559
No, I laugh at it because, with the rate of change that's going on, I think worrying about your job or your future creates a lot of stress, where sometimes just focusing on what you're doing, solving the problem, using the tools to solve the problem, can make it a little bit easier.
00:15:42.720 --> 00:15:43.840
But I do agree with you.
00:15:44.000 --> 00:15:50.720
The the rate of change within AI is very difficult to keep up with.
00:15:50.799 --> 00:15:55.759
It's almost as if you go to sleep for the night, you'll miss out on something new.
00:15:56.000 --> 00:16:01.759
And I almost think I should go to sleep for a couple of weeks, because then I'll miss all the talk about something.
00:16:02.000 --> 00:16:07.519
I'll be able to see just what's new, or maybe where we level out with it.
00:16:10.000 --> 00:16:48.799
Many people are wondering nowadays, especially if we talk about AI for coding: if I gain some experience and this speeds me up in creating the final product, so I now know how to use AI to create software, should I share this knowledge with other people? Because if I have this knowledge nowadays, that's my advantage.
00:16:49.679 --> 00:16:59.919
If other people don't have this knowledge and experience, I can win compared to them.
00:17:00.639 --> 00:17:07.279
Yeah, that's a question that many people ask, even in our MVP community.
00:17:09.119 --> 00:17:11.920
I recently wrote a blog about that.
00:17:12.319 --> 00:17:19.519
And I had asked this question of myself as well, many times.
00:17:20.720 --> 00:17:43.279
But I think that's what makes our generation still successful, and we can still develop technologies at such a rapid pace, because we share the knowledge.
00:17:43.759 --> 00:17:57.920
I think that's why a lot of open source projects appeared recently, a lot of blogs, a lot of YouTube channels like that, like yours.
00:17:58.319 --> 00:18:08.079
If people share the knowledge, maybe they think that they will lose in the short term.
00:18:08.400 --> 00:18:44.079
But, and that's actually what I discovered from my own experience, if you share the knowledge, you become more respected in the world, because other people can listen to you, they can apply your experience or not, but not everyone will do the same things that you can do.
00:18:44.319 --> 00:18:48.720
I mean it's easier to do that, but not everyone will do that.
00:18:49.200 --> 00:18:51.200
Um so yeah.
00:18:51.759 --> 00:18:52.960
Yeah, no, I understand.
00:18:53.039 --> 00:18:59.920
I think I think the sharing is important, and I think this is new to everybody, and I think we can all be successful.
00:19:00.079 --> 00:19:08.160
And I I do believe, like you're saying, trying to be the only one to hold on to the information isn't going to bring you ahead.
00:19:08.319 --> 00:19:15.519
I think it's having the ability to share with others brings everybody forward, including yourself.
00:19:15.680 --> 00:19:26.000
Like you said, you do get respected if you're someone who works with it, and people understand that you work with it. But as you mentioned, even in some of the groups that we're in, people ask questions, or they share things that worked or didn't work.
00:19:26.160 --> 00:19:31.200
So the collaboration has become more important because it can bring everyone forward.
00:19:31.359 --> 00:19:44.880
You do quite a lot with AI, and I was asked a question recently, and I thought about my experience with it, because I was extremely intimidated by it at first.
00:19:45.359 --> 00:19:45.680
Okay.
00:19:46.240 --> 00:19:54.559
And then I went from being intimidated by it to now I use it for many tasks in the day, not just coding.
00:19:54.720 --> 00:19:56.559
I use it for a lot of tasks.
00:19:56.880 --> 00:20:10.799
But if you were to talk with somebody, let's just say we can bring it into Business Central development, Business Central project management, anybody who is maybe a functional consultant, or even anybody who wants to get into speaking.
00:20:10.960 --> 00:20:11.279
Okay.
00:20:11.839 --> 00:20:24.240
We hear of AI, we hear of agents, we hear of prompting, we hear of instruction files, I hear of skills, I hear of tasks, I hear of Claude, I hear of OpenAI, I hear of Copilot.
00:20:24.400 --> 00:20:29.119
There's so much information out there that it's overwhelming and can be intimidating.
00:20:29.359 --> 00:20:38.079
But someone such as yourself that has been working with AI for many years, you do a lot of sessions on AI, you create a lot of great things with AI.
00:20:38.400 --> 00:20:39.920
Where would somebody start?
00:20:40.160 --> 00:20:52.960
So if you had a new developer or somebody like that, where would you recommend they start, to be able to understand and use some of these tools to help them be efficient?
00:20:53.440 --> 00:20:55.359
Or how do you think they should start?
00:20:55.839 --> 00:21:09.759
You know, there is a common opinion that to learn things, you need to start doing things, which I partly agree with.
00:21:11.039 --> 00:21:50.720
However, I think that by going through some webinars, maybe workshops, or playlists on YouTube channels, a lot of which are free nowadays, you can first get this structured understanding, rather than trying this and discovering how this thing works, then trying other things and discovering how that works.
00:21:50.960 --> 00:21:54.799
Uh for many people, maybe that approach will work.
00:21:54.960 --> 00:21:59.519
Uh for a more structured approach, I would recommend to.
00:22:00.319 --> 00:23:47.359
start exploring and learning from trainers, from books, or from some structured content. But at the same time, you need to understand what you want to do, what you want to achieve, what's your task, what's your goal, and depending on that, search for the structured content. One of the websites that I usually recommend is called deeplearning.ai; there are a lot of free courses there on many different topics, from respected trainers, the management of different AI labs, the engineers, and so on. So that's one of the places that I recommend to go. Also, especially for building agents, I would recommend going through the Claude academy; OpenAI has its own academy as well. But you need, of course, to understand that each of these courses built by the AI labs is focused on their own tools. That's actually okay, because many tools work in a similar way. So yeah, that could also be a good approach.
00:23:47.759 --> 00:24:07.759
Okay, so your suggestion is: instead of trying to learn everything, find a task that you want to complete, then find structured learning to take you through that task, so you can go from start to finish, versus trying to piece together little pieces.
00:24:08.799 --> 00:24:14.160
And that can help you get some comfort with how it all works.
00:24:14.559 --> 00:29:44.799
Yes, but that approach works very well if you don't know the area, if you want to go from zero. If you're already familiar with some tools, with the technology, with how it works, then probably trying to solve things would give a better output. So you learn while doing things, but only if you know the basics.
Okay, so if you know the basics, try to solve problems. If you're starting from the beginning, find a task and then go through the path of learning the basics to complete that task. Because, again, there are a lot of people... I got lost for a little while, because everyone we speak with, all they talk about is what they're doing with AI. Then you have a conversation with someone outside of the pool that you're in, and you realize that there still are a lot of people that are just learning AI. Even within Business Central, Business Central has a lot of AI features in it outside of the coding, and even some of those features and functionality within the application, many are still learning how to use and how to take advantage of the benefits of using them.
A question for the both of you, actually, and Dmitry, this may be pertinent to you. We talked about the community in our little circle, right? We talked about AI and how we use the tools, and on the last episode we had a brief conversation about, you know, outside of that, maybe family members, how do they perceive AI? They know you work in AI, but they're in a different industry. But to me, that's still within my sort of larger bubble, in this case my region. I live in Washington state, right next to Microsoft, essentially. Now for you, Dmitry, living in Thailand, how are the people around you using it, or what's their perception of AI? Is there anyone using it, and to what degree? Knowing, with your background, you know a lot about AI and machine learning.
Most of the people who I know who are not in IT at all know only one AI tool, which is ChatGPT. I do know the owners of the dive office, where they organize diving, traveling, you know, car renting. We live on an island, a touristic place, so many businesses like that are very popular here, and I do have many friends in them as well. Sometimes they ask my advice, and I try to help them as much as I can, but they don't actually know anything about agents; they don't know the word agents. They know how to go to ChatGPT and ask questions there, which I think maybe is okay; I mean, that helps them. But I try to teach my wife, for example, to use Claude. As a former HR professional, she's not very familiar with those kinds of tools, but at least I show her how to connect maybe some services that she uses to get more content, so she gets a better experience with the AI tools. So yeah, but we live in a bubble, that's true.
That's true, and this bubble is not very big, to be honest, compared to the world and to the number of people who use just ChatGPT. Maybe we have, I don't know, like 10 million people in the bubble, and it sounds like a lot. Yeah, it sounds like a lot, but it's not a lot. And that's where sometimes I have to take a step back and realize that everyone's at a different point in their journey. Also, it's good that you mentioned you're showing your wife how to use Claude or some tools for AI, because AI now, or, I use the word AI, but the agents and the tools aren't just for development.
00:29:45.039 --> 00:29:47.920
You can use them for many different things.
00:29:48.079 --> 00:33:22.880
You talk about doing some analysis of information; you used it from an HR point of view. I know people use it, from a Business Central point of view, to create statements of work, create business requirements documents, create change request documents. It does a lot more than just create software applications; you can use it as a tool to assist you in many areas. So it's good that you're doing that, but to bring it back... go ahead.
Yes, but also, for myself, I'm using AI in my personal life as well, not only for the job. A recent case was interesting. I had to apply for a visa, and one of the requirements was to list all of my travels over the last 10 years. Wow. I thought, how can I do that? Where should I take all this information from? And to do it manually, I mean, I don't have so much time for that. So I thought, okay, definitely I need to use AI for this task. But I have personal emails and I have work emails, and I thought that I need to find information about tickets; we all have this information in emails. I tried existing AI services, like ChatGPT and even Claude, but they couldn't do it properly. So I created two different MCPs for myself, for Gmail and for Outlook, and also an agent that used these MCPs, with personal instructions. I started it, and they actually did the work. The agent looped through my emails, did a great search, found all the tickets, structured them, and I checked: there were some mistakes, but I edited them, and I got the long list of my travels for the last ten years. Existing AI tools couldn't do that, even though it's a simple task, I think. But I used my experience in AI development, so I vibe coded, or used agents, to develop these MCPs and to develop the agent, and then put it all together. So yeah, I apply this knowledge to my personal life as well.
It's amazing what we can do with this technology, and that's where I just sometimes have to take a step back and say wow. And as you mentioned, I think a lot of people are afraid of it because they don't understand it.
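A rough sketch of the kind of search tool such an email MCP server could expose is below. The messages, the regex, and the record shape are all invented for illustration; a real Gmail or Outlook MCP server would call the providers' actual APIs, and the agent, not this code, would decide when to call the tool and how to fix mistakes in the results.

```python
# Hypothetical email-search tool: scan message bodies for flight-booking
# lines and return structured trip records an agent could verify and edit.
import re

EMAILS = [  # stand-in for what a Gmail/Outlook MCP search might return
    {"subject": "Your e-ticket", "body": "Flight TG 921 BKK-FRA on 2023-04-02"},
    {"subject": "Team lunch", "body": "See you at noon on Friday"},
    {"subject": "Booking confirmed", "body": "Flight SQ 711 SIN-BKK on 2024-11-15"},
]

# Invented pattern: carrier code, flight number, origin-destination, ISO date.
TICKET_RE = re.compile(r"Flight (\w{2} \d+) (\w{3})-(\w{3}) on (\d{4}-\d{2}-\d{2})")

def find_trips(emails):
    """Extract trip records from email bodies, sorted chronologically."""
    trips = []
    for mail in emails:
        for flight, origin, dest, date in TICKET_RE.findall(mail["body"]):
            trips.append({"flight": flight, "from": origin, "to": dest, "date": date})
    return sorted(trips, key=lambda t: t["date"])

for trip in find_trips(EMAILS):
    print(trip)
```

In the MCP setup described above, the model never parses the mailbox itself; it asks the tool to search, gets back structured records like these, and then reasons over them.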
00:33:23.119 --> 00:33:32.079
But if you flip it around and look at some of the tasks, like the email search you had to do, it becomes: how can I use it to help me do these tasks?
00:33:32.480 --> 00:33:36.960
So it's not replacing you. Again, I think it's just a fear of the unknown.
00:33:37.119 --> 00:33:48.799
It's not necessarily replacing somebody; it just makes someone's tasks a little bit easier. Which, I guess, means you may need fewer people to do some of those tasks if you can do them quicker, or you can do more tasks.
00:33:49.359 --> 00:33:53.039
But it's something that can be of a big benefit to you.
00:33:53.200 --> 00:33:58.880
Now, you have mentioned agents, we keep using this word agents: what is an agent?
00:33:59.200 --> 00:34:08.400
You know, I think of them as people that I now talk to. But if you had to describe an agent to somebody who was new to it, what is an agent?
00:34:08.559 --> 00:35:07.920
Because in Business Central we see a sales order agent, we see a purchase order agent, we see an expense agent, and now we see an agent preview where we can create our own agents. What is an agent? How do you create one? You said you created agents, so how do you create an agent? A lot of questions, I know.
You know, I participate in agent conferences in different cities, and "what is an agent" is always the first slide in each of the keynotes at AgentCon. The term and the description of what an agent is differ depending on who you ask.
00:35:08.159 --> 00:35:11.599
If you ask me, I am the technical person.
00:35:12.320 --> 00:38:13.840
So my definition of an agent is: a large language model working in a loop, trying to solve a task, using tools. That's my technical definition of the agent. Let me describe it step by step. In the world of large language models, when we just started with ChatGPT, it was always a question and then a generated answer; it was one interaction. When we tried to solve a task using large language models, using ChatGPT, we were the agent. We knew what we wanted to achieve, and we asked the large language model several times until we got the solution, the final answer to our question. So it was a dialogue, multiple interactions, but in this case we were the agent. Now, when large language models became better, they achieved a new skill, let's say: they achieved the skill to ask questions to themselves. Instead of us asking ChatGPT, now they can ask a question to themselves and generate the answer, then look into the answer and ask a question once again, trying to get closer to the final solution. So now it's not a dialogue between us and large language models; now it's a dialogue between large language models and large language models.
Yeah, this all sounds so futuristic to me, that the agents learned a skill to ask themselves questions. I don't even want to try to understand it, but I have to try.
00:38:14.239 --> 00:38:16.480
Yeah okay so maybe I'll rephrase this.
00:38:16.639 --> 00:43:54.400
So the easiest way to understand it, let's say we want to create some software; it's easiest to understand with software development. We need to create a feature. The first thing we need to understand is what this feature should do. Then we as developers usually plan how we will implement this feature: we need to create a table, then we need to create a page, then maybe add a field, then we need to write a codeunit, then a function, then link everything together. So that's our feature. The agents, not the agents, okay, large language models can now do that first step: they can plan. They can plan what tasks they need to execute to get this feature done. So first they plan, and then they start to execute this plan. But executing the plan is, once again, asking questions to myself, to myself as the large language model: what is my next task? Okay, this is my next task. What code should I generate to make this task happen? Okay, now I generate the code. Now I need to check my code; I look at the code. Okay, that looks good, let's continue with the next task. Then we switch to the next task, and so on. That's what I define as using large language models in a loop: they are asking questions, and they solve these questions in a loop.
Are they asking these questions... you mentioned before that large language models are predictions of the next word. Yes. Are they determining how to ask these questions based on a prediction of the next words?
Yes, because they see the previous conversations with themselves. So they look into the previous conversation: what was the last question? Now I need to solve this question; here is the answer. Now it's a bigger conversation, and it continues until the job is done.
We also have what are called tools, because the tools were in my definition, yes: large language models solving tasks in a loop, using tools. What are the tools? The tools are pre-built functions that we already have, that large language models can tell us to execute. A large language model itself is something that generates text; it's the prediction of the next text. It can't go to our system and click something, run something, run some code; it cannot do that. So the tool is actually a connection between the two. A tool is something that we have; maybe opening the browser, that's one tool. Large language models can't open the browser on our machine, but we can create a script that opens the browser, and we can tell the large language model: you can tell me to open the browser. So when the large language model decides that, to achieve its goal, to solve its task, it needs to open the browser, it tells us to open the browser. We run the script that opens the browser, and then we send the output back to the large language model: here is what is in the browser.
This is just amazing. So now I just want to bring it back; I like to put it in simple examples.
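The loop just described, a model choosing tools, our code executing them, and the output flowing back into the conversation, can be sketched roughly as below. Here `fake_model` is a hard-coded stand-in for a real large language model, and the two tools are invented for the example; a real agent would send the growing conversation to an LLM API at each step.

```python
# Minimal agent loop sketch: a "model" picks the next tool, we run it,
# append the result to the conversation, and repeat until it says done.

def open_browser(url):
    """Invented tool: pretend to fetch a page."""
    return f"page contents of {url}"

def summarize(text):
    """Invented tool: pretend to summarize text."""
    return text[:20] + "..."

TOOLS = {"open_browser": open_browser, "summarize": summarize}

def fake_model(conversation):
    """Stand-in for an LLM: decides the next action from the transcript so far."""
    if not any(turn.startswith("open_browser ->") for turn in conversation):
        return ("open_browser", "https://example.com")
    if not any(turn.startswith("summarize ->") for turn in conversation):
        return ("summarize", conversation[-1])
    return ("done", None)

def run_agent(task):
    conversation = [f"task: {task}"]
    while True:
        action, arg = fake_model(conversation)   # model only emits text decisions
        if action == "done":
            return conversation
        result = TOOLS[action](arg)              # WE execute the tool, not the model
        conversation.append(f"{action} -> {result}")

transcript = run_agent("summarize example.com")
for turn in transcript:
    print(turn)
```

Note the division of labor from the explanation above: the model only ever produces text naming a tool; our code does the clicking and running, and feeds the output back.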
00:43:54.639 --> 00:44:04.960
So now an agent is a person that learned things, just like we've learned things, based on reading, conversation, and information available.
00:44:05.199 --> 00:44:07.760
And now it needs to build a house.
00:44:08.159 --> 00:44:12.480
And to build the house, it needs to use the tools to do the job.
00:44:12.559 --> 00:44:24.480
So if it wants to screw in a screw, it doesn't know how to screw in a screw, but we say: here's a screwdriver that can screw in the screw for you, and it will execute that, turning the screwdriver.
00:44:24.719 --> 00:44:34.480
Taking it back to this: so these tools that you have are pieces of code that the language model knows it has available to do things.
00:44:34.639 --> 00:44:55.440
So the language model isn't doing anything, it's the code that's running that does something, and then it returns information back to the large language model, and then the large language model reads that information or takes in that information and tries to predict the next words based on the results of what it had.
00:44:56.719 --> 00:44:58.400
100% accurate.
00:44:58.559 --> 00:44:58.800
Yes.
00:44:59.119 --> 00:44:59.440
Okay.
00:44:59.679 --> 00:45:14.079
And the difference with using agents versus using the chat GPT of yesterday or a couple years ago, I call it yesterday because it just seems like yesterday, it was more I ask a question, I get an answer, pretty static, right?
00:45:14.159 --> 00:45:16.559
So it's it was static interactions.
00:45:16.719 --> 00:45:29.519
Now with agents and tools, it gives us the perception of being more it's still static in steps, but it gives us the perception of being dynamic because it keeps looping, as you had mentioned a moment ago.
00:45:29.599 --> 00:45:37.920
So now the agent will get text that text it will use to determine I need to use a tool to go do something else.
00:45:38.079 --> 00:45:46.159
I'll get the result back, I'll analyze it, do the next step and keep going and keep going and keep going until I think I should be done.
00:45:46.400 --> 00:45:51.119
I use the word think, but I guess until the next word is done.
00:45:51.840 --> 00:45:53.440
Yes, exactly.
00:45:53.679 --> 00:46:00.159
However, you mentioned that an agent is a person.
00:46:01.039 --> 00:46:04.079
Well, I think of it as like a person.
00:46:04.639 --> 00:46:05.119
Yes.
00:46:05.360 --> 00:46:42.480
Yeah, I know that many people, when communicating with agents, try to communicate with them as with a person. It can work in many cases, but I personally try to keep remembering that this is a technology.
00:46:44.000 --> 00:47:18.719
And because the way to communicate with the agent, to get the most effectiveness for the final result, is to know all these details, all these nuances: how it's running, how it's solving the task.
00:47:20.159 --> 00:47:26.000
Because it's very different from how a person solves a task.
00:47:27.760 --> 00:47:41.519
And one of the main differences is that we as humans have an unlimited context window, so to say, and a memory.
00:47:42.000 --> 00:47:53.599
So those are two things that we have, but agents, not agents, large language models, don't have that.
00:47:54.320 --> 00:47:59.199
They don't have an unlimited context window and they don't have a memory.
00:48:01.280 --> 00:48:41.199
You always need to remember that when you ask a question inside of your VS Code, inside of your Cursor, in the chat, you send a new request to somewhere, and this large language model that will answer this request doesn't know anything about you, about your previous interactions, about your tools, about anything.
00:48:42.159 --> 00:48:57.760
Yeah, so if you don't use any tools and don't use any MCPs, just, you know, raw chat, it will not be effective compared to a person.
00:48:58.559 --> 00:49:10.159
If you talked to a person yesterday, you suppose that, unless the person has had a lobotomy, he will remember you today.
00:49:10.719 --> 00:49:12.079
Yeah, yeah.
00:49:13.920 --> 00:49:54.480
So when I also try to describe how these agents work, or in particular one individual step inside of this loop: when you ask a question, you send the request to someone who is completely dead, and then it wakes up, you know, it's reborn, it's born, and then it reads your request, tries to answer your question, answers, and it dies once again.
00:49:55.519 --> 00:49:55.840
Okay.
00:49:58.239 --> 00:50:18.320
So when you continue the conversation and you ask the second question, you send it once again, and it's once again reborn, without remembering anything about the previous life.
00:50:18.880 --> 00:50:30.320
It then reads your previous conversation, message one, message two, and then, oh okay, and then it answers the question, and then it dies once again.
00:50:30.639 --> 00:50:30.960
Okay.
00:50:32.639 --> 00:50:44.000
So our job is to prepare this message history as effectively as possible.
00:50:45.039 --> 00:50:53.119
So that on every call, it will read it and can understand what you want from him.
00:50:53.599 --> 00:50:54.400
From it.
00:50:55.920 --> 00:51:02.559
Listen, I have many agents that I talk with.
00:51:02.880 --> 00:51:14.159
So anytime that I talk with anyone, I try to bring it back to make sure I understand what you're saying.
00:51:14.320 --> 00:51:19.119
Um and then also for anyone listening, if they can get a different way of hearing it.
00:51:19.760 --> 00:51:23.920
An agent isn't a person, we just make it look like it's a person.
00:51:24.239 --> 00:51:29.039
It's basically you send it a piece of text that's all it knows because it's a command.
00:51:29.119 --> 00:51:40.320
You're sending it one command; it will execute that command, whatever it needs, predict the next word (I use the word execute), or run that command, send information back, and at that point it's gone.
00:51:40.480 --> 00:51:44.320
There's no memory or persistence of memory.
00:51:44.639 --> 00:51:55.280
If you're having a loop, what it's basically doing is taking the first thing that you sent it, a listing of all the tools that it has, right?
00:51:55.599 --> 00:52:02.480
So first, when you wake it up, you tell it: here are all the tools you have, here's my first question.
00:52:03.199 --> 00:52:04.320
It gives you an answer.
00:52:04.559 --> 00:52:30.159
Then when you ask the second question, with these frameworks, it will basically send the first question, then the list of tools, their results, then your next question. So every time, it's in essence resending the entire conversation and the list of tools to the agent.
00:52:30.400 --> 00:52:39.039
So it doesn't have memory, it just has more information sent as part of your question that you don't see as being sent in the background.
00:52:39.360 --> 00:52:39.679
Okay.
00:52:40.400 --> 00:52:41.360
Exactly.
00:52:42.000 --> 00:52:54.719
Okay, so in these terms, yes, in these terms nothing actually changed since large language models appeared.
00:52:54.960 --> 00:53:01.360
So that the t the architecture and the technology behind the large language models is the same.
00:53:01.679 --> 00:53:27.119
We just built a lot of frameworks, services, whatever you call them, that automate giving the large language model more context in the most effective way, so it can answer your question as best as it can.
00:53:27.599 --> 00:53:28.719
See, that's interesting.
00:53:28.880 --> 00:53:35.920
So the underlying architecture hasn't changed, it's how we interface with the architecture that's changed, it's it's progressed forward.
00:53:36.159 --> 00:53:49.679
So if you don't recommend talking to it as a person, what is the best and most efficient way to speak to an agent or to type to an agent?
00:53:49.760 --> 00:53:50.880
I don't even know.
00:53:51.119 --> 00:53:54.800
Because again, the context, I hear this word context, right?
00:53:54.880 --> 00:53:58.320
So context is how much stuff you can send to it at once, correct?
00:53:58.480 --> 00:54:07.360
So again, if we have 10 questions that we've asked, we keep sending it the same 10 questions, eventually, just like my brain, it fills up, right?
00:54:07.760 --> 00:54:17.519
So, how do we manage and talk to an agent to be able to maximize the results that we get back from them?
00:54:19.599 --> 00:54:38.480
What I am trying to do, at least to keep in mind, that's my exercise which I do from time to time, is to place myself on the agent's side.
00:54:39.119 --> 00:54:39.440
Okay.
00:54:39.760 --> 00:54:50.880
So: I am asked to do this, and I see only this information.
00:54:52.000 --> 00:55:00.159
Am I able to effectively uh solve this task knowing only this?
00:55:01.519 --> 00:55:04.000
Or do I need to know something more?
00:55:04.639 --> 00:55:10.880
Yeah, so that's uh how I look at this.
00:55:11.599 --> 00:55:32.559
Now, on our side, when we talk to the agent in the chat window, we need to think about what information the agent needs to see to solve the task as effectively as possible.
00:55:34.079 --> 00:55:57.360
Yes, agents in the current IDEs, like VS Code, Cursor, Claude Code, ChatGPT, or OpenAI Codex, have built-in tools to search things on their own.
00:55:57.840 --> 00:56:05.440
They can search the web, they can search files; those tools are already pre-built.
00:56:05.840 --> 00:56:25.119
Yeah, so even if we don't give this information during our conversation, the agents can help themselves, yes, and search for things, if they think it will help to solve the task.
00:56:26.159 --> 00:56:48.320
But still, if this information is not available anywhere, if it sits somewhere in another folder on our computer, the agent has no clue that it can search there.
00:56:48.639 --> 00:56:54.159
Yeah, so we need to provide this information: you can also search there, in this folder.
00:56:54.320 --> 00:57:29.920
Okay, so, like in Business Central AL development, if we return to our trenches: we have a great repo made by Stefan, the BC history repo, which I think all developers have on-site, with all the source code of Business Central.
00:57:31.519 --> 00:57:35.840
And uh then we work in uh inside of our extension.
00:57:36.079 --> 00:57:40.480
Yeah, so try to connect them.
00:57:40.880 --> 00:57:57.440
What I am doing is adding the history repo as a git subdirectory, a submodule, inside of my AL extension.
00:57:57.840 --> 00:58:10.960
So that's one of the ways. Or, if you have it on your computer, you can also tell the agent: you can go and search information there about how Business Central works.
00:58:11.280 --> 00:58:37.119
Yes, there are also MCPs that can search information through the symbols, and I use those as well, but not all information can be found through the MCP, through the symbols.
00:58:37.679 --> 00:58:55.039
So I prefer to use the raw Business Central code as a place where my agents can search for information while trying to solve my task for my extension.
00:58:57.679 --> 00:58:58.639
Understood.
00:58:58.880 --> 00:58:59.199
Okay.
00:58:59.440 --> 00:59:08.639
I like that strategy: the frameworks allow them to search other tools or other things to help them get there.
00:59:09.280 --> 00:59:16.320
The agents don't have a memory; they only have the context of the conversation.
00:59:17.760 --> 00:59:23.840
I hear things now about memory, I hear things about context.
00:59:25.760 --> 00:59:27.760
How do you give it memory?
00:59:27.920 --> 00:59:34.159
Do you just save conversation history and say refer to my history?
00:59:34.639 --> 00:59:44.639
Or is there another way that you can do memory and how much space do we have for it to search?
00:59:44.880 --> 00:59:52.800
Because I know if I were to do a task, you said you said think of if I had this, how would I do it, right?
00:59:52.880 --> 00:59:54.480
Do I have enough information?
00:59:54.880 --> 01:00:07.119
If it doesn't remember anything, and it has to go look through Stefan's BC history repo, which is a great repo because you have the localizations and the previous versions to go through.
01:00:08.880 --> 01:00:13.280
Doesn't it take a lot to search through all of that and remember it all?
01:00:14.639 --> 01:00:17.440
So remembering is not the right word here.
01:00:17.599 --> 01:00:18.880
There is no memory.
01:00:19.280 --> 01:00:19.599
Okay.
01:00:20.079 --> 01:00:40.960
Once again, what many people call memory is information, external information, where the agents can search for some content, depending on the current task.
01:00:41.519 --> 01:00:48.079
So it's not the memory of uh of the agents, they they don't have a memory.
01:00:48.159 --> 01:00:52.159
I think we should clearly state that.
01:00:52.719 --> 01:00:59.440
Uh yes, um the frameworks uh like IDEs.
01:00:59.679 --> 01:01:19.199
I don't know, frankly speaking, about the Visual Studio Code way of storing conversations, but with Cursor I know that around a month or two ago they changed the way they store long conversations.
01:01:19.840 --> 01:01:38.239
Previously, if you had a very long conversation, more than 180,000 tokens, they ran a tool to summarize it.
01:01:38.320 --> 01:01:53.920
Yeah, so summarization actually takes a big text and shortens it down, to keep only the most important information from it.
01:01:54.800 --> 01:02:03.760
It worked, it works, uh but many nuances are lost during this step.
01:02:05.599 --> 01:02:21.840
So because of that, when we continue the conversation after the summarization and we ask the agent, hey, can you do the same as you did five steps before?
01:02:22.079 --> 01:02:29.199
It cannot continue, because it doesn't have this information in its history.
01:02:29.760 --> 01:02:53.119
What they did recently: they now automatically save the whole long conversation in separate files in the system folders, and in the summary they have references pointing to the original files.
01:02:53.440 --> 01:03:00.320
Yeah, so this helps uh agents to see that oh okay, uh we talked about that.
01:03:00.480 --> 01:03:05.039
I don't know what exactly we just we discussed, but I can look there.
01:03:05.280 --> 01:03:14.719
Yeah, so it goes to the original file, searches once again for this information, grabs this piece of information, and then continues.
01:03:15.119 --> 01:03:22.960
So this is called offloading of text, of long text.
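[Editor's note] A rough sketch of that offloading idea, with a hypothetical file location and a toy one-line summary standing in for a real summarizer:

```python
# Sketch of "offloading": dump older messages to a file and replace
# them in the live context with a short pointer, so the details can
# be re-read on demand. Path and summary text are illustrative.

import json
import os
import tempfile

def offload(history, keep_last=2):
    if len(history) <= keep_last:
        return history
    old, recent = history[:-keep_last], history[-keep_last:]
    path = os.path.join(tempfile.mkdtemp(), "conversation_001.json")
    with open(path, "w") as f:
        json.dump(old, f)          # full detail survives on disk
    pointer = {
        "role": "system",
        "content": (f"Earlier discussion ({len(old)} messages) "
                    f"offloaded to {path}; re-read it if needed."),
    }
    return [pointer] + recent      # compact context: pointer + recent turns

history = [{"role": "user", "content": f"message {i}"} for i in range(6)]
compact = offload(history)
```

The agent's context stays small, but nothing is truly lost: when the summary is not enough, a file-reading tool can fetch the original turns back.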
01:03:23.360 --> 01:03:27.599
So it makes it look like it's memory, but it's really not memory.
01:03:27.679 --> 01:03:38.079
It's just here's a sentence with a pointer, in essence, to a larger file for a bigger context of conversation.
01:03:38.320 --> 01:03:45.119
So it gives us the perception that we are dealing with something that's remembering it, but it's not.
01:03:45.199 --> 01:03:53.519
It's still the same old loop of go here, read a piece of information, do something, save it.
01:03:54.000 --> 01:03:59.599
So all of the effort, then, is on all of these tools, these frameworks, right?
01:03:59.679 --> 01:04:03.920
Because I hear a lot about models and frameworks; I could talk about all this for days.
01:04:04.239 --> 01:04:09.519
So you have all these models that are the large language models, that's the underlying technology.
01:04:09.760 --> 01:04:18.960
And then you have the framework, and the framework would be Claude, GitHub Copilot, Cursor, or other platforms that interact with these models.
01:04:19.199 --> 01:04:28.719
So what they're doing is just building technology that interacts with the large language model but becomes efficient at dealing with context.
01:04:29.039 --> 01:04:36.239
That's why I could use GitHub Copilot with Opus 4.6 with one prompt, get a result.
01:04:36.400 --> 01:04:45.519
I can go into Claude Opus 4.6 with the same prompt and get a different result because it's the way that they manage how it works with the model.
01:04:45.599 --> 01:04:47.199
It's not the model that's different.
01:04:47.440 --> 01:04:49.360
So it's the framework that's different.
01:04:49.599 --> 01:04:49.920
See?
01:04:50.159 --> 01:04:50.480
Yes.
01:04:50.719 --> 01:04:51.840
I have all this.
01:04:52.000 --> 01:04:54.800
I am now an AI expert.
01:04:56.880 --> 01:04:59.760
Until tomorrow, it changes.
01:05:00.400 --> 01:05:03.119
You only have 24 hours, but to claim that.
01:05:08.800 --> 01:05:09.760
Yes, yes.
01:05:09.920 --> 01:05:32.000
For maybe one more day. Wow, this is a lot, but I really like how you explained the separation between the large language model architecture, how the frameworks loop through to communicate with that architecture, and the perception of memory.
01:05:32.320 --> 01:05:40.960
So when I talk with my agent, and Chris knows me well, I don't do this with anybody.
01:05:41.119 --> 01:05:41.920
I write thank you.
01:05:42.000 --> 01:05:42.559
That was good.
01:05:42.800 --> 01:05:45.679
So I shouldn't write any of that back after it's done.
01:05:46.639 --> 01:05:49.440
Or should I just not give it praise?
01:05:49.760 --> 01:05:51.920
Because it's gonna loop through that again.
01:05:52.239 --> 01:05:54.000
Every time you ask something.
01:05:54.719 --> 01:05:57.920
If you feel better, you can.
01:05:58.880 --> 01:05:59.039
Okay.
01:06:01.119 --> 01:06:03.039
We want to humanize things like that.
01:06:03.199 --> 01:06:05.039
That's the I think that's the purpose.
01:06:06.159 --> 01:06:14.960
I haven't played with it yet because, again, there's just so much to to work with on this technology, and it's very difficult to keep up with it.
01:06:15.199 --> 01:06:22.159
Um and we didn't even go down the road of having agents manage agents because that's a whole other conversation.
01:06:22.400 --> 01:06:29.920
But I really want to get to the point where I can talk with it and it will do something and then talk back to me.
01:06:30.400 --> 01:06:30.719
Right?
01:06:30.880 --> 01:06:43.199
I know there's some voice prompts and there's ways you can get the voice in, but I want it to be like, if I could hook up something like ElevenLabs where it can talk back the results or do something, I think that would be impressive.
01:06:43.280 --> 01:06:45.119
Because then I would never leave the house.
01:06:45.199 --> 01:06:46.960
Uh uh I would talk with it.
01:06:47.119 --> 01:06:50.639
I think it's already available.
01:06:50.960 --> 01:06:52.719
Yeah, no, I've seen people that have it.
01:06:52.800 --> 01:06:56.960
I haven't really experimented with that because I now I'm I'm working on some other things.
01:06:57.039 --> 01:06:58.960
Like I said, I can't keep up with everything.
01:06:59.039 --> 01:07:03.280
I mean, between working with the... Brad, don't you do that right now with Grok?
01:07:03.760 --> 01:07:06.559
Like like did you have a conversation with that?
01:07:07.119 --> 01:07:12.480
Man, I used that the other day, to be honest with you, for the first time, just to show someone.
01:07:12.719 --> 01:07:14.880
But you're talking about Grok on the phone.
01:07:14.960 --> 01:07:19.920
Grok in the vehicle is different, because I just say: find me a route to here with a charging spot, or something.
01:07:20.000 --> 01:07:30.880
But the on the phone, I don't do it now because everything I I found that between GitHub Copilot and Claude, like the Claude framework, I can do everything I need to do.
01:07:30.960 --> 01:07:49.519
Um, and even now with Claude being able to access it on your phone, and you know, I can start something in one place, I can access it on my phone to continue doing it, and then even now you have, you know, I have this joke about the lobsters and everything where you can have like these frameworks set up where you can, you know, you can connect to them remotely.
01:07:49.840 --> 01:07:52.159
I don't really use Grok anymore.
01:07:52.559 --> 01:08:06.159
I guess I was thinking about the vehicle, because, you know, I think a couple of weeks ago I had a long drive, so I had a quick conversation about a topic, and it was just a nice little quick banter, and I was like, okay, well, I'm done with that topic.
01:08:06.239 --> 01:08:11.119
It's pretty interesting to have its own opinion about a specific topic.
01:08:11.280 --> 01:08:16.720
Obviously, it's based on context that it has access to, but um, but I thought it was pretty fascinating.
01:08:16.880 --> 01:08:37.279
So eventually I would think that Business Central would have that down the road, where it would be like a business manager agent in Business Central, where you can have a conversation about strategy and it's going to give you an answer, a conversational answer, based on the information it has, one day.
01:08:37.520 --> 01:08:40.000
I will try that, though.
01:08:40.079 --> 01:08:48.720
I will try to have a conversation with Grok in the vehicle as I'm driving, to see if we can do some of that.
01:08:48.880 --> 01:08:58.000
I I think if you can get to the point where you can say summarize uh, for example, the sporting game for me, you know, or something like that.
01:08:58.079 --> 01:09:00.079
I do try to do that now.
01:09:00.239 --> 01:09:07.760
I do more of a stock analysis, like: go out and give me the current stock price of NVIDIA versus where it was last year, compare it to the Dow.
01:09:07.920 --> 01:09:17.600
I try to do stuff like that and say give me a graph, but I think if I could do that type of stuff in the vehicle, like give me a summary and analysis real time like that, I think that would be worthwhile.
01:09:17.840 --> 01:09:20.079
Um so it's interesting.
01:09:20.960 --> 01:09:24.720
I use the voice mode from time to time.
01:09:24.960 --> 01:09:32.159
I find it very effective for me when preparing for conferences, for example.
01:09:32.399 --> 01:10:09.039
When I want to prepare for a speech, as a non-native English speaker, I always try to prepare for the session, and the voice mode helps me to structure content, maybe to rephrase something, maybe to learn new words, new phrases, and so on.
01:10:09.279 --> 01:10:24.800
So, because the voice mode works not like voice to text and then text to voice; it's directly voice to voice.
01:10:25.039 --> 01:10:27.600
Yeah, so it's it's generating voice.
01:10:28.479 --> 01:10:53.680
These large language models are called multimodal because previously they were trained on text as an input and text as an output, but then they were also trained on audio as an input and audio as an output, directly, without transcription.
01:10:54.159 --> 01:10:55.039
I did not know that.
01:10:55.279 --> 01:10:56.560
I didn't know that either.
01:10:56.720 --> 01:10:58.239
I thought it was transcription.
01:10:58.560 --> 01:10:59.600
No, it's not.
01:10:59.760 --> 01:11:11.199
It previously was transcription, but I think around two years ago they changed that: they trained the large language models from the audio sources directly.
01:11:11.439 --> 01:11:17.279
Um so that's why they can um they can hear the nuances of the voice.
01:11:17.520 --> 01:11:28.000
How loud you are, you know, all these things that would not be possible just going through a transcription.
01:11:28.399 --> 01:11:36.399
Um that that's why these voice modes are really helpful, at least for me, for such kind of tasks.
01:11:37.199 --> 01:11:37.920
Like a lot of people.
01:11:38.079 --> 01:11:39.039
I have to look into that.
01:11:39.119 --> 01:11:52.319
I like that because as you had mentioned, if it can help you prepare for a session or a speech or an engagement where you're talking, and it can analyze your audio and provide feedback on that.
01:11:53.199 --> 01:11:54.960
I'm learning so much today.
01:11:55.039 --> 01:11:56.159
It's just it's crazy.
01:11:56.239 --> 01:11:58.960
I had no idea that was uh a thing.
01:11:59.279 --> 01:12:00.479
Do you want to know the problem?
01:12:00.640 --> 01:12:03.199
I'm so old, my context window is filling up.
01:12:03.439 --> 01:12:05.039
I need to stop doing this.
01:12:05.279 --> 01:12:11.520
I need to start doing the summary of, oh, I had this chat here, go back and listen to it, and then I can have the conversation.
01:12:11.600 --> 01:12:16.000
I need to have that put into my brain so they can just do all that with everything.
01:12:16.319 --> 01:12:27.680
Wow, I have to look into that more now because I think I think we're getting closer and closer to what I was thinking, and then you can have uh an animation of it as well when it's speaking.
01:12:27.840 --> 01:12:32.479
Again, it's all just visual, but then I could really feel like I'm talking with someone.
01:12:32.720 --> 01:12:35.760
Uh I'll ask you a question on your thoughts of this.
01:12:36.000 --> 01:12:39.359
So, you understand this, and I've asked this of others.
01:12:39.680 --> 01:12:52.640
So large language models, they're trained on information, and now we talked about all these technologies, these tools, the perception of memory.
01:12:53.039 --> 01:13:10.000
If I went through and recorded my life every day, 24 hours a day, all of my interactions, could AI create a me, to where you could sit down and talk with it and think you're talking with me?
01:13:11.520 --> 01:13:28.640
So if it recorded every like it had a recording, whether it be a text or audio or anything, of how I interacted and how I did everything and what I did, do you think it would create something or it could create something that resembled me?
01:13:29.840 --> 01:13:35.600
Uh that would be maybe a good imitation of you.
01:13:36.640 --> 01:14:02.159
However, it will not be a hundred percent your duplicate, your digital duplicate, because it will not have all the information from your brain.
01:14:04.720 --> 01:14:17.520
Because um what I want to say is when you uh do something uh there are some signals that come into your brain.
01:14:17.920 --> 01:14:22.079
The signals are coming from the sensors, yeah.
01:14:22.239 --> 01:14:42.079
So our body is a big sensor: audio from the ears, video from the eyes, but there are also a lot of sensors inside of our body.
01:14:42.640 --> 01:14:53.199
Yeah, so there is a lot of neural activity happening inside of our body, and our brain receives all of this information.
01:14:55.119 --> 01:15:06.479
So that information allows it to predict the next thing, yeah.
01:15:06.640 --> 01:15:17.119
So it allows us to make the next step, or to make it better.
01:15:17.520 --> 01:15:18.960
So it's like a large language model.
01:15:19.119 --> 01:15:21.920
It's predicting the next step based on what it knows.
01:15:22.239 --> 01:15:25.119
Yes, it is, yes, it is predicting, obviously.
01:15:25.600 --> 01:15:40.239
But the amount of data we receive to make this prediction of the next step is a lot, a lot, a lot more than large language models receive.
01:15:40.560 --> 01:15:51.279
I think for you too, you know, if you were to store yourself, Brad, first they would need a ton of storage.
01:15:51.439 --> 01:15:52.000
Uh right.
01:15:52.159 --> 01:15:56.800
I think it's on the order of petabytes, how much storage your brain contains.
01:15:56.880 --> 01:16:03.680
And on top of that, it's almost like you're only at that point in time of what you store.
01:16:03.760 --> 01:16:08.560
So any new information you receive, it it would stop at that point.
01:16:08.720 --> 01:16:30.640
So yes, it's just a lot of information; with our current technology we don't have the space to store this information, or the tools to receive it.
01:16:30.800 --> 01:16:45.199
So what if we could inject an electronic device into each of the neurons, and each of our neurons receives thousands of inputs?
01:16:45.359 --> 01:16:55.840
Yeah, so if theoretically we can do this, and save all this information, and train on top of this, maybe then, yes.
01:16:56.000 --> 01:17:04.960
Did you read about the recent experiment with the fruit fly?
01:17:05.840 --> 01:17:06.720
I did not.
01:17:06.960 --> 01:17:14.960
But just before... I want to hear about it, but first: I'm not saying if they could do this with the brain, I'm saying when.
01:17:15.279 --> 01:17:22.239
It may not be in my lifetime, but with the way this is going, we will be at that point at one point in the future, I believe.
01:17:22.640 --> 01:17:35.359
You do need power, which I also read about: we only consume about 20 watts of power when we're using our brain, compared to what a prompt or an agent uses for power.
01:17:37.119 --> 01:17:40.239
Yeah, so there was um there was an experiment.
01:17:40.319 --> 01:17:43.840
Um recently.
01:17:44.319 --> 01:18:08.640
The fruit fly brain has around, I could be not fully correct here, but let's say around a thousand neurons, and around one hundred and fifty thousand connections between all these neurons.
01:18:09.760 --> 01:18:42.239
So the scientists took all these neurons and all these connections and put them into a computer model; they discovered all these physical connections and copied them to the digital world, and created not a model that was trained on this brain, but actually reproduced the brain digitally.
01:18:44.079 --> 01:18:45.439
I have to read this experiment.
01:18:45.760 --> 01:19:10.880
Yeah, so they just digitally reproduced the same brain the fruit fly has, and then they created a digital body for this fruit fly, connected this digital brain to this digital body, and it became alive digitally.
01:19:11.199 --> 01:19:20.880
So actually, they watched how it moves; it searched for food.
01:19:22.159 --> 01:19:24.319
Wow, it's wow.
01:19:25.119 --> 01:19:29.439
It's actually pretty awesome.
01:19:29.600 --> 01:19:31.039
I mean that is amazing.
01:19:32.720 --> 01:19:38.479
But it happened recently, in February, so you can find this information.
01:19:38.720 --> 01:19:39.520
I will look that up.
01:19:39.600 --> 01:19:40.319
No, thank you.
01:19:40.479 --> 01:19:43.119
That is uh see, we're getting there.
01:19:43.199 --> 01:19:44.079
I do right.
01:19:44.159 --> 01:19:57.439
It's when. I won't even get into it, because some people think I'm crazy, but I do see a point in the future where you just take your brain and you plop it into a body, and that's it.
01:19:57.600 --> 01:19:59.840
You know, we're getting closer and closer.
01:20:00.159 --> 01:20:04.319
Isn't there a show like that where you could just hop into another body?
01:20:05.600 --> 01:20:06.079
That was different.
01:20:06.159 --> 01:20:20.159
I think in that one you could teleport into the bodies, but I'm saying now you could have a robot body that you control, because age is just, you know, the deterioration of cells; it's just the age of your body.
01:20:20.239 --> 01:20:25.119
But if you can replace those parts and still control them, you could be a cyborg and live forever.
01:20:25.680 --> 01:20:26.479
Like the future.
01:20:28.000 --> 01:20:33.279
I could talk with you all day about this, and to be honest with you, we covered quite a bit.
01:20:33.439 --> 01:20:37.680
Hopefully those that are listening were able to get a little bit more understanding of how this works.
01:20:37.760 --> 01:20:39.680
I learned a lot.
01:20:40.159 --> 01:20:41.359
It's just amazing.
01:20:41.600 --> 01:20:44.800
It's amazing, but we do appreciate you taking the time to speak with us.
01:20:44.880 --> 01:20:47.119
And again, as we always say, time is precious.
01:20:47.199 --> 01:20:49.600
It's truly the currency of life; once you spend it, you don't get it back.
01:20:49.680 --> 01:20:54.479
So any moment you spend talking with us is time you spend not doing something else.
01:20:54.560 --> 01:20:55.520
So we do appreciate it.
01:20:55.680 --> 01:21:02.239
I laugh a little bit because it's just mind-blowing, all of the information that you can share.
01:21:02.399 --> 01:21:17.520
If anybody would like to reach you or contact you to learn more about AI, and about some of the other great things you're doing, such as speaking sessions at conferences and the training services that you have for AI,
01:21:17.680 --> 01:21:19.600
what's the best way to get in contact with you?
01:21:20.560 --> 01:21:24.079
LinkedIn: Dmitri Katson.
01:21:24.560 --> 01:21:26.159
You can easily find me there.
01:21:26.560 --> 01:21:29.920
Or my website, Katson.com.
01:21:30.720 --> 01:21:33.039
All the contacts are there.
01:21:33.439 --> 01:21:42.079
I will be in person at Directions Asia in Ho Chi Minh City this year.
01:21:42.399 --> 01:21:49.840
I'll deliver the AI Development for AL workshop.
01:21:51.760 --> 01:22:01.439
And also then at BCTechDays, hopefully, if I get a visa without any issues.
01:22:01.680 --> 01:22:08.159
Antwerp, Belgium, in June. A great place to be.
01:22:10.239 --> 01:22:14.720
I didn't hear Orlando, for Directions North America, though.
01:22:15.039 --> 01:22:16.640
You're not making it to that one?
01:22:17.359 --> 01:22:23.119
The US visa process is still very challenging.
01:22:23.600 --> 01:22:29.439
Well, hopefully we can get you here for one of these events in the near future.
01:22:29.600 --> 01:22:29.680
Yes.
01:22:29.920 --> 01:22:31.039
Yeah, I think it would be great to have you.
01:22:31.199 --> 01:22:33.359
I think everyone would benefit from all that you can do.
01:22:33.600 --> 01:22:39.680
But it sounds like you have a big year ahead of you with these sessions, and we do appreciate everything that you do and all that you share.
01:22:39.760 --> 01:22:40.880
I think you share some great things.
01:22:41.039 --> 01:22:42.239
And thank you again for CentralQ.
01:22:42.399 --> 01:22:56.239
I know a lot of people are using it, so if you go to CentralQ, that's another way to get in contact with you, and also to use that service to get information to assist with Business Central implementations, whether from a development or a functional point of view.
01:22:56.399 --> 01:22:57.279
Thank you again.
01:22:57.439 --> 01:22:58.800
We look forward to speaking with you soon.
01:22:58.880 --> 01:23:03.199
And now I have to go take a break, because my brain is full.
01:23:03.680 --> 01:23:04.560
We appreciate it.
01:23:04.800 --> 01:23:05.119
Yes.
01:23:05.279 --> 01:23:06.880
Thank you, thank you, Brad.
01:23:06.960 --> 01:23:08.399
Thank you, thank you, Christopher.
01:23:08.560 --> 01:23:11.279
Thank you, everyone who listened to this podcast.
01:23:11.439 --> 01:23:13.119
I was glad to be here.
01:23:13.439 --> 01:23:15.760
Always, always nice to talk to you guys.
01:23:15.920 --> 01:23:16.239
Thank you.
01:23:16.479 --> 01:23:17.840
Likewise, Dmitri.
01:23:18.079 --> 01:23:18.239
Thank you.
01:23:18.479 --> 01:23:18.960
Talk to you soon.
01:23:19.039 --> 01:23:19.760
Ciao, ciao.
01:23:20.000 --> 01:23:27.119
Thank you, Chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.
01:23:27.359 --> 01:23:28.960
Thank you, Brad, for your time.
01:23:29.119 --> 01:23:32.640
It is a wonderful episode of Dynamics Corner Chair.
01:23:32.800 --> 01:23:36.159
I would also like to thank our guests for joining us.
01:23:36.399 --> 01:23:39.119
Thank you for all of our listeners tuning in as well.
01:23:39.359 --> 01:23:43.199
You can find Brad at dvlprlife.com.
01:23:43.359 --> 01:23:47.760
That is d-v-l-p-r-l-i-f-e dot com.
01:23:48.000 --> 01:23:53.520
And you can interact with him via Twitter at dvlprlife.
01:23:54.159 --> 01:24:02.399
You can also find me at matalino.io, that is m-a-t-a-l-i-n-o dot io.
01:24:03.199 --> 01:24:06.880
And my Twitter handle is matalino16.
01:24:07.680 --> 01:24:10.640
And you can see those links down below in the show notes.
01:24:10.800 --> 01:24:12.000
Again, thank you everyone.
01:24:12.159 --> 01:24:13.920
Thank you, and take care.