AI’s Impact On Marketing

Alex Babatunde & Jacob Miesner explore AI bias, workforce & marketing with Brand Mentality® tech in BTTM Ep. 17.


Dive into a riveting episode that delves deep into the ethical considerations and practical applications of artificial intelligence (AI) and machine learning (ML). Join Alex Babatunde, VP of Product at Sightly, and Jacob Miesner, SVP of Technology at Sightly, as they navigate the complex terrain of AI bias, emphasizing the crucial role of diverse data sources and human oversight in upholding ethical standards.

Discover the transformative potential of AI in reshaping the workforce, with insights from Jacob advocating for a paradigm shift towards augmenting human capabilities rather than replacing jobs. Explore how AI streamlines tasks, empowering humans to focus on more meaningful and impactful work.

Transitioning to the realm of marketing, uncover the steps companies can take to become ML-empowered teams. Jacob sheds light on how Sightly utilizes AI within its Brand Mentality® technology through a problem-driven approach, where AI tools are strategically applied to solve specific business challenges. Delve into various AI use cases in marketing, including the innovative anticipation boards for real-time brand relevance monitoring.

Explore the cultural dynamics of building ML-powered teams, with a focus on fostering open-mindedness, optimism, and expertise. By tuning in, you'll receive pro recommendations on staying up-to-date with the rapidly evolving landscape of AI developments.

Host: Alexander Babatunde

Guest: Jacob Miesner

Sightly Enterprises, Inc.

The Breaking Through the Mayhem podcast - Episode 17

AI’s Impact On Marketing

Host: Alexander Babatunde, VP Product at Sightly
Guest: Jacob Miesner, SVP Technology at Sightly
Recorded on April 12th, 2024

-------

TRANSCRIPT

Alex Babatunde
Hey, what's up, guys? Welcome to another episode of Sightly’s Breaking Through the Mayhem podcast. I'm Alex Babatunde, VP of Product here at Sightly. And I'm thrilled to dive into a conversation with our SVP of Technology, Jacob, who's going to take us through the intricacies of artificial intelligence and machine learning. What's up, Jacob? How are you doing?

Jacob Miesner
I'm doing great. Thanks for the intro. Yeah, so I can do a little intro for myself as well. My name is Jacob Miesner. I'm our technology lead here at Sightly. I oversee our data science, product and engineering teams, and I started at Sightly seven years ago as of a couple of days ago, and I worked my way up through the organization.

I started as an intern, I worked as a data analyst, worked as a data scientist, worked leading our data science team, then data science and product, and now all of tech. So, you know, I think that it speaks volumes about the type of upward mobility that we have with this organization and the type of opportunity that there is here at Sightly. Yeah.

Alex Babatunde
Awesome, awesome, awesome. Nah, definitely big shoes to fill. You've been here from the ground up. You've been here since the beginning of the work we've done with Brand Mentality, and one thing that's really dope is you've also been a big driver in implementing and leveraging AI in our platform. So before we get into the conversation, I think it's always helpful.

Let's start with some definitions, all right? There's AI, there's ML, there's deep learning, and these are just some of the terms out there. But I want the audience to be able to understand what the relationship between those things is and why there are unique distinctions. So yeah, Jacob, tell us a little bit more about it.

Jacob Miesner
Yeah, I'm glad to. And I'll try to adapt some of this conversation to the marketing and advertising space as well as Brand Mentality. But I think it is good to set the stage with a couple of just overall definitions, especially because these terms get thrown out a lot in conversation and they're often used interchangeably. But there are some technical differences between these different terms and what they actually mean, and there's not necessarily one definition that is correct for any of these, but I'll give some overarching information as to how you can think about these different concepts.

So we'll start out with artificial intelligence, because this is really like the overarching concept around machine learning, deep learning, etc. So the way that I like to think about artificial intelligence is machines performing tasks that typically require human intelligence. And within artificial intelligence, you have a couple of subsections, one of those being machine learning. And this is obviously a very popular field nowadays, but machine learning is that subset of artificial intelligence where machines learn statistical relationships between inputs and outputs in order to perform tasks that a human would normally perform.

And these algorithms can learn by experience and acquire skills without human involvement. So this is like a fundamentally new type of software. In traditional software, you write the rules and the computer executes those rules on your behalf. Right. But in machine learning, you have the actual system learn the rules from the data: you provide it inputs and outputs, and it learns how to navigate between them.

This makes it incredibly well-suited for certain tasks where writing all the rules down one by one would be difficult or practically impossible. (Yeah.) But machine learning is really a domain that sits at the intersection of computer science and statistics. (Okay.) So computers utilizing data and statistics in order to make predictions. Machine learning algorithms are essentially prediction machines.

And we'll talk a little bit more about that. But within machine learning, you have a couple of different subsections there as well. So you have what are called classical machine learning methods, and then you also have what's called deep learning. And these are two classes of algorithms that work well for different use cases. A typical rule of thumb that I like is that classical machine learning works incredibly well with tabular data.

So you can think of data contained in the format of an Excel spreadsheet. And deep learning works really well with what we call unstructured data, and unstructured data is things like text, audio, video, images, etc. And so I'll touch a little bit on classical machine learning and then deep learning. But in classical machine learning, you have algorithms such as linear regression, logistic regression, decision trees, support vector machines, etc., and these algorithms don't rely on neural networks.
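
For readers who want to see what that looks like in practice, here is a minimal sketch of classical machine learning on tabular data, assuming scikit-learn; the columns and numbers are made up purely for illustration.

    # A minimal sketch: classical machine learning on tabular data.
    # The data here is hypothetical, purely for illustration.
    from sklearn.linear_model import LogisticRegression

    # Inputs: [ad_spend_in_dollars, video_length_in_seconds] per campaign.
    X = [[100, 15], [250, 30], [400, 15], [800, 60], [950, 30], [1200, 60]]
    # Outputs: 1 if the campaign hit its goal, 0 if it did not.
    y = [0, 0, 0, 1, 1, 1]

    # Instead of hand-writing rules, the model learns the statistical
    # relationship between the inputs and the outputs from the examples.
    model = LogisticRegression()
    model.fit(X, y)

    # The fitted model is now a "prediction machine" for unseen campaigns.
    print(model.predict([[500, 30], [1100, 15]]))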

Relying on neural networks is deep learning: deep learning is the subset of machine learning that refers to the utilization of artificial neural networks. And artificial neural networks are essentially mathematical algorithms that loosely model our understanding of how the brain works. We don't know everything about the brain, but we know a little bit, and these algorithms have been designed to kind of mimic our understanding of those functions.

And you can think of the smallest unit in a deep learning algorithm, in an artificial neural network, as a neuron, just as in the human brain. These neurons process information, and networks of neurons are connected to each other and share information in order to make predictions. Within these algorithms, you'll have layers of these neurons connected to each other.

And when you have many layers of these neurons, we call that a deep network. So that's where the term deep learning comes from. So hopefully that explains kind of the distinction between some of these definitions. So just to recap, you have artificial intelligence as that overarching subject, with machine learning as a subset of that. And then within machine learning, you'll have both classical machine learning and deep learning.
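
As a companion sketch, here is what "layers of neurons" can look like in code, assuming PyTorch; the layer sizes are arbitrary and chosen only for illustration.

    # A minimal sketch of a deep network: several layers of artificial neurons
    # stacked on top of each other (layer sizes are arbitrary).
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(16, 32),  # layer 1: 16 input features feed 32 neurons
        nn.ReLU(),
        nn.Linear(32, 32),  # layer 2: another 32 neurons
        nn.ReLU(),
        nn.Linear(32, 2),   # output layer: e.g., scores for two classes
    )

    # One forward pass: information flows through the connected neurons.
    x = torch.randn(1, 16)   # a single example with 16 input features
    print(model(x))          # raw scores for the two classes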

Alex Babatunde
Awesome. Nah, that makes sense. So A.I. is like the top of the pyramid, you have machine learning, and then you break down the pyramid even more, and now you have classical machine learning and deep learning, right? That's awesome. I think that right now, everything that's hot in the market, you know, you hear about the ChatGPTs, the OpenAIs, and oftentimes with new things, people feel like they just popped up out of thin air.

But I know that there's been a long history in AI and machine learning. Can you briefly, like, walk us through that history? What are those key advances that have brought us to where we are today?

Jacob Miesner
Yeah. So, you know, I talked a little bit about how machine learning is the intersection of computer science and statistics.

Alex Babatunde
Yeah.

Jacob Miesner
And a lot of people, when they talk about the history of AI go back to the invention of the idea around the computer, Alan Turing and the Turing machine. But I like to go back even further and talk about the statistical component of machine learning, because this has been around for centuries.

I mean, in the 18th century you had Thomas Bayes inventing Bayesian statistics. In the 19th century, you had Carl Gauss inventing least squares, which is the method used to fit linear regression, which is used in a lot of applications that we utilize today, which is quite amazing. But there's been a long history of utilizing statistics to make predictions.

But when Alan Turing did introduce the idea of a computer, it really allowed the field of machine learning to happen. And he was talking about machines having the ability to exhibit intelligent behavior all the way back in the 1930s, which is absolutely incredible to think about. Like, that kind of foresight is really something to marvel at.

Alex Babatunde
Yeah.

Jacob Miesner
And, you know, there had been progress from there, mostly on the actual machine side and building machines that can do the computation. But, you know, as early as the 1950s, you would see research being published around mathematical models to represent artificial neurons. So neural networks can be traced back all the way to the 1950s, which is quite amazing, and we have built off of those since then. But really, for decades, the field of deep learning (deep learning wasn't really a term utilized back then, but the field of neural networks) was kind of thought of as a sideshow and something that was not going to end up being fruitful.

The idea that machines can learn without being explicitly programmed was kind of a ridiculous proposition to make. And when you think about it, it kind of is, but it works. So back then, the prevailing theory was that symbolic A.I. was what was going to take us to where we are today. And symbolic A.I. is the utilization of symbols and rules in order to mimic knowledge or contain the knowledge.

And this was the prevailing theory for decades. And we did see progress in neural networks simultaneously, with things like the invention of the backpropagation algorithm, which is the optimization algorithm still utilized in neural networks today to learn from data. But we didn't really see the field of neural networks being widely recognized, even by the scientific community, until the early 2010s.

There's this competition called ImageNet. Yeah, what ImageNet does is they have a dataset of a million images and a thousand classes within it. Some of the classes are things like, you know, dog, cat, car, chair, etc., and people would build computer systems to try to recognize what was in the images. And it wasn't until the early 2010s that a neural network was submitted to this competition and won it.

And the type of neural network submitted is called a convolutional neural network. At the time, it was the largest neural network ever produced. It was trained on two GPUs, which by today's standards is tiny, but it blew all the benchmarks out of the water, and it really started to draw some attention. And at that point, the scientific community looked at neural networks as an actual viable way to move forward.

And I would say that's really the turning point. A lot of people point to that moment as the beginning of the field of deep learning and large neural networks. But ever since then there's been a ton of research done in this field. It's really exploded. And we saw, you know, developments all the way up until the next big milestone that I would call out, which is the publication of the transformer model by Google in 2017.

And the transformer model is really the basis for all the large language models that we're familiar with today, and, you know, the algorithms that power many of the applications and large language models that we use today, like ChatGPT, Gemini, Claude, etc. Now, there have been a lot of algorithmic improvements since then, but really these are some of the milestones that have propelled us to where we are today.

And you know, we're just getting started. So I'm pretty excited for where all this is going to head.

Alex Babatunde
Okay. You shared a lot, a lot of terms. But one I really want to hone in on is the one that's probably most common to everybody out there that's used ChatGPT, that's used Gemini. What are large language models?

Jacob Miesner
Yeah, that's a great question. So large language models are a type of algorithm that sits within the field of deep learning as we've talked about. So large language models are large neural networks that make predictions based off of text, and these models have become enormously large. The amount of information that is communicated inside of them is immense, and the amount of computing power that it takes to run them is immense.

But large language models fundamentally are based on the transformer models that I had referred to that Google had first introduced in 2017. And these models are set up in a way that allows them to be able to work with text data extremely well. One of the main innovations around the transformer model that made this type of text processing possible is called attention.

And I won't dig too deep into exactly what attention does, but essentially it allows the model to look at each word within a blob of text and understand its relationships with all the other words within that text simultaneously. And that allowed the processing of text to be utilized in a way that was much more valuable for different use cases.
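
For the curious, here is a rough sketch of the scaled dot-product attention idea, assuming NumPy; real transformers add learned projections, multiple attention heads, and much more on top of this.

    # A rough sketch of scaled dot-product attention, the transformer's core idea:
    # every word's representation is compared against every other word's at once.
    import numpy as np

    def attention(Q, K, V):
        # Similarity of each word (query) to every other word (key).
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
        # Turn similarities into weights that sum to 1 for each word.
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
        # Each word's new representation is a weighted mix of all the values.
        return weights @ V

    # Four "words", each represented by an 8-dimensional vector (random, for illustration).
    x = np.random.randn(4, 8)
    print(attention(x, x, x).shape)  # (4, 8): one updated vector per word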

And I think when a lot of people think about large language models, they think about text generation, right? They think about chatbots and seeing the words flow off the screen. But large language models can actually be utilized for quite a few different tasks outside of just text generation, things like text classification. Maybe you want to be able to see whether an article speaks about its subject in a positive light or a negative light.

Maybe you want to see, you know, if an article is about sports or politics. So that's one other example of the types of tasks that large language models can perform. But there's a whole suite of tasks that these models can perform, text generation being one of them, but obviously text generation being an incredibly valuable one that's gathering a lot of attention.
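
A minimal sketch of those classification use cases, assuming the Hugging Face transformers library (which comes up later in the conversation); the example texts and labels are illustrative.

    # A minimal sketch of using language models for text classification
    # rather than text generation, assuming Hugging Face's transformers library.
    from transformers import pipeline

    # Sentiment: is this text speaking in a positive or negative light?
    sentiment = pipeline("sentiment-analysis")
    print(sentiment("The launch event exceeded everyone's expectations."))

    # Topic: is this article about sports or politics? (labels are illustrative)
    topics = pipeline("zero-shot-classification")
    print(topics(
        "The senate passed the new budget bill late last night.",
        candidate_labels=["sports", "politics"],
    ))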

Alex Babatunde
Awesome. Thank you. I wanted to get back to one of the points that you were mentioning when it came to building these models, right? And it was, you know, you're setting these rules, you have data sets that the models are referencing. One of the biggest concerns when it comes to A.I. ethically is bias. We've had things come up when police were looking to leverage A.I. for policing and people felt there was bias there.

And some of the things people say is, it depends on who's building the product. We've seen that in other products that companies have built as well. So, what are some of these ethical considerations? Like, personally, when you're building a model leveraging A.I., what are you considering? What are some of the challenges you're facing when it comes to mitigating that bias in A.I.?

Jacob Miesner
Yeah, it's a great question. It's an incredibly important one and one that we take very seriously here at Sightly. So we talked about how machine learning algorithms learn statistical relationships in data. So this means that they're really a product of their environment and the data that they were trained on. So a critical part of ensuring that your A.I. adheres to the ethical standards that you want is making sure that the data itself reflects the values that you would like the model to adhere to.

So data is the backbone of AI, and pretty much every question about the output of the model can be traced back to the data itself. So paying a lot of attention to your data is the most critical point in ensuring that the data that you are training your model on adheres to the ethical standards that you demand. Some good practices to ensure this happens are, one, making sure that you thoroughly vet any data source that you utilize; and two, making sure that you have a diverse group of data labelers, people with different thoughts, opinions, and backgrounds labeling your data, to make sure that the distribution of the data is not too heavy on one particular mindset.

That's an incredibly important step to making sure that you eliminate bias in your data set and therefore eliminate bias in your model. But beyond the data itself, you can take additional measures to ensure that the outputs are adhering to ethical standards. And some of those things include double checking the predictions that any of your models make.
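
As a tiny sketch of one such data check, here is how you might look at whether labels skew toward one group of labelers, assuming pandas; the column names and rows are hypothetical.

    # A tiny sketch of one data-vetting step: checking whether labels skew
    # toward one group of labelers. Column names and rows are hypothetical.
    import pandas as pd

    labels = pd.DataFrame({
        "labeler_group": ["A", "A", "B", "B", "C", "C", "C"],
        "label":         ["risky", "safe", "safe", "safe", "risky", "risky", "safe"],
    })

    # Share of each label within each labeler group; large gaps between groups
    # are a prompt to re-examine guidelines, labeler mix, or the data itself.
    print(labels.groupby("labeler_group")["label"].value_counts(normalize=True))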

Alex Babatunde
Yeah.

Jacob Miesner
That's incredibly important. And I think, you know, something whose importance cannot be overstated is having humans in the loop, making sure that there is human oversight of these systems and that a human has the ability to override any decision at any given point in time. And in order to make sure that those people in the loop are able to do their job to the utmost potential, you have to ensure that they have the information that they need to make those decisions.

So a big part of people's apprehension around A.I. systems is that they don't understand why it's coming to the conclusion that it does. So making sure that your A.I. systems are explainable and transparent and provide justification for all of the actions and predictions that they're making will allow those humans to understand why it's reaching that conclusion, and help improve the system moving forward and make those decisions better.

So those are a couple of the areas that we look at at Sightly to ensure that our systems are adhering to the ethical standards that we want and that we're addressing any potential concerns. And we definitely pay a lot of attention to explainability and justification. Those are some areas that we definitely excel at in our Brand Mentality platform.
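
A bare-bones sketch of the human-in-the-loop pattern described here, where low-confidence predictions are routed to a person and every prediction carries a justification; the threshold, labels, and justification text are placeholders, not Sightly's actual logic.

    # A bare-bones sketch of human-in-the-loop review: route uncertain
    # predictions to a person and keep a justification with every decision.
    # The threshold, labels, and justification strings are placeholders.
    def review(prediction: str, confidence: float, justification: str,
               threshold: float = 0.8) -> dict:
        needs_human = confidence < threshold
        return {
            "prediction": prediction,
            "confidence": confidence,
            "justification": justification,   # shown to the human reviewer
            "routed_to_human": needs_human,   # a person can always override
        }

    print(review("lean away", 0.62, "Article discusses an active lawsuit."))
    print(review("associate", 0.95, "Article matches the brand's sustainability category."))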

Alex Babatunde
Awesome. Yeah. Shifting gears a little bit, I love the human in the loop. So a statement that was brought up by Sam Altman recently was that AI was going to replace 95% of marketing creative tasks. And it speaks to another apprehension that a lot of people have when it comes to adopting A.I. They're like, AI is going to take our jobs away.

I want to learn a little bit about your thoughts on that. Do you believe it, or do you have another perspective on it?

Jacob Miesner
Yeah, I think a lot of people, when they think about AI, think strictly about automation and how they can leverage it to spend less money or have fewer employees and things of this nature. I think that's the wrong way to position this problem. I think it's important that we look at empowering people and augmenting human capabilities to allow them to do more.

Instead of asking yourself, you know, if I have this powerful A.I. system, you know, maybe I don't need as many people in this specific role, you should be asking yourself, if I can leverage this A.I. system, how can I make the people in these roles five times more powerful, ten times more powerful, and really enable them to do more impactful work?

And, you know, maybe it can help us automate some of the things that are tedious, time consuming, or frankly, things that we just don't enjoy doing, and allow humans to focus on those things that they're good at and the things that they actually enjoy doing. So I'm incredibly optimistic about where the field is headed, and I think it's actually going to improve people's satisfaction with their jobs moving forward and dramatically increase the amount of productivity that we see across pretty much every industry.

Alex Babatunde
Nah, 100%. I think A.I. is helpful to speed up a lot of work, whether it's around communication or something else, and it definitely helps you move a lot quicker and really focus on the things that you're great at, your area of expertise, rather than the smaller tasks like maybe writing an outline or meeting notes or summarizing, things like that.

I'm going to shift a little bit, specifically into marketing. Obviously, you know, we're a marketing and advertising company, and I want to kind of put ourselves in the shoes of a company that is trying to get into leveraging ML. It's two questions. One, what are the steps a company can take to be an ML-empowered team?

And two, like, how do you identify where you want to start? What's the right use case? What type of tasks do you want to leverage AI for?

Jacob Miesner
Okay. Yeah. I'll start with the second question first.

Alex Babatunde
Okay.

Jacob Miesner
So I typically think of AI, and machine learning and statistics and any methodology, as a tool in a tool belt, and as a technologist developing a product, you have access to all these different tools. Now, I don't think the right approach is to say, I want to use this tool.

Let me find something to use it on. The better way to approach these problems is understanding the problems that are most valuable to solve and then picking the tool that is best for that specific use case. So I look at development of AI systems the same as product development at large: looking and trying to understand problems on a deep level from your users and your client base, trying to understand their value and their impact, and then obviously iteratively developing on them according to the user feedback.

But I'll talk a little bit about some of the use cases where you may want to take that AI tool out of the tool belt. And as I said, it's not all cases. Oftentimes you can use a simpler tool -

Alex Babatunde
Yeah.

Jacob Miesner
- and a good rule of thumb in data science is when all else is equal, simpler is better, it's more explainable, it's easier to understand and easier to utilize.

But, you know, in certain use cases, you will have to pull out those more complex tools for more complex tasks. Some of the commonalities that I see between the tasks where you do want to utilize AI are that they take people a lot of time or a lot of brainpower. Saving people time and saving people brainpower: if you can do both of those things, your users will be very happy.

And, you know, I think that we've done a good job of this within Brand Mentality, identifying places where people are spending a lot of time and a lot of brainpower, where we can provide them solutions to help them be able to solve those problems in an easier fashion, and in a fashion that is much quicker and on a larger scale as well.

So I want to touch on those. But I also want to say, you know, you can look at the things that people are currently doing and ask, where are they struggling? How can we help them? But another area that I don't think you can ignore is: what are those things that people want to get to, but they just can't because they don't have the time?

So those are great use cases to utilize AI to help people as well, because you can help them do more than they're doing today with the same amount of effort. But a couple of different areas in Brand Mentality where we're utilizing AI that follow these types of principles are things like our anticipation boards. We're aggregating hundreds of thousands of moments multiple times on a daily basis.

These moments are comprised of news articles and social data speaking about happenings in the world, and we are able to associate all of those moments with the things that matter to brands: content categories, people, organizations, events and situations that may be important to them, because they want to lean away from them, they want to associate with them, or maybe it's just things that they want to monitor and keep an eye on in the moment.

And the utilization of A.I. within the system allows us to be able to process an immense amount of information. The entire news cycle. A human can't read the whole news cycle themselves. Like I can read, you know, a couple of articles a day, but I can't read, you know, hundreds of thousands of them. And even if you had a team of hundreds of people, it would be difficult to do that.

So that's a place in Brand Mentality where we’re utilizing AI to really help people be able to take their time back and perform those tasks at an even deeper and more nuanced level than they're able to manually.
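
As a toy sketch of the general pattern (not the actual Brand Mentality pipeline), here is one way to score an incoming moment against the categories a brand cares about, assuming the sentence-transformers library; the categories and article text are made up.

    # A toy sketch of the general idea (not the actual Brand Mentality system):
    # score each incoming "moment" against categories a brand cares about,
    # using text embeddings. Assumes the sentence-transformers library.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    brand_categories = ["natural disaster", "product recall", "charity drive"]
    moment = "Volunteers raised record funds for flood relief over the weekend."

    cat_vecs = model.encode(brand_categories, convert_to_tensor=True)
    moment_vec = model.encode(moment, convert_to_tensor=True)

    # Cosine similarity between the moment and every category; the brand can
    # then decide to lean in, lean away, or just monitor the best matches.
    scores = util.cos_sim(moment_vec, cat_vecs)[0]
    for category, score in zip(brand_categories, scores.tolist()):
        print(f"{category}: {score:.2f}")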

Alex Babatunde
Awesome. And when it comes to actually building that ML-powered team, what does it take? Like, what steps did you take to introduce and bring AI into the organization?

Jacob Miesner
Yeah, there's a couple of key things to think about here. I think a lot of people look at tools: what tools do I bring to the table and give to my team in order to supercharge them? It's not a bad way to think, but I tend to think a bit more about the attitude of the team and, like, what is the culture that you've built within your team?

And two of the big aspects that I look at are, one, open-mindedness and, two, optimism, because the field of AI and machine learning is moving so fast, and things are possible today that weren't possible last year, and next year you'll be able to say the same thing, and the year after, and the year after as well.

So teams need to be able to look to the horizon and have curiosity about where we're headed, and be ambitious to be the first ones there, or the first ones to utilize these capabilities and realize their value. Another thing that I think is, quite frankly, just as important, if not more important, is just inherent optimism in the team.

Yeah. Something that I hear a lot when people talk about AI is the default to what it can't do. And I think that this is the wrong way to think about it, because you're essentially limiting what you're allowing yourself to think is possible and you're putting in, like, this mental block or artificial constraint that doesn't need to be there.

So instead, you should just be asking what it can do. And you know, it can't do everything. But what you can do as a team is push these systems to the limit until they break. And then when they break, you know what the full extent of the capabilities are. Yeah. So that allows you to really explore that frontier and avoid putting this limiting factor in place.

And I think something that is incredibly helpful when building an ML-powered marketing team, or an ML-powered team at all, is being open-minded and being optimistic. And then one other thing that, you know, is important is having expertise. So, you know, while the field of AI is highly technical, it's not impossible to break into. There's a wealth of knowledge out there and materials to teach yourself on this subject.

And I think those teams that have a good grasp on the fundamentals are going to perform extremely well.

Alex Babatunde
Yeah.

Jacob Miesner
And, you know, we've talked about the history of AI, but this is a relatively young field still, and motivated individuals can ramp up pretty quickly. If you have the determination and curiosity to learn, and maintain that level of curiosity over time, then you can acquire the skills to be a practitioner in this field, or just be generally educated on this field, relatively quickly.

Alex Babatunde
Awesome. I know that you stay up to date. You are always, like, trying to learn what's new and exciting in the field. I want to leave the audience with a few nuggets: how do you stay up to date on new topics in this field that's moving so fast?

Jacob Miesner
Yeah. Yeah, it's a great question, because there's so much stuff out there nowadays, especially over the past couple of years with all the media attention that's come into the AI space; it's expanded the amount of content that is out there. So you really ought to be able to, like, separate the wheat from the chaff, so to speak, because there's so much information out there, it's impossible to consume all of it.

So you really need to be able to focus on those areas that are going to be the most valuable and information dense. So what I look at is, what are the credible and reputable sources to be reading this information from? I think the most credible sources are the actual research papers that are being published themselves. So arXiv, which is an open distribution service for research papers that Cornell University created, is where a lot of the research papers in the AI space are published.

But there are so many papers there that it's hard to know which ones to read. So I'll look at some aggregation platforms. One of my favorites is a website called Papers with Code, and then Hugging Face, they also have a research papers aggregator as well, to let you know what things you may want to pay attention to. So I think that the research is really the best place to stay on the absolute cutting edge.

And something that's awesome about reading these research papers is that you'll see stuff months before you see it starting to be realized in products. So a lot of the things that we see being put out onto the market as products today, there was research being published on them months ago. So if you read the research papers, you get a little bit of insight into where we're headed in the next couple of months, which is really exciting.

Alex Babatunde
Wow.

Jacob Miesner
Now, for people who don't want to dig that deep into some of the details, I love reading on Medium. Blog posts on Medium are incredible, and you can search for almost any topic in machine learning and AI on there, and you'll find a ton of articles that are easily consumable. You know, they take 5 minutes to read, but have a lot of valuable information in there.

So that's one of my favorite places to go and read about topics in machine learning and AI: Medium. And then I would also just note that there's a lot of free educational material out there in regards to AI. Above all, I recommend the Andrew Ng courses provided by DeepLearning.AI. They have introductory courses on machine learning that you can take for free that almost every single practitioner in the space has taken at some point in time.

But they're not so technical that you won't be able to jump in and take them. I think anybody can jump in and learn about AI from these courses. So for anybody looking to dig in a little bit more than blog posts and introductory topics, I think those are a great resource to utilize.

Alex Babatunde
Awesome. Alright.

These are a little bit more fun questions, stuff that I just want to, like, pick your brain on. What are some of the exciting things coming up in the field of machine learning?

Jacob Miesner
Yeah, there's a ton of stuff going on. Obviously it's almost overwhelming, but there's a couple of areas that I'm particularly interested in. I think a few months ago the popular answer here would have been multi-modal models, which, for those who are not familiar with them, are models that can consume and process multiple types of data simultaneously.

So for example, text and images. But more recently, there's been a few areas that I'm really excited about, one of which is agent frameworks. (Okay.) And these agent frameworks allow language models to be able to perform longer-term planning and execution of tasks autonomously. So a lot of the applications that we use today, you know, it's kind of a one-off where you ask a question, you get an answer.

These systems are able to actually go perform research, utilize tools, aggregate information, and have an internal monologue in order to come up with those answers. And this field is exploding right now. It's really exciting because you don't have to train a new language model in order to get more out of these language models when utilizing agent frameworks.
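
Here is a stripped-down sketch of that agent-loop idea: decide on a step, call a tool, observe the result, and repeat until done. The tools and the decide() stub are hypothetical stand-ins; a real agent framework would wire this loop to an actual language model.

    # A stripped-down sketch of the agent loop: plan, call a tool, observe, repeat.
    # The tools and the decide() stub are hypothetical stand-ins.
    def search_news(query: str) -> str:          # hypothetical tool
        return f"(top headlines about {query!r})"

    def summarize(text: str) -> str:              # hypothetical tool
        return f"(summary of {text!r})"

    TOOLS = {"search_news": search_news, "summarize": summarize}

    def decide(goal: str, history: list) -> tuple:
        # Stand-in for the model's "internal monologue": look at the goal and
        # what has been learned so far, then pick the next tool to call.
        if not history:
            return "search_news", goal
        if len(history) == 1:
            return "summarize", history[-1]
        return "finish", history[-1]

    def run_agent(goal: str) -> str:
        history = []
        while True:
            tool, arg = decide(goal, history)
            if tool == "finish":
                return arg
            history.append(TOOLS[tool](arg))      # observe the tool's result

    print(run_agent("how consumers reacted to the brand's new campaign"))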

So basically you can use an agent framework in conjunction with a language model to improve the outputs, and we're actually utilizing agent frameworks in Brand Mentality. So in our brand profile, where we allow brands to be able to define their perspective, their worldview, and how they want to reflect themselves to their consumer base, we utilize agent frameworks to help perform research on their behalf, to give them suggested answers on how they may want to respond to certain things going on in the world, according to all the publicly available information about their brand online.

So I'm really proud that we're able to implement these cutting-edge systems into production, but I'm really excited about where agent frameworks are headed in the near future. A couple of other areas include things like making models cheaper, faster and smaller. Yeah. So with this advent of large language models, one of the ways that we've been able to continue progress is by just throwing more compute and more data at these models.

Yeah. And with that it becomes much more expensive for people to be able to run these models, and it creates a barrier to entry in order to utilize these highly intelligent systems. But there have been quite a few areas of research recently that have helped take these larger models and turn them into something that you can run on smaller hardware, allowing more people access to these systems.

So a few different areas of research here include things like knowledge distillation and quantization. But we've been able to see over the past few months people being able to run these highly intelligent systems on edge devices, so on their smartphone, or, I actually saw someone on Twitter post that they were running a Llama 2 model on a smartwatch. (Wow.)
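
As a minimal sketch of one of those techniques, here is post-training dynamic quantization, assuming PyTorch; the toy model stands in for a much larger one.

    # A minimal sketch of post-training dynamic quantization, assuming PyTorch:
    # weights are stored as 8-bit integers instead of 32-bit floats, shrinking
    # the model so it can fit on smaller hardware. The toy model is a stand-in.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(model(x).shape, quantized(x).shape)  # same interface, smaller weights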

So I think that this area of research is going to be incredibly important in making sure that more people have access to these systems and that there's not this big monetary barrier to entry in order to utilize these systems on the cutting edge. And then a couple of other areas include things like prompt optimization. (Okay.)

So I think prompt engineering is something that a lot of people have heard of over the past couple of years, and there's been a lot of research in regards to compiling and optimizing these prompts to pass to a language model using A.I., which is really interesting. So people have been able to find that A.I. systems are actually great at writing prompts themselves.

And the reason for that is they're kind of quirky: it's often the case that you type in certain phrases that wouldn't be so obvious or intuitive to a person, and they give you a better output. So there's a lot of exciting research happening in this field where you can take labeled data, inputs and outputs, and actually train a prompt, just like you would a deep learning algorithm or a machine learning algorithm.
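
A toy sketch of that prompt-training idea: score candidate prompts against a small labeled set and keep the best one. The call_model() stub is hypothetical; in practice it would call an actual language model.

    # A toy sketch of prompt optimization: evaluate candidate prompts against
    # labeled examples and keep the best. call_model() is a hypothetical stub.
    def call_model(prompt: str, text: str) -> str:
        # Stand-in that "classifies" by keyword so the example runs end to end.
        return "positive" if "love" in text or "great" in text else "negative"

    candidates = [
        "Label the sentiment of the text as positive or negative.",
        "You are a careful analyst. Answer only 'positive' or 'negative'.",
    ]
    labeled_examples = [
        ("I love this new feature", "positive"),
        ("The update was great", "positive"),
        ("Checkout keeps crashing", "negative"),
    ]

    def accuracy(prompt: str) -> float:
        hits = sum(call_model(prompt, text) == label for text, label in labeled_examples)
        return hits / len(labeled_examples)

    # Treat the prompt itself as the thing being "trained": pick the best scorer.
    best = max(candidates, key=accuracy)
    print(best, accuracy(best))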

So I'm really excited for where this prompt optimization is going to go. I think we're going to see a lot more come out about this over the next couple of months and next couple of years. And then lastly, I think we're just going to see improved systems overall, more high-fidelity generative outputs. We saw this with image generators first, where, you know, the first A.I. images that we saw were kind of crude, they didn't look so great, but it was still amazing at the time, right?

Alex Babatunde
Wasn’t that like the four or five fingers thing, where it would make people with six fingers and everybody's like, whoa, what's going on?

Jacob Miesner
Exactly, exactly. I mean, nowadays you look at these images and they're, like, stunning. It's almost indistinguishable from reality, and oftentimes it really is. And you know, we've seen some of this progress seep into other areas, such as video, like with OpenAI's Sora model, and now with audio models and music generators such as Udio and Suno AI.

But I think we're going to see this trend continue-

Alex Babatunde
Yeah.

Jacob Miesner
-where these generative models are going to be able to produce clearer and higher-fidelity outputs over time.

Alex Babatunde
Awesome. All right, before we wrap up, I have three rapid-fire questions to ask you. All right. First one is, what's your favorite A.I. application outside of marketing that you use?

Jacob Miesner
Aw man, that's a good one. So I actually love utilizing just the chat models. So I utilize ChatGPT, I utilize Claude from Anthropic, on a daily basis, and I love utilizing these for programming. I don't necessarily have them write the code for me, but it's great to have someone there. It's like having a pair programmer, a person there who you can bounce ideas off of. Like, you know, I'm thinking of these different architectures for building this application.

What are the pros and cons of each of these? These types of questions just allow you to be able to iterate so much quicker. And, you know, utilizing chat models for programming is something I do on a daily basis. So I think I would have to say that.

Alex Babatunde
Awesome. Second question: what's one AI myth you wish people would stop believing?

Jacob Miesner
Yeah, that's a great one. I don't know if this is necessarily a myth, but I'm glad that we touched on the definitions in the beginning, because one thing that does hurt me a little bit is when the term algorithm is utilized to refer only to social media recommendation systems. Algorithms are really any function that transforms data.

So, you know, if I have x plus one, that's an algorithm. And all of the different machine learning systems and statistical models that we talked about today are algorithms as well. Social media recommendation systems are one form of algorithm, but they are not, you know, the entirety of algorithms. So I don't know that that's necessarily a myth, but that's something that I would call out there.

Alex Babatunde
Awesome. And for the last one, I see in the background you have a few books there. I just want to know, what's the best book you've read recently?

Jacob Miesner
Aw man, that's a good one. So I love the field of natural language processing. I love the field of statistics. One of the books that I've read recently, after taking the natural language processing and transformer courses provided by Hugging Face, was their Natural Language Processing with Transformers book. Yeah, I love the Hugging Face library and the work that they do for democratizing AI and making the barrier to entry nice and low.

But another book that I've read recently that I really enjoyed was The Cartoon Guide to Statistics. I've always loved cartoons, and I love that they're mixing, you know, this type of comedy with a technical field. So that's another one I would call out there.

Alex Babatunde
Awesome, awesome. Well, thank you so much, Jacob. I really enjoyed chatting with you about AI and ML. And to the audience, thank you for tuning in to another episode of the Breaking Through the Mayhem podcast. Appreciate you. Peace.
