Ginger shares her unique perspective on the crucial distinctions between generative AI and machine learning, highlighting why accuracy, data quality, and the right questions matter more than ever. She explains why companies must address messy, fragmented data before they can effectively harness AI, and how grounding models in proprietary data unlocks more reliable business insights.
Listeners will gain insights into:
• Why machine learning has a more immediate impact than generative AI for business-critical tasks
• The persistent challenge of data organization and the dangers of Excel-driven reporting
• How companies like Goldman Sachs are grounding AI models in proprietary data to achieve accuracy
• Why dashboards and rote reporting are ripe for automation—and where humans remain indispensable
• How leaders can encourage AI adoption without stoking fear of job loss, through training and intentional change
For direct-to-consumer marketers and cross-industry leaders alike, Ginger offers actionable guidance on how to move beyond data collection toward true business impact. From defining success metrics to ensuring critical thinking remains at the core of analysis, this episode is a must-listen for executives and marketers preparing to navigate the future of AI-driven decision-making.
Contact Ginger at:
- Website: Desert Isle SQL
- LinkedIn: Ginger Grant
Transcript
Episode 93 - Ginger Grant
Narrator: [00:00:00] Welcome to the Digital Velocity Podcast, a podcast covering the intersection between strategy, digital marketing, and emerging trends impacting each of us. In each episode, we interview industry veterans to dive into the best hard hitting analysis of industry news and critical topics facing brand executives.
Now, your host, Erik Martinez.
Erik Martinez: Hello and welcome to this episode of the Digital Velocity Podcast. Today we're diving into strategic data analysis in the age of AI. Our guest is Ginger Grant, a thought leader, storyteller, and data strategist who has spent her career helping organizations bridge the gap between complex analytics and real-world business impact.
Ginger, welcome to the show.
Ginger Grant: Thank you so much, Erik. Great talking with you.
Erik Martinez: Yeah, I'm excited 'cause I get to talk to somebody who speaks my language, and actually you're probably more proficient. But I love talking data, so this should be a lot of fun. Before we dive into the topic, can you just give us a brief sense [00:01:00] of who you are and what you do?
Ginger Grant: Sure. So I've been working in data for a number of years. Right now I'm looking at more enterprise-wide data solutions: looking at how you can combine the data that you might have in Salesforce, on-prem, and maybe Great Plains Accounting, and put all that together in one spot so you can do visualizations, analysis, and reporting off of it.
And I've worked with other flavors of databases. I have my own consulting firm, Desert Isle Group, and I've been doing that on my own for the last nine years. One of the ways that I get business is that I'm a Microsoft Data Platform MVP. I just got renewed recently, and I'm at 10 years of that. It helps in learning stuff and also getting business from people who are interested in Microsoft technologies, 'cause you gotta specialize in something.
Erik Martinez: Gotta specialize in something. And there's lots of choices. [00:02:00] I grew up with Excel and the Microsoft Office suite, and there was Quattro Pro, another spreadsheet competing with Lotus 1-2-3. And each one of them had their own little cohort of super fans.
And today we've got kind of the same thing between Google and Microsoft. Now OpenAI is jumping in and Zoom's jumping in, and it's kind of a big crazy mess. But anyways, I digress.
Ginger Grant: Not really, 'cause Excel is still the single largest data store in the world, and a lot of the stuff I do ends up with, "Great, now how can I put this into Excel?"
Erik Martinez: Why in this age of AI use a spreadsheet when you can use a generative AI product?
Ginger Grant: Because people are comfortable with it. They know how to make it work and they trust it. I just saw something on the interwebs today where some woman was going on a big vacation, and she asked ChatGPT, do I need to get a visa to go to, I think it was Puerto Rico, but don't quote me.
And it said no. And guess what? [00:03:00] She didn't get to go on her vacation, because it was wrong. And that is why there's a limit on how much generative AI people are using to answer business decisions: you can't afford to be wrong. You need to trust it over and above the roughly 90% accuracy rates that large language models have.
Erik Martinez: The veracity of results is incredibly important. I'll give you an example of something I did working on a project for a client, doing it fast, merging multiple data sources. I thought, you know what, I'm gonna try this in ChatGPT just to see if I could do it a little bit quicker.
It did everything I wanted. I checked all the numbers, and then I changed the parameters to answer a question a slightly different way. And the data was organized in a way where it basically replicated each source row multiple times, because there were multiple sources. I had forgotten that, because I had [00:04:00] been running them singly before, and I got great output, and I checked every single one manually and it was good.
So I thought I was good on the next one. The lesson: when you change the parameters, reverify the data.
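Erik's pitfall, a join that silently replicates rows when one side has multiple matches, is easy to reproduce and to guard against in, say, pandas (hypothetical toy data, not the client's):

```python
import pandas as pd

# One fact table and a source table with multiple rows per key.
orders = pd.DataFrame({"order_id": [1, 2], "revenue": [100, 200]})
sources = pd.DataFrame({"order_id": [1, 1, 2],
                        "source": ["email", "search", "email"]})

merged = orders.merge(sources, on="order_id", how="left")

# Each order row is now replicated once per source, so summing
# revenue silently double-counts order 1.
print(merged["revenue"].sum())   # 400, not the true 300
print(merged)

# Guard against this: declare the join cardinality you expect.
try:
    orders.merge(sources, on="order_id", validate="one_to_one")
except pd.errors.MergeError as e:
    print("join is not one-to-one:", e)
```

The `validate` argument makes the assumption explicit, so the merge fails loudly instead of quietly fanning out rows.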
From your perspective, what do you think is changing in the fundamentals of data analysis as AI tools are getting better and stronger and more powerful?
Ginger Grant: You were nice enough to let me know what we were going to be talking about before we did this podcast, and so I asked ChatGPT these kinds of questions, and I found that I completely disagree with the answers it provided. Because what I think is different in data and analytics now is that people who have some experience are saying, I've got enough. I can take on things that I don't know how to do, because I can just get ChatGPT to tell [00:05:00] me. I don't think that it's making the job a whole lot easier, though, because the biggest problem with data analysis is getting people to tell you, or figuring out yourself, what's important.
'Cause everybody's got piles of data. But if you really wanna stymie somebody in business, ask: how do you measure success here, and how do you measure success in this particular area of your business? You would be surprised how much pause that gives people. They're like, there's a lot of disagreement on how you calculate that.
Especially if people's bonuses are tied to it. I was at an organization, a chain of primary schools nationwide, and they calculated enrollment. There were four different departments that calculated enrollment differently. And the reason is 'cause they all got bonuses based on that [00:06:00] enrollment number.
They tweaked it so that it was to their benefit. And that's not a technical problem. I went to the CEO of the company and I'm like, you're gonna have to decide how to do enrollment. He's like, what do you mean? It's enrollment. I'm like, oh no, this is how it's counted in your different departments.
He's like, so everybody's telling me a different number? And I'm like, yeah. He had no idea. So even if you're like, we want to count enrollment, okay, how do you calculate that? And do you have agreement within your organization on how you do that? That's not something that AI can fix, because it'll say, yeah, you should calculate enrollment.
And then people will be like, yes, but if I do it like this, then it'll benefit my team. And that's a good thing for me.
Erik Martinez: Yeah, it was really interesting. I had a conversation with a team member this morning, and we were talking about a report that we wanna automate that would help her do her analysis and her job better. We loosely used the term dashboard, and she got kind of worked up over the [00:07:00] word dashboard.
I don't like dashboards, I don't like charts, because they can give you misleading information. And the reality is, that's true. But then she said, a number doesn't lie. I'm like, no, it can. Look at percentages. We got a hundred percent increase.
Oh, it was only on three instances. If I got a hundred percent increase on a thousand instances or 10,000 instances, well, that's more significant. So there's some statistics and probability in how we use the numbers, and I think that is potentially a place where AI can help.
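Erik's point about percentages hiding the base is easy to make concrete (hypothetical numbers):

```python
def pct_increase(before, after):
    """Percent change, which hides the size of the base."""
    return (after - before) / before * 100

# Both campaigns show the same 100% increase...
small = pct_increase(3, 6)            # 3 -> 6 instances
large = pct_increase(10_000, 20_000)  # 10,000 -> 20,000 instances
print(small, large)                   # 100.0 100.0

# ...but the absolute lift tells a very different story.
print(6 - 3)            # 3 extra instances
print(20_000 - 10_000)  # 10,000 extra instances
```

A dashboard that shows only the percentage makes the two look identical; pairing it with the absolute change is the simple fix.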
Ginger Grant: You're so right on that. So a little story about this, because I do data visualizations, although not so much anymore, 'cause that's something that AI can do pretty well. The reason the whole analytics business kind of blew up was a problem the Department of Defense had in 1996. They went to Stanford, where people were busy [00:08:00] working on Google at the same time.
And they're like, here's why we're here: our analysts have all this data, and we've got more data than we have people who could possibly look at it. How do we determine what's the important data, and can you develop a way for us to find it? And that's when data visualization really started to bust out.
The reason is that if I give you a 150-page report with columns and headers and say, you find the thing that's wrong, you'd be like, yeah, it's not happening. But if I aggregate it in a way that you can see that this group looks messed up, you can drill down to the individual elements. So it's that aggregation that helps you.
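That aggregate-then-drill-down workflow can be sketched in pandas (illustrative data; the regions, stores, and metric are made up):

```python
import pandas as pd

# A long detail report nobody could eyeball row by row.
detail = pd.DataFrame({
    "region": ["East"] * 3 + ["West"] * 3,
    "store":  ["E1", "E2", "E3", "W1", "W2", "W3"],
    "error_rate": [0.01, 0.02, 0.01, 0.01, 0.25, 0.02],
})

# Step 1: aggregate so the problem group stands out.
by_region = detail.groupby("region")["error_rate"].mean()
print(by_region)   # West's average is clearly off

# Step 2: drill down into only the group that looks wrong.
worst_region = by_region.idxmax()
print(detail[detail["region"] == worst_region])  # W2 is the culprit
```

The aggregate view shrinks 150 pages to a handful of rows; the filter then recovers the individual elements behind the anomaly.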
I tell people that one of the jobs I believe is gonna go away is the creation of dashboards, because I can have AI do that now, and it does an okay job.
Can I do a better job? Yes. Is it cheaper than hiring the offshore team that I worked with last year that [00:09:00] was doing all the reports for the company I was working for? Yes. If you look at it from a cost-benefit analysis, most of the reports that you see will be done by AI, and those will probably be better, because all you need to do is program the rules of how to create good visualizations and say, please follow these. A lot of people who design visual dashboards either don't know the rules or don't follow them, and so they do things like play around with the axes.
But if you follow the rules and lay it out like you're supposed to, then it can be very helpful. Well, I can get AI to do that. And that's not so much AI as it is machine learning, because it's created to complete a specific task using patterns, following a specific set of rules.
Everybody wants to talk about AI, but it's really machine learning that is going to help people with very specific tasks. For example, there's this company called Kaggle, which is a learning [00:10:00] site for people doing machine learning; they're always running little contests, and they did one on whether you could program a machine learning project to tell the difference between a cat and a dog.
That's all it does, but it does it better than a human can, because they trained it. That same kind of technology is being used by radiologists now to say what's wrong with this lung in this X-ray. That is machine learning, because they looked at millions and millions of X-rays, and a trained radiologist says, that's a tumor.
And if you do that a million times, you can teach a computer that, and teach it better than a human, 'cause a computer's not gonna get tired. Is it gonna be able to tell who won the ball game last night? No. So that's the machine learning element that I think is getting blown off. Like, AI is so cool, but I really think it's machine learning, specific answers for specific tasks, that is actually gonna impact us more, because [00:11:00] those are really accurate.
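A toy version of that idea, a model trained on labeled examples for one narrow pattern-matching task and nothing else, can be sketched with a nearest-centroid rule on made-up two-number features (real systems use deep networks trained on millions of labeled images):

```python
# Toy pattern matcher: learns one labeled pattern, knows nothing else.
# Features are invented for illustration (e.g. ear pointiness, snout length).
cats = [(0.9, 0.2), (0.8, 0.3), (0.95, 0.25)]
dogs = [(0.3, 0.8), (0.2, 0.9), (0.35, 0.75)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# "Training" here is just averaging the labeled examples.
CENTROIDS = {"cat": centroid(cats), "dog": centroid(dogs)}

def classify(point):
    # Pick the label whose training centroid is nearest.
    def dist(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify((0.85, 0.3)))   # "cat"
print(classify((0.25, 0.85)))  # "dog"
# Ask it who won the ball game last night and it has nothing to say:
# it only knows the one pattern it was trained on.
```

That narrowness is exactly Ginger's point: a well-trained task-specific model is very accurate inside its task and useless outside it.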
Erik Martinez: I've been piloting a couple of projects with a couple companies to figure out their workflows and their processes. We go through this question and answer sequence with five or six people in a room and we walk them through their jobs, the types of tasks that they do.
Then we take those tasks and we break 'em down, and then we break 'em down again. And it's really fascinating when you start talking to people about their jobs. It's very similar to what you're saying, which is, hey, there really isn't this big thinking component. There are steps, and you have intuitively ingrained those steps into your day.
But if you ask somebody, how many steps are there in your process, or in that specific task, they will say, I don't know, or they will inevitably give you a much shorter answer. And I always say, look, in any process there's gotta be a beginning, a middle, and an end. Everything we do, even opening an email, is at least a three-step process.
And so when you're talking about data analysis and [00:12:00] machine learning, it's basically the same thing. The process is, we're gonna show a machine these tumors in this part of the body, over and over again, in all their varying sizes, and we call them out, and the machine goes, oh, I get that. Now here's a million more examples, and it'll pick 'em out. That's not the AI part of it, right?
Ginger Grant: That's not AI, that's machine learning, because that's pattern matching.
Erik Martinez: But both are important when we do data analysis.
Ginger Grant: Oh yes, absolutely. You know, I hope that if I get a lung X-ray, they're really confident that if they tell me I have a tumor, they're right. But me going into ChatGPT asking, hey, do I need a visa to go to Puerto Rico? Well, okay, if I don't go on vacation, I'll live; with the other one, I won't.
So people are willing to put up with a lot more error, because it does a lot more. And so people conflate these two things. It's like, well, I don't wanna use AI because AI makes mistakes. Well, large language model generative AIs, yes, they [00:13:00] do. Well-trained machine learning, not so much. That's very accurate.
And so that's why, when those things are conflated, people are like, oh, it's not accurate, I don't wanna use that. But they're not the same thing; one is very targeted and the other very open. And that's the difference between the two.
Erik Martinez: I'm reading a book right now, put out by Ethan Mollick. Are you familiar with him?
Ginger Grant: I believe so, but I don't think I read the book.
Erik Martinez: So Ethan is a professor of business and innovation at the Wharton School of Business at the University of Pennsylvania, and he is one of the leading voices on AI for marketing uses. And because of where we sit today in this discussion of AI, he's also getting into technical aspects, although he will admit, I'm not a computer scientist, that's not what I do, and yet I understand enough of the principles to talk about it. At the beginning of the book, one of the things that struck me is that a large language model is just a predictive engine: it is going to predict the [00:14:00] next word, or the next token, based on the context that you've provided it.
So if you give it a very broad context, do you need a visa to go to Puerto Rico? And it doesn't have any other pieces of information. It's going to give you an answer based on all the training data that has been brought in, which is really fascinating when you start thinking about the way these things are trained.
The data sources that are being used to train them. So when we start talking about data and analytics, one of the other things he says, which is absolutely fascinating to me, is think about the data sources. These large language models require enormous amounts of data, and they've exhausted all the free sources.
And the free sources are inherently problematic. So now they're moving into the stack of proprietary sources, which is this whole debate about fair use, which I'm not gonna get into today. But if you think about it, look at the way Google Gemini and its AI [00:15:00] Overviews return a response versus ChatGPT. Gemini's AI Overviews tend to pull a response that's very similar to the search results.
Well, why? 25 years of search data has gone into training it, where OpenAI doesn't have any of that infrastructure, none of that algorithmic nuance. It has different ways of surfacing content. The question is, will that converge at some point?
Ginger Grant: Well, you know what's interesting too is that you're talking about sources. Now, I may be wrong on who is doing it, but I heard Boston University is looking to digitize their entire library. And correct me on the copyright law, but I think it's maybe 50 years on a book. I'm sure that Boston University, being a university that's been around forever, has many books that are outside of that. So OpenAI says, well, we'll help you digitize everything you've got; can we use all this stuff that's out of copyright? And they said, sure. So that [00:16:00] means if you are asking it about some topic, like, I don't know, why is the sky blue, which hasn't changed in 50 years, it's going to use the plethora of information it gets from digitizing a library, and it's gonna have more accurate results for that kind of thing.
But this gets into something, as somebody who does AI for a living: there's this thing called grounding, and I know a lot of people aren't that familiar with it. Goldman Sachs has their own AI model that they are very proud of and that they're not sharing. There's not a whole lot of information about this, but what they have done, basically, is take a large language model, as in how it works, and say, now primarily look at our data. And we're also going to give you some targeted responses for how we do due diligence when we're analyzing a company, because we've got years of that data. So [00:17:00] use that first, and learn it like you've learned other stuff. That's how a lot of companies are getting good results: their target is their own company data. So I've got companies now that are saying, well, I need to do that. And it's like, well, where do you have all of your data? Because I need to be able to index it so I can pull it all in.
And where people keep important data like that, it's a bit of a mess. It's in SharePoint folders, it's in email. If you wanna talk about the information you need for business processes, that's not gonna be in your database; that's gonna be in your emails, in the PowerPoints that you have created for people.
And people tend to just toss that when they're done with it. But that's the kind of stuff that, if you are an organization and you want a large language model to help you with your business, that's what you need: that kind of information. Then you can create your own model to answer your questions, and that will work [00:18:00] better. 'Cause again, it's the data that you use.
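The grounding pattern Ginger describes, indexing your own documents and pointing the model at them first, is commonly implemented as retrieval augmentation. A minimal sketch, where the documents, the keyword scoring, and the `call_llm` placeholder are all simplified assumptions, not how Goldman Sachs or anyone else actually does it:

```python
# Minimal grounding sketch: retrieve company documents, then hand
# only those to the model as context for the answer.
COMPANY_DOCS = {
    "dd-checklist": "Our due diligence reviews revenue, debt, and leadership.",
    "hr-remote": "Employees may work remotely up to three days per week.",
}

def retrieve(question, docs, k=1):
    """Crude keyword-overlap scoring; real systems use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(question, docs):
    context = "\n".join(retrieve(question, docs))
    return (f"Answer using ONLY this company data:\n{context}\n\n"
            f"Question: {question}")

prompt = build_prompt("How many days may employees work remotely?", COMPANY_DOCS)
print(prompt)        # the model now sees your policy, not just the open web
# call_llm(prompt)   # placeholder for whichever model API you use
```

The indexing step is the hard part in practice, which is exactly Ginger's point: the retrieval only works once the SharePoint folders, emails, and PowerPoints have been gathered somewhere searchable.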
Erik Martinez: Yeah, that's fascinating, 'cause I've always been trying to figure out how to flip the 80/20 rule in data analytics. For me, the 80/20 rule has always been 80% manipulating the data to get it into a form where I can actually use it, and 20% spent on the analysis. I've always felt like whoever can figure out how to flip that script, so you spend 80% of your time doing the analysis and 20% on the data, could get better insights.
But you just pointed out all the big challenges: all of our data is all over the place, in so many different forms, not organized, not cataloged. I can tell you, in this pilot that I'm doing, just talking to two pretty well-organized companies about their data, that is one of the number one struggles I heard throughout our interviews: I spend a lot of time trying to find [00:19:00] X or Y or Z. And it didn't matter what kind of project, whether it was an analysis project, a branding project, or creating new graphics or a new logo; whatever it was, they were all struggling with this particular concept. So what's the solution to that, Ginger? How do you start to tackle that particular issue?
Ginger Grant: Well, you know, everybody talks about being more organized, and nobody does it. I think it's figuring out how important information is, which shouldn't be a secret. I mean, Reddit is now making a lot of money, not because people like to complain about entitled people online, which, you know, I like to look at, guilty pleasure, but because they sell their data.
And so people need to determine what is the more valuable data, and look outside your database. I've lived in databases my whole life; I have everything stored in and accessed through them. But if you're looking to solve a [00:20:00] corporate problem, you need more than that. So look at the information that you're using to run your business. Say I want to create a data source for all of my HR policies: where are they all written down? Where are they all documented? If I've got that, I can create a chatbot for you in, yeah, a week; that's not that big of a deal. Getting all the data together is the much bigger deal. So it's a priority: people need to look at data and cast a wider net than, yeah, we've got all of our orders tracked. That's great. But what did you do to get that order? Because that's the more important information. That might be an email, and that's the kind of thing that's often in a CRM system.
You know, how many times did you talk to a potential client before you were able to land them? That way you could train people on the important information you need so that you can do that. But that's a different set of data [00:21:00] than people are used to collecting. It's funny too, because if you're talking about Google, yeah, they do really well with search data. But Meta has Llama, and Llama uses everything that people put in Facebook. If you think about that, that's an interesting source of data for its grounding as well.
I know that Gemini also bought Reddit data and was using it as well, and they had a screw-up in one of their previous versions of their AI: when people were asking it questions like, what do I do to become healthy, it would say, "Eat rocks." Because people on Reddit will tell you to do that. So the source of your data is everything. But the other thing is that I hear so much hype about, oh, it's gonna become sentient, oh, everybody's gonna lose their jobs. Well, I don't believe that to be true, because if you don't know what questions to ask, you're not gonna get the right answers.
And you are 100% right in that it's just a probability engine. That means that you're gonna have [00:22:00] to know enough to know, that's not right. And that's the human element to it. It's not gonna become sentient, and I don't believe in Skynet or Terminator, because its probability engine is coming up with things that people didn't expect, but it's doing that based on what it knows.
So I know that there is a big worry in Hollywood about AI taking over a lot of the writing jobs. And I'm thinking, well, you ought not to write things that are so formulaic, because who thought about kids going to wizarding school until J.K. Rowling just kind of made that up one day? There's a lot of money to be made for the person who says, you know what, I'm gonna get people to buy rocks, and we'll make 'em pets. AI is not going to do that. But it will allow you to learn a new language quickly. I mean, I can't imagine going any [00:23:00] place anymore without Google Translate.
I was in Germany earlier this year, and it was so awesome to hold up my phone, look at a sign, and know what it says, 'cause it'll tell me. That's awesome.
But is it gonna discover the next new thing? Maybe by accident, but I think that there's a better chance a person will be able to do that.
Erik Martinez: Or a person enabled by the capabilities of these tools. I think that's the critical piece. So let's talk in the context of an analyst. A human analyst whose job is to generate insights from data. How do you see that evolving in this age of AI?
Ginger Grant: Well, I see that it's going to improve the productivity, so you'll be able to analyze a lot more. 'cause to be honest, some of the stuff that's rote, it'll be able to do. Like, I think that it won't be too long before AI is going to do the first pass of every dashboard that you see. And then [00:24:00] when it doesn't include some specific things somebody wants, then you can add that.
But that means that you don't have to train a person on the rules for contrast ratios, lining things up, making sure there's a coherent story being told about the data on each page of your dashboard, that kind of thing. 'Cause it can just do that. So a lot of the tasks that people spend a lot of time doing, AI will do, in hopes that a person is going to come up with the more insightful things.
And if you are the person who does all the rote things, Hmm. Not so good for you.
Erik Martinez: And there's a lot of data starting to come out saying the jobs most impacted by AI are the ones that are highly repeatable. And writing comes up as a key one, journalism.
Ginger Grant: A lot of the work that paralegals do: here's another contract.
Erik Martinez: Absolutely. [00:25:00] One of my buddies is an attorney and co-owner of a small boutique firm that specializes in municipal law, and we were talking about AI. He's just like, there's a lot of uproar in the industry about it, 'cause it quotes wrong things. And I'm like, yeah, but let me ask you this: how many times does your team make a filing with a court wrong, and what does that do to your schedule? What does that do to the client who is dealing with a court case?
Whether that is a criminal or a civil matter doesn't really matter. What's the impact of every single error they make? So can you leverage that technology to improve the efficacy of the submission and reduce your error rate? How much is that worth to you? Because if you're able to do that, just think of how many more clients you could slot in.
Ginger Grant: Well, not only that, but I'm a technologist, so I write code. And yes, a lot of people have AI write their code. There's a thing called vibe [00:26:00] coding, which is basically having an idea and then having your tool write all the code for you.
Like, I don't know anything about .NET; I'm gonna have it do me a website. And you can do that, but the things that you don't know will hang you. Like security. If you don't know what you're doing, you don't know what you're missing, and then you get stuck. So I still think, even if you're like, well, I don't know how to code, I just do it all in ChatGPT: if you don't learn anything, then you will get stuck someday. But there are things that it can do really well, and this goes back to the thing about the paralegal.
Write me some code to test this. You can have it generate a contract for you, and then say, write me a test script of some kind that validates these points. So if you're generating a contract and it needs to have X, Y, and Z, then you create a rule set that says, validate these 10 [00:27:00] rules for this generated document. You tell it what the document has to have, and you validate after the fact.
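That validate-after-generation idea can be as simple as a rule list run against the generated text. A sketch with hypothetical rules and a made-up draft (real contract checks would be far richer):

```python
import re

# Rules the generated contract must satisfy; each is a name
# plus a predicate over the document text.
RULES = [
    ("has parties clause", lambda t: "between" in t.lower()),
    ("has effective date", lambda t: bool(re.search(r"\d{4}-\d{2}-\d{2}", t))),
    ("has governing law",  lambda t: "governing law" in t.lower()),
]

def validate(text):
    """Return the names of every rule the document fails."""
    return [name for name, check in RULES if not check(text)]

draft = ("This agreement is between Acme Corp and Widget LLC, "
         "effective 2025-01-01.")
print(validate(draft))   # ['has governing law']
```

The model can draft the document, but the rule set is deterministic, so the same checks run identically on every generated draft.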
The other thing is that I'm always evaluating new AI models. So I use VS Code and Copilot, and I'll have Claude write something, and then I'll ask ChatGPT, what do you think of this code that Claude just wrote? That way you can find out which one's better, 'cause all this stuff is a moving target. And because I'm a geek, there's nothing more fun than having one large language model generate something and then turning around and asking the other large language model, hey, what do you think of that thing it just did? It's funny, because they do not agree, and they slam each other, which is rather entertaining.
Erik Martinez: Yeah, it is a lot of fun to do that when you have time. We've been talking about lots of things tied to data analysis and what the role of the human is.
How do we take that data? Let's assume we've got, I'm never gonna say a hundred percent clean data, 'cause that [00:28:00] doesn't exist, but reasonably organized data that we can use to generate insights. How do we turn that into actionable strategies?
Ginger Grant: So here's the thing: I think that AI is not going to be very helpful in bringing the data together into a model that's really easy to analyze. I think that takes too much of a human, 'cause there are all sorts of variabilities. If you want your data to provide information for you, the most important thing you can do as a business, as an organization, is ask: what do I need to know?
What questions should I be asking?
Because, for example, I was doing some work for a large oil company, and they wanted to know when they should maintain their various pumping stations, because it's very expensive to go out and service them. And I said, well, what's your failure rate? And they're like, oh, we don't have failure rates, because we do preemptive maintenance, so nothing fails, 'cause failure is [00:29:00] expensive. I'm like, well, I can't tell you when it's gonna fail if you never have anything that fails. If you do all this preemptive maintenance, I don't know if you have to or not, because you never let it get to a specific point. And they're like, well, yeah, that does sort of make sense. So you need to know: do you have the data that you need to make the decisions that you're looking to make?
I also did some work for a large bank and what they wanted to know is how successful they were at marketing to their most successful customers or their most profitable customers. I'm like, well, how do you define that? What is a profitable customer?
And they realized that one of the things they needed, to find out if someone was a profitable customer or not, is that they knew what business they had with the customer, but they didn't know everything that person had in terms of potential business. Because, you know, I own my own business; does my credit union know that? No, 'cause that's not the kind of relationship I have with 'em.
[00:30:00] So they needed to contract with an outside firm to find out more about their customers, so they could answer the questions and figure out how to correctly market to people. So you need to know what questions you want to answer, and then look to see: do I have the data for that?
You may not. You may have to go buy it because it's not the kind of thing that you collect, but it's the kind of thing that you need to know.
Because you'd be surprised at the kind of data that you can get from, say, Visa, on every single transaction you have ever put on your Visa debit or credit card. Or your ISP.
Forget Google. Your ISP knows everything that you've typed, because they're servicing that. If you're like, well, I'm just going to Amazon, Google doesn't know anything about that, but your ISP does. So you can just get that from your ISP, and they'll find it all for you. You'd be surprised at the kind of information that you can buy.
Erik Martinez: Well, it's all getting collected, right? Every digital interaction is collected.
Ginger Grant: I saw recently that the Kroger Corporation makes something like [00:31:00] 10% of their revenue, and I may be wrong on that number, but a large portion of their revenue, on the data they collect from people through the shopper's cards.
Erik Martinez: Yeah, that doesn't surprise me. That's their profit margin.
Ginger Grant: Yeah. Because they know how many 30-year-olds in Toledo bought Coca-Cola last week. They know. And Coca-Cola's like, well, I'd like to know that. And they're like, well, we can tell you.
Erik Martinez: Yeah, and I'll pay for that.
Ginger Grant: Yeah, and I'll pay for that. And they do. So, you know, that's what I mean: you may not have that information, but if you're like, I want my business to be successful, you have to figure out, what do I need to know for that?
And then do you have that data? And you may, but you just may not have it in a format that's real easy for you to access and you can fix that. And I really think that one of the things that people need to do is get out of each other's way.
I mentioned Excel. You would not believe that to this day, there are people sitting around at some company who are spending, I don't know, an entire day or several days in any given month doing reports in [00:32:00] Excel.
And the reason they're doing it in Excel is because they know how to make that happen. Is it tedious? Is it boring? Yes. But they get that report out there. That's a giant waste of energy. So people really need to look at not only, I need this data, but, I need to automate it, because I don't want to waste all this time having somebody compile it. That is something that you should get rid of immediately.
That is a sunk cost, and prone to error too, 'cause it's just tedious.
I mean, I'm shocked at how many people I talk to who are like, yeah, we do a lot of our reporting in Excel. And I'm like, there are just so many issues with that. There was a study done at the University of Hawaii that I like to quote that says that 97% of reports done in Excel have errors.
So the question is not, does your Excel report have errors? The question is, how material is the error you have? Because it doesn't do any good. Like, yeah, I got a report that does that [00:33:00] now. Really? Is it accurate? Well, yeah. Is it done in Excel? Then I don't believe you. Because then it's a manual process, and if it's a manual process, then somebody is talking on the phone at the same time they're doing it, and they could hit the wrong key.
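The manual-Excel risk Ginger describes is exactly what a small scripted report removes: the numbers get computed the same way every run, with no keystrokes in the loop. A minimal stdlib-only sketch of that idea; the column names ("region", "revenue") and the sample data are illustrative assumptions, not anything from the episode:

```python
# Sketch: replace a hand-built Excel summary with a repeatable script.
# Column names ("region", "revenue") and the sample data are assumptions.
import csv
import io
from collections import defaultdict

def summarize_revenue(csv_text: str) -> dict:
    """Total revenue per region, computed the same way every run."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] += float(row["revenue"])
    return dict(totals)

sample = "region,revenue\nEast,100.50\nWest,200.00\nEast,49.50\n"
print(summarize_revenue(sample))  # {'East': 150.0, 'West': 200.0}
```

Run monthly on a schedule, a script like this answers "is it accurate?" the same way every time, which is the property a manual Excel process can't guarantee.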
The thing is, too, AI can make all of this stuff better, but I think people are targeting the wrong things. For example, I can create a chat bot for you so that you can ask questions of your data, so that you know what your profit margins are, et cetera.
Right. Well, then every single time you wanna know, you have to type that into the chat bot: what are my profit margins for this month? Wouldn't it be faster if I just gave you a dashboard that showed you that, and you didn't have to type anything? You could just open it up and look at it.
Or maybe it would just send you an email if it was outside the range that you care about. Because let's face it, you don't have all the time in the world. You just wanna look at the stuff that matters. So it's not, can you find it? It's, can you find what matters to you?
So if you know what your profit margin is and it's where you want it, cool, go look at [00:34:00] something else, like how you can move it.
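Ginger's "only email me when it's outside the range" idea can be sketched in a few lines. The threshold values and the function name here are illustrative assumptions, not anything from the episode; in practice the message would feed an alerting or email step:

```python
# Sketch of "notify only when it matters": stay silent while the metric
# is in range, surface a message only when it leaves the range.
ACCEPTABLE_RANGE = (0.18, 0.30)  # example profit-margin bounds, made up

def margin_alert(margin, low, high):
    """Return an alert message only when margin leaves [low, high]; else None."""
    if margin < low:
        return f"Profit margin {margin:.1%} is below the {low:.0%} floor"
    if margin > high:
        return f"Profit margin {margin:.1%} is above the {high:.0%} ceiling"
    return None  # in range: no email, nothing to look at

print(margin_alert(0.22, *ACCEPTABLE_RANGE))  # None: within range, no noise
print(margin_alert(0.12, *ACCEPTABLE_RANGE))  # below the floor, send the email
```

The design point matches the conversation: the reader's attention is only spent when the number actually needs it.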
Erik Martinez: So that raises a question for me, 'cause you're bringing up such an important point about focusing on what matters. And yet throughout my career, and I can't say I haven't been guilty of this, what matters is really not the question anybody ever asks. It's, I need a report for this. I need a report for that.
So, I had a client who, I think they said, had 435 reports running every single day, completely automated and distributed. And you know, when I talked to the IT director about it, they were like, yeah, we're super proud of that, we don't have to spend a lot of time on it. And I'm like, does anybody use it? How long have these reports been running, and are they giving anybody any actionable information?
It was kind of funny. At that company, many, many, many years ago, I worked for them and I [00:35:00] had set up just a simple data transfer. They were still using it 20-something years later, because it was something that they actually used every single day.
Ginger Grant: Something that was valuable to them.
Erik Martinez: So how do we get leaders and businesses to move off of the, hey, I need a report for that, to the, tell me what's important, or give me what's important? How do we get there?
Ginger Grant: Well, that's hard. And I'll tell you this, because I've done this for a while. If you ask somebody, let's create a dashboard that has the important elements in your business that you need to be successful, well, as a consultant, that's a really good topic, because you can easily bill for a month of having those conversations.
But I can tell you what I generally do instead, and it's funny, this is how I use ChatGPT. I go into it and say, write me an abstract for this proposal that I wanna do, and it writes it, and I look at it and I'm like, well, that's crap.
But it gives me ideas: what it missed, what I liked, what I didn't [00:36:00] like, focusing on what I didn't like. What I have found to be really helpful when I'm trying to design that for a client is I give 'em a sample dashboard. I'm like, how about this? Oh, no, no, no. That won't do.
I'm like, what's wrong with it? People can't tell you from nothing. It's like that white sheet of paper, or that blank screen with the blinking cursor. You're like, uh, I have no idea. But if you give somebody something, they're like, but I don't like that. Great, what do you like? So it gets your mind moving.
So I really think that you can get into analysis paralysis. And the more important thing is not to sit there and contemplate and look at the ceiling and say, oh, these are the most important things to us. Say, well, alright, here are these 10 things. Now, which ones are wrong? And I think that is a better way to come up with what's important.
And then I think it's really important to make sure that the people you are working with are taking advantage of AI technology [00:37:00] and using it to improve their productivity. If anybody's a developer out there and they're not using any kind of AI tool to help them generate code, they're just gonna be slower.
Erik Martinez: They'll be slower, and probably more error prone. You know, it's funny that you say that, 'cause I did an experiment literally yesterday. I know enough about HTML to be dangerous. I know enough about the technical parts of email marketing to be dangerous. I had a designer design an email welcome series sequence, and I'm like, I need this coded. And I had it quoted by a developer, and I wasn't sure I wanted to spend that much money. I wondered if I could do it myself. Two and a half hours later, I had not only coded it, I'd coded it in a way that was responsive for desktop and mobile and dealt with all of Outlook's various quirks in display.
And then I ran it through Litmus to make sure, because, you know, you're always gonna get a little bit of inconsistency, which a lot of people don't really know. You get [00:38:00] all those little inconsistencies, and it was pretty buttoned up across all the different device types, and I did that in two and a half hours.
My human programmers can't do that in two and a half hours.
Ginger Grant: There you go. And write a test script. I mean, this isn't something that you get into with an email thing, but if your human programmers are like, I wrote it all myself and I'm great, perfect. Let's write test scripts, because I don't care how good you are, you're gonna make a mistake.
And even if, let's say, you are mistake-free, wouldn't you rather run a test script that basically proves that to somebody? Like, yep, ready to go?
Because you want to make sure that when you release it, it's perfect. You can design a script that basically says, we've checked all that stuff.
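Ginger's "script that basically says we've checked all that stuff" can be as small as a handful of assertions. A minimal sketch of the idea; `discount_price` is a made-up stand-in function, not anything from the episode, and in a real project the checks would live in a test framework such as pytest:

```python
# Minimal "prove it works" script for a hypothetical function.
def discount_price(price, pct):
    """Apply a percentage discount; reject discounts outside 0-100."""
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

def run_tests():
    # Happy path: 20% off $50 is $40.
    assert discount_price(50.0, 20) == 40.0
    # Edge cases: 0% and 100% discounts.
    assert discount_price(50.0, 0) == 50.0
    assert discount_price(50.0, 100) == 0.0
    # Invalid input should fail loudly, not silently.
    try:
        discount_price(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for a 150% discount")

run_tests()
print("ready to go")  # every check passed
```

Whether the code was written by a person or generated by AI, the script is the same: it turns "trust me, it's right" into something anyone can rerun.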
Erik Martinez: Well, and you know, I'm not bashing my human developers, 'cause they're lovely, wonderful people. But the fact is, the tools are good enough to get you to that point, so you can spend that time instead on the question that matters: is this email gonna matter when it lands in somebody's inbox?
Ginger Grant: Right. You can focus on [00:39:00] the important parts, and that's really what I think you should do with AI: have it do the rote stuff, and then you can focus on the stuff that it cannot do. Like saying, well, if I knew this information, then I could improve my business.
So you don't have to worry about it.
Erik Martinez: And I think that's hard, though. I mean, especially once you get into the middle part of the organization, and what I mean by that is, we're out of the C-suite, we're out of the owner suite, and now we're into the directors and the supervisors and the managers and their teams doing the work.
And we have structured our work in such a way that there's a hierarchy and there's a process. And I think, to a certain extent, how many times have you walked into an organization and it's like, well, this is the way we do it?
Ginger Grant: I think the important thing to do with AI is encourage people to use it, and balance that with people not [00:40:00] feeling threatened, thinking that they're gonna lose their job because AI can do all of it. I think that AI can make things better because it can do things faster. But it's very important to encourage people to use AI to help them improve productivity in a way where they don't feel like the technology's out to get 'em.
'Cause that is a real and valid concern, because there are people who have lost their jobs to AI. But one thing that I think is important and undervalued is continuity of business knowledge: knowing why you do things the way you do 'em. I mean, if you can't remember why you do things the way you do 'em, you might not wanna continue doing 'em that way. But hopefully there's somebody around who knows why.
And a lot of times people are buying things because they like the company, because of the way that it treated 'em. So I think it's important to stress the things that AI cannot do when you're teaching people how to use AI, so that they will continue to feel valued and not feel [00:41:00] like they can be replaced by it. And I think that's really important, and something that I don't see a lot of organizations engaging in. A lot of times they know people are doing it because they get things done faster, but I don't think that there is a concerted effort to structure the work so that, for example, if everybody's doing certain tasks with AI, those tasks should be done faster.
So then you have a standard, like, that task should only take 15 minutes, because we're gonna do that one with AI. And then you're ensuring that you don't have a scenario where the person who knows how to use AI gets it done in 15 minutes and other people claim it takes an hour, 'cause that's how long it's taking the person next to 'em. And this is a management problem that came up when everyone started working remotely: what are people really doing with their time, and how long does this task really take?
And then, how much can AI help with it?
Erik Martinez: I think the other part of the challenge, the other side of that equation, and we'll keep this in the vein [00:42:00] of analytics, is: have you been trained? Does the organization invest in training people to get at least a level set of skills?
Ginger, you may be way better at AI than I am, and we do the same job, and you do it twice as fast and get really great outputs, and I do it half as fast and get decent outputs. And this is part of what my pilots are about: really trying to figure out, hey, how do we systematically introduce process for utilizing AI to move the entire organization forward? Because I had a conversation with some business owners earlier this summer. It was, hey, yeah, we're doing great with AI. And I go, what about the rest of your team? Huh?
Ginger Grant: Couldn't agree with you more.
Erik Martinez: So, at the end of the day, we've got to, as an organization, say, hey, people are still an important part of the solution, which is what you're saying, and I agree with that. [00:43:00] If that's the case, we need to invest not in just throwing the tool at them, but in teaching them how to use the tools, at least at some fundamental level where they can get consistent and better results.
Ginger Grant: I really think that people ought to say, we know that AI is gonna be good for this task, so everyone, we're gonna use AI for this task. But in technology, I don't see people consistently using AI. I use it. I know people who use it. I work with teams, and half of 'em do and half of 'em don't.
And I am less than impressed with the way that organizations are encouraging the use of AI. It's like, oh good, you're using it. But I really think it should be...
Erik Martinez: Oh my God, you're using it.
Ginger Grant: Exactly. Yeah. I think that it needs to be, and we'll just go with stupid things: every single bit of code in this organization is gonna be documented. Yeah, okay, so you're gonna have AI do it. Fine, but at least that shows you're doing something. Today, [00:44:00] if there's code out there and it's not documented 'cause you're not using AI on your code, what's wrong with you?
But I've not been in an organization yet where they are mandating the use of AI in development, where they've said, this task is being done by AI, everybody does it. To your point, it's, yeah, this person does it. Oh, and they're really fast at it.
Erik Martinez: But I think part of that is for two reasons. One, AI has been decentralized. I did a training session with a small team inside an organization; I was just doing some basic prompt training. And just so you know, the data I'm collecting when I go talk to these companies is showing about a third, a third, a third.
A third of people say we are comfortable using AI every day. A third of people say, we use it sometimes throughout the day, and a third of people say, we don't use it at all and we don't know where to start. And I've done this now through a couple dozen companies, and the answer's the same.
Inevitably, the people who say that they're using it every day are absolutely using it every day, but they're [00:45:00] not sharing any knowledge with the rest of their organization in a cohesive way.
It's not that they're not doing it, they're not doing it in a way to elevate the entire organization.
Ginger Grant: Because they're not incentivized to do it.
Erik Martinez: Absolutely. I think part of the problem is, if I sit six people in a room who do the same job and you ask them questions about their job, what I have found is 90% of the time, they all do their jobs differently.
Even though their process is similar and their pain points are similar, nobody's gotten together to examine that and say, how can we do this? And they don't break down their tasks. A task is a series of steps; it has a beginning, middle, and end. Well, how many steps does it take you to do that task or that process? Unless you work through that problem, you don't know the answer.
Ginger Grant: If you don't know what the problem is, you can't create an AI solution.
Erik Martinez: Which starts with the question that you're asking, right? Every analytical [00:46:00] problem, whether it's improving efficiency in your organization or providing analysis and insight about your marketing programs, has to start with a good set of questions.
One last question, 'cause I've taken a lot of your time and I'm having so much fun with this conversation. We're gonna make this marketing specific. If you could give marketing leaders one piece of advice on how to prepare for the next wave of AI-powered analytics and tools, what would it be?
What should they be preparing for?
Ginger Grant: They should be prepared to rely on their own insights to get answers and question the answers that some other people give to them.
So if you rely on IT to do everything for you, how do you know their answers are what you want? You should know roughly the range of answers you would expect when you ask a question, and make sure that the data, the information, the answer that you get back [00:47:00] is really correct.
Because whether it comes from a generative AI model or a person, the information is only valuable if it's right. So, is it right? Was that an appropriate way to gather it? An appropriate way to analyze it? So when you ask a question, make sure that you know how you want the answer to come back, and be able to validate that what you have is what you thought. 'Cause you don't want anyone to think they know what enrollment is when they don't know how the enrollment was calculated.
Erik Martinez: Well, and that's the thing: we've gotta be critical thinkers about what we're seeing and not assume that everything's correct. You can make the assumption that it's reasonably correct, but is it a hundred percent correct? In my first job outta college, I remember my boss very specifically saying, don't assume anything. And I'm like, yeah, I disagree with that. I'll put it to you this way: you can make an assumption, but you'd damn well better validate it.
Ginger Grant: I like the quote often attributed to Ronald [00:48:00] Reagan: trust, but verify.
Erik Martinez: Trust, but verify. Ginger, this has been so much fun and we could keep talking for hours about this. What's the best way to reach out to you if somebody wants to talk?
Ginger Grant: You know what? I've got a button on LinkedIn so that I'm happy to meet with anybody if they want to talk more about how to come up with the appropriate data to answer the questions that you're looking to answer in your solution. So look me up. It's Ginger Grant on LinkedIn.
Erik Martinez: Awesome. Ginger, thank you so much for coming on the show. I really appreciate your time. That's it for today's episode of The Digital Velocity Podcast. Everybody have a fantastic day.
Narrator: Thank you for listening. If you have enjoyed our show today, please tell a friend, leave us a review, and subscribe on your favorite podcast platform. Visit the Digital Velocity Podcast website to send us your questions and topic suggestions. Be sure to join us again on the Digital Velocity Podcast.