The Third Most Important Profession: Market Research in the AI Era | Hamish Brocklebank at Quirks 2025
Mar 3, 2025
Introduction: The Third Most Important Profession
I'm here to give a talk. The title I originally submitted was something about market research changing from getting data to getting answers to questions. I decided late last night that I didn't think that was very interesting, so I've changed the topic. The topic of my talk today is why I think market research, meaning market researchers, you in the room, and myself obviously, is the third most important profession that exists at this unique point in time. More important than doctors, more important than scientists, or at least some scientists.
I know this is quite a bold claim; I don't think most people would naturally say that market research is one of the most important jobs in the world. Of course we'd all like it to be, but at this point in time it really is.
The Top Three Professions in an AI World
Why am I saying this, and what am I going to talk about? First, let me talk about what the top three jobs are, and give a little background on why. I am assuming, probably correctly, though I could be wrong, that in many ways AI is going to take over the world.
I don't necessarily mean take over in the sense of controlling everything; I mean everything is going to be AI powered, from your doctor, to your market research, to probably your teacher at some point. And if we're going into this AI future, we need to make it the best possible future we can. There are therefore three really important things that need to be done.
Firstly, the people building the AI, nothing to do with market research here, need to build really great, intelligent, smart AI that actually does what it says it can do, and actually does the right thing. This is, in my opinion, existentially probably the most important thing. There's no point building this AI future that every company is going to embed everywhere if the AI is not very smart.
Because that's just going to be a disaster, as I think we can all agree. And of course, if we want AI to do really exciting things, like cure cancer, get us to live forever, and get us into space, or whatever your dreams are (obviously I have lots of sci-fi related dreams), then it's really important that we can hopefully achieve this.
And I think that's probably the most important thing. Secondly, and actually, sorry, this is probably more important, but narratively it doesn't work as well: the most important thing is obviously making sure that we build AI that doesn't kill us. People don't like to talk about it much, but I think that is the big existential risk.
Whether there's a 1 percent chance of that happening or a 10 percent chance, if it's not properly built, there's a high probability it could do very nasty things to lots of people, if not everyone. Thankfully there are lots of intelligent people working on that. But yes, that's probably even more important.
But obviously, if we don't build AI in the first place that's smart enough to kill us, then we don't have to worry about the second one. And so the third thing: assuming we build really intelligent AI, and assuming that AI doesn't kill us, I think market research as a function is the third most important thing in the world.
The Risk of Cultural Homogeneity
This is because I believe that in this AI-generated future, whatever it is, the big existential risk we all face is that everything is going to turn into a sort of bland grey goo: grey buildings, grey products, grey bland culture. Without market research, that is going to happen, and we have to hope that market research saves the day.
And why do I use this term, grey goo, grey buildings, and so on? Partially because I'm inspired by LA, where you see all these houses and they're all boxy and they're all the same. And they're quite nice. But ultimately I think they're very dull and uninteresting.
But people like them, people want them. And I actually think it's a bad thing that people like and want them; they should like and want more interesting things. The reason they like and want them is that they're very efficient to make and they satisfy basic needs. It's almost as if, because they're efficient to create and build, the culture has reflected that back: people's preferences have changed, and as a result, that's what people want.
And there's a real risk that's going to happen to everything with AI, which is what I'm going to talk about, and why it's going to happen, unless market researchers save the day. So effectively, you've got to save culture, because if you don't, culture is going to become really boring.
AI's Impact on Efficiency and Homogenization
Let's talk about why this is currently happening. This grey cultural morass happens because of the drive for cost efficiency and, if you want to be a bit pretentious, because of an interconnected capitalist system that drives human tastes rather than being driven by human tastes.
And AI puts this on massive steroids. It makes everything more efficient to build, it makes everything cheaper, it makes everything faster, and the risk is that by making everything cheaper and more efficient, it makes everything more homogenous. Whether it's the creation of music, the creation of clothing, food tastes, or where you want to travel, that is the real big risk, which I'm going to go into.
Now, people think, oh, but AI is going to allow for customization; everybody's going to be able to customize everything for their own specific needs, so in fact there's going to be this great proliferation of uniqueness. And that is a potential outcome. But look at how the AI models are actually built and how they're trained, and remember that these are the models that will be powering everything, because it's so fucking easy to power stuff with them.
To use a market research example, which is what we should be talking about here: the risk of using ChatGPT to do your persona-based research is that it will give you a good enough answer, it's quick and easy, and it will work most of the time, but it will be boring and bland and homogenous.
And I'm saying this as someone who has an AI market research company, by the way. We obviously have a solution for that, which I will plug later on. But that is what I see as the big risk. AI is trained on one big data set, and it's very good at finding the mean, the median, of whatever thing you're trying to do.
And because it does it so fast and so efficiently, it makes it very easy for people to say, I'm happy with that bland, boring thing, because it's good enough, because it's that nice big 3,000 square foot house that I can afford and my family can live in, as opposed to something with any sort of style or substance or colour or anything of interest.
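(To make that regression-to-the-mean point concrete, here is a minimal illustrative sketch, not part of the talk or of any Brocks methodology, with invented numbers: average the preferences of two audiences who want opposite things and the "safe" answer lands in a grey middle that neither audience actually wants.)

```python
# Illustrative sketch only (invented numbers): what "finding the mean" does
# to taste. Preferences sit on a -1..+1 style scale, where -1 might mean
# ornate and colourful and +1 might mean minimal and boxy.

ornate_fans = [-0.9, -0.85, -0.8, -0.75, -0.7]   # one audience
minimal_fans = [0.7, 0.75, 0.8, 0.85, 0.9]       # an audience wanting the opposite

everyone = ornate_fans + minimal_fans
mean_preference = sum(everyone) / len(everyone)
print(f"Population mean preference: {mean_preference:+.2f}")   # 0.00, the grey middle

# The averaged "safe" answer is far from what anyone in either group wants.
nearest_person_gap = min(abs(p - mean_preference) for p in everyone)
print(f"Nearest real respondent is {nearest_person_gap:.2f} away from that mean")
```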
And secondly, there's the AI itself. We don't often talk about the motivations of the AI models, and it's not fair to say they have motivations, but they have maximizing functions, which you can think of as very similar things. The key thing is that, like people, they are lazy: whatever task you give them, they try to do it in the most efficient manner possible to keep you, the user, happy and engaged. And the real risk of that, of course, is the same as the previous one.
It'll give you just enough to be happy, but it will effectively be copying whatever came before, with maybe a little tweak. You see this with AI-generated music, for example. I'm a frustrated musician, and I know market research actually has a lot of frustrated musicians; it seems to be a thing here. The music is relatively high quality, technically, which is actually quite depressing, but it's also quite bland and boring, and that's the risk. So how do we solve this? How do market researchers solve this?
The Fight for Human Experience
I call this the fight for human experience. Assuming all of this is true, we need to make sure that AI, or should I say the hyper-efficiency that AI generates, doesn't drive the culture directly through, effectively, laziness and efficiency.
Because if it does, then, as I said, we end up in a world of monopolies, basically: monopolised music, monopolised art, monopolised human experience, which is quite sad and depressing. And I like to believe that human experience is the most important thing. Ultimately, the activity going on between your ears is really the only thing that matters; without that activity, without the subjective experience of living life, it's all just dead matter.
So we need to make sure that is preserved. And not only that it's preserved, but that the uniqueness of each individual, or each small cohort of individuals, is captured: their idiosyncrasies, the specific kinks, as it were, that make them human and make them interesting. And I believe, quite luckily, that market research is the solution for this.
Market Research as the Solution
Functionally, as an industry, I think what we do puts us closest to understanding what actually goes on in people's minds. People are putting fMRI scanners on people's heads to try and read their thoughts. But actually, the most efficient way is to speak to them, to talk to them.
That's how we communicate what's in my head to what's in your head. Maybe through a checkbox survey, which is a very inefficient way of doing it; qualitative researchers are obviously somewhat more efficient, and there are various forms of that. But conversation really is the most efficient, fastest way to find out what it is that makes Michael over there unique versus Ed over there.
Those are the only two names I know in the room, so I'm just going to use those two as examples. And the key thing is understanding what it is about Michael that is unique versus Ed, so that they don't both end up living in the same grey boring house, but actually get to live in something more interesting, with some colour and some life and some vibrancy, something that isn't quite so efficient.
And this is the thing: we are really the people best positioned to do this, to capture and evaluate and understand this information. Because the other risk is that you get all this information and don't know what to do with it. I actually just moved up to the Bay Area, so I spend a lot of time in Silicon Valley, and we do a lot of work with the LLM companies. Ultimately, the people building the world of tomorrow, and I don't want to criticize them, because lots of them are giving me lots of money, don't know much about this.
They don't know what it means to be interesting; they don't understand what market research is beyond optimizing UX designs to get more people to click on Amazon. They don't really understand the purpose of having something more specific than a persona when they're trying to understand a product problem and what people's motivations are for wanting to do something.
And this is not to be critical of them in a negative way; it's just not in their toolset. They only see the first two things, let's go build the really hyper-intelligent AI, but then you run the risk that you build this hyper-intelligence and it's missing a part of the human experience. The best phrase I could think of, and I don't want to be offensive to anyone or make any comment about mental illness, is that it's autistic in some sense. Anyway, I'm not going to continue down that path, because it's just going to get me in trouble.
The Responsibility of Market Researchers
But I think you get the idea. They're not building that future, and it is up to us to capture that data and build it in, and that, I think, is a really important responsibility. And it's not just about capturing it; it's also about making sure that the people making these decisions understand, at every step, why market research is more important now than it ever was.
Because you could argue that previously, say three or four years ago, you're the CEO of some company and you say, fuck it, I don't need to do any market research because I have a gut feeling for success, I understand the market. And they probably do. They might not need to do anything, because they instinctively understand, because they live in the world and they live with people.
They understand all of these things, and lots of great visionary people who build products can do this naturally, so you don't always need research. But we're moving to a world where these decisions are not really being made by people, or people are nominally supervising but don't really supervise; they're lazy, they just click buttons.
Then you need to make sure that those decisions actually capture the full spectrum of human preferences, not just the averages, and that they actually identify the outlier behaviors. In my experience of market research, all the interesting stuff, at least in my opinion, is the stuff around the edges, the unexpected findings. Which is why I hate personas: they get rid of all that and put everything into a sort of sludge in the middle.
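(Again purely as an illustrative sketch with invented numbers, not the speaker's method: pool everyone into a single persona-style average and a small, passionate segment disappears; keep the segments separate and it is the first thing you see.)

```python
# Illustrative sketch only (invented numbers): one pooled persona average
# versus segment-level reads. Scores are 0-10 appeal ratings for some
# hypothetical unconventional concept.

responses = {
    "mainstream": [4, 5, 5, 4, 6, 5, 4, 5, 5, 4],  # large, lukewarm majority
    "enthusiasts": [9, 10, 9],                      # small audience at the edge
}

all_scores = [score for seg in responses.values() for score in seg]
pooled_mean = sum(all_scores) / len(all_scores)
print(f"Single-persona view: mean appeal {pooled_mean:.1f}/10")  # looks mediocre

for segment, scores in responses.items():
    seg_mean = sum(scores) / len(scores)
    print(f"  {segment:<11} n={len(scores):>2}  mean appeal {seg_mean:.1f}/10")

# The pooled number hides the small segment scoring around 9.3 out of 10,
# which is exactly the interesting stuff around the edges that the average flattens.
```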
It's all about finding that small audience who think a little differently, or have a slightly different preference about the world. And the risk is that will all be lost. Not only will it be lost, but there's a cyclical impact: because of how people interact with culture, it will be lost from the culture, then reflected back into people, and lost in those people too.
People themselves will become more homogenous, and you'll get rid of all the interesting quirks, cults, however you want to think about them, that exist at the edge. Cult's not quite the right word, but I think you get the idea. And not only do we have to ensure all of this; we also have to ensure something else, and this is where I'll quickly plug my product.
We have to ensure that AI systems actually have the functional ability to ingest and understand this data, not just simplified models of it, and can do so at scale. What this broader responsibility means is that insights have to be generated fast enough, at enough scale, and accurately enough. Because if we think about the bigger transition in the market research world, my big thesis is that every decision is a better decision if it's informed.
The Three Components of Market Research's Mission
I hope we all believe that. There are going to be more decisions being made, because the world just moves faster. Therefore you're going to need more informed decisions, and therefore you're going to need more insights. And one of the problems with traditional research in this world is that you can't generate enough actual, accurate, truthful insights at enough scale for all the decisions that need to be made.
That's what we're trying to do at Brocks, and we have a whole methodology behind it, which I won't go into. But I believe you need to be doing three things. You need to be advocating that this is important, because if you don't, people, the CFO of some company who's looking to cut budget, to take a very particular example, won't care.
The second is the ability to actually capture this data, and to capture it from everyone, and by everyone I mean all the unique, interesting audiences at the edges, which is where all the interesting stuff happens. And the third is to make sure you can do this in a way that actually provides utility, in the sense that you can plug it into whatever AI models or AI decision-making systems are being created.
Shifting Focus in Market Research
So as part of this, I think we as an industry are probably going to have to shift our focus to getting this right. It's not so much that we have this problem ourselves; it's more everybody else who relies on market research. But it means less reliance on these quantitative averages.
As a general concept, we want to go deeper and deeper, which is one of the reasons I like qualitative research so much: you want to go really deep, and we now have the ability to go really deep. That, again, is where all the really interesting stuff is happening with that diverse human experience.
And we also want to make sure, and I haven't really got an answer for this, that what we're capturing from people is actually generated by those people, and isn't just a mirror reflection of things that others are generating and they're merely responding to.
Now, maybe none of this is going to matter for, I don't know, Coca-Cola, if you're selling that sort of thing. But in a broader sense I think it's going to matter for everything. And the costs of getting this wrong are content that just becomes more, I don't know, Netflix-ified, as it were.
Sorry, I have criticisms of Netflix and their ranking algorithms; I don't know if there are any Netflix market researchers here, but I have a bone to pick with them. Or more products that are effectively the same, where the quality and the interest diminish and you end up with fewer things. That's also what capitalism does.
The winner takes all, which is the other big risk. And then I would also point to the ethical imperative here: we are the translators of human experience to the machine. I think that's really crucial, and we have to be the best possible translators of it. Now, you might not believe this future is going to happen; you might not believe in my AI future at all.
Ethical Imperatives and Future Methodologies
If that's you, you can just ignore everything I'm saying, but I'm pretty certain it will happen. We have to be the best possible translators, and advocate for this language as an important language that fully represents the full spectrum of humanity. So what does this mean going forward? Partly, I think, it means people like us building what we build, to help talk to these systems and do faster, cheaper research, and so on.
But I also think it's going to involve developing new research methodologies that we might not even be aware of right now, to capture that uniqueness. I don't really know what they are yet, but I think what we can actually do is going to be really quite interesting, whether it's people doing things with behavioral data, which is moderately interesting, or actually plugging people's brains into brain scanners to understand them more uniquely.
There's a pretty interesting company just down the road that's building portable headset fMRI scanners, and that's quite an interesting possibility, though I know it's a bit scary as well. And then also, importantly, and I don't know how you do it, we need to build an ethical framework for these decision-making systems.
It probably won't happen, because money will trump ethics. But where possible, try to build a relatively ethical decision-making framework, so that not everything is driven by cost and efficiency, because long term, I would argue, even as a capitalist, you can generate greater returns by not doing that.
Otherwise you're just optimizing for the short term, and you end up with a sort of bottleneck which, over 10 or 20 years, will not be so good. I will stop my rant there, and thank you very much for your time. I'll just leave you with this: I think we're not just gathering data and processing it.
Conclusion: Preservers of Human Variety
I really think we are the preservers of human variety, as it were. And that is why I think we are the third most important profession in the world right now. Thank you very much.